Dec 06 09:00:30 localhost kernel: Linux version 5.14.0-645.el9.x86_64 (mockbuild@x86-05.stream.rdu2.redhat.com) (gcc (GCC) 11.5.0 20240719 (Red Hat 11.5.0-14), GNU ld version 2.35.2-68.el9) #1 SMP PREEMPT_DYNAMIC Fri Nov 28 14:01:17 UTC 2025
Dec 06 09:00:30 localhost kernel: The list of certified hardware and cloud instances for Red Hat Enterprise Linux 9 can be viewed at the Red Hat Ecosystem Catalog, https://catalog.redhat.com.
Dec 06 09:00:30 localhost kernel: Command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-645.el9.x86_64 root=UUID=fcf6b761-831a-48a7-9f5f-068b5063763f ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Dec 06 09:00:30 localhost kernel: BIOS-provided physical RAM map:
Dec 06 09:00:30 localhost kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Dec 06 09:00:30 localhost kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Dec 06 09:00:30 localhost kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Dec 06 09:00:30 localhost kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bffdafff] usable
Dec 06 09:00:30 localhost kernel: BIOS-e820: [mem 0x00000000bffdb000-0x00000000bfffffff] reserved
Dec 06 09:00:30 localhost kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Dec 06 09:00:30 localhost kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Dec 06 09:00:30 localhost kernel: BIOS-e820: [mem 0x0000000100000000-0x000000023fffffff] usable
Dec 06 09:00:30 localhost kernel: NX (Execute Disable) protection: active
Dec 06 09:00:30 localhost kernel: APIC: Static calls initialized
Dec 06 09:00:30 localhost kernel: SMBIOS 2.8 present.
Dec 06 09:00:30 localhost kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.15.0-1 04/01/2014
Dec 06 09:00:30 localhost kernel: Hypervisor detected: KVM
Dec 06 09:00:30 localhost kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Dec 06 09:00:30 localhost kernel: kvm-clock: using sched offset of 3180231435 cycles
Dec 06 09:00:30 localhost kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Dec 06 09:00:30 localhost kernel: tsc: Detected 2799.998 MHz processor
Dec 06 09:00:30 localhost kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Dec 06 09:00:30 localhost kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Dec 06 09:00:30 localhost kernel: last_pfn = 0x240000 max_arch_pfn = 0x400000000
Dec 06 09:00:30 localhost kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Dec 06 09:00:30 localhost kernel: x86/PAT: Configuration [0-7]: WB  WC  UC- UC  WB  WP  UC- WT  
Dec 06 09:00:30 localhost kernel: last_pfn = 0xbffdb max_arch_pfn = 0x400000000
Dec 06 09:00:30 localhost kernel: found SMP MP-table at [mem 0x000f5ae0-0x000f5aef]
Dec 06 09:00:30 localhost kernel: Using GB pages for direct mapping
Dec 06 09:00:30 localhost kernel: RAMDISK: [mem 0x2d472000-0x32a30fff]
Dec 06 09:00:30 localhost kernel: ACPI: Early table checksum verification disabled
Dec 06 09:00:30 localhost kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS )
Dec 06 09:00:30 localhost kernel: ACPI: RSDT 0x00000000BFFE16BD 000030 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Dec 06 09:00:30 localhost kernel: ACPI: FACP 0x00000000BFFE1571 000074 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Dec 06 09:00:30 localhost kernel: ACPI: DSDT 0x00000000BFFDFC80 0018F1 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Dec 06 09:00:30 localhost kernel: ACPI: FACS 0x00000000BFFDFC40 000040
Dec 06 09:00:30 localhost kernel: ACPI: APIC 0x00000000BFFE15E5 0000B0 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Dec 06 09:00:30 localhost kernel: ACPI: WAET 0x00000000BFFE1695 000028 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Dec 06 09:00:30 localhost kernel: ACPI: Reserving FACP table memory at [mem 0xbffe1571-0xbffe15e4]
Dec 06 09:00:30 localhost kernel: ACPI: Reserving DSDT table memory at [mem 0xbffdfc80-0xbffe1570]
Dec 06 09:00:30 localhost kernel: ACPI: Reserving FACS table memory at [mem 0xbffdfc40-0xbffdfc7f]
Dec 06 09:00:30 localhost kernel: ACPI: Reserving APIC table memory at [mem 0xbffe15e5-0xbffe1694]
Dec 06 09:00:30 localhost kernel: ACPI: Reserving WAET table memory at [mem 0xbffe1695-0xbffe16bc]
Dec 06 09:00:30 localhost kernel: No NUMA configuration found
Dec 06 09:00:30 localhost kernel: Faking a node at [mem 0x0000000000000000-0x000000023fffffff]
Dec 06 09:00:30 localhost kernel: NODE_DATA(0) allocated [mem 0x23ffd5000-0x23fffffff]
Dec 06 09:00:30 localhost kernel: crashkernel reserved: 0x00000000af000000 - 0x00000000bf000000 (256 MB)
Dec 06 09:00:30 localhost kernel: Zone ranges:
Dec 06 09:00:30 localhost kernel:   DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Dec 06 09:00:30 localhost kernel:   DMA32    [mem 0x0000000001000000-0x00000000ffffffff]
Dec 06 09:00:30 localhost kernel:   Normal   [mem 0x0000000100000000-0x000000023fffffff]
Dec 06 09:00:30 localhost kernel:   Device   empty
Dec 06 09:00:30 localhost kernel: Movable zone start for each node
Dec 06 09:00:30 localhost kernel: Early memory node ranges
Dec 06 09:00:30 localhost kernel:   node   0: [mem 0x0000000000001000-0x000000000009efff]
Dec 06 09:00:30 localhost kernel:   node   0: [mem 0x0000000000100000-0x00000000bffdafff]
Dec 06 09:00:30 localhost kernel:   node   0: [mem 0x0000000100000000-0x000000023fffffff]
Dec 06 09:00:30 localhost kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000023fffffff]
Dec 06 09:00:30 localhost kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Dec 06 09:00:30 localhost kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Dec 06 09:00:30 localhost kernel: On node 0, zone Normal: 37 pages in unavailable ranges
Dec 06 09:00:30 localhost kernel: ACPI: PM-Timer IO Port: 0x608
Dec 06 09:00:30 localhost kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Dec 06 09:00:30 localhost kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Dec 06 09:00:30 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Dec 06 09:00:30 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Dec 06 09:00:30 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Dec 06 09:00:30 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Dec 06 09:00:30 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Dec 06 09:00:30 localhost kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Dec 06 09:00:30 localhost kernel: TSC deadline timer available
Dec 06 09:00:30 localhost kernel: CPU topo: Max. logical packages:   8
Dec 06 09:00:30 localhost kernel: CPU topo: Max. logical dies:       8
Dec 06 09:00:30 localhost kernel: CPU topo: Max. dies per package:   1
Dec 06 09:00:30 localhost kernel: CPU topo: Max. threads per core:   1
Dec 06 09:00:30 localhost kernel: CPU topo: Num. cores per package:     1
Dec 06 09:00:30 localhost kernel: CPU topo: Num. threads per package:   1
Dec 06 09:00:30 localhost kernel: CPU topo: Allowing 8 present CPUs plus 0 hotplug CPUs
Dec 06 09:00:30 localhost kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Dec 06 09:00:30 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x00000000-0x00000fff]
Dec 06 09:00:30 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x0009f000-0x0009ffff]
Dec 06 09:00:30 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x000a0000-0x000effff]
Dec 06 09:00:30 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x000f0000-0x000fffff]
Dec 06 09:00:30 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xbffdb000-0xbfffffff]
Dec 06 09:00:30 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xc0000000-0xfeffbfff]
Dec 06 09:00:30 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xfeffc000-0xfeffffff]
Dec 06 09:00:30 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xff000000-0xfffbffff]
Dec 06 09:00:30 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xfffc0000-0xffffffff]
Dec 06 09:00:30 localhost kernel: [mem 0xc0000000-0xfeffbfff] available for PCI devices
Dec 06 09:00:30 localhost kernel: Booting paravirtualized kernel on KVM
Dec 06 09:00:30 localhost kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Dec 06 09:00:30 localhost kernel: setup_percpu: NR_CPUS:8192 nr_cpumask_bits:8 nr_cpu_ids:8 nr_node_ids:1
Dec 06 09:00:30 localhost kernel: percpu: Embedded 64 pages/cpu s225280 r8192 d28672 u262144
Dec 06 09:00:30 localhost kernel: pcpu-alloc: s225280 r8192 d28672 u262144 alloc=1*2097152
Dec 06 09:00:30 localhost kernel: pcpu-alloc: [0] 0 1 2 3 4 5 6 7 
Dec 06 09:00:30 localhost kernel: kvm-guest: PV spinlocks disabled, no host support
Dec 06 09:00:30 localhost kernel: Kernel command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-645.el9.x86_64 root=UUID=fcf6b761-831a-48a7-9f5f-068b5063763f ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Dec 06 09:00:30 localhost kernel: Unknown kernel command line parameters "BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-645.el9.x86_64", will be passed to user space.
Dec 06 09:00:30 localhost kernel: random: crng init done
Dec 06 09:00:30 localhost kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Dec 06 09:00:30 localhost kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Dec 06 09:00:30 localhost kernel: Fallback order for Node 0: 0 
Dec 06 09:00:30 localhost kernel: Built 1 zonelists, mobility grouping on.  Total pages: 2064091
Dec 06 09:00:30 localhost kernel: Policy zone: Normal
Dec 06 09:00:30 localhost kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Dec 06 09:00:30 localhost kernel: software IO TLB: area num 8.
Dec 06 09:00:30 localhost kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=8, Nodes=1
Dec 06 09:00:30 localhost kernel: ftrace: allocating 49335 entries in 193 pages
Dec 06 09:00:30 localhost kernel: ftrace: allocated 193 pages with 3 groups
Dec 06 09:00:30 localhost kernel: Dynamic Preempt: voluntary
Dec 06 09:00:30 localhost kernel: rcu: Preemptible hierarchical RCU implementation.
Dec 06 09:00:30 localhost kernel: rcu:         RCU event tracing is enabled.
Dec 06 09:00:30 localhost kernel: rcu:         RCU restricting CPUs from NR_CPUS=8192 to nr_cpu_ids=8.
Dec 06 09:00:30 localhost kernel:         Trampoline variant of Tasks RCU enabled.
Dec 06 09:00:30 localhost kernel:         Rude variant of Tasks RCU enabled.
Dec 06 09:00:30 localhost kernel:         Tracing variant of Tasks RCU enabled.
Dec 06 09:00:30 localhost kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Dec 06 09:00:30 localhost kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=8
Dec 06 09:00:30 localhost kernel: RCU Tasks: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Dec 06 09:00:30 localhost kernel: RCU Tasks Rude: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Dec 06 09:00:30 localhost kernel: RCU Tasks Trace: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Dec 06 09:00:30 localhost kernel: NR_IRQS: 524544, nr_irqs: 488, preallocated irqs: 16
Dec 06 09:00:30 localhost kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Dec 06 09:00:30 localhost kernel: kfence: initialized - using 2097152 bytes for 255 objects at 0x(____ptrval____)-0x(____ptrval____)
Dec 06 09:00:30 localhost kernel: Console: colour VGA+ 80x25
Dec 06 09:00:30 localhost kernel: printk: console [ttyS0] enabled
Dec 06 09:00:30 localhost kernel: ACPI: Core revision 20230331
Dec 06 09:00:30 localhost kernel: APIC: Switch to symmetric I/O mode setup
Dec 06 09:00:30 localhost kernel: x2apic enabled
Dec 06 09:00:30 localhost kernel: APIC: Switched APIC routing to: physical x2apic
Dec 06 09:00:30 localhost kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Dec 06 09:00:30 localhost kernel: Calibrating delay loop (skipped) preset value.. 5599.99 BogoMIPS (lpj=2799998)
Dec 06 09:00:30 localhost kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Dec 06 09:00:30 localhost kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Dec 06 09:00:30 localhost kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Dec 06 09:00:30 localhost kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Dec 06 09:00:30 localhost kernel: Spectre V2 : Mitigation: Retpolines
Dec 06 09:00:30 localhost kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Dec 06 09:00:30 localhost kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Dec 06 09:00:30 localhost kernel: RETBleed: Mitigation: untrained return thunk
Dec 06 09:00:30 localhost kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Dec 06 09:00:30 localhost kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Dec 06 09:00:30 localhost kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Dec 06 09:00:30 localhost kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Dec 06 09:00:30 localhost kernel: x86/bugs: return thunk changed
Dec 06 09:00:30 localhost kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Dec 06 09:00:30 localhost kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Dec 06 09:00:30 localhost kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Dec 06 09:00:30 localhost kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Dec 06 09:00:30 localhost kernel: x86/fpu: xstate_offset[2]:  576, xstate_sizes[2]:  256
Dec 06 09:00:30 localhost kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Dec 06 09:00:30 localhost kernel: Freeing SMP alternatives memory: 40K
Dec 06 09:00:30 localhost kernel: pid_max: default: 32768 minimum: 301
Dec 06 09:00:30 localhost kernel: LSM: initializing lsm=lockdown,capability,landlock,yama,integrity,selinux,bpf
Dec 06 09:00:30 localhost kernel: landlock: Up and running.
Dec 06 09:00:30 localhost kernel: Yama: becoming mindful.
Dec 06 09:00:30 localhost kernel: SELinux:  Initializing.
Dec 06 09:00:30 localhost kernel: LSM support for eBPF active
Dec 06 09:00:30 localhost kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Dec 06 09:00:30 localhost kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Dec 06 09:00:30 localhost kernel: smpboot: CPU0: AMD EPYC-Rome Processor (family: 0x17, model: 0x31, stepping: 0x0)
Dec 06 09:00:30 localhost kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Dec 06 09:00:30 localhost kernel: ... version:                0
Dec 06 09:00:30 localhost kernel: ... bit width:              48
Dec 06 09:00:30 localhost kernel: ... generic registers:      6
Dec 06 09:00:30 localhost kernel: ... value mask:             0000ffffffffffff
Dec 06 09:00:30 localhost kernel: ... max period:             00007fffffffffff
Dec 06 09:00:30 localhost kernel: ... fixed-purpose events:   0
Dec 06 09:00:30 localhost kernel: ... event mask:             000000000000003f
Dec 06 09:00:30 localhost kernel: signal: max sigframe size: 1776
Dec 06 09:00:30 localhost kernel: rcu: Hierarchical SRCU implementation.
Dec 06 09:00:30 localhost kernel: rcu:         Max phase no-delay instances is 400.
Dec 06 09:00:30 localhost kernel: smp: Bringing up secondary CPUs ...
Dec 06 09:00:30 localhost kernel: smpboot: x86: Booting SMP configuration:
Dec 06 09:00:30 localhost kernel: .... node  #0, CPUs:      #1 #2 #3 #4 #5 #6 #7
Dec 06 09:00:30 localhost kernel: smp: Brought up 1 node, 8 CPUs
Dec 06 09:00:30 localhost kernel: smpboot: Total of 8 processors activated (44799.96 BogoMIPS)
Dec 06 09:00:30 localhost kernel: node 0 deferred pages initialised in 11ms
Dec 06 09:00:30 localhost kernel: Memory: 7764172K/8388068K available (16384K kernel code, 5795K rwdata, 13908K rodata, 4196K init, 7156K bss, 618204K reserved, 0K cma-reserved)
Dec 06 09:00:30 localhost kernel: devtmpfs: initialized
Dec 06 09:00:30 localhost kernel: x86/mm: Memory block size: 128MB
Dec 06 09:00:30 localhost kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Dec 06 09:00:30 localhost kernel: futex hash table entries: 2048 (131072 bytes on 1 NUMA nodes, total 128 KiB, linear).
Dec 06 09:00:30 localhost kernel: pinctrl core: initialized pinctrl subsystem
Dec 06 09:00:30 localhost kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Dec 06 09:00:30 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL pool for atomic allocations
Dec 06 09:00:30 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Dec 06 09:00:30 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Dec 06 09:00:30 localhost kernel: audit: initializing netlink subsys (disabled)
Dec 06 09:00:30 localhost kernel: audit: type=2000 audit(1765011629.353:1): state=initialized audit_enabled=0 res=1
Dec 06 09:00:30 localhost kernel: thermal_sys: Registered thermal governor 'fair_share'
Dec 06 09:00:30 localhost kernel: thermal_sys: Registered thermal governor 'step_wise'
Dec 06 09:00:30 localhost kernel: thermal_sys: Registered thermal governor 'user_space'
Dec 06 09:00:30 localhost kernel: cpuidle: using governor menu
Dec 06 09:00:30 localhost kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Dec 06 09:00:30 localhost kernel: PCI: Using configuration type 1 for base access
Dec 06 09:00:30 localhost kernel: PCI: Using configuration type 1 for extended access
Dec 06 09:00:30 localhost kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Dec 06 09:00:30 localhost kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Dec 06 09:00:30 localhost kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Dec 06 09:00:30 localhost kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Dec 06 09:00:30 localhost kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Dec 06 09:00:30 localhost kernel: Demotion targets for Node 0: null
Dec 06 09:00:30 localhost kernel: cryptd: max_cpu_qlen set to 1000
Dec 06 09:00:30 localhost kernel: ACPI: Added _OSI(Module Device)
Dec 06 09:00:30 localhost kernel: ACPI: Added _OSI(Processor Device)
Dec 06 09:00:30 localhost kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Dec 06 09:00:30 localhost kernel: ACPI: Added _OSI(Processor Aggregator Device)
Dec 06 09:00:30 localhost kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Dec 06 09:00:30 localhost kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Dec 06 09:00:30 localhost kernel: ACPI: Interpreter enabled
Dec 06 09:00:30 localhost kernel: ACPI: PM: (supports S0 S3 S4 S5)
Dec 06 09:00:30 localhost kernel: ACPI: Using IOAPIC for interrupt routing
Dec 06 09:00:30 localhost kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Dec 06 09:00:30 localhost kernel: PCI: Using E820 reservations for host bridge windows
Dec 06 09:00:30 localhost kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Dec 06 09:00:30 localhost kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Dec 06 09:00:30 localhost kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI EDR HPX-Type3]
Dec 06 09:00:30 localhost kernel: acpiphp: Slot [3] registered
Dec 06 09:00:30 localhost kernel: acpiphp: Slot [4] registered
Dec 06 09:00:30 localhost kernel: acpiphp: Slot [5] registered
Dec 06 09:00:30 localhost kernel: acpiphp: Slot [6] registered
Dec 06 09:00:30 localhost kernel: acpiphp: Slot [7] registered
Dec 06 09:00:30 localhost kernel: acpiphp: Slot [8] registered
Dec 06 09:00:30 localhost kernel: acpiphp: Slot [9] registered
Dec 06 09:00:30 localhost kernel: acpiphp: Slot [10] registered
Dec 06 09:00:30 localhost kernel: acpiphp: Slot [11] registered
Dec 06 09:00:30 localhost kernel: acpiphp: Slot [12] registered
Dec 06 09:00:30 localhost kernel: acpiphp: Slot [13] registered
Dec 06 09:00:30 localhost kernel: acpiphp: Slot [14] registered
Dec 06 09:00:30 localhost kernel: acpiphp: Slot [15] registered
Dec 06 09:00:30 localhost kernel: acpiphp: Slot [16] registered
Dec 06 09:00:30 localhost kernel: acpiphp: Slot [17] registered
Dec 06 09:00:30 localhost kernel: acpiphp: Slot [18] registered
Dec 06 09:00:30 localhost kernel: acpiphp: Slot [19] registered
Dec 06 09:00:30 localhost kernel: acpiphp: Slot [20] registered
Dec 06 09:00:30 localhost kernel: acpiphp: Slot [21] registered
Dec 06 09:00:30 localhost kernel: acpiphp: Slot [22] registered
Dec 06 09:00:30 localhost kernel: acpiphp: Slot [23] registered
Dec 06 09:00:30 localhost kernel: acpiphp: Slot [24] registered
Dec 06 09:00:30 localhost kernel: acpiphp: Slot [25] registered
Dec 06 09:00:30 localhost kernel: acpiphp: Slot [26] registered
Dec 06 09:00:30 localhost kernel: acpiphp: Slot [27] registered
Dec 06 09:00:30 localhost kernel: acpiphp: Slot [28] registered
Dec 06 09:00:30 localhost kernel: acpiphp: Slot [29] registered
Dec 06 09:00:30 localhost kernel: acpiphp: Slot [30] registered
Dec 06 09:00:30 localhost kernel: acpiphp: Slot [31] registered
Dec 06 09:00:30 localhost kernel: PCI host bridge to bus 0000:00
Dec 06 09:00:30 localhost kernel: pci_bus 0000:00: root bus resource [io  0x0000-0x0cf7 window]
Dec 06 09:00:30 localhost kernel: pci_bus 0000:00: root bus resource [io  0x0d00-0xffff window]
Dec 06 09:00:30 localhost kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Dec 06 09:00:30 localhost kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Dec 06 09:00:30 localhost kernel: pci_bus 0000:00: root bus resource [mem 0x240000000-0x2bfffffff window]
Dec 06 09:00:30 localhost kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Dec 06 09:00:30 localhost kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint
Dec 06 09:00:30 localhost kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint
Dec 06 09:00:30 localhost kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 conventional PCI endpoint
Dec 06 09:00:30 localhost kernel: pci 0000:00:01.1: BAR 4 [io  0xc140-0xc14f]
Dec 06 09:00:30 localhost kernel: pci 0000:00:01.1: BAR 0 [io  0x01f0-0x01f7]: legacy IDE quirk
Dec 06 09:00:30 localhost kernel: pci 0000:00:01.1: BAR 1 [io  0x03f6]: legacy IDE quirk
Dec 06 09:00:30 localhost kernel: pci 0000:00:01.1: BAR 2 [io  0x0170-0x0177]: legacy IDE quirk
Dec 06 09:00:30 localhost kernel: pci 0000:00:01.1: BAR 3 [io  0x0376]: legacy IDE quirk
Dec 06 09:00:30 localhost kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300 conventional PCI endpoint
Dec 06 09:00:30 localhost kernel: pci 0000:00:01.2: BAR 4 [io  0xc100-0xc11f]
Dec 06 09:00:30 localhost kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint
Dec 06 09:00:30 localhost kernel: pci 0000:00:01.3: quirk: [io  0x0600-0x063f] claimed by PIIX4 ACPI
Dec 06 09:00:30 localhost kernel: pci 0000:00:01.3: quirk: [io  0x0700-0x070f] claimed by PIIX4 SMB
Dec 06 09:00:30 localhost kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint
Dec 06 09:00:30 localhost kernel: pci 0000:00:02.0: BAR 0 [mem 0xfe000000-0xfe7fffff pref]
Dec 06 09:00:30 localhost kernel: pci 0000:00:02.0: BAR 2 [mem 0xfe800000-0xfe803fff 64bit pref]
Dec 06 09:00:30 localhost kernel: pci 0000:00:02.0: BAR 4 [mem 0xfeb90000-0xfeb90fff]
Dec 06 09:00:30 localhost kernel: pci 0000:00:02.0: ROM [mem 0xfeb80000-0xfeb8ffff pref]
Dec 06 09:00:30 localhost kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Dec 06 09:00:30 localhost kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Dec 06 09:00:30 localhost kernel: pci 0000:00:03.0: BAR 0 [io  0xc080-0xc0bf]
Dec 06 09:00:30 localhost kernel: pci 0000:00:03.0: BAR 1 [mem 0xfeb91000-0xfeb91fff]
Dec 06 09:00:30 localhost kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe804000-0xfe807fff 64bit pref]
Dec 06 09:00:30 localhost kernel: pci 0000:00:03.0: ROM [mem 0xfeb00000-0xfeb7ffff pref]
Dec 06 09:00:30 localhost kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Dec 06 09:00:30 localhost kernel: pci 0000:00:04.0: BAR 0 [io  0xc000-0xc07f]
Dec 06 09:00:30 localhost kernel: pci 0000:00:04.0: BAR 1 [mem 0xfeb92000-0xfeb92fff]
Dec 06 09:00:30 localhost kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe808000-0xfe80bfff 64bit pref]
Dec 06 09:00:30 localhost kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00 conventional PCI endpoint
Dec 06 09:00:30 localhost kernel: pci 0000:00:05.0: BAR 0 [io  0xc0c0-0xc0ff]
Dec 06 09:00:30 localhost kernel: pci 0000:00:05.0: BAR 4 [mem 0xfe80c000-0xfe80ffff 64bit pref]
Dec 06 09:00:30 localhost kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Dec 06 09:00:30 localhost kernel: pci 0000:00:06.0: BAR 0 [io  0xc120-0xc13f]
Dec 06 09:00:30 localhost kernel: pci 0000:00:06.0: BAR 4 [mem 0xfe810000-0xfe813fff 64bit pref]
Dec 06 09:00:30 localhost kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Dec 06 09:00:30 localhost kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Dec 06 09:00:30 localhost kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Dec 06 09:00:30 localhost kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Dec 06 09:00:30 localhost kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Dec 06 09:00:30 localhost kernel: iommu: Default domain type: Translated
Dec 06 09:00:30 localhost kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Dec 06 09:00:30 localhost kernel: SCSI subsystem initialized
Dec 06 09:00:30 localhost kernel: ACPI: bus type USB registered
Dec 06 09:00:30 localhost kernel: usbcore: registered new interface driver usbfs
Dec 06 09:00:30 localhost kernel: usbcore: registered new interface driver hub
Dec 06 09:00:30 localhost kernel: usbcore: registered new device driver usb
Dec 06 09:00:30 localhost kernel: pps_core: LinuxPPS API ver. 1 registered
Dec 06 09:00:30 localhost kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
Dec 06 09:00:30 localhost kernel: PTP clock support registered
Dec 06 09:00:30 localhost kernel: EDAC MC: Ver: 3.0.0
Dec 06 09:00:30 localhost kernel: NetLabel: Initializing
Dec 06 09:00:30 localhost kernel: NetLabel:  domain hash size = 128
Dec 06 09:00:30 localhost kernel: NetLabel:  protocols = UNLABELED CIPSOv4 CALIPSO
Dec 06 09:00:30 localhost kernel: NetLabel:  unlabeled traffic allowed by default
Dec 06 09:00:30 localhost kernel: PCI: Using ACPI for IRQ routing
Dec 06 09:00:30 localhost kernel: PCI: pci_cache_line_size set to 64 bytes
Dec 06 09:00:30 localhost kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Dec 06 09:00:30 localhost kernel: e820: reserve RAM buffer [mem 0xbffdb000-0xbfffffff]
Dec 06 09:00:30 localhost kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Dec 06 09:00:30 localhost kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Dec 06 09:00:30 localhost kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Dec 06 09:00:30 localhost kernel: vgaarb: loaded
Dec 06 09:00:30 localhost kernel: clocksource: Switched to clocksource kvm-clock
Dec 06 09:00:30 localhost kernel: VFS: Disk quotas dquot_6.6.0
Dec 06 09:00:30 localhost kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Dec 06 09:00:30 localhost kernel: pnp: PnP ACPI init
Dec 06 09:00:30 localhost kernel: pnp 00:03: [dma 2]
Dec 06 09:00:30 localhost kernel: pnp: PnP ACPI: found 5 devices
Dec 06 09:00:30 localhost kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Dec 06 09:00:30 localhost kernel: NET: Registered PF_INET protocol family
Dec 06 09:00:30 localhost kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Dec 06 09:00:30 localhost kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Dec 06 09:00:30 localhost kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Dec 06 09:00:30 localhost kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Dec 06 09:00:30 localhost kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
Dec 06 09:00:30 localhost kernel: TCP: Hash tables configured (established 65536 bind 65536)
Dec 06 09:00:30 localhost kernel: MPTCP token hash table entries: 8192 (order: 5, 196608 bytes, linear)
Dec 06 09:00:30 localhost kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Dec 06 09:00:30 localhost kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Dec 06 09:00:30 localhost kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Dec 06 09:00:30 localhost kernel: NET: Registered PF_XDP protocol family
Dec 06 09:00:30 localhost kernel: pci_bus 0000:00: resource 4 [io  0x0000-0x0cf7 window]
Dec 06 09:00:30 localhost kernel: pci_bus 0000:00: resource 5 [io  0x0d00-0xffff window]
Dec 06 09:00:30 localhost kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Dec 06 09:00:30 localhost kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window]
Dec 06 09:00:30 localhost kernel: pci_bus 0000:00: resource 8 [mem 0x240000000-0x2bfffffff window]
Dec 06 09:00:30 localhost kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Dec 06 09:00:30 localhost kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Dec 06 09:00:30 localhost kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Dec 06 09:00:30 localhost kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x160 took 80632 usecs
Dec 06 09:00:30 localhost kernel: PCI: CLS 0 bytes, default 64
Dec 06 09:00:30 localhost kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Dec 06 09:00:30 localhost kernel: software IO TLB: mapped [mem 0x00000000ab000000-0x00000000af000000] (64MB)
Dec 06 09:00:30 localhost kernel: ACPI: bus type thunderbolt registered
Dec 06 09:00:30 localhost kernel: Trying to unpack rootfs image as initramfs...
Dec 06 09:00:30 localhost kernel: Initialise system trusted keyrings
Dec 06 09:00:30 localhost kernel: Key type blacklist registered
Dec 06 09:00:30 localhost kernel: workingset: timestamp_bits=36 max_order=21 bucket_order=0
Dec 06 09:00:30 localhost kernel: zbud: loaded
Dec 06 09:00:30 localhost kernel: integrity: Platform Keyring initialized
Dec 06 09:00:30 localhost kernel: integrity: Machine keyring initialized
Dec 06 09:00:30 localhost kernel: Freeing initrd memory: 87804K
Dec 06 09:00:30 localhost kernel: NET: Registered PF_ALG protocol family
Dec 06 09:00:30 localhost kernel: xor: automatically using best checksumming function   avx       
Dec 06 09:00:30 localhost kernel: Key type asymmetric registered
Dec 06 09:00:30 localhost kernel: Asymmetric key parser 'x509' registered
Dec 06 09:00:30 localhost kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 246)
Dec 06 09:00:30 localhost kernel: io scheduler mq-deadline registered
Dec 06 09:00:30 localhost kernel: io scheduler kyber registered
Dec 06 09:00:30 localhost kernel: io scheduler bfq registered
Dec 06 09:00:30 localhost kernel: atomic64_test: passed for x86-64 platform with CX8 and with SSE
Dec 06 09:00:30 localhost kernel: shpchp: Standard Hot Plug PCI Controller Driver version: 0.4
Dec 06 09:00:30 localhost kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input0
Dec 06 09:00:30 localhost kernel: ACPI: button: Power Button [PWRF]
Dec 06 09:00:30 localhost kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Dec 06 09:00:30 localhost kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Dec 06 09:00:30 localhost kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Dec 06 09:00:30 localhost kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Dec 06 09:00:30 localhost kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Dec 06 09:00:30 localhost kernel: Non-volatile memory driver v1.3
Dec 06 09:00:30 localhost kernel: rdac: device handler registered
Dec 06 09:00:30 localhost kernel: hp_sw: device handler registered
Dec 06 09:00:30 localhost kernel: emc: device handler registered
Dec 06 09:00:30 localhost kernel: alua: device handler registered
Dec 06 09:00:30 localhost kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Dec 06 09:00:30 localhost kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Dec 06 09:00:30 localhost kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Dec 06 09:00:30 localhost kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c100
Dec 06 09:00:30 localhost kernel: usb usb1: New USB device found, idVendor=1d6b, idProduct=0001, bcdDevice= 5.14
Dec 06 09:00:30 localhost kernel: usb usb1: New USB device strings: Mfr=3, Product=2, SerialNumber=1
Dec 06 09:00:30 localhost kernel: usb usb1: Product: UHCI Host Controller
Dec 06 09:00:30 localhost kernel: usb usb1: Manufacturer: Linux 5.14.0-645.el9.x86_64 uhci_hcd
Dec 06 09:00:30 localhost kernel: usb usb1: SerialNumber: 0000:00:01.2
Dec 06 09:00:30 localhost kernel: hub 1-0:1.0: USB hub found
Dec 06 09:00:30 localhost kernel: hub 1-0:1.0: 2 ports detected
Dec 06 09:00:30 localhost kernel: usbcore: registered new interface driver usbserial_generic
Dec 06 09:00:30 localhost kernel: usbserial: USB Serial support registered for generic
Dec 06 09:00:30 localhost kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Dec 06 09:00:30 localhost kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Dec 06 09:00:30 localhost kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Dec 06 09:00:30 localhost kernel: mousedev: PS/2 mouse device common for all mice
Dec 06 09:00:30 localhost kernel: rtc_cmos 00:04: RTC can wake from S4
Dec 06 09:00:30 localhost kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Dec 06 09:00:30 localhost kernel: rtc_cmos 00:04: registered as rtc0
Dec 06 09:00:30 localhost kernel: rtc_cmos 00:04: setting system clock to 2025-12-06T09:00:29 UTC (1765011629)
Dec 06 09:00:30 localhost kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Dec 06 09:00:30 localhost kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Dec 06 09:00:30 localhost kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input4
Dec 06 09:00:30 localhost kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input3
Dec 06 09:00:30 localhost kernel: hid: raw HID events driver (C) Jiri Kosina
Dec 06 09:00:30 localhost kernel: usbcore: registered new interface driver usbhid
Dec 06 09:00:30 localhost kernel: usbhid: USB HID core driver
Dec 06 09:00:30 localhost kernel: drop_monitor: Initializing network drop monitor service
Dec 06 09:00:30 localhost kernel: Initializing XFRM netlink socket
Dec 06 09:00:30 localhost kernel: NET: Registered PF_INET6 protocol family
Dec 06 09:00:30 localhost kernel: Segment Routing with IPv6
Dec 06 09:00:30 localhost kernel: NET: Registered PF_PACKET protocol family
Dec 06 09:00:30 localhost kernel: mpls_gso: MPLS GSO support
Dec 06 09:00:30 localhost kernel: IPI shorthand broadcast: enabled
Dec 06 09:00:30 localhost kernel: AVX2 version of gcm_enc/dec engaged.
Dec 06 09:00:30 localhost kernel: AES CTR mode by8 optimization enabled
Dec 06 09:00:30 localhost kernel: sched_clock: Marking stable (1238005940, 153442775)->(1509443639, -117994924)
Dec 06 09:00:30 localhost kernel: registered taskstats version 1
Dec 06 09:00:30 localhost kernel: Loading compiled-in X.509 certificates
Dec 06 09:00:30 localhost kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 4c28336b4850d771d036b52fb2778fdb4f02f708'
Dec 06 09:00:30 localhost kernel: Loaded X.509 cert 'Red Hat Enterprise Linux Driver Update Program (key 3): bf57f3e87362bc7229d9f465321773dfd1f77a80'
Dec 06 09:00:30 localhost kernel: Loaded X.509 cert 'Red Hat Enterprise Linux kpatch signing key: 4d38fd864ebe18c5f0b72e3852e2014c3a676fc8'
Dec 06 09:00:30 localhost kernel: Loaded X.509 cert 'RH-IMA-CA: Red Hat IMA CA: fb31825dd0e073685b264e3038963673f753959a'
Dec 06 09:00:30 localhost kernel: Loaded X.509 cert 'Nvidia GPU OOT signing 001: 55e1cef88193e60419f0b0ec379c49f77545acf0'
Dec 06 09:00:30 localhost kernel: Demotion targets for Node 0: null
Dec 06 09:00:30 localhost kernel: page_owner is disabled
Dec 06 09:00:30 localhost kernel: Key type .fscrypt registered
Dec 06 09:00:30 localhost kernel: Key type fscrypt-provisioning registered
Dec 06 09:00:30 localhost kernel: Key type big_key registered
Dec 06 09:00:30 localhost kernel: Key type encrypted registered
Dec 06 09:00:30 localhost kernel: ima: No TPM chip found, activating TPM-bypass!
Dec 06 09:00:30 localhost kernel: Loading compiled-in module X.509 certificates
Dec 06 09:00:30 localhost kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 4c28336b4850d771d036b52fb2778fdb4f02f708'
Dec 06 09:00:30 localhost kernel: ima: Allocated hash algorithm: sha256
Dec 06 09:00:30 localhost kernel: ima: No architecture policies found
Dec 06 09:00:30 localhost kernel: evm: Initialising EVM extended attributes:
Dec 06 09:00:30 localhost kernel: evm: security.selinux
Dec 06 09:00:30 localhost kernel: evm: security.SMACK64 (disabled)
Dec 06 09:00:30 localhost kernel: evm: security.SMACK64EXEC (disabled)
Dec 06 09:00:30 localhost kernel: evm: security.SMACK64TRANSMUTE (disabled)
Dec 06 09:00:30 localhost kernel: evm: security.SMACK64MMAP (disabled)
Dec 06 09:00:30 localhost kernel: evm: security.apparmor (disabled)
Dec 06 09:00:30 localhost kernel: evm: security.ima
Dec 06 09:00:30 localhost kernel: evm: security.capability
Dec 06 09:00:30 localhost kernel: evm: HMAC attrs: 0x1
Dec 06 09:00:30 localhost kernel: usb 1-1: new full-speed USB device number 2 using uhci_hcd
Dec 06 09:00:30 localhost kernel: Running certificate verification RSA selftest
Dec 06 09:00:30 localhost kernel: Loaded X.509 cert 'Certificate verification self-testing key: f58703bb33ce1b73ee02eccdee5b8817518fe3db'
Dec 06 09:00:30 localhost kernel: Running certificate verification ECDSA selftest
Dec 06 09:00:30 localhost kernel: Loaded X.509 cert 'Certificate verification ECDSA self-testing key: 2900bcea1deb7bc8479a84a23d758efdfdd2b2d3'
Dec 06 09:00:30 localhost kernel: clk: Disabling unused clocks
Dec 06 09:00:30 localhost kernel: Freeing unused decrypted memory: 2028K
Dec 06 09:00:30 localhost kernel: Freeing unused kernel image (initmem) memory: 4196K
Dec 06 09:00:30 localhost kernel: Write protecting the kernel read-only data: 30720k
Dec 06 09:00:30 localhost kernel: Freeing unused kernel image (rodata/data gap) memory: 428K
Dec 06 09:00:30 localhost kernel: x86/mm: Checked W+X mappings: passed, no W+X pages found.
Dec 06 09:00:30 localhost kernel: Run /init as init process
Dec 06 09:00:30 localhost kernel:   with arguments:
Dec 06 09:00:30 localhost kernel:     /init
Dec 06 09:00:30 localhost kernel:   with environment:
Dec 06 09:00:30 localhost kernel:     HOME=/
Dec 06 09:00:30 localhost kernel:     TERM=linux
Dec 06 09:00:30 localhost kernel:     BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-645.el9.x86_64
Dec 06 09:00:30 localhost kernel: usb 1-1: New USB device found, idVendor=0627, idProduct=0001, bcdDevice= 0.00
Dec 06 09:00:30 localhost kernel: usb 1-1: New USB device strings: Mfr=1, Product=3, SerialNumber=10
Dec 06 09:00:30 localhost kernel: usb 1-1: Product: QEMU USB Tablet
Dec 06 09:00:30 localhost kernel: usb 1-1: Manufacturer: QEMU
Dec 06 09:00:30 localhost kernel: usb 1-1: SerialNumber: 28754-0000:00:01.2-1
Dec 06 09:00:30 localhost kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:01.2/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input5
Dec 06 09:00:30 localhost kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:00:01.2-1/input0
Dec 06 09:00:30 localhost systemd[1]: systemd 252-59.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Dec 06 09:00:30 localhost systemd[1]: Detected virtualization kvm.
Dec 06 09:00:30 localhost systemd[1]: Detected architecture x86-64.
Dec 06 09:00:30 localhost systemd[1]: Running in initrd.
Dec 06 09:00:30 localhost systemd[1]: No hostname configured, using default hostname.
Dec 06 09:00:30 localhost systemd[1]: Hostname set to <localhost>.
Dec 06 09:00:30 localhost systemd[1]: Initializing machine ID from VM UUID.
Dec 06 09:00:30 localhost systemd[1]: Queued start job for default target Initrd Default Target.
Dec 06 09:00:30 localhost systemd[1]: Started Dispatch Password Requests to Console Directory Watch.
Dec 06 09:00:30 localhost systemd[1]: Reached target Local Encrypted Volumes.
Dec 06 09:00:30 localhost systemd[1]: Reached target Initrd /usr File System.
Dec 06 09:00:30 localhost systemd[1]: Reached target Local File Systems.
Dec 06 09:00:30 localhost systemd[1]: Reached target Path Units.
Dec 06 09:00:30 localhost systemd[1]: Reached target Slice Units.
Dec 06 09:00:30 localhost systemd[1]: Reached target Swaps.
Dec 06 09:00:30 localhost systemd[1]: Reached target Timer Units.
Dec 06 09:00:30 localhost systemd[1]: Listening on D-Bus System Message Bus Socket.
Dec 06 09:00:30 localhost systemd[1]: Listening on Journal Socket (/dev/log).
Dec 06 09:00:30 localhost systemd[1]: Listening on Journal Socket.
Dec 06 09:00:30 localhost systemd[1]: Listening on udev Control Socket.
Dec 06 09:00:30 localhost systemd[1]: Listening on udev Kernel Socket.
Dec 06 09:00:30 localhost systemd[1]: Reached target Socket Units.
Dec 06 09:00:30 localhost systemd[1]: Starting Create List of Static Device Nodes...
Dec 06 09:00:30 localhost systemd[1]: Starting Journal Service...
Dec 06 09:00:30 localhost systemd[1]: Load Kernel Modules was skipped because no trigger condition checks were met.
Dec 06 09:00:30 localhost systemd[1]: Starting Apply Kernel Variables...
Dec 06 09:00:30 localhost systemd[1]: Starting Create System Users...
Dec 06 09:00:30 localhost systemd[1]: Starting Setup Virtual Console...
Dec 06 09:00:30 localhost systemd[1]: Finished Create List of Static Device Nodes.
Dec 06 09:00:30 localhost systemd[1]: Finished Apply Kernel Variables.
Dec 06 09:00:30 localhost systemd[1]: Finished Create System Users.
Dec 06 09:00:30 localhost systemd-journald[304]: Journal started
Dec 06 09:00:30 localhost systemd-journald[304]: Runtime Journal (/run/log/journal/cc5c2b35ce1b4acf99067bdc7897f14e) is 8.0M, max 153.6M, 145.6M free.
Dec 06 09:00:30 localhost systemd-sysusers[309]: Creating group 'users' with GID 100.
Dec 06 09:00:30 localhost systemd-sysusers[309]: Creating group 'dbus' with GID 81.
Dec 06 09:00:30 localhost systemd-sysusers[309]: Creating user 'dbus' (System Message Bus) with UID 81 and GID 81.
Dec 06 09:00:30 localhost systemd[1]: Starting Create Static Device Nodes in /dev...
Dec 06 09:00:30 localhost systemd[1]: Started Journal Service.
Dec 06 09:00:30 localhost systemd[1]: Starting Create Volatile Files and Directories...
Dec 06 09:00:30 localhost systemd[1]: Finished Create Static Device Nodes in /dev.
Dec 06 09:00:30 localhost systemd[1]: Finished Create Volatile Files and Directories.
Dec 06 09:00:30 localhost systemd[1]: Finished Setup Virtual Console.
Dec 06 09:00:30 localhost systemd[1]: dracut ask for additional cmdline parameters was skipped because no trigger condition checks were met.
Dec 06 09:00:30 localhost systemd[1]: Starting dracut cmdline hook...
Dec 06 09:00:30 localhost dracut-cmdline[324]: dracut-9 dracut-057-102.git20250818.el9
Dec 06 09:00:30 localhost dracut-cmdline[324]: Using kernel command line parameters:    BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-645.el9.x86_64 root=UUID=fcf6b761-831a-48a7-9f5f-068b5063763f ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Dec 06 09:00:30 localhost systemd[1]: Finished dracut cmdline hook.
Dec 06 09:00:30 localhost systemd[1]: Starting dracut pre-udev hook...
Dec 06 09:00:30 localhost kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Dec 06 09:00:30 localhost kernel: device-mapper: uevent: version 1.0.3
Dec 06 09:00:30 localhost kernel: device-mapper: ioctl: 4.50.0-ioctl (2025-04-28) initialised: dm-devel@lists.linux.dev
Dec 06 09:00:30 localhost kernel: RPC: Registered named UNIX socket transport module.
Dec 06 09:00:30 localhost kernel: RPC: Registered udp transport module.
Dec 06 09:00:30 localhost kernel: RPC: Registered tcp transport module.
Dec 06 09:00:30 localhost kernel: RPC: Registered tcp-with-tls transport module.
Dec 06 09:00:30 localhost kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Dec 06 09:00:30 localhost rpc.statd[440]: Version 2.5.4 starting
Dec 06 09:00:30 localhost rpc.statd[440]: Initializing NSM state
Dec 06 09:00:30 localhost rpc.idmapd[445]: Setting log level to 0
Dec 06 09:00:30 localhost systemd[1]: Finished dracut pre-udev hook.
Dec 06 09:00:30 localhost systemd[1]: Starting Rule-based Manager for Device Events and Files...
Dec 06 09:00:30 localhost systemd-udevd[458]: Using default interface naming scheme 'rhel-9.0'.
Dec 06 09:00:30 localhost systemd[1]: Started Rule-based Manager for Device Events and Files.
Dec 06 09:00:30 localhost systemd[1]: Starting dracut pre-trigger hook...
Dec 06 09:00:30 localhost systemd[1]: Finished dracut pre-trigger hook.
Dec 06 09:00:30 localhost systemd[1]: Starting Coldplug All udev Devices...
Dec 06 09:00:30 localhost systemd[1]: Created slice Slice /system/modprobe.
Dec 06 09:00:30 localhost systemd[1]: Starting Load Kernel Module configfs...
Dec 06 09:00:30 localhost systemd[1]: Finished Coldplug All udev Devices.
Dec 06 09:00:30 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Dec 06 09:00:30 localhost systemd[1]: Finished Load Kernel Module configfs.
Dec 06 09:00:30 localhost systemd[1]: nm-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Dec 06 09:00:30 localhost systemd[1]: Reached target Network.
Dec 06 09:00:30 localhost systemd[1]: nm-wait-online-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Dec 06 09:00:30 localhost systemd[1]: Starting dracut initqueue hook...
Dec 06 09:00:30 localhost kernel: virtio_blk virtio2: 8/0/0 default/read/poll queues
Dec 06 09:00:30 localhost kernel: virtio_blk virtio2: [vda] 167772160 512-byte logical blocks (85.9 GB/80.0 GiB)
Dec 06 09:00:31 localhost kernel:  vda: vda1
Dec 06 09:00:31 localhost kernel: libata version 3.00 loaded.
Dec 06 09:00:31 localhost kernel: ata_piix 0000:00:01.1: version 2.13
Dec 06 09:00:31 localhost kernel: scsi host0: ata_piix
Dec 06 09:00:31 localhost kernel: scsi host1: ata_piix
Dec 06 09:00:31 localhost kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc140 irq 14 lpm-pol 0
Dec 06 09:00:31 localhost kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc148 irq 15 lpm-pol 0
Dec 06 09:00:31 localhost systemd-udevd[475]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 09:00:31 localhost systemd[1]: Found device /dev/disk/by-uuid/fcf6b761-831a-48a7-9f5f-068b5063763f.
Dec 06 09:00:31 localhost systemd[1]: Reached target Initrd Root Device.
Dec 06 09:00:31 localhost systemd[1]: Mounting Kernel Configuration File System...
Dec 06 09:00:31 localhost systemd[1]: Mounted Kernel Configuration File System.
Dec 06 09:00:31 localhost kernel: ata1: found unknown device (class 0)
Dec 06 09:00:31 localhost kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Dec 06 09:00:31 localhost kernel: scsi 0:0:0:0: CD-ROM            QEMU     QEMU DVD-ROM     2.5+ PQ: 0 ANSI: 5
Dec 06 09:00:31 localhost systemd[1]: Reached target System Initialization.
Dec 06 09:00:31 localhost systemd[1]: Reached target Basic System.
Dec 06 09:00:31 localhost kernel: scsi 0:0:0:0: Attached scsi generic sg0 type 5
Dec 06 09:00:31 localhost kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Dec 06 09:00:31 localhost kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Dec 06 09:00:31 localhost kernel: sr 0:0:0:0: Attached scsi CD-ROM sr0
Dec 06 09:00:31 localhost systemd[1]: Finished dracut initqueue hook.
Dec 06 09:00:31 localhost systemd[1]: Reached target Preparation for Remote File Systems.
Dec 06 09:00:31 localhost systemd[1]: Reached target Remote Encrypted Volumes.
Dec 06 09:00:31 localhost systemd[1]: Reached target Remote File Systems.
Dec 06 09:00:31 localhost systemd[1]: Starting dracut pre-mount hook...
Dec 06 09:00:31 localhost systemd[1]: Finished dracut pre-mount hook.
Dec 06 09:00:31 localhost systemd[1]: Starting File System Check on /dev/disk/by-uuid/fcf6b761-831a-48a7-9f5f-068b5063763f...
Dec 06 09:00:31 localhost systemd-fsck[555]: /usr/sbin/fsck.xfs: XFS file system.
Dec 06 09:00:31 localhost systemd[1]: Finished File System Check on /dev/disk/by-uuid/fcf6b761-831a-48a7-9f5f-068b5063763f.
Dec 06 09:00:31 localhost systemd[1]: Mounting /sysroot...
Dec 06 09:00:31 localhost kernel: SGI XFS with ACLs, security attributes, scrub, quota, no debug enabled
Dec 06 09:00:31 localhost kernel: XFS (vda1): Mounting V5 Filesystem fcf6b761-831a-48a7-9f5f-068b5063763f
Dec 06 09:00:31 localhost kernel: XFS (vda1): Ending clean mount
Dec 06 09:00:31 localhost systemd[1]: Mounted /sysroot.
Dec 06 09:00:31 localhost systemd[1]: Reached target Initrd Root File System.
Dec 06 09:00:31 localhost systemd[1]: Starting Mountpoints Configured in the Real Root...
Dec 06 09:00:31 localhost systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Dec 06 09:00:31 localhost systemd[1]: Finished Mountpoints Configured in the Real Root.
Dec 06 09:00:31 localhost systemd[1]: Reached target Initrd File Systems.
Dec 06 09:00:31 localhost systemd[1]: Reached target Initrd Default Target.
Dec 06 09:00:31 localhost systemd[1]: Starting dracut mount hook...
Dec 06 09:00:31 localhost systemd[1]: Finished dracut mount hook.
Dec 06 09:00:31 localhost systemd[1]: Starting dracut pre-pivot and cleanup hook...
Dec 06 09:00:32 localhost rpc.idmapd[445]: exiting on signal 15
Dec 06 09:00:32 localhost systemd[1]: var-lib-nfs-rpc_pipefs.mount: Deactivated successfully.
Dec 06 09:00:32 localhost systemd[1]: Finished dracut pre-pivot and cleanup hook.
Dec 06 09:00:32 localhost systemd[1]: Starting Cleaning Up and Shutting Down Daemons...
Dec 06 09:00:32 localhost systemd[1]: Stopped target Network.
Dec 06 09:00:32 localhost systemd[1]: Stopped target Remote Encrypted Volumes.
Dec 06 09:00:32 localhost systemd[1]: Stopped target Timer Units.
Dec 06 09:00:32 localhost systemd[1]: dbus.socket: Deactivated successfully.
Dec 06 09:00:32 localhost systemd[1]: Closed D-Bus System Message Bus Socket.
Dec 06 09:00:32 localhost systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Dec 06 09:00:32 localhost systemd[1]: Stopped dracut pre-pivot and cleanup hook.
Dec 06 09:00:32 localhost systemd[1]: Stopped target Initrd Default Target.
Dec 06 09:00:32 localhost systemd[1]: Stopped target Basic System.
Dec 06 09:00:32 localhost systemd[1]: Stopped target Initrd Root Device.
Dec 06 09:00:32 localhost systemd[1]: Stopped target Initrd /usr File System.
Dec 06 09:00:32 localhost systemd[1]: Stopped target Path Units.
Dec 06 09:00:32 localhost systemd[1]: Stopped target Remote File Systems.
Dec 06 09:00:32 localhost systemd[1]: Stopped target Preparation for Remote File Systems.
Dec 06 09:00:32 localhost systemd[1]: Stopped target Slice Units.
Dec 06 09:00:32 localhost systemd[1]: Stopped target Socket Units.
Dec 06 09:00:32 localhost systemd[1]: Stopped target System Initialization.
Dec 06 09:00:32 localhost systemd[1]: Stopped target Local File Systems.
Dec 06 09:00:32 localhost systemd[1]: Stopped target Swaps.
Dec 06 09:00:32 localhost systemd[1]: dracut-mount.service: Deactivated successfully.
Dec 06 09:00:32 localhost systemd[1]: Stopped dracut mount hook.
Dec 06 09:00:32 localhost systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Dec 06 09:00:32 localhost systemd[1]: Stopped dracut pre-mount hook.
Dec 06 09:00:32 localhost systemd[1]: Stopped target Local Encrypted Volumes.
Dec 06 09:00:32 localhost systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Dec 06 09:00:32 localhost systemd[1]: Stopped Dispatch Password Requests to Console Directory Watch.
Dec 06 09:00:32 localhost systemd[1]: dracut-initqueue.service: Deactivated successfully.
Dec 06 09:00:32 localhost systemd[1]: Stopped dracut initqueue hook.
Dec 06 09:00:32 localhost systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec 06 09:00:32 localhost systemd[1]: Stopped Apply Kernel Variables.
Dec 06 09:00:32 localhost systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Dec 06 09:00:32 localhost systemd[1]: Stopped Create Volatile Files and Directories.
Dec 06 09:00:32 localhost systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Dec 06 09:00:32 localhost systemd[1]: Stopped Coldplug All udev Devices.
Dec 06 09:00:32 localhost systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Dec 06 09:00:32 localhost systemd[1]: Stopped dracut pre-trigger hook.
Dec 06 09:00:32 localhost systemd[1]: Stopping Rule-based Manager for Device Events and Files...
Dec 06 09:00:32 localhost systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 06 09:00:32 localhost systemd[1]: Stopped Setup Virtual Console.
Dec 06 09:00:32 localhost systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Dec 06 09:00:32 localhost systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Dec 06 09:00:32 localhost systemd[1]: initrd-cleanup.service: Deactivated successfully.
Dec 06 09:00:32 localhost systemd[1]: Finished Cleaning Up and Shutting Down Daemons.
Dec 06 09:00:32 localhost systemd[1]: systemd-udevd.service: Deactivated successfully.
Dec 06 09:00:32 localhost systemd[1]: Stopped Rule-based Manager for Device Events and Files.
Dec 06 09:00:32 localhost systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Dec 06 09:00:32 localhost systemd[1]: Closed udev Control Socket.
Dec 06 09:00:32 localhost systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Dec 06 09:00:32 localhost systemd[1]: Closed udev Kernel Socket.
Dec 06 09:00:32 localhost systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Dec 06 09:00:32 localhost systemd[1]: Stopped dracut pre-udev hook.
Dec 06 09:00:32 localhost systemd[1]: dracut-cmdline.service: Deactivated successfully.
Dec 06 09:00:32 localhost systemd[1]: Stopped dracut cmdline hook.
Dec 06 09:00:32 localhost systemd[1]: Starting Cleanup udev Database...
Dec 06 09:00:32 localhost systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Dec 06 09:00:32 localhost systemd[1]: Stopped Create Static Device Nodes in /dev.
Dec 06 09:00:32 localhost systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Dec 06 09:00:32 localhost systemd[1]: Stopped Create List of Static Device Nodes.
Dec 06 09:00:32 localhost systemd[1]: systemd-sysusers.service: Deactivated successfully.
Dec 06 09:00:32 localhost systemd[1]: Stopped Create System Users.
Dec 06 09:00:32 localhost systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Dec 06 09:00:32 localhost systemd[1]: run-credentials-systemd\x2dsysusers.service.mount: Deactivated successfully.
Dec 06 09:00:32 localhost systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Dec 06 09:00:32 localhost systemd[1]: Finished Cleanup udev Database.
Dec 06 09:00:32 localhost systemd[1]: Reached target Switch Root.
Dec 06 09:00:32 localhost systemd[1]: Starting Switch Root...
Dec 06 09:00:32 localhost systemd[1]: Switching root.
Dec 06 09:00:32 localhost systemd-journald[304]: Journal stopped
Dec 06 09:00:32 localhost systemd-journald[304]: Received SIGTERM from PID 1 (systemd).
Dec 06 09:00:32 localhost kernel: audit: type=1404 audit(1765011632.309:2): enforcing=1 old_enforcing=0 auid=4294967295 ses=4294967295 enabled=1 old-enabled=1 lsm=selinux res=1
Dec 06 09:00:32 localhost kernel: SELinux:  policy capability network_peer_controls=1
Dec 06 09:00:32 localhost kernel: SELinux:  policy capability open_perms=1
Dec 06 09:00:32 localhost kernel: SELinux:  policy capability extended_socket_class=1
Dec 06 09:00:32 localhost kernel: SELinux:  policy capability always_check_network=0
Dec 06 09:00:32 localhost kernel: SELinux:  policy capability cgroup_seclabel=1
Dec 06 09:00:32 localhost kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec 06 09:00:32 localhost kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec 06 09:00:32 localhost kernel: audit: type=1403 audit(1765011632.452:3): auid=4294967295 ses=4294967295 lsm=selinux res=1
Dec 06 09:00:32 localhost systemd[1]: Successfully loaded SELinux policy in 146.270ms.
Dec 06 09:00:32 localhost systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 29.741ms.
Dec 06 09:00:32 localhost systemd[1]: systemd 252-59.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Dec 06 09:00:32 localhost systemd[1]: Detected virtualization kvm.
Dec 06 09:00:32 localhost systemd[1]: Detected architecture x86-64.
Dec 06 09:00:32 localhost systemd-rc-local-generator[637]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 09:00:32 localhost systemd[1]: initrd-switch-root.service: Deactivated successfully.
Dec 06 09:00:32 localhost systemd[1]: Stopped Switch Root.
Dec 06 09:00:32 localhost systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Dec 06 09:00:32 localhost systemd[1]: Created slice Slice /system/getty.
Dec 06 09:00:32 localhost systemd[1]: Created slice Slice /system/serial-getty.
Dec 06 09:00:32 localhost systemd[1]: Created slice Slice /system/sshd-keygen.
Dec 06 09:00:32 localhost systemd[1]: Created slice User and Session Slice.
Dec 06 09:00:32 localhost systemd[1]: Started Dispatch Password Requests to Console Directory Watch.
Dec 06 09:00:32 localhost systemd[1]: Started Forward Password Requests to Wall Directory Watch.
Dec 06 09:00:32 localhost systemd[1]: Set up automount Arbitrary Executable File Formats File System Automount Point.
Dec 06 09:00:32 localhost systemd[1]: Reached target Local Encrypted Volumes.
Dec 06 09:00:32 localhost systemd[1]: Stopped target Switch Root.
Dec 06 09:00:32 localhost systemd[1]: Stopped target Initrd File Systems.
Dec 06 09:00:32 localhost systemd[1]: Stopped target Initrd Root File System.
Dec 06 09:00:32 localhost systemd[1]: Reached target Local Integrity Protected Volumes.
Dec 06 09:00:32 localhost systemd[1]: Reached target Path Units.
Dec 06 09:00:32 localhost systemd[1]: Reached target rpc_pipefs.target.
Dec 06 09:00:32 localhost systemd[1]: Reached target Slice Units.
Dec 06 09:00:32 localhost systemd[1]: Reached target Swaps.
Dec 06 09:00:32 localhost systemd[1]: Reached target Local Verity Protected Volumes.
Dec 06 09:00:32 localhost systemd[1]: Listening on RPCbind Server Activation Socket.
Dec 06 09:00:32 localhost systemd[1]: Reached target RPC Port Mapper.
Dec 06 09:00:32 localhost systemd[1]: Listening on Process Core Dump Socket.
Dec 06 09:00:32 localhost systemd[1]: Listening on initctl Compatibility Named Pipe.
Dec 06 09:00:32 localhost systemd[1]: Listening on udev Control Socket.
Dec 06 09:00:32 localhost systemd[1]: Listening on udev Kernel Socket.
Dec 06 09:00:32 localhost systemd[1]: Mounting Huge Pages File System...
Dec 06 09:00:32 localhost systemd[1]: Mounting POSIX Message Queue File System...
Dec 06 09:00:32 localhost systemd[1]: Mounting Kernel Debug File System...
Dec 06 09:00:32 localhost systemd[1]: Mounting Kernel Trace File System...
Dec 06 09:00:32 localhost systemd[1]: Kernel Module supporting RPCSEC_GSS was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Dec 06 09:00:32 localhost systemd[1]: Starting Create List of Static Device Nodes...
Dec 06 09:00:32 localhost systemd[1]: Starting Load Kernel Module configfs...
Dec 06 09:00:32 localhost systemd[1]: Starting Load Kernel Module drm...
Dec 06 09:00:32 localhost systemd[1]: Starting Load Kernel Module efi_pstore...
Dec 06 09:00:32 localhost systemd[1]: Starting Load Kernel Module fuse...
Dec 06 09:00:32 localhost systemd[1]: Starting Read and set NIS domainname from /etc/sysconfig/network...
Dec 06 09:00:32 localhost systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Dec 06 09:00:32 localhost systemd[1]: Stopped File System Check on Root Device.
Dec 06 09:00:32 localhost systemd[1]: Stopped Journal Service.
Dec 06 09:00:32 localhost systemd[1]: Starting Journal Service...
Dec 06 09:00:32 localhost systemd[1]: Load Kernel Modules was skipped because no trigger condition checks were met.
Dec 06 09:00:32 localhost systemd[1]: Starting Generate network units from Kernel command line...
Dec 06 09:00:32 localhost systemd[1]: TPM2 PCR Machine ID Measurement was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec 06 09:00:32 localhost systemd[1]: Starting Remount Root and Kernel File Systems...
Dec 06 09:00:32 localhost systemd[1]: Repartition Root Disk was skipped because no trigger condition checks were met.
Dec 06 09:00:32 localhost systemd[1]: Starting Apply Kernel Variables...
Dec 06 09:00:32 localhost kernel: fuse: init (API version 7.37)
Dec 06 09:00:32 localhost systemd[1]: Starting Coldplug All udev Devices...
Dec 06 09:00:32 localhost systemd-journald[678]: Journal started
Dec 06 09:00:32 localhost systemd-journald[678]: Runtime Journal (/run/log/journal/4d4ef2323cc3337bbfd9081b2a323b4e) is 8.0M, max 153.6M, 145.6M free.
Dec 06 09:00:32 localhost systemd[1]: Queued start job for default target Multi-User System.
Dec 06 09:00:32 localhost systemd[1]: systemd-journald.service: Deactivated successfully.
Dec 06 09:00:32 localhost systemd[1]: Started Journal Service.
Dec 06 09:00:32 localhost systemd[1]: Mounted Huge Pages File System.
Dec 06 09:00:32 localhost systemd[1]: Mounted POSIX Message Queue File System.
Dec 06 09:00:32 localhost kernel: xfs filesystem being remounted at / supports timestamps until 2038 (0x7fffffff)
Dec 06 09:00:32 localhost systemd[1]: Mounted Kernel Debug File System.
Dec 06 09:00:32 localhost systemd[1]: Mounted Kernel Trace File System.
Dec 06 09:00:32 localhost systemd[1]: Finished Create List of Static Device Nodes.
Dec 06 09:00:32 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Dec 06 09:00:32 localhost systemd[1]: Finished Load Kernel Module configfs.
Dec 06 09:00:32 localhost systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 06 09:00:32 localhost systemd[1]: Finished Load Kernel Module efi_pstore.
Dec 06 09:00:32 localhost systemd[1]: modprobe@fuse.service: Deactivated successfully.
Dec 06 09:00:32 localhost systemd[1]: Finished Load Kernel Module fuse.
Dec 06 09:00:32 localhost systemd[1]: Finished Read and set NIS domainname from /etc/sysconfig/network.
Dec 06 09:00:32 localhost systemd[1]: Finished Generate network units from Kernel command line.
Dec 06 09:00:32 localhost systemd[1]: Finished Remount Root and Kernel File Systems.
Dec 06 09:00:32 localhost systemd[1]: Finished Apply Kernel Variables.
Dec 06 09:00:32 localhost kernel: ACPI: bus type drm_connector registered
Dec 06 09:00:32 localhost systemd[1]: Mounting FUSE Control File System...
Dec 06 09:00:32 localhost systemd[1]: First Boot Wizard was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Dec 06 09:00:32 localhost systemd[1]: Starting Rebuild Hardware Database...
Dec 06 09:00:32 localhost systemd[1]: Starting Flush Journal to Persistent Storage...
Dec 06 09:00:32 localhost systemd[1]: Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 06 09:00:32 localhost systemd[1]: Starting Load/Save OS Random Seed...
Dec 06 09:00:32 localhost systemd[1]: Starting Create System Users...
Dec 06 09:00:32 localhost systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 06 09:00:32 localhost systemd-journald[678]: Runtime Journal (/run/log/journal/4d4ef2323cc3337bbfd9081b2a323b4e) is 8.0M, max 153.6M, 145.6M free.
Dec 06 09:00:32 localhost systemd[1]: Finished Load Kernel Module drm.
Dec 06 09:00:32 localhost systemd-journald[678]: Received client request to flush runtime journal.
Dec 06 09:00:32 localhost systemd[1]: Mounted FUSE Control File System.
Dec 06 09:00:32 localhost systemd[1]: Finished Flush Journal to Persistent Storage.
Dec 06 09:00:32 localhost systemd[1]: Finished Load/Save OS Random Seed.
Dec 06 09:00:32 localhost systemd[1]: First Boot Complete was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Dec 06 09:00:32 localhost systemd[1]: Finished Create System Users.
Dec 06 09:00:32 localhost systemd[1]: Starting Create Static Device Nodes in /dev...
Dec 06 09:00:32 localhost systemd[1]: Finished Coldplug All udev Devices.
Dec 06 09:00:33 localhost systemd[1]: Finished Create Static Device Nodes in /dev.
Dec 06 09:00:33 localhost systemd[1]: Reached target Preparation for Local File Systems.
Dec 06 09:00:33 localhost systemd[1]: Reached target Local File Systems.
Dec 06 09:00:33 localhost systemd[1]: Starting Rebuild Dynamic Linker Cache...
Dec 06 09:00:33 localhost systemd[1]: Mark the need to relabel after reboot was skipped because of an unmet condition check (ConditionSecurity=!selinux).
Dec 06 09:00:33 localhost systemd[1]: Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 06 09:00:33 localhost systemd[1]: Update Boot Loader Random Seed was skipped because no trigger condition checks were met.
Dec 06 09:00:33 localhost systemd[1]: Starting Automatic Boot Loader Update...
Dec 06 09:00:33 localhost systemd[1]: Commit a transient machine-id on disk was skipped because of an unmet condition check (ConditionPathIsMountPoint=/etc/machine-id).
Dec 06 09:00:33 localhost systemd[1]: Starting Create Volatile Files and Directories...
Dec 06 09:00:33 localhost bootctl[695]: Couldn't find EFI system partition, skipping.
Dec 06 09:00:33 localhost systemd[1]: Finished Automatic Boot Loader Update.
Dec 06 09:00:33 localhost systemd[1]: Finished Rebuild Dynamic Linker Cache.
Dec 06 09:00:33 localhost systemd[1]: Finished Create Volatile Files and Directories.
Dec 06 09:00:33 localhost systemd[1]: Starting Security Auditing Service...
Dec 06 09:00:33 localhost systemd[1]: Starting RPC Bind...
Dec 06 09:00:33 localhost systemd[1]: Starting Rebuild Journal Catalog...
Dec 06 09:00:33 localhost auditd[701]: audit dispatcher initialized with q_depth=2000 and 1 active plugins
Dec 06 09:00:33 localhost auditd[701]: Init complete, auditd 3.1.5 listening for events (startup state enable)
Dec 06 09:00:33 localhost systemd[1]: Finished Rebuild Journal Catalog.
Dec 06 09:00:33 localhost systemd[1]: Started RPC Bind.
Dec 06 09:00:33 localhost augenrules[706]: /sbin/augenrules: No change
Dec 06 09:00:33 localhost augenrules[721]: No rules
Dec 06 09:00:33 localhost augenrules[721]: enabled 1
Dec 06 09:00:33 localhost augenrules[721]: failure 1
Dec 06 09:00:33 localhost augenrules[721]: pid 701
Dec 06 09:00:33 localhost augenrules[721]: rate_limit 0
Dec 06 09:00:33 localhost augenrules[721]: backlog_limit 8192
Dec 06 09:00:33 localhost augenrules[721]: lost 0
Dec 06 09:00:33 localhost augenrules[721]: backlog 3
Dec 06 09:00:33 localhost augenrules[721]: backlog_wait_time 60000
Dec 06 09:00:33 localhost augenrules[721]: backlog_wait_time_actual 0
Dec 06 09:00:33 localhost augenrules[721]: enabled 1
Dec 06 09:00:33 localhost augenrules[721]: failure 1
Dec 06 09:00:33 localhost augenrules[721]: pid 701
Dec 06 09:00:33 localhost augenrules[721]: rate_limit 0
Dec 06 09:00:33 localhost augenrules[721]: backlog_limit 8192
Dec 06 09:00:33 localhost augenrules[721]: lost 0
Dec 06 09:00:33 localhost augenrules[721]: backlog 0
Dec 06 09:00:33 localhost augenrules[721]: backlog_wait_time 60000
Dec 06 09:00:33 localhost augenrules[721]: backlog_wait_time_actual 0
Dec 06 09:00:33 localhost augenrules[721]: enabled 1
Dec 06 09:00:33 localhost augenrules[721]: failure 1
Dec 06 09:00:33 localhost augenrules[721]: pid 701
Dec 06 09:00:33 localhost augenrules[721]: rate_limit 0
Dec 06 09:00:33 localhost augenrules[721]: backlog_limit 8192
Dec 06 09:00:33 localhost augenrules[721]: lost 0
Dec 06 09:00:33 localhost augenrules[721]: backlog 3
Dec 06 09:00:33 localhost augenrules[721]: backlog_wait_time 60000
Dec 06 09:00:33 localhost augenrules[721]: backlog_wait_time_actual 0
Dec 06 09:00:33 localhost systemd[1]: Started Security Auditing Service.
Dec 06 09:00:33 localhost systemd[1]: Starting Record System Boot/Shutdown in UTMP...
Dec 06 09:00:33 localhost systemd[1]: Finished Record System Boot/Shutdown in UTMP.
Dec 06 09:00:33 localhost systemd[1]: Finished Rebuild Hardware Database.
Dec 06 09:00:33 localhost systemd[1]: Starting Rule-based Manager for Device Events and Files...
Dec 06 09:00:33 localhost systemd[1]: Starting Update is Completed...
Dec 06 09:00:33 localhost systemd[1]: Finished Update is Completed.
Dec 06 09:00:33 localhost systemd-udevd[729]: Using default interface naming scheme 'rhel-9.0'.
Dec 06 09:00:33 localhost systemd[1]: Started Rule-based Manager for Device Events and Files.
Dec 06 09:00:33 localhost systemd[1]: Reached target System Initialization.
Dec 06 09:00:33 localhost systemd[1]: Started dnf makecache --timer.
Dec 06 09:00:33 localhost systemd[1]: Started Daily rotation of log files.
Dec 06 09:00:33 localhost systemd[1]: Started Daily Cleanup of Temporary Directories.
Dec 06 09:00:33 localhost systemd[1]: Reached target Timer Units.
Dec 06 09:00:33 localhost systemd[1]: Listening on D-Bus System Message Bus Socket.
Dec 06 09:00:33 localhost systemd[1]: Listening on SSSD Kerberos Cache Manager responder socket.
Dec 06 09:00:33 localhost systemd[1]: Reached target Socket Units.
Dec 06 09:00:33 localhost systemd[1]: Starting D-Bus System Message Bus...
Dec 06 09:00:33 localhost systemd[1]: TPM2 PCR Barrier (Initialization) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec 06 09:00:33 localhost systemd-udevd[738]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 09:00:33 localhost systemd[1]: Condition check resulted in /dev/ttyS0 being skipped.
Dec 06 09:00:33 localhost systemd[1]: Starting Load Kernel Module configfs...
Dec 06 09:00:33 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Dec 06 09:00:33 localhost systemd[1]: Finished Load Kernel Module configfs.
Dec 06 09:00:33 localhost kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Dec 06 09:00:33 localhost kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Dec 06 09:00:33 localhost kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Dec 06 09:00:33 localhost systemd[1]: Started D-Bus System Message Bus.
Dec 06 09:00:33 localhost systemd[1]: Reached target Basic System.
Dec 06 09:00:33 localhost dbus-broker-lau[767]: Ready
Dec 06 09:00:33 localhost systemd[1]: Starting NTP client/server...
Dec 06 09:00:33 localhost systemd[1]: Starting Cloud-init: Local Stage (pre-network)...
Dec 06 09:00:33 localhost chronyd[778]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +NTS +SECHASH +IPV6 +DEBUG)
Dec 06 09:00:33 localhost chronyd[778]: Loaded 0 symmetric keys
Dec 06 09:00:33 localhost chronyd[778]: Using right/UTC timezone to obtain leap second data
Dec 06 09:00:33 localhost chronyd[778]: Loaded seccomp filter (level 2)
Dec 06 09:00:33 localhost kernel: input: PC Speaker as /devices/platform/pcspkr/input/input6
Dec 06 09:00:33 localhost systemd[1]: Starting Restore /run/initramfs on shutdown...
Dec 06 09:00:33 localhost systemd[1]: Starting IPv4 firewall with iptables...
Dec 06 09:00:33 localhost systemd[1]: Started irqbalance daemon.
Dec 06 09:00:33 localhost systemd[1]: Load CPU microcode update was skipped because of an unmet condition check (ConditionPathExists=/sys/devices/system/cpu/microcode/reload).
Dec 06 09:00:33 localhost systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Dec 06 09:00:33 localhost systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Dec 06 09:00:33 localhost systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Dec 06 09:00:33 localhost systemd[1]: Reached target sshd-keygen.target.
Dec 06 09:00:33 localhost systemd[1]: System Security Services Daemon was skipped because no trigger condition checks were met.
Dec 06 09:00:33 localhost systemd[1]: Reached target User and Group Name Lookups.
Dec 06 09:00:33 localhost systemd[1]: Starting User Login Management...
Dec 06 09:00:34 localhost kernel: kvm_amd: TSC scaling supported
Dec 06 09:00:34 localhost kernel: kvm_amd: Nested Virtualization enabled
Dec 06 09:00:34 localhost kernel: kvm_amd: Nested Paging enabled
Dec 06 09:00:34 localhost kernel: kvm_amd: LBR virtualization supported
Dec 06 09:00:34 localhost systemd-logind[795]: Watching system buttons on /dev/input/event0 (Power Button)
Dec 06 09:00:34 localhost systemd-logind[795]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Dec 06 09:00:34 localhost kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
Dec 06 09:00:34 localhost kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
Dec 06 09:00:34 localhost kernel: Warning: Deprecated Driver is detected: nft_compat will not be maintained in a future major release and may be disabled
Dec 06 09:00:34 localhost kernel: Console: switching to colour dummy device 80x25
Dec 06 09:00:34 localhost kernel: Warning: Deprecated Driver is detected: nft_compat_module_init will not be maintained in a future major release and may be disabled
Dec 06 09:00:34 localhost kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Dec 06 09:00:34 localhost kernel: [drm] features: -context_init
Dec 06 09:00:34 localhost kernel: [drm] number of scanouts: 1
Dec 06 09:00:34 localhost kernel: [drm] number of cap sets: 0
Dec 06 09:00:34 localhost systemd-logind[795]: New seat seat0.
Dec 06 09:00:34 localhost systemd[1]: Started User Login Management.
Dec 06 09:00:34 localhost kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:02.0 on minor 0
Dec 06 09:00:34 localhost kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Dec 06 09:00:34 localhost kernel: Console: switching to colour frame buffer device 128x48
Dec 06 09:00:34 localhost systemd[1]: Started NTP client/server.
Dec 06 09:00:34 localhost kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Dec 06 09:00:34 localhost systemd[1]: Finished Restore /run/initramfs on shutdown.
Dec 06 09:00:34 localhost iptables.init[785]: iptables: Applying firewall rules: [  OK  ]
Dec 06 09:00:34 localhost systemd[1]: Finished IPv4 firewall with iptables.
Dec 06 09:00:34 localhost cloud-init[837]: Cloud-init v. 24.4-7.el9 running 'init-local' at Sat, 06 Dec 2025 09:00:34 +0000. Up 6.00 seconds.
Dec 06 09:00:34 localhost kernel: ISO 9660 Extensions: Microsoft Joliet Level 3
Dec 06 09:00:34 localhost kernel: ISO 9660 Extensions: RRIP_1991A
Dec 06 09:00:34 localhost systemd[1]: run-cloud\x2dinit-tmp-tmplhnuvwvm.mount: Deactivated successfully.
Dec 06 09:00:34 localhost systemd[1]: Starting Hostname Service...
Dec 06 09:00:34 localhost systemd[1]: Started Hostname Service.
Dec 06 09:00:34 np0005548915.novalocal systemd-hostnamed[851]: Hostname set to <np0005548915.novalocal> (static)
Dec 06 09:00:34 np0005548915.novalocal systemd[1]: Finished Cloud-init: Local Stage (pre-network).
Dec 06 09:00:34 np0005548915.novalocal systemd[1]: Reached target Preparation for Network.
Dec 06 09:00:34 np0005548915.novalocal systemd[1]: Starting Network Manager...
Dec 06 09:00:34 np0005548915.novalocal NetworkManager[855]: <info>  [1765011634.8246] NetworkManager (version 1.54.1-1.el9) is starting... (boot:eb1a7567-b576-49d7-a613-e357bf119324)
Dec 06 09:00:34 np0005548915.novalocal NetworkManager[855]: <info>  [1765011634.8250] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Dec 06 09:00:34 np0005548915.novalocal NetworkManager[855]: <info>  [1765011634.8314] manager[0x5601b98d4080]: monitoring kernel firmware directory '/lib/firmware'.
Dec 06 09:00:34 np0005548915.novalocal NetworkManager[855]: <info>  [1765011634.8352] hostname: hostname: using hostnamed
Dec 06 09:00:34 np0005548915.novalocal NetworkManager[855]: <info>  [1765011634.8353] hostname: static hostname changed from (none) to "np0005548915.novalocal"
Dec 06 09:00:34 np0005548915.novalocal NetworkManager[855]: <info>  [1765011634.8357] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Dec 06 09:00:34 np0005548915.novalocal NetworkManager[855]: <info>  [1765011634.8447] manager[0x5601b98d4080]: rfkill: Wi-Fi hardware radio set enabled
Dec 06 09:00:34 np0005548915.novalocal NetworkManager[855]: <info>  [1765011634.8450] manager[0x5601b98d4080]: rfkill: WWAN hardware radio set enabled
Dec 06 09:00:34 np0005548915.novalocal NetworkManager[855]: <info>  [1765011634.8491] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Dec 06 09:00:34 np0005548915.novalocal NetworkManager[855]: <info>  [1765011634.8493] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Dec 06 09:00:34 np0005548915.novalocal NetworkManager[855]: <info>  [1765011634.8493] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Dec 06 09:00:34 np0005548915.novalocal NetworkManager[855]: <info>  [1765011634.8494] manager: Networking is enabled by state file
Dec 06 09:00:34 np0005548915.novalocal NetworkManager[855]: <info>  [1765011634.8496] settings: Loaded settings plugin: keyfile (internal)
Dec 06 09:00:34 np0005548915.novalocal NetworkManager[855]: <info>  [1765011634.8507] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Dec 06 09:00:34 np0005548915.novalocal NetworkManager[855]: <info>  [1765011634.8524] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Dec 06 09:00:34 np0005548915.novalocal NetworkManager[855]: <info>  [1765011634.8536] dhcp: init: Using DHCP client 'internal'
Dec 06 09:00:34 np0005548915.novalocal NetworkManager[855]: <info>  [1765011634.8539] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Dec 06 09:00:34 np0005548915.novalocal systemd[1]: Listening on Load/Save RF Kill Switch Status /dev/rfkill Watch.
Dec 06 09:00:34 np0005548915.novalocal NetworkManager[855]: <info>  [1765011634.8552] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 06 09:00:34 np0005548915.novalocal NetworkManager[855]: <info>  [1765011634.8563] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Dec 06 09:00:34 np0005548915.novalocal NetworkManager[855]: <info>  [1765011634.8571] device (lo): Activation: starting connection 'lo' (40483b14-1904-462e-975f-deec93e74606)
Dec 06 09:00:34 np0005548915.novalocal NetworkManager[855]: <info>  [1765011634.8579] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Dec 06 09:00:34 np0005548915.novalocal NetworkManager[855]: <info>  [1765011634.8582] device (eth0): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 06 09:00:34 np0005548915.novalocal NetworkManager[855]: <info>  [1765011634.8613] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Dec 06 09:00:34 np0005548915.novalocal NetworkManager[855]: <info>  [1765011634.8616] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Dec 06 09:00:34 np0005548915.novalocal NetworkManager[855]: <info>  [1765011634.8618] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Dec 06 09:00:34 np0005548915.novalocal NetworkManager[855]: <info>  [1765011634.8619] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Dec 06 09:00:34 np0005548915.novalocal NetworkManager[855]: <info>  [1765011634.8621] device (eth0): carrier: link connected
Dec 06 09:00:34 np0005548915.novalocal NetworkManager[855]: <info>  [1765011634.8622] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Dec 06 09:00:34 np0005548915.novalocal NetworkManager[855]: <info>  [1765011634.8627] device (eth0): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Dec 06 09:00:34 np0005548915.novalocal NetworkManager[855]: <info>  [1765011634.8633] policy: auto-activating connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Dec 06 09:00:34 np0005548915.novalocal NetworkManager[855]: <info>  [1765011634.8636] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Dec 06 09:00:34 np0005548915.novalocal NetworkManager[855]: <info>  [1765011634.8636] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 06 09:00:34 np0005548915.novalocal NetworkManager[855]: <info>  [1765011634.8638] manager: NetworkManager state is now CONNECTING
Dec 06 09:00:34 np0005548915.novalocal NetworkManager[855]: <info>  [1765011634.8639] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 06 09:00:34 np0005548915.novalocal NetworkManager[855]: <info>  [1765011634.8643] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 06 09:00:34 np0005548915.novalocal NetworkManager[855]: <info>  [1765011634.8645] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Dec 06 09:00:34 np0005548915.novalocal systemd[1]: Starting Network Manager Script Dispatcher Service...
Dec 06 09:00:34 np0005548915.novalocal systemd[1]: Started Network Manager.
Dec 06 09:00:34 np0005548915.novalocal systemd[1]: Reached target Network.
Dec 06 09:00:34 np0005548915.novalocal systemd[1]: Starting Network Manager Wait Online...
Dec 06 09:00:34 np0005548915.novalocal systemd[1]: Starting GSSAPI Proxy Daemon...
Dec 06 09:00:34 np0005548915.novalocal systemd[1]: Started Network Manager Script Dispatcher Service.
Dec 06 09:00:34 np0005548915.novalocal NetworkManager[855]: <info>  [1765011634.8949] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Dec 06 09:00:34 np0005548915.novalocal NetworkManager[855]: <info>  [1765011634.8953] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Dec 06 09:00:34 np0005548915.novalocal NetworkManager[855]: <info>  [1765011634.8961] device (lo): Activation: successful, device activated.
Dec 06 09:00:34 np0005548915.novalocal systemd[1]: Started GSSAPI Proxy Daemon.
Dec 06 09:00:34 np0005548915.novalocal systemd[1]: RPC security service for NFS client and server was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Dec 06 09:00:34 np0005548915.novalocal systemd[1]: Reached target NFS client services.
Dec 06 09:00:34 np0005548915.novalocal systemd[1]: Reached target Preparation for Remote File Systems.
Dec 06 09:00:34 np0005548915.novalocal systemd[1]: Reached target Remote File Systems.
Dec 06 09:00:34 np0005548915.novalocal systemd[1]: TPM2 PCR Barrier (User) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec 06 09:00:35 np0005548915.novalocal NetworkManager[855]: <info>  [1765011635.9142] dhcp4 (eth0): state changed new lease, address=38.102.83.27
Dec 06 09:00:35 np0005548915.novalocal NetworkManager[855]: <info>  [1765011635.9156] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Dec 06 09:00:35 np0005548915.novalocal NetworkManager[855]: <info>  [1765011635.9175] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 06 09:00:35 np0005548915.novalocal NetworkManager[855]: <info>  [1765011635.9210] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 06 09:00:35 np0005548915.novalocal NetworkManager[855]: <info>  [1765011635.9212] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 06 09:00:35 np0005548915.novalocal NetworkManager[855]: <info>  [1765011635.9214] manager: NetworkManager state is now CONNECTED_SITE
Dec 06 09:00:35 np0005548915.novalocal NetworkManager[855]: <info>  [1765011635.9217] device (eth0): Activation: successful, device activated.
Dec 06 09:00:35 np0005548915.novalocal NetworkManager[855]: <info>  [1765011635.9220] manager: NetworkManager state is now CONNECTED_GLOBAL
Dec 06 09:00:35 np0005548915.novalocal NetworkManager[855]: <info>  [1765011635.9222] manager: startup complete
Dec 06 09:00:35 np0005548915.novalocal systemd[1]: Finished Network Manager Wait Online.
Dec 06 09:00:35 np0005548915.novalocal systemd[1]: Starting Cloud-init: Network Stage...
Dec 06 09:00:36 np0005548915.novalocal cloud-init[919]: Cloud-init v. 24.4-7.el9 running 'init' at Sat, 06 Dec 2025 09:00:36 +0000. Up 7.95 seconds.
Dec 06 09:00:36 np0005548915.novalocal cloud-init[919]: ci-info: +++++++++++++++++++++++++++++++++++++++Net device info+++++++++++++++++++++++++++++++++++++++
Dec 06 09:00:36 np0005548915.novalocal cloud-init[919]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Dec 06 09:00:36 np0005548915.novalocal cloud-init[919]: ci-info: | Device |  Up  |           Address            |      Mask     | Scope  |     Hw-Address    |
Dec 06 09:00:36 np0005548915.novalocal cloud-init[919]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Dec 06 09:00:36 np0005548915.novalocal cloud-init[919]: ci-info: |  eth0  | True |         38.102.83.27         | 255.255.255.0 | global | fa:16:3e:87:1e:0a |
Dec 06 09:00:36 np0005548915.novalocal cloud-init[919]: ci-info: |  eth0  | True | fe80::f816:3eff:fe87:1e0a/64 |       .       |  link  | fa:16:3e:87:1e:0a |
Dec 06 09:00:36 np0005548915.novalocal cloud-init[919]: ci-info: |   lo   | True |          127.0.0.1           |   255.0.0.0   |  host  |         .         |
Dec 06 09:00:36 np0005548915.novalocal cloud-init[919]: ci-info: |   lo   | True |           ::1/128            |       .       |  host  |         .         |
Dec 06 09:00:36 np0005548915.novalocal cloud-init[919]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Dec 06 09:00:36 np0005548915.novalocal cloud-init[919]: ci-info: +++++++++++++++++++++++++++++++++Route IPv4 info+++++++++++++++++++++++++++++++++
Dec 06 09:00:36 np0005548915.novalocal cloud-init[919]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Dec 06 09:00:36 np0005548915.novalocal cloud-init[919]: ci-info: | Route |   Destination   |    Gateway    |     Genmask     | Interface | Flags |
Dec 06 09:00:36 np0005548915.novalocal cloud-init[919]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Dec 06 09:00:36 np0005548915.novalocal cloud-init[919]: ci-info: |   0   |     0.0.0.0     |  38.102.83.1  |     0.0.0.0     |    eth0   |   UG  |
Dec 06 09:00:36 np0005548915.novalocal cloud-init[919]: ci-info: |   1   |   38.102.83.0   |    0.0.0.0    |  255.255.255.0  |    eth0   |   U   |
Dec 06 09:00:36 np0005548915.novalocal cloud-init[919]: ci-info: |   2   | 169.254.169.254 | 38.102.83.126 | 255.255.255.255 |    eth0   |  UGH  |
Dec 06 09:00:36 np0005548915.novalocal cloud-init[919]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Dec 06 09:00:36 np0005548915.novalocal cloud-init[919]: ci-info: +++++++++++++++++++Route IPv6 info+++++++++++++++++++
Dec 06 09:00:36 np0005548915.novalocal cloud-init[919]: ci-info: +-------+-------------+---------+-----------+-------+
Dec 06 09:00:36 np0005548915.novalocal cloud-init[919]: ci-info: | Route | Destination | Gateway | Interface | Flags |
Dec 06 09:00:36 np0005548915.novalocal cloud-init[919]: ci-info: +-------+-------------+---------+-----------+-------+
Dec 06 09:00:36 np0005548915.novalocal cloud-init[919]: ci-info: |   1   |  fe80::/64  |    ::   |    eth0   |   U   |
Dec 06 09:00:36 np0005548915.novalocal cloud-init[919]: ci-info: |   3   |    local    |    ::   |    eth0   |   U   |
Dec 06 09:00:36 np0005548915.novalocal cloud-init[919]: ci-info: |   4   |  multicast  |    ::   |    eth0   |   U   |
Dec 06 09:00:36 np0005548915.novalocal cloud-init[919]: ci-info: +-------+-------------+---------+-----------+-------+
Dec 06 09:00:37 np0005548915.novalocal useradd[987]: new group: name=cloud-user, GID=1001
Dec 06 09:00:37 np0005548915.novalocal useradd[987]: new user: name=cloud-user, UID=1001, GID=1001, home=/home/cloud-user, shell=/bin/bash, from=none
Dec 06 09:00:37 np0005548915.novalocal useradd[987]: add 'cloud-user' to group 'adm'
Dec 06 09:00:37 np0005548915.novalocal useradd[987]: add 'cloud-user' to group 'systemd-journal'
Dec 06 09:00:37 np0005548915.novalocal useradd[987]: add 'cloud-user' to shadow group 'adm'
Dec 06 09:00:37 np0005548915.novalocal useradd[987]: add 'cloud-user' to shadow group 'systemd-journal'
Dec 06 09:00:37 np0005548915.novalocal cloud-init[919]: Generating public/private rsa key pair.
Dec 06 09:00:37 np0005548915.novalocal cloud-init[919]: Your identification has been saved in /etc/ssh/ssh_host_rsa_key
Dec 06 09:00:37 np0005548915.novalocal cloud-init[919]: Your public key has been saved in /etc/ssh/ssh_host_rsa_key.pub
Dec 06 09:00:37 np0005548915.novalocal cloud-init[919]: The key fingerprint is:
Dec 06 09:00:37 np0005548915.novalocal cloud-init[919]: SHA256:4HsYvYZbsIZrlt259ZE1uctiG5AJnDk0WaQERd6R7L8 root@np0005548915.novalocal
Dec 06 09:00:37 np0005548915.novalocal cloud-init[919]: The key's randomart image is:
Dec 06 09:00:37 np0005548915.novalocal cloud-init[919]: +---[RSA 3072]----+
Dec 06 09:00:37 np0005548915.novalocal cloud-init[919]: |      .+*=+.     |
Dec 06 09:00:37 np0005548915.novalocal cloud-init[919]: |       =.*o.     |
Dec 06 09:00:37 np0005548915.novalocal cloud-init[919]: |      . O..      |
Dec 06 09:00:37 np0005548915.novalocal cloud-init[919]: |     . o o.o   . |
Dec 06 09:00:37 np0005548915.novalocal cloud-init[919]: |      + S +.  +  |
Dec 06 09:00:37 np0005548915.novalocal cloud-init[919]: |     . B . ..o o |
Dec 06 09:00:37 np0005548915.novalocal cloud-init[919]: |    .o*.=.. +..  |
Dec 06 09:00:37 np0005548915.novalocal cloud-init[919]: |    +o.=o. .E+ . |
Dec 06 09:00:37 np0005548915.novalocal cloud-init[919]: |   o. . .. .ooo  |
Dec 06 09:00:37 np0005548915.novalocal cloud-init[919]: +----[SHA256]-----+
Dec 06 09:00:37 np0005548915.novalocal cloud-init[919]: Generating public/private ecdsa key pair.
Dec 06 09:00:37 np0005548915.novalocal cloud-init[919]: Your identification has been saved in /etc/ssh/ssh_host_ecdsa_key
Dec 06 09:00:37 np0005548915.novalocal cloud-init[919]: Your public key has been saved in /etc/ssh/ssh_host_ecdsa_key.pub
Dec 06 09:00:37 np0005548915.novalocal cloud-init[919]: The key fingerprint is:
Dec 06 09:00:37 np0005548915.novalocal cloud-init[919]: SHA256:drQfY72BKLu6JfujwiJ6Z3btUKPajzO/8+eGKz1qKbY root@np0005548915.novalocal
Dec 06 09:00:37 np0005548915.novalocal cloud-init[919]: The key's randomart image is:
Dec 06 09:00:37 np0005548915.novalocal cloud-init[919]: +---[ECDSA 256]---+
Dec 06 09:00:37 np0005548915.novalocal cloud-init[919]: |                 |
Dec 06 09:00:37 np0005548915.novalocal cloud-init[919]: |                 |
Dec 06 09:00:37 np0005548915.novalocal cloud-init[919]: |          .      |
Dec 06 09:00:37 np0005548915.novalocal cloud-init[919]: |         . o o   |
Dec 06 09:00:37 np0005548915.novalocal cloud-init[919]: |        S + = o  |
Dec 06 09:00:37 np0005548915.novalocal cloud-init[919]: |       + = o o o |
Dec 06 09:00:37 np0005548915.novalocal cloud-init[919]: |   .  +.oo .. .  |
Dec 06 09:00:37 np0005548915.novalocal cloud-init[919]: |. o *o**B.+ o    |
Dec 06 09:00:37 np0005548915.novalocal cloud-init[919]: |oo =.+E#OBo*.    |
Dec 06 09:00:37 np0005548915.novalocal cloud-init[919]: +----[SHA256]-----+
Dec 06 09:00:37 np0005548915.novalocal cloud-init[919]: Generating public/private ed25519 key pair.
Dec 06 09:00:37 np0005548915.novalocal cloud-init[919]: Your identification has been saved in /etc/ssh/ssh_host_ed25519_key
Dec 06 09:00:37 np0005548915.novalocal cloud-init[919]: Your public key has been saved in /etc/ssh/ssh_host_ed25519_key.pub
Dec 06 09:00:37 np0005548915.novalocal cloud-init[919]: The key fingerprint is:
Dec 06 09:00:37 np0005548915.novalocal cloud-init[919]: SHA256:87UdsGwqdSjB54ua/WT855Du5izy3dQjhLewGPTXI4k root@np0005548915.novalocal
Dec 06 09:00:37 np0005548915.novalocal cloud-init[919]: The key's randomart image is:
Dec 06 09:00:37 np0005548915.novalocal cloud-init[919]: +--[ED25519 256]--+
Dec 06 09:00:37 np0005548915.novalocal cloud-init[919]: |                 |
Dec 06 09:00:37 np0005548915.novalocal cloud-init[919]: |       .         |
Dec 06 09:00:37 np0005548915.novalocal cloud-init[919]: |        o.. .    |
Dec 06 09:00:37 np0005548915.novalocal cloud-init[919]: |        .+.oooo  |
Dec 06 09:00:37 np0005548915.novalocal cloud-init[919]: |        S.+E**.o |
Dec 06 09:00:37 np0005548915.novalocal cloud-init[919]: |         Bo**+oo.|
Dec 06 09:00:37 np0005548915.novalocal cloud-init[919]: |        o.B.+oo..|
Dec 06 09:00:37 np0005548915.novalocal cloud-init[919]: |       +.+.=.+o .|
Dec 06 09:00:37 np0005548915.novalocal cloud-init[919]: |      o .+o=Boo  |
Dec 06 09:00:37 np0005548915.novalocal cloud-init[919]: +----[SHA256]-----+
Dec 06 09:00:37 np0005548915.novalocal systemd[1]: Finished Cloud-init: Network Stage.
Dec 06 09:00:37 np0005548915.novalocal systemd[1]: Reached target Cloud-config availability.
Dec 06 09:00:37 np0005548915.novalocal systemd[1]: Reached target Network is Online.
Dec 06 09:00:37 np0005548915.novalocal systemd[1]: Starting Cloud-init: Config Stage...
Dec 06 09:00:37 np0005548915.novalocal systemd[1]: Starting Crash recovery kernel arming...
Dec 06 09:00:37 np0005548915.novalocal systemd[1]: Starting Notify NFS peers of a restart...
Dec 06 09:00:37 np0005548915.novalocal systemd[1]: Starting System Logging Service...
Dec 06 09:00:37 np0005548915.novalocal systemd[1]: Starting OpenSSH server daemon...
Dec 06 09:00:37 np0005548915.novalocal systemd[1]: Starting Permit User Sessions...
Dec 06 09:00:37 np0005548915.novalocal sm-notify[1003]: Version 2.5.4 starting
Dec 06 09:00:37 np0005548915.novalocal systemd[1]: Started Notify NFS peers of a restart.
Dec 06 09:00:37 np0005548915.novalocal systemd[1]: Finished Permit User Sessions.
Dec 06 09:00:37 np0005548915.novalocal sshd[1005]: Server listening on 0.0.0.0 port 22.
Dec 06 09:00:37 np0005548915.novalocal sshd[1005]: Server listening on :: port 22.
Dec 06 09:00:37 np0005548915.novalocal systemd[1]: Started OpenSSH server daemon.
Dec 06 09:00:37 np0005548915.novalocal systemd[1]: Started Command Scheduler.
Dec 06 09:00:37 np0005548915.novalocal systemd[1]: Started Getty on tty1.
Dec 06 09:00:37 np0005548915.novalocal systemd[1]: Started Serial Getty on ttyS0.
Dec 06 09:00:37 np0005548915.novalocal crond[1008]: (CRON) STARTUP (1.5.7)
Dec 06 09:00:37 np0005548915.novalocal crond[1008]: (CRON) INFO (Syslog will be used instead of sendmail.)
Dec 06 09:00:37 np0005548915.novalocal systemd[1]: Reached target Login Prompts.
Dec 06 09:00:37 np0005548915.novalocal crond[1008]: (CRON) INFO (RANDOM_DELAY will be scaled with factor 20% if used.)
Dec 06 09:00:37 np0005548915.novalocal crond[1008]: (CRON) INFO (running with inotify support)
Dec 06 09:00:37 np0005548915.novalocal rsyslogd[1004]: [origin software="rsyslogd" swVersion="8.2510.0-2.el9" x-pid="1004" x-info="https://www.rsyslog.com"] start
Dec 06 09:00:37 np0005548915.novalocal rsyslogd[1004]: imjournal: No statefile exists, /var/lib/rsyslog/imjournal.state will be created (ignore if this is first run): No such file or directory [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2040 ]
Dec 06 09:00:37 np0005548915.novalocal systemd[1]: Started System Logging Service.
Dec 06 09:00:37 np0005548915.novalocal systemd[1]: Reached target Multi-User System.
Dec 06 09:00:37 np0005548915.novalocal systemd[1]: Starting Record Runlevel Change in UTMP...
Dec 06 09:00:37 np0005548915.novalocal systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Dec 06 09:00:37 np0005548915.novalocal systemd[1]: Finished Record Runlevel Change in UTMP.
Dec 06 09:00:37 np0005548915.novalocal rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 06 09:00:37 np0005548915.novalocal kdumpctl[1017]: kdump: No kdump initial ramdisk found.
Dec 06 09:00:37 np0005548915.novalocal kdumpctl[1017]: kdump: Rebuilding /boot/initramfs-5.14.0-645.el9.x86_64kdump.img
Dec 06 09:00:37 np0005548915.novalocal cloud-init[1136]: Cloud-init v. 24.4-7.el9 running 'modules:config' at Sat, 06 Dec 2025 09:00:37 +0000. Up 9.43 seconds.
Dec 06 09:00:37 np0005548915.novalocal systemd[1]: Finished Cloud-init: Config Stage.
Dec 06 09:00:37 np0005548915.novalocal systemd[1]: Starting Cloud-init: Final Stage...
Dec 06 09:00:38 np0005548915.novalocal dracut[1264]: dracut-057-102.git20250818.el9
Dec 06 09:00:38 np0005548915.novalocal dracut[1266]: Executing: /usr/bin/dracut --quiet --hostonly --hostonly-cmdline --hostonly-i18n --hostonly-mode strict --hostonly-nics  --mount "/dev/disk/by-uuid/fcf6b761-831a-48a7-9f5f-068b5063763f /sysroot xfs rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,noquota" --squash-compressor zstd --no-hostonly-default-device --add-confdir /lib/kdump/dracut.conf.d -f /boot/initramfs-5.14.0-645.el9.x86_64kdump.img 5.14.0-645.el9.x86_64
Dec 06 09:00:38 np0005548915.novalocal cloud-init[1298]: Cloud-init v. 24.4-7.el9 running 'modules:final' at Sat, 06 Dec 2025 09:00:38 +0000. Up 9.86 seconds.
Dec 06 09:00:38 np0005548915.novalocal cloud-init[1327]: #############################################################
Dec 06 09:00:38 np0005548915.novalocal cloud-init[1330]: -----BEGIN SSH HOST KEY FINGERPRINTS-----
Dec 06 09:00:38 np0005548915.novalocal cloud-init[1337]: 256 SHA256:drQfY72BKLu6JfujwiJ6Z3btUKPajzO/8+eGKz1qKbY root@np0005548915.novalocal (ECDSA)
Dec 06 09:00:38 np0005548915.novalocal cloud-init[1341]: 256 SHA256:87UdsGwqdSjB54ua/WT855Du5izy3dQjhLewGPTXI4k root@np0005548915.novalocal (ED25519)
Dec 06 09:00:38 np0005548915.novalocal cloud-init[1343]: 3072 SHA256:4HsYvYZbsIZrlt259ZE1uctiG5AJnDk0WaQERd6R7L8 root@np0005548915.novalocal (RSA)
Dec 06 09:00:38 np0005548915.novalocal cloud-init[1344]: -----END SSH HOST KEY FINGERPRINTS-----
Dec 06 09:00:38 np0005548915.novalocal cloud-init[1345]: #############################################################
Dec 06 09:00:38 np0005548915.novalocal cloud-init[1298]: Cloud-init v. 24.4-7.el9 finished at Sat, 06 Dec 2025 09:00:38 +0000. Datasource DataSourceConfigDrive [net,ver=2][source=/dev/sr0].  Up 10.04 seconds
Dec 06 09:00:38 np0005548915.novalocal systemd[1]: Finished Cloud-init: Final Stage.
Dec 06 09:00:38 np0005548915.novalocal systemd[1]: Reached target Cloud-init target.
Dec 06 09:00:38 np0005548915.novalocal dracut[1266]: dracut module 'systemd-networkd' will not be installed, because command 'networkctl' could not be found!
Dec 06 09:00:38 np0005548915.novalocal dracut[1266]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd' could not be found!
Dec 06 09:00:38 np0005548915.novalocal dracut[1266]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd-wait-online' could not be found!
Dec 06 09:00:38 np0005548915.novalocal dracut[1266]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Dec 06 09:00:38 np0005548915.novalocal dracut[1266]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Dec 06 09:00:38 np0005548915.novalocal dracut[1266]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Dec 06 09:00:38 np0005548915.novalocal dracut[1266]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Dec 06 09:00:38 np0005548915.novalocal dracut[1266]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Dec 06 09:00:38 np0005548915.novalocal dracut[1266]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Dec 06 09:00:38 np0005548915.novalocal dracut[1266]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Dec 06 09:00:38 np0005548915.novalocal dracut[1266]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Dec 06 09:00:38 np0005548915.novalocal dracut[1266]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Dec 06 09:00:38 np0005548915.novalocal dracut[1266]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Dec 06 09:00:38 np0005548915.novalocal dracut[1266]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Dec 06 09:00:38 np0005548915.novalocal dracut[1266]: Module 'ifcfg' will not be installed, because it's in the list to be omitted!
Dec 06 09:00:38 np0005548915.novalocal dracut[1266]: Module 'plymouth' will not be installed, because it's in the list to be omitted!
Dec 06 09:00:38 np0005548915.novalocal dracut[1266]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Dec 06 09:00:38 np0005548915.novalocal dracut[1266]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Dec 06 09:00:38 np0005548915.novalocal dracut[1266]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Dec 06 09:00:38 np0005548915.novalocal dracut[1266]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Dec 06 09:00:38 np0005548915.novalocal dracut[1266]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Dec 06 09:00:38 np0005548915.novalocal dracut[1266]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Dec 06 09:00:38 np0005548915.novalocal dracut[1266]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Dec 06 09:00:38 np0005548915.novalocal dracut[1266]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Dec 06 09:00:38 np0005548915.novalocal dracut[1266]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Dec 06 09:00:38 np0005548915.novalocal dracut[1266]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Dec 06 09:00:39 np0005548915.novalocal dracut[1266]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Dec 06 09:00:39 np0005548915.novalocal dracut[1266]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Dec 06 09:00:39 np0005548915.novalocal dracut[1266]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Dec 06 09:00:39 np0005548915.novalocal dracut[1266]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Dec 06 09:00:39 np0005548915.novalocal dracut[1266]: Module 'resume' will not be installed, because it's in the list to be omitted!
Dec 06 09:00:39 np0005548915.novalocal dracut[1266]: dracut module 'biosdevname' will not be installed, because command 'biosdevname' could not be found!
Dec 06 09:00:39 np0005548915.novalocal dracut[1266]: Module 'earlykdump' will not be installed, because it's in the list to be omitted!
Dec 06 09:00:39 np0005548915.novalocal dracut[1266]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Dec 06 09:00:39 np0005548915.novalocal dracut[1266]: memstrack is not available
Dec 06 09:00:39 np0005548915.novalocal dracut[1266]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
Dec 06 09:00:39 np0005548915.novalocal dracut[1266]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Dec 06 09:00:39 np0005548915.novalocal dracut[1266]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Dec 06 09:00:39 np0005548915.novalocal dracut[1266]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Dec 06 09:00:39 np0005548915.novalocal dracut[1266]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Dec 06 09:00:39 np0005548915.novalocal dracut[1266]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Dec 06 09:00:39 np0005548915.novalocal dracut[1266]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Dec 06 09:00:39 np0005548915.novalocal dracut[1266]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Dec 06 09:00:39 np0005548915.novalocal dracut[1266]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Dec 06 09:00:39 np0005548915.novalocal dracut[1266]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Dec 06 09:00:39 np0005548915.novalocal dracut[1266]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Dec 06 09:00:39 np0005548915.novalocal dracut[1266]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Dec 06 09:00:39 np0005548915.novalocal dracut[1266]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Dec 06 09:00:39 np0005548915.novalocal dracut[1266]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Dec 06 09:00:39 np0005548915.novalocal dracut[1266]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Dec 06 09:00:39 np0005548915.novalocal dracut[1266]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Dec 06 09:00:39 np0005548915.novalocal dracut[1266]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Dec 06 09:00:39 np0005548915.novalocal dracut[1266]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Dec 06 09:00:39 np0005548915.novalocal dracut[1266]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Dec 06 09:00:39 np0005548915.novalocal dracut[1266]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Dec 06 09:00:39 np0005548915.novalocal dracut[1266]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Dec 06 09:00:39 np0005548915.novalocal dracut[1266]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Dec 06 09:00:39 np0005548915.novalocal dracut[1266]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Dec 06 09:00:39 np0005548915.novalocal dracut[1266]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Dec 06 09:00:39 np0005548915.novalocal dracut[1266]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Dec 06 09:00:39 np0005548915.novalocal dracut[1266]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Dec 06 09:00:39 np0005548915.novalocal dracut[1266]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Dec 06 09:00:39 np0005548915.novalocal dracut[1266]: memstrack is not available
Dec 06 09:00:39 np0005548915.novalocal dracut[1266]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
Dec 06 09:00:39 np0005548915.novalocal sshd-session[1827]: Connection reset by 38.102.83.114 port 53170 [preauth]
Dec 06 09:00:39 np0005548915.novalocal sshd-session[1841]: Unable to negotiate with 38.102.83.114 port 50572: no matching host key type found. Their offer: ssh-ed25519,ssh-ed25519-cert-v01@openssh.com [preauth]
Dec 06 09:00:39 np0005548915.novalocal sshd-session[1859]: Unable to negotiate with 38.102.83.114 port 50588: no matching host key type found. Their offer: ecdsa-sha2-nistp384,ecdsa-sha2-nistp384-cert-v01@openssh.com [preauth]
Dec 06 09:00:39 np0005548915.novalocal sshd-session[1872]: Unable to negotiate with 38.102.83.114 port 50590: no matching host key type found. Their offer: ecdsa-sha2-nistp521,ecdsa-sha2-nistp521-cert-v01@openssh.com [preauth]
Dec 06 09:00:39 np0005548915.novalocal dracut[1266]: *** Including module: systemd ***
Dec 06 09:00:39 np0005548915.novalocal sshd-session[1887]: Connection reset by 38.102.83.114 port 50602 [preauth]
Dec 06 09:00:39 np0005548915.novalocal sshd-session[1896]: Unable to negotiate with 38.102.83.114 port 50616: no matching host key type found. Their offer: ssh-rsa,ssh-rsa-cert-v01@openssh.com [preauth]
Dec 06 09:00:39 np0005548915.novalocal sshd-session[1849]: Connection closed by 38.102.83.114 port 50580 [preauth]
Dec 06 09:00:39 np0005548915.novalocal sshd-session[1908]: Unable to negotiate with 38.102.83.114 port 50620: no matching host key type found. Their offer: ssh-dss,ssh-dss-cert-v01@openssh.com [preauth]
Dec 06 09:00:39 np0005548915.novalocal sshd-session[1881]: Connection closed by 38.102.83.114 port 50592 [preauth]
Dec 06 09:00:39 np0005548915.novalocal dracut[1266]: *** Including module: fips ***
Dec 06 09:00:40 np0005548915.novalocal dracut[1266]: *** Including module: systemd-initrd ***
Dec 06 09:00:40 np0005548915.novalocal dracut[1266]: *** Including module: i18n ***
Dec 06 09:00:40 np0005548915.novalocal dracut[1266]: *** Including module: drm ***
Dec 06 09:00:40 np0005548915.novalocal dracut[1266]: *** Including module: prefixdevname ***
Dec 06 09:00:40 np0005548915.novalocal dracut[1266]: *** Including module: kernel-modules ***
Dec 06 09:00:40 np0005548915.novalocal kernel: block vda: the capability attribute has been deprecated.
Dec 06 09:00:40 np0005548915.novalocal chronyd[778]: Selected source 174.142.148.226 (2.centos.pool.ntp.org)
Dec 06 09:00:40 np0005548915.novalocal chronyd[778]: System clock wrong by 1.153678 seconds
Dec 06 09:00:41 np0005548915.novalocal chronyd[778]: System clock was stepped by 1.153678 seconds
Dec 06 09:00:41 np0005548915.novalocal chronyd[778]: System clock TAI offset set to 37 seconds
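
chronyd first reports the measured offset, then steps the clock by that amount (a makestep-style correction rather than gradual slewing) and records the fixed TAI offset; 37 seconds is the TAI-UTC difference in force since the 2017 leap second. The arithmetic, with a hypothetical pre-step timestamp for illustration:

    # Worked example of the two corrections chronyd reports above.
    wrong_by = 1.153678              # seconds, from the log
    tai_minus_utc = 37               # seconds, from the log

    utc_before_step = 1765011640.0   # hypothetical pre-step timestamp
    utc_after_step = utc_before_step + wrong_by
    tai = utc_after_step + tai_minus_utc
    print(f"stepped by {wrong_by:+.6f}s; TAI = UTC + {tai_minus_utc}s -> {tai:.6f}")
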
Dec 06 09:00:42 np0005548915.novalocal dracut[1266]: *** Including module: kernel-modules-extra ***
Dec 06 09:00:42 np0005548915.novalocal dracut[1266]:   kernel-modules-extra: configuration source "/run/depmod.d" does not exist
Dec 06 09:00:42 np0005548915.novalocal dracut[1266]:   kernel-modules-extra: configuration source "/lib/depmod.d" does not exist
Dec 06 09:00:42 np0005548915.novalocal dracut[1266]:   kernel-modules-extra: parsing configuration file "/etc/depmod.d/dist.conf"
Dec 06 09:00:42 np0005548915.novalocal dracut[1266]:   kernel-modules-extra: /etc/depmod.d/dist.conf: added "updates extra built-in weak-updates" to the list of search directories
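
The kernel-modules-extra lines show depmod's layered configuration: nonexistent sources are skipped, and the surviving dist.conf contributes a "search" directive listing module directories. A simplified re-reading of that scan (paths from the log, parsing reduced to the search keyword):

    # Sketch of the depmod.d scan shown above: skip missing config
    # directories, collect "search" directives from the ones that exist.
    import os

    sources = ["/run/depmod.d", "/lib/depmod.d", "/etc/depmod.d"]
    search_dirs = []
    for src in sources:
        if not os.path.isdir(src):
            print(f'configuration source "{src}" does not exist')
            continue
        for name in sorted(os.listdir(src)):
            path = os.path.join(src, name)
            print(f'parsing configuration file "{path}"')
            with open(path) as f:
                for line in f:
                    fields = line.split()
                    # e.g. "search updates extra built-in weak-updates"
                    if fields and fields[0] == "search":
                        search_dirs.extend(fields[1:])
    print("search directories:", search_dirs)
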
Dec 06 09:00:42 np0005548915.novalocal dracut[1266]: *** Including module: qemu ***
Dec 06 09:00:42 np0005548915.novalocal dracut[1266]: *** Including module: fstab-sys ***
Dec 06 09:00:42 np0005548915.novalocal dracut[1266]: *** Including module: rootfs-block ***
Dec 06 09:00:42 np0005548915.novalocal dracut[1266]: *** Including module: terminfo ***
Dec 06 09:00:42 np0005548915.novalocal dracut[1266]: *** Including module: udev-rules ***
Dec 06 09:00:43 np0005548915.novalocal dracut[1266]: Skipping udev rule: 91-permissions.rules
Dec 06 09:00:43 np0005548915.novalocal dracut[1266]: Skipping udev rule: 80-drivers-modprobe.rules
Dec 06 09:00:43 np0005548915.novalocal dracut[1266]: *** Including module: virtiofs ***
Dec 06 09:00:43 np0005548915.novalocal dracut[1266]: *** Including module: dracut-systemd ***
Dec 06 09:00:43 np0005548915.novalocal dracut[1266]: *** Including module: usrmount ***
Dec 06 09:00:43 np0005548915.novalocal dracut[1266]: *** Including module: base ***
Dec 06 09:00:43 np0005548915.novalocal dracut[1266]: *** Including module: fs-lib ***
Dec 06 09:00:43 np0005548915.novalocal dracut[1266]: *** Including module: kdumpbase ***
Dec 06 09:00:43 np0005548915.novalocal dracut[1266]: *** Including module: microcode_ctl-fw_dir_override ***
Dec 06 09:00:43 np0005548915.novalocal dracut[1266]:   microcode_ctl module: mangling fw_dir
Dec 06 09:00:43 np0005548915.novalocal dracut[1266]:     microcode_ctl: reset fw_dir to "/lib/firmware/updates /lib/firmware"
Dec 06 09:00:43 np0005548915.novalocal dracut[1266]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel"...
Dec 06 09:00:43 np0005548915.novalocal dracut[1266]:     microcode_ctl: configuration "intel" is ignored
Dec 06 09:00:43 np0005548915.novalocal dracut[1266]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-2d-07"...
Dec 06 09:00:43 np0005548915.novalocal dracut[1266]:     microcode_ctl: configuration "intel-06-2d-07" is ignored
Dec 06 09:00:43 np0005548915.novalocal dracut[1266]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4e-03"...
Dec 06 09:00:44 np0005548915.novalocal dracut[1266]:     microcode_ctl: configuration "intel-06-4e-03" is ignored
Dec 06 09:00:44 np0005548915.novalocal dracut[1266]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4f-01"...
Dec 06 09:00:44 np0005548915.novalocal dracut[1266]:     microcode_ctl: configuration "intel-06-4f-01" is ignored
Dec 06 09:00:44 np0005548915.novalocal dracut[1266]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-55-04"...
Dec 06 09:00:44 np0005548915.novalocal dracut[1266]:     microcode_ctl: configuration "intel-06-55-04" is ignored
Dec 06 09:00:44 np0005548915.novalocal dracut[1266]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-5e-03"...
Dec 06 09:00:44 np0005548915.novalocal dracut[1266]:     microcode_ctl: configuration "intel-06-5e-03" is ignored
Dec 06 09:00:44 np0005548915.novalocal dracut[1266]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8c-01"...
Dec 06 09:00:44 np0005548915.novalocal dracut[1266]:     microcode_ctl: configuration "intel-06-8c-01" is ignored
Dec 06 09:00:44 np0005548915.novalocal dracut[1266]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-0xca"...
Dec 06 09:00:44 np0005548915.novalocal dracut[1266]:     microcode_ctl: configuration "intel-06-8e-9e-0x-0xca" is ignored
Dec 06 09:00:44 np0005548915.novalocal dracut[1266]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-dell"...
Dec 06 09:00:44 np0005548915.novalocal dracut[1266]:     microcode_ctl: configuration "intel-06-8e-9e-0x-dell" is ignored
Dec 06 09:00:44 np0005548915.novalocal dracut[1266]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8f-08"...
Dec 06 09:00:44 np0005548915.novalocal dracut[1266]:     microcode_ctl: configuration "intel-06-8f-08" is ignored
Dec 06 09:00:44 np0005548915.novalocal dracut[1266]:     microcode_ctl: final fw_dir: "/lib/firmware/updates /lib/firmware"
Dec 06 09:00:44 np0005548915.novalocal dracut[1266]: *** Including module: openssl ***
Dec 06 09:00:44 np0005548915.novalocal dracut[1266]: *** Including module: shutdown ***
Dec 06 09:00:44 np0005548915.novalocal dracut[1266]: *** Including module: squash ***
Dec 06 09:00:44 np0005548915.novalocal dracut[1266]: *** Including modules done ***
Dec 06 09:00:44 np0005548915.novalocal dracut[1266]: *** Installing kernel module dependencies ***
Dec 06 09:00:45 np0005548915.novalocal dracut[1266]: *** Installing kernel module dependencies done ***
Dec 06 09:00:45 np0005548915.novalocal dracut[1266]: *** Resolving executable dependencies ***
Dec 06 09:00:45 np0005548915.novalocal irqbalance[788]: Cannot change IRQ 25 affinity: Operation not permitted
Dec 06 09:00:45 np0005548915.novalocal irqbalance[788]: IRQ 25 affinity is now unmanaged
Dec 06 09:00:45 np0005548915.novalocal irqbalance[788]: Cannot change IRQ 31 affinity: Operation not permitted
Dec 06 09:00:45 np0005548915.novalocal irqbalance[788]: IRQ 31 affinity is now unmanaged
Dec 06 09:00:45 np0005548915.novalocal irqbalance[788]: Cannot change IRQ 28 affinity: Operation not permitted
Dec 06 09:00:45 np0005548915.novalocal irqbalance[788]: IRQ 28 affinity is now unmanaged
Dec 06 09:00:45 np0005548915.novalocal irqbalance[788]: Cannot change IRQ 32 affinity: Operation not permitted
Dec 06 09:00:45 np0005548915.novalocal irqbalance[788]: IRQ 32 affinity is now unmanaged
Dec 06 09:00:45 np0005548915.novalocal irqbalance[788]: Cannot change IRQ 30 affinity: Operation not permitted
Dec 06 09:00:45 np0005548915.novalocal irqbalance[788]: IRQ 30 affinity is now unmanaged
Dec 06 09:00:45 np0005548915.novalocal irqbalance[788]: Cannot change IRQ 29 affinity: Operation not permitted
Dec 06 09:00:45 np0005548915.novalocal irqbalance[788]: IRQ 29 affinity is now unmanaged
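
These irqbalance failures are normal in a KVM guest: some interrupts (likely virtio and other device vectors here) reject affinity writes, and irqbalance responds by marking them unmanaged rather than retrying. A minimal sketch of the attempt, writing a CPU mask to /proc/irq/<n>/smp_affinity and treating EPERM as a permanent ban; it needs root, and IRQ 25 is simply the first number from the log:

    import errno

    def set_affinity(irq: int, mask: int) -> bool:
        try:
            with open(f"/proc/irq/{irq}/smp_affinity", "w") as f:
                f.write(f"{mask:x}")
            return True
        except OSError as e:
            if e.errno == errno.EPERM:
                print(f"Cannot change IRQ {irq} affinity: Operation not permitted")
                print(f"IRQ {irq} affinity is now unmanaged")
                return False
            raise

    set_affinity(25, 0b0001)   # hypothetically pin IRQ 25 to CPU 0
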
Dec 06 09:00:46 np0005548915.novalocal dracut[1266]: *** Resolving executable dependencies done ***
Dec 06 09:00:46 np0005548915.novalocal dracut[1266]: *** Generating early-microcode cpio image ***
Dec 06 09:00:46 np0005548915.novalocal dracut[1266]: *** Store current command line parameters ***
Dec 06 09:00:46 np0005548915.novalocal dracut[1266]: Stored kernel commandline:
Dec 06 09:00:46 np0005548915.novalocal dracut[1266]: No dracut internal kernel commandline stored in the initramfs
Dec 06 09:00:46 np0005548915.novalocal dracut[1266]: *** Install squash loader ***
Dec 06 09:00:47 np0005548915.novalocal systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Dec 06 09:00:48 np0005548915.novalocal dracut[1266]: *** Squashing the files inside the initramfs ***
Dec 06 09:00:49 np0005548915.novalocal dracut[1266]: *** Squashing the files inside the initramfs done ***
Dec 06 09:00:49 np0005548915.novalocal dracut[1266]: *** Creating image file '/boot/initramfs-5.14.0-645.el9.x86_64kdump.img' ***
Dec 06 09:00:49 np0005548915.novalocal dracut[1266]: *** Hardlinking files ***
Dec 06 09:00:49 np0005548915.novalocal dracut[1266]: Mode:           real
Dec 06 09:00:49 np0005548915.novalocal dracut[1266]: Files:          50
Dec 06 09:00:49 np0005548915.novalocal dracut[1266]: Linked:         0 files
Dec 06 09:00:49 np0005548915.novalocal dracut[1266]: Compared:       0 xattrs
Dec 06 09:00:49 np0005548915.novalocal dracut[1266]: Compared:       0 files
Dec 06 09:00:49 np0005548915.novalocal dracut[1266]: Saved:          0 B
Dec 06 09:00:49 np0005548915.novalocal dracut[1266]: Duration:       0.000422 seconds
Dec 06 09:00:49 np0005548915.novalocal dracut[1266]: *** Hardlinking files done ***
Dec 06 09:00:49 np0005548915.novalocal dracut[1266]: *** Creating initramfs image file '/boot/initramfs-5.14.0-645.el9.x86_64kdump.img' done ***
Dec 06 09:00:50 np0005548915.novalocal kdumpctl[1017]: kdump: kexec: loaded kdump kernel
Dec 06 09:00:50 np0005548915.novalocal kdumpctl[1017]: kdump: Starting kdump: [OK]
Dec 06 09:00:50 np0005548915.novalocal systemd[1]: Finished Crash recovery kernel arming.
Dec 06 09:00:50 np0005548915.novalocal systemd[1]: Startup finished in 1.595s (kernel) + 2.408s (initrd) + 16.678s (userspace) = 20.682s.
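
The three phase timings are rounded individually for display while the total is computed from the raw microsecond counters, which is why the printed parts need not add up exactly to the printed total:

    kernel, initrd, userspace = 1.595, 2.408, 16.678
    print(kernel + initrd + userspace)   # 20.681, vs. the reported 20.682
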
Dec 06 09:00:54 np0005548915.novalocal sshd-session[4295]: Accepted publickey for zuul from 38.102.83.114 port 52434 ssh2: RSA SHA256:zhs3MiW0JhxzckYcMHQES8SMYHj1iGcomnyzmbiwor8
Dec 06 09:00:54 np0005548915.novalocal systemd-logind[795]: New session 1 of user zuul.
Dec 06 09:00:54 np0005548915.novalocal systemd[1]: Created slice User Slice of UID 1000.
Dec 06 09:00:54 np0005548915.novalocal systemd[1]: Starting User Runtime Directory /run/user/1000...
Dec 06 09:00:54 np0005548915.novalocal systemd[1]: Finished User Runtime Directory /run/user/1000.
Dec 06 09:00:54 np0005548915.novalocal systemd[1]: Starting User Manager for UID 1000...
Dec 06 09:00:54 np0005548915.novalocal systemd[4299]: pam_unix(systemd-user:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 06 09:00:54 np0005548915.novalocal systemd[4299]: Queued start job for default target Main User Target.
Dec 06 09:00:54 np0005548915.novalocal systemd[4299]: Created slice User Application Slice.
Dec 06 09:00:54 np0005548915.novalocal systemd[4299]: Started Mark boot as successful after the user session has run 2 minutes.
Dec 06 09:00:54 np0005548915.novalocal systemd[4299]: Started Daily Cleanup of User's Temporary Directories.
Dec 06 09:00:54 np0005548915.novalocal systemd[4299]: Reached target Paths.
Dec 06 09:00:54 np0005548915.novalocal systemd[4299]: Reached target Timers.
Dec 06 09:00:54 np0005548915.novalocal systemd[4299]: Starting D-Bus User Message Bus Socket...
Dec 06 09:00:54 np0005548915.novalocal systemd[4299]: Starting Create User's Volatile Files and Directories...
Dec 06 09:00:54 np0005548915.novalocal systemd[4299]: Listening on D-Bus User Message Bus Socket.
Dec 06 09:00:54 np0005548915.novalocal systemd[4299]: Reached target Sockets.
Dec 06 09:00:54 np0005548915.novalocal systemd[4299]: Finished Create User's Volatile Files and Directories.
Dec 06 09:00:54 np0005548915.novalocal systemd[4299]: Reached target Basic System.
Dec 06 09:00:54 np0005548915.novalocal systemd[4299]: Reached target Main User Target.
Dec 06 09:00:54 np0005548915.novalocal systemd[4299]: Startup finished in 109ms.
Dec 06 09:00:54 np0005548915.novalocal systemd[1]: Started User Manager for UID 1000.
Dec 06 09:00:54 np0005548915.novalocal systemd[1]: Started Session 1 of User zuul.
Dec 06 09:00:54 np0005548915.novalocal sshd-session[4295]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 06 09:00:55 np0005548915.novalocal python3[4381]: ansible-setup Invoked with gather_subset=['!all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 06 09:00:57 np0005548915.novalocal python3[4409]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 06 09:01:02 np0005548915.novalocal CROND[4445]: (root) CMD (run-parts /etc/cron.hourly)
Dec 06 09:01:02 np0005548915.novalocal run-parts[4448]: (/etc/cron.hourly) starting 0anacron
Dec 06 09:01:02 np0005548915.novalocal anacron[4456]: Anacron started on 2025-12-06
Dec 06 09:01:02 np0005548915.novalocal anacron[4456]: Will run job `cron.daily' in 28 min.
Dec 06 09:01:02 np0005548915.novalocal anacron[4456]: Will run job `cron.weekly' in 48 min.
Dec 06 09:01:02 np0005548915.novalocal anacron[4456]: Will run job `cron.monthly' in 68 min.
Dec 06 09:01:02 np0005548915.novalocal anacron[4456]: Jobs will be executed sequentially
Dec 06 09:01:02 np0005548915.novalocal run-parts[4458]: (/etc/cron.hourly) finished 0anacron
Dec 06 09:01:02 np0005548915.novalocal CROND[4444]: (root) CMDEND (run-parts /etc/cron.hourly)
Dec 06 09:01:05 np0005548915.novalocal python3[4482]: ansible-setup Invoked with gather_subset=['network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 06 09:01:06 np0005548915.novalocal systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Dec 06 09:01:06 np0005548915.novalocal python3[4524]: ansible-zuul_console Invoked with path=/tmp/console-{log_uuid}.log port=19885 state=present
Dec 06 09:01:08 np0005548915.novalocal python3[4550]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDU0JPqo3RlcbkISWeWyZyh8N1DipPCXKbgbj83sLrBXd5pRLoLdbqBjiuLvFfP7lb5gET6+eP3VZiOMI6UHmEm8ynKQRTIQ7lxC6wlJ/5bEkQ7shEony5Dt8S+/YriKnW8SR/bfYJwGVDGiYwX9+YLTEkgtaWYCW5aOhF1JYR2fNVZQyTaBuiZFc/j1+ce31wCfSAIAFETx4TP71KVZET/mDhOPfYQSE6dNJCcZnohKVSa1SHNL0bVxbehOrQrmqmiRc81piGO4LAMvuSM3op7QTjc7lDDNoYX/DWm/O6Yd8IV5PAI5jAYm4zViXyj8K/iPfclSAUCutpd/HwsQjjiI9Ei0ObVrpLhV3PWw6UkMmfRl4sN90Bhg/95I6taoeEDSSNojukndyGr3lxM1SkEHO0ZamuvQmAOsP05x89hsZFP9E+RntviBPqrCNyyiE7JEy2H1WfIK5i0KA/BC8M+osytKOc1zBu/jI4TYPr32yUNd7mIBDzpNaUok32L4Pk= zuul-build-sshkey manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 06 09:01:09 np0005548915.novalocal python3[4574]: ansible-file Invoked with state=directory path=/home/zuul/.ssh mode=448 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:01:09 np0005548915.novalocal python3[4673]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 06 09:01:09 np0005548915.novalocal python3[4744]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765011669.3883896-251-259234522709630/source dest=/home/zuul/.ssh/id_rsa mode=384 force=False _original_basename=66d341c321a043af9793d30ca9726f09_id_rsa follow=False checksum=1c48fa8bdbec038bf9f0f4b497dca115d790ad66 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:01:10 np0005548915.novalocal python3[4867]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa.pub follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 06 09:01:10 np0005548915.novalocal python3[4938]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765011670.2597892-306-204397967249075/source dest=/home/zuul/.ssh/id_rsa.pub mode=420 force=False _original_basename=66d341c321a043af9793d30ca9726f09_id_rsa.pub follow=False checksum=e7cbe2647d02b25f8aa52dd3d3a0ea1aa1cad833 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
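
The mode= values in these ansible lines are octal file modes rendered in decimal: 448 is 0o700 for ~/.ssh, 384 is 0o600 for the private key, 420 is 0o644 for the public key (and, later in this section, 493 is 0o755, 511 is 0o777, 288 is 0o440). A quick check:

    for dec in (448, 384, 420, 493, 511, 288):
        print(dec, oct(dec))
    # 448 0o700, 384 0o600, 420 0o644, 493 0o755, 511 0o777, 288 0o440
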
Dec 06 09:01:12 np0005548915.novalocal python3[4986]: ansible-ping Invoked with data=pong
Dec 06 09:01:13 np0005548915.novalocal python3[5010]: ansible-setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 06 09:01:15 np0005548915.novalocal python3[5068]: ansible-zuul_debug_info Invoked with ipv4_route_required=False ipv6_route_required=False image_manifest_files=['/etc/dib-builddate.txt', '/etc/image-hostname.txt'] image_manifest=None traceroute_host=None
Dec 06 09:01:16 np0005548915.novalocal python3[5100]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:01:16 np0005548915.novalocal python3[5124]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:01:17 np0005548915.novalocal python3[5148]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:01:17 np0005548915.novalocal python3[5172]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:01:17 np0005548915.novalocal python3[5196]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:01:18 np0005548915.novalocal python3[5220]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:01:19 np0005548915.novalocal sudo[5244]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jyeqcgsadppmkughvpocbhxqfjyousqj ; /usr/bin/python3'
Dec 06 09:01:19 np0005548915.novalocal sudo[5244]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:01:19 np0005548915.novalocal python3[5246]: ansible-file Invoked with path=/etc/ci state=directory owner=root group=root mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:01:19 np0005548915.novalocal sudo[5244]: pam_unix(sudo:session): session closed for user root
Dec 06 09:01:20 np0005548915.novalocal sudo[5322]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ygobdtnzajwfarykvpatmrdhzjduzxce ; /usr/bin/python3'
Dec 06 09:01:20 np0005548915.novalocal sudo[5322]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:01:20 np0005548915.novalocal python3[5324]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/mirror_info.sh follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 06 09:01:20 np0005548915.novalocal sudo[5322]: pam_unix(sudo:session): session closed for user root
Dec 06 09:01:20 np0005548915.novalocal sudo[5395]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xqssjjilcpfanzzgccjgepummqvrhtiv ; /usr/bin/python3'
Dec 06 09:01:20 np0005548915.novalocal sudo[5395]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:01:20 np0005548915.novalocal python3[5397]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/mirror_info.sh owner=root group=root mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1765011680.0325034-31-121883593708449/source follow=False _original_basename=mirror_info.sh.j2 checksum=92d92a03afdddee82732741071f662c729080c35 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:01:20 np0005548915.novalocal sudo[5395]: pam_unix(sudo:session): session closed for user root
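
The sudo COMMAND entries throughout this section show Ansible's become mechanism: the escalated shell echoes a random lowercase marker before starting the Python interpreter, and the controller watches the output stream for that marker to confirm escalation succeeded. An illustrative (not Ansible's exact) marker construction:

    import random, string

    marker = "".join(random.choices(string.ascii_lowercase, k=32))
    command = f"/bin/sh -c 'echo BECOME-SUCCESS-{marker} ; /usr/bin/python3'"
    print(command)
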
Dec 06 09:01:21 np0005548915.novalocal python3[5445]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA4Z/c9osaGGtU6X8fgELwfj/yayRurfcKA0HMFfdpPxev2dbwljysMuzoVp4OZmW1gvGtyYPSNRvnzgsaabPNKNo2ym5NToCP6UM+KSe93aln4BcM/24mXChYAbXJQ5Bqq/pIzsGs/pKetQN+vwvMxLOwTvpcsCJBXaa981RKML6xj9l/UZ7IIq1HSEKMvPLxZMWdu0Ut8DkCd5F4nOw9Wgml2uYpDCj5LLCrQQ9ChdOMz8hz6SighhNlRpPkvPaet3OXxr/ytFMu7j7vv06CaEnuMMiY2aTWN1Imin9eHAylIqFHta/3gFfQSWt9jXM7owkBLKL7ATzhaAn+fjNupw== arxcruz@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 06 09:01:21 np0005548915.novalocal python3[5469]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDS4Fn6k4deCnIlOtLWqZJyksbepjQt04j8Ed8CGx9EKkj0fKiAxiI4TadXQYPuNHMixZy4Nevjb6aDhL5Z906TfvNHKUrjrG7G26a0k8vdc61NEQ7FmcGMWRLwwc6ReDO7lFpzYKBMk4YqfWgBuGU/K6WLKiVW2cVvwIuGIaYrE1OiiX0iVUUk7KApXlDJMXn7qjSYynfO4mF629NIp8FJal38+Kv+HA+0QkE5Y2xXnzD4Lar5+keymiCHRntPppXHeLIRzbt0gxC7v3L72hpQ3BTBEzwHpeS8KY+SX1y5lRMN45thCHfJqGmARJREDjBvWG8JXOPmVIKQtZmVcD5b mandreou@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 06 09:01:22 np0005548915.novalocal python3[5493]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC9MiLfy30deHA7xPOAlew5qUq3UP2gmRMYJi8PtkjFB20/DKeWwWNnkZPqP9AayruRoo51SIiVg870gbZE2jYl+Ncx/FYDe56JeC3ySZsXoAVkC9bP7gkOGqOmJjirvAgPMI7bogVz8i+66Q4Ar7OKTp3762G4IuWPPEg4ce4Y7lx9qWocZapHYq4cYKMxrOZ7SEbFSATBbe2bPZAPKTw8do/Eny+Hq/LkHFhIeyra6cqTFQYShr+zPln0Cr+ro/pDX3bB+1ubFgTpjpkkkQsLhDfR6cCdCWM2lgnS3BTtYj5Ct9/JRPR5YOphqZz+uB+OEu2IL68hmU9vNTth1KeX rlandy@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 06 09:01:22 np0005548915.novalocal python3[5517]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFCbgz8gdERiJlk2IKOtkjQxEXejrio6ZYMJAVJYpOIp raukadah@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 06 09:01:22 np0005548915.novalocal python3[5541]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBqb3Q/9uDf4LmihQ7xeJ9gA/STIQUFPSfyyV0m8AoQi bshewale@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 06 09:01:22 np0005548915.novalocal python3[5565]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC0I8QqQx0Az2ysJt2JuffucLijhBqnsXKEIx5GyHwxVULROa8VtNFXUDH6ZKZavhiMcmfHB2+TBTda+lDP4FldYj06dGmzCY+IYGa+uDRdxHNGYjvCfLFcmLlzRK6fNbTcui+KlUFUdKe0fb9CRoGKyhlJD5GRkM1Dv+Yb6Bj+RNnmm1fVGYxzmrD2utvffYEb0SZGWxq2R9gefx1q/3wCGjeqvufEV+AskPhVGc5T7t9eyZ4qmslkLh1/nMuaIBFcr9AUACRajsvk6mXrAN1g3HlBf2gQlhi1UEyfbqIQvzzFtsbLDlSum/KmKjy818GzvWjERfQ0VkGzCd9bSLVL dviroel@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 06 09:01:23 np0005548915.novalocal python3[5589]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDLOQd4ZLtkZXQGY6UwAr/06ppWQK4fDO3HaqxPk98csyOCBXsliSKK39Bso828+5srIXiW7aI6aC9P5mwi4mUZlGPfJlQbfrcGvY+b/SocuvaGK+1RrHLoJCT52LBhwgrzlXio2jeksZeein8iaTrhsPrOAs7KggIL/rB9hEiB3NaOPWhhoCP4vlW6MEMExGcqB/1FVxXFBPnLkEyW0Lk7ycVflZl2ocRxbfjZi0+tI1Wlinp8PvSQSc/WVrAcDgKjc/mB4ODPOyYy3G8FHgfMsrXSDEyjBKgLKMsdCrAUcqJQWjkqXleXSYOV4q3pzL+9umK+q/e3P/bIoSFQzmJKTU1eDfuvPXmow9F5H54fii/Da7ezlMJ+wPGHJrRAkmzvMbALy7xwswLhZMkOGNtRcPqaKYRmIBKpw3o6bCTtcNUHOtOQnzwY8JzrM2eBWJBXAANYw+9/ho80JIiwhg29CFNpVBuHbql2YxJQNrnl90guN65rYNpDxdIluweyUf8= anbanerj@kaermorhen manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 06 09:01:23 np0005548915.novalocal python3[5613]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC3VwV8Im9kRm49lt3tM36hj4Zv27FxGo4C1Q/0jqhzFmHY7RHbmeRr8ObhwWoHjXSozKWg8FL5ER0z3hTwL0W6lez3sL7hUaCmSuZmG5Hnl3x4vTSxDI9JZ/Y65rtYiiWQo2fC5xJhU/4+0e5e/pseCm8cKRSu+SaxhO+sd6FDojA2x1BzOzKiQRDy/1zWGp/cZkxcEuB1wHI5LMzN03c67vmbu+fhZRAUO4dQkvcnj2LrhQtpa+ytvnSjr8icMDosf1OsbSffwZFyHB/hfWGAfe0eIeSA2XPraxiPknXxiPKx2MJsaUTYbsZcm3EjFdHBBMumw5rBI74zLrMRvCO9GwBEmGT4rFng1nP+yw5DB8sn2zqpOsPg1LYRwCPOUveC13P6pgsZZPh812e8v5EKnETct+5XI3dVpdw6CnNiLwAyVAF15DJvBGT/u1k0Myg/bQn+Gv9k2MSj6LvQmf6WbZu2Wgjm30z3FyCneBqTL7mLF19YXzeC0ufHz5pnO1E= dasm@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 06 09:01:23 np0005548915.novalocal python3[5637]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHUnwjB20UKmsSed9X73eGNV5AOEFccQ3NYrRW776pEk cjeanner manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 06 09:01:23 np0005548915.novalocal python3[5661]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDercCMGn8rW1C4P67tHgtflPdTeXlpyUJYH+6XDd2lR jgilaber@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 06 09:01:24 np0005548915.novalocal python3[5685]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAMI6kkg9Wg0sG7jIJmyZemEBwUn1yzNpQQd3gnulOmZ adrianfuscoarnejo@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 06 09:01:24 np0005548915.novalocal python3[5709]: ansible-authorized_key Invoked with user=zuul state=present key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPijwpQu/3jhhhBZInXNOLEH57DrknPc3PLbsRvYyJIFzwYjX+WD4a7+nGnMYS42MuZk6TJcVqgnqofVx4isoD4= ramishra@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 06 09:01:24 np0005548915.novalocal python3[5733]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGpU/BepK3qX0NRf5Np+dOBDqzQEefhNrw2DCZaH3uWW rebtoor@monolith manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 06 09:01:25 np0005548915.novalocal python3[5757]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDK0iKdi8jQTpQrDdLVH/AAgLVYyTXF7AQ1gjc/5uT3t ykarel@yatinkarel manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 06 09:01:25 np0005548915.novalocal python3[5781]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIF/V/cLotA6LZeO32VL45Hd78skuA2lJA425Sm2LlQeZ fmount@horcrux manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 06 09:01:25 np0005548915.novalocal python3[5805]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDa7QCjuDMVmRPo1rREbGwzYeBCYVN+Ou/3WKXZEC6Sr manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 06 09:01:25 np0005548915.novalocal python3[5829]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQCfNtF7NvKl915TGsGGoseUb06Hj8L/S4toWf0hExeY+F00woL6NvBlJD0nDct+P5a22I4EhvoQCRQ8reaPCm1lybR3uiRIJsj+8zkVvLwby9LXzfZorlNG9ofjd00FEmB09uW/YvTl6Q9XwwwX6tInzIOv3TMqTHHGOL74ibbj8J/FJR0cFEyj0z4WQRvtkh32xAHl83gbuINryMt0sqRI+clj2381NKL55DRLQrVw0gsfqqxiHAnXg21qWmc4J+b9e9kiuAFQjcjwTVkwJCcg3xbPwC/qokYRby/Y5S40UUd7/jEARGXT7RZgpzTuDd1oZiCVrnrqJNPaMNdVv5MLeFdf1B7iIe5aa/fGouX7AO4SdKhZUdnJmCFAGvjC6S3JMZ2wAcUl+OHnssfmdj7XL50cLo27vjuzMtLAgSqi6N99m92WCF2s8J9aVzszX7Xz9OKZCeGsiVJp3/NdABKzSEAyM9xBD/5Vho894Sav+otpySHe3p6RUTgbB5Zu8VyZRZ/UtB3ueXxyo764yrc6qWIDqrehm84Xm9g+/jpIBzGPl07NUNJpdt/6Sgf9RIKXw/7XypO5yZfUcuFNGTxLfqjTNrtgLZNcjfav6sSdVXVcMPL//XNuRdKmVFaO76eV/oGMQGr1fGcCD+N+CpI7+Q+fCNB6VFWG4nZFuI/Iuw== averdagu@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 06 09:01:26 np0005548915.novalocal python3[5853]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDq8l27xI+QlQVdS4djp9ogSoyrNE2+Ox6vKPdhSNL1J3PE5w+WCSvMz9A5gnNuH810zwbekEApbxTze/gLQJwBHA52CChfURpXrFaxY7ePXRElwKAL3mJfzBWY/c5jnNL9TCVmFJTGZkFZP3Nh+BMgZvL6xBkt3WKm6Uq18qzd9XeKcZusrA+O+uLv1fVeQnadY9RIqOCyeFYCzLWrUfTyE8x/XG0hAWIM7qpnF2cALQS2h9n4hW5ybiUN790H08wf9hFwEf5nxY9Z9dVkPFQiTSGKNBzmnCXU9skxS/xhpFjJ5duGSZdtAHe9O+nGZm9c67hxgtf8e5PDuqAdXEv2cf6e3VBAt+Bz8EKI3yosTj0oZHfwr42Yzb1l/SKy14Rggsrc9KAQlrGXan6+u2jcQqqx7l+SWmnpFiWTV9u5cWj2IgOhApOitmRBPYqk9rE2usfO0hLn/Pj/R/Nau4803e1/EikdLE7Ps95s9mX5jRDjAoUa2JwFF5RsVFyL910= ashigupt@ashigupt.remote.csb manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 06 09:01:26 np0005548915.novalocal python3[5877]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOKLl0NYKwoZ/JY5KeZU8VwRAggeOxqQJeoqp3dsAaY9 manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 06 09:01:26 np0005548915.novalocal python3[5901]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIASASQOH2BcOyLKuuDOdWZlPi2orcjcA8q4400T73DLH evallesp@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 06 09:01:26 np0005548915.novalocal python3[5925]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILeBWlamUph+jRKV2qrx1PGU7vWuGIt5+z9k96I8WehW amsinha@amsinha-mac manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 06 09:01:27 np0005548915.novalocal python3[5949]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIANvVgvJBlK3gb1yz5uef/JqIGq4HLEmY2dYA8e37swb morenod@redhat-laptop manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 06 09:01:27 np0005548915.novalocal python3[5973]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQDZdI7t1cxYx65heVI24HTV4F7oQLW1zyfxHreL2TIJKxjyrUUKIFEUmTutcBlJRLNT2Eoix6x1sOw9YrchloCLcn//SGfTElr9mSc5jbjb7QXEU+zJMhtxyEJ1Po3CUGnj7ckiIXw7wcawZtrEOAQ9pH3ExYCJcEMiyNjRQZCxT3tPK+S4B95EWh5Fsrz9CkwpjNRPPH7LigCeQTM3Wc7r97utAslBUUvYceDSLA7rMgkitJE38b7rZBeYzsGQ8YYUBjTCtehqQXxCRjizbHWaaZkBU+N3zkKB6n/iCNGIO690NK7A/qb6msTijiz1PeuM8ThOsi9qXnbX5v0PoTpcFSojV7NHAQ71f0XXuS43FhZctT+Dcx44dT8Fb5vJu2cJGrk+qF8ZgJYNpRS7gPg0EG2EqjK7JMf9ULdjSu0r+KlqIAyLvtzT4eOnQipoKlb/WG5D/0ohKv7OMQ352ggfkBFIQsRXyyTCT98Ft9juqPuahi3CAQmP4H9dyE+7+Kz437PEtsxLmfm6naNmWi7Ee1DqWPwS8rEajsm4sNM4wW9gdBboJQtc0uZw0DfLj1I9r3Mc8Ol0jYtz0yNQDSzVLrGCaJlC311trU70tZ+ZkAVV6Mn8lOhSbj1cK0lvSr6ZK4dgqGl3I1eTZJJhbLNdg7UOVaiRx9543+C/p/As7w== brjackma@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 06 09:01:27 np0005548915.novalocal python3[5997]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKwedoZ0TWPJX/z/4TAbO/kKcDZOQVgRH0hAqrL5UCI1 vcastell@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 06 09:01:27 np0005548915.novalocal python3[6021]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEmv8sE8GCk6ZTPIqF0FQrttBdL3mq7rCm/IJy0xDFh7 michburk@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 06 09:01:28 np0005548915.novalocal python3[6045]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICy6GpGEtwevXEEn4mmLR5lmSLe23dGgAvzkB9DMNbkf rsafrono@rsafrono manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
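
Each authorized_key invocation above ensures one public key line is present in the zuul user's authorized_keys, with manage_dir=True also enforcing safe permissions on ~/.ssh. A minimal sketch of that idempotent behaviour, with an illustrative path and key:

    import os

    def ensure_key(home: str, key_line: str) -> None:
        ssh_dir = os.path.join(home, ".ssh")
        os.makedirs(ssh_dir, mode=0o700, exist_ok=True)
        path = os.path.join(ssh_dir, "authorized_keys")
        existing = []
        if os.path.exists(path):
            existing = open(path).read().splitlines()
        if key_line not in existing:
            with open(path, "a") as f:
                f.write(key_line + "\n")
        os.chmod(path, 0o600)

    ensure_key("/home/zuul", "ssh-ed25519 AAAA... example@host")
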
Dec 06 09:01:31 np0005548915.novalocal sudo[6069]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dfeomdaowzzaolwijcnqrxekamopcblr ; /usr/bin/python3'
Dec 06 09:01:31 np0005548915.novalocal sudo[6069]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:01:31 np0005548915.novalocal python3[6071]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Dec 06 09:01:31 np0005548915.novalocal systemd[1]: Starting Time & Date Service...
Dec 06 09:01:31 np0005548915.novalocal systemd[1]: Started Time & Date Service.
Dec 06 09:01:31 np0005548915.novalocal systemd-timedated[6073]: Changed time zone to 'UTC' (UTC).
Dec 06 09:01:31 np0005548915.novalocal sudo[6069]: pam_unix(sudo:session): session closed for user root
Dec 06 09:01:32 np0005548915.novalocal sudo[6100]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ggyyzqxdijgcnbjzklkcxwthlhjssonw ; /usr/bin/python3'
Dec 06 09:01:32 np0005548915.novalocal sudo[6100]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:01:32 np0005548915.novalocal python3[6102]: ansible-file Invoked with path=/etc/nodepool state=directory mode=511 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:01:32 np0005548915.novalocal sudo[6100]: pam_unix(sudo:session): session closed for user root
Dec 06 09:01:32 np0005548915.novalocal python3[6178]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 06 09:01:32 np0005548915.novalocal python3[6249]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes src=/home/zuul/.ansible/tmp/ansible-tmp-1765011692.3971589-251-23918897575888/source _original_basename=tmphdeuo2bx follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:01:33 np0005548915.novalocal python3[6349]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 06 09:01:33 np0005548915.novalocal python3[6420]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes_private src=/home/zuul/.ansible/tmp/ansible-tmp-1765011693.3040066-301-74495999743567/source _original_basename=tmpa9hbw3x4 follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
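
Both copies report checksum da39a3ee5e6b4b0d3255bfef95601890afd80709, which is the SHA-1 of zero bytes, so /etc/nodepool/sub_nodes and sub_nodes_private were written as empty files, presumably because this job has no subnodes:

    import hashlib
    print(hashlib.sha1(b"").hexdigest())
    # da39a3ee5e6b4b0d3255bfef95601890afd80709
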
Dec 06 09:01:34 np0005548915.novalocal sudo[6520]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nhrvnloplwspomdrdaqugiwfvaadzzhr ; /usr/bin/python3'
Dec 06 09:01:34 np0005548915.novalocal sudo[6520]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:01:34 np0005548915.novalocal python3[6522]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/node_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 06 09:01:34 np0005548915.novalocal sudo[6520]: pam_unix(sudo:session): session closed for user root
Dec 06 09:01:35 np0005548915.novalocal sudo[6593]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ljbnoowgllrjapxqiecnxvhcfvnorsbd ; /usr/bin/python3'
Dec 06 09:01:35 np0005548915.novalocal sudo[6593]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:01:35 np0005548915.novalocal python3[6595]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/node_private src=/home/zuul/.ansible/tmp/ansible-tmp-1765011694.6163456-381-82365141350240/source _original_basename=tmpkt__clge follow=False checksum=e37e58be433a53918a64d1ef12dfc1e7d01516d0 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:01:35 np0005548915.novalocal sudo[6593]: pam_unix(sudo:session): session closed for user root
Dec 06 09:01:35 np0005548915.novalocal python3[6643]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa /etc/nodepool/id_rsa zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 09:01:36 np0005548915.novalocal python3[6669]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa.pub /etc/nodepool/id_rsa.pub zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 09:01:36 np0005548915.novalocal sudo[6747]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dfmlbiazhvipugbicledievanumxnfrw ; /usr/bin/python3'
Dec 06 09:01:36 np0005548915.novalocal sudo[6747]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:01:36 np0005548915.novalocal python3[6749]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/zuul-sudo-grep follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 06 09:01:36 np0005548915.novalocal sudo[6747]: pam_unix(sudo:session): session closed for user root
Dec 06 09:01:36 np0005548915.novalocal sudo[6820]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kahmtjwwvdqrnepdgqqfllvtczqrofkr ; /usr/bin/python3'
Dec 06 09:01:36 np0005548915.novalocal sudo[6820]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:01:36 np0005548915.novalocal python3[6822]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/zuul-sudo-grep mode=288 src=/home/zuul/.ansible/tmp/ansible-tmp-1765011696.3556497-451-150350129668605/source _original_basename=tmpvcls7vjg follow=False checksum=bdca1a77493d00fb51567671791f4aa30f66c2f0 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:01:36 np0005548915.novalocal sudo[6820]: pam_unix(sudo:session): session closed for user root
Dec 06 09:01:37 np0005548915.novalocal sudo[6871]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wpwuuyzooaakqjswfqccmtpneagglzzs ; /usr/bin/python3'
Dec 06 09:01:37 np0005548915.novalocal sudo[6871]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:01:37 np0005548915.novalocal python3[6873]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/visudo -c zuul_log_id=fa163efc-24cc-c2c1-5ee8-00000000001f-1-compute0 zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 09:01:37 np0005548915.novalocal sudo[6871]: pam_unix(sudo:session): session closed for user root
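
The sudoers drop-in is installed read-only (mode 288, i.e. 0o440) and then syntax-checked; visudo -c validates the whole sudoers configuration, while -cf restricts the check to a single file. A sketch of a per-file check against the path from the log:

    import subprocess

    path = "/etc/sudoers.d/zuul-sudo-grep"   # path from the log
    result = subprocess.run(["visudo", "-cf", path],
                            capture_output=True, text=True)
    print(result.returncode, result.stdout.strip())
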
Dec 06 09:01:38 np0005548915.novalocal python3[6901]: ansible-ansible.legacy.command Invoked with executable=/bin/bash _raw_params=env _uses_shell=True zuul_log_id=fa163efc-24cc-c2c1-5ee8-000000000020-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None creates=None removes=None stdin=None
Dec 06 09:01:39 np0005548915.novalocal python3[6929]: ansible-file Invoked with path=/home/zuul/workspace state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:01:58 np0005548915.novalocal sudo[6953]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qmdzsqpvgldehbdhmhsiwsqayozwqiax ; /usr/bin/python3'
Dec 06 09:01:58 np0005548915.novalocal sudo[6953]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:01:58 np0005548915.novalocal python3[6955]: ansible-ansible.builtin.file Invoked with path=/etc/ci/env state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:01:58 np0005548915.novalocal sudo[6953]: pam_unix(sudo:session): session closed for user root
Dec 06 09:02:01 np0005548915.novalocal systemd[1]: systemd-timedated.service: Deactivated successfully.
Dec 06 09:02:41 np0005548915.novalocal kernel: pci 0000:00:07.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Dec 06 09:02:41 np0005548915.novalocal kernel: pci 0000:00:07.0: BAR 0 [io  0x0000-0x003f]
Dec 06 09:02:41 np0005548915.novalocal kernel: pci 0000:00:07.0: BAR 1 [mem 0x00000000-0x00000fff]
Dec 06 09:02:41 np0005548915.novalocal kernel: pci 0000:00:07.0: BAR 4 [mem 0x00000000-0x00003fff 64bit pref]
Dec 06 09:02:41 np0005548915.novalocal kernel: pci 0000:00:07.0: ROM [mem 0x00000000-0x0007ffff pref]
Dec 06 09:02:41 np0005548915.novalocal kernel: pci 0000:00:07.0: ROM [mem 0xc0000000-0xc007ffff pref]: assigned
Dec 06 09:02:41 np0005548915.novalocal kernel: pci 0000:00:07.0: BAR 4 [mem 0x240000000-0x240003fff 64bit pref]: assigned
Dec 06 09:02:41 np0005548915.novalocal kernel: pci 0000:00:07.0: BAR 1 [mem 0xc0080000-0xc0080fff]: assigned
Dec 06 09:02:41 np0005548915.novalocal kernel: pci 0000:00:07.0: BAR 0 [io  0x1000-0x103f]: assigned
Dec 06 09:02:41 np0005548915.novalocal kernel: virtio-pci 0000:00:07.0: enabling device (0000 -> 0003)
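
The "(0000 -> 0003)" above is the PCI command register before and after the driver enables the hot-plugged device: bit 0 turns on I/O-port decoding (covering BAR 0) and bit 1 turns on memory decoding (BARs 1 and 4):

    PCI_COMMAND_IO = 0x1       # bit 0: I/O space enable
    PCI_COMMAND_MEMORY = 0x2   # bit 1: memory space enable
    print(hex(PCI_COMMAND_IO | PCI_COMMAND_MEMORY))   # 0x3
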
Dec 06 09:02:41 np0005548915.novalocal NetworkManager[855]: <info>  [1765011761.1791] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Dec 06 09:02:41 np0005548915.novalocal systemd-udevd[6959]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 09:02:41 np0005548915.novalocal NetworkManager[855]: <info>  [1765011761.1928] device (eth1): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 06 09:02:41 np0005548915.novalocal NetworkManager[855]: <info>  [1765011761.1953] settings: (eth1): created default wired connection 'Wired connection 1'
Dec 06 09:02:41 np0005548915.novalocal NetworkManager[855]: <info>  [1765011761.1958] device (eth1): carrier: link connected
Dec 06 09:02:41 np0005548915.novalocal NetworkManager[855]: <info>  [1765011761.1960] device (eth1): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Dec 06 09:02:41 np0005548915.novalocal NetworkManager[855]: <info>  [1765011761.1966] policy: auto-activating connection 'Wired connection 1' (801d2662-229c-3ec2-ab7b-8017b4489ad7)
Dec 06 09:02:41 np0005548915.novalocal NetworkManager[855]: <info>  [1765011761.1971] device (eth1): Activation: starting connection 'Wired connection 1' (801d2662-229c-3ec2-ab7b-8017b4489ad7)
Dec 06 09:02:41 np0005548915.novalocal NetworkManager[855]: <info>  [1765011761.1972] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 06 09:02:41 np0005548915.novalocal NetworkManager[855]: <info>  [1765011761.1976] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 06 09:02:41 np0005548915.novalocal NetworkManager[855]: <info>  [1765011761.1981] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 06 09:02:41 np0005548915.novalocal NetworkManager[855]: <info>  [1765011761.1986] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Dec 06 09:02:42 np0005548915.novalocal python3[6985]: ansible-ansible.legacy.command Invoked with _raw_params=ip -j link zuul_log_id=fa163efc-24cc-5a9f-9569-000000000128-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
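
The playbook queries links with "ip -j link", which emits JSON instead of the human-readable table, so the result can be parsed without screen-scraping. A minimal consumer:

    import json, subprocess

    links = json.loads(subprocess.check_output(["ip", "-j", "link"]))
    for link in links:
        print(link["ifname"], link["operstate"])
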
Dec 06 09:02:51 np0005548915.novalocal sudo[7063]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pjwkqnexlixbsernbsvcftrzlzspbfkk ; OS_CLOUD=vexxhost /usr/bin/python3'
Dec 06 09:02:51 np0005548915.novalocal sudo[7063]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:02:52 np0005548915.novalocal python3[7065]: ansible-ansible.legacy.stat Invoked with path=/etc/NetworkManager/system-connections/ci-private-network.nmconnection follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 06 09:02:52 np0005548915.novalocal sudo[7063]: pam_unix(sudo:session): session closed for user root
Dec 06 09:02:52 np0005548915.novalocal sudo[7136]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lijxvvhkyknokqfkphjswiildtxtyetu ; OS_CLOUD=vexxhost /usr/bin/python3'
Dec 06 09:02:52 np0005548915.novalocal sudo[7136]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:02:52 np0005548915.novalocal python3[7138]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765011771.7652702-104-183839944671597/source dest=/etc/NetworkManager/system-connections/ci-private-network.nmconnection mode=0600 owner=root group=root follow=False _original_basename=bootstrap-ci-network-nm-connection.nmconnection.j2 checksum=009003d5d114e1477e06615c5dca6e1028e76f02 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:02:52 np0005548915.novalocal sudo[7136]: pam_unix(sudo:session): session closed for user root
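
The copied ci-private-network.nmconnection is a NetworkManager keyfile profile; its contents are not logged, so the following is only a guess at a minimal DHCP ethernet profile of that shape (section and key names are real keyfile syntax, the values are assumptions), written with the owner-only permissions NetworkManager requires:

    import os, textwrap

    profile = textwrap.dedent("""\
        [connection]
        id=ci-private-network
        type=ethernet
        interface-name=eth1

        [ipv4]
        method=auto
    """)
    path = "/etc/NetworkManager/system-connections/ci-private-network.nmconnection"
    with open(path, "w") as f:
        f.write(profile)
    os.chmod(path, 0o600)   # NetworkManager ignores keyfiles readable by others
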
Dec 06 09:02:52 np0005548915.novalocal sudo[7186]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qvxjftupbtqhdgzomwtexrgzwbevbuij ; OS_CLOUD=vexxhost /usr/bin/python3'
Dec 06 09:02:52 np0005548915.novalocal sudo[7186]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:02:53 np0005548915.novalocal python3[7188]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 06 09:02:53 np0005548915.novalocal systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Dec 06 09:02:53 np0005548915.novalocal systemd[1]: Stopped Network Manager Wait Online.
Dec 06 09:02:53 np0005548915.novalocal systemd[1]: Stopping Network Manager Wait Online...
Dec 06 09:02:53 np0005548915.novalocal NetworkManager[855]: <info>  [1765011773.2541] caught SIGTERM, shutting down normally.
Dec 06 09:02:53 np0005548915.novalocal systemd[1]: Stopping Network Manager...
Dec 06 09:02:53 np0005548915.novalocal NetworkManager[855]: <info>  [1765011773.2550] dhcp4 (eth0): canceled DHCP transaction
Dec 06 09:02:53 np0005548915.novalocal NetworkManager[855]: <info>  [1765011773.2550] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Dec 06 09:02:53 np0005548915.novalocal NetworkManager[855]: <info>  [1765011773.2550] dhcp4 (eth0): state changed no lease
Dec 06 09:02:53 np0005548915.novalocal NetworkManager[855]: <info>  [1765011773.2554] manager: NetworkManager state is now CONNECTING
Dec 06 09:02:53 np0005548915.novalocal NetworkManager[855]: <info>  [1765011773.2705] dhcp4 (eth1): canceled DHCP transaction
Dec 06 09:02:53 np0005548915.novalocal NetworkManager[855]: <info>  [1765011773.2705] dhcp4 (eth1): state changed no lease
Dec 06 09:02:53 np0005548915.novalocal systemd[1]: Starting Network Manager Script Dispatcher Service...
Dec 06 09:02:53 np0005548915.novalocal NetworkManager[855]: <info>  [1765011773.2762] exiting (success)
Dec 06 09:02:53 np0005548915.novalocal systemd[1]: Started Network Manager Script Dispatcher Service.
Dec 06 09:02:53 np0005548915.novalocal systemd[1]: NetworkManager.service: Deactivated successfully.
Dec 06 09:02:53 np0005548915.novalocal systemd[1]: Stopped Network Manager.
Dec 06 09:02:53 np0005548915.novalocal systemd[1]: Starting Network Manager...
Dec 06 09:02:53 np0005548915.novalocal NetworkManager[7201]: <info>  [1765011773.3419] NetworkManager (version 1.54.1-1.el9) is starting... (after a restart, boot:eb1a7567-b576-49d7-a613-e357bf119324)
Dec 06 09:02:53 np0005548915.novalocal NetworkManager[7201]: <info>  [1765011773.3422] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Dec 06 09:02:53 np0005548915.novalocal NetworkManager[7201]: <info>  [1765011773.3474] manager[0x555cfe7a2070]: monitoring kernel firmware directory '/lib/firmware'.
Dec 06 09:02:53 np0005548915.novalocal systemd[1]: Starting Hostname Service...
Dec 06 09:02:53 np0005548915.novalocal systemd[1]: Started Hostname Service.
Dec 06 09:02:53 np0005548915.novalocal NetworkManager[7201]: <info>  [1765011773.4245] hostname: hostname: using hostnamed
Dec 06 09:02:53 np0005548915.novalocal NetworkManager[7201]: <info>  [1765011773.4246] hostname: static hostname changed from (none) to "np0005548915.novalocal"
Dec 06 09:02:53 np0005548915.novalocal NetworkManager[7201]: <info>  [1765011773.4252] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Dec 06 09:02:53 np0005548915.novalocal NetworkManager[7201]: <info>  [1765011773.4259] manager[0x555cfe7a2070]: rfkill: Wi-Fi hardware radio set enabled
Dec 06 09:02:53 np0005548915.novalocal NetworkManager[7201]: <info>  [1765011773.4259] manager[0x555cfe7a2070]: rfkill: WWAN hardware radio set enabled
Dec 06 09:02:53 np0005548915.novalocal NetworkManager[7201]: <info>  [1765011773.4291] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Dec 06 09:02:53 np0005548915.novalocal NetworkManager[7201]: <info>  [1765011773.4291] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Dec 06 09:02:53 np0005548915.novalocal NetworkManager[7201]: <info>  [1765011773.4292] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Dec 06 09:02:53 np0005548915.novalocal NetworkManager[7201]: <info>  [1765011773.4293] manager: Networking is enabled by state file
Dec 06 09:02:53 np0005548915.novalocal NetworkManager[7201]: <info>  [1765011773.4296] settings: Loaded settings plugin: keyfile (internal)
Dec 06 09:02:53 np0005548915.novalocal NetworkManager[7201]: <info>  [1765011773.4300] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Dec 06 09:02:53 np0005548915.novalocal NetworkManager[7201]: <info>  [1765011773.4325] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Dec 06 09:02:53 np0005548915.novalocal NetworkManager[7201]: <info>  [1765011773.4333] dhcp: init: Using DHCP client 'internal'
Dec 06 09:02:53 np0005548915.novalocal NetworkManager[7201]: <info>  [1765011773.4336] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Dec 06 09:02:53 np0005548915.novalocal NetworkManager[7201]: <info>  [1765011773.4340] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 06 09:02:53 np0005548915.novalocal NetworkManager[7201]: <info>  [1765011773.4345] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Dec 06 09:02:53 np0005548915.novalocal NetworkManager[7201]: <info>  [1765011773.4352] device (lo): Activation: starting connection 'lo' (40483b14-1904-462e-975f-deec93e74606)
Dec 06 09:02:53 np0005548915.novalocal NetworkManager[7201]: <info>  [1765011773.4358] device (eth0): carrier: link connected
Dec 06 09:02:53 np0005548915.novalocal NetworkManager[7201]: <info>  [1765011773.4362] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Dec 06 09:02:53 np0005548915.novalocal NetworkManager[7201]: <info>  [1765011773.4366] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Dec 06 09:02:53 np0005548915.novalocal NetworkManager[7201]: <info>  [1765011773.4366] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Dec 06 09:02:53 np0005548915.novalocal NetworkManager[7201]: <info>  [1765011773.4371] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Dec 06 09:02:53 np0005548915.novalocal NetworkManager[7201]: <info>  [1765011773.4377] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Dec 06 09:02:53 np0005548915.novalocal NetworkManager[7201]: <info>  [1765011773.4382] device (eth1): carrier: link connected
Dec 06 09:02:53 np0005548915.novalocal NetworkManager[7201]: <info>  [1765011773.4386] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Dec 06 09:02:53 np0005548915.novalocal NetworkManager[7201]: <info>  [1765011773.4390] manager: (eth1): assume: will attempt to assume matching connection 'Wired connection 1' (801d2662-229c-3ec2-ab7b-8017b4489ad7) (indicated)
Dec 06 09:02:53 np0005548915.novalocal NetworkManager[7201]: <info>  [1765011773.4390] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Dec 06 09:02:53 np0005548915.novalocal NetworkManager[7201]: <info>  [1765011773.4395] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Dec 06 09:02:53 np0005548915.novalocal NetworkManager[7201]: <info>  [1765011773.4400] device (eth1): Activation: starting connection 'Wired connection 1' (801d2662-229c-3ec2-ab7b-8017b4489ad7)
Dec 06 09:02:53 np0005548915.novalocal NetworkManager[7201]: <info>  [1765011773.4407] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Dec 06 09:02:53 np0005548915.novalocal systemd[1]: Started Network Manager.
Dec 06 09:02:53 np0005548915.novalocal NetworkManager[7201]: <info>  [1765011773.4411] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Dec 06 09:02:53 np0005548915.novalocal NetworkManager[7201]: <info>  [1765011773.4412] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Dec 06 09:02:53 np0005548915.novalocal NetworkManager[7201]: <info>  [1765011773.4414] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Dec 06 09:02:53 np0005548915.novalocal NetworkManager[7201]: <info>  [1765011773.4416] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Dec 06 09:02:53 np0005548915.novalocal NetworkManager[7201]: <info>  [1765011773.4418] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Dec 06 09:02:53 np0005548915.novalocal NetworkManager[7201]: <info>  [1765011773.4420] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Dec 06 09:02:53 np0005548915.novalocal NetworkManager[7201]: <info>  [1765011773.4422] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Dec 06 09:02:53 np0005548915.novalocal NetworkManager[7201]: <info>  [1765011773.4424] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Dec 06 09:02:53 np0005548915.novalocal NetworkManager[7201]: <info>  [1765011773.4429] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Dec 06 09:02:53 np0005548915.novalocal NetworkManager[7201]: <info>  [1765011773.4431] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Dec 06 09:02:53 np0005548915.novalocal NetworkManager[7201]: <info>  [1765011773.4438] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Dec 06 09:02:53 np0005548915.novalocal NetworkManager[7201]: <info>  [1765011773.4440] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Dec 06 09:02:53 np0005548915.novalocal NetworkManager[7201]: <info>  [1765011773.4463] dhcp4 (eth0): state changed new lease, address=38.102.83.27
Dec 06 09:02:53 np0005548915.novalocal NetworkManager[7201]: <info>  [1765011773.4468] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Dec 06 09:02:53 np0005548915.novalocal NetworkManager[7201]: <info>  [1765011773.4529] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Dec 06 09:02:53 np0005548915.novalocal NetworkManager[7201]: <info>  [1765011773.4537] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Dec 06 09:02:53 np0005548915.novalocal NetworkManager[7201]: <info>  [1765011773.4538] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Dec 06 09:02:53 np0005548915.novalocal NetworkManager[7201]: <info>  [1765011773.4544] device (lo): Activation: successful, device activated.
Dec 06 09:02:53 np0005548915.novalocal NetworkManager[7201]: <info>  [1765011773.4565] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Dec 06 09:02:53 np0005548915.novalocal NetworkManager[7201]: <info>  [1765011773.4567] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Dec 06 09:02:53 np0005548915.novalocal NetworkManager[7201]: <info>  [1765011773.4570] manager: NetworkManager state is now CONNECTED_SITE
Dec 06 09:02:53 np0005548915.novalocal NetworkManager[7201]: <info>  [1765011773.4574] device (eth0): Activation: successful, device activated.
Dec 06 09:02:53 np0005548915.novalocal NetworkManager[7201]: <info>  [1765011773.4579] manager: NetworkManager state is now CONNECTED_GLOBAL
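
At startup NetworkManager assumed the pre-existing lo, eth0, and eth1 configuration rather than reconfiguring it; eth0 renewed its DHCP lease (38.102.83.27), was made the IPv4 default for routing and DNS, and the manager reached CONNECTED_GLOBAL. A minimal sketch for inspecting this state on the host, assuming nmcli is available (it ships with NetworkManager):

    # Show per-device state and the active connection behind each device.
    nmcli -f DEVICE,STATE,CONNECTION device status
    # Show the activation state and IPv4 address of the assumed profile.
    nmcli -f GENERAL.STATE,IP4.ADDRESS connection show "System eth0"
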
Dec 06 09:02:53 np0005548915.novalocal systemd[1]: Starting Network Manager Wait Online...
Dec 06 09:02:53 np0005548915.novalocal sudo[7186]: pam_unix(sudo:session): session closed for user root
Dec 06 09:02:53 np0005548915.novalocal python3[7272]: ansible-ansible.legacy.command Invoked with _raw_params=ip route zuul_log_id=fa163efc-24cc-5a9f-9569-0000000000bd-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 09:03:03 np0005548915.novalocal systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Dec 06 09:03:23 np0005548915.novalocal systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Dec 06 09:03:38 np0005548915.novalocal NetworkManager[7201]: <info>  [1765011818.4583] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Dec 06 09:03:38 np0005548915.novalocal systemd[1]: Starting Network Manager Script Dispatcher Service...
Dec 06 09:03:38 np0005548915.novalocal systemd[1]: Started Network Manager Script Dispatcher Service.
Dec 06 09:03:38 np0005548915.novalocal NetworkManager[7201]: <info>  [1765011818.5026] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Dec 06 09:03:38 np0005548915.novalocal NetworkManager[7201]: <info>  [1765011818.5031] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Dec 06 09:03:38 np0005548915.novalocal NetworkManager[7201]: <info>  [1765011818.5043] device (eth1): Activation: successful, device activated.
Dec 06 09:03:38 np0005548915.novalocal NetworkManager[7201]: <info>  [1765011818.5055] manager: startup complete
Dec 06 09:03:38 np0005548915.novalocal NetworkManager[7201]: <info>  [1765011818.5059] device (eth1): state change: activated -> failed (reason 'ip-config-unavailable', managed-type: 'full')
Dec 06 09:03:38 np0005548915.novalocal NetworkManager[7201]: <warn>  [1765011818.5070] device (eth1): Activation: failed for connection 'Wired connection 1'
Dec 06 09:03:38 np0005548915.novalocal NetworkManager[7201]: <info>  [1765011818.5082] device (eth1): state change: failed -> disconnected (reason 'none', managed-type: 'full')
Dec 06 09:03:38 np0005548915.novalocal systemd[1]: Finished Network Manager Wait Online.
Dec 06 09:03:38 np0005548915.novalocal NetworkManager[7201]: <info>  [1765011818.5185] dhcp4 (eth1): canceled DHCP transaction
Dec 06 09:03:38 np0005548915.novalocal NetworkManager[7201]: <info>  [1765011818.5187] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Dec 06 09:03:38 np0005548915.novalocal NetworkManager[7201]: <info>  [1765011818.5189] dhcp4 (eth1): state changed no lease
Dec 06 09:03:38 np0005548915.novalocal NetworkManager[7201]: <info>  [1765011818.5217] policy: auto-activating connection 'ci-private-network' (6151fa65-6cef-549f-91ba-9f68f8a2cb73)
Dec 06 09:03:38 np0005548915.novalocal NetworkManager[7201]: <info>  [1765011818.5225] device (eth1): Activation: starting connection 'ci-private-network' (6151fa65-6cef-549f-91ba-9f68f8a2cb73)
Dec 06 09:03:38 np0005548915.novalocal NetworkManager[7201]: <info>  [1765011818.5228] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 06 09:03:38 np0005548915.novalocal NetworkManager[7201]: <info>  [1765011818.5233] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 06 09:03:38 np0005548915.novalocal NetworkManager[7201]: <info>  [1765011818.5245] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 06 09:03:38 np0005548915.novalocal NetworkManager[7201]: <info>  [1765011818.5262] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 06 09:03:38 np0005548915.novalocal NetworkManager[7201]: <info>  [1765011818.5308] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 06 09:03:38 np0005548915.novalocal NetworkManager[7201]: <info>  [1765011818.5312] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 06 09:03:38 np0005548915.novalocal NetworkManager[7201]: <info>  [1765011818.5323] device (eth1): Activation: successful, device activated.
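
The assumed profile 'Wired connection 1' on eth1 never obtained a DHCP lease: the 45-second transaction begun at 09:02:53 expired, the device failed with reason 'ip-config-unavailable', and the policy engine fell back to auto-activating 'ci-private-network', which activated immediately with no DHCP wait. A sketch of how such a static fallback profile is typically defined; the real profile's settings are not shown in this log, so the address and priority below are placeholders:

    # Hypothetical reconstruction of a static fallback profile for eth1.
    nmcli connection modify "ci-private-network" \
        connection.interface-name eth1 \
        ipv4.method manual ipv4.addresses 192.0.2.10/24 \
        connection.autoconnect yes connection.autoconnect-priority 10
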
Dec 06 09:03:48 np0005548915.novalocal systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Dec 06 09:03:51 np0005548915.novalocal systemd[4299]: Starting Mark boot as successful...
Dec 06 09:03:51 np0005548915.novalocal systemd[4299]: Finished Mark boot as successful.
Dec 06 09:03:53 np0005548915.novalocal sshd-session[4308]: Received disconnect from 38.102.83.114 port 52434:11: disconnected by user
Dec 06 09:03:53 np0005548915.novalocal sshd-session[4308]: Disconnected from user zuul 38.102.83.114 port 52434
Dec 06 09:03:53 np0005548915.novalocal sshd-session[4295]: pam_unix(sshd:session): session closed for user zuul
Dec 06 09:03:53 np0005548915.novalocal systemd-logind[795]: Session 1 logged out. Waiting for processes to exit.
Dec 06 09:04:51 np0005548915.novalocal sshd-session[7301]: Accepted publickey for zuul from 38.102.83.114 port 35308 ssh2: RSA SHA256:spwPcL19sPHC+yJA+ECEA4UNmpshOiR8KfgtTbViJeA
Dec 06 09:04:51 np0005548915.novalocal systemd-logind[795]: New session 3 of user zuul.
Dec 06 09:04:51 np0005548915.novalocal systemd[1]: Started Session 3 of User zuul.
Dec 06 09:04:51 np0005548915.novalocal sshd-session[7301]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 06 09:04:51 np0005548915.novalocal sudo[7380]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fyljyrlcxhbhwyudkesubsrpxxbvtxdn ; OS_CLOUD=vexxhost /usr/bin/python3'
Dec 06 09:04:51 np0005548915.novalocal sudo[7380]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:04:51 np0005548915.novalocal python3[7382]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/env/networking-info.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 06 09:04:51 np0005548915.novalocal sudo[7380]: pam_unix(sudo:session): session closed for user root
Dec 06 09:04:51 np0005548915.novalocal sudo[7453]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sscgauwyllfbntvlycoleogsopfnebsc ; OS_CLOUD=vexxhost /usr/bin/python3'
Dec 06 09:04:51 np0005548915.novalocal sudo[7453]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:04:51 np0005548915.novalocal python3[7455]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/env/networking-info.yml owner=root group=root mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765011891.135714-373-108233033013812/source _original_basename=tmptpftk2ug follow=False checksum=81d87914000d1f03e4ba3a0a6e4eda468c65f433 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:04:51 np0005548915.novalocal sudo[7453]: pam_unix(sudo:session): session closed for user root
Dec 06 09:04:55 np0005548915.novalocal sshd-session[7304]: Connection closed by 38.102.83.114 port 35308
Dec 06 09:04:55 np0005548915.novalocal sshd-session[7301]: pam_unix(sshd:session): session closed for user zuul
Dec 06 09:04:55 np0005548915.novalocal systemd[1]: session-3.scope: Deactivated successfully.
Dec 06 09:04:55 np0005548915.novalocal systemd-logind[795]: Session 3 logged out. Waiting for processes to exit.
Dec 06 09:04:55 np0005548915.novalocal systemd-logind[795]: Removed session 3.
Dec 06 09:06:51 np0005548915.novalocal systemd[4299]: Created slice User Background Tasks Slice.
Dec 06 09:06:51 np0005548915.novalocal systemd[4299]: Starting Cleanup of User's Temporary Files and Directories...
Dec 06 09:06:51 np0005548915.novalocal systemd[4299]: Finished Cleanup of User's Temporary Files and Directories.
Dec 06 09:10:24 np0005548915.novalocal sshd-session[7486]: Accepted publickey for zuul from 38.102.83.114 port 47166 ssh2: RSA SHA256:spwPcL19sPHC+yJA+ECEA4UNmpshOiR8KfgtTbViJeA
Dec 06 09:10:24 np0005548915.novalocal systemd-logind[795]: New session 4 of user zuul.
Dec 06 09:10:24 np0005548915.novalocal systemd[1]: Started Session 4 of User zuul.
Dec 06 09:10:24 np0005548915.novalocal sshd-session[7486]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 06 09:10:24 np0005548915.novalocal sudo[7513]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wvhqnagaqocptnkesmaumfdaxcaoczev ; /usr/bin/python3'
Dec 06 09:10:24 np0005548915.novalocal sudo[7513]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:10:24 np0005548915.novalocal python3[7515]: ansible-ansible.legacy.command Invoked with _raw_params=lsblk -nd -o MAJ:MIN /dev/vda
                                                       _uses_shell=True zuul_log_id=fa163efc-24cc-6aeb-b52e-000000001cd4-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 09:10:24 np0005548915.novalocal sudo[7513]: pam_unix(sudo:session): session closed for user root
Dec 06 09:10:25 np0005548915.novalocal sudo[7542]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-alzodurcoylfxwkmwnibgfhexorlarbx ; /usr/bin/python3'
Dec 06 09:10:25 np0005548915.novalocal sudo[7542]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:10:25 np0005548915.novalocal python3[7544]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/init.scope state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:10:25 np0005548915.novalocal sudo[7542]: pam_unix(sudo:session): session closed for user root
Dec 06 09:10:25 np0005548915.novalocal sudo[7568]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nbbpycktrmhrpoxtyerssjejkhoxtjmw ; /usr/bin/python3'
Dec 06 09:10:25 np0005548915.novalocal sudo[7568]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:10:25 np0005548915.novalocal python3[7570]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/machine.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:10:25 np0005548915.novalocal sudo[7568]: pam_unix(sudo:session): session closed for user root
Dec 06 09:10:25 np0005548915.novalocal sudo[7594]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nwzoruwygchberaqcnmchpdkkwmuanrm ; /usr/bin/python3'
Dec 06 09:10:25 np0005548915.novalocal sudo[7594]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:10:25 np0005548915.novalocal python3[7596]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/system.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:10:25 np0005548915.novalocal sudo[7594]: pam_unix(sudo:session): session closed for user root
Dec 06 09:10:25 np0005548915.novalocal sudo[7620]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ayllyhnxuuonxnhzjhwzbayfemnpxhlc ; /usr/bin/python3'
Dec 06 09:10:25 np0005548915.novalocal sudo[7620]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:10:25 np0005548915.novalocal python3[7622]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/user.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:10:25 np0005548915.novalocal sudo[7620]: pam_unix(sudo:session): session closed for user root
Dec 06 09:10:26 np0005548915.novalocal sudo[7646]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iesgtgqpoyjvfvvxodwfuuicxjmprgaj ; /usr/bin/python3'
Dec 06 09:10:26 np0005548915.novalocal sudo[7646]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:10:26 np0005548915.novalocal python3[7648]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system.conf.d state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:10:26 np0005548915.novalocal sudo[7646]: pam_unix(sudo:session): session closed for user root
Dec 06 09:10:27 np0005548915.novalocal sudo[7724]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-unipudtihaxkqarozpptdgrfmsaqdvfr ; /usr/bin/python3'
Dec 06 09:10:27 np0005548915.novalocal sudo[7724]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:10:27 np0005548915.novalocal python3[7726]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system.conf.d/override.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 06 09:10:27 np0005548915.novalocal sudo[7724]: pam_unix(sudo:session): session closed for user root
Dec 06 09:10:27 np0005548915.novalocal sudo[7797]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zjsuzxbzuxkwdebtomxgeedffbtixmcy ; /usr/bin/python3'
Dec 06 09:10:27 np0005548915.novalocal sudo[7797]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:10:27 np0005548915.novalocal python3[7799]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system.conf.d/override.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765012227.1483302-516-255246545221629/source _original_basename=tmp5rewaca3 follow=False checksum=a05098bd3d2321238ea1169d0e6f135b35b392d4 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:10:27 np0005548915.novalocal sudo[7797]: pam_unix(sudo:session): session closed for user root
Dec 06 09:10:28 np0005548915.novalocal sudo[7847]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dybmrrugneigkxnwzjrsxepvlqsgpxeh ; /usr/bin/python3'
Dec 06 09:10:28 np0005548915.novalocal sudo[7847]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:10:28 np0005548915.novalocal python3[7849]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 06 09:10:29 np0005548915.novalocal systemd[1]: Reloading.
Dec 06 09:10:29 np0005548915.novalocal systemd-rc-local-generator[7870]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 09:10:29 np0005548915.novalocal sudo[7847]: pam_unix(sudo:session): session closed for user root
Dec 06 09:10:30 np0005548915.novalocal sudo[7903]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-howgdrmjnxqaiuconjdaoabbpdwpctyn ; /usr/bin/python3'
Dec 06 09:10:30 np0005548915.novalocal sudo[7903]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:10:30 np0005548915.novalocal python3[7905]: ansible-ansible.builtin.wait_for Invoked with path=/sys/fs/cgroup/system.slice/io.max state=present timeout=30 host=127.0.0.1 connect_timeout=5 delay=0 active_connection_states=['ESTABLISHED', 'FIN_WAIT1', 'FIN_WAIT2', 'SYN_RECV', 'SYN_SENT', 'TIME_WAIT'] sleep=1 port=None search_regex=None exclude_hosts=None msg=None
Dec 06 09:10:30 np0005548915.novalocal sudo[7903]: pam_unix(sudo:session): session closed for user root
Dec 06 09:10:31 np0005548915.novalocal sudo[7929]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oknfggxwhvchftaanymyhethdxniboiw ; /usr/bin/python3'
Dec 06 09:10:31 np0005548915.novalocal sudo[7929]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:10:31 np0005548915.novalocal python3[7931]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/init.scope/io.max
                                                       _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 09:10:31 np0005548915.novalocal sudo[7929]: pam_unix(sudo:session): session closed for user root
Dec 06 09:10:31 np0005548915.novalocal sudo[7957]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-euzrhubpqpzdvtkccdbdlgwgztefknul ; /usr/bin/python3'
Dec 06 09:10:31 np0005548915.novalocal sudo[7957]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:10:31 np0005548915.novalocal python3[7959]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/machine.slice/io.max
                                                       _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 09:10:31 np0005548915.novalocal sudo[7957]: pam_unix(sudo:session): session closed for user root
Dec 06 09:10:31 np0005548915.novalocal sudo[7985]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qdbusmrdihiqfjzwglknlacaxioffxuw ; /usr/bin/python3'
Dec 06 09:10:31 np0005548915.novalocal sudo[7985]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:10:32 np0005548915.novalocal python3[7987]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/system.slice/io.max
                                                       _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 09:10:32 np0005548915.novalocal sudo[7985]: pam_unix(sudo:session): session closed for user root
Dec 06 09:10:32 np0005548915.novalocal sudo[8013]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ssesjbskjrqbdhqeuebxdmojxvunrlgr ; /usr/bin/python3'
Dec 06 09:10:32 np0005548915.novalocal sudo[8013]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:10:32 np0005548915.novalocal python3[8015]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/user.slice/io.max
                                                       _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 09:10:32 np0005548915.novalocal sudo[8013]: pam_unix(sudo:session): session closed for user root
Dec 06 09:10:33 np0005548915.novalocal python3[8042]: ansible-ansible.legacy.command Invoked with _raw_params=echo "init";    cat /sys/fs/cgroup/init.scope/io.max; echo "machine"; cat /sys/fs/cgroup/machine.slice/io.max; echo "system";  cat /sys/fs/cgroup/system.slice/io.max; echo "user";    cat /sys/fs/cgroup/user.slice/io.max;
                                                       _uses_shell=True zuul_log_id=fa163efc-24cc-6aeb-b52e-000000001cdb-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
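
The block above applies cgroup-v2 I/O throttling: an earlier task read the root disk's device number with `lsblk -nd -o MAJ:MIN /dev/vda` (252:0 on this node), a wait_for confirmed that io.max exists under system.slice (i.e. the io controller is delegated to the top-level slices), and identical riops/wiops/rbps/wbps limits were then written into each slice before being read back for verification. A condensed sketch of the same steps, using the values from the log:

    # Throttle every top-level slice to 18k IOPS / 250 MiB/s on /dev/vda.
    devnum=$(lsblk -nd -o MAJ:MIN /dev/vda)    # e.g. 252:0
    for grp in init.scope machine.slice system.slice user.slice; do
        echo "$devnum riops=18000 wiops=18000 rbps=262144000 wbps=262144000" \
            > "/sys/fs/cgroup/$grp/io.max"
        cat "/sys/fs/cgroup/$grp/io.max"       # read back to verify
    done
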
Dec 06 09:10:33 np0005548915.novalocal python3[8072]: ansible-ansible.builtin.stat Invoked with path=/sys/fs/cgroup/kubepods.slice/io.max follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 06 09:10:36 np0005548915.novalocal sshd-session[7489]: Connection closed by 38.102.83.114 port 47166
Dec 06 09:10:36 np0005548915.novalocal sshd-session[7486]: pam_unix(sshd:session): session closed for user zuul
Dec 06 09:10:36 np0005548915.novalocal systemd[1]: session-4.scope: Deactivated successfully.
Dec 06 09:10:36 np0005548915.novalocal systemd[1]: session-4.scope: Consumed 3.749s CPU time.
Dec 06 09:10:36 np0005548915.novalocal systemd-logind[795]: Session 4 logged out. Waiting for processes to exit.
Dec 06 09:10:36 np0005548915.novalocal systemd-logind[795]: Removed session 4.
Dec 06 09:10:38 np0005548915.novalocal sshd-session[8077]: Accepted publickey for zuul from 38.102.83.114 port 57762 ssh2: RSA SHA256:spwPcL19sPHC+yJA+ECEA4UNmpshOiR8KfgtTbViJeA
Dec 06 09:10:38 np0005548915.novalocal systemd-logind[795]: New session 5 of user zuul.
Dec 06 09:10:38 np0005548915.novalocal systemd[1]: Started Session 5 of User zuul.
Dec 06 09:10:38 np0005548915.novalocal sshd-session[8077]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 06 09:10:38 np0005548915.novalocal sudo[8104]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ppnvycqvohbwapzsuzfxnlmpwdlopqwx ; /usr/bin/python3'
Dec 06 09:10:38 np0005548915.novalocal sudo[8104]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:10:38 np0005548915.novalocal python3[8106]: ansible-ansible.legacy.dnf Invoked with name=['podman', 'buildah'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Dec 06 09:10:54 np0005548915.novalocal kernel: SELinux:  Converting 386 SID table entries...
Dec 06 09:10:54 np0005548915.novalocal kernel: SELinux:  policy capability network_peer_controls=1
Dec 06 09:10:54 np0005548915.novalocal kernel: SELinux:  policy capability open_perms=1
Dec 06 09:10:54 np0005548915.novalocal kernel: SELinux:  policy capability extended_socket_class=1
Dec 06 09:10:54 np0005548915.novalocal kernel: SELinux:  policy capability always_check_network=0
Dec 06 09:10:54 np0005548915.novalocal kernel: SELinux:  policy capability cgroup_seclabel=1
Dec 06 09:10:54 np0005548915.novalocal kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec 06 09:10:54 np0005548915.novalocal kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec 06 09:11:03 np0005548915.novalocal kernel: SELinux:  Converting 386 SID table entries...
Dec 06 09:11:03 np0005548915.novalocal kernel: SELinux:  policy capability network_peer_controls=1
Dec 06 09:11:03 np0005548915.novalocal kernel: SELinux:  policy capability open_perms=1
Dec 06 09:11:03 np0005548915.novalocal kernel: SELinux:  policy capability extended_socket_class=1
Dec 06 09:11:03 np0005548915.novalocal kernel: SELinux:  policy capability always_check_network=0
Dec 06 09:11:03 np0005548915.novalocal kernel: SELinux:  policy capability cgroup_seclabel=1
Dec 06 09:11:03 np0005548915.novalocal kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec 06 09:11:03 np0005548915.novalocal kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec 06 09:11:12 np0005548915.novalocal kernel: SELinux:  Converting 386 SID table entries...
Dec 06 09:11:12 np0005548915.novalocal kernel: SELinux:  policy capability network_peer_controls=1
Dec 06 09:11:12 np0005548915.novalocal kernel: SELinux:  policy capability open_perms=1
Dec 06 09:11:12 np0005548915.novalocal kernel: SELinux:  policy capability extended_socket_class=1
Dec 06 09:11:12 np0005548915.novalocal kernel: SELinux:  policy capability always_check_network=0
Dec 06 09:11:12 np0005548915.novalocal kernel: SELinux:  policy capability cgroup_seclabel=1
Dec 06 09:11:12 np0005548915.novalocal kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec 06 09:11:12 np0005548915.novalocal kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec 06 09:11:13 np0005548915.novalocal setsebool[8172]: The virt_use_nfs policy boolean was changed to 1 by root
Dec 06 09:11:13 np0005548915.novalocal setsebool[8172]: The virt_sandbox_use_all_caps policy boolean was changed to 1 by root
Dec 06 09:11:24 np0005548915.novalocal kernel: SELinux:  Converting 389 SID table entries...
Dec 06 09:11:24 np0005548915.novalocal kernel: SELinux:  policy capability network_peer_controls=1
Dec 06 09:11:24 np0005548915.novalocal kernel: SELinux:  policy capability open_perms=1
Dec 06 09:11:24 np0005548915.novalocal kernel: SELinux:  policy capability extended_socket_class=1
Dec 06 09:11:24 np0005548915.novalocal kernel: SELinux:  policy capability always_check_network=0
Dec 06 09:11:24 np0005548915.novalocal kernel: SELinux:  policy capability cgroup_seclabel=1
Dec 06 09:11:24 np0005548915.novalocal kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec 06 09:11:24 np0005548915.novalocal kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
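
The repeated "Converting N SID table entries" blocks are SELinux policy reloads triggered by the dnf transaction: installing podman and buildah pulls in container-selinux, whose scriptlets load updated policy modules and flip the virt booleans logged by setsebool above. Equivalent manual commands, shown for illustration only:

    # Persistently enable the booleans container-selinux toggled, then verify.
    setsebool -P virt_use_nfs on
    setsebool -P virt_sandbox_use_all_caps on
    getsebool virt_use_nfs virt_sandbox_use_all_caps
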
Dec 06 09:11:41 np0005548915.novalocal dbus-broker-launch[771]: avc:  op=load_policy lsm=selinux seqno=6 res=1
Dec 06 09:11:41 np0005548915.novalocal systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec 06 09:11:41 np0005548915.novalocal systemd[1]: Starting man-db-cache-update.service...
Dec 06 09:11:41 np0005548915.novalocal systemd[1]: Reloading.
Dec 06 09:11:42 np0005548915.novalocal systemd-rc-local-generator[8921]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 09:11:42 np0005548915.novalocal systemd[1]: Queuing reload/restart jobs for marked units…
Dec 06 09:11:43 np0005548915.novalocal sudo[8104]: pam_unix(sudo:session): session closed for user root
Dec 06 09:11:45 np0005548915.novalocal irqbalance[788]: Cannot change IRQ 27 affinity: Operation not permitted
Dec 06 09:11:45 np0005548915.novalocal irqbalance[788]: IRQ 27 affinity is now unmanaged
Dec 06 09:11:47 np0005548915.novalocal python3[13464]: ansible-ansible.legacy.command Invoked with _raw_params=echo "openstack-k8s-operators+cirobot"
                                                        _uses_shell=True zuul_log_id=fa163efc-24cc-d561-0a5b-00000000000c-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 09:11:48 np0005548915.novalocal kernel: evm: overlay not supported
Dec 06 09:11:48 np0005548915.novalocal systemd[4299]: Starting D-Bus User Message Bus...
Dec 06 09:11:48 np0005548915.novalocal dbus-broker-launch[14004]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +31: Eavesdropping is deprecated and ignored
Dec 06 09:11:48 np0005548915.novalocal dbus-broker-launch[14004]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +33: Eavesdropping is deprecated and ignored
Dec 06 09:11:48 np0005548915.novalocal systemd[4299]: Started D-Bus User Message Bus.
Dec 06 09:11:48 np0005548915.novalocal dbus-broker-lau[14004]: Ready
Dec 06 09:11:48 np0005548915.novalocal systemd[4299]: selinux: avc:  op=load_policy lsm=selinux seqno=6 res=1
Dec 06 09:11:48 np0005548915.novalocal systemd[4299]: Created slice Slice /user.
Dec 06 09:11:48 np0005548915.novalocal systemd[4299]: podman-13950.scope: unit configures an IP firewall, but not running as root.
Dec 06 09:11:48 np0005548915.novalocal systemd[4299]: (This warning is only shown for the first unit using IP firewalling.)
Dec 06 09:11:48 np0005548915.novalocal systemd[4299]: Started podman-13950.scope.
Dec 06 09:11:48 np0005548915.novalocal systemd[4299]: Started podman-pause-012e1fba.scope.
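
systemd[4299] here is the zuul user's own service manager, not PID 1: the first rootless podman invocation creates a per-user pause scope to pin the user namespace, and the "configures an IP firewall, but not running as root" warning is expected for user units, which cannot program the kernel firewall. A sketch for listing those scopes, assuming it is run as the same unprivileged user:

    # Show the transient scopes rootless podman created in the user manager.
    systemctl --user list-units --type=scope 'podman*'
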
Dec 06 09:11:49 np0005548915.novalocal sshd-session[8080]: Connection closed by 38.102.83.114 port 57762
Dec 06 09:11:49 np0005548915.novalocal sshd-session[8077]: pam_unix(sshd:session): session closed for user zuul
Dec 06 09:11:49 np0005548915.novalocal systemd[1]: session-5.scope: Deactivated successfully.
Dec 06 09:11:49 np0005548915.novalocal systemd[1]: session-5.scope: Consumed 58.509s CPU time.
Dec 06 09:11:49 np0005548915.novalocal systemd-logind[795]: Session 5 logged out. Waiting for processes to exit.
Dec 06 09:11:49 np0005548915.novalocal systemd-logind[795]: Removed session 5.
Dec 06 09:12:13 np0005548915.novalocal sshd-session[24832]: Connection closed by 38.102.83.98 port 42288 [preauth]
Dec 06 09:12:13 np0005548915.novalocal sshd-session[24834]: Connection closed by 38.102.83.98 port 42298 [preauth]
Dec 06 09:12:13 np0005548915.novalocal sshd-session[24838]: Unable to negotiate with 38.102.83.98 port 42314: no matching host key type found. Their offer: ssh-ed25519 [preauth]
Dec 06 09:12:13 np0005548915.novalocal sshd-session[24839]: Unable to negotiate with 38.102.83.98 port 42328: no matching host key type found. Their offer: sk-ecdsa-sha2-nistp256@openssh.com [preauth]
Dec 06 09:12:13 np0005548915.novalocal sshd-session[24836]: Unable to negotiate with 38.102.83.98 port 42342: no matching host key type found. Their offer: sk-ssh-ed25519@openssh.com [preauth]
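
The five [preauth] entries from 38.102.83.98 follow the ssh-keyscan pattern: each connection offers a single host-key algorithm and disconnects after the banner exchange, and sshd reports "no matching host key type found" only for the types it does not serve (ed25519 and the FIDO sk-* types here). Presumably this is the CI system collecting host keys; a sketch of the equivalent probe, with a placeholder target address:

    # One TCP connection per requested key type, mirroring the log above.
    ssh-keyscan -t rsa,ecdsa,ed25519 192.0.2.1    # placeholder host
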
Dec 06 09:12:18 np0005548915.novalocal sshd-session[27023]: Accepted publickey for zuul from 38.102.83.114 port 47244 ssh2: RSA SHA256:spwPcL19sPHC+yJA+ECEA4UNmpshOiR8KfgtTbViJeA
Dec 06 09:12:18 np0005548915.novalocal systemd-logind[795]: New session 6 of user zuul.
Dec 06 09:12:18 np0005548915.novalocal systemd[1]: Started Session 6 of User zuul.
Dec 06 09:12:18 np0005548915.novalocal sshd-session[27023]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 06 09:12:18 np0005548915.novalocal python3[27126]: ansible-ansible.posix.authorized_key Invoked with user=zuul key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBK/b/hDus+zgErbxpiAu4axJ55LMjNixMhoE4DoEU6Wq/xn30MdVWwMPMhgQamY6n3JqihnzwOz1OzKhBTCdzls= zuul@np0005548914.novalocal
                                                        manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 06 09:12:18 np0005548915.novalocal sudo[27350]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iykivlsqrvmzicwpyjawnepgxughevfs ; /usr/bin/python3'
Dec 06 09:12:18 np0005548915.novalocal sudo[27350]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:12:19 np0005548915.novalocal python3[27363]: ansible-ansible.posix.authorized_key Invoked with user=root key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBK/b/hDus+zgErbxpiAu4axJ55LMjNixMhoE4DoEU6Wq/xn30MdVWwMPMhgQamY6n3JqihnzwOz1OzKhBTCdzls= zuul@np0005548914.novalocal
                                                        manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 06 09:12:19 np0005548915.novalocal sudo[27350]: pam_unix(sudo:session): session closed for user root
Dec 06 09:12:19 np0005548915.novalocal sudo[27803]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yrxryzyyxoulsrrlbvpvrqhybpeirmwz ; /usr/bin/python3'
Dec 06 09:12:19 np0005548915.novalocal sudo[27803]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:12:20 np0005548915.novalocal python3[27813]: ansible-ansible.builtin.user Invoked with name=cloud-admin shell=/bin/bash state=present non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on np0005548915.novalocal update_password=always uid=None group=None groups=None comment=None home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None
Dec 06 09:12:20 np0005548915.novalocal useradd[27883]: new group: name=cloud-admin, GID=1002
Dec 06 09:12:20 np0005548915.novalocal useradd[27883]: new user: name=cloud-admin, UID=1002, GID=1002, home=/home/cloud-admin, shell=/bin/bash, from=none
Dec 06 09:12:20 np0005548915.novalocal sudo[27803]: pam_unix(sudo:session): session closed for user root
Dec 06 09:12:20 np0005548915.novalocal sudo[28053]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rrqutyfuuqgdjemufdnpbtvljgyizlia ; /usr/bin/python3'
Dec 06 09:12:20 np0005548915.novalocal sudo[28053]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:12:20 np0005548915.novalocal python3[28062]: ansible-ansible.posix.authorized_key Invoked with user=cloud-admin key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBK/b/hDus+zgErbxpiAu4axJ55LMjNixMhoE4DoEU6Wq/xn30MdVWwMPMhgQamY6n3JqihnzwOz1OzKhBTCdzls= zuul@np0005548914.novalocal
                                                        manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 06 09:12:20 np0005548915.novalocal sudo[28053]: pam_unix(sudo:session): session closed for user root
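
Three ansible.posix.authorized_key tasks install the same controller public key for zuul, root, and the freshly created cloud-admin account (UID/GID 1002). Each run is equivalent to an ad-hoc invocation along these lines (inventory name hypothetical, key truncated for brevity):

    # Push the controller's ECDSA key into one user's authorized_keys.
    ansible compute-0 -b -m ansible.posix.authorized_key \
        -a "user=cloud-admin state=present key='ecdsa-sha2-nistp256 AAAA... zuul@np0005548914.novalocal'"
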
Dec 06 09:12:20 np0005548915.novalocal sudo[28383]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uucsvahybydxbdyxfnpdqxtnahegykzi ; /usr/bin/python3'
Dec 06 09:12:20 np0005548915.novalocal sudo[28383]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:12:21 np0005548915.novalocal python3[28392]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/cloud-admin follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 06 09:12:21 np0005548915.novalocal sudo[28383]: pam_unix(sudo:session): session closed for user root
Dec 06 09:12:21 np0005548915.novalocal sudo[28703]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tdcqavxdkiwcqgxivltqaceivlepapcz ; /usr/bin/python3'
Dec 06 09:12:21 np0005548915.novalocal sudo[28703]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:12:21 np0005548915.novalocal python3[28713]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/cloud-admin mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1765012340.8112967-151-75733541540141/source _original_basename=tmpx9ak07cc follow=False checksum=e7614e5ad3ab06eaae55b8efaa2ed81b63ea5634 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:12:21 np0005548915.novalocal sudo[28703]: pam_unix(sudo:session): session closed for user root
Dec 06 09:12:22 np0005548915.novalocal sudo[29059]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-quzviejmvkniimmubgjgttskovudeovv ; /usr/bin/python3'
Dec 06 09:12:22 np0005548915.novalocal sudo[29059]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:12:22 np0005548915.novalocal python3[29061]: ansible-ansible.builtin.hostname Invoked with name=compute-0 use=systemd
Dec 06 09:12:22 np0005548915.novalocal systemd[1]: Starting Hostname Service...
Dec 06 09:12:22 np0005548915.novalocal systemd[1]: Started Hostname Service.
Dec 06 09:12:22 np0005548915.novalocal systemd-hostnamed[29068]: Changed pretty hostname to 'compute-0'
Dec 06 09:12:22 compute-0 systemd-hostnamed[29068]: Hostname set to <compute-0> (static)
Dec 06 09:12:22 compute-0 NetworkManager[7201]: <info>  [1765012342.8039] hostname: static hostname changed from "np0005548915.novalocal" to "compute-0"
Dec 06 09:12:22 compute-0 systemd[1]: Starting Network Manager Script Dispatcher Service...
Dec 06 09:12:22 compute-0 systemd[1]: Started Network Manager Script Dispatcher Service.
Dec 06 09:12:22 compute-0 sudo[29059]: pam_unix(sudo:session): session closed for user root
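
ansible.builtin.hostname with use=systemd sets the hostname through systemd-hostnamed over D-Bus, which is why the service is started on demand, the journal's host field flips from np0005548915.novalocal to compute-0 mid-stream, and NetworkManager observes the change and runs its dispatcher scripts. The manual equivalent is:

    # Same D-Bus path the module's use=systemd backend takes.
    hostnamectl set-hostname compute-0
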
Dec 06 09:12:23 compute-0 sshd-session[27065]: Connection closed by 38.102.83.114 port 47244
Dec 06 09:12:23 compute-0 sshd-session[27023]: pam_unix(sshd:session): session closed for user zuul
Dec 06 09:12:23 compute-0 systemd[1]: session-6.scope: Deactivated successfully.
Dec 06 09:12:23 compute-0 systemd[1]: session-6.scope: Consumed 2.143s CPU time.
Dec 06 09:12:23 compute-0 systemd-logind[795]: Session 6 logged out. Waiting for processes to exit.
Dec 06 09:12:23 compute-0 systemd-logind[795]: Removed session 6.
Dec 06 09:12:25 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec 06 09:12:25 compute-0 systemd[1]: Finished man-db-cache-update.service.
Dec 06 09:12:25 compute-0 systemd[1]: man-db-cache-update.service: Consumed 51.312s CPU time.
Dec 06 09:12:25 compute-0 systemd[1]: run-rb1d08a7c17d54d82a6dd6b5e414b4676.service: Deactivated successfully.
Dec 06 09:12:32 compute-0 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Dec 06 09:12:52 compute-0 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Dec 06 09:15:51 compute-0 systemd[1]: Starting Cleanup of Temporary Directories...
Dec 06 09:15:51 compute-0 systemd[1]: systemd-tmpfiles-clean.service: Deactivated successfully.
Dec 06 09:15:51 compute-0 systemd[1]: Finished Cleanup of Temporary Directories.
Dec 06 09:15:51 compute-0 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dclean.service.mount: Deactivated successfully.
Dec 06 09:16:07 compute-0 sshd-session[29903]: Accepted publickey for zuul from 38.102.83.98 port 39394 ssh2: RSA SHA256:spwPcL19sPHC+yJA+ECEA4UNmpshOiR8KfgtTbViJeA
Dec 06 09:16:07 compute-0 systemd-logind[795]: New session 7 of user zuul.
Dec 06 09:16:07 compute-0 systemd[1]: Started Session 7 of User zuul.
Dec 06 09:16:07 compute-0 sshd-session[29903]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 06 09:16:07 compute-0 python3[29979]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 06 09:16:10 compute-0 sudo[30093]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gfcncozsflkbgydkufciqcpbnfhzfmla ; /usr/bin/python3'
Dec 06 09:16:10 compute-0 sudo[30093]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:16:10 compute-0 python3[30095]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 06 09:16:10 compute-0 sudo[30093]: pam_unix(sudo:session): session closed for user root
Dec 06 09:16:10 compute-0 sudo[30166]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gcmrmejghrkmxhejhvbyfzsvpnkejrad ; /usr/bin/python3'
Dec 06 09:16:10 compute-0 sudo[30166]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:16:10 compute-0 python3[30168]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1765012570.1579626-33924-160185641293870/source mode=0755 _original_basename=delorean.repo follow=False checksum=39c885eb875fd03e010d1b0454241c26b121dfb2 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:16:10 compute-0 sudo[30166]: pam_unix(sudo:session): session closed for user root
Dec 06 09:16:11 compute-0 sudo[30192]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qtqukmfzorpiwnfkfaugqdjqdhrellrb ; /usr/bin/python3'
Dec 06 09:16:11 compute-0 sudo[30192]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:16:11 compute-0 python3[30194]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean-antelope-testing.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 06 09:16:11 compute-0 sudo[30192]: pam_unix(sudo:session): session closed for user root
Dec 06 09:16:11 compute-0 sudo[30265]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pmuzrjovfenbgsslbqcynkiicswihjce ; /usr/bin/python3'
Dec 06 09:16:11 compute-0 sudo[30265]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:16:11 compute-0 python3[30267]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1765012570.1579626-33924-160185641293870/source mode=0755 _original_basename=delorean-antelope-testing.repo follow=False checksum=4ebc56dead962b5d40b8d420dad43b948b84d3fc backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:16:11 compute-0 sudo[30265]: pam_unix(sudo:session): session closed for user root
Dec 06 09:16:11 compute-0 sudo[30291]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-creumwqltcrddegsconltsokdrvflayz ; /usr/bin/python3'
Dec 06 09:16:11 compute-0 sudo[30291]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:16:11 compute-0 python3[30293]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-highavailability.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 06 09:16:11 compute-0 sudo[30291]: pam_unix(sudo:session): session closed for user root
Dec 06 09:16:11 compute-0 sudo[30364]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kxkjxwlnzallxckqpxktyyrzcohhwrwd ; /usr/bin/python3'
Dec 06 09:16:11 compute-0 sudo[30364]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:16:12 compute-0 python3[30366]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1765012570.1579626-33924-160185641293870/source mode=0755 _original_basename=repo-setup-centos-highavailability.repo follow=False checksum=55d0f695fd0d8f47cbc3044ce0dcf5f88862490f backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:16:12 compute-0 sudo[30364]: pam_unix(sudo:session): session closed for user root
Dec 06 09:16:12 compute-0 sudo[30390]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-anyimkfcvfedqaigtaertiofcgnsnbzl ; /usr/bin/python3'
Dec 06 09:16:12 compute-0 sudo[30390]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:16:12 compute-0 python3[30392]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-powertools.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 06 09:16:12 compute-0 sudo[30390]: pam_unix(sudo:session): session closed for user root
Dec 06 09:16:12 compute-0 sudo[30463]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hpxizfaxpennyeflkdqfkhvgrkyhdshr ; /usr/bin/python3'
Dec 06 09:16:12 compute-0 sudo[30463]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:16:12 compute-0 python3[30465]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1765012570.1579626-33924-160185641293870/source mode=0755 _original_basename=repo-setup-centos-powertools.repo follow=False checksum=4b0cf99aa89c5c5be0151545863a7a7568f67568 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:16:12 compute-0 sudo[30463]: pam_unix(sudo:session): session closed for user root
Dec 06 09:16:12 compute-0 sudo[30489]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vthfzynnsoncfswktjubykynlvyhaybh ; /usr/bin/python3'
Dec 06 09:16:12 compute-0 sudo[30489]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:16:12 compute-0 python3[30491]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-appstream.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 06 09:16:12 compute-0 sudo[30489]: pam_unix(sudo:session): session closed for user root
Dec 06 09:16:13 compute-0 sudo[30564]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dianvcqunhqbsxsgombebliautvddgfw ; /usr/bin/python3'
Dec 06 09:16:13 compute-0 sudo[30564]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:16:13 compute-0 python3[30566]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1765012570.1579626-33924-160185641293870/source mode=0755 _original_basename=repo-setup-centos-appstream.repo follow=False checksum=e89244d2503b2996429dda1857290c1e91e393a1 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:16:13 compute-0 sudo[30564]: pam_unix(sudo:session): session closed for user root
Dec 06 09:16:13 compute-0 sudo[30590]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ezynvihbanzomhplpkkuhewlkrkbkcpb ; /usr/bin/python3'
Dec 06 09:16:13 compute-0 sudo[30590]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:16:13 compute-0 python3[30592]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-baseos.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 06 09:16:13 compute-0 sudo[30590]: pam_unix(sudo:session): session closed for user root
Dec 06 09:16:13 compute-0 sudo[30663]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-trdbiplbsdixqswdzonmfaaaithbahgp ; /usr/bin/python3'
Dec 06 09:16:13 compute-0 sudo[30663]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:16:13 compute-0 sshd-session[30522]: Received disconnect from 193.46.255.103 port 38208:11:  [preauth]
Dec 06 09:16:13 compute-0 sshd-session[30522]: Disconnected from authenticating user root 193.46.255.103 port 38208 [preauth]
Dec 06 09:16:13 compute-0 python3[30665]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1765012570.1579626-33924-160185641293870/source mode=0755 _original_basename=repo-setup-centos-baseos.repo follow=False checksum=36d926db23a40dbfa5c84b5e4d43eac6fa2301d6 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:16:13 compute-0 sudo[30663]: pam_unix(sudo:session): session closed for user root
Dec 06 09:16:13 compute-0 sudo[30689]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nebzyxojetnbgtuymkrygnybatnqczvs ; /usr/bin/python3'
Dec 06 09:16:13 compute-0 sudo[30689]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:16:13 compute-0 python3[30691]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo.md5 follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 06 09:16:14 compute-0 sudo[30689]: pam_unix(sudo:session): session closed for user root
Dec 06 09:16:14 compute-0 sudo[30762]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wtipuquhqycmilfikykkmqlcxocuvoee ; /usr/bin/python3'
Dec 06 09:16:14 compute-0 sudo[30762]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:16:14 compute-0 python3[30764]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1765012570.1579626-33924-160185641293870/source mode=0755 _original_basename=delorean.repo.md5 follow=False checksum=6e18e2038d54303b4926db53c0b6cced515a9151 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:16:14 compute-0 sudo[30762]: pam_unix(sudo:session): session closed for user root
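
The alternating legacy.stat/legacy.copy pairs above are Ansible's normal file-push protocol: stat fetches the remote SHA-1 first, and copy transfers the file only when that checksum differs from the controller's, which keeps repeated runs idempotent. Locally, each pair reduces to roughly this (staging path illustrative):

    # Compare checksums, then install only on mismatch.
    sha1sum /etc/yum.repos.d/delorean.repo
    install -m 0755 /tmp/staged/delorean.repo /etc/yum.repos.d/delorean.repo
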
Dec 06 09:16:17 compute-0 sshd-session[30789]: Connection closed by 192.168.122.11 port 44666 [preauth]
Dec 06 09:16:17 compute-0 sshd-session[30790]: Connection closed by 192.168.122.11 port 44678 [preauth]
Dec 06 09:16:17 compute-0 sshd-session[30791]: Unable to negotiate with 192.168.122.11 port 44680: no matching host key type found. Their offer: ssh-ed25519 [preauth]
Dec 06 09:16:17 compute-0 sshd-session[30793]: Unable to negotiate with 192.168.122.11 port 44684: no matching host key type found. Their offer: sk-ecdsa-sha2-nistp256@openssh.com [preauth]
Dec 06 09:16:17 compute-0 sshd-session[30794]: Unable to negotiate with 192.168.122.11 port 44686: no matching host key type found. Their offer: sk-ssh-ed25519@openssh.com [preauth]
Dec 06 09:16:26 compute-0 python3[30822]: ansible-ansible.legacy.command Invoked with _raw_params=hostname _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 09:20:34 compute-0 sshd-session[30827]: error: kex_exchange_identification: read: Connection reset by peer
Dec 06 09:20:34 compute-0 sshd-session[30827]: Connection reset by 45.140.17.97 port 39668
Dec 06 09:21:26 compute-0 sshd-session[29906]: Received disconnect from 38.102.83.98 port 39394:11: disconnected by user
Dec 06 09:21:26 compute-0 sshd-session[29906]: Disconnected from user zuul 38.102.83.98 port 39394
Dec 06 09:21:26 compute-0 sshd-session[29903]: pam_unix(sshd:session): session closed for user zuul
Dec 06 09:21:26 compute-0 systemd[1]: session-7.scope: Deactivated successfully.
Dec 06 09:21:26 compute-0 systemd[1]: session-7.scope: Consumed 4.753s CPU time.
Dec 06 09:21:26 compute-0 systemd-logind[795]: Session 7 logged out. Waiting for processes to exit.
Dec 06 09:21:26 compute-0 systemd-logind[795]: Removed session 7.
Dec 06 09:27:56 compute-0 sshd-session[30833]: Accepted publickey for zuul from 192.168.122.30 port 60320 ssh2: ECDSA SHA256:r1j7aLsKAM+XxDNbzEU5vWGpGNCOaIBwc7FZdATPttA
Dec 06 09:27:56 compute-0 systemd-logind[795]: New session 8 of user zuul.
Dec 06 09:27:56 compute-0 systemd[1]: Started Session 8 of User zuul.
Dec 06 09:27:56 compute-0 sshd-session[30833]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 06 09:27:57 compute-0 python3.9[30987]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 06 09:27:58 compute-0 sudo[31166]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zskxbzrdnutcfjnpdrlolmvwnvzeqgpk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013278.331058-56-101606814601656/AnsiballZ_command.py'
Dec 06 09:27:58 compute-0 sudo[31166]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:27:58 compute-0 python3.9[31168]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail
                                            pushd /var/tmp
                                            curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz
                                            pushd repo-setup-main
                                            python3 -m venv ./venv
                                            PBR_VERSION=0.0.0 ./venv/bin/pip install ./
                                            ./venv/bin/repo-setup current-podified -b antelope
                                            popd
                                            rm -rf repo-setup-main
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
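Journald wraps the payload of this shell task across continuation lines; reassembled, the task body is the following self-contained script (contents taken verbatim from the log; the shebang is added here since pushd/popd are bash builtins):

    #!/bin/bash
    set -euxo pipefail
    pushd /var/tmp
    # fetch and unpack the repo-setup tool
    curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz
    pushd repo-setup-main
    python3 -m venv ./venv
    # PBR_VERSION=0.0.0 lets pip install the checkout without git metadata
    PBR_VERSION=0.0.0 ./venv/bin/pip install ./
    ./venv/bin/repo-setup current-podified -b antelope
    popd
    rm -rf repo-setup-main

It installs repo-setup into a throwaway venv, runs it once to write the antelope "current-podified" repository files (the delorean-* and dlrn-antelope-* repos that show up in the dnf makecache run below), and removes the checkout again.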
Dec 06 09:28:05 compute-0 sudo[31166]: pam_unix(sudo:session): session closed for user root
Dec 06 09:28:06 compute-0 sshd-session[30836]: Connection closed by 192.168.122.30 port 60320
Dec 06 09:28:06 compute-0 sshd-session[30833]: pam_unix(sshd:session): session closed for user zuul
Dec 06 09:28:06 compute-0 systemd[1]: session-8.scope: Deactivated successfully.
Dec 06 09:28:06 compute-0 systemd[1]: session-8.scope: Consumed 7.283s CPU time.
Dec 06 09:28:06 compute-0 systemd-logind[795]: Session 8 logged out. Waiting for processes to exit.
Dec 06 09:28:06 compute-0 systemd-logind[795]: Removed session 8.
Dec 06 09:28:22 compute-0 sshd-session[31225]: Accepted publickey for zuul from 192.168.122.30 port 55352 ssh2: ECDSA SHA256:r1j7aLsKAM+XxDNbzEU5vWGpGNCOaIBwc7FZdATPttA
Dec 06 09:28:22 compute-0 systemd-logind[795]: New session 9 of user zuul.
Dec 06 09:28:22 compute-0 systemd[1]: Started Session 9 of User zuul.
Dec 06 09:28:22 compute-0 sshd-session[31225]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 06 09:28:22 compute-0 python3.9[31378]: ansible-ansible.legacy.ping Invoked with data=pong
Dec 06 09:28:24 compute-0 python3.9[31552]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 06 09:28:24 compute-0 sudo[31702]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rtoiisgkurclgbyylphzoatzoicpjodw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013304.471234-93-36303738315214/AnsiballZ_command.py'
Dec 06 09:28:24 compute-0 sudo[31702]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:28:25 compute-0 python3.9[31704]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 09:28:25 compute-0 sudo[31702]: pam_unix(sudo:session): session closed for user root
Dec 06 09:28:25 compute-0 sudo[31855]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jyycnolczoguygrjpgirtgemwkyrwsdn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013305.5114608-129-250039084126380/AnsiballZ_stat.py'
Dec 06 09:28:25 compute-0 sudo[31855]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:28:26 compute-0 python3.9[31857]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 06 09:28:26 compute-0 sudo[31855]: pam_unix(sudo:session): session closed for user root
Dec 06 09:28:26 compute-0 sudo[32007]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nzaeuajixzfxpqodfknzsclmmwqsmnqk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013306.3142755-153-189760512433574/AnsiballZ_file.py'
Dec 06 09:28:26 compute-0 sudo[32007]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:28:27 compute-0 python3.9[32009]: ansible-ansible.builtin.file Invoked with mode=755 path=/etc/ansible/facts.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:28:27 compute-0 sudo[32007]: pam_unix(sudo:session): session closed for user root
Dec 06 09:28:27 compute-0 sudo[32159]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rwgrrxlwhistvwggjdgrobmhggzabmpl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013307.3247175-177-225592901403469/AnsiballZ_stat.py'
Dec 06 09:28:27 compute-0 sudo[32159]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:28:27 compute-0 python3.9[32161]: ansible-ansible.legacy.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:28:27 compute-0 sudo[32159]: pam_unix(sudo:session): session closed for user root
Dec 06 09:28:28 compute-0 sudo[32282]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-faondwmuhntggvklaapanassaznhztww ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013307.3247175-177-225592901403469/AnsiballZ_copy.py'
Dec 06 09:28:28 compute-0 sudo[32282]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:28:28 compute-0 python3.9[32284]: ansible-ansible.legacy.copy Invoked with dest=/etc/ansible/facts.d/bootc.fact mode=755 src=/home/zuul/.ansible/tmp/ansible-tmp-1765013307.3247175-177-225592901403469/.source.fact _original_basename=bootc.fact follow=False checksum=eb4122ce7fc50a38407beb511c4ff8c178005b12 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:28:28 compute-0 sudo[32282]: pam_unix(sudo:session): session closed for user root
Dec 06 09:28:29 compute-0 sudo[32434]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hkhbhhickcjcrrkqmbbvyvxrppgjylhc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013308.7391667-222-32074096532096/AnsiballZ_setup.py'
Dec 06 09:28:29 compute-0 sudo[32434]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:28:29 compute-0 python3.9[32436]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 06 09:28:29 compute-0 sudo[32434]: pam_unix(sudo:session): session closed for user root
Dec 06 09:28:30 compute-0 sudo[32590]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bsblvkanwliedyulgptrswjqycgscrpb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013309.737053-246-86847671343548/AnsiballZ_file.py'
Dec 06 09:28:30 compute-0 sudo[32590]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:28:30 compute-0 python3.9[32592]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 06 09:28:30 compute-0 sudo[32590]: pam_unix(sudo:session): session closed for user root
Dec 06 09:28:30 compute-0 sudo[32742]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tgrpgtlaaggvbekjypvhcjgmibdlgjwc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013310.5872948-273-38032067292604/AnsiballZ_file.py'
Dec 06 09:28:30 compute-0 sudo[32742]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:28:31 compute-0 python3.9[32744]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/config-data/ansible-generated recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 06 09:28:31 compute-0 sudo[32742]: pam_unix(sudo:session): session closed for user root
Dec 06 09:28:32 compute-0 python3.9[32894]: ansible-ansible.builtin.service_facts Invoked
Dec 06 09:28:39 compute-0 python3.9[33147]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
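The lineinfile call above targets /proc/cmdline with create=False, which cannot actually edit anything; it appears to serve as an assertion that the kernel command line already contains cloud-init=disabled, failing the task when it does not. The equivalent shell check, as a sketch:

    # non-zero exit unless the kernel was booted with cloud-init=disabled
    grep -q 'cloud-init=disabled' /proc/cmdline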
Dec 06 09:28:40 compute-0 python3.9[33297]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 06 09:28:41 compute-0 python3.9[33451]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 06 09:28:42 compute-0 sudo[33607]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wfxzyoxnvhhhasqstexyhiwrfsrdlmbt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013322.1621387-417-33999332127845/AnsiballZ_setup.py'
Dec 06 09:28:42 compute-0 sudo[33607]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:28:42 compute-0 python3.9[33609]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 06 09:28:43 compute-0 sudo[33607]: pam_unix(sudo:session): session closed for user root
Dec 06 09:28:43 compute-0 sudo[33691]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gssvqurdrfnidogvcskdkegmfwgnffun ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013322.1621387-417-33999332127845/AnsiballZ_dnf.py'
Dec 06 09:28:43 compute-0 sudo[33691]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:28:43 compute-0 python3.9[33693]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
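The same install expressed as a dnf command line (package list taken from the module arguments above):

    dnf -y install driverctl lvm2 crudini jq nftables NetworkManager \
        openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch \
        sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts \
        grubby sos

The rpm -V run at 09:30:45 below re-verifies exactly this set.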
Dec 06 09:28:47 compute-0 sshd-session[33757]: Received disconnect from 193.46.255.103 port 41520:11:  [preauth]
Dec 06 09:28:47 compute-0 sshd-session[33757]: Disconnected from authenticating user root 193.46.255.103 port 41520 [preauth]
Dec 06 09:29:02 compute-0 anacron[4456]: Job `cron.daily' started
Dec 06 09:29:02 compute-0 anacron[4456]: Job `cron.daily' terminated
Dec 06 09:29:24 compute-0 systemd[1]: Reloading.
Dec 06 09:29:24 compute-0 systemd-rc-local-generator[33891]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 09:29:25 compute-0 systemd[1]: Listening on Device-mapper event daemon FIFOs.
Dec 06 09:29:25 compute-0 systemd[1]: Reloading.
Dec 06 09:29:25 compute-0 systemd-rc-local-generator[33935]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 09:29:25 compute-0 systemd[1]: Starting Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling...
Dec 06 09:29:25 compute-0 systemd[1]: Finished Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling.
Dec 06 09:29:25 compute-0 systemd[1]: Reloading.
Dec 06 09:29:25 compute-0 systemd-rc-local-generator[33971]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 09:29:26 compute-0 systemd[1]: Starting dnf makecache...
Dec 06 09:29:26 compute-0 systemd[1]: Listening on LVM2 poll daemon socket.
Dec 06 09:29:26 compute-0 dnf[33983]: Failed determining last makecache time.
Dec 06 09:29:26 compute-0 dnf[33983]: delorean-openstack-barbican-42b4c41831408a8e323 129 kB/s | 3.0 kB     00:00
Dec 06 09:29:26 compute-0 dbus-broker-launch[767]: Noticed file-system modification, trigger reload.
Dec 06 09:29:26 compute-0 dbus-broker-launch[767]: Noticed file-system modification, trigger reload.
Dec 06 09:29:26 compute-0 dbus-broker-launch[767]: Noticed file-system modification, trigger reload.
Dec 06 09:29:26 compute-0 dnf[33983]: delorean-python-glean-10df0bd91b9bc5c9fd9cc02d7 128 kB/s | 3.0 kB     00:00
Dec 06 09:29:26 compute-0 dnf[33983]: delorean-openstack-cinder-1c00d6490d88e436f26ef 147 kB/s | 3.0 kB     00:00
Dec 06 09:29:26 compute-0 dnf[33983]: delorean-python-stevedore-c4acc5639fd2329372142 177 kB/s | 3.0 kB     00:00
Dec 06 09:29:26 compute-0 dnf[33983]: delorean-python-cloudkitty-tests-tempest-2c80f8 140 kB/s | 3.0 kB     00:00
Dec 06 09:29:26 compute-0 dnf[33983]: delorean-os-net-config-d0cedbdb788d43e5c7551df5 150 kB/s | 3.0 kB     00:00
Dec 06 09:29:26 compute-0 dnf[33983]: delorean-openstack-nova-6f8decf0b4f1aa2e96292b6 133 kB/s | 3.0 kB     00:00
Dec 06 09:29:26 compute-0 dnf[33983]: delorean-python-designate-tests-tempest-347fdbc 150 kB/s | 3.0 kB     00:00
Dec 06 09:29:26 compute-0 dnf[33983]: delorean-openstack-glance-1fd12c29b339f30fe823e 151 kB/s | 3.0 kB     00:00
Dec 06 09:29:26 compute-0 dnf[33983]: delorean-openstack-keystone-e4b40af0ae3698fbbbb 146 kB/s | 3.0 kB     00:00
Dec 06 09:29:26 compute-0 dnf[33983]: delorean-openstack-manila-3c01b7181572c95dac462 157 kB/s | 3.0 kB     00:00
Dec 06 09:29:26 compute-0 dnf[33983]: delorean-python-whitebox-neutron-tests-tempest- 156 kB/s | 3.0 kB     00:00
Dec 06 09:29:26 compute-0 dnf[33983]: delorean-openstack-octavia-ba397f07a7331190208c 159 kB/s | 3.0 kB     00:00
Dec 06 09:29:26 compute-0 dnf[33983]: delorean-openstack-watcher-c014f81a8647287f6dcc 164 kB/s | 3.0 kB     00:00
Dec 06 09:29:26 compute-0 dnf[33983]: delorean-ansible-config_template-5ccaa22121a7ff 157 kB/s | 3.0 kB     00:00
Dec 06 09:29:26 compute-0 dnf[33983]: delorean-puppet-ceph-7352068d7b8c84ded636ab3158 156 kB/s | 3.0 kB     00:00
Dec 06 09:29:26 compute-0 dnf[33983]: delorean-openstack-swift-dc98a8463506ac520c469a 148 kB/s | 3.0 kB     00:00
Dec 06 09:29:26 compute-0 dnf[33983]: delorean-python-tempestconf-8515371b7cceebd4282 134 kB/s | 3.0 kB     00:00
Dec 06 09:29:26 compute-0 dnf[33983]: delorean-openstack-heat-ui-013accbfd179753bc3f0 132 kB/s | 3.0 kB     00:00
Dec 06 09:29:26 compute-0 dnf[33983]: CentOS Stream 9 - BaseOS                         77 kB/s | 7.3 kB     00:00
Dec 06 09:29:27 compute-0 dnf[33983]: CentOS Stream 9 - AppStream                      85 kB/s | 7.4 kB     00:00
Dec 06 09:29:27 compute-0 dnf[33983]: CentOS Stream 9 - CRB                            30 kB/s | 7.2 kB     00:00
Dec 06 09:29:27 compute-0 dnf[33983]: CentOS Stream 9 - Extras packages                74 kB/s | 8.3 kB     00:00
Dec 06 09:29:27 compute-0 dnf[33983]: dlrn-antelope-testing                            92 kB/s | 3.0 kB     00:00
Dec 06 09:29:27 compute-0 dnf[33983]: dlrn-antelope-build-deps                         94 kB/s | 3.0 kB     00:00
Dec 06 09:29:27 compute-0 dnf[33983]: centos9-rabbitmq                                 84 kB/s | 3.0 kB     00:00
Dec 06 09:29:27 compute-0 dnf[33983]: centos9-storage                                 106 kB/s | 3.0 kB     00:00
Dec 06 09:29:27 compute-0 dnf[33983]: centos9-opstools                                129 kB/s | 3.0 kB     00:00
Dec 06 09:29:27 compute-0 dnf[33983]: NFV SIG OpenvSwitch                             143 kB/s | 3.0 kB     00:00
Dec 06 09:29:27 compute-0 dnf[33983]: repo-setup-centos-appstream                     160 kB/s | 4.4 kB     00:00
Dec 06 09:29:27 compute-0 dnf[33983]: repo-setup-centos-baseos                        168 kB/s | 3.9 kB     00:00
Dec 06 09:29:28 compute-0 dnf[33983]: repo-setup-centos-highavailability              177 kB/s | 3.9 kB     00:00
Dec 06 09:29:28 compute-0 dnf[33983]: repo-setup-centos-powertools                     56 kB/s | 4.3 kB     00:00
Dec 06 09:29:28 compute-0 dnf[33983]: Extra Packages for Enterprise Linux 9 - x86_64  159 kB/s |  32 kB     00:00
Dec 06 09:29:28 compute-0 dnf[33983]: Metadata cache created.
Dec 06 09:29:28 compute-0 systemd[1]: dnf-makecache.service: Deactivated successfully.
Dec 06 09:29:28 compute-0 systemd[1]: Finished dnf makecache.
Dec 06 09:29:28 compute-0 systemd[1]: dnf-makecache.service: Consumed 1.844s CPU time.
Dec 06 09:30:31 compute-0 kernel: SELinux:  Converting 2718 SID table entries...
Dec 06 09:30:31 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Dec 06 09:30:31 compute-0 kernel: SELinux:  policy capability open_perms=1
Dec 06 09:30:31 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Dec 06 09:30:31 compute-0 kernel: SELinux:  policy capability always_check_network=0
Dec 06 09:30:31 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec 06 09:30:31 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec 06 09:30:31 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec 06 09:30:31 compute-0 dbus-broker-launch[771]: avc:  op=load_policy lsm=selinux seqno=8 res=1
Dec 06 09:30:32 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec 06 09:30:32 compute-0 systemd[1]: Starting man-db-cache-update.service...
Dec 06 09:30:32 compute-0 systemd[1]: Reloading.
Dec 06 09:30:32 compute-0 systemd-rc-local-generator[34348]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 09:30:32 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Dec 06 09:30:32 compute-0 sudo[33691]: pam_unix(sudo:session): session closed for user root
Dec 06 09:30:33 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec 06 09:30:33 compute-0 systemd[1]: Finished man-db-cache-update.service.
Dec 06 09:30:33 compute-0 systemd[1]: man-db-cache-update.service: Consumed 1.327s CPU time.
Dec 06 09:30:33 compute-0 systemd[1]: run-r3c3800aad8f24a4f90c8931f8ecee67f.service: Deactivated successfully.
Dec 06 09:30:45 compute-0 sudo[35262]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jocvukuuijslzrywnwsqltclycnqaiup ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013445.131837-453-10704524769331/AnsiballZ_command.py'
Dec 06 09:30:45 compute-0 sudo[35262]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:30:45 compute-0 python3.9[35264]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 09:30:46 compute-0 sudo[35262]: pam_unix(sudo:session): session closed for user root
Dec 06 09:30:47 compute-0 sudo[35543]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ywikoiqzgsgoiaruufrkbupksizedbgk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013447.1802986-477-199132115849700/AnsiballZ_selinux.py'
Dec 06 09:30:47 compute-0 sudo[35543]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:30:48 compute-0 python3.9[35545]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Dec 06 09:30:48 compute-0 sudo[35543]: pam_unix(sudo:session): session closed for user root
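ansible.posix.selinux with policy=targeted state=enforcing amounts to switching the running system to enforcing mode and persisting that in /etc/selinux/config; a minimal shell sketch of the same effect (standard SELinux tooling assumed, this is not the module's internal implementation):

    setenforce 1                                                   # enforce now
    sed -i 's/^SELINUX=.*/SELINUX=enforcing/' /etc/selinux/config  # persist across reboots
    getenforce                                                     # should print: Enforcing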
Dec 06 09:30:49 compute-0 sudo[35695]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ijgmjqmndneykqsrsqnpocebxpvbaldj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013448.8287437-510-30325704432062/AnsiballZ_command.py'
Dec 06 09:30:49 compute-0 sudo[35695]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:30:49 compute-0 python3.9[35697]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Dec 06 09:30:50 compute-0 sudo[35695]: pam_unix(sudo:session): session closed for user root
Dec 06 09:30:54 compute-0 sudo[35848]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qmnzxfehipyobruvpptrfbafhdikswcn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013453.9154842-534-125085389494108/AnsiballZ_file.py'
Dec 06 09:30:54 compute-0 sudo[35848]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:30:54 compute-0 python3.9[35850]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:30:54 compute-0 sudo[35848]: pam_unix(sudo:session): session closed for user root
Dec 06 09:30:55 compute-0 sudo[36000]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ytrtvpjsakxntwrpnuihkmbbjviosefs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013455.388404-558-93721073832860/AnsiballZ_mount.py'
Dec 06 09:30:55 compute-0 sudo[36000]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:30:58 compute-0 python3.9[36002]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Dec 06 09:30:58 compute-0 sudo[36000]: pam_unix(sudo:session): session closed for user root
Dec 06 09:31:00 compute-0 sudo[36153]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kgwupodlrnelxrypmvvffuywbteyfrkc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013459.8146744-642-187866116674377/AnsiballZ_file.py'
Dec 06 09:31:00 compute-0 sudo[36153]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:31:00 compute-0 python3.9[36155]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 06 09:31:00 compute-0 sudo[36153]: pam_unix(sudo:session): session closed for user root
Dec 06 09:31:05 compute-0 sudo[36305]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-whoggzcunyesdojfvmehjvnwraqaujoc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013464.6928537-666-246355898964676/AnsiballZ_stat.py'
Dec 06 09:31:05 compute-0 sudo[36305]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:31:05 compute-0 irqbalance[788]: Cannot change IRQ 26 affinity: Operation not permitted
Dec 06 09:31:05 compute-0 irqbalance[788]: IRQ 26 affinity is now unmanaged
Dec 06 09:31:08 compute-0 python3.9[36307]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:31:08 compute-0 sudo[36305]: pam_unix(sudo:session): session closed for user root
Dec 06 09:31:09 compute-0 sudo[36428]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wsidjhqsedkryjalcdyrceskqhsdtnfw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013464.6928537-666-246355898964676/AnsiballZ_copy.py'
Dec 06 09:31:09 compute-0 sudo[36428]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:31:09 compute-0 python3.9[36430]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765013464.6928537-666-246355898964676/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=22c202a539af259b977a1afda61dbc1fe0d1039c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:31:09 compute-0 sudo[36428]: pam_unix(sudo:session): session closed for user root
Dec 06 09:31:10 compute-0 sudo[36580]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qfmnuzlaqshslweqpuehwyvqevdusfrk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013470.5293708-738-87785368784731/AnsiballZ_stat.py'
Dec 06 09:31:10 compute-0 sudo[36580]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:31:11 compute-0 python3.9[36582]: ansible-ansible.builtin.stat Invoked with path=/etc/lvm/devices/system.devices follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 06 09:31:11 compute-0 sudo[36580]: pam_unix(sudo:session): session closed for user root
Dec 06 09:31:11 compute-0 sudo[36732]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jupcrceajqanicxhtwpgnojehxcmratm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013471.3722293-762-179529353690587/AnsiballZ_command.py'
Dec 06 09:31:11 compute-0 sudo[36732]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:31:11 compute-0 python3.9[36734]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/vgimportdevices --all _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 09:31:12 compute-0 sudo[36732]: pam_unix(sudo:session): session closed for user root
Dec 06 09:31:12 compute-0 sudo[36885]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-krnhdlpcijbwgcvmkwkpxdwtbmqbqoaa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013472.3581808-786-182124496551536/AnsiballZ_file.py'
Dec 06 09:31:12 compute-0 sudo[36885]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:31:12 compute-0 python3.9[36887]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/lvm/devices/system.devices state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:31:12 compute-0 sudo[36885]: pam_unix(sudo:session): session closed for user root
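These two tasks populate the LVM devices file: vgimportdevices --all imports any visible PVs into /etc/lvm/devices/system.devices, and the follow-up touch guarantees the file exists with 0600 root:root even when nothing was imported. As a shell sketch:

    vgimportdevices --all                       # may be a no-op if no PVs are visible
    touch /etc/lvm/devices/system.devices       # ensure the devices file exists anyway
    chmod 0600 /etc/lvm/devices/system.devices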
Dec 06 09:31:13 compute-0 sudo[37037]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yzitvlttlinngfgubhjgwgrxryrtgcxl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013473.4649804-819-107838567024779/AnsiballZ_getent.py'
Dec 06 09:31:13 compute-0 sudo[37037]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:31:14 compute-0 python3.9[37039]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Dec 06 09:31:14 compute-0 sudo[37037]: pam_unix(sudo:session): session closed for user root
Dec 06 09:31:14 compute-0 sudo[37190]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yopaafgwnfspfgxfcroxgwsnfadoyzxq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013474.3449779-843-169979233390342/AnsiballZ_group.py'
Dec 06 09:31:14 compute-0 sudo[37190]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:31:15 compute-0 python3.9[37192]: ansible-ansible.builtin.group Invoked with gid=107 name=qemu state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Dec 06 09:31:15 compute-0 groupadd[37193]: group added to /etc/group: name=qemu, GID=107
Dec 06 09:31:15 compute-0 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 06 09:31:15 compute-0 groupadd[37193]: group added to /etc/gshadow: name=qemu
Dec 06 09:31:15 compute-0 groupadd[37193]: new group: name=qemu, GID=107
Dec 06 09:31:15 compute-0 sudo[37190]: pam_unix(sudo:session): session closed for user root
Dec 06 09:31:15 compute-0 sudo[37349]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qkfvbznvjewogqiubxoigmbzwkessdqe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013475.3639896-867-199191345910632/AnsiballZ_user.py'
Dec 06 09:31:15 compute-0 sudo[37349]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:31:16 compute-0 python3.9[37351]: ansible-ansible.builtin.user Invoked with comment=qemu user group=qemu groups=[''] name=qemu shell=/sbin/nologin state=present uid=107 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Dec 06 09:31:16 compute-0 useradd[37353]: new user: name=qemu, UID=107, GID=107, home=/home/qemu, shell=/sbin/nologin, from=/dev/pts/0
Dec 06 09:31:16 compute-0 sudo[37349]: pam_unix(sudo:session): session closed for user root
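The getent/group/user triple above pins the qemu account to fixed IDs; the equivalent shell sequence (IDs and attributes from the module arguments) is:

    getent passwd qemu >/dev/null 2>&1 || {
        groupadd -g 107 qemu
        useradd -u 107 -g qemu -s /sbin/nologin -c 'qemu user' qemu
    }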
Dec 06 09:31:16 compute-0 sudo[37509]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jqahuyorgwzmvkkuquwbavbxgyzpjmup ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013476.5501995-891-265120173616959/AnsiballZ_getent.py'
Dec 06 09:31:16 compute-0 sudo[37509]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:31:17 compute-0 python3.9[37511]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Dec 06 09:31:17 compute-0 sudo[37509]: pam_unix(sudo:session): session closed for user root
Dec 06 09:31:17 compute-0 sudo[37662]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iytkosykjzznvlqtexlrcqcfnfjedlov ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013477.4699752-915-28386782476476/AnsiballZ_group.py'
Dec 06 09:31:17 compute-0 sudo[37662]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:31:18 compute-0 python3.9[37664]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Dec 06 09:31:18 compute-0 groupadd[37665]: group added to /etc/group: name=hugetlbfs, GID=42477
Dec 06 09:31:18 compute-0 groupadd[37665]: group added to /etc/gshadow: name=hugetlbfs
Dec 06 09:31:18 compute-0 groupadd[37665]: new group: name=hugetlbfs, GID=42477
Dec 06 09:31:18 compute-0 sudo[37662]: pam_unix(sudo:session): session closed for user root
Dec 06 09:31:18 compute-0 sudo[37820]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kxvjwwqtpbkolevxckxppgnhxplpwwhy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013478.4788303-942-12560101616420/AnsiballZ_file.py'
Dec 06 09:31:18 compute-0 sudo[37820]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:31:18 compute-0 python3.9[37822]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Dec 06 09:31:19 compute-0 sudo[37820]: pam_unix(sudo:session): session closed for user root
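The same pattern creates the hugetlbfs group and the vhost-user socket directory. A shell sketch (SELinux labelling shown via chcon for illustration; the file module applies setype/seuser directly):

    groupadd -g 42477 hugetlbfs
    mkdir -p /var/lib/vhost_sockets
    chown qemu:qemu /var/lib/vhost_sockets
    chmod 0755 /var/lib/vhost_sockets
    chcon -u system_u -t virt_cache_t /var/lib/vhost_sockets

The openvswitch user created at 09:32:05 below is added to this hugetlbfs group.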
Dec 06 09:31:19 compute-0 sudo[37972]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sdnxxwnyfbbodmxecpretxopepjnrnsy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013479.523617-975-201719722199285/AnsiballZ_dnf.py'
Dec 06 09:31:19 compute-0 sudo[37972]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:31:20 compute-0 python3.9[37974]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 06 09:31:21 compute-0 sudo[37972]: pam_unix(sudo:session): session closed for user root
Dec 06 09:31:23 compute-0 sudo[38125]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gfsvaaepajqjlajjfgyqligycrobbaer ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013482.8037217-999-2868432979359/AnsiballZ_file.py'
Dec 06 09:31:23 compute-0 sudo[38125]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:31:23 compute-0 python3.9[38127]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 06 09:31:23 compute-0 sudo[38125]: pam_unix(sudo:session): session closed for user root
Dec 06 09:31:23 compute-0 sudo[38277]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ppowmcanzlsnsfrdnaaneinmylfhqifj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013483.5994058-1023-96510026555493/AnsiballZ_stat.py'
Dec 06 09:31:23 compute-0 sudo[38277]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:31:24 compute-0 python3.9[38279]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:31:24 compute-0 sudo[38277]: pam_unix(sudo:session): session closed for user root
Dec 06 09:31:24 compute-0 sudo[38400]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-umvsunxmfzjcqvnecrxdcjpzgtmfuhah ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013483.5994058-1023-96510026555493/AnsiballZ_copy.py'
Dec 06 09:31:24 compute-0 sudo[38400]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:31:24 compute-0 python3.9[38402]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765013483.5994058-1023-96510026555493/.source.conf follow=False _original_basename=edpm-modprobe.conf.j2 checksum=8021efe01721d8fa8cab46b95c00ec1be6dbb9d0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec 06 09:31:24 compute-0 sudo[38400]: pam_unix(sudo:session): session closed for user root
Dec 06 09:31:25 compute-0 sudo[38552]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-savsjjhifyslxknmlvuevnrptmmnpdnq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013485.0247097-1068-17251975788778/AnsiballZ_systemd.py'
Dec 06 09:31:25 compute-0 sudo[38552]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:31:25 compute-0 python3.9[38554]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 06 09:31:26 compute-0 systemd[1]: Starting Load Kernel Modules...
Dec 06 09:31:26 compute-0 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Dec 06 09:31:26 compute-0 kernel: Bridge firewalling registered
Dec 06 09:31:26 compute-0 systemd-modules-load[38558]: Inserted module 'br_netfilter'
Dec 06 09:31:26 compute-0 systemd[1]: Finished Load Kernel Modules.
Dec 06 09:31:26 compute-0 sudo[38552]: pam_unix(sudo:session): session closed for user root
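The 99-edpm.conf rendered into /etc/modules-load.d/ is not logged, but the br_netfilter insertion right after the service restart suggests it lists at least that module. A sketch of the same sequence (the file content is an assumption):

    echo br_netfilter > /etc/modules-load.d/99-edpm.conf   # assumed content
    systemctl restart systemd-modules-load.service
    lsmod | grep br_netfilter                              # confirm it loaded

Loading br_netfilter is what re-enables iptables/nftables filtering of bridged traffic, which the kernel message above notes is no longer available by default.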
Dec 06 09:31:26 compute-0 sudo[38711]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cjfshyykawlsxfhpiiaprgkgmwfavnhm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013486.398648-1092-31399700733310/AnsiballZ_stat.py'
Dec 06 09:31:26 compute-0 sudo[38711]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:31:26 compute-0 python3.9[38713]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:31:26 compute-0 sudo[38711]: pam_unix(sudo:session): session closed for user root
Dec 06 09:31:27 compute-0 sudo[38834]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rytndniufnfsmlkudnpyyemyilsoggkn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013486.398648-1092-31399700733310/AnsiballZ_copy.py'
Dec 06 09:31:27 compute-0 sudo[38834]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:31:27 compute-0 python3.9[38836]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysctl.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765013486.398648-1092-31399700733310/.source.conf follow=False _original_basename=edpm-sysctl.conf.j2 checksum=2a366439721b855adcfe4d7f152babb68596a007 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec 06 09:31:27 compute-0 sudo[38834]: pam_unix(sudo:session): session closed for user root
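As with the modules file, the rendered sysctl settings are not logged; once /etc/sysctl.d/99-edpm.conf is in place they can be applied either per-file or, as the play does at 09:31:54 below, by re-running the systemd unit:

    sysctl -p /etc/sysctl.d/99-edpm.conf       # load just this drop-in
    systemctl restart systemd-sysctl.service   # or re-apply all of sysctl.d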
Dec 06 09:31:28 compute-0 sudo[38986]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-okkvhiyssiqkdmjpybqhgrlnaxaazamc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013488.5037255-1146-177430980335438/AnsiballZ_dnf.py'
Dec 06 09:31:28 compute-0 sudo[38986]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:31:29 compute-0 python3.9[38988]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 06 09:31:32 compute-0 dbus-broker-launch[767]: Noticed file-system modification, trigger reload.
Dec 06 09:31:32 compute-0 dbus-broker-launch[767]: Noticed file-system modification, trigger reload.
Dec 06 09:31:32 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec 06 09:31:33 compute-0 systemd[1]: Starting man-db-cache-update.service...
Dec 06 09:31:33 compute-0 systemd[1]: Reloading.
Dec 06 09:31:33 compute-0 systemd-rc-local-generator[39052]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 09:31:33 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Dec 06 09:31:33 compute-0 sudo[38986]: pam_unix(sudo:session): session closed for user root
Dec 06 09:31:37 compute-0 python3.9[42349]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 06 09:31:37 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec 06 09:31:37 compute-0 systemd[1]: Finished man-db-cache-update.service.
Dec 06 09:31:37 compute-0 systemd[1]: man-db-cache-update.service: Consumed 6.071s CPU time.
Dec 06 09:31:37 compute-0 systemd[1]: run-r845a5114ca674be5a2bfda5f0e15afc2.service: Deactivated successfully.
Dec 06 09:31:38 compute-0 python3.9[42854]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Dec 06 09:31:38 compute-0 python3.9[43004]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 06 09:31:39 compute-0 sudo[43154]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xciyklpvsuznwwicdzccgttrsylcckdh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013499.462721-1263-65814550758772/AnsiballZ_command.py'
Dec 06 09:31:39 compute-0 sudo[43154]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:31:40 compute-0 python3.9[43156]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/tuned-adm profile throughput-performance _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 09:31:40 compute-0 systemd[1]: Starting Dynamic System Tuning Daemon...
Dec 06 09:31:40 compute-0 systemd[1]: Starting Authorization Manager...
Dec 06 09:31:40 compute-0 systemd[1]: Started Dynamic System Tuning Daemon.
Dec 06 09:31:40 compute-0 polkitd[43373]: Started polkitd version 0.117
Dec 06 09:31:40 compute-0 polkitd[43373]: Loading rules from directory /etc/polkit-1/rules.d
Dec 06 09:31:40 compute-0 polkitd[43373]: Loading rules from directory /usr/share/polkit-1/rules.d
Dec 06 09:31:40 compute-0 polkitd[43373]: Finished loading, compiling and executing 2 rules
Dec 06 09:31:40 compute-0 systemd[1]: Started Authorization Manager.
Dec 06 09:31:40 compute-0 polkitd[43373]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Dec 06 09:31:40 compute-0 sudo[43154]: pam_unix(sudo:session): session closed for user root
Dec 06 09:31:41 compute-0 sudo[43541]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rglnrkevjuhhjgzsblsglmnaoantskbu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013501.1883845-1290-79060185231765/AnsiballZ_systemd.py'
Dec 06 09:31:41 compute-0 sudo[43541]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:31:41 compute-0 python3.9[43543]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 06 09:31:41 compute-0 systemd[1]: Stopping Dynamic System Tuning Daemon...
Dec 06 09:31:42 compute-0 systemd[1]: tuned.service: Deactivated successfully.
Dec 06 09:31:42 compute-0 systemd[1]: Stopped Dynamic System Tuning Daemon.
Dec 06 09:31:42 compute-0 systemd[1]: Starting Dynamic System Tuning Daemon...
Dec 06 09:31:42 compute-0 systemd[1]: Started Dynamic System Tuning Daemon.
Dec 06 09:31:42 compute-0 sudo[43541]: pam_unix(sudo:session): session closed for user root
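Condensed, the tuned steps here (install at 09:31:29, profile switch, persistent enablement) are:

    dnf -y install tuned tuned-profiles-cpu-partitioning
    tuned-adm profile throughput-performance
    tuned-adm active                  # should report the throughput-performance profile
    systemctl enable --now tuned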
Dec 06 09:31:43 compute-0 python3.9[43705]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Dec 06 09:31:46 compute-0 sudo[43855]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-voqvhuazdguiksudbyqgpgdprlteiclk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013506.190816-1461-275422292969605/AnsiballZ_systemd.py'
Dec 06 09:31:46 compute-0 sudo[43855]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:31:46 compute-0 python3.9[43857]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 06 09:31:46 compute-0 systemd[1]: Reloading.
Dec 06 09:31:46 compute-0 systemd-rc-local-generator[43888]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 09:31:47 compute-0 sudo[43855]: pam_unix(sudo:session): session closed for user root
Dec 06 09:31:47 compute-0 sudo[44044]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qsigdpgfrpmrhgmqldfrrodcmzlbrqom ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013507.2930963-1461-207508485519898/AnsiballZ_systemd.py'
Dec 06 09:31:47 compute-0 sudo[44044]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:31:47 compute-0 python3.9[44046]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 06 09:31:47 compute-0 systemd[1]: Reloading.
Dec 06 09:31:48 compute-0 systemd-rc-local-generator[44076]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 09:31:48 compute-0 sudo[44044]: pam_unix(sudo:session): session closed for user root
Dec 06 09:31:48 compute-0 sudo[44233]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-udiwobdrzwkcpmgitunnpvrcgjcciiit ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013508.6424587-1509-129689533791668/AnsiballZ_command.py'
Dec 06 09:31:48 compute-0 sudo[44233]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:31:49 compute-0 python3.9[44235]: ansible-ansible.legacy.command Invoked with _raw_params=mkswap "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 09:31:49 compute-0 sudo[44233]: pam_unix(sudo:session): session closed for user root
Dec 06 09:31:49 compute-0 sudo[44386]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hikyfospnucjfbzouqcscrvhsexcnaof ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013509.461339-1533-51514146606996/AnsiballZ_command.py'
Dec 06 09:31:49 compute-0 sudo[44386]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:31:50 compute-0 python3.9[44388]: ansible-ansible.legacy.command Invoked with _raw_params=swapon "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 09:31:50 compute-0 kernel: Adding 1048572k swap on /swap.  Priority:-2 extents:1 across:1048572k 
Dec 06 09:31:50 compute-0 sudo[44386]: pam_unix(sudo:session): session closed for user root
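Stitched together from the tasks at 09:30:49-09:30:58 and 09:31:49-09:31:50, the full swap-file setup is:

    dd if=/dev/zero of=/swap count=1024 bs=1M      # create a 1 GiB file
    chmod 0600 /swap
    # fstab entry as recorded by the ansible.posix.mount call: src=/swap name=none opts=sw
    grep -q '^/swap ' /etc/fstab || echo '/swap none swap sw 0 0' >> /etc/fstab
    mkswap /swap
    swapon /swap
    swapon --show                                  # verify

matching the kernel's "Adding 1048572k swap on /swap" message above.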
Dec 06 09:31:50 compute-0 sudo[44539]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zxobwgeuygakckqwhuphlguccdaplawi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013510.3446982-1557-68911726252618/AnsiballZ_command.py'
Dec 06 09:31:50 compute-0 sudo[44539]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:31:50 compute-0 python3.9[44541]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/bin/update-ca-trust _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 09:31:52 compute-0 sudo[44539]: pam_unix(sudo:session): session closed for user root
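update-ca-trust regenerates the consolidated trust stores from the anchors directory, picking up the tls-ca-bundle.pem installed at 09:31:09. A sketch with an optional, rough verification step (the p11-kit trust tool is assumed present):

    update-ca-trust
    trust list | grep -ci 'certificate'    # crude count of trusted entries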
Dec 06 09:31:52 compute-0 sudo[44701]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wkzplsjhtqpunthmlzplidbweihypngl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013512.6022494-1581-243336761458512/AnsiballZ_command.py'
Dec 06 09:31:52 compute-0 sudo[44701]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:31:53 compute-0 python3.9[44703]: ansible-ansible.legacy.command Invoked with _raw_params=echo 2 >/sys/kernel/mm/ksm/run _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 09:31:53 compute-0 sudo[44701]: pam_unix(sudo:session): session closed for user root
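The two service stops at 09:31:46-09:31:48 plus this echo complete the KSM teardown; per the kernel's ksm documentation, writing 2 to /sys/kernel/mm/ksm/run stops KSM and unmerges all previously merged pages. As one sequence:

    systemctl disable --now ksm.service ksmtuned.service
    echo 2 > /sys/kernel/mm/ksm/run    # 2 = stop KSM and unmerge merged pages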
Dec 06 09:31:53 compute-0 sudo[44854]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-anvdqiwtlblzuzurzcuokfmauydccxzi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013513.408613-1605-159521263879698/AnsiballZ_systemd.py'
Dec 06 09:31:53 compute-0 sudo[44854]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:31:54 compute-0 python3.9[44856]: ansible-ansible.builtin.systemd Invoked with name=systemd-sysctl.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 06 09:31:54 compute-0 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec 06 09:31:54 compute-0 systemd[1]: Stopped Apply Kernel Variables.
Dec 06 09:31:54 compute-0 systemd[1]: Stopping Apply Kernel Variables...
Dec 06 09:31:54 compute-0 systemd[1]: Starting Apply Kernel Variables...
Dec 06 09:31:54 compute-0 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Dec 06 09:31:54 compute-0 systemd[1]: Finished Apply Kernel Variables.
Dec 06 09:31:54 compute-0 sudo[44854]: pam_unix(sudo:session): session closed for user root
Dec 06 09:31:54 compute-0 sshd-session[31228]: Connection closed by 192.168.122.30 port 55352
Dec 06 09:31:54 compute-0 sshd-session[31225]: pam_unix(sshd:session): session closed for user zuul
Dec 06 09:31:54 compute-0 systemd[1]: session-9.scope: Deactivated successfully.
Dec 06 09:31:54 compute-0 systemd[1]: session-9.scope: Consumed 2min 22.195s CPU time.
Dec 06 09:31:54 compute-0 systemd-logind[795]: Session 9 logged out. Waiting for processes to exit.
Dec 06 09:31:54 compute-0 systemd-logind[795]: Removed session 9.
Dec 06 09:32:00 compute-0 sshd-session[44886]: Accepted publickey for zuul from 192.168.122.30 port 40010 ssh2: ECDSA SHA256:r1j7aLsKAM+XxDNbzEU5vWGpGNCOaIBwc7FZdATPttA
Dec 06 09:32:00 compute-0 systemd-logind[795]: New session 10 of user zuul.
Dec 06 09:32:00 compute-0 systemd[1]: Started Session 10 of User zuul.
Dec 06 09:32:00 compute-0 sshd-session[44886]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 06 09:32:01 compute-0 python3.9[45039]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 06 09:32:02 compute-0 sudo[45193]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sexufevfliimrekyuvvowmhjuguxffvc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013522.14145-68-63039528809559/AnsiballZ_getent.py'
Dec 06 09:32:02 compute-0 sudo[45193]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:32:02 compute-0 python3.9[45195]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Dec 06 09:32:02 compute-0 sudo[45193]: pam_unix(sudo:session): session closed for user root
Dec 06 09:32:03 compute-0 sudo[45346]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-exklpgwdyrqpgogdxegrtygtieelsqfy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013523.187035-92-116053934504048/AnsiballZ_group.py'
Dec 06 09:32:03 compute-0 sudo[45346]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:32:03 compute-0 python3.9[45348]: ansible-ansible.builtin.group Invoked with gid=42476 name=openvswitch state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Dec 06 09:32:03 compute-0 groupadd[45349]: group added to /etc/group: name=openvswitch, GID=42476
Dec 06 09:32:03 compute-0 groupadd[45349]: group added to /etc/gshadow: name=openvswitch
Dec 06 09:32:03 compute-0 groupadd[45349]: new group: name=openvswitch, GID=42476
Dec 06 09:32:03 compute-0 sudo[45346]: pam_unix(sudo:session): session closed for user root
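
The group is created with a fixed GID taken from the play rather than from the local auto-allocation range, so the ID stays identical across hosts. An idempotent sketch of the same step:

    getent group openvswitch >/dev/null || groupadd --gid 42476 openvswitch
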
Dec 06 09:32:04 compute-0 sudo[45504]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lkcrysxcvexiclrwpvagdojdyekbehzw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013524.3366444-116-104451996271221/AnsiballZ_user.py'
Dec 06 09:32:04 compute-0 sudo[45504]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:32:05 compute-0 python3.9[45506]: ansible-ansible.builtin.user Invoked with comment=openvswitch user group=openvswitch groups=['hugetlbfs'] name=openvswitch shell=/sbin/nologin state=present uid=42476 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Dec 06 09:32:05 compute-0 useradd[45508]: new user: name=openvswitch, UID=42476, GID=42476, home=/home/openvswitch, shell=/sbin/nologin, from=/dev/pts/0
Dec 06 09:32:05 compute-0 useradd[45508]: add 'openvswitch' to group 'hugetlbfs'
Dec 06 09:32:05 compute-0 useradd[45508]: add 'openvswitch' to shadow group 'hugetlbfs'
Dec 06 09:32:05 compute-0 sudo[45504]: pam_unix(sudo:session): session closed for user root
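
The matching service account follows: no login shell, primary group openvswitch, and supplementary membership in hugetlbfs (presumably for hugepage-backed datapath access). Roughly equivalent to:

    useradd --uid 42476 --gid openvswitch --groups hugetlbfs \
            --shell /sbin/nologin --comment 'openvswitch user' openvswitch
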
Dec 06 09:32:06 compute-0 sudo[45664]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qsqakccgughuehgmaxhqvfjrotdwotgi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013525.6104589-146-212184907818492/AnsiballZ_setup.py'
Dec 06 09:32:06 compute-0 sudo[45664]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:32:06 compute-0 python3.9[45666]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 06 09:32:06 compute-0 sudo[45664]: pam_unix(sudo:session): session closed for user root
Dec 06 09:32:06 compute-0 sudo[45748]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-catoeihcysmmjoxkwzaqstrtnslbgsnx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013525.6104589-146-212184907818492/AnsiballZ_dnf.py'
Dec 06 09:32:06 compute-0 sudo[45748]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:32:07 compute-0 python3.9[45750]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Dec 06 09:32:09 compute-0 sudo[45748]: pam_unix(sudo:session): session closed for user root
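
The package lands in two dnf passes: the download_only=True call above only fills the cache, and the state=present call below installs from it, keeping the time spent inside the RPM transaction short. By hand:

    dnf -y install --downloadonly openvswitch
    dnf -y install openvswitch

The "Converting ... SID table entries" kernel lines during the install are an SELinux policy reload, most likely triggered by a package scriptlet inside the same transaction.
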
Dec 06 09:32:13 compute-0 sudo[45911]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yoqorzbjeavkcikyuzlnlfuaolxktman ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013533.5823264-188-82623077549557/AnsiballZ_dnf.py'
Dec 06 09:32:13 compute-0 sudo[45911]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:32:14 compute-0 python3.9[45913]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 06 09:32:24 compute-0 kernel: SELinux:  Converting 2730 SID table entries...
Dec 06 09:32:24 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Dec 06 09:32:24 compute-0 kernel: SELinux:  policy capability open_perms=1
Dec 06 09:32:24 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Dec 06 09:32:24 compute-0 kernel: SELinux:  policy capability always_check_network=0
Dec 06 09:32:24 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec 06 09:32:24 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec 06 09:32:24 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec 06 09:32:24 compute-0 groupadd[45936]: group added to /etc/group: name=unbound, GID=993
Dec 06 09:32:24 compute-0 groupadd[45936]: group added to /etc/gshadow: name=unbound
Dec 06 09:32:25 compute-0 groupadd[45936]: new group: name=unbound, GID=993
Dec 06 09:32:25 compute-0 useradd[45943]: new user: name=unbound, UID=993, GID=993, home=/var/lib/unbound, shell=/sbin/nologin, from=none
Dec 06 09:32:25 compute-0 dbus-broker-launch[771]: avc:  op=load_policy lsm=selinux seqno=9 res=1
Dec 06 09:32:25 compute-0 systemd[1]: Started daily update of the root trust anchor for DNSSEC.
Dec 06 09:32:26 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec 06 09:32:26 compute-0 systemd[1]: Starting man-db-cache-update.service...
Dec 06 09:32:26 compute-0 systemd[1]: Reloading.
Dec 06 09:32:26 compute-0 systemd-rc-local-generator[46440]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 09:32:26 compute-0 systemd-sysv-generator[46444]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 09:32:27 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Dec 06 09:32:27 compute-0 sudo[45911]: pam_unix(sudo:session): session closed for user root
Dec 06 09:32:27 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec 06 09:32:27 compute-0 systemd[1]: Finished man-db-cache-update.service.
Dec 06 09:32:27 compute-0 systemd[1]: man-db-cache-update.service: Consumed 1.009s CPU time.
Dec 06 09:32:27 compute-0 systemd[1]: run-rf42192781ac443f9a0ad5a9955f0232d.service: Deactivated successfully.
Dec 06 09:32:32 compute-0 sudo[47010]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hexfebvtykledsjrhcxdjuzyueloqivz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013551.9441664-212-71499857530906/AnsiballZ_systemd.py'
Dec 06 09:32:32 compute-0 sudo[47010]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:32:32 compute-0 python3.9[47012]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec 06 09:32:34 compute-0 systemd[1]: Reloading.
Dec 06 09:32:34 compute-0 systemd-rc-local-generator[47042]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 09:32:34 compute-0 systemd-sysv-generator[47045]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 09:32:34 compute-0 systemd[1]: Starting Open vSwitch Database Unit...
Dec 06 09:32:34 compute-0 chown[47053]: /usr/bin/chown: cannot access '/run/openvswitch': No such file or directory
Dec 06 09:32:34 compute-0 ovs-ctl[47058]: /etc/openvswitch/conf.db does not exist ... (warning).
Dec 06 09:32:34 compute-0 ovs-ctl[47058]: Creating empty database /etc/openvswitch/conf.db [  OK  ]
Dec 06 09:32:34 compute-0 ovs-ctl[47058]: Starting ovsdb-server [  OK  ]
Dec 06 09:32:34 compute-0 ovs-vsctl[47107]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait -- init -- set Open_vSwitch . db-version=8.5.1
Dec 06 09:32:34 compute-0 ovs-vsctl[47123]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait set Open_vSwitch . ovs-version=3.3.5-115.el9s "external-ids:system-id=\"d39b5be8-d4cf-41c7-9a64-1ee03801f4e1\"" "external-ids:rundir=\"/var/run/openvswitch\"" "system-type=\"centos\"" "system-version=\"9\""
Dec 06 09:32:34 compute-0 ovs-ctl[47058]: Configuring Open vSwitch system IDs [  OK  ]
Dec 06 09:32:34 compute-0 ovs-vsctl[47132]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-0
Dec 06 09:32:34 compute-0 ovs-ctl[47058]: Enabling remote OVSDB managers [  OK  ]
Dec 06 09:32:34 compute-0 systemd[1]: Started Open vSwitch Database Unit.
Dec 06 09:32:34 compute-0 systemd[1]: Starting Open vSwitch Delete Transient Ports...
Dec 06 09:32:34 compute-0 systemd[1]: Finished Open vSwitch Delete Transient Ports.
Dec 06 09:32:34 compute-0 systemd[1]: Starting Open vSwitch Forwarding Unit...
Dec 06 09:32:34 compute-0 kernel: openvswitch: Open vSwitch switching datapath
Dec 06 09:32:34 compute-0 ovs-ctl[47178]: Inserting openvswitch module [  OK  ]
Dec 06 09:32:34 compute-0 ovs-ctl[47147]: Starting ovs-vswitchd [  OK  ]
Dec 06 09:32:35 compute-0 ovs-ctl[47147]: Enabling remote OVSDB managers [  OK  ]
Dec 06 09:32:35 compute-0 systemd[1]: Started Open vSwitch Forwarding Unit.
Dec 06 09:32:35 compute-0 ovs-vsctl[47196]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-0
Dec 06 09:32:35 compute-0 systemd[1]: Starting Open vSwitch...
Dec 06 09:32:35 compute-0 systemd[1]: Finished Open vSwitch.
Dec 06 09:32:35 compute-0 sudo[47010]: pam_unix(sudo:session): session closed for user root
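
With the package in place, the systemd task enables and starts openvswitch.service; ovs-ctl then seeds /etc/openvswitch/conf.db (hence the harmless "does not exist" warning), starts ovsdb-server and ovs-vswitchd, and records the identifiers seen in the ovs-vsctl calls. To reproduce and inspect:

    systemctl enable --now openvswitch.service
    ovs-vsctl get Open_vSwitch . external-ids:system-id
    ovs-vsctl get Open_vSwitch . ovs-version
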
Dec 06 09:32:36 compute-0 python3.9[47347]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 06 09:32:36 compute-0 sudo[47497]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ecklynhdkatujgtqbpgenihmjlihkwew ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013556.4291258-266-154262262953060/AnsiballZ_sefcontext.py'
Dec 06 09:32:36 compute-0 sudo[47497]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:32:37 compute-0 python3.9[47499]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Dec 06 09:32:38 compute-0 kernel: SELinux:  Converting 2744 SID table entries...
Dec 06 09:32:38 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Dec 06 09:32:38 compute-0 kernel: SELinux:  policy capability open_perms=1
Dec 06 09:32:38 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Dec 06 09:32:38 compute-0 kernel: SELinux:  policy capability always_check_network=0
Dec 06 09:32:38 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec 06 09:32:38 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec 06 09:32:38 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec 06 09:32:38 compute-0 sudo[47497]: pam_unix(sudo:session): session closed for user root
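
The sefcontext task registers a persistent file-context rule so everything under /var/lib/edpm-config is labeled container_file_t and remains accessible to containers. A semanage sketch of the same rule (semanage reloads the policy by default, matching reload=True, which is what produces the second batch of SID-table kernel lines):

    semanage fcontext -a -t container_file_t -r s0 '/var/lib/edpm-config(/.*)?'
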
Dec 06 09:32:39 compute-0 python3.9[47654]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 06 09:32:40 compute-0 sudo[47810]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rfkqenaouvbttaquhpvrruumweajvgvg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013560.3776221-320-167205509393569/AnsiballZ_dnf.py'
Dec 06 09:32:40 compute-0 dbus-broker-launch[771]: avc:  op=load_policy lsm=selinux seqno=10 res=1
Dec 06 09:32:40 compute-0 sudo[47810]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:32:40 compute-0 python3.9[47812]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 06 09:32:42 compute-0 sudo[47810]: pam_unix(sudo:session): session closed for user root
Dec 06 09:32:43 compute-0 sudo[47963]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cyfrtqopwhirnuxmiifoqrcqznccbnvi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013562.6593025-344-605968594707/AnsiballZ_command.py'
Dec 06 09:32:43 compute-0 sudo[47963]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:32:43 compute-0 python3.9[47965]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 09:32:44 compute-0 sudo[47963]: pam_unix(sudo:session): session closed for user root
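
rpm -V re-checks every file of the just-installed packages against the RPM database; it prints nothing and exits 0 when all packages verify clean, so the task doubles as an integrity gate. For a subset of the list above:

    rpm -V nftables NetworkManager openstack-selinux
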
Dec 06 09:32:45 compute-0 sudo[48250]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-krwznkwphecqkeypczqyfyraiundlbjs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013564.5636609-368-48856763591559/AnsiballZ_file.py'
Dec 06 09:32:45 compute-0 sudo[48250]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:32:45 compute-0 python3.9[48252]: ansible-ansible.builtin.file Invoked with mode=0750 path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Dec 06 09:32:45 compute-0 sudo[48250]: pam_unix(sudo:session): session closed for user root
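
Because the fcontext rule from 09:32:37 is already registered, creating the directory with the right mode and relabeling it is all this file task needs to do; by hand:

    mkdir -p /var/lib/edpm-config && chmod 0750 /var/lib/edpm-config
    restorecon -Rv /var/lib/edpm-config
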
Dec 06 09:32:46 compute-0 python3.9[48402]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 06 09:32:47 compute-0 sudo[48554]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-etjzhcgsoppfvkmelbjwwiyvjyzsdvaa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013566.8108342-416-166761771005957/AnsiballZ_dnf.py'
Dec 06 09:32:47 compute-0 sudo[48554]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:32:47 compute-0 python3.9[48556]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 06 09:32:49 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec 06 09:32:49 compute-0 systemd[1]: Starting man-db-cache-update.service...
Dec 06 09:32:49 compute-0 systemd[1]: Reloading.
Dec 06 09:32:49 compute-0 systemd-sysv-generator[48598]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 09:32:49 compute-0 systemd-rc-local-generator[48594]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 09:32:49 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Dec 06 09:32:49 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec 06 09:32:49 compute-0 systemd[1]: Finished man-db-cache-update.service.
Dec 06 09:32:49 compute-0 systemd[1]: run-re26409a8f10a4530981ee1472405070e.service: Deactivated successfully.
Dec 06 09:32:49 compute-0 sudo[48554]: pam_unix(sudo:session): session closed for user root
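
NetworkManager-ovs ships the OVS device plugin, but a running NetworkManager only loads plugins at startup, which is why the role restarts the daemon in the next task. To confirm the plugin is on disk:

    rpm -q NetworkManager-ovs
    ls /usr/lib64/NetworkManager/*/libnm-device-plugin-ovs.so
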
Dec 06 09:32:51 compute-0 sudo[48871]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-axadapqwyutnhiwfeokcasjmnhovkejc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013570.9114754-440-86695770010805/AnsiballZ_systemd.py'
Dec 06 09:32:51 compute-0 sudo[48871]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:32:51 compute-0 python3.9[48873]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 06 09:32:51 compute-0 systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Dec 06 09:32:51 compute-0 systemd[1]: Stopped Network Manager Wait Online.
Dec 06 09:32:51 compute-0 systemd[1]: Stopping Network Manager Wait Online...
Dec 06 09:32:51 compute-0 systemd[1]: Stopping Network Manager...
Dec 06 09:32:51 compute-0 NetworkManager[7201]: <info>  [1765013571.5747] caught SIGTERM, shutting down normally.
Dec 06 09:32:51 compute-0 NetworkManager[7201]: <info>  [1765013571.5770] dhcp4 (eth0): canceled DHCP transaction
Dec 06 09:32:51 compute-0 NetworkManager[7201]: <info>  [1765013571.5771] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Dec 06 09:32:51 compute-0 NetworkManager[7201]: <info>  [1765013571.5771] dhcp4 (eth0): state changed no lease
Dec 06 09:32:51 compute-0 NetworkManager[7201]: <info>  [1765013571.5774] manager: NetworkManager state is now CONNECTED_SITE
Dec 06 09:32:51 compute-0 NetworkManager[7201]: <info>  [1765013571.5859] exiting (success)
Dec 06 09:32:51 compute-0 systemd[1]: Starting Network Manager Script Dispatcher Service...
Dec 06 09:32:51 compute-0 systemd[1]: NetworkManager.service: Deactivated successfully.
Dec 06 09:32:51 compute-0 systemd[1]: Stopped Network Manager.
Dec 06 09:32:51 compute-0 systemd[1]: NetworkManager.service: Consumed 10.244s CPU time, 4.1M memory peak, read 0B from disk, written 34.0K to disk.
Dec 06 09:32:51 compute-0 systemd[1]: Starting Network Manager...
Dec 06 09:32:51 compute-0 systemd[1]: Started Network Manager Script Dispatcher Service.
Dec 06 09:32:51 compute-0 NetworkManager[48882]: <info>  [1765013571.6489] NetworkManager (version 1.54.1-1.el9) is starting... (after a restart, boot:eb1a7567-b576-49d7-a613-e357bf119324)
Dec 06 09:32:51 compute-0 NetworkManager[48882]: <info>  [1765013571.6492] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Dec 06 09:32:51 compute-0 NetworkManager[48882]: <info>  [1765013571.6569] manager[0x55f1b750e090]: monitoring kernel firmware directory '/lib/firmware'.
Dec 06 09:32:51 compute-0 systemd[1]: Starting Hostname Service...
Dec 06 09:32:51 compute-0 systemd[1]: Started Hostname Service.
Dec 06 09:32:51 compute-0 NetworkManager[48882]: <info>  [1765013571.7556] hostname: hostname: using hostnamed
Dec 06 09:32:51 compute-0 NetworkManager[48882]: <info>  [1765013571.7560] hostname: static hostname changed from (none) to "compute-0"
Dec 06 09:32:51 compute-0 NetworkManager[48882]: <info>  [1765013571.7567] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Dec 06 09:32:51 compute-0 NetworkManager[48882]: <info>  [1765013571.7572] manager[0x55f1b750e090]: rfkill: Wi-Fi hardware radio set enabled
Dec 06 09:32:51 compute-0 NetworkManager[48882]: <info>  [1765013571.7572] manager[0x55f1b750e090]: rfkill: WWAN hardware radio set enabled
Dec 06 09:32:51 compute-0 NetworkManager[48882]: <info>  [1765013571.7598] Loaded device plugin: NMOvsFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-ovs.so)
Dec 06 09:32:51 compute-0 NetworkManager[48882]: <info>  [1765013571.7608] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Dec 06 09:32:51 compute-0 NetworkManager[48882]: <info>  [1765013571.7609] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Dec 06 09:32:51 compute-0 NetworkManager[48882]: <info>  [1765013571.7610] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Dec 06 09:32:51 compute-0 NetworkManager[48882]: <info>  [1765013571.7610] manager: Networking is enabled by state file
Dec 06 09:32:51 compute-0 NetworkManager[48882]: <info>  [1765013571.7613] settings: Loaded settings plugin: keyfile (internal)
Dec 06 09:32:51 compute-0 NetworkManager[48882]: <info>  [1765013571.7617] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Dec 06 09:32:51 compute-0 NetworkManager[48882]: <info>  [1765013571.7650] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Dec 06 09:32:51 compute-0 NetworkManager[48882]: <info>  [1765013571.7663] dhcp: init: Using DHCP client 'internal'
Dec 06 09:32:51 compute-0 NetworkManager[48882]: <info>  [1765013571.7666] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Dec 06 09:32:51 compute-0 NetworkManager[48882]: <info>  [1765013571.7672] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 06 09:32:51 compute-0 NetworkManager[48882]: <info>  [1765013571.7678] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Dec 06 09:32:51 compute-0 NetworkManager[48882]: <info>  [1765013571.7689] device (lo): Activation: starting connection 'lo' (40483b14-1904-462e-975f-deec93e74606)
Dec 06 09:32:51 compute-0 NetworkManager[48882]: <info>  [1765013571.7698] device (eth0): carrier: link connected
Dec 06 09:32:51 compute-0 NetworkManager[48882]: <info>  [1765013571.7704] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Dec 06 09:32:51 compute-0 NetworkManager[48882]: <info>  [1765013571.7711] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Dec 06 09:32:51 compute-0 NetworkManager[48882]: <info>  [1765013571.7712] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Dec 06 09:32:51 compute-0 NetworkManager[48882]: <info>  [1765013571.7720] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Dec 06 09:32:51 compute-0 NetworkManager[48882]: <info>  [1765013571.7728] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Dec 06 09:32:51 compute-0 NetworkManager[48882]: <info>  [1765013571.7737] device (eth1): carrier: link connected
Dec 06 09:32:51 compute-0 NetworkManager[48882]: <info>  [1765013571.7742] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Dec 06 09:32:51 compute-0 NetworkManager[48882]: <info>  [1765013571.7749] manager: (eth1): assume: will attempt to assume matching connection 'ci-private-network' (6151fa65-6cef-549f-91ba-9f68f8a2cb73) (indicated)
Dec 06 09:32:51 compute-0 NetworkManager[48882]: <info>  [1765013571.7750] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Dec 06 09:32:51 compute-0 NetworkManager[48882]: <info>  [1765013571.7756] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Dec 06 09:32:51 compute-0 NetworkManager[48882]: <info>  [1765013571.7766] device (eth1): Activation: starting connection 'ci-private-network' (6151fa65-6cef-549f-91ba-9f68f8a2cb73)
Dec 06 09:32:51 compute-0 systemd[1]: Started Network Manager.
Dec 06 09:32:51 compute-0 NetworkManager[48882]: <info>  [1765013571.7779] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Dec 06 09:32:51 compute-0 NetworkManager[48882]: <info>  [1765013571.7815] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Dec 06 09:32:51 compute-0 NetworkManager[48882]: <info>  [1765013571.7825] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Dec 06 09:32:51 compute-0 NetworkManager[48882]: <info>  [1765013571.7831] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Dec 06 09:32:51 compute-0 NetworkManager[48882]: <info>  [1765013571.7836] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Dec 06 09:32:51 compute-0 NetworkManager[48882]: <info>  [1765013571.7844] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Dec 06 09:32:51 compute-0 NetworkManager[48882]: <info>  [1765013571.7850] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Dec 06 09:32:51 compute-0 NetworkManager[48882]: <info>  [1765013571.7857] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Dec 06 09:32:51 compute-0 NetworkManager[48882]: <info>  [1765013571.7866] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Dec 06 09:32:51 compute-0 NetworkManager[48882]: <info>  [1765013571.7893] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Dec 06 09:32:51 compute-0 NetworkManager[48882]: <info>  [1765013571.7898] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Dec 06 09:32:51 compute-0 NetworkManager[48882]: <info>  [1765013571.7909] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Dec 06 09:32:51 compute-0 NetworkManager[48882]: <info>  [1765013571.7924] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Dec 06 09:32:51 compute-0 systemd[1]: Starting Network Manager Wait Online...
Dec 06 09:32:51 compute-0 NetworkManager[48882]: <info>  [1765013571.7957] dhcp4 (eth0): state changed new lease, address=38.102.83.27
Dec 06 09:32:51 compute-0 NetworkManager[48882]: <info>  [1765013571.7971] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Dec 06 09:32:51 compute-0 NetworkManager[48882]: <info>  [1765013571.8046] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Dec 06 09:32:51 compute-0 NetworkManager[48882]: <info>  [1765013571.8048] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Dec 06 09:32:51 compute-0 NetworkManager[48882]: <info>  [1765013571.8053] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Dec 06 09:32:51 compute-0 NetworkManager[48882]: <info>  [1765013571.8062] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Dec 06 09:32:51 compute-0 NetworkManager[48882]: <info>  [1765013571.8068] device (lo): Activation: successful, device activated.
Dec 06 09:32:51 compute-0 NetworkManager[48882]: <info>  [1765013571.8076] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Dec 06 09:32:51 compute-0 NetworkManager[48882]: <info>  [1765013571.8079] manager: NetworkManager state is now CONNECTED_LOCAL
Dec 06 09:32:51 compute-0 NetworkManager[48882]: <info>  [1765013571.8082] device (eth1): Activation: successful, device activated.
Dec 06 09:32:51 compute-0 NetworkManager[48882]: <info>  [1765013571.8118] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Dec 06 09:32:51 compute-0 NetworkManager[48882]: <info>  [1765013571.8119] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Dec 06 09:32:51 compute-0 NetworkManager[48882]: <info>  [1765013571.8122] manager: NetworkManager state is now CONNECTED_SITE
Dec 06 09:32:51 compute-0 NetworkManager[48882]: <info>  [1765013571.8125] device (eth0): Activation: successful, device activated.
Dec 06 09:32:51 compute-0 NetworkManager[48882]: <info>  [1765013571.8131] manager: NetworkManager state is now CONNECTED_GLOBAL
Dec 06 09:32:51 compute-0 NetworkManager[48882]: <info>  [1765013571.8134] manager: startup complete
Dec 06 09:32:51 compute-0 sudo[48871]: pam_unix(sudo:session): session closed for user root
Dec 06 09:32:51 compute-0 systemd[1]: Finished Network Manager Wait Online.
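
The restart drops the DHCP lease, re-reads the configuration, loads the freshly installed OVS plugin (09:32:51.7598), and re-assumes lo/eth0/eth1 without tearing their state down; Wait Online only finishes once the manager reports "startup complete". The equivalent sequence, assuming root:

    systemctl restart NetworkManager
    nm-online -s -q --timeout 60
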
Dec 06 09:32:52 compute-0 sudo[49097]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iiwswpyyochyvzrivynbvfgyplnckeip ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013572.1387925-464-129862493615118/AnsiballZ_dnf.py'
Dec 06 09:32:52 compute-0 sudo[49097]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:32:52 compute-0 python3.9[49099]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 06 09:32:57 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec 06 09:32:57 compute-0 systemd[1]: Starting man-db-cache-update.service...
Dec 06 09:32:57 compute-0 systemd[1]: Reloading.
Dec 06 09:32:57 compute-0 systemd-rc-local-generator[49149]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 09:32:57 compute-0 systemd-sysv-generator[49154]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 09:32:57 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Dec 06 09:32:58 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec 06 09:32:58 compute-0 systemd[1]: Finished man-db-cache-update.service.
Dec 06 09:32:58 compute-0 systemd[1]: run-r16c98f1ff1654240bf89332ff9e67ae7.service: Deactivated successfully.
Dec 06 09:32:58 compute-0 sudo[49097]: pam_unix(sudo:session): session closed for user root
Dec 06 09:33:01 compute-0 sudo[49555]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jvevtnffuodmnblukmrckjqsetmbqrdm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013580.9129028-500-227980970404398/AnsiballZ_stat.py'
Dec 06 09:33:01 compute-0 sudo[49555]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:33:01 compute-0 python3.9[49557]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 06 09:33:01 compute-0 sudo[49555]: pam_unix(sudo:session): session closed for user root
Dec 06 09:33:01 compute-0 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Dec 06 09:33:02 compute-0 sudo[49707]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nlbgbipcehheucbftwnbtlamhzccccyr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013581.7670352-527-143320455376076/AnsiballZ_ini_file.py'
Dec 06 09:33:02 compute-0 sudo[49707]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:33:02 compute-0 python3.9[49709]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=no-auto-default path=/etc/NetworkManager/NetworkManager.conf section=main state=present value=* exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:33:02 compute-0 sudo[49707]: pam_unix(sudo:session): session closed for user root
Dec 06 09:33:03 compute-0 sudo[49861]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vwztjzsrqfffxehzpznexfjeqyqyqexg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013582.8261452-557-149161145551180/AnsiballZ_ini_file.py'
Dec 06 09:33:03 compute-0 sudo[49861]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:33:03 compute-0 python3.9[49863]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:33:03 compute-0 sudo[49861]: pam_unix(sudo:session): session closed for user root
Dec 06 09:33:03 compute-0 sudo[50013]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tscdsxzrprxoevdvrbtzjliwgcznnyec ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013583.5367248-557-15640196573595/AnsiballZ_ini_file.py'
Dec 06 09:33:03 compute-0 sudo[50013]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:33:04 compute-0 python3.9[50015]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:33:04 compute-0 sudo[50013]: pam_unix(sudo:session): session closed for user root
Dec 06 09:33:04 compute-0 sudo[50165]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bmkmfvjondkngbeuzbhepaijcaaufijl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013584.3525481-602-33139421670858/AnsiballZ_ini_file.py'
Dec 06 09:33:04 compute-0 sudo[50165]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:33:04 compute-0 python3.9[50167]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:33:04 compute-0 sudo[50165]: pam_unix(sudo:session): session closed for user root
Dec 06 09:33:05 compute-0 sudo[50317]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-amjpozpoztqyxrnrodeyzzwperqlplrb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013585.0281608-602-251766704270999/AnsiballZ_ini_file.py'
Dec 06 09:33:05 compute-0 sudo[50317]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:33:05 compute-0 python3.9[50319]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:33:05 compute-0 sudo[50317]: pam_unix(sudo:session): session closed for user root
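
The five ini_file tasks prepare NetworkManager for os-net-config: no-auto-default=* stops NM from auto-generating "Wired connection" profiles on newly seen NICs, and removing the cloud-init dns/rc-manager overrides lets NM manage resolv.conf again. A sketch of the same edits with crudini (installed at 09:32:40); note crudini errors on a missing file where the module would create it:

    crudini --set /etc/NetworkManager/NetworkManager.conf main no-auto-default '*'
    crudini --del /etc/NetworkManager/NetworkManager.conf main dns
    crudini --del /etc/NetworkManager/NetworkManager.conf main rc-manager
    crudini --del /etc/NetworkManager/conf.d/99-cloud-init.conf main dns
    crudini --del /etc/NetworkManager/conf.d/99-cloud-init.conf main rc-manager
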
Dec 06 09:33:06 compute-0 sudo[50469]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tlzhukijnrfwrtmlbgtndijnkagejnvt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013585.9093392-647-226015058441756/AnsiballZ_stat.py'
Dec 06 09:33:06 compute-0 sudo[50469]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:33:06 compute-0 python3.9[50471]: ansible-ansible.legacy.stat Invoked with path=/etc/dhcp/dhclient-enter-hooks follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:33:06 compute-0 sudo[50469]: pam_unix(sudo:session): session closed for user root
Dec 06 09:33:07 compute-0 sudo[50592]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qklaedvvnohkjyguvcpvagqbodcyfkxh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013585.9093392-647-226015058441756/AnsiballZ_copy.py'
Dec 06 09:33:07 compute-0 sudo[50592]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:33:07 compute-0 python3.9[50594]: ansible-ansible.legacy.copy Invoked with dest=/etc/dhcp/dhclient-enter-hooks mode=0755 src=/home/zuul/.ansible/tmp/ansible-tmp-1765013585.9093392-647-226015058441756/.source _original_basename=.vs_ipouj follow=False checksum=f6278a40de79a9841f6ed1fc584538225566990c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:33:07 compute-0 sudo[50592]: pam_unix(sudo:session): session closed for user root
Dec 06 09:33:07 compute-0 sudo[50744]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hcgvyrzgakoofzicijsltypzwpwqcpla ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013587.44736-692-19784827994792/AnsiballZ_file.py'
Dec 06 09:33:07 compute-0 sudo[50744]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:33:08 compute-0 python3.9[50746]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/os-net-config state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:33:08 compute-0 sudo[50744]: pam_unix(sudo:session): session closed for user root
Dec 06 09:33:08 compute-0 sudo[50896]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bywbjwdnknifzbzqtdebjteyxeiipdnl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013588.235401-716-22137079021029/AnsiballZ_edpm_os_net_config_mappings.py'
Dec 06 09:33:08 compute-0 sudo[50896]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:33:08 compute-0 python3.9[50898]: ansible-edpm_os_net_config_mappings Invoked with net_config_data_lookup={}
Dec 06 09:33:08 compute-0 sudo[50896]: pam_unix(sudo:session): session closed for user root
Dec 06 09:33:09 compute-0 sudo[51048]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iidwtzfiozqvcgqqnunteuxatyoboaub ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013589.2480195-743-96613925741550/AnsiballZ_file.py'
Dec 06 09:33:09 compute-0 sudo[51048]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:33:09 compute-0 python3.9[51050]: ansible-ansible.builtin.file Invoked with path=/var/lib/edpm-config/scripts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:33:09 compute-0 sudo[51048]: pam_unix(sudo:session): session closed for user root
Dec 06 09:33:10 compute-0 sudo[51200]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ilwrsrmcxlynneqyabgzmzscnhyototy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013590.2759132-773-84438183000257/AnsiballZ_stat.py'
Dec 06 09:33:10 compute-0 sudo[51200]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:33:10 compute-0 sudo[51200]: pam_unix(sudo:session): session closed for user root
Dec 06 09:33:11 compute-0 sudo[51323]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rbwojkuhdxqvukzapphzlppoaslznpbh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013590.2759132-773-84438183000257/AnsiballZ_copy.py'
Dec 06 09:33:11 compute-0 sudo[51323]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:33:11 compute-0 sudo[51323]: pam_unix(sudo:session): session closed for user root
Dec 06 09:33:12 compute-0 sudo[51475]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oyibjdwffbuqvcqjspqouwsdocrdrkdh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013591.6033733-818-245118485148130/AnsiballZ_slurp.py'
Dec 06 09:33:12 compute-0 sudo[51475]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:33:12 compute-0 python3.9[51477]: ansible-ansible.builtin.slurp Invoked with path=/etc/os-net-config/config.yaml src=/etc/os-net-config/config.yaml
Dec 06 09:33:12 compute-0 sudo[51475]: pam_unix(sudo:session): session closed for user root
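
slurp hands the controller the rendered network configuration base64-encoded, so the play can inspect it without another copy task; the same payload by hand:

    base64 -w0 /etc/os-net-config/config.yaml
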
Dec 06 09:33:13 compute-0 sudo[51650]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yddtqytvvdnvrbnhkrzobnzoigyshvxq ; ANSIBLE_ASYNC_DIR=\'~/.ansible_async\' /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013592.6511362-845-179158062912121/async_wrapper.py j224323356287 300 /home/zuul/.ansible/tmp/ansible-tmp-1765013592.6511362-845-179158062912121/AnsiballZ_edpm_os_net_config.py _'
Dec 06 09:33:13 compute-0 sudo[51650]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:33:13 compute-0 ansible-async_wrapper.py[51652]: Invoked with j224323356287 300 /home/zuul/.ansible/tmp/ansible-tmp-1765013592.6511362-845-179158062912121/AnsiballZ_edpm_os_net_config.py _
Dec 06 09:33:13 compute-0 ansible-async_wrapper.py[51655]: Starting module and watcher
Dec 06 09:33:13 compute-0 ansible-async_wrapper.py[51655]: Start watching 51656 (300)
Dec 06 09:33:13 compute-0 ansible-async_wrapper.py[51656]: Start module (51656)
Dec 06 09:33:13 compute-0 ansible-async_wrapper.py[51652]: Return async_wrapper task started.
Dec 06 09:33:13 compute-0 sudo[51650]: pam_unix(sudo:session): session closed for user root
Dec 06 09:33:13 compute-0 python3.9[51657]: ansible-edpm_os_net_config Invoked with cleanup=True config_file=/etc/os-net-config/config.yaml debug=True detailed_exit_codes=True safe_defaults=False use_nmstate=True
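
The apply step runs through Ansible's async wrapper: the module is detached with a 300-second limit while a watcher process polls it, so losing the SSH session during the network rewire cannot kill the task mid-flight. The wrapper reports into a status file named after the job id shown above:

    cat ~zuul/.ansible_async/j224323356287
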
Dec 06 09:33:14 compute-0 kernel: cfg80211: Loading compiled-in X.509 certificates for regulatory database
Dec 06 09:33:14 compute-0 kernel: Loaded X.509 cert 'sforshee: 00b28ddf47aef9cea7'
Dec 06 09:33:14 compute-0 kernel: Loaded X.509 cert 'wens: 61c038651aabdcf94bd0ac7ff06c7248db18c600'
Dec 06 09:33:14 compute-0 kernel: platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
Dec 06 09:33:14 compute-0 kernel: cfg80211: failed to load regulatory.db
Dec 06 09:33:16 compute-0 NetworkManager[48882]: <info>  [1765013596.1358] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51658 uid=0 result="success"
Dec 06 09:33:16 compute-0 NetworkManager[48882]: <info>  [1765013596.1394] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51658 uid=0 result="success"
Dec 06 09:33:16 compute-0 NetworkManager[48882]: <info>  [1765013596.2233] manager: (br-ex): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/4)
Dec 06 09:33:16 compute-0 NetworkManager[48882]: <info>  [1765013596.2237] audit: op="connection-add" uuid="d06b796e-eff3-47e6-9580-60f48bdc3b4a" name="br-ex-br" pid=51658 uid=0 result="success"
Dec 06 09:33:16 compute-0 NetworkManager[48882]: <info>  [1765013596.2259] manager: (br-ex): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/5)
Dec 06 09:33:16 compute-0 NetworkManager[48882]: <info>  [1765013596.2261] audit: op="connection-add" uuid="f3ea87c2-8306-4b8f-9729-5c82dc71ef5e" name="br-ex-port" pid=51658 uid=0 result="success"
Dec 06 09:33:16 compute-0 NetworkManager[48882]: <info>  [1765013596.2279] manager: (eth1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/6)
Dec 06 09:33:16 compute-0 NetworkManager[48882]: <info>  [1765013596.2281] audit: op="connection-add" uuid="11b7839f-0ef4-4c44-998e-24bd4d572348" name="eth1-port" pid=51658 uid=0 result="success"
Dec 06 09:33:16 compute-0 NetworkManager[48882]: <info>  [1765013596.2298] manager: (vlan20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/7)
Dec 06 09:33:16 compute-0 NetworkManager[48882]: <info>  [1765013596.2300] audit: op="connection-add" uuid="fefc03af-e59b-4845-a72f-9adee1229bca" name="vlan20-port" pid=51658 uid=0 result="success"
Dec 06 09:33:16 compute-0 NetworkManager[48882]: <info>  [1765013596.2317] manager: (vlan21): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/8)
Dec 06 09:33:16 compute-0 NetworkManager[48882]: <info>  [1765013596.2319] audit: op="connection-add" uuid="04ade02e-3073-47ab-a59b-268630689f01" name="vlan21-port" pid=51658 uid=0 result="success"
Dec 06 09:33:16 compute-0 NetworkManager[48882]: <info>  [1765013596.2335] manager: (vlan22): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/9)
Dec 06 09:33:16 compute-0 NetworkManager[48882]: <info>  [1765013596.2338] audit: op="connection-add" uuid="f7cf4ddb-e9d9-4376-a71b-b818cd6520cf" name="vlan22-port" pid=51658 uid=0 result="success"
Dec 06 09:33:16 compute-0 NetworkManager[48882]: <info>  [1765013596.2353] manager: (vlan23): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/10)
Dec 06 09:33:16 compute-0 NetworkManager[48882]: <info>  [1765013596.2356] audit: op="connection-add" uuid="9e3fc56d-219f-411a-be95-7518d29c56f3" name="vlan23-port" pid=51658 uid=0 result="success"
Dec 06 09:33:16 compute-0 NetworkManager[48882]: <info>  [1765013596.2382] audit: op="connection-update" uuid="5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03" name="System eth0" args="802-3-ethernet.mtu,ipv4.dhcp-timeout,ipv4.dhcp-client-id,connection.autoconnect-priority,connection.timestamp,ipv6.addr-gen-mode,ipv6.dhcp-timeout,ipv6.method" pid=51658 uid=0 result="success"
Dec 06 09:33:16 compute-0 NetworkManager[48882]: <info>  [1765013596.2403] manager: (br-ex): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/11)
Dec 06 09:33:16 compute-0 NetworkManager[48882]: <info>  [1765013596.2406] audit: op="connection-add" uuid="effa1e3c-ea27-4e29-92bf-336c557377b9" name="br-ex-if" pid=51658 uid=0 result="success"
Dec 06 09:33:16 compute-0 NetworkManager[48882]: <info>  [1765013596.2453] audit: op="connection-update" uuid="6151fa65-6cef-549f-91ba-9f68f8a2cb73" name="ci-private-network" args="ovs-interface.type,ipv6.dns,ipv6.addr-gen-mode,ipv6.addresses,ipv6.method,ipv6.routes,ipv6.routing-rules,ipv4.dns,ipv4.addresses,ipv4.method,ipv4.never-default,ipv4.routes,ipv4.routing-rules,connection.port-type,connection.slave-type,connection.controller,connection.master,connection.timestamp,ovs-external-ids.data" pid=51658 uid=0 result="success"
Dec 06 09:33:16 compute-0 NetworkManager[48882]: <info>  [1765013596.2475] manager: (vlan20): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/12)
Dec 06 09:33:16 compute-0 NetworkManager[48882]: <info>  [1765013596.2477] audit: op="connection-add" uuid="7bc011b9-8b26-4ceb-8792-16af1c51b18b" name="vlan20-if" pid=51658 uid=0 result="success"
Dec 06 09:33:16 compute-0 NetworkManager[48882]: <info>  [1765013596.2497] manager: (vlan21): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/13)
Dec 06 09:33:16 compute-0 NetworkManager[48882]: <info>  [1765013596.2500] audit: op="connection-add" uuid="f40c8203-a736-44ba-b87c-e14feb441d1e" name="vlan21-if" pid=51658 uid=0 result="success"
Dec 06 09:33:16 compute-0 NetworkManager[48882]: <info>  [1765013596.2521] manager: (vlan22): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/14)
Dec 06 09:33:16 compute-0 NetworkManager[48882]: <info>  [1765013596.2524] audit: op="connection-add" uuid="b558af2b-19fb-4a8b-a432-c92141b38d13" name="vlan22-if" pid=51658 uid=0 result="success"
Dec 06 09:33:16 compute-0 NetworkManager[48882]: <info>  [1765013596.2558] manager: (vlan23): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/15)
Dec 06 09:33:16 compute-0 NetworkManager[48882]: <info>  [1765013596.2562] audit: op="connection-add" uuid="eccfa609-5d3a-480b-b546-ce9d96de7c68" name="vlan23-if" pid=51658 uid=0 result="success"
Dec 06 09:33:16 compute-0 NetworkManager[48882]: <info>  [1765013596.2581] audit: op="connection-delete" uuid="801d2662-229c-3ec2-ab7b-8017b4489ad7" name="Wired connection 1" pid=51658 uid=0 result="success"
Dec 06 09:33:16 compute-0 NetworkManager[48882]: <info>  [1765013596.2601] device (br-ex)[Open vSwitch Bridge]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 06 09:33:16 compute-0 NetworkManager[48882]: <info>  [1765013596.2617] device (br-ex)[Open vSwitch Bridge]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec 06 09:33:16 compute-0 NetworkManager[48882]: <info>  [1765013596.2623] device (br-ex)[Open vSwitch Bridge]: Activation: starting connection 'br-ex-br' (d06b796e-eff3-47e6-9580-60f48bdc3b4a)
Dec 06 09:33:16 compute-0 NetworkManager[48882]: <info>  [1765013596.2625] audit: op="connection-activate" uuid="d06b796e-eff3-47e6-9580-60f48bdc3b4a" name="br-ex-br" pid=51658 uid=0 result="success"
Dec 06 09:33:16 compute-0 NetworkManager[48882]: <info>  [1765013596.2628] device (br-ex)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 06 09:33:16 compute-0 NetworkManager[48882]: <info>  [1765013596.2640] device (br-ex)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec 06 09:33:16 compute-0 NetworkManager[48882]: <info>  [1765013596.2646] device (br-ex)[Open vSwitch Port]: Activation: starting connection 'br-ex-port' (f3ea87c2-8306-4b8f-9729-5c82dc71ef5e)
Dec 06 09:33:16 compute-0 NetworkManager[48882]: <info>  [1765013596.2649] device (eth1)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 06 09:33:16 compute-0 NetworkManager[48882]: <info>  [1765013596.2658] device (eth1)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec 06 09:33:16 compute-0 NetworkManager[48882]: <info>  [1765013596.2666] device (eth1)[Open vSwitch Port]: Activation: starting connection 'eth1-port' (11b7839f-0ef4-4c44-998e-24bd4d572348)
Dec 06 09:33:16 compute-0 NetworkManager[48882]: <info>  [1765013596.2669] device (vlan20)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 06 09:33:16 compute-0 NetworkManager[48882]: <info>  [1765013596.2682] device (vlan20)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec 06 09:33:16 compute-0 NetworkManager[48882]: <info>  [1765013596.2692] device (vlan20)[Open vSwitch Port]: Activation: starting connection 'vlan20-port' (fefc03af-e59b-4845-a72f-9adee1229bca)
Dec 06 09:33:16 compute-0 NetworkManager[48882]: <info>  [1765013596.2696] device (vlan21)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 06 09:33:16 compute-0 NetworkManager[48882]: <info>  [1765013596.2710] device (vlan21)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec 06 09:33:16 compute-0 NetworkManager[48882]: <info>  [1765013596.2717] device (vlan21)[Open vSwitch Port]: Activation: starting connection 'vlan21-port' (04ade02e-3073-47ab-a59b-268630689f01)
Dec 06 09:33:16 compute-0 NetworkManager[48882]: <info>  [1765013596.2720] device (vlan22)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 06 09:33:16 compute-0 NetworkManager[48882]: <info>  [1765013596.2731] device (vlan22)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec 06 09:33:16 compute-0 NetworkManager[48882]: <info>  [1765013596.2738] device (vlan22)[Open vSwitch Port]: Activation: starting connection 'vlan22-port' (f7cf4ddb-e9d9-4376-a71b-b818cd6520cf)
Dec 06 09:33:16 compute-0 NetworkManager[48882]: <info>  [1765013596.2741] device (vlan23)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 06 09:33:16 compute-0 NetworkManager[48882]: <info>  [1765013596.2752] device (vlan23)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec 06 09:33:16 compute-0 NetworkManager[48882]: <info>  [1765013596.2759] device (vlan23)[Open vSwitch Port]: Activation: starting connection 'vlan23-port' (9e3fc56d-219f-411a-be95-7518d29c56f3)
Dec 06 09:33:16 compute-0 NetworkManager[48882]: <info>  [1765013596.2760] device (br-ex)[Open vSwitch Bridge]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 06 09:33:16 compute-0 NetworkManager[48882]: <info>  [1765013596.2765] device (br-ex)[Open vSwitch Bridge]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 06 09:33:16 compute-0 NetworkManager[48882]: <info>  [1765013596.2767] device (br-ex)[Open vSwitch Bridge]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 06 09:33:16 compute-0 NetworkManager[48882]: <info>  [1765013596.2778] device (br-ex)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 06 09:33:16 compute-0 NetworkManager[48882]: <info>  [1765013596.2788] device (br-ex)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec 06 09:33:16 compute-0 NetworkManager[48882]: <info>  [1765013596.2795] device (br-ex)[Open vSwitch Interface]: Activation: starting connection 'br-ex-if' (effa1e3c-ea27-4e29-92bf-336c557377b9)
Dec 06 09:33:16 compute-0 NetworkManager[48882]: <info>  [1765013596.2796] device (br-ex)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 06 09:33:16 compute-0 NetworkManager[48882]: <info>  [1765013596.2802] device (br-ex)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 06 09:33:16 compute-0 NetworkManager[48882]: <info>  [1765013596.2805] device (br-ex)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 06 09:33:16 compute-0 NetworkManager[48882]: <info>  [1765013596.2807] device (br-ex)[Open vSwitch Port]: Activation: connection 'br-ex-port' attached as port, continuing activation
Dec 06 09:33:16 compute-0 NetworkManager[48882]: <info>  [1765013596.2809] device (eth1): state change: activated -> deactivating (reason 'new-activation', managed-type: 'full')
Dec 06 09:33:16 compute-0 NetworkManager[48882]: <info>  [1765013596.2827] device (eth1): disconnecting for new activation request.
Dec 06 09:33:16 compute-0 NetworkManager[48882]: <info>  [1765013596.2828] device (eth1)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 06 09:33:16 compute-0 NetworkManager[48882]: <info>  [1765013596.2834] device (eth1)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 06 09:33:16 compute-0 NetworkManager[48882]: <info>  [1765013596.2837] device (eth1)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 06 09:33:16 compute-0 NetworkManager[48882]: <info>  [1765013596.2839] device (eth1)[Open vSwitch Port]: Activation: connection 'eth1-port' attached as port, continuing activation
Dec 06 09:33:16 compute-0 NetworkManager[48882]: <info>  [1765013596.2843] device (vlan20)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 06 09:33:16 compute-0 NetworkManager[48882]: <info>  [1765013596.2851] device (vlan20)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec 06 09:33:16 compute-0 NetworkManager[48882]: <info>  [1765013596.2860] device (vlan20)[Open vSwitch Interface]: Activation: starting connection 'vlan20-if' (7bc011b9-8b26-4ceb-8792-16af1c51b18b)
Dec 06 09:33:16 compute-0 NetworkManager[48882]: <info>  [1765013596.2861] device (vlan20)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 06 09:33:16 compute-0 NetworkManager[48882]: <info>  [1765013596.2866] device (vlan20)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 06 09:33:16 compute-0 NetworkManager[48882]: <info>  [1765013596.2870] device (vlan20)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 06 09:33:16 compute-0 NetworkManager[48882]: <info>  [1765013596.2872] device (vlan20)[Open vSwitch Port]: Activation: connection 'vlan20-port' attached as port, continuing activation
Dec 06 09:33:16 compute-0 NetworkManager[48882]: <info>  [1765013596.2877] device (vlan21)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 06 09:33:16 compute-0 NetworkManager[48882]: <info>  [1765013596.2885] device (vlan21)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec 06 09:33:16 compute-0 NetworkManager[48882]: <info>  [1765013596.2893] device (vlan21)[Open vSwitch Interface]: Activation: starting connection 'vlan21-if' (f40c8203-a736-44ba-b87c-e14feb441d1e)
Dec 06 09:33:16 compute-0 NetworkManager[48882]: <info>  [1765013596.2894] device (vlan21)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 06 09:33:16 compute-0 NetworkManager[48882]: <info>  [1765013596.2900] device (vlan21)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 06 09:33:16 compute-0 NetworkManager[48882]: <info>  [1765013596.2902] device (vlan21)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 06 09:33:16 compute-0 NetworkManager[48882]: <info>  [1765013596.2905] device (vlan21)[Open vSwitch Port]: Activation: connection 'vlan21-port' attached as port, continuing activation
Dec 06 09:33:16 compute-0 NetworkManager[48882]: <info>  [1765013596.2910] device (vlan22)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 06 09:33:16 compute-0 NetworkManager[48882]: <info>  [1765013596.2918] device (vlan22)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec 06 09:33:16 compute-0 NetworkManager[48882]: <info>  [1765013596.2926] device (vlan22)[Open vSwitch Interface]: Activation: starting connection 'vlan22-if' (b558af2b-19fb-4a8b-a432-c92141b38d13)
Dec 06 09:33:16 compute-0 NetworkManager[48882]: <info>  [1765013596.2927] device (vlan22)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 06 09:33:16 compute-0 NetworkManager[48882]: <info>  [1765013596.2931] device (vlan22)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 06 09:33:16 compute-0 NetworkManager[48882]: <info>  [1765013596.2935] device (vlan22)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 06 09:33:16 compute-0 NetworkManager[48882]: <info>  [1765013596.2937] device (vlan22)[Open vSwitch Port]: Activation: connection 'vlan22-port' attached as port, continuing activation
Dec 06 09:33:16 compute-0 NetworkManager[48882]: <info>  [1765013596.2943] device (vlan23)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 06 09:33:16 compute-0 NetworkManager[48882]: <info>  [1765013596.2950] device (vlan23)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec 06 09:33:16 compute-0 NetworkManager[48882]: <info>  [1765013596.2959] device (vlan23)[Open vSwitch Interface]: Activation: starting connection 'vlan23-if' (eccfa609-5d3a-480b-b546-ce9d96de7c68)
Dec 06 09:33:16 compute-0 NetworkManager[48882]: <info>  [1765013596.2961] device (vlan23)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 06 09:33:16 compute-0 NetworkManager[48882]: <info>  [1765013596.2968] device (vlan23)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 06 09:33:16 compute-0 NetworkManager[48882]: <info>  [1765013596.2972] device (vlan23)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 06 09:33:16 compute-0 NetworkManager[48882]: <info>  [1765013596.2975] device (vlan23)[Open vSwitch Port]: Activation: connection 'vlan23-port' attached as port, continuing activation
Dec 06 09:33:16 compute-0 NetworkManager[48882]: <info>  [1765013596.2978] device (br-ex)[Open vSwitch Bridge]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 06 09:33:16 compute-0 NetworkManager[48882]: <info>  [1765013596.3001] audit: op="device-reapply" interface="eth0" ifindex=2 args="802-3-ethernet.mtu,ipv4.dhcp-timeout,ipv4.dhcp-client-id,connection.autoconnect-priority,ipv6.addr-gen-mode,ipv6.method" pid=51658 uid=0 result="success"
Dec 06 09:33:16 compute-0 NetworkManager[48882]: <info>  [1765013596.3005] device (br-ex)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 06 09:33:16 compute-0 NetworkManager[48882]: <info>  [1765013596.3011] device (br-ex)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 06 09:33:16 compute-0 NetworkManager[48882]: <info>  [1765013596.3014] device (br-ex)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 06 09:33:16 compute-0 NetworkManager[48882]: <info>  [1765013596.3024] device (br-ex)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 06 09:33:16 compute-0 NetworkManager[48882]: <info>  [1765013596.3030] device (eth1)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 06 09:33:16 compute-0 systemd[1]: Starting Network Manager Script Dispatcher Service...
Dec 06 09:33:16 compute-0 NetworkManager[48882]: <info>  [1765013596.3037] device (vlan20)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 06 09:33:16 compute-0 kernel: ovs-system: entered promiscuous mode
Dec 06 09:33:16 compute-0 NetworkManager[48882]: <info>  [1765013596.3042] device (vlan20)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 06 09:33:16 compute-0 NetworkManager[48882]: <info>  [1765013596.3045] device (vlan20)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 06 09:33:16 compute-0 NetworkManager[48882]: <info>  [1765013596.3053] device (vlan20)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 06 09:33:16 compute-0 NetworkManager[48882]: <info>  [1765013596.3061] device (vlan21)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 06 09:33:16 compute-0 systemd-udevd[51663]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 09:33:16 compute-0 NetworkManager[48882]: <info>  [1765013596.3066] device (vlan21)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 06 09:33:16 compute-0 NetworkManager[48882]: <info>  [1765013596.3072] device (vlan21)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 06 09:33:16 compute-0 NetworkManager[48882]: <info>  [1765013596.3080] device (vlan21)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 06 09:33:16 compute-0 kernel: Timeout policy base is empty
Dec 06 09:33:16 compute-0 NetworkManager[48882]: <info>  [1765013596.3084] device (vlan22)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 06 09:33:16 compute-0 NetworkManager[48882]: <info>  [1765013596.3088] device (vlan22)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 06 09:33:16 compute-0 NetworkManager[48882]: <info>  [1765013596.3090] device (vlan22)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 06 09:33:16 compute-0 NetworkManager[48882]: <info>  [1765013596.3096] device (vlan22)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 06 09:33:16 compute-0 NetworkManager[48882]: <info>  [1765013596.3101] device (vlan23)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 06 09:33:16 compute-0 NetworkManager[48882]: <info>  [1765013596.3105] device (vlan23)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 06 09:33:16 compute-0 NetworkManager[48882]: <info>  [1765013596.3106] device (vlan23)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 06 09:33:16 compute-0 NetworkManager[48882]: <info>  [1765013596.3112] device (vlan23)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 06 09:33:16 compute-0 NetworkManager[48882]: <info>  [1765013596.3118] dhcp4 (eth0): canceled DHCP transaction
Dec 06 09:33:16 compute-0 NetworkManager[48882]: <info>  [1765013596.3118] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Dec 06 09:33:16 compute-0 NetworkManager[48882]: <info>  [1765013596.3119] dhcp4 (eth0): state changed no lease
Dec 06 09:33:16 compute-0 NetworkManager[48882]: <info>  [1765013596.3120] dhcp4 (eth0): activation: beginning transaction (no timeout)
Dec 06 09:33:16 compute-0 NetworkManager[48882]: <info>  [1765013596.3132] device (br-ex)[Open vSwitch Interface]: Activation: connection 'br-ex-if' attached as port, continuing activation
Dec 06 09:33:16 compute-0 NetworkManager[48882]: <info>  [1765013596.3138] audit: op="device-reapply" interface="eth1" ifindex=3 pid=51658 uid=0 result="fail" reason="Device is not activated"
Dec 06 09:33:16 compute-0 NetworkManager[48882]: <info>  [1765013596.3142] device (vlan20)[Open vSwitch Interface]: Activation: connection 'vlan20-if' attached as port, continuing activation
Dec 06 09:33:16 compute-0 NetworkManager[48882]: <info>  [1765013596.3152] device (vlan21)[Open vSwitch Interface]: Activation: connection 'vlan21-if' attached as port, continuing activation
Dec 06 09:33:16 compute-0 NetworkManager[48882]: <info>  [1765013596.3192] device (vlan22)[Open vSwitch Interface]: Activation: connection 'vlan22-if' attached as port, continuing activation
Dec 06 09:33:16 compute-0 systemd[1]: Started Network Manager Script Dispatcher Service.
Dec 06 09:33:16 compute-0 NetworkManager[48882]: <info>  [1765013596.3210] device (eth1): disconnecting for new activation request.
Dec 06 09:33:16 compute-0 NetworkManager[48882]: <info>  [1765013596.3212] audit: op="connection-activate" uuid="6151fa65-6cef-549f-91ba-9f68f8a2cb73" name="ci-private-network" pid=51658 uid=0 result="success"
Dec 06 09:33:16 compute-0 NetworkManager[48882]: <info>  [1765013596.3212] device (eth1): state change: deactivating -> disconnected (reason 'new-activation', managed-type: 'full')
Dec 06 09:33:16 compute-0 NetworkManager[48882]: <info>  [1765013596.3354] device (eth1): Activation: starting connection 'ci-private-network' (6151fa65-6cef-549f-91ba-9f68f8a2cb73)
Dec 06 09:33:16 compute-0 NetworkManager[48882]: <info>  [1765013596.3359] device (br-ex)[Open vSwitch Bridge]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 06 09:33:16 compute-0 NetworkManager[48882]: <info>  [1765013596.3365] device (vlan23)[Open vSwitch Interface]: Activation: connection 'vlan23-if' attached as port, continuing activation
Dec 06 09:33:16 compute-0 NetworkManager[48882]: <info>  [1765013596.3384] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 06 09:33:16 compute-0 NetworkManager[48882]: <info>  [1765013596.3387] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 06 09:33:16 compute-0 NetworkManager[48882]: <info>  [1765013596.3395] device (br-ex)[Open vSwitch Bridge]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 06 09:33:16 compute-0 NetworkManager[48882]: <info>  [1765013596.3399] device (br-ex)[Open vSwitch Bridge]: Activation: successful, device activated.
Dec 06 09:33:16 compute-0 NetworkManager[48882]: <info>  [1765013596.3403] device (br-ex)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 06 09:33:16 compute-0 NetworkManager[48882]: <info>  [1765013596.3405] device (eth1)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 06 09:33:16 compute-0 NetworkManager[48882]: <info>  [1765013596.3406] device (vlan20)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 06 09:33:16 compute-0 NetworkManager[48882]: <info>  [1765013596.3407] device (vlan21)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 06 09:33:16 compute-0 NetworkManager[48882]: <info>  [1765013596.3408] device (vlan22)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 06 09:33:16 compute-0 NetworkManager[48882]: <info>  [1765013596.3410] device (vlan23)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 06 09:33:16 compute-0 NetworkManager[48882]: <info>  [1765013596.3411] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51658 uid=0 result="success"
Dec 06 09:33:16 compute-0 NetworkManager[48882]: <info>  [1765013596.3413] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 06 09:33:16 compute-0 NetworkManager[48882]: <info>  [1765013596.3420] device (br-ex)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 06 09:33:16 compute-0 NetworkManager[48882]: <info>  [1765013596.3427] device (br-ex)[Open vSwitch Port]: Activation: successful, device activated.
Dec 06 09:33:16 compute-0 NetworkManager[48882]: <info>  [1765013596.3430] device (eth1)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 06 09:33:16 compute-0 NetworkManager[48882]: <info>  [1765013596.3436] device (eth1)[Open vSwitch Port]: Activation: successful, device activated.
Dec 06 09:33:16 compute-0 NetworkManager[48882]: <info>  [1765013596.3439] device (vlan20)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 06 09:33:16 compute-0 NetworkManager[48882]: <info>  [1765013596.3443] device (vlan20)[Open vSwitch Port]: Activation: successful, device activated.
Dec 06 09:33:16 compute-0 NetworkManager[48882]: <info>  [1765013596.3446] device (vlan21)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 06 09:33:16 compute-0 NetworkManager[48882]: <info>  [1765013596.3450] device (vlan21)[Open vSwitch Port]: Activation: successful, device activated.
Dec 06 09:33:16 compute-0 NetworkManager[48882]: <info>  [1765013596.3454] device (vlan22)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 06 09:33:16 compute-0 NetworkManager[48882]: <info>  [1765013596.3459] device (vlan22)[Open vSwitch Port]: Activation: successful, device activated.
Dec 06 09:33:16 compute-0 NetworkManager[48882]: <info>  [1765013596.3462] device (vlan23)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 06 09:33:16 compute-0 NetworkManager[48882]: <info>  [1765013596.3466] device (vlan23)[Open vSwitch Port]: Activation: successful, device activated.
Dec 06 09:33:16 compute-0 NetworkManager[48882]: <info>  [1765013596.3471] device (eth1): Activation: connection 'ci-private-network' attached as port, continuing activation
Dec 06 09:33:16 compute-0 NetworkManager[48882]: <info>  [1765013596.3474] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 06 09:33:16 compute-0 kernel: br-ex: entered promiscuous mode
Dec 06 09:33:16 compute-0 kernel: virtio_net virtio5 eth1: entered promiscuous mode
Dec 06 09:33:16 compute-0 NetworkManager[48882]: <info>  [1765013596.3524] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 06 09:33:16 compute-0 NetworkManager[48882]: <info>  [1765013596.3527] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 06 09:33:16 compute-0 NetworkManager[48882]: <info>  [1765013596.3532] device (eth1): Activation: successful, device activated.
Dec 06 09:33:16 compute-0 kernel: vlan22: entered promiscuous mode
Dec 06 09:33:16 compute-0 systemd-udevd[51662]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 09:33:16 compute-0 NetworkManager[48882]: <info>  [1765013596.3635] device (br-ex)[Open vSwitch Interface]: carrier: link connected
Dec 06 09:33:16 compute-0 NetworkManager[48882]: <info>  [1765013596.3645] device (br-ex)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 06 09:33:16 compute-0 kernel: vlan20: entered promiscuous mode
Dec 06 09:33:16 compute-0 NetworkManager[48882]: <info>  [1765013596.3665] device (br-ex)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 06 09:33:16 compute-0 NetworkManager[48882]: <info>  [1765013596.3667] device (br-ex)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 06 09:33:16 compute-0 NetworkManager[48882]: <info>  [1765013596.3673] device (br-ex)[Open vSwitch Interface]: Activation: successful, device activated.
Dec 06 09:33:16 compute-0 NetworkManager[48882]: <info>  [1765013596.3716] device (vlan22)[Open vSwitch Interface]: carrier: link connected
Dec 06 09:33:16 compute-0 NetworkManager[48882]: <info>  [1765013596.3723] device (vlan22)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 06 09:33:16 compute-0 NetworkManager[48882]: <info>  [1765013596.3742] device (vlan22)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 06 09:33:16 compute-0 NetworkManager[48882]: <info>  [1765013596.3743] device (vlan22)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 06 09:33:16 compute-0 NetworkManager[48882]: <info>  [1765013596.3748] device (vlan22)[Open vSwitch Interface]: Activation: successful, device activated.
Dec 06 09:33:16 compute-0 kernel: vlan21: entered promiscuous mode
Dec 06 09:33:16 compute-0 NetworkManager[48882]: <info>  [1765013596.3794] device (vlan20)[Open vSwitch Interface]: carrier: link connected
Dec 06 09:33:16 compute-0 NetworkManager[48882]: <info>  [1765013596.3802] device (vlan20)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 06 09:33:16 compute-0 NetworkManager[48882]: <info>  [1765013596.3826] device (vlan20)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 06 09:33:16 compute-0 NetworkManager[48882]: <info>  [1765013596.3827] device (vlan20)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 06 09:33:16 compute-0 NetworkManager[48882]: <info>  [1765013596.3832] device (vlan20)[Open vSwitch Interface]: Activation: successful, device activated.
Dec 06 09:33:16 compute-0 kernel: vlan23: entered promiscuous mode
Dec 06 09:33:16 compute-0 NetworkManager[48882]: <info>  [1765013596.3876] device (vlan21)[Open vSwitch Interface]: carrier: link connected
Dec 06 09:33:16 compute-0 NetworkManager[48882]: <info>  [1765013596.3884] device (vlan21)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 06 09:33:16 compute-0 NetworkManager[48882]: <info>  [1765013596.3958] device (vlan23)[Open vSwitch Interface]: carrier: link connected
Dec 06 09:33:16 compute-0 NetworkManager[48882]: <info>  [1765013596.3965] device (vlan23)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 06 09:33:16 compute-0 NetworkManager[48882]: <info>  [1765013596.3998] device (vlan21)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 06 09:33:16 compute-0 NetworkManager[48882]: <info>  [1765013596.3999] device (vlan21)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 06 09:33:16 compute-0 NetworkManager[48882]: <info>  [1765013596.4005] device (vlan21)[Open vSwitch Interface]: Activation: successful, device activated.
Dec 06 09:33:16 compute-0 NetworkManager[48882]: <info>  [1765013596.4011] device (vlan23)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 06 09:33:16 compute-0 NetworkManager[48882]: <info>  [1765013596.4013] device (vlan23)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 06 09:33:16 compute-0 NetworkManager[48882]: <info>  [1765013596.4018] device (vlan23)[Open vSwitch Interface]: Activation: successful, device activated.
Dec 06 09:33:17 compute-0 NetworkManager[48882]: <info>  [1765013597.0530] dhcp4 (eth0): state changed new lease, address=38.102.83.27
Dec 06 09:33:17 compute-0 sudo[52013]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wuyspnxbnlkbomzgibeiedoxrqeralhp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013596.8479595-845-54420428717470/AnsiballZ_async_status.py'
Dec 06 09:33:17 compute-0 sudo[52013]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:33:17 compute-0 NetworkManager[48882]: <info>  [1765013597.5150] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51658 uid=0 result="success"
Dec 06 09:33:17 compute-0 python3.9[52015]: ansible-ansible.legacy.async_status Invoked with jid=j224323356287.51652 mode=status _async_dir=/root/.ansible_async
Dec 06 09:33:17 compute-0 sudo[52013]: pam_unix(sudo:session): session closed for user root
Dec 06 09:33:17 compute-0 NetworkManager[48882]: <info>  [1765013597.8036] checkpoint[0x55f1b74e3950]: destroy /org/freedesktop/NetworkManager/Checkpoint/1
Dec 06 09:33:17 compute-0 NetworkManager[48882]: <info>  [1765013597.8041] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51658 uid=0 result="success"
Dec 06 09:33:18 compute-0 NetworkManager[48882]: <info>  [1765013598.2393] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51658 uid=0 result="success"
Dec 06 09:33:18 compute-0 NetworkManager[48882]: <info>  [1765013598.2407] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51658 uid=0 result="success"
Dec 06 09:33:18 compute-0 NetworkManager[48882]: <info>  [1765013598.5577] audit: op="networking-control" arg="global-dns-configuration" pid=51658 uid=0 result="success"
Dec 06 09:33:18 compute-0 NetworkManager[48882]: <info>  [1765013598.5620] config: signal: SET_VALUES,values,values-intern,global-dns-config (/etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf)
Dec 06 09:33:18 compute-0 NetworkManager[48882]: <info>  [1765013598.5661] audit: op="networking-control" arg="global-dns-configuration" pid=51658 uid=0 result="success"
Dec 06 09:33:18 compute-0 NetworkManager[48882]: <info>  [1765013598.5698] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51658 uid=0 result="success"
Dec 06 09:33:18 compute-0 ansible-async_wrapper.py[51655]: 51656 still running (300)
Dec 06 09:33:18 compute-0 NetworkManager[48882]: <info>  [1765013598.8092] checkpoint[0x55f1b74e3a20]: destroy /org/freedesktop/NetworkManager/Checkpoint/2
Dec 06 09:33:18 compute-0 NetworkManager[48882]: <info>  [1765013598.8096] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51658 uid=0 result="success"
Dec 06 09:33:18 compute-0 ansible-async_wrapper.py[51656]: Module complete (51656)
Dec 06 09:33:20 compute-0 sudo[52119]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oiaewslajcqjfoiygwqixzmcyjtbcnsn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013596.8479595-845-54420428717470/AnsiballZ_async_status.py'
Dec 06 09:33:20 compute-0 sudo[52119]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:33:21 compute-0 python3.9[52121]: ansible-ansible.legacy.async_status Invoked with jid=j224323356287.51652 mode=status _async_dir=/root/.ansible_async
Dec 06 09:33:21 compute-0 sudo[52119]: pam_unix(sudo:session): session closed for user root
Dec 06 09:33:21 compute-0 sudo[52219]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jtziphejmimggzqbwgdxrlhssdflqhbt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013596.8479595-845-54420428717470/AnsiballZ_async_status.py'
Dec 06 09:33:21 compute-0 sudo[52219]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:33:21 compute-0 python3.9[52221]: ansible-ansible.legacy.async_status Invoked with jid=j224323356287.51652 mode=cleanup _async_dir=/root/.ansible_async
Dec 06 09:33:21 compute-0 sudo[52219]: pam_unix(sudo:session): session closed for user root
Dec 06 09:33:21 compute-0 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Dec 06 09:33:22 compute-0 sudo[52373]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zvrbgqebguhucxsszteviehfvebcgsmt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013602.007697-926-194097857807042/AnsiballZ_stat.py'
Dec 06 09:33:22 compute-0 sudo[52373]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:33:22 compute-0 python3.9[52375]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:33:22 compute-0 sudo[52373]: pam_unix(sudo:session): session closed for user root
Dec 06 09:33:22 compute-0 sudo[52496]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-srjghjdybxswagzsxnqyqxopjzloxums ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013602.007697-926-194097857807042/AnsiballZ_copy.py'
Dec 06 09:33:22 compute-0 sudo[52496]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:33:23 compute-0 python3.9[52498]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/os-net-config.returncode mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765013602.007697-926-194097857807042/.source.returncode _original_basename=.9o4iarjp follow=False checksum=b6589fc6ab0dc82cf12099d1c2d40ab994e8410c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:33:23 compute-0 sudo[52496]: pam_unix(sudo:session): session closed for user root
Dec 06 09:33:23 compute-0 ansible-async_wrapper.py[51655]: Done in kid B.
Dec 06 09:33:23 compute-0 sudo[52648]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bgndnkfoepyvoupjypqpfmaawyqeadiq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013603.6508775-974-95559424194779/AnsiballZ_stat.py'
Dec 06 09:33:23 compute-0 sudo[52648]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:33:24 compute-0 python3.9[52650]: ansible-ansible.legacy.stat Invoked with path=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:33:24 compute-0 sudo[52648]: pam_unix(sudo:session): session closed for user root
Dec 06 09:33:24 compute-0 sudo[52772]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-untbkjftvfczrrzjaxkyqfexopgaiiem ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013603.6508775-974-95559424194779/AnsiballZ_copy.py'
Dec 06 09:33:24 compute-0 sudo[52772]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:33:24 compute-0 python3.9[52774]: ansible-ansible.legacy.copy Invoked with dest=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765013603.6508775-974-95559424194779/.source.cfg _original_basename=.rf6_845k follow=False checksum=f3c5952a9cd4c6c31b314b25eb897168971cc86e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:33:24 compute-0 sudo[52772]: pam_unix(sudo:session): session closed for user root
Dec 06 09:33:25 compute-0 sudo[52924]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ndvzridiojlqaczlkdrppnvfbnljkkoi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013605.213339-1019-230425116134433/AnsiballZ_systemd.py'
Dec 06 09:33:25 compute-0 sudo[52924]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:33:25 compute-0 python3.9[52926]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 06 09:33:25 compute-0 systemd[1]: Reloading Network Manager...
Dec 06 09:33:25 compute-0 NetworkManager[48882]: <info>  [1765013605.9427] audit: op="reload" arg="0" pid=52930 uid=0 result="success"
Dec 06 09:33:25 compute-0 NetworkManager[48882]: <info>  [1765013605.9438] config: signal: SIGHUP,config-files,values,values-user,no-auto-default (/etc/NetworkManager/NetworkManager.conf, /usr/lib/NetworkManager/conf.d/00-server.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf, /var/lib/NetworkManager/NetworkManager-intern.conf)
Dec 06 09:33:25 compute-0 systemd[1]: Reloaded Network Manager.
Dec 06 09:33:25 compute-0 sudo[52924]: pam_unix(sudo:session): session closed for user root
Dec 06 09:33:26 compute-0 sshd-session[44889]: Connection closed by 192.168.122.30 port 40010
Dec 06 09:33:26 compute-0 sshd-session[44886]: pam_unix(sshd:session): session closed for user zuul
Dec 06 09:33:26 compute-0 systemd[1]: session-10.scope: Deactivated successfully.
Dec 06 09:33:26 compute-0 systemd[1]: session-10.scope: Consumed 53.797s CPU time.
Dec 06 09:33:26 compute-0 systemd-logind[795]: Session 10 logged out. Waiting for processes to exit.
Dec 06 09:33:26 compute-0 systemd-logind[795]: Removed session 10.
Dec 06 09:33:32 compute-0 sshd-session[52961]: Accepted publickey for zuul from 192.168.122.30 port 38264 ssh2: ECDSA SHA256:r1j7aLsKAM+XxDNbzEU5vWGpGNCOaIBwc7FZdATPttA
Dec 06 09:33:32 compute-0 systemd-logind[795]: New session 11 of user zuul.
Dec 06 09:33:32 compute-0 systemd[1]: Started Session 11 of User zuul.
Dec 06 09:33:32 compute-0 sshd-session[52961]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 06 09:33:33 compute-0 python3.9[53115]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 06 09:33:34 compute-0 python3.9[53270]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 06 09:33:35 compute-0 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Dec 06 09:33:37 compute-0 python3.9[53465]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 09:33:37 compute-0 sshd-session[52964]: Connection closed by 192.168.122.30 port 38264
Dec 06 09:33:37 compute-0 sshd-session[52961]: pam_unix(sshd:session): session closed for user zuul
Dec 06 09:33:37 compute-0 systemd[1]: session-11.scope: Deactivated successfully.
Dec 06 09:33:37 compute-0 systemd[1]: session-11.scope: Consumed 2.752s CPU time.
Dec 06 09:33:37 compute-0 systemd-logind[795]: Session 11 logged out. Waiting for processes to exit.
Dec 06 09:33:37 compute-0 systemd-logind[795]: Removed session 11.
Dec 06 09:33:43 compute-0 sshd-session[53493]: Accepted publickey for zuul from 192.168.122.30 port 33508 ssh2: ECDSA SHA256:r1j7aLsKAM+XxDNbzEU5vWGpGNCOaIBwc7FZdATPttA
Dec 06 09:33:43 compute-0 systemd-logind[795]: New session 12 of user zuul.
Dec 06 09:33:43 compute-0 systemd[1]: Started Session 12 of User zuul.
Dec 06 09:33:43 compute-0 sshd-session[53493]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 06 09:33:45 compute-0 python3.9[53646]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 06 09:33:46 compute-0 python3.9[53800]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 06 09:33:47 compute-0 sudo[53955]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uvgkxigeqhzpiwzuaeopkhrqhsjsnocj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013626.6504688-80-17645393608748/AnsiballZ_setup.py'
Dec 06 09:33:47 compute-0 sudo[53955]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:33:47 compute-0 python3.9[53957]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 06 09:33:47 compute-0 sudo[53955]: pam_unix(sudo:session): session closed for user root
Dec 06 09:33:48 compute-0 sudo[54039]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rchvegeitxbflvgnolnqojdwwdwmwjgn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013626.6504688-80-17645393608748/AnsiballZ_dnf.py'
Dec 06 09:33:48 compute-0 sudo[54039]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:33:48 compute-0 python3.9[54041]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 06 09:33:49 compute-0 sudo[54039]: pam_unix(sudo:session): session closed for user root
Dec 06 09:33:50 compute-0 sudo[54192]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xhsesnuncslvsekrtypcjnduhujzghss ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013629.9168382-116-183231151263813/AnsiballZ_setup.py'
Dec 06 09:33:50 compute-0 sudo[54192]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:33:50 compute-0 python3.9[54194]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 06 09:33:50 compute-0 sudo[54192]: pam_unix(sudo:session): session closed for user root
Dec 06 09:33:51 compute-0 sudo[54387]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xyiiilhgtwnvzauxftarqnoxwnjpybgz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013631.367979-149-268283694284165/AnsiballZ_file.py'
Dec 06 09:33:51 compute-0 sudo[54387]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:33:52 compute-0 python3.9[54389]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:33:52 compute-0 sudo[54387]: pam_unix(sudo:session): session closed for user root
Dec 06 09:33:52 compute-0 sudo[54539]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ltzcodzyxmhbadsqzmlqacytcvdbsaty ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013632.307334-173-8818684849957/AnsiballZ_command.py'
Dec 06 09:33:52 compute-0 sudo[54539]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:33:53 compute-0 python3.9[54541]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 09:33:53 compute-0 podman[54542]: 2025-12-06 09:33:53.13376544 +0000 UTC m=+0.074602575 system refresh
Dec 06 09:33:53 compute-0 sudo[54539]: pam_unix(sudo:session): session closed for user root
Dec 06 09:33:54 compute-0 sudo[54702]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kooespcguiqurkohdhpsznguwtyskali ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013633.5340986-197-257995235937897/AnsiballZ_stat.py'
Dec 06 09:33:54 compute-0 sudo[54702]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:33:54 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 06 09:33:54 compute-0 python3.9[54704]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:33:54 compute-0 sudo[54702]: pam_unix(sudo:session): session closed for user root
Dec 06 09:33:54 compute-0 sudo[54825]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ocnoplnqkwkofkdxlsggrhpzxckjteky ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013633.5340986-197-257995235937897/AnsiballZ_copy.py'
Dec 06 09:33:54 compute-0 sudo[54825]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:33:55 compute-0 python3.9[54827]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/networks/podman.json group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765013633.5340986-197-257995235937897/.source.json follow=False _original_basename=podman_network_config.j2 checksum=03deeea959a9993f39215aad2a3d3f6b4484abaa backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:33:55 compute-0 sudo[54825]: pam_unix(sudo:session): session closed for user root
Dec 06 09:33:55 compute-0 sudo[54977]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-covnrteypoyfwwkdypeznqghzqqytwcd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013635.2761316-242-228714727878413/AnsiballZ_stat.py'
Dec 06 09:33:55 compute-0 sudo[54977]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:33:55 compute-0 python3.9[54979]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:33:55 compute-0 sudo[54977]: pam_unix(sudo:session): session closed for user root
Dec 06 09:33:56 compute-0 sudo[55100]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zfawlvpvwmzvlpgbwzpufxiwdwlcyusq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013635.2761316-242-228714727878413/AnsiballZ_copy.py'
Dec 06 09:33:56 compute-0 sudo[55100]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:33:56 compute-0 python3.9[55102]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765013635.2761316-242-228714727878413/.source.conf follow=False _original_basename=registries.conf.j2 checksum=804a0d01b832e60d20f779a331306df708c87b02 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec 06 09:33:56 compute-0 sudo[55100]: pam_unix(sudo:session): session closed for user root
Dec 06 09:33:57 compute-0 sudo[55252]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gxsfowxasstavwprobblgrbbtdbjdtmc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013636.8751771-290-2865214973386/AnsiballZ_ini_file.py'
Dec 06 09:33:57 compute-0 sudo[55252]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:33:57 compute-0 python3.9[55254]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Dec 06 09:33:57 compute-0 sudo[55252]: pam_unix(sudo:session): session closed for user root
Dec 06 09:33:58 compute-0 sudo[55404]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yndpgkrfxrrdmhbhyqkfcgnxuwfyvbsn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013637.805676-290-261812433476925/AnsiballZ_ini_file.py'
Dec 06 09:33:58 compute-0 sudo[55404]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:33:58 compute-0 python3.9[55406]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Dec 06 09:33:58 compute-0 sudo[55404]: pam_unix(sudo:session): session closed for user root
Dec 06 09:33:58 compute-0 sudo[55556]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-crmpycfstdzkavxqdseruqbxfpxqpptm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013638.5267239-290-104561817461010/AnsiballZ_ini_file.py'
Dec 06 09:33:58 compute-0 sudo[55556]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:33:58 compute-0 python3.9[55558]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Dec 06 09:33:58 compute-0 sudo[55556]: pam_unix(sudo:session): session closed for user root
Dec 06 09:33:59 compute-0 sudo[55708]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ezhjzybjuwzdfbqlznvdczzbvtvfetts ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013639.153575-290-75293252858043/AnsiballZ_ini_file.py'
Dec 06 09:33:59 compute-0 sudo[55708]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:33:59 compute-0 python3.9[55710]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Dec 06 09:33:59 compute-0 sudo[55708]: pam_unix(sudo:session): session closed for user root
Dec 06 09:34:00 compute-0 sudo[55860]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gymslzytcaiynbcemfajwwhillzjhqbs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013640.124978-383-158675068648593/AnsiballZ_dnf.py'
Dec 06 09:34:00 compute-0 sudo[55860]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:34:00 compute-0 python3.9[55862]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 06 09:34:01 compute-0 sudo[55860]: pam_unix(sudo:session): session closed for user root
Dec 06 09:34:03 compute-0 sudo[56013]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mbrsozzazdhjvzdpkejimabxtbnmchge ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013642.9224916-416-202114240792911/AnsiballZ_setup.py'
Dec 06 09:34:03 compute-0 sudo[56013]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:34:03 compute-0 python3.9[56015]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 06 09:34:03 compute-0 sudo[56013]: pam_unix(sudo:session): session closed for user root
Dec 06 09:34:04 compute-0 sudo[56167]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-knsumqobadodhhjdtzzyqyehbgwdziqm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013643.916436-440-30605176797803/AnsiballZ_stat.py'
Dec 06 09:34:04 compute-0 sudo[56167]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:34:04 compute-0 python3.9[56169]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 06 09:34:04 compute-0 sudo[56167]: pam_unix(sudo:session): session closed for user root
Dec 06 09:34:05 compute-0 sudo[56319]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xghulvfxbwajrdtkhjnfymgiigtecdkk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013644.7792826-467-217940746610762/AnsiballZ_stat.py'
Dec 06 09:34:05 compute-0 sudo[56319]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:34:05 compute-0 python3.9[56321]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 06 09:34:05 compute-0 sudo[56319]: pam_unix(sudo:session): session closed for user root
Dec 06 09:34:06 compute-0 sudo[56471]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pivpkniidftkxpcphsbmdbgjzggphwts ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013645.6823378-497-207488528715567/AnsiballZ_command.py'
Dec 06 09:34:06 compute-0 sudo[56471]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:34:06 compute-0 python3.9[56473]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-system-running _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 09:34:06 compute-0 sudo[56471]: pam_unix(sudo:session): session closed for user root
Dec 06 09:34:07 compute-0 sudo[56624]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wnapewfwfkyaakirldkoqbnipfoucswb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013646.5962975-527-35408707970869/AnsiballZ_service_facts.py'
Dec 06 09:34:07 compute-0 sudo[56624]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:34:07 compute-0 python3.9[56626]: ansible-service_facts Invoked
Dec 06 09:34:07 compute-0 network[56643]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec 06 09:34:07 compute-0 network[56644]: 'network-scripts' will be removed from distribution in near future.
Dec 06 09:34:07 compute-0 network[56645]: It is advised to switch to 'NetworkManager' instead for network management.
Dec 06 09:34:10 compute-0 sudo[56624]: pam_unix(sudo:session): session closed for user root
Dec 06 09:34:13 compute-0 sudo[56928]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nawiqqycsflovfuzhyvdqwxoejnjybsm ; /bin/bash /home/zuul/.ansible/tmp/ansible-tmp-1765013653.2483242-572-96224756335132/AnsiballZ_timesync_provider.sh /home/zuul/.ansible/tmp/ansible-tmp-1765013653.2483242-572-96224756335132/args'
Dec 06 09:34:13 compute-0 sudo[56928]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:34:13 compute-0 sudo[56928]: pam_unix(sudo:session): session closed for user root
Dec 06 09:34:14 compute-0 sudo[57095]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-udlhdxkevlxhvmnopnxwzpkmdcifagww ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013654.3499293-605-153952353305503/AnsiballZ_dnf.py'
Dec 06 09:34:14 compute-0 sudo[57095]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:34:14 compute-0 python3.9[57097]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 06 09:34:16 compute-0 sudo[57095]: pam_unix(sudo:session): session closed for user root
Dec 06 09:34:17 compute-0 sudo[57248]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aldnehirjxfbkmxgichydrjrjjravalx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013657.060576-644-52288236018588/AnsiballZ_package_facts.py'
Dec 06 09:34:17 compute-0 sudo[57248]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:34:18 compute-0 python3.9[57250]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Dec 06 09:34:18 compute-0 sudo[57248]: pam_unix(sudo:session): session closed for user root
Dec 06 09:34:19 compute-0 sudo[57400]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kmppieiirhtzziosencvpefvnnwxwulw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013659.0654032-674-151548220009269/AnsiballZ_stat.py'
Dec 06 09:34:19 compute-0 sudo[57400]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:34:19 compute-0 python3.9[57402]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:34:19 compute-0 sudo[57400]: pam_unix(sudo:session): session closed for user root
Dec 06 09:34:20 compute-0 sudo[57525]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lhwpgqlubbqnlhkagrkthcbyaesrrszp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013659.0654032-674-151548220009269/AnsiballZ_copy.py'
Dec 06 09:34:20 compute-0 sudo[57525]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:34:20 compute-0 python3.9[57527]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/chrony.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765013659.0654032-674-151548220009269/.source.conf follow=False _original_basename=chrony.conf.j2 checksum=cfb003e56d02d0d2c65555452eb1a05073fecdad force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:34:20 compute-0 sudo[57525]: pam_unix(sudo:session): session closed for user root
Dec 06 09:34:21 compute-0 sudo[57679]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-llmwcokrzlsosyvekedzdhurxnleetpv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013660.697661-719-159048564894170/AnsiballZ_stat.py'
Dec 06 09:34:21 compute-0 sudo[57679]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:34:21 compute-0 python3.9[57681]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:34:21 compute-0 sudo[57679]: pam_unix(sudo:session): session closed for user root
Dec 06 09:34:21 compute-0 sudo[57804]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-znezulogtzrpufdymjkjrlvqobiafltf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013660.697661-719-159048564894170/AnsiballZ_copy.py'
Dec 06 09:34:21 compute-0 sudo[57804]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:34:21 compute-0 python3.9[57806]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/sysconfig/chronyd mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765013660.697661-719-159048564894170/.source follow=False _original_basename=chronyd.sysconfig.j2 checksum=dd196b1ff1f915b23eebc37ec77405b5dd3df76c force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:34:22 compute-0 sudo[57804]: pam_unix(sudo:session): session closed for user root
Dec 06 09:34:23 compute-0 sudo[57958]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jnslqhczykwivetyhirgajaomfcnrusl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013663.1025434-782-32112426736924/AnsiballZ_lineinfile.py'
Dec 06 09:34:23 compute-0 sudo[57958]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:34:23 compute-0 python3.9[57960]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network encoding=utf-8 backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:34:23 compute-0 sudo[57958]: pam_unix(sudo:session): session closed for user root
Dec 06 09:34:25 compute-0 sudo[58112]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wnmlktickotvanhxluqmdbwtotjyntvo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013664.942355-827-71450514725834/AnsiballZ_setup.py'
Dec 06 09:34:25 compute-0 sudo[58112]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:34:25 compute-0 python3.9[58114]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 06 09:34:25 compute-0 sudo[58112]: pam_unix(sudo:session): session closed for user root
Dec 06 09:34:26 compute-0 sudo[58196]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ttpcjkfypyvqdvbkhvoexbarfjzwakfq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013664.942355-827-71450514725834/AnsiballZ_systemd.py'
Dec 06 09:34:26 compute-0 sudo[58196]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:34:26 compute-0 python3.9[58198]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 06 09:34:26 compute-0 sudo[58196]: pam_unix(sudo:session): session closed for user root
Dec 06 09:34:28 compute-0 sudo[58350]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hemjugabajlqhvgaiwmxfyisozhozxcr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013667.6167357-875-66096355569756/AnsiballZ_setup.py'
Dec 06 09:34:28 compute-0 sudo[58350]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:34:28 compute-0 python3.9[58352]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 06 09:34:28 compute-0 sudo[58350]: pam_unix(sudo:session): session closed for user root
Dec 06 09:34:28 compute-0 sudo[58434]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aatmhqmtedvpawpovcgdhmwveziitjxk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013667.6167357-875-66096355569756/AnsiballZ_systemd.py'
Dec 06 09:34:28 compute-0 sudo[58434]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:34:29 compute-0 python3.9[58436]: ansible-ansible.legacy.systemd Invoked with name=chronyd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 06 09:34:29 compute-0 chronyd[778]: chronyd exiting
Dec 06 09:34:29 compute-0 systemd[1]: Stopping NTP client/server...
Dec 06 09:34:29 compute-0 systemd[1]: chronyd.service: Deactivated successfully.
Dec 06 09:34:29 compute-0 systemd[1]: Stopped NTP client/server.
Dec 06 09:34:29 compute-0 systemd[1]: Starting NTP client/server...
Dec 06 09:34:29 compute-0 chronyd[58445]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +NTS +SECHASH +IPV6 +DEBUG)
Dec 06 09:34:29 compute-0 chronyd[58445]: Frequency -26.315 +/- 0.691 ppm read from /var/lib/chrony/drift
Dec 06 09:34:29 compute-0 chronyd[58445]: Loaded seccomp filter (level 2)
Dec 06 09:34:29 compute-0 systemd[1]: Started NTP client/server.
Dec 06 09:34:29 compute-0 sudo[58434]: pam_unix(sudo:session): session closed for user root
Dec 06 09:34:30 compute-0 sshd-session[53496]: Connection closed by 192.168.122.30 port 33508
Dec 06 09:34:30 compute-0 sshd-session[53493]: pam_unix(sshd:session): session closed for user zuul
Dec 06 09:34:30 compute-0 systemd[1]: session-12.scope: Deactivated successfully.
Dec 06 09:34:30 compute-0 systemd[1]: session-12.scope: Consumed 29.917s CPU time.
Dec 06 09:34:30 compute-0 systemd-logind[795]: Session 12 logged out. Waiting for processes to exit.
Dec 06 09:34:30 compute-0 systemd-logind[795]: Removed session 12.
Dec 06 09:34:36 compute-0 sshd-session[58471]: Accepted publickey for zuul from 192.168.122.30 port 59382 ssh2: ECDSA SHA256:r1j7aLsKAM+XxDNbzEU5vWGpGNCOaIBwc7FZdATPttA
Dec 06 09:34:36 compute-0 systemd-logind[795]: New session 13 of user zuul.
Dec 06 09:34:36 compute-0 systemd[1]: Started Session 13 of User zuul.
Dec 06 09:34:36 compute-0 sshd-session[58471]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 06 09:34:36 compute-0 sudo[58624]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ugkrdusehcuoobifxrniefgdqvwaqsvr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013676.2710402-26-235046196311739/AnsiballZ_file.py'
Dec 06 09:34:36 compute-0 sudo[58624]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:34:36 compute-0 python3.9[58626]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:34:36 compute-0 sudo[58624]: pam_unix(sudo:session): session closed for user root
Dec 06 09:34:37 compute-0 sudo[58776]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fsstpvqmraivcpgdjrqsmuomxyinbyth ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013677.2623353-62-225085314951969/AnsiballZ_stat.py'
Dec 06 09:34:37 compute-0 sudo[58776]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:34:38 compute-0 python3.9[58778]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/ceph-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:34:38 compute-0 sudo[58776]: pam_unix(sudo:session): session closed for user root
Dec 06 09:34:38 compute-0 sudo[58899]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pfpfweojhbynrkkfbtzcltmnohpjwlrd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013677.2623353-62-225085314951969/AnsiballZ_copy.py'
Dec 06 09:34:38 compute-0 sudo[58899]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:34:38 compute-0 python3.9[58901]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/ceph-networks.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765013677.2623353-62-225085314951969/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=729ea8396013e3343245d6e934e0dcef55029ad2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:34:38 compute-0 sudo[58899]: pam_unix(sudo:session): session closed for user root
Dec 06 09:34:39 compute-0 sshd-session[58474]: Connection closed by 192.168.122.30 port 59382
Dec 06 09:34:39 compute-0 sshd-session[58471]: pam_unix(sshd:session): session closed for user zuul
Dec 06 09:34:39 compute-0 systemd[1]: session-13.scope: Deactivated successfully.
Dec 06 09:34:39 compute-0 systemd[1]: session-13.scope: Consumed 1.993s CPU time.
Dec 06 09:34:39 compute-0 systemd-logind[795]: Session 13 logged out. Waiting for processes to exit.
Dec 06 09:34:39 compute-0 systemd-logind[795]: Removed session 13.
Dec 06 09:34:45 compute-0 sshd-session[58926]: Accepted publickey for zuul from 192.168.122.30 port 36242 ssh2: ECDSA SHA256:r1j7aLsKAM+XxDNbzEU5vWGpGNCOaIBwc7FZdATPttA
Dec 06 09:34:45 compute-0 systemd-logind[795]: New session 14 of user zuul.
Dec 06 09:34:45 compute-0 systemd[1]: Started Session 14 of User zuul.
Dec 06 09:34:45 compute-0 sshd-session[58926]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 06 09:34:47 compute-0 python3.9[59079]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 06 09:34:48 compute-0 sudo[59233]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jmmzutsxijxypnlgstugrjbaitawnwuk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013687.5451636-59-8003082955402/AnsiballZ_file.py'
Dec 06 09:34:48 compute-0 sudo[59233]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:34:48 compute-0 python3.9[59235]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:34:48 compute-0 sudo[59233]: pam_unix(sudo:session): session closed for user root
Dec 06 09:34:49 compute-0 sudo[59409]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zenumdaltdsruwgjvgbdgxbzjgobxieu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013688.4933412-83-79513902301521/AnsiballZ_stat.py'
Dec 06 09:34:49 compute-0 sudo[59409]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:34:49 compute-0 python3.9[59411]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:34:49 compute-0 sudo[59409]: pam_unix(sudo:session): session closed for user root
Dec 06 09:34:49 compute-0 sudo[59532]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lqepsxolvzlmgzdkyyzllphjkrpgzrps ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013688.4933412-83-79513902301521/AnsiballZ_copy.py'
Dec 06 09:34:49 compute-0 sudo[59532]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:34:49 compute-0 python3.9[59534]: ansible-ansible.legacy.copy Invoked with dest=/root/.config/containers/auth.json group=zuul mode=0660 owner=zuul src=/home/zuul/.ansible/tmp/ansible-tmp-1765013688.4933412-83-79513902301521/.source.json _original_basename=.m2p3csdt follow=False checksum=bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:34:50 compute-0 sudo[59532]: pam_unix(sudo:session): session closed for user root
Dec 06 09:34:50 compute-0 sudo[59684]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kgdibstpzwqdddjzhuhczpxmvgyqtthq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013690.6576765-152-192348103060886/AnsiballZ_stat.py'
Dec 06 09:34:50 compute-0 sudo[59684]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:34:51 compute-0 python3.9[59686]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:34:51 compute-0 sudo[59684]: pam_unix(sudo:session): session closed for user root
Dec 06 09:34:51 compute-0 sudo[59807]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zzirhnjbqhcdlvajunakpctrfqmiwamt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013690.6576765-152-192348103060886/AnsiballZ_copy.py'
Dec 06 09:34:51 compute-0 sudo[59807]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:34:51 compute-0 python3.9[59809]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysconfig/podman_drop_in mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765013690.6576765-152-192348103060886/.source _original_basename=.l0tuqyj3 follow=False checksum=125299ce8dea7711a76292961206447f0043248b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:34:51 compute-0 sudo[59807]: pam_unix(sudo:session): session closed for user root
Dec 06 09:34:52 compute-0 sudo[59959]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vyjcdnoclwixfxjtcmwarvpjjihlznty ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013692.0792189-200-137045214918066/AnsiballZ_file.py'
Dec 06 09:34:52 compute-0 sudo[59959]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:34:52 compute-0 python3.9[59961]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 06 09:34:52 compute-0 sudo[59959]: pam_unix(sudo:session): session closed for user root
Dec 06 09:34:53 compute-0 sudo[60111]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vexgxxnvqxvyenakatzjtdfizdkkgxio ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013693.0702667-224-239507521430327/AnsiballZ_stat.py'
Dec 06 09:34:53 compute-0 sudo[60111]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:34:53 compute-0 python3.9[60113]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:34:53 compute-0 sudo[60111]: pam_unix(sudo:session): session closed for user root
Dec 06 09:34:54 compute-0 sudo[60234]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ewhxjnimjwmmslrtqitmcfusviovdxhf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013693.0702667-224-239507521430327/AnsiballZ_copy.py'
Dec 06 09:34:54 compute-0 sudo[60234]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:34:54 compute-0 python3.9[60236]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-container-shutdown group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765013693.0702667-224-239507521430327/.source _original_basename=edpm-container-shutdown follow=False checksum=632c3792eb3dce4288b33ae7b265b71950d69f13 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec 06 09:34:54 compute-0 sudo[60234]: pam_unix(sudo:session): session closed for user root
Dec 06 09:34:54 compute-0 sudo[60386]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mffcsjyhxevqwedbuititceuqywqkuep ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013694.6037858-224-214058468494415/AnsiballZ_stat.py'
Dec 06 09:34:54 compute-0 sudo[60386]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:34:55 compute-0 python3.9[60388]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:34:55 compute-0 sudo[60386]: pam_unix(sudo:session): session closed for user root
Dec 06 09:34:55 compute-0 sudo[60509]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fgvvdyfrpobgelmhzdryldanjjngvqzv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013694.6037858-224-214058468494415/AnsiballZ_copy.py'
Dec 06 09:34:55 compute-0 sudo[60509]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:34:55 compute-0 python3.9[60511]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-start-podman-container group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765013694.6037858-224-214058468494415/.source _original_basename=edpm-start-podman-container follow=False checksum=b963c569d75a655c0ccae95d9bb4a2a9a4df27d1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec 06 09:34:55 compute-0 sudo[60509]: pam_unix(sudo:session): session closed for user root
Dec 06 09:34:56 compute-0 sudo[60661]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jnlcbhdwounojmwwwmqkkvqjpuctmtbi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013696.1175148-311-67226042488330/AnsiballZ_file.py'
Dec 06 09:34:56 compute-0 sudo[60661]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:34:56 compute-0 python3.9[60663]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:34:56 compute-0 sudo[60661]: pam_unix(sudo:session): session closed for user root
Dec 06 09:34:57 compute-0 sudo[60813]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zgutcjroxzwbpnthwsbxtzxwaaekzgzq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013696.9197714-335-236412990063071/AnsiballZ_stat.py'
Dec 06 09:34:57 compute-0 sudo[60813]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:34:57 compute-0 python3.9[60815]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:34:57 compute-0 sudo[60813]: pam_unix(sudo:session): session closed for user root
Dec 06 09:34:58 compute-0 sudo[60936]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zqbclswvqlhmzwnoegyehnmoqxyhmvkp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013696.9197714-335-236412990063071/AnsiballZ_copy.py'
Dec 06 09:34:58 compute-0 sudo[60936]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:34:58 compute-0 python3.9[60938]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm-container-shutdown.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765013696.9197714-335-236412990063071/.source.service _original_basename=edpm-container-shutdown-service follow=False checksum=6336835cb0f888670cc99de31e19c8c071444d33 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:34:58 compute-0 sudo[60936]: pam_unix(sudo:session): session closed for user root
Dec 06 09:34:58 compute-0 sudo[61088]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lcmpbebbdhsuvuruzoyeudjouhjcmbfu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013698.5876825-380-144554390355964/AnsiballZ_stat.py'
Dec 06 09:34:58 compute-0 sudo[61088]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:34:59 compute-0 python3.9[61090]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:34:59 compute-0 sudo[61088]: pam_unix(sudo:session): session closed for user root
Dec 06 09:34:59 compute-0 sudo[61211]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fufnedwihtkgzqzgnsxmnizezcwsoiew ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013698.5876825-380-144554390355964/AnsiballZ_copy.py'
Dec 06 09:34:59 compute-0 sudo[61211]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:34:59 compute-0 python3.9[61213]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765013698.5876825-380-144554390355964/.source.preset _original_basename=91-edpm-container-shutdown-preset follow=False checksum=b275e4375287528cb63464dd32f622c4f142a915 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:34:59 compute-0 sudo[61211]: pam_unix(sudo:session): session closed for user root
Dec 06 09:35:00 compute-0 sudo[61363]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gffjaczxeyotzetppooepsdyuzhehujx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013700.1402261-425-257727333217546/AnsiballZ_systemd.py'
Dec 06 09:35:00 compute-0 sudo[61363]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:35:01 compute-0 python3.9[61365]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 06 09:35:01 compute-0 systemd[1]: Reloading.
Dec 06 09:35:01 compute-0 systemd-rc-local-generator[61391]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 09:35:01 compute-0 systemd-sysv-generator[61394]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 09:35:01 compute-0 systemd[1]: Reloading.
Dec 06 09:35:01 compute-0 systemd-rc-local-generator[61432]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 09:35:01 compute-0 systemd-sysv-generator[61436]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 09:35:01 compute-0 systemd[1]: Starting EDPM Container Shutdown...
Dec 06 09:35:01 compute-0 systemd[1]: Finished EDPM Container Shutdown.
Dec 06 09:35:01 compute-0 sudo[61363]: pam_unix(sudo:session): session closed for user root
Dec 06 09:35:02 compute-0 sudo[61592]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wqsiwafwmuditbvqkglhzdfculdcsyta ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013702.0171874-449-257194567764607/AnsiballZ_stat.py'
Dec 06 09:35:02 compute-0 sudo[61592]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:35:02 compute-0 python3.9[61594]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:35:02 compute-0 sudo[61592]: pam_unix(sudo:session): session closed for user root
Dec 06 09:35:02 compute-0 sudo[61715]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-achuiwrurbiiirztuqlswsviwztgupub ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013702.0171874-449-257194567764607/AnsiballZ_copy.py'
Dec 06 09:35:02 compute-0 sudo[61715]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:35:03 compute-0 python3.9[61717]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/netns-placeholder.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765013702.0171874-449-257194567764607/.source.service _original_basename=netns-placeholder-service follow=False checksum=b61b1b5918c20c877b8b226fbf34ff89a082d972 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:35:03 compute-0 sudo[61715]: pam_unix(sudo:session): session closed for user root
Dec 06 09:35:03 compute-0 sudo[61867]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-privmifdvruadmkizanihxppkunepaas ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013703.4697819-494-159921600893920/AnsiballZ_stat.py'
Dec 06 09:35:03 compute-0 sudo[61867]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:35:04 compute-0 python3.9[61869]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:35:04 compute-0 sudo[61867]: pam_unix(sudo:session): session closed for user root
Dec 06 09:35:04 compute-0 sudo[61990]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iwywrpjpxldgtybonbudxhhvcemkhxdr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013703.4697819-494-159921600893920/AnsiballZ_copy.py'
Dec 06 09:35:04 compute-0 sudo[61990]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:35:04 compute-0 python3.9[61992]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-netns-placeholder.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765013703.4697819-494-159921600893920/.source.preset _original_basename=91-netns-placeholder-preset follow=False checksum=28b7b9aa893525d134a1eeda8a0a48fb25b736b9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:35:04 compute-0 sudo[61990]: pam_unix(sudo:session): session closed for user root
Dec 06 09:35:05 compute-0 sudo[62142]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fhdoivfoeuhvjrhcotqwyqfrnbmqawfu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013704.9998493-539-136744455539658/AnsiballZ_systemd.py'
Dec 06 09:35:05 compute-0 sudo[62142]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:35:05 compute-0 python3.9[62144]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 06 09:35:05 compute-0 systemd[1]: Reloading.
Dec 06 09:35:05 compute-0 systemd-rc-local-generator[62173]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 09:35:05 compute-0 systemd-sysv-generator[62177]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 09:35:05 compute-0 systemd[1]: Reloading.
Dec 06 09:35:06 compute-0 systemd-rc-local-generator[62210]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 09:35:06 compute-0 systemd-sysv-generator[62215]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 09:35:06 compute-0 systemd[1]: Starting Create netns directory...
Dec 06 09:35:06 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Dec 06 09:35:06 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Dec 06 09:35:06 compute-0 systemd[1]: Finished Create netns directory.
Dec 06 09:35:06 compute-0 sudo[62142]: pam_unix(sudo:session): session closed for user root
Dec 06 09:35:07 compute-0 python3.9[62370]: ansible-ansible.builtin.service_facts Invoked
Dec 06 09:35:07 compute-0 network[62387]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec 06 09:35:07 compute-0 network[62388]: 'network-scripts' will be removed from distribution in near future.
Dec 06 09:35:07 compute-0 network[62389]: It is advised to switch to 'NetworkManager' instead for network management.
Dec 06 09:35:13 compute-0 sudo[62649]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mczybuuqzehakytggwafuabjhmbapuyl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013712.8998797-587-56243283503878/AnsiballZ_systemd.py'
Dec 06 09:35:13 compute-0 sudo[62649]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:35:13 compute-0 python3.9[62651]: ansible-ansible.builtin.systemd Invoked with enabled=False name=iptables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 06 09:35:13 compute-0 systemd[1]: Reloading.
Dec 06 09:35:13 compute-0 systemd-rc-local-generator[62681]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 09:35:13 compute-0 systemd-sysv-generator[62687]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 09:35:13 compute-0 systemd[1]: Stopping IPv4 firewall with iptables...
Dec 06 09:35:14 compute-0 iptables.init[62691]: iptables: Setting chains to policy ACCEPT: raw mangle filter nat [  OK  ]
Dec 06 09:35:14 compute-0 iptables.init[62691]: iptables: Flushing firewall rules: [  OK  ]
Dec 06 09:35:14 compute-0 systemd[1]: iptables.service: Deactivated successfully.
Dec 06 09:35:14 compute-0 systemd[1]: Stopped IPv4 firewall with iptables.
Dec 06 09:35:14 compute-0 sudo[62649]: pam_unix(sudo:session): session closed for user root
Dec 06 09:35:14 compute-0 sudo[62886]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eloaqnnbfmtygvibzofnefhjozfpwhka ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013714.5122542-587-165826593774372/AnsiballZ_systemd.py'
Dec 06 09:35:14 compute-0 sudo[62886]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:35:15 compute-0 python3.9[62888]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ip6tables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 06 09:35:16 compute-0 sudo[62886]: pam_unix(sudo:session): session closed for user root
Dec 06 09:35:17 compute-0 sudo[63040]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bpbxawkgafwxectxzjfybcawnacmlegi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013716.6627982-635-221888761879695/AnsiballZ_systemd.py'
Dec 06 09:35:17 compute-0 sudo[63040]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:35:17 compute-0 python3.9[63042]: ansible-ansible.builtin.systemd Invoked with enabled=True name=nftables state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 06 09:35:17 compute-0 systemd[1]: Reloading.
Dec 06 09:35:17 compute-0 systemd-sysv-generator[63073]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 09:35:17 compute-0 systemd-rc-local-generator[63068]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 09:35:17 compute-0 systemd[1]: Starting Netfilter Tables...
Dec 06 09:35:17 compute-0 systemd[1]: Finished Netfilter Tables.
Dec 06 09:35:17 compute-0 sudo[63040]: pam_unix(sudo:session): session closed for user root
Dec 06 09:35:18 compute-0 sudo[63232]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cpwecvjquknycrbdbtknzdxijjxapuvs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013718.0480084-659-76907323923743/AnsiballZ_command.py'
Dec 06 09:35:18 compute-0 sudo[63232]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:35:18 compute-0 python3.9[63234]: ansible-ansible.legacy.command Invoked with _raw_params=nft flush ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 09:35:18 compute-0 sudo[63232]: pam_unix(sudo:session): session closed for user root
Dec 06 09:35:19 compute-0 sudo[63385]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tqnzhefvaunyiyrxezlbykgthofthwfg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013719.481677-701-250506154001412/AnsiballZ_stat.py'
Dec 06 09:35:19 compute-0 sudo[63385]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:35:20 compute-0 python3.9[63387]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:35:20 compute-0 sudo[63385]: pam_unix(sudo:session): session closed for user root
Dec 06 09:35:20 compute-0 sudo[63510]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aanohwyjmcltgtlaivwxobkeoasmaisy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013719.481677-701-250506154001412/AnsiballZ_copy.py'
Dec 06 09:35:20 compute-0 sudo[63510]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:35:20 compute-0 python3.9[63512]: ansible-ansible.legacy.copy Invoked with dest=/etc/ssh/sshd_config mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1765013719.481677-701-250506154001412/.source validate=/usr/sbin/sshd -T -f %s follow=False _original_basename=sshd_config_block.j2 checksum=6c79f4cb960ad444688fde322eeacb8402e22d79 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:35:20 compute-0 sudo[63510]: pam_unix(sudo:session): session closed for user root
Dec 06 09:35:21 compute-0 sudo[63663]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lbyqocylzcvrcanmfpmgkfjlanoxolow ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013720.9782817-746-61580958026962/AnsiballZ_systemd.py'
Dec 06 09:35:21 compute-0 sudo[63663]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:35:21 compute-0 python3.9[63665]: ansible-ansible.builtin.systemd Invoked with name=sshd state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 06 09:35:22 compute-0 systemd[1]: Reloading OpenSSH server daemon...
Dec 06 09:35:22 compute-0 sshd[1005]: Received SIGHUP; restarting.
Dec 06 09:35:22 compute-0 systemd[1]: Reloaded OpenSSH server daemon.
Dec 06 09:35:22 compute-0 sshd[1005]: Server listening on 0.0.0.0 port 22.
Dec 06 09:35:22 compute-0 sshd[1005]: Server listening on :: port 22.
Dec 06 09:35:22 compute-0 sudo[63663]: pam_unix(sudo:session): session closed for user root
Dec 06 09:35:23 compute-0 sudo[63819]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wetiqufbizrrbdfhkpbaqwbchcyctxod ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013723.0440617-770-74337064479673/AnsiballZ_file.py'
Dec 06 09:35:23 compute-0 sudo[63819]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:35:23 compute-0 python3.9[63821]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:35:23 compute-0 sudo[63819]: pam_unix(sudo:session): session closed for user root
Dec 06 09:35:24 compute-0 sudo[63971]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lenucwsavbxevkbxmsyavbkuykofsjhu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013723.8908403-794-209916110352718/AnsiballZ_stat.py'
Dec 06 09:35:24 compute-0 sudo[63971]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:35:24 compute-0 python3.9[63973]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:35:24 compute-0 sudo[63971]: pam_unix(sudo:session): session closed for user root
Dec 06 09:35:24 compute-0 sudo[64094]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jdqduxxzuzwxoyvjtqnsrgdnozekfsxw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013723.8908403-794-209916110352718/AnsiballZ_copy.py'
Dec 06 09:35:24 compute-0 sudo[64094]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:35:25 compute-0 python3.9[64096]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/sshd-networks.yaml group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765013723.8908403-794-209916110352718/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=0bfc8440fd8f39002ab90252479fb794f51b5ae8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:35:25 compute-0 sudo[64094]: pam_unix(sudo:session): session closed for user root
Dec 06 09:35:26 compute-0 sudo[64246]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-madlvycimvyntsoccabvkkephuhwzver ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013725.6588492-848-208258343122825/AnsiballZ_timezone.py'
Dec 06 09:35:26 compute-0 sudo[64246]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:35:26 compute-0 python3.9[64248]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Dec 06 09:35:26 compute-0 systemd[1]: Starting Time & Date Service...
Dec 06 09:35:26 compute-0 systemd[1]: Started Time & Date Service.
Dec 06 09:35:26 compute-0 sudo[64246]: pam_unix(sudo:session): session closed for user root
Dec 06 09:35:28 compute-0 sudo[64402]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jieoyxzdcubjgedshiokbapkmevlrspe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013727.879982-875-35945434263555/AnsiballZ_file.py'
Dec 06 09:35:28 compute-0 sudo[64402]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:35:28 compute-0 python3.9[64404]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:35:28 compute-0 sudo[64402]: pam_unix(sudo:session): session closed for user root
Dec 06 09:35:29 compute-0 sudo[64554]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rcxgjqydmidctfeuufemwwflbrjthcxt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013728.649338-899-34963519501867/AnsiballZ_stat.py'
Dec 06 09:35:29 compute-0 sudo[64554]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:35:29 compute-0 python3.9[64556]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:35:29 compute-0 sudo[64554]: pam_unix(sudo:session): session closed for user root
Dec 06 09:35:29 compute-0 sudo[64677]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jsxbjwbnpyhnltpnmvpziejwlxivvybm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013728.649338-899-34963519501867/AnsiballZ_copy.py'
Dec 06 09:35:29 compute-0 sudo[64677]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:35:29 compute-0 python3.9[64679]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765013728.649338-899-34963519501867/.source.yaml follow=False _original_basename=base-rules.yaml.j2 checksum=450456afcafded6d4bdecceec7a02e806eebd8b3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:35:29 compute-0 sudo[64677]: pam_unix(sudo:session): session closed for user root
Dec 06 09:35:30 compute-0 sudo[64829]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mccroopmfwaqadcpnkdancmlsrpgsgyg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013730.1750224-944-99278301765048/AnsiballZ_stat.py'
Dec 06 09:35:30 compute-0 sudo[64829]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:35:30 compute-0 python3.9[64831]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:35:30 compute-0 sudo[64829]: pam_unix(sudo:session): session closed for user root
Dec 06 09:35:31 compute-0 sudo[64952]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-trendiryuorrfwfbiashwnjmijwjybtp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013730.1750224-944-99278301765048/AnsiballZ_copy.py'
Dec 06 09:35:31 compute-0 sudo[64952]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:35:31 compute-0 python3.9[64954]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765013730.1750224-944-99278301765048/.source.yaml _original_basename=.vh4w45ip follow=False checksum=97d170e1550eee4afc0af065b78cda302a97674c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:35:31 compute-0 sudo[64952]: pam_unix(sudo:session): session closed for user root
Dec 06 09:35:32 compute-0 sudo[65104]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nxpgrmjlnjghlytrjfdebzwgfsruhpll ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013731.7116563-989-145585259462717/AnsiballZ_stat.py'
Dec 06 09:35:32 compute-0 sudo[65104]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:35:32 compute-0 python3.9[65106]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:35:32 compute-0 sudo[65104]: pam_unix(sudo:session): session closed for user root
Dec 06 09:35:32 compute-0 sudo[65227]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sdwhrcyqvusagiomzwotesjpoxfsudpf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013731.7116563-989-145585259462717/AnsiballZ_copy.py'
Dec 06 09:35:32 compute-0 sudo[65227]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:35:32 compute-0 python3.9[65229]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/iptables.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765013731.7116563-989-145585259462717/.source.nft _original_basename=iptables.nft follow=False checksum=3e02df08f1f3ab4a513e94056dbd390e3d38fe30 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:35:32 compute-0 sudo[65227]: pam_unix(sudo:session): session closed for user root
Dec 06 09:35:33 compute-0 sudo[65379]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cjkmvtupfujowrhgextmrodyahefxlnk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013733.164334-1034-258509244403565/AnsiballZ_command.py'
Dec 06 09:35:33 compute-0 sudo[65379]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:35:33 compute-0 python3.9[65381]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/iptables.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 09:35:33 compute-0 sudo[65379]: pam_unix(sudo:session): session closed for user root
Dec 06 09:35:34 compute-0 sudo[65532]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-glhgzjpsdvputfvxjsknurtoldetrlma ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013733.9544828-1058-157680777433/AnsiballZ_command.py'
Dec 06 09:35:34 compute-0 sudo[65532]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:35:34 compute-0 python3.9[65534]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 09:35:34 compute-0 sudo[65532]: pam_unix(sudo:session): session closed for user root
Dec 06 09:35:35 compute-0 sudo[65685]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kwcwrwwluntwvmrfqhfkkekkgicblrqh ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1765013734.771153-1082-157230243109767/AnsiballZ_edpm_nftables_from_files.py'
Dec 06 09:35:35 compute-0 sudo[65685]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:35:35 compute-0 python3[65687]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Dec 06 09:35:35 compute-0 sudo[65685]: pam_unix(sudo:session): session closed for user root
Dec 06 09:35:36 compute-0 sudo[65837]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nxpbwuqozufejipsfdrjhyjrhngjacrg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013735.7543647-1106-264876055125822/AnsiballZ_stat.py'
Dec 06 09:35:36 compute-0 sudo[65837]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:35:36 compute-0 python3.9[65839]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:35:36 compute-0 sudo[65837]: pam_unix(sudo:session): session closed for user root
Dec 06 09:35:36 compute-0 sudo[65960]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nrnrgdwyvpvigfvoihlaquvbwpprhytd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013735.7543647-1106-264876055125822/AnsiballZ_copy.py'
Dec 06 09:35:36 compute-0 sudo[65960]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:35:37 compute-0 python3.9[65962]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765013735.7543647-1106-264876055125822/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:35:37 compute-0 sudo[65960]: pam_unix(sudo:session): session closed for user root
Dec 06 09:35:37 compute-0 sudo[66112]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mfqfiflrfsfuuqtpawuepqvcktlsjtms ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013737.2825475-1151-218966978741015/AnsiballZ_stat.py'
Dec 06 09:35:37 compute-0 sudo[66112]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:35:37 compute-0 python3.9[66114]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:35:37 compute-0 sudo[66112]: pam_unix(sudo:session): session closed for user root
Dec 06 09:35:38 compute-0 sudo[66235]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-owfykcbrxocubczgxucdllnrznelbiza ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013737.2825475-1151-218966978741015/AnsiballZ_copy.py'
Dec 06 09:35:38 compute-0 sudo[66235]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:35:38 compute-0 python3.9[66237]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765013737.2825475-1151-218966978741015/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:35:38 compute-0 sudo[66235]: pam_unix(sudo:session): session closed for user root
Dec 06 09:35:39 compute-0 sudo[66387]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tfmhwdqxqldtjgbbnpbmddgagfjwfyjg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013738.8889432-1196-219208165684260/AnsiballZ_stat.py'
Dec 06 09:35:39 compute-0 sudo[66387]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:35:39 compute-0 python3.9[66389]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:35:39 compute-0 sudo[66387]: pam_unix(sudo:session): session closed for user root
Dec 06 09:35:39 compute-0 sudo[66510]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-safvnfayrbovjddvvhkhymjwuilznmis ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013738.8889432-1196-219208165684260/AnsiballZ_copy.py'
Dec 06 09:35:39 compute-0 sudo[66510]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:35:40 compute-0 python3.9[66512]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765013738.8889432-1196-219208165684260/.source.nft follow=False _original_basename=flush-chain.j2 checksum=d16337256a56373421842284fe09e4e6c7df417e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:35:40 compute-0 sudo[66510]: pam_unix(sudo:session): session closed for user root
Dec 06 09:35:40 compute-0 sudo[66662]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rcutumpigbdgbdoaralcxgnuffmcfier ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013740.4591594-1241-82489744974300/AnsiballZ_stat.py'
Dec 06 09:35:40 compute-0 sudo[66662]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:35:41 compute-0 python3.9[66664]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:35:41 compute-0 sudo[66662]: pam_unix(sudo:session): session closed for user root
Dec 06 09:35:41 compute-0 sudo[66785]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-exjnfnbitffgxqnrqdycpeanijznfgwg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013740.4591594-1241-82489744974300/AnsiballZ_copy.py'
Dec 06 09:35:41 compute-0 sudo[66785]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:35:41 compute-0 python3.9[66787]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765013740.4591594-1241-82489744974300/.source.nft follow=False _original_basename=chains.j2 checksum=2079f3b60590a165d1d502e763170876fc8e2984 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:35:41 compute-0 sudo[66785]: pam_unix(sudo:session): session closed for user root
Dec 06 09:35:42 compute-0 sudo[66937]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wyqsltmuvvzhyesovsmetujstgnfrlpx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013741.928661-1286-154263044823477/AnsiballZ_stat.py'
Dec 06 09:35:42 compute-0 sudo[66937]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:35:42 compute-0 python3.9[66939]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:35:42 compute-0 sudo[66937]: pam_unix(sudo:session): session closed for user root
Dec 06 09:35:43 compute-0 sudo[67060]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ijxlmshmtqfplkpbrnwdsmahyyyqhkaj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013741.928661-1286-154263044823477/AnsiballZ_copy.py'
Dec 06 09:35:43 compute-0 sudo[67060]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:35:43 compute-0 python3.9[67062]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765013741.928661-1286-154263044823477/.source.nft follow=False _original_basename=ruleset.j2 checksum=693377dc03e5b6b24713cb537b18b88774724e35 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:35:43 compute-0 sudo[67060]: pam_unix(sudo:session): session closed for user root
Dec 06 09:35:43 compute-0 sudo[67212]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wgtgcdzssakxuxshbriameynqlnyidwb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013743.484379-1331-17512181226071/AnsiballZ_file.py'
Dec 06 09:35:43 compute-0 sudo[67212]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:35:43 compute-0 python3.9[67214]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:35:43 compute-0 sudo[67212]: pam_unix(sudo:session): session closed for user root
Dec 06 09:35:44 compute-0 sudo[67364]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-difcniwwzfkmlipfrxnnplqulwykyfce ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013744.2319129-1355-248381447122161/AnsiballZ_command.py'
Dec 06 09:35:44 compute-0 sudo[67364]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:35:44 compute-0 python3.9[67366]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 09:35:44 compute-0 sudo[67364]: pam_unix(sudo:session): session closed for user root
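[annotation] The five fragments staged above (edpm-chains, edpm-flushes, edpm-rules, edpm-update-jumps, edpm-jumps) are syntax-checked as a single ruleset before anything is loaded into the kernel. Re-wrapped for readability, the logged check is just:

    # -c = check only: parse the concatenated ruleset, install nothing
    cat /etc/nftables/edpm-chains.nft \
        /etc/nftables/edpm-flushes.nft \
        /etc/nftables/edpm-rules.nft \
        /etc/nftables/edpm-update-jumps.nft \
        /etc/nftables/edpm-jumps.nft | nft -c -f -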
Dec 06 09:35:46 compute-0 sudo[67523]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pziubpscfmbkbxsqzqfgvkdxgmcnzddh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013745.2696385-1379-267597399478779/AnsiballZ_blockinfile.py'
Dec 06 09:35:46 compute-0 sudo[67523]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:35:46 compute-0 python3.9[67525]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                            include "/etc/nftables/edpm-chains.nft"
                                            include "/etc/nftables/edpm-rules.nft"
                                            include "/etc/nftables/edpm-jumps.nft"
                                             path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:35:46 compute-0 sudo[67523]: pam_unix(sudo:session): session closed for user root
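[annotation] With the dry run green, blockinfile pins the fragments into the persistent nftables config so they load at boot; given the logged block and BEGIN/END markers, /etc/sysconfig/nftables.conf should now end with a section like:

    # BEGIN ANSIBLE MANAGED BLOCK
    include "/etc/nftables/iptables.nft"
    include "/etc/nftables/edpm-chains.nft"
    include "/etc/nftables/edpm-rules.nft"
    include "/etc/nftables/edpm-jumps.nft"
    # END ANSIBLE MANAGED BLOCK

Note the edit itself is revalidated (validate=nft -c -f %s) before the file is replaced.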
Dec 06 09:35:47 compute-0 sudo[67676]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jzbkbqfioryghumkezlpwiongwxmjrus ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013747.3180003-1406-40876419490733/AnsiballZ_file.py'
Dec 06 09:35:47 compute-0 sudo[67676]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:35:47 compute-0 python3.9[67678]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:35:47 compute-0 sudo[67676]: pam_unix(sudo:session): session closed for user root
Dec 06 09:35:48 compute-0 sudo[67828]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-amfdsdwupdjiirevoobvdrjpsdferxdu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013748.0856931-1406-236685818687966/AnsiballZ_file.py'
Dec 06 09:35:48 compute-0 sudo[67828]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:35:48 compute-0 python3.9[67830]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:35:48 compute-0 sudo[67828]: pam_unix(sudo:session): session closed for user root
Dec 06 09:35:49 compute-0 sudo[67980]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bwuytehgfjazemrlhpigwoeivvcnppzf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013748.970748-1451-117941248672367/AnsiballZ_mount.py'
Dec 06 09:35:49 compute-0 sudo[67980]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:35:49 compute-0 python3.9[67982]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Dec 06 09:35:49 compute-0 sudo[67980]: pam_unix(sudo:session): session closed for user root
Dec 06 09:35:50 compute-0 sudo[68133]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xjdusynfieaxtmewkxpgwqehkmvpitye ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013750.0341318-1451-221231599024572/AnsiballZ_mount.py'
Dec 06 09:35:50 compute-0 sudo[68133]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:35:50 compute-0 python3.9[68135]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Dec 06 09:35:50 compute-0 sudo[68133]: pam_unix(sudo:session): session closed for user root
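[annotation] Both hugepage mounts use ansible.posix.mount with state=mounted and boot=True, which mounts immediately and persists the entries. The net effect in /etc/fstab should be two lines equivalent to the following (the trailing "0 0" comes from the logged dump=0/passno=0):

    none  /dev/hugepages1G  hugetlbfs  pagesize=1G  0 0
    none  /dev/hugepages2M  hugetlbfs  pagesize=2M  0 0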
Dec 06 09:35:51 compute-0 sshd-session[58929]: Connection closed by 192.168.122.30 port 36242
Dec 06 09:35:51 compute-0 sshd-session[58926]: pam_unix(sshd:session): session closed for user zuul
Dec 06 09:35:51 compute-0 systemd[1]: session-14.scope: Deactivated successfully.
Dec 06 09:35:51 compute-0 systemd[1]: session-14.scope: Consumed 43.686s CPU time.
Dec 06 09:35:51 compute-0 systemd-logind[795]: Session 14 logged out. Waiting for processes to exit.
Dec 06 09:35:51 compute-0 systemd-logind[795]: Removed session 14.
Dec 06 09:35:56 compute-0 sshd-session[68161]: Accepted publickey for zuul from 192.168.122.30 port 51248 ssh2: ECDSA SHA256:r1j7aLsKAM+XxDNbzEU5vWGpGNCOaIBwc7FZdATPttA
Dec 06 09:35:56 compute-0 systemd-logind[795]: New session 15 of user zuul.
Dec 06 09:35:56 compute-0 systemd[1]: Started Session 15 of User zuul.
Dec 06 09:35:56 compute-0 sshd-session[68161]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 06 09:35:56 compute-0 systemd[1]: systemd-timedated.service: Deactivated successfully.
Dec 06 09:35:57 compute-0 sudo[68316]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tekpgrruczbyduvwvayiwjskjazsnzcl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013756.5948865-18-126980980277476/AnsiballZ_tempfile.py'
Dec 06 09:35:57 compute-0 sudo[68316]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:35:57 compute-0 python3.9[68318]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Dec 06 09:35:57 compute-0 sudo[68316]: pam_unix(sudo:session): session closed for user root
Dec 06 09:35:58 compute-0 sudo[68468]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-erhjvznyxqzwcycpliuypynxycbvsxmz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013757.6089513-54-152331799993577/AnsiballZ_stat.py'
Dec 06 09:35:58 compute-0 sudo[68468]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:35:58 compute-0 python3.9[68470]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 06 09:35:58 compute-0 sudo[68468]: pam_unix(sudo:session): session closed for user root
Dec 06 09:35:59 compute-0 sudo[68620]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mwtihypoajvbbdwhfksqrlfephgafifh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013758.5807083-84-101406187122513/AnsiballZ_setup.py'
Dec 06 09:35:59 compute-0 sudo[68620]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:35:59 compute-0 python3.9[68622]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 06 09:35:59 compute-0 sudo[68620]: pam_unix(sudo:session): session closed for user root
Dec 06 09:36:00 compute-0 sudo[68772]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dfnneasermrcvrtzwhsikajalwxzzurv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013759.7779374-109-17730519492065/AnsiballZ_blockinfile.py'
Dec 06 09:36:00 compute-0 sudo[68772]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:36:00 compute-0 python3.9[68774]: ansible-ansible.builtin.blockinfile Invoked with block=compute-2.ctlplane.example.com,192.168.122.102,compute-2* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCtvqYC0W0zPSX/plyJvm0q1VGDScYTNlcCdllukOe81JRfU3GhVusPZOX0xRSaLP/lmXtfqWcbBRCkLsmFrAo2EHn1CMqMr5WkhY4+rgApF+MGLDOUo57tlKZLPIwdL0SSY/Qv8lBfrqr7LUDZ7fTTTbqTzim/bncxg/u0KxSWBdvjfmYi13SwO65wDkFqSVYa3h8DNij6cRRjQ0fJuJ9Da860hmMnqo9GJMU6dq3zMXXn3YfuF4E4M0UQdlWmVW4EwBTzsfA1XYbSpW7VdRJw6esB4vZ9/Succj+XZiANoDqL9gXSEjNXVVWVbL/7aGJJF9LLQ3VVxmHdbYs1NcTI6Yy9d61zDJHnK/nlYHMhmAHxiDsZEpv0xF72LLzaI86xxvnbx4eUpnyW6LnKiUCYUAUrWIMpLiIbWUxeIoYmj9rqLhwlo5kCy7WdCYYEMTtGI53oIyU0EbXf/r4WAuzmqpVRPyc2Sd5tYD4aXh1JZLUcZy+NLR0Y4SA8RflKFcs=
                                            compute-2.ctlplane.example.com,192.168.122.102,compute-2* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFDJYF6pUvFgGUbY2QEOHAq7ZEhRQJUqPTVPOuTyb476
                                            compute-2.ctlplane.example.com,192.168.122.102,compute-2* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPJ19afQPeSMtr3O9L1fe5+bNzTAsOOCA5fLihUdryDYc29KKD+0XABHKIvqeefcCsIBjZRA//9OzCUftfvXK9A=
                                            compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDAiB67qk/R3IfGpcAH1Ojopc8KX94De+Kxs31cKQLD04X+4QRXPRdMxU85LOhN58eKoHaBi8cgqk7+dvRypGD5vbtbRN9r0VN7tGwiSQTlVFbEuhn0AEbnRwNAMWEEMHO9kEjufP4N2zEEhtQBXy9oO2tMX3+BX4Z3YZZMQyZUgohdBHp2VCul9VdRuo0oHSr8HHm0nN61dMjalnThmgkGAu5hG8qhkWT4i9hroSKBsR5kVBUFTqdXekYkVy4YIYfM2lBXiMOFHtvr1a+KOyIfgWMb7GBPW7oKqtzCfVgSbGaUhSvGzs1OWt3U/PjjapIlmDnwD5ukzVxWV5ldh0vA48tXh5R1wqAoN5/Y/RiAKaY2kd/fvtkhvVDGZluXOz5jJ02IFHm+v4dP3Ig8YOuS5BEkWFuJHkblW0t/+4siTHWwmGEuvUI6y8Gb2pGcBKsWCJtLePYzT09IAmrjwO0jAgbWy0nvCZ+SKlbBBrXP6OgNgMkA+GH9iGOl6FOuRok=
                                            compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGYNj3LmNvR0emoQHuuy9NKXPivs/dznunVy8GExnJl8
                                            compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJhKmGSvg8FMw16qKPzk6Pyj+OHkN3bmk20mts1PdCRcNRnn9sT1DgI6U8Aze1tjGPujT4eDL+Y9r/hsrfM4qDc=
                                            compute-1.ctlplane.example.com,192.168.122.101,compute-1* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDneZurSARwLaZA1xEymzXlvVAPvP8u0PCrqXuMYD5ewImDDChRITnk4XHKT/DUfrSJf9/7oJsddEbLRjhCtedqrMZsCkWz1BxtCmPBuvz2LfFhEn27TjqYLctOVGigQGsj6ILvPOzzLiapd93yApWDmH6P0un/ltmdM0iZLygNpzG3HLF8STBXzlo/8slci69Em7XppcrOpl1TS7DaVlpNcRQvo9pFuIrbMD9g0DOdMwk5YCH6g7OzGWqq0gt0YUOztmsqxWHKav3E0SXAD/vkgRc/1ZCNGFNSvf0dIgimCF3xlNWrppnvNgQ1BRqiQ7RArlOp1bVg0Ugdce6f4TIrq36Ois2U5+/myF5WQ7l9hRMRvoP64hSSsRAIDobTI/zMStUP3iZPFngxDxwQtpydHfFGywBL9811c42U7JsGxE8890uOIDk/oOkyhSH6KHQCPFjmKBJ98nT01lgnXyFSNOqds6QOYBasUWNFWd2wS7YpTheGlVVM8bk/gB4K2L0=
                                            compute-1.ctlplane.example.com,192.168.122.101,compute-1* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOMkn8zp09tRuEaH/bUoP0rYj+dziM1KcqMKxOgM9K1U
                                            compute-1.ctlplane.example.com,192.168.122.101,compute-1* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBCrMdvJJYP0cflC7RDFsxwr66nSp9R7QU726CAfJcKLw6vHh8Z9Lw5wLH0kiaSpsb6SAPffloplHEDiwTOkghOc=
                                             create=True mode=0644 path=/tmp/ansible.b3twgp32 state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:36:00 compute-0 sudo[68772]: pam_unix(sudo:session): session closed for user root
Dec 06 09:36:01 compute-0 sudo[68924]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mvzojxrjjurtmdxthnbjppcgucpaqqwc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013760.7165773-133-196187393591459/AnsiballZ_command.py'
Dec 06 09:36:01 compute-0 sudo[68924]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:36:01 compute-0 python3.9[68926]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.b3twgp32' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 09:36:01 compute-0 sudo[68924]: pam_unix(sudo:session): session closed for user root
Dec 06 09:36:02 compute-0 sudo[69078]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qxoeyduledoclxjotlxivolnxghmcbte ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013761.8120189-157-167832904929693/AnsiballZ_file.py'
Dec 06 09:36:02 compute-0 sudo[69078]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:36:02 compute-0 python3.9[69080]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.b3twgp32 state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:36:02 compute-0 sudo[69078]: pam_unix(sudo:session): session closed for user root
Dec 06 09:36:02 compute-0 sshd-session[68164]: Connection closed by 192.168.122.30 port 51248
Dec 06 09:36:02 compute-0 sshd-session[68161]: pam_unix(sshd:session): session closed for user zuul
Dec 06 09:36:02 compute-0 systemd[1]: session-15.scope: Deactivated successfully.
Dec 06 09:36:02 compute-0 systemd[1]: session-15.scope: Consumed 4.219s CPU time.
Dec 06 09:36:02 compute-0 systemd-logind[795]: Session 15 logged out. Waiting for processes to exit.
Dec 06 09:36:02 compute-0 systemd-logind[795]: Removed session 15.
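[annotation] Session 15 is a gather-and-distribute pass for SSH host keys: host-key facts are gathered, a root-owned tempfile accumulates the managed block of RSA/Ed25519/ECDSA keys for all three computes, and the tempfile is copied over the system-wide known_hosts and removed. Stripped of the Ansible plumbing, it amounts to roughly:

    tmp=$(mktemp /tmp/ansible.XXXXXXXX)        # ansible.builtin.tempfile
    # ansible.builtin.blockinfile writes the host-key block into "$tmp"
    cat "$tmp" > /etc/ssh/ssh_known_hosts      # ansible.legacy.command
    rm -f "$tmp"                               # ansible.builtin.file state=absent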
Dec 06 09:36:08 compute-0 sshd-session[69105]: Accepted publickey for zuul from 192.168.122.30 port 48388 ssh2: ECDSA SHA256:r1j7aLsKAM+XxDNbzEU5vWGpGNCOaIBwc7FZdATPttA
Dec 06 09:36:08 compute-0 systemd-logind[795]: New session 16 of user zuul.
Dec 06 09:36:08 compute-0 systemd[1]: Started Session 16 of User zuul.
Dec 06 09:36:08 compute-0 sshd-session[69105]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 06 09:36:09 compute-0 python3.9[69258]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 06 09:36:10 compute-0 sudo[69412]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-guupgfaxqwmtlmlvbesfnlmalydxzopv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013769.8920877-56-221968662044780/AnsiballZ_systemd.py'
Dec 06 09:36:10 compute-0 sudo[69412]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:36:10 compute-0 python3.9[69414]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Dec 06 09:36:10 compute-0 sudo[69412]: pam_unix(sudo:session): session closed for user root
Dec 06 09:36:11 compute-0 sudo[69566]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pewwdybcosyxifimibdnhvwcgqowexcc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013771.1791337-80-68999678744172/AnsiballZ_systemd.py'
Dec 06 09:36:11 compute-0 sudo[69566]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:36:11 compute-0 python3.9[69568]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 06 09:36:11 compute-0 sudo[69566]: pam_unix(sudo:session): session closed for user root
Dec 06 09:36:12 compute-0 sudo[69719]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ubxgppkidvebrqxrusvkjsuyddboyrmz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013772.2087274-107-32827602043535/AnsiballZ_command.py'
Dec 06 09:36:12 compute-0 sudo[69719]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:36:12 compute-0 python3.9[69721]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 09:36:12 compute-0 sudo[69719]: pam_unix(sudo:session): session closed for user root
Dec 06 09:36:13 compute-0 sudo[69872]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-giivewwgwxwbmynmgjzuurymuembmfym ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013773.2165926-131-244835518239638/AnsiballZ_stat.py'
Dec 06 09:36:13 compute-0 sudo[69872]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:36:13 compute-0 python3.9[69874]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 06 09:36:13 compute-0 sudo[69872]: pam_unix(sudo:session): session closed for user root
Dec 06 09:36:14 compute-0 sudo[70026]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mbpvdrlbpgulbxydvfyuworyybtfqzwf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013774.1351376-155-69546560701221/AnsiballZ_command.py'
Dec 06 09:36:14 compute-0 sudo[70026]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:36:14 compute-0 python3.9[70028]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 09:36:14 compute-0 sudo[70026]: pam_unix(sudo:session): session closed for user root
Dec 06 09:36:15 compute-0 sudo[70181]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nhygtsdakyrbylqebyzbchjdyynpyiov ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013774.948656-179-121235249842224/AnsiballZ_file.py'
Dec 06 09:36:15 compute-0 sudo[70181]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:36:15 compute-0 python3.9[70183]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:36:15 compute-0 sudo[70181]: pam_unix(sudo:session): session closed for user root
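[annotation] The live reload in session 16 is deliberately two-staged: edpm-chains.nft is applied on its own first (re-declaring chains is idempotent), and only then are the flush/rules/update-jump fragments piped to nft as one file, which nft commits as a single transaction, so the base chains never vanish while rules are being swapped. The edpm-rules.nft.changed marker written earlier gates this step and is deleted once the reload succeeds:

    nft -f /etc/nftables/edpm-chains.nft          # stage 1: (re)create chains
    cat /etc/nftables/edpm-flushes.nft \
        /etc/nftables/edpm-rules.nft \
        /etc/nftables/edpm-update-jumps.nft | nft -f -   # stage 2: atomic swap
    rm -f /etc/nftables/edpm-rules.nft.changed    # clear the "pending" marker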
Dec 06 09:36:16 compute-0 sshd-session[69108]: Connection closed by 192.168.122.30 port 48388
Dec 06 09:36:16 compute-0 sshd-session[69105]: pam_unix(sshd:session): session closed for user zuul
Dec 06 09:36:16 compute-0 systemd[1]: session-16.scope: Deactivated successfully.
Dec 06 09:36:16 compute-0 systemd[1]: session-16.scope: Consumed 5.180s CPU time.
Dec 06 09:36:16 compute-0 systemd-logind[795]: Session 16 logged out. Waiting for processes to exit.
Dec 06 09:36:16 compute-0 systemd-logind[795]: Removed session 16.
Dec 06 09:36:21 compute-0 sshd-session[70208]: Accepted publickey for zuul from 192.168.122.30 port 45322 ssh2: ECDSA SHA256:r1j7aLsKAM+XxDNbzEU5vWGpGNCOaIBwc7FZdATPttA
Dec 06 09:36:21 compute-0 systemd-logind[795]: New session 17 of user zuul.
Dec 06 09:36:21 compute-0 systemd[1]: Started Session 17 of User zuul.
Dec 06 09:36:21 compute-0 sshd-session[70208]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 06 09:36:22 compute-0 python3.9[70361]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 06 09:36:23 compute-0 sudo[70515]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gcsujorwzruszmydfoadoiohanjuzwqf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013783.3201199-62-85990520190364/AnsiballZ_setup.py'
Dec 06 09:36:23 compute-0 sudo[70515]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:36:23 compute-0 python3.9[70517]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 06 09:36:24 compute-0 sudo[70515]: pam_unix(sudo:session): session closed for user root
Dec 06 09:36:24 compute-0 sudo[70599]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-erzuiejwjgpibsipixmenhgaljqykpqz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013783.3201199-62-85990520190364/AnsiballZ_dnf.py'
Dec 06 09:36:24 compute-0 sudo[70599]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:36:24 compute-0 python3.9[70601]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Dec 06 09:36:26 compute-0 sudo[70599]: pam_unix(sudo:session): session closed for user root
Dec 06 09:36:27 compute-0 python3.9[70752]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 09:36:28 compute-0 python3.9[70903]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
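[annotation] yum-utils is installed in this session purely to obtain needs-restarting; with -r it reports through its exit status whether the running kernel or core services predate updated packages, while the find on /var/lib/openstack/reboot_required/ checks the deployment's own reboot-flag directory:

    needs-restarting -r   # exit 0: no reboot needed; exit 1: reboot required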
Dec 06 09:36:29 compute-0 python3.9[71053]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 06 09:36:29 compute-0 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 06 09:36:29 compute-0 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 06 09:36:30 compute-0 python3.9[71204]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/config follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 06 09:36:30 compute-0 sshd-session[70211]: Connection closed by 192.168.122.30 port 45322
Dec 06 09:36:30 compute-0 sshd-session[70208]: pam_unix(sshd:session): session closed for user zuul
Dec 06 09:36:30 compute-0 systemd[1]: session-17.scope: Deactivated successfully.
Dec 06 09:36:30 compute-0 systemd[1]: session-17.scope: Consumed 6.214s CPU time.
Dec 06 09:36:30 compute-0 systemd-logind[795]: Session 17 logged out. Waiting for processes to exit.
Dec 06 09:36:30 compute-0 systemd-logind[795]: Removed session 17.
Dec 06 09:36:39 compute-0 chronyd[58445]: Selected source 23.133.168.246 (pool.ntp.org)
Dec 06 09:36:41 compute-0 sshd-session[71230]: Accepted publickey for zuul from 38.102.83.98 port 56388 ssh2: RSA SHA256:spwPcL19sPHC+yJA+ECEA4UNmpshOiR8KfgtTbViJeA
Dec 06 09:36:41 compute-0 systemd-logind[795]: New session 18 of user zuul.
Dec 06 09:36:41 compute-0 systemd[1]: Started Session 18 of User zuul.
Dec 06 09:36:42 compute-0 sshd-session[71230]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 06 09:36:42 compute-0 sudo[71306]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ulcjmofnmaygsyonpkaqoucrfjlvhezv ; /usr/bin/python3'
Dec 06 09:36:42 compute-0 sudo[71306]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:36:42 compute-0 useradd[71310]: new group: name=ceph-admin, GID=42478
Dec 06 09:36:42 compute-0 useradd[71310]: new user: name=ceph-admin, UID=42477, GID=42478, home=/home/ceph-admin, shell=/bin/bash, from=none
Dec 06 09:36:42 compute-0 sudo[71306]: pam_unix(sudo:session): session closed for user root
Dec 06 09:36:43 compute-0 sudo[71392]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zomelrmoncvqeymgnxedwqpvdsbxguuo ; /usr/bin/python3'
Dec 06 09:36:43 compute-0 sudo[71392]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:36:43 compute-0 sudo[71392]: pam_unix(sudo:session): session closed for user root
Dec 06 09:36:43 compute-0 sudo[71465]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uberwgsbnqudvpivxahzecbkbkorwtyq ; /usr/bin/python3'
Dec 06 09:36:43 compute-0 sudo[71465]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:36:43 compute-0 sudo[71465]: pam_unix(sudo:session): session closed for user root
Dec 06 09:36:44 compute-0 sudo[71515]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dxsjgnhowqatkttchfbzpqhbgnmefard ; /usr/bin/python3'
Dec 06 09:36:44 compute-0 sudo[71515]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:36:45 compute-0 sudo[71515]: pam_unix(sudo:session): session closed for user root
Dec 06 09:36:45 compute-0 sudo[71541]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kvmenmhepfgfkdwlmndbczmkonpedfer ; /usr/bin/python3'
Dec 06 09:36:45 compute-0 sudo[71541]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:36:45 compute-0 sudo[71541]: pam_unix(sudo:session): session closed for user root
Dec 06 09:36:45 compute-0 sudo[71567]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-neyunldmosfmysyvvlamirintmalkqui ; /usr/bin/python3'
Dec 06 09:36:45 compute-0 sudo[71567]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:36:46 compute-0 sudo[71567]: pam_unix(sudo:session): session closed for user root
Dec 06 09:36:46 compute-0 sudo[71593]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-azygszjbggbrgrnkeuocpavtnmlvoqvt ; /usr/bin/python3'
Dec 06 09:36:46 compute-0 sudo[71593]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:36:46 compute-0 sudo[71593]: pam_unix(sudo:session): session closed for user root
Dec 06 09:36:47 compute-0 sudo[71671]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xrduzvncexuidudogwsyyqjmdeqrdywj ; /usr/bin/python3'
Dec 06 09:36:47 compute-0 sudo[71671]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:36:47 compute-0 sudo[71671]: pam_unix(sudo:session): session closed for user root
Dec 06 09:36:47 compute-0 sudo[71744]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bjulfzvreurejqyqfxukcmxjkpiewwtn ; /usr/bin/python3'
Dec 06 09:36:47 compute-0 sudo[71744]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:36:47 compute-0 sudo[71744]: pam_unix(sudo:session): session closed for user root
Dec 06 09:36:48 compute-0 sudo[71846]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ttopuyexqiteewqrdwtbrrelzawasinq ; /usr/bin/python3'
Dec 06 09:36:48 compute-0 sudo[71846]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:36:48 compute-0 sudo[71846]: pam_unix(sudo:session): session closed for user root
Dec 06 09:36:48 compute-0 sudo[71919]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bjjwbkcpxbemmhgfnqtnrahqxlcvrksd ; /usr/bin/python3'
Dec 06 09:36:48 compute-0 sudo[71919]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:36:48 compute-0 sudo[71919]: pam_unix(sudo:session): session closed for user root
Dec 06 09:36:49 compute-0 sudo[71969]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pcbfxoqccvoztexyttxbvpgdioabxsls ; /usr/bin/python3'
Dec 06 09:36:49 compute-0 sudo[71969]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:36:49 compute-0 python3[71971]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 06 09:36:50 compute-0 sudo[71969]: pam_unix(sudo:session): session closed for user root
Dec 06 09:36:51 compute-0 sudo[72064]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wovssjcyfymxkcgtkstqlrcxligscazt ; /usr/bin/python3'
Dec 06 09:36:51 compute-0 sudo[72064]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:36:51 compute-0 python3[72066]: ansible-ansible.legacy.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Dec 06 09:36:52 compute-0 sudo[72064]: pam_unix(sudo:session): session closed for user root
Dec 06 09:36:52 compute-0 sudo[72091]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-naeizgkqhqsrsrododnbjmclkhznbbjj ; /usr/bin/python3'
Dec 06 09:36:52 compute-0 sudo[72091]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:36:53 compute-0 python3[72093]: ansible-ansible.builtin.stat Invoked with path=/dev/loop3 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 06 09:36:53 compute-0 sudo[72091]: pam_unix(sudo:session): session closed for user root
Dec 06 09:36:53 compute-0 sudo[72117]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-upprnnyzhspwsptxdsocbouzyjftqimy ; /usr/bin/python3'
Dec 06 09:36:53 compute-0 sudo[72117]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:36:53 compute-0 python3[72119]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-0.img bs=1 count=0 seek=20G
                                          losetup /dev/loop3 /var/lib/ceph-osd-0.img
                                          lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 09:36:53 compute-0 kernel: loop: module loaded
Dec 06 09:36:53 compute-0 kernel: loop3: detected capacity change from 0 to 41943040
Dec 06 09:36:53 compute-0 sudo[72117]: pam_unix(sudo:session): session closed for user root
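[annotation] The dd invocation writes zero bytes (count=0) and only seeks, so it allocates /var/lib/ceph-osd-0.img as a 20 GiB sparse file; the kernel's "capacity change from 0 to 41943040" is that same size in 512-byte sectors (20 x 1024^3 / 512 = 41943040). An equivalent, arguably clearer, idiom:

    truncate -s 20G /var/lib/ceph-osd-0.img      # sparse 20 GiB backing file
    losetup /dev/loop3 /var/lib/ceph-osd-0.img   # expose it as a block device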
Dec 06 09:36:53 compute-0 sudo[72152]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uzclimbtfmgkvbcqtxyvzrfxhtvwwnwq ; /usr/bin/python3'
Dec 06 09:36:53 compute-0 sudo[72152]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:36:53 compute-0 python3[72154]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop3
                                          vgcreate ceph_vg0 /dev/loop3
                                          lvcreate -n ceph_lv0 -l +100%FREE ceph_vg0
                                          lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 09:36:53 compute-0 lvm[72157]: PV /dev/loop3 not used.
Dec 06 09:36:54 compute-0 lvm[72166]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 06 09:36:54 compute-0 systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg0.
Dec 06 09:36:54 compute-0 lvm[72168]:   1 logical volume(s) in volume group "ceph_vg0" now active
Dec 06 09:36:54 compute-0 sudo[72152]: pam_unix(sudo:session): session closed for user root
Dec 06 09:36:54 compute-0 systemd[1]: lvm-activate-ceph_vg0.service: Deactivated successfully.
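[annotation] The loop device is then carved into a dedicated volume group holding a single LV that spans all of it, which is what the OSD spec will later consume; note the event-driven autoactivation (lvm-activate-ceph_vg0.service) firing as soon as the PV comes online. The logged commands, for reference:

    pvcreate /dev/loop3
    vgcreate ceph_vg0 /dev/loop3
    lvcreate -n ceph_lv0 -l +100%FREE ceph_vg0   # yields /dev/ceph_vg0/ceph_lv0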
Dec 06 09:36:54 compute-0 sudo[72244]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-evzbcrogfbuwxdqezjezphofemfrwtrg ; /usr/bin/python3'
Dec 06 09:36:54 compute-0 sudo[72244]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:36:54 compute-0 python3[72246]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-0.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 06 09:36:54 compute-0 sudo[72244]: pam_unix(sudo:session): session closed for user root
Dec 06 09:36:54 compute-0 sudo[72317]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hkvddxoxkhvdhtyjherxkvgjkhbwigxc ; /usr/bin/python3'
Dec 06 09:36:54 compute-0 sudo[72317]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:36:55 compute-0 python3[72319]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765013814.3299854-36827-261843508893121/source dest=/etc/systemd/system/ceph-osd-losetup-0.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=427b1db064a970126b729b07acf99fa7d0eecb9c backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:36:55 compute-0 sudo[72317]: pam_unix(sudo:session): session closed for user root
Dec 06 09:36:55 compute-0 sudo[72367]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cbmkrvqrifbteyvvcydypbpskuuyxwat ; /usr/bin/python3'
Dec 06 09:36:55 compute-0 sudo[72367]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:36:55 compute-0 python3[72369]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-0.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 06 09:36:55 compute-0 systemd[1]: Reloading.
Dec 06 09:36:55 compute-0 systemd-rc-local-generator[72394]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 09:36:55 compute-0 systemd-sysv-generator[72397]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 09:36:56 compute-0 systemd[1]: Starting Ceph OSD losetup...
Dec 06 09:36:56 compute-0 bash[72409]: /dev/loop3: [64513]:4327963 (/var/lib/ceph-osd-0.img)
Dec 06 09:36:56 compute-0 systemd[1]: Finished Ceph OSD losetup.
Dec 06 09:36:56 compute-0 lvm[72410]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 06 09:36:56 compute-0 lvm[72410]: VG ceph_vg0 finished
Dec 06 09:36:56 compute-0 sudo[72367]: pam_unix(sudo:session): session closed for user root
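[annotation] Because loop devices do not survive a reboot, a small unit re-attaches the backing file at boot. The unit's contents are not captured in this log; a minimal sketch of what ceph-osd-losetup-0.service plausibly contains (assumed, inferred only from the unit name, the bash output above, and the paths in play):

    [Unit]
    Description=Ceph OSD losetup

    [Service]
    Type=oneshot
    RemainAfterExit=yes
    # assumed logic: attach the loop device if missing, then print the mapping
    # (the logged bash output matches losetup's listing format)
    ExecStart=/bin/bash -c 'losetup /dev/loop3 /var/lib/ceph-osd-0.img || true; losetup -j /var/lib/ceph-osd-0.img'

    [Install]
    WantedBy=multi-user.target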
Dec 06 09:36:58 compute-0 python3[72434]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 06 09:37:00 compute-0 sudo[72525]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-markdcgojdrtlnwqkukxjtjbssiwyeyh ; /usr/bin/python3'
Dec 06 09:37:00 compute-0 sudo[72525]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:37:00 compute-0 python3[72527]: ansible-ansible.legacy.dnf Invoked with name=['centos-release-ceph-squid'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Dec 06 09:37:03 compute-0 sudo[72525]: pam_unix(sudo:session): session closed for user root
Dec 06 09:37:03 compute-0 sudo[72582]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cifaapxglmxvptyamvqhfljszcxzjlpo ; /usr/bin/python3'
Dec 06 09:37:03 compute-0 sudo[72582]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:37:03 compute-0 python3[72584]: ansible-ansible.legacy.dnf Invoked with name=['cephadm'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Dec 06 09:37:06 compute-0 groupadd[72594]: group added to /etc/group: name=cephadm, GID=992
Dec 06 09:37:06 compute-0 groupadd[72594]: group added to /etc/gshadow: name=cephadm
Dec 06 09:37:06 compute-0 groupadd[72594]: new group: name=cephadm, GID=992
Dec 06 09:37:06 compute-0 useradd[72601]: new user: name=cephadm, UID=992, GID=992, home=/var/lib/cephadm, shell=/bin/bash, from=none
Dec 06 09:37:06 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec 06 09:37:06 compute-0 systemd[1]: Starting man-db-cache-update.service...
Dec 06 09:37:06 compute-0 sudo[72582]: pam_unix(sudo:session): session closed for user root
Dec 06 09:37:07 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec 06 09:37:07 compute-0 systemd[1]: Finished man-db-cache-update.service.
Dec 06 09:37:07 compute-0 systemd[1]: run-re7dc9dd4ed464e24bc713549f486bb07.service: Deactivated successfully.
Dec 06 09:37:07 compute-0 sudo[72697]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jbixnxwjtqgwriwajwsrdsqxvjbnodqd ; /usr/bin/python3'
Dec 06 09:37:07 compute-0 sudo[72697]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:37:07 compute-0 python3[72699]: ansible-ansible.builtin.stat Invoked with path=/usr/sbin/cephadm follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 06 09:37:07 compute-0 sudo[72697]: pam_unix(sudo:session): session closed for user root
Dec 06 09:37:07 compute-0 sudo[72725]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-thtrzkuqqxisxclwubojrqbqagekimbj ; /usr/bin/python3'
Dec 06 09:37:07 compute-0 sudo[72725]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:37:07 compute-0 python3[72727]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/cephadm ls --no-detail _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 09:37:07 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 06 09:37:07 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 06 09:37:07 compute-0 sudo[72725]: pam_unix(sudo:session): session closed for user root
Dec 06 09:37:08 compute-0 sudo[72789]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jjfkyvlqnrehsvjlixstaipwdtomwfkq ; /usr/bin/python3'
Dec 06 09:37:08 compute-0 sudo[72789]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:37:08 compute-0 python3[72791]: ansible-ansible.builtin.file Invoked with path=/etc/ceph state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:37:08 compute-0 sudo[72789]: pam_unix(sudo:session): session closed for user root
Dec 06 09:37:08 compute-0 sudo[72815]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nrwnbsjrkzugyssezgwgqnretkeiwdxg ; /usr/bin/python3'
Dec 06 09:37:08 compute-0 sudo[72815]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:37:08 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 06 09:37:08 compute-0 python3[72817]: ansible-ansible.builtin.file Invoked with path=/home/ceph-admin/specs owner=ceph-admin group=ceph-admin mode=0755 state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:37:08 compute-0 sudo[72815]: pam_unix(sudo:session): session closed for user root
Dec 06 09:37:09 compute-0 sudo[72893]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-isipwvkoabhhwnhyfuxbcdqaopbilsne ; /usr/bin/python3'
Dec 06 09:37:09 compute-0 sudo[72893]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:37:09 compute-0 python3[72895]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/specs/ceph_spec.yaml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 06 09:37:09 compute-0 sudo[72893]: pam_unix(sudo:session): session closed for user root
Dec 06 09:37:09 compute-0 sudo[72966]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qhctwihcexpzzqvehngdmzahbglmvrhg ; /usr/bin/python3'
Dec 06 09:37:09 compute-0 sudo[72966]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:37:10 compute-0 python3[72968]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765013829.3170278-37019-36960213508375/source dest=/home/ceph-admin/specs/ceph_spec.yaml owner=ceph-admin group=ceph-admin mode=0644 _original_basename=ceph_spec.yml follow=False checksum=a2c84611a4e46cfce32a90c112eae0345cab6abb backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:37:10 compute-0 sudo[72966]: pam_unix(sudo:session): session closed for user root
Dec 06 09:37:10 compute-0 sudo[73068]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yhnsoryhgdsoyoocgdsovhjeyyfnexzh ; /usr/bin/python3'
Dec 06 09:37:10 compute-0 sudo[73068]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:37:10 compute-0 python3[73070]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 06 09:37:10 compute-0 sudo[73068]: pam_unix(sudo:session): session closed for user root
Dec 06 09:37:11 compute-0 sudo[73141]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bqclrhbrencdvwcqqdrzxasksebagqhg ; /usr/bin/python3'
Dec 06 09:37:11 compute-0 sudo[73141]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:37:11 compute-0 python3[73143]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765013830.5843883-37037-38814438970557/source dest=/home/ceph-admin/assimilate_ceph.conf owner=ceph-admin group=ceph-admin mode=0644 _original_basename=initial_ceph.conf follow=False checksum=41828f7c2442fdf376911255e33c12863fc3b1b3 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:37:11 compute-0 sudo[73141]: pam_unix(sudo:session): session closed for user root
Dec 06 09:37:11 compute-0 sudo[73191]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cwffctwdazodllytcbcbgvtojwbkahay ; /usr/bin/python3'
Dec 06 09:37:11 compute-0 sudo[73191]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:37:11 compute-0 python3[73193]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/.ssh/id_rsa follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 06 09:37:11 compute-0 sudo[73191]: pam_unix(sudo:session): session closed for user root
Dec 06 09:37:11 compute-0 sudo[73219]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vkcxlamvllmvbcdhzqfessenhczmjobm ; /usr/bin/python3'
Dec 06 09:37:11 compute-0 sudo[73219]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:37:12 compute-0 python3[73221]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/.ssh/id_rsa.pub follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 06 09:37:12 compute-0 sudo[73219]: pam_unix(sudo:session): session closed for user root
Dec 06 09:37:12 compute-0 sudo[73247]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dvieauhkbvcsflltdociaolzcaczoece ; /usr/bin/python3'
Dec 06 09:37:12 compute-0 sudo[73247]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:37:12 compute-0 python3[73249]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 06 09:37:12 compute-0 sudo[73247]: pam_unix(sudo:session): session closed for user root
Dec 06 09:37:12 compute-0 sudo[73275]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mmkynltbmsldctljnshglnjsbcvwqsep ; /usr/bin/python3'
Dec 06 09:37:12 compute-0 sudo[73275]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:37:12 compute-0 python3[73277]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/cephadm bootstrap --skip-firewalld --ssh-private-key /home/ceph-admin/.ssh/id_rsa --ssh-public-key /home/ceph-admin/.ssh/id_rsa.pub --ssh-user ceph-admin --allow-fqdn-hostname --output-keyring /etc/ceph/ceph.client.admin.keyring --output-config /etc/ceph/ceph.conf --fsid 5ecd3f74-dade-5fc4-92ce-8950ae424258 --config /home/ceph-admin/assimilate_ceph.conf \--skip-monitoring-stack --skip-dashboard --mon-ip 192.168.122.100
                                           _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 09:37:13 compute-0 sshd-session[73281]: Accepted publickey for ceph-admin from 192.168.122.100 port 41070 ssh2: RSA SHA256:Gxeh0g0CuyN5zOpDUv+8o0JynyC1ASnaMny1857KGxo
Dec 06 09:37:13 compute-0 systemd-logind[795]: New session 19 of user ceph-admin.
Dec 06 09:37:13 compute-0 systemd[1]: Created slice User Slice of UID 42477.
Dec 06 09:37:13 compute-0 systemd[1]: Starting User Runtime Directory /run/user/42477...
Dec 06 09:37:13 compute-0 systemd[1]: Finished User Runtime Directory /run/user/42477.
Dec 06 09:37:13 compute-0 systemd[1]: Starting User Manager for UID 42477...
Dec 06 09:37:13 compute-0 systemd[73285]: pam_unix(systemd-user:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Dec 06 09:37:13 compute-0 systemd[73285]: Queued start job for default target Main User Target.
Dec 06 09:37:13 compute-0 systemd[73285]: Created slice User Application Slice.
Dec 06 09:37:13 compute-0 systemd[73285]: Started Mark boot as successful after the user session has run 2 minutes.
Dec 06 09:37:13 compute-0 systemd[73285]: Started Daily Cleanup of User's Temporary Directories.
Dec 06 09:37:13 compute-0 systemd[73285]: Reached target Paths.
Dec 06 09:37:13 compute-0 systemd[73285]: Reached target Timers.
Dec 06 09:37:13 compute-0 systemd[73285]: Starting D-Bus User Message Bus Socket...
Dec 06 09:37:13 compute-0 systemd[73285]: Starting Create User's Volatile Files and Directories...
Dec 06 09:37:13 compute-0 systemd[73285]: Listening on D-Bus User Message Bus Socket.
Dec 06 09:37:13 compute-0 systemd[73285]: Reached target Sockets.
Dec 06 09:37:13 compute-0 systemd[73285]: Finished Create User's Volatile Files and Directories.
Dec 06 09:37:13 compute-0 systemd[73285]: Reached target Basic System.
Dec 06 09:37:13 compute-0 systemd[73285]: Reached target Main User Target.
Dec 06 09:37:13 compute-0 systemd[73285]: Startup finished in 128ms.
Dec 06 09:37:13 compute-0 systemd[1]: Started User Manager for UID 42477.
Dec 06 09:37:13 compute-0 systemd[1]: Started Session 19 of User ceph-admin.
Dec 06 09:37:13 compute-0 sshd-session[73281]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Dec 06 09:37:13 compute-0 sudo[73301]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/echo
Dec 06 09:37:13 compute-0 sudo[73301]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:37:13 compute-0 sudo[73301]: pam_unix(sudo:session): session closed for user root
Dec 06 09:37:13 compute-0 sshd-session[73300]: Received disconnect from 192.168.122.100 port 41070:11: disconnected by user
Dec 06 09:37:13 compute-0 sshd-session[73300]: Disconnected from user ceph-admin 192.168.122.100 port 41070
Dec 06 09:37:13 compute-0 sshd-session[73281]: pam_unix(sshd:session): session closed for user ceph-admin
Dec 06 09:37:13 compute-0 systemd[1]: session-19.scope: Deactivated successfully.
Dec 06 09:37:13 compute-0 systemd-logind[795]: Session 19 logged out. Waiting for processes to exit.
Dec 06 09:37:13 compute-0 systemd-logind[795]: Removed session 19.
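
Session 19 is cephadm validating its own SSH access: it logs in as ceph-admin with the freshly staged key, runs a single sudo /bin/echo to confirm passwordless sudo works, then disconnects. A manual equivalent, assuming the same key, user, and address:

    ssh -i /home/ceph-admin/.ssh/id_rsa ceph-admin@192.168.122.100 sudo /bin/echo
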
Dec 06 09:37:13 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 06 09:37:13 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 06 09:37:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-compat2012127482-lower\x2dmapped.mount: Deactivated successfully.
Dec 06 09:37:23 compute-0 systemd[1]: Stopping User Manager for UID 42477...
Dec 06 09:37:23 compute-0 systemd[73285]: Activating special unit Exit the Session...
Dec 06 09:37:23 compute-0 systemd[73285]: Stopped target Main User Target.
Dec 06 09:37:23 compute-0 systemd[73285]: Stopped target Basic System.
Dec 06 09:37:23 compute-0 systemd[73285]: Stopped target Paths.
Dec 06 09:37:23 compute-0 systemd[73285]: Stopped target Sockets.
Dec 06 09:37:23 compute-0 systemd[73285]: Stopped target Timers.
Dec 06 09:37:23 compute-0 systemd[73285]: Stopped Mark boot as successful after the user session has run 2 minutes.
Dec 06 09:37:23 compute-0 systemd[73285]: Stopped Daily Cleanup of User's Temporary Directories.
Dec 06 09:37:23 compute-0 systemd[73285]: Closed D-Bus User Message Bus Socket.
Dec 06 09:37:23 compute-0 systemd[73285]: Stopped Create User's Volatile Files and Directories.
Dec 06 09:37:23 compute-0 systemd[73285]: Removed slice User Application Slice.
Dec 06 09:37:23 compute-0 systemd[73285]: Reached target Shutdown.
Dec 06 09:37:23 compute-0 systemd[73285]: Finished Exit the Session.
Dec 06 09:37:23 compute-0 systemd[73285]: Reached target Exit the Session.
Dec 06 09:37:23 compute-0 systemd[1]: user@42477.service: Deactivated successfully.
Dec 06 09:37:23 compute-0 systemd[1]: Stopped User Manager for UID 42477.
Dec 06 09:37:23 compute-0 systemd[1]: Stopping User Runtime Directory /run/user/42477...
Dec 06 09:37:23 compute-0 systemd[1]: run-user-42477.mount: Deactivated successfully.
Dec 06 09:37:23 compute-0 systemd[1]: user-runtime-dir@42477.service: Deactivated successfully.
Dec 06 09:37:23 compute-0 systemd[1]: Stopped User Runtime Directory /run/user/42477.
Dec 06 09:37:23 compute-0 systemd[1]: Removed slice User Slice of UID 42477.
Dec 06 09:37:33 compute-0 podman[73378]: 2025-12-06 09:37:33.687801094 +0000 UTC m=+19.703466437 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 06 09:37:33 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 06 09:37:33 compute-0 podman[73446]: 2025-12-06 09:37:33.792314542 +0000 UTC m=+0.070313437 container create 8eca77df08563d867e67a36d37136e8b2bb7542f6e6b7269e5433c3254df3474 (image=quay.io/ceph/ceph:v19, name=brave_boyd, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0)
Dec 06 09:37:33 compute-0 systemd[1]: Created slice Virtual Machine and Container Slice.
Dec 06 09:37:33 compute-0 systemd[1]: Started libpod-conmon-8eca77df08563d867e67a36d37136e8b2bb7542f6e6b7269e5433c3254df3474.scope.
Dec 06 09:37:33 compute-0 podman[73446]: 2025-12-06 09:37:33.762541117 +0000 UTC m=+0.040540092 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 06 09:37:33 compute-0 systemd[1]: Started libcrun container.
Dec 06 09:37:33 compute-0 podman[73446]: 2025-12-06 09:37:33.920746267 +0000 UTC m=+0.198745232 container init 8eca77df08563d867e67a36d37136e8b2bb7542f6e6b7269e5433c3254df3474 (image=quay.io/ceph/ceph:v19, name=brave_boyd, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec 06 09:37:33 compute-0 podman[73446]: 2025-12-06 09:37:33.932207863 +0000 UTC m=+0.210206768 container start 8eca77df08563d867e67a36d37136e8b2bb7542f6e6b7269e5433c3254df3474 (image=quay.io/ceph/ceph:v19, name=brave_boyd, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, ceph=True)
Dec 06 09:37:33 compute-0 podman[73446]: 2025-12-06 09:37:33.936718844 +0000 UTC m=+0.214717799 container attach 8eca77df08563d867e67a36d37136e8b2bb7542f6e6b7269e5433c3254df3474 (image=quay.io/ceph/ceph:v19, name=brave_boyd, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Dec 06 09:37:34 compute-0 brave_boyd[73463]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable)
Dec 06 09:37:34 compute-0 systemd[1]: libpod-8eca77df08563d867e67a36d37136e8b2bb7542f6e6b7269e5433c3254df3474.scope: Deactivated successfully.
Dec 06 09:37:34 compute-0 podman[73446]: 2025-12-06 09:37:34.065348983 +0000 UTC m=+0.343347908 container died 8eca77df08563d867e67a36d37136e8b2bb7542f6e6b7269e5433c3254df3474 (image=quay.io/ceph/ceph:v19, name=brave_boyd, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 06 09:37:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-8c214ddb913c3742d07df8ad18e863b695be8c3163b5e069e27c9a2e5f315238-merged.mount: Deactivated successfully.
Dec 06 09:37:34 compute-0 podman[73446]: 2025-12-06 09:37:34.123099014 +0000 UTC m=+0.401097919 container remove 8eca77df08563d867e67a36d37136e8b2bb7542f6e6b7269e5433c3254df3474 (image=quay.io/ceph/ceph:v19, name=brave_boyd, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 09:37:34 compute-0 systemd[1]: libpod-conmon-8eca77df08563d867e67a36d37136e8b2bb7542f6e6b7269e5433c3254df3474.scope: Deactivated successfully.
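
The first short-lived container (brave_boyd) exists only to print the image's Ceph version, which bootstrap records before proceeding. The likely shape of the call, as a sketch:

    podman run --rm --entrypoint ceph quay.io/ceph/ceph:v19 --version
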
Dec 06 09:37:34 compute-0 podman[73478]: 2025-12-06 09:37:34.222562417 +0000 UTC m=+0.067686577 container create 9883abc7f76662f493d129bbaff4ae37b7fc180941d6205ffcc9d07ef6f51586 (image=quay.io/ceph/ceph:v19, name=happy_murdock, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 09:37:34 compute-0 systemd[1]: Started libpod-conmon-9883abc7f76662f493d129bbaff4ae37b7fc180941d6205ffcc9d07ef6f51586.scope.
Dec 06 09:37:34 compute-0 podman[73478]: 2025-12-06 09:37:34.194641183 +0000 UTC m=+0.039765343 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 06 09:37:34 compute-0 systemd[1]: Started libcrun container.
Dec 06 09:37:34 compute-0 podman[73478]: 2025-12-06 09:37:34.312984069 +0000 UTC m=+0.158108239 container init 9883abc7f76662f493d129bbaff4ae37b7fc180941d6205ffcc9d07ef6f51586 (image=quay.io/ceph/ceph:v19, name=happy_murdock, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS)
Dec 06 09:37:34 compute-0 podman[73478]: 2025-12-06 09:37:34.32239689 +0000 UTC m=+0.167521050 container start 9883abc7f76662f493d129bbaff4ae37b7fc180941d6205ffcc9d07ef6f51586 (image=quay.io/ceph/ceph:v19, name=happy_murdock, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 09:37:34 compute-0 happy_murdock[73495]: 167 167
Dec 06 09:37:34 compute-0 podman[73478]: 2025-12-06 09:37:34.326795907 +0000 UTC m=+0.171920067 container attach 9883abc7f76662f493d129bbaff4ae37b7fc180941d6205ffcc9d07ef6f51586 (image=quay.io/ceph/ceph:v19, name=happy_murdock, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_REF=squid)
Dec 06 09:37:34 compute-0 systemd[1]: libpod-9883abc7f76662f493d129bbaff4ae37b7fc180941d6205ffcc9d07ef6f51586.scope: Deactivated successfully.
Dec 06 09:37:34 compute-0 podman[73500]: 2025-12-06 09:37:34.376301247 +0000 UTC m=+0.030973737 container died 9883abc7f76662f493d129bbaff4ae37b7fc180941d6205ffcc9d07ef6f51586 (image=quay.io/ceph/ceph:v19, name=happy_murdock, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid)
Dec 06 09:37:34 compute-0 podman[73500]: 2025-12-06 09:37:34.421533854 +0000 UTC m=+0.076206314 container remove 9883abc7f76662f493d129bbaff4ae37b7fc180941d6205ffcc9d07ef6f51586 (image=quay.io/ceph/ceph:v19, name=happy_murdock, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, OSD_FLAVOR=default)
Dec 06 09:37:34 compute-0 systemd[1]: libpod-conmon-9883abc7f76662f493d129bbaff4ae37b7fc180941d6205ffcc9d07ef6f51586.scope: Deactivated successfully.
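
happy_murdock printed "167 167": the uid and gid of the ceph user inside the image, which cephadm needs in order to chown the host-side data directories correctly. One way to reproduce the probe (the exact entrypoint cephadm uses is an assumption here):

    # 167 167 is ceph:ceph inside the quay.io/ceph/ceph image
    podman run --rm --entrypoint stat quay.io/ceph/ceph:v19 -c '%u %g' /var/lib/ceph
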
Dec 06 09:37:34 compute-0 podman[73515]: 2025-12-06 09:37:34.525248251 +0000 UTC m=+0.064631435 container create 1b168e69551094bf110fe92119c5572c0697d4651a2afc59b60bfe4c5b4b63e6 (image=quay.io/ceph/ceph:v19, name=trusting_jang, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 09:37:34 compute-0 systemd[1]: Started libpod-conmon-1b168e69551094bf110fe92119c5572c0697d4651a2afc59b60bfe4c5b4b63e6.scope.
Dec 06 09:37:34 compute-0 systemd[1]: Started libcrun container.
Dec 06 09:37:34 compute-0 podman[73515]: 2025-12-06 09:37:34.498272991 +0000 UTC m=+0.037656245 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 06 09:37:34 compute-0 podman[73515]: 2025-12-06 09:37:34.604784552 +0000 UTC m=+0.144167776 container init 1b168e69551094bf110fe92119c5572c0697d4651a2afc59b60bfe4c5b4b63e6 (image=quay.io/ceph/ceph:v19, name=trusting_jang, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325)
Dec 06 09:37:34 compute-0 podman[73515]: 2025-12-06 09:37:34.614261314 +0000 UTC m=+0.153644518 container start 1b168e69551094bf110fe92119c5572c0697d4651a2afc59b60bfe4c5b4b63e6 (image=quay.io/ceph/ceph:v19, name=trusting_jang, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True)
Dec 06 09:37:34 compute-0 podman[73515]: 2025-12-06 09:37:34.618878207 +0000 UTC m=+0.158261411 container attach 1b168e69551094bf110fe92119c5572c0697d4651a2afc59b60bfe4c5b4b63e6 (image=quay.io/ceph/ceph:v19, name=trusting_jang, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 09:37:34 compute-0 trusting_jang[73531]: AQBe+TNpCJmxJhAAKualQfzBdNpHXqNpZCg4iA==
Dec 06 09:37:34 compute-0 systemd[1]: libpod-1b168e69551094bf110fe92119c5572c0697d4651a2afc59b60bfe4c5b4b63e6.scope: Deactivated successfully.
Dec 06 09:37:34 compute-0 podman[73515]: 2025-12-06 09:37:34.654400525 +0000 UTC m=+0.193783729 container died 1b168e69551094bf110fe92119c5572c0697d4651a2afc59b60bfe4c5b4b63e6 (image=quay.io/ceph/ceph:v19, name=trusting_jang, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 09:37:34 compute-0 podman[73515]: 2025-12-06 09:37:34.705069567 +0000 UTC m=+0.244452761 container remove 1b168e69551094bf110fe92119c5572c0697d4651a2afc59b60bfe4c5b4b63e6 (image=quay.io/ceph/ceph:v19, name=trusting_jang, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 09:37:34 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 06 09:37:34 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 06 09:37:34 compute-0 systemd[1]: libpod-conmon-1b168e69551094bf110fe92119c5572c0697d4651a2afc59b60bfe4c5b4b63e6.scope: Deactivated successfully.
Dec 06 09:37:34 compute-0 podman[73551]: 2025-12-06 09:37:34.806739108 +0000 UTC m=+0.068954630 container create d01a4ed04b39c9882b7d49c87d2ae659cad9113b8486d1928e479c164711c2ee (image=quay.io/ceph/ceph:v19, name=nice_hermann, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, ceph=True, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 06 09:37:34 compute-0 systemd[1]: Started libpod-conmon-d01a4ed04b39c9882b7d49c87d2ae659cad9113b8486d1928e479c164711c2ee.scope.
Dec 06 09:37:34 compute-0 systemd[1]: Started libcrun container.
Dec 06 09:37:34 compute-0 podman[73551]: 2025-12-06 09:37:34.779987355 +0000 UTC m=+0.042202937 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 06 09:37:34 compute-0 podman[73551]: 2025-12-06 09:37:34.882813847 +0000 UTC m=+0.145029439 container init d01a4ed04b39c9882b7d49c87d2ae659cad9113b8486d1928e479c164711c2ee (image=quay.io/ceph/ceph:v19, name=nice_hermann, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 06 09:37:34 compute-0 podman[73551]: 2025-12-06 09:37:34.88893443 +0000 UTC m=+0.151149952 container start d01a4ed04b39c9882b7d49c87d2ae659cad9113b8486d1928e479c164711c2ee (image=quay.io/ceph/ceph:v19, name=nice_hermann, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.40.1)
Dec 06 09:37:34 compute-0 podman[73551]: 2025-12-06 09:37:34.893175353 +0000 UTC m=+0.155390895 container attach d01a4ed04b39c9882b7d49c87d2ae659cad9113b8486d1928e479c164711c2ee (image=quay.io/ceph/ceph:v19, name=nice_hermann, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True)
Dec 06 09:37:34 compute-0 nice_hermann[73566]: AQBe+TNpeHp/NhAAz2vYIGRSDKK5iPaaLsWobw==
Dec 06 09:37:34 compute-0 systemd[1]: libpod-d01a4ed04b39c9882b7d49c87d2ae659cad9113b8486d1928e479c164711c2ee.scope: Deactivated successfully.
Dec 06 09:37:34 compute-0 podman[73551]: 2025-12-06 09:37:34.920680717 +0000 UTC m=+0.182896219 container died d01a4ed04b39c9882b7d49c87d2ae659cad9113b8486d1928e479c164711c2ee (image=quay.io/ceph/ceph:v19, name=nice_hermann, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec 06 09:37:34 compute-0 podman[73551]: 2025-12-06 09:37:34.963199811 +0000 UTC m=+0.225415313 container remove d01a4ed04b39c9882b7d49c87d2ae659cad9113b8486d1928e479c164711c2ee (image=quay.io/ceph/ceph:v19, name=nice_hermann, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 09:37:34 compute-0 systemd[1]: libpod-conmon-d01a4ed04b39c9882b7d49c87d2ae659cad9113b8486d1928e479c164711c2ee.scope: Deactivated successfully.
Dec 06 09:37:35 compute-0 podman[73587]: 2025-12-06 09:37:35.050361956 +0000 UTC m=+0.059704193 container create 7cbe51f3d77610baefd0b323b3a2751062d460fd6021961c0099bb02d93fc4c4 (image=quay.io/ceph/ceph:v19, name=happy_torvalds, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 09:37:35 compute-0 systemd[1]: Started libpod-conmon-7cbe51f3d77610baefd0b323b3a2751062d460fd6021961c0099bb02d93fc4c4.scope.
Dec 06 09:37:35 compute-0 systemd[1]: Started libcrun container.
Dec 06 09:37:35 compute-0 podman[73587]: 2025-12-06 09:37:35.023379996 +0000 UTC m=+0.032722283 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 06 09:37:35 compute-0 podman[73587]: 2025-12-06 09:37:35.660644273 +0000 UTC m=+0.669986560 container init 7cbe51f3d77610baefd0b323b3a2751062d460fd6021961c0099bb02d93fc4c4 (image=quay.io/ceph/ceph:v19, name=happy_torvalds, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 09:37:35 compute-0 podman[73587]: 2025-12-06 09:37:35.667385293 +0000 UTC m=+0.676727540 container start 7cbe51f3d77610baefd0b323b3a2751062d460fd6021961c0099bb02d93fc4c4 (image=quay.io/ceph/ceph:v19, name=happy_torvalds, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec 06 09:37:35 compute-0 happy_torvalds[73603]: AQBf+TNpklaXKRAAjaDdl7bhP4qVZ4du0ZTu+Q==
Dec 06 09:37:35 compute-0 systemd[1]: libpod-7cbe51f3d77610baefd0b323b3a2751062d460fd6021961c0099bb02d93fc4c4.scope: Deactivated successfully.
Dec 06 09:37:35 compute-0 podman[73587]: 2025-12-06 09:37:35.971567106 +0000 UTC m=+0.980909413 container attach 7cbe51f3d77610baefd0b323b3a2751062d460fd6021961c0099bb02d93fc4c4 (image=quay.io/ceph/ceph:v19, name=happy_torvalds, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 09:37:35 compute-0 podman[73587]: 2025-12-06 09:37:35.972235934 +0000 UTC m=+0.981578171 container died 7cbe51f3d77610baefd0b323b3a2751062d460fd6021961c0099bb02d93fc4c4 (image=quay.io/ceph/ceph:v19, name=happy_torvalds, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 09:37:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-a84ace5647d6ef5214854da59463f86833e8d4ba258fe7a735a40321daded3bc-merged.mount: Deactivated successfully.
Dec 06 09:37:38 compute-0 podman[73587]: 2025-12-06 09:37:38.450959056 +0000 UTC m=+3.460301303 container remove 7cbe51f3d77610baefd0b323b3a2751062d460fd6021961c0099bb02d93fc4c4 (image=quay.io/ceph/ceph:v19, name=happy_torvalds, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec 06 09:37:38 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 06 09:37:38 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 06 09:37:38 compute-0 systemd[1]: libpod-conmon-7cbe51f3d77610baefd0b323b3a2751062d460fd6021961c0099bb02d93fc4c4.scope: Deactivated successfully.
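
trusting_jang, nice_hermann, and happy_torvalds each printed one fresh base64 secret (the AQB... strings): these are the initial cluster keys bootstrap needs, generated one container run at a time. Secrets of this form come from ceph-authtool; a sketch, not the logged command line:

    podman run --rm --entrypoint ceph-authtool quay.io/ceph/ceph:v19 --gen-print-key
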
Dec 06 09:37:38 compute-0 podman[73625]: 2025-12-06 09:37:38.535539682 +0000 UTC m=+0.054509104 container create 4847092287bf8ddf9edb594d9e0ea080e4e50d62a49ebd9b351c525474b0270e (image=quay.io/ceph/ceph:v19, name=admiring_rhodes, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 09:37:38 compute-0 systemd[1]: Started libpod-conmon-4847092287bf8ddf9edb594d9e0ea080e4e50d62a49ebd9b351c525474b0270e.scope.
Dec 06 09:37:38 compute-0 systemd[1]: Started libcrun container.
Dec 06 09:37:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dbcf53142a05666f10abd497e82241d956ccc22802545d416deaf2058a4028d4/merged/tmp/monmap supports timestamps until 2038 (0x7fffffff)
Dec 06 09:37:38 compute-0 podman[73625]: 2025-12-06 09:37:38.606057903 +0000 UTC m=+0.125027365 container init 4847092287bf8ddf9edb594d9e0ea080e4e50d62a49ebd9b351c525474b0270e (image=quay.io/ceph/ceph:v19, name=admiring_rhodes, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Dec 06 09:37:38 compute-0 podman[73625]: 2025-12-06 09:37:38.517999434 +0000 UTC m=+0.036968866 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 06 09:37:38 compute-0 podman[73625]: 2025-12-06 09:37:38.615036983 +0000 UTC m=+0.134006425 container start 4847092287bf8ddf9edb594d9e0ea080e4e50d62a49ebd9b351c525474b0270e (image=quay.io/ceph/ceph:v19, name=admiring_rhodes, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Dec 06 09:37:38 compute-0 podman[73625]: 2025-12-06 09:37:38.619373018 +0000 UTC m=+0.138342480 container attach 4847092287bf8ddf9edb594d9e0ea080e4e50d62a49ebd9b351c525474b0270e (image=quay.io/ceph/ceph:v19, name=admiring_rhodes, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 06 09:37:38 compute-0 admiring_rhodes[73641]: /usr/bin/monmaptool: monmap file /tmp/monmap
Dec 06 09:37:38 compute-0 admiring_rhodes[73641]: setting min_mon_release = quincy
Dec 06 09:37:38 compute-0 admiring_rhodes[73641]: /usr/bin/monmaptool: set fsid to 5ecd3f74-dade-5fc4-92ce-8950ae424258
Dec 06 09:37:38 compute-0 admiring_rhodes[73641]: /usr/bin/monmaptool: writing epoch 0 to /tmp/monmap (1 monitors)
Dec 06 09:37:38 compute-0 systemd[1]: libpod-4847092287bf8ddf9edb594d9e0ea080e4e50d62a49ebd9b351c525474b0270e.scope: Deactivated successfully.
Dec 06 09:37:38 compute-0 podman[73625]: 2025-12-06 09:37:38.668682734 +0000 UTC m=+0.187652206 container died 4847092287bf8ddf9edb594d9e0ea080e4e50d62a49ebd9b351c525474b0270e (image=quay.io/ceph/ceph:v19, name=admiring_rhodes, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 09:37:38 compute-0 podman[73625]: 2025-12-06 09:37:38.716821267 +0000 UTC m=+0.235790689 container remove 4847092287bf8ddf9edb594d9e0ea080e4e50d62a49ebd9b351c525474b0270e (image=quay.io/ceph/ceph:v19, name=admiring_rhodes, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 06 09:37:38 compute-0 systemd[1]: libpod-conmon-4847092287bf8ddf9edb594d9e0ea080e4e50d62a49ebd9b351c525474b0270e.scope: Deactivated successfully.
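
admiring_rhodes wraps monmaptool, which writes the initial monitor map: fsid 5ecd3f74-dade-5fc4-92ce-8950ae424258, epoch 0, one monitor, min_mon_release quincy. The classic manual-deployment equivalent, as a sketch using the logged fsid and --mon-ip:

    monmaptool --create --clobber \
        --fsid 5ecd3f74-dade-5fc4-92ce-8950ae424258 \
        --add compute-0 192.168.122.100 \
        /tmp/monmap
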
Dec 06 09:37:38 compute-0 podman[73662]: 2025-12-06 09:37:38.802352758 +0000 UTC m=+0.057191216 container create 50b875578ab05bcf9fbafce14857295d22a4ccc288133f31b5ac90a097dcf6b4 (image=quay.io/ceph/ceph:v19, name=intelligent_jackson, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 09:37:38 compute-0 systemd[1]: Started libpod-conmon-50b875578ab05bcf9fbafce14857295d22a4ccc288133f31b5ac90a097dcf6b4.scope.
Dec 06 09:37:38 compute-0 systemd[1]: Started libcrun container.
Dec 06 09:37:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7994fddadcfeecd4e1bc53d6c4fde02686373badafe0ae1b6de7ecde39276a8f/merged/tmp/keyring supports timestamps until 2038 (0x7fffffff)
Dec 06 09:37:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7994fddadcfeecd4e1bc53d6c4fde02686373badafe0ae1b6de7ecde39276a8f/merged/tmp/monmap supports timestamps until 2038 (0x7fffffff)
Dec 06 09:37:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7994fddadcfeecd4e1bc53d6c4fde02686373badafe0ae1b6de7ecde39276a8f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 09:37:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7994fddadcfeecd4e1bc53d6c4fde02686373badafe0ae1b6de7ecde39276a8f/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Dec 06 09:37:38 compute-0 podman[73662]: 2025-12-06 09:37:38.781252666 +0000 UTC m=+0.036091154 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 06 09:37:38 compute-0 podman[73662]: 2025-12-06 09:37:38.952021131 +0000 UTC m=+0.206859609 container init 50b875578ab05bcf9fbafce14857295d22a4ccc288133f31b5ac90a097dcf6b4 (image=quay.io/ceph/ceph:v19, name=intelligent_jackson, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 09:37:38 compute-0 podman[73662]: 2025-12-06 09:37:38.960917158 +0000 UTC m=+0.215755626 container start 50b875578ab05bcf9fbafce14857295d22a4ccc288133f31b5ac90a097dcf6b4 (image=quay.io/ceph/ceph:v19, name=intelligent_jackson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Dec 06 09:37:38 compute-0 podman[73662]: 2025-12-06 09:37:38.965296085 +0000 UTC m=+0.220134553 container attach 50b875578ab05bcf9fbafce14857295d22a4ccc288133f31b5ac90a097dcf6b4 (image=quay.io/ceph/ceph:v19, name=intelligent_jackson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 09:37:39 compute-0 systemd[1]: libpod-50b875578ab05bcf9fbafce14857295d22a4ccc288133f31b5ac90a097dcf6b4.scope: Deactivated successfully.
Dec 06 09:37:39 compute-0 podman[73662]: 2025-12-06 09:37:39.129823252 +0000 UTC m=+0.384661720 container died 50b875578ab05bcf9fbafce14857295d22a4ccc288133f31b5ac90a097dcf6b4 (image=quay.io/ceph/ceph:v19, name=intelligent_jackson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 09:37:39 compute-0 podman[73662]: 2025-12-06 09:37:39.181149952 +0000 UTC m=+0.435988410 container remove 50b875578ab05bcf9fbafce14857295d22a4ccc288133f31b5ac90a097dcf6b4 (image=quay.io/ceph/ceph:v19, name=intelligent_jackson, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Dec 06 09:37:39 compute-0 systemd[1]: libpod-conmon-50b875578ab05bcf9fbafce14857295d22a4ccc288133f31b5ac90a097dcf6b4.scope: Deactivated successfully.
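
intelligent_jackson is the monitor mkfs step: the mounts for /tmp/keyring, /tmp/monmap, and /var/lib/ceph/mon/ceph-compute-0 match the inputs and output of initializing a mon data directory. The classic command this wraps, as a sketch:

    ceph-mon --mkfs -i compute-0 --monmap /tmp/monmap --keyring /tmp/keyring
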
Dec 06 09:37:39 compute-0 systemd[1]: Reloading.
Dec 06 09:37:39 compute-0 systemd-sysv-generator[73745]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 09:37:39 compute-0 systemd-rc-local-generator[73737]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 09:37:39 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 06 09:37:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-dbcf53142a05666f10abd497e82241d956ccc22802545d416deaf2058a4028d4-merged.mount: Deactivated successfully.
Dec 06 09:37:39 compute-0 systemd[1]: Reloading.
Dec 06 09:37:39 compute-0 systemd-sysv-generator[73787]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 09:37:39 compute-0 systemd-rc-local-generator[73783]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 09:37:39 compute-0 systemd[1]: Reached target All Ceph clusters and services.
Dec 06 09:37:39 compute-0 systemd[1]: Reloading.
Dec 06 09:37:39 compute-0 systemd-sysv-generator[73825]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 09:37:39 compute-0 systemd-rc-local-generator[73820]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 09:37:40 compute-0 systemd[1]: Reached target Ceph cluster 5ecd3f74-dade-5fc4-92ce-8950ae424258.
Dec 06 09:37:40 compute-0 systemd[1]: Reloading.
Dec 06 09:37:40 compute-0 systemd-rc-local-generator[73860]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 09:37:40 compute-0 systemd-sysv-generator[73863]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 09:37:40 compute-0 systemd[1]: Reloading.
Dec 06 09:37:40 compute-0 systemd-sysv-generator[73904]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 09:37:40 compute-0 systemd-rc-local-generator[73901]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 09:37:40 compute-0 systemd[1]: Created slice Slice /system/ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258.
Dec 06 09:37:40 compute-0 systemd[1]: Reached target System Time Set.
Dec 06 09:37:40 compute-0 systemd[1]: Reached target System Time Synchronized.
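
The repeated systemd reloads are cephadm installing its units: ceph.target ("All Ceph clusters and services"), the per-cluster ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258.target, and the per-daemon service started next. To inspect them afterwards, assuming cephadm's default unit naming:

    systemctl status ceph.target ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258.target
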
Dec 06 09:37:40 compute-0 systemd[1]: Starting Ceph mon.compute-0 for 5ecd3f74-dade-5fc4-92ce-8950ae424258...
Dec 06 09:37:40 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 06 09:37:40 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 06 09:37:41 compute-0 podman[73958]: 2025-12-06 09:37:41.047966283 +0000 UTC m=+0.064155732 container create 5076c320e38e45a94f5fb7726329edcc2b8a7e5bff5175af100943d275cd2992 (image=quay.io/ceph/ceph:v19, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mon-compute-0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec 06 09:37:41 compute-0 podman[73958]: 2025-12-06 09:37:41.02010556 +0000 UTC m=+0.036295039 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 06 09:37:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e89e6ac69aa7547a2e7e76bd4456bafe35b3ffc299c45a52b9e951d32ddc733e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 09:37:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e89e6ac69aa7547a2e7e76bd4456bafe35b3ffc299c45a52b9e951d32ddc733e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 09:37:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e89e6ac69aa7547a2e7e76bd4456bafe35b3ffc299c45a52b9e951d32ddc733e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 09:37:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e89e6ac69aa7547a2e7e76bd4456bafe35b3ffc299c45a52b9e951d32ddc733e/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Dec 06 09:37:41 compute-0 podman[73958]: 2025-12-06 09:37:41.173046929 +0000 UTC m=+0.189236398 container init 5076c320e38e45a94f5fb7726329edcc2b8a7e5bff5175af100943d275cd2992 (image=quay.io/ceph/ceph:v19, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mon-compute-0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 06 09:37:41 compute-0 podman[73958]: 2025-12-06 09:37:41.179978193 +0000 UTC m=+0.196167642 container start 5076c320e38e45a94f5fb7726329edcc2b8a7e5bff5175af100943d275cd2992 (image=quay.io/ceph/ceph:v19, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mon-compute-0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Dec 06 09:37:41 compute-0 bash[73958]: 5076c320e38e45a94f5fb7726329edcc2b8a7e5bff5175af100943d275cd2992
Dec 06 09:37:41 compute-0 systemd[1]: Started Ceph mon.compute-0 for 5ecd3f74-dade-5fc4-92ce-8950ae424258.
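
The monitor now runs as a systemd service wrapping a long-lived podman container (bash printed its container id, 5076c320...). Assuming cephadm's ceph-FSID@DAEMON.ID unit naming and the container name shown in the podman lines above:

    systemctl status 'ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258@mon.compute-0.service'
    podman ps --filter name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mon-compute-0
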
Dec 06 09:37:41 compute-0 ceph-mon[73977]: set uid:gid to 167:167 (ceph:ceph)
Dec 06 09:37:41 compute-0 ceph-mon[73977]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mon, pid 2
Dec 06 09:37:41 compute-0 ceph-mon[73977]: pidfile_write: ignore empty --pid-file
Dec 06 09:37:41 compute-0 ceph-mon[73977]: load: jerasure load: lrc 
Dec 06 09:37:41 compute-0 ceph-mon[73977]: rocksdb: RocksDB version: 7.9.2
Dec 06 09:37:41 compute-0 ceph-mon[73977]: rocksdb: Git sha 0
Dec 06 09:37:41 compute-0 ceph-mon[73977]: rocksdb: Compile date 2025-07-17 03:12:14
Dec 06 09:37:41 compute-0 ceph-mon[73977]: rocksdb: DB SUMMARY
Dec 06 09:37:41 compute-0 ceph-mon[73977]: rocksdb: DB Session ID:  0ZQHI2PX756UQLPOWVHK
Dec 06 09:37:41 compute-0 ceph-mon[73977]: rocksdb: CURRENT file:  CURRENT
Dec 06 09:37:41 compute-0 ceph-mon[73977]: rocksdb: IDENTITY file:  IDENTITY
Dec 06 09:37:41 compute-0 ceph-mon[73977]: rocksdb: MANIFEST file:  MANIFEST-000005 size: 59 Bytes
Dec 06 09:37:41 compute-0 ceph-mon[73977]: rocksdb: SST files in /var/lib/ceph/mon/ceph-compute-0/store.db dir, Total Num: 0, files: 
Dec 06 09:37:41 compute-0 ceph-mon[73977]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-compute-0/store.db: 000004.log size: 807 ; 
Dec 06 09:37:41 compute-0 ceph-mon[73977]: rocksdb:                         Options.error_if_exists: 0
Dec 06 09:37:41 compute-0 ceph-mon[73977]: rocksdb:                       Options.create_if_missing: 0
Dec 06 09:37:41 compute-0 ceph-mon[73977]: rocksdb:                         Options.paranoid_checks: 1
Dec 06 09:37:41 compute-0 ceph-mon[73977]: rocksdb:             Options.flush_verify_memtable_count: 1
Dec 06 09:37:41 compute-0 ceph-mon[73977]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Dec 06 09:37:41 compute-0 ceph-mon[73977]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Dec 06 09:37:41 compute-0 ceph-mon[73977]: rocksdb:                                     Options.env: 0x55dcf3a48c20
Dec 06 09:37:41 compute-0 ceph-mon[73977]: rocksdb:                                      Options.fs: PosixFileSystem
Dec 06 09:37:41 compute-0 ceph-mon[73977]: rocksdb:                                Options.info_log: 0x55dcf5242d60
Dec 06 09:37:41 compute-0 ceph-mon[73977]: rocksdb:                Options.max_file_opening_threads: 16
Dec 06 09:37:41 compute-0 ceph-mon[73977]: rocksdb:                              Options.statistics: (nil)
Dec 06 09:37:41 compute-0 ceph-mon[73977]: rocksdb:                               Options.use_fsync: 0
Dec 06 09:37:41 compute-0 ceph-mon[73977]: rocksdb:                       Options.max_log_file_size: 0
Dec 06 09:37:41 compute-0 ceph-mon[73977]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Dec 06 09:37:41 compute-0 ceph-mon[73977]: rocksdb:                   Options.log_file_time_to_roll: 0
Dec 06 09:37:41 compute-0 ceph-mon[73977]: rocksdb:                       Options.keep_log_file_num: 1000
Dec 06 09:37:41 compute-0 ceph-mon[73977]: rocksdb:                    Options.recycle_log_file_num: 0
Dec 06 09:37:41 compute-0 ceph-mon[73977]: rocksdb:                         Options.allow_fallocate: 1
Dec 06 09:37:41 compute-0 ceph-mon[73977]: rocksdb:                        Options.allow_mmap_reads: 0
Dec 06 09:37:41 compute-0 ceph-mon[73977]: rocksdb:                       Options.allow_mmap_writes: 0
Dec 06 09:37:41 compute-0 ceph-mon[73977]: rocksdb:                        Options.use_direct_reads: 0
Dec 06 09:37:41 compute-0 ceph-mon[73977]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Dec 06 09:37:41 compute-0 ceph-mon[73977]: rocksdb:          Options.create_missing_column_families: 0
Dec 06 09:37:41 compute-0 ceph-mon[73977]: rocksdb:                              Options.db_log_dir: 
Dec 06 09:37:41 compute-0 ceph-mon[73977]: rocksdb:                                 Options.wal_dir: 
Dec 06 09:37:41 compute-0 ceph-mon[73977]: rocksdb:                Options.table_cache_numshardbits: 6
Dec 06 09:37:41 compute-0 ceph-mon[73977]: rocksdb:                         Options.WAL_ttl_seconds: 0
Dec 06 09:37:41 compute-0 ceph-mon[73977]: rocksdb:                       Options.WAL_size_limit_MB: 0
Dec 06 09:37:41 compute-0 ceph-mon[73977]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Dec 06 09:37:41 compute-0 ceph-mon[73977]: rocksdb:             Options.manifest_preallocation_size: 4194304
Dec 06 09:37:41 compute-0 ceph-mon[73977]: rocksdb:                     Options.is_fd_close_on_exec: 1
Dec 06 09:37:41 compute-0 ceph-mon[73977]: rocksdb:                   Options.advise_random_on_open: 1
Dec 06 09:37:41 compute-0 ceph-mon[73977]: rocksdb:                    Options.db_write_buffer_size: 0
Dec 06 09:37:41 compute-0 ceph-mon[73977]: rocksdb:                    Options.write_buffer_manager: 0x55dcf5247900
Dec 06 09:37:41 compute-0 ceph-mon[73977]: rocksdb:         Options.access_hint_on_compaction_start: 1
Dec 06 09:37:41 compute-0 ceph-mon[73977]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Dec 06 09:37:41 compute-0 ceph-mon[73977]: rocksdb:                      Options.use_adaptive_mutex: 0
Dec 06 09:37:41 compute-0 ceph-mon[73977]: rocksdb:                            Options.rate_limiter: (nil)
Dec 06 09:37:41 compute-0 ceph-mon[73977]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Dec 06 09:37:41 compute-0 ceph-mon[73977]: rocksdb:                       Options.wal_recovery_mode: 2
Dec 06 09:37:41 compute-0 ceph-mon[73977]: rocksdb:                  Options.enable_thread_tracking: 0
Dec 06 09:37:41 compute-0 ceph-mon[73977]: rocksdb:                  Options.enable_pipelined_write: 0
Dec 06 09:37:41 compute-0 ceph-mon[73977]: rocksdb:                  Options.unordered_write: 0
Dec 06 09:37:41 compute-0 ceph-mon[73977]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Dec 06 09:37:41 compute-0 ceph-mon[73977]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Dec 06 09:37:41 compute-0 ceph-mon[73977]: rocksdb:             Options.write_thread_max_yield_usec: 100
Dec 06 09:37:41 compute-0 ceph-mon[73977]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Dec 06 09:37:41 compute-0 ceph-mon[73977]: rocksdb:                               Options.row_cache: None
Dec 06 09:37:41 compute-0 ceph-mon[73977]: rocksdb:                              Options.wal_filter: None
Dec 06 09:37:41 compute-0 ceph-mon[73977]: rocksdb:             Options.avoid_flush_during_recovery: 0
Dec 06 09:37:41 compute-0 ceph-mon[73977]: rocksdb:             Options.allow_ingest_behind: 0
Dec 06 09:37:41 compute-0 ceph-mon[73977]: rocksdb:             Options.two_write_queues: 0
Dec 06 09:37:41 compute-0 ceph-mon[73977]: rocksdb:             Options.manual_wal_flush: 0
Dec 06 09:37:41 compute-0 ceph-mon[73977]: rocksdb:             Options.wal_compression: 0
Dec 06 09:37:41 compute-0 ceph-mon[73977]: rocksdb:             Options.atomic_flush: 0
Dec 06 09:37:41 compute-0 ceph-mon[73977]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Dec 06 09:37:41 compute-0 ceph-mon[73977]: rocksdb:                 Options.persist_stats_to_disk: 0
Dec 06 09:37:41 compute-0 ceph-mon[73977]: rocksdb:                 Options.write_dbid_to_manifest: 0
Dec 06 09:37:41 compute-0 ceph-mon[73977]: rocksdb:                 Options.log_readahead_size: 0
Dec 06 09:37:41 compute-0 ceph-mon[73977]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Dec 06 09:37:41 compute-0 ceph-mon[73977]: rocksdb:                 Options.best_efforts_recovery: 0
Dec 06 09:37:41 compute-0 ceph-mon[73977]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Dec 06 09:37:41 compute-0 ceph-mon[73977]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Dec 06 09:37:41 compute-0 ceph-mon[73977]: rocksdb:             Options.allow_data_in_errors: 0
Dec 06 09:37:41 compute-0 ceph-mon[73977]: rocksdb:             Options.db_host_id: __hostname__
Dec 06 09:37:41 compute-0 ceph-mon[73977]: rocksdb:             Options.enforce_single_del_contracts: true
Dec 06 09:37:41 compute-0 ceph-mon[73977]: rocksdb:             Options.max_background_jobs: 2
Dec 06 09:37:41 compute-0 ceph-mon[73977]: rocksdb:             Options.max_background_compactions: -1
Dec 06 09:37:41 compute-0 ceph-mon[73977]: rocksdb:             Options.max_subcompactions: 1
Dec 06 09:37:41 compute-0 ceph-mon[73977]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Dec 06 09:37:41 compute-0 ceph-mon[73977]: rocksdb:           Options.writable_file_max_buffer_size: 1048576
Dec 06 09:37:41 compute-0 ceph-mon[73977]: rocksdb:             Options.delayed_write_rate : 16777216
Dec 06 09:37:41 compute-0 ceph-mon[73977]: rocksdb:             Options.max_total_wal_size: 0
Dec 06 09:37:41 compute-0 ceph-mon[73977]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Dec 06 09:37:41 compute-0 ceph-mon[73977]: rocksdb:                   Options.stats_dump_period_sec: 600
Dec 06 09:37:41 compute-0 ceph-mon[73977]: rocksdb:                 Options.stats_persist_period_sec: 600
Dec 06 09:37:41 compute-0 ceph-mon[73977]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Dec 06 09:37:41 compute-0 ceph-mon[73977]: rocksdb:                          Options.max_open_files: -1
Dec 06 09:37:41 compute-0 ceph-mon[73977]: rocksdb:                          Options.bytes_per_sync: 0
Dec 06 09:37:41 compute-0 ceph-mon[73977]: rocksdb:                      Options.wal_bytes_per_sync: 0
Dec 06 09:37:41 compute-0 ceph-mon[73977]: rocksdb:                   Options.strict_bytes_per_sync: 0
Dec 06 09:37:41 compute-0 ceph-mon[73977]: rocksdb:       Options.compaction_readahead_size: 0
Dec 06 09:37:41 compute-0 ceph-mon[73977]: rocksdb:                  Options.max_background_flushes: -1
Dec 06 09:37:41 compute-0 ceph-mon[73977]: rocksdb: Compression algorithms supported:
Dec 06 09:37:41 compute-0 ceph-mon[73977]: rocksdb:         kZSTD supported: 0
Dec 06 09:37:41 compute-0 ceph-mon[73977]: rocksdb:         kXpressCompression supported: 0
Dec 06 09:37:41 compute-0 ceph-mon[73977]: rocksdb:         kBZip2Compression supported: 0
Dec 06 09:37:41 compute-0 ceph-mon[73977]: rocksdb:         kZSTDNotFinalCompression supported: 0
Dec 06 09:37:41 compute-0 ceph-mon[73977]: rocksdb:         kLZ4Compression supported: 1
Dec 06 09:37:41 compute-0 ceph-mon[73977]: rocksdb:         kZlibCompression supported: 1
Dec 06 09:37:41 compute-0 ceph-mon[73977]: rocksdb:         kLZ4HCCompression supported: 1
Dec 06 09:37:41 compute-0 ceph-mon[73977]: rocksdb:         kSnappyCompression supported: 1
Dec 06 09:37:41 compute-0 ceph-mon[73977]: rocksdb: Fast CRC32 supported: Supported on x86
Dec 06 09:37:41 compute-0 ceph-mon[73977]: rocksdb: DMutex implementation: pthread_mutex_t
Dec 06 09:37:41 compute-0 ceph-mon[73977]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000005
Dec 06 09:37:41 compute-0 ceph-mon[73977]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Dec 06 09:37:41 compute-0 ceph-mon[73977]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 06 09:37:41 compute-0 ceph-mon[73977]: rocksdb:           Options.merge_operator: 
Dec 06 09:37:41 compute-0 ceph-mon[73977]: rocksdb:        Options.compaction_filter: None
Dec 06 09:37:41 compute-0 ceph-mon[73977]: rocksdb:        Options.compaction_filter_factory: None
Dec 06 09:37:41 compute-0 ceph-mon[73977]: rocksdb:  Options.sst_partitioner_factory: None
Dec 06 09:37:41 compute-0 ceph-mon[73977]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 06 09:37:41 compute-0 ceph-mon[73977]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 06 09:37:41 compute-0 ceph-mon[73977]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55dcf5242500)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55dcf5267350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 06 09:37:41 compute-0 ceph-mon[73977]: rocksdb:        Options.write_buffer_size: 33554432
Dec 06 09:37:41 compute-0 ceph-mon[73977]: rocksdb:  Options.max_write_buffer_number: 2
Dec 06 09:37:41 compute-0 ceph-mon[73977]: rocksdb:          Options.compression: NoCompression
Dec 06 09:37:41 compute-0 ceph-mon[73977]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 06 09:37:41 compute-0 ceph-mon[73977]: rocksdb:       Options.prefix_extractor: nullptr
Dec 06 09:37:41 compute-0 ceph-mon[73977]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 06 09:37:41 compute-0 ceph-mon[73977]: rocksdb:             Options.num_levels: 7
Dec 06 09:37:41 compute-0 ceph-mon[73977]: rocksdb:        Options.min_write_buffer_number_to_merge: 1
Dec 06 09:37:41 compute-0 ceph-mon[73977]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 06 09:37:41 compute-0 ceph-mon[73977]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 06 09:37:41 compute-0 ceph-mon[73977]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 06 09:37:41 compute-0 ceph-mon[73977]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 06 09:37:41 compute-0 ceph-mon[73977]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 06 09:37:41 compute-0 ceph-mon[73977]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 06 09:37:41 compute-0 ceph-mon[73977]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 06 09:37:41 compute-0 ceph-mon[73977]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 06 09:37:41 compute-0 ceph-mon[73977]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 06 09:37:41 compute-0 ceph-mon[73977]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 06 09:37:41 compute-0 ceph-mon[73977]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 06 09:37:41 compute-0 ceph-mon[73977]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 06 09:37:41 compute-0 ceph-mon[73977]: rocksdb:                  Options.compression_opts.level: 32767
Dec 06 09:37:41 compute-0 ceph-mon[73977]: rocksdb:               Options.compression_opts.strategy: 0
Dec 06 09:37:41 compute-0 ceph-mon[73977]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 06 09:37:41 compute-0 ceph-mon[73977]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 06 09:37:41 compute-0 ceph-mon[73977]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 06 09:37:41 compute-0 ceph-mon[73977]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 06 09:37:41 compute-0 ceph-mon[73977]: rocksdb:                  Options.compression_opts.enabled: false
Dec 06 09:37:41 compute-0 ceph-mon[73977]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 06 09:37:41 compute-0 ceph-mon[73977]: rocksdb:      Options.level0_file_num_compaction_trigger: 4
Dec 06 09:37:41 compute-0 ceph-mon[73977]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 06 09:37:41 compute-0 ceph-mon[73977]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 06 09:37:41 compute-0 ceph-mon[73977]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 06 09:37:41 compute-0 ceph-mon[73977]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 06 09:37:41 compute-0 ceph-mon[73977]: rocksdb:                Options.max_bytes_for_level_base: 268435456
Dec 06 09:37:41 compute-0 ceph-mon[73977]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1
Dec 06 09:37:41 compute-0 ceph-mon[73977]: rocksdb:          Options.max_bytes_for_level_multiplier: 10.000000
Dec 06 09:37:41 compute-0 ceph-mon[73977]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 06 09:37:41 compute-0 ceph-mon[73977]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 06 09:37:41 compute-0 ceph-mon[73977]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 06 09:37:41 compute-0 ceph-mon[73977]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 06 09:37:41 compute-0 ceph-mon[73977]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 06 09:37:41 compute-0 ceph-mon[73977]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 06 09:37:41 compute-0 ceph-mon[73977]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 06 09:37:41 compute-0 ceph-mon[73977]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 06 09:37:41 compute-0 ceph-mon[73977]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 06 09:37:41 compute-0 ceph-mon[73977]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 06 09:37:41 compute-0 ceph-mon[73977]: rocksdb:                        Options.arena_block_size: 1048576
Dec 06 09:37:41 compute-0 ceph-mon[73977]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 06 09:37:41 compute-0 ceph-mon[73977]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 06 09:37:41 compute-0 ceph-mon[73977]: rocksdb:                Options.disable_auto_compactions: 0
Dec 06 09:37:41 compute-0 ceph-mon[73977]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 06 09:37:41 compute-0 ceph-mon[73977]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 06 09:37:41 compute-0 ceph-mon[73977]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 06 09:37:41 compute-0 ceph-mon[73977]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 06 09:37:41 compute-0 ceph-mon[73977]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 06 09:37:41 compute-0 ceph-mon[73977]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 06 09:37:41 compute-0 ceph-mon[73977]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 06 09:37:41 compute-0 ceph-mon[73977]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 06 09:37:41 compute-0 ceph-mon[73977]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 06 09:37:41 compute-0 ceph-mon[73977]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 06 09:37:41 compute-0 ceph-mon[73977]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 06 09:37:41 compute-0 ceph-mon[73977]: rocksdb:                   Options.inplace_update_support: 0
Dec 06 09:37:41 compute-0 ceph-mon[73977]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 06 09:37:41 compute-0 ceph-mon[73977]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 06 09:37:41 compute-0 ceph-mon[73977]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 06 09:37:41 compute-0 ceph-mon[73977]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 06 09:37:41 compute-0 ceph-mon[73977]: rocksdb:                           Options.bloom_locality: 0
Dec 06 09:37:41 compute-0 ceph-mon[73977]: rocksdb:                    Options.max_successive_merges: 0
Dec 06 09:37:41 compute-0 ceph-mon[73977]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 06 09:37:41 compute-0 ceph-mon[73977]: rocksdb:                Options.paranoid_file_checks: 0
Dec 06 09:37:41 compute-0 ceph-mon[73977]: rocksdb:                Options.force_consistency_checks: 1
Dec 06 09:37:41 compute-0 ceph-mon[73977]: rocksdb:                Options.report_bg_io_stats: 0
Dec 06 09:37:41 compute-0 ceph-mon[73977]: rocksdb:                               Options.ttl: 2592000
Dec 06 09:37:41 compute-0 ceph-mon[73977]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 06 09:37:41 compute-0 ceph-mon[73977]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 06 09:37:41 compute-0 ceph-mon[73977]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 06 09:37:41 compute-0 ceph-mon[73977]: rocksdb:                       Options.enable_blob_files: false
Dec 06 09:37:41 compute-0 ceph-mon[73977]: rocksdb:                           Options.min_blob_size: 0
Dec 06 09:37:41 compute-0 ceph-mon[73977]: rocksdb:                          Options.blob_file_size: 268435456
Dec 06 09:37:41 compute-0 ceph-mon[73977]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 06 09:37:41 compute-0 ceph-mon[73977]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 06 09:37:41 compute-0 ceph-mon[73977]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 06 09:37:41 compute-0 ceph-mon[73977]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 06 09:37:41 compute-0 ceph-mon[73977]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 06 09:37:41 compute-0 ceph-mon[73977]: rocksdb:                Options.blob_file_starting_level: 0
Dec 06 09:37:41 compute-0 ceph-mon[73977]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 06 09:37:41 compute-0 ceph-mon[73977]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000005 succeeded,manifest_file_number is 5, next_file_number is 7, last_sequence is 0, log_number is 0,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 0
Dec 06 09:37:41 compute-0 ceph-mon[73977]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 0
Dec 06 09:37:41 compute-0 ceph-mon[73977]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 423e8366-3852-4d2b-aa53-87abab31aff3
Dec 06 09:37:41 compute-0 ceph-mon[73977]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765013861239324, "job": 1, "event": "recovery_started", "wal_files": [4]}
Dec 06 09:37:41 compute-0 ceph-mon[73977]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #4 mode 2
Dec 06 09:37:41 compute-0 ceph-mon[73977]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765013861241646, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 8, "file_size": 1944, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 1, "largest_seqno": 5, "table_properties": {"data_size": 819, "index_size": 31, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 115, "raw_average_key_size": 23, "raw_value_size": 696, "raw_average_value_size": 139, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765013861, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "423e8366-3852-4d2b-aa53-87abab31aff3", "db_session_id": "0ZQHI2PX756UQLPOWVHK", "orig_file_number": 8, "seqno_to_time_mapping": "N/A"}}
Dec 06 09:37:41 compute-0 ceph-mon[73977]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765013861241797, "job": 1, "event": "recovery_finished"}
Dec 06 09:37:41 compute-0 ceph-mon[73977]: rocksdb: [db/version_set.cc:5047] Creating manifest 10
Dec 06 09:37:41 compute-0 ceph-mon[73977]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000004.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 09:37:41 compute-0 ceph-mon[73977]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x55dcf5268e00
Dec 06 09:37:41 compute-0 ceph-mon[73977]: rocksdb: DB pointer 0x55dcf5372000
Dec 06 09:37:41 compute-0 ceph-mon[73977]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 06 09:37:41 compute-0 ceph-mon[73977]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
                                           Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.90 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.8      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      1/0    1.90 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.8      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.8      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.8      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.12 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.12 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55dcf5267350#2 capacity: 512.00 MB usage: 1.17 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 4.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.11 KB,2.08616e-05%) Misc(2,0.95 KB,0.000181794%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Dec 06 09:37:41 compute-0 ceph-mon[73977]: starting mon.compute-0 rank 0 at public addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] at bind addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon_data /var/lib/ceph/mon/ceph-compute-0 fsid 5ecd3f74-dade-5fc4-92ce-8950ae424258
Dec 06 09:37:41 compute-0 ceph-mon[73977]: mon.compute-0@-1(???) e0 preinit fsid 5ecd3f74-dade-5fc4-92ce-8950ae424258
Dec 06 09:37:41 compute-0 ceph-mon[73977]: mon.compute-0@-1(probing) e0  my rank is now 0 (was -1)
Dec 06 09:37:41 compute-0 ceph-mon[73977]: mon.compute-0@0(probing) e0 win_standalone_election
Dec 06 09:37:41 compute-0 ceph-mon[73977]: paxos.0).electionLogic(0) init, first boot, initializing epoch at 1 
Dec 06 09:37:41 compute-0 ceph-mon[73977]: mon.compute-0@0(electing) e0 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 06 09:37:41 compute-0 ceph-mon[73977]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Dec 06 09:37:41 compute-0 ceph-mon[73977]: mon.compute-0@0(leader).osd e0 create_pending setting backfillfull_ratio = 0.9
Dec 06 09:37:41 compute-0 ceph-mon[73977]: mon.compute-0@0(leader).osd e0 create_pending setting full_ratio = 0.95
Dec 06 09:37:41 compute-0 ceph-mon[73977]: mon.compute-0@0(leader).osd e0 create_pending setting nearfull_ratio = 0.85
Dec 06 09:37:41 compute-0 ceph-mon[73977]: mon.compute-0@0(leader).osd e0 do_prune osdmap full prune enabled
Dec 06 09:37:41 compute-0 ceph-mon[73977]: mon.compute-0@0(leader).osd e0 encode_pending skipping prime_pg_temp; mapping job did not start
Dec 06 09:37:41 compute-0 ceph-mon[73977]: mon.compute-0@0(leader) e0 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={4=support erasure code pools,5=new-style osdmap encoding,6=support isa/lrc erasure code,7=support shec erasure code}
Dec 06 09:37:41 compute-0 ceph-mon[73977]: mon.compute-0@0(leader).paxosservice(auth 0..0) refresh upgraded, format 3 -> 0
Dec 06 09:37:41 compute-0 ceph-mon[73977]: mon.compute-0@0(probing) e1 win_standalone_election
Dec 06 09:37:41 compute-0 ceph-mon[73977]: paxos.0).electionLogic(2) init, last seen epoch 2
Dec 06 09:37:41 compute-0 ceph-mon[73977]: mon.compute-0@0(electing) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 06 09:37:41 compute-0 ceph-mon[73977]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Dec 06 09:37:41 compute-0 ceph-mon[73977]: log_channel(cluster) log [DBG] : monmap epoch 1
Dec 06 09:37:41 compute-0 ceph-mon[73977]: log_channel(cluster) log [DBG] : fsid 5ecd3f74-dade-5fc4-92ce-8950ae424258
Dec 06 09:37:41 compute-0 ceph-mon[73977]: log_channel(cluster) log [DBG] : last_changed 2025-12-06T09:37:38.663870+0000
Dec 06 09:37:41 compute-0 ceph-mon[73977]: log_channel(cluster) log [DBG] : created 2025-12-06T09:37:38.663870+0000
Dec 06 09:37:41 compute-0 ceph-mon[73977]: log_channel(cluster) log [DBG] : min_mon_release 19 (squid)
Dec 06 09:37:41 compute-0 ceph-mon[73977]: log_channel(cluster) log [DBG] : election_strategy: 1
Dec 06 09:37:41 compute-0 ceph-mon[73977]: log_channel(cluster) log [DBG] : 0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Dec 06 09:37:41 compute-0 ceph-mon[73977]: mon.compute-0@0(leader) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 06 09:37:41 compute-0 ceph-mon[73977]: mgrc update_daemon_metadata mon.compute-0 metadata {addrs=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],arch=x86_64,ceph_release=squid,ceph_version=ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable),ceph_version_short=19.2.3,compression_algorithms=none, snappy, zlib, zstd, lz4,container_hostname=compute-0,container_image=quay.io/ceph/ceph:v19,cpu=AMD EPYC-Rome Processor,device_ids=,device_paths=vda=/dev/disk/by-path/pci-0000:00:04.0,devices=vda,distro=centos,distro_description=CentOS Stream 9,distro_version=9,hostname=compute-0,kernel_description=#1 SMP PREEMPT_DYNAMIC Fri Nov 28 14:01:17 UTC 2025,kernel_version=5.14.0-645.el9.x86_64,mem_swap_kb=1048572,mem_total_kb=7864320,os=Linux}
Dec 06 09:37:41 compute-0 ceph-mon[73977]: mon.compute-0@0(leader).osd e0 create_pending setting backfillfull_ratio = 0.9
Dec 06 09:37:41 compute-0 ceph-mon[73977]: mon.compute-0@0(leader).osd e0 create_pending setting full_ratio = 0.95
Dec 06 09:37:41 compute-0 ceph-mon[73977]: mon.compute-0@0(leader).osd e0 create_pending setting nearfull_ratio = 0.85
Dec 06 09:37:41 compute-0 ceph-mon[73977]: mon.compute-0@0(leader).osd e0 do_prune osdmap full prune enabled
Dec 06 09:37:41 compute-0 ceph-mon[73977]: mon.compute-0@0(leader).osd e0 encode_pending skipping prime_pg_temp; mapping job did not start
Dec 06 09:37:41 compute-0 ceph-mon[73977]: mon.compute-0@0(leader) e1 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={8=support monmap features,9=luminous ondisk layout,10=mimic ondisk layout,11=nautilus ondisk layout,12=octopus ondisk layout,13=pacific ondisk layout,14=quincy ondisk layout,15=reef ondisk layout,16=squid ondisk layout}
Dec 06 09:37:41 compute-0 ceph-mon[73977]: mon.compute-0@0(leader).mds e1 new map
Dec 06 09:37:41 compute-0 ceph-mon[73977]: mon.compute-0@0(leader).mds e1 print_map
                                           e1
                                           btime 2025-12-06T09:37:41.285728+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: -1
                                            
                                           No filesystems configured
Dec 06 09:37:41 compute-0 ceph-mon[73977]: mon.compute-0@0(leader).paxosservice(auth 0..0) refresh upgraded, format 3 -> 0
Dec 06 09:37:41 compute-0 ceph-mon[73977]: log_channel(cluster) log [DBG] : fsmap 
Dec 06 09:37:41 compute-0 ceph-mon[73977]: mon.compute-0@0(leader).osd e0 _set_cache_ratios kv ratio 0.25 inc ratio 0.375 full ratio 0.375
Dec 06 09:37:41 compute-0 ceph-mon[73977]: mon.compute-0@0(leader).osd e0 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1
Dec 06 09:37:41 compute-0 ceph-mon[73977]: mon.compute-0@0(leader).osd e1 e1: 0 total, 0 up, 0 in
Dec 06 09:37:41 compute-0 ceph-mon[73977]: mon.compute-0@0(leader).osd e1 crush map has features 3314932999778484224, adjusting msgr requires
Dec 06 09:37:41 compute-0 ceph-mon[73977]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Dec 06 09:37:41 compute-0 ceph-mon[73977]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Dec 06 09:37:41 compute-0 ceph-mon[73977]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Dec 06 09:37:41 compute-0 ceph-mon[73977]: mkfs 5ecd3f74-dade-5fc4-92ce-8950ae424258
Dec 06 09:37:41 compute-0 podman[73978]: 2025-12-06 09:37:41.308004638 +0000 UTC m=+0.069797323 container create afa0b6b01455aa4f63a13f8cfa93ca2783e8097953bf2ae77b42ff94a4f5b91b (image=quay.io/ceph/ceph:v19, name=strange_joliot, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Dec 06 09:37:41 compute-0 ceph-mon[73977]: mon.compute-0@0(leader).paxosservice(auth 1..1) refresh upgraded, format 0 -> 3
Dec 06 09:37:41 compute-0 ceph-mon[73977]: log_channel(cluster) log [DBG] : osdmap e1: 0 total, 0 up, 0 in
Dec 06 09:37:41 compute-0 ceph-mon[73977]: log_channel(cluster) log [DBG] : mgrmap e1: no daemons active
Dec 06 09:37:41 compute-0 ceph-mon[73977]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Dec 06 09:37:41 compute-0 systemd[1]: Started libpod-conmon-afa0b6b01455aa4f63a13f8cfa93ca2783e8097953bf2ae77b42ff94a4f5b91b.scope.
Dec 06 09:37:41 compute-0 podman[73978]: 2025-12-06 09:37:41.283903505 +0000 UTC m=+0.045696210 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 06 09:37:41 compute-0 systemd[1]: Started libcrun container.
Dec 06 09:37:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f6d489809cc4b9cef907666b23badca89098a2534987fb9d062d0ccca71d096c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 09:37:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f6d489809cc4b9cef907666b23badca89098a2534987fb9d062d0ccca71d096c/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 06 09:37:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f6d489809cc4b9cef907666b23badca89098a2534987fb9d062d0ccca71d096c/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Dec 06 09:37:41 compute-0 podman[73978]: 2025-12-06 09:37:41.430949377 +0000 UTC m=+0.192742112 container init afa0b6b01455aa4f63a13f8cfa93ca2783e8097953bf2ae77b42ff94a4f5b91b (image=quay.io/ceph/ceph:v19, name=strange_joliot, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 06 09:37:41 compute-0 podman[73978]: 2025-12-06 09:37:41.44342531 +0000 UTC m=+0.205217965 container start afa0b6b01455aa4f63a13f8cfa93ca2783e8097953bf2ae77b42ff94a4f5b91b (image=quay.io/ceph/ceph:v19, name=strange_joliot, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 06 09:37:41 compute-0 podman[73978]: 2025-12-06 09:37:41.447044266 +0000 UTC m=+0.208837021 container attach afa0b6b01455aa4f63a13f8cfa93ca2783e8097953bf2ae77b42ff94a4f5b91b (image=quay.io/ceph/ceph:v19, name=strange_joliot, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True)
Dec 06 09:37:41 compute-0 ceph-mon[73977]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0)
Dec 06 09:37:41 compute-0 ceph-mon[73977]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2501861568' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Dec 06 09:37:41 compute-0 strange_joliot[74032]:   cluster:
Dec 06 09:37:41 compute-0 strange_joliot[74032]:     id:     5ecd3f74-dade-5fc4-92ce-8950ae424258
Dec 06 09:37:41 compute-0 strange_joliot[74032]:     health: HEALTH_OK
Dec 06 09:37:41 compute-0 strange_joliot[74032]:  
Dec 06 09:37:41 compute-0 strange_joliot[74032]:   services:
Dec 06 09:37:41 compute-0 strange_joliot[74032]:     mon: 1 daemons, quorum compute-0 (age 0.403165s)
Dec 06 09:37:41 compute-0 strange_joliot[74032]:     mgr: no daemons active
Dec 06 09:37:41 compute-0 strange_joliot[74032]:     osd: 0 osds: 0 up, 0 in
Dec 06 09:37:41 compute-0 strange_joliot[74032]:  
Dec 06 09:37:41 compute-0 strange_joliot[74032]:   data:
Dec 06 09:37:41 compute-0 strange_joliot[74032]:     pools:   0 pools, 0 pgs
Dec 06 09:37:41 compute-0 strange_joliot[74032]:     objects: 0 objects, 0 B
Dec 06 09:37:41 compute-0 strange_joliot[74032]:     usage:   0 B used, 0 B / 0 B avail
Dec 06 09:37:41 compute-0 strange_joliot[74032]:     pgs:     
Dec 06 09:37:41 compute-0 strange_joliot[74032]:  
Dec 06 09:37:41 compute-0 systemd[1]: libpod-afa0b6b01455aa4f63a13f8cfa93ca2783e8097953bf2ae77b42ff94a4f5b91b.scope: Deactivated successfully.
Dec 06 09:37:41 compute-0 podman[73978]: 2025-12-06 09:37:41.699931882 +0000 UTC m=+0.461724557 container died afa0b6b01455aa4f63a13f8cfa93ca2783e8097953bf2ae77b42ff94a4f5b91b (image=quay.io/ceph/ceph:v19, name=strange_joliot, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Dec 06 09:37:41 compute-0 podman[73978]: 2025-12-06 09:37:41.744165361 +0000 UTC m=+0.505958056 container remove afa0b6b01455aa4f63a13f8cfa93ca2783e8097953bf2ae77b42ff94a4f5b91b (image=quay.io/ceph/ceph:v19, name=strange_joliot, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 09:37:41 compute-0 systemd[1]: libpod-conmon-afa0b6b01455aa4f63a13f8cfa93ca2783e8097953bf2ae77b42ff94a4f5b91b.scope: Deactivated successfully.
Dec 06 09:37:41 compute-0 podman[74070]: 2025-12-06 09:37:41.845537665 +0000 UTC m=+0.066085574 container create 9b5ac2484ef53f2352423b0eedf4e2db50b64b2bbfeec32ede0e831f9c7f9ddd (image=quay.io/ceph/ceph:v19, name=crazy_tu, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 09:37:41 compute-0 systemd[1]: Started libpod-conmon-9b5ac2484ef53f2352423b0eedf4e2db50b64b2bbfeec32ede0e831f9c7f9ddd.scope.
Dec 06 09:37:41 compute-0 podman[74070]: 2025-12-06 09:37:41.8175882 +0000 UTC m=+0.038136139 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 06 09:37:41 compute-0 systemd[1]: Started libcrun container.
Dec 06 09:37:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc1d9a737edb97c2a3bcbfb5f0a04c1871ec790227a3af0cbb7965ef7dcf54da/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 06 09:37:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc1d9a737edb97c2a3bcbfb5f0a04c1871ec790227a3af0cbb7965ef7dcf54da/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 09:37:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc1d9a737edb97c2a3bcbfb5f0a04c1871ec790227a3af0cbb7965ef7dcf54da/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 09:37:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc1d9a737edb97c2a3bcbfb5f0a04c1871ec790227a3af0cbb7965ef7dcf54da/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Dec 06 09:37:41 compute-0 podman[74070]: 2025-12-06 09:37:41.938358221 +0000 UTC m=+0.158906200 container init 9b5ac2484ef53f2352423b0eedf4e2db50b64b2bbfeec32ede0e831f9c7f9ddd (image=quay.io/ceph/ceph:v19, name=crazy_tu, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec 06 09:37:41 compute-0 podman[74070]: 2025-12-06 09:37:41.94356371 +0000 UTC m=+0.164111639 container start 9b5ac2484ef53f2352423b0eedf4e2db50b64b2bbfeec32ede0e831f9c7f9ddd (image=quay.io/ceph/ceph:v19, name=crazy_tu, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2)
Dec 06 09:37:41 compute-0 podman[74070]: 2025-12-06 09:37:41.94773411 +0000 UTC m=+0.168282099 container attach 9b5ac2484ef53f2352423b0eedf4e2db50b64b2bbfeec32ede0e831f9c7f9ddd (image=quay.io/ceph/ceph:v19, name=crazy_tu, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 09:37:42 compute-0 ceph-mon[73977]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0)
Dec 06 09:37:42 compute-0 ceph-mon[73977]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/659844819' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Dec 06 09:37:42 compute-0 ceph-mon[73977]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/659844819' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Dec 06 09:37:42 compute-0 crazy_tu[74088]: 
Dec 06 09:37:42 compute-0 crazy_tu[74088]: [global]
Dec 06 09:37:42 compute-0 crazy_tu[74088]:         fsid = 5ecd3f74-dade-5fc4-92ce-8950ae424258
Dec 06 09:37:42 compute-0 crazy_tu[74088]:         mon_host = [v2:192.168.122.100:3300,v1:192.168.122.100:6789]
Dec 06 09:37:42 compute-0 systemd[1]: libpod-9b5ac2484ef53f2352423b0eedf4e2db50b64b2bbfeec32ede0e831f9c7f9ddd.scope: Deactivated successfully.
Dec 06 09:37:42 compute-0 podman[74070]: 2025-12-06 09:37:42.140152093 +0000 UTC m=+0.360699982 container died 9b5ac2484ef53f2352423b0eedf4e2db50b64b2bbfeec32ede0e831f9c7f9ddd (image=quay.io/ceph/ceph:v19, name=crazy_tu, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Dec 06 09:37:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-cc1d9a737edb97c2a3bcbfb5f0a04c1871ec790227a3af0cbb7965ef7dcf54da-merged.mount: Deactivated successfully.
Dec 06 09:37:42 compute-0 podman[74070]: 2025-12-06 09:37:42.177809377 +0000 UTC m=+0.398357266 container remove 9b5ac2484ef53f2352423b0eedf4e2db50b64b2bbfeec32ede0e831f9c7f9ddd (image=quay.io/ceph/ceph:v19, name=crazy_tu, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default)
Dec 06 09:37:42 compute-0 systemd[1]: libpod-conmon-9b5ac2484ef53f2352423b0eedf4e2db50b64b2bbfeec32ede0e831f9c7f9ddd.scope: Deactivated successfully.
Dec 06 09:37:42 compute-0 podman[74125]: 2025-12-06 09:37:42.260046981 +0000 UTC m=+0.050224641 container create ef6f75262d39e5564cee3a298165ec3d82bc68ca77d31784df0bb35d3801bc6a (image=quay.io/ceph/ceph:v19, name=stoic_boyd, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 09:37:42 compute-0 systemd[1]: Started libpod-conmon-ef6f75262d39e5564cee3a298165ec3d82bc68ca77d31784df0bb35d3801bc6a.scope.
Dec 06 09:37:42 compute-0 ceph-mon[73977]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Dec 06 09:37:42 compute-0 ceph-mon[73977]: monmap epoch 1
Dec 06 09:37:42 compute-0 ceph-mon[73977]: fsid 5ecd3f74-dade-5fc4-92ce-8950ae424258
Dec 06 09:37:42 compute-0 ceph-mon[73977]: last_changed 2025-12-06T09:37:38.663870+0000
Dec 06 09:37:42 compute-0 ceph-mon[73977]: created 2025-12-06T09:37:38.663870+0000
Dec 06 09:37:42 compute-0 ceph-mon[73977]: min_mon_release 19 (squid)
Dec 06 09:37:42 compute-0 ceph-mon[73977]: election_strategy: 1
Dec 06 09:37:42 compute-0 ceph-mon[73977]: 0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Dec 06 09:37:42 compute-0 ceph-mon[73977]: fsmap 
Dec 06 09:37:42 compute-0 ceph-mon[73977]: osdmap e1: 0 total, 0 up, 0 in
Dec 06 09:37:42 compute-0 ceph-mon[73977]: mgrmap e1: no daemons active
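The ceph-mon lines above are the cluster log's post-election summary: a single mon (rank 0) in quorum, monmap epoch 1, an empty fsmap, zero OSDs, and no active mgr. The same picture can be pulled on demand; a minimal sketch, assuming client.admin access:

    ceph -s          # overall health, mon/mgr/osd summary
    ceph mon dump    # monmap epoch, fsid, election strategy, mon addresses
    ceph osd stat    # would report "0 osds: 0 up, 0 in" at this stage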
Dec 06 09:37:42 compute-0 ceph-mon[73977]: from='client.? 192.168.122.100:0/2501861568' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Dec 06 09:37:42 compute-0 ceph-mon[73977]: from='client.? 192.168.122.100:0/659844819' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Dec 06 09:37:42 compute-0 ceph-mon[73977]: from='client.? 192.168.122.100:0/659844819' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
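The config assimilate-conf command dispatched and finished above ingests an ini-style ceph.conf into the mon's centralized configuration database. A sketch of the equivalent manual invocation, with the input path assumed; the -o file receives any options that cannot be stored centrally:

    ceph config assimilate-conf -i /etc/ceph/ceph.conf -o /etc/ceph/ceph.conf.leftover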
Dec 06 09:37:42 compute-0 podman[74125]: 2025-12-06 09:37:42.235505076 +0000 UTC m=+0.025682736 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 06 09:37:42 compute-0 systemd[1]: Started libcrun container.
Dec 06 09:37:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8e8cdfd11464fda0c66a843e54b7fada50deb59fc6e0c9a41177eaa7f2258e3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 09:37:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8e8cdfd11464fda0c66a843e54b7fada50deb59fc6e0c9a41177eaa7f2258e3/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 06 09:37:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8e8cdfd11464fda0c66a843e54b7fada50deb59fc6e0c9a41177eaa7f2258e3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 09:37:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8e8cdfd11464fda0c66a843e54b7fada50deb59fc6e0c9a41177eaa7f2258e3/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
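The kernel notices above are informational: they fire whenever a bind mount touches an xfs filesystem created without the bigtime feature, whose inode timestamps cap out in 2038. Whether a given filesystem has bigtime can be checked from the host; a sketch, with the mount point assumed:

    # 'bigtime=1' in the meta-data line means post-2038 timestamps are supported.
    xfs_info /var/lib/containers | grep -o 'bigtime=[01]'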
Dec 06 09:37:42 compute-0 podman[74125]: 2025-12-06 09:37:42.36873395 +0000 UTC m=+0.158911660 container init ef6f75262d39e5564cee3a298165ec3d82bc68ca77d31784df0bb35d3801bc6a (image=quay.io/ceph/ceph:v19, name=stoic_boyd, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Dec 06 09:37:42 compute-0 podman[74125]: 2025-12-06 09:37:42.375458039 +0000 UTC m=+0.165635679 container start ef6f75262d39e5564cee3a298165ec3d82bc68ca77d31784df0bb35d3801bc6a (image=quay.io/ceph/ceph:v19, name=stoic_boyd, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec 06 09:37:42 compute-0 podman[74125]: 2025-12-06 09:37:42.378847759 +0000 UTC m=+0.169025449 container attach ef6f75262d39e5564cee3a298165ec3d82bc68ca77d31784df0bb35d3801bc6a (image=quay.io/ceph/ceph:v19, name=stoic_boyd, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 09:37:42 compute-0 ceph-mon[73977]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 06 09:37:42 compute-0 ceph-mon[73977]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3429756443' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 09:37:42 compute-0 systemd[1]: libpod-ef6f75262d39e5564cee3a298165ec3d82bc68ca77d31784df0bb35d3801bc6a.scope: Deactivated successfully.
Dec 06 09:37:42 compute-0 podman[74125]: 2025-12-06 09:37:42.611004161 +0000 UTC m=+0.401181861 container died ef6f75262d39e5564cee3a298165ec3d82bc68ca77d31784df0bb35d3801bc6a (image=quay.io/ceph/ceph:v19, name=stoic_boyd, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 09:37:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-d8e8cdfd11464fda0c66a843e54b7fada50deb59fc6e0c9a41177eaa7f2258e3-merged.mount: Deactivated successfully.
Dec 06 09:37:42 compute-0 podman[74125]: 2025-12-06 09:37:42.675904652 +0000 UTC m=+0.466082292 container remove ef6f75262d39e5564cee3a298165ec3d82bc68ca77d31784df0bb35d3801bc6a (image=quay.io/ceph/ceph:v19, name=stoic_boyd, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Dec 06 09:37:42 compute-0 systemd[1]: libpod-conmon-ef6f75262d39e5564cee3a298165ec3d82bc68ca77d31784df0bb35d3801bc6a.scope: Deactivated successfully.
Dec 06 09:37:42 compute-0 systemd[1]: Stopping Ceph mon.compute-0 for 5ecd3f74-dade-5fc4-92ce-8950ae424258...
Dec 06 09:37:42 compute-0 ceph-mon[73977]: received  signal: Terminated from /run/podman-init -- /usr/bin/ceph-mon -n mon.compute-0 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-journald=true --default-mon-cluster-log-to-stderr=false  (PID: 1) UID: 0
Dec 06 09:37:42 compute-0 ceph-mon[73977]: mon.compute-0@0(leader) e1 *** Got Signal Terminated ***
Dec 06 09:37:42 compute-0 ceph-mon[73977]: mon.compute-0@0(leader) e1 shutdown
Dec 06 09:37:42 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mon-compute-0[73973]: 2025-12-06T09:37:42.970+0000 7fa609da7640 -1 received  signal: Terminated from /run/podman-init -- /usr/bin/ceph-mon -n mon.compute-0 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-journald=true --default-mon-cluster-log-to-stderr=false  (PID: 1) UID: 0
Dec 06 09:37:42 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mon-compute-0[73973]: 2025-12-06T09:37:42.970+0000 7fa609da7640 -1 mon.compute-0@0(leader) e1 *** Got Signal Terminated ***
Dec 06 09:37:42 compute-0 ceph-mon[73977]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Dec 06 09:37:42 compute-0 ceph-mon[73977]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Dec 06 09:37:43 compute-0 podman[74209]: 2025-12-06 09:37:43.063001907 +0000 UTC m=+0.136818391 container died 5076c320e38e45a94f5fb7726329edcc2b8a7e5bff5175af100943d275cd2992 (image=quay.io/ceph/ceph:v19, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mon-compute-0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, OSD_FLAVOR=default)
Dec 06 09:37:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-e89e6ac69aa7547a2e7e76bd4456bafe35b3ffc299c45a52b9e951d32ddc733e-merged.mount: Deactivated successfully.
Dec 06 09:37:43 compute-0 podman[74209]: 2025-12-06 09:37:43.108284004 +0000 UTC m=+0.182100478 container remove 5076c320e38e45a94f5fb7726329edcc2b8a7e5bff5175af100943d275cd2992 (image=quay.io/ceph/ceph:v19, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mon-compute-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Dec 06 09:37:43 compute-0 bash[74209]: ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mon-compute-0
Dec 06 09:37:43 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 06 09:37:43 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 06 09:37:43 compute-0 systemd[1]: ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258@mon.compute-0.service: Deactivated successfully.
Dec 06 09:37:43 compute-0 systemd[1]: Stopped Ceph mon.compute-0 for 5ecd3f74-dade-5fc4-92ce-8950ae424258.
Dec 06 09:37:43 compute-0 systemd[1]: ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258@mon.compute-0.service: Consumed 1.237s CPU time.
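The stop/start pair here is cephadm redeploying the mon under its fsid-scoped systemd unit after bootstrap reconfiguration. The unit can be inspected by hand using the name exactly as logged, for example:

    systemctl status 'ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258@mon.compute-0.service'
    journalctl -u 'ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258@mon.compute-0.service' -n 50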
Dec 06 09:37:43 compute-0 systemd[1]: Starting Ceph mon.compute-0 for 5ecd3f74-dade-5fc4-92ce-8950ae424258...
Dec 06 09:37:43 compute-0 podman[74308]: 2025-12-06 09:37:43.60225621 +0000 UTC m=+0.063131175 container create 484d6ed1039c50317cf4b6067525b7ed0f8de7c568c9445500e62194ab25d04d (image=quay.io/ceph/ceph:v19, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mon-compute-0, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 09:37:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b73705ce395615cb903ff96d5cb9c4336d3b38c2937ff2ff8887e0b7d3ca3f43/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 09:37:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b73705ce395615cb903ff96d5cb9c4336d3b38c2937ff2ff8887e0b7d3ca3f43/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 09:37:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b73705ce395615cb903ff96d5cb9c4336d3b38c2937ff2ff8887e0b7d3ca3f43/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 09:37:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b73705ce395615cb903ff96d5cb9c4336d3b38c2937ff2ff8887e0b7d3ca3f43/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Dec 06 09:37:43 compute-0 podman[74308]: 2025-12-06 09:37:43.57637481 +0000 UTC m=+0.037249845 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 06 09:37:43 compute-0 podman[74308]: 2025-12-06 09:37:43.682830499 +0000 UTC m=+0.143705534 container init 484d6ed1039c50317cf4b6067525b7ed0f8de7c568c9445500e62194ab25d04d (image=quay.io/ceph/ceph:v19, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mon-compute-0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 09:37:43 compute-0 podman[74308]: 2025-12-06 09:37:43.692008034 +0000 UTC m=+0.152883019 container start 484d6ed1039c50317cf4b6067525b7ed0f8de7c568c9445500e62194ab25d04d (image=quay.io/ceph/ceph:v19, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mon-compute-0, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Dec 06 09:37:43 compute-0 bash[74308]: 484d6ed1039c50317cf4b6067525b7ed0f8de7c568c9445500e62194ab25d04d
Dec 06 09:37:43 compute-0 systemd[1]: Started Ceph mon.compute-0 for 5ecd3f74-dade-5fc4-92ce-8950ae424258.
Dec 06 09:37:43 compute-0 ceph-mon[74327]: set uid:gid to 167:167 (ceph:ceph)
Dec 06 09:37:43 compute-0 ceph-mon[74327]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mon, pid 2
Dec 06 09:37:43 compute-0 ceph-mon[74327]: pidfile_write: ignore empty --pid-file
Dec 06 09:37:43 compute-0 ceph-mon[74327]: load: jerasure load: lrc 
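The restarted mon process (journald tag ceph-mon[74327], pid 2 inside its container) reports that it runs 19.2.3 squid as ceph:ceph (uid/gid 167). Once the cluster answers again, the same can be verified cluster-wide; a sketch assuming admin credentials:

    ceph versions    # per-daemon-type map of running ceph versions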
Dec 06 09:37:43 compute-0 ceph-mon[74327]: rocksdb: RocksDB version: 7.9.2
Dec 06 09:37:43 compute-0 ceph-mon[74327]: rocksdb: Git sha 0
Dec 06 09:37:43 compute-0 ceph-mon[74327]: rocksdb: Compile date 2025-07-17 03:12:14
Dec 06 09:37:43 compute-0 ceph-mon[74327]: rocksdb: DB SUMMARY
Dec 06 09:37:43 compute-0 ceph-mon[74327]: rocksdb: DB Session ID:  4WBX5WA2U4DRQ0QUUFCR
Dec 06 09:37:43 compute-0 ceph-mon[74327]: rocksdb: CURRENT file:  CURRENT
Dec 06 09:37:43 compute-0 ceph-mon[74327]: rocksdb: IDENTITY file:  IDENTITY
Dec 06 09:37:43 compute-0 ceph-mon[74327]: rocksdb: MANIFEST file:  MANIFEST-000010 size: 179 Bytes
Dec 06 09:37:43 compute-0 ceph-mon[74327]: rocksdb: SST files in /var/lib/ceph/mon/ceph-compute-0/store.db dir, Total Num: 1, files: 000008.sst 
Dec 06 09:37:43 compute-0 ceph-mon[74327]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-compute-0/store.db: 000009.log size: 58735 ; 
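The DB SUMMARY above describes the mon store as a small RocksDB instance under /var/lib/ceph/mon/ceph-compute-0/store.db: one SST file plus a ~58 KB write-ahead log that will be replayed during open. With the daemon stopped, the layout is visible from the mon data directory; a sketch using the path from the log (the host-side overlay path will differ):

    ls -lh /var/lib/ceph/mon/ceph-compute-0/store.db/
    # expected entries: CURRENT, IDENTITY, MANIFEST-000010, 000008.sst, 000009.log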
Dec 06 09:37:43 compute-0 ceph-mon[74327]: rocksdb:                         Options.error_if_exists: 0
Dec 06 09:37:43 compute-0 ceph-mon[74327]: rocksdb:                       Options.create_if_missing: 0
Dec 06 09:37:43 compute-0 ceph-mon[74327]: rocksdb:                         Options.paranoid_checks: 1
Dec 06 09:37:43 compute-0 ceph-mon[74327]: rocksdb:             Options.flush_verify_memtable_count: 1
Dec 06 09:37:43 compute-0 ceph-mon[74327]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Dec 06 09:37:43 compute-0 ceph-mon[74327]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Dec 06 09:37:43 compute-0 ceph-mon[74327]: rocksdb:                                     Options.env: 0x55fd97e60c20
Dec 06 09:37:43 compute-0 ceph-mon[74327]: rocksdb:                                      Options.fs: PosixFileSystem
Dec 06 09:37:43 compute-0 ceph-mon[74327]: rocksdb:                                Options.info_log: 0x55fd9a54dac0
Dec 06 09:37:43 compute-0 ceph-mon[74327]: rocksdb:                Options.max_file_opening_threads: 16
Dec 06 09:37:43 compute-0 ceph-mon[74327]: rocksdb:                              Options.statistics: (nil)
Dec 06 09:37:43 compute-0 ceph-mon[74327]: rocksdb:                               Options.use_fsync: 0
Dec 06 09:37:43 compute-0 ceph-mon[74327]: rocksdb:                       Options.max_log_file_size: 0
Dec 06 09:37:43 compute-0 ceph-mon[74327]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Dec 06 09:37:43 compute-0 ceph-mon[74327]: rocksdb:                   Options.log_file_time_to_roll: 0
Dec 06 09:37:43 compute-0 ceph-mon[74327]: rocksdb:                       Options.keep_log_file_num: 1000
Dec 06 09:37:43 compute-0 ceph-mon[74327]: rocksdb:                    Options.recycle_log_file_num: 0
Dec 06 09:37:43 compute-0 ceph-mon[74327]: rocksdb:                         Options.allow_fallocate: 1
Dec 06 09:37:43 compute-0 ceph-mon[74327]: rocksdb:                        Options.allow_mmap_reads: 0
Dec 06 09:37:43 compute-0 ceph-mon[74327]: rocksdb:                       Options.allow_mmap_writes: 0
Dec 06 09:37:43 compute-0 ceph-mon[74327]: rocksdb:                        Options.use_direct_reads: 0
Dec 06 09:37:43 compute-0 ceph-mon[74327]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Dec 06 09:37:43 compute-0 ceph-mon[74327]: rocksdb:          Options.create_missing_column_families: 0
Dec 06 09:37:43 compute-0 ceph-mon[74327]: rocksdb:                              Options.db_log_dir: 
Dec 06 09:37:43 compute-0 ceph-mon[74327]: rocksdb:                                 Options.wal_dir: 
Dec 06 09:37:43 compute-0 ceph-mon[74327]: rocksdb:                Options.table_cache_numshardbits: 6
Dec 06 09:37:43 compute-0 ceph-mon[74327]: rocksdb:                         Options.WAL_ttl_seconds: 0
Dec 06 09:37:43 compute-0 ceph-mon[74327]: rocksdb:                       Options.WAL_size_limit_MB: 0
Dec 06 09:37:43 compute-0 ceph-mon[74327]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Dec 06 09:37:43 compute-0 ceph-mon[74327]: rocksdb:             Options.manifest_preallocation_size: 4194304
Dec 06 09:37:43 compute-0 ceph-mon[74327]: rocksdb:                     Options.is_fd_close_on_exec: 1
Dec 06 09:37:43 compute-0 ceph-mon[74327]: rocksdb:                   Options.advise_random_on_open: 1
Dec 06 09:37:43 compute-0 ceph-mon[74327]: rocksdb:                    Options.db_write_buffer_size: 0
Dec 06 09:37:43 compute-0 ceph-mon[74327]: rocksdb:                    Options.write_buffer_manager: 0x55fd9a551900
Dec 06 09:37:43 compute-0 ceph-mon[74327]: rocksdb:         Options.access_hint_on_compaction_start: 1
Dec 06 09:37:43 compute-0 ceph-mon[74327]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Dec 06 09:37:43 compute-0 ceph-mon[74327]: rocksdb:                      Options.use_adaptive_mutex: 0
Dec 06 09:37:43 compute-0 ceph-mon[74327]: rocksdb:                            Options.rate_limiter: (nil)
Dec 06 09:37:43 compute-0 ceph-mon[74327]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Dec 06 09:37:43 compute-0 ceph-mon[74327]: rocksdb:                       Options.wal_recovery_mode: 2
Dec 06 09:37:43 compute-0 ceph-mon[74327]: rocksdb:                  Options.enable_thread_tracking: 0
Dec 06 09:37:43 compute-0 ceph-mon[74327]: rocksdb:                  Options.enable_pipelined_write: 0
Dec 06 09:37:43 compute-0 ceph-mon[74327]: rocksdb:                  Options.unordered_write: 0
Dec 06 09:37:43 compute-0 ceph-mon[74327]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Dec 06 09:37:43 compute-0 ceph-mon[74327]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Dec 06 09:37:43 compute-0 ceph-mon[74327]: rocksdb:             Options.write_thread_max_yield_usec: 100
Dec 06 09:37:43 compute-0 ceph-mon[74327]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Dec 06 09:37:43 compute-0 ceph-mon[74327]: rocksdb:                               Options.row_cache: None
Dec 06 09:37:43 compute-0 ceph-mon[74327]: rocksdb:                              Options.wal_filter: None
Dec 06 09:37:43 compute-0 ceph-mon[74327]: rocksdb:             Options.avoid_flush_during_recovery: 0
Dec 06 09:37:43 compute-0 ceph-mon[74327]: rocksdb:             Options.allow_ingest_behind: 0
Dec 06 09:37:43 compute-0 ceph-mon[74327]: rocksdb:             Options.two_write_queues: 0
Dec 06 09:37:43 compute-0 ceph-mon[74327]: rocksdb:             Options.manual_wal_flush: 0
Dec 06 09:37:43 compute-0 ceph-mon[74327]: rocksdb:             Options.wal_compression: 0
Dec 06 09:37:43 compute-0 ceph-mon[74327]: rocksdb:             Options.atomic_flush: 0
Dec 06 09:37:43 compute-0 ceph-mon[74327]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Dec 06 09:37:43 compute-0 ceph-mon[74327]: rocksdb:                 Options.persist_stats_to_disk: 0
Dec 06 09:37:43 compute-0 ceph-mon[74327]: rocksdb:                 Options.write_dbid_to_manifest: 0
Dec 06 09:37:43 compute-0 ceph-mon[74327]: rocksdb:                 Options.log_readahead_size: 0
Dec 06 09:37:43 compute-0 ceph-mon[74327]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Dec 06 09:37:43 compute-0 ceph-mon[74327]: rocksdb:                 Options.best_efforts_recovery: 0
Dec 06 09:37:43 compute-0 ceph-mon[74327]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Dec 06 09:37:43 compute-0 ceph-mon[74327]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Dec 06 09:37:43 compute-0 ceph-mon[74327]: rocksdb:             Options.allow_data_in_errors: 0
Dec 06 09:37:43 compute-0 ceph-mon[74327]: rocksdb:             Options.db_host_id: __hostname__
Dec 06 09:37:43 compute-0 ceph-mon[74327]: rocksdb:             Options.enforce_single_del_contracts: true
Dec 06 09:37:43 compute-0 ceph-mon[74327]: rocksdb:             Options.max_background_jobs: 2
Dec 06 09:37:43 compute-0 ceph-mon[74327]: rocksdb:             Options.max_background_compactions: -1
Dec 06 09:37:43 compute-0 ceph-mon[74327]: rocksdb:             Options.max_subcompactions: 1
Dec 06 09:37:43 compute-0 ceph-mon[74327]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Dec 06 09:37:43 compute-0 ceph-mon[74327]: rocksdb:           Options.writable_file_max_buffer_size: 1048576
Dec 06 09:37:43 compute-0 ceph-mon[74327]: rocksdb:             Options.delayed_write_rate : 16777216
Dec 06 09:37:43 compute-0 ceph-mon[74327]: rocksdb:             Options.max_total_wal_size: 0
Dec 06 09:37:43 compute-0 ceph-mon[74327]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Dec 06 09:37:43 compute-0 ceph-mon[74327]: rocksdb:                   Options.stats_dump_period_sec: 600
Dec 06 09:37:43 compute-0 ceph-mon[74327]: rocksdb:                 Options.stats_persist_period_sec: 600
Dec 06 09:37:43 compute-0 ceph-mon[74327]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Dec 06 09:37:43 compute-0 ceph-mon[74327]: rocksdb:                          Options.max_open_files: -1
Dec 06 09:37:43 compute-0 ceph-mon[74327]: rocksdb:                          Options.bytes_per_sync: 0
Dec 06 09:37:43 compute-0 ceph-mon[74327]: rocksdb:                      Options.wal_bytes_per_sync: 0
Dec 06 09:37:43 compute-0 ceph-mon[74327]: rocksdb:                   Options.strict_bytes_per_sync: 0
Dec 06 09:37:43 compute-0 ceph-mon[74327]: rocksdb:       Options.compaction_readahead_size: 0
Dec 06 09:37:43 compute-0 ceph-mon[74327]: rocksdb:                  Options.max_background_flushes: -1
Dec 06 09:37:43 compute-0 ceph-mon[74327]: rocksdb: Compression algorithms supported:
Dec 06 09:37:43 compute-0 ceph-mon[74327]: rocksdb:         kZSTD supported: 0
Dec 06 09:37:43 compute-0 ceph-mon[74327]: rocksdb:         kXpressCompression supported: 0
Dec 06 09:37:43 compute-0 ceph-mon[74327]: rocksdb:         kBZip2Compression supported: 0
Dec 06 09:37:43 compute-0 ceph-mon[74327]: rocksdb:         kZSTDNotFinalCompression supported: 0
Dec 06 09:37:43 compute-0 ceph-mon[74327]: rocksdb:         kLZ4Compression supported: 1
Dec 06 09:37:43 compute-0 ceph-mon[74327]: rocksdb:         kZlibCompression supported: 1
Dec 06 09:37:43 compute-0 ceph-mon[74327]: rocksdb:         kLZ4HCCompression supported: 1
Dec 06 09:37:43 compute-0 ceph-mon[74327]: rocksdb:         kSnappyCompression supported: 1
Dec 06 09:37:43 compute-0 ceph-mon[74327]: rocksdb: Fast CRC32 supported: Supported on x86
Dec 06 09:37:43 compute-0 ceph-mon[74327]: rocksdb: DMutex implementation: pthread_mutex_t
Dec 06 09:37:43 compute-0 ceph-mon[74327]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000010
Dec 06 09:37:43 compute-0 ceph-mon[74327]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Dec 06 09:37:43 compute-0 ceph-mon[74327]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 06 09:37:43 compute-0 ceph-mon[74327]: rocksdb:           Options.merge_operator: 
Dec 06 09:37:43 compute-0 ceph-mon[74327]: rocksdb:        Options.compaction_filter: None
Dec 06 09:37:43 compute-0 ceph-mon[74327]: rocksdb:        Options.compaction_filter_factory: None
Dec 06 09:37:43 compute-0 ceph-mon[74327]: rocksdb:  Options.sst_partitioner_factory: None
Dec 06 09:37:43 compute-0 ceph-mon[74327]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 06 09:37:43 compute-0 ceph-mon[74327]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 06 09:37:43 compute-0 ceph-mon[74327]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55fd9a54caa0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55fd9a571350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 06 09:37:43 compute-0 ceph-mon[74327]: rocksdb:        Options.write_buffer_size: 33554432
Dec 06 09:37:43 compute-0 ceph-mon[74327]: rocksdb:  Options.max_write_buffer_number: 2
Dec 06 09:37:43 compute-0 ceph-mon[74327]: rocksdb:          Options.compression: NoCompression
Dec 06 09:37:43 compute-0 ceph-mon[74327]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 06 09:37:43 compute-0 ceph-mon[74327]: rocksdb:       Options.prefix_extractor: nullptr
Dec 06 09:37:43 compute-0 ceph-mon[74327]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 06 09:37:43 compute-0 ceph-mon[74327]: rocksdb:             Options.num_levels: 7
Dec 06 09:37:43 compute-0 ceph-mon[74327]: rocksdb:        Options.min_write_buffer_number_to_merge: 1
Dec 06 09:37:43 compute-0 ceph-mon[74327]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 06 09:37:43 compute-0 ceph-mon[74327]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 06 09:37:43 compute-0 ceph-mon[74327]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 06 09:37:43 compute-0 ceph-mon[74327]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 06 09:37:43 compute-0 ceph-mon[74327]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 06 09:37:43 compute-0 ceph-mon[74327]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 06 09:37:43 compute-0 ceph-mon[74327]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 06 09:37:43 compute-0 ceph-mon[74327]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 06 09:37:43 compute-0 ceph-mon[74327]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 06 09:37:43 compute-0 ceph-mon[74327]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 06 09:37:43 compute-0 ceph-mon[74327]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 06 09:37:43 compute-0 ceph-mon[74327]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 06 09:37:43 compute-0 ceph-mon[74327]: rocksdb:                  Options.compression_opts.level: 32767
Dec 06 09:37:43 compute-0 ceph-mon[74327]: rocksdb:               Options.compression_opts.strategy: 0
Dec 06 09:37:43 compute-0 ceph-mon[74327]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 06 09:37:43 compute-0 ceph-mon[74327]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 06 09:37:43 compute-0 ceph-mon[74327]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 06 09:37:43 compute-0 ceph-mon[74327]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 06 09:37:43 compute-0 ceph-mon[74327]: rocksdb:                  Options.compression_opts.enabled: false
Dec 06 09:37:43 compute-0 ceph-mon[74327]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 06 09:37:43 compute-0 ceph-mon[74327]: rocksdb:      Options.level0_file_num_compaction_trigger: 4
Dec 06 09:37:43 compute-0 ceph-mon[74327]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 06 09:37:43 compute-0 ceph-mon[74327]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 06 09:37:43 compute-0 ceph-mon[74327]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 06 09:37:43 compute-0 ceph-mon[74327]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 06 09:37:43 compute-0 ceph-mon[74327]: rocksdb:                Options.max_bytes_for_level_base: 268435456
Dec 06 09:37:43 compute-0 ceph-mon[74327]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1
Dec 06 09:37:43 compute-0 ceph-mon[74327]: rocksdb:          Options.max_bytes_for_level_multiplier: 10.000000
Dec 06 09:37:43 compute-0 ceph-mon[74327]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 06 09:37:43 compute-0 ceph-mon[74327]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 06 09:37:43 compute-0 ceph-mon[74327]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 06 09:37:43 compute-0 ceph-mon[74327]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 06 09:37:43 compute-0 ceph-mon[74327]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 06 09:37:43 compute-0 ceph-mon[74327]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 06 09:37:43 compute-0 ceph-mon[74327]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 06 09:37:43 compute-0 ceph-mon[74327]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 06 09:37:43 compute-0 ceph-mon[74327]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 06 09:37:43 compute-0 ceph-mon[74327]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 06 09:37:43 compute-0 ceph-mon[74327]: rocksdb:                        Options.arena_block_size: 1048576
Dec 06 09:37:43 compute-0 ceph-mon[74327]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 06 09:37:43 compute-0 ceph-mon[74327]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 06 09:37:43 compute-0 ceph-mon[74327]: rocksdb:                Options.disable_auto_compactions: 0
Dec 06 09:37:43 compute-0 ceph-mon[74327]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 06 09:37:43 compute-0 ceph-mon[74327]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 06 09:37:43 compute-0 ceph-mon[74327]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 06 09:37:43 compute-0 ceph-mon[74327]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 06 09:37:43 compute-0 ceph-mon[74327]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 06 09:37:43 compute-0 ceph-mon[74327]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 06 09:37:43 compute-0 ceph-mon[74327]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 06 09:37:43 compute-0 ceph-mon[74327]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 06 09:37:43 compute-0 ceph-mon[74327]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 06 09:37:43 compute-0 ceph-mon[74327]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 06 09:37:43 compute-0 ceph-mon[74327]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 06 09:37:43 compute-0 ceph-mon[74327]: rocksdb:                   Options.inplace_update_support: 0
Dec 06 09:37:43 compute-0 ceph-mon[74327]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 06 09:37:43 compute-0 ceph-mon[74327]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 06 09:37:43 compute-0 ceph-mon[74327]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 06 09:37:43 compute-0 ceph-mon[74327]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 06 09:37:43 compute-0 ceph-mon[74327]: rocksdb:                           Options.bloom_locality: 0
Dec 06 09:37:43 compute-0 ceph-mon[74327]: rocksdb:                    Options.max_successive_merges: 0
Dec 06 09:37:43 compute-0 ceph-mon[74327]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 06 09:37:43 compute-0 ceph-mon[74327]: rocksdb:                Options.paranoid_file_checks: 0
Dec 06 09:37:43 compute-0 ceph-mon[74327]: rocksdb:                Options.force_consistency_checks: 1
Dec 06 09:37:43 compute-0 ceph-mon[74327]: rocksdb:                Options.report_bg_io_stats: 0
Dec 06 09:37:43 compute-0 ceph-mon[74327]: rocksdb:                               Options.ttl: 2592000
Dec 06 09:37:43 compute-0 ceph-mon[74327]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 06 09:37:43 compute-0 ceph-mon[74327]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 06 09:37:43 compute-0 ceph-mon[74327]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 06 09:37:43 compute-0 ceph-mon[74327]: rocksdb:                       Options.enable_blob_files: false
Dec 06 09:37:43 compute-0 ceph-mon[74327]: rocksdb:                           Options.min_blob_size: 0
Dec 06 09:37:43 compute-0 ceph-mon[74327]: rocksdb:                          Options.blob_file_size: 268435456
Dec 06 09:37:43 compute-0 ceph-mon[74327]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 06 09:37:43 compute-0 ceph-mon[74327]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 06 09:37:43 compute-0 ceph-mon[74327]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 06 09:37:43 compute-0 ceph-mon[74327]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 06 09:37:43 compute-0 ceph-mon[74327]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 06 09:37:43 compute-0 ceph-mon[74327]: rocksdb:                Options.blob_file_starting_level: 0
Dec 06 09:37:43 compute-0 ceph-mon[74327]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 06 09:37:43 compute-0 ceph-mon[74327]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000010 succeeded,manifest_file_number is 10, next_file_number is 12, last_sequence is 5, log_number is 5,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 5
Dec 06 09:37:43 compute-0 ceph-mon[74327]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Dec 06 09:37:43 compute-0 ceph-mon[74327]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 423e8366-3852-4d2b-aa53-87abab31aff3
Dec 06 09:37:43 compute-0 ceph-mon[74327]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765013863753736, "job": 1, "event": "recovery_started", "wal_files": [9]}
Dec 06 09:37:43 compute-0 ceph-mon[74327]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #9 mode 2
Dec 06 09:37:43 compute-0 ceph-mon[74327]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765013863761298, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 13, "file_size": 58486, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 8, "largest_seqno": 137, "table_properties": {"data_size": 56960, "index_size": 168, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 325, "raw_key_size": 3182, "raw_average_key_size": 30, "raw_value_size": 54477, "raw_average_value_size": 523, "num_data_blocks": 9, "num_entries": 104, "num_filter_entries": 104, "num_deletions": 3, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765013863, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "423e8366-3852-4d2b-aa53-87abab31aff3", "db_session_id": "4WBX5WA2U4DRQ0QUUFCR", "orig_file_number": 13, "seqno_to_time_mapping": "N/A"}}
Dec 06 09:37:43 compute-0 ceph-mon[74327]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765013863761472, "job": 1, "event": "recovery_finished"}
Dec 06 09:37:43 compute-0 ceph-mon[74327]: rocksdb: [db/version_set.cc:5047] Creating manifest 15
Dec 06 09:37:43 compute-0 ceph-mon[74327]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000009.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 09:37:43 compute-0 ceph-mon[74327]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x55fd9a572e00
Dec 06 09:37:43 compute-0 ceph-mon[74327]: rocksdb: DB pointer 0x55fd9a67c000
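The EVENT_LOG_v1 records above trace a clean WAL recovery: log 000009.log (58735 bytes) is replayed into a fresh L0 SST (file 13, 104 entries), a new MANIFEST is cut, and the old WAL is deleted immediately. The JSON payloads are machine-parseable; a sketch of extracting them from the journal:

    journalctl -u 'ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258@mon.compute-0.service' \
        | grep -oE 'EVENT_LOG_v1 \{.*\}'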
Dec 06 09:37:43 compute-0 ceph-mon[74327]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 06 09:37:43 compute-0 ceph-mon[74327]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
                                           Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0   59.01 KB   0.5      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      7.9      0.01              0.00         1    0.007       0      0       0.0       0.0
                                            Sum      2/0   59.01 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      7.9      0.01              0.00         1    0.007       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      7.9      0.01              0.00         1    0.007       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      7.9      0.01              0.00         1    0.007       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 1.64 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 1.64 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55fd9a571350#2 capacity: 512.00 MB usage: 0.84 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 5.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(2,0.48 KB,9.23872e-05%) IndexBlock(2,0.36 KB,6.85453e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Dec 06 09:37:43 compute-0 ceph-mon[74327]: starting mon.compute-0 rank 0 at public addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] at bind addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon_data /var/lib/ceph/mon/ceph-compute-0 fsid 5ecd3f74-dade-5fc4-92ce-8950ae424258
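The mon binds the same address pair it advertises: msgr2 on port 3300 and legacy msgr1 on 6789, both on 192.168.122.100. A quick host-side check that both listeners are up, sketched:

    ss -ltnp | grep -E ':(3300|6789)'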
Dec 06 09:37:43 compute-0 ceph-mon[74327]: mon.compute-0@-1(???) e1 preinit fsid 5ecd3f74-dade-5fc4-92ce-8950ae424258
Dec 06 09:37:43 compute-0 ceph-mon[74327]: mon.compute-0@-1(???).mds e1 new map
Dec 06 09:37:43 compute-0 ceph-mon[74327]: mon.compute-0@-1(???).mds e1 print_map
                                           e1
                                           btime 2025-12-06T09:37:41.285728+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: -1
                                            
                                           No filesystems configured
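print_map above shows an empty FSMap: the CephFS compat feature bits are enumerated, but no filesystem exists yet, which is also why the cluster log's "fsmap" lines are blank. The CLI equivalent, as a sketch:

    ceph fs ls    # prints "No filesystems enabled" on a cluster in this state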
Dec 06 09:37:43 compute-0 ceph-mon[74327]: mon.compute-0@-1(???).osd e1 crush map has features 3314932999778484224, adjusting msgr requires
Dec 06 09:37:43 compute-0 ceph-mon[74327]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Dec 06 09:37:43 compute-0 ceph-mon[74327]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Dec 06 09:37:43 compute-0 ceph-mon[74327]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Dec 06 09:37:43 compute-0 ceph-mon[74327]: mon.compute-0@-1(???).paxosservice(auth 1..2) refresh upgraded, format 0 -> 3
Dec 06 09:37:43 compute-0 ceph-mon[74327]: mon.compute-0@-1(probing) e1  my rank is now 0 (was -1)
Dec 06 09:37:43 compute-0 ceph-mon[74327]: mon.compute-0@0(probing) e1 win_standalone_election
Dec 06 09:37:43 compute-0 ceph-mon[74327]: paxos.0).electionLogic(3) init, last seen epoch 3, mid-election, bumping
Dec 06 09:37:43 compute-0 podman[74328]: 2025-12-06 09:37:43.799263924 +0000 UTC m=+0.060418112 container create b79cfbacb0aaed3d031f7f1dbb189636f64424f1772739fb2574a7af83c0e41c (image=quay.io/ceph/ceph:v19, name=eloquent_merkle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 09:37:43 compute-0 ceph-mon[74327]: mon.compute-0@0(electing) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 06 09:37:43 compute-0 ceph-mon[74327]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Dec 06 09:37:43 compute-0 ceph-mon[74327]: log_channel(cluster) log [DBG] : monmap epoch 1
Dec 06 09:37:43 compute-0 ceph-mon[74327]: log_channel(cluster) log [DBG] : fsid 5ecd3f74-dade-5fc4-92ce-8950ae424258
Dec 06 09:37:43 compute-0 ceph-mon[74327]: log_channel(cluster) log [DBG] : last_changed 2025-12-06T09:37:38.663870+0000
Dec 06 09:37:43 compute-0 ceph-mon[74327]: log_channel(cluster) log [DBG] : created 2025-12-06T09:37:38.663870+0000
Dec 06 09:37:43 compute-0 ceph-mon[74327]: log_channel(cluster) log [DBG] : min_mon_release 19 (squid)
Dec 06 09:37:43 compute-0 ceph-mon[74327]: log_channel(cluster) log [DBG] : election_strategy: 1
Dec 06 09:37:43 compute-0 ceph-mon[74327]: log_channel(cluster) log [DBG] : 0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Dec 06 09:37:43 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 06 09:37:43 compute-0 ceph-mon[74327]: log_channel(cluster) log [DBG] : fsmap 
Dec 06 09:37:43 compute-0 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e1: 0 total, 0 up, 0 in
Dec 06 09:37:43 compute-0 ceph-mon[74327]: log_channel(cluster) log [DBG] : mgrmap e1: no daemons active
Dec 06 09:37:43 compute-0 systemd[1]: Started libpod-conmon-b79cfbacb0aaed3d031f7f1dbb189636f64424f1772739fb2574a7af83c0e41c.scope.
Dec 06 09:37:43 compute-0 podman[74328]: 2025-12-06 09:37:43.775662465 +0000 UTC m=+0.036816753 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 06 09:37:43 compute-0 systemd[1]: Started libcrun container.
Dec 06 09:37:43 compute-0 ceph-mon[74327]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Dec 06 09:37:43 compute-0 ceph-mon[74327]: monmap epoch 1
Dec 06 09:37:43 compute-0 ceph-mon[74327]: fsid 5ecd3f74-dade-5fc4-92ce-8950ae424258
Dec 06 09:37:43 compute-0 ceph-mon[74327]: last_changed 2025-12-06T09:37:38.663870+0000
Dec 06 09:37:43 compute-0 ceph-mon[74327]: created 2025-12-06T09:37:38.663870+0000
Dec 06 09:37:43 compute-0 ceph-mon[74327]: min_mon_release 19 (squid)
Dec 06 09:37:43 compute-0 ceph-mon[74327]: election_strategy: 1
Dec 06 09:37:43 compute-0 ceph-mon[74327]: 0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Dec 06 09:37:43 compute-0 ceph-mon[74327]: fsmap 
Dec 06 09:37:43 compute-0 ceph-mon[74327]: osdmap e1: 0 total, 0 up, 0 in
Dec 06 09:37:43 compute-0 ceph-mon[74327]: mgrmap e1: no daemons active
Dec 06 09:37:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/feb2eb74b68edc622ecbe8e8a0107af83324ff1c9aee09d8e3efe5044a040acc/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 06 09:37:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/feb2eb74b68edc622ecbe8e8a0107af83324ff1c9aee09d8e3efe5044a040acc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 09:37:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/feb2eb74b68edc622ecbe8e8a0107af83324ff1c9aee09d8e3efe5044a040acc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 09:37:43 compute-0 podman[74328]: 2025-12-06 09:37:43.90856622 +0000 UTC m=+0.169720418 container init b79cfbacb0aaed3d031f7f1dbb189636f64424f1772739fb2574a7af83c0e41c (image=quay.io/ceph/ceph:v19, name=eloquent_merkle, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 09:37:43 compute-0 podman[74328]: 2025-12-06 09:37:43.918572217 +0000 UTC m=+0.179726405 container start b79cfbacb0aaed3d031f7f1dbb189636f64424f1772739fb2574a7af83c0e41c (image=quay.io/ceph/ceph:v19, name=eloquent_merkle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Dec 06 09:37:43 compute-0 podman[74328]: 2025-12-06 09:37:43.927053113 +0000 UTC m=+0.188207301 container attach b79cfbacb0aaed3d031f7f1dbb189636f64424f1772739fb2574a7af83c0e41c (image=quay.io/ceph/ceph:v19, name=eloquent_merkle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec 06 09:37:44 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=public_network}] v 0)
Dec 06 09:37:44 compute-0 systemd[1]: libpod-b79cfbacb0aaed3d031f7f1dbb189636f64424f1772739fb2574a7af83c0e41c.scope: Deactivated successfully.
Dec 06 09:37:44 compute-0 podman[74328]: 2025-12-06 09:37:44.191429934 +0000 UTC m=+0.452584132 container died b79cfbacb0aaed3d031f7f1dbb189636f64424f1772739fb2574a7af83c0e41c (image=quay.io/ceph/ceph:v19, name=eloquent_merkle, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec 06 09:37:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-feb2eb74b68edc622ecbe8e8a0107af83324ff1c9aee09d8e3efe5044a040acc-merged.mount: Deactivated successfully.
Dec 06 09:37:44 compute-0 podman[74328]: 2025-12-06 09:37:44.255651237 +0000 UTC m=+0.516805455 container remove b79cfbacb0aaed3d031f7f1dbb189636f64424f1772739fb2574a7af83c0e41c (image=quay.io/ceph/ceph:v19, name=eloquent_merkle, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Dec 06 09:37:44 compute-0 systemd[1]: libpod-conmon-b79cfbacb0aaed3d031f7f1dbb189636f64424f1772739fb2574a7af83c0e41c.scope: Deactivated successfully.
Dec 06 09:37:44 compute-0 podman[74420]: 2025-12-06 09:37:44.328614083 +0000 UTC m=+0.051323329 container create 6107d8c7bbe9704cb093f5c0b71d684f71611f3205bb5573a4dc97abea77a699 (image=quay.io/ceph/ceph:v19, name=pedantic_ptolemy, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec 06 09:37:44 compute-0 systemd[1]: Started libpod-conmon-6107d8c7bbe9704cb093f5c0b71d684f71611f3205bb5573a4dc97abea77a699.scope.
Dec 06 09:37:44 compute-0 systemd[1]: Started libcrun container.
Dec 06 09:37:44 compute-0 podman[74420]: 2025-12-06 09:37:44.301897511 +0000 UTC m=+0.024606757 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 06 09:37:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/137489bf9721a2bc912490ef390fe1830c08a2ae02f3442f3b493f0eb1e59bfa/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 09:37:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/137489bf9721a2bc912490ef390fe1830c08a2ae02f3442f3b493f0eb1e59bfa/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 06 09:37:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/137489bf9721a2bc912490ef390fe1830c08a2ae02f3442f3b493f0eb1e59bfa/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 09:37:44 compute-0 podman[74420]: 2025-12-06 09:37:44.416802306 +0000 UTC m=+0.139511602 container init 6107d8c7bbe9704cb093f5c0b71d684f71611f3205bb5573a4dc97abea77a699 (image=quay.io/ceph/ceph:v19, name=pedantic_ptolemy, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 09:37:44 compute-0 podman[74420]: 2025-12-06 09:37:44.427307345 +0000 UTC m=+0.150016591 container start 6107d8c7bbe9704cb093f5c0b71d684f71611f3205bb5573a4dc97abea77a699 (image=quay.io/ceph/ceph:v19, name=pedantic_ptolemy, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 09:37:44 compute-0 podman[74420]: 2025-12-06 09:37:44.431684402 +0000 UTC m=+0.154393658 container attach 6107d8c7bbe9704cb093f5c0b71d684f71611f3205bb5573a4dc97abea77a699 (image=quay.io/ceph/ceph:v19, name=pedantic_ptolemy, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_REF=squid)
Dec 06 09:37:44 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=cluster_network}] v 0)
Dec 06 09:37:44 compute-0 systemd[1]: libpod-6107d8c7bbe9704cb093f5c0b71d684f71611f3205bb5573a4dc97abea77a699.scope: Deactivated successfully.
Dec 06 09:37:44 compute-0 podman[74420]: 2025-12-06 09:37:44.712589744 +0000 UTC m=+0.435298980 container died 6107d8c7bbe9704cb093f5c0b71d684f71611f3205bb5573a4dc97abea77a699 (image=quay.io/ceph/ceph:v19, name=pedantic_ptolemy, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Dec 06 09:37:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-137489bf9721a2bc912490ef390fe1830c08a2ae02f3442f3b493f0eb1e59bfa-merged.mount: Deactivated successfully.
Dec 06 09:37:45 compute-0 podman[74420]: 2025-12-06 09:37:45.17336025 +0000 UTC m=+0.896069496 container remove 6107d8c7bbe9704cb093f5c0b71d684f71611f3205bb5573a4dc97abea77a699 (image=quay.io/ceph/ceph:v19, name=pedantic_ptolemy, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Dec 06 09:37:45 compute-0 systemd[1]: libpod-conmon-6107d8c7bbe9704cb093f5c0b71d684f71611f3205bb5573a4dc97abea77a699.scope: Deactivated successfully.
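
The two short-lived containers above (eloquent_merkle, pedantic_ptolemy) each existed only to issue one mon command — `config set public_network` and `config set cluster_network`, per the handle_command lines at 09:37:44 — then died and were removed. A sketch of that one-shot pattern in Python wrapping podman; the config section (`global`) and the CIDR values are assumptions, since the journal elides them:

    import subprocess

    IMAGE = "quay.io/ceph/ceph:v19"

    def ceph_config_set(name: str, value: str) -> None:
        # One throwaway container per command, matching the
        # create/start/attach/died/remove cycle in the journal; the mounts
        # mirror the ceph.conf/keyring binds seen in the xfs remount lines.
        subprocess.run(
            ["podman", "run", "--rm", "--net=host",
             "-v", "/etc/ceph/ceph.conf:/etc/ceph/ceph.conf:z",
             "-v", "/etc/ceph/ceph.client.admin.keyring:/etc/ceph/ceph.client.admin.keyring:z",
             IMAGE, "ceph", "config", "set", "global", name, value],
            check=True,
        )

    # Section and CIDRs are illustrative; the journal shows only the option names.
    ceph_config_set("public_network", "192.168.122.0/24")
    ceph_config_set("cluster_network", "192.168.122.0/24")
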
Dec 06 09:37:45 compute-0 systemd[1]: Reloading.
Dec 06 09:37:45 compute-0 systemd-rc-local-generator[74502]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 09:37:45 compute-0 systemd-sysv-generator[74506]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 09:37:45 compute-0 systemd[1]: Reloading.
Dec 06 09:37:45 compute-0 systemd-rc-local-generator[74542]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 09:37:45 compute-0 systemd-sysv-generator[74546]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 09:37:45 compute-0 systemd[1]: Starting Ceph mgr.compute-0.qhdjwa for 5ecd3f74-dade-5fc4-92ce-8950ae424258...
Dec 06 09:37:46 compute-0 podman[74599]: 2025-12-06 09:37:46.221675265 +0000 UTC m=+0.083997003 container create 815d2c9c324f0034e21122d212c9b39b8cfbd265220b3170dc0ddc482fd85aa9 (image=quay.io/ceph/ceph:v19, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 09:37:46 compute-0 podman[74599]: 2025-12-06 09:37:46.186895464 +0000 UTC m=+0.049217252 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 06 09:37:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1415217fe1ab4fc45c3f2163d9ec7fdac44343257a341b013f55f3f758333a01/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 09:37:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1415217fe1ab4fc45c3f2163d9ec7fdac44343257a341b013f55f3f758333a01/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 09:37:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1415217fe1ab4fc45c3f2163d9ec7fdac44343257a341b013f55f3f758333a01/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 09:37:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1415217fe1ab4fc45c3f2163d9ec7fdac44343257a341b013f55f3f758333a01/merged/var/lib/ceph/mgr/ceph-compute-0.qhdjwa supports timestamps until 2038 (0x7fffffff)
Dec 06 09:37:46 compute-0 podman[74599]: 2025-12-06 09:37:46.309044803 +0000 UTC m=+0.171366561 container init 815d2c9c324f0034e21122d212c9b39b8cfbd265220b3170dc0ddc482fd85aa9 (image=quay.io/ceph/ceph:v19, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 09:37:46 compute-0 podman[74599]: 2025-12-06 09:37:46.318338204 +0000 UTC m=+0.180659932 container start 815d2c9c324f0034e21122d212c9b39b8cfbd265220b3170dc0ddc482fd85aa9 (image=quay.io/ceph/ceph:v19, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 09:37:46 compute-0 ceph-mgr[74618]: set uid:gid to 167:167 (ceph:ceph)
Dec 06 09:37:46 compute-0 ceph-mgr[74618]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mgr, pid 2
Dec 06 09:37:46 compute-0 ceph-mgr[74618]: pidfile_write: ignore empty --pid-file
Dec 06 09:37:46 compute-0 ceph-mgr[74618]: mgr[py] Loading python module 'alerts'
Dec 06 09:37:46 compute-0 ceph-mgr[74618]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Dec 06 09:37:46 compute-0 ceph-mgr[74618]: mgr[py] Loading python module 'balancer'
Dec 06 09:37:46 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:37:46.493+0000 7ff0866c5140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Dec 06 09:37:46 compute-0 ceph-mgr[74618]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Dec 06 09:37:46 compute-0 ceph-mgr[74618]: mgr[py] Loading python module 'cephadm'
Dec 06 09:37:46 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:37:46.576+0000 7ff0866c5140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Dec 06 09:37:46 compute-0 bash[74599]: 815d2c9c324f0034e21122d212c9b39b8cfbd265220b3170dc0ddc482fd85aa9
Dec 06 09:37:46 compute-0 systemd[1]: Started Ceph mgr.compute-0.qhdjwa for 5ecd3f74-dade-5fc4-92ce-8950ae424258.
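
Each "Module X has missing NOTIFY_TYPES member" line is the mgr warning that a Python module does not declare which cluster-map notifications it consumes; it is noisy but harmless. A minimal module sketch, assuming the squid-era `mgr_module` API and only importable inside ceph-mgr (`Example` is a hypothetical module name):

    from typing import List

    from mgr_module import MgrModule, NotifyType  # squid-era mgr API (assumed)

    class Example(MgrModule):  # hypothetical module
        # Declaring NOTIFY_TYPES names the notify() events this module
        # wants; modules that omit it trigger the journal warnings above.
        NOTIFY_TYPES: List[NotifyType] = [NotifyType.mon_map, NotifyType.osd_map]

        def notify(self, notify_type: NotifyType, notify_id: str) -> None:
            self.log.debug("got %s notification", notify_type)
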
Dec 06 09:37:46 compute-0 podman[74639]: 2025-12-06 09:37:46.829159541 +0000 UTC m=+0.071440218 container create b7f825cface5ee74d809808d15f14fb36e940b71b796662094b3388ebd4db7b0 (image=quay.io/ceph/ceph:v19, name=pensive_matsumoto, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True)
Dec 06 09:37:46 compute-0 systemd[1]: Started libpod-conmon-b7f825cface5ee74d809808d15f14fb36e940b71b796662094b3388ebd4db7b0.scope.
Dec 06 09:37:46 compute-0 podman[74639]: 2025-12-06 09:37:46.797715937 +0000 UTC m=+0.039996674 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 06 09:37:46 compute-0 systemd[1]: Started libcrun container.
Dec 06 09:37:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf2844f23ba2d584e0ea59ab2a0fb3925aad142b587f370bb36d43abc82ef14d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 09:37:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf2844f23ba2d584e0ea59ab2a0fb3925aad142b587f370bb36d43abc82ef14d/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 06 09:37:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf2844f23ba2d584e0ea59ab2a0fb3925aad142b587f370bb36d43abc82ef14d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 09:37:46 compute-0 podman[74639]: 2025-12-06 09:37:46.948117337 +0000 UTC m=+0.190398024 container init b7f825cface5ee74d809808d15f14fb36e940b71b796662094b3388ebd4db7b0 (image=quay.io/ceph/ceph:v19, name=pensive_matsumoto, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Dec 06 09:37:46 compute-0 podman[74639]: 2025-12-06 09:37:46.960233864 +0000 UTC m=+0.202514511 container start b7f825cface5ee74d809808d15f14fb36e940b71b796662094b3388ebd4db7b0 (image=quay.io/ceph/ceph:v19, name=pensive_matsumoto, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 09:37:46 compute-0 podman[74639]: 2025-12-06 09:37:46.964098009 +0000 UTC m=+0.206378686 container attach b7f825cface5ee74d809808d15f14fb36e940b71b796662094b3388ebd4db7b0 (image=quay.io/ceph/ceph:v19, name=pensive_matsumoto, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 09:37:47 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0)
Dec 06 09:37:47 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2723740387' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Dec 06 09:37:47 compute-0 pensive_matsumoto[74653]: 
Dec 06 09:37:47 compute-0 pensive_matsumoto[74653]: {
Dec 06 09:37:47 compute-0 pensive_matsumoto[74653]:     "fsid": "5ecd3f74-dade-5fc4-92ce-8950ae424258",
Dec 06 09:37:47 compute-0 pensive_matsumoto[74653]:     "health": {
Dec 06 09:37:47 compute-0 pensive_matsumoto[74653]:         "status": "HEALTH_OK",
Dec 06 09:37:47 compute-0 pensive_matsumoto[74653]:         "checks": {},
Dec 06 09:37:47 compute-0 pensive_matsumoto[74653]:         "mutes": []
Dec 06 09:37:47 compute-0 pensive_matsumoto[74653]:     },
Dec 06 09:37:47 compute-0 pensive_matsumoto[74653]:     "election_epoch": 5,
Dec 06 09:37:47 compute-0 pensive_matsumoto[74653]:     "quorum": [
Dec 06 09:37:47 compute-0 pensive_matsumoto[74653]:         0
Dec 06 09:37:47 compute-0 pensive_matsumoto[74653]:     ],
Dec 06 09:37:47 compute-0 pensive_matsumoto[74653]:     "quorum_names": [
Dec 06 09:37:47 compute-0 pensive_matsumoto[74653]:         "compute-0"
Dec 06 09:37:47 compute-0 pensive_matsumoto[74653]:     ],
Dec 06 09:37:47 compute-0 pensive_matsumoto[74653]:     "quorum_age": 3,
Dec 06 09:37:47 compute-0 pensive_matsumoto[74653]:     "monmap": {
Dec 06 09:37:47 compute-0 pensive_matsumoto[74653]:         "epoch": 1,
Dec 06 09:37:47 compute-0 pensive_matsumoto[74653]:         "min_mon_release_name": "squid",
Dec 06 09:37:47 compute-0 pensive_matsumoto[74653]:         "num_mons": 1
Dec 06 09:37:47 compute-0 pensive_matsumoto[74653]:     },
Dec 06 09:37:47 compute-0 pensive_matsumoto[74653]:     "osdmap": {
Dec 06 09:37:47 compute-0 pensive_matsumoto[74653]:         "epoch": 1,
Dec 06 09:37:47 compute-0 pensive_matsumoto[74653]:         "num_osds": 0,
Dec 06 09:37:47 compute-0 pensive_matsumoto[74653]:         "num_up_osds": 0,
Dec 06 09:37:47 compute-0 pensive_matsumoto[74653]:         "osd_up_since": 0,
Dec 06 09:37:47 compute-0 pensive_matsumoto[74653]:         "num_in_osds": 0,
Dec 06 09:37:47 compute-0 pensive_matsumoto[74653]:         "osd_in_since": 0,
Dec 06 09:37:47 compute-0 pensive_matsumoto[74653]:         "num_remapped_pgs": 0
Dec 06 09:37:47 compute-0 pensive_matsumoto[74653]:     },
Dec 06 09:37:47 compute-0 pensive_matsumoto[74653]:     "pgmap": {
Dec 06 09:37:47 compute-0 pensive_matsumoto[74653]:         "pgs_by_state": [],
Dec 06 09:37:47 compute-0 pensive_matsumoto[74653]:         "num_pgs": 0,
Dec 06 09:37:47 compute-0 pensive_matsumoto[74653]:         "num_pools": 0,
Dec 06 09:37:47 compute-0 pensive_matsumoto[74653]:         "num_objects": 0,
Dec 06 09:37:47 compute-0 pensive_matsumoto[74653]:         "data_bytes": 0,
Dec 06 09:37:47 compute-0 pensive_matsumoto[74653]:         "bytes_used": 0,
Dec 06 09:37:47 compute-0 pensive_matsumoto[74653]:         "bytes_avail": 0,
Dec 06 09:37:47 compute-0 pensive_matsumoto[74653]:         "bytes_total": 0
Dec 06 09:37:47 compute-0 pensive_matsumoto[74653]:     },
Dec 06 09:37:47 compute-0 pensive_matsumoto[74653]:     "fsmap": {
Dec 06 09:37:47 compute-0 pensive_matsumoto[74653]:         "epoch": 1,
Dec 06 09:37:47 compute-0 pensive_matsumoto[74653]:         "btime": "2025-12-06T09:37:41.285728+0000",
Dec 06 09:37:47 compute-0 pensive_matsumoto[74653]:         "by_rank": [],
Dec 06 09:37:47 compute-0 pensive_matsumoto[74653]:         "up:standby": 0
Dec 06 09:37:47 compute-0 pensive_matsumoto[74653]:     },
Dec 06 09:37:47 compute-0 pensive_matsumoto[74653]:     "mgrmap": {
Dec 06 09:37:47 compute-0 pensive_matsumoto[74653]:         "available": false,
Dec 06 09:37:47 compute-0 pensive_matsumoto[74653]:         "num_standbys": 0,
Dec 06 09:37:47 compute-0 pensive_matsumoto[74653]:         "modules": [
Dec 06 09:37:47 compute-0 pensive_matsumoto[74653]:             "iostat",
Dec 06 09:37:47 compute-0 pensive_matsumoto[74653]:             "nfs",
Dec 06 09:37:47 compute-0 pensive_matsumoto[74653]:             "restful"
Dec 06 09:37:47 compute-0 pensive_matsumoto[74653]:         ],
Dec 06 09:37:47 compute-0 pensive_matsumoto[74653]:         "services": {}
Dec 06 09:37:47 compute-0 pensive_matsumoto[74653]:     },
Dec 06 09:37:47 compute-0 pensive_matsumoto[74653]:     "servicemap": {
Dec 06 09:37:47 compute-0 pensive_matsumoto[74653]:         "epoch": 1,
Dec 06 09:37:47 compute-0 pensive_matsumoto[74653]:         "modified": "2025-12-06T09:37:41.289249+0000",
Dec 06 09:37:47 compute-0 pensive_matsumoto[74653]:         "services": {}
Dec 06 09:37:47 compute-0 pensive_matsumoto[74653]:     },
Dec 06 09:37:47 compute-0 pensive_matsumoto[74653]:     "progress_events": {}
Dec 06 09:37:47 compute-0 pensive_matsumoto[74653]: }
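
The dump above is the output of the `ceph status --format json-pretty` the mon just dispatched. With `mgrmap.available` still false while the mgr loads its modules, bootstrap keeps re-running the query from fresh throwaway containers — the same dump repeats twice more below. A minimal sketch of that polling loop, assuming a host-side `ceph` CLI with the admin keyring (the loop itself is an inference from the repeated dumps, not something the journal states):

    import json
    import subprocess
    import time

    def cluster_status() -> dict:
        # Same query the throwaway containers run.
        out = subprocess.run(
            ["ceph", "status", "--format", "json-pretty"],
            check=True, capture_output=True, text=True,
        ).stdout
        return json.loads(out)

    # Poll until the mgrmap reports an active daemon.
    while not cluster_status()["mgrmap"]["available"]:
        time.sleep(1)
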
Dec 06 09:37:47 compute-0 systemd[1]: libpod-b7f825cface5ee74d809808d15f14fb36e940b71b796662094b3388ebd4db7b0.scope: Deactivated successfully.
Dec 06 09:37:47 compute-0 podman[74690]: 2025-12-06 09:37:47.229899006 +0000 UTC m=+0.024712364 container died b7f825cface5ee74d809808d15f14fb36e940b71b796662094b3388ebd4db7b0 (image=quay.io/ceph/ceph:v19, name=pensive_matsumoto, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, OSD_FLAVOR=default)
Dec 06 09:37:47 compute-0 ceph-mon[74327]: from='client.? 192.168.122.100:0/2723740387' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Dec 06 09:37:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-bf2844f23ba2d584e0ea59ab2a0fb3925aad142b587f370bb36d43abc82ef14d-merged.mount: Deactivated successfully.
Dec 06 09:37:47 compute-0 podman[74690]: 2025-12-06 09:37:47.26544217 +0000 UTC m=+0.060255508 container remove b7f825cface5ee74d809808d15f14fb36e940b71b796662094b3388ebd4db7b0 (image=quay.io/ceph/ceph:v19, name=pensive_matsumoto, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 09:37:47 compute-0 systemd[1]: libpod-conmon-b7f825cface5ee74d809808d15f14fb36e940b71b796662094b3388ebd4db7b0.scope: Deactivated successfully.
Dec 06 09:37:47 compute-0 ceph-mgr[74618]: mgr[py] Loading python module 'crash'
Dec 06 09:37:47 compute-0 ceph-mgr[74618]: mgr[py] Module crash has missing NOTIFY_TYPES member
Dec 06 09:37:47 compute-0 ceph-mgr[74618]: mgr[py] Loading python module 'dashboard'
Dec 06 09:37:47 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:37:47.433+0000 7ff0866c5140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Dec 06 09:37:47 compute-0 ceph-mgr[74618]: mgr[py] Loading python module 'devicehealth'
Dec 06 09:37:48 compute-0 ceph-mgr[74618]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Dec 06 09:37:48 compute-0 ceph-mgr[74618]: mgr[py] Loading python module 'diskprediction_local'
Dec 06 09:37:48 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:37:48.060+0000 7ff0866c5140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Dec 06 09:37:48 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Dec 06 09:37:48 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Dec 06 09:37:48 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]:   from numpy import show_config as show_numpy_config
Dec 06 09:37:48 compute-0 ceph-mgr[74618]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Dec 06 09:37:48 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:37:48.248+0000 7ff0866c5140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Dec 06 09:37:48 compute-0 ceph-mgr[74618]: mgr[py] Loading python module 'influx'
Dec 06 09:37:48 compute-0 ceph-mgr[74618]: mgr[py] Module influx has missing NOTIFY_TYPES member
Dec 06 09:37:48 compute-0 ceph-mgr[74618]: mgr[py] Loading python module 'insights'
Dec 06 09:37:48 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:37:48.326+0000 7ff0866c5140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Dec 06 09:37:48 compute-0 ceph-mgr[74618]: mgr[py] Loading python module 'iostat'
Dec 06 09:37:48 compute-0 ceph-mgr[74618]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Dec 06 09:37:48 compute-0 ceph-mgr[74618]: mgr[py] Loading python module 'k8sevents'
Dec 06 09:37:48 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:37:48.467+0000 7ff0866c5140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Dec 06 09:37:48 compute-0 ceph-mgr[74618]: mgr[py] Loading python module 'localpool'
Dec 06 09:37:48 compute-0 ceph-mgr[74618]: mgr[py] Loading python module 'mds_autoscaler'
Dec 06 09:37:49 compute-0 ceph-mgr[74618]: mgr[py] Loading python module 'mirroring'
Dec 06 09:37:49 compute-0 ceph-mgr[74618]: mgr[py] Loading python module 'nfs'
Dec 06 09:37:49 compute-0 podman[74705]: 2025-12-06 09:37:49.366766733 +0000 UTC m=+0.066914530 container create 805b29a0a6981d54527d7371f2661c621f398bc71d1537589bc01a99e7465c96 (image=quay.io/ceph/ceph:v19, name=silly_newton, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 06 09:37:49 compute-0 systemd[1]: Started libpod-conmon-805b29a0a6981d54527d7371f2661c621f398bc71d1537589bc01a99e7465c96.scope.
Dec 06 09:37:49 compute-0 podman[74705]: 2025-12-06 09:37:49.330542104 +0000 UTC m=+0.030690011 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 06 09:37:49 compute-0 systemd[1]: Started libcrun container.
Dec 06 09:37:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20d171790ebdd09ee65b482d884665de15a01afb1a0fb0be49dec20c728d093e/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 06 09:37:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20d171790ebdd09ee65b482d884665de15a01afb1a0fb0be49dec20c728d093e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 09:37:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20d171790ebdd09ee65b482d884665de15a01afb1a0fb0be49dec20c728d093e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 09:37:49 compute-0 ceph-mgr[74618]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Dec 06 09:37:49 compute-0 ceph-mgr[74618]: mgr[py] Loading python module 'orchestrator'
Dec 06 09:37:49 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:37:49.449+0000 7ff0866c5140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Dec 06 09:37:49 compute-0 podman[74705]: 2025-12-06 09:37:49.47253188 +0000 UTC m=+0.172679777 container init 805b29a0a6981d54527d7371f2661c621f398bc71d1537589bc01a99e7465c96 (image=quay.io/ceph/ceph:v19, name=silly_newton, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 06 09:37:49 compute-0 podman[74705]: 2025-12-06 09:37:49.479120899 +0000 UTC m=+0.179268736 container start 805b29a0a6981d54527d7371f2661c621f398bc71d1537589bc01a99e7465c96 (image=quay.io/ceph/ceph:v19, name=silly_newton, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 09:37:49 compute-0 podman[74705]: 2025-12-06 09:37:49.483869701 +0000 UTC m=+0.184017598 container attach 805b29a0a6981d54527d7371f2661c621f398bc71d1537589bc01a99e7465c96 (image=quay.io/ceph/ceph:v19, name=silly_newton, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 06 09:37:49 compute-0 ceph-mgr[74618]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Dec 06 09:37:49 compute-0 ceph-mgr[74618]: mgr[py] Loading python module 'osd_perf_query'
Dec 06 09:37:49 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:37:49.671+0000 7ff0866c5140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Dec 06 09:37:49 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0)
Dec 06 09:37:49 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/75919033' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Dec 06 09:37:49 compute-0 silly_newton[74721]: 
Dec 06 09:37:49 compute-0 silly_newton[74721]: {
Dec 06 09:37:49 compute-0 silly_newton[74721]:     "fsid": "5ecd3f74-dade-5fc4-92ce-8950ae424258",
Dec 06 09:37:49 compute-0 silly_newton[74721]:     "health": {
Dec 06 09:37:49 compute-0 silly_newton[74721]:         "status": "HEALTH_OK",
Dec 06 09:37:49 compute-0 silly_newton[74721]:         "checks": {},
Dec 06 09:37:49 compute-0 silly_newton[74721]:         "mutes": []
Dec 06 09:37:49 compute-0 silly_newton[74721]:     },
Dec 06 09:37:49 compute-0 silly_newton[74721]:     "election_epoch": 5,
Dec 06 09:37:49 compute-0 silly_newton[74721]:     "quorum": [
Dec 06 09:37:49 compute-0 silly_newton[74721]:         0
Dec 06 09:37:49 compute-0 silly_newton[74721]:     ],
Dec 06 09:37:49 compute-0 silly_newton[74721]:     "quorum_names": [
Dec 06 09:37:49 compute-0 silly_newton[74721]:         "compute-0"
Dec 06 09:37:49 compute-0 silly_newton[74721]:     ],
Dec 06 09:37:49 compute-0 silly_newton[74721]:     "quorum_age": 5,
Dec 06 09:37:49 compute-0 silly_newton[74721]:     "monmap": {
Dec 06 09:37:49 compute-0 silly_newton[74721]:         "epoch": 1,
Dec 06 09:37:49 compute-0 silly_newton[74721]:         "min_mon_release_name": "squid",
Dec 06 09:37:49 compute-0 silly_newton[74721]:         "num_mons": 1
Dec 06 09:37:49 compute-0 silly_newton[74721]:     },
Dec 06 09:37:49 compute-0 silly_newton[74721]:     "osdmap": {
Dec 06 09:37:49 compute-0 silly_newton[74721]:         "epoch": 1,
Dec 06 09:37:49 compute-0 silly_newton[74721]:         "num_osds": 0,
Dec 06 09:37:49 compute-0 silly_newton[74721]:         "num_up_osds": 0,
Dec 06 09:37:49 compute-0 silly_newton[74721]:         "osd_up_since": 0,
Dec 06 09:37:49 compute-0 silly_newton[74721]:         "num_in_osds": 0,
Dec 06 09:37:49 compute-0 silly_newton[74721]:         "osd_in_since": 0,
Dec 06 09:37:49 compute-0 silly_newton[74721]:         "num_remapped_pgs": 0
Dec 06 09:37:49 compute-0 silly_newton[74721]:     },
Dec 06 09:37:49 compute-0 silly_newton[74721]:     "pgmap": {
Dec 06 09:37:49 compute-0 silly_newton[74721]:         "pgs_by_state": [],
Dec 06 09:37:49 compute-0 silly_newton[74721]:         "num_pgs": 0,
Dec 06 09:37:49 compute-0 silly_newton[74721]:         "num_pools": 0,
Dec 06 09:37:49 compute-0 silly_newton[74721]:         "num_objects": 0,
Dec 06 09:37:49 compute-0 silly_newton[74721]:         "data_bytes": 0,
Dec 06 09:37:49 compute-0 silly_newton[74721]:         "bytes_used": 0,
Dec 06 09:37:49 compute-0 silly_newton[74721]:         "bytes_avail": 0,
Dec 06 09:37:49 compute-0 silly_newton[74721]:         "bytes_total": 0
Dec 06 09:37:49 compute-0 silly_newton[74721]:     },
Dec 06 09:37:49 compute-0 silly_newton[74721]:     "fsmap": {
Dec 06 09:37:49 compute-0 silly_newton[74721]:         "epoch": 1,
Dec 06 09:37:49 compute-0 silly_newton[74721]:         "btime": "2025-12-06T09:37:41.285728+0000",
Dec 06 09:37:49 compute-0 silly_newton[74721]:         "by_rank": [],
Dec 06 09:37:49 compute-0 silly_newton[74721]:         "up:standby": 0
Dec 06 09:37:49 compute-0 silly_newton[74721]:     },
Dec 06 09:37:49 compute-0 silly_newton[74721]:     "mgrmap": {
Dec 06 09:37:49 compute-0 silly_newton[74721]:         "available": false,
Dec 06 09:37:49 compute-0 silly_newton[74721]:         "num_standbys": 0,
Dec 06 09:37:49 compute-0 silly_newton[74721]:         "modules": [
Dec 06 09:37:49 compute-0 silly_newton[74721]:             "iostat",
Dec 06 09:37:49 compute-0 silly_newton[74721]:             "nfs",
Dec 06 09:37:49 compute-0 silly_newton[74721]:             "restful"
Dec 06 09:37:49 compute-0 silly_newton[74721]:         ],
Dec 06 09:37:49 compute-0 silly_newton[74721]:         "services": {}
Dec 06 09:37:49 compute-0 silly_newton[74721]:     },
Dec 06 09:37:49 compute-0 silly_newton[74721]:     "servicemap": {
Dec 06 09:37:49 compute-0 silly_newton[74721]:         "epoch": 1,
Dec 06 09:37:49 compute-0 silly_newton[74721]:         "modified": "2025-12-06T09:37:41.289249+0000",
Dec 06 09:37:49 compute-0 silly_newton[74721]:         "services": {}
Dec 06 09:37:49 compute-0 silly_newton[74721]:     },
Dec 06 09:37:49 compute-0 silly_newton[74721]:     "progress_events": {}
Dec 06 09:37:49 compute-0 silly_newton[74721]: }
Dec 06 09:37:49 compute-0 systemd[1]: libpod-805b29a0a6981d54527d7371f2661c621f398bc71d1537589bc01a99e7465c96.scope: Deactivated successfully.
Dec 06 09:37:49 compute-0 conmon[74721]: conmon 805b29a0a6981d54527d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-805b29a0a6981d54527d7371f2661c621f398bc71d1537589bc01a99e7465c96.scope/container/memory.events
Dec 06 09:37:49 compute-0 podman[74705]: 2025-12-06 09:37:49.711588474 +0000 UTC m=+0.411736341 container died 805b29a0a6981d54527d7371f2661c621f398bc71d1537589bc01a99e7465c96 (image=quay.io/ceph/ceph:v19, name=silly_newton, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 06 09:37:49 compute-0 ceph-mgr[74618]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Dec 06 09:37:49 compute-0 ceph-mgr[74618]: mgr[py] Loading python module 'osd_support'
Dec 06 09:37:49 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:37:49.747+0000 7ff0866c5140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Dec 06 09:37:49 compute-0 ceph-mon[74327]: from='client.? 192.168.122.100:0/75919033' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Dec 06 09:37:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-20d171790ebdd09ee65b482d884665de15a01afb1a0fb0be49dec20c728d093e-merged.mount: Deactivated successfully.
Dec 06 09:37:49 compute-0 podman[74705]: 2025-12-06 09:37:49.799447631 +0000 UTC m=+0.499595468 container remove 805b29a0a6981d54527d7371f2661c621f398bc71d1537589bc01a99e7465c96 (image=quay.io/ceph/ceph:v19, name=silly_newton, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec 06 09:37:49 compute-0 systemd[1]: libpod-conmon-805b29a0a6981d54527d7371f2661c621f398bc71d1537589bc01a99e7465c96.scope: Deactivated successfully.
Dec 06 09:37:49 compute-0 ceph-mgr[74618]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Dec 06 09:37:49 compute-0 ceph-mgr[74618]: mgr[py] Loading python module 'pg_autoscaler'
Dec 06 09:37:49 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:37:49.818+0000 7ff0866c5140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Dec 06 09:37:49 compute-0 ceph-mgr[74618]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Dec 06 09:37:49 compute-0 ceph-mgr[74618]: mgr[py] Loading python module 'progress'
Dec 06 09:37:49 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:37:49.900+0000 7ff0866c5140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Dec 06 09:37:49 compute-0 ceph-mgr[74618]: mgr[py] Module progress has missing NOTIFY_TYPES member
Dec 06 09:37:49 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:37:49.968+0000 7ff0866c5140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Dec 06 09:37:49 compute-0 ceph-mgr[74618]: mgr[py] Loading python module 'prometheus'
Dec 06 09:37:50 compute-0 ceph-mgr[74618]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Dec 06 09:37:50 compute-0 ceph-mgr[74618]: mgr[py] Loading python module 'rbd_support'
Dec 06 09:37:50 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:37:50.302+0000 7ff0866c5140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Dec 06 09:37:50 compute-0 ceph-mgr[74618]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Dec 06 09:37:50 compute-0 ceph-mgr[74618]: mgr[py] Loading python module 'restful'
Dec 06 09:37:50 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:37:50.391+0000 7ff0866c5140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Dec 06 09:37:50 compute-0 ceph-mgr[74618]: mgr[py] Loading python module 'rgw'
Dec 06 09:37:50 compute-0 ceph-mgr[74618]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Dec 06 09:37:50 compute-0 ceph-mgr[74618]: mgr[py] Loading python module 'rook'
Dec 06 09:37:50 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:37:50.826+0000 7ff0866c5140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Dec 06 09:37:51 compute-0 ceph-mgr[74618]: mgr[py] Module rook has missing NOTIFY_TYPES member
Dec 06 09:37:51 compute-0 ceph-mgr[74618]: mgr[py] Loading python module 'selftest'
Dec 06 09:37:51 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:37:51.376+0000 7ff0866c5140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Dec 06 09:37:51 compute-0 ceph-mgr[74618]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Dec 06 09:37:51 compute-0 ceph-mgr[74618]: mgr[py] Loading python module 'snap_schedule'
Dec 06 09:37:51 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:37:51.446+0000 7ff0866c5140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Dec 06 09:37:51 compute-0 ceph-mgr[74618]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Dec 06 09:37:51 compute-0 ceph-mgr[74618]: mgr[py] Loading python module 'stats'
Dec 06 09:37:51 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:37:51.527+0000 7ff0866c5140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Dec 06 09:37:51 compute-0 ceph-mgr[74618]: mgr[py] Loading python module 'status'
Dec 06 09:37:51 compute-0 ceph-mgr[74618]: mgr[py] Module status has missing NOTIFY_TYPES member
Dec 06 09:37:51 compute-0 ceph-mgr[74618]: mgr[py] Loading python module 'telegraf'
Dec 06 09:37:51 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:37:51.669+0000 7ff0866c5140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Dec 06 09:37:51 compute-0 ceph-mgr[74618]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Dec 06 09:37:51 compute-0 ceph-mgr[74618]: mgr[py] Loading python module 'telemetry'
Dec 06 09:37:51 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:37:51.732+0000 7ff0866c5140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Dec 06 09:37:51 compute-0 podman[74761]: 2025-12-06 09:37:51.877914056 +0000 UTC m=+0.049044090 container create e9c1ba1739af8279e74fc7462151d2b70afb45ded34101bbdb7e0f7707470fdd (image=quay.io/ceph/ceph:v19, name=adoring_burnell, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Dec 06 09:37:51 compute-0 ceph-mgr[74618]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Dec 06 09:37:51 compute-0 ceph-mgr[74618]: mgr[py] Loading python module 'test_orchestrator'
Dec 06 09:37:51 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:37:51.891+0000 7ff0866c5140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Dec 06 09:37:51 compute-0 systemd[1]: Started libpod-conmon-e9c1ba1739af8279e74fc7462151d2b70afb45ded34101bbdb7e0f7707470fdd.scope.
Dec 06 09:37:51 compute-0 systemd[1]: Started libcrun container.
Dec 06 09:37:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f02482d6fa160d70444dc56b743a8ab57b88d8f6ab9034a3eb194008b09c5e16/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 09:37:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f02482d6fa160d70444dc56b743a8ab57b88d8f6ab9034a3eb194008b09c5e16/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 06 09:37:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f02482d6fa160d70444dc56b743a8ab57b88d8f6ab9034a3eb194008b09c5e16/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 09:37:51 compute-0 podman[74761]: 2025-12-06 09:37:51.857740192 +0000 UTC m=+0.028870246 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 06 09:37:51 compute-0 podman[74761]: 2025-12-06 09:37:51.959030932 +0000 UTC m=+0.130160996 container init e9c1ba1739af8279e74fc7462151d2b70afb45ded34101bbdb7e0f7707470fdd (image=quay.io/ceph/ceph:v19, name=adoring_burnell, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec 06 09:37:51 compute-0 podman[74761]: 2025-12-06 09:37:51.971874953 +0000 UTC m=+0.143005027 container start e9c1ba1739af8279e74fc7462151d2b70afb45ded34101bbdb7e0f7707470fdd (image=quay.io/ceph/ceph:v19, name=adoring_burnell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec 06 09:37:51 compute-0 podman[74761]: 2025-12-06 09:37:51.976654346 +0000 UTC m=+0.147784410 container attach e9c1ba1739af8279e74fc7462151d2b70afb45ded34101bbdb7e0f7707470fdd (image=quay.io/ceph/ceph:v19, name=adoring_burnell, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec 06 09:37:52 compute-0 ceph-mgr[74618]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Dec 06 09:37:52 compute-0 ceph-mgr[74618]: mgr[py] Loading python module 'volumes'
Dec 06 09:37:52 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:37:52.105+0000 7ff0866c5140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Dec 06 09:37:52 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0)
Dec 06 09:37:52 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4156880945' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Dec 06 09:37:52 compute-0 adoring_burnell[74777]: 
Dec 06 09:37:52 compute-0 adoring_burnell[74777]: {
Dec 06 09:37:52 compute-0 adoring_burnell[74777]:     "fsid": "5ecd3f74-dade-5fc4-92ce-8950ae424258",
Dec 06 09:37:52 compute-0 adoring_burnell[74777]:     "health": {
Dec 06 09:37:52 compute-0 adoring_burnell[74777]:         "status": "HEALTH_OK",
Dec 06 09:37:52 compute-0 adoring_burnell[74777]:         "checks": {},
Dec 06 09:37:52 compute-0 adoring_burnell[74777]:         "mutes": []
Dec 06 09:37:52 compute-0 adoring_burnell[74777]:     },
Dec 06 09:37:52 compute-0 adoring_burnell[74777]:     "election_epoch": 5,
Dec 06 09:37:52 compute-0 adoring_burnell[74777]:     "quorum": [
Dec 06 09:37:52 compute-0 adoring_burnell[74777]:         0
Dec 06 09:37:52 compute-0 adoring_burnell[74777]:     ],
Dec 06 09:37:52 compute-0 adoring_burnell[74777]:     "quorum_names": [
Dec 06 09:37:52 compute-0 adoring_burnell[74777]:         "compute-0"
Dec 06 09:37:52 compute-0 adoring_burnell[74777]:     ],
Dec 06 09:37:52 compute-0 adoring_burnell[74777]:     "quorum_age": 8,
Dec 06 09:37:52 compute-0 adoring_burnell[74777]:     "monmap": {
Dec 06 09:37:52 compute-0 adoring_burnell[74777]:         "epoch": 1,
Dec 06 09:37:52 compute-0 adoring_burnell[74777]:         "min_mon_release_name": "squid",
Dec 06 09:37:52 compute-0 adoring_burnell[74777]:         "num_mons": 1
Dec 06 09:37:52 compute-0 adoring_burnell[74777]:     },
Dec 06 09:37:52 compute-0 adoring_burnell[74777]:     "osdmap": {
Dec 06 09:37:52 compute-0 adoring_burnell[74777]:         "epoch": 1,
Dec 06 09:37:52 compute-0 adoring_burnell[74777]:         "num_osds": 0,
Dec 06 09:37:52 compute-0 adoring_burnell[74777]:         "num_up_osds": 0,
Dec 06 09:37:52 compute-0 adoring_burnell[74777]:         "osd_up_since": 0,
Dec 06 09:37:52 compute-0 adoring_burnell[74777]:         "num_in_osds": 0,
Dec 06 09:37:52 compute-0 adoring_burnell[74777]:         "osd_in_since": 0,
Dec 06 09:37:52 compute-0 adoring_burnell[74777]:         "num_remapped_pgs": 0
Dec 06 09:37:52 compute-0 adoring_burnell[74777]:     },
Dec 06 09:37:52 compute-0 adoring_burnell[74777]:     "pgmap": {
Dec 06 09:37:52 compute-0 adoring_burnell[74777]:         "pgs_by_state": [],
Dec 06 09:37:52 compute-0 adoring_burnell[74777]:         "num_pgs": 0,
Dec 06 09:37:52 compute-0 adoring_burnell[74777]:         "num_pools": 0,
Dec 06 09:37:52 compute-0 adoring_burnell[74777]:         "num_objects": 0,
Dec 06 09:37:52 compute-0 adoring_burnell[74777]:         "data_bytes": 0,
Dec 06 09:37:52 compute-0 adoring_burnell[74777]:         "bytes_used": 0,
Dec 06 09:37:52 compute-0 adoring_burnell[74777]:         "bytes_avail": 0,
Dec 06 09:37:52 compute-0 adoring_burnell[74777]:         "bytes_total": 0
Dec 06 09:37:52 compute-0 adoring_burnell[74777]:     },
Dec 06 09:37:52 compute-0 adoring_burnell[74777]:     "fsmap": {
Dec 06 09:37:52 compute-0 adoring_burnell[74777]:         "epoch": 1,
Dec 06 09:37:52 compute-0 adoring_burnell[74777]:         "btime": "2025-12-06T09:37:41:285728+0000",
Dec 06 09:37:52 compute-0 adoring_burnell[74777]:         "by_rank": [],
Dec 06 09:37:52 compute-0 adoring_burnell[74777]:         "up:standby": 0
Dec 06 09:37:52 compute-0 adoring_burnell[74777]:     },
Dec 06 09:37:52 compute-0 adoring_burnell[74777]:     "mgrmap": {
Dec 06 09:37:52 compute-0 adoring_burnell[74777]:         "available": false,
Dec 06 09:37:52 compute-0 adoring_burnell[74777]:         "num_standbys": 0,
Dec 06 09:37:52 compute-0 adoring_burnell[74777]:         "modules": [
Dec 06 09:37:52 compute-0 adoring_burnell[74777]:             "iostat",
Dec 06 09:37:52 compute-0 adoring_burnell[74777]:             "nfs",
Dec 06 09:37:52 compute-0 adoring_burnell[74777]:             "restful"
Dec 06 09:37:52 compute-0 adoring_burnell[74777]:         ],
Dec 06 09:37:52 compute-0 adoring_burnell[74777]:         "services": {}
Dec 06 09:37:52 compute-0 adoring_burnell[74777]:     },
Dec 06 09:37:52 compute-0 adoring_burnell[74777]:     "servicemap": {
Dec 06 09:37:52 compute-0 adoring_burnell[74777]:         "epoch": 1,
Dec 06 09:37:52 compute-0 adoring_burnell[74777]:         "modified": "2025-12-06T09:37:41.289249+0000",
Dec 06 09:37:52 compute-0 adoring_burnell[74777]:         "services": {}
Dec 06 09:37:52 compute-0 adoring_burnell[74777]:     },
Dec 06 09:37:52 compute-0 adoring_burnell[74777]:     "progress_events": {}
Dec 06 09:37:52 compute-0 adoring_burnell[74777]: }
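
The JSON block above is the reply to the {"prefix": "status", "format": "json-pretty"} mon_command dispatched by client.admin a few lines earlier; cephadm runs the ceph CLI in a short-lived container (here named adoring_burnell) that prints the result, exits, and is removed. A minimal sketch of issuing the same query by hand, assuming the admin keyring and ceph.conf are in their default locations:

    $ cephadm shell -- ceph status --format json-pretty
    # equivalent, if the ceph CLI is installed directly on the host:
    $ ceph status --format json-pretty
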
Dec 06 09:37:52 compute-0 systemd[1]: libpod-e9c1ba1739af8279e74fc7462151d2b70afb45ded34101bbdb7e0f7707470fdd.scope: Deactivated successfully.
Dec 06 09:37:52 compute-0 podman[74761]: 2025-12-06 09:37:52.244408821 +0000 UTC m=+0.415538875 container died e9c1ba1739af8279e74fc7462151d2b70afb45ded34101bbdb7e0f7707470fdd (image=quay.io/ceph/ceph:v19, name=adoring_burnell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec 06 09:37:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-f02482d6fa160d70444dc56b743a8ab57b88d8f6ab9034a3eb194008b09c5e16-merged.mount: Deactivated successfully.
Dec 06 09:37:52 compute-0 ceph-mon[74327]: from='client.? 192.168.122.100:0/4156880945' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Dec 06 09:37:52 compute-0 podman[74761]: 2025-12-06 09:37:52.287216198 +0000 UTC m=+0.458346242 container remove e9c1ba1739af8279e74fc7462151d2b70afb45ded34101bbdb7e0f7707470fdd (image=quay.io/ceph/ceph:v19, name=adoring_burnell, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 09:37:52 compute-0 systemd[1]: libpod-conmon-e9c1ba1739af8279e74fc7462151d2b70afb45ded34101bbdb7e0f7707470fdd.scope: Deactivated successfully.
Dec 06 09:37:52 compute-0 ceph-mgr[74618]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Dec 06 09:37:52 compute-0 ceph-mgr[74618]: mgr[py] Loading python module 'zabbix'
Dec 06 09:37:52 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:37:52.362+0000 7ff0866c5140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Dec 06 09:37:52 compute-0 ceph-mgr[74618]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Dec 06 09:37:52 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:37:52.427+0000 7ff0866c5140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Dec 06 09:37:52 compute-0 ceph-mgr[74618]: ms_deliver_dispatch: unhandled message 0x559af7a969c0 mon_map magic: 0 from mon.0 v2:192.168.122.100:3300/0
Dec 06 09:37:52 compute-0 ceph-mon[74327]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.qhdjwa
Dec 06 09:37:52 compute-0 ceph-mon[74327]: log_channel(cluster) log [DBG] : mgrmap e2: compute-0.qhdjwa(active, starting, since 0.0100015s)
Dec 06 09:37:52 compute-0 ceph-mgr[74618]: mgr handle_mgr_map Activating!
Dec 06 09:37:52 compute-0 ceph-mgr[74618]: mgr handle_mgr_map I am now activating
Dec 06 09:37:52 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata"} v 0)
Dec 06 09:37:52 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/3491436797' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "mds metadata"}]: dispatch
Dec 06 09:37:52 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).mds e1 all = 1
Dec 06 09:37:52 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0)
Dec 06 09:37:52 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/3491436797' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd metadata"}]: dispatch
Dec 06 09:37:52 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata"} v 0)
Dec 06 09:37:52 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/3491436797' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "mon metadata"}]: dispatch
Dec 06 09:37:52 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Dec 06 09:37:52 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/3491436797' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Dec 06 09:37:52 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.qhdjwa", "id": "compute-0.qhdjwa"} v 0)
Dec 06 09:37:52 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/3491436797' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "mgr metadata", "who": "compute-0.qhdjwa", "id": "compute-0.qhdjwa"}]: dispatch
Dec 06 09:37:52 compute-0 ceph-mgr[74618]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 06 09:37:52 compute-0 ceph-mgr[74618]: mgr load Constructed class from module: balancer
Dec 06 09:37:52 compute-0 ceph-mgr[74618]: [balancer INFO root] Starting
Dec 06 09:37:52 compute-0 ceph-mgr[74618]: [balancer INFO root] Optimize plan auto_2025-12-06_09:37:52
Dec 06 09:37:52 compute-0 ceph-mgr[74618]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 06 09:37:52 compute-0 ceph-mgr[74618]: [balancer INFO root] do_upmap
Dec 06 09:37:52 compute-0 ceph-mgr[74618]: [balancer INFO root] No pools available
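
Here the balancer mgr module comes up in upmap mode with the default 5% max-misplaced ratio and, on an empty cluster with no pools, has nothing to optimize. For reference, a minimal sketch of inspecting and steering the balancer from the CLI (standard ceph commands shown as an illustration, not taken from this log):

    $ ceph balancer status
    $ ceph balancer mode upmap
    $ ceph balancer on
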
Dec 06 09:37:52 compute-0 ceph-mon[74327]: log_channel(cluster) log [INF] : Manager daemon compute-0.qhdjwa is now available
Dec 06 09:37:52 compute-0 ceph-mgr[74618]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 06 09:37:52 compute-0 ceph-mgr[74618]: mgr load Constructed class from module: crash
Dec 06 09:37:52 compute-0 ceph-mgr[74618]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 06 09:37:52 compute-0 ceph-mgr[74618]: mgr load Constructed class from module: devicehealth
Dec 06 09:37:52 compute-0 ceph-mgr[74618]: [devicehealth INFO root] Starting
Dec 06 09:37:52 compute-0 ceph-mgr[74618]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 06 09:37:52 compute-0 ceph-mgr[74618]: mgr load Constructed class from module: iostat
Dec 06 09:37:52 compute-0 ceph-mgr[74618]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 06 09:37:52 compute-0 ceph-mgr[74618]: mgr load Constructed class from module: nfs
Dec 06 09:37:52 compute-0 ceph-mgr[74618]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 06 09:37:52 compute-0 ceph-mgr[74618]: mgr load Constructed class from module: orchestrator
Dec 06 09:37:52 compute-0 ceph-mgr[74618]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 06 09:37:52 compute-0 ceph-mgr[74618]: mgr load Constructed class from module: pg_autoscaler
Dec 06 09:37:52 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] _maybe_adjust
Dec 06 09:37:52 compute-0 ceph-mgr[74618]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 06 09:37:52 compute-0 ceph-mgr[74618]: mgr load Constructed class from module: progress
Dec 06 09:37:52 compute-0 ceph-mgr[74618]: [progress INFO root] Loading...
Dec 06 09:37:52 compute-0 ceph-mgr[74618]: [progress INFO root] No stored events to load
Dec 06 09:37:52 compute-0 ceph-mgr[74618]: [progress INFO root] Loaded [] historic events
Dec 06 09:37:52 compute-0 ceph-mgr[74618]: [progress INFO root] Loaded OSDMap, ready.
Dec 06 09:37:52 compute-0 ceph-mgr[74618]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 06 09:37:52 compute-0 ceph-mgr[74618]: [rbd_support INFO root] recovery thread starting
Dec 06 09:37:52 compute-0 ceph-mgr[74618]: [rbd_support INFO root] starting setup
Dec 06 09:37:52 compute-0 ceph-mgr[74618]: mgr load Constructed class from module: rbd_support
Dec 06 09:37:52 compute-0 ceph-mgr[74618]: [restful DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 06 09:37:52 compute-0 ceph-mgr[74618]: mgr load Constructed class from module: restful
Dec 06 09:37:52 compute-0 ceph-mgr[74618]: [restful INFO root] server_addr: :: server_port: 8003
Dec 06 09:37:52 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.qhdjwa/mirror_snapshot_schedule"} v 0)
Dec 06 09:37:52 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/3491436797' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.qhdjwa/mirror_snapshot_schedule"}]: dispatch
Dec 06 09:37:52 compute-0 ceph-mgr[74618]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 06 09:37:52 compute-0 ceph-mgr[74618]: mgr load Constructed class from module: status
Dec 06 09:37:52 compute-0 ceph-mgr[74618]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 06 09:37:52 compute-0 ceph-mgr[74618]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 06 09:37:52 compute-0 ceph-mgr[74618]: [restful WARNING root] server not running: no certificate configured
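
The restful module refuses to start its HTTPS endpoint (announced above on port 8003) until a certificate is configured. A minimal sketch of satisfying it with the module's built-in self-signed certificate, assuming that is acceptable for the deployment:

    $ ceph restful create-self-signed-cert
    $ ceph restful create-key admin   # issue an API key for a user named 'admin'
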
Dec 06 09:37:52 compute-0 ceph-mgr[74618]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Dec 06 09:37:52 compute-0 ceph-mgr[74618]: mgr load Constructed class from module: telemetry
Dec 06 09:37:52 compute-0 ceph-mgr[74618]: [rbd_support INFO root] PerfHandler: starting
Dec 06 09:37:52 compute-0 ceph-mgr[74618]: [rbd_support INFO root] TaskHandler: starting
Dec 06 09:37:52 compute-0 ceph-mgr[74618]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 06 09:37:52 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.qhdjwa/trash_purge_schedule"} v 0)
Dec 06 09:37:52 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/3491436797' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.qhdjwa/trash_purge_schedule"}]: dispatch
Dec 06 09:37:52 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/report_id}] v 0)
Dec 06 09:37:52 compute-0 ceph-mgr[74618]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 06 09:37:52 compute-0 ceph-mgr[74618]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Dec 06 09:37:52 compute-0 ceph-mgr[74618]: [rbd_support INFO root] setup complete
Dec 06 09:37:52 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/3491436797' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:37:52 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/salt}] v 0)
Dec 06 09:37:52 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/3491436797' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:37:52 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/collection}] v 0)
Dec 06 09:37:52 compute-0 ceph-mgr[74618]: mgr load Constructed class from module: volumes
Dec 06 09:37:52 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/3491436797' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:37:53 compute-0 ceph-mon[74327]: Activating manager daemon compute-0.qhdjwa
Dec 06 09:37:53 compute-0 ceph-mon[74327]: mgrmap e2: compute-0.qhdjwa(active, starting, since 0.0100015s)
Dec 06 09:37:53 compute-0 ceph-mon[74327]: from='mgr.14102 192.168.122.100:0/3491436797' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "mds metadata"}]: dispatch
Dec 06 09:37:53 compute-0 ceph-mon[74327]: from='mgr.14102 192.168.122.100:0/3491436797' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd metadata"}]: dispatch
Dec 06 09:37:53 compute-0 ceph-mon[74327]: from='mgr.14102 192.168.122.100:0/3491436797' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "mon metadata"}]: dispatch
Dec 06 09:37:53 compute-0 ceph-mon[74327]: from='mgr.14102 192.168.122.100:0/3491436797' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Dec 06 09:37:53 compute-0 ceph-mon[74327]: from='mgr.14102 192.168.122.100:0/3491436797' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "mgr metadata", "who": "compute-0.qhdjwa", "id": "compute-0.qhdjwa"}]: dispatch
Dec 06 09:37:53 compute-0 ceph-mon[74327]: Manager daemon compute-0.qhdjwa is now available
Dec 06 09:37:53 compute-0 ceph-mon[74327]: from='mgr.14102 192.168.122.100:0/3491436797' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.qhdjwa/mirror_snapshot_schedule"}]: dispatch
Dec 06 09:37:53 compute-0 ceph-mon[74327]: from='mgr.14102 192.168.122.100:0/3491436797' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.qhdjwa/trash_purge_schedule"}]: dispatch
Dec 06 09:37:53 compute-0 ceph-mon[74327]: from='mgr.14102 192.168.122.100:0/3491436797' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:37:53 compute-0 ceph-mon[74327]: from='mgr.14102 192.168.122.100:0/3491436797' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:37:53 compute-0 ceph-mon[74327]: from='mgr.14102 192.168.122.100:0/3491436797' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:37:53 compute-0 ceph-mon[74327]: log_channel(cluster) log [DBG] : mgrmap e3: compute-0.qhdjwa(active, since 1.03065s)
Dec 06 09:37:54 compute-0 podman[74896]: 2025-12-06 09:37:54.39464137 +0000 UTC m=+0.071691502 container create 966df63a2a28b8bbbdd31d464c46d9356f7be3e7be658151d0a87940c7ee4bd3 (image=quay.io/ceph/ceph:v19, name=stupefied_taussig, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 09:37:54 compute-0 systemd[1]: Started libpod-conmon-966df63a2a28b8bbbdd31d464c46d9356f7be3e7be658151d0a87940c7ee4bd3.scope.
Dec 06 09:37:54 compute-0 ceph-mgr[74618]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Dec 06 09:37:54 compute-0 ceph-mon[74327]: mgrmap e3: compute-0.qhdjwa(active, since 1.03065s)
Dec 06 09:37:54 compute-0 podman[74896]: 2025-12-06 09:37:54.367044631 +0000 UTC m=+0.044094883 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 06 09:37:54 compute-0 systemd[1]: Started libcrun container.
Dec 06 09:37:54 compute-0 ceph-mon[74327]: log_channel(cluster) log [DBG] : mgrmap e4: compute-0.qhdjwa(active, since 2s)
Dec 06 09:37:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad605031841da3a341bedcf05ce9a210bc986ba9dbce21c83b654fc2a48e2641/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 09:37:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad605031841da3a341bedcf05ce9a210bc986ba9dbce21c83b654fc2a48e2641/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 06 09:37:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad605031841da3a341bedcf05ce9a210bc986ba9dbce21c83b654fc2a48e2641/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 09:37:54 compute-0 podman[74896]: 2025-12-06 09:37:54.48876022 +0000 UTC m=+0.165810352 container init 966df63a2a28b8bbbdd31d464c46d9356f7be3e7be658151d0a87940c7ee4bd3 (image=quay.io/ceph/ceph:v19, name=stupefied_taussig, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 09:37:54 compute-0 podman[74896]: 2025-12-06 09:37:54.495167815 +0000 UTC m=+0.172217947 container start 966df63a2a28b8bbbdd31d464c46d9356f7be3e7be658151d0a87940c7ee4bd3 (image=quay.io/ceph/ceph:v19, name=stupefied_taussig, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 06 09:37:54 compute-0 podman[74896]: 2025-12-06 09:37:54.49899572 +0000 UTC m=+0.176045852 container attach 966df63a2a28b8bbbdd31d464c46d9356f7be3e7be658151d0a87940c7ee4bd3 (image=quay.io/ceph/ceph:v19, name=stupefied_taussig, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 09:37:54 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0)
Dec 06 09:37:54 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1286987495' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Dec 06 09:37:54 compute-0 stupefied_taussig[74912]: 
Dec 06 09:37:54 compute-0 stupefied_taussig[74912]: {
Dec 06 09:37:54 compute-0 stupefied_taussig[74912]:     "fsid": "5ecd3f74-dade-5fc4-92ce-8950ae424258",
Dec 06 09:37:54 compute-0 stupefied_taussig[74912]:     "health": {
Dec 06 09:37:54 compute-0 stupefied_taussig[74912]:         "status": "HEALTH_OK",
Dec 06 09:37:54 compute-0 stupefied_taussig[74912]:         "checks": {},
Dec 06 09:37:54 compute-0 stupefied_taussig[74912]:         "mutes": []
Dec 06 09:37:54 compute-0 stupefied_taussig[74912]:     },
Dec 06 09:37:54 compute-0 stupefied_taussig[74912]:     "election_epoch": 5,
Dec 06 09:37:54 compute-0 stupefied_taussig[74912]:     "quorum": [
Dec 06 09:37:54 compute-0 stupefied_taussig[74912]:         0
Dec 06 09:37:54 compute-0 stupefied_taussig[74912]:     ],
Dec 06 09:37:54 compute-0 stupefied_taussig[74912]:     "quorum_names": [
Dec 06 09:37:54 compute-0 stupefied_taussig[74912]:         "compute-0"
Dec 06 09:37:54 compute-0 stupefied_taussig[74912]:     ],
Dec 06 09:37:54 compute-0 stupefied_taussig[74912]:     "quorum_age": 11,
Dec 06 09:37:54 compute-0 stupefied_taussig[74912]:     "monmap": {
Dec 06 09:37:54 compute-0 stupefied_taussig[74912]:         "epoch": 1,
Dec 06 09:37:54 compute-0 stupefied_taussig[74912]:         "min_mon_release_name": "squid",
Dec 06 09:37:54 compute-0 stupefied_taussig[74912]:         "num_mons": 1
Dec 06 09:37:54 compute-0 stupefied_taussig[74912]:     },
Dec 06 09:37:54 compute-0 stupefied_taussig[74912]:     "osdmap": {
Dec 06 09:37:54 compute-0 stupefied_taussig[74912]:         "epoch": 1,
Dec 06 09:37:54 compute-0 stupefied_taussig[74912]:         "num_osds": 0,
Dec 06 09:37:54 compute-0 stupefied_taussig[74912]:         "num_up_osds": 0,
Dec 06 09:37:54 compute-0 stupefied_taussig[74912]:         "osd_up_since": 0,
Dec 06 09:37:54 compute-0 stupefied_taussig[74912]:         "num_in_osds": 0,
Dec 06 09:37:54 compute-0 stupefied_taussig[74912]:         "osd_in_since": 0,
Dec 06 09:37:54 compute-0 stupefied_taussig[74912]:         "num_remapped_pgs": 0
Dec 06 09:37:54 compute-0 stupefied_taussig[74912]:     },
Dec 06 09:37:54 compute-0 stupefied_taussig[74912]:     "pgmap": {
Dec 06 09:37:54 compute-0 stupefied_taussig[74912]:         "pgs_by_state": [],
Dec 06 09:37:54 compute-0 stupefied_taussig[74912]:         "num_pgs": 0,
Dec 06 09:37:54 compute-0 stupefied_taussig[74912]:         "num_pools": 0,
Dec 06 09:37:54 compute-0 stupefied_taussig[74912]:         "num_objects": 0,
Dec 06 09:37:54 compute-0 stupefied_taussig[74912]:         "data_bytes": 0,
Dec 06 09:37:54 compute-0 stupefied_taussig[74912]:         "bytes_used": 0,
Dec 06 09:37:54 compute-0 stupefied_taussig[74912]:         "bytes_avail": 0,
Dec 06 09:37:54 compute-0 stupefied_taussig[74912]:         "bytes_total": 0
Dec 06 09:37:54 compute-0 stupefied_taussig[74912]:     },
Dec 06 09:37:54 compute-0 stupefied_taussig[74912]:     "fsmap": {
Dec 06 09:37:54 compute-0 stupefied_taussig[74912]:         "epoch": 1,
Dec 06 09:37:54 compute-0 stupefied_taussig[74912]:         "btime": "2025-12-06T09:37:41:285728+0000",
Dec 06 09:37:54 compute-0 stupefied_taussig[74912]:         "by_rank": [],
Dec 06 09:37:54 compute-0 stupefied_taussig[74912]:         "up:standby": 0
Dec 06 09:37:54 compute-0 stupefied_taussig[74912]:     },
Dec 06 09:37:54 compute-0 stupefied_taussig[74912]:     "mgrmap": {
Dec 06 09:37:54 compute-0 stupefied_taussig[74912]:         "available": true,
Dec 06 09:37:54 compute-0 stupefied_taussig[74912]:         "num_standbys": 0,
Dec 06 09:37:54 compute-0 stupefied_taussig[74912]:         "modules": [
Dec 06 09:37:54 compute-0 stupefied_taussig[74912]:             "iostat",
Dec 06 09:37:54 compute-0 stupefied_taussig[74912]:             "nfs",
Dec 06 09:37:54 compute-0 stupefied_taussig[74912]:             "restful"
Dec 06 09:37:54 compute-0 stupefied_taussig[74912]:         ],
Dec 06 09:37:54 compute-0 stupefied_taussig[74912]:         "services": {}
Dec 06 09:37:54 compute-0 stupefied_taussig[74912]:     },
Dec 06 09:37:54 compute-0 stupefied_taussig[74912]:     "servicemap": {
Dec 06 09:37:54 compute-0 stupefied_taussig[74912]:         "epoch": 1,
Dec 06 09:37:54 compute-0 stupefied_taussig[74912]:         "modified": "2025-12-06T09:37:41.289249+0000",
Dec 06 09:37:54 compute-0 stupefied_taussig[74912]:         "services": {}
Dec 06 09:37:54 compute-0 stupefied_taussig[74912]:     },
Dec 06 09:37:54 compute-0 stupefied_taussig[74912]:     "progress_events": {}
Dec 06 09:37:54 compute-0 stupefied_taussig[74912]: }
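
The only change from the status snapshot taken two seconds earlier is "available": true under mgrmap; the bootstrap appears to poll status until the active mgr reports in. A rough sketch of the same wait, assuming jq is available on the host (jq does not appear anywhere in this log and is purely illustrative):

    $ until ceph status --format json | jq -e '.mgrmap.available' >/dev/null; do
    >     sleep 2
    > done
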
Dec 06 09:37:54 compute-0 systemd[1]: libpod-966df63a2a28b8bbbdd31d464c46d9356f7be3e7be658151d0a87940c7ee4bd3.scope: Deactivated successfully.
Dec 06 09:37:54 compute-0 podman[74896]: 2025-12-06 09:37:54.933321041 +0000 UTC m=+0.610371193 container died 966df63a2a28b8bbbdd31d464c46d9356f7be3e7be658151d0a87940c7ee4bd3 (image=quay.io/ceph/ceph:v19, name=stupefied_taussig, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec 06 09:37:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-ad605031841da3a341bedcf05ce9a210bc986ba9dbce21c83b654fc2a48e2641-merged.mount: Deactivated successfully.
Dec 06 09:37:54 compute-0 podman[74896]: 2025-12-06 09:37:54.981271778 +0000 UTC m=+0.658321900 container remove 966df63a2a28b8bbbdd31d464c46d9356f7be3e7be658151d0a87940c7ee4bd3 (image=quay.io/ceph/ceph:v19, name=stupefied_taussig, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 09:37:54 compute-0 systemd[1]: libpod-conmon-966df63a2a28b8bbbdd31d464c46d9356f7be3e7be658151d0a87940c7ee4bd3.scope: Deactivated successfully.
Dec 06 09:37:55 compute-0 podman[74951]: 2025-12-06 09:37:55.067703628 +0000 UTC m=+0.056810192 container create e41c36da110fe40f8df9c77bd1279d2942eb97cd416613dca342f703a0252f32 (image=quay.io/ceph/ceph:v19, name=pensive_newton, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 06 09:37:55 compute-0 systemd[1]: Started libpod-conmon-e41c36da110fe40f8df9c77bd1279d2942eb97cd416613dca342f703a0252f32.scope.
Dec 06 09:37:55 compute-0 podman[74951]: 2025-12-06 09:37:55.039999796 +0000 UTC m=+0.029106400 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 06 09:37:55 compute-0 systemd[1]: Started libcrun container.
Dec 06 09:37:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/257c4c109da7f3b7b13b8f80524ff845a7a694b789ae5ffd583e994efa515fff/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 09:37:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/257c4c109da7f3b7b13b8f80524ff845a7a694b789ae5ffd583e994efa515fff/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 06 09:37:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/257c4c109da7f3b7b13b8f80524ff845a7a694b789ae5ffd583e994efa515fff/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 09:37:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/257c4c109da7f3b7b13b8f80524ff845a7a694b789ae5ffd583e994efa515fff/merged/var/lib/ceph/user.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 09:37:55 compute-0 podman[74951]: 2025-12-06 09:37:55.164312076 +0000 UTC m=+0.153418650 container init e41c36da110fe40f8df9c77bd1279d2942eb97cd416613dca342f703a0252f32 (image=quay.io/ceph/ceph:v19, name=pensive_newton, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.build-date=20250325)
Dec 06 09:37:55 compute-0 podman[74951]: 2025-12-06 09:37:55.17164588 +0000 UTC m=+0.160752414 container start e41c36da110fe40f8df9c77bd1279d2942eb97cd416613dca342f703a0252f32 (image=quay.io/ceph/ceph:v19, name=pensive_newton, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 09:37:55 compute-0 podman[74951]: 2025-12-06 09:37:55.176572996 +0000 UTC m=+0.165679530 container attach e41c36da110fe40f8df9c77bd1279d2942eb97cd416613dca342f703a0252f32 (image=quay.io/ceph/ceph:v19, name=pensive_newton, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325)
Dec 06 09:37:55 compute-0 ceph-mon[74327]: mgrmap e4: compute-0.qhdjwa(active, since 2s)
Dec 06 09:37:55 compute-0 ceph-mon[74327]: from='client.? 192.168.122.100:0/1286987495' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Dec 06 09:37:55 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0)
Dec 06 09:37:55 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/684219841' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Dec 06 09:37:55 compute-0 pensive_newton[74968]: 
Dec 06 09:37:55 compute-0 pensive_newton[74968]: [global]
Dec 06 09:37:55 compute-0 pensive_newton[74968]:         fsid = 5ecd3f74-dade-5fc4-92ce-8950ae424258
Dec 06 09:37:55 compute-0 pensive_newton[74968]:         mon_host = [v2:192.168.122.100:3300,v1:192.168.122.100:6789]
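
The "config assimilate-conf" command dispatched just above moves whatever options it can from an input ceph.conf into the monitors' central config database and prints the minimal residue that must stay in the local file; fsid and mon_host cannot be centralized (they are needed before the client can reach a monitor), which is why they are echoed back here. A minimal sketch of the same call, with hypothetical input/output paths:

    $ ceph config assimilate-conf -i /etc/ceph/ceph.conf -o /etc/ceph/ceph.conf.minimal
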
Dec 06 09:37:55 compute-0 systemd[1]: libpod-e41c36da110fe40f8df9c77bd1279d2942eb97cd416613dca342f703a0252f32.scope: Deactivated successfully.
Dec 06 09:37:55 compute-0 podman[74951]: 2025-12-06 09:37:55.574165448 +0000 UTC m=+0.563272022 container died e41c36da110fe40f8df9c77bd1279d2942eb97cd416613dca342f703a0252f32 (image=quay.io/ceph/ceph:v19, name=pensive_newton, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec 06 09:37:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-257c4c109da7f3b7b13b8f80524ff845a7a694b789ae5ffd583e994efa515fff-merged.mount: Deactivated successfully.
Dec 06 09:37:55 compute-0 podman[74951]: 2025-12-06 09:37:55.621705548 +0000 UTC m=+0.610812102 container remove e41c36da110fe40f8df9c77bd1279d2942eb97cd416613dca342f703a0252f32 (image=quay.io/ceph/ceph:v19, name=pensive_newton, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 09:37:55 compute-0 systemd[1]: libpod-conmon-e41c36da110fe40f8df9c77bd1279d2942eb97cd416613dca342f703a0252f32.scope: Deactivated successfully.
Dec 06 09:37:55 compute-0 podman[75006]: 2025-12-06 09:37:55.681166591 +0000 UTC m=+0.038276720 container create 6312a3a8afbf5444c85e236c47cab6c4cb5a46ee1a81a56ca788d336bbfd656c (image=quay.io/ceph/ceph:v19, name=interesting_montalcini, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 09:37:55 compute-0 systemd[1]: Started libpod-conmon-6312a3a8afbf5444c85e236c47cab6c4cb5a46ee1a81a56ca788d336bbfd656c.scope.
Dec 06 09:37:55 compute-0 systemd[1]: Started libcrun container.
Dec 06 09:37:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ff7a32fddf080b5d996485a844d84732b3324a2b20818c13f4c6b74d94b4c0a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 09:37:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ff7a32fddf080b5d996485a844d84732b3324a2b20818c13f4c6b74d94b4c0a/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 06 09:37:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ff7a32fddf080b5d996485a844d84732b3324a2b20818c13f4c6b74d94b4c0a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 09:37:55 compute-0 podman[75006]: 2025-12-06 09:37:55.662167289 +0000 UTC m=+0.019277458 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 06 09:37:55 compute-0 podman[75006]: 2025-12-06 09:37:55.777668597 +0000 UTC m=+0.134778766 container init 6312a3a8afbf5444c85e236c47cab6c4cb5a46ee1a81a56ca788d336bbfd656c (image=quay.io/ceph/ceph:v19, name=interesting_montalcini, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True)
Dec 06 09:37:55 compute-0 podman[75006]: 2025-12-06 09:37:55.786585561 +0000 UTC m=+0.143695720 container start 6312a3a8afbf5444c85e236c47cab6c4cb5a46ee1a81a56ca788d336bbfd656c (image=quay.io/ceph/ceph:v19, name=interesting_montalcini, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 09:37:55 compute-0 podman[75006]: 2025-12-06 09:37:55.79057249 +0000 UTC m=+0.147682659 container attach 6312a3a8afbf5444c85e236c47cab6c4cb5a46ee1a81a56ca788d336bbfd656c (image=quay.io/ceph/ceph:v19, name=interesting_montalcini, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Dec 06 09:37:56 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module enable", "module": "cephadm"} v 0)
Dec 06 09:37:56 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1328164209' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "cephadm"}]: dispatch
Dec 06 09:37:56 compute-0 ceph-mgr[74618]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Dec 06 09:37:56 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1328164209' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished
Dec 06 09:37:56 compute-0 ceph-mgr[74618]: mgr handle_mgr_map respawning because set of enabled modules changed!
Dec 06 09:37:56 compute-0 ceph-mgr[74618]: mgr respawn  e: '/usr/bin/ceph-mgr'
Dec 06 09:37:56 compute-0 ceph-mgr[74618]: mgr respawn  0: '/usr/bin/ceph-mgr'
Dec 06 09:37:56 compute-0 ceph-mgr[74618]: mgr respawn  1: '-n'
Dec 06 09:37:56 compute-0 ceph-mgr[74618]: mgr respawn  2: 'mgr.compute-0.qhdjwa'
Dec 06 09:37:56 compute-0 ceph-mgr[74618]: mgr respawn  3: '-f'
Dec 06 09:37:56 compute-0 ceph-mgr[74618]: mgr respawn  4: '--setuser'
Dec 06 09:37:56 compute-0 ceph-mgr[74618]: mgr respawn  5: 'ceph'
Dec 06 09:37:56 compute-0 ceph-mgr[74618]: mgr respawn  6: '--setgroup'
Dec 06 09:37:56 compute-0 ceph-mgr[74618]: mgr respawn  7: 'ceph'
Dec 06 09:37:56 compute-0 ceph-mgr[74618]: mgr respawn  8: '--default-log-to-file=false'
Dec 06 09:37:56 compute-0 ceph-mgr[74618]: mgr respawn  9: '--default-log-to-journald=true'
Dec 06 09:37:56 compute-0 ceph-mgr[74618]: mgr respawn  10: '--default-log-to-stderr=false'
Dec 06 09:37:56 compute-0 ceph-mgr[74618]: mgr respawn respawning with exe /usr/bin/ceph-mgr
Dec 06 09:37:56 compute-0 ceph-mgr[74618]: mgr respawn  exe_path /proc/self/exe
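
Enabling cephadm changed the set of enabled modules in the mgrmap, and ceph-mgr does not hot-load a new module set: it re-execs itself through /proc/self/exe with the original argv dumped above, which is why a fresh startup banner appears moments later from the same container unit. A minimal sketch of the triggering command and a follow-up check:

    $ ceph mgr module enable cephadm
    $ ceph mgr module ls   # 'cephadm' should now be listed among the enabled modules
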
Dec 06 09:37:56 compute-0 ceph-mon[74327]: log_channel(cluster) log [DBG] : mgrmap e5: compute-0.qhdjwa(active, since 4s)
Dec 06 09:37:56 compute-0 ceph-mon[74327]: from='client.? 192.168.122.100:0/684219841' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Dec 06 09:37:56 compute-0 ceph-mon[74327]: from='client.? 192.168.122.100:0/1328164209' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "cephadm"}]: dispatch
Dec 06 09:37:56 compute-0 systemd[1]: libpod-6312a3a8afbf5444c85e236c47cab6c4cb5a46ee1a81a56ca788d336bbfd656c.scope: Deactivated successfully.
Dec 06 09:37:56 compute-0 podman[75006]: 2025-12-06 09:37:56.513401001 +0000 UTC m=+0.870511160 container died 6312a3a8afbf5444c85e236c47cab6c4cb5a46ee1a81a56ca788d336bbfd656c (image=quay.io/ceph/ceph:v19, name=interesting_montalcini, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 09:37:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-6ff7a32fddf080b5d996485a844d84732b3324a2b20818c13f4c6b74d94b4c0a-merged.mount: Deactivated successfully.
Dec 06 09:37:56 compute-0 podman[75006]: 2025-12-06 09:37:56.564209804 +0000 UTC m=+0.921319953 container remove 6312a3a8afbf5444c85e236c47cab6c4cb5a46ee1a81a56ca788d336bbfd656c (image=quay.io/ceph/ceph:v19, name=interesting_montalcini, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Dec 06 09:37:56 compute-0 systemd[1]: libpod-conmon-6312a3a8afbf5444c85e236c47cab6c4cb5a46ee1a81a56ca788d336bbfd656c.scope: Deactivated successfully.
Dec 06 09:37:56 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ignoring --setuser ceph since I am not root
Dec 06 09:37:56 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ignoring --setgroup ceph since I am not root
Dec 06 09:37:56 compute-0 ceph-mgr[74618]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mgr, pid 2
Dec 06 09:37:56 compute-0 ceph-mgr[74618]: pidfile_write: ignore empty --pid-file
Dec 06 09:37:56 compute-0 podman[75059]: 2025-12-06 09:37:56.632302885 +0000 UTC m=+0.046426238 container create ddcb1d8f1a907156ee919b5987a58dcd89d7eaec2786c9f886876557aeacddee (image=quay.io/ceph/ceph:v19, name=dazzling_ride, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 09:37:56 compute-0 ceph-mgr[74618]: mgr[py] Loading python module 'alerts'
Dec 06 09:37:56 compute-0 systemd[1]: Started libpod-conmon-ddcb1d8f1a907156ee919b5987a58dcd89d7eaec2786c9f886876557aeacddee.scope.
Dec 06 09:37:56 compute-0 systemd[1]: Started libcrun container.
Dec 06 09:37:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0662175493b46420dd6a6f7bfd40e7b8835945637971112eaef1828dbbdd23c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 09:37:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0662175493b46420dd6a6f7bfd40e7b8835945637971112eaef1828dbbdd23c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 09:37:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0662175493b46420dd6a6f7bfd40e7b8835945637971112eaef1828dbbdd23c/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 06 09:37:56 compute-0 podman[75059]: 2025-12-06 09:37:56.611421117 +0000 UTC m=+0.025544480 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 06 09:37:56 compute-0 podman[75059]: 2025-12-06 09:37:56.707095888 +0000 UTC m=+0.121219311 container init ddcb1d8f1a907156ee919b5987a58dcd89d7eaec2786c9f886876557aeacddee (image=quay.io/ceph/ceph:v19, name=dazzling_ride, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 09:37:56 compute-0 podman[75059]: 2025-12-06 09:37:56.716153235 +0000 UTC m=+0.130276588 container start ddcb1d8f1a907156ee919b5987a58dcd89d7eaec2786c9f886876557aeacddee (image=quay.io/ceph/ceph:v19, name=dazzling_ride, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 09:37:56 compute-0 podman[75059]: 2025-12-06 09:37:56.721455058 +0000 UTC m=+0.135578471 container attach ddcb1d8f1a907156ee919b5987a58dcd89d7eaec2786c9f886876557aeacddee (image=quay.io/ceph/ceph:v19, name=dazzling_ride, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Dec 06 09:37:56 compute-0 ceph-mgr[74618]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Dec 06 09:37:56 compute-0 ceph-mgr[74618]: mgr[py] Loading python module 'balancer'
Dec 06 09:37:56 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:37:56.741+0000 7f8db8775140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Dec 06 09:37:56 compute-0 ceph-mgr[74618]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Dec 06 09:37:56 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:37:56.814+0000 7f8db8775140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Dec 06 09:37:56 compute-0 ceph-mgr[74618]: mgr[py] Loading python module 'cephadm'
Dec 06 09:37:57 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat"} v 0)
Dec 06 09:37:57 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/148877063' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Dec 06 09:37:57 compute-0 dazzling_ride[75095]: {
Dec 06 09:37:57 compute-0 dazzling_ride[75095]:     "epoch": 5,
Dec 06 09:37:57 compute-0 dazzling_ride[75095]:     "available": true,
Dec 06 09:37:57 compute-0 dazzling_ride[75095]:     "active_name": "compute-0.qhdjwa",
Dec 06 09:37:57 compute-0 dazzling_ride[75095]:     "num_standby": 0
Dec 06 09:37:57 compute-0 dazzling_ride[75095]: }
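
"mgr stat" is the compact counterpart of the mgrmap section in ceph status: the epoch of 5 matches the "mgrmap e5" entries above, and it confirms the active mgr's name with no standbys. A minimal sketch:

    $ ceph mgr stat --format json-pretty
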
Dec 06 09:37:57 compute-0 systemd[1]: libpod-ddcb1d8f1a907156ee919b5987a58dcd89d7eaec2786c9f886876557aeacddee.scope: Deactivated successfully.
Dec 06 09:37:57 compute-0 podman[75059]: 2025-12-06 09:37:57.165165683 +0000 UTC m=+0.579289036 container died ddcb1d8f1a907156ee919b5987a58dcd89d7eaec2786c9f886876557aeacddee (image=quay.io/ceph/ceph:v19, name=dazzling_ride, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 09:37:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-a0662175493b46420dd6a6f7bfd40e7b8835945637971112eaef1828dbbdd23c-merged.mount: Deactivated successfully.
Dec 06 09:37:57 compute-0 podman[75059]: 2025-12-06 09:37:57.214768223 +0000 UTC m=+0.628891576 container remove ddcb1d8f1a907156ee919b5987a58dcd89d7eaec2786c9f886876557aeacddee (image=quay.io/ceph/ceph:v19, name=dazzling_ride, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 09:37:57 compute-0 systemd[1]: libpod-conmon-ddcb1d8f1a907156ee919b5987a58dcd89d7eaec2786c9f886876557aeacddee.scope: Deactivated successfully.
Dec 06 09:37:57 compute-0 podman[75145]: 2025-12-06 09:37:57.277955568 +0000 UTC m=+0.041833599 container create f2a3408573970b8cd326fdde75bfb8317ce0a612cfa07943ffd2c4133afa33db (image=quay.io/ceph/ceph:v19, name=silly_margulis, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 06 09:37:57 compute-0 systemd[1]: Started libpod-conmon-f2a3408573970b8cd326fdde75bfb8317ce0a612cfa07943ffd2c4133afa33db.scope.
Dec 06 09:37:57 compute-0 systemd[1]: Started libcrun container.
Dec 06 09:37:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e44773ff3210dfaaae43e31831c3f7db4ad37ebabcdc2f8f4b262a4a191bfd35/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 09:37:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e44773ff3210dfaaae43e31831c3f7db4ad37ebabcdc2f8f4b262a4a191bfd35/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 06 09:37:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e44773ff3210dfaaae43e31831c3f7db4ad37ebabcdc2f8f4b262a4a191bfd35/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
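The repeated kernel notice above is informational, not an error: XFS is reporting that the filesystem backing these bind mounts was created without the bigtime feature, so its inode timestamps top out in 2038 (0x7fffffff seconds). The same message recurs below for every container the bootstrap spawns. A minimal sketch of how one might check for the feature; it assumes xfsprogs is installed and guesses "/var" as the relevant mount, neither of which comes from this log:

    #!/usr/bin/env python3
    # Check whether an XFS mount was formatted with the bigtime feature.
    # Assumes xfsprogs is installed; "/var" is a guess at the mount.
    import subprocess

    def has_bigtime(mountpoint: str) -> bool:
        # xfs_info prints the superblock geometry; recent xfsprogs
        # include a "bigtime=0|1" flag in the meta-data section.
        out = subprocess.run(
            ["xfs_info", mountpoint],
            capture_output=True, text=True, check=True,
        ).stdout
        return "bigtime=1" in out

    if __name__ == "__main__":
        print("bigtime enabled:", has_bigtime("/var"))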
Dec 06 09:37:57 compute-0 podman[75145]: 2025-12-06 09:37:57.256925117 +0000 UTC m=+0.020803198 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 06 09:37:57 compute-0 podman[75145]: 2025-12-06 09:37:57.369817464 +0000 UTC m=+0.133695535 container init f2a3408573970b8cd326fdde75bfb8317ce0a612cfa07943ffd2c4133afa33db (image=quay.io/ceph/ceph:v19, name=silly_margulis, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 09:37:57 compute-0 podman[75145]: 2025-12-06 09:37:57.379466003 +0000 UTC m=+0.143344064 container start f2a3408573970b8cd326fdde75bfb8317ce0a612cfa07943ffd2c4133afa33db (image=quay.io/ceph/ceph:v19, name=silly_margulis, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 09:37:57 compute-0 podman[75145]: 2025-12-06 09:37:57.384204436 +0000 UTC m=+0.148082547 container attach f2a3408573970b8cd326fdde75bfb8317ce0a612cfa07943ffd2c4133afa33db (image=quay.io/ceph/ceph:v19, name=silly_margulis, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.license=GPLv2)
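The create, init, start, and attach events from podman[75145] above, followed by the died and remove pair at 09:38:03, are one complete lifecycle of a short-lived helper container: cephadm shells out to podman, runs a single ceph command inside quay.io/ceph/ceph:v19, captures its output, and discards the container. This pattern repeats throughout the bootstrap (dazzling_ride, silly_margulis, sweet_hermann, nifty_euler below). A rough hand-rolled equivalent, as a sketch; the ceph subcommand here is illustrative, not what this particular container ran:

    #!/usr/bin/env python3
    # Reproduce cephadm's one-shot container pattern with podman:
    # run a single command in the ceph image, auto-remove afterwards.
    import subprocess

    result = subprocess.run(
        ["podman", "run", "--rm", "quay.io/ceph/ceph:v19",
         "ceph", "--version"],   # any one-shot command; illustrative
        capture_output=True, text=True,
    )
    print(result.stdout.strip())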
Dec 06 09:37:57 compute-0 ceph-mon[74327]: from='client.? 192.168.122.100:0/1328164209' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished
Dec 06 09:37:57 compute-0 ceph-mon[74327]: mgrmap e5: compute-0.qhdjwa(active, since 4s)
Dec 06 09:37:57 compute-0 ceph-mon[74327]: from='client.? 192.168.122.100:0/148877063' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Dec 06 09:37:57 compute-0 ceph-mgr[74618]: mgr[py] Loading python module 'crash'
Dec 06 09:37:57 compute-0 ceph-mgr[74618]: mgr[py] Module crash has missing NOTIFY_TYPES member
Dec 06 09:37:57 compute-0 ceph-mgr[74618]: mgr[py] Loading python module 'dashboard'
Dec 06 09:37:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:37:57.622+0000 7f8db8775140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Dec 06 09:37:58 compute-0 ceph-mgr[74618]: mgr[py] Loading python module 'devicehealth'
Dec 06 09:37:58 compute-0 ceph-mgr[74618]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Dec 06 09:37:58 compute-0 ceph-mgr[74618]: mgr[py] Loading python module 'diskprediction_local'
Dec 06 09:37:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:37:58.242+0000 7f8db8775140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Dec 06 09:37:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Dec 06 09:37:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Dec 06 09:37:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]:   from numpy import show_config as show_numpy_config
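The three lines above are a single UserWarning that NumPy emits verbatim (awkward grammar included) when scipy imports it inside one of the mgr's per-module Python sub-interpreters; it does not indicate a fault here. In one's own module code such a known-benign warning could be filtered before the triggering import, a minimal sketch:

    #!/usr/bin/env python3
    # Suppress the known-benign NumPy sub-interpreter warning before
    # the import that raises it; matching is on the message text.
    import warnings

    warnings.filterwarnings(
        "ignore",
        message=r"NumPy was imported from a Python sub-interpreter.*",
        category=UserWarning,
    )
    import numpy  # the warning, if raised, is now suppressed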
Dec 06 09:37:58 compute-0 ceph-mgr[74618]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Dec 06 09:37:58 compute-0 ceph-mgr[74618]: mgr[py] Loading python module 'influx'
Dec 06 09:37:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:37:58.391+0000 7f8db8775140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Dec 06 09:37:58 compute-0 ceph-mgr[74618]: mgr[py] Module influx has missing NOTIFY_TYPES member
Dec 06 09:37:58 compute-0 ceph-mgr[74618]: mgr[py] Loading python module 'insights'
Dec 06 09:37:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:37:58.456+0000 7f8db8775140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Dec 06 09:37:58 compute-0 ceph-mgr[74618]: mgr[py] Loading python module 'iostat'
Dec 06 09:37:58 compute-0 ceph-mgr[74618]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Dec 06 09:37:58 compute-0 ceph-mgr[74618]: mgr[py] Loading python module 'k8sevents'
Dec 06 09:37:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:37:58.585+0000 7f8db8775140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Dec 06 09:37:58 compute-0 ceph-mgr[74618]: mgr[py] Loading python module 'localpool'
Dec 06 09:37:59 compute-0 ceph-mgr[74618]: mgr[py] Loading python module 'mds_autoscaler'
Dec 06 09:37:59 compute-0 ceph-mgr[74618]: mgr[py] Loading python module 'mirroring'
Dec 06 09:37:59 compute-0 ceph-mgr[74618]: mgr[py] Loading python module 'nfs'
Dec 06 09:37:59 compute-0 ceph-mgr[74618]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Dec 06 09:37:59 compute-0 ceph-mgr[74618]: mgr[py] Loading python module 'orchestrator'
Dec 06 09:37:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:37:59.526+0000 7f8db8775140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Dec 06 09:37:59 compute-0 ceph-mgr[74618]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Dec 06 09:37:59 compute-0 ceph-mgr[74618]: mgr[py] Loading python module 'osd_perf_query'
Dec 06 09:37:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:37:59.727+0000 7f8db8775140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Dec 06 09:37:59 compute-0 ceph-mgr[74618]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Dec 06 09:37:59 compute-0 ceph-mgr[74618]: mgr[py] Loading python module 'osd_support'
Dec 06 09:37:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:37:59.808+0000 7f8db8775140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Dec 06 09:37:59 compute-0 ceph-mgr[74618]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Dec 06 09:37:59 compute-0 ceph-mgr[74618]: mgr[py] Loading python module 'pg_autoscaler'
Dec 06 09:37:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:37:59.869+0000 7f8db8775140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Dec 06 09:37:59 compute-0 ceph-mgr[74618]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Dec 06 09:37:59 compute-0 ceph-mgr[74618]: mgr[py] Loading python module 'progress'
Dec 06 09:37:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:37:59.941+0000 7f8db8775140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Dec 06 09:38:00 compute-0 ceph-mgr[74618]: mgr[py] Module progress has missing NOTIFY_TYPES member
Dec 06 09:38:00 compute-0 ceph-mgr[74618]: mgr[py] Loading python module 'prometheus'
Dec 06 09:38:00 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:38:00.005+0000 7f8db8775140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Dec 06 09:38:00 compute-0 ceph-mgr[74618]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Dec 06 09:38:00 compute-0 ceph-mgr[74618]: mgr[py] Loading python module 'rbd_support'
Dec 06 09:38:00 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:38:00.353+0000 7f8db8775140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Dec 06 09:38:00 compute-0 ceph-mgr[74618]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Dec 06 09:38:00 compute-0 ceph-mgr[74618]: mgr[py] Loading python module 'restful'
Dec 06 09:38:00 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:38:00.446+0000 7f8db8775140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Dec 06 09:38:00 compute-0 ceph-mgr[74618]: mgr[py] Loading python module 'rgw'
Dec 06 09:38:00 compute-0 ceph-mgr[74618]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Dec 06 09:38:00 compute-0 ceph-mgr[74618]: mgr[py] Loading python module 'rook'
Dec 06 09:38:00 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:38:00.864+0000 7f8db8775140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Dec 06 09:38:01 compute-0 ceph-mgr[74618]: mgr[py] Module rook has missing NOTIFY_TYPES member
Dec 06 09:38:01 compute-0 ceph-mgr[74618]: mgr[py] Loading python module 'selftest'
Dec 06 09:38:01 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:38:01.429+0000 7f8db8775140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Dec 06 09:38:01 compute-0 ceph-mgr[74618]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Dec 06 09:38:01 compute-0 ceph-mgr[74618]: mgr[py] Loading python module 'snap_schedule'
Dec 06 09:38:01 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:38:01.502+0000 7f8db8775140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Dec 06 09:38:01 compute-0 ceph-mgr[74618]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Dec 06 09:38:01 compute-0 ceph-mgr[74618]: mgr[py] Loading python module 'stats'
Dec 06 09:38:01 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:38:01.588+0000 7f8db8775140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Dec 06 09:38:01 compute-0 ceph-mgr[74618]: mgr[py] Loading python module 'status'
Dec 06 09:38:01 compute-0 ceph-mgr[74618]: mgr[py] Module status has missing NOTIFY_TYPES member
Dec 06 09:38:01 compute-0 ceph-mgr[74618]: mgr[py] Loading python module 'telegraf'
Dec 06 09:38:01 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:38:01.747+0000 7f8db8775140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Dec 06 09:38:01 compute-0 ceph-mgr[74618]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Dec 06 09:38:01 compute-0 ceph-mgr[74618]: mgr[py] Loading python module 'telemetry'
Dec 06 09:38:01 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:38:01.823+0000 7f8db8775140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Dec 06 09:38:01 compute-0 ceph-mgr[74618]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Dec 06 09:38:01 compute-0 ceph-mgr[74618]: mgr[py] Loading python module 'test_orchestrator'
Dec 06 09:38:01 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:38:01.989+0000 7f8db8775140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Dec 06 09:38:02 compute-0 ceph-mgr[74618]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Dec 06 09:38:02 compute-0 ceph-mgr[74618]: mgr[py] Loading python module 'volumes'
Dec 06 09:38:02 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:38:02.221+0000 7f8db8775140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Dec 06 09:38:02 compute-0 ceph-mgr[74618]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Dec 06 09:38:02 compute-0 ceph-mgr[74618]: mgr[py] Loading python module 'zabbix'
Dec 06 09:38:02 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:38:02.508+0000 7f8db8775140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Dec 06 09:38:02 compute-0 ceph-mgr[74618]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Dec 06 09:38:02 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:38:02.576+0000 7f8db8775140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
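Every "Module X has missing NOTIFY_TYPES member" line in the run above is the mgr noting, once per module at load time, that a bundled module does not declare which cluster notifications it consumes; in this release the message is generally harmless noise and the modules load anyway. To confirm which modules actually came up, one can query the mgr directly; a sketch that assumes the admin keyring is reachable on this host (it is bind-mounted into the helper containers seen earlier), with JSON key names guarded by .get() in case they drift between releases:

    #!/usr/bin/env python3
    # List mgr modules by state via "ceph mgr module ls --format json".
    import json
    import subprocess

    raw = subprocess.run(
        ["ceph", "mgr", "module", "ls", "--format", "json"],
        capture_output=True, text=True, check=True,
    ).stdout
    info = json.loads(raw)
    print("always-on:", info.get("always_on_modules", []))
    print("enabled:  ", info.get("enabled_modules", []))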
Dec 06 09:38:02 compute-0 ceph-mon[74327]: log_channel(cluster) log [INF] : Active manager daemon compute-0.qhdjwa restarted
Dec 06 09:38:02 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e1 do_prune osdmap full prune enabled
Dec 06 09:38:02 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e1 encode_pending skipping prime_pg_temp; mapping job did not start
Dec 06 09:38:02 compute-0 ceph-mon[74327]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.qhdjwa
Dec 06 09:38:02 compute-0 ceph-mgr[74618]: ms_deliver_dispatch: unhandled message 0x562e1eddad00 mon_map magic: 0 from mon.0 v2:192.168.122.100:3300/0
Dec 06 09:38:02 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e1 _set_cache_ratios kv ratio 0.25 inc ratio 0.375 full ratio 0.375
Dec 06 09:38:02 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e1 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1
Dec 06 09:38:02 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e2 e2: 0 total, 0 up, 0 in
Dec 06 09:38:02 compute-0 ceph-mgr[74618]: mgr handle_mgr_map Activating!
Dec 06 09:38:02 compute-0 ceph-mgr[74618]: mgr handle_mgr_map I am now activating
Dec 06 09:38:02 compute-0 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e2: 0 total, 0 up, 0 in
Dec 06 09:38:02 compute-0 ceph-mon[74327]: log_channel(cluster) log [DBG] : mgrmap e6: compute-0.qhdjwa(active, starting, since 0.337067s)
Dec 06 09:38:02 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Dec 06 09:38:02 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Dec 06 09:38:02 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.qhdjwa", "id": "compute-0.qhdjwa"} v 0)
Dec 06 09:38:02 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "mgr metadata", "who": "compute-0.qhdjwa", "id": "compute-0.qhdjwa"}]: dispatch
Dec 06 09:38:02 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata"} v 0)
Dec 06 09:38:02 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "mds metadata"}]: dispatch
Dec 06 09:38:02 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).mds e1 all = 1
Dec 06 09:38:02 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0)
Dec 06 09:38:02 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd metadata"}]: dispatch
Dec 06 09:38:02 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata"} v 0)
Dec 06 09:38:02 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "mon metadata"}]: dispatch
Dec 06 09:38:02 compute-0 ceph-mgr[74618]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 06 09:38:02 compute-0 ceph-mgr[74618]: mgr load Constructed class from module: balancer
Dec 06 09:38:02 compute-0 ceph-mgr[74618]: [balancer INFO root] Starting
Dec 06 09:38:02 compute-0 ceph-mgr[74618]: [cephadm DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 06 09:38:02 compute-0 ceph-mon[74327]: log_channel(cluster) log [INF] : Manager daemon compute-0.qhdjwa is now available
Dec 06 09:38:02 compute-0 ceph-mgr[74618]: [balancer INFO root] Optimize plan auto_2025-12-06_09:38:02
Dec 06 09:38:02 compute-0 ceph-mgr[74618]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 06 09:38:02 compute-0 ceph-mgr[74618]: [balancer INFO root] do_upmap
Dec 06 09:38:02 compute-0 ceph-mgr[74618]: [balancer INFO root] No pools available
Dec 06 09:38:02 compute-0 ceph-mgr[74618]: [cephadm INFO cephadm.migrations] Found migration_current of "None". Setting to last migration.
Dec 06 09:38:02 compute-0 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Found migration_current of "None". Setting to last migration.
Dec 06 09:38:02 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/migration_current}] v 0)
Dec 06 09:38:02 compute-0 ceph-mon[74327]: Active manager daemon compute-0.qhdjwa restarted
Dec 06 09:38:02 compute-0 ceph-mon[74327]: Activating manager daemon compute-0.qhdjwa
Dec 06 09:38:02 compute-0 ceph-mon[74327]: osdmap e2: 0 total, 0 up, 0 in
Dec 06 09:38:02 compute-0 ceph-mon[74327]: mgrmap e6: compute-0.qhdjwa(active, starting, since 0.337067s)
Dec 06 09:38:02 compute-0 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Dec 06 09:38:02 compute-0 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "mgr metadata", "who": "compute-0.qhdjwa", "id": "compute-0.qhdjwa"}]: dispatch
Dec 06 09:38:02 compute-0 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "mds metadata"}]: dispatch
Dec 06 09:38:02 compute-0 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd metadata"}]: dispatch
Dec 06 09:38:02 compute-0 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "mon metadata"}]: dispatch
Dec 06 09:38:02 compute-0 ceph-mon[74327]: Manager daemon compute-0.qhdjwa is now available
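Note the doubled ceph-mon[74327] messages in this stretch: each event appears first as a line prefixed with log_channel(cluster) or log_channel(audit), which is the mon submitting the entry to that log channel, and then again as the bare entry itself when the channel is written out to the journal. When grepping such a journal it can help to collapse the pair; a small sketch, hedged as a rough filter rather than anything the tooling provides:

    #!/usr/bin/env python3
    # Collapse ceph-mon's doubled journal lines: drop the bare
    # cluster/audit entry that repeats an earlier log_channel(...) line.
    import re
    import sys

    JOURNAL = re.compile(r"^.*?ceph-mon\[\d+\]: ")  # journald prefix
    CHANNEL = re.compile(r"log_channel\((?:cluster|audit)\) log \[\w+\] : ")

    seen = set()
    for line in sys.stdin:
        body = CHANNEL.sub("", JOURNAL.sub("", line.rstrip("\n")), count=1)
        if body in seen:
            continue          # bare repeat of an entry seen already
        seen.add(body)
        print(body)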
Dec 06 09:38:02 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:38:02 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/config_checks}] v 0)
Dec 06 09:38:02 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:38:02 compute-0 ceph-mgr[74618]: mgr load Constructed class from module: cephadm
Dec 06 09:38:02 compute-0 ceph-mgr[74618]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 06 09:38:02 compute-0 ceph-mgr[74618]: mgr load Constructed class from module: crash
Dec 06 09:38:02 compute-0 ceph-mgr[74618]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 06 09:38:02 compute-0 ceph-mgr[74618]: mgr load Constructed class from module: devicehealth
Dec 06 09:38:02 compute-0 ceph-mgr[74618]: [devicehealth INFO root] Starting
Dec 06 09:38:02 compute-0 ceph-mgr[74618]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 06 09:38:02 compute-0 ceph-mgr[74618]: mgr load Constructed class from module: iostat
Dec 06 09:38:02 compute-0 ceph-mgr[74618]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 06 09:38:02 compute-0 ceph-mgr[74618]: mgr load Constructed class from module: nfs
Dec 06 09:38:02 compute-0 ceph-mgr[74618]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 06 09:38:02 compute-0 ceph-mgr[74618]: mgr load Constructed class from module: orchestrator
Dec 06 09:38:02 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Dec 06 09:38:02 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Dec 06 09:38:02 compute-0 ceph-mgr[74618]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 06 09:38:02 compute-0 ceph-mgr[74618]: mgr load Constructed class from module: pg_autoscaler
Dec 06 09:38:02 compute-0 ceph-mgr[74618]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 06 09:38:02 compute-0 ceph-mgr[74618]: mgr load Constructed class from module: progress
Dec 06 09:38:03 compute-0 ceph-mgr[74618]: [progress INFO root] Loading...
Dec 06 09:38:03 compute-0 ceph-mgr[74618]: [progress INFO root] No stored events to load
Dec 06 09:38:03 compute-0 ceph-mgr[74618]: [progress INFO root] Loaded [] historic events
Dec 06 09:38:03 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] _maybe_adjust
Dec 06 09:38:03 compute-0 ceph-mgr[74618]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 06 09:38:03 compute-0 ceph-mgr[74618]: [progress INFO root] Loaded OSDMap, ready.
Dec 06 09:38:03 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Dec 06 09:38:03 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Dec 06 09:38:03 compute-0 ceph-mgr[74618]: [rbd_support INFO root] recovery thread starting
Dec 06 09:38:03 compute-0 ceph-mgr[74618]: [rbd_support INFO root] starting setup
Dec 06 09:38:03 compute-0 ceph-mgr[74618]: mgr load Constructed class from module: rbd_support
Dec 06 09:38:03 compute-0 ceph-mgr[74618]: [restful DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 06 09:38:03 compute-0 ceph-mgr[74618]: mgr load Constructed class from module: restful
Dec 06 09:38:03 compute-0 ceph-mgr[74618]: [restful INFO root] server_addr: :: server_port: 8003
Dec 06 09:38:03 compute-0 ceph-mgr[74618]: [restful WARNING root] server not running: no certificate configured
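The restful module has loaded but, as the WARNING says, will not serve on port 8003 until it has a TLS certificate. The module ships a documented helper for generating a self-signed one; a sketch of applying it from the bootstrap host, with the module reload left commented out since whether it is needed varies by release:

    #!/usr/bin/env python3
    # Give the restful module a self-signed certificate so it can serve.
    import subprocess

    subprocess.run(
        ["ceph", "restful", "create-self-signed-cert"], check=True,
    )
    # Optional, if the server does not come up on its own:
    # subprocess.run(["ceph", "mgr", "module", "disable", "restful"], check=True)
    # subprocess.run(["ceph", "mgr", "module", "enable", "restful"], check=True)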
Dec 06 09:38:03 compute-0 ceph-mgr[74618]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 06 09:38:03 compute-0 ceph-mgr[74618]: mgr load Constructed class from module: status
Dec 06 09:38:03 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.qhdjwa/mirror_snapshot_schedule"} v 0)
Dec 06 09:38:03 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.qhdjwa/mirror_snapshot_schedule"}]: dispatch
Dec 06 09:38:03 compute-0 ceph-mgr[74618]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 06 09:38:03 compute-0 ceph-mgr[74618]: mgr load Constructed class from module: telemetry
Dec 06 09:38:03 compute-0 ceph-mgr[74618]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 06 09:38:03 compute-0 ceph-mgr[74618]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 06 09:38:03 compute-0 ceph-mgr[74618]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Dec 06 09:38:03 compute-0 ceph-mgr[74618]: [rbd_support INFO root] PerfHandler: starting
Dec 06 09:38:03 compute-0 ceph-mgr[74618]: [rbd_support INFO root] TaskHandler: starting
Dec 06 09:38:03 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.qhdjwa/trash_purge_schedule"} v 0)
Dec 06 09:38:03 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.qhdjwa/trash_purge_schedule"}]: dispatch
Dec 06 09:38:03 compute-0 ceph-mgr[74618]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 06 09:38:03 compute-0 ceph-mgr[74618]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Dec 06 09:38:03 compute-0 ceph-mgr[74618]: [rbd_support INFO root] setup complete
Dec 06 09:38:03 compute-0 ceph-mgr[74618]: mgr load Constructed class from module: volumes
Dec 06 09:38:03 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1019931811 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 09:38:03 compute-0 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.14126 -' entity='client.admin' cmd=[{"prefix": "get_command_descriptions"}]: dispatch
Dec 06 09:38:03 compute-0 ceph-mon[74327]: log_channel(cluster) log [DBG] : mgrmap e7: compute-0.qhdjwa(active, since 1.34916s)
Dec 06 09:38:03 compute-0 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.14126 -' entity='client.admin' cmd=[{"prefix": "mgr_status"}]: dispatch
Dec 06 09:38:03 compute-0 silly_margulis[75161]: {
Dec 06 09:38:03 compute-0 silly_margulis[75161]:     "mgrmap_epoch": 7,
Dec 06 09:38:03 compute-0 silly_margulis[75161]:     "initialized": true
Dec 06 09:38:03 compute-0 silly_margulis[75161]: }
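The JSON printed by the silly_margulis container, {"mgrmap_epoch": 7, "initialized": true}, is the reply to the mgr_status command dispatched just above: the bootstrap appears to poll the mgr until it reports itself initialized before proceeding. The same readiness check can be reproduced with ceph mgr stat; a sketch, with the field names ("available", "active_name") hedged via .get() since they are taken from current releases rather than from this log:

    #!/usr/bin/env python3
    # Poll "ceph mgr stat" until an active mgr reports available.
    import json
    import subprocess
    import time

    for _ in range(30):                       # ~30 s budget, arbitrary
        raw = subprocess.run(
            ["ceph", "mgr", "stat", "--format", "json"],
            capture_output=True, text=True, check=True,
        ).stdout
        stat = json.loads(raw)
        if stat.get("available"):
            print("active mgr:", stat.get("active_name"))
            break
        time.sleep(1)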
Dec 06 09:38:03 compute-0 systemd[1]: libpod-f2a3408573970b8cd326fdde75bfb8317ce0a612cfa07943ffd2c4133afa33db.scope: Deactivated successfully.
Dec 06 09:38:03 compute-0 podman[75145]: 2025-12-06 09:38:03.971833375 +0000 UTC m=+6.735711446 container died f2a3408573970b8cd326fdde75bfb8317ce0a612cfa07943ffd2c4133afa33db (image=quay.io/ceph/ceph:v19, name=silly_margulis, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 06 09:38:03 compute-0 ceph-mon[74327]: Found migration_current of "None". Setting to last migration.
Dec 06 09:38:03 compute-0 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:38:03 compute-0 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:38:03 compute-0 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Dec 06 09:38:03 compute-0 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Dec 06 09:38:03 compute-0 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.qhdjwa/mirror_snapshot_schedule"}]: dispatch
Dec 06 09:38:03 compute-0 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.qhdjwa/trash_purge_schedule"}]: dispatch
Dec 06 09:38:03 compute-0 ceph-mon[74327]: mgrmap e7: compute-0.qhdjwa(active, since 1.34916s)
Dec 06 09:38:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-e44773ff3210dfaaae43e31831c3f7db4ad37ebabcdc2f8f4b262a4a191bfd35-merged.mount: Deactivated successfully.
Dec 06 09:38:04 compute-0 podman[75145]: 2025-12-06 09:38:04.023560187 +0000 UTC m=+6.787438258 container remove f2a3408573970b8cd326fdde75bfb8317ce0a612cfa07943ffd2c4133afa33db (image=quay.io/ceph/ceph:v19, name=silly_margulis, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 09:38:04 compute-0 systemd[1]: libpod-conmon-f2a3408573970b8cd326fdde75bfb8317ce0a612cfa07943ffd2c4133afa33db.scope: Deactivated successfully.
Dec 06 09:38:04 compute-0 podman[75310]: 2025-12-06 09:38:04.120261617 +0000 UTC m=+0.060967303 container create 91c31f855bcb2ae536e1e113d275ccdf15c6ef87d5eea174ac7db6d0183aa5ac (image=quay.io/ceph/ceph:v19, name=sweet_hermann, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS)
Dec 06 09:38:04 compute-0 systemd[1]: Started libpod-conmon-91c31f855bcb2ae536e1e113d275ccdf15c6ef87d5eea174ac7db6d0183aa5ac.scope.
Dec 06 09:38:04 compute-0 podman[75310]: 2025-12-06 09:38:04.092604956 +0000 UTC m=+0.033310692 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 06 09:38:04 compute-0 systemd[1]: Started libcrun container.
Dec 06 09:38:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3c4ab5675b98f1dc2ae993ec1053c6516f3ff52798f258667ddc7b4be6fee54a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 09:38:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3c4ab5675b98f1dc2ae993ec1053c6516f3ff52798f258667ddc7b4be6fee54a/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 06 09:38:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3c4ab5675b98f1dc2ae993ec1053c6516f3ff52798f258667ddc7b4be6fee54a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 09:38:04 compute-0 podman[75310]: 2025-12-06 09:38:04.221020397 +0000 UTC m=+0.161726123 container init 91c31f855bcb2ae536e1e113d275ccdf15c6ef87d5eea174ac7db6d0183aa5ac (image=quay.io/ceph/ceph:v19, name=sweet_hermann, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 09:38:04 compute-0 podman[75310]: 2025-12-06 09:38:04.231043353 +0000 UTC m=+0.171749039 container start 91c31f855bcb2ae536e1e113d275ccdf15c6ef87d5eea174ac7db6d0183aa5ac (image=quay.io/ceph/ceph:v19, name=sweet_hermann, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Dec 06 09:38:04 compute-0 podman[75310]: 2025-12-06 09:38:04.235207924 +0000 UTC m=+0.175913690 container attach 91c31f855bcb2ae536e1e113d275ccdf15c6ef87d5eea174ac7db6d0183aa5ac (image=quay.io/ceph/ceph:v19, name=sweet_hermann, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 09:38:04 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/cert_store.cert.agent_endpoint_root_cert}] v 0)
Dec 06 09:38:04 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:38:04 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/cert_store.key.agent_endpoint_key}] v 0)
Dec 06 09:38:04 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:38:04 compute-0 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.14134 -' entity='client.admin' cmd=[{"prefix": "orch set backend", "module_name": "cephadm", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 09:38:04 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/orchestrator/orchestrator}] v 0)
Dec 06 09:38:04 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:38:04 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Dec 06 09:38:04 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Dec 06 09:38:04 compute-0 systemd[1]: libpod-91c31f855bcb2ae536e1e113d275ccdf15c6ef87d5eea174ac7db6d0183aa5ac.scope: Deactivated successfully.
Dec 06 09:38:04 compute-0 podman[75310]: 2025-12-06 09:38:04.692232559 +0000 UTC m=+0.632938215 container died 91c31f855bcb2ae536e1e113d275ccdf15c6ef87d5eea174ac7db6d0183aa5ac (image=quay.io/ceph/ceph:v19, name=sweet_hermann, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 09:38:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-3c4ab5675b98f1dc2ae993ec1053c6516f3ff52798f258667ddc7b4be6fee54a-merged.mount: Deactivated successfully.
Dec 06 09:38:04 compute-0 podman[75310]: 2025-12-06 09:38:04.732921835 +0000 UTC m=+0.673627491 container remove 91c31f855bcb2ae536e1e113d275ccdf15c6ef87d5eea174ac7db6d0183aa5ac (image=quay.io/ceph/ceph:v19, name=sweet_hermann, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec 06 09:38:04 compute-0 systemd[1]: libpod-conmon-91c31f855bcb2ae536e1e113d275ccdf15c6ef87d5eea174ac7db6d0183aa5ac.scope: Deactivated successfully.
Dec 06 09:38:04 compute-0 podman[75365]: 2025-12-06 09:38:04.812152334 +0000 UTC m=+0.055686130 container create 98d6f667dc0a19fe242f691f0c97301c1a0cf15eb6307009deb3a237e05e3643 (image=quay.io/ceph/ceph:v19, name=nifty_euler, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec 06 09:38:04 compute-0 systemd[1]: Started libpod-conmon-98d6f667dc0a19fe242f691f0c97301c1a0cf15eb6307009deb3a237e05e3643.scope.
Dec 06 09:38:04 compute-0 podman[75365]: 2025-12-06 09:38:04.781160768 +0000 UTC m=+0.024694624 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 06 09:38:04 compute-0 systemd[1]: Started libcrun container.
Dec 06 09:38:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/41db95901aa15bd2936c55b7d8a51b136b1d682f3a8c34931cdd152823e892e8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 09:38:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/41db95901aa15bd2936c55b7d8a51b136b1d682f3a8c34931cdd152823e892e8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 09:38:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/41db95901aa15bd2936c55b7d8a51b136b1d682f3a8c34931cdd152823e892e8/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 06 09:38:04 compute-0 podman[75365]: 2025-12-06 09:38:04.914587257 +0000 UTC m=+0.158121043 container init 98d6f667dc0a19fe242f691f0c97301c1a0cf15eb6307009deb3a237e05e3643 (image=quay.io/ceph/ceph:v19, name=nifty_euler, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid)
Dec 06 09:38:04 compute-0 podman[75365]: 2025-12-06 09:38:04.921260667 +0000 UTC m=+0.164794453 container start 98d6f667dc0a19fe242f691f0c97301c1a0cf15eb6307009deb3a237e05e3643 (image=quay.io/ceph/ceph:v19, name=nifty_euler, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec 06 09:38:04 compute-0 ceph-mgr[74618]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Dec 06 09:38:04 compute-0 podman[75365]: 2025-12-06 09:38:04.926911367 +0000 UTC m=+0.170445163 container attach 98d6f667dc0a19fe242f691f0c97301c1a0cf15eb6307009deb3a237e05e3643 (image=quay.io/ceph/ceph:v19, name=nifty_euler, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Dec 06 09:38:05 compute-0 ceph-mgr[74618]: [cephadm INFO cherrypy.error] [06/Dec/2025:09:38:05] ENGINE Bus STARTING
Dec 06 09:38:05 compute-0 ceph-mgr[74618]: log_channel(cephadm) log [INF] : [06/Dec/2025:09:38:05] ENGINE Bus STARTING
Dec 06 09:38:05 compute-0 ceph-mon[74327]: from='client.14126 -' entity='client.admin' cmd=[{"prefix": "get_command_descriptions"}]: dispatch
Dec 06 09:38:05 compute-0 ceph-mon[74327]: from='client.14126 -' entity='client.admin' cmd=[{"prefix": "mgr_status"}]: dispatch
Dec 06 09:38:05 compute-0 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:38:05 compute-0 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:38:05 compute-0 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:38:05 compute-0 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Dec 06 09:38:05 compute-0 ceph-mon[74327]: log_channel(cluster) log [DBG] : mgrmap e8: compute-0.qhdjwa(active, since 2s)
Dec 06 09:38:05 compute-0 ceph-mgr[74618]: [cephadm INFO cherrypy.error] [06/Dec/2025:09:38:05] ENGINE Serving on https://192.168.122.100:7150
Dec 06 09:38:05 compute-0 ceph-mgr[74618]: log_channel(cephadm) log [INF] : [06/Dec/2025:09:38:05] ENGINE Serving on https://192.168.122.100:7150
Dec 06 09:38:05 compute-0 ceph-mgr[74618]: [cephadm INFO cherrypy.error] [06/Dec/2025:09:38:05] ENGINE Client ('192.168.122.100', 46222) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Dec 06 09:38:05 compute-0 ceph-mgr[74618]: log_channel(cephadm) log [INF] : [06/Dec/2025:09:38:05] ENGINE Client ('192.168.122.100', 46222) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Dec 06 09:38:05 compute-0 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.14136 -' entity='client.admin' cmd=[{"prefix": "cephadm set-user", "user": "ceph-admin", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 09:38:05 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_user}] v 0)
Dec 06 09:38:05 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:38:05 compute-0 ceph-mgr[74618]: [cephadm INFO root] Set ssh ssh_user
Dec 06 09:38:05 compute-0 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Set ssh ssh_user
Dec 06 09:38:05 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_config}] v 0)
Dec 06 09:38:05 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:38:05 compute-0 ceph-mgr[74618]: [cephadm INFO root] Set ssh ssh_config
Dec 06 09:38:05 compute-0 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Set ssh ssh_config
Dec 06 09:38:05 compute-0 ceph-mgr[74618]: [cephadm INFO root] ssh user set to ceph-admin. sudo will be used
Dec 06 09:38:05 compute-0 ceph-mgr[74618]: log_channel(cephadm) log [INF] : ssh user set to ceph-admin. sudo will be used
Dec 06 09:38:05 compute-0 nifty_euler[75381]: ssh user set to ceph-admin. sudo will be used
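These 09:38:05 lines show cephadm being pointed at a non-root SSH user: set-user stores ceph-admin under mgr/cephadm/ssh_user and, as the mgr notes, sudo will be used for all remote operations, so that account needs passwordless sudo on every managed host. The private key is stored in config-key the same way a moment later (set-priv-key, below at 09:38:05). The two commands, wrapped in a sketch; the key path mirrors the /tmp/cephadm-ssh-key mount seen in the helper containers and is illustrative:

    #!/usr/bin/env python3
    # Configure cephadm's SSH access: non-root user plus private key.
    import subprocess

    subprocess.run(
        ["ceph", "cephadm", "set-user", "ceph-admin"], check=True,
    )
    subprocess.run(
        ["ceph", "cephadm", "set-priv-key", "-i", "/tmp/cephadm-ssh-key"],
        check=True,   # substitute the real key location
    )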
Dec 06 09:38:05 compute-0 systemd[1]: libpod-98d6f667dc0a19fe242f691f0c97301c1a0cf15eb6307009deb3a237e05e3643.scope: Deactivated successfully.
Dec 06 09:38:05 compute-0 podman[75365]: 2025-12-06 09:38:05.348541171 +0000 UTC m=+0.592074927 container died 98d6f667dc0a19fe242f691f0c97301c1a0cf15eb6307009deb3a237e05e3643 (image=quay.io/ceph/ceph:v19, name=nifty_euler, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Dec 06 09:38:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-41db95901aa15bd2936c55b7d8a51b136b1d682f3a8c34931cdd152823e892e8-merged.mount: Deactivated successfully.
Dec 06 09:38:05 compute-0 podman[75365]: 2025-12-06 09:38:05.389034092 +0000 UTC m=+0.632567848 container remove 98d6f667dc0a19fe242f691f0c97301c1a0cf15eb6307009deb3a237e05e3643 (image=quay.io/ceph/ceph:v19, name=nifty_euler, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Dec 06 09:38:05 compute-0 ceph-mgr[74618]: [cephadm INFO cherrypy.error] [06/Dec/2025:09:38:05] ENGINE Serving on http://192.168.122.100:8765
Dec 06 09:38:05 compute-0 ceph-mgr[74618]: log_channel(cephadm) log [INF] : [06/Dec/2025:09:38:05] ENGINE Serving on http://192.168.122.100:8765
Dec 06 09:38:05 compute-0 ceph-mgr[74618]: [cephadm INFO cherrypy.error] [06/Dec/2025:09:38:05] ENGINE Bus STARTED
Dec 06 09:38:05 compute-0 ceph-mgr[74618]: log_channel(cephadm) log [INF] : [06/Dec/2025:09:38:05] ENGINE Bus STARTED
Dec 06 09:38:05 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Dec 06 09:38:05 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Dec 06 09:38:05 compute-0 systemd[1]: libpod-conmon-98d6f667dc0a19fe242f691f0c97301c1a0cf15eb6307009deb3a237e05e3643.scope: Deactivated successfully.
Dec 06 09:38:05 compute-0 podman[75441]: 2025-12-06 09:38:05.491954694 +0000 UTC m=+0.069126833 container create 67c0d5b680db9f1c493e7b3619c89c66c65b58d700e7c9fbfa9323803a7d8225 (image=quay.io/ceph/ceph:v19, name=dazzling_mahavira, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 06 09:38:05 compute-0 systemd[1]: Started libpod-conmon-67c0d5b680db9f1c493e7b3619c89c66c65b58d700e7c9fbfa9323803a7d8225.scope.
Dec 06 09:38:05 compute-0 systemd[1]: Started libcrun container.
Dec 06 09:38:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7fe34958966a58e42a6dda41775600efca8ab728c80121bbd9608695dfa85550/merged/tmp/cephadm-ssh-key supports timestamps until 2038 (0x7fffffff)
Dec 06 09:38:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7fe34958966a58e42a6dda41775600efca8ab728c80121bbd9608695dfa85550/merged/tmp/cephadm-ssh-key.pub supports timestamps until 2038 (0x7fffffff)
Dec 06 09:38:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7fe34958966a58e42a6dda41775600efca8ab728c80121bbd9608695dfa85550/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 09:38:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7fe34958966a58e42a6dda41775600efca8ab728c80121bbd9608695dfa85550/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 06 09:38:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7fe34958966a58e42a6dda41775600efca8ab728c80121bbd9608695dfa85550/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 09:38:05 compute-0 podman[75441]: 2025-12-06 09:38:05.467393334 +0000 UTC m=+0.044565553 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 06 09:38:05 compute-0 podman[75441]: 2025-12-06 09:38:05.566064753 +0000 UTC m=+0.143236972 container init 67c0d5b680db9f1c493e7b3619c89c66c65b58d700e7c9fbfa9323803a7d8225 (image=quay.io/ceph/ceph:v19, name=dazzling_mahavira, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325)
Dec 06 09:38:05 compute-0 podman[75441]: 2025-12-06 09:38:05.571448498 +0000 UTC m=+0.148620667 container start 67c0d5b680db9f1c493e7b3619c89c66c65b58d700e7c9fbfa9323803a7d8225 (image=quay.io/ceph/ceph:v19, name=dazzling_mahavira, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1)
Dec 06 09:38:05 compute-0 podman[75441]: 2025-12-06 09:38:05.575651111 +0000 UTC m=+0.152823290 container attach 67c0d5b680db9f1c493e7b3619c89c66c65b58d700e7c9fbfa9323803a7d8225 (image=quay.io/ceph/ceph:v19, name=dazzling_mahavira, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec 06 09:38:05 compute-0 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.14138 -' entity='client.admin' cmd=[{"prefix": "cephadm set-priv-key", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 09:38:05 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_identity_key}] v 0)
Dec 06 09:38:05 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:38:05 compute-0 ceph-mgr[74618]: [cephadm INFO root] Set ssh ssh_identity_key
Dec 06 09:38:05 compute-0 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Set ssh ssh_identity_key
Dec 06 09:38:05 compute-0 ceph-mgr[74618]: [cephadm INFO root] Set ssh private key
Dec 06 09:38:05 compute-0 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Set ssh private key
Dec 06 09:38:05 compute-0 systemd[1]: libpod-67c0d5b680db9f1c493e7b3619c89c66c65b58d700e7c9fbfa9323803a7d8225.scope: Deactivated successfully.
Dec 06 09:38:05 compute-0 podman[75441]: 2025-12-06 09:38:05.95877032 +0000 UTC m=+0.535942479 container died 67c0d5b680db9f1c493e7b3619c89c66c65b58d700e7c9fbfa9323803a7d8225 (image=quay.io/ceph/ceph:v19, name=dazzling_mahavira, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 09:38:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-7fe34958966a58e42a6dda41775600efca8ab728c80121bbd9608695dfa85550-merged.mount: Deactivated successfully.
Dec 06 09:38:06 compute-0 podman[75441]: 2025-12-06 09:38:06.010679705 +0000 UTC m=+0.587851874 container remove 67c0d5b680db9f1c493e7b3619c89c66c65b58d700e7c9fbfa9323803a7d8225 (image=quay.io/ceph/ceph:v19, name=dazzling_mahavira, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid)
Dec 06 09:38:06 compute-0 systemd[1]: libpod-conmon-67c0d5b680db9f1c493e7b3619c89c66c65b58d700e7c9fbfa9323803a7d8225.scope: Deactivated successfully.
Dec 06 09:38:06 compute-0 podman[75497]: 2025-12-06 09:38:06.096140616 +0000 UTC m=+0.058165208 container create 814b8f210940771d0705b2183463df6fbb7e5f8da21bdc1836703edeb782f547 (image=quay.io/ceph/ceph:v19, name=vibrant_elbakyan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 06 09:38:06 compute-0 systemd[1]: Started libpod-conmon-814b8f210940771d0705b2183463df6fbb7e5f8da21bdc1836703edeb782f547.scope.
Dec 06 09:38:06 compute-0 podman[75497]: 2025-12-06 09:38:06.081335266 +0000 UTC m=+0.043359878 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 06 09:38:06 compute-0 systemd[1]: Started libcrun container.
Dec 06 09:38:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/18827f65d8368e4a7c3ed41f505e5dd5c9852840c112612421305c8d63b55b76/merged/tmp/cephadm-ssh-key.pub supports timestamps until 2038 (0x7fffffff)
Dec 06 09:38:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/18827f65d8368e4a7c3ed41f505e5dd5c9852840c112612421305c8d63b55b76/merged/tmp/cephadm-ssh-key supports timestamps until 2038 (0x7fffffff)
Dec 06 09:38:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/18827f65d8368e4a7c3ed41f505e5dd5c9852840c112612421305c8d63b55b76/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 09:38:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/18827f65d8368e4a7c3ed41f505e5dd5c9852840c112612421305c8d63b55b76/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 06 09:38:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/18827f65d8368e4a7c3ed41f505e5dd5c9852840c112612421305c8d63b55b76/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 09:38:06 compute-0 podman[75497]: 2025-12-06 09:38:06.190630703 +0000 UTC m=+0.152655345 container init 814b8f210940771d0705b2183463df6fbb7e5f8da21bdc1836703edeb782f547 (image=quay.io/ceph/ceph:v19, name=vibrant_elbakyan, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 06 09:38:06 compute-0 podman[75497]: 2025-12-06 09:38:06.206016294 +0000 UTC m=+0.168040886 container start 814b8f210940771d0705b2183463df6fbb7e5f8da21bdc1836703edeb782f547 (image=quay.io/ceph/ceph:v19, name=vibrant_elbakyan, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec 06 09:38:06 compute-0 podman[75497]: 2025-12-06 09:38:06.209439891 +0000 UTC m=+0.171464483 container attach 814b8f210940771d0705b2183463df6fbb7e5f8da21bdc1836703edeb782f547 (image=quay.io/ceph/ceph:v19, name=vibrant_elbakyan, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec 06 09:38:06 compute-0 ceph-mon[74327]: from='client.14134 -' entity='client.admin' cmd=[{"prefix": "orch set backend", "module_name": "cephadm", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 09:38:06 compute-0 ceph-mon[74327]: [06/Dec/2025:09:38:05] ENGINE Bus STARTING
Dec 06 09:38:06 compute-0 ceph-mon[74327]: mgrmap e8: compute-0.qhdjwa(active, since 2s)
Dec 06 09:38:06 compute-0 ceph-mon[74327]: [06/Dec/2025:09:38:05] ENGINE Serving on https://192.168.122.100:7150
Dec 06 09:38:06 compute-0 ceph-mon[74327]: [06/Dec/2025:09:38:05] ENGINE Client ('192.168.122.100', 46222) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Dec 06 09:38:06 compute-0 ceph-mon[74327]: from='client.14136 -' entity='client.admin' cmd=[{"prefix": "cephadm set-user", "user": "ceph-admin", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 09:38:06 compute-0 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:38:06 compute-0 ceph-mon[74327]: Set ssh ssh_user
Dec 06 09:38:06 compute-0 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:38:06 compute-0 ceph-mon[74327]: Set ssh ssh_config
Dec 06 09:38:06 compute-0 ceph-mon[74327]: ssh user set to ceph-admin. sudo will be used
Dec 06 09:38:06 compute-0 ceph-mon[74327]: [06/Dec/2025:09:38:05] ENGINE Serving on http://192.168.122.100:8765
Dec 06 09:38:06 compute-0 ceph-mon[74327]: [06/Dec/2025:09:38:05] ENGINE Bus STARTED
Dec 06 09:38:06 compute-0 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Dec 06 09:38:06 compute-0 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:38:06 compute-0 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.14140 -' entity='client.admin' cmd=[{"prefix": "cephadm set-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 09:38:06 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_identity_pub}] v 0)
Dec 06 09:38:06 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:38:06 compute-0 ceph-mgr[74618]: [cephadm INFO root] Set ssh ssh_identity_pub
Dec 06 09:38:06 compute-0 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Set ssh ssh_identity_pub
Dec 06 09:38:06 compute-0 systemd[1]: libpod-814b8f210940771d0705b2183463df6fbb7e5f8da21bdc1836703edeb782f547.scope: Deactivated successfully.
Dec 06 09:38:06 compute-0 podman[75497]: 2025-12-06 09:38:06.594759094 +0000 UTC m=+0.556783746 container died 814b8f210940771d0705b2183463df6fbb7e5f8da21bdc1836703edeb782f547 (image=quay.io/ceph/ceph:v19, name=vibrant_elbakyan, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 09:38:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-18827f65d8368e4a7c3ed41f505e5dd5c9852840c112612421305c8d63b55b76-merged.mount: Deactivated successfully.
Dec 06 09:38:06 compute-0 podman[75497]: 2025-12-06 09:38:06.634716166 +0000 UTC m=+0.596740808 container remove 814b8f210940771d0705b2183463df6fbb7e5f8da21bdc1836703edeb782f547 (image=quay.io/ceph/ceph:v19, name=vibrant_elbakyan, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 09:38:06 compute-0 systemd[1]: libpod-conmon-814b8f210940771d0705b2183463df6fbb7e5f8da21bdc1836703edeb782f547.scope: Deactivated successfully.
Dec 06 09:38:06 compute-0 podman[75552]: 2025-12-06 09:38:06.734746471 +0000 UTC m=+0.064221767 container create 8c3104280474c18fe6f962bbfeb800ede60094402ecdab72b9312c98d647c570 (image=quay.io/ceph/ceph:v19, name=affectionate_lichterman, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 09:38:06 compute-0 systemd[1]: Started libpod-conmon-8c3104280474c18fe6f962bbfeb800ede60094402ecdab72b9312c98d647c570.scope.
Dec 06 09:38:06 compute-0 systemd[1]: Started libcrun container.
Dec 06 09:38:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e60c49a5543c038e2c7ad25a3b155f75b6b5c01ff4cb7dd8a5003aae17d69ccc/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 06 09:38:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e60c49a5543c038e2c7ad25a3b155f75b6b5c01ff4cb7dd8a5003aae17d69ccc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 09:38:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e60c49a5543c038e2c7ad25a3b155f75b6b5c01ff4cb7dd8a5003aae17d69ccc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 09:38:06 compute-0 podman[75552]: 2025-12-06 09:38:06.71422466 +0000 UTC m=+0.043699946 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 06 09:38:06 compute-0 podman[75552]: 2025-12-06 09:38:06.823645649 +0000 UTC m=+0.153120925 container init 8c3104280474c18fe6f962bbfeb800ede60094402ecdab72b9312c98d647c570 (image=quay.io/ceph/ceph:v19, name=affectionate_lichterman, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 09:38:06 compute-0 podman[75552]: 2025-12-06 09:38:06.832636084 +0000 UTC m=+0.162111360 container start 8c3104280474c18fe6f962bbfeb800ede60094402ecdab72b9312c98d647c570 (image=quay.io/ceph/ceph:v19, name=affectionate_lichterman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1)
Dec 06 09:38:06 compute-0 podman[75552]: 2025-12-06 09:38:06.836063452 +0000 UTC m=+0.165538758 container attach 8c3104280474c18fe6f962bbfeb800ede60094402ecdab72b9312c98d647c570 (image=quay.io/ceph/ceph:v19, name=affectionate_lichterman, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Dec 06 09:38:06 compute-0 ceph-mgr[74618]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Dec 06 09:38:07 compute-0 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.14142 -' entity='client.admin' cmd=[{"prefix": "cephadm get-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 09:38:07 compute-0 affectionate_lichterman[75569]: ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCr6+qXxL7AUoz6da9uYOaWBQVg93dmp6B4R2YfuW7AUrOvPCB5ME9ViFnWrnivTbTxvEEoK75W+01vhVovMJYBez4JJzeN+FwqLcHALLyaRKfnHPJBnd9vk1AKqgh05Mcv8diCcMCRdRYgNXDJS0/hZ6tFAM3/YFu07KsgsGgP86KG8dqKEzKvWEiXpg63wz4g1JufT5u5vePJ15cRiWA0NyjEQgHmrLrv02lvP7Tz/y0+h4GWHaHuIjMXfdG56OkCx1NM/QEyHmEGheBwcbg874x1+nt7wMtMGZ1QatviZ6fxs5OK5qqiLu3aBnJMmEa124CRz1/L8fxSFeTlARBG6jr95DSRCQOFWvONY/yVCv5LN+HDHDzQKdK4qdMcpZW0dbifaJuCkEE0iIgei1ExA86w8d1Zo22xnHOgN3FYcS/LbMtn8yIyX6oaNhmuu6wgNe/k9LP28whRIH5x+Xj3U79uE0bKko6M8x6zVM2tkT9pt3zRH8Fyz/Trklu/GI8= zuul@controller
Dec 06 09:38:07 compute-0 systemd[1]: libpod-8c3104280474c18fe6f962bbfeb800ede60094402ecdab72b9312c98d647c570.scope: Deactivated successfully.
Dec 06 09:38:07 compute-0 podman[75552]: 2025-12-06 09:38:07.202198509 +0000 UTC m=+0.531673825 container died 8c3104280474c18fe6f962bbfeb800ede60094402ecdab72b9312c98d647c570 (image=quay.io/ceph/ceph:v19, name=affectionate_lichterman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 09:38:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-e60c49a5543c038e2c7ad25a3b155f75b6b5c01ff4cb7dd8a5003aae17d69ccc-merged.mount: Deactivated successfully.
Dec 06 09:38:07 compute-0 podman[75552]: 2025-12-06 09:38:07.2472207 +0000 UTC m=+0.576695966 container remove 8c3104280474c18fe6f962bbfeb800ede60094402ecdab72b9312c98d647c570 (image=quay.io/ceph/ceph:v19, name=affectionate_lichterman, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 09:38:07 compute-0 systemd[1]: libpod-conmon-8c3104280474c18fe6f962bbfeb800ede60094402ecdab72b9312c98d647c570.scope: Deactivated successfully.
Dec 06 09:38:07 compute-0 podman[75607]: 2025-12-06 09:38:07.34438396 +0000 UTC m=+0.068323867 container create 0166f750aaa5328129183a58393b68de1a207788f811cea924f1d5dfb0ef10d9 (image=quay.io/ceph/ceph:v19, name=eager_euler, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, ceph=True)
Dec 06 09:38:07 compute-0 systemd[1]: Started libpod-conmon-0166f750aaa5328129183a58393b68de1a207788f811cea924f1d5dfb0ef10d9.scope.
Dec 06 09:38:07 compute-0 podman[75607]: 2025-12-06 09:38:07.308689431 +0000 UTC m=+0.032629408 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 06 09:38:07 compute-0 systemd[1]: Started libcrun container.
Dec 06 09:38:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a25b6de5f301cb0fcd7a5d4d2d8e890530ec6fcf70ec524ba31f91af5e95fc7/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 06 09:38:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a25b6de5f301cb0fcd7a5d4d2d8e890530ec6fcf70ec524ba31f91af5e95fc7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 09:38:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a25b6de5f301cb0fcd7a5d4d2d8e890530ec6fcf70ec524ba31f91af5e95fc7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 09:38:07 compute-0 podman[75607]: 2025-12-06 09:38:07.428746339 +0000 UTC m=+0.152686276 container init 0166f750aaa5328129183a58393b68de1a207788f811cea924f1d5dfb0ef10d9 (image=quay.io/ceph/ceph:v19, name=eager_euler, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325, ceph=True, org.label-schema.license=GPLv2)
Dec 06 09:38:07 compute-0 podman[75607]: 2025-12-06 09:38:07.434297027 +0000 UTC m=+0.158236934 container start 0166f750aaa5328129183a58393b68de1a207788f811cea924f1d5dfb0ef10d9 (image=quay.io/ceph/ceph:v19, name=eager_euler, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 09:38:07 compute-0 podman[75607]: 2025-12-06 09:38:07.438370047 +0000 UTC m=+0.162309974 container attach 0166f750aaa5328129183a58393b68de1a207788f811cea924f1d5dfb0ef10d9 (image=quay.io/ceph/ceph:v19, name=eager_euler, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid)
Dec 06 09:38:07 compute-0 ceph-mon[74327]: from='client.14138 -' entity='client.admin' cmd=[{"prefix": "cephadm set-priv-key", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 09:38:07 compute-0 ceph-mon[74327]: Set ssh ssh_identity_key
Dec 06 09:38:07 compute-0 ceph-mon[74327]: Set ssh private key
Dec 06 09:38:07 compute-0 ceph-mon[74327]: from='client.14140 -' entity='client.admin' cmd=[{"prefix": "cephadm set-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 09:38:07 compute-0 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:38:07 compute-0 ceph-mon[74327]: Set ssh ssh_identity_pub
Dec 06 09:38:07 compute-0 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.14144 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "compute-0", "addr": "192.168.122.100", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 09:38:07 compute-0 sshd-session[75649]: Accepted publickey for ceph-admin from 192.168.122.100 port 47908 ssh2: RSA SHA256:Gxeh0g0CuyN5zOpDUv+8o0JynyC1ASnaMny1857KGxo
Dec 06 09:38:07 compute-0 systemd[1]: Created slice User Slice of UID 42477.
Dec 06 09:38:08 compute-0 systemd[1]: Starting User Runtime Directory /run/user/42477...
Dec 06 09:38:08 compute-0 systemd-logind[795]: New session 21 of user ceph-admin.
Dec 06 09:38:08 compute-0 systemd[1]: Finished User Runtime Directory /run/user/42477.
Dec 06 09:38:08 compute-0 systemd[1]: Starting User Manager for UID 42477...
Dec 06 09:38:08 compute-0 systemd[75653]: pam_unix(systemd-user:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Dec 06 09:38:08 compute-0 sshd-session[75666]: Accepted publickey for ceph-admin from 192.168.122.100 port 47912 ssh2: RSA SHA256:Gxeh0g0CuyN5zOpDUv+8o0JynyC1ASnaMny1857KGxo
Dec 06 09:38:08 compute-0 systemd-logind[795]: New session 23 of user ceph-admin.
Dec 06 09:38:08 compute-0 systemd[75653]: Queued start job for default target Main User Target.
Dec 06 09:38:08 compute-0 systemd[75653]: Created slice User Application Slice.
Dec 06 09:38:08 compute-0 systemd[75653]: Started Mark boot as successful after the user session has run 2 minutes.
Dec 06 09:38:08 compute-0 systemd[75653]: Started Daily Cleanup of User's Temporary Directories.
Dec 06 09:38:08 compute-0 systemd[75653]: Reached target Paths.
Dec 06 09:38:08 compute-0 systemd[75653]: Reached target Timers.
Dec 06 09:38:08 compute-0 systemd[75653]: Starting D-Bus User Message Bus Socket...
Dec 06 09:38:08 compute-0 systemd[75653]: Starting Create User's Volatile Files and Directories...
Dec 06 09:38:08 compute-0 systemd[75653]: Listening on D-Bus User Message Bus Socket.
Dec 06 09:38:08 compute-0 systemd[75653]: Reached target Sockets.
Dec 06 09:38:08 compute-0 systemd[75653]: Finished Create User's Volatile Files and Directories.
Dec 06 09:38:08 compute-0 systemd[75653]: Reached target Basic System.
Dec 06 09:38:08 compute-0 systemd[75653]: Reached target Main User Target.
Dec 06 09:38:08 compute-0 systemd[75653]: Startup finished in 177ms.
Dec 06 09:38:08 compute-0 systemd[1]: Started User Manager for UID 42477.
Dec 06 09:38:08 compute-0 systemd[1]: Started Session 21 of User ceph-admin.
Dec 06 09:38:08 compute-0 systemd[1]: Started Session 23 of User ceph-admin.
Dec 06 09:38:08 compute-0 sshd-session[75649]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Dec 06 09:38:08 compute-0 sshd-session[75666]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Dec 06 09:38:08 compute-0 sudo[75674]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 09:38:08 compute-0 sudo[75674]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:38:08 compute-0 sudo[75674]: pam_unix(sudo:session): session closed for user root
Dec 06 09:38:08 compute-0 ceph-mon[74327]: from='client.14142 -' entity='client.admin' cmd=[{"prefix": "cephadm get-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 09:38:08 compute-0 sshd-session[75699]: Accepted publickey for ceph-admin from 192.168.122.100 port 47924 ssh2: RSA SHA256:Gxeh0g0CuyN5zOpDUv+8o0JynyC1ASnaMny1857KGxo
Dec 06 09:38:08 compute-0 systemd-logind[795]: New session 24 of user ceph-admin.
Dec 06 09:38:08 compute-0 systemd[1]: Started Session 24 of User ceph-admin.
Dec 06 09:38:08 compute-0 sshd-session[75699]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Dec 06 09:38:08 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020053159 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 09:38:08 compute-0 sudo[75703]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 check-host --expect-hostname compute-0
Dec 06 09:38:08 compute-0 sudo[75703]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:38:08 compute-0 sudo[75703]: pam_unix(sudo:session): session closed for user root
Dec 06 09:38:08 compute-0 ceph-mgr[74618]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Dec 06 09:38:09 compute-0 sshd-session[75728]: Accepted publickey for ceph-admin from 192.168.122.100 port 47940 ssh2: RSA SHA256:Gxeh0g0CuyN5zOpDUv+8o0JynyC1ASnaMny1857KGxo
Dec 06 09:38:09 compute-0 systemd-logind[795]: New session 25 of user ceph-admin.
Dec 06 09:38:09 compute-0 systemd[1]: Started Session 25 of User ceph-admin.
Dec 06 09:38:09 compute-0 sshd-session[75728]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Dec 06 09:38:09 compute-0 sudo[75732]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36
Dec 06 09:38:09 compute-0 sudo[75732]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:38:09 compute-0 sudo[75732]: pam_unix(sudo:session): session closed for user root
Dec 06 09:38:09 compute-0 ceph-mgr[74618]: [cephadm INFO cephadm.serve] Deploying cephadm binary to compute-0
Dec 06 09:38:09 compute-0 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Deploying cephadm binary to compute-0
Dec 06 09:38:09 compute-0 sshd-session[75757]: Accepted publickey for ceph-admin from 192.168.122.100 port 47944 ssh2: RSA SHA256:Gxeh0g0CuyN5zOpDUv+8o0JynyC1ASnaMny1857KGxo
Dec 06 09:38:09 compute-0 systemd-logind[795]: New session 26 of user ceph-admin.
Dec 06 09:38:09 compute-0 systemd[1]: Started Session 26 of User ceph-admin.
Dec 06 09:38:09 compute-0 sshd-session[75757]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Dec 06 09:38:09 compute-0 ceph-mon[74327]: from='client.14144 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "compute-0", "addr": "192.168.122.100", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 09:38:09 compute-0 sudo[75761]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258
Dec 06 09:38:09 compute-0 sudo[75761]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:38:09 compute-0 sudo[75761]: pam_unix(sudo:session): session closed for user root
Dec 06 09:38:09 compute-0 sshd-session[75786]: Accepted publickey for ceph-admin from 192.168.122.100 port 47960 ssh2: RSA SHA256:Gxeh0g0CuyN5zOpDUv+8o0JynyC1ASnaMny1857KGxo
Dec 06 09:38:09 compute-0 systemd-logind[795]: New session 27 of user ceph-admin.
Dec 06 09:38:09 compute-0 systemd[1]: Started Session 27 of User ceph-admin.
Dec 06 09:38:09 compute-0 sshd-session[75786]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Dec 06 09:38:10 compute-0 sudo[75790]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-5ecd3f74-dade-5fc4-92ce-8950ae424258/var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258
Dec 06 09:38:10 compute-0 sudo[75790]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:38:10 compute-0 sudo[75790]: pam_unix(sudo:session): session closed for user root
Dec 06 09:38:10 compute-0 sshd-session[75815]: Accepted publickey for ceph-admin from 192.168.122.100 port 47974 ssh2: RSA SHA256:Gxeh0g0CuyN5zOpDUv+8o0JynyC1ASnaMny1857KGxo
Dec 06 09:38:10 compute-0 systemd-logind[795]: New session 28 of user ceph-admin.
Dec 06 09:38:10 compute-0 systemd[1]: Started Session 28 of User ceph-admin.
Dec 06 09:38:10 compute-0 sshd-session[75815]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Dec 06 09:38:10 compute-0 sudo[75819]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-5ecd3f74-dade-5fc4-92ce-8950ae424258/var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36.new
Dec 06 09:38:10 compute-0 sudo[75819]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:38:10 compute-0 sudo[75819]: pam_unix(sudo:session): session closed for user root
Dec 06 09:38:10 compute-0 ceph-mon[74327]: Deploying cephadm binary to compute-0
Dec 06 09:38:10 compute-0 sshd-session[75844]: Accepted publickey for ceph-admin from 192.168.122.100 port 47976 ssh2: RSA SHA256:Gxeh0g0CuyN5zOpDUv+8o0JynyC1ASnaMny1857KGxo
Dec 06 09:38:10 compute-0 systemd-logind[795]: New session 29 of user ceph-admin.
Dec 06 09:38:10 compute-0 systemd[1]: Started Session 29 of User ceph-admin.
Dec 06 09:38:10 compute-0 sshd-session[75844]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Dec 06 09:38:10 compute-0 sudo[75848]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-5ecd3f74-dade-5fc4-92ce-8950ae424258
Dec 06 09:38:10 compute-0 sudo[75848]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:38:10 compute-0 sudo[75848]: pam_unix(sudo:session): session closed for user root
Dec 06 09:38:10 compute-0 ceph-mgr[74618]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Dec 06 09:38:11 compute-0 sshd-session[75873]: Accepted publickey for ceph-admin from 192.168.122.100 port 47992 ssh2: RSA SHA256:Gxeh0g0CuyN5zOpDUv+8o0JynyC1ASnaMny1857KGxo
Dec 06 09:38:11 compute-0 systemd-logind[795]: New session 30 of user ceph-admin.
Dec 06 09:38:11 compute-0 systemd[1]: Started Session 30 of User ceph-admin.
Dec 06 09:38:11 compute-0 sshd-session[75873]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Dec 06 09:38:11 compute-0 sudo[75877]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-5ecd3f74-dade-5fc4-92ce-8950ae424258/var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36.new
Dec 06 09:38:11 compute-0 sudo[75877]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:38:11 compute-0 sudo[75877]: pam_unix(sudo:session): session closed for user root
Dec 06 09:38:11 compute-0 sshd-session[75902]: Accepted publickey for ceph-admin from 192.168.122.100 port 48006 ssh2: RSA SHA256:Gxeh0g0CuyN5zOpDUv+8o0JynyC1ASnaMny1857KGxo
Dec 06 09:38:11 compute-0 systemd-logind[795]: New session 31 of user ceph-admin.
Dec 06 09:38:11 compute-0 systemd[1]: Started Session 31 of User ceph-admin.
Dec 06 09:38:11 compute-0 sshd-session[75902]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Dec 06 09:38:12 compute-0 sshd-session[75929]: Accepted publickey for ceph-admin from 192.168.122.100 port 38678 ssh2: RSA SHA256:Gxeh0g0CuyN5zOpDUv+8o0JynyC1ASnaMny1857KGxo
Dec 06 09:38:12 compute-0 systemd-logind[795]: New session 32 of user ceph-admin.
Dec 06 09:38:12 compute-0 systemd[1]: Started Session 32 of User ceph-admin.
Dec 06 09:38:12 compute-0 sshd-session[75929]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Dec 06 09:38:12 compute-0 sudo[75933]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-5ecd3f74-dade-5fc4-92ce-8950ae424258/var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36.new /var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36
Dec 06 09:38:12 compute-0 sudo[75933]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:38:12 compute-0 sudo[75933]: pam_unix(sudo:session): session closed for user root
Dec 06 09:38:12 compute-0 ceph-mgr[74618]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Dec 06 09:38:12 compute-0 sshd-session[75958]: Accepted publickey for ceph-admin from 192.168.122.100 port 38686 ssh2: RSA SHA256:Gxeh0g0CuyN5zOpDUv+8o0JynyC1ASnaMny1857KGxo
Dec 06 09:38:13 compute-0 systemd-logind[795]: New session 33 of user ceph-admin.
Dec 06 09:38:13 compute-0 systemd[1]: Started Session 33 of User ceph-admin.
Dec 06 09:38:13 compute-0 sshd-session[75958]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Dec 06 09:38:13 compute-0 sudo[75962]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 check-host --expect-hostname compute-0
Dec 06 09:38:13 compute-0 sudo[75962]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:38:13 compute-0 sudo[75962]: pam_unix(sudo:session): session closed for user root
Dec 06 09:38:13 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Dec 06 09:38:13 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:38:13 compute-0 ceph-mgr[74618]: [cephadm INFO root] Added host compute-0
Dec 06 09:38:13 compute-0 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Added host compute-0
Dec 06 09:38:13 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Dec 06 09:38:13 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Dec 06 09:38:13 compute-0 eager_euler[75623]: Added host 'compute-0' with addr '192.168.122.100'
Dec 06 09:38:13 compute-0 systemd[1]: libpod-0166f750aaa5328129183a58393b68de1a207788f811cea924f1d5dfb0ef10d9.scope: Deactivated successfully.
Dec 06 09:38:13 compute-0 sudo[76008]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 09:38:13 compute-0 podman[76014]: 2025-12-06 09:38:13.666077341 +0000 UTC m=+0.044292777 container died 0166f750aaa5328129183a58393b68de1a207788f811cea924f1d5dfb0ef10d9 (image=quay.io/ceph/ceph:v19, name=eager_euler, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 09:38:13 compute-0 sudo[76008]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:38:13 compute-0 sudo[76008]: pam_unix(sudo:session): session closed for user root
Dec 06 09:38:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-3a25b6de5f301cb0fcd7a5d4d2d8e890530ec6fcf70ec524ba31f91af5e95fc7-merged.mount: Deactivated successfully.
Dec 06 09:38:13 compute-0 podman[76014]: 2025-12-06 09:38:13.704369929 +0000 UTC m=+0.082585355 container remove 0166f750aaa5328129183a58393b68de1a207788f811cea924f1d5dfb0ef10d9 (image=quay.io/ceph/ceph:v19, name=eager_euler, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 09:38:13 compute-0 systemd[1]: libpod-conmon-0166f750aaa5328129183a58393b68de1a207788f811cea924f1d5dfb0ef10d9.scope: Deactivated successfully.
Dec 06 09:38:13 compute-0 sudo[76048]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph:v19 --timeout 895 pull
Dec 06 09:38:13 compute-0 sudo[76048]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:38:13 compute-0 podman[76071]: 2025-12-06 09:38:13.786817641 +0000 UTC m=+0.050443197 container create 135fccb239d270d9d53e525c0f3d5224fe9bbe180a18967ad347b26c9f7ffa72 (image=quay.io/ceph/ceph:v19, name=flamboyant_austin, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 09:38:13 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054711 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 09:38:13 compute-0 systemd[1]: Started libpod-conmon-135fccb239d270d9d53e525c0f3d5224fe9bbe180a18967ad347b26c9f7ffa72.scope.
Dec 06 09:38:13 compute-0 podman[76071]: 2025-12-06 09:38:13.767095266 +0000 UTC m=+0.030720852 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 06 09:38:13 compute-0 systemd[1]: Started libcrun container.
Dec 06 09:38:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7522fb92c249476623c24ea53ce87b160c057d81dd5ae9993aa6aac8e3ef7fbb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 09:38:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7522fb92c249476623c24ea53ce87b160c057d81dd5ae9993aa6aac8e3ef7fbb/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 06 09:38:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7522fb92c249476623c24ea53ce87b160c057d81dd5ae9993aa6aac8e3ef7fbb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 09:38:13 compute-0 podman[76071]: 2025-12-06 09:38:13.912786234 +0000 UTC m=+0.176411860 container init 135fccb239d270d9d53e525c0f3d5224fe9bbe180a18967ad347b26c9f7ffa72 (image=quay.io/ceph/ceph:v19, name=flamboyant_austin, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default)
Dec 06 09:38:13 compute-0 podman[76071]: 2025-12-06 09:38:13.923643736 +0000 UTC m=+0.187269292 container start 135fccb239d270d9d53e525c0f3d5224fe9bbe180a18967ad347b26c9f7ffa72 (image=quay.io/ceph/ceph:v19, name=flamboyant_austin, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 09:38:13 compute-0 podman[76071]: 2025-12-06 09:38:13.927852368 +0000 UTC m=+0.191477994 container attach 135fccb239d270d9d53e525c0f3d5224fe9bbe180a18967ad347b26c9f7ffa72 (image=quay.io/ceph/ceph:v19, name=flamboyant_austin, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Dec 06 09:38:14 compute-0 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.14146 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 09:38:14 compute-0 ceph-mgr[74618]: [cephadm INFO root] Saving service mon spec with placement count:5
Dec 06 09:38:14 compute-0 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Saving service mon spec with placement count:5
Dec 06 09:38:14 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Dec 06 09:38:14 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:38:14 compute-0 flamboyant_austin[76089]: Scheduled mon update...
Dec 06 09:38:14 compute-0 systemd[1]: libpod-135fccb239d270d9d53e525c0f3d5224fe9bbe180a18967ad347b26c9f7ffa72.scope: Deactivated successfully.
Dec 06 09:38:14 compute-0 podman[76071]: 2025-12-06 09:38:14.360566348 +0000 UTC m=+0.624191924 container died 135fccb239d270d9d53e525c0f3d5224fe9bbe180a18967ad347b26c9f7ffa72 (image=quay.io/ceph/ceph:v19, name=flamboyant_austin, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid)
Dec 06 09:38:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-7522fb92c249476623c24ea53ce87b160c057d81dd5ae9993aa6aac8e3ef7fbb-merged.mount: Deactivated successfully.
Dec 06 09:38:14 compute-0 podman[76071]: 2025-12-06 09:38:14.410894762 +0000 UTC m=+0.674520348 container remove 135fccb239d270d9d53e525c0f3d5224fe9bbe180a18967ad347b26c9f7ffa72 (image=quay.io/ceph/ceph:v19, name=flamboyant_austin, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec 06 09:38:14 compute-0 systemd[1]: libpod-conmon-135fccb239d270d9d53e525c0f3d5224fe9bbe180a18967ad347b26c9f7ffa72.scope: Deactivated successfully.
Dec 06 09:38:14 compute-0 podman[76106]: 2025-12-06 09:38:14.509768845 +0000 UTC m=+0.512615613 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 06 09:38:14 compute-0 podman[76152]: 2025-12-06 09:38:14.537067779 +0000 UTC m=+0.092529740 container create d7d42ae0f200ce4a30d7131cbff1715998d271a60b83c60cd3a8a6c62c4b21b5 (image=quay.io/ceph/ceph:v19, name=trusting_ganguly, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Dec 06 09:38:14 compute-0 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:38:14 compute-0 ceph-mon[74327]: Added host compute-0
Dec 06 09:38:14 compute-0 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Dec 06 09:38:14 compute-0 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:38:14 compute-0 systemd[1]: Started libpod-conmon-d7d42ae0f200ce4a30d7131cbff1715998d271a60b83c60cd3a8a6c62c4b21b5.scope.
Dec 06 09:38:14 compute-0 podman[76152]: 2025-12-06 09:38:14.490448777 +0000 UTC m=+0.045910748 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 06 09:38:14 compute-0 systemd[1]: Started libcrun container.
Dec 06 09:38:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e357bfceab149a8ccd4310569d78b451b725eb6f576b16bf1cec9c0aa1cbf004/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 09:38:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e357bfceab149a8ccd4310569d78b451b725eb6f576b16bf1cec9c0aa1cbf004/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 09:38:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e357bfceab149a8ccd4310569d78b451b725eb6f576b16bf1cec9c0aa1cbf004/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 06 09:38:14 compute-0 podman[76152]: 2025-12-06 09:38:14.651774652 +0000 UTC m=+0.207236653 container init d7d42ae0f200ce4a30d7131cbff1715998d271a60b83c60cd3a8a6c62c4b21b5 (image=quay.io/ceph/ceph:v19, name=trusting_ganguly, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 09:38:14 compute-0 podman[76152]: 2025-12-06 09:38:14.666503899 +0000 UTC m=+0.221965850 container start d7d42ae0f200ce4a30d7131cbff1715998d271a60b83c60cd3a8a6c62c4b21b5 (image=quay.io/ceph/ceph:v19, name=trusting_ganguly, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 06 09:38:14 compute-0 podman[76152]: 2025-12-06 09:38:14.670347324 +0000 UTC m=+0.225809295 container attach d7d42ae0f200ce4a30d7131cbff1715998d271a60b83c60cd3a8a6c62c4b21b5 (image=quay.io/ceph/ceph:v19, name=trusting_ganguly, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 09:38:14 compute-0 podman[76179]: 2025-12-06 09:38:14.694327523 +0000 UTC m=+0.075273692 container create 6283aab014fbfd3345d5d32b881501ad2458068ab7734ea47bccf81a79ff611a (image=quay.io/ceph/ceph:v19, name=sleepy_brattain, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec 06 09:38:14 compute-0 systemd[1]: Started libpod-conmon-6283aab014fbfd3345d5d32b881501ad2458068ab7734ea47bccf81a79ff611a.scope.
Dec 06 09:38:14 compute-0 podman[76179]: 2025-12-06 09:38:14.664304306 +0000 UTC m=+0.045250525 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 06 09:38:14 compute-0 systemd[1]: Started libcrun container.
Dec 06 09:38:14 compute-0 podman[76179]: 2025-12-06 09:38:14.792949141 +0000 UTC m=+0.173895280 container init 6283aab014fbfd3345d5d32b881501ad2458068ab7734ea47bccf81a79ff611a (image=quay.io/ceph/ceph:v19, name=sleepy_brattain, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Dec 06 09:38:14 compute-0 podman[76179]: 2025-12-06 09:38:14.79801947 +0000 UTC m=+0.178965629 container start 6283aab014fbfd3345d5d32b881501ad2458068ab7734ea47bccf81a79ff611a (image=quay.io/ceph/ceph:v19, name=sleepy_brattain, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325)
Dec 06 09:38:14 compute-0 podman[76179]: 2025-12-06 09:38:14.802253833 +0000 UTC m=+0.183200062 container attach 6283aab014fbfd3345d5d32b881501ad2458068ab7734ea47bccf81a79ff611a (image=quay.io/ceph/ceph:v19, name=sleepy_brattain, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Dec 06 09:38:14 compute-0 sleepy_brattain[76199]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable)
Dec 06 09:38:14 compute-0 systemd[1]: libpod-6283aab014fbfd3345d5d32b881501ad2458068ab7734ea47bccf81a79ff611a.scope: Deactivated successfully.
Dec 06 09:38:14 compute-0 podman[76179]: 2025-12-06 09:38:14.89569569 +0000 UTC m=+0.276641849 container died 6283aab014fbfd3345d5d32b881501ad2458068ab7734ea47bccf81a79ff611a (image=quay.io/ceph/ceph:v19, name=sleepy_brattain, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec 06 09:38:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-e8615dc0d120f5a91f38ae865b06dfd38f7b69726079a9ef1236dd8ce2e64a4c-merged.mount: Deactivated successfully.
Dec 06 09:38:14 compute-0 ceph-mgr[74618]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Dec 06 09:38:14 compute-0 podman[76179]: 2025-12-06 09:38:14.950610283 +0000 UTC m=+0.331556452 container remove 6283aab014fbfd3345d5d32b881501ad2458068ab7734ea47bccf81a79ff611a (image=quay.io/ceph/ceph:v19, name=sleepy_brattain, ceph=True, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325)
Dec 06 09:38:14 compute-0 systemd[1]: libpod-conmon-6283aab014fbfd3345d5d32b881501ad2458068ab7734ea47bccf81a79ff611a.scope: Deactivated successfully.
Dec 06 09:38:15 compute-0 sudo[76048]: pam_unix(sudo:session): session closed for user root
Dec 06 09:38:15 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=container_image}] v 0)
Dec 06 09:38:15 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:38:15 compute-0 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.14148 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 09:38:15 compute-0 ceph-mgr[74618]: [cephadm INFO root] Saving service mgr spec with placement count:2
Dec 06 09:38:15 compute-0 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Saving service mgr spec with placement count:2
Dec 06 09:38:15 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Dec 06 09:38:15 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:38:15 compute-0 trusting_ganguly[76177]: Scheduled mgr update...
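The orch apply dispatch at 09:38:15 and the pair of "Saving service mgr spec with placement count:2" entries are the audit trail of a single orchestrator call made during bootstrap. Reconstructed as a CLI invocation (the count is taken from the log; the flag spelling is an assumption, and `ceph orch apply mgr 2` is an equivalent form):

    $ ceph orch apply mgr --placement=2   # save a service spec asking cephadm to keep 2 mgr daemons running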
Dec 06 09:38:15 compute-0 sudo[76234]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 09:38:15 compute-0 sudo[76234]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:38:15 compute-0 sudo[76234]: pam_unix(sudo:session): session closed for user root
Dec 06 09:38:15 compute-0 systemd[1]: libpod-d7d42ae0f200ce4a30d7131cbff1715998d271a60b83c60cd3a8a6c62c4b21b5.scope: Deactivated successfully.
Dec 06 09:38:15 compute-0 podman[76152]: 2025-12-06 09:38:15.109381538 +0000 UTC m=+0.664843449 container died d7d42ae0f200ce4a30d7131cbff1715998d271a60b83c60cd3a8a6c62c4b21b5 (image=quay.io/ceph/ceph:v19, name=trusting_ganguly, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 06 09:38:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-e357bfceab149a8ccd4310569d78b451b725eb6f576b16bf1cec9c0aa1cbf004-merged.mount: Deactivated successfully.
Dec 06 09:38:15 compute-0 podman[76152]: 2025-12-06 09:38:15.157467247 +0000 UTC m=+0.712929198 container remove d7d42ae0f200ce4a30d7131cbff1715998d271a60b83c60cd3a8a6c62c4b21b5 (image=quay.io/ceph/ceph:v19, name=trusting_ganguly, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Dec 06 09:38:15 compute-0 sudo[76262]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 check-host
Dec 06 09:38:15 compute-0 systemd[1]: libpod-conmon-d7d42ae0f200ce4a30d7131cbff1715998d271a60b83c60cd3a8a6c62c4b21b5.scope: Deactivated successfully.
Dec 06 09:38:15 compute-0 sudo[76262]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:38:15 compute-0 podman[76296]: 2025-12-06 09:38:15.223680072 +0000 UTC m=+0.048053900 container create 2fab9783974b608e6ec2820772e1342231a500c6da9cbdc7a04db2eaa82ba9ba (image=quay.io/ceph/ceph:v19, name=friendly_bose, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.license=GPLv2)
Dec 06 09:38:15 compute-0 systemd[1]: Started libpod-conmon-2fab9783974b608e6ec2820772e1342231a500c6da9cbdc7a04db2eaa82ba9ba.scope.
Dec 06 09:38:15 compute-0 podman[76296]: 2025-12-06 09:38:15.203362874 +0000 UTC m=+0.027736682 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 06 09:38:15 compute-0 systemd[1]: Started libcrun container.
Dec 06 09:38:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ce5aa46063f04cb2cd3c7003cfd366ee94b11ee56d3e5191c8ae0dea90e0dcd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 09:38:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ce5aa46063f04cb2cd3c7003cfd366ee94b11ee56d3e5191c8ae0dea90e0dcd/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 06 09:38:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ce5aa46063f04cb2cd3c7003cfd366ee94b11ee56d3e5191c8ae0dea90e0dcd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 09:38:15 compute-0 podman[76296]: 2025-12-06 09:38:15.330067242 +0000 UTC m=+0.154441060 container init 2fab9783974b608e6ec2820772e1342231a500c6da9cbdc7a04db2eaa82ba9ba (image=quay.io/ceph/ceph:v19, name=friendly_bose, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 09:38:15 compute-0 podman[76296]: 2025-12-06 09:38:15.341240081 +0000 UTC m=+0.165613909 container start 2fab9783974b608e6ec2820772e1342231a500c6da9cbdc7a04db2eaa82ba9ba (image=quay.io/ceph/ceph:v19, name=friendly_bose, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 09:38:15 compute-0 podman[76296]: 2025-12-06 09:38:15.345758228 +0000 UTC m=+0.170132056 container attach 2fab9783974b608e6ec2820772e1342231a500c6da9cbdc7a04db2eaa82ba9ba (image=quay.io/ceph/ceph:v19, name=friendly_bose, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 09:38:15 compute-0 sudo[76262]: pam_unix(sudo:session): session closed for user root
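The sudo entries for PID 76262 show the pattern the cephadm mgr module uses for every host operation in this log: a versioned copy of the cephadm binary kept under /var/lib/ceph/<fsid>/ is executed as root through sudo. Run by hand, this step is simply (assuming cephadm is on PATH):

    $ sudo cephadm --timeout 895 check-host   # verify podman, systemd, time sync and hostname prerequisites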
Dec 06 09:38:15 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 06 09:38:15 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:38:15 compute-0 sudo[76357]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 09:38:15 compute-0 sudo[76357]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:38:15 compute-0 sudo[76357]: pam_unix(sudo:session): session closed for user root
Dec 06 09:38:15 compute-0 ceph-mon[74327]: from='client.14146 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 09:38:15 compute-0 ceph-mon[74327]: Saving service mon spec with placement count:5
Dec 06 09:38:15 compute-0 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:38:15 compute-0 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:38:15 compute-0 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
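As with the mgr spec, the mon dispatch above maps to one orchestrator command; count:5 comes from the "Saving service mon spec" line, and the flag spelling is again an assumption:

    $ ceph orch apply mon --placement=5   # target five monitors once enough hosts join the cluster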
Dec 06 09:38:15 compute-0 sudo[76382]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ls
Dec 06 09:38:15 compute-0 sudo[76382]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
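This `ls` run (PID 76382, session closed at 09:38:16) is cephadm's daemon inventory; the --image digest pins the helper container to the exact bootstrap image. The standalone equivalent:

    $ sudo cephadm ls   # JSON list of every ceph daemon configured on this host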
Dec 06 09:38:15 compute-0 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.14150 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "crash", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 09:38:15 compute-0 ceph-mgr[74618]: [cephadm INFO root] Saving service crash spec with placement *
Dec 06 09:38:15 compute-0 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Saving service crash spec with placement *
Dec 06 09:38:15 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0)
Dec 06 09:38:15 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:38:15 compute-0 friendly_bose[76314]: Scheduled crash update...
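"placement *" in the crash spec means one crash-collector daemon per cluster host. The matching CLI call, with the glob quoted so the shell leaves it alone, is roughly:

    $ ceph orch apply crash '*'   # deploy the crash-reporting agent on all hosts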
Dec 06 09:38:15 compute-0 systemd[1]: libpod-2fab9783974b608e6ec2820772e1342231a500c6da9cbdc7a04db2eaa82ba9ba.scope: Deactivated successfully.
Dec 06 09:38:15 compute-0 podman[76296]: 2025-12-06 09:38:15.766186488 +0000 UTC m=+0.590560336 container died 2fab9783974b608e6ec2820772e1342231a500c6da9cbdc7a04db2eaa82ba9ba (image=quay.io/ceph/ceph:v19, name=friendly_bose, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325)
Dec 06 09:38:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-3ce5aa46063f04cb2cd3c7003cfd366ee94b11ee56d3e5191c8ae0dea90e0dcd-merged.mount: Deactivated successfully.
Dec 06 09:38:15 compute-0 podman[76296]: 2025-12-06 09:38:15.814782539 +0000 UTC m=+0.639156337 container remove 2fab9783974b608e6ec2820772e1342231a500c6da9cbdc7a04db2eaa82ba9ba (image=quay.io/ceph/ceph:v19, name=friendly_bose, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Dec 06 09:38:15 compute-0 systemd[1]: libpod-conmon-2fab9783974b608e6ec2820772e1342231a500c6da9cbdc7a04db2eaa82ba9ba.scope: Deactivated successfully.
Dec 06 09:38:15 compute-0 podman[76420]: 2025-12-06 09:38:15.895363703 +0000 UTC m=+0.052081318 container create 02b7b50d1bde203122cf90c68f7fa3f5bb271104a0ec0d3c17f6e1a08fbc3c2f (image=quay.io/ceph/ceph:v19, name=funny_carson, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=squid)
Dec 06 09:38:15 compute-0 systemd[1]: Started libpod-conmon-02b7b50d1bde203122cf90c68f7fa3f5bb271104a0ec0d3c17f6e1a08fbc3c2f.scope.
Dec 06 09:38:15 compute-0 podman[76420]: 2025-12-06 09:38:15.879067625 +0000 UTC m=+0.035785260 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 06 09:38:15 compute-0 systemd[1]: Started libcrun container.
Dec 06 09:38:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8902bf25bbaa6f4e0d44f7733f52228cfd6dd31890f0a675898cfc18407ea039/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 09:38:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8902bf25bbaa6f4e0d44f7733f52228cfd6dd31890f0a675898cfc18407ea039/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 09:38:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8902bf25bbaa6f4e0d44f7733f52228cfd6dd31890f0a675898cfc18407ea039/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 06 09:38:16 compute-0 podman[76420]: 2025-12-06 09:38:16.003301184 +0000 UTC m=+0.160018859 container init 02b7b50d1bde203122cf90c68f7fa3f5bb271104a0ec0d3c17f6e1a08fbc3c2f (image=quay.io/ceph/ceph:v19, name=funny_carson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Dec 06 09:38:16 compute-0 podman[76420]: 2025-12-06 09:38:16.012801679 +0000 UTC m=+0.169519304 container start 02b7b50d1bde203122cf90c68f7fa3f5bb271104a0ec0d3c17f6e1a08fbc3c2f (image=quay.io/ceph/ceph:v19, name=funny_carson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Dec 06 09:38:16 compute-0 podman[76420]: 2025-12-06 09:38:16.016463941 +0000 UTC m=+0.173181606 container attach 02b7b50d1bde203122cf90c68f7fa3f5bb271104a0ec0d3c17f6e1a08fbc3c2f (image=quay.io/ceph/ceph:v19, name=funny_carson, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 09:38:16 compute-0 podman[76531]: 2025-12-06 09:38:16.320720879 +0000 UTC m=+0.080039455 container exec 484d6ed1039c50317cf4b6067525b7ed0f8de7c568c9445500e62194ab25d04d (image=quay.io/ceph/ceph:v19, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 06 09:38:16 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/container_init}] v 0)
Dec 06 09:38:16 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2619669888' entity='client.admin' 
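The mon_command above sets a cephadm mgr-module option; these live under the mgr/cephadm/ prefix and are written with `ceph config set mgr ...`. The value is not shown in the log, so `true` below is an assumption:

    $ ceph config set mgr mgr/cephadm/container_init true   # value assumed; controls --init for daemon containers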
Dec 06 09:38:16 compute-0 systemd[1]: libpod-02b7b50d1bde203122cf90c68f7fa3f5bb271104a0ec0d3c17f6e1a08fbc3c2f.scope: Deactivated successfully.
Dec 06 09:38:16 compute-0 podman[76531]: 2025-12-06 09:38:16.420020351 +0000 UTC m=+0.179338877 container exec_died 484d6ed1039c50317cf4b6067525b7ed0f8de7c568c9445500e62194ab25d04d (image=quay.io/ceph/ceph:v19, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mon-compute-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 09:38:16 compute-0 podman[76420]: 2025-12-06 09:38:16.439788277 +0000 UTC m=+0.596505942 container died 02b7b50d1bde203122cf90c68f7fa3f5bb271104a0ec0d3c17f6e1a08fbc3c2f (image=quay.io/ceph/ceph:v19, name=funny_carson, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.build-date=20250325)
Dec 06 09:38:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-8902bf25bbaa6f4e0d44f7733f52228cfd6dd31890f0a675898cfc18407ea039-merged.mount: Deactivated successfully.
Dec 06 09:38:16 compute-0 podman[76420]: 2025-12-06 09:38:16.49311075 +0000 UTC m=+0.649828395 container remove 02b7b50d1bde203122cf90c68f7fa3f5bb271104a0ec0d3c17f6e1a08fbc3c2f (image=quay.io/ceph/ceph:v19, name=funny_carson, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec 06 09:38:16 compute-0 systemd[1]: libpod-conmon-02b7b50d1bde203122cf90c68f7fa3f5bb271104a0ec0d3c17f6e1a08fbc3c2f.scope: Deactivated successfully.
Dec 06 09:38:16 compute-0 podman[76579]: 2025-12-06 09:38:16.578329425 +0000 UTC m=+0.060549654 container create 00c0917a1ebbbab2fbd69083c13648b6d5b53e36da30d75ee2c21f6501aa0586 (image=quay.io/ceph/ceph:v19, name=jolly_hypatia, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 09:38:16 compute-0 ceph-mon[74327]: from='client.14148 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 09:38:16 compute-0 ceph-mon[74327]: Saving service mgr spec with placement count:2
Dec 06 09:38:16 compute-0 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:38:16 compute-0 ceph-mon[74327]: from='client.? 192.168.122.100:0/2619669888' entity='client.admin' 
Dec 06 09:38:16 compute-0 sudo[76382]: pam_unix(sudo:session): session closed for user root
Dec 06 09:38:16 compute-0 systemd[1]: Started libpod-conmon-00c0917a1ebbbab2fbd69083c13648b6d5b53e36da30d75ee2c21f6501aa0586.scope.
Dec 06 09:38:16 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 06 09:38:16 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:38:16 compute-0 podman[76579]: 2025-12-06 09:38:16.54786142 +0000 UTC m=+0.030081709 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 06 09:38:16 compute-0 systemd[1]: Started libcrun container.
Dec 06 09:38:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/acd91377ddd6c61d1f843e66761c7ad3781bbc1962b9e39de8b7fa7a24012ba3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 09:38:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/acd91377ddd6c61d1f843e66761c7ad3781bbc1962b9e39de8b7fa7a24012ba3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 09:38:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/acd91377ddd6c61d1f843e66761c7ad3781bbc1962b9e39de8b7fa7a24012ba3/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 06 09:38:16 compute-0 podman[76579]: 2025-12-06 09:38:16.666080122 +0000 UTC m=+0.148300331 container init 00c0917a1ebbbab2fbd69083c13648b6d5b53e36da30d75ee2c21f6501aa0586 (image=quay.io/ceph/ceph:v19, name=jolly_hypatia, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS)
Dec 06 09:38:16 compute-0 podman[76579]: 2025-12-06 09:38:16.676461664 +0000 UTC m=+0.158681863 container start 00c0917a1ebbbab2fbd69083c13648b6d5b53e36da30d75ee2c21f6501aa0586 (image=quay.io/ceph/ceph:v19, name=jolly_hypatia, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 06 09:38:16 compute-0 podman[76579]: 2025-12-06 09:38:16.680001794 +0000 UTC m=+0.162221993 container attach 00c0917a1ebbbab2fbd69083c13648b6d5b53e36da30d75ee2c21f6501aa0586 (image=quay.io/ceph/ceph:v19, name=jolly_hypatia, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec 06 09:38:16 compute-0 sudo[76611]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 09:38:16 compute-0 sudo[76611]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:38:16 compute-0 sudo[76611]: pam_unix(sudo:session): session closed for user root
Dec 06 09:38:16 compute-0 sudo[76638]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Dec 06 09:38:16 compute-0 sudo[76638]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:38:16 compute-0 ceph-mgr[74618]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Dec 06 09:38:16 compute-0 systemd[1]: proc-sys-fs-binfmt_misc.automount: Got automount request for /proc/sys/fs/binfmt_misc, triggered by 76694 (sysctl)
Dec 06 09:38:17 compute-0 systemd[1]: Mounting Arbitrary Executable File Formats File System...
Dec 06 09:38:17 compute-0 systemd[1]: Mounted Arbitrary Executable File Formats File System.
Dec 06 09:38:17 compute-0 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.14154 -' entity='client.admin' cmd=[{"prefix": "orch client-keyring set", "entity": "client.admin", "placement": "label:_admin", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 09:38:17 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/client_keyrings}] v 0)
Dec 06 09:38:17 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
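The client-keyring rule dispatched at 09:38:17 makes cephadm maintain /etc/ceph/ceph.client.admin.keyring on every host carrying the _admin label. The cmd= field shows the call verbatim:

    $ ceph orch client-keyring set client.admin label:_admin   # sync the admin keyring to _admin-labelled hosts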
Dec 06 09:38:17 compute-0 systemd[1]: libpod-00c0917a1ebbbab2fbd69083c13648b6d5b53e36da30d75ee2c21f6501aa0586.scope: Deactivated successfully.
Dec 06 09:38:17 compute-0 podman[76579]: 2025-12-06 09:38:17.118819682 +0000 UTC m=+0.601039901 container died 00c0917a1ebbbab2fbd69083c13648b6d5b53e36da30d75ee2c21f6501aa0586 (image=quay.io/ceph/ceph:v19, name=jolly_hypatia, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 09:38:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-acd91377ddd6c61d1f843e66761c7ad3781bbc1962b9e39de8b7fa7a24012ba3-merged.mount: Deactivated successfully.
Dec 06 09:38:17 compute-0 podman[76579]: 2025-12-06 09:38:17.217735477 +0000 UTC m=+0.699955666 container remove 00c0917a1ebbbab2fbd69083c13648b6d5b53e36da30d75ee2c21f6501aa0586 (image=quay.io/ceph/ceph:v19, name=jolly_hypatia, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1)
Dec 06 09:38:17 compute-0 systemd[1]: libpod-conmon-00c0917a1ebbbab2fbd69083c13648b6d5b53e36da30d75ee2c21f6501aa0586.scope: Deactivated successfully.
Dec 06 09:38:17 compute-0 podman[76717]: 2025-12-06 09:38:17.284897689 +0000 UTC m=+0.046762785 container create ea158ab14833d9811b1be989120c86bc6cbb3aa40c81c42f6dfc8b41ec265281 (image=quay.io/ceph/ceph:v19, name=thirsty_joliot, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Dec 06 09:38:17 compute-0 systemd[1]: Started libpod-conmon-ea158ab14833d9811b1be989120c86bc6cbb3aa40c81c42f6dfc8b41ec265281.scope.
Dec 06 09:38:17 compute-0 sudo[76638]: pam_unix(sudo:session): session closed for user root
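The gather-facts session (PID 76638) profiles the host for the orchestrator; the sysctl invocation that triggered the binfmt_misc automount at 09:38:16 appears to fall inside this run. Standalone:

    $ sudo cephadm gather-facts   # JSON report of CPU, memory, NICs, kernel and OS details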
Dec 06 09:38:17 compute-0 podman[76717]: 2025-12-06 09:38:17.26343692 +0000 UTC m=+0.025302056 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 06 09:38:17 compute-0 systemd[1]: Started libcrun container.
Dec 06 09:38:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b00e53b083dd17efcde211565a234f8a3da26e47584768b1983508d8e40fcf43/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 06 09:38:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b00e53b083dd17efcde211565a234f8a3da26e47584768b1983508d8e40fcf43/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 09:38:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b00e53b083dd17efcde211565a234f8a3da26e47584768b1983508d8e40fcf43/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 09:38:17 compute-0 podman[76717]: 2025-12-06 09:38:17.390178868 +0000 UTC m=+0.152043984 container init ea158ab14833d9811b1be989120c86bc6cbb3aa40c81c42f6dfc8b41ec265281 (image=quay.io/ceph/ceph:v19, name=thirsty_joliot, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec 06 09:38:17 compute-0 podman[76717]: 2025-12-06 09:38:17.404892125 +0000 UTC m=+0.166757211 container start ea158ab14833d9811b1be989120c86bc6cbb3aa40c81c42f6dfc8b41ec265281 (image=quay.io/ceph/ceph:v19, name=thirsty_joliot, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 09:38:17 compute-0 podman[76717]: 2025-12-06 09:38:17.409630688 +0000 UTC m=+0.171495864 container attach ea158ab14833d9811b1be989120c86bc6cbb3aa40c81c42f6dfc8b41ec265281 (image=quay.io/ceph/ceph:v19, name=thirsty_joliot, io.buildah.version=1.40.1, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 09:38:17 compute-0 sudo[76749]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 09:38:17 compute-0 sudo[76749]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:38:17 compute-0 sudo[76749]: pam_unix(sudo:session): session closed for user root
Dec 06 09:38:17 compute-0 sudo[76776]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 list-networks
Dec 06 09:38:17 compute-0 sudo[76776]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:38:17 compute-0 ceph-mon[74327]: from='client.14150 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "crash", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 09:38:17 compute-0 ceph-mon[74327]: Saving service crash spec with placement *
Dec 06 09:38:17 compute-0 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:38:17 compute-0 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:38:17 compute-0 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.14156 -' entity='client.admin' cmd=[{"prefix": "orch host label add", "hostname": "compute-0", "label": "_admin", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 09:38:17 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Dec 06 09:38:17 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:38:17 compute-0 ceph-mgr[74618]: [cephadm INFO root] Added label _admin to host compute-0
Dec 06 09:38:17 compute-0 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Added label _admin to host compute-0
Dec 06 09:38:17 compute-0 thirsty_joliot[76746]: Added label _admin to host compute-0
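Labelling the bootstrap host _admin is what makes the client-keyring rule above match it; the cmd= field again records the exact call:

    $ ceph orch host label add compute-0 _admin   # mark compute-0 as an admin host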
Dec 06 09:38:17 compute-0 systemd[1]: libpod-ea158ab14833d9811b1be989120c86bc6cbb3aa40c81c42f6dfc8b41ec265281.scope: Deactivated successfully.
Dec 06 09:38:17 compute-0 podman[76717]: 2025-12-06 09:38:17.844902107 +0000 UTC m=+0.606767223 container died ea158ab14833d9811b1be989120c86bc6cbb3aa40c81c42f6dfc8b41ec265281 (image=quay.io/ceph/ceph:v19, name=thirsty_joliot, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Dec 06 09:38:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-b00e53b083dd17efcde211565a234f8a3da26e47584768b1983508d8e40fcf43-merged.mount: Deactivated successfully.
Dec 06 09:38:17 compute-0 podman[76717]: 2025-12-06 09:38:17.897517486 +0000 UTC m=+0.659382612 container remove ea158ab14833d9811b1be989120c86bc6cbb3aa40c81c42f6dfc8b41ec265281 (image=quay.io/ceph/ceph:v19, name=thirsty_joliot, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 09:38:17 compute-0 sudo[76776]: pam_unix(sudo:session): session closed for user root
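list-networks feeds the subnet inventory cephadm uses when placing daemons such as mons on the right network. Standalone:

    $ sudo cephadm list-networks   # JSON map of subnet -> interface -> addresses on this host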
Dec 06 09:38:17 compute-0 systemd[1]: libpod-conmon-ea158ab14833d9811b1be989120c86bc6cbb3aa40c81c42f6dfc8b41ec265281.scope: Deactivated successfully.
Dec 06 09:38:17 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 06 09:38:17 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:38:17 compute-0 podman[76853]: 2025-12-06 09:38:17.996035692 +0000 UTC m=+0.065135774 container create e7595c2ca63877939ab883c0e99fcfae09cbf26e9ad0afa3ab752619edbaf466 (image=quay.io/ceph/ceph:v19, name=silly_williamson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Dec 06 09:38:18 compute-0 sudo[76855]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 09:38:18 compute-0 sudo[76855]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:38:18 compute-0 sudo[76855]: pam_unix(sudo:session): session closed for user root
Dec 06 09:38:18 compute-0 systemd[1]: Started libpod-conmon-e7595c2ca63877939ab883c0e99fcfae09cbf26e9ad0afa3ab752619edbaf466.scope.
Dec 06 09:38:18 compute-0 podman[76853]: 2025-12-06 09:38:17.969568215 +0000 UTC m=+0.038668357 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 06 09:38:18 compute-0 systemd[1]: Started libcrun container.
Dec 06 09:38:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3cc80dd3b5423747eee8b2dd692df86bb9fcf0f3a9d6c1b83c734c2b7a4418c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 09:38:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3cc80dd3b5423747eee8b2dd692df86bb9fcf0f3a9d6c1b83c734c2b7a4418c/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 06 09:38:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3cc80dd3b5423747eee8b2dd692df86bb9fcf0f3a9d6c1b83c734c2b7a4418c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 09:38:18 compute-0 podman[76853]: 2025-12-06 09:38:18.09767082 +0000 UTC m=+0.166770962 container init e7595c2ca63877939ab883c0e99fcfae09cbf26e9ad0afa3ab752619edbaf466 (image=quay.io/ceph/ceph:v19, name=silly_williamson, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 09:38:18 compute-0 sudo[76894]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 5ecd3f74-dade-5fc4-92ce-8950ae424258 -- inventory --format=json-pretty --filter-for-batch
Dec 06 09:38:18 compute-0 podman[76853]: 2025-12-06 09:38:18.107568123 +0000 UTC m=+0.176668205 container start e7595c2ca63877939ab883c0e99fcfae09cbf26e9ad0afa3ab752619edbaf466 (image=quay.io/ceph/ceph:v19, name=silly_williamson, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec 06 09:38:18 compute-0 sudo[76894]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:38:18 compute-0 podman[76853]: 2025-12-06 09:38:18.111611632 +0000 UTC m=+0.180711714 container attach e7595c2ca63877939ab883c0e99fcfae09cbf26e9ad0afa3ab752619edbaf466 (image=quay.io/ceph/ceph:v19, name=silly_williamson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
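The sudo COMMAND= line for PID 76894 is the disk-discovery step: cephadm wraps ceph-volume in a container and filters for devices an OSD batch could consume. Minus the pinned --image, it is equivalent to:

    $ sudo cephadm ceph-volume --fsid 5ecd3f74-dade-5fc4-92ce-8950ae424258 -- \
          inventory --format=json-pretty --filter-for-batch   # list block devices eligible for OSDs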
Dec 06 09:38:18 compute-0 podman[76983]: 2025-12-06 09:38:18.484155576 +0000 UTC m=+0.028287005 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 06 09:38:18 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/dashboard/cluster/status}] v 0)
Dec 06 09:38:18 compute-0 ceph-mgr[74618]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Dec 06 09:38:19 compute-0 sshd-session[76998]: Connection closed by 43.163.93.82 port 33200
Dec 06 09:38:19 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 09:38:19 compute-0 podman[76983]: 2025-12-06 09:38:19.565018939 +0000 UTC m=+1.109150318 container create de2369fad071a4318f380ba366b7746ffc7b79dcc072568fcb2ec7bd3b03731c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_shockley, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default)
Dec 06 09:38:19 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1599237284' entity='client.admin' 
Dec 06 09:38:19 compute-0 silly_williamson[76896]: set mgr/dashboard/cluster/status
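"set mgr/dashboard/cluster/status" is the acknowledgement `ceph config-key set` prints, so bootstrap is recording dashboard state under that key. The value is not visible in the log; the one below is a placeholder:

    $ ceph config-key set mgr/dashboard/cluster/status INSTALLED   # value assumed, not shown in the log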
Dec 06 09:38:19 compute-0 systemd[1]: libpod-e7595c2ca63877939ab883c0e99fcfae09cbf26e9ad0afa3ab752619edbaf466.scope: Deactivated successfully.
Dec 06 09:38:20 compute-0 ceph-mon[74327]: from='client.14154 -' entity='client.admin' cmd=[{"prefix": "orch client-keyring set", "entity": "client.admin", "placement": "label:_admin", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 09:38:20 compute-0 ceph-mon[74327]: from='client.14156 -' entity='client.admin' cmd=[{"prefix": "orch host label add", "hostname": "compute-0", "label": "_admin", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 09:38:20 compute-0 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:38:20 compute-0 ceph-mon[74327]: Added label _admin to host compute-0
Dec 06 09:38:20 compute-0 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:38:20 compute-0 systemd[1]: Started libpod-conmon-de2369fad071a4318f380ba366b7746ffc7b79dcc072568fcb2ec7bd3b03731c.scope.
Dec 06 09:38:20 compute-0 systemd[1]: Started libcrun container.
Dec 06 09:38:20 compute-0 podman[76983]: 2025-12-06 09:38:20.151293701 +0000 UTC m=+1.695425090 container init de2369fad071a4318f380ba366b7746ffc7b79dcc072568fcb2ec7bd3b03731c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_shockley, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Dec 06 09:38:20 compute-0 podman[76853]: 2025-12-06 09:38:20.153454803 +0000 UTC m=+2.222554895 container died e7595c2ca63877939ab883c0e99fcfae09cbf26e9ad0afa3ab752619edbaf466 (image=quay.io/ceph/ceph:v19, name=silly_williamson, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec 06 09:38:20 compute-0 podman[76983]: 2025-12-06 09:38:20.163154123 +0000 UTC m=+1.707285472 container start de2369fad071a4318f380ba366b7746ffc7b79dcc072568fcb2ec7bd3b03731c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_shockley, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid)
Dec 06 09:38:20 compute-0 modest_shockley[77013]: 167 167
Dec 06 09:38:20 compute-0 podman[76983]: 2025-12-06 09:38:20.170775672 +0000 UTC m=+1.714907051 container attach de2369fad071a4318f380ba366b7746ffc7b79dcc072568fcb2ec7bd3b03731c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_shockley, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 09:38:20 compute-0 systemd[1]: libpod-de2369fad071a4318f380ba366b7746ffc7b79dcc072568fcb2ec7bd3b03731c.scope: Deactivated successfully.
Dec 06 09:38:20 compute-0 podman[76983]: 2025-12-06 09:38:20.172003076 +0000 UTC m=+1.716134495 container died de2369fad071a4318f380ba366b7746ffc7b79dcc072568fcb2ec7bd3b03731c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_shockley, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 09:38:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-d3cc80dd3b5423747eee8b2dd692df86bb9fcf0f3a9d6c1b83c734c2b7a4418c-merged.mount: Deactivated successfully.
Dec 06 09:38:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-f273593ec0c5c125b28d4ab6b9662ecd4d14b1752acaf4aa516f0177722e697b-merged.mount: Deactivated successfully.
Dec 06 09:38:20 compute-0 podman[76983]: 2025-12-06 09:38:20.229634123 +0000 UTC m=+1.773765502 container remove de2369fad071a4318f380ba366b7746ffc7b79dcc072568fcb2ec7bd3b03731c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_shockley, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 06 09:38:20 compute-0 systemd[1]: libpod-conmon-de2369fad071a4318f380ba366b7746ffc7b79dcc072568fcb2ec7bd3b03731c.scope: Deactivated successfully.
Dec 06 09:38:20 compute-0 podman[76853]: 2025-12-06 09:38:20.250043362 +0000 UTC m=+2.319143454 container remove e7595c2ca63877939ab883c0e99fcfae09cbf26e9ad0afa3ab752619edbaf466 (image=quay.io/ceph/ceph:v19, name=silly_williamson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 09:38:20 compute-0 systemd[1]: libpod-conmon-e7595c2ca63877939ab883c0e99fcfae09cbf26e9ad0afa3ab752619edbaf466.scope: Deactivated successfully.
Dec 06 09:38:20 compute-0 sudo[73275]: pam_unix(sudo:session): session closed for user root
Dec 06 09:38:20 compute-0 podman[77040]: 2025-12-06 09:38:20.513720186 +0000 UTC m=+0.072214243 container create c1e8a232087105e49b4d7d25f8f3f3a5dd432097b567250f3b8aaa3baee664bb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_feynman, ceph=True, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Dec 06 09:38:20 compute-0 systemd[1]: Started libpod-conmon-c1e8a232087105e49b4d7d25f8f3f3a5dd432097b567250f3b8aaa3baee664bb.scope.
Dec 06 09:38:20 compute-0 podman[77040]: 2025-12-06 09:38:20.487510534 +0000 UTC m=+0.046004681 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 06 09:38:20 compute-0 systemd[1]: Started libcrun container.
Dec 06 09:38:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/06b9dce1518cc29b31df1e3f14e77a3241f334e69f1ce9e25d34806422819da1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 09:38:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/06b9dce1518cc29b31df1e3f14e77a3241f334e69f1ce9e25d34806422819da1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 09:38:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/06b9dce1518cc29b31df1e3f14e77a3241f334e69f1ce9e25d34806422819da1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 09:38:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/06b9dce1518cc29b31df1e3f14e77a3241f334e69f1ce9e25d34806422819da1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 09:38:20 compute-0 podman[77040]: 2025-12-06 09:38:20.615957555 +0000 UTC m=+0.174451682 container init c1e8a232087105e49b4d7d25f8f3f3a5dd432097b567250f3b8aaa3baee664bb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_feynman, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Dec 06 09:38:20 compute-0 podman[77040]: 2025-12-06 09:38:20.630053491 +0000 UTC m=+0.188547578 container start c1e8a232087105e49b4d7d25f8f3f3a5dd432097b567250f3b8aaa3baee664bb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_feynman, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 06 09:38:20 compute-0 podman[77040]: 2025-12-06 09:38:20.6346336 +0000 UTC m=+0.193127687 container attach c1e8a232087105e49b4d7d25f8f3f3a5dd432097b567250f3b8aaa3baee664bb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_feynman, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec 06 09:38:20 compute-0 sudo[77084]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vtdevcsagisfeiozdrmnldhvvvcatsxt ; /usr/bin/python3'
Dec 06 09:38:20 compute-0 sudo[77084]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:38:20 compute-0 python3[77086]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 5ecd3f74-dade-5fc4-92ce-8950ae424258 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set mgr mgr/cephadm/use_repo_digest false _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 09:38:20 compute-0 ceph-mgr[74618]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Dec 06 09:38:20 compute-0 podman[77091]: 2025-12-06 09:38:20.960186064 +0000 UTC m=+0.056243540 container create 0d5ae9171f71a66a2b06c83624fc1607fcaa69df90c717ce535b6023cef3bc1f (image=quay.io/ceph/ceph:v19, name=nervous_shirley, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 09:38:21 compute-0 systemd[1]: Started libpod-conmon-0d5ae9171f71a66a2b06c83624fc1607fcaa69df90c717ce535b6023cef3bc1f.scope.
Dec 06 09:38:21 compute-0 podman[77091]: 2025-12-06 09:38:20.928385543 +0000 UTC m=+0.024443139 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 06 09:38:21 compute-0 systemd[1]: Started libcrun container.
Dec 06 09:38:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0712903dfffcecaba63226260143e2bb53420b1b55529c40bc27ddae7f729f3/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 09:38:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0712903dfffcecaba63226260143e2bb53420b1b55529c40bc27ddae7f729f3/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 09:38:21 compute-0 podman[77091]: 2025-12-06 09:38:21.066003533 +0000 UTC m=+0.162061059 container init 0d5ae9171f71a66a2b06c83624fc1607fcaa69df90c717ce535b6023cef3bc1f (image=quay.io/ceph/ceph:v19, name=nervous_shirley, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 09:38:21 compute-0 ceph-mon[74327]: from='client.? 192.168.122.100:0/1599237284' entity='client.admin' 
Dec 06 09:38:21 compute-0 podman[77091]: 2025-12-06 09:38:21.078966147 +0000 UTC m=+0.175023623 container start 0d5ae9171f71a66a2b06c83624fc1607fcaa69df90c717ce535b6023cef3bc1f (image=quay.io/ceph/ceph:v19, name=nervous_shirley, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 06 09:38:21 compute-0 podman[77091]: 2025-12-06 09:38:21.08272144 +0000 UTC m=+0.178778906 container attach 0d5ae9171f71a66a2b06c83624fc1607fcaa69df90c717ce535b6023cef3bc1f (image=quay.io/ceph/ceph:v19, name=nervous_shirley, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 09:38:21 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/use_repo_digest}] v 0)
Dec 06 09:38:21 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1942879497' entity='client.admin' 
Dec 06 09:38:21 compute-0 systemd[1]: libpod-0d5ae9171f71a66a2b06c83624fc1607fcaa69df90c717ce535b6023cef3bc1f.scope: Deactivated successfully.
Dec 06 09:38:21 compute-0 priceless_feynman[77056]: [
Dec 06 09:38:21 compute-0 priceless_feynman[77056]:     {
Dec 06 09:38:21 compute-0 priceless_feynman[77056]:         "available": false,
Dec 06 09:38:21 compute-0 priceless_feynman[77056]:         "being_replaced": false,
Dec 06 09:38:21 compute-0 priceless_feynman[77056]:         "ceph_device_lvm": false,
Dec 06 09:38:21 compute-0 priceless_feynman[77056]:         "device_id": "QEMU_DVD-ROM_QM00001",
Dec 06 09:38:21 compute-0 priceless_feynman[77056]:         "lsm_data": {},
Dec 06 09:38:21 compute-0 priceless_feynman[77056]:         "lvs": [],
Dec 06 09:38:21 compute-0 priceless_feynman[77056]:         "path": "/dev/sr0",
Dec 06 09:38:21 compute-0 priceless_feynman[77056]:         "rejected_reasons": [
Dec 06 09:38:21 compute-0 priceless_feynman[77056]:             "Has a FileSystem",
Dec 06 09:38:21 compute-0 priceless_feynman[77056]:             "Insufficient space (<5GB)"
Dec 06 09:38:21 compute-0 priceless_feynman[77056]:         ],
Dec 06 09:38:21 compute-0 priceless_feynman[77056]:         "sys_api": {
Dec 06 09:38:21 compute-0 priceless_feynman[77056]:             "actuators": null,
Dec 06 09:38:21 compute-0 priceless_feynman[77056]:             "device_nodes": [
Dec 06 09:38:21 compute-0 priceless_feynman[77056]:                 "sr0"
Dec 06 09:38:21 compute-0 priceless_feynman[77056]:             ],
Dec 06 09:38:21 compute-0 priceless_feynman[77056]:             "devname": "sr0",
Dec 06 09:38:21 compute-0 priceless_feynman[77056]:             "human_readable_size": "482.00 KB",
Dec 06 09:38:21 compute-0 priceless_feynman[77056]:             "id_bus": "ata",
Dec 06 09:38:21 compute-0 priceless_feynman[77056]:             "model": "QEMU DVD-ROM",
Dec 06 09:38:21 compute-0 priceless_feynman[77056]:             "nr_requests": "2",
Dec 06 09:38:21 compute-0 priceless_feynman[77056]:             "parent": "/dev/sr0",
Dec 06 09:38:21 compute-0 priceless_feynman[77056]:             "partitions": {},
Dec 06 09:38:21 compute-0 priceless_feynman[77056]:             "path": "/dev/sr0",
Dec 06 09:38:21 compute-0 priceless_feynman[77056]:             "removable": "1",
Dec 06 09:38:21 compute-0 priceless_feynman[77056]:             "rev": "2.5+",
Dec 06 09:38:21 compute-0 priceless_feynman[77056]:             "ro": "0",
Dec 06 09:38:21 compute-0 priceless_feynman[77056]:             "rotational": "1",
Dec 06 09:38:21 compute-0 priceless_feynman[77056]:             "sas_address": "",
Dec 06 09:38:21 compute-0 priceless_feynman[77056]:             "sas_device_handle": "",
Dec 06 09:38:21 compute-0 priceless_feynman[77056]:             "scheduler_mode": "mq-deadline",
Dec 06 09:38:21 compute-0 priceless_feynman[77056]:             "sectors": 0,
Dec 06 09:38:21 compute-0 priceless_feynman[77056]:             "sectorsize": "2048",
Dec 06 09:38:21 compute-0 priceless_feynman[77056]:             "size": 493568.0,
Dec 06 09:38:21 compute-0 priceless_feynman[77056]:             "support_discard": "2048",
Dec 06 09:38:21 compute-0 priceless_feynman[77056]:             "type": "disk",
Dec 06 09:38:21 compute-0 priceless_feynman[77056]:             "vendor": "QEMU"
Dec 06 09:38:21 compute-0 priceless_feynman[77056]:         }
Dec 06 09:38:21 compute-0 priceless_feynman[77056]:     }
Dec 06 09:38:21 compute-0 priceless_feynman[77056]: ]
Dec 06 09:38:21 compute-0 podman[78120]: 2025-12-06 09:38:21.566000348 +0000 UTC m=+0.030983157 container died 0d5ae9171f71a66a2b06c83624fc1607fcaa69df90c717ce535b6023cef3bc1f (image=quay.io/ceph/ceph:v19, name=nervous_shirley, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Dec 06 09:38:21 compute-0 systemd[1]: libpod-c1e8a232087105e49b4d7d25f8f3f3a5dd432097b567250f3b8aaa3baee664bb.scope: Deactivated successfully.
Dec 06 09:38:21 compute-0 podman[77040]: 2025-12-06 09:38:21.576720948 +0000 UTC m=+1.135215035 container died c1e8a232087105e49b4d7d25f8f3f3a5dd432097b567250f3b8aaa3baee664bb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_feynman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Dec 06 09:38:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-b0712903dfffcecaba63226260143e2bb53420b1b55529c40bc27ddae7f729f3-merged.mount: Deactivated successfully.
Dec 06 09:38:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-06b9dce1518cc29b31df1e3f14e77a3241f334e69f1ce9e25d34806422819da1-merged.mount: Deactivated successfully.
Dec 06 09:38:21 compute-0 podman[78120]: 2025-12-06 09:38:21.61418319 +0000 UTC m=+0.079166029 container remove 0d5ae9171f71a66a2b06c83624fc1607fcaa69df90c717ce535b6023cef3bc1f (image=quay.io/ceph/ceph:v19, name=nervous_shirley, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1)
Dec 06 09:38:21 compute-0 systemd[1]: libpod-conmon-0d5ae9171f71a66a2b06c83624fc1607fcaa69df90c717ce535b6023cef3bc1f.scope: Deactivated successfully.
Dec 06 09:38:21 compute-0 podman[77040]: 2025-12-06 09:38:21.634104179 +0000 UTC m=+1.192598266 container remove c1e8a232087105e49b4d7d25f8f3f3a5dd432097b567250f3b8aaa3baee664bb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_feynman, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 09:38:21 compute-0 sudo[77084]: pam_unix(sudo:session): session closed for user root
Dec 06 09:38:21 compute-0 systemd[1]: libpod-conmon-c1e8a232087105e49b4d7d25f8f3f3a5dd432097b567250f3b8aaa3baee664bb.scope: Deactivated successfully.
Dec 06 09:38:21 compute-0 sudo[76894]: pam_unix(sudo:session): session closed for user root
Dec 06 09:38:21 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 06 09:38:21 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:38:21 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 06 09:38:21 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:38:21 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 06 09:38:21 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:38:21 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 06 09:38:21 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:38:21 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Dec 06 09:38:21 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Dec 06 09:38:21 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 06 09:38:21 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 09:38:21 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 06 09:38:21 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 09:38:21 compute-0 ceph-mgr[74618]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.conf
Dec 06 09:38:21 compute-0 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.conf
Dec 06 09:38:21 compute-0 sudo[78295]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Dec 06 09:38:21 compute-0 sudo[78295]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:38:21 compute-0 sudo[78295]: pam_unix(sudo:session): session closed for user root
Dec 06 09:38:21 compute-0 sudo[78320]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-5ecd3f74-dade-5fc4-92ce-8950ae424258/etc/ceph
Dec 06 09:38:21 compute-0 sudo[78320]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:38:21 compute-0 sudo[78320]: pam_unix(sudo:session): session closed for user root
Dec 06 09:38:22 compute-0 sudo[78345]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-5ecd3f74-dade-5fc4-92ce-8950ae424258/etc/ceph/ceph.conf.new
Dec 06 09:38:22 compute-0 sudo[78345]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:38:22 compute-0 sudo[78345]: pam_unix(sudo:session): session closed for user root
Dec 06 09:38:22 compute-0 sudo[78397]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-5ecd3f74-dade-5fc4-92ce-8950ae424258
Dec 06 09:38:22 compute-0 sudo[78397]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:38:22 compute-0 sudo[78397]: pam_unix(sudo:session): session closed for user root
Dec 06 09:38:22 compute-0 sudo[78447]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-5ecd3f74-dade-5fc4-92ce-8950ae424258/etc/ceph/ceph.conf.new
Dec 06 09:38:22 compute-0 sudo[78447]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:38:22 compute-0 sudo[78447]: pam_unix(sudo:session): session closed for user root
Dec 06 09:38:22 compute-0 sudo[78518]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-5ecd3f74-dade-5fc4-92ce-8950ae424258/etc/ceph/ceph.conf.new
Dec 06 09:38:22 compute-0 sudo[78518]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:38:22 compute-0 sudo[78518]: pam_unix(sudo:session): session closed for user root
Dec 06 09:38:22 compute-0 sudo[78543]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-5ecd3f74-dade-5fc4-92ce-8950ae424258/etc/ceph/ceph.conf.new
Dec 06 09:38:22 compute-0 sudo[78543]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:38:22 compute-0 sudo[78543]: pam_unix(sudo:session): session closed for user root
Dec 06 09:38:22 compute-0 ceph-mon[74327]: from='client.? 192.168.122.100:0/1942879497' entity='client.admin' 
Dec 06 09:38:22 compute-0 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:38:22 compute-0 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:38:22 compute-0 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:38:22 compute-0 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:38:22 compute-0 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Dec 06 09:38:22 compute-0 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 09:38:22 compute-0 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 09:38:22 compute-0 sudo[78591]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-5ecd3f74-dade-5fc4-92ce-8950ae424258/etc/ceph/ceph.conf.new /etc/ceph/ceph.conf
Dec 06 09:38:22 compute-0 sudo[78591]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:38:22 compute-0 sudo[78591]: pam_unix(sudo:session): session closed for user root
Dec 06 09:38:22 compute-0 ceph-mgr[74618]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/config/ceph.conf
Dec 06 09:38:22 compute-0 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/config/ceph.conf
Dec 06 09:38:22 compute-0 sudo[78640]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/config
Dec 06 09:38:22 compute-0 sudo[78640]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:38:22 compute-0 sudo[78640]: pam_unix(sudo:session): session closed for user root
Dec 06 09:38:22 compute-0 sudo[78690]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tkcnhtasnyxqgxtymqyllwpnfxytqqme ; ANSIBLE_ASYNC_DIR=\'~/.ansible_async\' /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1765013902.0274796-37079-11437068814984/async_wrapper.py j894321523705 30 /home/zuul/.ansible/tmp/ansible-tmp-1765013902.0274796-37079-11437068814984/AnsiballZ_command.py _'
Dec 06 09:38:22 compute-0 sudo[78690]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:38:22 compute-0 sudo[78691]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-5ecd3f74-dade-5fc4-92ce-8950ae424258/var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/config
Dec 06 09:38:22 compute-0 sudo[78691]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:38:22 compute-0 sudo[78691]: pam_unix(sudo:session): session closed for user root
Dec 06 09:38:22 compute-0 sudo[78718]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-5ecd3f74-dade-5fc4-92ce-8950ae424258/var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/config/ceph.conf.new
Dec 06 09:38:22 compute-0 sudo[78718]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:38:22 compute-0 sudo[78718]: pam_unix(sudo:session): session closed for user root
Dec 06 09:38:22 compute-0 ansible-async_wrapper.py[78700]: Invoked with j894321523705 30 /home/zuul/.ansible/tmp/ansible-tmp-1765013902.0274796-37079-11437068814984/AnsiballZ_command.py _
Dec 06 09:38:22 compute-0 ansible-async_wrapper.py[78768]: Starting module and watcher
Dec 06 09:38:22 compute-0 ansible-async_wrapper.py[78768]: Start watching 78769 (30)
Dec 06 09:38:22 compute-0 ansible-async_wrapper.py[78769]: Start module (78769)
Dec 06 09:38:22 compute-0 ansible-async_wrapper.py[78700]: Return async_wrapper task started.
Dec 06 09:38:22 compute-0 sudo[78743]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-5ecd3f74-dade-5fc4-92ce-8950ae424258
Dec 06 09:38:22 compute-0 sudo[78743]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:38:22 compute-0 sudo[78690]: pam_unix(sudo:session): session closed for user root
Dec 06 09:38:22 compute-0 sudo[78743]: pam_unix(sudo:session): session closed for user root
Dec 06 09:38:22 compute-0 sudo[78773]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-5ecd3f74-dade-5fc4-92ce-8950ae424258/var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/config/ceph.conf.new
Dec 06 09:38:22 compute-0 sudo[78773]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:38:22 compute-0 sudo[78773]: pam_unix(sudo:session): session closed for user root
Dec 06 09:38:22 compute-0 ceph-mgr[74618]: mgr.server send_report Giving up on OSDs that haven't reported yet, sending potentially incomplete PG state to mon
Dec 06 09:38:22 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v3: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 06 09:38:22 compute-0 ceph-mon[74327]: log_channel(cluster) log [WRN] : Health check failed: OSD count 0 < osd_pool_default_size 1 (TOO_FEW_OSDS)
Dec 06 09:38:23 compute-0 python3[78770]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 5ecd3f74-dade-5fc4-92ce-8950ae424258 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 09:38:23 compute-0 sudo[78821]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-5ecd3f74-dade-5fc4-92ce-8950ae424258/var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/config/ceph.conf.new
Dec 06 09:38:23 compute-0 sudo[78821]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:38:23 compute-0 sudo[78821]: pam_unix(sudo:session): session closed for user root
Dec 06 09:38:23 compute-0 podman[78834]: 2025-12-06 09:38:23.068524621 +0000 UTC m=+0.049302385 container create 88d87c04022f598d85acc9edc66f37be07a503bf97731d08939617525f00c0c8 (image=quay.io/ceph/ceph:v19, name=interesting_chatelet, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec 06 09:38:23 compute-0 systemd[1]: Started libpod-conmon-88d87c04022f598d85acc9edc66f37be07a503bf97731d08939617525f00c0c8.scope.
Dec 06 09:38:23 compute-0 podman[78834]: 2025-12-06 09:38:23.047236374 +0000 UTC m=+0.028014148 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 06 09:38:23 compute-0 systemd[1]: Started libcrun container.
Dec 06 09:38:23 compute-0 sudo[78859]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-5ecd3f74-dade-5fc4-92ce-8950ae424258/var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/config/ceph.conf.new
Dec 06 09:38:23 compute-0 sudo[78859]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:38:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/200bb43861020f192ae49c17b3ae495bd935e1df3973fe1382f394801c7bc23b/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 09:38:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/200bb43861020f192ae49c17b3ae495bd935e1df3973fe1382f394801c7bc23b/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 09:38:23 compute-0 sudo[78859]: pam_unix(sudo:session): session closed for user root
Dec 06 09:38:23 compute-0 podman[78834]: 2025-12-06 09:38:23.172039255 +0000 UTC m=+0.152817089 container init 88d87c04022f598d85acc9edc66f37be07a503bf97731d08939617525f00c0c8 (image=quay.io/ceph/ceph:v19, name=interesting_chatelet, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid, io.buildah.version=1.40.1)
Dec 06 09:38:23 compute-0 podman[78834]: 2025-12-06 09:38:23.180368108 +0000 UTC m=+0.161145862 container start 88d87c04022f598d85acc9edc66f37be07a503bf97731d08939617525f00c0c8 (image=quay.io/ceph/ceph:v19, name=interesting_chatelet, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec 06 09:38:23 compute-0 podman[78834]: 2025-12-06 09:38:23.184832095 +0000 UTC m=+0.165609889 container attach 88d87c04022f598d85acc9edc66f37be07a503bf97731d08939617525f00c0c8 (image=quay.io/ceph/ceph:v19, name=interesting_chatelet, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 06 09:38:23 compute-0 sudo[78889]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-5ecd3f74-dade-5fc4-92ce-8950ae424258/var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/config/ceph.conf.new /var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/config/ceph.conf
Dec 06 09:38:23 compute-0 sudo[78889]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:38:23 compute-0 sudo[78889]: pam_unix(sudo:session): session closed for user root
Dec 06 09:38:23 compute-0 ceph-mgr[74618]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Dec 06 09:38:23 compute-0 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Dec 06 09:38:23 compute-0 sudo[78915]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Dec 06 09:38:23 compute-0 sudo[78915]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:38:23 compute-0 sudo[78915]: pam_unix(sudo:session): session closed for user root
Dec 06 09:38:23 compute-0 sudo[78959]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-5ecd3f74-dade-5fc4-92ce-8950ae424258/etc/ceph
Dec 06 09:38:23 compute-0 sudo[78959]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:38:23 compute-0 sudo[78959]: pam_unix(sudo:session): session closed for user root
Dec 06 09:38:23 compute-0 sudo[78984]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-5ecd3f74-dade-5fc4-92ce-8950ae424258/etc/ceph/ceph.client.admin.keyring.new
Dec 06 09:38:23 compute-0 sudo[78984]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:38:23 compute-0 sudo[78984]: pam_unix(sudo:session): session closed for user root
Dec 06 09:38:23 compute-0 sudo[79009]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-5ecd3f74-dade-5fc4-92ce-8950ae424258
Dec 06 09:38:23 compute-0 sudo[79009]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:38:23 compute-0 sudo[79009]: pam_unix(sudo:session): session closed for user root
Dec 06 09:38:23 compute-0 sudo[79034]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-5ecd3f74-dade-5fc4-92ce-8950ae424258/etc/ceph/ceph.client.admin.keyring.new
Dec 06 09:38:23 compute-0 sudo[79034]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:38:23 compute-0 sudo[79034]: pam_unix(sudo:session): session closed for user root
Dec 06 09:38:23 compute-0 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.14162 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec 06 09:38:23 compute-0 interesting_chatelet[78882]: 
Dec 06 09:38:23 compute-0 interesting_chatelet[78882]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Dec 06 09:38:23 compute-0 systemd[1]: libpod-88d87c04022f598d85acc9edc66f37be07a503bf97731d08939617525f00c0c8.scope: Deactivated successfully.
Dec 06 09:38:23 compute-0 podman[78834]: 2025-12-06 09:38:23.548983625 +0000 UTC m=+0.529761439 container died 88d87c04022f598d85acc9edc66f37be07a503bf97731d08939617525f00c0c8 (image=quay.io/ceph/ceph:v19, name=interesting_chatelet, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Dec 06 09:38:23 compute-0 sudo[79093]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-5ecd3f74-dade-5fc4-92ce-8950ae424258/etc/ceph/ceph.client.admin.keyring.new
Dec 06 09:38:23 compute-0 sudo[79093]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:38:23 compute-0 sudo[79093]: pam_unix(sudo:session): session closed for user root
Dec 06 09:38:23 compute-0 sudo[79120]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-5ecd3f74-dade-5fc4-92ce-8950ae424258/etc/ceph/ceph.client.admin.keyring.new
Dec 06 09:38:23 compute-0 sudo[79120]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:38:23 compute-0 sudo[79120]: pam_unix(sudo:session): session closed for user root
Dec 06 09:38:23 compute-0 ceph-mon[74327]: Updating compute-0:/etc/ceph/ceph.conf
Dec 06 09:38:23 compute-0 ceph-mon[74327]: Updating compute-0:/var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/config/ceph.conf
Dec 06 09:38:23 compute-0 ceph-mon[74327]: Health check failed: OSD count 0 < osd_pool_default_size 1 (TOO_FEW_OSDS)
Dec 06 09:38:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-200bb43861020f192ae49c17b3ae495bd935e1df3973fe1382f394801c7bc23b-merged.mount: Deactivated successfully.
Dec 06 09:38:23 compute-0 sudo[79145]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-5ecd3f74-dade-5fc4-92ce-8950ae424258/etc/ceph/ceph.client.admin.keyring.new /etc/ceph/ceph.client.admin.keyring
Dec 06 09:38:23 compute-0 podman[78834]: 2025-12-06 09:38:23.797739947 +0000 UTC m=+0.778517741 container remove 88d87c04022f598d85acc9edc66f37be07a503bf97731d08939617525f00c0c8 (image=quay.io/ceph/ceph:v19, name=interesting_chatelet, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec 06 09:38:23 compute-0 sudo[79145]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:38:23 compute-0 sudo[79145]: pam_unix(sudo:session): session closed for user root
Dec 06 09:38:23 compute-0 systemd[1]: libpod-conmon-88d87c04022f598d85acc9edc66f37be07a503bf97731d08939617525f00c0c8.scope: Deactivated successfully.
Dec 06 09:38:23 compute-0 ceph-mgr[74618]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/config/ceph.client.admin.keyring
Dec 06 09:38:23 compute-0 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/config/ceph.client.admin.keyring
Dec 06 09:38:23 compute-0 ansible-async_wrapper.py[78769]: Module complete (78769)
Dec 06 09:38:23 compute-0 sudo[79171]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/config
Dec 06 09:38:23 compute-0 sudo[79171]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:38:23 compute-0 sudo[79171]: pam_unix(sudo:session): session closed for user root
Dec 06 09:38:23 compute-0 sudo[79197]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-5ecd3f74-dade-5fc4-92ce-8950ae424258/var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/config
Dec 06 09:38:23 compute-0 sudo[79197]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:38:23 compute-0 sudo[79197]: pam_unix(sudo:session): session closed for user root
Dec 06 09:38:24 compute-0 sudo[79244]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-5ecd3f74-dade-5fc4-92ce-8950ae424258/var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/config/ceph.client.admin.keyring.new
Dec 06 09:38:24 compute-0 sudo[79244]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:38:24 compute-0 sudo[79244]: pam_unix(sudo:session): session closed for user root
Dec 06 09:38:24 compute-0 sudo[79269]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-5ecd3f74-dade-5fc4-92ce-8950ae424258
Dec 06 09:38:24 compute-0 sudo[79269]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:38:24 compute-0 sudo[79269]: pam_unix(sudo:session): session closed for user root
Dec 06 09:38:24 compute-0 sudo[79317]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fyacqvngwkybjizkkggicbprucatxusp ; /usr/bin/python3'
Dec 06 09:38:24 compute-0 sudo[79317]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:38:24 compute-0 sudo[79318]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-5ecd3f74-dade-5fc4-92ce-8950ae424258/var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/config/ceph.client.admin.keyring.new
Dec 06 09:38:24 compute-0 sudo[79318]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:38:24 compute-0 sudo[79318]: pam_unix(sudo:session): session closed for user root
Dec 06 09:38:24 compute-0 python3[79327]: ansible-ansible.legacy.async_status Invoked with jid=j894321523705.78700 mode=status _async_dir=/root/.ansible_async
Dec 06 09:38:24 compute-0 sudo[79317]: pam_unix(sudo:session): session closed for user root
Dec 06 09:38:24 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 09:38:24 compute-0 sudo[79368]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-5ecd3f74-dade-5fc4-92ce-8950ae424258/var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/config/ceph.client.admin.keyring.new
Dec 06 09:38:24 compute-0 sudo[79368]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:38:24 compute-0 sudo[79368]: pam_unix(sudo:session): session closed for user root
Dec 06 09:38:24 compute-0 sudo[79393]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-5ecd3f74-dade-5fc4-92ce-8950ae424258/var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/config/ceph.client.admin.keyring.new
Dec 06 09:38:24 compute-0 sudo[79393]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:38:24 compute-0 sudo[79393]: pam_unix(sudo:session): session closed for user root
Dec 06 09:38:24 compute-0 sudo[79444]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-5ecd3f74-dade-5fc4-92ce-8950ae424258/var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/config/ceph.client.admin.keyring.new /var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/config/ceph.client.admin.keyring
Dec 06 09:38:24 compute-0 sudo[79444]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:38:24 compute-0 sudo[79487]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yqbrnrcdsppssteqqaiylapocpxxgiwy ; /usr/bin/python3'
Dec 06 09:38:24 compute-0 sudo[79444]: pam_unix(sudo:session): session closed for user root
Dec 06 09:38:24 compute-0 sudo[79487]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:38:24 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 06 09:38:24 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:38:24 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 06 09:38:24 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:38:24 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 06 09:38:24 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:38:24 compute-0 ceph-mgr[74618]: [progress INFO root] update: starting ev 4a51f2f8-7a09-415b-a5f4-d025247f5419 (Updating crash deployment (+1 -> 1))
Dec 06 09:38:24 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0)
Dec 06 09:38:24 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Dec 06 09:38:24 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Dec 06 09:38:24 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 06 09:38:24 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 09:38:24 compute-0 ceph-mgr[74618]: [cephadm INFO cephadm.serve] Deploying daemon crash.compute-0 on compute-0
Dec 06 09:38:24 compute-0 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Deploying daemon crash.compute-0 on compute-0
Dec 06 09:38:24 compute-0 sudo[79492]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 09:38:24 compute-0 sudo[79492]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:38:24 compute-0 sudo[79492]: pam_unix(sudo:session): session closed for user root
Dec 06 09:38:24 compute-0 python3[79491]: ansible-ansible.legacy.async_status Invoked with jid=j894321523705.78700 mode=cleanup _async_dir=/root/.ansible_async
Dec 06 09:38:24 compute-0 sudo[79487]: pam_unix(sudo:session): session closed for user root
Dec 06 09:38:24 compute-0 sudo[79517]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 _orch deploy --fsid 5ecd3f74-dade-5fc4-92ce-8950ae424258
Dec 06 09:38:24 compute-0 sudo[79517]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:38:24 compute-0 ceph-mon[74327]: pgmap v3: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 06 09:38:24 compute-0 ceph-mon[74327]: Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Dec 06 09:38:24 compute-0 ceph-mon[74327]: from='client.14162 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec 06 09:38:24 compute-0 ceph-mon[74327]: Updating compute-0:/var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/config/ceph.client.admin.keyring
Dec 06 09:38:24 compute-0 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:38:24 compute-0 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:38:24 compute-0 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:38:24 compute-0 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Dec 06 09:38:24 compute-0 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Dec 06 09:38:24 compute-0 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 09:38:24 compute-0 ceph-mon[74327]: Deploying daemon crash.compute-0 on compute-0
Dec 06 09:38:24 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v4: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 06 09:38:25 compute-0 sudo[79588]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zetbkmjfwkhedwhgdonnqvwbgvvdhutt ; /usr/bin/python3'
Dec 06 09:38:25 compute-0 sudo[79588]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:38:25 compute-0 podman[79608]: 2025-12-06 09:38:25.128837151 +0000 UTC m=+0.062815559 container create 833ed5ff7ff1258bf3878da05700d128ee711a83b319f383455388e85baa83c7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_newton, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Dec 06 09:38:25 compute-0 systemd[1]: Started libpod-conmon-833ed5ff7ff1258bf3878da05700d128ee711a83b319f383455388e85baa83c7.scope.
Dec 06 09:38:25 compute-0 python3[79596]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/specs/ceph_spec.yaml follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 06 09:38:25 compute-0 systemd[1]: Started libcrun container.
Dec 06 09:38:25 compute-0 podman[79608]: 2025-12-06 09:38:25.101906604 +0000 UTC m=+0.035885102 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 06 09:38:25 compute-0 podman[79608]: 2025-12-06 09:38:25.205680323 +0000 UTC m=+0.139658751 container init 833ed5ff7ff1258bf3878da05700d128ee711a83b319f383455388e85baa83c7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_newton, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Dec 06 09:38:25 compute-0 sudo[79588]: pam_unix(sudo:session): session closed for user root
Dec 06 09:38:25 compute-0 podman[79608]: 2025-12-06 09:38:25.214775301 +0000 UTC m=+0.148753709 container start 833ed5ff7ff1258bf3878da05700d128ee711a83b319f383455388e85baa83c7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_newton, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 09:38:25 compute-0 podman[79608]: 2025-12-06 09:38:25.218064695 +0000 UTC m=+0.152043143 container attach 833ed5ff7ff1258bf3878da05700d128ee711a83b319f383455388e85baa83c7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_newton, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Dec 06 09:38:25 compute-0 lucid_newton[79625]: 167 167
Dec 06 09:38:25 compute-0 systemd[1]: libpod-833ed5ff7ff1258bf3878da05700d128ee711a83b319f383455388e85baa83c7.scope: Deactivated successfully.
Dec 06 09:38:25 compute-0 podman[79608]: 2025-12-06 09:38:25.220093105 +0000 UTC m=+0.154071553 container died 833ed5ff7ff1258bf3878da05700d128ee711a83b319f383455388e85baa83c7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_newton, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Dec 06 09:38:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-28fa271157de2fcce9f7f4aafbfaf62f094f06a4bbb0dce06e5e54abc1c29d68-merged.mount: Deactivated successfully.
Dec 06 09:38:25 compute-0 podman[79608]: 2025-12-06 09:38:25.264471452 +0000 UTC m=+0.198449860 container remove 833ed5ff7ff1258bf3878da05700d128ee711a83b319f383455388e85baa83c7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_newton, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 09:38:25 compute-0 systemd[1]: libpod-conmon-833ed5ff7ff1258bf3878da05700d128ee711a83b319f383455388e85baa83c7.scope: Deactivated successfully.
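The whole lucid_newton lifecycle (create, start, attach, died, remove) spans well under a second, and its only output is the pair 167 167. That is consistent with cephadm's UID/GID probe, which stats a known path inside the image before deploying so host-side files can be chowned to the in-container ceph user (uid/gid 167 in upstream images). A rough manual equivalent, assuming /var/lib/ceph is the probed path:

    podman run --rm --entrypoint stat \
        quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec \
        -c '%u %g' /var/lib/ceph    # prints: 167 167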
Dec 06 09:38:25 compute-0 systemd[1]: Reloading.
Dec 06 09:38:25 compute-0 systemd-rc-local-generator[79673]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 09:38:25 compute-0 systemd-sysv-generator[79679]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 09:38:25 compute-0 sudo[79707]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hrzmfqkovlnzzbptvprogwmyxqzxygjz ; /usr/bin/python3'
Dec 06 09:38:25 compute-0 sudo[79707]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:38:25 compute-0 systemd[1]: Reloading.
Dec 06 09:38:25 compute-0 systemd-sysv-generator[79743]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 09:38:25 compute-0 systemd-rc-local-generator[79739]: /etc/rc.d/rc.local is not marked executable, skipping.
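Each of the two systemd reloads (triggered as cephadm installs the new crash unit) re-runs the generators, so the same pair of warnings repeats: rc.local is skipped for lack of the executable bit, and the legacy SysV network initscript gets an auto-generated compatibility unit. Neither affects the deployment; if rc.local were actually wanted at boot, the fix is just:

    chmod +x /etc/rc.d/rc.local    # only if /etc/rc.d/rc.local is meant to run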
Dec 06 09:38:25 compute-0 python3[79711]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 5ecd3f74-dade-5fc4-92ce-8950ae424258 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
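With no ceph CLI installed on the host, Ansible wraps every ceph command in a throwaway container. The flattened _raw_params above corresponds to this command, broken across lines:

    podman run --rm --net=host --ipc=host \
        --volume /etc/ceph:/etc/ceph:z \
        --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z \
        --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z \
        --entrypoint ceph quay.io/ceph/ceph:v19 \
        --fsid 5ecd3f74-dade-5fc4-92ce-8950ae424258 \
        -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring \
        orch status --format json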
Dec 06 09:38:25 compute-0 ceph-mon[74327]: pgmap v4: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 06 09:38:25 compute-0 podman[79747]: 2025-12-06 09:38:25.823682346 +0000 UTC m=+0.060229359 container create 478b5cb7a38424535c776f6a933caa338cd6060e2d7b0684356883524b506c3f (image=quay.io/ceph/ceph:v19, name=optimistic_sutherland, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 09:38:25 compute-0 systemd[1]: Started libpod-conmon-478b5cb7a38424535c776f6a933caa338cd6060e2d7b0684356883524b506c3f.scope.
Dec 06 09:38:25 compute-0 systemd[1]: Starting Ceph crash.compute-0 for 5ecd3f74-dade-5fc4-92ce-8950ae424258...
Dec 06 09:38:25 compute-0 podman[79747]: 2025-12-06 09:38:25.791063468 +0000 UTC m=+0.027610561 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 06 09:38:25 compute-0 systemd[1]: Started libcrun container.
Dec 06 09:38:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4dbe7a641880a86280bcf17763495ea15c22467a73183d26fb3efc1d2a745dcb/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec 06 09:38:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4dbe7a641880a86280bcf17763495ea15c22467a73183d26fb3efc1d2a745dcb/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 09:38:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4dbe7a641880a86280bcf17763495ea15c22467a73183d26fb3efc1d2a745dcb/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 09:38:25 compute-0 podman[79747]: 2025-12-06 09:38:25.914028222 +0000 UTC m=+0.150575245 container init 478b5cb7a38424535c776f6a933caa338cd6060e2d7b0684356883524b506c3f (image=quay.io/ceph/ceph:v19, name=optimistic_sutherland, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 06 09:38:25 compute-0 podman[79747]: 2025-12-06 09:38:25.925304222 +0000 UTC m=+0.161851235 container start 478b5cb7a38424535c776f6a933caa338cd6060e2d7b0684356883524b506c3f (image=quay.io/ceph/ceph:v19, name=optimistic_sutherland, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 09:38:25 compute-0 podman[79747]: 2025-12-06 09:38:25.929202978 +0000 UTC m=+0.165750001 container attach 478b5cb7a38424535c776f6a933caa338cd6060e2d7b0684356883524b506c3f (image=quay.io/ceph/ceph:v19, name=optimistic_sutherland, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325)
Dec 06 09:38:26 compute-0 podman[79835]: 2025-12-06 09:38:26.216766873 +0000 UTC m=+0.066789692 container create aa22500c4f14e1b782cb19f95006facaf1989e4bc9c84e60fe7f7e18e984493f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-crash-compute-0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Dec 06 09:38:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80a38ece6b245c2a764254ee70099ad7f8266ee0c43aca1a4471a6fc5e4985f6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 09:38:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80a38ece6b245c2a764254ee70099ad7f8266ee0c43aca1a4471a6fc5e4985f6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 09:38:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80a38ece6b245c2a764254ee70099ad7f8266ee0c43aca1a4471a6fc5e4985f6/merged/etc/ceph/ceph.client.crash.compute-0.keyring supports timestamps until 2038 (0x7fffffff)
Dec 06 09:38:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80a38ece6b245c2a764254ee70099ad7f8266ee0c43aca1a4471a6fc5e4985f6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 09:38:26 compute-0 podman[79835]: 2025-12-06 09:38:26.280352038 +0000 UTC m=+0.130374887 container init aa22500c4f14e1b782cb19f95006facaf1989e4bc9c84e60fe7f7e18e984493f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-crash-compute-0, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 09:38:26 compute-0 podman[79835]: 2025-12-06 09:38:26.193333428 +0000 UTC m=+0.043356297 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 06 09:38:26 compute-0 podman[79835]: 2025-12-06 09:38:26.286996115 +0000 UTC m=+0.137018934 container start aa22500c4f14e1b782cb19f95006facaf1989e4bc9c84e60fe7f7e18e984493f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-crash-compute-0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 06 09:38:26 compute-0 bash[79835]: aa22500c4f14e1b782cb19f95006facaf1989e4bc9c84e60fe7f7e18e984493f
Dec 06 09:38:26 compute-0 systemd[1]: Started Ceph crash.compute-0 for 5ecd3f74-dade-5fc4-92ce-8950ae424258.
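cephadm wraps each daemon in a templated systemd unit named ceph-<fsid>@<daemon>.service; the Started line above is the crash daemon's instance coming up. To inspect it on this host:

    systemctl status 'ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258@crash.compute-0.service'
    journalctl -u 'ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258@crash.compute-0' -n 50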
Dec 06 09:38:26 compute-0 sudo[79517]: pam_unix(sudo:session): session closed for user root
Dec 06 09:38:26 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 06 09:38:26 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:38:26 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-crash-compute-0[79850]: INFO:ceph-crash:pinging cluster to exercise our key
Dec 06 09:38:26 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 06 09:38:26 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:38:26 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0)
Dec 06 09:38:26 compute-0 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.14164 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec 06 09:38:26 compute-0 optimistic_sutherland[79765]: 
Dec 06 09:38:26 compute-0 optimistic_sutherland[79765]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
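That JSON is the stdout of the optimistic_sutherland container from the orch status call at 09:38:25: the orchestrator backend is cephadm, it is not paused, and its serve loop runs 10 workers. The output is easy to script against, for example (assuming the cephadm wrapper and jq are present on the host):

    cephadm shell -- ceph orch status --format json | jq -r '.available'
    # true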
Dec 06 09:38:26 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:38:26 compute-0 ceph-mgr[74618]: [progress INFO root] complete: finished ev 4a51f2f8-7a09-415b-a5f4-d025247f5419 (Updating crash deployment (+1 -> 1))
Dec 06 09:38:26 compute-0 ceph-mgr[74618]: [progress INFO root] Completed event 4a51f2f8-7a09-415b-a5f4-d025247f5419 (Updating crash deployment (+1 -> 1)) in 2 seconds
Dec 06 09:38:26 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0)
Dec 06 09:38:26 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:38:26 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Dec 06 09:38:26 compute-0 systemd[1]: libpod-478b5cb7a38424535c776f6a933caa338cd6060e2d7b0684356883524b506c3f.scope: Deactivated successfully.
Dec 06 09:38:26 compute-0 podman[79747]: 2025-12-06 09:38:26.386270162 +0000 UTC m=+0.622817175 container died 478b5cb7a38424535c776f6a933caa338cd6060e2d7b0684356883524b506c3f (image=quay.io/ceph/ceph:v19, name=optimistic_sutherland, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 09:38:26 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:38:26 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Dec 06 09:38:26 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:38:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-4dbe7a641880a86280bcf17763495ea15c22467a73183d26fb3efc1d2a745dcb-merged.mount: Deactivated successfully.
Dec 06 09:38:26 compute-0 podman[79747]: 2025-12-06 09:38:26.430337517 +0000 UTC m=+0.666884520 container remove 478b5cb7a38424535c776f6a933caa338cd6060e2d7b0684356883524b506c3f (image=quay.io/ceph/ceph:v19, name=optimistic_sutherland, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Dec 06 09:38:26 compute-0 systemd[1]: libpod-conmon-478b5cb7a38424535c776f6a933caa338cd6060e2d7b0684356883524b506c3f.scope: Deactivated successfully.
Dec 06 09:38:26 compute-0 sudo[79707]: pam_unix(sudo:session): session closed for user root
Dec 06 09:38:26 compute-0 sudo[79866]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 09:38:26 compute-0 sudo[79866]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:38:26 compute-0 sudo[79866]: pam_unix(sudo:session): session closed for user root
Dec 06 09:38:26 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-crash-compute-0[79850]: 2025-12-06T09:38:26.493+0000 7fc7dd97d640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Dec 06 09:38:26 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-crash-compute-0[79850]: 2025-12-06T09:38:26.493+0000 7fc7dd97d640 -1 AuthRegistry(0x7fc7d80698f0) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Dec 06 09:38:26 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-crash-compute-0[79850]: 2025-12-06T09:38:26.494+0000 7fc7dd97d640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Dec 06 09:38:26 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-crash-compute-0[79850]: 2025-12-06T09:38:26.494+0000 7fc7dd97d640 -1 AuthRegistry(0x7fc7dd97bff0) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Dec 06 09:38:26 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-crash-compute-0[79850]: 2025-12-06T09:38:26.495+0000 7fc7d6ffd640 -1 monclient(hunting): handle_auth_bad_method server allowed_methods [2] but i only support [1]
Dec 06 09:38:26 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-crash-compute-0[79850]: 2025-12-06T09:38:26.495+0000 7fc7dd97d640 -1 monclient: authenticate NOTE: no keyring found; disabled cephx authentication
Dec 06 09:38:26 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-crash-compute-0[79850]: [errno 13] RADOS permission denied (error connecting to the cluster)
Dec 06 09:38:26 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-crash-compute-0[79850]: INFO:ceph-crash:monitoring path /var/lib/ceph/crash, delay 600s
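The crash daemon's startup stanza above rewards a close read: the "pinging cluster" probe fires before a usable keyring is found (librados searches only the default client.admin paths, none of which exist in this container), so the client can offer only auth method 1 (none) while the mon insists on method 2 (cephx), and the ping fails with EACCES. ceph-crash treats the failed ping as non-fatal and drops into its normal loop, rescanning /var/lib/ceph/crash every 600 s. To confirm the identity it should be using exists, and where its keyring normally lives on the host:

    ceph auth get client.crash.compute-0
    ls -l /var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/crash.compute-0/keyring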
Dec 06 09:38:26 compute-0 sudo[79905]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 09:38:26 compute-0 sudo[79905]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:38:26 compute-0 sudo[79905]: pam_unix(sudo:session): session closed for user root
Dec 06 09:38:26 compute-0 sudo[79930]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ls
Dec 06 09:38:26 compute-0 sudo[79930]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
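The cephadm ls call inventories every daemon the host runs for this cluster (name, systemd state, container id, image digest) as JSON; this is how the mgr refreshes its view right after deploying the crash daemon. The same inventory by hand, using the binary the mgr already placed on the host:

    sudo python3 \
        /var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 ls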
Dec 06 09:38:26 compute-0 sudo[79978]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zgfqossbzhtzgegnyzuitmmiiexchsim ; /usr/bin/python3'
Dec 06 09:38:26 compute-0 sudo[79978]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:38:26 compute-0 python3[79980]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 5ecd3f74-dade-5fc4-92ce-8950ae424258 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set global log_to_file true _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
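Same wrapper pattern, now enabling file logging cluster-wide. Stripped of the podman scaffolding, the wrapped command and a check of the result (mon inherits the global value) are:

    ceph config set global log_to_file true
    ceph config get mon log_to_file    # -> true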
Dec 06 09:38:26 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v5: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 06 09:38:26 compute-0 podman[80006]: 2025-12-06 09:38:26.939589516 +0000 UTC m=+0.039581686 container create 609b08cdc56da331b10a49e1ae35088ce3a44c8940047c2f1c2e6c3c8fcee933 (image=quay.io/ceph/ceph:v19, name=sharp_lewin, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 09:38:26 compute-0 systemd[1]: Started libpod-conmon-609b08cdc56da331b10a49e1ae35088ce3a44c8940047c2f1c2e6c3c8fcee933.scope.
Dec 06 09:38:26 compute-0 systemd[1]: Started libcrun container.
Dec 06 09:38:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dfc13d9c12d07fbf4876ab6434eee0bf202b6fc2f931d342e84abbe2420d2f96/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 09:38:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dfc13d9c12d07fbf4876ab6434eee0bf202b6fc2f931d342e84abbe2420d2f96/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 09:38:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dfc13d9c12d07fbf4876ab6434eee0bf202b6fc2f931d342e84abbe2420d2f96/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec 06 09:38:27 compute-0 podman[80006]: 2025-12-06 09:38:27.009039718 +0000 UTC m=+0.109031898 container init 609b08cdc56da331b10a49e1ae35088ce3a44c8940047c2f1c2e6c3c8fcee933 (image=quay.io/ceph/ceph:v19, name=sharp_lewin, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 09:38:27 compute-0 podman[80006]: 2025-12-06 09:38:27.014315389 +0000 UTC m=+0.114307549 container start 609b08cdc56da331b10a49e1ae35088ce3a44c8940047c2f1c2e6c3c8fcee933 (image=quay.io/ceph/ceph:v19, name=sharp_lewin, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Dec 06 09:38:27 compute-0 podman[80006]: 2025-12-06 09:38:26.920564649 +0000 UTC m=+0.020556829 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 06 09:38:27 compute-0 podman[80006]: 2025-12-06 09:38:27.017999557 +0000 UTC m=+0.117991727 container attach 609b08cdc56da331b10a49e1ae35088ce3a44c8940047c2f1c2e6c3c8fcee933 (image=quay.io/ceph/ceph:v19, name=sharp_lewin, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid)
Dec 06 09:38:27 compute-0 podman[80092]: 2025-12-06 09:38:27.222171291 +0000 UTC m=+0.055052929 container exec 484d6ed1039c50317cf4b6067525b7ed0f8de7c568c9445500e62194ab25d04d (image=quay.io/ceph/ceph:v19, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mon-compute-0, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 09:38:27 compute-0 podman[80092]: 2025-12-06 09:38:27.34589835 +0000 UTC m=+0.178780018 container exec_died 484d6ed1039c50317cf4b6067525b7ed0f8de7c568c9445500e62194ab25d04d (image=quay.io/ceph/ceph:v19, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 06 09:38:27 compute-0 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:38:27 compute-0 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:38:27 compute-0 ceph-mon[74327]: from='client.14164 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec 06 09:38:27 compute-0 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:38:27 compute-0 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:38:27 compute-0 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:38:27 compute-0 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:38:27 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=log_to_file}] v 0)
Dec 06 09:38:27 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/426016775' entity='client.admin' 
Dec 06 09:38:27 compute-0 systemd[1]: libpod-609b08cdc56da331b10a49e1ae35088ce3a44c8940047c2f1c2e6c3c8fcee933.scope: Deactivated successfully.
Dec 06 09:38:27 compute-0 podman[80006]: 2025-12-06 09:38:27.413924063 +0000 UTC m=+0.513916233 container died 609b08cdc56da331b10a49e1ae35088ce3a44c8940047c2f1c2e6c3c8fcee933 (image=quay.io/ceph/ceph:v19, name=sharp_lewin, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1)
Dec 06 09:38:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-dfc13d9c12d07fbf4876ab6434eee0bf202b6fc2f931d342e84abbe2420d2f96-merged.mount: Deactivated successfully.
Dec 06 09:38:27 compute-0 podman[80006]: 2025-12-06 09:38:27.46252695 +0000 UTC m=+0.562519150 container remove 609b08cdc56da331b10a49e1ae35088ce3a44c8940047c2f1c2e6c3c8fcee933 (image=quay.io/ceph/ceph:v19, name=sharp_lewin, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Dec 06 09:38:27 compute-0 systemd[1]: libpod-conmon-609b08cdc56da331b10a49e1ae35088ce3a44c8940047c2f1c2e6c3c8fcee933.scope: Deactivated successfully.
Dec 06 09:38:27 compute-0 sudo[79978]: pam_unix(sudo:session): session closed for user root
Dec 06 09:38:27 compute-0 sudo[79930]: pam_unix(sudo:session): session closed for user root
Dec 06 09:38:27 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 06 09:38:27 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:38:27 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 06 09:38:27 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:38:27 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 06 09:38:27 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 09:38:27 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 06 09:38:27 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
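config generate-minimal-conf returns just the lines a client needs to find the mons; paired with auth get client.admin, it is what cephadm writes to /etc/ceph on managed hosts (the earlier "Updating compute-0:/etc/ceph/..." lines). An illustrative sketch; the mon_host value here is assumed, not taken from this journal:

    ceph config generate-minimal-conf
    # [global]
    #         fsid = 5ecd3f74-dade-5fc4-92ce-8950ae424258
    #         mon_host = [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0]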
Dec 06 09:38:27 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 06 09:38:27 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:38:27 compute-0 sudo[80195]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xtxrhxppmfemlgzqsdygouxksofkxmxc ; /usr/bin/python3'
Dec 06 09:38:27 compute-0 sudo[80195]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:38:27 compute-0 sudo[80196]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 09:38:27 compute-0 sudo[80196]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:38:27 compute-0 sudo[80196]: pam_unix(sudo:session): session closed for user root
Dec 06 09:38:27 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/alertmanager/web_user}] v 0)
Dec 06 09:38:27 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:38:27 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/alertmanager/web_password}] v 0)
Dec 06 09:38:27 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:38:27 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/prometheus/web_user}] v 0)
Dec 06 09:38:27 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:38:27 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/prometheus/web_password}] v 0)
Dec 06 09:38:27 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
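cephadm parks generated monitoring credentials in the mon's config-key store. Only the key names show up here because the audit channel appears to redact config-key payloads, which is also why so many surrounding audit entries stop dead after entity= with no cmd=. Reading one back:

    ceph config-key get mgr/cephadm/prometheus/web_user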
Dec 06 09:38:27 compute-0 ceph-mgr[74618]: [cephadm INFO cephadm.serve] Reconfiguring mon.compute-0 (unknown last config time)...
Dec 06 09:38:27 compute-0 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Reconfiguring mon.compute-0 (unknown last config time)...
Dec 06 09:38:27 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0)
Dec 06 09:38:27 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Dec 06 09:38:27 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0)
Dec 06 09:38:27 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Dec 06 09:38:27 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 06 09:38:27 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 09:38:27 compute-0 ceph-mgr[74618]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.compute-0 on compute-0
Dec 06 09:38:27 compute-0 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.compute-0 on compute-0
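A reconfigure, unlike a redeploy, leaves the container image alone and regenerates the daemon's on-disk configuration from current cluster state; the preceding auth get mon., config get public_network, and generate-minimal-conf calls gather exactly those inputs. The same action on demand:

    ceph orch daemon reconfig mon.compute-0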
Dec 06 09:38:27 compute-0 python3[80201]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 5ecd3f74-dade-5fc4-92ce-8950ae424258 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set global mon_cluster_log_to_file true _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
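The companion to the earlier log_to_file change: mon_cluster_log_to_file makes the mons write the cluster log (ceph.log) to a file as well. Verified the same way:

    ceph config get mon mon_cluster_log_to_file    # -> true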
Dec 06 09:38:27 compute-0 ansible-async_wrapper.py[78768]: Done in kid B.
Dec 06 09:38:27 compute-0 sudo[80223]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 09:38:27 compute-0 sudo[80223]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:38:27 compute-0 sudo[80223]: pam_unix(sudo:session): session closed for user root
Dec 06 09:38:27 compute-0 podman[80243]: 2025-12-06 09:38:27.908791859 +0000 UTC m=+0.055658435 container create 8dd440dfb13fc43c05e9d26599bfb5c281ff45a55b0e0f438ed429db804f6b9b (image=quay.io/ceph/ceph:v19, name=intelligent_volhard, org.label-schema.build-date=20250325, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Dec 06 09:38:27 compute-0 systemd[1]: Started libpod-conmon-8dd440dfb13fc43c05e9d26599bfb5c281ff45a55b0e0f438ed429db804f6b9b.scope.
Dec 06 09:38:27 compute-0 sudo[80259]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph:v19 --timeout 895 _orch deploy --fsid 5ecd3f74-dade-5fc4-92ce-8950ae424258
Dec 06 09:38:27 compute-0 sudo[80259]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:38:27 compute-0 podman[80243]: 2025-12-06 09:38:27.886983377 +0000 UTC m=+0.033849953 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 06 09:38:27 compute-0 systemd[1]: Started libcrun container.
Dec 06 09:38:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd1e2be58a6aa5c4dfcdb576667bfb1d992a2c4792b675d96936b00150c738b1/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 09:38:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd1e2be58a6aa5c4dfcdb576667bfb1d992a2c4792b675d96936b00150c738b1/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 09:38:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd1e2be58a6aa5c4dfcdb576667bfb1d992a2c4792b675d96936b00150c738b1/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec 06 09:38:28 compute-0 ceph-mgr[74618]: [progress INFO root] Writing back 1 completed events
Dec 06 09:38:28 compute-0 podman[80243]: 2025-12-06 09:38:28.01045951 +0000 UTC m=+0.157326096 container init 8dd440dfb13fc43c05e9d26599bfb5c281ff45a55b0e0f438ed429db804f6b9b (image=quay.io/ceph/ceph:v19, name=intelligent_volhard, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 06 09:38:28 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Dec 06 09:38:28 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:38:28 compute-0 podman[80243]: 2025-12-06 09:38:28.021679389 +0000 UTC m=+0.168545935 container start 8dd440dfb13fc43c05e9d26599bfb5c281ff45a55b0e0f438ed429db804f6b9b (image=quay.io/ceph/ceph:v19, name=intelligent_volhard, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 09:38:28 compute-0 podman[80243]: 2025-12-06 09:38:28.025579763 +0000 UTC m=+0.172446299 container attach 8dd440dfb13fc43c05e9d26599bfb5c281ff45a55b0e0f438ed429db804f6b9b (image=quay.io/ceph/ceph:v19, name=intelligent_volhard, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0)
Dec 06 09:38:28 compute-0 podman[80327]: 2025-12-06 09:38:28.377691541 +0000 UTC m=+0.067304475 container create d75a2ad5e03ce26641b63f94c36bb5bafad99904bfe45913278c9cd4f03aa511 (image=quay.io/ceph/ceph:v19, name=clever_snyder, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 09:38:28 compute-0 ceph-mon[74327]: pgmap v5: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 06 09:38:28 compute-0 ceph-mon[74327]: from='client.? 192.168.122.100:0/426016775' entity='client.admin' 
Dec 06 09:38:28 compute-0 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:38:28 compute-0 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:38:28 compute-0 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 09:38:28 compute-0 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 09:38:28 compute-0 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:38:28 compute-0 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:38:28 compute-0 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:38:28 compute-0 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:38:28 compute-0 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:38:28 compute-0 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Dec 06 09:38:28 compute-0 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Dec 06 09:38:28 compute-0 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 09:38:28 compute-0 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:38:28 compute-0 systemd[1]: Started libpod-conmon-d75a2ad5e03ce26641b63f94c36bb5bafad99904bfe45913278c9cd4f03aa511.scope.
Dec 06 09:38:28 compute-0 podman[80327]: 2025-12-06 09:38:28.350321741 +0000 UTC m=+0.039934705 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 06 09:38:28 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mon_cluster_log_to_file}] v 0)
Dec 06 09:38:28 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3562087741' entity='client.admin' 
Dec 06 09:38:28 compute-0 systemd[1]: Started libcrun container.
Dec 06 09:38:28 compute-0 podman[80327]: 2025-12-06 09:38:28.46801093 +0000 UTC m=+0.157623844 container init d75a2ad5e03ce26641b63f94c36bb5bafad99904bfe45913278c9cd4f03aa511 (image=quay.io/ceph/ceph:v19, name=clever_snyder, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Dec 06 09:38:28 compute-0 podman[80327]: 2025-12-06 09:38:28.472866879 +0000 UTC m=+0.162479783 container start d75a2ad5e03ce26641b63f94c36bb5bafad99904bfe45913278c9cd4f03aa511 (image=quay.io/ceph/ceph:v19, name=clever_snyder, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1)
Dec 06 09:38:28 compute-0 podman[80327]: 2025-12-06 09:38:28.476692072 +0000 UTC m=+0.166304986 container attach d75a2ad5e03ce26641b63f94c36bb5bafad99904bfe45913278c9cd4f03aa511 (image=quay.io/ceph/ceph:v19, name=clever_snyder, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec 06 09:38:28 compute-0 clever_snyder[80345]: 167 167
Dec 06 09:38:28 compute-0 systemd[1]: libpod-d75a2ad5e03ce26641b63f94c36bb5bafad99904bfe45913278c9cd4f03aa511.scope: Deactivated successfully.
Dec 06 09:38:28 compute-0 conmon[80345]: conmon d75a2ad5e03ce26641b6 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d75a2ad5e03ce26641b63f94c36bb5bafad99904bfe45913278c9cd4f03aa511.scope/container/memory.events
Dec 06 09:38:28 compute-0 podman[80327]: 2025-12-06 09:38:28.479418955 +0000 UTC m=+0.169031889 container died d75a2ad5e03ce26641b63f94c36bb5bafad99904bfe45913278c9cd4f03aa511 (image=quay.io/ceph/ceph:v19, name=clever_snyder, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 06 09:38:28 compute-0 systemd[1]: libpod-8dd440dfb13fc43c05e9d26599bfb5c281ff45a55b0e0f438ed429db804f6b9b.scope: Deactivated successfully.
Dec 06 09:38:28 compute-0 podman[80243]: 2025-12-06 09:38:28.489900653 +0000 UTC m=+0.636767229 container died 8dd440dfb13fc43c05e9d26599bfb5c281ff45a55b0e0f438ed429db804f6b9b (image=quay.io/ceph/ceph:v19, name=intelligent_volhard, io.buildah.version=1.40.1, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 09:38:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-887376de203f97c8c70cb78eb36d24285d147b004af347a4a412a35343be5383-merged.mount: Deactivated successfully.
Dec 06 09:38:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-bd1e2be58a6aa5c4dfcdb576667bfb1d992a2c4792b675d96936b00150c738b1-merged.mount: Deactivated successfully.
Dec 06 09:38:28 compute-0 podman[80327]: 2025-12-06 09:38:28.545776813 +0000 UTC m=+0.235389757 container remove d75a2ad5e03ce26641b63f94c36bb5bafad99904bfe45913278c9cd4f03aa511 (image=quay.io/ceph/ceph:v19, name=clever_snyder, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Dec 06 09:38:28 compute-0 systemd[1]: libpod-conmon-d75a2ad5e03ce26641b63f94c36bb5bafad99904bfe45913278c9cd4f03aa511.scope: Deactivated successfully.
Dec 06 09:38:28 compute-0 podman[80243]: 2025-12-06 09:38:28.565291384 +0000 UTC m=+0.712157940 container remove 8dd440dfb13fc43c05e9d26599bfb5c281ff45a55b0e0f438ed429db804f6b9b (image=quay.io/ceph/ceph:v19, name=intelligent_volhard, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec 06 09:38:28 compute-0 sudo[80195]: pam_unix(sudo:session): session closed for user root
Dec 06 09:38:28 compute-0 sudo[80259]: pam_unix(sudo:session): session closed for user root
Dec 06 09:38:28 compute-0 systemd[1]: libpod-conmon-8dd440dfb13fc43c05e9d26599bfb5c281ff45a55b0e0f438ed429db804f6b9b.scope: Deactivated successfully.
Dec 06 09:38:28 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 06 09:38:28 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:38:28 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 06 09:38:28 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:38:28 compute-0 ceph-mgr[74618]: [cephadm INFO cephadm.serve] Reconfiguring mgr.compute-0.qhdjwa (unknown last config time)...
Dec 06 09:38:28 compute-0 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Reconfiguring mgr.compute-0.qhdjwa (unknown last config time)...
Dec 06 09:38:28 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-0.qhdjwa", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0)
Dec 06 09:38:28 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.qhdjwa", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Dec 06 09:38:28 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0)
Dec 06 09:38:28 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "mgr services"}]: dispatch
Dec 06 09:38:28 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 06 09:38:28 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 09:38:28 compute-0 ceph-mgr[74618]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.compute-0.qhdjwa on compute-0
Dec 06 09:38:28 compute-0 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.compute-0.qhdjwa on compute-0
Dec 06 09:38:28 compute-0 sudo[80376]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 09:38:28 compute-0 sudo[80376]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:38:28 compute-0 sudo[80376]: pam_unix(sudo:session): session closed for user root
Dec 06 09:38:28 compute-0 sudo[80401]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph:v19 --timeout 895 _orch deploy --fsid 5ecd3f74-dade-5fc4-92ce-8950ae424258
Dec 06 09:38:28 compute-0 sudo[80401]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:38:28 compute-0 sudo[80449]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-didfxkhipndqtgngureoppwgibzabtym ; /usr/bin/python3'
Dec 06 09:38:28 compute-0 sudo[80449]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:38:28 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v6: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 06 09:38:29 compute-0 python3[80451]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 5ecd3f74-dade-5fc4-92ce-8950ae424258 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd set-require-min-compat-client mimic _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 09:38:29 compute-0 podman[80466]: 2025-12-06 09:38:29.128033579 +0000 UTC m=+0.078054443 container create cf8dcee66b3c22257191c57199bd86f9eee8f1fef86a8dc4b8b3fb5eda1aec99 (image=quay.io/ceph/ceph:v19, name=condescending_jemison, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, io.buildah.version=1.40.1)
Dec 06 09:38:29 compute-0 podman[80474]: 2025-12-06 09:38:29.167318456 +0000 UTC m=+0.074781665 container create 46057f1a2d689292344272a787894870949a980e5fc98a24216e805ba083b265 (image=quay.io/ceph/ceph:v19, name=elegant_rubin, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS)
Dec 06 09:38:29 compute-0 systemd[1]: Started libpod-conmon-cf8dcee66b3c22257191c57199bd86f9eee8f1fef86a8dc4b8b3fb5eda1aec99.scope.
Dec 06 09:38:29 compute-0 podman[80466]: 2025-12-06 09:38:29.09469199 +0000 UTC m=+0.044712904 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 06 09:38:29 compute-0 systemd[1]: Started libcrun container.
Dec 06 09:38:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3d927b3ec0898bcabfe9e949aa81c1c2854bcbc5b04487a465803cae913963e/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 09:38:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3d927b3ec0898bcabfe9e949aa81c1c2854bcbc5b04487a465803cae913963e/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec 06 09:38:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3d927b3ec0898bcabfe9e949aa81c1c2854bcbc5b04487a465803cae913963e/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 09:38:29 compute-0 systemd[1]: Started libpod-conmon-46057f1a2d689292344272a787894870949a980e5fc98a24216e805ba083b265.scope.
Dec 06 09:38:29 compute-0 podman[80474]: 2025-12-06 09:38:29.127598487 +0000 UTC m=+0.035061746 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 06 09:38:29 compute-0 podman[80466]: 2025-12-06 09:38:29.228131318 +0000 UTC m=+0.178152172 container init cf8dcee66b3c22257191c57199bd86f9eee8f1fef86a8dc4b8b3fb5eda1aec99 (image=quay.io/ceph/ceph:v19, name=condescending_jemison, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1)
Dec 06 09:38:29 compute-0 podman[80466]: 2025-12-06 09:38:29.234146648 +0000 UTC m=+0.184167502 container start cf8dcee66b3c22257191c57199bd86f9eee8f1fef86a8dc4b8b3fb5eda1aec99 (image=quay.io/ceph/ceph:v19, name=condescending_jemison, CEPH_REF=squid, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 09:38:29 compute-0 podman[80466]: 2025-12-06 09:38:29.23836856 +0000 UTC m=+0.188389394 container attach cf8dcee66b3c22257191c57199bd86f9eee8f1fef86a8dc4b8b3fb5eda1aec99 (image=quay.io/ceph/ceph:v19, name=condescending_jemison, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 09:38:29 compute-0 systemd[1]: Started libcrun container.
Dec 06 09:38:29 compute-0 podman[80474]: 2025-12-06 09:38:29.26273806 +0000 UTC m=+0.170201229 container init 46057f1a2d689292344272a787894870949a980e5fc98a24216e805ba083b265 (image=quay.io/ceph/ceph:v19, name=elegant_rubin, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 09:38:29 compute-0 podman[80474]: 2025-12-06 09:38:29.271877144 +0000 UTC m=+0.179340313 container start 46057f1a2d689292344272a787894870949a980e5fc98a24216e805ba083b265 (image=quay.io/ceph/ceph:v19, name=elegant_rubin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 09:38:29 compute-0 podman[80474]: 2025-12-06 09:38:29.275391668 +0000 UTC m=+0.182854837 container attach 46057f1a2d689292344272a787894870949a980e5fc98a24216e805ba083b265 (image=quay.io/ceph/ceph:v19, name=elegant_rubin, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec 06 09:38:29 compute-0 elegant_rubin[80502]: 167 167
Dec 06 09:38:29 compute-0 systemd[1]: libpod-46057f1a2d689292344272a787894870949a980e5fc98a24216e805ba083b265.scope: Deactivated successfully.
Dec 06 09:38:29 compute-0 podman[80474]: 2025-12-06 09:38:29.278284065 +0000 UTC m=+0.185747274 container died 46057f1a2d689292344272a787894870949a980e5fc98a24216e805ba083b265 (image=quay.io/ceph/ceph:v19, name=elegant_rubin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 06 09:38:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-cd4964552541d83f3c136d219ca6d3865ad225f064a9cb88d54569c63b7e166c-merged.mount: Deactivated successfully.
Dec 06 09:38:29 compute-0 podman[80474]: 2025-12-06 09:38:29.326995524 +0000 UTC m=+0.234458733 container remove 46057f1a2d689292344272a787894870949a980e5fc98a24216e805ba083b265 (image=quay.io/ceph/ceph:v19, name=elegant_rubin, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec 06 09:38:29 compute-0 systemd[1]: libpod-conmon-46057f1a2d689292344272a787894870949a980e5fc98a24216e805ba083b265.scope: Deactivated successfully.
Dec 06 09:38:29 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 09:38:29 compute-0 sudo[80401]: pam_unix(sudo:session): session closed for user root
Dec 06 09:38:29 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 06 09:38:29 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:38:29 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 06 09:38:29 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:38:29 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 06 09:38:29 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 09:38:29 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 06 09:38:29 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 09:38:29 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 06 09:38:29 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:38:29 compute-0 ceph-mon[74327]: Reconfiguring mon.compute-0 (unknown last config time)...
Dec 06 09:38:29 compute-0 ceph-mon[74327]: Reconfiguring daemon mon.compute-0 on compute-0
Dec 06 09:38:29 compute-0 ceph-mon[74327]: from='client.? 192.168.122.100:0/3562087741' entity='client.admin' 
Dec 06 09:38:29 compute-0 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:38:29 compute-0 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:38:29 compute-0 ceph-mon[74327]: Reconfiguring mgr.compute-0.qhdjwa (unknown last config time)...
Dec 06 09:38:29 compute-0 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.qhdjwa", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Dec 06 09:38:29 compute-0 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "mgr services"}]: dispatch
Dec 06 09:38:29 compute-0 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 09:38:29 compute-0 ceph-mon[74327]: Reconfiguring daemon mgr.compute-0.qhdjwa on compute-0
Dec 06 09:38:29 compute-0 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:38:29 compute-0 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:38:29 compute-0 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 09:38:29 compute-0 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 09:38:29 compute-0 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:38:29 compute-0 sudo[80525]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 09:38:29 compute-0 sudo[80525]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:38:29 compute-0 sudo[80525]: pam_unix(sudo:session): session closed for user root
Dec 06 09:38:29 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd set-require-min-compat-client", "version": "mimic"} v 0)
Dec 06 09:38:29 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2845166347' entity='client.admin' cmd=[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]: dispatch
Dec 06 09:38:30 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e2 do_prune osdmap full prune enabled
Dec 06 09:38:30 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e2 encode_pending skipping prime_pg_temp; mapping job did not start
Dec 06 09:38:30 compute-0 ceph-mon[74327]: pgmap v6: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 06 09:38:30 compute-0 ceph-mon[74327]: from='client.? 192.168.122.100:0/2845166347' entity='client.admin' cmd=[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]: dispatch
Dec 06 09:38:30 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2845166347' entity='client.admin' cmd='[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]': finished
Dec 06 09:38:30 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e3 e3: 0 total, 0 up, 0 in
Dec 06 09:38:30 compute-0 condescending_jemison[80494]: set require_min_compat_client to mimic
Dec 06 09:38:30 compute-0 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e3: 0 total, 0 up, 0 in
Dec 06 09:38:30 compute-0 systemd[1]: libpod-cf8dcee66b3c22257191c57199bd86f9eee8f1fef86a8dc4b8b3fb5eda1aec99.scope: Deactivated successfully.
Dec 06 09:38:30 compute-0 podman[80466]: 2025-12-06 09:38:30.659540024 +0000 UTC m=+1.609560908 container died cf8dcee66b3c22257191c57199bd86f9eee8f1fef86a8dc4b8b3fb5eda1aec99 (image=quay.io/ceph/ceph:v19, name=condescending_jemison, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 06 09:38:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-d3d927b3ec0898bcabfe9e949aa81c1c2854bcbc5b04487a465803cae913963e-merged.mount: Deactivated successfully.
Dec 06 09:38:30 compute-0 podman[80466]: 2025-12-06 09:38:30.709677681 +0000 UTC m=+1.659698545 container remove cf8dcee66b3c22257191c57199bd86f9eee8f1fef86a8dc4b8b3fb5eda1aec99 (image=quay.io/ceph/ceph:v19, name=condescending_jemison, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 09:38:30 compute-0 systemd[1]: libpod-conmon-cf8dcee66b3c22257191c57199bd86f9eee8f1fef86a8dc4b8b3fb5eda1aec99.scope: Deactivated successfully.
Dec 06 09:38:30 compute-0 sudo[80449]: pam_unix(sudo:session): session closed for user root
Dec 06 09:38:30 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v8: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 06 09:38:31 compute-0 sudo[80602]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pxjcxjjdflvpjnqvyblfwjhggtnpvomi ; /usr/bin/python3'
Dec 06 09:38:31 compute-0 sudo[80602]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:38:31 compute-0 python3[80604]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 5ecd3f74-dade-5fc4-92ce-8950ae424258 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 09:38:31 compute-0 podman[80605]: 2025-12-06 09:38:31.446901408 +0000 UTC m=+0.044427305 container create 3ffbefadf22df65baadeb23ee7e9f2e43326393e2766d4c13823ba36bb1ca1fe (image=quay.io/ceph/ceph:v19, name=practical_pascal, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0)
Dec 06 09:38:31 compute-0 systemd[1]: Started libpod-conmon-3ffbefadf22df65baadeb23ee7e9f2e43326393e2766d4c13823ba36bb1ca1fe.scope.
Dec 06 09:38:31 compute-0 systemd[1]: Started libcrun container.
Dec 06 09:38:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d6dedaf827b65f469a2686e9984db333c89b99e6fa5e4f2c419d00e1b3b34d9/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 09:38:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d6dedaf827b65f469a2686e9984db333c89b99e6fa5e4f2c419d00e1b3b34d9/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec 06 09:38:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d6dedaf827b65f469a2686e9984db333c89b99e6fa5e4f2c419d00e1b3b34d9/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 09:38:31 compute-0 podman[80605]: 2025-12-06 09:38:31.429392302 +0000 UTC m=+0.026918179 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 06 09:38:31 compute-0 podman[80605]: 2025-12-06 09:38:31.537406671 +0000 UTC m=+0.134932628 container init 3ffbefadf22df65baadeb23ee7e9f2e43326393e2766d4c13823ba36bb1ca1fe (image=quay.io/ceph/ceph:v19, name=practical_pascal, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 09:38:31 compute-0 podman[80605]: 2025-12-06 09:38:31.547026328 +0000 UTC m=+0.144552215 container start 3ffbefadf22df65baadeb23ee7e9f2e43326393e2766d4c13823ba36bb1ca1fe (image=quay.io/ceph/ceph:v19, name=practical_pascal, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_REF=squid)
Dec 06 09:38:31 compute-0 podman[80605]: 2025-12-06 09:38:31.551431465 +0000 UTC m=+0.148957332 container attach 3ffbefadf22df65baadeb23ee7e9f2e43326393e2766d4c13823ba36bb1ca1fe (image=quay.io/ceph/ceph:v19, name=practical_pascal, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 09:38:31 compute-0 ceph-mon[74327]: from='client.? 192.168.122.100:0/2845166347' entity='client.admin' cmd='[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]': finished
Dec 06 09:38:31 compute-0 ceph-mon[74327]: osdmap e3: 0 total, 0 up, 0 in
Dec 06 09:38:31 compute-0 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.14172 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 09:38:31 compute-0 sudo[80644]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 09:38:31 compute-0 sudo[80644]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:38:31 compute-0 sudo[80644]: pam_unix(sudo:session): session closed for user root
Dec 06 09:38:32 compute-0 sudo[80669]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 check-host --expect-hostname compute-0
Dec 06 09:38:32 compute-0 sudo[80669]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:38:32 compute-0 sudo[80669]: pam_unix(sudo:session): session closed for user root
Dec 06 09:38:32 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Dec 06 09:38:32 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:38:32 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Dec 06 09:38:32 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:38:32 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Dec 06 09:38:32 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:38:32 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Dec 06 09:38:32 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:38:32 compute-0 ceph-mgr[74618]: [cephadm INFO root] Added host compute-0
Dec 06 09:38:32 compute-0 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Added host compute-0
Dec 06 09:38:32 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 06 09:38:32 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 09:38:32 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 06 09:38:32 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 09:38:32 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 06 09:38:32 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:38:32 compute-0 sudo[80715]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 09:38:32 compute-0 sudo[80715]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:38:32 compute-0 sudo[80715]: pam_unix(sudo:session): session closed for user root
Dec 06 09:38:32 compute-0 ceph-mon[74327]: pgmap v8: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 06 09:38:32 compute-0 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:38:32 compute-0 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:38:32 compute-0 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:38:32 compute-0 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:38:32 compute-0 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 09:38:32 compute-0 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 09:38:32 compute-0 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:38:32 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v9: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 06 09:38:33 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 09:38:33 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 09:38:33 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 09:38:33 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 09:38:33 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 09:38:33 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 09:38:33 compute-0 ceph-mon[74327]: from='client.14172 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 09:38:33 compute-0 ceph-mon[74327]: Added host compute-0
Dec 06 09:38:33 compute-0 ceph-mon[74327]: pgmap v9: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 06 09:38:33 compute-0 ceph-mgr[74618]: [cephadm INFO cephadm.serve] Deploying cephadm binary to compute-1
Dec 06 09:38:33 compute-0 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Deploying cephadm binary to compute-1
Dec 06 09:38:34 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 09:38:34 compute-0 ceph-mon[74327]: Deploying cephadm binary to compute-1
Dec 06 09:38:34 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v10: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 06 09:38:35 compute-0 ceph-mon[74327]: pgmap v10: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 06 09:38:36 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v11: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 06 09:38:37 compute-0 ceph-mon[74327]: pgmap v11: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 06 09:38:38 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Dec 06 09:38:38 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:38:38 compute-0 ceph-mgr[74618]: [cephadm INFO root] Added host compute-1
Dec 06 09:38:38 compute-0 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Added host compute-1
Dec 06 09:38:38 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec 06 09:38:38 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:38:38 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v12: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 06 09:38:39 compute-0 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:38:39 compute-0 ceph-mon[74327]: Added host compute-1
Dec 06 09:38:39 compute-0 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:38:39 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec 06 09:38:39 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:38:39 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 09:38:39 compute-0 ceph-mgr[74618]: [cephadm INFO cephadm.serve] Deploying cephadm binary to compute-2
Dec 06 09:38:39 compute-0 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Deploying cephadm binary to compute-2
Dec 06 09:38:40 compute-0 ceph-mon[74327]: pgmap v12: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 06 09:38:40 compute-0 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:38:40 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v13: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 06 09:38:41 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec 06 09:38:41 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:38:41 compute-0 ceph-mon[74327]: Deploying cephadm binary to compute-2
Dec 06 09:38:41 compute-0 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:38:42 compute-0 ceph-mon[74327]: pgmap v13: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 06 09:38:42 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v14: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 06 09:38:43 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Dec 06 09:38:43 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:38:43 compute-0 ceph-mgr[74618]: [cephadm INFO root] Added host compute-2
Dec 06 09:38:43 compute-0 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Added host compute-2
Dec 06 09:38:43 compute-0 ceph-mgr[74618]: [cephadm INFO root] Saving service mon spec with placement compute-0;compute-1;compute-2
Dec 06 09:38:43 compute-0 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Saving service mon spec with placement compute-0;compute-1;compute-2
Dec 06 09:38:43 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Dec 06 09:38:43 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:38:43 compute-0 ceph-mgr[74618]: [cephadm INFO root] Saving service mgr spec with placement compute-0;compute-1;compute-2
Dec 06 09:38:43 compute-0 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Saving service mgr spec with placement compute-0;compute-1;compute-2
Dec 06 09:38:43 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Dec 06 09:38:43 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:38:43 compute-0 ceph-mgr[74618]: [cephadm INFO root] Marking host: compute-0 for OSDSpec preview refresh.
Dec 06 09:38:43 compute-0 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Marking host: compute-0 for OSDSpec preview refresh.
Dec 06 09:38:43 compute-0 ceph-mgr[74618]: [cephadm INFO root] Marking host: compute-1 for OSDSpec preview refresh.
Dec 06 09:38:43 compute-0 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Marking host: compute-1 for OSDSpec preview refresh.
Dec 06 09:38:43 compute-0 ceph-mgr[74618]: [cephadm INFO root] Saving service osd.default_drive_group spec with placement compute-0;compute-1;compute-2
Dec 06 09:38:43 compute-0 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Saving service osd.default_drive_group spec with placement compute-0;compute-1;compute-2
Dec 06 09:38:43 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.osd.default_drive_group}] v 0)
Dec 06 09:38:43 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:38:43 compute-0 practical_pascal[80620]: Added host 'compute-0' with addr '192.168.122.100'
Dec 06 09:38:43 compute-0 practical_pascal[80620]: Added host 'compute-1' with addr '192.168.122.101'
Dec 06 09:38:43 compute-0 practical_pascal[80620]: Added host 'compute-2' with addr '192.168.122.102'
Dec 06 09:38:43 compute-0 practical_pascal[80620]: Scheduled mon update...
Dec 06 09:38:43 compute-0 practical_pascal[80620]: Scheduled mgr update...
Dec 06 09:38:43 compute-0 practical_pascal[80620]: Scheduled osd.default_drive_group update...
Dec 06 09:38:43 compute-0 systemd[1]: libpod-3ffbefadf22df65baadeb23ee7e9f2e43326393e2766d4c13823ba36bb1ca1fe.scope: Deactivated successfully.
Dec 06 09:38:43 compute-0 podman[80605]: 2025-12-06 09:38:43.901728115 +0000 UTC m=+12.499253992 container died 3ffbefadf22df65baadeb23ee7e9f2e43326393e2766d4c13823ba36bb1ca1fe (image=quay.io/ceph/ceph:v19, name=practical_pascal, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec 06 09:38:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-6d6dedaf827b65f469a2686e9984db333c89b99e6fa5e4f2c419d00e1b3b34d9-merged.mount: Deactivated successfully.
Dec 06 09:38:43 compute-0 podman[80605]: 2025-12-06 09:38:43.947387332 +0000 UTC m=+12.544913189 container remove 3ffbefadf22df65baadeb23ee7e9f2e43326393e2766d4c13823ba36bb1ca1fe (image=quay.io/ceph/ceph:v19, name=practical_pascal, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 09:38:43 compute-0 systemd[1]: libpod-conmon-3ffbefadf22df65baadeb23ee7e9f2e43326393e2766d4c13823ba36bb1ca1fe.scope: Deactivated successfully.
Dec 06 09:38:43 compute-0 sudo[80602]: pam_unix(sudo:session): session closed for user root
Dec 06 09:38:44 compute-0 sudo[80777]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nycatufqlghjohhzdgkngduwprziuiwe ; /usr/bin/python3'
Dec 06 09:38:44 compute-0 sudo[80777]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:38:44 compute-0 ceph-mon[74327]: pgmap v14: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 06 09:38:44 compute-0 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:38:44 compute-0 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:38:44 compute-0 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:38:44 compute-0 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:38:44 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 09:38:44 compute-0 python3[80779]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 5ecd3f74-dade-5fc4-92ce-8950ae424258 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 09:38:44 compute-0 podman[80781]: 2025-12-06 09:38:44.48322846 +0000 UTC m=+0.070715456 container create 464aa29140098cde761f884ecd9b7362390786536be449d40c8522033b8db00d (image=quay.io/ceph/ceph:v19, name=great_turing, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True)
Dec 06 09:38:44 compute-0 systemd[1]: Started libpod-conmon-464aa29140098cde761f884ecd9b7362390786536be449d40c8522033b8db00d.scope.
Dec 06 09:38:44 compute-0 podman[80781]: 2025-12-06 09:38:44.451961057 +0000 UTC m=+0.039448033 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 06 09:38:44 compute-0 systemd[1]: Started libcrun container.
Dec 06 09:38:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/534077bae119f696cc24a3307f2808b1ebdb1b43e4c24155cacd9be6e0464ea6/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 09:38:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/534077bae119f696cc24a3307f2808b1ebdb1b43e4c24155cacd9be6e0464ea6/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec 06 09:38:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/534077bae119f696cc24a3307f2808b1ebdb1b43e4c24155cacd9be6e0464ea6/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 09:38:44 compute-0 podman[80781]: 2025-12-06 09:38:44.599306226 +0000 UTC m=+0.186793202 container init 464aa29140098cde761f884ecd9b7362390786536be449d40c8522033b8db00d (image=quay.io/ceph/ceph:v19, name=great_turing, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325)
Dec 06 09:38:44 compute-0 podman[80781]: 2025-12-06 09:38:44.608626494 +0000 UTC m=+0.196113470 container start 464aa29140098cde761f884ecd9b7362390786536be449d40c8522033b8db00d (image=quay.io/ceph/ceph:v19, name=great_turing, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 09:38:44 compute-0 podman[80781]: 2025-12-06 09:38:44.612860727 +0000 UTC m=+0.200347713 container attach 464aa29140098cde761f884ecd9b7362390786536be449d40c8522033b8db00d (image=quay.io/ceph/ceph:v19, name=great_turing, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 09:38:44 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v15: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 06 09:38:45 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0)
Dec 06 09:38:45 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2640512377' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Dec 06 09:38:45 compute-0 great_turing[80798]: 
Dec 06 09:38:45 compute-0 great_turing[80798]: {"fsid":"5ecd3f74-dade-5fc4-92ce-8950ae424258","health":{"status":"HEALTH_WARN","checks":{"TOO_FEW_OSDS":{"severity":"HEALTH_WARN","summary":{"message":"OSD count 0 < osd_pool_default_size 1","count":1},"muted":false}},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":61,"monmap":{"epoch":1,"min_mon_release_name":"squid","num_mons":1},"osdmap":{"epoch":3,"num_osds":0,"num_up_osds":0,"osd_up_since":0,"num_in_osds":0,"osd_in_since":0,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[],"num_pgs":0,"num_pools":0,"num_objects":0,"data_bytes":0,"bytes_used":0,"bytes_avail":0,"bytes_total":0},"fsmap":{"epoch":1,"btime":"2025-12-06T09:37:41:285728+0000","by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":1,"modified":"2025-12-06T09:37:41.289249+0000","services":{}},"progress_events":{}}
Dec 06 09:38:45 compute-0 systemd[1]: libpod-464aa29140098cde761f884ecd9b7362390786536be449d40c8522033b8db00d.scope: Deactivated successfully.
Dec 06 09:38:45 compute-0 podman[80781]: 2025-12-06 09:38:45.087913124 +0000 UTC m=+0.675400100 container died 464aa29140098cde761f884ecd9b7362390786536be449d40c8522033b8db00d (image=quay.io/ceph/ceph:v19, name=great_turing, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 09:38:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-534077bae119f696cc24a3307f2808b1ebdb1b43e4c24155cacd9be6e0464ea6-merged.mount: Deactivated successfully.
Dec 06 09:38:45 compute-0 podman[80781]: 2025-12-06 09:38:45.140725152 +0000 UTC m=+0.728212118 container remove 464aa29140098cde761f884ecd9b7362390786536be449d40c8522033b8db00d (image=quay.io/ceph/ceph:v19, name=great_turing, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 09:38:45 compute-0 systemd[1]: libpod-conmon-464aa29140098cde761f884ecd9b7362390786536be449d40c8522033b8db00d.scope: Deactivated successfully.
Dec 06 09:38:45 compute-0 sudo[80777]: pam_unix(sudo:session): session closed for user root
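
The great_turing run above (09:38:44-09:38:45) is this deployment's recurring pattern for querying cluster state: a throwaway quay.io/ceph/ceph:v19 container with /etc/ceph mounted, whose JSON output shows HEALTH_WARN with the TOO_FEW_OSDS check (OSD count 0 against osd_pool_default_size 1). A minimal sketch of the same check, assuming ceph.conf and the client.admin keyring sit under /etc/ceph as mounted above:

    # Sketch only; mirrors the containerized status query recorded in this log:
    podman run --rm --net=host --volume /etc/ceph:/etc/ceph:z \
      --entrypoint ceph quay.io/ceph/ceph:v19 status --format json \
      | jq -r '.health.status, .osdmap.num_osds'
    # Expected at this point in the log: HEALTH_WARN and 0.
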
Dec 06 09:38:45 compute-0 ceph-mon[74327]: Added host compute-2
Dec 06 09:38:45 compute-0 ceph-mon[74327]: Saving service mon spec with placement compute-0;compute-1;compute-2
Dec 06 09:38:45 compute-0 ceph-mon[74327]: Saving service mgr spec with placement compute-0;compute-1;compute-2
Dec 06 09:38:45 compute-0 ceph-mon[74327]: Marking host: compute-0 for OSDSpec preview refresh.
Dec 06 09:38:45 compute-0 ceph-mon[74327]: Marking host: compute-1 for OSDSpec preview refresh.
Dec 06 09:38:45 compute-0 ceph-mon[74327]: Saving service osd.default_drive_group spec with placement compute-0;compute-1;compute-2
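
The three "Saving service ... spec" lines persist cephadm's desired state: mon and mgr daemons plus the osd.default_drive_group service, each placed on compute-0;compute-1;compute-2. A hedged sketch of how such placements are normally declared (the /home/ceph_spec.yaml mounted at 09:39:15 suggests a spec file was used here, but its exact contents are an assumption):

    # Sketch; placement strings taken from the log lines above:
    ceph orch apply mon --placement="compute-0;compute-1;compute-2"
    ceph orch apply mgr --placement="compute-0;compute-1;compute-2"
    # OSD services are typically applied from YAML, e.g.:
    #   ceph orch apply -i ceph_spec.yaml
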
Dec 06 09:38:45 compute-0 ceph-mon[74327]: from='client.? 192.168.122.100:0/2640512377' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Dec 06 09:38:46 compute-0 ceph-mon[74327]: pgmap v15: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 06 09:38:46 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v16: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 06 09:38:48 compute-0 ceph-mon[74327]: pgmap v16: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 06 09:38:48 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v17: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 06 09:38:49 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 09:38:50 compute-0 ceph-mon[74327]: pgmap v17: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 06 09:38:50 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v18: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 06 09:38:52 compute-0 ceph-mon[74327]: pgmap v18: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 06 09:38:52 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v19: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 06 09:38:54 compute-0 ceph-mon[74327]: pgmap v19: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 06 09:38:54 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 09:38:54 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v20: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 06 09:38:56 compute-0 ceph-mon[74327]: pgmap v20: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 06 09:38:56 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v21: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 06 09:38:58 compute-0 ceph-mon[74327]: pgmap v21: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 06 09:38:58 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v22: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 06 09:38:59 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 09:39:00 compute-0 ceph-mon[74327]: pgmap v22: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 06 09:39:00 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v23: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 06 09:39:02 compute-0 ceph-mon[74327]: pgmap v23: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 06 09:39:02 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v24: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 06 09:39:02 compute-0 ceph-mgr[74618]: [balancer INFO root] Optimize plan auto_2025-12-06_09:39:02
Dec 06 09:39:02 compute-0 ceph-mgr[74618]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 06 09:39:02 compute-0 ceph-mgr[74618]: [balancer INFO root] do_upmap
Dec 06 09:39:02 compute-0 ceph-mgr[74618]: [balancer INFO root] No pools available
Dec 06 09:39:03 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] _maybe_adjust
Dec 06 09:39:03 compute-0 ceph-mgr[74618]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 06 09:39:03 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 09:39:03 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 09:39:03 compute-0 ceph-mgr[74618]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 06 09:39:03 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 09:39:03 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 09:39:03 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 09:39:03 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 09:39:04 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 09:39:04 compute-0 ceph-mon[74327]: pgmap v24: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 06 09:39:04 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v25: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 06 09:39:05 compute-0 ceph-mon[74327]: pgmap v25: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 06 09:39:06 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v26: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 06 09:39:08 compute-0 ceph-mon[74327]: pgmap v26: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 06 09:39:08 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v27: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 06 09:39:09 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 09:39:10 compute-0 ceph-mon[74327]: pgmap v27: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 06 09:39:10 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v28: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 06 09:39:12 compute-0 ceph-mon[74327]: pgmap v28: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 06 09:39:12 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v29: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 06 09:39:14 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 09:39:14 compute-0 ceph-mon[74327]: pgmap v29: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 06 09:39:14 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec 06 09:39:14 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:39:14 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec 06 09:39:14 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:39:14 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec 06 09:39:14 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:39:14 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec 06 09:39:14 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:39:14 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0)
Dec 06 09:39:14 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Dec 06 09:39:14 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 06 09:39:14 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 09:39:14 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 06 09:39:14 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 09:39:14 compute-0 ceph-mgr[74618]: [cephadm INFO cephadm.serve] Updating compute-1:/etc/ceph/ceph.conf
Dec 06 09:39:14 compute-0 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Updating compute-1:/etc/ceph/ceph.conf
Dec 06 09:39:14 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v30: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 06 09:39:15 compute-0 ceph-mgr[74618]: [cephadm INFO cephadm.serve] Updating compute-1:/var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/config/ceph.conf
Dec 06 09:39:15 compute-0 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Updating compute-1:/var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/config/ceph.conf
Dec 06 09:39:15 compute-0 sudo[80858]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lzutojkkcdqzqisqucvzznlcuqcxjyfj ; /usr/bin/python3'
Dec 06 09:39:15 compute-0 sudo[80858]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:39:15 compute-0 python3[80860]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 5ecd3f74-dade-5fc4-92ce-8950ae424258 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 09:39:15 compute-0 podman[80862]: 2025-12-06 09:39:15.473647367 +0000 UTC m=+0.049802750 container create 5830ce3da6b4bd596d8546874e5ec829a0d37f329f29a7429dff6d2799c2856b (image=quay.io/ceph/ceph:v19, name=mystifying_mendel, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.40.1)
Dec 06 09:39:15 compute-0 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:39:15 compute-0 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:39:15 compute-0 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:39:15 compute-0 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:39:15 compute-0 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Dec 06 09:39:15 compute-0 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 09:39:15 compute-0 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 09:39:15 compute-0 ceph-mon[74327]: Updating compute-1:/etc/ceph/ceph.conf
Dec 06 09:39:15 compute-0 systemd[1]: Started libpod-conmon-5830ce3da6b4bd596d8546874e5ec829a0d37f329f29a7429dff6d2799c2856b.scope.
Dec 06 09:39:15 compute-0 podman[80862]: 2025-12-06 09:39:15.449007545 +0000 UTC m=+0.025162938 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 06 09:39:15 compute-0 systemd[1]: Started libcrun container.
Dec 06 09:39:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0728f570ce10d27dca1ab84ba1441b24576c04ddb35bf0d559f85a6eb37e8825/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 09:39:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0728f570ce10d27dca1ab84ba1441b24576c04ddb35bf0d559f85a6eb37e8825/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 09:39:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0728f570ce10d27dca1ab84ba1441b24576c04ddb35bf0d559f85a6eb37e8825/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec 06 09:39:15 compute-0 podman[80862]: 2025-12-06 09:39:15.566042495 +0000 UTC m=+0.142197868 container init 5830ce3da6b4bd596d8546874e5ec829a0d37f329f29a7429dff6d2799c2856b (image=quay.io/ceph/ceph:v19, name=mystifying_mendel, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 06 09:39:15 compute-0 podman[80862]: 2025-12-06 09:39:15.581863142 +0000 UTC m=+0.158018495 container start 5830ce3da6b4bd596d8546874e5ec829a0d37f329f29a7429dff6d2799c2856b (image=quay.io/ceph/ceph:v19, name=mystifying_mendel, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True)
Dec 06 09:39:15 compute-0 podman[80862]: 2025-12-06 09:39:15.58559944 +0000 UTC m=+0.161754793 container attach 5830ce3da6b4bd596d8546874e5ec829a0d37f329f29a7429dff6d2799c2856b (image=quay.io/ceph/ceph:v19, name=mystifying_mendel, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, OSD_FLAVOR=default)
Dec 06 09:39:15 compute-0 ceph-mgr[74618]: [cephadm INFO cephadm.serve] Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Dec 06 09:39:15 compute-0 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Dec 06 09:39:16 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0)
Dec 06 09:39:16 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1345211581' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Dec 06 09:39:16 compute-0 mystifying_mendel[80878]: 
Dec 06 09:39:16 compute-0 mystifying_mendel[80878]: {"fsid":"5ecd3f74-dade-5fc4-92ce-8950ae424258","health":{"status":"HEALTH_WARN","checks":{"TOO_FEW_OSDS":{"severity":"HEALTH_WARN","summary":{"message":"OSD count 0 < osd_pool_default_size 1","count":1},"muted":false}},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":92,"monmap":{"epoch":1,"min_mon_release_name":"squid","num_mons":1},"osdmap":{"epoch":3,"num_osds":0,"num_up_osds":0,"osd_up_since":0,"num_in_osds":0,"osd_in_since":0,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[],"num_pgs":0,"num_pools":0,"num_objects":0,"data_bytes":0,"bytes_used":0,"bytes_avail":0,"bytes_total":0},"fsmap":{"epoch":1,"btime":"2025-12-06T09:37:41:285728+0000","by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":2,"modified":"2025-12-06T09:39:04.942012+0000","services":{}},"progress_events":{}}
Dec 06 09:39:16 compute-0 systemd[1]: libpod-5830ce3da6b4bd596d8546874e5ec829a0d37f329f29a7429dff6d2799c2856b.scope: Deactivated successfully.
Dec 06 09:39:16 compute-0 podman[80862]: 2025-12-06 09:39:16.049844408 +0000 UTC m=+0.625999841 container died 5830ce3da6b4bd596d8546874e5ec829a0d37f329f29a7429dff6d2799c2856b (image=quay.io/ceph/ceph:v19, name=mystifying_mendel, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True)
Dec 06 09:39:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-0728f570ce10d27dca1ab84ba1441b24576c04ddb35bf0d559f85a6eb37e8825-merged.mount: Deactivated successfully.
Dec 06 09:39:16 compute-0 podman[80862]: 2025-12-06 09:39:16.229109386 +0000 UTC m=+0.805264739 container remove 5830ce3da6b4bd596d8546874e5ec829a0d37f329f29a7429dff6d2799c2856b (image=quay.io/ceph/ceph:v19, name=mystifying_mendel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Dec 06 09:39:16 compute-0 sudo[80858]: pam_unix(sudo:session): session closed for user root
Dec 06 09:39:16 compute-0 systemd[1]: libpod-conmon-5830ce3da6b4bd596d8546874e5ec829a0d37f329f29a7429dff6d2799c2856b.scope: Deactivated successfully.
Dec 06 09:39:16 compute-0 ceph-mgr[74618]: [cephadm INFO cephadm.serve] Updating compute-1:/var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/config/ceph.client.admin.keyring
Dec 06 09:39:16 compute-0 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Updating compute-1:/var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/config/ceph.client.admin.keyring
Dec 06 09:39:16 compute-0 ceph-mon[74327]: pgmap v30: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 06 09:39:16 compute-0 ceph-mon[74327]: Updating compute-1:/var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/config/ceph.conf
Dec 06 09:39:16 compute-0 ceph-mon[74327]: from='client.? 192.168.122.100:0/1345211581' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Dec 06 09:39:16 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v31: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 06 09:39:17 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec 06 09:39:17 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:39:17 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec 06 09:39:17 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:39:17 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 06 09:39:17 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:39:17 compute-0 ceph-mgr[74618]: [cephadm ERROR cephadm.serve] Failed to apply mon spec MONSpec.from_json(yaml.safe_load('''service_type: mon
                                           service_name: mon
                                           placement:
                                             hosts:
                                             - compute-0
                                             - compute-1
                                             - compute-2
                                           ''')): Cannot place <MONSpec for service_name=mon> on compute-2: Unknown hosts
Dec 06 09:39:17 compute-0 ceph-mgr[74618]: log_channel(cephadm) log [ERR] : Failed to apply mon spec MONSpec.from_json(yaml.safe_load('''service_type: mon
                                           service_name: mon
                                           placement:
                                             hosts:
                                             - compute-0
                                             - compute-1
                                             - compute-2
                                           ''')): Cannot place <MONSpec for service_name=mon> on compute-2: Unknown hosts
Dec 06 09:39:17 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v32: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 06 09:39:17 compute-0 ceph-mgr[74618]: [cephadm ERROR cephadm.serve] Failed to apply mgr spec ServiceSpec.from_json(yaml.safe_load('''service_type: mgr
                                           service_name: mgr
                                           placement:
                                             hosts:
                                             - compute-0
                                             - compute-1
                                             - compute-2
                                           ''')): Cannot place <ServiceSpec for service_name=mgr> on compute-2: Unknown hosts
Dec 06 09:39:17 compute-0 ceph-mgr[74618]: log_channel(cephadm) log [ERR] : Failed to apply mgr spec ServiceSpec.from_json(yaml.safe_load('''service_type: mgr
                                           service_name: mgr
                                           placement:
                                             hosts:
                                             - compute-0
                                             - compute-1
                                             - compute-2
                                           ''')): Cannot place <ServiceSpec for service_name=mgr> on compute-2: Unknown hosts
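
Both failures above are the cephadm serve loop rejecting the just-saved mon and mgr specs with "Unknown hosts" for compute-2, even though "Added host compute-2" was logged at 09:38:45, suggesting the serve loop was still working from a cached host inventory; the CEPHADM_APPLY_SPEC_FAIL health check raised at 09:39:17 reflects the same state. A hedged sketch of the checks an operator would run to confirm:

    # Sketch: verify the orchestrator's host inventory and the health check:
    ceph orch host ls
    ceph health detail    # shows CEPHADM_APPLY_SPEC_FAIL while specs are failing
    # Only if compute-2 were genuinely absent would it need re-adding:
    #   ceph orch host add compute-2 <addr>   # <addr> is not shown in this log
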
Dec 06 09:39:17 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v33: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 06 09:39:17 compute-0 ceph-mgr[74618]: [progress INFO root] update: starting ev 93089259-dd77-4506-8e2a-85cee1c01235 (Updating crash deployment (+1 -> 2))
Dec 06 09:39:17 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:39:17.213+0000 7f8d46bda640 -1 log_channel(cephadm) log [ERR] : Failed to apply mon spec MONSpec.from_json(yaml.safe_load('''service_type: mon
Dec 06 09:39:17 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0)
Dec 06 09:39:17 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: service_name: mon
Dec 06 09:39:17 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: placement:
Dec 06 09:39:17 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Dec 06 09:39:17 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]:   hosts:
Dec 06 09:39:17 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]:   - compute-0
Dec 06 09:39:17 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]:   - compute-1
Dec 06 09:39:17 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]:   - compute-2
Dec 06 09:39:17 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ''')): Cannot place <MONSpec for service_name=mon> on compute-2: Unknown hosts
Dec 06 09:39:17 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:39:17.214+0000 7f8d46bda640 -1 log_channel(cephadm) log [ERR] : Failed to apply mgr spec ServiceSpec.from_json(yaml.safe_load('''service_type: mgr
Dec 06 09:39:17 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: service_name: mgr
Dec 06 09:39:17 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: placement:
Dec 06 09:39:17 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]:   hosts:
Dec 06 09:39:17 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]:   - compute-0
Dec 06 09:39:17 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]:   - compute-1
Dec 06 09:39:17 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]:   - compute-2
Dec 06 09:39:17 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ''')): Cannot place <ServiceSpec for service_name=mgr> on compute-2: Unknown hosts
Dec 06 09:39:17 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Dec 06 09:39:17 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 06 09:39:17 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 09:39:17 compute-0 ceph-mgr[74618]: [cephadm INFO cephadm.serve] Deploying daemon crash.compute-1 on compute-1
Dec 06 09:39:17 compute-0 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Deploying daemon crash.compute-1 on compute-1
Dec 06 09:39:17 compute-0 ceph-mon[74327]: log_channel(cluster) log [WRN] : Health check failed: Failed to apply 2 service(s): mon,mgr (CEPHADM_APPLY_SPEC_FAIL)
Dec 06 09:39:17 compute-0 ceph-mon[74327]: Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Dec 06 09:39:17 compute-0 ceph-mon[74327]: Updating compute-1:/var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/config/ceph.client.admin.keyring
Dec 06 09:39:17 compute-0 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:39:17 compute-0 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:39:17 compute-0 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:39:17 compute-0 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Dec 06 09:39:17 compute-0 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Dec 06 09:39:17 compute-0 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 09:39:18 compute-0 ceph-mon[74327]: pgmap v31: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 06 09:39:18 compute-0 ceph-mon[74327]: Failed to apply mon spec MONSpec.from_json(yaml.safe_load('''service_type: mon
                                           service_name: mon
                                           placement:
                                             hosts:
                                             - compute-0
                                             - compute-1
                                             - compute-2
                                           ''')): Cannot place <MONSpec for service_name=mon> on compute-2: Unknown hosts
Dec 06 09:39:18 compute-0 ceph-mon[74327]: pgmap v32: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 06 09:39:18 compute-0 ceph-mon[74327]: Failed to apply mgr spec ServiceSpec.from_json(yaml.safe_load('''service_type: mgr
                                           service_name: mgr
                                           placement:
                                             hosts:
                                             - compute-0
                                             - compute-1
                                             - compute-2
                                           ''')): Cannot place <ServiceSpec for service_name=mgr> on compute-2: Unknown hosts
Dec 06 09:39:18 compute-0 ceph-mon[74327]: pgmap v33: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 06 09:39:18 compute-0 ceph-mon[74327]: Deploying daemon crash.compute-1 on compute-1
Dec 06 09:39:18 compute-0 ceph-mon[74327]: Health check failed: Failed to apply 2 service(s): mon,mgr (CEPHADM_APPLY_SPEC_FAIL)
Dec 06 09:39:19 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v34: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 06 09:39:19 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 09:39:20 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec 06 09:39:20 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:39:20 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec 06 09:39:20 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:39:20 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0)
Dec 06 09:39:20 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:39:20 compute-0 ceph-mgr[74618]: [progress INFO root] complete: finished ev 93089259-dd77-4506-8e2a-85cee1c01235 (Updating crash deployment (+1 -> 2))
Dec 06 09:39:20 compute-0 ceph-mgr[74618]: [progress INFO root] Completed event 93089259-dd77-4506-8e2a-85cee1c01235 (Updating crash deployment (+1 -> 2)) in 3 seconds
Dec 06 09:39:20 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0)
Dec 06 09:39:20 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:39:20 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 06 09:39:20 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 09:39:20 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 06 09:39:20 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 09:39:20 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 06 09:39:20 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 09:39:20 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 06 09:39:20 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 09:39:20 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 06 09:39:20 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 09:39:20 compute-0 sudo[80914]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 09:39:20 compute-0 sudo[80914]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:39:20 compute-0 sudo[80914]: pam_unix(sudo:session): session closed for user root
Dec 06 09:39:20 compute-0 sudo[80939]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 5ecd3f74-dade-5fc4-92ce-8950ae424258 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Dec 06 09:39:20 compute-0 sudo[80939]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
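
The sudo line above is the cephadm binary (copied under /var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/) driving ceph-volume to build an OSD from the pre-created logical volume /dev/ceph_vg0/ceph_lv0, with CEPH_VOLUME_OSDSPEC_AFFINITY tying the result to the osd.default_drive_group spec saved at 09:38:45. Per the arguments shown, the equivalent manual call on the host would be:

    # Sketch; arguments are taken directly from the logged command line:
    cephadm ceph-volume --fsid 5ecd3f74-dade-5fc4-92ce-8950ae424258 -- \
      lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
    # --no-systemd because cephadm manages the daemon's systemd unit itself
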
Dec 06 09:39:20 compute-0 ceph-mon[74327]: pgmap v34: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 06 09:39:20 compute-0 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:39:20 compute-0 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:39:20 compute-0 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:39:20 compute-0 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:39:20 compute-0 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 09:39:20 compute-0 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 09:39:20 compute-0 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 09:39:20 compute-0 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 09:39:20 compute-0 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 09:39:20 compute-0 podman[81004]: 2025-12-06 09:39:20.700842804 +0000 UTC m=+0.025788486 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 06 09:39:20 compute-0 podman[81004]: 2025-12-06 09:39:20.872693647 +0000 UTC m=+0.197639319 container create 4d66448ed1a3de757b1ac9a1bf92ec49b2efcdabced61e2b4ad3fe17f214205e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_payne, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325)
Dec 06 09:39:20 compute-0 systemd[1]: Started libpod-conmon-4d66448ed1a3de757b1ac9a1bf92ec49b2efcdabced61e2b4ad3fe17f214205e.scope.
Dec 06 09:39:21 compute-0 systemd[1]: Started libcrun container.
Dec 06 09:39:21 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v35: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 06 09:39:21 compute-0 podman[81004]: 2025-12-06 09:39:21.26545406 +0000 UTC m=+0.590399732 container init 4d66448ed1a3de757b1ac9a1bf92ec49b2efcdabced61e2b4ad3fe17f214205e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_payne, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, io.buildah.version=1.40.1, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 09:39:21 compute-0 podman[81004]: 2025-12-06 09:39:21.271933858 +0000 UTC m=+0.596879520 container start 4d66448ed1a3de757b1ac9a1bf92ec49b2efcdabced61e2b4ad3fe17f214205e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_payne, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS)
Dec 06 09:39:21 compute-0 goofy_payne[81020]: 167 167
Dec 06 09:39:21 compute-0 systemd[1]: libpod-4d66448ed1a3de757b1ac9a1bf92ec49b2efcdabced61e2b4ad3fe17f214205e.scope: Deactivated successfully.
Dec 06 09:39:21 compute-0 podman[81004]: 2025-12-06 09:39:21.458064334 +0000 UTC m=+0.783010016 container attach 4d66448ed1a3de757b1ac9a1bf92ec49b2efcdabced61e2b4ad3fe17f214205e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_payne, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec 06 09:39:21 compute-0 podman[81004]: 2025-12-06 09:39:21.458791385 +0000 UTC m=+0.783737037 container died 4d66448ed1a3de757b1ac9a1bf92ec49b2efcdabced61e2b4ad3fe17f214205e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_payne, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Dec 06 09:39:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-558fd956a88cf8039dc9b37ca045489257d9c8d683d8505ea06715666f239e6d-merged.mount: Deactivated successfully.
Dec 06 09:39:21 compute-0 ceph-mon[74327]: pgmap v35: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 06 09:39:21 compute-0 podman[81004]: 2025-12-06 09:39:21.740821489 +0000 UTC m=+1.065767151 container remove 4d66448ed1a3de757b1ac9a1bf92ec49b2efcdabced61e2b4ad3fe17f214205e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_payne, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True, OSD_FLAVOR=default)
Dec 06 09:39:21 compute-0 systemd[1]: libpod-conmon-4d66448ed1a3de757b1ac9a1bf92ec49b2efcdabced61e2b4ad3fe17f214205e.scope: Deactivated successfully.
Dec 06 09:39:21 compute-0 podman[81045]: 2025-12-06 09:39:21.886024853 +0000 UTC m=+0.027458794 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 06 09:39:22 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd new", "uuid": "a01bc6a6-e368-4763-a10f-41794e4ef717"} v 0)
Dec 06 09:39:22 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='client.? 192.168.122.101:0/3516162331' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "a01bc6a6-e368-4763-a10f-41794e4ef717"}]: dispatch
Dec 06 09:39:22 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e3 do_prune osdmap full prune enabled
Dec 06 09:39:22 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e3 encode_pending skipping prime_pg_temp; mapping job did not start
Dec 06 09:39:22 compute-0 podman[81045]: 2025-12-06 09:39:22.118683822 +0000 UTC m=+0.260117753 container create 7c1a48018c9f244b1a092a6fba9e06bd31602d2437e3b674197da135025f9a64 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_antonelli, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 09:39:22 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='client.? 192.168.122.101:0/3516162331' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "a01bc6a6-e368-4763-a10f-41794e4ef717"}]': finished
Dec 06 09:39:22 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e4 e4: 1 total, 0 up, 1 in
Dec 06 09:39:22 compute-0 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e4: 1 total, 0 up, 1 in
Dec 06 09:39:22 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec 06 09:39:22 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec 06 09:39:22 compute-0 ceph-mgr[74618]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
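
The "failed to return metadata" message is the mgr asking for osd.0's metadata immediately after "osd new" registered it in the map (osdmap e4: 1 total, 0 up, 1 in); nothing is wrong at this stage, since an OSD only reports metadata once its daemon boots. A hedged one-line check:

    # Sketch: osd.0 exists in the map but has no metadata until its daemon starts:
    ceph osd metadata 0   # ENOENT at this point; populated after the OSD boots
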
Dec 06 09:39:22 compute-0 systemd[1]: Started libpod-conmon-7c1a48018c9f244b1a092a6fba9e06bd31602d2437e3b674197da135025f9a64.scope.
Dec 06 09:39:22 compute-0 systemd[1]: Started libcrun container.
Dec 06 09:39:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3696a60172011108faeb846dedfdb6ff8070d7c7adbfb8a3e22a0763a589d2e4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 09:39:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3696a60172011108faeb846dedfdb6ff8070d7c7adbfb8a3e22a0763a589d2e4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 09:39:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3696a60172011108faeb846dedfdb6ff8070d7c7adbfb8a3e22a0763a589d2e4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 09:39:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3696a60172011108faeb846dedfdb6ff8070d7c7adbfb8a3e22a0763a589d2e4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 09:39:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3696a60172011108faeb846dedfdb6ff8070d7c7adbfb8a3e22a0763a589d2e4/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 06 09:39:22 compute-0 podman[81045]: 2025-12-06 09:39:22.213222723 +0000 UTC m=+0.354656704 container init 7c1a48018c9f244b1a092a6fba9e06bd31602d2437e3b674197da135025f9a64 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_antonelli, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 09:39:22 compute-0 podman[81045]: 2025-12-06 09:39:22.229048169 +0000 UTC m=+0.370482090 container start 7c1a48018c9f244b1a092a6fba9e06bd31602d2437e3b674197da135025f9a64 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_antonelli, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 09:39:22 compute-0 podman[81045]: 2025-12-06 09:39:22.233421646 +0000 UTC m=+0.374855577 container attach 7c1a48018c9f244b1a092a6fba9e06bd31602d2437e3b674197da135025f9a64 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_antonelli, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 09:39:22 compute-0 magical_antonelli[81061]: --> passed data devices: 0 physical, 1 LVM
Dec 06 09:39:22 compute-0 magical_antonelli[81061]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 06 09:39:22 compute-0 magical_antonelli[81061]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 06 09:39:22 compute-0 magical_antonelli[81061]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 7899c4d8-edb4-4836-b838-c4aa702ad7af
Dec 06 09:39:22 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon getmap"} v 0)
Dec 06 09:39:22 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/389473799' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Dec 06 09:39:22 compute-0 ceph-mon[74327]: from='client.? 192.168.122.101:0/3516162331' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "a01bc6a6-e368-4763-a10f-41794e4ef717"}]: dispatch
Dec 06 09:39:22 compute-0 ceph-mon[74327]: from='client.? 192.168.122.101:0/3516162331' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "a01bc6a6-e368-4763-a10f-41794e4ef717"}]': finished
Dec 06 09:39:22 compute-0 ceph-mon[74327]: osdmap e4: 1 total, 0 up, 1 in
Dec 06 09:39:22 compute-0 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec 06 09:39:23 compute-0 ceph-mgr[74618]: [progress INFO root] Writing back 2 completed events
Dec 06 09:39:23 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Dec 06 09:39:23 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:39:23 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd new", "uuid": "7899c4d8-edb4-4836-b838-c4aa702ad7af"} v 0)
Dec 06 09:39:23 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/335734850' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "7899c4d8-edb4-4836-b838-c4aa702ad7af"}]: dispatch
Dec 06 09:39:23 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e4 do_prune osdmap full prune enabled
Dec 06 09:39:23 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e4 encode_pending skipping prime_pg_temp; mapping job did not start
Dec 06 09:39:23 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v37: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 06 09:39:23 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/335734850' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "7899c4d8-edb4-4836-b838-c4aa702ad7af"}]': finished
Dec 06 09:39:23 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e5 e5: 2 total, 0 up, 2 in
Dec 06 09:39:23 compute-0 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e5: 2 total, 0 up, 2 in
Dec 06 09:39:23 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec 06 09:39:23 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec 06 09:39:23 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec 06 09:39:23 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 06 09:39:23 compute-0 ceph-mgr[74618]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec 06 09:39:23 compute-0 ceph-mgr[74618]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
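The two "failed to return metadata" errors are expected at this stage: osd.0 and osd.1 exist in the osdmap (e5: 2 total, 0 up) but neither daemon has booted yet, and OSD metadata is only populated once a daemon starts and reports in. A quick check, assuming the ceph CLI and an admin keyring on the host:

    # Sketch: OSD metadata stays ENOENT until the daemon first boots.
    import json, subprocess

    out = subprocess.run(
        ["ceph", "osd", "metadata", "1", "--format", "json"],
        capture_output=True, text=True,
    )
    if out.returncode != 0:
        print("no metadata yet (daemon not started):", out.stderr.strip())
    else:
        print(json.loads(out.stdout).get("osd_objectstore"))  # "bluestore" once up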
Dec 06 09:39:23 compute-0 magical_antonelli[81061]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-1
Dec 06 09:39:23 compute-0 magical_antonelli[81061]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg0/ceph_lv0
Dec 06 09:39:23 compute-0 magical_antonelli[81061]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Dec 06 09:39:23 compute-0 magical_antonelli[81061]: Running command: /usr/bin/ln -s /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-1/block
Dec 06 09:39:23 compute-0 magical_antonelli[81061]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-1/activate.monmap
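The sequence above is how ceph-volume lays out a BlueStore data directory: /var/lib/ceph/osd/ceph-1 is a tmpfs, "block" is just a symlink to the LV, and the monmap is fetched so mkfs can stamp the correct cluster. A small verification sketch, assuming the paths from the log:

    # Sketch: confirm the BlueStore data dir points at the expected LV
    # (paths taken from the log above).
    import os

    osd_dir = "/var/lib/ceph/osd/ceph-1"
    block = os.path.join(osd_dir, "block")
    assert os.path.islink(block), "block is a symlink on tmpfs-backed OSD dirs"
    print("block ->", os.path.realpath(block))  # resolves to /dev/dm-0 here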
Dec 06 09:39:23 compute-0 lvm[81125]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 06 09:39:23 compute-0 lvm[81125]: VG ceph_vg0 finished
Dec 06 09:39:23 compute-0 ceph-mon[74327]: from='client.? 192.168.122.101:0/389473799' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Dec 06 09:39:23 compute-0 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:39:23 compute-0 ceph-mon[74327]: from='client.? 192.168.122.100:0/335734850' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "7899c4d8-edb4-4836-b838-c4aa702ad7af"}]: dispatch
Dec 06 09:39:23 compute-0 ceph-mon[74327]: pgmap v37: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 06 09:39:23 compute-0 ceph-mon[74327]: from='client.? 192.168.122.100:0/335734850' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "7899c4d8-edb4-4836-b838-c4aa702ad7af"}]': finished
Dec 06 09:39:23 compute-0 ceph-mon[74327]: osdmap e5: 2 total, 0 up, 2 in
Dec 06 09:39:23 compute-0 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec 06 09:39:23 compute-0 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 06 09:39:24 compute-0 ceph-mon[74327]: log_channel(cluster) log [INF] : Health check cleared: TOO_FEW_OSDS (was: OSD count 0 < osd_pool_default_size 1)
Dec 06 09:39:24 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon getmap"} v 0)
Dec 06 09:39:24 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2469659317' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Dec 06 09:39:24 compute-0 magical_antonelli[81061]:  stderr: got monmap epoch 1
Dec 06 09:39:24 compute-0 magical_antonelli[81061]: --> Creating keyring file for osd.1
Dec 06 09:39:24 compute-0 magical_antonelli[81061]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1/keyring
Dec 06 09:39:24 compute-0 magical_antonelli[81061]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1/
Dec 06 09:39:24 compute-0 magical_antonelli[81061]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 1 --monmap /var/lib/ceph/osd/ceph-1/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-1/ --osd-uuid 7899c4d8-edb4-4836-b838-c4aa702ad7af --setuser ceph --setgroup ceph
Dec 06 09:39:24 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e5 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 09:39:24 compute-0 ceph-mon[74327]: Health check cleared: TOO_FEW_OSDS (was: OSD count 0 < osd_pool_default_size 1)
Dec 06 09:39:24 compute-0 ceph-mon[74327]: from='client.? 192.168.122.100:0/2469659317' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Dec 06 09:39:25 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v39: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 06 09:39:25 compute-0 ceph-mon[74327]: pgmap v39: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 06 09:39:27 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v40: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 06 09:39:27 compute-0 magical_antonelli[81061]:  stderr: 2025-12-06T09:39:24.332+0000 7f9db4598740 -1 bluestore(/var/lib/ceph/osd/ceph-1//block) No valid bdev label found
Dec 06 09:39:27 compute-0 magical_antonelli[81061]:  stderr: 2025-12-06T09:39:24.593+0000 7f9db4598740 -1 bluestore(/var/lib/ceph/osd/ceph-1/) _read_fsid unparsable uuid
Dec 06 09:39:27 compute-0 magical_antonelli[81061]: --> ceph-volume lvm prepare successful for: ceph_vg0/ceph_lv0
Dec 06 09:39:27 compute-0 magical_antonelli[81061]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Dec 06 09:39:27 compute-0 magical_antonelli[81061]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-1 --no-mon-config
Dec 06 09:39:28 compute-0 magical_antonelli[81061]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-1/block
Dec 06 09:39:28 compute-0 magical_antonelli[81061]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-1/block
Dec 06 09:39:28 compute-0 magical_antonelli[81061]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Dec 06 09:39:28 compute-0 magical_antonelli[81061]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Dec 06 09:39:28 compute-0 magical_antonelli[81061]: --> ceph-volume lvm activate successful for osd ID: 1
Dec 06 09:39:28 compute-0 magical_antonelli[81061]: --> ceph-volume lvm create successful for: ceph_vg0/ceph_lv0
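The two mkfs stderr lines earlier ("No valid bdev label found", "_read_fsid unparsable uuid") are harmless on a first-time mkfs: ceph-osd probes the device for an existing BlueStore label before writing one. Once "lvm create" reports success, the label can be read back, assuming ceph-bluestore-tool is available on the host or in the container:

    # Sketch: read back the BlueStore label written by ceph-osd --mkfs.
    import json, subprocess

    dev = "/dev/ceph_vg0/ceph_lv0"
    out = subprocess.run(
        ["ceph-bluestore-tool", "show-label", "--dev", dev],
        check=True, capture_output=True, text=True,
    )
    label = json.loads(out.stdout)  # JSON keyed by device path
    print(label[dev]["osd_uuid"])   # 7899c4d8-... per the osd new call above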
Dec 06 09:39:28 compute-0 systemd[1]: libpod-7c1a48018c9f244b1a092a6fba9e06bd31602d2437e3b674197da135025f9a64.scope: Deactivated successfully.
Dec 06 09:39:28 compute-0 systemd[1]: libpod-7c1a48018c9f244b1a092a6fba9e06bd31602d2437e3b674197da135025f9a64.scope: Consumed 2.523s CPU time.
Dec 06 09:39:28 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "osd.0"} v 0)
Dec 06 09:39:28 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
Dec 06 09:39:28 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 06 09:39:28 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 09:39:28 compute-0 ceph-mgr[74618]: [cephadm INFO cephadm.serve] Deploying daemon osd.0 on compute-1
Dec 06 09:39:28 compute-0 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Deploying daemon osd.0 on compute-1
Dec 06 09:39:28 compute-0 podman[82040]: 2025-12-06 09:39:28.222364184 +0000 UTC m=+0.038012600 container died 7c1a48018c9f244b1a092a6fba9e06bd31602d2437e3b674197da135025f9a64 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_antonelli, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 09:39:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-3696a60172011108faeb846dedfdb6ff8070d7c7adbfb8a3e22a0763a589d2e4-merged.mount: Deactivated successfully.
Dec 06 09:39:28 compute-0 ceph-mon[74327]: pgmap v40: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 06 09:39:28 compute-0 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
Dec 06 09:39:28 compute-0 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 09:39:28 compute-0 podman[82040]: 2025-12-06 09:39:28.284817247 +0000 UTC m=+0.100465583 container remove 7c1a48018c9f244b1a092a6fba9e06bd31602d2437e3b674197da135025f9a64 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_antonelli, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 09:39:28 compute-0 systemd[1]: libpod-conmon-7c1a48018c9f244b1a092a6fba9e06bd31602d2437e3b674197da135025f9a64.scope: Deactivated successfully.
Dec 06 09:39:28 compute-0 sudo[80939]: pam_unix(sudo:session): session closed for user root
Dec 06 09:39:28 compute-0 sudo[82056]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 09:39:28 compute-0 sudo[82056]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:39:28 compute-0 sudo[82056]: pam_unix(sudo:session): session closed for user root
Dec 06 09:39:28 compute-0 sudo[82081]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 5ecd3f74-dade-5fc4-92ce-8950ae424258 -- lvm list --format json
Dec 06 09:39:28 compute-0 sudo[82081]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:39:28 compute-0 podman[82143]: 2025-12-06 09:39:28.98084071 +0000 UTC m=+0.086298314 container create 09b858d21e497d3012857197d3cae613a4655d6eafd5ece3a79b671703da44ae (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_dirac, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Dec 06 09:39:29 compute-0 systemd[1]: Started libpod-conmon-09b858d21e497d3012857197d3cae613a4655d6eafd5ece3a79b671703da44ae.scope.
Dec 06 09:39:29 compute-0 podman[82143]: 2025-12-06 09:39:28.940625798 +0000 UTC m=+0.046083412 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 06 09:39:29 compute-0 systemd[1]: Started libcrun container.
Dec 06 09:39:29 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v41: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 06 09:39:29 compute-0 podman[82143]: 2025-12-06 09:39:29.264926524 +0000 UTC m=+0.370384168 container init 09b858d21e497d3012857197d3cae613a4655d6eafd5ece3a79b671703da44ae (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_dirac, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Dec 06 09:39:29 compute-0 podman[82143]: 2025-12-06 09:39:29.273846201 +0000 UTC m=+0.379303765 container start 09b858d21e497d3012857197d3cae613a4655d6eafd5ece3a79b671703da44ae (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_dirac, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Dec 06 09:39:29 compute-0 youthful_dirac[82159]: 167 167
Dec 06 09:39:29 compute-0 systemd[1]: libpod-09b858d21e497d3012857197d3cae613a4655d6eafd5ece3a79b671703da44ae.scope: Deactivated successfully.
Dec 06 09:39:29 compute-0 podman[82143]: 2025-12-06 09:39:29.334692519 +0000 UTC m=+0.440150083 container attach 09b858d21e497d3012857197d3cae613a4655d6eafd5ece3a79b671703da44ae (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_dirac, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec 06 09:39:29 compute-0 ceph-mon[74327]: Deploying daemon osd.0 on compute-1
Dec 06 09:39:29 compute-0 podman[82143]: 2025-12-06 09:39:29.33678725 +0000 UTC m=+0.442244854 container died 09b858d21e497d3012857197d3cae613a4655d6eafd5ece3a79b671703da44ae (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_dirac, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Dec 06 09:39:29 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e5 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 09:39:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-2571f5ea6816571cb4ea473a09801e3498e0e177a389c6c14d1c9a07bc7c4dfa-merged.mount: Deactivated successfully.
Dec 06 09:39:29 compute-0 podman[82143]: 2025-12-06 09:39:29.396123333 +0000 UTC m=+0.501580937 container remove 09b858d21e497d3012857197d3cae613a4655d6eafd5ece3a79b671703da44ae (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_dirac, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 06 09:39:29 compute-0 systemd[1]: libpod-conmon-09b858d21e497d3012857197d3cae613a4655d6eafd5ece3a79b671703da44ae.scope: Deactivated successfully.
Dec 06 09:39:29 compute-0 podman[82183]: 2025-12-06 09:39:29.620344989 +0000 UTC m=+0.085563463 container create 3be6c90d62e8679766e5f34c2385114a41fc9e61e9f23df68fbc9f59d362d4a7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_keldysh, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid)
Dec 06 09:39:29 compute-0 podman[82183]: 2025-12-06 09:39:29.577866282 +0000 UTC m=+0.043084816 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 06 09:39:29 compute-0 systemd[1]: Started libpod-conmon-3be6c90d62e8679766e5f34c2385114a41fc9e61e9f23df68fbc9f59d362d4a7.scope.
Dec 06 09:39:29 compute-0 systemd[1]: Started libcrun container.
Dec 06 09:39:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db8e40b0c80d137a7cd1ce73a50c3cc4d840d07401dfab8d1e9983b9b73ea344/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 09:39:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db8e40b0c80d137a7cd1ce73a50c3cc4d840d07401dfab8d1e9983b9b73ea344/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 09:39:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db8e40b0c80d137a7cd1ce73a50c3cc4d840d07401dfab8d1e9983b9b73ea344/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 09:39:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db8e40b0c80d137a7cd1ce73a50c3cc4d840d07401dfab8d1e9983b9b73ea344/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 09:39:30 compute-0 podman[82183]: 2025-12-06 09:39:30.035169209 +0000 UTC m=+0.500387723 container init 3be6c90d62e8679766e5f34c2385114a41fc9e61e9f23df68fbc9f59d362d4a7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_keldysh, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 09:39:30 compute-0 podman[82183]: 2025-12-06 09:39:30.046940909 +0000 UTC m=+0.512159343 container start 3be6c90d62e8679766e5f34c2385114a41fc9e61e9f23df68fbc9f59d362d4a7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_keldysh, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec 06 09:39:30 compute-0 podman[82183]: 2025-12-06 09:39:30.077536273 +0000 UTC m=+0.542754707 container attach 3be6c90d62e8679766e5f34c2385114a41fc9e61e9f23df68fbc9f59d362d4a7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_keldysh, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Dec 06 09:39:30 compute-0 stoic_keldysh[82199]: {
Dec 06 09:39:30 compute-0 stoic_keldysh[82199]:     "1": [
Dec 06 09:39:30 compute-0 stoic_keldysh[82199]:         {
Dec 06 09:39:30 compute-0 stoic_keldysh[82199]:             "devices": [
Dec 06 09:39:30 compute-0 stoic_keldysh[82199]:                 "/dev/loop3"
Dec 06 09:39:30 compute-0 stoic_keldysh[82199]:             ],
Dec 06 09:39:30 compute-0 stoic_keldysh[82199]:             "lv_name": "ceph_lv0",
Dec 06 09:39:30 compute-0 stoic_keldysh[82199]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 09:39:30 compute-0 stoic_keldysh[82199]:             "lv_size": "21470642176",
Dec 06 09:39:30 compute-0 stoic_keldysh[82199]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=a55pcS-v3Vt-CJ4W-fqCV-Baze-9VlY-jG9prS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=5ecd3f74-dade-5fc4-92ce-8950ae424258,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=7899c4d8-edb4-4836-b838-c4aa702ad7af,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 06 09:39:30 compute-0 stoic_keldysh[82199]:             "lv_uuid": "a55pcS-v3Vt-CJ4W-fqCV-Baze-9VlY-jG9prS",
Dec 06 09:39:30 compute-0 stoic_keldysh[82199]:             "name": "ceph_lv0",
Dec 06 09:39:30 compute-0 stoic_keldysh[82199]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 09:39:30 compute-0 stoic_keldysh[82199]:             "tags": {
Dec 06 09:39:30 compute-0 stoic_keldysh[82199]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 06 09:39:30 compute-0 stoic_keldysh[82199]:                 "ceph.block_uuid": "a55pcS-v3Vt-CJ4W-fqCV-Baze-9VlY-jG9prS",
Dec 06 09:39:30 compute-0 stoic_keldysh[82199]:                 "ceph.cephx_lockbox_secret": "",
Dec 06 09:39:30 compute-0 stoic_keldysh[82199]:                 "ceph.cluster_fsid": "5ecd3f74-dade-5fc4-92ce-8950ae424258",
Dec 06 09:39:30 compute-0 stoic_keldysh[82199]:                 "ceph.cluster_name": "ceph",
Dec 06 09:39:30 compute-0 stoic_keldysh[82199]:                 "ceph.crush_device_class": "",
Dec 06 09:39:30 compute-0 stoic_keldysh[82199]:                 "ceph.encrypted": "0",
Dec 06 09:39:30 compute-0 stoic_keldysh[82199]:                 "ceph.osd_fsid": "7899c4d8-edb4-4836-b838-c4aa702ad7af",
Dec 06 09:39:30 compute-0 stoic_keldysh[82199]:                 "ceph.osd_id": "1",
Dec 06 09:39:30 compute-0 stoic_keldysh[82199]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 06 09:39:30 compute-0 stoic_keldysh[82199]:                 "ceph.type": "block",
Dec 06 09:39:30 compute-0 stoic_keldysh[82199]:                 "ceph.vdo": "0",
Dec 06 09:39:30 compute-0 stoic_keldysh[82199]:                 "ceph.with_tpm": "0"
Dec 06 09:39:30 compute-0 stoic_keldysh[82199]:             },
Dec 06 09:39:30 compute-0 stoic_keldysh[82199]:             "type": "block",
Dec 06 09:39:30 compute-0 stoic_keldysh[82199]:             "vg_name": "ceph_vg0"
Dec 06 09:39:30 compute-0 stoic_keldysh[82199]:         }
Dec 06 09:39:30 compute-0 stoic_keldysh[82199]:     ]
Dec 06 09:39:30 compute-0 stoic_keldysh[82199]: }
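The JSON above is the output of the "lvm list --format json" run launched at 09:39:28: a map from OSD id to the LVs backing it, including the ceph.* LV tags that ceph-volume uses to re-discover OSDs. A parsing sketch, assuming the JSON is captured from that command's stdout (any cephadm chatter is assumed to go to stderr):

    # Sketch: map OSD ids to devices from `ceph-volume lvm list --format json`,
    # invoked through cephadm as the sudo line above does.
    import json, subprocess

    out = subprocess.run(
        ["cephadm", "ceph-volume", "--", "lvm", "list", "--format", "json"],
        check=True, capture_output=True, text=True,
    )
    for osd_id, lvs in json.loads(out.stdout).items():
        for lv in lvs:
            print(f"osd.{osd_id}: {lv['lv_path']} on {','.join(lv['devices'])} "
                  f"(osd_fsid={lv['tags']['ceph.osd_fsid']})")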
Dec 06 09:39:30 compute-0 systemd[1]: libpod-3be6c90d62e8679766e5f34c2385114a41fc9e61e9f23df68fbc9f59d362d4a7.scope: Deactivated successfully.
Dec 06 09:39:30 compute-0 podman[82183]: 2025-12-06 09:39:30.360164046 +0000 UTC m=+0.825382470 container died 3be6c90d62e8679766e5f34c2385114a41fc9e61e9f23df68fbc9f59d362d4a7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_keldysh, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Dec 06 09:39:30 compute-0 ceph-mon[74327]: pgmap v41: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 06 09:39:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-db8e40b0c80d137a7cd1ce73a50c3cc4d840d07401dfab8d1e9983b9b73ea344-merged.mount: Deactivated successfully.
Dec 06 09:39:30 compute-0 podman[82183]: 2025-12-06 09:39:30.543951013 +0000 UTC m=+1.009169477 container remove 3be6c90d62e8679766e5f34c2385114a41fc9e61e9f23df68fbc9f59d362d4a7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_keldysh, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Dec 06 09:39:30 compute-0 systemd[1]: libpod-conmon-3be6c90d62e8679766e5f34c2385114a41fc9e61e9f23df68fbc9f59d362d4a7.scope: Deactivated successfully.
Dec 06 09:39:30 compute-0 sudo[82081]: pam_unix(sudo:session): session closed for user root
Dec 06 09:39:30 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "osd.1"} v 0)
Dec 06 09:39:30 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
Dec 06 09:39:30 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 06 09:39:30 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 09:39:30 compute-0 ceph-mgr[74618]: [cephadm INFO cephadm.serve] Deploying daemon osd.1 on compute-0
Dec 06 09:39:30 compute-0 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Deploying daemon osd.1 on compute-0
Dec 06 09:39:30 compute-0 sudo[82220]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 09:39:30 compute-0 sudo[82220]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:39:30 compute-0 sudo[82220]: pam_unix(sudo:session): session closed for user root
Dec 06 09:39:30 compute-0 sudo[82245]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 _orch deploy --fsid 5ecd3f74-dade-5fc4-92ce-8950ae424258
Dec 06 09:39:30 compute-0 sudo[82245]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:39:31 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v42: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 06 09:39:31 compute-0 podman[82308]: 2025-12-06 09:39:31.24773197 +0000 UTC m=+0.047584446 container create bb81d54862bbd5e045d776eed32b053dcc7b40b1b837a393ec03cfe0f101be8b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_carver, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 09:39:31 compute-0 systemd[1]: Started libpod-conmon-bb81d54862bbd5e045d776eed32b053dcc7b40b1b837a393ec03cfe0f101be8b.scope.
Dec 06 09:39:31 compute-0 systemd[1]: Started libcrun container.
Dec 06 09:39:31 compute-0 podman[82308]: 2025-12-06 09:39:31.229577015 +0000 UTC m=+0.029429521 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 06 09:39:31 compute-0 podman[82308]: 2025-12-06 09:39:31.329861201 +0000 UTC m=+0.129713677 container init bb81d54862bbd5e045d776eed32b053dcc7b40b1b837a393ec03cfe0f101be8b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_carver, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 06 09:39:31 compute-0 podman[82308]: 2025-12-06 09:39:31.339116138 +0000 UTC m=+0.138968614 container start bb81d54862bbd5e045d776eed32b053dcc7b40b1b837a393ec03cfe0f101be8b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_carver, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1)
Dec 06 09:39:31 compute-0 podman[82308]: 2025-12-06 09:39:31.342462046 +0000 UTC m=+0.142314522 container attach bb81d54862bbd5e045d776eed32b053dcc7b40b1b837a393ec03cfe0f101be8b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_carver, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 09:39:31 compute-0 epic_carver[82324]: 167 167
Dec 06 09:39:31 compute-0 systemd[1]: libpod-bb81d54862bbd5e045d776eed32b053dcc7b40b1b837a393ec03cfe0f101be8b.scope: Deactivated successfully.
Dec 06 09:39:31 compute-0 podman[82308]: 2025-12-06 09:39:31.345648398 +0000 UTC m=+0.145500874 container died bb81d54862bbd5e045d776eed32b053dcc7b40b1b837a393ec03cfe0f101be8b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_carver, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 09:39:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-49256ea79608eb0a822be1d90781bc48bb91fdebd83233b5c195ae26006f1b06-merged.mount: Deactivated successfully.
Dec 06 09:39:31 compute-0 podman[82308]: 2025-12-06 09:39:31.38417119 +0000 UTC m=+0.184023656 container remove bb81d54862bbd5e045d776eed32b053dcc7b40b1b837a393ec03cfe0f101be8b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_carver, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_REF=squid)
Dec 06 09:39:31 compute-0 systemd[1]: libpod-conmon-bb81d54862bbd5e045d776eed32b053dcc7b40b1b837a393ec03cfe0f101be8b.scope: Deactivated successfully.
Dec 06 09:39:31 compute-0 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
Dec 06 09:39:31 compute-0 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 09:39:31 compute-0 ceph-mon[74327]: Deploying daemon osd.1 on compute-0
Dec 06 09:39:31 compute-0 podman[82353]: 2025-12-06 09:39:31.623831312 +0000 UTC m=+0.044788235 container create 77430bab5b3dd3fe54134c96095b236005ed6639cc750b3f65c49668f3017809 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-osd-1-activate-test, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Dec 06 09:39:31 compute-0 systemd[1]: Started libpod-conmon-77430bab5b3dd3fe54134c96095b236005ed6639cc750b3f65c49668f3017809.scope.
Dec 06 09:39:31 compute-0 podman[82353]: 2025-12-06 09:39:31.604984057 +0000 UTC m=+0.025940970 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 06 09:39:31 compute-0 systemd[1]: Started libcrun container.
Dec 06 09:39:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2b616f693a4b5fc931d6219f2cd4f977d68fc5df54c8f9a1d4e184d88a5be7a9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 09:39:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2b616f693a4b5fc931d6219f2cd4f977d68fc5df54c8f9a1d4e184d88a5be7a9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 09:39:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2b616f693a4b5fc931d6219f2cd4f977d68fc5df54c8f9a1d4e184d88a5be7a9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 09:39:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2b616f693a4b5fc931d6219f2cd4f977d68fc5df54c8f9a1d4e184d88a5be7a9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 09:39:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2b616f693a4b5fc931d6219f2cd4f977d68fc5df54c8f9a1d4e184d88a5be7a9/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Dec 06 09:39:31 compute-0 podman[82353]: 2025-12-06 09:39:31.736022782 +0000 UTC m=+0.156979755 container init 77430bab5b3dd3fe54134c96095b236005ed6639cc750b3f65c49668f3017809 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-osd-1-activate-test, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 09:39:31 compute-0 podman[82353]: 2025-12-06 09:39:31.745897438 +0000 UTC m=+0.166854361 container start 77430bab5b3dd3fe54134c96095b236005ed6639cc750b3f65c49668f3017809 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-osd-1-activate-test, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 09:39:31 compute-0 podman[82353]: 2025-12-06 09:39:31.749783959 +0000 UTC m=+0.170740882 container attach 77430bab5b3dd3fe54134c96095b236005ed6639cc750b3f65c49668f3017809 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-osd-1-activate-test, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 09:39:31 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-osd-1-activate-test[82370]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_FSID]
Dec 06 09:39:31 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-osd-1-activate-test[82370]:                             [--no-systemd] [--no-tmpfs]
Dec 06 09:39:31 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-osd-1-activate-test[82370]: ceph-volume activate: error: unrecognized arguments: --bad-option
Dec 06 09:39:31 compute-0 systemd[1]: libpod-77430bab5b3dd3fe54134c96095b236005ed6639cc750b3f65c49668f3017809.scope: Deactivated successfully.
Dec 06 09:39:31 compute-0 conmon[82370]: conmon 77430bab5b3dd3fe5413 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-77430bab5b3dd3fe54134c96095b236005ed6639cc750b3f65c49668f3017809.scope/container/memory.events
Dec 06 09:39:31 compute-0 podman[82353]: 2025-12-06 09:39:31.933675141 +0000 UTC m=+0.354632044 container died 77430bab5b3dd3fe54134c96095b236005ed6639cc750b3f65c49668f3017809 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-osd-1-activate-test, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Dec 06 09:39:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-2b616f693a4b5fc931d6219f2cd4f977d68fc5df54c8f9a1d4e184d88a5be7a9-merged.mount: Deactivated successfully.
Dec 06 09:39:31 compute-0 podman[82353]: 2025-12-06 09:39:31.983691694 +0000 UTC m=+0.404648617 container remove 77430bab5b3dd3fe54134c96095b236005ed6639cc750b3f65c49668f3017809 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-osd-1-activate-test, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 09:39:31 compute-0 systemd[1]: libpod-conmon-77430bab5b3dd3fe54134c96095b236005ed6639cc750b3f65c49668f3017809.scope: Deactivated successfully.
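The short-lived "osd-1-activate-test" container above appears to be a deliberate probe: a bogus "--bad-option" is passed so that argparse prints the usage line, which reveals whether this ceph-volume supports flags such as --no-tmpfs before the real unit is written. A hedged sketch of that feature-detection pattern (the probe flag and target command are taken from the log; reading this as cephadm's intent is an inference):

    # Sketch: detect CLI flag support from argparse usage output, as the
    # --bad-option probe above suggests.
    import subprocess

    def supports_flag(cmd, flag):
        # argparse prints a usage summary listing valid flags when it
        # rejects an unrecognized argument.
        out = subprocess.run(cmd + ["--bad-option"],
                             capture_output=True, text=True)
        return flag in (out.stdout + out.stderr)

    print(supports_flag(["ceph-volume", "activate"], "--no-tmpfs"))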
Dec 06 09:39:32 compute-0 systemd[1]: Reloading.
Dec 06 09:39:32 compute-0 systemd-sysv-generator[82435]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 09:39:32 compute-0 systemd-rc-local-generator[82430]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 09:39:32 compute-0 ceph-mon[74327]: pgmap v42: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 06 09:39:32 compute-0 systemd[1]: Reloading.
Dec 06 09:39:32 compute-0 systemd-rc-local-generator[82468]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 09:39:32 compute-0 systemd-sysv-generator[82471]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 09:39:32 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec 06 09:39:32 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:39:32 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec 06 09:39:32 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:39:32 compute-0 systemd[1]: Starting Ceph osd.1 for 5ecd3f74-dade-5fc4-92ce-8950ae424258...
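"Starting Ceph osd.1 for 5ecd3f74-..." is systemd bringing up the cephadm-managed unit for this daemon; cephadm conventionally templates these as ceph-<fsid>@<daemon>.service, so the exact unit name below is an assumption built from the fsid in the log:

    # Sketch: check the cephadm-managed OSD unit (name assumed from the
    # conventional ceph-<fsid>@<daemon> template).
    import subprocess

    fsid = "5ecd3f74-dade-5fc4-92ce-8950ae424258"
    unit = f"ceph-{fsid}@osd.1.service"
    state = subprocess.run(["systemctl", "is-active", unit],
                           capture_output=True, text=True).stdout.strip()
    print(unit, "->", state or "unknown")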
Dec 06 09:39:33 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 09:39:33 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 09:39:33 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 09:39:33 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 09:39:33 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 09:39:33 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 09:39:33 compute-0 podman[82529]: 2025-12-06 09:39:33.135656005 +0000 UTC m=+0.061630831 container create 8bb2aad8546507e838650dd9a645d6dcbdc533e2b1856a4e142875cd09d8aed6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-osd-1-activate, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325)
Dec 06 09:39:33 compute-0 podman[82529]: 2025-12-06 09:39:33.107169892 +0000 UTC m=+0.033144738 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 06 09:39:33 compute-0 systemd[1]: Started libcrun container.
Dec 06 09:39:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba44767d631a4b8fa395fe9ce39363e25139b71e50310bc22614b62b43949884/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 09:39:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba44767d631a4b8fa395fe9ce39363e25139b71e50310bc22614b62b43949884/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 09:39:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba44767d631a4b8fa395fe9ce39363e25139b71e50310bc22614b62b43949884/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 09:39:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba44767d631a4b8fa395fe9ce39363e25139b71e50310bc22614b62b43949884/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 09:39:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba44767d631a4b8fa395fe9ce39363e25139b71e50310bc22614b62b43949884/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Dec 06 09:39:33 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v43: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 06 09:39:33 compute-0 podman[82529]: 2025-12-06 09:39:33.243454348 +0000 UTC m=+0.169429244 container init 8bb2aad8546507e838650dd9a645d6dcbdc533e2b1856a4e142875cd09d8aed6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-osd-1-activate, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, ceph=True, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS)
Dec 06 09:39:33 compute-0 podman[82529]: 2025-12-06 09:39:33.258606266 +0000 UTC m=+0.184581092 container start 8bb2aad8546507e838650dd9a645d6dcbdc533e2b1856a4e142875cd09d8aed6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-osd-1-activate, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 09:39:33 compute-0 podman[82529]: 2025-12-06 09:39:33.26289106 +0000 UTC m=+0.188865896 container attach 8bb2aad8546507e838650dd9a645d6dcbdc533e2b1856a4e142875cd09d8aed6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-osd-1-activate, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, io.buildah.version=1.40.1)
Dec 06 09:39:33 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-osd-1-activate[82545]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 06 09:39:33 compute-0 bash[82529]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 06 09:39:33 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-osd-1-activate[82545]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 06 09:39:33 compute-0 bash[82529]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 06 09:39:33 compute-0 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:39:33 compute-0 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:39:33 compute-0 ceph-mon[74327]: pgmap v43: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 06 09:39:34 compute-0 lvm[82626]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 06 09:39:34 compute-0 lvm[82626]: VG ceph_vg0 finished
Dec 06 09:39:34 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-osd-1-activate[82545]: --> Failed to activate via raw: did not find any matching OSD to activate
Dec 06 09:39:34 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-osd-1-activate[82545]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 06 09:39:34 compute-0 bash[82529]: --> Failed to activate via raw: did not find any matching OSD to activate
Dec 06 09:39:34 compute-0 bash[82529]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 06 09:39:34 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-osd-1-activate[82545]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 06 09:39:34 compute-0 bash[82529]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 06 09:39:34 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-osd-1-activate[82545]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Dec 06 09:39:34 compute-0 bash[82529]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Dec 06 09:39:34 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-osd-1-activate[82545]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-1 --no-mon-config
Dec 06 09:39:34 compute-0 bash[82529]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-1 --no-mon-config
Dec 06 09:39:34 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e5 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 09:39:34 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-osd-1-activate[82545]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-1/block
Dec 06 09:39:34 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-osd-1-activate[82545]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-1/block
Dec 06 09:39:34 compute-0 bash[82529]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-1/block
Dec 06 09:39:34 compute-0 bash[82529]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-1/block
Dec 06 09:39:34 compute-0 bash[82529]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Dec 06 09:39:34 compute-0 bash[82529]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Dec 06 09:39:34 compute-0 bash[82529]: --> ceph-volume lvm activate successful for osd ID: 1
Dec 06 09:39:34 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-osd-1-activate[82545]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Dec 06 09:39:34 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-osd-1-activate[82545]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Dec 06 09:39:34 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-osd-1-activate[82545]: --> ceph-volume lvm activate successful for osd ID: 1
Dec 06 09:39:34 compute-0 systemd[1]: libpod-8bb2aad8546507e838650dd9a645d6dcbdc533e2b1856a4e142875cd09d8aed6.scope: Deactivated successfully.
Dec 06 09:39:34 compute-0 podman[82529]: 2025-12-06 09:39:34.643991667 +0000 UTC m=+1.569966503 container died 8bb2aad8546507e838650dd9a645d6dcbdc533e2b1856a4e142875cd09d8aed6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-osd-1-activate, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec 06 09:39:34 compute-0 systemd[1]: libpod-8bb2aad8546507e838650dd9a645d6dcbdc533e2b1856a4e142875cd09d8aed6.scope: Consumed 1.626s CPU time.
Dec 06 09:39:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-ba44767d631a4b8fa395fe9ce39363e25139b71e50310bc22614b62b43949884-merged.mount: Deactivated successfully.
Dec 06 09:39:34 compute-0 podman[82529]: 2025-12-06 09:39:34.700463718 +0000 UTC m=+1.626438554 container remove 8bb2aad8546507e838650dd9a645d6dcbdc533e2b1856a4e142875cd09d8aed6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-osd-1-activate, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 09:39:34 compute-0 podman[82784]: 2025-12-06 09:39:34.873671911 +0000 UTC m=+0.021141961 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 06 09:39:35 compute-0 podman[82784]: 2025-12-06 09:39:35.003311834 +0000 UTC m=+0.150781854 container create 1aa09529261e3879ba22be6280df329426f42169aef976622e663976b0bb06ec (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-osd-1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec 06 09:39:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2765992040f39f73be307b70866c44d4c3d28535e38e762102b3a85cc1e4d93d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 09:39:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2765992040f39f73be307b70866c44d4c3d28535e38e762102b3a85cc1e4d93d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 09:39:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2765992040f39f73be307b70866c44d4c3d28535e38e762102b3a85cc1e4d93d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 09:39:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2765992040f39f73be307b70866c44d4c3d28535e38e762102b3a85cc1e4d93d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 09:39:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2765992040f39f73be307b70866c44d4c3d28535e38e762102b3a85cc1e4d93d/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Dec 06 09:39:35 compute-0 podman[82784]: 2025-12-06 09:39:35.176883907 +0000 UTC m=+0.324353937 container init 1aa09529261e3879ba22be6280df329426f42169aef976622e663976b0bb06ec (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-osd-1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 06 09:39:35 compute-0 podman[82784]: 2025-12-06 09:39:35.186094634 +0000 UTC m=+0.333564634 container start 1aa09529261e3879ba22be6280df329426f42169aef976622e663976b0bb06ec (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-osd-1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 09:39:35 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v44: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 06 09:39:35 compute-0 ceph-osd[82803]: set uid:gid to 167:167 (ceph:ceph)
Dec 06 09:39:35 compute-0 ceph-osd[82803]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-osd, pid 2
Dec 06 09:39:35 compute-0 ceph-osd[82803]: pidfile_write: ignore empty --pid-file
Dec 06 09:39:35 compute-0 ceph-osd[82803]: bdev(0x55fcdd745800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec 06 09:39:35 compute-0 ceph-osd[82803]: bdev(0x55fcdd745800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec 06 09:39:35 compute-0 ceph-osd[82803]: bdev(0x55fcdd745800 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 06 09:39:35 compute-0 ceph-osd[82803]: bdev(0x55fcdd745800 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 06 09:39:35 compute-0 ceph-osd[82803]: bdev(0x55fcdd745800 /var/lib/ceph/osd/ceph-1/block) close
Dec 06 09:39:35 compute-0 ceph-osd[82803]: bdev(0x55fcdd745800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec 06 09:39:35 compute-0 ceph-osd[82803]: bdev(0x55fcdd745800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec 06 09:39:35 compute-0 ceph-osd[82803]: bdev(0x55fcdd745800 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 06 09:39:35 compute-0 ceph-osd[82803]: bdev(0x55fcdd745800 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 06 09:39:35 compute-0 ceph-osd[82803]: bdev(0x55fcdd745800 /var/lib/ceph/osd/ceph-1/block) close
Dec 06 09:39:35 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec 06 09:39:35 compute-0 bash[82784]: 1aa09529261e3879ba22be6280df329426f42169aef976622e663976b0bb06ec
Dec 06 09:39:35 compute-0 systemd[1]: Started Ceph osd.1 for 5ecd3f74-dade-5fc4-92ce-8950ae424258.
Dec 06 09:39:35 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:39:35 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec 06 09:39:35 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:39:35 compute-0 sudo[82245]: pam_unix(sudo:session): session closed for user root
Dec 06 09:39:35 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 06 09:39:35 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:39:35 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 06 09:39:35 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:39:35 compute-0 sudo[82820]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 09:39:35 compute-0 sudo[82820]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:39:35 compute-0 sudo[82820]: pam_unix(sudo:session): session closed for user root
Dec 06 09:39:35 compute-0 ceph-osd[82803]: bdev(0x55fcdd745800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec 06 09:39:35 compute-0 ceph-osd[82803]: bdev(0x55fcdd745800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec 06 09:39:35 compute-0 ceph-osd[82803]: bdev(0x55fcdd745800 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 06 09:39:35 compute-0 ceph-osd[82803]: bdev(0x55fcdd745800 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 06 09:39:35 compute-0 ceph-osd[82803]: bdev(0x55fcdd745800 /var/lib/ceph/osd/ceph-1/block) close
Dec 06 09:39:35 compute-0 sudo[82845]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 5ecd3f74-dade-5fc4-92ce-8950ae424258 -- raw list --format json
Dec 06 09:39:35 compute-0 sudo[82845]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:39:35 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]} v 0)
Dec 06 09:39:35 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.101:6800/4293311283,v1:192.168.122.101:6801/4293311283]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch
Dec 06 09:39:36 compute-0 ceph-osd[82803]: bdev(0x55fcdd745800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec 06 09:39:36 compute-0 ceph-osd[82803]: bdev(0x55fcdd745800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec 06 09:39:36 compute-0 ceph-osd[82803]: bdev(0x55fcdd745800 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 06 09:39:36 compute-0 ceph-osd[82803]: bdev(0x55fcdd745800 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 06 09:39:36 compute-0 ceph-osd[82803]: bdev(0x55fcdd745800 /var/lib/ceph/osd/ceph-1/block) close
Dec 06 09:39:36 compute-0 ceph-osd[82803]: bdev(0x55fcdd745800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec 06 09:39:36 compute-0 ceph-osd[82803]: bdev(0x55fcdd745800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec 06 09:39:36 compute-0 ceph-osd[82803]: bdev(0x55fcdd745800 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 06 09:39:36 compute-0 ceph-osd[82803]: bdev(0x55fcdd745800 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 06 09:39:36 compute-0 ceph-osd[82803]: bdev(0x55fcdd745800 /var/lib/ceph/osd/ceph-1/block) close
Dec 06 09:39:36 compute-0 ceph-osd[82803]: bdev(0x55fcdd745800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec 06 09:39:36 compute-0 ceph-osd[82803]: bdev(0x55fcdd745800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec 06 09:39:36 compute-0 ceph-osd[82803]: bdev(0x55fcdd745800 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 06 09:39:36 compute-0 ceph-osd[82803]: bdev(0x55fcdd745800 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 06 09:39:36 compute-0 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Dec 06 09:39:36 compute-0 ceph-osd[82803]: bdev(0x55fcdd745c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec 06 09:39:36 compute-0 ceph-osd[82803]: bdev(0x55fcdd745c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec 06 09:39:36 compute-0 ceph-osd[82803]: bdev(0x55fcdd745c00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 06 09:39:36 compute-0 ceph-osd[82803]: bdev(0x55fcdd745c00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 06 09:39:36 compute-0 ceph-osd[82803]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 20 GiB
Dec 06 09:39:36 compute-0 ceph-osd[82803]: bdev(0x55fcdd745c00 /var/lib/ceph/osd/ceph-1/block) close
Dec 06 09:39:36 compute-0 podman[82921]: 2025-12-06 09:39:36.299584092 +0000 UTC m=+0.066912033 container create 5e9401f2e8654a9b6ef3c5830efc9d213440f56ce5c8268c5ab24d8d3195cf8f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_ritchie, io.buildah.version=1.40.1, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 09:39:36 compute-0 systemd[1]: Started libpod-conmon-5e9401f2e8654a9b6ef3c5830efc9d213440f56ce5c8268c5ab24d8d3195cf8f.scope.
Dec 06 09:39:36 compute-0 podman[82921]: 2025-12-06 09:39:36.278204304 +0000 UTC m=+0.045532365 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 06 09:39:36 compute-0 ceph-osd[82803]: bdev(0x55fcdd745800 /var/lib/ceph/osd/ceph-1/block) close
Dec 06 09:39:36 compute-0 systemd[1]: Started libcrun container.
Dec 06 09:39:36 compute-0 podman[82921]: 2025-12-06 09:39:36.411838034 +0000 UTC m=+0.179166025 container init 5e9401f2e8654a9b6ef3c5830efc9d213440f56ce5c8268c5ab24d8d3195cf8f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_ritchie, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.license=GPLv2)
Dec 06 09:39:36 compute-0 podman[82921]: 2025-12-06 09:39:36.419110815 +0000 UTC m=+0.186438766 container start 5e9401f2e8654a9b6ef3c5830efc9d213440f56ce5c8268c5ab24d8d3195cf8f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_ritchie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 09:39:36 compute-0 podman[82921]: 2025-12-06 09:39:36.422438551 +0000 UTC m=+0.189766502 container attach 5e9401f2e8654a9b6ef3c5830efc9d213440f56ce5c8268c5ab24d8d3195cf8f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_ritchie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec 06 09:39:36 compute-0 nice_ritchie[82939]: 167 167
Dec 06 09:39:36 compute-0 systemd[1]: libpod-5e9401f2e8654a9b6ef3c5830efc9d213440f56ce5c8268c5ab24d8d3195cf8f.scope: Deactivated successfully.
Dec 06 09:39:36 compute-0 podman[82921]: 2025-12-06 09:39:36.426984092 +0000 UTC m=+0.194312043 container died 5e9401f2e8654a9b6ef3c5830efc9d213440f56ce5c8268c5ab24d8d3195cf8f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_ritchie, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Dec 06 09:39:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-6f474c1e99f3993961332dd72f5e1407a84ce9db9e63c5d20a5e6afeed0162f4-merged.mount: Deactivated successfully.
Dec 06 09:39:36 compute-0 podman[82921]: 2025-12-06 09:39:36.476664006 +0000 UTC m=+0.243991987 container remove 5e9401f2e8654a9b6ef3c5830efc9d213440f56ce5c8268c5ab24d8d3195cf8f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_ritchie, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 09:39:36 compute-0 systemd[1]: libpod-conmon-5e9401f2e8654a9b6ef3c5830efc9d213440f56ce5c8268c5ab24d8d3195cf8f.scope: Deactivated successfully.
Dec 06 09:39:36 compute-0 ceph-mon[74327]: pgmap v44: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 06 09:39:36 compute-0 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:39:36 compute-0 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:39:36 compute-0 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:39:36 compute-0 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:39:36 compute-0 ceph-mon[74327]: from='osd.0 [v2:192.168.122.101:6800/4293311283,v1:192.168.122.101:6801/4293311283]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch
Dec 06 09:39:36 compute-0 ceph-osd[82803]: starting osd.1 osd_data /var/lib/ceph/osd/ceph-1 /var/lib/ceph/osd/ceph-1/journal
Dec 06 09:39:36 compute-0 ceph-osd[82803]: load: jerasure load: lrc 
Dec 06 09:39:36 compute-0 ceph-osd[82803]: bdev(0x55fcde616c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec 06 09:39:36 compute-0 ceph-osd[82803]: bdev(0x55fcde616c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec 06 09:39:36 compute-0 ceph-osd[82803]: bdev(0x55fcde616c00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 06 09:39:36 compute-0 ceph-osd[82803]: bdev(0x55fcde616c00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 06 09:39:36 compute-0 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Dec 06 09:39:36 compute-0 ceph-osd[82803]: bdev(0x55fcde616c00 /var/lib/ceph/osd/ceph-1/block) close
Dec 06 09:39:36 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e5 do_prune osdmap full prune enabled
Dec 06 09:39:36 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e5 encode_pending skipping prime_pg_temp; mapping job did not start
Dec 06 09:39:36 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.101:6800/4293311283,v1:192.168.122.101:6801/4293311283]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished
Dec 06 09:39:36 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e6 e6: 2 total, 0 up, 2 in
Dec 06 09:39:36 compute-0 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e6: 2 total, 0 up, 2 in
Dec 06 09:39:36 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-1", "root=default"]} v 0)
Dec 06 09:39:36 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.101:6800/4293311283,v1:192.168.122.101:6801/4293311283]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-1", "root=default"]}]: dispatch
Dec 06 09:39:36 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e6 create-or-move crush item name 'osd.0' initial_weight 0.0195 at location {host=compute-1,root=default}
Dec 06 09:39:36 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec 06 09:39:36 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec 06 09:39:36 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec 06 09:39:36 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 06 09:39:36 compute-0 ceph-mgr[74618]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec 06 09:39:36 compute-0 ceph-mgr[74618]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec 06 09:39:36 compute-0 podman[82971]: 2025-12-06 09:39:36.725982767 +0000 UTC m=+0.064284688 container create da29bf9c95f326772be3e765b109da7d6f7e056e1dd1669bd655630f48c152f8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_clarke, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 09:39:36 compute-0 systemd[1]: Started libpod-conmon-da29bf9c95f326772be3e765b109da7d6f7e056e1dd1669bd655630f48c152f8.scope.
Dec 06 09:39:36 compute-0 podman[82971]: 2025-12-06 09:39:36.69839936 +0000 UTC m=+0.036701331 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 06 09:39:36 compute-0 systemd[1]: Started libcrun container.
Dec 06 09:39:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd982a45f4af60bc4744bc12373d3739158f6c7221ee673fe9ddd47c180cdcfa/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 09:39:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd982a45f4af60bc4744bc12373d3739158f6c7221ee673fe9ddd47c180cdcfa/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 09:39:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd982a45f4af60bc4744bc12373d3739158f6c7221ee673fe9ddd47c180cdcfa/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 09:39:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd982a45f4af60bc4744bc12373d3739158f6c7221ee673fe9ddd47c180cdcfa/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 09:39:36 compute-0 podman[82971]: 2025-12-06 09:39:36.829216409 +0000 UTC m=+0.167518370 container init da29bf9c95f326772be3e765b109da7d6f7e056e1dd1669bd655630f48c152f8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_clarke, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid)
Dec 06 09:39:36 compute-0 podman[82971]: 2025-12-06 09:39:36.843154711 +0000 UTC m=+0.181456622 container start da29bf9c95f326772be3e765b109da7d6f7e056e1dd1669bd655630f48c152f8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_clarke, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 06 09:39:36 compute-0 podman[82971]: 2025-12-06 09:39:36.847728794 +0000 UTC m=+0.186030775 container attach da29bf9c95f326772be3e765b109da7d6f7e056e1dd1669bd655630f48c152f8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_clarke, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 09:39:36 compute-0 ceph-osd[82803]: bdev(0x55fcde616c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec 06 09:39:36 compute-0 ceph-osd[82803]: bdev(0x55fcde616c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec 06 09:39:36 compute-0 ceph-osd[82803]: bdev(0x55fcde616c00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 06 09:39:36 compute-0 ceph-osd[82803]: bdev(0x55fcde616c00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 06 09:39:36 compute-0 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Dec 06 09:39:36 compute-0 ceph-osd[82803]: bdev(0x55fcde616c00 /var/lib/ceph/osd/ceph-1/block) close
Dec 06 09:39:37 compute-0 ceph-osd[82803]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Dec 06 09:39:37 compute-0 ceph-osd[82803]: osd.1:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
Dec 06 09:39:37 compute-0 ceph-osd[82803]: bdev(0x55fcde616c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec 06 09:39:37 compute-0 ceph-osd[82803]: bdev(0x55fcde616c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec 06 09:39:37 compute-0 ceph-osd[82803]: bdev(0x55fcde616c00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 06 09:39:37 compute-0 ceph-osd[82803]: bdev(0x55fcde616c00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 06 09:39:37 compute-0 ceph-osd[82803]: bdev(0x55fcde616c00 /var/lib/ceph/osd/ceph-1/block) close
Dec 06 09:39:37 compute-0 ceph-osd[82803]: bdev(0x55fcde616c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec 06 09:39:37 compute-0 ceph-osd[82803]: bdev(0x55fcde616c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec 06 09:39:37 compute-0 ceph-osd[82803]: bdev(0x55fcde616c00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 06 09:39:37 compute-0 ceph-osd[82803]: bdev(0x55fcde616c00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 06 09:39:37 compute-0 ceph-osd[82803]: bdev(0x55fcde616c00 /var/lib/ceph/osd/ceph-1/block) close
Dec 06 09:39:37 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v46: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 06 09:39:37 compute-0 ceph-osd[82803]: bdev(0x55fcde616c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec 06 09:39:37 compute-0 ceph-osd[82803]: bdev(0x55fcde616c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec 06 09:39:37 compute-0 ceph-osd[82803]: bdev(0x55fcde616c00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 06 09:39:37 compute-0 ceph-osd[82803]: bdev(0x55fcde616c00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 06 09:39:37 compute-0 ceph-osd[82803]: bdev(0x55fcde616c00 /var/lib/ceph/osd/ceph-1/block) close
Dec 06 09:39:37 compute-0 ceph-osd[82803]: bdev(0x55fcde616c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec 06 09:39:37 compute-0 ceph-osd[82803]: bdev(0x55fcde616c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec 06 09:39:37 compute-0 ceph-osd[82803]: bdev(0x55fcde616c00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 06 09:39:37 compute-0 ceph-osd[82803]: bdev(0x55fcde616c00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 06 09:39:37 compute-0 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Dec 06 09:39:37 compute-0 ceph-osd[82803]: bdev(0x55fcde617000 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec 06 09:39:37 compute-0 ceph-osd[82803]: bdev(0x55fcde617000 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec 06 09:39:37 compute-0 ceph-osd[82803]: bdev(0x55fcde617000 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 06 09:39:37 compute-0 ceph-osd[82803]: bdev(0x55fcde617000 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 06 09:39:37 compute-0 ceph-osd[82803]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 20 GiB
Dec 06 09:39:37 compute-0 ceph-osd[82803]: bluefs mount
Dec 06 09:39:37 compute-0 ceph-osd[82803]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Dec 06 09:39:37 compute-0 ceph-osd[82803]: bluefs mount final locked allocations 0 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec 06 09:39:37 compute-0 ceph-osd[82803]: bluefs mount final locked allocations 1 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec 06 09:39:37 compute-0 ceph-osd[82803]: bluefs mount final locked allocations 2 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec 06 09:39:37 compute-0 ceph-osd[82803]: bluefs mount final locked allocations 3 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec 06 09:39:37 compute-0 ceph-osd[82803]: bluefs mount final locked allocations 4 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec 06 09:39:37 compute-0 ceph-osd[82803]: bluefs mount shared_bdev_used = 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: RocksDB version: 7.9.2
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Git sha 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Compile date 2025-07-17 03:12:14
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: DB SUMMARY
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: DB Session ID:  CURIEEK1KVXV3KZ3OECU
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: CURRENT file:  CURRENT
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: IDENTITY file:  IDENTITY
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5097 ; 
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                         Options.error_if_exists: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                       Options.create_if_missing: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                         Options.paranoid_checks: 1
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:             Options.flush_verify_memtable_count: 1
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                                     Options.env: 0x55fcde5b1dc0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                                      Options.fs: LegacyFileSystem
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                                Options.info_log: 0x55fcde5b57a0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                Options.max_file_opening_threads: 16
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                              Options.statistics: (nil)
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                               Options.use_fsync: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                       Options.max_log_file_size: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                   Options.log_file_time_to_roll: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                       Options.keep_log_file_num: 1000
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                    Options.recycle_log_file_num: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                         Options.allow_fallocate: 1
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                        Options.allow_mmap_reads: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                       Options.allow_mmap_writes: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                        Options.use_direct_reads: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:          Options.create_missing_column_families: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                              Options.db_log_dir: 
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                                 Options.wal_dir: db.wal
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                Options.table_cache_numshardbits: 6
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                         Options.WAL_ttl_seconds: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                       Options.WAL_size_limit_MB: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:             Options.manifest_preallocation_size: 4194304
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                     Options.is_fd_close_on_exec: 1
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                   Options.advise_random_on_open: 1
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                    Options.db_write_buffer_size: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                    Options.write_buffer_manager: 0x55fcde6e0a00
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:         Options.access_hint_on_compaction_start: 1
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                      Options.use_adaptive_mutex: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                            Options.rate_limiter: (nil)
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                       Options.wal_recovery_mode: 2
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                  Options.enable_thread_tracking: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                  Options.enable_pipelined_write: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                  Options.unordered_write: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:             Options.write_thread_max_yield_usec: 100
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                               Options.row_cache: None
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                              Options.wal_filter: None
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:             Options.avoid_flush_during_recovery: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:             Options.allow_ingest_behind: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:             Options.two_write_queues: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:             Options.manual_wal_flush: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:             Options.wal_compression: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:             Options.atomic_flush: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                 Options.persist_stats_to_disk: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                 Options.write_dbid_to_manifest: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                 Options.log_readahead_size: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                 Options.best_efforts_recovery: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:             Options.allow_data_in_errors: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:             Options.db_host_id: __hostname__
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:             Options.enforce_single_del_contracts: true
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:             Options.max_background_jobs: 4
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:             Options.max_background_compactions: -1
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:             Options.max_subcompactions: 1
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:           Options.writable_file_max_buffer_size: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:             Options.delayed_write_rate : 16777216
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:             Options.max_total_wal_size: 1073741824
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                   Options.stats_dump_period_sec: 600
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                 Options.stats_persist_period_sec: 600
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                          Options.max_open_files: -1
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                          Options.bytes_per_sync: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                      Options.wal_bytes_per_sync: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                   Options.strict_bytes_per_sync: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:       Options.compaction_readahead_size: 2097152
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                  Options.max_background_flushes: -1
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Compression algorithms supported:
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:         kZSTD supported: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:         kXpressCompression supported: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:         kBZip2Compression supported: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:         kZSTDNotFinalCompression supported: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:         kLZ4Compression supported: 1
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:         kZlibCompression supported: 1
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:         kLZ4HCCompression supported: 1
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:         kSnappyCompression supported: 1
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Fast CRC32 supported: Supported on x86
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: DMutex implementation: pthread_mutex_t
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:        Options.compaction_filter: None
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:        Options.compaction_filter_factory: None
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:  Options.sst_partitioner_factory: None
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55fcde5b5b60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55fcdd7db350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:        Options.write_buffer_size: 16777216
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:  Options.max_write_buffer_number: 64
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:          Options.compression: LZ4
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:       Options.prefix_extractor: nullptr
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:             Options.num_levels: 7
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                  Options.compression_opts.level: 32767
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:               Options.compression_opts.strategy: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                  Options.compression_opts.enabled: false
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                        Options.arena_block_size: 1048576
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                Options.disable_auto_compactions: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                   Options.inplace_update_support: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                           Options.bloom_locality: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                    Options.max_successive_merges: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                Options.paranoid_file_checks: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                Options.force_consistency_checks: 1
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                Options.report_bg_io_stats: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                               Options.ttl: 2592000
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                       Options.enable_blob_files: false
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                           Options.min_blob_size: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                          Options.blob_file_size: 268435456
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                Options.blob_file_starting_level: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:           Options.merge_operator: None
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:        Options.compaction_filter: None
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:        Options.compaction_filter_factory: None
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:  Options.sst_partitioner_factory: None
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55fcde5b5b60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55fcdd7db350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:        Options.write_buffer_size: 16777216
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:  Options.max_write_buffer_number: 64
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:          Options.compression: LZ4
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:       Options.prefix_extractor: nullptr
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:             Options.num_levels: 7
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                  Options.compression_opts.level: 32767
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:               Options.compression_opts.strategy: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                  Options.compression_opts.enabled: false
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                        Options.arena_block_size: 1048576
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                Options.disable_auto_compactions: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                   Options.inplace_update_support: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                           Options.bloom_locality: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                    Options.max_successive_merges: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                Options.paranoid_file_checks: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                Options.force_consistency_checks: 1
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                Options.report_bg_io_stats: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                               Options.ttl: 2592000
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                       Options.enable_blob_files: false
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                           Options.min_blob_size: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                          Options.blob_file_size: 268435456
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                Options.blob_file_starting_level: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:           Options.merge_operator: None
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:        Options.compaction_filter: None
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:        Options.compaction_filter_factory: None
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:  Options.sst_partitioner_factory: None
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55fcde5b5b60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55fcdd7db350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:        Options.write_buffer_size: 16777216
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:  Options.max_write_buffer_number: 64
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:          Options.compression: LZ4
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:       Options.prefix_extractor: nullptr
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:             Options.num_levels: 7
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                  Options.compression_opts.level: 32767
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:               Options.compression_opts.strategy: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                  Options.compression_opts.enabled: false
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                        Options.arena_block_size: 1048576
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                Options.disable_auto_compactions: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                   Options.inplace_update_support: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                           Options.bloom_locality: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                    Options.max_successive_merges: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                Options.paranoid_file_checks: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                Options.force_consistency_checks: 1
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                Options.report_bg_io_stats: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                               Options.ttl: 2592000
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                       Options.enable_blob_files: false
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                           Options.min_blob_size: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                          Options.blob_file_size: 268435456
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                Options.blob_file_starting_level: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:           Options.merge_operator: None
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:        Options.compaction_filter: None
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:        Options.compaction_filter_factory: None
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:  Options.sst_partitioner_factory: None
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55fcde5b5b60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55fcdd7db350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:        Options.write_buffer_size: 16777216
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:  Options.max_write_buffer_number: 64
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:          Options.compression: LZ4
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:       Options.prefix_extractor: nullptr
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:             Options.num_levels: 7
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                  Options.compression_opts.level: 32767
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:               Options.compression_opts.strategy: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                  Options.compression_opts.enabled: false
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                        Options.arena_block_size: 1048576
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                Options.disable_auto_compactions: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                   Options.inplace_update_support: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                           Options.bloom_locality: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                    Options.max_successive_merges: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                Options.paranoid_file_checks: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                Options.force_consistency_checks: 1
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                Options.report_bg_io_stats: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                               Options.ttl: 2592000
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                       Options.enable_blob_files: false
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                           Options.min_blob_size: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                          Options.blob_file_size: 268435456
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                Options.blob_file_starting_level: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:           Options.merge_operator: None
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:        Options.compaction_filter: None
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:        Options.compaction_filter_factory: None
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:  Options.sst_partitioner_factory: None
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55fcde5b5b60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55fcdd7db350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:        Options.write_buffer_size: 16777216
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:  Options.max_write_buffer_number: 64
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:          Options.compression: LZ4
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:       Options.prefix_extractor: nullptr
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:             Options.num_levels: 7
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                  Options.compression_opts.level: 32767
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:               Options.compression_opts.strategy: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                  Options.compression_opts.enabled: false
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                        Options.arena_block_size: 1048576
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                Options.disable_auto_compactions: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                   Options.inplace_update_support: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                           Options.bloom_locality: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                    Options.max_successive_merges: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                Options.paranoid_file_checks: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                Options.force_consistency_checks: 1
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                Options.report_bg_io_stats: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                               Options.ttl: 2592000
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                       Options.enable_blob_files: false
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                           Options.min_blob_size: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                          Options.blob_file_size: 268435456
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                Options.blob_file_starting_level: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:           Options.merge_operator: None
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:        Options.compaction_filter: None
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:        Options.compaction_filter_factory: None
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:  Options.sst_partitioner_factory: None
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55fcde5b5b60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55fcdd7db350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:        Options.write_buffer_size: 16777216
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:  Options.max_write_buffer_number: 64
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:          Options.compression: LZ4
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:       Options.prefix_extractor: nullptr
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:             Options.num_levels: 7
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                  Options.compression_opts.level: 32767
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:               Options.compression_opts.strategy: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                  Options.compression_opts.enabled: false
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                        Options.arena_block_size: 1048576
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                Options.disable_auto_compactions: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                   Options.inplace_update_support: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                           Options.bloom_locality: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                    Options.max_successive_merges: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                Options.paranoid_file_checks: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                Options.force_consistency_checks: 1
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                Options.report_bg_io_stats: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                               Options.ttl: 2592000
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                       Options.enable_blob_files: false
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                           Options.min_blob_size: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                          Options.blob_file_size: 268435456
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                Options.blob_file_starting_level: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:           Options.merge_operator: None
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:        Options.compaction_filter: None
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:        Options.compaction_filter_factory: None
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:  Options.sst_partitioner_factory: None
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55fcde5b5b60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55fcdd7db350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:        Options.write_buffer_size: 16777216
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:  Options.max_write_buffer_number: 64
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:          Options.compression: LZ4
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:       Options.prefix_extractor: nullptr
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:             Options.num_levels: 7
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                  Options.compression_opts.level: 32767
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:               Options.compression_opts.strategy: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                  Options.compression_opts.enabled: false
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                        Options.arena_block_size: 1048576
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                Options.disable_auto_compactions: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                   Options.inplace_update_support: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                           Options.bloom_locality: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                    Options.max_successive_merges: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                Options.paranoid_file_checks: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                Options.force_consistency_checks: 1
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                Options.report_bg_io_stats: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                               Options.ttl: 2592000
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                       Options.enable_blob_files: false
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                           Options.min_blob_size: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                          Options.blob_file_size: 268435456
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                Options.blob_file_starting_level: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:           Options.merge_operator: None
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:        Options.compaction_filter: None
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:        Options.compaction_filter_factory: None
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:  Options.sst_partitioner_factory: None
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55fcde5b5b80)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55fcdd7da9b0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:        Options.write_buffer_size: 16777216
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:  Options.max_write_buffer_number: 64
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:          Options.compression: LZ4
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:       Options.prefix_extractor: nullptr
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:             Options.num_levels: 7
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                  Options.compression_opts.level: 32767
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:               Options.compression_opts.strategy: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                  Options.compression_opts.enabled: false
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                        Options.arena_block_size: 1048576
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                Options.disable_auto_compactions: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                   Options.inplace_update_support: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                           Options.bloom_locality: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                    Options.max_successive_merges: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                Options.paranoid_file_checks: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                Options.force_consistency_checks: 1
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                Options.report_bg_io_stats: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                               Options.ttl: 2592000
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                       Options.enable_blob_files: false
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                           Options.min_blob_size: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                          Options.blob_file_size: 268435456
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                Options.blob_file_starting_level: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:           Options.merge_operator: None
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:        Options.compaction_filter: None
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:        Options.compaction_filter_factory: None
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:  Options.sst_partitioner_factory: None
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55fcde5b5b80)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55fcdd7da9b0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:        Options.write_buffer_size: 16777216
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:  Options.max_write_buffer_number: 64
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:          Options.compression: LZ4
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:       Options.prefix_extractor: nullptr
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:             Options.num_levels: 7
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                  Options.compression_opts.level: 32767
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:               Options.compression_opts.strategy: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                  Options.compression_opts.enabled: false
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                        Options.arena_block_size: 1048576
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                Options.disable_auto_compactions: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                   Options.inplace_update_support: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                           Options.bloom_locality: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                    Options.max_successive_merges: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                Options.paranoid_file_checks: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                Options.force_consistency_checks: 1
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                Options.report_bg_io_stats: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                               Options.ttl: 2592000
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                       Options.enable_blob_files: false
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                           Options.min_blob_size: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                          Options.blob_file_size: 268435456
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                Options.blob_file_starting_level: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:           Options.merge_operator: None
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:        Options.compaction_filter: None
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:        Options.compaction_filter_factory: None
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:  Options.sst_partitioner_factory: None
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55fcde5b5b80)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55fcdd7da9b0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:        Options.write_buffer_size: 16777216
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:  Options.max_write_buffer_number: 64
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:          Options.compression: LZ4
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:       Options.prefix_extractor: nullptr
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:             Options.num_levels: 7
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                  Options.compression_opts.level: 32767
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:               Options.compression_opts.strategy: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                  Options.compression_opts.enabled: false
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                        Options.arena_block_size: 1048576
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                Options.disable_auto_compactions: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                   Options.inplace_update_support: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                           Options.bloom_locality: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                    Options.max_successive_merges: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                Options.paranoid_file_checks: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                Options.force_consistency_checks: 1
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                Options.report_bg_io_stats: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                               Options.ttl: 2592000
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                       Options.enable_blob_files: false
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                           Options.min_blob_size: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                          Options.blob_file_size: 268435456
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                Options.blob_file_starting_level: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
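Every column-family dump above registers a CompactOnDeletionCollector with a sliding window of 32768 entries and a deletion trigger of 16384. The following is a toy Python model of the rule those two numbers describe (an illustration only, not RocksDB's implementation):

    # Toy model of the CompactOnDeletionCollector policy printed in the
    # column-family dumps above (sliding window 32768, deletion trigger
    # 16384, ratio 0). Illustrative only; not RocksDB's own code.
    from collections import deque

    WINDOW, TRIGGER = 32768, 16384

    def needs_compaction(entry_kinds):
        """entry_kinds: iterable of 'put' / 'delete' in key order."""
        window = deque(maxlen=WINDOW)
        deletes = 0
        for kind in entry_kinds:
            if len(window) == WINDOW and window[0] == "delete":
                deletes -= 1              # oldest entry falls out of the window
            window.append(kind)
            if kind == "delete":
                deletes += 1
            if deletes >= TRIGGER:        # half the window is tombstones
                return True
        return False

    print(needs_compaction(["delete"] * 16384))   # True
    print(needs_compaction(["put"] * 100000))     # False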
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file: db/MANIFEST-000032 succeeded, manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5, prev_log_number is 0, max_column_family is 11, min_log_number_to_keep is 5
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: f7903e0e-8b62-45e4-a979-56d5b4ac2659
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765013977293236, "job": 1, "event": "recovery_started", "wal_files": [31]}
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765013977293502, "job": 1, "event": "recovery_finished"}
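The two EVENT_LOG_v1 records above bracket the replay of WAL #31, and their payloads are plain JSON after the marker, so the recovery time falls straight out of the journal. A minimal Python sketch (line layout and field names copied verbatim from the two records; nothing else assumed):

    import json

    lines = [
        'Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765013977293236, "job": 1, "event": "recovery_started", "wal_files": [31]}',
        'Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765013977293502, "job": 1, "event": "recovery_finished"}',
    ]

    events = {}
    for line in lines:
        # Everything after the "EVENT_LOG_v1 " marker is a JSON object.
        payload = line.split("EVENT_LOG_v1 ", 1)[1]
        rec = json.loads(payload)
        events[rec["event"]] = rec["time_micros"]

    # 1765013977293502 - 1765013977293236 = 266 microseconds of WAL replay.
    print(events["recovery_finished"] - events["recovery_started"], "us")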
Dec 06 09:39:37 compute-0 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
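The _open_db line above is the condensed form of the per-column-family dumps: the same values (write_buffer_size=16777216, max_write_buffer_number=64, and so on) reappear in every [CF] section of this log. A quick way to eyeball the string is to split it into key/value pairs; a Python sketch using the exact string from the line above (the parsing itself is illustrative, not BlueStore's code):

    opts_str = ("compression=kLZ4Compression,max_write_buffer_number=64,"
                "min_write_buffer_number_to_merge=6,"
                "compaction_style=kCompactionStyleLevel,"
                "write_buffer_size=16777216,max_background_jobs=4,"
                "level0_file_num_compaction_trigger=8,"
                "max_bytes_for_level_base=1073741824,"
                "max_bytes_for_level_multiplier=8,"
                "compaction_readahead_size=2MB,"
                "max_total_wal_size=1073741824,writable_file_max_buffer_size=0")

    opts = dict(kv.split("=", 1) for kv in opts_str.split(","))
    # e.g. opts["write_buffer_size"] == "16777216": 16 MiB memtables,
    # matching the column-family dumps printed elsewhere in this log.
    print(opts["compression"], opts["max_write_buffer_number"])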
Dec 06 09:39:37 compute-0 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta old nid_max 1025
Dec 06 09:39:37 compute-0 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta old blobid_max 10240
Dec 06 09:39:37 compute-0 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Dec 06 09:39:37 compute-0 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta min_alloc_size 0x1000
Dec 06 09:39:37 compute-0 ceph-osd[82803]: freelist init
Dec 06 09:39:37 compute-0 ceph-osd[82803]: freelist _read_cfg
Dec 06 09:39:37 compute-0 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _init_alloc loaded 20 GiB in 2 extents, allocator type hybrid, capacity 0x4ffc00000, block size 0x1000, free 0x4ffbfd000, fragmentation 1.9e-07
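The hex fields in the _init_alloc line decode to the round figures the allocator reports; a small sanity check (all three hex values copied from the line above, only the unit conversion is added):

    capacity = 0x4ffc00000            # 21470642176 bytes
    free     = 0x4ffbfd000            # 21470629888 bytes
    block    = 0x1000                 # 4 KiB min_alloc_size

    print(capacity / 2**30)           # 19.996... -> reported as "20 GiB"
    print((capacity - free) // block) # 3 blocks (12 KiB) already allocated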
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Dec 06 09:39:37 compute-0 ceph-osd[82803]: bluefs umount
Dec 06 09:39:37 compute-0 ceph-osd[82803]: bdev(0x55fcde617000 /var/lib/ceph/osd/ceph-1/block) close
Dec 06 09:39:37 compute-0 lvm[83265]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 06 09:39:37 compute-0 lvm[83265]: VG ceph_vg0 finished
Dec 06 09:39:37 compute-0 ceph-osd[82803]: bdev(0x55fcde617000 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec 06 09:39:37 compute-0 ceph-osd[82803]: bdev(0x55fcde617000 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec 06 09:39:37 compute-0 ceph-osd[82803]: bdev(0x55fcde617000 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 06 09:39:37 compute-0 ceph-osd[82803]: bdev(0x55fcde617000 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 06 09:39:37 compute-0 ceph-osd[82803]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 20 GiB
Dec 06 09:39:37 compute-0 ceph-osd[82803]: bluefs mount
Dec 06 09:39:37 compute-0 ceph-osd[82803]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Dec 06 09:39:37 compute-0 ceph-osd[82803]: bluefs mount final locked allocations 0 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec 06 09:39:37 compute-0 ceph-osd[82803]: bluefs mount final locked allocations 1 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec 06 09:39:37 compute-0 ceph-osd[82803]: bluefs mount final locked allocations 2 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec 06 09:39:37 compute-0 ceph-osd[82803]: bluefs mount final locked allocations 3 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec 06 09:39:37 compute-0 ceph-osd[82803]: bluefs mount final locked allocations 4 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec 06 09:39:37 compute-0 ceph-osd[82803]: bluefs mount shared_bdev_used = 4718592
Dec 06 09:39:37 compute-0 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
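_prepare_db_environment advertises the same byte budget for db and db.slow, and it matches 95% of the block-device size opened a few lines earlier. A hedged check (the 95% ratio is an inference from these two numbers in this log, not something the log states):

    # 21470642176 is the bdev open size logged above for ceph-1/block.
    capacity = 21470642176
    print(int(capacity * 0.95))   # 20397110067, matching db and db.slow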
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: RocksDB version: 7.9.2
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Git sha 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Compile date 2025-07-17 03:12:14
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: DB SUMMARY
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: DB Session ID:  CURIEEK1KVXV3KZ3OECV
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: CURRENT file:  CURRENT
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: IDENTITY file:  IDENTITY
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5097 ; 
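The DB SUMMARY block and everything after it is RocksDB's own startup banner relayed through journald, so for offline triage it helps to strip the journal framing first. A minimal sketch (prefix layout copied from these lines; the marker regex is our choice):

    import re

    # Matches the journald prefix up to and including "rocksdb: ".
    marker = re.compile(r"^.*?ceph-osd\[\d+\]: rocksdb: ")

    def strip_prefix(line: str) -> str:
        return marker.sub("", line, count=1)

    print(strip_prefix("Dec 06 09:39:37 compute-0 ceph-osd[82803]: "
                       "rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes"))
    # -> "MANIFEST file:  MANIFEST-000032 size: 1007 Bytes"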
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                         Options.error_if_exists: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                       Options.create_if_missing: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                         Options.paranoid_checks: 1
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:             Options.flush_verify_memtable_count: 1
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                                     Options.env: 0x55fcde784310
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                                      Options.fs: LegacyFileSystem
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                                Options.info_log: 0x55fcde5b5920
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                Options.max_file_opening_threads: 16
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                              Options.statistics: (nil)
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                               Options.use_fsync: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                       Options.max_log_file_size: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                   Options.log_file_time_to_roll: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                       Options.keep_log_file_num: 1000
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                    Options.recycle_log_file_num: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                         Options.allow_fallocate: 1
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                        Options.allow_mmap_reads: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                       Options.allow_mmap_writes: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                        Options.use_direct_reads: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:          Options.create_missing_column_families: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                              Options.db_log_dir: 
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                                 Options.wal_dir: db.wal
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                Options.table_cache_numshardbits: 6
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                         Options.WAL_ttl_seconds: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                       Options.WAL_size_limit_MB: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:             Options.manifest_preallocation_size: 4194304
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                     Options.is_fd_close_on_exec: 1
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                   Options.advise_random_on_open: 1
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                    Options.db_write_buffer_size: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                    Options.write_buffer_manager: 0x55fcde6e0a00
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:         Options.access_hint_on_compaction_start: 1
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                      Options.use_adaptive_mutex: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                            Options.rate_limiter: (nil)
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                       Options.wal_recovery_mode: 2
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                  Options.enable_thread_tracking: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                  Options.enable_pipelined_write: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                  Options.unordered_write: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:             Options.write_thread_max_yield_usec: 100
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                               Options.row_cache: None
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                              Options.wal_filter: None
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:             Options.avoid_flush_during_recovery: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:             Options.allow_ingest_behind: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:             Options.two_write_queues: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:             Options.manual_wal_flush: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:             Options.wal_compression: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:             Options.atomic_flush: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                 Options.persist_stats_to_disk: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                 Options.write_dbid_to_manifest: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                 Options.log_readahead_size: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                 Options.best_efforts_recovery: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:             Options.allow_data_in_errors: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:             Options.db_host_id: __hostname__
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:             Options.enforce_single_del_contracts: true
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:             Options.max_background_jobs: 4
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:             Options.max_background_compactions: -1
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:             Options.max_subcompactions: 1
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:           Options.writable_file_max_buffer_size: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:             Options.delayed_write_rate : 16777216
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:             Options.max_total_wal_size: 1073741824
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                   Options.stats_dump_period_sec: 600
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                 Options.stats_persist_period_sec: 600
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                          Options.max_open_files: -1
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                          Options.bytes_per_sync: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                      Options.wal_bytes_per_sync: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                   Options.strict_bytes_per_sync: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:       Options.compaction_readahead_size: 2097152
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                  Options.max_background_flushes: -1
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Compression algorithms supported:
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:         kZSTD supported: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:         kXpressCompression supported: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:         kBZip2Compression supported: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:         kZSTDNotFinalCompression supported: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:         kLZ4Compression supported: 1
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:         kZlibCompression supported: 1
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:         kLZ4HCCompression supported: 1
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:         kSnappyCompression supported: 1
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Fast CRC32 supported: Supported on x86
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: DMutex implementation: pthread_mutex_t
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:        Options.compaction_filter: None
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:        Options.compaction_filter_factory: None
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:  Options.sst_partitioner_factory: None
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55fcde5b5680)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55fcdd7db350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:        Options.write_buffer_size: 16777216
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:  Options.max_write_buffer_number: 64
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:          Options.compression: LZ4
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:       Options.prefix_extractor: nullptr
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:             Options.num_levels: 7
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                  Options.compression_opts.level: 32767
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:               Options.compression_opts.strategy: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                  Options.compression_opts.enabled: false
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                        Options.arena_block_size: 1048576
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                Options.disable_auto_compactions: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                   Options.inplace_update_support: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                           Options.bloom_locality: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                    Options.max_successive_merges: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                Options.paranoid_file_checks: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                Options.force_consistency_checks: 1
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                Options.report_bg_io_stats: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                               Options.ttl: 2592000
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                       Options.enable_blob_files: false
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                           Options.min_blob_size: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                          Options.blob_file_size: 268435456
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                Options.blob_file_starting_level: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:           Options.merge_operator: None
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:        Options.compaction_filter: None
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:        Options.compaction_filter_factory: None
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:  Options.sst_partitioner_factory: None
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55fcde5b5680)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55fcdd7db350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:        Options.write_buffer_size: 16777216
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:  Options.max_write_buffer_number: 64
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:          Options.compression: LZ4
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:       Options.prefix_extractor: nullptr
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:             Options.num_levels: 7
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                  Options.compression_opts.level: 32767
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:               Options.compression_opts.strategy: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                  Options.compression_opts.enabled: false
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                        Options.arena_block_size: 1048576
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                Options.disable_auto_compactions: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                   Options.inplace_update_support: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                           Options.bloom_locality: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                    Options.max_successive_merges: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                Options.paranoid_file_checks: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                Options.force_consistency_checks: 1
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                Options.report_bg_io_stats: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                               Options.ttl: 2592000
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                       Options.enable_blob_files: false
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                           Options.min_blob_size: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                          Options.blob_file_size: 268435456
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                Options.blob_file_starting_level: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:           Options.merge_operator: None
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:        Options.compaction_filter: None
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:        Options.compaction_filter_factory: None
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:  Options.sst_partitioner_factory: None
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55fcde5b5680)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55fcdd7db350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:        Options.write_buffer_size: 16777216
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:  Options.max_write_buffer_number: 64
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:          Options.compression: LZ4
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:       Options.prefix_extractor: nullptr
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:             Options.num_levels: 7
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                  Options.compression_opts.level: 32767
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:               Options.compression_opts.strategy: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                  Options.compression_opts.enabled: false
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                        Options.arena_block_size: 1048576
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                Options.disable_auto_compactions: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                   Options.inplace_update_support: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                           Options.bloom_locality: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                    Options.max_successive_merges: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                Options.paranoid_file_checks: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                Options.force_consistency_checks: 1
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                Options.report_bg_io_stats: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                               Options.ttl: 2592000
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                       Options.enable_blob_files: false
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                           Options.min_blob_size: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                          Options.blob_file_size: 268435456
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                Options.blob_file_starting_level: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
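[note] The per-column-family dumps in this log differ only in the name on the header line; the option sets themselves repeat verbatim. A quick way to verify that mechanically is to pull the "Options.key: value" pairs per column family out of the captured journal text and diff them. A minimal sketch in Python; the osd.log path and the journalctl invocation in the comment are illustrative assumptions, not taken from this log:

import re
from collections import defaultdict

# Matches the "Options for column family [m-1]:" header and plain
# "Options.key: value" lines. The multi-line table_factory blocks,
# which lack the "rocksdb:" prefix in the journal, are skipped.
CF_HEADER = re.compile(r"Options for column family \[([^\]]+)\]")
OPTION = re.compile(r"rocksdb:\s+Options\.([\w.\[\]]+):\s+(.+?)\s*$")

def parse_cf_options(lines):
    per_cf = defaultdict(dict)
    cf = None
    for line in lines:
        header = CF_HEADER.search(line)
        if header:
            cf = header.group(1)
            continue
        opt = OPTION.search(line)
        if opt and cf is not None:
            per_cf[cf][opt.group(1)] = opt.group(2)
    return per_cf

if __name__ == "__main__":
    # Hypothetical capture step: journalctl -u ceph-osd@<id> > osd.log
    with open("osd.log") as fh:
        cfs = parse_cf_options(fh)
    names = list(cfs)
    base = cfs[names[0]]
    for name in names[1:]:
        diff = {k: v for k, v in cfs[name].items() if base.get(k) != v}
        if diff:
            print(f"{name} differs from {names[0]}: {diff}")
        else:
            print(f"{name} matches {names[0]}")

Run against this section, every column family should report a match.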
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:           Options.merge_operator: None
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:        Options.compaction_filter: None
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:        Options.compaction_filter_factory: None
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:  Options.sst_partitioner_factory: None
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55fcde5b5680)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55fcdd7db350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:        Options.write_buffer_size: 16777216
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:  Options.max_write_buffer_number: 64
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:          Options.compression: LZ4
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:       Options.prefix_extractor: nullptr
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:             Options.num_levels: 7
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                  Options.compression_opts.level: 32767
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:               Options.compression_opts.strategy: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                  Options.compression_opts.enabled: false
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                        Options.arena_block_size: 1048576
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                Options.disable_auto_compactions: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                   Options.inplace_update_support: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                           Options.bloom_locality: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                    Options.max_successive_merges: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                Options.paranoid_file_checks: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                Options.force_consistency_checks: 1
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                Options.report_bg_io_stats: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                               Options.ttl: 2592000
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                       Options.enable_blob_files: false
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                           Options.min_blob_size: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                          Options.blob_file_size: 268435456
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                Options.blob_file_starting_level: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
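[note] Most of the large integers in these dumps are round binary sizes. A small conversion, with the values copied directly from the lines above, makes them easier to read:

# Byte-valued options from the dumps above, converted to IEC units.
OPTS = {
    "write_buffer_size": 16777216,
    "target_file_size_base": 67108864,
    "max_bytes_for_level_base": 1073741824,
    "max_compaction_bytes": 1677721600,
    "soft_pending_compaction_bytes_limit": 68719476736,
    "hard_pending_compaction_bytes_limit": 274877906944,
    "blob_file_size": 268435456,
    "block_cache capacity": 483183820,
}

def human_bytes(n: float) -> str:
    for unit in ("B", "KiB", "MiB", "GiB", "TiB"):
        if n < 1024:
            return f"{n:.6g} {unit}"
        n /= 1024
    return f"{n:.6g} PiB"

for name, value in OPTS.items():
    print(f"{name:40s} {value:>14d}  = {human_bytes(value)}")

That is 16 MiB write buffers, 64 MiB target SST files, a 1 GiB level base, 64 GiB / 256 GiB pending-compaction soft and hard limits, and a block cache of roughly 460.8 MiB shared across the column families (they all log the same block_cache pointer).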
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:           Options.merge_operator: None
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:        Options.compaction_filter: None
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:        Options.compaction_filter_factory: None
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:  Options.sst_partitioner_factory: None
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55fcde5b5680)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55fcdd7db350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:        Options.write_buffer_size: 16777216
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:  Options.max_write_buffer_number: 64
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:          Options.compression: LZ4
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:       Options.prefix_extractor: nullptr
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:             Options.num_levels: 7
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 06 09:39:37 compute-0 condescending_clarke[82988]: {}
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                  Options.compression_opts.level: 32767
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:               Options.compression_opts.strategy: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                  Options.compression_opts.enabled: false
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                        Options.arena_block_size: 1048576
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                Options.disable_auto_compactions: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                   Options.inplace_update_support: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                           Options.bloom_locality: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                    Options.max_successive_merges: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                Options.paranoid_file_checks: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                Options.force_consistency_checks: 1
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                Options.report_bg_io_stats: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                               Options.ttl: 2592000
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                       Options.enable_blob_files: false
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                           Options.min_blob_size: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                          Options.blob_file_size: 268435456
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                Options.blob_file_starting_level: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
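[note] With level_compaction_dynamic_level_bytes logged as 0, the nominal per-level capacity follows the static formula: max_bytes_for_level_base grown by max_bytes_for_level_multiplier per level (the addtl factors are all 1 here, so they drop out). A sketch of that arithmetic using the values above:

# Static level sizing implied by the dump: 1 GiB base, multiplier 8,
# all per-level addtl factors 1, num_levels 7, dynamic sizing off.
base = 1073741824
multiplier = 8
num_levels = 7

target = base
for level in range(1, num_levels):
    print(f"L{level} target: {target / 2**30:g} GiB")
    target *= multiplier

L0 has no byte target; it is governed by the file-count triggers logged above (compaction at 8 files, write slowdown at 20, write stop at 36).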
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:           Options.merge_operator: None
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:        Options.compaction_filter: None
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:        Options.compaction_filter_factory: None
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:  Options.sst_partitioner_factory: None
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55fcde5b5680)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55fcdd7db350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:        Options.write_buffer_size: 16777216
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:  Options.max_write_buffer_number: 64
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:          Options.compression: LZ4
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:       Options.prefix_extractor: nullptr
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:             Options.num_levels: 7
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                  Options.compression_opts.level: 32767
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:               Options.compression_opts.strategy: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                  Options.compression_opts.enabled: false
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                        Options.arena_block_size: 1048576
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                Options.disable_auto_compactions: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                   Options.inplace_update_support: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                           Options.bloom_locality: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                    Options.max_successive_merges: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                Options.paranoid_file_checks: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                Options.force_consistency_checks: 1
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                Options.report_bg_io_stats: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                               Options.ttl: 2592000
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                       Options.enable_blob_files: false
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                           Options.min_blob_size: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                          Options.blob_file_size: 268435456
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                Options.blob_file_starting_level: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
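[note] The memtable settings repeated in each dump bound how much unflushed data a single column family can hold. The arithmetic from the values above:

# Memtable budget per column family, from the values in the dump above.
write_buffer_size = 16777216        # Options.write_buffer_size (16 MiB)
max_write_buffer_number = 64        # Options.max_write_buffer_number
min_merge = 6                       # Options.min_write_buffer_number_to_merge

print(f"flush batch : {write_buffer_size * min_merge / 2**20:.0f} MiB")
print(f"upper bound : {write_buffer_size * max_write_buffer_number / 2**20:.0f} MiB")

A flush is first attempted once roughly six 16 MiB memtables have accumulated (about 96 MiB); the 64-buffer cap (1 GiB) is a worst-case stall threshold rather than expected steady-state usage.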
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:           Options.merge_operator: None
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:        Options.compaction_filter: None
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:        Options.compaction_filter_factory: None
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:  Options.sst_partitioner_factory: None
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55fcde5b5680)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55fcdd7db350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:        Options.write_buffer_size: 16777216
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:  Options.max_write_buffer_number: 64
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:          Options.compression: LZ4
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:       Options.prefix_extractor: nullptr
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:             Options.num_levels: 7
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                  Options.compression_opts.level: 32767
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:               Options.compression_opts.strategy: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                  Options.compression_opts.enabled: false
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                        Options.arena_block_size: 1048576
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                Options.disable_auto_compactions: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                   Options.inplace_update_support: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                           Options.bloom_locality: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                    Options.max_successive_merges: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                Options.paranoid_file_checks: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                Options.force_consistency_checks: 1
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                Options.report_bg_io_stats: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                               Options.ttl: 2592000
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                       Options.enable_blob_files: false
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                           Options.min_blob_size: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                          Options.blob_file_size: 268435456
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                Options.blob_file_starting_level: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:           Options.merge_operator: None
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:        Options.compaction_filter: None
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:        Options.compaction_filter_factory: None
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:  Options.sst_partitioner_factory: None
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55fcde5b5ac0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55fcdd7da9b0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:        Options.write_buffer_size: 16777216
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:  Options.max_write_buffer_number: 64
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:          Options.compression: LZ4
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:       Options.prefix_extractor: nullptr
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:             Options.num_levels: 7
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                  Options.compression_opts.level: 32767
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:               Options.compression_opts.strategy: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                  Options.compression_opts.enabled: false
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                        Options.arena_block_size: 1048576
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                Options.disable_auto_compactions: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                   Options.inplace_update_support: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                           Options.bloom_locality: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                    Options.max_successive_merges: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                Options.paranoid_file_checks: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                Options.force_consistency_checks: 1
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                Options.report_bg_io_stats: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                               Options.ttl: 2592000
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                       Options.enable_blob_files: false
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                           Options.min_blob_size: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                          Options.blob_file_size: 268435456
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                Options.blob_file_starting_level: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:           Options.merge_operator: None
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:        Options.compaction_filter: None
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:        Options.compaction_filter_factory: None
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:  Options.sst_partitioner_factory: None
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55fcde5b5ac0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55fcdd7da9b0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:        Options.write_buffer_size: 16777216
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:  Options.max_write_buffer_number: 64
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:          Options.compression: LZ4
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:       Options.prefix_extractor: nullptr
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:             Options.num_levels: 7
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                  Options.compression_opts.level: 32767
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:               Options.compression_opts.strategy: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                  Options.compression_opts.enabled: false
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                        Options.arena_block_size: 1048576
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                Options.disable_auto_compactions: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                   Options.inplace_update_support: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                           Options.bloom_locality: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                    Options.max_successive_merges: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                Options.paranoid_file_checks: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                Options.force_consistency_checks: 1
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                Options.report_bg_io_stats: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                               Options.ttl: 2592000
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                       Options.enable_blob_files: false
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                           Options.min_blob_size: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                          Options.blob_file_size: 268435456
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                Options.blob_file_starting_level: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:           Options.merge_operator: None
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:        Options.compaction_filter: None
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:        Options.compaction_filter_factory: None
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:  Options.sst_partitioner_factory: None
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55fcde5b5ac0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55fcdd7da9b0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:        Options.write_buffer_size: 16777216
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:  Options.max_write_buffer_number: 64
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:          Options.compression: LZ4
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:       Options.prefix_extractor: nullptr
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:             Options.num_levels: 7
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                  Options.compression_opts.level: 32767
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:               Options.compression_opts.strategy: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                  Options.compression_opts.enabled: false
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                        Options.arena_block_size: 1048576
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                Options.disable_auto_compactions: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                   Options.inplace_update_support: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                           Options.bloom_locality: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                    Options.max_successive_merges: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                Options.paranoid_file_checks: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                Options.force_consistency_checks: 1
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                Options.report_bg_io_stats: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                               Options.ttl: 2592000
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                       Options.enable_blob_files: false
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                           Options.min_blob_size: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                          Options.blob_file_size: 268435456
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb:                Options.blob_file_starting_level: 0
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
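The option dump above is repeated verbatim once per sharded column family (the O-0, O-1 and O-2 blocks are identical), and RocksDB abbreviates the remaining families with the two "(skipping printing options)" notices. As a reading aid for those numbers, the sketch below recomputes a few derived figures straight from the dumped values: the flush threshold implied by write_buffer_size and min_write_buffer_number_to_merge, the static per-level capacities implied by max_bytes_for_level_base and max_bytes_for_level_multiplier (static because level_compaction_dynamic_level_bytes is 0), and the block-cache shard size implied by capacity and num_shard_bits. This is illustrative arithmetic only, not Ceph or RocksDB code.

    # Derived sizing figures from the RocksDB options dumped above;
    # all inputs are copied from the log, the arithmetic is illustrative.
    MiB = 1024 ** 2
    GiB = 1024 ** 3

    write_buffer_size = 16777216   # Options.write_buffer_size (16 MiB)
    min_merge = 6                  # Options.min_write_buffer_number_to_merge
    max_buffers = 64               # Options.max_write_buffer_number

    # ~6 x 16 MiB of memtables accumulate before a flush is scheduled,
    # with at most 64 x 16 MiB = 1 GiB of memtables outstanding.
    print(f"flush group:  {min_merge * write_buffer_size / MiB:.0f} MiB")
    print(f"memtable cap: {max_buffers * write_buffer_size / GiB:.0f} GiB")

    base = 1073741824              # Options.max_bytes_for_level_base (1 GiB)
    mult = 8.0                     # Options.max_bytes_for_level_multiplier
    for level in range(1, 7):      # Options.num_levels: 7 (L1..L6)
        print(f"L{level} target: {base * mult ** (level - 1) / GiB:.0f} GiB")

    capacity = 536870912           # block_cache_options capacity (512 MiB)
    shard_bits = 4                 # num_shard_bits: 2**4 = 16 shards
    print(f"cache shard:  {capacity / (1 << shard_bits) / MiB:.0f} MiB")

Against these targets, the L0 triggers above (8 files to start compaction, 20 to slow writes, 36 to stop them) bound how far the write path can run ahead of compaction.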
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
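The manifest recovery enumerates the database's column families: default, the sharded m-0..m-2, p-0..p-2 and O-0..O-2 groups whose identical option blocks were printed above, and the single L and P families, all still on log number 5. When inspecting a capture like this, the name/ID pairs can be scraped directly from these lines; a minimal sketch assuming the exact journal formatting shown:

    # Scrape RocksDB column-family names and IDs from journal output on
    # stdin; assumes lines shaped like the version_set.cc:5581 entries above.
    import re
    import sys

    PATTERN = re.compile(r"Column family \[([^\]]+)\] \(ID (\d+)\)")

    for line in sys.stdin:
        match = PATTERN.search(line)
        if match:
            name, cf_id = match.groups()
            print(f"{cf_id:>3}  {name}")

Fed the twelve lines above, it prints IDs 0 through 11 with their family names.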
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: f7903e0e-8b62-45e4-a979-56d5b4ac2659
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765013977564407, "job": 1, "event": "recovery_started", "wal_files": [31]}
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765013977568341, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1272, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 128, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765013977, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f7903e0e-8b62-45e4-a979-56d5b4ac2659", "db_session_id": "CURIEEK1KVXV3KZ3OECV", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765013977572405, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1595, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 469, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 571, "raw_average_value_size": 285, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765013977, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f7903e0e-8b62-45e4-a979-56d5b4ac2659", "db_session_id": "CURIEEK1KVXV3KZ3OECV", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765013977593233, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765013977, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f7903e0e-8b62-45e4-a979-56d5b4ac2659", "db_session_id": "CURIEEK1KVXV3KZ3OECV", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765013977597544, "job": 1, "event": "recovery_finished"}
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
Dec 06 09:39:37 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e6 do_prune osdmap full prune enabled
Dec 06 09:39:37 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e6 encode_pending skipping prime_pg_temp; mapping job did not start
Dec 06 09:39:37 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.101:6800/4293311283,v1:192.168.122.101:6801/4293311283]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-1", "root=default"]}]': finished
Dec 06 09:39:37 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e7 e7: 2 total, 0 up, 2 in
Dec 06 09:39:37 compute-0 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e7: 2 total, 0 up, 2 in
Dec 06 09:39:37 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec 06 09:39:37 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec 06 09:39:37 compute-0 systemd[1]: libpod-da29bf9c95f326772be3e765b109da7d6f7e056e1dd1669bd655630f48c152f8.scope: Deactivated successfully.
Dec 06 09:39:37 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec 06 09:39:37 compute-0 systemd[1]: libpod-da29bf9c95f326772be3e765b109da7d6f7e056e1dd1669bd655630f48c152f8.scope: Consumed 1.230s CPU time.
Dec 06 09:39:37 compute-0 podman[82971]: 2025-12-06 09:39:37.6817506 +0000 UTC m=+1.020052481 container died da29bf9c95f326772be3e765b109da7d6f7e056e1dd1669bd655630f48c152f8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_clarke, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325)
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x55fcde7d6000
Dec 06 09:39:37 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: DB pointer 0x55fcde792000
Dec 06 09:39:37 compute-0 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
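This _open_db line is the compact source of the option dumps above: BlueStore passes RocksDB one comma-separated option string, and each key=value pair maps onto an Options.* line (compression=kLZ4Compression appears as Options.compression: LZ4, max_bytes_for_level_multiplier=8 as 8.000000, and so on). A small sketch that splits the exact string logged here into a dict for inspection, assuming plain "k=v,k=v" pairs with no nested commas:

    # Split the BlueStore RocksDB option string logged by _open_db above.
    opts_str = (
        "compression=kLZ4Compression,max_write_buffer_number=64,"
        "min_write_buffer_number_to_merge=6,"
        "compaction_style=kCompactionStyleLevel,"
        "write_buffer_size=16777216,max_background_jobs=4,"
        "level0_file_num_compaction_trigger=8,"
        "max_bytes_for_level_base=1073741824,"
        "max_bytes_for_level_multiplier=8,"
        "compaction_readahead_size=2MB,max_total_wal_size=1073741824,"
        "writable_file_max_buffer_size=0"
    )

    opts = dict(pair.split("=", 1) for pair in opts_str.split(","))
    for key, value in sorted(opts.items()):
        print(f"{key:40} {value}")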
Dec 06 09:39:37 compute-0 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _upgrade_super from 4, latest 4
Dec 06 09:39:37 compute-0 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _upgrade_super done
Dec 06 09:39:37 compute-0 ceph-mon[74327]: from='osd.0 [v2:192.168.122.101:6800/4293311283,v1:192.168.122.101:6801/4293311283]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished
Dec 06 09:39:37 compute-0 ceph-mon[74327]: osdmap e6: 2 total, 0 up, 2 in
Dec 06 09:39:37 compute-0 ceph-mon[74327]: from='osd.0 [v2:192.168.122.101:6800/4293311283,v1:192.168.122.101:6801/4293311283]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-1", "root=default"]}]: dispatch
Dec 06 09:39:37 compute-0 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec 06 09:39:37 compute-0 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 06 09:39:37 compute-0 ceph-mgr[74618]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec 06 09:39:37 compute-0 ceph-mgr[74618]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
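These two metadata failures are consistent with the osdmap state logged just above (e7: 2 total, 0 up, 2 in): the mgr queries the mon for osd.0 and osd.1 metadata while both OSDs are still starting and have not registered it yet, so the mon answers ENOENT ((2) No such file or directory). The errors are transient; once the OSDs are up, the same query succeeds. A hypothetical by-hand check (not taken from this log), issuing the command the mgr sent:

    # Re-issue the "osd metadata" query seen in the audit log above.
    # Hypothetical check for after the OSDs are up; not from this capture.
    import json
    import subprocess

    out = subprocess.run(
        ["ceph", "osd", "metadata", "0", "--format", "json"],
        check=True, capture_output=True, text=True,
    ).stdout
    meta = json.loads(out)
    print(meta.get("hostname"), meta.get("osd_objectstore"))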
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 06 09:39:37 compute-0 ceph-osd[82803]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
                                           Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55fcdd7db350#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55fcdd7db350#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55fcdd7db350#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55fcdd7db350#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55fcdd7db350#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55fcdd7db350#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55fcdd7db350#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55fcdd7da9b0#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55fcdd7da9b0#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.021       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.021       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.021       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.02              0.00         1    0.021       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55fcdd7da9b0#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55fcdd7db350#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55fcdd7db350#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Dec 06 09:39:37 compute-0 ceph-osd[82803]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/19.2.3/rpm/el9/BUILD/ceph-19.2.3/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Dec 06 09:39:37 compute-0 ceph-osd[82803]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/19.2.3/rpm/el9/BUILD/ceph-19.2.3/src/cls/hello/cls_hello.cc:316: loading cls_hello
Dec 06 09:39:37 compute-0 ceph-osd[82803]: _get_class not permitted to load lua
Dec 06 09:39:37 compute-0 ceph-osd[82803]: _get_class not permitted to load sdk
Dec 06 09:39:37 compute-0 ceph-osd[82803]: osd.1 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Dec 06 09:39:37 compute-0 ceph-osd[82803]: osd.1 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Dec 06 09:39:37 compute-0 ceph-osd[82803]: osd.1 0 crush map has features 288232575208783872, adjusting msgr requires for osds
Dec 06 09:39:37 compute-0 ceph-osd[82803]: osd.1 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Dec 06 09:39:37 compute-0 ceph-osd[82803]: osd.1 0 load_pgs
Dec 06 09:39:37 compute-0 ceph-osd[82803]: osd.1 0 load_pgs opened 0 pgs
Dec 06 09:39:37 compute-0 ceph-osd[82803]: osd.1 0 log_to_monitors true
Dec 06 09:39:37 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-osd-1[82799]: 2025-12-06T09:39:37.688+0000 7f0f6fd25740 -1 osd.1 0 log_to_monitors true
Dec 06 09:39:37 compute-0 ceph-mgr[74618]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.101:6800/4293311283; not ready for session (expect reconnect)
Dec 06 09:39:37 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec 06 09:39:37 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec 06 09:39:37 compute-0 ceph-mgr[74618]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec 06 09:39:37 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]} v 0)
Dec 06 09:39:37 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6802/2585020672,v1:192.168.122.100:6803/2585020672]' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch
Dec 06 09:39:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-cd982a45f4af60bc4744bc12373d3739158f6c7221ee673fe9ddd47c180cdcfa-merged.mount: Deactivated successfully.
Dec 06 09:39:37 compute-0 podman[82971]: 2025-12-06 09:39:37.729104398 +0000 UTC m=+1.067406279 container remove da29bf9c95f326772be3e765b109da7d6f7e056e1dd1669bd655630f48c152f8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_clarke, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 09:39:37 compute-0 systemd[1]: libpod-conmon-da29bf9c95f326772be3e765b109da7d6f7e056e1dd1669bd655630f48c152f8.scope: Deactivated successfully.
Dec 06 09:39:37 compute-0 sudo[82845]: pam_unix(sudo:session): session closed for user root
Dec 06 09:39:37 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 06 09:39:37 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:39:37 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 06 09:39:37 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:39:37 compute-0 sudo[83493]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 09:39:37 compute-0 sudo[83493]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:39:37 compute-0 sudo[83493]: pam_unix(sudo:session): session closed for user root
Dec 06 09:39:38 compute-0 sudo[83518]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 09:39:38 compute-0 sudo[83518]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:39:38 compute-0 sudo[83518]: pam_unix(sudo:session): session closed for user root
Dec 06 09:39:38 compute-0 sudo[83543]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ls
Dec 06 09:39:38 compute-0 sudo[83543]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:39:38 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec 06 09:39:38 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:39:38 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e7 do_prune osdmap full prune enabled
Dec 06 09:39:38 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e7 encode_pending skipping prime_pg_temp; mapping job did not start
Dec 06 09:39:38 compute-0 ceph-mon[74327]: pgmap v46: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 06 09:39:38 compute-0 ceph-mon[74327]: from='osd.0 [v2:192.168.122.101:6800/4293311283,v1:192.168.122.101:6801/4293311283]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-1", "root=default"]}]': finished
Dec 06 09:39:38 compute-0 ceph-mon[74327]: osdmap e7: 2 total, 0 up, 2 in
Dec 06 09:39:38 compute-0 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec 06 09:39:38 compute-0 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 06 09:39:38 compute-0 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec 06 09:39:38 compute-0 ceph-mon[74327]: from='osd.1 [v2:192.168.122.100:6802/2585020672,v1:192.168.122.100:6803/2585020672]' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch
Dec 06 09:39:38 compute-0 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:39:38 compute-0 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:39:38 compute-0 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:39:38 compute-0 ceph-osd[82803]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Dec 06 09:39:38 compute-0 ceph-osd[82803]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Dec 06 09:39:38 compute-0 ceph-mgr[74618]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.101:6800/4293311283; not ready for session (expect reconnect)
Dec 06 09:39:38 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec 06 09:39:38 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec 06 09:39:38 compute-0 ceph-mgr[74618]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec 06 09:39:38 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6802/2585020672,v1:192.168.122.100:6803/2585020672]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished
Dec 06 09:39:38 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e8 e8: 2 total, 0 up, 2 in
Dec 06 09:39:38 compute-0 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e8: 2 total, 0 up, 2 in
Dec 06 09:39:38 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]} v 0)
Dec 06 09:39:38 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6802/2585020672,v1:192.168.122.100:6803/2585020672]' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Dec 06 09:39:38 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e8 create-or-move crush item name 'osd.1' initial_weight 0.0195 at location {host=compute-0,root=default}
Dec 06 09:39:38 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec 06 09:39:38 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec 06 09:39:38 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec 06 09:39:38 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 06 09:39:38 compute-0 ceph-mgr[74618]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec 06 09:39:38 compute-0 ceph-mgr[74618]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec 06 09:39:38 compute-0 podman[83637]: 2025-12-06 09:39:38.808701679 +0000 UTC m=+0.070160908 container exec 484d6ed1039c50317cf4b6067525b7ed0f8de7c568c9445500e62194ab25d04d (image=quay.io/ceph/ceph:v19, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mon-compute-0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 09:39:38 compute-0 podman[83637]: 2025-12-06 09:39:38.927991953 +0000 UTC m=+0.189451222 container exec_died 484d6ed1039c50317cf4b6067525b7ed0f8de7c568c9445500e62194ab25d04d (image=quay.io/ceph/ceph:v19, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mon-compute-0, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec 06 09:39:38 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec 06 09:39:38 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:39:39 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v49: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 06 09:39:39 compute-0 sudo[83543]: pam_unix(sudo:session): session closed for user root
Dec 06 09:39:39 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 06 09:39:39 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:39:39 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 06 09:39:39 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:39:39 compute-0 sudo[83725]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 09:39:39 compute-0 sudo[83725]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:39:39 compute-0 sudo[83725]: pam_unix(sudo:session): session closed for user root
Dec 06 09:39:39 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e8 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 09:39:39 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec 06 09:39:39 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:39:39 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec 06 09:39:39 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:39:39 compute-0 sudo[83750]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Dec 06 09:39:39 compute-0 sudo[83750]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
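
[Note] The sudo pair above is cephadm's host refresh loop: the mgr copies a content-addressed cephadm script to each host and runs `gather-facts`, which prints a JSON document of host facts (CPU, memory, NICs) that the orchestrator then stores under the mgr/cephadm/host.* config keys visible in the surrounding mon_command lines. An illustrative manual equivalent (the memory_total_kb key name is taken from cephadm's fact output and may differ between releases):

    # Inspect the same host facts by hand; the jq key is an assumption, not from this log
    sudo cephadm gather-facts | jq '.memory_total_kb'
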
Dec 06 09:39:39 compute-0 ceph-mgr[74618]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.101:6800/4293311283; not ready for session (expect reconnect)
Dec 06 09:39:39 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec 06 09:39:39 compute-0 ceph-mgr[74618]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec 06 09:39:39 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec 06 09:39:39 compute-0 ceph-mon[74327]: purged_snaps scrub starts
Dec 06 09:39:39 compute-0 ceph-mon[74327]: purged_snaps scrub ok
Dec 06 09:39:39 compute-0 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec 06 09:39:39 compute-0 ceph-mon[74327]: from='osd.1 [v2:192.168.122.100:6802/2585020672,v1:192.168.122.100:6803/2585020672]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished
Dec 06 09:39:39 compute-0 ceph-mon[74327]: osdmap e8: 2 total, 0 up, 2 in
Dec 06 09:39:39 compute-0 ceph-mon[74327]: from='osd.1 [v2:192.168.122.100:6802/2585020672,v1:192.168.122.100:6803/2585020672]' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Dec 06 09:39:39 compute-0 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec 06 09:39:39 compute-0 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 06 09:39:39 compute-0 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:39:39 compute-0 ceph-mon[74327]: pgmap v49: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 06 09:39:39 compute-0 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:39:39 compute-0 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:39:39 compute-0 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:39:39 compute-0 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:39:39 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e8 do_prune osdmap full prune enabled
Dec 06 09:39:39 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e8 encode_pending skipping prime_pg_temp; mapping job did not start
Dec 06 09:39:39 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6802/2585020672,v1:192.168.122.100:6803/2585020672]' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
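
[Note] The weight 0.0195 in the create-or-move above is the device size expressed in TiB, the unit CRUSH weights use: this ~20 GiB backing device gives 20 / 1024 = 0.01953125, rounded to 0.0195. If a weight ever needs correcting after the fact, the matching command is (sketch, same units):

    # CRUSH weights are in TiB: 20 GiB / 1024 ≈ 0.0195
    ceph osd crush reweight osd.1 0.0195
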
Dec 06 09:39:39 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e9 e9: 2 total, 0 up, 2 in
Dec 06 09:39:39 compute-0 ceph-osd[82803]: osd.1 0 done with init, starting boot process
Dec 06 09:39:39 compute-0 ceph-osd[82803]: osd.1 0 start_boot
Dec 06 09:39:39 compute-0 ceph-osd[82803]: osd.1 0 maybe_override_options_for_qos osd_max_backfills set to 1
Dec 06 09:39:39 compute-0 ceph-osd[82803]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Dec 06 09:39:39 compute-0 ceph-osd[82803]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Dec 06 09:39:39 compute-0 ceph-osd[82803]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Dec 06 09:39:39 compute-0 ceph-osd[82803]: osd.1 0  bench count 12288000 bsize 4 KiB
Dec 06 09:39:39 compute-0 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e9: 2 total, 0 up, 2 in
Dec 06 09:39:39 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec 06 09:39:39 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec 06 09:39:39 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec 06 09:39:39 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 06 09:39:39 compute-0 ceph-mgr[74618]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec 06 09:39:39 compute-0 ceph-mgr[74618]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec 06 09:39:39 compute-0 ceph-mgr[74618]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6802/2585020672; not ready for session (expect reconnect)
Dec 06 09:39:39 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec 06 09:39:39 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 06 09:39:39 compute-0 ceph-mgr[74618]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec 06 09:39:40 compute-0 sudo[83750]: pam_unix(sudo:session): session closed for user root
Dec 06 09:39:40 compute-0 sudo[83807]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 09:39:40 compute-0 sudo[83807]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:39:40 compute-0 sudo[83807]: pam_unix(sudo:session): session closed for user root
Dec 06 09:39:40 compute-0 sudo[83832]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 5ecd3f74-dade-5fc4-92ce-8950ae424258 -- inventory --format=json-pretty --filter-for-batch
Dec 06 09:39:40 compute-0 sudo[83832]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:39:40 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec 06 09:39:40 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:39:40 compute-0 podman[83897]: 2025-12-06 09:39:40.667904014 +0000 UTC m=+0.065201594 container create abba0a8e985798f6b040b2eff9fca8b70cc10141979c548851bcf8a2815a0fe7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_tesla, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250325)
Dec 06 09:39:40 compute-0 ceph-mgr[74618]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.101:6800/4293311283; not ready for session (expect reconnect)
Dec 06 09:39:40 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec 06 09:39:40 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec 06 09:39:40 compute-0 ceph-mgr[74618]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec 06 09:39:40 compute-0 ceph-mon[74327]: purged_snaps scrub starts
Dec 06 09:39:40 compute-0 ceph-mon[74327]: purged_snaps scrub ok
Dec 06 09:39:40 compute-0 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec 06 09:39:40 compute-0 ceph-mon[74327]: from='osd.1 [v2:192.168.122.100:6802/2585020672,v1:192.168.122.100:6803/2585020672]' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Dec 06 09:39:40 compute-0 ceph-mon[74327]: osdmap e9: 2 total, 0 up, 2 in
Dec 06 09:39:40 compute-0 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec 06 09:39:40 compute-0 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 06 09:39:40 compute-0 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 06 09:39:40 compute-0 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:39:40 compute-0 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec 06 09:39:40 compute-0 ceph-mgr[74618]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6802/2585020672; not ready for session (expect reconnect)
Dec 06 09:39:40 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec 06 09:39:40 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 06 09:39:40 compute-0 ceph-mgr[74618]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec 06 09:39:40 compute-0 podman[83897]: 2025-12-06 09:39:40.63037918 +0000 UTC m=+0.027676750 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 06 09:39:40 compute-0 systemd[1]: Started libpod-conmon-abba0a8e985798f6b040b2eff9fca8b70cc10141979c548851bcf8a2815a0fe7.scope.
Dec 06 09:39:40 compute-0 systemd[1]: Started libcrun container.
Dec 06 09:39:40 compute-0 podman[83897]: 2025-12-06 09:39:40.786251413 +0000 UTC m=+0.183549003 container init abba0a8e985798f6b040b2eff9fca8b70cc10141979c548851bcf8a2815a0fe7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_tesla, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default)
Dec 06 09:39:40 compute-0 podman[83897]: 2025-12-06 09:39:40.795411217 +0000 UTC m=+0.192708817 container start abba0a8e985798f6b040b2eff9fca8b70cc10141979c548851bcf8a2815a0fe7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_tesla, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Dec 06 09:39:40 compute-0 agitated_tesla[83913]: 167 167
Dec 06 09:39:40 compute-0 systemd[1]: libpod-abba0a8e985798f6b040b2eff9fca8b70cc10141979c548851bcf8a2815a0fe7.scope: Deactivated successfully.
Dec 06 09:39:40 compute-0 podman[83897]: 2025-12-06 09:39:40.822889591 +0000 UTC m=+0.220187161 container attach abba0a8e985798f6b040b2eff9fca8b70cc10141979c548851bcf8a2815a0fe7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_tesla, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 09:39:40 compute-0 podman[83897]: 2025-12-06 09:39:40.823761136 +0000 UTC m=+0.221058696 container died abba0a8e985798f6b040b2eff9fca8b70cc10141979c548851bcf8a2815a0fe7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_tesla, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Dec 06 09:39:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-16b5cebfd76762e367e64435747ac99f35e7965f3bb6168da3c600ad6d0321d8-merged.mount: Deactivated successfully.
Dec 06 09:39:40 compute-0 podman[83897]: 2025-12-06 09:39:40.965427368 +0000 UTC m=+0.362724948 container remove abba0a8e985798f6b040b2eff9fca8b70cc10141979c548851bcf8a2815a0fe7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_tesla, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325)
Dec 06 09:39:40 compute-0 systemd[1]: libpod-conmon-abba0a8e985798f6b040b2eff9fca8b70cc10141979c548851bcf8a2815a0fe7.scope: Deactivated successfully.
Dec 06 09:39:41 compute-0 podman[83938]: 2025-12-06 09:39:41.157990859 +0000 UTC m=+0.069828008 container create 87f98aba4075c90c9b193e8e60d3c00289c4177ccc70bb06eee8b2094c557430 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_chebyshev, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 09:39:41 compute-0 podman[83938]: 2025-12-06 09:39:41.114949686 +0000 UTC m=+0.026786885 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 06 09:39:41 compute-0 systemd[1]: Started libpod-conmon-87f98aba4075c90c9b193e8e60d3c00289c4177ccc70bb06eee8b2094c557430.scope.
Dec 06 09:39:41 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v51: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 06 09:39:41 compute-0 systemd[1]: Started libcrun container.
Dec 06 09:39:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a2d1ab7e710fbc5dfa2f62354b194a3015ba4d9f24b8dd803a373e2d08b0bda9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 09:39:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a2d1ab7e710fbc5dfa2f62354b194a3015ba4d9f24b8dd803a373e2d08b0bda9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 09:39:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a2d1ab7e710fbc5dfa2f62354b194a3015ba4d9f24b8dd803a373e2d08b0bda9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 09:39:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a2d1ab7e710fbc5dfa2f62354b194a3015ba4d9f24b8dd803a373e2d08b0bda9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 09:39:41 compute-0 podman[83938]: 2025-12-06 09:39:41.281321061 +0000 UTC m=+0.193158230 container init 87f98aba4075c90c9b193e8e60d3c00289c4177ccc70bb06eee8b2094c557430 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_chebyshev, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS)
Dec 06 09:39:41 compute-0 podman[83938]: 2025-12-06 09:39:41.288233141 +0000 UTC m=+0.200070290 container start 87f98aba4075c90c9b193e8e60d3c00289c4177ccc70bb06eee8b2094c557430 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_chebyshev, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 09:39:41 compute-0 podman[83938]: 2025-12-06 09:39:41.309723542 +0000 UTC m=+0.221560681 container attach 87f98aba4075c90c9b193e8e60d3c00289c4177ccc70bb06eee8b2094c557430 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_chebyshev, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Dec 06 09:39:41 compute-0 ceph-mgr[74618]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.101:6800/4293311283; not ready for session (expect reconnect)
Dec 06 09:39:41 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec 06 09:39:41 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec 06 09:39:41 compute-0 ceph-mgr[74618]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec 06 09:39:41 compute-0 ceph-mgr[74618]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6802/2585020672; not ready for session (expect reconnect)
Dec 06 09:39:41 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec 06 09:39:41 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 06 09:39:41 compute-0 ceph-mgr[74618]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec 06 09:39:41 compute-0 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 06 09:39:41 compute-0 ceph-mon[74327]: pgmap v51: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 06 09:39:41 compute-0 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec 06 09:39:42 compute-0 great_chebyshev[83954]: [
Dec 06 09:39:42 compute-0 great_chebyshev[83954]:     {
Dec 06 09:39:42 compute-0 great_chebyshev[83954]:         "available": false,
Dec 06 09:39:42 compute-0 great_chebyshev[83954]:         "being_replaced": false,
Dec 06 09:39:42 compute-0 great_chebyshev[83954]:         "ceph_device_lvm": false,
Dec 06 09:39:42 compute-0 great_chebyshev[83954]:         "device_id": "QEMU_DVD-ROM_QM00001",
Dec 06 09:39:42 compute-0 great_chebyshev[83954]:         "lsm_data": {},
Dec 06 09:39:42 compute-0 great_chebyshev[83954]:         "lvs": [],
Dec 06 09:39:42 compute-0 great_chebyshev[83954]:         "path": "/dev/sr0",
Dec 06 09:39:42 compute-0 great_chebyshev[83954]:         "rejected_reasons": [
Dec 06 09:39:42 compute-0 great_chebyshev[83954]:             "Insufficient space (<5GB)",
Dec 06 09:39:42 compute-0 great_chebyshev[83954]:             "Has a FileSystem"
Dec 06 09:39:42 compute-0 great_chebyshev[83954]:         ],
Dec 06 09:39:42 compute-0 great_chebyshev[83954]:         "sys_api": {
Dec 06 09:39:42 compute-0 great_chebyshev[83954]:             "actuators": null,
Dec 06 09:39:42 compute-0 great_chebyshev[83954]:             "device_nodes": [
Dec 06 09:39:42 compute-0 great_chebyshev[83954]:                 "sr0"
Dec 06 09:39:42 compute-0 great_chebyshev[83954]:             ],
Dec 06 09:39:42 compute-0 great_chebyshev[83954]:             "devname": "sr0",
Dec 06 09:39:42 compute-0 great_chebyshev[83954]:             "human_readable_size": "482.00 KB",
Dec 06 09:39:42 compute-0 great_chebyshev[83954]:             "id_bus": "ata",
Dec 06 09:39:42 compute-0 great_chebyshev[83954]:             "model": "QEMU DVD-ROM",
Dec 06 09:39:42 compute-0 great_chebyshev[83954]:             "nr_requests": "2",
Dec 06 09:39:42 compute-0 great_chebyshev[83954]:             "parent": "/dev/sr0",
Dec 06 09:39:42 compute-0 great_chebyshev[83954]:             "partitions": {},
Dec 06 09:39:42 compute-0 great_chebyshev[83954]:             "path": "/dev/sr0",
Dec 06 09:39:42 compute-0 great_chebyshev[83954]:             "removable": "1",
Dec 06 09:39:42 compute-0 great_chebyshev[83954]:             "rev": "2.5+",
Dec 06 09:39:42 compute-0 great_chebyshev[83954]:             "ro": "0",
Dec 06 09:39:42 compute-0 great_chebyshev[83954]:             "rotational": "1",
Dec 06 09:39:42 compute-0 great_chebyshev[83954]:             "sas_address": "",
Dec 06 09:39:42 compute-0 great_chebyshev[83954]:             "sas_device_handle": "",
Dec 06 09:39:42 compute-0 great_chebyshev[83954]:             "scheduler_mode": "mq-deadline",
Dec 06 09:39:42 compute-0 great_chebyshev[83954]:             "sectors": 0,
Dec 06 09:39:42 compute-0 great_chebyshev[83954]:             "sectorsize": "2048",
Dec 06 09:39:42 compute-0 great_chebyshev[83954]:             "size": 493568.0,
Dec 06 09:39:42 compute-0 great_chebyshev[83954]:             "support_discard": "2048",
Dec 06 09:39:42 compute-0 great_chebyshev[83954]:             "type": "disk",
Dec 06 09:39:42 compute-0 great_chebyshev[83954]:             "vendor": "QEMU"
Dec 06 09:39:42 compute-0 great_chebyshev[83954]:         }
Dec 06 09:39:42 compute-0 great_chebyshev[83954]:     }
Dec 06 09:39:42 compute-0 great_chebyshev[83954]: ]
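
[Note] This inventory run (`ceph-volume ... inventory --filter-for-batch`) found only /dev/sr0 on the host and marked it unusable: a 482 KB QEMU DVD-ROM fails both listed checks, "Insufficient space (<5GB)" and "Has a FileSystem". The rejected_reasons array is the first field to read when cephadm declines to create OSDs on a device; to list only usable devices from the same JSON, something like:

    # Print paths of devices ceph-volume reports as available for OSD creation
    sudo cephadm ceph-volume -- inventory --format=json | jq -r '.[] | select(.available) | .path'
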
Dec 06 09:39:42 compute-0 systemd[1]: libpod-87f98aba4075c90c9b193e8e60d3c00289c4177ccc70bb06eee8b2094c557430.scope: Deactivated successfully.
Dec 06 09:39:42 compute-0 podman[83938]: 2025-12-06 09:39:42.048766177 +0000 UTC m=+0.960603346 container died 87f98aba4075c90c9b193e8e60d3c00289c4177ccc70bb06eee8b2094c557430 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_chebyshev, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 09:39:42 compute-0 ceph-mgr[74618]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6802/2585020672; not ready for session (expect reconnect)
Dec 06 09:39:42 compute-0 ceph-mgr[74618]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.101:6800/4293311283; not ready for session (expect reconnect)
Dec 06 09:39:42 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec 06 09:39:42 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 06 09:39:42 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec 06 09:39:42 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec 06 09:39:42 compute-0 ceph-mgr[74618]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec 06 09:39:42 compute-0 ceph-mgr[74618]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec 06 09:39:43 compute-0 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 06 09:39:43 compute-0 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 06 09:39:43 compute-0 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec 06 09:39:43 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v52: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 06 09:39:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-a2d1ab7e710fbc5dfa2f62354b194a3015ba4d9f24b8dd803a373e2d08b0bda9-merged.mount: Deactivated successfully.
Dec 06 09:39:43 compute-0 ceph-mgr[74618]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.101:6800/4293311283; not ready for session (expect reconnect)
Dec 06 09:39:43 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec 06 09:39:43 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec 06 09:39:43 compute-0 ceph-mgr[74618]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec 06 09:39:43 compute-0 ceph-mgr[74618]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6802/2585020672; not ready for session (expect reconnect)
Dec 06 09:39:43 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec 06 09:39:43 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 06 09:39:43 compute-0 ceph-mgr[74618]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec 06 09:39:44 compute-0 podman[83938]: 2025-12-06 09:39:44.020843782 +0000 UTC m=+2.932680961 container remove 87f98aba4075c90c9b193e8e60d3c00289c4177ccc70bb06eee8b2094c557430 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_chebyshev, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 09:39:44 compute-0 systemd[1]: libpod-conmon-87f98aba4075c90c9b193e8e60d3c00289c4177ccc70bb06eee8b2094c557430.scope: Deactivated successfully.
Dec 06 09:39:44 compute-0 sudo[83832]: pam_unix(sudo:session): session closed for user root
Dec 06 09:39:44 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 06 09:39:44 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:39:44 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 06 09:39:44 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:39:44 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 06 09:39:44 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:39:44 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 06 09:39:44 compute-0 ceph-mon[74327]: pgmap v52: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 06 09:39:44 compute-0 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec 06 09:39:44 compute-0 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 06 09:39:44 compute-0 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:39:44 compute-0 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:39:44 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:39:44 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} v 0)
Dec 06 09:39:44 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch
Dec 06 09:39:44 compute-0 ceph-mgr[74618]: [cephadm INFO root] Adjusting osd_memory_target on compute-0 to 128.0M
Dec 06 09:39:44 compute-0 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on compute-0 to 128.0M
Dec 06 09:39:44 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0)
Dec 06 09:39:44 compute-0 ceph-mgr[74618]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on compute-0 to 134217728: error parsing value: Value '134217728' is below minimum 939524096
Dec 06 09:39:44 compute-0 ceph-mgr[74618]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on compute-0 to 134217728: error parsing value: Value '134217728' is below minimum 939524096
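
[Note] This warning is cephadm's memory autotuner hitting a floor: it divides a fraction of host RAM (autotune_memory_target_ratio, 0.7 by default), minus an allowance for non-OSD daemons, across the host's OSDs, which on this small VM yields 134217728 bytes (128 MiB) — below the hard osd_memory_target minimum of 939524096 bytes (896 MiB) — so the mon rejects the value and the preceding "config rm" leaves the default in place. On hosts this small, the usual remedies are to opt the OSDs out of autotuning or pin a value at or above the floor (values illustrative):

    # Opt osd.1 out of autotuning, then pin a target >= the 896 MiB minimum
    ceph config set osd.1 osd_memory_target_autotune false
    ceph config set osd.1 osd_memory_target 939524096
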
Dec 06 09:39:44 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e9 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 09:39:44 compute-0 ceph-mgr[74618]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.101:6800/4293311283; not ready for session (expect reconnect)
Dec 06 09:39:44 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec 06 09:39:44 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec 06 09:39:44 compute-0 ceph-mgr[74618]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec 06 09:39:44 compute-0 ceph-mgr[74618]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6802/2585020672; not ready for session (expect reconnect)
Dec 06 09:39:44 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec 06 09:39:44 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 06 09:39:44 compute-0 ceph-mgr[74618]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec 06 09:39:44 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec 06 09:39:44 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:39:44 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec 06 09:39:44 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:39:45 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec 06 09:39:45 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:39:45 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec 06 09:39:45 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:39:45 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} v 0)
Dec 06 09:39:45 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch
Dec 06 09:39:45 compute-0 ceph-mgr[74618]: [cephadm INFO root] Adjusting osd_memory_target on compute-1 to  5247M
Dec 06 09:39:45 compute-0 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on compute-1 to  5247M
Dec 06 09:39:45 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0)
Dec 06 09:39:45 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:39:45 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v53: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 06 09:39:45 compute-0 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:39:45 compute-0 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:39:45 compute-0 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch
Dec 06 09:39:45 compute-0 ceph-mon[74327]: Adjusting osd_memory_target on compute-0 to 128.0M
Dec 06 09:39:45 compute-0 ceph-mon[74327]: Unable to set osd_memory_target on compute-0 to 134217728: error parsing value: Value '134217728' is below minimum 939524096
Dec 06 09:39:45 compute-0 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec 06 09:39:45 compute-0 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 06 09:39:45 compute-0 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:39:45 compute-0 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:39:45 compute-0 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:39:45 compute-0 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:39:45 compute-0 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch
Dec 06 09:39:45 compute-0 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:39:45 compute-0 ceph-osd[82803]: osd.1 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 24.703 iops: 6324.094 elapsed_sec: 0.474
Dec 06 09:39:45 compute-0 ceph-osd[82803]: log_channel(cluster) log [WRN] : OSD bench result of 6324.094408 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.1. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
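
[Note] This warning closes the loop on the "bench count 12288000 bsize 4 KiB" line logged at startup: 12288000 bytes in 4096-byte writes is 3000 IOs, finished in 0.474 s, i.e. roughly 3000 / 0.474 ≈ 6324 IOPS. Because osd.1 was classed as hdd earlier and 6324 far exceeds the plausible-HDD range (50–500 IOPS), mclock keeps the stock 315 IOPS capacity. As the message recommends, benchmark externally (e.g. with fio) and override; an illustrative override:

    # Pin mclock's assumed IOPS capacity for osd.1 after an external benchmark
    ceph config set osd.1 osd_mclock_max_capacity_iops_hdd 6000
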
Dec 06 09:39:45 compute-0 ceph-osd[82803]: osd.1 0 waiting for initial osdmap
Dec 06 09:39:45 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-osd-1[82799]: 2025-12-06T09:39:45.621+0000 7f0f6bca8640 -1 osd.1 0 waiting for initial osdmap
Dec 06 09:39:45 compute-0 ceph-osd[82803]: osd.1 9 crush map has features 288514050185494528, adjusting msgr requires for clients
Dec 06 09:39:45 compute-0 ceph-osd[82803]: osd.1 9 crush map has features 288514050185494528 was 288232575208792577, adjusting msgr requires for mons
Dec 06 09:39:45 compute-0 ceph-osd[82803]: osd.1 9 crush map has features 3314932999778484224, adjusting msgr requires for osds
Dec 06 09:39:45 compute-0 ceph-osd[82803]: osd.1 9 check_osdmap_features require_osd_release unknown -> squid
Dec 06 09:39:45 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-osd-1[82799]: 2025-12-06T09:39:45.647+0000 7f0f672d0640 -1 osd.1 9 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Dec 06 09:39:45 compute-0 ceph-osd[82803]: osd.1 9 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Dec 06 09:39:45 compute-0 ceph-osd[82803]: osd.1 9 set_numa_affinity not setting numa affinity
Dec 06 09:39:45 compute-0 ceph-osd[82803]: osd.1 9 _collect_metadata loop3:  no unique device id for loop3: fallback method has no model nor serial no unique device path for loop3: no symlink to loop3 in /dev/disk/by-path
Dec 06 09:39:45 compute-0 ceph-mgr[74618]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.101:6800/4293311283; not ready for session (expect reconnect)
Dec 06 09:39:45 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec 06 09:39:45 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec 06 09:39:45 compute-0 ceph-mgr[74618]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec 06 09:39:45 compute-0 ceph-mgr[74618]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6802/2585020672; not ready for session (expect reconnect)
Dec 06 09:39:45 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec 06 09:39:45 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 06 09:39:45 compute-0 ceph-mgr[74618]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec 06 09:39:46 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e9 do_prune osdmap full prune enabled
Dec 06 09:39:46 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e9 encode_pending skipping prime_pg_temp; mapping job did not start
Dec 06 09:39:46 compute-0 ceph-mon[74327]: Adjusting osd_memory_target on compute-1 to  5247M
Dec 06 09:39:46 compute-0 ceph-mon[74327]: pgmap v53: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 06 09:39:46 compute-0 ceph-mon[74327]: OSD bench result of 6324.094408 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.1. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Dec 06 09:39:46 compute-0 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec 06 09:39:46 compute-0 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 06 09:39:46 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e10 e10: 2 total, 1 up, 2 in
Dec 06 09:39:46 compute-0 ceph-mon[74327]: log_channel(cluster) log [INF] : osd.1 [v2:192.168.122.100:6802/2585020672,v1:192.168.122.100:6803/2585020672] boot
Dec 06 09:39:46 compute-0 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e10: 2 total, 1 up, 2 in
Dec 06 09:39:46 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec 06 09:39:46 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec 06 09:39:46 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec 06 09:39:46 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 06 09:39:46 compute-0 ceph-mgr[74618]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec 06 09:39:46 compute-0 ceph-osd[82803]: osd.1 10 state: booting -> active
Dec 06 09:39:46 compute-0 sudo[85069]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-thzvzxygkzogyuxdlgvtxnetvfvqgzir ; /usr/bin/python3'
Dec 06 09:39:46 compute-0 sudo[85069]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:39:46 compute-0 python3[85071]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 5ecd3f74-dade-5fc4-92ce-8950ae424258 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
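
[Note] This Ansible task is the deployment's readiness gate: it runs `ceph status --format json` in a throwaway quay.io/ceph/ceph:v19 container and extracts .osdmap.num_up_osds with jq (at this moment the answer is 1, since only osd.1 is up). A minimal standalone sketch of the same check, retrying until both OSDs report up (assumes a local ceph CLI and admin keyring; the real task re-runs under Ansible's control):

    # Illustrative retry loop mirroring the logged readiness check
    until [ "$(ceph status --format json | jq .osdmap.num_up_osds)" -eq 2 ]; do
        sleep 5
    done
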
Dec 06 09:39:46 compute-0 ceph-mgr[74618]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.101:6800/4293311283; not ready for session (expect reconnect)
Dec 06 09:39:46 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec 06 09:39:46 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec 06 09:39:46 compute-0 ceph-mgr[74618]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec 06 09:39:46 compute-0 podman[85073]: 2025-12-06 09:39:46.646167385 +0000 UTC m=+0.040264274 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 06 09:39:46 compute-0 podman[85073]: 2025-12-06 09:39:46.840107716 +0000 UTC m=+0.234204605 container create 1683510a07c618c2f86ce9a802a091ac17748f10efd82796533f952e582921ba (image=quay.io/ceph/ceph:v19, name=magical_dewdney, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 09:39:46 compute-0 systemd[1]: Started libpod-conmon-1683510a07c618c2f86ce9a802a091ac17748f10efd82796533f952e582921ba.scope.
Dec 06 09:39:46 compute-0 systemd[1]: Started libcrun container.
Dec 06 09:39:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ad7cf690b5146544cbb329ed3c7cd03a03994f07e7c4eae0d0466b282744d91/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 09:39:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ad7cf690b5146544cbb329ed3c7cd03a03994f07e7c4eae0d0466b282744d91/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 09:39:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ad7cf690b5146544cbb329ed3c7cd03a03994f07e7c4eae0d0466b282744d91/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec 06 09:39:46 compute-0 podman[85073]: 2025-12-06 09:39:46.954014036 +0000 UTC m=+0.348110985 container init 1683510a07c618c2f86ce9a802a091ac17748f10efd82796533f952e582921ba (image=quay.io/ceph/ceph:v19, name=magical_dewdney, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec 06 09:39:46 compute-0 podman[85073]: 2025-12-06 09:39:46.966121786 +0000 UTC m=+0.360218675 container start 1683510a07c618c2f86ce9a802a091ac17748f10efd82796533f952e582921ba (image=quay.io/ceph/ceph:v19, name=magical_dewdney, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec 06 09:39:46 compute-0 podman[85073]: 2025-12-06 09:39:46.969799912 +0000 UTC m=+0.363896821 container attach 1683510a07c618c2f86ce9a802a091ac17748f10efd82796533f952e582921ba (image=quay.io/ceph/ceph:v19, name=magical_dewdney, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325)
Dec 06 09:39:47 compute-0 ceph-mgr[74618]: [devicehealth INFO root] creating mgr pool
Dec 06 09:39:47 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true} v 0)
Dec 06 09:39:47 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]: dispatch
Dec 06 09:39:47 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v55: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
Dec 06 09:39:47 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e10 do_prune osdmap full prune enabled
Dec 06 09:39:47 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e10 encode_pending skipping prime_pg_temp; mapping job did not start
Dec 06 09:39:47 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished
Dec 06 09:39:47 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e11 e11: 2 total, 1 up, 2 in
Dec 06 09:39:47 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e11 crush map has features 3314933000852226048, adjusting msgr requires
Dec 06 09:39:47 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e11 crush map has features 288514051259236352, adjusting msgr requires
Dec 06 09:39:47 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e11 crush map has features 288514051259236352, adjusting msgr requires
Dec 06 09:39:47 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e11 crush map has features 288514051259236352, adjusting msgr requires
Dec 06 09:39:47 compute-0 ceph-mon[74327]: osd.1 [v2:192.168.122.100:6802/2585020672,v1:192.168.122.100:6803/2585020672] boot
Dec 06 09:39:47 compute-0 ceph-mon[74327]: osdmap e10: 2 total, 1 up, 2 in
Dec 06 09:39:47 compute-0 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec 06 09:39:47 compute-0 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 06 09:39:47 compute-0 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec 06 09:39:47 compute-0 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]: dispatch
Dec 06 09:39:47 compute-0 ceph-osd[82803]: osd.1 11 crush map has features 288514051259236352, adjusting msgr requires for clients
Dec 06 09:39:47 compute-0 ceph-osd[82803]: osd.1 11 crush map has features 288514051259236352 was 288514050185503233, adjusting msgr requires for mons
Dec 06 09:39:47 compute-0 ceph-osd[82803]: osd.1 11 crush map has features 3314933000852226048, adjusting msgr requires for osds
Dec 06 09:39:47 compute-0 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e11: 2 total, 1 up, 2 in
Dec 06 09:39:47 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec 06 09:39:47 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec 06 09:39:47 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 11 pg[1.0( empty local-lis/les=0/0 n=0 ec=11/11 lis/c=0/0 les/c/f=0/0/0 sis=11) [1] r=0 lpr=11 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 09:39:47 compute-0 ceph-mgr[74618]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec 06 09:39:47 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true} v 0)
Dec 06 09:39:47 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch
Dec 06 09:39:47 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0)
Dec 06 09:39:47 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3530193031' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Dec 06 09:39:47 compute-0 magical_dewdney[85090]: 
Dec 06 09:39:47 compute-0 magical_dewdney[85090]: {"fsid":"5ecd3f74-dade-5fc4-92ce-8950ae424258","health":{"status":"HEALTH_WARN","checks":{"CEPHADM_APPLY_SPEC_FAIL":{"severity":"HEALTH_WARN","summary":{"message":"Failed to apply 2 service(s): mon,mgr","count":2},"muted":false}},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":123,"monmap":{"epoch":1,"min_mon_release_name":"squid","num_mons":1},"osdmap":{"epoch":11,"num_osds":2,"num_up_osds":1,"osd_up_since":1765013986,"num_in_osds":2,"osd_in_since":1765013963,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[],"num_pgs":0,"num_pools":0,"num_objects":0,"data_bytes":0,"bytes_used":446988288,"bytes_avail":21023653888,"bytes_total":21470642176},"fsmap":{"epoch":1,"btime":"2025-12-06T09:37:41:285728+0000","by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":2,"modified":"2025-12-06T09:39:04.942012+0000","services":{}},"progress_events":{}}
Dec 06 09:39:47 compute-0 systemd[1]: libpod-1683510a07c618c2f86ce9a802a091ac17748f10efd82796533f952e582921ba.scope: Deactivated successfully.
Dec 06 09:39:47 compute-0 podman[85073]: 2025-12-06 09:39:47.482814788 +0000 UTC m=+0.876911647 container died 1683510a07c618c2f86ce9a802a091ac17748f10efd82796533f952e582921ba (image=quay.io/ceph/ceph:v19, name=magical_dewdney, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 06 09:39:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-9ad7cf690b5146544cbb329ed3c7cd03a03994f07e7c4eae0d0466b282744d91-merged.mount: Deactivated successfully.
Dec 06 09:39:47 compute-0 podman[85073]: 2025-12-06 09:39:47.519198929 +0000 UTC m=+0.913295788 container remove 1683510a07c618c2f86ce9a802a091ac17748f10efd82796533f952e582921ba (image=quay.io/ceph/ceph:v19, name=magical_dewdney, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Dec 06 09:39:47 compute-0 systemd[1]: libpod-conmon-1683510a07c618c2f86ce9a802a091ac17748f10efd82796533f952e582921ba.scope: Deactivated successfully.
Dec 06 09:39:47 compute-0 sudo[85069]: pam_unix(sudo:session): session closed for user root
Dec 06 09:39:47 compute-0 ceph-mgr[74618]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.101:6800/4293311283; not ready for session (expect reconnect)
Dec 06 09:39:47 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec 06 09:39:47 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec 06 09:39:47 compute-0 ceph-mgr[74618]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
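The "(2) No such file or directory" is ENOENT from the monitor: osd.0 has not yet registered its metadata (it only boots into the map at epoch e12 a moment later), so the mgr's `osd metadata` query for id 0 returns nothing. Once the OSD is up, the same query succeeds:

    # After osd.0 finishes booting, this returns its host/device metadata blob.
    $ ceph osd metadata 0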
Dec 06 09:39:47 compute-0 sudo[85151]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hkciwflcprilcccakkxpkwsgnascbjkd ; /usr/bin/python3'
Dec 06 09:39:47 compute-0 sudo[85151]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:39:48 compute-0 python3[85153]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 5ecd3f74-dade-5fc4-92ce-8950ae424258 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create vms  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
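Reflowed for readability, the command Ansible hands to the shell here is the following (identical content to the _raw_params above, only line breaks added); the same pattern repeats below for the volumes, backups, images, and cephfs.cephfs.meta pools:

    $ podman run --rm --net=host --ipc=host \
        --volume /etc/ceph:/etc/ceph:z \
        --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z \
        --entrypoint ceph quay.io/ceph/ceph:v19 \
        --fsid 5ecd3f74-dade-5fc4-92ce-8950ae424258 \
        -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring \
        osd pool create vms replicated_rule --autoscale-mode on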
Dec 06 09:39:48 compute-0 podman[85154]: 2025-12-06 09:39:48.056326442 +0000 UTC m=+0.024401966 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 06 09:39:48 compute-0 podman[85154]: 2025-12-06 09:39:48.178979305 +0000 UTC m=+0.147054759 container create 9df2d338d1f459b7bb53f8939fde836c434910e6e854925d4283ff426cbaf267 (image=quay.io/ceph/ceph:v19, name=unruffled_brown, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True)
Dec 06 09:39:48 compute-0 systemd[1]: Started libpod-conmon-9df2d338d1f459b7bb53f8939fde836c434910e6e854925d4283ff426cbaf267.scope.
Dec 06 09:39:48 compute-0 systemd[1]: Started libcrun container.
Dec 06 09:39:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf52e731fd2c02df7096c7c1441b148e8e70645b7c0b33b2bc32105c5a4827ee/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 09:39:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf52e731fd2c02df7096c7c1441b148e8e70645b7c0b33b2bc32105c5a4827ee/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 09:39:48 compute-0 podman[85154]: 2025-12-06 09:39:48.264580737 +0000 UTC m=+0.232656211 container init 9df2d338d1f459b7bb53f8939fde836c434910e6e854925d4283ff426cbaf267 (image=quay.io/ceph/ceph:v19, name=unruffled_brown, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 09:39:48 compute-0 podman[85154]: 2025-12-06 09:39:48.271929889 +0000 UTC m=+0.240005333 container start 9df2d338d1f459b7bb53f8939fde836c434910e6e854925d4283ff426cbaf267 (image=quay.io/ceph/ceph:v19, name=unruffled_brown, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec 06 09:39:48 compute-0 podman[85154]: 2025-12-06 09:39:48.277269233 +0000 UTC m=+0.245344697 container attach 9df2d338d1f459b7bb53f8939fde836c434910e6e854925d4283ff426cbaf267 (image=quay.io/ceph/ceph:v19, name=unruffled_brown, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 09:39:48 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e11 do_prune osdmap full prune enabled
Dec 06 09:39:48 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished
Dec 06 09:39:48 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e12 e12: 2 total, 2 up, 2 in
Dec 06 09:39:48 compute-0 ceph-mon[74327]: log_channel(cluster) log [INF] : osd.0 [v2:192.168.122.101:6800/4293311283,v1:192.168.122.101:6801/4293311283] boot
Dec 06 09:39:48 compute-0 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e12: 2 total, 2 up, 2 in
Dec 06 09:39:48 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec 06 09:39:48 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec 06 09:39:48 compute-0 ceph-mon[74327]: pgmap v55: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
Dec 06 09:39:48 compute-0 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished
Dec 06 09:39:48 compute-0 ceph-mon[74327]: osdmap e11: 2 total, 1 up, 2 in
Dec 06 09:39:48 compute-0 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec 06 09:39:48 compute-0 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch
Dec 06 09:39:48 compute-0 ceph-mon[74327]: from='client.? 192.168.122.100:0/3530193031' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Dec 06 09:39:48 compute-0 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec 06 09:39:48 compute-0 ceph-mon[74327]: OSD bench result of 5666.545158 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.0. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
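This message is advisory: the automatic OSD bench measured about 5666 IOPS, which falls outside the 50-500 IOPS sanity window for the default mClock profile, so Ceph left the capacity at the previous 315 IOPS. As the message itself suggests, the usual follow-up is to benchmark the device externally (e.g. with fio) and pin the value; the number below is a made-up placeholder, not a recommendation:

    # Hypothetical override after an external fio benchmark (5000 is illustrative).
    $ ceph config set osd.0 osd_mclock_max_capacity_iops_ssd 5000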
Dec 06 09:39:48 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 12 pg[1.0( empty local-lis/les=11/12 n=0 ec=11/11 lis/c=0/0 les/c/f=0/0/0 sis=11) [1] r=0 lpr=11 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 09:39:48 compute-0 ceph-mgr[74618]: [devicehealth INFO root] creating main.db for devicehealth
Dec 06 09:39:48 compute-0 ceph-mgr[74618]: [devicehealth INFO root] Check health
Dec 06 09:39:48 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
Dec 06 09:39:48 compute-0 sudo[85204]:     ceph : PWD=/ ; USER=root ; COMMAND=/usr/sbin/smartctl -x --json=o /dev/vda
Dec 06 09:39:48 compute-0 sudo[85204]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Dec 06 09:39:48 compute-0 sudo[85204]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=167)
Dec 06 09:39:48 compute-0 sudo[85204]: pam_unix(sudo:session): session closed for user root
Dec 06 09:39:48 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
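These lines are the mgr devicehealth module shelling out through sudo to scrape SMART data from the backing disk; run by hand, the probe is simply the command recorded in the sudo line above:

    # Same probe devicehealth runs; --json=o emits JSON plus the original smartctl output.
    $ sudo /usr/sbin/smartctl -x --json=o /dev/vda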
Dec 06 09:39:48 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Dec 06 09:39:48 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Dec 06 09:39:48 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Dec 06 09:39:48 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1916681859' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Dec 06 09:39:49 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v58: 1 pgs: 1 unknown; 0 B data, 853 MiB used, 39 GiB / 40 GiB avail
Dec 06 09:39:49 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e12 do_prune osdmap full prune enabled
Dec 06 09:39:49 compute-0 ceph-mon[74327]: log_channel(cluster) log [DBG] : mgrmap e9: compute-0.qhdjwa(active, since 106s)
Dec 06 09:39:49 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1916681859' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec 06 09:39:49 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e13 e13: 2 total, 2 up, 2 in
Dec 06 09:39:49 compute-0 unruffled_brown[85169]: pool 'vms' created
Dec 06 09:39:49 compute-0 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished
Dec 06 09:39:49 compute-0 ceph-mon[74327]: osd.0 [v2:192.168.122.101:6800/4293311283,v1:192.168.122.101:6801/4293311283] boot
Dec 06 09:39:49 compute-0 ceph-mon[74327]: osdmap e12: 2 total, 2 up, 2 in
Dec 06 09:39:49 compute-0 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec 06 09:39:49 compute-0 ceph-mon[74327]: from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
Dec 06 09:39:49 compute-0 ceph-mon[74327]: from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
Dec 06 09:39:49 compute-0 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Dec 06 09:39:49 compute-0 ceph-mon[74327]: from='client.? 192.168.122.100:0/1916681859' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Dec 06 09:39:49 compute-0 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e13: 2 total, 2 up, 2 in
Dec 06 09:39:49 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 13 pg[2.0( empty local-lis/les=0/0 n=0 ec=13/13 lis/c=0/0 les/c/f=0/0/0 sis=13) [1] r=0 lpr=13 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 09:39:49 compute-0 systemd[1]: libpod-9df2d338d1f459b7bb53f8939fde836c434910e6e854925d4283ff426cbaf267.scope: Deactivated successfully.
Dec 06 09:39:49 compute-0 podman[85154]: 2025-12-06 09:39:49.365707268 +0000 UTC m=+1.333782712 container died 9df2d338d1f459b7bb53f8939fde836c434910e6e854925d4283ff426cbaf267 (image=quay.io/ceph/ceph:v19, name=unruffled_brown, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 09:39:49 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e13 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 09:39:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-bf52e731fd2c02df7096c7c1441b148e8e70645b7c0b33b2bc32105c5a4827ee-merged.mount: Deactivated successfully.
Dec 06 09:39:49 compute-0 podman[85154]: 2025-12-06 09:39:49.416069323 +0000 UTC m=+1.384144777 container remove 9df2d338d1f459b7bb53f8939fde836c434910e6e854925d4283ff426cbaf267 (image=quay.io/ceph/ceph:v19, name=unruffled_brown, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325)
Dec 06 09:39:49 compute-0 systemd[1]: libpod-conmon-9df2d338d1f459b7bb53f8939fde836c434910e6e854925d4283ff426cbaf267.scope: Deactivated successfully.
Dec 06 09:39:49 compute-0 sudo[85151]: pam_unix(sudo:session): session closed for user root
Dec 06 09:39:49 compute-0 sudo[85246]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lroddfoxonywogalrvsjrrwgrxsgtrjp ; /usr/bin/python3'
Dec 06 09:39:49 compute-0 sudo[85246]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:39:49 compute-0 python3[85248]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 5ecd3f74-dade-5fc4-92ce-8950ae424258 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create volumes  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 09:39:49 compute-0 podman[85249]: 2025-12-06 09:39:49.800927298 +0000 UTC m=+0.027511975 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 06 09:39:50 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e13 do_prune osdmap full prune enabled
Dec 06 09:39:50 compute-0 podman[85249]: 2025-12-06 09:39:50.499832614 +0000 UTC m=+0.726417231 container create 622ccc8ac14fe3cda5762905729bce19f4ce514e727b9bbc33768beb770fb5e9 (image=quay.io/ceph/ceph:v19, name=funny_dewdney, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 09:39:50 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e14 e14: 2 total, 2 up, 2 in
Dec 06 09:39:50 compute-0 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e14: 2 total, 2 up, 2 in
Dec 06 09:39:50 compute-0 ceph-mon[74327]: pgmap v58: 1 pgs: 1 unknown; 0 B data, 853 MiB used, 39 GiB / 40 GiB avail
Dec 06 09:39:50 compute-0 ceph-mon[74327]: mgrmap e9: compute-0.qhdjwa(active, since 106s)
Dec 06 09:39:50 compute-0 ceph-mon[74327]: from='client.? 192.168.122.100:0/1916681859' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec 06 09:39:50 compute-0 ceph-mon[74327]: osdmap e13: 2 total, 2 up, 2 in
Dec 06 09:39:50 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 14 pg[2.0( empty local-lis/les=13/14 n=0 ec=13/13 lis/c=0/0 les/c/f=0/0/0 sis=13) [1] r=0 lpr=13 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 09:39:50 compute-0 systemd[1]: Started libpod-conmon-622ccc8ac14fe3cda5762905729bce19f4ce514e727b9bbc33768beb770fb5e9.scope.
Dec 06 09:39:50 compute-0 systemd[1]: Started libcrun container.
Dec 06 09:39:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c97cea853dbf9130963304a957dfed8f1d435904448ce3de2a0bd4925c625425/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 09:39:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c97cea853dbf9130963304a957dfed8f1d435904448ce3de2a0bd4925c625425/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 09:39:50 compute-0 podman[85249]: 2025-12-06 09:39:50.608018789 +0000 UTC m=+0.834603416 container init 622ccc8ac14fe3cda5762905729bce19f4ce514e727b9bbc33768beb770fb5e9 (image=quay.io/ceph/ceph:v19, name=funny_dewdney, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 09:39:50 compute-0 podman[85249]: 2025-12-06 09:39:50.621021864 +0000 UTC m=+0.847606481 container start 622ccc8ac14fe3cda5762905729bce19f4ce514e727b9bbc33768beb770fb5e9 (image=quay.io/ceph/ceph:v19, name=funny_dewdney, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 09:39:50 compute-0 podman[85249]: 2025-12-06 09:39:50.640437274 +0000 UTC m=+0.867021901 container attach 622ccc8ac14fe3cda5762905729bce19f4ce514e727b9bbc33768beb770fb5e9 (image=quay.io/ceph/ceph:v19, name=funny_dewdney, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 09:39:50 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Dec 06 09:39:50 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/652672954' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Dec 06 09:39:51 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v61: 2 pgs: 2 unknown; 0 B data, 853 MiB used, 39 GiB / 40 GiB avail
Dec 06 09:39:51 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e14 do_prune osdmap full prune enabled
Dec 06 09:39:51 compute-0 ceph-mon[74327]: log_channel(cluster) log [WRN] : Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
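The warning fires because the freshly created pools carry no application tag yet. For OpenStack service pools like these the conventional fix is an rbd tag per pool, sketched below; whether this particular playbook applies the tags later is not visible in this excerpt:

    # Tag a pool so POOL_APP_NOT_ENABLED clears (vms shown as the example).
    $ ceph osd pool application enable vms rbd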
Dec 06 09:39:52 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/652672954' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec 06 09:39:52 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e15 e15: 2 total, 2 up, 2 in
Dec 06 09:39:52 compute-0 funny_dewdney[85264]: pool 'volumes' created
Dec 06 09:39:52 compute-0 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e15: 2 total, 2 up, 2 in
Dec 06 09:39:52 compute-0 ceph-mon[74327]: osdmap e14: 2 total, 2 up, 2 in
Dec 06 09:39:52 compute-0 ceph-mon[74327]: from='client.? 192.168.122.100:0/652672954' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Dec 06 09:39:52 compute-0 systemd[1]: libpod-622ccc8ac14fe3cda5762905729bce19f4ce514e727b9bbc33768beb770fb5e9.scope: Deactivated successfully.
Dec 06 09:39:52 compute-0 podman[85249]: 2025-12-06 09:39:52.135231856 +0000 UTC m=+2.361816463 container died 622ccc8ac14fe3cda5762905729bce19f4ce514e727b9bbc33768beb770fb5e9 (image=quay.io/ceph/ceph:v19, name=funny_dewdney, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec 06 09:39:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-c97cea853dbf9130963304a957dfed8f1d435904448ce3de2a0bd4925c625425-merged.mount: Deactivated successfully.
Dec 06 09:39:52 compute-0 podman[85249]: 2025-12-06 09:39:52.1793558 +0000 UTC m=+2.405940377 container remove 622ccc8ac14fe3cda5762905729bce19f4ce514e727b9bbc33768beb770fb5e9 (image=quay.io/ceph/ceph:v19, name=funny_dewdney, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 09:39:52 compute-0 systemd[1]: libpod-conmon-622ccc8ac14fe3cda5762905729bce19f4ce514e727b9bbc33768beb770fb5e9.scope: Deactivated successfully.
Dec 06 09:39:52 compute-0 sudo[85246]: pam_unix(sudo:session): session closed for user root
Dec 06 09:39:52 compute-0 sudo[85326]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aiknggvitclvrmkikhwhrxiulnqnoxuq ; /usr/bin/python3'
Dec 06 09:39:52 compute-0 sudo[85326]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:39:52 compute-0 python3[85328]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 5ecd3f74-dade-5fc4-92ce-8950ae424258 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create backups  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 09:39:52 compute-0 podman[85329]: 2025-12-06 09:39:52.719556902 +0000 UTC m=+0.090557817 container create e5d305253987b027febb9bc5b7f43bfcbd9fe3668d9ae304caf75fa51f888277 (image=quay.io/ceph/ceph:v19, name=musing_shtern, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default)
Dec 06 09:39:52 compute-0 podman[85329]: 2025-12-06 09:39:52.657655394 +0000 UTC m=+0.028656399 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 06 09:39:52 compute-0 systemd[1]: Started libpod-conmon-e5d305253987b027febb9bc5b7f43bfcbd9fe3668d9ae304caf75fa51f888277.scope.
Dec 06 09:39:52 compute-0 systemd[1]: Started libcrun container.
Dec 06 09:39:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/21674f9ed1e08dddc4b4cec40f9e266cb78b7dc81c3f358cdc1a5d6b4705a78b/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 09:39:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/21674f9ed1e08dddc4b4cec40f9e266cb78b7dc81c3f358cdc1a5d6b4705a78b/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 09:39:52 compute-0 podman[85329]: 2025-12-06 09:39:52.808510571 +0000 UTC m=+0.179511526 container init e5d305253987b027febb9bc5b7f43bfcbd9fe3668d9ae304caf75fa51f888277 (image=quay.io/ceph/ceph:v19, name=musing_shtern, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 09:39:52 compute-0 podman[85329]: 2025-12-06 09:39:52.819291302 +0000 UTC m=+0.190292207 container start e5d305253987b027febb9bc5b7f43bfcbd9fe3668d9ae304caf75fa51f888277 (image=quay.io/ceph/ceph:v19, name=musing_shtern, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 09:39:52 compute-0 podman[85329]: 2025-12-06 09:39:52.82370981 +0000 UTC m=+0.194710755 container attach e5d305253987b027febb9bc5b7f43bfcbd9fe3668d9ae304caf75fa51f888277 (image=quay.io/ceph/ceph:v19, name=musing_shtern, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Dec 06 09:39:53 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e15 do_prune osdmap full prune enabled
Dec 06 09:39:53 compute-0 ceph-mon[74327]: pgmap v61: 2 pgs: 2 unknown; 0 B data, 853 MiB used, 39 GiB / 40 GiB avail
Dec 06 09:39:53 compute-0 ceph-mon[74327]: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Dec 06 09:39:53 compute-0 ceph-mon[74327]: from='client.? 192.168.122.100:0/652672954' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec 06 09:39:53 compute-0 ceph-mon[74327]: osdmap e15: 2 total, 2 up, 2 in
Dec 06 09:39:53 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e16 e16: 2 total, 2 up, 2 in
Dec 06 09:39:53 compute-0 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e16: 2 total, 2 up, 2 in
Dec 06 09:39:53 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v64: 3 pgs: 1 unknown, 2 active+clean; 449 KiB data, 453 MiB used, 40 GiB / 40 GiB avail
Dec 06 09:39:53 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Dec 06 09:39:53 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2220711561' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Dec 06 09:39:54 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e16 do_prune osdmap full prune enabled
Dec 06 09:39:54 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2220711561' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec 06 09:39:54 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e17 e17: 2 total, 2 up, 2 in
Dec 06 09:39:54 compute-0 musing_shtern[85344]: pool 'backups' created
Dec 06 09:39:54 compute-0 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e17: 2 total, 2 up, 2 in
Dec 06 09:39:54 compute-0 ceph-mon[74327]: osdmap e16: 2 total, 2 up, 2 in
Dec 06 09:39:54 compute-0 ceph-mon[74327]: pgmap v64: 3 pgs: 1 unknown, 2 active+clean; 449 KiB data, 453 MiB used, 40 GiB / 40 GiB avail
Dec 06 09:39:54 compute-0 ceph-mon[74327]: from='client.? 192.168.122.100:0/2220711561' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Dec 06 09:39:54 compute-0 systemd[1]: libpod-e5d305253987b027febb9bc5b7f43bfcbd9fe3668d9ae304caf75fa51f888277.scope: Deactivated successfully.
Dec 06 09:39:54 compute-0 podman[85329]: 2025-12-06 09:39:54.224049285 +0000 UTC m=+1.595050210 container died e5d305253987b027febb9bc5b7f43bfcbd9fe3668d9ae304caf75fa51f888277 (image=quay.io/ceph/ceph:v19, name=musing_shtern, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.build-date=20250325)
Dec 06 09:39:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-21674f9ed1e08dddc4b4cec40f9e266cb78b7dc81c3f358cdc1a5d6b4705a78b-merged.mount: Deactivated successfully.
Dec 06 09:39:54 compute-0 podman[85329]: 2025-12-06 09:39:54.267598348 +0000 UTC m=+1.638599253 container remove e5d305253987b027febb9bc5b7f43bfcbd9fe3668d9ae304caf75fa51f888277 (image=quay.io/ceph/ceph:v19, name=musing_shtern, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 06 09:39:54 compute-0 systemd[1]: libpod-conmon-e5d305253987b027febb9bc5b7f43bfcbd9fe3668d9ae304caf75fa51f888277.scope: Deactivated successfully.
Dec 06 09:39:54 compute-0 sudo[85326]: pam_unix(sudo:session): session closed for user root
Dec 06 09:39:54 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e17 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 09:39:54 compute-0 sudo[85404]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ubazccmfuueqcoxhlhgcrabjtekkywei ; /usr/bin/python3'
Dec 06 09:39:54 compute-0 sudo[85404]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:39:54 compute-0 python3[85406]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 5ecd3f74-dade-5fc4-92ce-8950ae424258 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create images  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 09:39:54 compute-0 podman[85407]: 2025-12-06 09:39:54.67103381 +0000 UTC m=+0.039507302 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 06 09:39:55 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e17 do_prune osdmap full prune enabled
Dec 06 09:39:55 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v66: 4 pgs: 1 unknown, 3 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec 06 09:39:57 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v67: 4 pgs: 1 unknown, 3 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec 06 09:39:58 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e18 e18: 2 total, 2 up, 2 in
Dec 06 09:39:58 compute-0 podman[85407]: 2025-12-06 09:39:58.122847445 +0000 UTC m=+3.491320927 container create 5b03bff9f614219b4ce768d79cb9fc9aab31fd2a144dc741c026749f2a74c850 (image=quay.io/ceph/ceph:v19, name=optimistic_gagarin, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Dec 06 09:39:58 compute-0 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e18: 2 total, 2 up, 2 in
Dec 06 09:39:58 compute-0 ceph-mon[74327]: log_channel(cluster) log [WRN] : Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Dec 06 09:39:58 compute-0 ceph-mon[74327]: from='client.? 192.168.122.100:0/2220711561' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec 06 09:39:58 compute-0 ceph-mon[74327]: osdmap e17: 2 total, 2 up, 2 in
Dec 06 09:39:58 compute-0 systemd[1]: Started libpod-conmon-5b03bff9f614219b4ce768d79cb9fc9aab31fd2a144dc741c026749f2a74c850.scope.
Dec 06 09:39:58 compute-0 systemd[1]: Started libcrun container.
Dec 06 09:39:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28e602cd08424c74e1ae56a3fd9acea742ee3b8b4620419111bb4cfdf2cbf2e9/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 09:39:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28e602cd08424c74e1ae56a3fd9acea742ee3b8b4620419111bb4cfdf2cbf2e9/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 09:39:58 compute-0 podman[85407]: 2025-12-06 09:39:58.292068362 +0000 UTC m=+3.660541814 container init 5b03bff9f614219b4ce768d79cb9fc9aab31fd2a144dc741c026749f2a74c850 (image=quay.io/ceph/ceph:v19, name=optimistic_gagarin, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_REF=squid)
Dec 06 09:39:58 compute-0 podman[85407]: 2025-12-06 09:39:58.298903034 +0000 UTC m=+3.667376466 container start 5b03bff9f614219b4ce768d79cb9fc9aab31fd2a144dc741c026749f2a74c850 (image=quay.io/ceph/ceph:v19, name=optimistic_gagarin, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 09:39:58 compute-0 podman[85407]: 2025-12-06 09:39:58.302545061 +0000 UTC m=+3.671018523 container attach 5b03bff9f614219b4ce768d79cb9fc9aab31fd2a144dc741c026749f2a74c850 (image=quay.io/ceph/ceph:v19, name=optimistic_gagarin, org.label-schema.build-date=20250325, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 09:39:58 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Dec 06 09:39:58 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2516193572' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Dec 06 09:39:59 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e18 do_prune osdmap full prune enabled
Dec 06 09:39:59 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v69: 4 pgs: 4 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec 06 09:39:59 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2516193572' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec 06 09:39:59 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e19 e19: 2 total, 2 up, 2 in
Dec 06 09:39:59 compute-0 optimistic_gagarin[85422]: pool 'images' created
Dec 06 09:39:59 compute-0 podman[85407]: 2025-12-06 09:39:59.996840834 +0000 UTC m=+5.365314276 container died 5b03bff9f614219b4ce768d79cb9fc9aab31fd2a144dc741c026749f2a74c850 (image=quay.io/ceph/ceph:v19, name=optimistic_gagarin, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec 06 09:39:59 compute-0 systemd[1]: libpod-5b03bff9f614219b4ce768d79cb9fc9aab31fd2a144dc741c026749f2a74c850.scope: Deactivated successfully.
Dec 06 09:40:00 compute-0 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e19: 2 total, 2 up, 2 in
Dec 06 09:40:00 compute-0 ceph-mon[74327]: pgmap v66: 4 pgs: 1 unknown, 3 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec 06 09:40:00 compute-0 ceph-mon[74327]: pgmap v67: 4 pgs: 1 unknown, 3 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec 06 09:40:00 compute-0 ceph-mon[74327]: osdmap e18: 2 total, 2 up, 2 in
Dec 06 09:40:00 compute-0 ceph-mon[74327]: Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Dec 06 09:40:00 compute-0 ceph-mon[74327]: from='client.? 192.168.122.100:0/2516193572' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Dec 06 09:40:00 compute-0 ceph-mon[74327]: pgmap v69: 4 pgs: 4 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec 06 09:40:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-28e602cd08424c74e1ae56a3fd9acea742ee3b8b4620419111bb4cfdf2cbf2e9-merged.mount: Deactivated successfully.
Dec 06 09:40:01 compute-0 podman[85407]: 2025-12-06 09:40:01.083239653 +0000 UTC m=+6.451713125 container remove 5b03bff9f614219b4ce768d79cb9fc9aab31fd2a144dc741c026749f2a74c850 (image=quay.io/ceph/ceph:v19, name=optimistic_gagarin, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 09:40:01 compute-0 systemd[1]: libpod-conmon-5b03bff9f614219b4ce768d79cb9fc9aab31fd2a144dc741c026749f2a74c850.scope: Deactivated successfully.
Dec 06 09:40:01 compute-0 sudo[85404]: pam_unix(sudo:session): session closed for user root
Dec 06 09:40:01 compute-0 sudo[85484]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-igvstmhfrpzssradekghlslidbdfpblu ; /usr/bin/python3'
Dec 06 09:40:01 compute-0 sudo[85484]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:40:01 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v71: 5 pgs: 1 unknown, 4 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec 06 09:40:01 compute-0 python3[85486]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 5ecd3f74-dade-5fc4-92ce-8950ae424258 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create cephfs.cephfs.meta  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
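The cephfs.cephfs.meta name follows the <fsname>.meta / <fsname>.data convention for a filesystem named cephfs, so a matching data pool and a filesystem creation step presumably follow. A typical continuation would be the sketch below; this is inferred from the naming convention, not shown in this excerpt:

    # Hypothetical continuation: data pool, then binding both pools into a filesystem.
    $ ceph osd pool create cephfs.cephfs.data replicated_rule --autoscale-mode on
    $ ceph fs new cephfs cephfs.cephfs.meta cephfs.cephfs.data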
Dec 06 09:40:01 compute-0 podman[85487]: 2025-12-06 09:40:01.420344584 +0000 UTC m=+0.044047859 container create d0c583521725cab22d6e27cb8b0b84f75001114db6fcd7a216171c553dd801b2 (image=quay.io/ceph/ceph:v19, name=gallant_lumiere, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 06 09:40:01 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e19 do_prune osdmap full prune enabled
Dec 06 09:40:01 compute-0 ceph-mon[74327]: from='client.? 192.168.122.100:0/2516193572' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec 06 09:40:01 compute-0 ceph-mon[74327]: osdmap e19: 2 total, 2 up, 2 in
Dec 06 09:40:01 compute-0 systemd[1]: Started libpod-conmon-d0c583521725cab22d6e27cb8b0b84f75001114db6fcd7a216171c553dd801b2.scope.
Dec 06 09:40:01 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e20 e20: 2 total, 2 up, 2 in
Dec 06 09:40:01 compute-0 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e20: 2 total, 2 up, 2 in
Dec 06 09:40:01 compute-0 systemd[1]: Started libcrun container.
Dec 06 09:40:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db08c98fd338c31f4167bef0bb546b889aef87a0282036d0f709a743d23c4e57/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 09:40:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db08c98fd338c31f4167bef0bb546b889aef87a0282036d0f709a743d23c4e57/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 09:40:01 compute-0 podman[85487]: 2025-12-06 09:40:01.401677169 +0000 UTC m=+0.025380464 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 06 09:40:01 compute-0 podman[85487]: 2025-12-06 09:40:01.500688549 +0000 UTC m=+0.124391854 container init d0c583521725cab22d6e27cb8b0b84f75001114db6fcd7a216171c553dd801b2 (image=quay.io/ceph/ceph:v19, name=gallant_lumiere, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 09:40:01 compute-0 podman[85487]: 2025-12-06 09:40:01.507648095 +0000 UTC m=+0.131351370 container start d0c583521725cab22d6e27cb8b0b84f75001114db6fcd7a216171c553dd801b2 (image=quay.io/ceph/ceph:v19, name=gallant_lumiere, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 09:40:01 compute-0 podman[85487]: 2025-12-06 09:40:01.517731762 +0000 UTC m=+0.141435037 container attach d0c583521725cab22d6e27cb8b0b84f75001114db6fcd7a216171c553dd801b2 (image=quay.io/ceph/ceph:v19, name=gallant_lumiere, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 09:40:01 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Dec 06 09:40:01 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1245180232' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Dec 06 09:40:02 compute-0 ceph-mgr[74618]: [balancer INFO root] Optimize plan auto_2025-12-06_09:40:02
Dec 06 09:40:02 compute-0 ceph-mgr[74618]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 06 09:40:02 compute-0 ceph-mgr[74618]: [balancer INFO root] Some PGs (0.200000) are unknown; try again later
Dec 06 09:40:03 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] _maybe_adjust
Dec 06 09:40:03 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Dec 06 09:40:03 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 1.0778624975581169e-05 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 06 09:40:03 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Dec 06 09:40:03 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Dec 06 09:40:03 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Dec 06 09:40:03 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Dec 06 09:40:03 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Dec 06 09:40:03 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Dec 06 09:40:03 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Dec 06 09:40:03 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Dec 06 09:40:03 compute-0 ceph-mgr[74618]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 06 09:40:03 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 09:40:03 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 09:40:03 compute-0 ceph-mgr[74618]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 06 09:40:03 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 09:40:03 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 09:40:03 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 09:40:03 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 09:40:03 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e20 do_prune osdmap full prune enabled
Dec 06 09:40:03 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"} v 0)
Dec 06 09:40:03 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]: dispatch
Dec 06 09:40:03 compute-0 ceph-mon[74327]: pgmap v71: 5 pgs: 1 unknown, 4 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec 06 09:40:03 compute-0 ceph-mon[74327]: osdmap e20: 2 total, 2 up, 2 in
Dec 06 09:40:03 compute-0 ceph-mon[74327]: from='client.? 192.168.122.100:0/1245180232' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Dec 06 09:40:03 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1245180232' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec 06 09:40:03 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e21 e21: 2 total, 2 up, 2 in
Dec 06 09:40:03 compute-0 gallant_lumiere[85502]: pool 'cephfs.cephfs.meta' created
Dec 06 09:40:03 compute-0 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e21: 2 total, 2 up, 2 in
Dec 06 09:40:03 compute-0 systemd[1]: libpod-d0c583521725cab22d6e27cb8b0b84f75001114db6fcd7a216171c553dd801b2.scope: Deactivated successfully.
Dec 06 09:40:03 compute-0 podman[85529]: 2025-12-06 09:40:03.163043815 +0000 UTC m=+0.027614216 container died d0c583521725cab22d6e27cb8b0b84f75001114db6fcd7a216171c553dd801b2 (image=quay.io/ceph/ceph:v19, name=gallant_lumiere, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 09:40:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-db08c98fd338c31f4167bef0bb546b889aef87a0282036d0f709a743d23c4e57-merged.mount: Deactivated successfully.
Dec 06 09:40:03 compute-0 podman[85529]: 2025-12-06 09:40:03.200854472 +0000 UTC m=+0.065424783 container remove d0c583521725cab22d6e27cb8b0b84f75001114db6fcd7a216171c553dd801b2 (image=quay.io/ceph/ceph:v19, name=gallant_lumiere, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 06 09:40:03 compute-0 systemd[1]: libpod-conmon-d0c583521725cab22d6e27cb8b0b84f75001114db6fcd7a216171c553dd801b2.scope: Deactivated successfully.
Dec 06 09:40:03 compute-0 sudo[85484]: pam_unix(sudo:session): session closed for user root
Dec 06 09:40:03 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v74: 6 pgs: 2 unknown, 4 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec 06 09:40:03 compute-0 sudo[85568]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-seoypacllmckszxcofptiqhjrrexrvtj ; /usr/bin/python3'
Dec 06 09:40:03 compute-0 sudo[85568]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:40:03 compute-0 python3[85570]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 5ecd3f74-dade-5fc4-92ce-8950ae424258 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create cephfs.cephfs.data  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 09:40:03 compute-0 podman[85571]: 2025-12-06 09:40:03.568066569 +0000 UTC m=+0.066226469 container create f26d3f725d8e98d791b7e479867da7b832b34cdcf141d6eb02ea4464b4047ea8 (image=quay.io/ceph/ceph:v19, name=pedantic_volhard, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 09:40:03 compute-0 systemd[1]: Started libpod-conmon-f26d3f725d8e98d791b7e479867da7b832b34cdcf141d6eb02ea4464b4047ea8.scope.
Dec 06 09:40:03 compute-0 podman[85571]: 2025-12-06 09:40:03.538655015 +0000 UTC m=+0.036814985 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 06 09:40:03 compute-0 systemd[1]: Started libcrun container.
Dec 06 09:40:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a51b4f22a977b2a194508918095350f884936d45f15c1fd20abaf73ff4efe528/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 09:40:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a51b4f22a977b2a194508918095350f884936d45f15c1fd20abaf73ff4efe528/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 09:40:03 compute-0 podman[85571]: 2025-12-06 09:40:03.661703015 +0000 UTC m=+0.159862965 container init f26d3f725d8e98d791b7e479867da7b832b34cdcf141d6eb02ea4464b4047ea8 (image=quay.io/ceph/ceph:v19, name=pedantic_volhard, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Dec 06 09:40:03 compute-0 podman[85571]: 2025-12-06 09:40:03.670615555 +0000 UTC m=+0.168775485 container start f26d3f725d8e98d791b7e479867da7b832b34cdcf141d6eb02ea4464b4047ea8 (image=quay.io/ceph/ceph:v19, name=pedantic_volhard, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1)
Dec 06 09:40:03 compute-0 podman[85571]: 2025-12-06 09:40:03.675969558 +0000 UTC m=+0.174129458 container attach f26d3f725d8e98d791b7e479867da7b832b34cdcf141d6eb02ea4464b4047ea8 (image=quay.io/ceph/ceph:v19, name=pedantic_volhard, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Dec 06 09:40:04 compute-0 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]: dispatch
Dec 06 09:40:04 compute-0 ceph-mon[74327]: from='client.? 192.168.122.100:0/1245180232' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec 06 09:40:04 compute-0 ceph-mon[74327]: osdmap e21: 2 total, 2 up, 2 in
Dec 06 09:40:04 compute-0 ceph-mon[74327]: pgmap v74: 6 pgs: 2 unknown, 4 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec 06 09:40:04 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Dec 06 09:40:04 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/273132572' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Dec 06 09:40:04 compute-0 ceph-mon[74327]: log_channel(cluster) log [WRN] : Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Dec 06 09:40:04 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e21 do_prune osdmap full prune enabled
Dec 06 09:40:04 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]': finished
Dec 06 09:40:04 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/273132572' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec 06 09:40:04 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e22 e22: 2 total, 2 up, 2 in
Dec 06 09:40:04 compute-0 pedantic_volhard[85587]: pool 'cephfs.cephfs.data' created
Dec 06 09:40:04 compute-0 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e22: 2 total, 2 up, 2 in
Dec 06 09:40:04 compute-0 ceph-mgr[74618]: [progress INFO root] update: starting ev 7a9f3ae5-48bb-431a-9693-7f43cfabedf9 (PG autoscaler increasing pool 2 PGs from 1 to 32)
Dec 06 09:40:04 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"} v 0)
Dec 06 09:40:04 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]: dispatch
Dec 06 09:40:04 compute-0 systemd[1]: libpod-f26d3f725d8e98d791b7e479867da7b832b34cdcf141d6eb02ea4464b4047ea8.scope: Deactivated successfully.
Dec 06 09:40:04 compute-0 podman[85571]: 2025-12-06 09:40:04.123150529 +0000 UTC m=+0.621310429 container died f26d3f725d8e98d791b7e479867da7b832b34cdcf141d6eb02ea4464b4047ea8 (image=quay.io/ceph/ceph:v19, name=pedantic_volhard, org.label-schema.build-date=20250325, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec 06 09:40:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-a51b4f22a977b2a194508918095350f884936d45f15c1fd20abaf73ff4efe528-merged.mount: Deactivated successfully.
Dec 06 09:40:04 compute-0 podman[85571]: 2025-12-06 09:40:04.187386042 +0000 UTC m=+0.685545942 container remove f26d3f725d8e98d791b7e479867da7b832b34cdcf141d6eb02ea4464b4047ea8 (image=quay.io/ceph/ceph:v19, name=pedantic_volhard, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 09:40:04 compute-0 systemd[1]: libpod-conmon-f26d3f725d8e98d791b7e479867da7b832b34cdcf141d6eb02ea4464b4047ea8.scope: Deactivated successfully.
Dec 06 09:40:04 compute-0 sudo[85568]: pam_unix(sudo:session): session closed for user root
Dec 06 09:40:04 compute-0 sudo[85649]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gdsbmrhwclyadlmnchtepxbltqdesmfe ; /usr/bin/python3'
Dec 06 09:40:04 compute-0 sudo[85649]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:40:04 compute-0 python3[85651]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 5ecd3f74-dade-5fc4-92ce-8950ae424258 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable vms rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 09:40:04 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec 06 09:40:04 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:40:04 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec 06 09:40:04 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:40:04 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec 06 09:40:04 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:40:04 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec 06 09:40:04 compute-0 podman[85652]: 2025-12-06 09:40:04.559961444 +0000 UTC m=+0.056382169 container create 3dfd9c78e8afa1ce937cd6f75b12fd1adade452150f363ba8e5a63b55bd92269 (image=quay.io/ceph/ceph:v19, name=thirsty_wright, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 09:40:04 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:40:04 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0)
Dec 06 09:40:04 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Dec 06 09:40:04 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 06 09:40:04 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 09:40:04 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 06 09:40:04 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 09:40:04 compute-0 ceph-mgr[74618]: [cephadm INFO cephadm.serve] Updating compute-2:/etc/ceph/ceph.conf
Dec 06 09:40:04 compute-0 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Updating compute-2:/etc/ceph/ceph.conf
Dec 06 09:40:04 compute-0 systemd[1]: Started libpod-conmon-3dfd9c78e8afa1ce937cd6f75b12fd1adade452150f363ba8e5a63b55bd92269.scope.
Dec 06 09:40:04 compute-0 systemd[1]: Started libcrun container.
Dec 06 09:40:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/50eaec8881d8b4c56fd23c269018e6b3f346348fcb0fb0d357fcdc9cb671a19f/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 09:40:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/50eaec8881d8b4c56fd23c269018e6b3f346348fcb0fb0d357fcdc9cb671a19f/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 09:40:04 compute-0 podman[85652]: 2025-12-06 09:40:04.542274591 +0000 UTC m=+0.038695336 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 06 09:40:04 compute-0 podman[85652]: 2025-12-06 09:40:04.640551927 +0000 UTC m=+0.136972682 container init 3dfd9c78e8afa1ce937cd6f75b12fd1adade452150f363ba8e5a63b55bd92269 (image=quay.io/ceph/ceph:v19, name=thirsty_wright, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 09:40:04 compute-0 podman[85652]: 2025-12-06 09:40:04.648867357 +0000 UTC m=+0.145288082 container start 3dfd9c78e8afa1ce937cd6f75b12fd1adade452150f363ba8e5a63b55bd92269 (image=quay.io/ceph/ceph:v19, name=thirsty_wright, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Dec 06 09:40:04 compute-0 podman[85652]: 2025-12-06 09:40:04.65204142 +0000 UTC m=+0.148462145 container attach 3dfd9c78e8afa1ce937cd6f75b12fd1adade452150f363ba8e5a63b55bd92269 (image=quay.io/ceph/ceph:v19, name=thirsty_wright, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Dec 06 09:40:04 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e22 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 09:40:04 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 22 pg[7.0( empty local-lis/les=0/0 n=0 ec=22/22 lis/c=0/0 les/c/f=0/0/0 sis=22) [1] r=0 lpr=22 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 09:40:05 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"} v 0)
Dec 06 09:40:05 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3975532761' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]: dispatch
Dec 06 09:40:05 compute-0 ceph-mon[74327]: from='client.? 192.168.122.100:0/273132572' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Dec 06 09:40:05 compute-0 ceph-mon[74327]: Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Dec 06 09:40:05 compute-0 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]': finished
Dec 06 09:40:05 compute-0 ceph-mon[74327]: from='client.? 192.168.122.100:0/273132572' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec 06 09:40:05 compute-0 ceph-mon[74327]: osdmap e22: 2 total, 2 up, 2 in
Dec 06 09:40:05 compute-0 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]: dispatch
Dec 06 09:40:05 compute-0 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:40:05 compute-0 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:40:05 compute-0 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:40:05 compute-0 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:40:05 compute-0 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Dec 06 09:40:05 compute-0 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 09:40:05 compute-0 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 09:40:05 compute-0 ceph-mon[74327]: Updating compute-2:/etc/ceph/ceph.conf
Dec 06 09:40:05 compute-0 ceph-mon[74327]: from='client.? 192.168.122.100:0/3975532761' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]: dispatch
Dec 06 09:40:05 compute-0 ceph-mgr[74618]: [cephadm INFO cephadm.serve] Updating compute-2:/var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/config/ceph.conf
Dec 06 09:40:05 compute-0 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Updating compute-2:/var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/config/ceph.conf
Dec 06 09:40:05 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e22 do_prune osdmap full prune enabled
Dec 06 09:40:05 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]': finished
Dec 06 09:40:05 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3975532761' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]': finished
Dec 06 09:40:05 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e23 e23: 2 total, 2 up, 2 in
Dec 06 09:40:05 compute-0 thirsty_wright[85668]: enabled application 'rbd' on pool 'vms'
Dec 06 09:40:05 compute-0 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e23: 2 total, 2 up, 2 in
Dec 06 09:40:05 compute-0 ceph-mgr[74618]: [progress INFO root] update: starting ev 77f96f84-04b9-4f8b-a569-a0f337be7483 (PG autoscaler increasing pool 3 PGs from 1 to 32)
Dec 06 09:40:05 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"} v 0)
Dec 06 09:40:05 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]: dispatch
Dec 06 09:40:05 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 23 pg[7.0( empty local-lis/les=22/23 n=0 ec=22/22 lis/c=0/0 les/c/f=0/0/0 sis=22) [1] r=0 lpr=22 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 09:40:05 compute-0 systemd[1]: libpod-3dfd9c78e8afa1ce937cd6f75b12fd1adade452150f363ba8e5a63b55bd92269.scope: Deactivated successfully.
Dec 06 09:40:05 compute-0 podman[85652]: 2025-12-06 09:40:05.132894323 +0000 UTC m=+0.629315048 container died 3dfd9c78e8afa1ce937cd6f75b12fd1adade452150f363ba8e5a63b55bd92269 (image=quay.io/ceph/ceph:v19, name=thirsty_wright, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1)
Dec 06 09:40:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-50eaec8881d8b4c56fd23c269018e6b3f346348fcb0fb0d357fcdc9cb671a19f-merged.mount: Deactivated successfully.
Dec 06 09:40:05 compute-0 podman[85652]: 2025-12-06 09:40:05.170859864 +0000 UTC m=+0.667280609 container remove 3dfd9c78e8afa1ce937cd6f75b12fd1adade452150f363ba8e5a63b55bd92269 (image=quay.io/ceph/ceph:v19, name=thirsty_wright, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 09:40:05 compute-0 systemd[1]: libpod-conmon-3dfd9c78e8afa1ce937cd6f75b12fd1adade452150f363ba8e5a63b55bd92269.scope: Deactivated successfully.
Dec 06 09:40:05 compute-0 sudo[85649]: pam_unix(sudo:session): session closed for user root
Dec 06 09:40:05 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v77: 7 pgs: 2 unknown, 5 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec 06 09:40:05 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"} v 0)
Dec 06 09:40:05 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec 06 09:40:05 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"} v 0)
Dec 06 09:40:05 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec 06 09:40:05 compute-0 sudo[85727]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vapzexgxqkagrifuerurdpuykwphtgwo ; /usr/bin/python3'
Dec 06 09:40:05 compute-0 sudo[85727]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:40:05 compute-0 python3[85729]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 5ecd3f74-dade-5fc4-92ce-8950ae424258 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable volumes rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 09:40:05 compute-0 podman[85730]: 2025-12-06 09:40:05.554815844 +0000 UTC m=+0.050814098 container create 04c91770c53bc1b55e38039c004d13921792ffb35dda0cba0c99449ce07f01bb (image=quay.io/ceph/ceph:v19, name=silly_mclaren, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec 06 09:40:05 compute-0 ceph-mgr[74618]: [cephadm INFO cephadm.serve] Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Dec 06 09:40:05 compute-0 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Dec 06 09:40:05 compute-0 systemd[1]: Started libpod-conmon-04c91770c53bc1b55e38039c004d13921792ffb35dda0cba0c99449ce07f01bb.scope.
Dec 06 09:40:05 compute-0 systemd[1]: Started libcrun container.
Dec 06 09:40:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ce4206e6813446cd2c8681b83047b28f40eeed2615c63236032567971dccf90/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 09:40:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ce4206e6813446cd2c8681b83047b28f40eeed2615c63236032567971dccf90/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 09:40:05 compute-0 podman[85730]: 2025-12-06 09:40:05.52909269 +0000 UTC m=+0.025090954 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 06 09:40:05 compute-0 podman[85730]: 2025-12-06 09:40:05.847335261 +0000 UTC m=+0.343333595 container init 04c91770c53bc1b55e38039c004d13921792ffb35dda0cba0c99449ce07f01bb (image=quay.io/ceph/ceph:v19, name=silly_mclaren, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Dec 06 09:40:05 compute-0 podman[85730]: 2025-12-06 09:40:05.858795292 +0000 UTC m=+0.354793526 container start 04c91770c53bc1b55e38039c004d13921792ffb35dda0cba0c99449ce07f01bb (image=quay.io/ceph/ceph:v19, name=silly_mclaren, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 09:40:05 compute-0 podman[85730]: 2025-12-06 09:40:05.86336569 +0000 UTC m=+0.359363924 container attach 04c91770c53bc1b55e38039c004d13921792ffb35dda0cba0c99449ce07f01bb (image=quay.io/ceph/ceph:v19, name=silly_mclaren, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 09:40:06 compute-0 ceph-mgr[74618]: [cephadm INFO cephadm.serve] Updating compute-2:/var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/config/ceph.client.admin.keyring
Dec 06 09:40:06 compute-0 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Updating compute-2:/var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/config/ceph.client.admin.keyring
Dec 06 09:40:06 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e23 do_prune osdmap full prune enabled
Dec 06 09:40:06 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]': finished
Dec 06 09:40:06 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]': finished
Dec 06 09:40:06 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]': finished
Dec 06 09:40:06 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e24 e24: 2 total, 2 up, 2 in
Dec 06 09:40:06 compute-0 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e24: 2 total, 2 up, 2 in
Dec 06 09:40:06 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 24 pg[2.0( empty local-lis/les=13/14 n=0 ec=13/13 lis/c=13/13 les/c/f=14/14/0 sis=24 pruub=8.415267944s) [1] r=0 lpr=24 pi=[13,24)/1 crt=0'0 mlcod 0'0 active pruub 36.849887848s@ mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 06 09:40:06 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 24 pg[2.0( empty local-lis/les=13/14 n=0 ec=13/13 lis/c=13/13 les/c/f=14/14/0 sis=24 pruub=8.415267944s) [1] r=0 lpr=24 pi=[13,24)/1 crt=0'0 mlcod 0'0 unknown pruub 36.849887848s@ mbc={}] state<Start>: transitioning to Primary
Dec 06 09:40:06 compute-0 ceph-mon[74327]: Updating compute-2:/var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/config/ceph.conf
Dec 06 09:40:06 compute-0 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]': finished
Dec 06 09:40:06 compute-0 ceph-mon[74327]: from='client.? 192.168.122.100:0/3975532761' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]': finished
Dec 06 09:40:06 compute-0 ceph-mon[74327]: osdmap e23: 2 total, 2 up, 2 in
Dec 06 09:40:06 compute-0 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]: dispatch
Dec 06 09:40:06 compute-0 ceph-mon[74327]: pgmap v77: 7 pgs: 2 unknown, 5 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec 06 09:40:06 compute-0 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec 06 09:40:06 compute-0 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec 06 09:40:06 compute-0 ceph-mon[74327]: Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Dec 06 09:40:06 compute-0 ceph-mgr[74618]: [progress INFO root] update: starting ev 1f91fb4a-84b8-4b07-86b9-ad6cf512b1c6 (PG autoscaler increasing pool 4 PGs from 1 to 32)
Dec 06 09:40:06 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"} v 0)
Dec 06 09:40:06 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]: dispatch
Dec 06 09:40:06 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"} v 0)
Dec 06 09:40:06 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2735601092' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]: dispatch
Dec 06 09:40:06 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec 06 09:40:06 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:40:06 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec 06 09:40:06 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:40:06 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 06 09:40:06 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:40:06 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v79: 69 pgs: 64 unknown, 5 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec 06 09:40:06 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"} v 0)
Dec 06 09:40:06 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec 06 09:40:06 compute-0 ceph-mgr[74618]: [progress INFO root] update: starting ev f9d57ea0-0593-4d0f-83be-a56b20ce3d10 (Updating mon deployment (+2 -> 3))
Dec 06 09:40:06 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0)
Dec 06 09:40:06 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Dec 06 09:40:06 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0)
Dec 06 09:40:06 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Dec 06 09:40:06 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 06 09:40:06 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 09:40:06 compute-0 ceph-mgr[74618]: [cephadm INFO cephadm.serve] Deploying daemon mon.compute-2 on compute-2
Dec 06 09:40:06 compute-0 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Deploying daemon mon.compute-2 on compute-2
Dec 06 09:40:07 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e24 do_prune osdmap full prune enabled
Dec 06 09:40:07 compute-0 ceph-mon[74327]: Updating compute-2:/var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/config/ceph.client.admin.keyring
Dec 06 09:40:07 compute-0 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]': finished
Dec 06 09:40:07 compute-0 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]': finished
Dec 06 09:40:07 compute-0 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]': finished
Dec 06 09:40:07 compute-0 ceph-mon[74327]: osdmap e24: 2 total, 2 up, 2 in
Dec 06 09:40:07 compute-0 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]: dispatch
Dec 06 09:40:07 compute-0 ceph-mon[74327]: from='client.? 192.168.122.100:0/2735601092' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]: dispatch
Dec 06 09:40:07 compute-0 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:40:07 compute-0 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:40:07 compute-0 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:40:07 compute-0 ceph-mon[74327]: pgmap v79: 69 pgs: 64 unknown, 5 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec 06 09:40:07 compute-0 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec 06 09:40:07 compute-0 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Dec 06 09:40:07 compute-0 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Dec 06 09:40:07 compute-0 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 09:40:07 compute-0 ceph-mon[74327]: Deploying daemon mon.compute-2 on compute-2
Dec 06 09:40:07 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]': finished
Dec 06 09:40:07 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2735601092' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]': finished
Dec 06 09:40:07 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]': finished
Dec 06 09:40:07 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e25 e25: 2 total, 2 up, 2 in
Dec 06 09:40:07 compute-0 silly_mclaren[85743]: enabled application 'rbd' on pool 'volumes'
Dec 06 09:40:07 compute-0 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e25: 2 total, 2 up, 2 in
Dec 06 09:40:07 compute-0 ceph-mgr[74618]: [progress INFO root] update: starting ev 83190a00-1b74-48c3-91b4-335b8d313526 (PG autoscaler increasing pool 5 PGs from 1 to 32)
Dec 06 09:40:07 compute-0 ceph-mgr[74618]: [progress INFO root] complete: finished ev 7a9f3ae5-48bb-431a-9693-7f43cfabedf9 (PG autoscaler increasing pool 2 PGs from 1 to 32)
Dec 06 09:40:07 compute-0 ceph-mgr[74618]: [progress INFO root] Completed event 7a9f3ae5-48bb-431a-9693-7f43cfabedf9 (PG autoscaler increasing pool 2 PGs from 1 to 32) in 3 seconds
Dec 06 09:40:07 compute-0 ceph-mgr[74618]: [progress INFO root] complete: finished ev 77f96f84-04b9-4f8b-a569-a0f337be7483 (PG autoscaler increasing pool 3 PGs from 1 to 32)
Dec 06 09:40:07 compute-0 ceph-mgr[74618]: [progress INFO root] Completed event 77f96f84-04b9-4f8b-a569-a0f337be7483 (PG autoscaler increasing pool 3 PGs from 1 to 32) in 2 seconds
Dec 06 09:40:07 compute-0 ceph-mgr[74618]: [progress INFO root] complete: finished ev 1f91fb4a-84b8-4b07-86b9-ad6cf512b1c6 (PG autoscaler increasing pool 4 PGs from 1 to 32)
Dec 06 09:40:07 compute-0 ceph-mgr[74618]: [progress INFO root] Completed event 1f91fb4a-84b8-4b07-86b9-ad6cf512b1c6 (PG autoscaler increasing pool 4 PGs from 1 to 32) in 1 seconds
Dec 06 09:40:07 compute-0 ceph-mgr[74618]: [progress INFO root] complete: finished ev 83190a00-1b74-48c3-91b4-335b8d313526 (PG autoscaler increasing pool 5 PGs from 1 to 32)
Dec 06 09:40:07 compute-0 ceph-mgr[74618]: [progress INFO root] Completed event 83190a00-1b74-48c3-91b4-335b8d313526 (PG autoscaler increasing pool 5 PGs from 1 to 32) in 0 seconds
Dec 06 09:40:07 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 25 pg[2.1f( empty local-lis/les=13/14 n=0 ec=24/13 lis/c=13/13 les/c/f=14/14/0 sis=24) [1] r=0 lpr=24 pi=[13,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 09:40:07 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 25 pg[2.1b( empty local-lis/les=13/14 n=0 ec=24/13 lis/c=13/13 les/c/f=14/14/0 sis=24) [1] r=0 lpr=24 pi=[13,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 09:40:07 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 25 pg[2.1c( empty local-lis/les=13/14 n=0 ec=24/13 lis/c=13/13 les/c/f=14/14/0 sis=24) [1] r=0 lpr=24 pi=[13,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 09:40:07 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 25 pg[2.1e( empty local-lis/les=13/14 n=0 ec=24/13 lis/c=13/13 les/c/f=14/14/0 sis=24) [1] r=0 lpr=24 pi=[13,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 09:40:07 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 25 pg[2.1d( empty local-lis/les=13/14 n=0 ec=24/13 lis/c=13/13 les/c/f=14/14/0 sis=24) [1] r=0 lpr=24 pi=[13,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 09:40:07 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 25 pg[2.8( empty local-lis/les=13/14 n=0 ec=24/13 lis/c=13/13 les/c/f=14/14/0 sis=24) [1] r=0 lpr=24 pi=[13,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 09:40:07 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 25 pg[2.a( empty local-lis/les=13/14 n=0 ec=24/13 lis/c=13/13 les/c/f=14/14/0 sis=24) [1] r=0 lpr=24 pi=[13,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 09:40:07 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 25 pg[2.9( empty local-lis/les=13/14 n=0 ec=24/13 lis/c=13/13 les/c/f=14/14/0 sis=24) [1] r=0 lpr=24 pi=[13,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 09:40:07 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 25 pg[2.7( empty local-lis/les=13/14 n=0 ec=24/13 lis/c=13/13 les/c/f=14/14/0 sis=24) [1] r=0 lpr=24 pi=[13,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 09:40:07 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 25 pg[2.6( empty local-lis/les=13/14 n=0 ec=24/13 lis/c=13/13 les/c/f=14/14/0 sis=24) [1] r=0 lpr=24 pi=[13,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 09:40:07 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 25 pg[2.4( empty local-lis/les=13/14 n=0 ec=24/13 lis/c=13/13 les/c/f=14/14/0 sis=24) [1] r=0 lpr=24 pi=[13,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 09:40:07 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 25 pg[2.2( empty local-lis/les=13/14 n=0 ec=24/13 lis/c=13/13 les/c/f=14/14/0 sis=24) [1] r=0 lpr=24 pi=[13,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 09:40:07 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 25 pg[2.1( empty local-lis/les=13/14 n=0 ec=24/13 lis/c=13/13 les/c/f=14/14/0 sis=24) [1] r=0 lpr=24 pi=[13,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 09:40:07 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 25 pg[2.5( empty local-lis/les=13/14 n=0 ec=24/13 lis/c=13/13 les/c/f=14/14/0 sis=24) [1] r=0 lpr=24 pi=[13,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 09:40:07 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 25 pg[2.3( empty local-lis/les=13/14 n=0 ec=24/13 lis/c=13/13 les/c/f=14/14/0 sis=24) [1] r=0 lpr=24 pi=[13,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 09:40:07 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 25 pg[2.c( empty local-lis/les=13/14 n=0 ec=24/13 lis/c=13/13 les/c/f=14/14/0 sis=24) [1] r=0 lpr=24 pi=[13,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 09:40:07 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 25 pg[2.b( empty local-lis/les=13/14 n=0 ec=24/13 lis/c=13/13 les/c/f=14/14/0 sis=24) [1] r=0 lpr=24 pi=[13,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 09:40:07 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 25 pg[2.d( empty local-lis/les=13/14 n=0 ec=24/13 lis/c=13/13 les/c/f=14/14/0 sis=24) [1] r=0 lpr=24 pi=[13,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 09:40:07 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 25 pg[2.e( empty local-lis/les=13/14 n=0 ec=24/13 lis/c=13/13 les/c/f=14/14/0 sis=24) [1] r=0 lpr=24 pi=[13,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 09:40:07 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 25 pg[2.f( empty local-lis/les=13/14 n=0 ec=24/13 lis/c=13/13 les/c/f=14/14/0 sis=24) [1] r=0 lpr=24 pi=[13,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 09:40:07 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 25 pg[2.10( empty local-lis/les=13/14 n=0 ec=24/13 lis/c=13/13 les/c/f=14/14/0 sis=24) [1] r=0 lpr=24 pi=[13,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 09:40:07 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 25 pg[2.11( empty local-lis/les=13/14 n=0 ec=24/13 lis/c=13/13 les/c/f=14/14/0 sis=24) [1] r=0 lpr=24 pi=[13,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 09:40:07 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 25 pg[2.12( empty local-lis/les=13/14 n=0 ec=24/13 lis/c=13/13 les/c/f=14/14/0 sis=24) [1] r=0 lpr=24 pi=[13,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 09:40:07 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 25 pg[2.13( empty local-lis/les=13/14 n=0 ec=24/13 lis/c=13/13 les/c/f=14/14/0 sis=24) [1] r=0 lpr=24 pi=[13,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 09:40:07 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 25 pg[2.14( empty local-lis/les=13/14 n=0 ec=24/13 lis/c=13/13 les/c/f=14/14/0 sis=24) [1] r=0 lpr=24 pi=[13,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 09:40:07 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 25 pg[2.15( empty local-lis/les=13/14 n=0 ec=24/13 lis/c=13/13 les/c/f=14/14/0 sis=24) [1] r=0 lpr=24 pi=[13,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 09:40:07 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 25 pg[2.16( empty local-lis/les=13/14 n=0 ec=24/13 lis/c=13/13 les/c/f=14/14/0 sis=24) [1] r=0 lpr=24 pi=[13,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 09:40:07 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 25 pg[2.17( empty local-lis/les=13/14 n=0 ec=24/13 lis/c=13/13 les/c/f=14/14/0 sis=24) [1] r=0 lpr=24 pi=[13,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 09:40:07 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 25 pg[2.18( empty local-lis/les=13/14 n=0 ec=24/13 lis/c=13/13 les/c/f=14/14/0 sis=24) [1] r=0 lpr=24 pi=[13,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 09:40:07 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 25 pg[2.19( empty local-lis/les=13/14 n=0 ec=24/13 lis/c=13/13 les/c/f=14/14/0 sis=24) [1] r=0 lpr=24 pi=[13,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 09:40:07 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 25 pg[2.1a( empty local-lis/les=13/14 n=0 ec=24/13 lis/c=13/13 les/c/f=14/14/0 sis=24) [1] r=0 lpr=24 pi=[13,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 09:40:07 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 25 pg[2.1d( empty local-lis/les=24/25 n=0 ec=24/13 lis/c=13/13 les/c/f=14/14/0 sis=24) [1] r=0 lpr=24 pi=[13,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 09:40:07 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 25 pg[2.1c( empty local-lis/les=24/25 n=0 ec=24/13 lis/c=13/13 les/c/f=14/14/0 sis=24) [1] r=0 lpr=24 pi=[13,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 09:40:07 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 25 pg[2.1f( empty local-lis/les=24/25 n=0 ec=24/13 lis/c=13/13 les/c/f=14/14/0 sis=24) [1] r=0 lpr=24 pi=[13,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 09:40:07 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 25 pg[2.1b( empty local-lis/les=24/25 n=0 ec=24/13 lis/c=13/13 les/c/f=14/14/0 sis=24) [1] r=0 lpr=24 pi=[13,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 09:40:07 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 25 pg[2.a( empty local-lis/les=24/25 n=0 ec=24/13 lis/c=13/13 les/c/f=14/14/0 sis=24) [1] r=0 lpr=24 pi=[13,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 09:40:07 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 25 pg[2.8( empty local-lis/les=24/25 n=0 ec=24/13 lis/c=13/13 les/c/f=14/14/0 sis=24) [1] r=0 lpr=24 pi=[13,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 09:40:07 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 25 pg[2.9( empty local-lis/les=24/25 n=0 ec=24/13 lis/c=13/13 les/c/f=14/14/0 sis=24) [1] r=0 lpr=24 pi=[13,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 09:40:07 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 25 pg[2.1e( empty local-lis/les=24/25 n=0 ec=24/13 lis/c=13/13 les/c/f=14/14/0 sis=24) [1] r=0 lpr=24 pi=[13,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 09:40:07 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 25 pg[2.2( empty local-lis/les=24/25 n=0 ec=24/13 lis/c=13/13 les/c/f=14/14/0 sis=24) [1] r=0 lpr=24 pi=[13,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 09:40:07 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 25 pg[2.6( empty local-lis/les=24/25 n=0 ec=24/13 lis/c=13/13 les/c/f=14/14/0 sis=24) [1] r=0 lpr=24 pi=[13,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 09:40:07 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 25 pg[2.1( empty local-lis/les=24/25 n=0 ec=24/13 lis/c=13/13 les/c/f=14/14/0 sis=24) [1] r=0 lpr=24 pi=[13,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 09:40:07 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 25 pg[2.0( empty local-lis/les=24/25 n=0 ec=13/13 lis/c=13/13 les/c/f=14/14/0 sis=24) [1] r=0 lpr=24 pi=[13,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 09:40:07 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 25 pg[2.5( empty local-lis/les=24/25 n=0 ec=24/13 lis/c=13/13 les/c/f=14/14/0 sis=24) [1] r=0 lpr=24 pi=[13,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 09:40:07 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 25 pg[2.7( empty local-lis/les=24/25 n=0 ec=24/13 lis/c=13/13 les/c/f=14/14/0 sis=24) [1] r=0 lpr=24 pi=[13,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 09:40:07 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 25 pg[2.3( empty local-lis/les=24/25 n=0 ec=24/13 lis/c=13/13 les/c/f=14/14/0 sis=24) [1] r=0 lpr=24 pi=[13,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 09:40:07 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 25 pg[2.c( empty local-lis/les=24/25 n=0 ec=24/13 lis/c=13/13 les/c/f=14/14/0 sis=24) [1] r=0 lpr=24 pi=[13,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 09:40:07 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 25 pg[2.b( empty local-lis/les=24/25 n=0 ec=24/13 lis/c=13/13 les/c/f=14/14/0 sis=24) [1] r=0 lpr=24 pi=[13,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 09:40:07 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 25 pg[2.4( empty local-lis/les=24/25 n=0 ec=24/13 lis/c=13/13 les/c/f=14/14/0 sis=24) [1] r=0 lpr=24 pi=[13,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 09:40:07 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 25 pg[2.d( empty local-lis/les=24/25 n=0 ec=24/13 lis/c=13/13 les/c/f=14/14/0 sis=24) [1] r=0 lpr=24 pi=[13,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 09:40:07 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 25 pg[2.e( empty local-lis/les=24/25 n=0 ec=24/13 lis/c=13/13 les/c/f=14/14/0 sis=24) [1] r=0 lpr=24 pi=[13,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 09:40:07 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 25 pg[2.f( empty local-lis/les=24/25 n=0 ec=24/13 lis/c=13/13 les/c/f=14/14/0 sis=24) [1] r=0 lpr=24 pi=[13,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 09:40:07 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 25 pg[2.10( empty local-lis/les=24/25 n=0 ec=24/13 lis/c=13/13 les/c/f=14/14/0 sis=24) [1] r=0 lpr=24 pi=[13,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 09:40:07 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 25 pg[2.11( empty local-lis/les=24/25 n=0 ec=24/13 lis/c=13/13 les/c/f=14/14/0 sis=24) [1] r=0 lpr=24 pi=[13,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 09:40:07 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 25 pg[2.12( empty local-lis/les=24/25 n=0 ec=24/13 lis/c=13/13 les/c/f=14/14/0 sis=24) [1] r=0 lpr=24 pi=[13,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 09:40:07 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 25 pg[2.14( empty local-lis/les=24/25 n=0 ec=24/13 lis/c=13/13 les/c/f=14/14/0 sis=24) [1] r=0 lpr=24 pi=[13,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 09:40:07 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 25 pg[2.13( empty local-lis/les=24/25 n=0 ec=24/13 lis/c=13/13 les/c/f=14/14/0 sis=24) [1] r=0 lpr=24 pi=[13,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 09:40:07 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 25 pg[2.15( empty local-lis/les=24/25 n=0 ec=24/13 lis/c=13/13 les/c/f=14/14/0 sis=24) [1] r=0 lpr=24 pi=[13,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 09:40:07 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 25 pg[2.16( empty local-lis/les=24/25 n=0 ec=24/13 lis/c=13/13 les/c/f=14/14/0 sis=24) [1] r=0 lpr=24 pi=[13,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 09:40:07 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 25 pg[2.1a( empty local-lis/les=24/25 n=0 ec=24/13 lis/c=13/13 les/c/f=14/14/0 sis=24) [1] r=0 lpr=24 pi=[13,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 09:40:07 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 25 pg[2.17( empty local-lis/les=24/25 n=0 ec=24/13 lis/c=13/13 les/c/f=14/14/0 sis=24) [1] r=0 lpr=24 pi=[13,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 09:40:07 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 25 pg[2.19( empty local-lis/les=24/25 n=0 ec=24/13 lis/c=13/13 les/c/f=14/14/0 sis=24) [1] r=0 lpr=24 pi=[13,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 09:40:07 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 25 pg[2.18( empty local-lis/les=24/25 n=0 ec=24/13 lis/c=13/13 les/c/f=14/14/0 sis=24) [1] r=0 lpr=24 pi=[13,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 09:40:07 compute-0 systemd[1]: libpod-04c91770c53bc1b55e38039c004d13921792ffb35dda0cba0c99449ce07f01bb.scope: Deactivated successfully.
Dec 06 09:40:07 compute-0 podman[85730]: 2025-12-06 09:40:07.167464679 +0000 UTC m=+1.663462933 container died 04c91770c53bc1b55e38039c004d13921792ffb35dda0cba0c99449ce07f01bb (image=quay.io/ceph/ceph:v19, name=silly_mclaren, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 09:40:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-2ce4206e6813446cd2c8681b83047b28f40eeed2615c63236032567971dccf90-merged.mount: Deactivated successfully.
Dec 06 09:40:07 compute-0 podman[85730]: 2025-12-06 09:40:07.204088417 +0000 UTC m=+1.700086661 container remove 04c91770c53bc1b55e38039c004d13921792ffb35dda0cba0c99449ce07f01bb (image=quay.io/ceph/ceph:v19, name=silly_mclaren, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True)
Dec 06 09:40:07 compute-0 systemd[1]: libpod-conmon-04c91770c53bc1b55e38039c004d13921792ffb35dda0cba0c99449ce07f01bb.scope: Deactivated successfully.
Dec 06 09:40:07 compute-0 sudo[85727]: pam_unix(sudo:session): session closed for user root
Dec 06 09:40:07 compute-0 sudo[85803]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hrpjompufspuqlmtldcmltclakprnahi ; /usr/bin/python3'
Dec 06 09:40:07 compute-0 sudo[85803]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:40:07 compute-0 python3[85805]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 5ecd3f74-dade-5fc4-92ce-8950ae424258 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable backups rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 09:40:07 compute-0 ceph-mon[74327]: log_channel(cluster) log [INF] : Health check cleared: CEPHADM_APPLY_SPEC_FAIL (was: Failed to apply 2 service(s): mon,mgr)
Dec 06 09:40:07 compute-0 podman[85806]: 2025-12-06 09:40:07.597359349 +0000 UTC m=+0.062933631 container create b201c6401fe231d1132a61c31d566511e6e1d186f07602cef560b61a1ff2f31f (image=quay.io/ceph/ceph:v19, name=bold_cori, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=squid)
Dec 06 09:40:07 compute-0 systemd[1]: Started libpod-conmon-b201c6401fe231d1132a61c31d566511e6e1d186f07602cef560b61a1ff2f31f.scope.
Dec 06 09:40:07 compute-0 systemd[1]: Started libcrun container.
Dec 06 09:40:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3150ff693cb8633c70c6e4a92bd6b972a44ab3ac5ebf4d4a0c0527ade6af70a7/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 09:40:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3150ff693cb8633c70c6e4a92bd6b972a44ab3ac5ebf4d4a0c0527ade6af70a7/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 09:40:07 compute-0 podman[85806]: 2025-12-06 09:40:07.579611434 +0000 UTC m=+0.045185756 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 06 09:40:07 compute-0 podman[85806]: 2025-12-06 09:40:07.67724104 +0000 UTC m=+0.142815412 container init b201c6401fe231d1132a61c31d566511e6e1d186f07602cef560b61a1ff2f31f (image=quay.io/ceph/ceph:v19, name=bold_cori, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 09:40:07 compute-0 podman[85806]: 2025-12-06 09:40:07.6858707 +0000 UTC m=+0.151444982 container start b201c6401fe231d1132a61c31d566511e6e1d186f07602cef560b61a1ff2f31f (image=quay.io/ceph/ceph:v19, name=bold_cori, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 09:40:07 compute-0 podman[85806]: 2025-12-06 09:40:07.68991142 +0000 UTC m=+0.155485802 container attach b201c6401fe231d1132a61c31d566511e6e1d186f07602cef560b61a1ff2f31f (image=quay.io/ceph/ceph:v19, name=bold_cori, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 09:40:07 compute-0 ceph-osd[82803]: log_channel(cluster) log [DBG] : 2.1d scrub starts
Dec 06 09:40:07 compute-0 ceph-osd[82803]: log_channel(cluster) log [DBG] : 2.1d scrub ok
Dec 06 09:40:08 compute-0 ceph-mgr[74618]: [progress INFO root] Writing back 6 completed events
Dec 06 09:40:08 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Dec 06 09:40:08 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:40:08 compute-0 ceph-mgr[74618]: [progress WARNING root] Starting Global Recovery Event,95 pgs not in active + clean state
Dec 06 09:40:08 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"} v 0)
Dec 06 09:40:08 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/250124401' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]: dispatch
Dec 06 09:40:08 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e25 do_prune osdmap full prune enabled
Dec 06 09:40:08 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/250124401' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]': finished
Dec 06 09:40:08 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e26 e26: 2 total, 2 up, 2 in
Dec 06 09:40:08 compute-0 bold_cori[85821]: enabled application 'rbd' on pool 'backups'
Dec 06 09:40:08 compute-0 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e26: 2 total, 2 up, 2 in
Dec 06 09:40:08 compute-0 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]': finished
Dec 06 09:40:08 compute-0 ceph-mon[74327]: from='client.? 192.168.122.100:0/2735601092' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]': finished
Dec 06 09:40:08 compute-0 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]': finished
Dec 06 09:40:08 compute-0 ceph-mon[74327]: osdmap e25: 2 total, 2 up, 2 in
Dec 06 09:40:08 compute-0 ceph-mon[74327]: Health check cleared: CEPHADM_APPLY_SPEC_FAIL (was: Failed to apply 2 service(s): mon,mgr)
Dec 06 09:40:08 compute-0 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:40:08 compute-0 ceph-mon[74327]: from='client.? 192.168.122.100:0/250124401' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]: dispatch
Dec 06 09:40:08 compute-0 systemd[1]: libpod-b201c6401fe231d1132a61c31d566511e6e1d186f07602cef560b61a1ff2f31f.scope: Deactivated successfully.
Dec 06 09:40:08 compute-0 podman[85806]: 2025-12-06 09:40:08.168680016 +0000 UTC m=+0.634254308 container died b201c6401fe231d1132a61c31d566511e6e1d186f07602cef560b61a1ff2f31f (image=quay.io/ceph/ceph:v19, name=bold_cori, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 09:40:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-3150ff693cb8633c70c6e4a92bd6b972a44ab3ac5ebf4d4a0c0527ade6af70a7-merged.mount: Deactivated successfully.
Dec 06 09:40:08 compute-0 podman[85806]: 2025-12-06 09:40:08.206352488 +0000 UTC m=+0.671926770 container remove b201c6401fe231d1132a61c31d566511e6e1d186f07602cef560b61a1ff2f31f (image=quay.io/ceph/ceph:v19, name=bold_cori, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Dec 06 09:40:08 compute-0 systemd[1]: libpod-conmon-b201c6401fe231d1132a61c31d566511e6e1d186f07602cef560b61a1ff2f31f.scope: Deactivated successfully.
Dec 06 09:40:08 compute-0 sudo[85803]: pam_unix(sudo:session): session closed for user root
Dec 06 09:40:08 compute-0 sudo[85882]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dwmlmzlxwsxkalzyzltniionjkntlqvj ; /usr/bin/python3'
Dec 06 09:40:08 compute-0 sudo[85882]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:40:08 compute-0 python3[85884]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 5ecd3f74-dade-5fc4-92ce-8950ae424258 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable images rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 09:40:08 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v82: 100 pgs: 1 peering, 94 unknown, 5 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec 06 09:40:08 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"} v 0)
Dec 06 09:40:08 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec 06 09:40:08 compute-0 podman[85885]: 2025-12-06 09:40:08.557391321 +0000 UTC m=+0.021963834 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 06 09:40:08 compute-0 ceph-osd[82803]: log_channel(cluster) log [DBG] : 2.1c scrub starts
Dec 06 09:40:08 compute-0 ceph-osd[82803]: log_channel(cluster) log [DBG] : 2.1c scrub ok
Dec 06 09:40:09 compute-0 podman[85885]: 2025-12-06 09:40:09.082876212 +0000 UTC m=+0.547448735 container create 0ef8b54d7bdf0135fbfcb826715eadab703918ac7a5ae60fe8d2cb08566ea36e (image=quay.io/ceph/ceph:v19, name=pensive_payne, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Dec 06 09:40:09 compute-0 systemd[75653]: Starting Mark boot as successful...
Dec 06 09:40:09 compute-0 systemd[75653]: Finished Mark boot as successful.
Dec 06 09:40:09 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e26 do_prune osdmap full prune enabled
Dec 06 09:40:09 compute-0 ceph-mon[74327]: log_channel(cluster) log [WRN] : Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Dec 06 09:40:09 compute-0 systemd[1]: Started libpod-conmon-0ef8b54d7bdf0135fbfcb826715eadab703918ac7a5ae60fe8d2cb08566ea36e.scope.
Dec 06 09:40:09 compute-0 systemd[1]: Started libcrun container.
Dec 06 09:40:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/36ccd2f50842bd7928969aa56bc15b6dc8f9512af1bb94519f4816e3b58c66e1/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 09:40:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/36ccd2f50842bd7928969aa56bc15b6dc8f9512af1bb94519f4816e3b58c66e1/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 09:40:09 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]': finished
Dec 06 09:40:09 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e27 e27: 2 total, 2 up, 2 in
Dec 06 09:40:09 compute-0 podman[85885]: 2025-12-06 09:40:09.83714012 +0000 UTC m=+1.301712663 container init 0ef8b54d7bdf0135fbfcb826715eadab703918ac7a5ae60fe8d2cb08566ea36e (image=quay.io/ceph/ceph:v19, name=pensive_payne, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec 06 09:40:09 compute-0 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e27: 2 total, 2 up, 2 in
Dec 06 09:40:09 compute-0 podman[85885]: 2025-12-06 09:40:09.843807116 +0000 UTC m=+1.308379619 container start 0ef8b54d7bdf0135fbfcb826715eadab703918ac7a5ae60fe8d2cb08566ea36e (image=quay.io/ceph/ceph:v19, name=pensive_payne, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, OSD_FLAVOR=default)
Dec 06 09:40:09 compute-0 ceph-osd[82803]: log_channel(cluster) log [DBG] : 2.1f scrub starts
Dec 06 09:40:09 compute-0 ceph-osd[82803]: log_channel(cluster) log [DBG] : 2.1f scrub ok
Dec 06 09:40:09 compute-0 podman[85885]: 2025-12-06 09:40:09.987458104 +0000 UTC m=+1.452030597 container attach 0ef8b54d7bdf0135fbfcb826715eadab703918ac7a5ae60fe8d2cb08566ea36e (image=quay.io/ceph/ceph:v19, name=pensive_payne, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec 06 09:40:10 compute-0 ceph-mon[74327]: 2.1d scrub starts
Dec 06 09:40:10 compute-0 ceph-mon[74327]: 2.1d scrub ok
Dec 06 09:40:10 compute-0 ceph-mon[74327]: 3.18 scrub starts
Dec 06 09:40:10 compute-0 ceph-mon[74327]: 3.18 scrub ok
Dec 06 09:40:10 compute-0 ceph-mon[74327]: from='client.? 192.168.122.100:0/250124401' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]': finished
Dec 06 09:40:10 compute-0 ceph-mon[74327]: osdmap e26: 2 total, 2 up, 2 in
Dec 06 09:40:10 compute-0 ceph-mon[74327]: pgmap v82: 100 pgs: 1 peering, 94 unknown, 5 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec 06 09:40:10 compute-0 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec 06 09:40:10 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "images", "app": "rbd"} v 0)
Dec 06 09:40:10 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3524701111' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]: dispatch
Dec 06 09:40:10 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e1  adding peer [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] to list of hints
Dec 06 09:40:10 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e1  adding peer [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] to list of hints
Dec 06 09:40:10 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec 06 09:40:10 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:40:10 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec 06 09:40:10 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:40:10 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Dec 06 09:40:10 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:40:10 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0)
Dec 06 09:40:10 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Dec 06 09:40:10 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0)
Dec 06 09:40:10 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Dec 06 09:40:10 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 06 09:40:10 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 09:40:10 compute-0 ceph-mgr[74618]: [cephadm INFO cephadm.serve] Deploying daemon mon.compute-1 on compute-1
Dec 06 09:40:10 compute-0 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Deploying daemon mon.compute-1 on compute-1
Dec 06 09:40:10 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e1  adding peer [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] to list of hints
Dec 06 09:40:10 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).monmap v1 adding/updating compute-2 at [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] to monitor cluster
Dec 06 09:40:10 compute-0 ceph-mgr[74618]: mgr.server handle_open ignoring open from mon.compute-2 192.168.122.102:0/3022612511; not ready for session (expect reconnect)
Dec 06 09:40:10 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Dec 06 09:40:10 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Dec 06 09:40:10 compute-0 ceph-mgr[74618]: mgr finish mon failed to return metadata for mon.compute-2: (2) No such file or directory
Dec 06 09:40:10 compute-0 ceph-mon[74327]: mon.compute-0@0(probing) e2 handle_command mon_command({"prefix": "osd pool application enable", "pool": "images", "app": "rbd"} v 0)
Dec 06 09:40:10 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3524701111' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]: dispatch
Dec 06 09:40:10 compute-0 ceph-mon[74327]: mon.compute-0@0(probing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Dec 06 09:40:10 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Dec 06 09:40:10 compute-0 ceph-mon[74327]: mon.compute-0@0(probing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Dec 06 09:40:10 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Dec 06 09:40:10 compute-0 ceph-mon[74327]: log_channel(cluster) log [INF] : mon.compute-0 calling monitor election
Dec 06 09:40:10 compute-0 ceph-mgr[74618]: mgr finish mon failed to return metadata for mon.compute-2: (22) Invalid argument
Dec 06 09:40:10 compute-0 ceph-mon[74327]: paxos.0).electionLogic(5) init, last seen epoch 5, mid-election, bumping
Dec 06 09:40:10 compute-0 ceph-mon[74327]: mon.compute-0@0(electing) e2 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 06 09:40:10 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v84: 131 pgs: 1 peering, 62 unknown, 68 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec 06 09:40:10 compute-0 ceph-osd[82803]: log_channel(cluster) log [DBG] : 2.8 scrub starts
Dec 06 09:40:11 compute-0 ceph-osd[82803]: log_channel(cluster) log [DBG] : 2.8 scrub ok
Dec 06 09:40:11 compute-0 ceph-mgr[74618]: mgr.server handle_open ignoring open from mon.compute-2 192.168.122.102:0/3022612511; not ready for session (expect reconnect)
Dec 06 09:40:11 compute-0 ceph-mon[74327]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Dec 06 09:40:11 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Dec 06 09:40:11 compute-0 ceph-mgr[74618]: mgr finish mon failed to return metadata for mon.compute-2: (22) Invalid argument
Dec 06 09:40:11 compute-0 ceph-osd[82803]: log_channel(cluster) log [DBG] : 2.1b scrub starts
Dec 06 09:40:11 compute-0 ceph-osd[82803]: log_channel(cluster) log [DBG] : 2.1b scrub ok
Dec 06 09:40:12 compute-0 ceph-mgr[74618]: mgr.server handle_open ignoring open from mon.compute-2 192.168.122.102:0/3022612511; not ready for session (expect reconnect)
Dec 06 09:40:12 compute-0 ceph-mon[74327]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Dec 06 09:40:12 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Dec 06 09:40:12 compute-0 ceph-mgr[74618]: mgr finish mon failed to return metadata for mon.compute-2: (22) Invalid argument
Dec 06 09:40:12 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v85: 131 pgs: 31 unknown, 100 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec 06 09:40:12 compute-0 ceph-mon[74327]: mon.compute-0@0(electing) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec 06 09:40:12 compute-0 ceph-mon[74327]: mon.compute-0@0(electing) e2  adding peer [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] to list of hints
Dec 06 09:40:12 compute-0 ceph-mon[74327]: mon.compute-0@0(electing) e2  adding peer [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] to list of hints
Dec 06 09:40:12 compute-0 ceph-mon[74327]: mon.compute-0@0(electing) e2  adding peer [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] to list of hints
Dec 06 09:40:12 compute-0 ceph-mgr[74618]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/2949692182; not ready for session (expect reconnect)
Dec 06 09:40:12 compute-0 ceph-mon[74327]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Dec 06 09:40:12 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec 06 09:40:12 compute-0 ceph-mgr[74618]: mgr finish mon failed to return metadata for mon.compute-1: (2) No such file or directory
Dec 06 09:40:13 compute-0 ceph-osd[82803]: log_channel(cluster) log [DBG] : 2.9 scrub starts
Dec 06 09:40:13 compute-0 ceph-osd[82803]: log_channel(cluster) log [DBG] : 2.9 scrub ok
Dec 06 09:40:13 compute-0 ceph-mgr[74618]: mgr.server handle_open ignoring open from mon.compute-2 192.168.122.102:0/3022612511; not ready for session (expect reconnect)
Dec 06 09:40:13 compute-0 ceph-mon[74327]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Dec 06 09:40:13 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Dec 06 09:40:13 compute-0 ceph-mgr[74618]: mgr finish mon failed to return metadata for mon.compute-2: (22) Invalid argument
Dec 06 09:40:13 compute-0 ceph-mgr[74618]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/2949692182; not ready for session (expect reconnect)
Dec 06 09:40:13 compute-0 ceph-mon[74327]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Dec 06 09:40:13 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec 06 09:40:13 compute-0 ceph-mgr[74618]: mgr finish mon failed to return metadata for mon.compute-1: (2) No such file or directory
Dec 06 09:40:14 compute-0 ceph-osd[82803]: log_channel(cluster) log [DBG] : 2.a deep-scrub starts
Dec 06 09:40:14 compute-0 ceph-osd[82803]: log_channel(cluster) log [DBG] : 2.a deep-scrub ok
Dec 06 09:40:14 compute-0 ceph-mgr[74618]: mgr.server handle_open ignoring open from mon.compute-2 192.168.122.102:0/3022612511; not ready for session (expect reconnect)
Dec 06 09:40:14 compute-0 ceph-mon[74327]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Dec 06 09:40:14 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Dec 06 09:40:14 compute-0 ceph-mgr[74618]: mgr finish mon failed to return metadata for mon.compute-2: (22) Invalid argument
Dec 06 09:40:14 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v86: 131 pgs: 1 peering, 31 unknown, 99 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec 06 09:40:14 compute-0 ceph-mon[74327]: mon.compute-0@0(electing) e2  adding peer [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] to list of hints
Dec 06 09:40:14 compute-0 ceph-mgr[74618]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/2949692182; not ready for session (expect reconnect)
Dec 06 09:40:14 compute-0 ceph-mon[74327]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Dec 06 09:40:14 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec 06 09:40:14 compute-0 ceph-mgr[74618]: mgr finish mon failed to return metadata for mon.compute-1: (2) No such file or directory
Dec 06 09:40:15 compute-0 ceph-osd[82803]: log_channel(cluster) log [DBG] : 2.1e scrub starts
Dec 06 09:40:15 compute-0 ceph-osd[82803]: log_channel(cluster) log [DBG] : 2.1e scrub ok
Dec 06 09:40:15 compute-0 ceph-mgr[74618]: mgr.server handle_open ignoring open from mon.compute-2 192.168.122.102:0/3022612511; not ready for session (expect reconnect)
Dec 06 09:40:15 compute-0 ceph-mon[74327]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Dec 06 09:40:15 compute-0 ceph-mgr[74618]: mgr finish mon failed to return metadata for mon.compute-2: (22) Invalid argument
Dec 06 09:40:15 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Dec 06 09:40:15 compute-0 ceph-mon[74327]: paxos.0).electionLogic(7) init, last seen epoch 7, mid-election, bumping
Dec 06 09:40:15 compute-0 ceph-mgr[74618]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/2949692182; not ready for session (expect reconnect)
Dec 06 09:40:15 compute-0 ceph-mon[74327]: mon.compute-0@0(electing) e2 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 06 09:40:15 compute-0 ceph-mon[74327]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Dec 06 09:40:15 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec 06 09:40:15 compute-0 ceph-mgr[74618]: mgr finish mon failed to return metadata for mon.compute-1: (2) No such file or directory
Dec 06 09:40:16 compute-0 ceph-osd[82803]: log_channel(cluster) log [DBG] : 2.2 scrub starts
Dec 06 09:40:16 compute-0 ceph-osd[82803]: log_channel(cluster) log [DBG] : 2.2 scrub ok
Dec 06 09:40:16 compute-0 ceph-mgr[74618]: mgr.server handle_open ignoring open from mon.compute-2 192.168.122.102:0/3022612511; not ready for session (expect reconnect)
Dec 06 09:40:16 compute-0 ceph-mon[74327]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Dec 06 09:40:16 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Dec 06 09:40:16 compute-0 ceph-mgr[74618]: mgr finish mon failed to return metadata for mon.compute-2: (22) Invalid argument
Dec 06 09:40:16 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v87: 131 pgs: 1 peering, 31 unknown, 99 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec 06 09:40:16 compute-0 ceph-mon[74327]: mon.compute-0@0(electing) e2  adding peer [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] to list of hints
Dec 06 09:40:16 compute-0 ceph-mgr[74618]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/2949692182; not ready for session (expect reconnect)
Dec 06 09:40:16 compute-0 ceph-mon[74327]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Dec 06 09:40:16 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec 06 09:40:16 compute-0 ceph-mgr[74618]: mgr finish mon failed to return metadata for mon.compute-1: (2) No such file or directory
Dec 06 09:40:17 compute-0 ceph-osd[82803]: log_channel(cluster) log [DBG] : 2.6 scrub starts
Dec 06 09:40:17 compute-0 ceph-osd[82803]: log_channel(cluster) log [DBG] : 2.6 scrub ok
Dec 06 09:40:17 compute-0 ceph-mgr[74618]: mgr.server handle_open ignoring open from mon.compute-2 192.168.122.102:0/3022612511; not ready for session (expect reconnect)
Dec 06 09:40:17 compute-0 ceph-mon[74327]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Dec 06 09:40:17 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Dec 06 09:40:17 compute-0 ceph-mgr[74618]: mgr finish mon failed to return metadata for mon.compute-2: (22) Invalid argument
Dec 06 09:40:17 compute-0 ceph-mgr[74618]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/2949692182; not ready for session (expect reconnect)
Dec 06 09:40:17 compute-0 ceph-mon[74327]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Dec 06 09:40:17 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec 06 09:40:17 compute-0 ceph-mgr[74618]: mgr finish mon failed to return metadata for mon.compute-1: (2) No such file or directory
Dec 06 09:40:18 compute-0 ceph-osd[82803]: log_channel(cluster) log [DBG] : 2.5 scrub starts
Dec 06 09:40:18 compute-0 ceph-osd[82803]: log_channel(cluster) log [DBG] : 2.5 scrub ok
Dec 06 09:40:18 compute-0 ceph-mgr[74618]: mgr.server handle_open ignoring open from mon.compute-2 192.168.122.102:0/3022612511; not ready for session (expect reconnect)
Dec 06 09:40:18 compute-0 ceph-mon[74327]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Dec 06 09:40:18 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Dec 06 09:40:18 compute-0 ceph-mgr[74618]: mgr finish mon failed to return metadata for mon.compute-2: (22) Invalid argument
Dec 06 09:40:18 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v88: 131 pgs: 1 peering, 31 unknown, 99 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec 06 09:40:18 compute-0 ceph-mon[74327]: mon.compute-0@0(electing) e2  adding peer [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] to list of hints
Dec 06 09:40:18 compute-0 ceph-mgr[74618]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/2949692182; not ready for session (expect reconnect)
Dec 06 09:40:18 compute-0 ceph-mon[74327]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Dec 06 09:40:18 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec 06 09:40:18 compute-0 ceph-mgr[74618]: mgr finish mon failed to return metadata for mon.compute-1: (2) No such file or directory
Dec 06 09:40:18 compute-0 ceph-mon[74327]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0,compute-2 in quorum (ranks 0,1)
Dec 06 09:40:18 compute-0 ceph-mon[74327]: log_channel(cluster) log [DBG] : monmap epoch 2
Dec 06 09:40:18 compute-0 ceph-mon[74327]: log_channel(cluster) log [DBG] : fsid 5ecd3f74-dade-5fc4-92ce-8950ae424258
Dec 06 09:40:18 compute-0 ceph-mon[74327]: log_channel(cluster) log [DBG] : last_changed 2025-12-06T09:40:10.449868+0000
Dec 06 09:40:18 compute-0 ceph-mon[74327]: log_channel(cluster) log [DBG] : created 2025-12-06T09:37:38.663870+0000
Dec 06 09:40:18 compute-0 ceph-mon[74327]: log_channel(cluster) log [DBG] : min_mon_release 19 (squid)
Dec 06 09:40:18 compute-0 ceph-mon[74327]: log_channel(cluster) log [DBG] : election_strategy: 1
Dec 06 09:40:18 compute-0 ceph-mon[74327]: log_channel(cluster) log [DBG] : 0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Dec 06 09:40:18 compute-0 ceph-mon[74327]: log_channel(cluster) log [DBG] : 1: [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] mon.compute-2
Dec 06 09:40:18 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e2 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 06 09:40:18 compute-0 ceph-mon[74327]: log_channel(cluster) log [DBG] : fsmap 
Dec 06 09:40:18 compute-0 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e27: 2 total, 2 up, 2 in
Dec 06 09:40:18 compute-0 ceph-mon[74327]: log_channel(cluster) log [DBG] : mgrmap e9: compute-0.qhdjwa(active, since 2m)
Dec 06 09:40:18 compute-0 ceph-mon[74327]: log_channel(cluster) log [WRN] : Health detail: HEALTH_WARN 3 pool(s) do not have an application enabled
Dec 06 09:40:18 compute-0 ceph-mon[74327]: log_channel(cluster) log [WRN] : [WRN] POOL_APP_NOT_ENABLED: 3 pool(s) do not have an application enabled
Dec 06 09:40:18 compute-0 ceph-mon[74327]: log_channel(cluster) log [WRN] :     application not enabled on pool 'images'
Dec 06 09:40:18 compute-0 ceph-mon[74327]: log_channel(cluster) log [WRN] :     application not enabled on pool 'cephfs.cephfs.meta'
Dec 06 09:40:18 compute-0 ceph-mon[74327]: log_channel(cluster) log [WRN] :     application not enabled on pool 'cephfs.cephfs.data'
Dec 06 09:40:18 compute-0 ceph-mon[74327]: log_channel(cluster) log [WRN] :     use 'ceph osd pool application enable <pool-name> <app-name>', where <app-name> is 'cephfs', 'rbd', 'rgw', or freeform for custom applications.
Dec 06 09:40:19 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e27 do_prune osdmap full prune enabled
Dec 06 09:40:19 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:40:19 compute-0 ceph-osd[82803]: log_channel(cluster) log [DBG] : 2.0 scrub starts
Dec 06 09:40:19 compute-0 ceph-osd[82803]: log_channel(cluster) log [DBG] : 2.0 scrub ok
Dec 06 09:40:19 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec 06 09:40:19 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3524701111' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]': finished
Dec 06 09:40:19 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e28 e28: 2 total, 2 up, 2 in
Dec 06 09:40:19 compute-0 pensive_payne[85901]: enabled application 'rbd' on pool 'images'
Dec 06 09:40:19 compute-0 ceph-mon[74327]: 2.1f scrub starts
Dec 06 09:40:19 compute-0 ceph-mon[74327]: 2.1f scrub ok
Dec 06 09:40:19 compute-0 ceph-mon[74327]: 3.16 scrub starts
Dec 06 09:40:19 compute-0 ceph-mon[74327]: 3.16 scrub ok
Dec 06 09:40:19 compute-0 ceph-mon[74327]: Deploying daemon mon.compute-1 on compute-1
Dec 06 09:40:19 compute-0 ceph-mon[74327]: from='client.? 192.168.122.100:0/3524701111' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]: dispatch
Dec 06 09:40:19 compute-0 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Dec 06 09:40:19 compute-0 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Dec 06 09:40:19 compute-0 ceph-mon[74327]: mon.compute-0 calling monitor election
Dec 06 09:40:19 compute-0 ceph-mon[74327]: pgmap v84: 131 pgs: 1 peering, 62 unknown, 68 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec 06 09:40:19 compute-0 ceph-mon[74327]: 2.8 scrub starts
Dec 06 09:40:19 compute-0 ceph-mon[74327]: 2.8 scrub ok
Dec 06 09:40:19 compute-0 ceph-mon[74327]: 3.14 scrub starts
Dec 06 09:40:19 compute-0 ceph-mon[74327]: 3.14 scrub ok
Dec 06 09:40:19 compute-0 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Dec 06 09:40:19 compute-0 ceph-mon[74327]: 2.1b scrub starts
Dec 06 09:40:19 compute-0 ceph-mon[74327]: 2.1b scrub ok
Dec 06 09:40:19 compute-0 ceph-mon[74327]: 3.15 scrub starts
Dec 06 09:40:19 compute-0 ceph-mon[74327]: 3.15 scrub ok
Dec 06 09:40:19 compute-0 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Dec 06 09:40:19 compute-0 ceph-mon[74327]: pgmap v85: 131 pgs: 31 unknown, 100 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec 06 09:40:19 compute-0 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec 06 09:40:19 compute-0 ceph-mon[74327]: 3.17 scrub starts
Dec 06 09:40:19 compute-0 ceph-mon[74327]: 2.9 scrub starts
Dec 06 09:40:19 compute-0 ceph-mon[74327]: 3.17 scrub ok
Dec 06 09:40:19 compute-0 ceph-mon[74327]: 2.9 scrub ok
Dec 06 09:40:19 compute-0 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Dec 06 09:40:19 compute-0 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec 06 09:40:19 compute-0 ceph-mon[74327]: 3.12 scrub starts
Dec 06 09:40:19 compute-0 ceph-mon[74327]: 3.12 scrub ok
Dec 06 09:40:19 compute-0 ceph-mon[74327]: 2.a deep-scrub starts
Dec 06 09:40:19 compute-0 ceph-mon[74327]: 2.a deep-scrub ok
Dec 06 09:40:19 compute-0 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Dec 06 09:40:19 compute-0 ceph-mon[74327]: pgmap v86: 131 pgs: 1 peering, 31 unknown, 99 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec 06 09:40:19 compute-0 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec 06 09:40:19 compute-0 ceph-mon[74327]: 3.11 deep-scrub starts
Dec 06 09:40:19 compute-0 ceph-mon[74327]: 3.11 deep-scrub ok
Dec 06 09:40:19 compute-0 ceph-mon[74327]: 2.1e scrub starts
Dec 06 09:40:19 compute-0 ceph-mon[74327]: 2.1e scrub ok
Dec 06 09:40:19 compute-0 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Dec 06 09:40:19 compute-0 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec 06 09:40:19 compute-0 ceph-mon[74327]: 3.10 scrub starts
Dec 06 09:40:19 compute-0 ceph-mon[74327]: 3.10 scrub ok
Dec 06 09:40:19 compute-0 ceph-mon[74327]: 2.2 scrub starts
Dec 06 09:40:19 compute-0 ceph-mon[74327]: 2.2 scrub ok
Dec 06 09:40:19 compute-0 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Dec 06 09:40:19 compute-0 ceph-mon[74327]: pgmap v87: 131 pgs: 1 peering, 31 unknown, 99 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec 06 09:40:19 compute-0 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec 06 09:40:19 compute-0 ceph-mon[74327]: 3.13 scrub starts
Dec 06 09:40:19 compute-0 ceph-mon[74327]: 3.13 scrub ok
Dec 06 09:40:19 compute-0 ceph-mon[74327]: 2.6 scrub starts
Dec 06 09:40:19 compute-0 ceph-mon[74327]: 2.6 scrub ok
Dec 06 09:40:19 compute-0 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Dec 06 09:40:19 compute-0 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec 06 09:40:19 compute-0 ceph-mon[74327]: 3.f scrub starts
Dec 06 09:40:19 compute-0 ceph-mon[74327]: 2.5 scrub starts
Dec 06 09:40:19 compute-0 ceph-mon[74327]: 2.5 scrub ok
Dec 06 09:40:19 compute-0 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Dec 06 09:40:19 compute-0 ceph-mon[74327]: pgmap v88: 131 pgs: 1 peering, 31 unknown, 99 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec 06 09:40:19 compute-0 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec 06 09:40:19 compute-0 ceph-mon[74327]: mon.compute-0 is new leader, mons compute-0,compute-2 in quorum (ranks 0,1)
Dec 06 09:40:19 compute-0 ceph-mon[74327]: monmap epoch 2
Dec 06 09:40:19 compute-0 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e28: 2 total, 2 up, 2 in
Dec 06 09:40:19 compute-0 systemd[1]: libpod-0ef8b54d7bdf0135fbfcb826715eadab703918ac7a5ae60fe8d2cb08566ea36e.scope: Deactivated successfully.
Dec 06 09:40:19 compute-0 podman[85885]: 2025-12-06 09:40:19.249657634 +0000 UTC m=+10.714230137 container died 0ef8b54d7bdf0135fbfcb826715eadab703918ac7a5ae60fe8d2cb08566ea36e (image=quay.io/ceph/ceph:v19, name=pensive_payne, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Dec 06 09:40:19 compute-0 ceph-mgr[74618]: mgr.server handle_open ignoring open from mon.compute-2 192.168.122.102:0/3022612511; not ready for session (expect reconnect)
Dec 06 09:40:19 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Dec 06 09:40:19 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Dec 06 09:40:19 compute-0 ceph-mgr[74618]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/2949692182; not ready for session (expect reconnect)
Dec 06 09:40:19 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Dec 06 09:40:19 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec 06 09:40:19 compute-0 ceph-mgr[74618]: mgr finish mon failed to return metadata for mon.compute-1: (2) No such file or directory
Dec 06 09:40:19 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:40:19 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Dec 06 09:40:19 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e28 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 09:40:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-36ccd2f50842bd7928969aa56bc15b6dc8f9512af1bb94519f4816e3b58c66e1-merged.mount: Deactivated successfully.
Dec 06 09:40:19 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:40:19 compute-0 ceph-mgr[74618]: [progress INFO root] complete: finished ev f9d57ea0-0593-4d0f-83be-a56b20ce3d10 (Updating mon deployment (+2 -> 3))
Dec 06 09:40:19 compute-0 ceph-mgr[74618]: [progress INFO root] Completed event f9d57ea0-0593-4d0f-83be-a56b20ce3d10 (Updating mon deployment (+2 -> 3)) in 13 seconds
Dec 06 09:40:19 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Dec 06 09:40:19 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:40:19 compute-0 ceph-mgr[74618]: [progress INFO root] update: starting ev 37639f43-4dce-4807-8a49-da327e3558b8 (Updating mgr deployment (+2 -> 3))
Dec 06 09:40:19 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-2.oazbvn", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0)
Dec 06 09:40:19 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-2.oazbvn", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Dec 06 09:40:19 compute-0 podman[85885]: 2025-12-06 09:40:19.895857248 +0000 UTC m=+11.360429741 container remove 0ef8b54d7bdf0135fbfcb826715eadab703918ac7a5ae60fe8d2cb08566ea36e (image=quay.io/ceph/ceph:v19, name=pensive_payne, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 09:40:19 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-2.oazbvn", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Dec 06 09:40:19 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "mgr services"} v 0)
Dec 06 09:40:19 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "mgr services"}]: dispatch
Dec 06 09:40:19 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 06 09:40:19 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 09:40:19 compute-0 ceph-mgr[74618]: [cephadm INFO cephadm.serve] Deploying daemon mgr.compute-2.oazbvn on compute-2
Dec 06 09:40:19 compute-0 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Deploying daemon mgr.compute-2.oazbvn on compute-2
Dec 06 09:40:19 compute-0 systemd[1]: libpod-conmon-0ef8b54d7bdf0135fbfcb826715eadab703918ac7a5ae60fe8d2cb08566ea36e.scope: Deactivated successfully.
Dec 06 09:40:19 compute-0 sudo[85882]: pam_unix(sudo:session): session closed for user root
Dec 06 09:40:20 compute-0 sudo[85961]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cggdteisvrfuqrtgpypgdhmcsdkbolhz ; /usr/bin/python3'
Dec 06 09:40:20 compute-0 sudo[85961]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:40:20 compute-0 ceph-osd[82803]: log_channel(cluster) log [DBG] : 2.7 deep-scrub starts
Dec 06 09:40:20 compute-0 ceph-osd[82803]: log_channel(cluster) log [DBG] : 2.7 deep-scrub ok
Dec 06 09:40:20 compute-0 python3[85963]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 5ecd3f74-dade-5fc4-92ce-8950ae424258 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable cephfs.cephfs.meta cephfs _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 09:40:20 compute-0 podman[85964]: 2025-12-06 09:40:20.297208294 +0000 UTC m=+0.049379103 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 06 09:40:20 compute-0 podman[85964]: 2025-12-06 09:40:20.446705572 +0000 UTC m=+0.198876401 container create 77436a859446ba22d44418bd103c3ae6bda9e32d3c4ae0a566de509b1f7e6e7f (image=quay.io/ceph/ceph:v19, name=friendly_solomon, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 09:40:20 compute-0 ceph-mon[74327]: mon.compute-2 calling monitor election
Dec 06 09:40:20 compute-0 ceph-mon[74327]: 3.f scrub ok
Dec 06 09:40:20 compute-0 ceph-mon[74327]: 3.e scrub starts
Dec 06 09:40:20 compute-0 ceph-mon[74327]: fsid 5ecd3f74-dade-5fc4-92ce-8950ae424258
Dec 06 09:40:20 compute-0 ceph-mon[74327]: last_changed 2025-12-06T09:40:10.449868+0000
Dec 06 09:40:20 compute-0 ceph-mon[74327]: created 2025-12-06T09:37:38.663870+0000
Dec 06 09:40:20 compute-0 ceph-mon[74327]: min_mon_release 19 (squid)
Dec 06 09:40:20 compute-0 ceph-mon[74327]: election_strategy: 1
Dec 06 09:40:20 compute-0 ceph-mon[74327]: 0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Dec 06 09:40:20 compute-0 ceph-mon[74327]: 1: [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] mon.compute-2
Dec 06 09:40:20 compute-0 ceph-mon[74327]: fsmap 
Dec 06 09:40:20 compute-0 ceph-mon[74327]: osdmap e27: 2 total, 2 up, 2 in
Dec 06 09:40:20 compute-0 ceph-mon[74327]: mgrmap e9: compute-0.qhdjwa(active, since 2m)
Dec 06 09:40:20 compute-0 ceph-mon[74327]: Health detail: HEALTH_WARN 3 pool(s) do not have an application enabled
Dec 06 09:40:20 compute-0 ceph-mon[74327]: [WRN] POOL_APP_NOT_ENABLED: 3 pool(s) do not have an application enabled
Dec 06 09:40:20 compute-0 ceph-mon[74327]:     application not enabled on pool 'images'
Dec 06 09:40:20 compute-0 ceph-mon[74327]:     application not enabled on pool 'cephfs.cephfs.meta'
Dec 06 09:40:20 compute-0 ceph-mon[74327]:     application not enabled on pool 'cephfs.cephfs.data'
Dec 06 09:40:20 compute-0 ceph-mon[74327]:     use 'ceph osd pool application enable <pool-name> <app-name>', where <app-name> is 'cephfs', 'rbd', 'rgw', or freeform for custom applications.
Dec 06 09:40:20 compute-0 ceph-mon[74327]: 3.e scrub ok
Dec 06 09:40:20 compute-0 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:40:20 compute-0 ceph-mon[74327]: 2.0 scrub starts
Dec 06 09:40:20 compute-0 ceph-mon[74327]: 2.0 scrub ok
Dec 06 09:40:20 compute-0 ceph-mon[74327]: from='client.? 192.168.122.100:0/3524701111' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]': finished
Dec 06 09:40:20 compute-0 ceph-mon[74327]: osdmap e28: 2 total, 2 up, 2 in
Dec 06 09:40:20 compute-0 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Dec 06 09:40:20 compute-0 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec 06 09:40:20 compute-0 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:40:20 compute-0 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:40:20 compute-0 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:40:20 compute-0 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-2.oazbvn", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Dec 06 09:40:20 compute-0 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-2.oazbvn", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Dec 06 09:40:20 compute-0 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "mgr services"}]: dispatch
Dec 06 09:40:20 compute-0 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 09:40:20 compute-0 ceph-mon[74327]: 3.d scrub starts
Dec 06 09:40:20 compute-0 ceph-mon[74327]: 3.d scrub ok
Dec 06 09:40:20 compute-0 ceph-mgr[74618]: mgr.server handle_report got status from non-daemon mon.compute-2
Dec 06 09:40:20 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:40:20.454+0000 7f8d54bf6640 -1 mgr.server handle_report got status from non-daemon mon.compute-2
Dec 06 09:40:20 compute-0 systemd[1]: Started libpod-conmon-77436a859446ba22d44418bd103c3ae6bda9e32d3c4ae0a566de509b1f7e6e7f.scope.
Dec 06 09:40:20 compute-0 systemd[1]: Started libcrun container.
Dec 06 09:40:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1fbd7bbef96ba7d16884585bd199c0a8c583e9b1ed13a478a9c55a01936fcda6/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 09:40:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1fbd7bbef96ba7d16884585bd199c0a8c583e9b1ed13a478a9c55a01936fcda6/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 09:40:20 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v90: 131 pgs: 1 peering, 31 unknown, 99 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec 06 09:40:20 compute-0 podman[85964]: 2025-12-06 09:40:20.59374977 +0000 UTC m=+0.345920569 container init 77436a859446ba22d44418bd103c3ae6bda9e32d3c4ae0a566de509b1f7e6e7f (image=quay.io/ceph/ceph:v19, name=friendly_solomon, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 09:40:20 compute-0 podman[85964]: 2025-12-06 09:40:20.600026864 +0000 UTC m=+0.352197673 container start 77436a859446ba22d44418bd103c3ae6bda9e32d3c4ae0a566de509b1f7e6e7f (image=quay.io/ceph/ceph:v19, name=friendly_solomon, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default)
Dec 06 09:40:20 compute-0 podman[85964]: 2025-12-06 09:40:20.604492378 +0000 UTC m=+0.356663167 container attach 77436a859446ba22d44418bd103c3ae6bda9e32d3c4ae0a566de509b1f7e6e7f (image=quay.io/ceph/ceph:v19, name=friendly_solomon, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 09:40:20 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e2  adding peer [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] to list of hints
Dec 06 09:40:20 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).monmap v2 adding/updating compute-1 at [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] to monitor cluster
Dec 06 09:40:20 compute-0 ceph-mgr[74618]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/2949692182; not ready for session (expect reconnect)
Dec 06 09:40:20 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Dec 06 09:40:20 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec 06 09:40:20 compute-0 ceph-mgr[74618]: mgr finish mon failed to return metadata for mon.compute-1: (2) No such file or directory
Dec 06 09:40:20 compute-0 ceph-mon[74327]: mon.compute-0@0(probing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Dec 06 09:40:20 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Dec 06 09:40:20 compute-0 ceph-mon[74327]: mon.compute-0@0(probing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Dec 06 09:40:20 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec 06 09:40:20 compute-0 ceph-mon[74327]: mon.compute-0@0(probing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Dec 06 09:40:20 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Dec 06 09:40:20 compute-0 ceph-mon[74327]: log_channel(cluster) log [INF] : mon.compute-0 calling monitor election
Dec 06 09:40:20 compute-0 ceph-mgr[74618]: mgr finish mon failed to return metadata for mon.compute-1: (22) Invalid argument
Dec 06 09:40:20 compute-0 ceph-mon[74327]: paxos.0).electionLogic(10) init, last seen epoch 10
Dec 06 09:40:20 compute-0 ceph-mon[74327]: mon.compute-0@0(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 06 09:40:21 compute-0 ceph-osd[82803]: log_channel(cluster) log [DBG] : 2.3 scrub starts
Dec 06 09:40:21 compute-0 ceph-osd[82803]: log_channel(cluster) log [DBG] : 2.3 scrub ok
Dec 06 09:40:21 compute-0 ceph-mon[74327]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"} v 0)
Dec 06 09:40:21 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1898003818' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]: dispatch
Dec 06 09:40:21 compute-0 ceph-mgr[74618]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/2949692182; not ready for session (expect reconnect)
Dec 06 09:40:21 compute-0 ceph-mon[74327]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Dec 06 09:40:21 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec 06 09:40:21 compute-0 ceph-mgr[74618]: mgr finish mon failed to return metadata for mon.compute-1: (22) Invalid argument
Dec 06 09:40:21 compute-0 ceph-mon[74327]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Dec 06 09:40:21 compute-0 ceph-mon[74327]: mon.compute-0@0(electing) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec 06 09:40:22 compute-0 ceph-mon[74327]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Dec 06 09:40:22 compute-0 ceph-osd[82803]: log_channel(cluster) log [DBG] : 2.b scrub starts
Dec 06 09:40:22 compute-0 ceph-osd[82803]: log_channel(cluster) log [DBG] : 2.b scrub ok
Dec 06 09:40:22 compute-0 ceph-mon[74327]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Dec 06 09:40:22 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v91: 131 pgs: 1 peering, 31 unknown, 99 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec 06 09:40:22 compute-0 ceph-mgr[74618]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/2949692182; not ready for session (expect reconnect)
Dec 06 09:40:22 compute-0 ceph-mon[74327]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Dec 06 09:40:22 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec 06 09:40:22 compute-0 ceph-mgr[74618]: mgr finish mon failed to return metadata for mon.compute-1: (22) Invalid argument
Dec 06 09:40:23 compute-0 ceph-mgr[74618]: [progress INFO root] Writing back 7 completed events
Dec 06 09:40:23 compute-0 ceph-mon[74327]: mon.compute-0@0(electing) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Dec 06 09:40:23 compute-0 ceph-osd[82803]: log_channel(cluster) log [DBG] : 2.c scrub starts
Dec 06 09:40:23 compute-0 ceph-osd[82803]: log_channel(cluster) log [DBG] : 2.c scrub ok
Dec 06 09:40:23 compute-0 ceph-mon[74327]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Dec 06 09:40:23 compute-0 ceph-mgr[74618]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/2949692182; not ready for session (expect reconnect)
Dec 06 09:40:23 compute-0 ceph-mon[74327]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Dec 06 09:40:23 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec 06 09:40:23 compute-0 ceph-mgr[74618]: mgr finish mon failed to return metadata for mon.compute-1: (22) Invalid argument
Dec 06 09:40:24 compute-0 ceph-osd[82803]: log_channel(cluster) log [DBG] : 2.4 scrub starts
Dec 06 09:40:24 compute-0 ceph-osd[82803]: log_channel(cluster) log [DBG] : 2.4 scrub ok
Dec 06 09:40:24 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v92: 131 pgs: 131 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec 06 09:40:24 compute-0 ceph-mon[74327]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"} v 0)
Dec 06 09:40:24 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec 06 09:40:24 compute-0 ceph-mon[74327]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"} v 0)
Dec 06 09:40:24 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec 06 09:40:24 compute-0 ceph-mon[74327]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"} v 0)
Dec 06 09:40:24 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec 06 09:40:24 compute-0 ceph-mon[74327]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"} v 0)
Dec 06 09:40:24 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec 06 09:40:24 compute-0 ceph-mgr[74618]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/2949692182; not ready for session (expect reconnect)
Dec 06 09:40:24 compute-0 ceph-mon[74327]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Dec 06 09:40:24 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec 06 09:40:24 compute-0 ceph-mgr[74618]: mgr finish mon failed to return metadata for mon.compute-1: (22) Invalid argument
Dec 06 09:40:24 compute-0 ceph-mon[74327]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Dec 06 09:40:25 compute-0 ceph-mon[74327]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Dec 06 09:40:25 compute-0 ceph-osd[82803]: log_channel(cluster) log [DBG] : 2.1 scrub starts
Dec 06 09:40:25 compute-0 ceph-osd[82803]: log_channel(cluster) log [DBG] : 2.1 scrub ok
Dec 06 09:40:25 compute-0 ceph-mon[74327]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Dec 06 09:40:25 compute-0 ceph-mgr[74618]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/2949692182; not ready for session (expect reconnect)
Dec 06 09:40:25 compute-0 ceph-mon[74327]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Dec 06 09:40:25 compute-0 ceph-mgr[74618]: mgr finish mon failed to return metadata for mon.compute-1: (22) Invalid argument
Dec 06 09:40:25 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec 06 09:40:25 compute-0 ceph-mon[74327]: paxos.0).electionLogic(11) init, last seen epoch 11, mid-election, bumping
Dec 06 09:40:25 compute-0 ceph-mon[74327]: mon.compute-0@0(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 06 09:40:25 compute-0 ceph-mon[74327]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Dec 06 09:40:25 compute-0 ceph-mon[74327]: log_channel(cluster) log [DBG] : monmap epoch 3
Dec 06 09:40:25 compute-0 ceph-mon[74327]: log_channel(cluster) log [DBG] : fsid 5ecd3f74-dade-5fc4-92ce-8950ae424258
Dec 06 09:40:25 compute-0 ceph-mon[74327]: log_channel(cluster) log [DBG] : last_changed 2025-12-06T09:40:20.714037+0000
Dec 06 09:40:25 compute-0 ceph-mon[74327]: log_channel(cluster) log [DBG] : created 2025-12-06T09:37:38.663870+0000
Dec 06 09:40:25 compute-0 ceph-mon[74327]: log_channel(cluster) log [DBG] : min_mon_release 19 (squid)
Dec 06 09:40:25 compute-0 ceph-mon[74327]: log_channel(cluster) log [DBG] : election_strategy: 1
Dec 06 09:40:25 compute-0 ceph-mon[74327]: log_channel(cluster) log [DBG] : 0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Dec 06 09:40:25 compute-0 ceph-mon[74327]: log_channel(cluster) log [DBG] : 1: [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] mon.compute-2
Dec 06 09:40:25 compute-0 ceph-mon[74327]: log_channel(cluster) log [DBG] : 2: [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] mon.compute-1
Dec 06 09:40:25 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 06 09:40:25 compute-0 ceph-mon[74327]: log_channel(cluster) log [DBG] : fsmap 
Dec 06 09:40:25 compute-0 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e28: 2 total, 2 up, 2 in
Dec 06 09:40:25 compute-0 ceph-mon[74327]: log_channel(cluster) log [DBG] : mgrmap e9: compute-0.qhdjwa(active, since 2m)
Dec 06 09:40:25 compute-0 ceph-mon[74327]: log_channel(cluster) log [WRN] : Health detail: HEALTH_WARN 3 pool(s) do not have an application enabled
Dec 06 09:40:25 compute-0 ceph-mon[74327]: log_channel(cluster) log [WRN] : [WRN] POOL_APP_NOT_ENABLED: 3 pool(s) do not have an application enabled
Dec 06 09:40:25 compute-0 ceph-mon[74327]: log_channel(cluster) log [WRN] :     application not enabled on pool 'images'
Dec 06 09:40:25 compute-0 ceph-mon[74327]: log_channel(cluster) log [WRN] :     application not enabled on pool 'cephfs.cephfs.meta'
Dec 06 09:40:25 compute-0 ceph-mon[74327]: log_channel(cluster) log [WRN] :     application not enabled on pool 'cephfs.cephfs.data'
Dec 06 09:40:25 compute-0 ceph-mon[74327]: log_channel(cluster) log [WRN] :     use 'ceph osd pool application enable <pool-name> <app-name>', where <app-name> is 'cephfs', 'rbd', 'rgw', or freeform for custom applications.
Dec 06 09:40:26 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e28 do_prune osdmap full prune enabled
Dec 06 09:40:26 compute-0 ceph-mon[74327]: log_channel(cluster) log [WRN] : Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Dec 06 09:40:26 compute-0 ceph-osd[82803]: log_channel(cluster) log [DBG] : 2.e deep-scrub starts
Dec 06 09:40:26 compute-0 ceph-osd[82803]: log_channel(cluster) log [DBG] : 2.e deep-scrub ok
Dec 06 09:40:26 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v93: 131 pgs: 131 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec 06 09:40:26 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"} v 0)
Dec 06 09:40:26 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec 06 09:40:26 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"} v 0)
Dec 06 09:40:26 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec 06 09:40:26 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"} v 0)
Dec 06 09:40:26 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec 06 09:40:26 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"} v 0)
Dec 06 09:40:26 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec 06 09:40:26 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:40:26 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec 06 09:40:26 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1898003818' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]': finished
Dec 06 09:40:26 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 06 09:40:26 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 06 09:40:26 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 06 09:40:26 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 06 09:40:26 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e29 e29: 2 total, 2 up, 2 in
Dec 06 09:40:26 compute-0 friendly_solomon[85980]: enabled application 'cephfs' on pool 'cephfs.cephfs.meta'
Dec 06 09:40:26 compute-0 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Dec 06 09:40:26 compute-0 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec 06 09:40:26 compute-0 ceph-mon[74327]: 3.b deep-scrub starts
Dec 06 09:40:26 compute-0 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Dec 06 09:40:26 compute-0 ceph-mon[74327]: mon.compute-0 calling monitor election
Dec 06 09:40:26 compute-0 ceph-mon[74327]: 3.b deep-scrub ok
Dec 06 09:40:26 compute-0 ceph-mon[74327]: mon.compute-2 calling monitor election
Dec 06 09:40:26 compute-0 ceph-mon[74327]: 2.3 scrub starts
Dec 06 09:40:26 compute-0 ceph-mon[74327]: 2.3 scrub ok
Dec 06 09:40:26 compute-0 ceph-mon[74327]: from='client.? 192.168.122.100:0/1898003818' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]: dispatch
Dec 06 09:40:26 compute-0 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec 06 09:40:26 compute-0 ceph-mon[74327]: 3.7 deep-scrub starts
Dec 06 09:40:26 compute-0 ceph-mon[74327]: 3.7 deep-scrub ok
Dec 06 09:40:26 compute-0 ceph-mon[74327]: 2.b scrub starts
Dec 06 09:40:26 compute-0 ceph-mon[74327]: 2.b scrub ok
Dec 06 09:40:26 compute-0 ceph-mon[74327]: pgmap v91: 131 pgs: 1 peering, 31 unknown, 99 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec 06 09:40:26 compute-0 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec 06 09:40:26 compute-0 ceph-mon[74327]: 3.a deep-scrub starts
Dec 06 09:40:26 compute-0 ceph-mon[74327]: 3.a deep-scrub ok
Dec 06 09:40:26 compute-0 ceph-mon[74327]: 2.c scrub starts
Dec 06 09:40:26 compute-0 ceph-mon[74327]: 2.c scrub ok
Dec 06 09:40:26 compute-0 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec 06 09:40:26 compute-0 ceph-mon[74327]: 3.0 scrub starts
Dec 06 09:40:26 compute-0 ceph-mon[74327]: 3.0 scrub ok
Dec 06 09:40:26 compute-0 ceph-mon[74327]: 2.4 scrub starts
Dec 06 09:40:26 compute-0 ceph-mon[74327]: 2.4 scrub ok
Dec 06 09:40:26 compute-0 ceph-mon[74327]: pgmap v92: 131 pgs: 131 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec 06 09:40:26 compute-0 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec 06 09:40:26 compute-0 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec 06 09:40:26 compute-0 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec 06 09:40:26 compute-0 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec 06 09:40:26 compute-0 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec 06 09:40:26 compute-0 ceph-mon[74327]: 3.c scrub starts
Dec 06 09:40:26 compute-0 ceph-mon[74327]: 3.c scrub ok
Dec 06 09:40:26 compute-0 ceph-mon[74327]: 2.1 scrub starts
Dec 06 09:40:26 compute-0 ceph-mon[74327]: 2.1 scrub ok
Dec 06 09:40:26 compute-0 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec 06 09:40:26 compute-0 ceph-mon[74327]: mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Dec 06 09:40:26 compute-0 ceph-mon[74327]: monmap epoch 3
Dec 06 09:40:26 compute-0 ceph-mon[74327]: fsid 5ecd3f74-dade-5fc4-92ce-8950ae424258
Dec 06 09:40:26 compute-0 ceph-mon[74327]: last_changed 2025-12-06T09:40:20.714037+0000
Dec 06 09:40:26 compute-0 ceph-mon[74327]: created 2025-12-06T09:37:38.663870+0000
Dec 06 09:40:26 compute-0 ceph-mon[74327]: min_mon_release 19 (squid)
Dec 06 09:40:26 compute-0 ceph-mon[74327]: election_strategy: 1
Dec 06 09:40:26 compute-0 ceph-mon[74327]: 0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Dec 06 09:40:26 compute-0 ceph-mon[74327]: 1: [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] mon.compute-2
Dec 06 09:40:26 compute-0 ceph-mon[74327]: 2: [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] mon.compute-1
Dec 06 09:40:26 compute-0 ceph-mon[74327]: fsmap 
Dec 06 09:40:26 compute-0 ceph-mon[74327]: osdmap e28: 2 total, 2 up, 2 in
Dec 06 09:40:26 compute-0 ceph-mon[74327]: mgrmap e9: compute-0.qhdjwa(active, since 2m)
Dec 06 09:40:26 compute-0 ceph-mon[74327]: Health detail: HEALTH_WARN 3 pool(s) do not have an application enabled
Dec 06 09:40:26 compute-0 ceph-mon[74327]: [WRN] POOL_APP_NOT_ENABLED: 3 pool(s) do not have an application enabled
Dec 06 09:40:26 compute-0 ceph-mon[74327]:     application not enabled on pool 'images'
Dec 06 09:40:26 compute-0 ceph-mon[74327]:     application not enabled on pool 'cephfs.cephfs.meta'
Dec 06 09:40:26 compute-0 ceph-mon[74327]:     application not enabled on pool 'cephfs.cephfs.data'
Dec 06 09:40:26 compute-0 ceph-mon[74327]:     use 'ceph osd pool application enable <pool-name> <app-name>', where <app-name> is 'cephfs', 'rbd', 'rgw', or freeform for custom applications.
Dec 06 09:40:26 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:40:26 compute-0 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e29: 2 total, 2 up, 2 in
Dec 06 09:40:26 compute-0 ceph-mgr[74618]: [progress INFO root] Completed event f21b6bbe-67b9-44d9-b566-03cc3ef21868 (Global Recovery Event) in 19 seconds
Dec 06 09:40:26 compute-0 systemd[1]: libpod-77436a859446ba22d44418bd103c3ae6bda9e32d3c4ae0a566de509b1f7e6e7f.scope: Deactivated successfully.
Dec 06 09:40:26 compute-0 podman[85964]: 2025-12-06 09:40:26.589624301 +0000 UTC m=+6.341795100 container died 77436a859446ba22d44418bd103c3ae6bda9e32d3c4ae0a566de509b1f7e6e7f (image=quay.io/ceph/ceph:v19, name=friendly_solomon, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 09:40:26 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 29 pg[5.18( empty local-lis/les=0/0 n=0 ec=27/19 lis/c=27/27 les/c/f=28/28/0 sis=29) [1] r=0 lpr=29 pi=[27,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 09:40:26 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 29 pg[4.18( empty local-lis/les=0/0 n=0 ec=25/17 lis/c=25/25 les/c/f=26/26/0 sis=29) [1] r=0 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 09:40:26 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 29 pg[5.1a( empty local-lis/les=0/0 n=0 ec=27/19 lis/c=27/27 les/c/f=28/28/0 sis=29) [1] r=0 lpr=29 pi=[27,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 09:40:26 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 29 pg[4.1b( empty local-lis/les=0/0 n=0 ec=25/17 lis/c=25/25 les/c/f=26/26/0 sis=29) [1] r=0 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 09:40:26 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 29 pg[3.1c( empty local-lis/les=0/0 n=0 ec=24/15 lis/c=24/24 les/c/f=25/25/0 sis=29) [1] r=0 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 09:40:26 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 29 pg[5.1b( empty local-lis/les=0/0 n=0 ec=27/19 lis/c=27/27 les/c/f=28/28/0 sis=29) [1] r=0 lpr=29 pi=[27,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 09:40:26 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 29 pg[4.1a( empty local-lis/les=0/0 n=0 ec=25/17 lis/c=25/25 les/c/f=26/26/0 sis=29) [1] r=0 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 09:40:26 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 29 pg[3.1d( empty local-lis/les=0/0 n=0 ec=24/15 lis/c=24/24 les/c/f=25/25/0 sis=29) [1] r=0 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 09:40:26 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 29 pg[5.1c( empty local-lis/les=0/0 n=0 ec=27/19 lis/c=27/27 les/c/f=28/28/0 sis=29) [1] r=0 lpr=29 pi=[27,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 09:40:26 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 29 pg[3.1a( empty local-lis/les=0/0 n=0 ec=24/15 lis/c=24/24 les/c/f=25/25/0 sis=29) [1] r=0 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 09:40:26 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 29 pg[4.c( empty local-lis/les=0/0 n=0 ec=25/17 lis/c=25/25 les/c/f=26/26/0 sis=29) [1] r=0 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 09:40:26 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 29 pg[5.e( empty local-lis/les=0/0 n=0 ec=27/19 lis/c=27/27 les/c/f=28/28/0 sis=29) [1] r=0 lpr=29 pi=[27,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 09:40:26 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 29 pg[5.f( empty local-lis/les=0/0 n=0 ec=27/19 lis/c=27/27 les/c/f=28/28/0 sis=29) [1] r=0 lpr=29 pi=[27,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 09:40:26 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 29 pg[4.e( empty local-lis/les=0/0 n=0 ec=25/17 lis/c=25/25 les/c/f=26/26/0 sis=29) [1] r=0 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 09:40:26 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 29 pg[3.9( empty local-lis/les=0/0 n=0 ec=24/15 lis/c=24/24 les/c/f=25/25/0 sis=29) [1] r=0 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 09:40:26 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 29 pg[4.1( empty local-lis/les=0/0 n=0 ec=25/17 lis/c=25/25 les/c/f=26/26/0 sis=29) [1] r=0 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 09:40:26 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 29 pg[5.1( empty local-lis/les=0/0 n=0 ec=27/19 lis/c=27/27 les/c/f=28/28/0 sis=29) [1] r=0 lpr=29 pi=[27,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 09:40:26 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 29 pg[3.5( empty local-lis/les=0/0 n=0 ec=24/15 lis/c=24/24 les/c/f=25/25/0 sis=29) [1] r=0 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 09:40:26 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 29 pg[3.3( empty local-lis/les=0/0 n=0 ec=24/15 lis/c=24/24 les/c/f=25/25/0 sis=29) [1] r=0 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 09:40:26 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 29 pg[5.2( empty local-lis/les=0/0 n=0 ec=27/19 lis/c=27/27 les/c/f=28/28/0 sis=29) [1] r=0 lpr=29 pi=[27,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 09:40:26 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 29 pg[5.7( empty local-lis/les=0/0 n=0 ec=27/19 lis/c=27/27 les/c/f=28/28/0 sis=29) [1] r=0 lpr=29 pi=[27,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 09:40:26 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 29 pg[5.4( empty local-lis/les=0/0 n=0 ec=27/19 lis/c=27/27 les/c/f=28/28/0 sis=29) [1] r=0 lpr=29 pi=[27,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 09:40:26 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 29 pg[4.5( empty local-lis/les=0/0 n=0 ec=25/17 lis/c=25/25 les/c/f=26/26/0 sis=29) [1] r=0 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 09:40:26 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 29 pg[3.a( empty local-lis/les=0/0 n=0 ec=24/15 lis/c=24/24 les/c/f=25/25/0 sis=29) [1] r=0 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 09:40:26 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 29 pg[4.d( empty local-lis/les=0/0 n=0 ec=25/17 lis/c=25/25 les/c/f=26/26/0 sis=29) [1] r=0 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 09:40:26 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 29 pg[4.a( empty local-lis/les=0/0 n=0 ec=25/17 lis/c=25/25 les/c/f=26/26/0 sis=29) [1] r=0 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 09:40:26 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 29 pg[3.d( empty local-lis/les=0/0 n=0 ec=24/15 lis/c=24/24 les/c/f=25/25/0 sis=29) [1] r=0 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 09:40:26 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 29 pg[3.c( empty local-lis/les=0/0 n=0 ec=24/15 lis/c=24/24 les/c/f=25/25/0 sis=29) [1] r=0 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 09:40:26 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 29 pg[4.8( empty local-lis/les=0/0 n=0 ec=25/17 lis/c=25/25 les/c/f=26/26/0 sis=29) [1] r=0 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 09:40:26 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 29 pg[3.f( empty local-lis/les=0/0 n=0 ec=24/15 lis/c=24/24 les/c/f=25/25/0 sis=29) [1] r=0 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 09:40:26 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 29 pg[5.9( empty local-lis/les=0/0 n=0 ec=27/19 lis/c=27/27 les/c/f=28/28/0 sis=29) [1] r=0 lpr=29 pi=[27,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 09:40:26 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 29 pg[3.e( empty local-lis/les=0/0 n=0 ec=24/15 lis/c=24/24 les/c/f=25/25/0 sis=29) [1] r=0 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 09:40:26 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 29 pg[4.9( empty local-lis/les=0/0 n=0 ec=25/17 lis/c=25/25 les/c/f=26/26/0 sis=29) [1] r=0 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 09:40:26 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 29 pg[3.11( empty local-lis/les=0/0 n=0 ec=24/15 lis/c=24/24 les/c/f=25/25/0 sis=29) [1] r=0 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 09:40:26 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 29 pg[5.16( empty local-lis/les=0/0 n=0 ec=27/19 lis/c=27/27 les/c/f=28/28/0 sis=29) [1] r=0 lpr=29 pi=[27,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 09:40:26 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 29 pg[3.10( empty local-lis/les=0/0 n=0 ec=24/15 lis/c=24/24 les/c/f=25/25/0 sis=29) [1] r=0 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 09:40:26 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 29 pg[3.13( empty local-lis/les=0/0 n=0 ec=24/15 lis/c=24/24 les/c/f=25/25/0 sis=29) [1] r=0 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 09:40:26 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 29 pg[5.15( empty local-lis/les=0/0 n=0 ec=27/19 lis/c=27/27 les/c/f=28/28/0 sis=29) [1] r=0 lpr=29 pi=[27,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 09:40:26 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 29 pg[4.15( empty local-lis/les=0/0 n=0 ec=25/17 lis/c=25/25 les/c/f=26/26/0 sis=29) [1] r=0 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 09:40:26 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 29 pg[3.15( empty local-lis/les=0/0 n=0 ec=24/15 lis/c=24/24 les/c/f=25/25/0 sis=29) [1] r=0 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 09:40:26 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 29 pg[4.13( empty local-lis/les=0/0 n=0 ec=25/17 lis/c=25/25 les/c/f=26/26/0 sis=29) [1] r=0 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 09:40:26 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 29 pg[3.14( empty local-lis/les=0/0 n=0 ec=24/15 lis/c=24/24 les/c/f=25/25/0 sis=29) [1] r=0 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 09:40:26 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 29 pg[5.11( empty local-lis/les=0/0 n=0 ec=27/19 lis/c=27/27 les/c/f=28/28/0 sis=29) [1] r=0 lpr=29 pi=[27,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 09:40:26 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 29 pg[5.10( empty local-lis/les=0/0 n=0 ec=27/19 lis/c=27/27 les/c/f=28/28/0 sis=29) [1] r=0 lpr=29 pi=[27,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 09:40:26 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 29 pg[3.16( empty local-lis/les=0/0 n=0 ec=24/15 lis/c=24/24 les/c/f=25/25/0 sis=29) [1] r=0 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 09:40:26 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 29 pg[5.1f( empty local-lis/les=0/0 n=0 ec=27/19 lis/c=27/27 les/c/f=28/28/0 sis=29) [1] r=0 lpr=29 pi=[27,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 09:40:26 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 29 pg[4.1f( empty local-lis/les=0/0 n=0 ec=25/17 lis/c=25/25 les/c/f=26/26/0 sis=29) [1] r=0 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 09:40:26 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 29 pg[2.19( empty local-lis/les=24/25 n=0 ec=24/13 lis/c=24/24 les/c/f=25/25/0 sis=29 pruub=12.552670479s) [0] r=-1 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 active pruub 61.480159760s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 06 09:40:26 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 29 pg[2.19( empty local-lis/les=24/25 n=0 ec=24/13 lis/c=24/24 les/c/f=25/25/0 sis=29 pruub=12.552635193s) [0] r=-1 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 61.480159760s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 09:40:26 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 29 pg[2.15( empty local-lis/les=24/25 n=0 ec=24/13 lis/c=24/24 les/c/f=25/25/0 sis=29 pruub=12.552288055s) [0] r=-1 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 active pruub 61.479991913s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 06 09:40:26 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:40:26 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 29 pg[2.15( empty local-lis/les=24/25 n=0 ec=24/13 lis/c=24/24 les/c/f=25/25/0 sis=29 pruub=12.552268028s) [0] r=-1 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 61.479991913s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 09:40:26 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 29 pg[2.13( empty local-lis/les=24/25 n=0 ec=24/13 lis/c=24/24 les/c/f=25/25/0 sis=29 pruub=12.551779747s) [0] r=-1 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 active pruub 61.480148315s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 06 09:40:26 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 29 pg[2.13( empty local-lis/les=24/25 n=0 ec=24/13 lis/c=24/24 les/c/f=25/25/0 sis=29 pruub=12.551755905s) [0] r=-1 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 61.480148315s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 09:40:26 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 29 pg[2.10( empty local-lis/les=24/25 n=0 ec=24/13 lis/c=24/24 les/c/f=25/25/0 sis=29 pruub=12.551189423s) [0] r=-1 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 active pruub 61.479732513s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 06 09:40:26 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 29 pg[2.10( empty local-lis/les=24/25 n=0 ec=24/13 lis/c=24/24 les/c/f=25/25/0 sis=29 pruub=12.551174164s) [0] r=-1 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 61.479732513s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 09:40:26 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 29 pg[2.d( empty local-lis/les=24/25 n=0 ec=24/13 lis/c=24/24 les/c/f=25/25/0 sis=29 pruub=12.550840378s) [0] r=-1 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 active pruub 61.479709625s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 06 09:40:26 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 29 pg[2.d( empty local-lis/les=24/25 n=0 ec=24/13 lis/c=24/24 les/c/f=25/25/0 sis=29 pruub=12.550812721s) [0] r=-1 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 61.479709625s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 09:40:26 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 29 pg[2.c( empty local-lis/les=24/25 n=0 ec=24/13 lis/c=24/24 les/c/f=25/25/0 sis=29 pruub=12.550696373s) [0] r=-1 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 active pruub 61.479644775s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 06 09:40:26 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 29 pg[2.e( empty local-lis/les=24/25 n=0 ec=24/13 lis/c=24/24 les/c/f=25/25/0 sis=29 pruub=12.550770760s) [0] r=-1 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 active pruub 61.479736328s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 06 09:40:26 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 29 pg[2.c( empty local-lis/les=24/25 n=0 ec=24/13 lis/c=24/24 les/c/f=25/25/0 sis=29 pruub=12.550655365s) [0] r=-1 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 61.479644775s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 09:40:26 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 29 pg[2.e( empty local-lis/les=24/25 n=0 ec=24/13 lis/c=24/24 les/c/f=25/25/0 sis=29 pruub=12.550718307s) [0] r=-1 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 61.479736328s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 09:40:26 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 29 pg[2.1( empty local-lis/les=24/25 n=0 ec=24/13 lis/c=24/24 les/c/f=25/25/0 sis=29 pruub=12.550441742s) [0] r=-1 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 active pruub 61.479530334s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 06 09:40:26 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 29 pg[2.1( empty local-lis/les=24/25 n=0 ec=24/13 lis/c=24/24 les/c/f=25/25/0 sis=29 pruub=12.550426483s) [0] r=-1 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 61.479530334s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 09:40:26 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 29 pg[2.4( empty local-lis/les=24/25 n=0 ec=24/13 lis/c=24/24 les/c/f=25/25/0 sis=29 pruub=12.550407410s) [0] r=-1 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 active pruub 61.479534149s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 06 09:40:26 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 29 pg[2.4( empty local-lis/les=24/25 n=0 ec=24/13 lis/c=24/24 les/c/f=25/25/0 sis=29 pruub=12.550385475s) [0] r=-1 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 61.479534149s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 09:40:26 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 29 pg[2.6( empty local-lis/les=24/25 n=0 ec=24/13 lis/c=24/24 les/c/f=25/25/0 sis=29 pruub=12.550285339s) [0] r=-1 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 active pruub 61.479450226s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 06 09:40:26 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 29 pg[2.6( empty local-lis/les=24/25 n=0 ec=24/13 lis/c=24/24 les/c/f=25/25/0 sis=29 pruub=12.550263405s) [0] r=-1 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 61.479450226s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 09:40:26 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 29 pg[2.9( empty local-lis/les=24/25 n=0 ec=24/13 lis/c=24/24 les/c/f=25/25/0 sis=29 pruub=12.550206184s) [0] r=-1 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 active pruub 61.479431152s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 06 09:40:26 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 29 pg[2.9( empty local-lis/les=24/25 n=0 ec=24/13 lis/c=24/24 les/c/f=25/25/0 sis=29 pruub=12.550192833s) [0] r=-1 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 61.479431152s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 09:40:26 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 29 pg[2.a( empty local-lis/les=24/25 n=0 ec=24/13 lis/c=24/24 les/c/f=25/25/0 sis=29 pruub=12.550135612s) [0] r=-1 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 active pruub 61.479404449s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 06 09:40:26 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 29 pg[2.a( empty local-lis/les=24/25 n=0 ec=24/13 lis/c=24/24 les/c/f=25/25/0 sis=29 pruub=12.550116539s) [0] r=-1 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 61.479404449s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 09:40:26 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 29 pg[2.1f( empty local-lis/les=24/25 n=0 ec=24/13 lis/c=24/24 les/c/f=25/25/0 sis=29 pruub=12.550026894s) [0] r=-1 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 active pruub 61.479393005s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 06 09:40:26 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 29 pg[2.1f( empty local-lis/les=24/25 n=0 ec=24/13 lis/c=24/24 les/c/f=25/25/0 sis=29 pruub=12.550013542s) [0] r=-1 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 61.479393005s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 09:40:26 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 29 pg[2.1e( empty local-lis/les=24/25 n=0 ec=24/13 lis/c=24/24 les/c/f=25/25/0 sis=29 pruub=12.550060272s) [0] r=-1 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 active pruub 61.479457855s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 06 09:40:26 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 29 pg[2.1e( empty local-lis/les=24/25 n=0 ec=24/13 lis/c=24/24 les/c/f=25/25/0 sis=29 pruub=12.550036430s) [0] r=-1 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 61.479457855s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 09:40:26 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 29 pg[2.1b( empty local-lis/les=24/25 n=0 ec=24/13 lis/c=24/24 les/c/f=25/25/0 sis=29 pruub=12.550024033s) [0] r=-1 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 active pruub 61.479457855s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 06 09:40:26 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 29 pg[2.1b( empty local-lis/les=24/25 n=0 ec=24/13 lis/c=24/24 les/c/f=25/25/0 sis=29 pruub=12.550000191s) [0] r=-1 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 61.479457855s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 09:40:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-1fbd7bbef96ba7d16884585bd199c0a8c583e9b1ed13a478a9c55a01936fcda6-merged.mount: Deactivated successfully.
Dec 06 09:40:26 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Dec 06 09:40:26 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:40:26 compute-0 podman[85964]: 2025-12-06 09:40:26.632286054 +0000 UTC m=+6.384456853 container remove 77436a859446ba22d44418bd103c3ae6bda9e32d3c4ae0a566de509b1f7e6e7f (image=quay.io/ceph/ceph:v19, name=friendly_solomon, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 06 09:40:26 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-1.sauzid", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0)
Dec 06 09:40:26 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-1.sauzid", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Dec 06 09:40:26 compute-0 systemd[1]: libpod-conmon-77436a859446ba22d44418bd103c3ae6bda9e32d3c4ae0a566de509b1f7e6e7f.scope: Deactivated successfully.
Dec 06 09:40:26 compute-0 sudo[85961]: pam_unix(sudo:session): session closed for user root
Dec 06 09:40:26 compute-0 ceph-mgr[74618]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/2949692182; not ready for session (expect reconnect)
Dec 06 09:40:26 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Dec 06 09:40:26 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec 06 09:40:26 compute-0 sudo[86040]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lwtlxloqxdxfxphdeaqmwelrrdgvjzjm ; /usr/bin/python3'
Dec 06 09:40:26 compute-0 sudo[86040]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:40:26 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-1.sauzid", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Dec 06 09:40:26 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services"} v 0)
Dec 06 09:40:26 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "mgr services"}]: dispatch
Dec 06 09:40:26 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 06 09:40:26 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 09:40:26 compute-0 ceph-mgr[74618]: [cephadm INFO cephadm.serve] Deploying daemon mgr.compute-1.sauzid on compute-1
Dec 06 09:40:26 compute-0 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Deploying daemon mgr.compute-1.sauzid on compute-1
Dec 06 09:40:26 compute-0 python3[86042]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 5ecd3f74-dade-5fc4-92ce-8950ae424258 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable cephfs.cephfs.data cephfs _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 09:40:27 compute-0 podman[86043]: 2025-12-06 09:40:27.0198014 +0000 UTC m=+0.052717170 container create 34e53637c6a0dec083863a5f82d7989b0cf339a31e74656f5786e4d805ad3a9b (image=quay.io/ceph/ceph:v19, name=wonderful_ride, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 09:40:27 compute-0 systemd[1]: Started libpod-conmon-34e53637c6a0dec083863a5f82d7989b0cf339a31e74656f5786e4d805ad3a9b.scope.
Dec 06 09:40:27 compute-0 podman[86043]: 2025-12-06 09:40:26.998993446 +0000 UTC m=+0.031909246 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 06 09:40:27 compute-0 systemd[1]: Started libcrun container.
Dec 06 09:40:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86701212fc2b0f61d2e1f69c2c47b98f2c6c9a1f9326e5e5273f8ddc75b23396/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 09:40:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86701212fc2b0f61d2e1f69c2c47b98f2c6c9a1f9326e5e5273f8ddc75b23396/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 09:40:27 compute-0 podman[86043]: 2025-12-06 09:40:27.116397033 +0000 UTC m=+0.149312823 container init 34e53637c6a0dec083863a5f82d7989b0cf339a31e74656f5786e4d805ad3a9b (image=quay.io/ceph/ceph:v19, name=wonderful_ride, io.buildah.version=1.40.1, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid)
Dec 06 09:40:27 compute-0 podman[86043]: 2025-12-06 09:40:27.127034238 +0000 UTC m=+0.159950008 container start 34e53637c6a0dec083863a5f82d7989b0cf339a31e74656f5786e4d805ad3a9b (image=quay.io/ceph/ceph:v19, name=wonderful_ride, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0)
Dec 06 09:40:27 compute-0 podman[86043]: 2025-12-06 09:40:27.13016292 +0000 UTC m=+0.163078690 container attach 34e53637c6a0dec083863a5f82d7989b0cf339a31e74656f5786e4d805ad3a9b (image=quay.io/ceph/ceph:v19, name=wonderful_ride, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 09:40:27 compute-0 ceph-osd[82803]: log_channel(cluster) log [DBG] : 2.1a scrub starts
Dec 06 09:40:27 compute-0 ceph-osd[82803]: log_channel(cluster) log [DBG] : 2.1a scrub ok
Dec 06 09:40:27 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"} v 0)
Dec 06 09:40:27 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/21529314' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]: dispatch
Dec 06 09:40:27 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e29 do_prune osdmap full prune enabled
Dec 06 09:40:27 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:40:27.718+0000 7f8d54bf6640 -1 mgr.server handle_report got status from non-daemon mon.compute-1
Dec 06 09:40:27 compute-0 ceph-mgr[74618]: mgr.server handle_report got status from non-daemon mon.compute-1
Dec 06 09:40:27 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 06 09:40:27 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 06 09:40:27 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 06 09:40:27 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 06 09:40:27 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/21529314' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]': finished
Dec 06 09:40:27 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e30 e30: 2 total, 2 up, 2 in
Dec 06 09:40:27 compute-0 wonderful_ride[86060]: enabled application 'cephfs' on pool 'cephfs.cephfs.data'
Dec 06 09:40:27 compute-0 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e30: 2 total, 2 up, 2 in
Dec 06 09:40:27 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 30 pg[4.1f( empty local-lis/les=29/30 n=0 ec=25/17 lis/c=25/25 les/c/f=26/26/0 sis=29) [1] r=0 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 09:40:27 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 30 pg[5.1f( empty local-lis/les=29/30 n=0 ec=27/19 lis/c=27/27 les/c/f=28/28/0 sis=29) [1] r=0 lpr=29 pi=[27,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 09:40:27 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 30 pg[5.10( empty local-lis/les=29/30 n=0 ec=27/19 lis/c=27/27 les/c/f=28/28/0 sis=29) [1] r=0 lpr=29 pi=[27,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 09:40:27 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 30 pg[3.14( empty local-lis/les=29/30 n=0 ec=24/15 lis/c=24/24 les/c/f=25/25/0 sis=29) [1] r=0 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 09:40:27 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 30 pg[4.13( empty local-lis/les=29/30 n=0 ec=25/17 lis/c=25/25 les/c/f=26/26/0 sis=29) [1] r=0 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 09:40:27 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 30 pg[5.11( empty local-lis/les=29/30 n=0 ec=27/19 lis/c=27/27 les/c/f=28/28/0 sis=29) [1] r=0 lpr=29 pi=[27,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 09:40:27 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 30 pg[5.15( empty local-lis/les=29/30 n=0 ec=27/19 lis/c=27/27 les/c/f=28/28/0 sis=29) [1] r=0 lpr=29 pi=[27,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 09:40:27 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 30 pg[3.13( empty local-lis/les=29/30 n=0 ec=24/15 lis/c=24/24 les/c/f=25/25/0 sis=29) [1] r=0 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 09:40:27 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 30 pg[3.10( empty local-lis/les=29/30 n=0 ec=24/15 lis/c=24/24 les/c/f=25/25/0 sis=29) [1] r=0 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 09:40:27 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 30 pg[5.16( empty local-lis/les=29/30 n=0 ec=27/19 lis/c=27/27 les/c/f=28/28/0 sis=29) [1] r=0 lpr=29 pi=[27,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 09:40:27 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 30 pg[4.15( empty local-lis/les=29/30 n=0 ec=25/17 lis/c=25/25 les/c/f=26/26/0 sis=29) [1] r=0 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 09:40:27 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 30 pg[4.9( empty local-lis/les=29/30 n=0 ec=25/17 lis/c=25/25 les/c/f=26/26/0 sis=29) [1] r=0 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 09:40:27 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 30 pg[5.9( empty local-lis/les=29/30 n=0 ec=27/19 lis/c=27/27 les/c/f=28/28/0 sis=29) [1] r=0 lpr=29 pi=[27,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 09:40:27 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 30 pg[3.e( empty local-lis/les=29/30 n=0 ec=24/15 lis/c=24/24 les/c/f=25/25/0 sis=29) [1] r=0 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 09:40:27 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 30 pg[4.8( empty local-lis/les=29/30 n=0 ec=25/17 lis/c=25/25 les/c/f=26/26/0 sis=29) [1] r=0 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 09:40:27 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 30 pg[3.c( empty local-lis/les=29/30 n=0 ec=24/15 lis/c=24/24 les/c/f=25/25/0 sis=29) [1] r=0 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 09:40:27 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 30 pg[3.f( empty local-lis/les=29/30 n=0 ec=24/15 lis/c=24/24 les/c/f=25/25/0 sis=29) [1] r=0 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 09:40:27 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 30 pg[4.a( empty local-lis/les=29/30 n=0 ec=25/17 lis/c=25/25 les/c/f=26/26/0 sis=29) [1] r=0 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 09:40:27 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 30 pg[3.d( empty local-lis/les=29/30 n=0 ec=24/15 lis/c=24/24 les/c/f=25/25/0 sis=29) [1] r=0 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 09:40:27 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 30 pg[4.d( empty local-lis/les=29/30 n=0 ec=25/17 lis/c=25/25 les/c/f=26/26/0 sis=29) [1] r=0 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 09:40:27 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 30 pg[3.a( empty local-lis/les=29/30 n=0 ec=24/15 lis/c=24/24 les/c/f=25/25/0 sis=29) [1] r=0 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 09:40:27 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 30 pg[4.5( empty local-lis/les=29/30 n=0 ec=25/17 lis/c=25/25 les/c/f=26/26/0 sis=29) [1] r=0 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 09:40:27 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 30 pg[5.4( empty local-lis/les=29/30 n=0 ec=27/19 lis/c=27/27 les/c/f=28/28/0 sis=29) [1] r=0 lpr=29 pi=[27,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 09:40:27 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 30 pg[5.7( empty local-lis/les=29/30 n=0 ec=27/19 lis/c=27/27 les/c/f=28/28/0 sis=29) [1] r=0 lpr=29 pi=[27,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 09:40:27 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 30 pg[5.2( empty local-lis/les=29/30 n=0 ec=27/19 lis/c=27/27 les/c/f=28/28/0 sis=29) [1] r=0 lpr=29 pi=[27,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 09:40:27 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 30 pg[3.11( empty local-lis/les=29/30 n=0 ec=24/15 lis/c=24/24 les/c/f=25/25/0 sis=29) [1] r=0 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 09:40:27 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 30 pg[3.3( empty local-lis/les=29/30 n=0 ec=24/15 lis/c=24/24 les/c/f=25/25/0 sis=29) [1] r=0 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 09:40:27 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 30 pg[5.1( empty local-lis/les=29/30 n=0 ec=27/19 lis/c=27/27 les/c/f=28/28/0 sis=29) [1] r=0 lpr=29 pi=[27,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 09:40:27 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 30 pg[3.5( empty local-lis/les=29/30 n=0 ec=24/15 lis/c=24/24 les/c/f=25/25/0 sis=29) [1] r=0 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 09:40:27 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 30 pg[4.e( empty local-lis/les=29/30 n=0 ec=25/17 lis/c=25/25 les/c/f=26/26/0 sis=29) [1] r=0 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 09:40:27 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 30 pg[3.9( empty local-lis/les=29/30 n=0 ec=24/15 lis/c=24/24 les/c/f=25/25/0 sis=29) [1] r=0 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 09:40:27 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 30 pg[4.1( empty local-lis/les=29/30 n=0 ec=25/17 lis/c=25/25 les/c/f=26/26/0 sis=29) [1] r=0 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 09:40:27 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 30 pg[4.c( empty local-lis/les=29/30 n=0 ec=25/17 lis/c=25/25 les/c/f=26/26/0 sis=29) [1] r=0 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 09:40:27 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 30 pg[5.e( empty local-lis/les=29/30 n=0 ec=27/19 lis/c=27/27 les/c/f=28/28/0 sis=29) [1] r=0 lpr=29 pi=[27,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 09:40:27 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 30 pg[3.1a( empty local-lis/les=29/30 n=0 ec=24/15 lis/c=24/24 les/c/f=25/25/0 sis=29) [1] r=0 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 09:40:27 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 30 pg[5.1c( empty local-lis/les=29/30 n=0 ec=27/19 lis/c=27/27 les/c/f=28/28/0 sis=29) [1] r=0 lpr=29 pi=[27,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 09:40:27 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 30 pg[3.16( empty local-lis/les=29/30 n=0 ec=24/15 lis/c=24/24 les/c/f=25/25/0 sis=29) [1] r=0 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 09:40:27 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 30 pg[4.1a( empty local-lis/les=29/30 n=0 ec=25/17 lis/c=25/25 les/c/f=26/26/0 sis=29) [1] r=0 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 09:40:27 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 30 pg[5.1b( empty local-lis/les=29/30 n=0 ec=27/19 lis/c=27/27 les/c/f=28/28/0 sis=29) [1] r=0 lpr=29 pi=[27,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 09:40:27 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 30 pg[4.1b( empty local-lis/les=29/30 n=0 ec=25/17 lis/c=25/25 les/c/f=26/26/0 sis=29) [1] r=0 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 09:40:27 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 30 pg[5.f( empty local-lis/les=29/30 n=0 ec=27/19 lis/c=27/27 les/c/f=28/28/0 sis=29) [1] r=0 lpr=29 pi=[27,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 09:40:27 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 30 pg[3.1c( empty local-lis/les=29/30 n=0 ec=24/15 lis/c=24/24 les/c/f=25/25/0 sis=29) [1] r=0 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 09:40:27 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 30 pg[5.1a( empty local-lis/les=29/30 n=0 ec=27/19 lis/c=27/27 les/c/f=28/28/0 sis=29) [1] r=0 lpr=29 pi=[27,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 09:40:27 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 30 pg[3.1d( empty local-lis/les=29/30 n=0 ec=24/15 lis/c=24/24 les/c/f=25/25/0 sis=29) [1] r=0 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 09:40:27 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 30 pg[5.18( empty local-lis/les=29/30 n=0 ec=27/19 lis/c=27/27 les/c/f=28/28/0 sis=29) [1] r=0 lpr=29 pi=[27,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 09:40:27 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 30 pg[4.18( empty local-lis/les=29/30 n=0 ec=25/17 lis/c=25/25 les/c/f=26/26/0 sis=29) [1] r=0 lpr=29 pi=[25,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 09:40:27 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 30 pg[3.15( empty local-lis/les=29/30 n=0 ec=24/15 lis/c=24/24 les/c/f=25/25/0 sis=29) [1] r=0 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 09:40:27 compute-0 ceph-mon[74327]: mon.compute-1 calling monitor election
Dec 06 09:40:27 compute-0 ceph-mon[74327]: Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Dec 06 09:40:27 compute-0 ceph-mon[74327]: 3.4 scrub starts
Dec 06 09:40:27 compute-0 ceph-mon[74327]: 3.4 scrub ok
Dec 06 09:40:27 compute-0 ceph-mon[74327]: 2.e deep-scrub starts
Dec 06 09:40:27 compute-0 ceph-mon[74327]: 2.e deep-scrub ok
Dec 06 09:40:27 compute-0 ceph-mon[74327]: pgmap v93: 131 pgs: 131 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec 06 09:40:27 compute-0 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec 06 09:40:27 compute-0 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec 06 09:40:27 compute-0 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec 06 09:40:27 compute-0 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec 06 09:40:27 compute-0 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:40:27 compute-0 ceph-mon[74327]: from='client.? 192.168.122.100:0/1898003818' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]': finished
Dec 06 09:40:27 compute-0 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 06 09:40:27 compute-0 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 06 09:40:27 compute-0 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 06 09:40:27 compute-0 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 06 09:40:27 compute-0 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:40:27 compute-0 ceph-mon[74327]: osdmap e29: 2 total, 2 up, 2 in
Dec 06 09:40:27 compute-0 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:40:27 compute-0 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:40:27 compute-0 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-1.sauzid", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Dec 06 09:40:27 compute-0 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec 06 09:40:27 compute-0 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-1.sauzid", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Dec 06 09:40:27 compute-0 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "mgr services"}]: dispatch
Dec 06 09:40:27 compute-0 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 09:40:27 compute-0 ceph-mon[74327]: from='client.? 192.168.122.100:0/21529314' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]: dispatch
Dec 06 09:40:27 compute-0 systemd[1]: libpod-34e53637c6a0dec083863a5f82d7989b0cf339a31e74656f5786e4d805ad3a9b.scope: Deactivated successfully.
Dec 06 09:40:27 compute-0 podman[86043]: 2025-12-06 09:40:27.869567917 +0000 UTC m=+0.902483727 container died 34e53637c6a0dec083863a5f82d7989b0cf339a31e74656f5786e4d805ad3a9b (image=quay.io/ceph/ceph:v19, name=wonderful_ride, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Dec 06 09:40:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-86701212fc2b0f61d2e1f69c2c47b98f2c6c9a1f9326e5e5273f8ddc75b23396-merged.mount: Deactivated successfully.
Dec 06 09:40:27 compute-0 podman[86043]: 2025-12-06 09:40:27.926351948 +0000 UTC m=+0.959267738 container remove 34e53637c6a0dec083863a5f82d7989b0cf339a31e74656f5786e4d805ad3a9b (image=quay.io/ceph/ceph:v19, name=wonderful_ride, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Dec 06 09:40:27 compute-0 systemd[1]: libpod-conmon-34e53637c6a0dec083863a5f82d7989b0cf339a31e74656f5786e4d805ad3a9b.scope: Deactivated successfully.
Dec 06 09:40:27 compute-0 sudo[86040]: pam_unix(sudo:session): session closed for user root
Dec 06 09:40:28 compute-0 ceph-osd[82803]: log_channel(cluster) log [DBG] : 2.18 scrub starts
Dec 06 09:40:28 compute-0 ceph-osd[82803]: log_channel(cluster) log [DBG] : 2.18 scrub ok
Dec 06 09:40:28 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v96: 131 pgs: 47 peering, 84 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec 06 09:40:28 compute-0 ceph-mon[74327]: log_channel(cluster) log [INF] : Health check cleared: POOL_APP_NOT_ENABLED (was: 2 pool(s) do not have an application enabled)
Dec 06 09:40:28 compute-0 ceph-mon[74327]: log_channel(cluster) log [INF] : Cluster is now healthy
Dec 06 09:40:28 compute-0 ceph-mon[74327]: Deploying daemon mgr.compute-1.sauzid on compute-1
Dec 06 09:40:28 compute-0 ceph-mon[74327]: 4.1e scrub starts
Dec 06 09:40:28 compute-0 ceph-mon[74327]: 4.1e scrub ok
Dec 06 09:40:28 compute-0 ceph-mon[74327]: 2.1a scrub starts
Dec 06 09:40:28 compute-0 ceph-mon[74327]: 2.1a scrub ok
Dec 06 09:40:28 compute-0 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 06 09:40:28 compute-0 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 06 09:40:28 compute-0 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 06 09:40:28 compute-0 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 06 09:40:28 compute-0 ceph-mon[74327]: from='client.? 192.168.122.100:0/21529314' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]': finished
Dec 06 09:40:28 compute-0 ceph-mon[74327]: osdmap e30: 2 total, 2 up, 2 in
Dec 06 09:40:28 compute-0 ceph-mon[74327]: 2.18 scrub starts
Dec 06 09:40:28 compute-0 ceph-mon[74327]: 2.18 scrub ok
Dec 06 09:40:28 compute-0 ceph-mon[74327]: pgmap v96: 131 pgs: 47 peering, 84 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec 06 09:40:28 compute-0 python3[86171]: ansible-ansible.legacy.stat Invoked with path=/tmp/ceph_rgw.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 06 09:40:29 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec 06 09:40:29 compute-0 ceph-osd[82803]: log_channel(cluster) log [DBG] : 2.17 scrub starts
Dec 06 09:40:29 compute-0 ceph-osd[82803]: log_channel(cluster) log [DBG] : 2.17 scrub ok
Dec 06 09:40:29 compute-0 python3[86242]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765014028.6305497-37195-124427659970167/source dest=/tmp/ceph_rgw.yml mode=0644 force=True follow=False _original_basename=ceph_rgw.yml.j2 checksum=ad866aa1f51f395809dd7ac5cb7a56d43c167b49 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:40:29 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:40:29 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec 06 09:40:29 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e30 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 09:40:29 compute-0 sudo[86342]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ihcsgpsznpwedcxayrlmeewrcpleaotb ; /usr/bin/python3'
Dec 06 09:40:29 compute-0 sudo[86342]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:40:29 compute-0 python3[86344]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 06 09:40:30 compute-0 sudo[86342]: pam_unix(sudo:session): session closed for user root
Dec 06 09:40:30 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:40:30 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Dec 06 09:40:30 compute-0 ceph-osd[82803]: log_channel(cluster) log [DBG] : 2.16 scrub starts
Dec 06 09:40:30 compute-0 ceph-osd[82803]: log_channel(cluster) log [DBG] : 2.16 scrub ok
Dec 06 09:40:30 compute-0 sudo[86417]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pfnkzurouqkkypyafblwhtlnxnrmlmmj ; /usr/bin/python3'
Dec 06 09:40:30 compute-0 sudo[86417]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:40:30 compute-0 python3[86419]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765014029.686952-37209-84785446435248/source dest=/home/ceph-admin/assimilate_ceph.conf owner=167 group=167 mode=0644 follow=False _original_basename=ceph_rgw.conf.j2 checksum=31fd15111dbd1a80f398078c01d166287a76fc4d backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:40:30 compute-0 sudo[86417]: pam_unix(sudo:session): session closed for user root
Dec 06 09:40:30 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v97: 131 pgs: 47 peering, 84 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec 06 09:40:30 compute-0 sudo[86467]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gyigtxlmzgfyhmwmvjrgpopeidswbweh ; /usr/bin/python3'
Dec 06 09:40:30 compute-0 sudo[86467]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:40:30 compute-0 python3[86469]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 5ecd3f74-dade-5fc4-92ce-8950ae424258 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config assimilate-conf -i /home/assimilate_ceph.conf _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 09:40:30 compute-0 podman[86470]: 2025-12-06 09:40:30.908221852 +0000 UTC m=+0.039591235 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 06 09:40:31 compute-0 ceph-osd[82803]: log_channel(cluster) log [DBG] : 2.14 scrub starts
Dec 06 09:40:31 compute-0 ceph-osd[82803]: log_channel(cluster) log [DBG] : 2.14 scrub ok
Dec 06 09:40:31 compute-0 ceph-mgr[74618]: [progress INFO root] Writing back 8 completed events
Dec 06 09:40:31 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Dec 06 09:40:31 compute-0 podman[86470]: 2025-12-06 09:40:31.764232671 +0000 UTC m=+0.895601994 container create 66fa10c0dd3d5abb0bf4da6fdcce5278050ff4d85bf12ac7ae1f31a11f28b06c (image=quay.io/ceph/ceph:v19, name=interesting_roentgen, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250325, ceph=True, CEPH_REF=squid, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 09:40:32 compute-0 ceph-osd[82803]: log_channel(cluster) log [DBG] : 2.12 deep-scrub starts
Dec 06 09:40:32 compute-0 ceph-mgr[74618]: mgr.server handle_open ignoring open from mgr.compute-2.oazbvn 192.168.122.102:0/242837708; not ready for session (expect reconnect)
Dec 06 09:40:32 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v98: 131 pgs: 131 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec 06 09:40:32 compute-0 systemd[1]: Started libpod-conmon-66fa10c0dd3d5abb0bf4da6fdcce5278050ff4d85bf12ac7ae1f31a11f28b06c.scope.
Dec 06 09:40:32 compute-0 systemd[1]: Started libcrun container.
Dec 06 09:40:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d9025365e368256040817794260774b0fa90f66f568331acc8f5309ffdb81f9/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 09:40:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d9025365e368256040817794260774b0fa90f66f568331acc8f5309ffdb81f9/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec 06 09:40:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d9025365e368256040817794260774b0fa90f66f568331acc8f5309ffdb81f9/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 09:40:32 compute-0 ceph-osd[82803]: log_channel(cluster) log [DBG] : 2.12 deep-scrub ok
Dec 06 09:40:33 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 09:40:33 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 09:40:33 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 09:40:33 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 09:40:33 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 09:40:33 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 09:40:33 compute-0 podman[86470]: 2025-12-06 09:40:33.076515994 +0000 UTC m=+2.207885367 container init 66fa10c0dd3d5abb0bf4da6fdcce5278050ff4d85bf12ac7ae1f31a11f28b06c (image=quay.io/ceph/ceph:v19, name=interesting_roentgen, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Dec 06 09:40:33 compute-0 podman[86470]: 2025-12-06 09:40:33.090109995 +0000 UTC m=+2.221479328 container start 66fa10c0dd3d5abb0bf4da6fdcce5278050ff4d85bf12ac7ae1f31a11f28b06c (image=quay.io/ceph/ceph:v19, name=interesting_roentgen, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 09:40:33 compute-0 ceph-osd[82803]: log_channel(cluster) log [DBG] : 2.11 scrub starts
Dec 06 09:40:33 compute-0 ceph-mgr[74618]: mgr.server handle_open ignoring open from mgr.compute-2.oazbvn 192.168.122.102:0/242837708; not ready for session (expect reconnect)
Dec 06 09:40:33 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config assimilate-conf"} v 0)
Dec 06 09:40:33 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2318794964' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Dec 06 09:40:33 compute-0 podman[86470]: 2025-12-06 09:40:33.650922861 +0000 UTC m=+2.782292204 container attach 66fa10c0dd3d5abb0bf4da6fdcce5278050ff4d85bf12ac7ae1f31a11f28b06c (image=quay.io/ceph/ceph:v19, name=interesting_roentgen, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, ceph=True)
Dec 06 09:40:33 compute-0 ceph-osd[82803]: log_channel(cluster) log [DBG] : 2.11 scrub ok
Dec 06 09:40:33 compute-0 ceph-mon[74327]: log_channel(cluster) log [DBG] : Standby manager daemon compute-2.oazbvn started
Dec 06 09:40:33 compute-0 ceph-mon[74327]: 5.1e scrub starts
Dec 06 09:40:33 compute-0 ceph-mon[74327]: 5.1e scrub ok
Dec 06 09:40:33 compute-0 ceph-mon[74327]: Health check cleared: POOL_APP_NOT_ENABLED (was: 2 pool(s) do not have an application enabled)
Dec 06 09:40:33 compute-0 ceph-mon[74327]: Cluster is now healthy
Dec 06 09:40:33 compute-0 ceph-mon[74327]: 2.17 scrub starts
Dec 06 09:40:33 compute-0 ceph-mon[74327]: 2.17 scrub ok
Dec 06 09:40:33 compute-0 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:40:34 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:40:34 compute-0 ceph-mgr[74618]: [progress INFO root] complete: finished ev 37639f43-4dce-4807-8a49-da327e3558b8 (Updating mgr deployment (+2 -> 3))
Dec 06 09:40:34 compute-0 ceph-mgr[74618]: [progress INFO root] Completed event 37639f43-4dce-4807-8a49-da327e3558b8 (Updating mgr deployment (+2 -> 3)) in 14 seconds
Dec 06 09:40:34 compute-0 ceph-mon[74327]: log_channel(cluster) log [DBG] : mgrmap e10: compute-0.qhdjwa(active, since 2m), standbys: compute-2.oazbvn
Dec 06 09:40:34 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-2.oazbvn", "id": "compute-2.oazbvn"} v 0)
Dec 06 09:40:34 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "mgr metadata", "who": "compute-2.oazbvn", "id": "compute-2.oazbvn"}]: dispatch
Dec 06 09:40:34 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Dec 06 09:40:34 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:40:34 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2318794964' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Dec 06 09:40:34 compute-0 interesting_roentgen[86485]: 
Dec 06 09:40:34 compute-0 interesting_roentgen[86485]: [global]
Dec 06 09:40:34 compute-0 interesting_roentgen[86485]:         fsid = 5ecd3f74-dade-5fc4-92ce-8950ae424258
Dec 06 09:40:34 compute-0 interesting_roentgen[86485]:         mon_host = 192.168.122.100
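The [global] block printed by the container is the residual config that assimilate-conf does not absorb into the mon store: fsid and mon_host stay in the flat file because a client needs them to reach the monitors in the first place. The assimilated options can be confirmed afterwards; a quick check, assuming the same containerized admin CLI as above:

    # Options absorbed from assimilate_ceph.conf should now appear here.
    ceph config dump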
Dec 06 09:40:34 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:40:34 compute-0 ceph-mgr[74618]: [progress INFO root] update: starting ev 5bba6670-7dce-4123-9ffd-b3a9f0458b17 (Updating crash deployment (+1 -> 3))
Dec 06 09:40:34 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.compute-2", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0)
Dec 06 09:40:34 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-2", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Dec 06 09:40:34 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-2", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Dec 06 09:40:34 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 06 09:40:34 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 09:40:34 compute-0 ceph-mgr[74618]: [cephadm INFO cephadm.serve] Deploying daemon crash.compute-2 on compute-2
Dec 06 09:40:34 compute-0 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Deploying daemon crash.compute-2 on compute-2
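For the crash deployment, cephadm first mints a dedicated key for the new daemon (the auth get-or-create for client.crash.compute-2 above), ships a minimal conf (config generate-minimal-conf), and then deploys the unit on the target host. The rollout can be followed from the orchestrator; for example:

    # List crash daemons and their current state across hosts.
    ceph orch ps --daemon-type crash
    # Follow the cephadm log channel live.
    ceph -W cephadm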
Dec 06 09:40:34 compute-0 systemd[1]: libpod-66fa10c0dd3d5abb0bf4da6fdcce5278050ff4d85bf12ac7ae1f31a11f28b06c.scope: Deactivated successfully.
Dec 06 09:40:34 compute-0 podman[86470]: 2025-12-06 09:40:34.206143957 +0000 UTC m=+3.337513290 container died 66fa10c0dd3d5abb0bf4da6fdcce5278050ff4d85bf12ac7ae1f31a11f28b06c (image=quay.io/ceph/ceph:v19, name=interesting_roentgen, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 09:40:34 compute-0 ceph-osd[82803]: log_channel(cluster) log [DBG] : 2.f scrub starts
Dec 06 09:40:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-9d9025365e368256040817794260774b0fa90f66f568331acc8f5309ffdb81f9-merged.mount: Deactivated successfully.
Dec 06 09:40:34 compute-0 ceph-osd[82803]: log_channel(cluster) log [DBG] : 2.f scrub ok
Dec 06 09:40:34 compute-0 podman[86470]: 2025-12-06 09:40:34.251328102 +0000 UTC m=+3.382697395 container remove 66fa10c0dd3d5abb0bf4da6fdcce5278050ff4d85bf12ac7ae1f31a11f28b06c (image=quay.io/ceph/ceph:v19, name=interesting_roentgen, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 06 09:40:34 compute-0 systemd[1]: libpod-conmon-66fa10c0dd3d5abb0bf4da6fdcce5278050ff4d85bf12ac7ae1f31a11f28b06c.scope: Deactivated successfully.
Dec 06 09:40:34 compute-0 sudo[86467]: pam_unix(sudo:session): session closed for user root
Dec 06 09:40:34 compute-0 sudo[86545]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ocdqajujryvpojundbnpvegoytrcynyz ; /usr/bin/python3'
Dec 06 09:40:34 compute-0 sudo[86545]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:40:34 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v99: 131 pgs: 131 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec 06 09:40:34 compute-0 python3[86547]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 5ecd3f74-dade-5fc4-92ce-8950ae424258 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config-key set ssl_option no_sslv2:sslv3:no_tlsv1:no_tlsv1_1 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
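This task stores a colon-separated SSL option string under a bare config-key; config-key is a plain key/value store in the monitors, so consumers read the value back verbatim. A round-trip check, assuming the same admin environment:

    # 'get' returns exactly what 'set' stored.
    ceph config-key get ssl_option
    # expected output: no_sslv2:sslv3:no_tlsv1:no_tlsv1_1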
Dec 06 09:40:34 compute-0 podman[86548]: 2025-12-06 09:40:34.704511677 +0000 UTC m=+0.038582032 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 06 09:40:34 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e30 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 09:40:35 compute-0 ceph-osd[82803]: log_channel(cluster) log [DBG] : 4.1f scrub starts
Dec 06 09:40:35 compute-0 ceph-osd[82803]: log_channel(cluster) log [DBG] : 4.1f scrub ok
Dec 06 09:40:36 compute-0 ceph-osd[82803]: log_channel(cluster) log [DBG] : 5.1f scrub starts
Dec 06 09:40:36 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v100: 131 pgs: 131 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec 06 09:40:36 compute-0 podman[86548]: 2025-12-06 09:40:36.775156534 +0000 UTC m=+2.109226879 container create b0b2c7b280d1e1e8dce7bdfa48ef246e4d021187b83de9e1b7d94e565dcae602 (image=quay.io/ceph/ceph:v19, name=stupefied_black, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1)
Dec 06 09:40:37 compute-0 ceph-osd[82803]: log_channel(cluster) log [DBG] : 5.1f scrub ok
Dec 06 09:40:37 compute-0 ceph-osd[82803]: log_channel(cluster) log [DBG] : 5.10 deep-scrub starts
Dec 06 09:40:37 compute-0 systemd[1]: Started libpod-conmon-b0b2c7b280d1e1e8dce7bdfa48ef246e4d021187b83de9e1b7d94e565dcae602.scope.
Dec 06 09:40:37 compute-0 systemd[1]: Started libcrun container.
Dec 06 09:40:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0196037e9fd97df5b2cf3a74ec04d08922bf680c57cd46d27d833c6035ef6bd7/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 09:40:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0196037e9fd97df5b2cf3a74ec04d08922bf680c57cd46d27d833c6035ef6bd7/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 09:40:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0196037e9fd97df5b2cf3a74ec04d08922bf680c57cd46d27d833c6035ef6bd7/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec 06 09:40:38 compute-0 ceph-osd[82803]: log_channel(cluster) log [DBG] : 5.10 deep-scrub ok
Dec 06 09:40:38 compute-0 ceph-mgr[74618]: mgr.server handle_open ignoring open from mgr.compute-1.sauzid 192.168.122.101:0/1218376604; not ready for session (expect reconnect)
Dec 06 09:40:38 compute-0 ceph-osd[82803]: log_channel(cluster) log [DBG] : 4.13 scrub starts
Dec 06 09:40:38 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v101: 131 pgs: 1 active+clean+scrubbing, 1 active+clean+scrubbing+deep, 129 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec 06 09:40:38 compute-0 ceph-mon[74327]: 4.10 scrub starts
Dec 06 09:40:38 compute-0 ceph-mon[74327]: 4.10 scrub ok
Dec 06 09:40:38 compute-0 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:40:38 compute-0 ceph-mon[74327]: 4.11 scrub starts
Dec 06 09:40:38 compute-0 ceph-mon[74327]: 4.11 scrub ok
Dec 06 09:40:38 compute-0 ceph-mon[74327]: 2.16 scrub starts
Dec 06 09:40:38 compute-0 ceph-mon[74327]: 2.16 scrub ok
Dec 06 09:40:38 compute-0 ceph-mon[74327]: pgmap v97: 131 pgs: 47 peering, 84 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec 06 09:40:38 compute-0 ceph-mon[74327]: 5.13 deep-scrub starts
Dec 06 09:40:38 compute-0 ceph-mon[74327]: 5.13 deep-scrub ok
Dec 06 09:40:38 compute-0 ceph-mon[74327]: 2.14 scrub starts
Dec 06 09:40:38 compute-0 ceph-mon[74327]: 2.14 scrub ok
Dec 06 09:40:38 compute-0 ceph-mon[74327]: 4.12 scrub starts
Dec 06 09:40:38 compute-0 ceph-mon[74327]: 4.12 scrub ok
Dec 06 09:40:38 compute-0 ceph-mon[74327]: 2.12 deep-scrub starts
Dec 06 09:40:38 compute-0 ceph-mon[74327]: pgmap v98: 131 pgs: 131 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec 06 09:40:38 compute-0 ceph-mon[74327]: 2.12 deep-scrub ok
Dec 06 09:40:38 compute-0 ceph-mon[74327]: 5.12 deep-scrub starts
Dec 06 09:40:38 compute-0 ceph-mon[74327]: 5.12 deep-scrub ok
Dec 06 09:40:38 compute-0 ceph-mon[74327]: 2.11 scrub starts
Dec 06 09:40:38 compute-0 ceph-mon[74327]: from='client.? 192.168.122.100:0/2318794964' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Dec 06 09:40:38 compute-0 ceph-mon[74327]: 2.11 scrub ok
Dec 06 09:40:38 compute-0 ceph-mon[74327]: Standby manager daemon compute-2.oazbvn started
Dec 06 09:40:38 compute-0 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:40:38 compute-0 ceph-mon[74327]: mgrmap e10: compute-0.qhdjwa(active, since 2m), standbys: compute-2.oazbvn
Dec 06 09:40:38 compute-0 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "mgr metadata", "who": "compute-2.oazbvn", "id": "compute-2.oazbvn"}]: dispatch
Dec 06 09:40:38 compute-0 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:40:38 compute-0 ceph-mon[74327]: from='client.? 192.168.122.100:0/2318794964' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Dec 06 09:40:38 compute-0 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:40:38 compute-0 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-2", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Dec 06 09:40:38 compute-0 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-2", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Dec 06 09:40:38 compute-0 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 09:40:38 compute-0 ceph-mon[74327]: Deploying daemon crash.compute-2 on compute-2
Dec 06 09:40:38 compute-0 ceph-mon[74327]: 2.f scrub starts
Dec 06 09:40:38 compute-0 ceph-mon[74327]: 2.f scrub ok
Dec 06 09:40:38 compute-0 ceph-mon[74327]: pgmap v99: 131 pgs: 131 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec 06 09:40:39 compute-0 ceph-mgr[74618]: mgr.server handle_open ignoring open from mgr.compute-1.sauzid 192.168.122.101:0/1218376604; not ready for session (expect reconnect)
Dec 06 09:40:39 compute-0 podman[86548]: 2025-12-06 09:40:39.23039136 +0000 UTC m=+4.564461705 container init b0b2c7b280d1e1e8dce7bdfa48ef246e4d021187b83de9e1b7d94e565dcae602 (image=quay.io/ceph/ceph:v19, name=stupefied_black, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1)
Dec 06 09:40:39 compute-0 podman[86548]: 2025-12-06 09:40:39.242037568 +0000 UTC m=+4.576107913 container start b0b2c7b280d1e1e8dce7bdfa48ef246e4d021187b83de9e1b7d94e565dcae602 (image=quay.io/ceph/ceph:v19, name=stupefied_black, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Dec 06 09:40:39 compute-0 ceph-osd[82803]: log_channel(cluster) log [DBG] : 4.13 scrub ok
Dec 06 09:40:39 compute-0 ceph-osd[82803]: log_channel(cluster) log [DBG] : 5.15 scrub starts
Dec 06 09:40:40 compute-0 ceph-mgr[74618]: mgr.server handle_open ignoring open from mgr.compute-1.sauzid 192.168.122.101:0/1218376604; not ready for session (expect reconnect)
Dec 06 09:40:40 compute-0 podman[86548]: 2025-12-06 09:40:40.229815579 +0000 UTC m=+5.563885924 container attach b0b2c7b280d1e1e8dce7bdfa48ef246e4d021187b83de9e1b7d94e565dcae602 (image=quay.io/ceph/ceph:v19, name=stupefied_black, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 09:40:40 compute-0 ceph-mon[74327]: log_channel(cluster) log [DBG] : Standby manager daemon compute-1.sauzid started
Dec 06 09:40:40 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e30 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 09:40:40 compute-0 ceph-osd[82803]: log_channel(cluster) log [DBG] : 5.15 scrub ok
Dec 06 09:40:40 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v102: 131 pgs: 1 active+clean+scrubbing, 1 active+clean+scrubbing+deep, 129 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec 06 09:40:40 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=ssl_option}] v 0)
Dec 06 09:40:40 compute-0 ceph-osd[82803]: log_channel(cluster) log [DBG] : 5.11 scrub starts
Dec 06 09:40:40 compute-0 ceph-osd[82803]: log_channel(cluster) log [DBG] : 5.11 scrub ok
Dec 06 09:40:41 compute-0 ceph-mgr[74618]: mgr.server handle_open ignoring open from mgr.compute-1.sauzid 192.168.122.101:0/1218376604; not ready for session (expect reconnect)
Dec 06 09:40:41 compute-0 ceph-osd[82803]: log_channel(cluster) log [DBG] : 5.16 scrub starts
Dec 06 09:40:41 compute-0 ceph-mon[74327]: 4.14 scrub starts
Dec 06 09:40:41 compute-0 ceph-mon[74327]: 4.14 scrub ok
Dec 06 09:40:41 compute-0 ceph-mon[74327]: 5.14 scrub starts
Dec 06 09:40:41 compute-0 ceph-mon[74327]: 5.14 scrub ok
Dec 06 09:40:41 compute-0 ceph-mon[74327]: 4.1f scrub starts
Dec 06 09:40:41 compute-0 ceph-mon[74327]: 4.1f scrub ok
Dec 06 09:40:41 compute-0 ceph-mon[74327]: 5.17 scrub starts
Dec 06 09:40:41 compute-0 ceph-mon[74327]: 5.17 scrub ok
Dec 06 09:40:41 compute-0 ceph-mon[74327]: 5.1f scrub starts
Dec 06 09:40:41 compute-0 ceph-mon[74327]: pgmap v100: 131 pgs: 131 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec 06 09:40:41 compute-0 ceph-mon[74327]: 4.16 scrub starts
Dec 06 09:40:41 compute-0 ceph-mon[74327]: 4.16 scrub ok
Dec 06 09:40:41 compute-0 ceph-mon[74327]: 5.1f scrub ok
Dec 06 09:40:41 compute-0 ceph-mon[74327]: 5.10 deep-scrub starts
Dec 06 09:40:41 compute-0 ceph-mon[74327]: 5.10 deep-scrub ok
Dec 06 09:40:41 compute-0 ceph-mon[74327]: 4.13 scrub starts
Dec 06 09:40:41 compute-0 ceph-mon[74327]: 4.17 deep-scrub starts
Dec 06 09:40:41 compute-0 ceph-mon[74327]: 4.17 deep-scrub ok
Dec 06 09:40:41 compute-0 ceph-mon[74327]: pgmap v101: 131 pgs: 1 active+clean+scrubbing, 1 active+clean+scrubbing+deep, 129 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec 06 09:40:41 compute-0 ceph-mon[74327]: 5.8 scrub starts
Dec 06 09:40:41 compute-0 ceph-mon[74327]: 5.8 scrub ok
Dec 06 09:40:41 compute-0 ceph-mon[74327]: Standby manager daemon compute-1.sauzid started
Dec 06 09:40:41 compute-0 ceph-osd[82803]: log_channel(cluster) log [DBG] : 5.16 scrub ok
Dec 06 09:40:41 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1940510154' entity='client.admin' 
Dec 06 09:40:41 compute-0 ceph-mon[74327]: log_channel(cluster) log [DBG] : mgrmap e11: compute-0.qhdjwa(active, since 2m), standbys: compute-1.sauzid, compute-2.oazbvn
Dec 06 09:40:41 compute-0 stupefied_black[86563]: set ssl_option
Dec 06 09:40:41 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-1.sauzid", "id": "compute-1.sauzid"} v 0)
Dec 06 09:40:41 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "mgr metadata", "who": "compute-1.sauzid", "id": "compute-1.sauzid"}]: dispatch
Dec 06 09:40:41 compute-0 systemd[1]: libpod-b0b2c7b280d1e1e8dce7bdfa48ef246e4d021187b83de9e1b7d94e565dcae602.scope: Deactivated successfully.
Dec 06 09:40:41 compute-0 podman[86548]: 2025-12-06 09:40:41.956998827 +0000 UTC m=+7.291069172 container died b0b2c7b280d1e1e8dce7bdfa48ef246e4d021187b83de9e1b7d94e565dcae602 (image=quay.io/ceph/ceph:v19, name=stupefied_black, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Dec 06 09:40:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-0196037e9fd97df5b2cf3a74ec04d08922bf680c57cd46d27d833c6035ef6bd7-merged.mount: Deactivated successfully.
Dec 06 09:40:42 compute-0 podman[86548]: 2025-12-06 09:40:42.000005792 +0000 UTC m=+7.334076117 container remove b0b2c7b280d1e1e8dce7bdfa48ef246e4d021187b83de9e1b7d94e565dcae602 (image=quay.io/ceph/ceph:v19, name=stupefied_black, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 09:40:42 compute-0 sudo[86545]: pam_unix(sudo:session): session closed for user root
Dec 06 09:40:42 compute-0 systemd[1]: libpod-conmon-b0b2c7b280d1e1e8dce7bdfa48ef246e4d021187b83de9e1b7d94e565dcae602.scope: Deactivated successfully.
Dec 06 09:40:42 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec 06 09:40:42 compute-0 sudo[86622]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vzjhhrhbbkwjyyvwyemypumhvvsqjigh ; /usr/bin/python3'
Dec 06 09:40:42 compute-0 sudo[86622]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:40:42 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:40:42 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec 06 09:40:42 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:40:42 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0)
Dec 06 09:40:42 compute-0 python3[86624]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 5ecd3f74-dade-5fc4-92ce-8950ae424258 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
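orch apply --in-file hands cephadm a service specification. The spec file itself never appears in this log, but the mgr lines that follow ("Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2" and "Saving service ingress.rgw.default spec with placement count:2") pin down its rough shape. A plausible reconstruction, written as a here-doc so it can be replayed; only the service names and placements are taken from this log, the backend_service field is a hypothetical illustration of how an ingress spec usually names its target:

    cat > /tmp/ceph_rgw.yml <<'EOF'
    service_type: rgw
    service_id: rgw
    placement:
      hosts:
        - compute-0
        - compute-1
        - compute-2
    ---
    service_type: ingress
    service_id: rgw.default
    placement:
      count: 2
    spec:
      backend_service: rgw.rgw   # hypothetical: not shown in this log
    EOF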
Dec 06 09:40:42 compute-0 podman[86625]: 2025-12-06 09:40:42.4402982 +0000 UTC m=+0.042279242 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 06 09:40:42 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v103: 131 pgs: 131 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec 06 09:40:42 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:40:42 compute-0 ceph-mgr[74618]: [progress INFO root] complete: finished ev 5bba6670-7dce-4123-9ffd-b3a9f0458b17 (Updating crash deployment (+1 -> 3))
Dec 06 09:40:42 compute-0 ceph-mgr[74618]: [progress INFO root] Completed event 5bba6670-7dce-4123-9ffd-b3a9f0458b17 (Updating crash deployment (+1 -> 3)) in 9 seconds
Dec 06 09:40:42 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0)
Dec 06 09:40:42 compute-0 ceph-osd[82803]: log_channel(cluster) log [DBG] : 4.9 scrub starts
Dec 06 09:40:42 compute-0 ceph-osd[82803]: log_channel(cluster) log [DBG] : 4.9 scrub ok
Dec 06 09:40:43 compute-0 ceph-osd[82803]: log_channel(cluster) log [DBG] : 4.15 scrub starts
Dec 06 09:40:44 compute-0 podman[86625]: 2025-12-06 09:40:44.105753086 +0000 UTC m=+1.707734098 container create f7efe82b8e25a71e8c6712bb13e42535b4aafd5b7eabda20f064b74e250a1d62 (image=quay.io/ceph/ceph:v19, name=sharp_jackson, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 09:40:44 compute-0 ceph-mgr[74618]: [progress INFO root] Writing back 10 completed events
Dec 06 09:40:44 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Dec 06 09:40:44 compute-0 ceph-osd[82803]: log_channel(cluster) log [DBG] : 4.15 scrub ok
Dec 06 09:40:44 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v104: 131 pgs: 131 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec 06 09:40:44 compute-0 ceph-osd[82803]: log_channel(cluster) log [DBG] : 5.9 scrub starts
Dec 06 09:40:45 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:40:45 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 06 09:40:45 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 09:40:45 compute-0 ceph-osd[82803]: log_channel(cluster) log [DBG] : 5.9 scrub ok
Dec 06 09:40:45 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 06 09:40:45 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 09:40:45 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 06 09:40:45 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 09:40:45 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 06 09:40:45 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 09:40:45 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 06 09:40:45 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 09:40:45 compute-0 systemd[1]: Started libpod-conmon-f7efe82b8e25a71e8c6712bb13e42535b4aafd5b7eabda20f064b74e250a1d62.scope.
Dec 06 09:40:45 compute-0 systemd[1]: Started libcrun container.
Dec 06 09:40:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bb8213fc13aac8e0369b14faca6578236da723c08a33eb5f106bf60431e2da3d/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 09:40:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bb8213fc13aac8e0369b14faca6578236da723c08a33eb5f106bf60431e2da3d/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec 06 09:40:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bb8213fc13aac8e0369b14faca6578236da723c08a33eb5f106bf60431e2da3d/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 09:40:45 compute-0 sudo[86640]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 09:40:45 compute-0 sudo[86640]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:40:45 compute-0 podman[86625]: 2025-12-06 09:40:45.188362302 +0000 UTC m=+2.790343404 container init f7efe82b8e25a71e8c6712bb13e42535b4aafd5b7eabda20f064b74e250a1d62 (image=quay.io/ceph/ceph:v19, name=sharp_jackson, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 09:40:45 compute-0 sudo[86640]: pam_unix(sudo:session): session closed for user root
Dec 06 09:40:45 compute-0 podman[86625]: 2025-12-06 09:40:45.196147815 +0000 UTC m=+2.798128837 container start f7efe82b8e25a71e8c6712bb13e42535b4aafd5b7eabda20f064b74e250a1d62 (image=quay.io/ceph/ceph:v19, name=sharp_jackson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 09:40:45 compute-0 podman[86625]: 2025-12-06 09:40:45.199833775 +0000 UTC m=+2.801814837 container attach f7efe82b8e25a71e8c6712bb13e42535b4aafd5b7eabda20f064b74e250a1d62 (image=quay.io/ceph/ceph:v19, name=sharp_jackson, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Dec 06 09:40:45 compute-0 sudo[86669]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 5ecd3f74-dade-5fc4-92ce-8950ae424258 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Dec 06 09:40:45 compute-0 sudo[86669]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
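Here cephadm has dropped a copy of itself under /var/lib/ceph/<fsid>/ and drives ceph-volume inside a container: "lvm batch --no-auto" prepares the listed devices as OSDs, "--no-systemd" leaves unit management to cephadm itself, and CEPH_VOLUME_OSDSPEC_AFFINITY tags the resulting OSDs with the originating drive group (default_drive_group). The same step can be previewed without touching the LV, assuming direct host access:

    # --report shows what lvm batch would create, without making changes.
    ceph-volume lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --report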
Dec 06 09:40:45 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e30 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 09:40:45 compute-0 ceph-mon[74327]: 4.13 scrub ok
Dec 06 09:40:45 compute-0 ceph-mon[74327]: 5.15 scrub starts
Dec 06 09:40:45 compute-0 ceph-mon[74327]: 5.15 scrub ok
Dec 06 09:40:45 compute-0 ceph-mon[74327]: 5.a scrub starts
Dec 06 09:40:45 compute-0 ceph-mon[74327]: 5.a scrub ok
Dec 06 09:40:45 compute-0 ceph-mon[74327]: pgmap v102: 131 pgs: 1 active+clean+scrubbing, 1 active+clean+scrubbing+deep, 129 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec 06 09:40:45 compute-0 ceph-mon[74327]: 5.11 scrub starts
Dec 06 09:40:45 compute-0 ceph-mon[74327]: 5.11 scrub ok
Dec 06 09:40:45 compute-0 ceph-mon[74327]: 4.b scrub starts
Dec 06 09:40:45 compute-0 ceph-mon[74327]: 4.b scrub ok
Dec 06 09:40:45 compute-0 ceph-mon[74327]: 5.16 scrub starts
Dec 06 09:40:45 compute-0 ceph-mon[74327]: 5.16 scrub ok
Dec 06 09:40:45 compute-0 ceph-mon[74327]: from='client.? 192.168.122.100:0/1940510154' entity='client.admin' 
Dec 06 09:40:45 compute-0 ceph-mon[74327]: mgrmap e11: compute-0.qhdjwa(active, since 2m), standbys: compute-1.sauzid, compute-2.oazbvn
Dec 06 09:40:45 compute-0 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "mgr metadata", "who": "compute-1.sauzid", "id": "compute-1.sauzid"}]: dispatch
Dec 06 09:40:45 compute-0 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:40:45 compute-0 ceph-mon[74327]: 5.d scrub starts
Dec 06 09:40:45 compute-0 ceph-mon[74327]: 5.d scrub ok
Dec 06 09:40:45 compute-0 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:40:45 compute-0 ceph-mon[74327]: pgmap v103: 131 pgs: 131 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec 06 09:40:45 compute-0 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:40:45 compute-0 ceph-mon[74327]: 4.9 scrub starts
Dec 06 09:40:45 compute-0 ceph-mon[74327]: 4.9 scrub ok
Dec 06 09:40:45 compute-0 ceph-mon[74327]: 5.c scrub starts
Dec 06 09:40:45 compute-0 ceph-mon[74327]: 5.c scrub ok
Dec 06 09:40:45 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:40:45 compute-0 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.14241 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 09:40:45 compute-0 ceph-mgr[74618]: [cephadm INFO root] Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Dec 06 09:40:45 compute-0 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Dec 06 09:40:45 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0)
Dec 06 09:40:45 compute-0 podman[86753]: 2025-12-06 09:40:45.604080543 +0000 UTC m=+0.021876470 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 06 09:40:45 compute-0 podman[86753]: 2025-12-06 09:40:45.848370995 +0000 UTC m=+0.266166932 container create a4aa0e0c81fc983452c68dac5012a5f99fd323e434b1839a79e0f061ec729869 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_wiles, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Dec 06 09:40:45 compute-0 ceph-osd[82803]: log_channel(cluster) log [DBG] : 4.8 scrub starts
Dec 06 09:40:45 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:40:45 compute-0 ceph-mgr[74618]: [cephadm INFO root] Saving service ingress.rgw.default spec with placement count:2
Dec 06 09:40:45 compute-0 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Saving service ingress.rgw.default spec with placement count:2
Dec 06 09:40:45 compute-0 ceph-osd[82803]: log_channel(cluster) log [DBG] : 4.8 scrub ok
Dec 06 09:40:45 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.rgw.default}] v 0)
Dec 06 09:40:45 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:40:45 compute-0 sharp_jackson[86643]: Scheduled rgw.rgw update...
Dec 06 09:40:45 compute-0 sharp_jackson[86643]: Scheduled ingress.rgw.default update...
Dec 06 09:40:45 compute-0 systemd[1]: Started libpod-conmon-a4aa0e0c81fc983452c68dac5012a5f99fd323e434b1839a79e0f061ec729869.scope.
Dec 06 09:40:45 compute-0 podman[86625]: 2025-12-06 09:40:45.904136353 +0000 UTC m=+3.506117435 container died f7efe82b8e25a71e8c6712bb13e42535b4aafd5b7eabda20f064b74e250a1d62 (image=quay.io/ceph/ceph:v19, name=sharp_jackson, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True)
Dec 06 09:40:45 compute-0 systemd[1]: Started libcrun container.
Dec 06 09:40:45 compute-0 systemd[1]: libpod-f7efe82b8e25a71e8c6712bb13e42535b4aafd5b7eabda20f064b74e250a1d62.scope: Deactivated successfully.
Dec 06 09:40:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-bb8213fc13aac8e0369b14faca6578236da723c08a33eb5f106bf60431e2da3d-merged.mount: Deactivated successfully.
Dec 06 09:40:45 compute-0 podman[86753]: 2025-12-06 09:40:45.941509855 +0000 UTC m=+0.359305802 container init a4aa0e0c81fc983452c68dac5012a5f99fd323e434b1839a79e0f061ec729869 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_wiles, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec 06 09:40:45 compute-0 podman[86753]: 2025-12-06 09:40:45.947630594 +0000 UTC m=+0.365426541 container start a4aa0e0c81fc983452c68dac5012a5f99fd323e434b1839a79e0f061ec729869 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_wiles, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 09:40:45 compute-0 charming_wiles[86770]: 167 167
Dec 06 09:40:45 compute-0 systemd[1]: libpod-a4aa0e0c81fc983452c68dac5012a5f99fd323e434b1839a79e0f061ec729869.scope: Deactivated successfully.
Dec 06 09:40:45 compute-0 podman[86753]: 2025-12-06 09:40:45.960346345 +0000 UTC m=+0.378142272 container attach a4aa0e0c81fc983452c68dac5012a5f99fd323e434b1839a79e0f061ec729869 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_wiles, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 09:40:45 compute-0 podman[86753]: 2025-12-06 09:40:45.960847402 +0000 UTC m=+0.378643319 container died a4aa0e0c81fc983452c68dac5012a5f99fd323e434b1839a79e0f061ec729869 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_wiles, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Dec 06 09:40:45 compute-0 podman[86625]: 2025-12-06 09:40:45.979674972 +0000 UTC m=+3.581655984 container remove f7efe82b8e25a71e8c6712bb13e42535b4aafd5b7eabda20f064b74e250a1d62 (image=quay.io/ceph/ceph:v19, name=sharp_jackson, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2)
Dec 06 09:40:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-ded9492e258cff351827ceaa600f8636e068d8ba251fd16408ac5b8f0e3c105e-merged.mount: Deactivated successfully.
Dec 06 09:40:46 compute-0 sudo[86622]: pam_unix(sudo:session): session closed for user root
Dec 06 09:40:46 compute-0 podman[86753]: 2025-12-06 09:40:46.006442901 +0000 UTC m=+0.424238818 container remove a4aa0e0c81fc983452c68dac5012a5f99fd323e434b1839a79e0f061ec729869 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_wiles, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 09:40:46 compute-0 systemd[1]: libpod-conmon-a4aa0e0c81fc983452c68dac5012a5f99fd323e434b1839a79e0f061ec729869.scope: Deactivated successfully.
Dec 06 09:40:46 compute-0 systemd[1]: libpod-conmon-f7efe82b8e25a71e8c6712bb13e42535b4aafd5b7eabda20f064b74e250a1d62.scope: Deactivated successfully.
Dec 06 09:40:46 compute-0 podman[86804]: 2025-12-06 09:40:46.153339864 +0000 UTC m=+0.039818342 container create 39da0c439a5d2d2ac162259c3982734778c1a538d43fda4674b4fad4aafae326 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_visvesvaraya, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1)
Dec 06 09:40:46 compute-0 systemd[1]: Started libpod-conmon-39da0c439a5d2d2ac162259c3982734778c1a538d43fda4674b4fad4aafae326.scope.
Dec 06 09:40:46 compute-0 systemd[1]: Started libcrun container.
Dec 06 09:40:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/18f7cb6e5840602599acab0e715b5b85b778037f99ccbed5af4f40315d8d2571/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 09:40:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/18f7cb6e5840602599acab0e715b5b85b778037f99ccbed5af4f40315d8d2571/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 09:40:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/18f7cb6e5840602599acab0e715b5b85b778037f99ccbed5af4f40315d8d2571/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 09:40:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/18f7cb6e5840602599acab0e715b5b85b778037f99ccbed5af4f40315d8d2571/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 09:40:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/18f7cb6e5840602599acab0e715b5b85b778037f99ccbed5af4f40315d8d2571/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 06 09:40:46 compute-0 podman[86804]: 2025-12-06 09:40:46.225455953 +0000 UTC m=+0.111934441 container init 39da0c439a5d2d2ac162259c3982734778c1a538d43fda4674b4fad4aafae326 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_visvesvaraya, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_REF=squid)
Dec 06 09:40:46 compute-0 podman[86804]: 2025-12-06 09:40:46.137325385 +0000 UTC m=+0.023803883 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 06 09:40:46 compute-0 podman[86804]: 2025-12-06 09:40:46.233558675 +0000 UTC m=+0.120037143 container start 39da0c439a5d2d2ac162259c3982734778c1a538d43fda4674b4fad4aafae326 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_visvesvaraya, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 09:40:46 compute-0 podman[86804]: 2025-12-06 09:40:46.236865792 +0000 UTC m=+0.123344270 container attach 39da0c439a5d2d2ac162259c3982734778c1a538d43fda4674b4fad4aafae326 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_visvesvaraya, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 09:40:46 compute-0 python3[86901]: ansible-ansible.legacy.stat Invoked with path=/tmp/ceph_dashboard.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 06 09:40:46 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v105: 131 pgs: 131 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec 06 09:40:46 compute-0 determined_visvesvaraya[86857]: --> passed data devices: 0 physical, 1 LVM
Dec 06 09:40:46 compute-0 determined_visvesvaraya[86857]: --> All data devices are unavailable
Dec 06 09:40:46 compute-0 systemd[1]: libpod-39da0c439a5d2d2ac162259c3982734778c1a538d43fda4674b4fad4aafae326.scope: Deactivated successfully.
Dec 06 09:40:46 compute-0 podman[86804]: 2025-12-06 09:40:46.615628925 +0000 UTC m=+0.502107463 container died 39da0c439a5d2d2ac162259c3982734778c1a538d43fda4674b4fad4aafae326 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_visvesvaraya, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 09:40:46 compute-0 ceph-osd[82803]: log_channel(cluster) log [DBG] : 4.a scrub starts
Dec 06 09:40:46 compute-0 ceph-osd[82803]: log_channel(cluster) log [DBG] : 4.a scrub ok
Dec 06 09:40:47 compute-0 python3[86992]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765014046.16402-37228-257896791076356/source dest=/tmp/ceph_dashboard.yml mode=0644 force=True follow=False _original_basename=ceph_monitoring_stack.yml.j2 checksum=2701faaa92cae31b5bbad92984c27e2af7a44b84 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:40:47 compute-0 ceph-mon[74327]: 4.15 scrub starts
Dec 06 09:40:47 compute-0 ceph-mon[74327]: 5.6 deep-scrub starts
Dec 06 09:40:47 compute-0 ceph-mon[74327]: 5.6 deep-scrub ok
Dec 06 09:40:47 compute-0 ceph-mon[74327]: 4.15 scrub ok
Dec 06 09:40:47 compute-0 ceph-mon[74327]: pgmap v104: 131 pgs: 131 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec 06 09:40:47 compute-0 ceph-mon[74327]: 5.9 scrub starts
Dec 06 09:40:47 compute-0 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:40:47 compute-0 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 09:40:47 compute-0 ceph-mon[74327]: 5.9 scrub ok
Dec 06 09:40:47 compute-0 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 09:40:47 compute-0 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 09:40:47 compute-0 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 09:40:47 compute-0 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 09:40:47 compute-0 ceph-mon[74327]: 4.7 scrub starts
Dec 06 09:40:47 compute-0 ceph-mon[74327]: 4.7 scrub ok
Dec 06 09:40:47 compute-0 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:40:47 compute-0 ceph-mon[74327]: from='client.14241 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 09:40:47 compute-0 ceph-mon[74327]: Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Dec 06 09:40:47 compute-0 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:40:47 compute-0 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:40:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-18f7cb6e5840602599acab0e715b5b85b778037f99ccbed5af4f40315d8d2571-merged.mount: Deactivated successfully.
Dec 06 09:40:47 compute-0 podman[86804]: 2025-12-06 09:40:47.805372455 +0000 UTC m=+1.691850943 container remove 39da0c439a5d2d2ac162259c3982734778c1a538d43fda4674b4fad4aafae326 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_visvesvaraya, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 09:40:47 compute-0 ceph-osd[82803]: log_channel(cluster) log [DBG] : 4.d scrub starts
Dec 06 09:40:47 compute-0 ceph-osd[82803]: log_channel(cluster) log [DBG] : 4.d scrub ok
Dec 06 09:40:47 compute-0 sudo[86669]: pam_unix(sudo:session): session closed for user root
Dec 06 09:40:47 compute-0 systemd[1]: libpod-conmon-39da0c439a5d2d2ac162259c3982734778c1a538d43fda4674b4fad4aafae326.scope: Deactivated successfully.
Dec 06 09:40:47 compute-0 sudo[87018]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 09:40:47 compute-0 sudo[87018]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:40:47 compute-0 sudo[87018]: pam_unix(sudo:session): session closed for user root
Dec 06 09:40:47 compute-0 sudo[87043]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 5ecd3f74-dade-5fc4-92ce-8950ae424258 -- lvm list --format json
Dec 06 09:40:47 compute-0 sudo[87043]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:40:48 compute-0 sudo[87098]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ukdkfrvdjuhgkjwvszfdgozjjuueytth ; /usr/bin/python3'
Dec 06 09:40:48 compute-0 sudo[87098]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:40:48 compute-0 python3[87107]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_dashboard.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 5ecd3f74-dade-5fc4-92ce-8950ae424258 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 09:40:48 compute-0 podman[87130]: 2025-12-06 09:40:48.32637056 +0000 UTC m=+0.042525560 container create af7ca7ba95f4499315792a083e9c62b91e73a1517e2dd1224327248643a0a937 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_germain, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 09:40:48 compute-0 systemd[1]: Started libpod-conmon-af7ca7ba95f4499315792a083e9c62b91e73a1517e2dd1224327248643a0a937.scope.
Dec 06 09:40:48 compute-0 podman[87144]: 2025-12-06 09:40:48.373825399 +0000 UTC m=+0.043776691 container create be6ccb1374972c0bd916e4b904d4c2f8fe4a7e3174f860e3a807b76a807283a0 (image=quay.io/ceph/ceph:v19, name=eager_fermat, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325)
Dec 06 09:40:48 compute-0 systemd[1]: Started libcrun container.
Dec 06 09:40:48 compute-0 systemd[1]: Started libpod-conmon-be6ccb1374972c0bd916e4b904d4c2f8fe4a7e3174f860e3a807b76a807283a0.scope.
Dec 06 09:40:48 compute-0 podman[87130]: 2025-12-06 09:40:48.400739932 +0000 UTC m=+0.116894922 container init af7ca7ba95f4499315792a083e9c62b91e73a1517e2dd1224327248643a0a937 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_germain, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 09:40:48 compute-0 podman[87130]: 2025-12-06 09:40:48.30537589 +0000 UTC m=+0.021530890 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 06 09:40:48 compute-0 podman[87130]: 2025-12-06 09:40:48.406573011 +0000 UTC m=+0.122727981 container start af7ca7ba95f4499315792a083e9c62b91e73a1517e2dd1224327248643a0a937 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_germain, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 06 09:40:48 compute-0 podman[87130]: 2025-12-06 09:40:48.409838677 +0000 UTC m=+0.125993657 container attach af7ca7ba95f4499315792a083e9c62b91e73a1517e2dd1224327248643a0a937 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_germain, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 06 09:40:48 compute-0 serene_germain[87160]: 167 167
Dec 06 09:40:48 compute-0 podman[87130]: 2025-12-06 09:40:48.412060179 +0000 UTC m=+0.128215159 container died af7ca7ba95f4499315792a083e9c62b91e73a1517e2dd1224327248643a0a937 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_germain, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec 06 09:40:48 compute-0 systemd[1]: Started libcrun container.
Dec 06 09:40:48 compute-0 systemd[1]: libpod-af7ca7ba95f4499315792a083e9c62b91e73a1517e2dd1224327248643a0a937.scope: Deactivated successfully.
Dec 06 09:40:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bea536cd35e11a73088c1194886e1086c4d9b5c294098f50cb2411e3b4f72787/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 09:40:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bea536cd35e11a73088c1194886e1086c4d9b5c294098f50cb2411e3b4f72787/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 09:40:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bea536cd35e11a73088c1194886e1086c4d9b5c294098f50cb2411e3b4f72787/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec 06 09:40:48 compute-0 podman[87144]: 2025-12-06 09:40:48.448919524 +0000 UTC m=+0.118870846 container init be6ccb1374972c0bd916e4b904d4c2f8fe4a7e3174f860e3a807b76a807283a0 (image=quay.io/ceph/ceph:v19, name=eager_fermat, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 09:40:48 compute-0 podman[87144]: 2025-12-06 09:40:48.354660407 +0000 UTC m=+0.024611749 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 06 09:40:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-83a01a01761defafc008b21797370d6d33091f1cc36e69bd4875225bf814724f-merged.mount: Deactivated successfully.
Dec 06 09:40:48 compute-0 podman[87144]: 2025-12-06 09:40:48.455931651 +0000 UTC m=+0.125882963 container start be6ccb1374972c0bd916e4b904d4c2f8fe4a7e3174f860e3a807b76a807283a0 (image=quay.io/ceph/ceph:v19, name=eager_fermat, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Dec 06 09:40:48 compute-0 podman[87130]: 2025-12-06 09:40:48.478211944 +0000 UTC m=+0.194366934 container remove af7ca7ba95f4499315792a083e9c62b91e73a1517e2dd1224327248643a0a937 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_germain, CEPH_REF=squid, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 09:40:48 compute-0 systemd[1]: libpod-conmon-af7ca7ba95f4499315792a083e9c62b91e73a1517e2dd1224327248643a0a937.scope: Deactivated successfully.
Dec 06 09:40:48 compute-0 podman[87144]: 2025-12-06 09:40:48.495191544 +0000 UTC m=+0.165142846 container attach be6ccb1374972c0bd916e4b904d4c2f8fe4a7e3174f860e3a807b76a807283a0 (image=quay.io/ceph/ceph:v19, name=eager_fermat, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Dec 06 09:40:48 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v106: 131 pgs: 131 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec 06 09:40:48 compute-0 podman[87209]: 2025-12-06 09:40:48.673670422 +0000 UTC m=+0.045894929 container create 7ba1f5b9a5716b99d0103c0fa224b2a0da9a3a35eb82ddc57d6e085d877cf7d2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_benz, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 06 09:40:48 compute-0 systemd[1]: Started libpod-conmon-7ba1f5b9a5716b99d0103c0fa224b2a0da9a3a35eb82ddc57d6e085d877cf7d2.scope.
Dec 06 09:40:48 compute-0 podman[87209]: 2025-12-06 09:40:48.65512134 +0000 UTC m=+0.027345887 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 06 09:40:48 compute-0 systemd[1]: Started libcrun container.
Dec 06 09:40:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c1fe347a38eedce6b8bbacdcf30afa6d1c2fd53c8c4858bdbb7629b77ff589c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 09:40:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c1fe347a38eedce6b8bbacdcf30afa6d1c2fd53c8c4858bdbb7629b77ff589c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 09:40:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c1fe347a38eedce6b8bbacdcf30afa6d1c2fd53c8c4858bdbb7629b77ff589c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 09:40:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c1fe347a38eedce6b8bbacdcf30afa6d1c2fd53c8c4858bdbb7629b77ff589c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 09:40:48 compute-0 podman[87209]: 2025-12-06 09:40:48.774012156 +0000 UTC m=+0.146236683 container init 7ba1f5b9a5716b99d0103c0fa224b2a0da9a3a35eb82ddc57d6e085d877cf7d2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_benz, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 09:40:48 compute-0 ceph-mon[74327]: 4.8 scrub starts
Dec 06 09:40:48 compute-0 ceph-mon[74327]: Saving service ingress.rgw.default spec with placement count:2
Dec 06 09:40:48 compute-0 ceph-mon[74327]: 4.8 scrub ok
Dec 06 09:40:48 compute-0 ceph-mon[74327]: 5.b scrub starts
Dec 06 09:40:48 compute-0 ceph-mon[74327]: 5.b scrub ok
Dec 06 09:40:48 compute-0 ceph-mon[74327]: pgmap v105: 131 pgs: 131 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec 06 09:40:48 compute-0 ceph-mon[74327]: 4.a scrub starts
Dec 06 09:40:48 compute-0 ceph-mon[74327]: 4.a scrub ok
Dec 06 09:40:48 compute-0 ceph-mon[74327]: 4.0 scrub starts
Dec 06 09:40:48 compute-0 ceph-mon[74327]: 4.0 scrub ok
Dec 06 09:40:48 compute-0 ceph-mon[74327]: 4.d scrub starts
Dec 06 09:40:48 compute-0 ceph-mon[74327]: 4.d scrub ok
Dec 06 09:40:48 compute-0 ceph-mon[74327]: pgmap v106: 131 pgs: 131 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec 06 09:40:48 compute-0 podman[87209]: 2025-12-06 09:40:48.78092641 +0000 UTC m=+0.153150907 container start 7ba1f5b9a5716b99d0103c0fa224b2a0da9a3a35eb82ddc57d6e085d877cf7d2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_benz, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 06 09:40:48 compute-0 podman[87209]: 2025-12-06 09:40:48.785342114 +0000 UTC m=+0.157566651 container attach 7ba1f5b9a5716b99d0103c0fa224b2a0da9a3a35eb82ddc57d6e085d877cf7d2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_benz, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 09:40:48 compute-0 ceph-osd[82803]: log_channel(cluster) log [DBG] : 4.5 scrub starts
Dec 06 09:40:48 compute-0 ceph-osd[82803]: log_channel(cluster) log [DBG] : 4.5 scrub ok
Dec 06 09:40:48 compute-0 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.14247 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 09:40:48 compute-0 ceph-mgr[74618]: [cephadm INFO root] Saving service node-exporter spec with placement *
Dec 06 09:40:48 compute-0 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Saving service node-exporter spec with placement *
Dec 06 09:40:48 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.node-exporter}] v 0)
Dec 06 09:40:48 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:40:48 compute-0 ceph-mgr[74618]: [cephadm INFO root] Saving service grafana spec with placement compute-0;count:1
Dec 06 09:40:48 compute-0 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Saving service grafana spec with placement compute-0;count:1
Dec 06 09:40:48 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.grafana}] v 0)
Dec 06 09:40:48 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:40:48 compute-0 ceph-mgr[74618]: [cephadm INFO root] Saving service prometheus spec with placement compute-0;count:1
Dec 06 09:40:48 compute-0 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Saving service prometheus spec with placement compute-0;count:1
Dec 06 09:40:48 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.prometheus}] v 0)
Dec 06 09:40:48 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:40:48 compute-0 ceph-mgr[74618]: [cephadm INFO root] Saving service alertmanager spec with placement compute-0;count:1
Dec 06 09:40:48 compute-0 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Saving service alertmanager spec with placement compute-0;count:1
Dec 06 09:40:48 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.alertmanager}] v 0)
Dec 06 09:40:48 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:40:48 compute-0 eager_fermat[87165]: Scheduled node-exporter update...
Dec 06 09:40:48 compute-0 eager_fermat[87165]: Scheduled grafana update...
Dec 06 09:40:48 compute-0 eager_fermat[87165]: Scheduled prometheus update...
Dec 06 09:40:48 compute-0 eager_fermat[87165]: Scheduled alertmanager update...
Dec 06 09:40:48 compute-0 systemd[1]: libpod-be6ccb1374972c0bd916e4b904d4c2f8fe4a7e3174f860e3a807b76a807283a0.scope: Deactivated successfully.
Dec 06 09:40:48 compute-0 podman[87144]: 2025-12-06 09:40:48.985838255 +0000 UTC m=+0.655789577 container died be6ccb1374972c0bd916e4b904d4c2f8fe4a7e3174f860e3a807b76a807283a0 (image=quay.io/ceph/ceph:v19, name=eager_fermat, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 09:40:49 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd new", "uuid": "b46cc65b-25ba-490a-8b8e-91e4407f3aed"} v 0)
Dec 06 09:40:49 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "b46cc65b-25ba-490a-8b8e-91e4407f3aed"}]: dispatch
Dec 06 09:40:49 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e30 do_prune osdmap full prune enabled
Dec 06 09:40:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-bea536cd35e11a73088c1194886e1086c4d9b5c294098f50cb2411e3b4f72787-merged.mount: Deactivated successfully.
Dec 06 09:40:49 compute-0 podman[87144]: 2025-12-06 09:40:49.023716503 +0000 UTC m=+0.693667795 container remove be6ccb1374972c0bd916e4b904d4c2f8fe4a7e3174f860e3a807b76a807283a0 (image=quay.io/ceph/ceph:v19, name=eager_fermat, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 06 09:40:49 compute-0 systemd[1]: libpod-conmon-be6ccb1374972c0bd916e4b904d4c2f8fe4a7e3174f860e3a807b76a807283a0.scope: Deactivated successfully.
Dec 06 09:40:49 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "b46cc65b-25ba-490a-8b8e-91e4407f3aed"}]': finished
Dec 06 09:40:49 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e31 e31: 3 total, 2 up, 3 in
Dec 06 09:40:49 compute-0 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e31: 3 total, 2 up, 3 in
Dec 06 09:40:49 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec 06 09:40:49 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 06 09:40:49 compute-0 ceph-mgr[74618]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec 06 09:40:49 compute-0 sudo[87098]: pam_unix(sudo:session): session closed for user root
Dec 06 09:40:49 compute-0 charming_benz[87226]: {
Dec 06 09:40:49 compute-0 charming_benz[87226]:     "1": [
Dec 06 09:40:49 compute-0 charming_benz[87226]:         {
Dec 06 09:40:49 compute-0 charming_benz[87226]:             "devices": [
Dec 06 09:40:49 compute-0 charming_benz[87226]:                 "/dev/loop3"
Dec 06 09:40:49 compute-0 charming_benz[87226]:             ],
Dec 06 09:40:49 compute-0 charming_benz[87226]:             "lv_name": "ceph_lv0",
Dec 06 09:40:49 compute-0 charming_benz[87226]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 09:40:49 compute-0 charming_benz[87226]:             "lv_size": "21470642176",
Dec 06 09:40:49 compute-0 charming_benz[87226]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=a55pcS-v3Vt-CJ4W-fqCV-Baze-9VlY-jG9prS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=5ecd3f74-dade-5fc4-92ce-8950ae424258,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=7899c4d8-edb4-4836-b838-c4aa702ad7af,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 06 09:40:49 compute-0 charming_benz[87226]:             "lv_uuid": "a55pcS-v3Vt-CJ4W-fqCV-Baze-9VlY-jG9prS",
Dec 06 09:40:49 compute-0 charming_benz[87226]:             "name": "ceph_lv0",
Dec 06 09:40:49 compute-0 charming_benz[87226]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 09:40:49 compute-0 charming_benz[87226]:             "tags": {
Dec 06 09:40:49 compute-0 charming_benz[87226]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 06 09:40:49 compute-0 charming_benz[87226]:                 "ceph.block_uuid": "a55pcS-v3Vt-CJ4W-fqCV-Baze-9VlY-jG9prS",
Dec 06 09:40:49 compute-0 charming_benz[87226]:                 "ceph.cephx_lockbox_secret": "",
Dec 06 09:40:49 compute-0 charming_benz[87226]:                 "ceph.cluster_fsid": "5ecd3f74-dade-5fc4-92ce-8950ae424258",
Dec 06 09:40:49 compute-0 charming_benz[87226]:                 "ceph.cluster_name": "ceph",
Dec 06 09:40:49 compute-0 charming_benz[87226]:                 "ceph.crush_device_class": "",
Dec 06 09:40:49 compute-0 charming_benz[87226]:                 "ceph.encrypted": "0",
Dec 06 09:40:49 compute-0 charming_benz[87226]:                 "ceph.osd_fsid": "7899c4d8-edb4-4836-b838-c4aa702ad7af",
Dec 06 09:40:49 compute-0 charming_benz[87226]:                 "ceph.osd_id": "1",
Dec 06 09:40:49 compute-0 charming_benz[87226]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 06 09:40:49 compute-0 charming_benz[87226]:                 "ceph.type": "block",
Dec 06 09:40:49 compute-0 charming_benz[87226]:                 "ceph.vdo": "0",
Dec 06 09:40:49 compute-0 charming_benz[87226]:                 "ceph.with_tpm": "0"
Dec 06 09:40:49 compute-0 charming_benz[87226]:             },
Dec 06 09:40:49 compute-0 charming_benz[87226]:             "type": "block",
Dec 06 09:40:49 compute-0 charming_benz[87226]:             "vg_name": "ceph_vg0"
Dec 06 09:40:49 compute-0 charming_benz[87226]:         }
Dec 06 09:40:49 compute-0 charming_benz[87226]:     ]
Dec 06 09:40:49 compute-0 charming_benz[87226]: }
Dec 06 09:40:49 compute-0 systemd[1]: libpod-7ba1f5b9a5716b99d0103c0fa224b2a0da9a3a35eb82ddc57d6e085d877cf7d2.scope: Deactivated successfully.
Dec 06 09:40:49 compute-0 conmon[87226]: conmon 7ba1f5b9a5716b99d010 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-7ba1f5b9a5716b99d0103c0fa224b2a0da9a3a35eb82ddc57d6e085d877cf7d2.scope/container/memory.events
Dec 06 09:40:49 compute-0 podman[87209]: 2025-12-06 09:40:49.085325171 +0000 UTC m=+0.457549678 container died 7ba1f5b9a5716b99d0103c0fa224b2a0da9a3a35eb82ddc57d6e085d877cf7d2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_benz, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Dec 06 09:40:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-0c1fe347a38eedce6b8bbacdcf30afa6d1c2fd53c8c4858bdbb7629b77ff589c-merged.mount: Deactivated successfully.
Dec 06 09:40:49 compute-0 podman[87209]: 2025-12-06 09:40:49.137245735 +0000 UTC m=+0.509470232 container remove 7ba1f5b9a5716b99d0103c0fa224b2a0da9a3a35eb82ddc57d6e085d877cf7d2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_benz, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 09:40:49 compute-0 systemd[1]: libpod-conmon-7ba1f5b9a5716b99d0103c0fa224b2a0da9a3a35eb82ddc57d6e085d877cf7d2.scope: Deactivated successfully.
Dec 06 09:40:49 compute-0 sudo[87043]: pam_unix(sudo:session): session closed for user root
Dec 06 09:40:49 compute-0 sudo[87261]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 09:40:49 compute-0 sudo[87261]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:40:49 compute-0 sudo[87261]: pam_unix(sudo:session): session closed for user root
Dec 06 09:40:49 compute-0 sudo[87286]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 5ecd3f74-dade-5fc4-92ce-8950ae424258 -- raw list --format json
Dec 06 09:40:49 compute-0 sudo[87286]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:40:49 compute-0 sudo[87334]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zzkvrpubdqqzggurbigcbtnsucraceqq ; /usr/bin/python3'
Dec 06 09:40:49 compute-0 sudo[87334]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:40:49 compute-0 python3[87336]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 5ecd3f74-dade-5fc4-92ce-8950ae424258 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set mgr mgr/dashboard/server_port 8443 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 09:40:49 compute-0 podman[87361]: 2025-12-06 09:40:49.66194871 +0000 UTC m=+0.048436391 container create c9fec6490bd962c8ab6c887a2facd978b7656dc8393cf65edaf6972eec4dc9c6 (image=quay.io/ceph/ceph:v19, name=nostalgic_bouman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0)
Dec 06 09:40:49 compute-0 systemd[1]: Started libpod-conmon-c9fec6490bd962c8ab6c887a2facd978b7656dc8393cf65edaf6972eec4dc9c6.scope.
Dec 06 09:40:49 compute-0 podman[87388]: 2025-12-06 09:40:49.731140614 +0000 UTC m=+0.041861749 container create 54afaa7b56120399f3dc7611ccd07475a1dd9a74536020d48a49e61a2622329f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_diffie, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Dec 06 09:40:49 compute-0 podman[87361]: 2025-12-06 09:40:49.637379403 +0000 UTC m=+0.023867114 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 06 09:40:49 compute-0 systemd[1]: Started libcrun container.
Dec 06 09:40:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/85ae777c3574167fc76f03953c408c010e8dac3f6c1cf37d0b61d031bb73e337/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec 06 09:40:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/85ae777c3574167fc76f03953c408c010e8dac3f6c1cf37d0b61d031bb73e337/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 09:40:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/85ae777c3574167fc76f03953c408c010e8dac3f6c1cf37d0b61d031bb73e337/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 09:40:49 compute-0 systemd[1]: Started libpod-conmon-54afaa7b56120399f3dc7611ccd07475a1dd9a74536020d48a49e61a2622329f.scope.
Dec 06 09:40:49 compute-0 podman[87361]: 2025-12-06 09:40:49.754820701 +0000 UTC m=+0.141308482 container init c9fec6490bd962c8ab6c887a2facd978b7656dc8393cf65edaf6972eec4dc9c6 (image=quay.io/ceph/ceph:v19, name=nostalgic_bouman, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 06 09:40:49 compute-0 podman[87361]: 2025-12-06 09:40:49.765100165 +0000 UTC m=+0.151587836 container start c9fec6490bd962c8ab6c887a2facd978b7656dc8393cf65edaf6972eec4dc9c6 (image=quay.io/ceph/ceph:v19, name=nostalgic_bouman, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec 06 09:40:49 compute-0 podman[87361]: 2025-12-06 09:40:49.768371131 +0000 UTC m=+0.154858892 container attach c9fec6490bd962c8ab6c887a2facd978b7656dc8393cf65edaf6972eec4dc9c6 (image=quay.io/ceph/ceph:v19, name=nostalgic_bouman, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 09:40:49 compute-0 systemd[1]: Started libcrun container.
Dec 06 09:40:49 compute-0 podman[87388]: 2025-12-06 09:40:49.80133232 +0000 UTC m=+0.112053455 container init 54afaa7b56120399f3dc7611ccd07475a1dd9a74536020d48a49e61a2622329f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_diffie, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 09:40:49 compute-0 podman[87388]: 2025-12-06 09:40:49.808323956 +0000 UTC m=+0.119045101 container start 54afaa7b56120399f3dc7611ccd07475a1dd9a74536020d48a49e61a2622329f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_diffie, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 09:40:49 compute-0 podman[87388]: 2025-12-06 09:40:49.71346193 +0000 UTC m=+0.024183115 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 06 09:40:49 compute-0 practical_diffie[87408]: 167 167
Dec 06 09:40:49 compute-0 systemd[1]: libpod-54afaa7b56120399f3dc7611ccd07475a1dd9a74536020d48a49e61a2622329f.scope: Deactivated successfully.
Dec 06 09:40:49 compute-0 podman[87388]: 2025-12-06 09:40:49.812000095 +0000 UTC m=+0.122721230 container attach 54afaa7b56120399f3dc7611ccd07475a1dd9a74536020d48a49e61a2622329f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_diffie, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 09:40:49 compute-0 podman[87388]: 2025-12-06 09:40:49.812333907 +0000 UTC m=+0.123055042 container died 54afaa7b56120399f3dc7611ccd07475a1dd9a74536020d48a49e61a2622329f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_diffie, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 09:40:49 compute-0 ceph-osd[82803]: log_channel(cluster) log [DBG] : 5.4 scrub starts
Dec 06 09:40:49 compute-0 ceph-osd[82803]: log_channel(cluster) log [DBG] : 5.4 scrub ok
Dec 06 09:40:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-9f74f7ed9a237a607cd5619984ddff9477199a302b090a6901c8fcbc69ba89fa-merged.mount: Deactivated successfully.
Dec 06 09:40:49 compute-0 podman[87388]: 2025-12-06 09:40:49.860248531 +0000 UTC m=+0.170969676 container remove 54afaa7b56120399f3dc7611ccd07475a1dd9a74536020d48a49e61a2622329f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_diffie, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 09:40:49 compute-0 systemd[1]: libpod-conmon-54afaa7b56120399f3dc7611ccd07475a1dd9a74536020d48a49e61a2622329f.scope: Deactivated successfully.
Dec 06 09:40:49 compute-0 ceph-mon[74327]: 5.0 scrub starts
Dec 06 09:40:49 compute-0 ceph-mon[74327]: 5.0 scrub ok
Dec 06 09:40:49 compute-0 ceph-mon[74327]: 4.5 scrub starts
Dec 06 09:40:49 compute-0 ceph-mon[74327]: 4.5 scrub ok
Dec 06 09:40:49 compute-0 ceph-mon[74327]: from='client.14247 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 09:40:49 compute-0 ceph-mon[74327]: Saving service node-exporter spec with placement *
Dec 06 09:40:49 compute-0 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:40:49 compute-0 ceph-mon[74327]: Saving service grafana spec with placement compute-0;count:1
Dec 06 09:40:49 compute-0 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:40:49 compute-0 ceph-mon[74327]: Saving service prometheus spec with placement compute-0;count:1
Dec 06 09:40:49 compute-0 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:40:49 compute-0 ceph-mon[74327]: Saving service alertmanager spec with placement compute-0;count:1
Dec 06 09:40:49 compute-0 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:40:49 compute-0 ceph-mon[74327]: from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "b46cc65b-25ba-490a-8b8e-91e4407f3aed"}]: dispatch
Dec 06 09:40:49 compute-0 ceph-mon[74327]: from='client.? 192.168.122.102:0/569971095' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "b46cc65b-25ba-490a-8b8e-91e4407f3aed"}]: dispatch
Dec 06 09:40:49 compute-0 ceph-mon[74327]: from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "b46cc65b-25ba-490a-8b8e-91e4407f3aed"}]': finished
Dec 06 09:40:49 compute-0 ceph-mon[74327]: osdmap e31: 3 total, 2 up, 3 in
Dec 06 09:40:49 compute-0 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 06 09:40:49 compute-0 ceph-mon[74327]: from='client.? 192.168.122.102:0/3771187413' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Dec 06 09:40:50 compute-0 podman[87452]: 2025-12-06 09:40:50.054876851 +0000 UTC m=+0.063177079 container create 73ff1e8d16f26475ab282f75d5e98c26ccaad0d6e1eed18778470d3de9cc8334 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_perlman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 09:40:50 compute-0 systemd[1]: Started libpod-conmon-73ff1e8d16f26475ab282f75d5e98c26ccaad0d6e1eed18778470d3de9cc8334.scope.
Dec 06 09:40:50 compute-0 podman[87452]: 2025-12-06 09:40:50.024224467 +0000 UTC m=+0.032524765 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 06 09:40:50 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/server_port}] v 0)
Dec 06 09:40:50 compute-0 systemd[1]: Started libcrun container.
Dec 06 09:40:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4de0713399881a5c2e9c24687669b0b208121b43f00a698331381c0ef6e65adb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 09:40:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4de0713399881a5c2e9c24687669b0b208121b43f00a698331381c0ef6e65adb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 09:40:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4de0713399881a5c2e9c24687669b0b208121b43f00a698331381c0ef6e65adb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 09:40:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4de0713399881a5c2e9c24687669b0b208121b43f00a698331381c0ef6e65adb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 09:40:50 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4267326554' entity='client.admin' 
Dec 06 09:40:50 compute-0 podman[87452]: 2025-12-06 09:40:50.160184657 +0000 UTC m=+0.168484915 container init 73ff1e8d16f26475ab282f75d5e98c26ccaad0d6e1eed18778470d3de9cc8334 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_perlman, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 09:40:50 compute-0 systemd[1]: libpod-c9fec6490bd962c8ab6c887a2facd978b7656dc8393cf65edaf6972eec4dc9c6.scope: Deactivated successfully.
Dec 06 09:40:50 compute-0 podman[87452]: 2025-12-06 09:40:50.170596264 +0000 UTC m=+0.178896462 container start 73ff1e8d16f26475ab282f75d5e98c26ccaad0d6e1eed18778470d3de9cc8334 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_perlman, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS)
Dec 06 09:40:50 compute-0 podman[87452]: 2025-12-06 09:40:50.231514579 +0000 UTC m=+0.239814817 container attach 73ff1e8d16f26475ab282f75d5e98c26ccaad0d6e1eed18778470d3de9cc8334 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_perlman, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 06 09:40:50 compute-0 podman[87475]: 2025-12-06 09:40:50.244828971 +0000 UTC m=+0.054111665 container died c9fec6490bd962c8ab6c887a2facd978b7656dc8393cf65edaf6972eec4dc9c6 (image=quay.io/ceph/ceph:v19, name=nostalgic_bouman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 06 09:40:50 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e31 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 09:40:50 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v108: 131 pgs: 131 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec 06 09:40:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-85ae777c3574167fc76f03953c408c010e8dac3f6c1cf37d0b61d031bb73e337-merged.mount: Deactivated successfully.
Dec 06 09:40:50 compute-0 podman[87475]: 2025-12-06 09:40:50.642637371 +0000 UTC m=+0.451920065 container remove c9fec6490bd962c8ab6c887a2facd978b7656dc8393cf65edaf6972eec4dc9c6 (image=quay.io/ceph/ceph:v19, name=nostalgic_bouman, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Dec 06 09:40:50 compute-0 systemd[1]: libpod-conmon-c9fec6490bd962c8ab6c887a2facd978b7656dc8393cf65edaf6972eec4dc9c6.scope: Deactivated successfully.
Dec 06 09:40:50 compute-0 sudo[87334]: pam_unix(sudo:session): session closed for user root
Dec 06 09:40:50 compute-0 ceph-osd[82803]: log_channel(cluster) log [DBG] : 5.2 scrub starts
Dec 06 09:40:50 compute-0 ceph-osd[82803]: log_channel(cluster) log [DBG] : 5.2 scrub ok
Dec 06 09:40:50 compute-0 sudo[87580]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uclfjylnqflpnmppanffggjruntiqkim ; /usr/bin/python3'
Dec 06 09:40:50 compute-0 sudo[87580]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:40:50 compute-0 lvm[87587]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 06 09:40:50 compute-0 lvm[87587]: VG ceph_vg0 finished
Dec 06 09:40:50 compute-0 python3[87582]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 5ecd3f74-dade-5fc4-92ce-8950ae424258 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set mgr mgr/dashboard/ssl_server_port 8443 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 09:40:51 compute-0 nostalgic_perlman[87469]: {}
Dec 06 09:40:51 compute-0 systemd[1]: libpod-73ff1e8d16f26475ab282f75d5e98c26ccaad0d6e1eed18778470d3de9cc8334.scope: Deactivated successfully.
Dec 06 09:40:51 compute-0 systemd[1]: libpod-73ff1e8d16f26475ab282f75d5e98c26ccaad0d6e1eed18778470d3de9cc8334.scope: Consumed 1.485s CPU time.
Dec 06 09:40:51 compute-0 podman[87452]: 2025-12-06 09:40:51.075044813 +0000 UTC m=+1.083345031 container died 73ff1e8d16f26475ab282f75d5e98c26ccaad0d6e1eed18778470d3de9cc8334 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_perlman, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 09:40:51 compute-0 podman[87589]: 2025-12-06 09:40:51.071011482 +0000 UTC m=+0.048087550 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 06 09:40:51 compute-0 ceph-mon[74327]: 3.6 scrub starts
Dec 06 09:40:51 compute-0 ceph-mon[74327]: 3.6 scrub ok
Dec 06 09:40:51 compute-0 ceph-mon[74327]: 5.4 scrub starts
Dec 06 09:40:51 compute-0 ceph-mon[74327]: 5.4 scrub ok
Dec 06 09:40:51 compute-0 ceph-mon[74327]: from='client.? 192.168.122.100:0/4267326554' entity='client.admin' 
Dec 06 09:40:51 compute-0 ceph-mon[74327]: pgmap v108: 131 pgs: 131 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec 06 09:40:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-4de0713399881a5c2e9c24687669b0b208121b43f00a698331381c0ef6e65adb-merged.mount: Deactivated successfully.
Dec 06 09:40:51 compute-0 podman[87452]: 2025-12-06 09:40:51.418765529 +0000 UTC m=+1.427065737 container remove 73ff1e8d16f26475ab282f75d5e98c26ccaad0d6e1eed18778470d3de9cc8334 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_perlman, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 09:40:51 compute-0 podman[87589]: 2025-12-06 09:40:51.42589563 +0000 UTC m=+0.402971658 container create 8fa81e4c333b640387055734be7d46d8cfc2c274c684055b31591acb26ee2204 (image=quay.io/ceph/ceph:v19, name=charming_galois, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 09:40:51 compute-0 systemd[1]: libpod-conmon-73ff1e8d16f26475ab282f75d5e98c26ccaad0d6e1eed18778470d3de9cc8334.scope: Deactivated successfully.
Dec 06 09:40:51 compute-0 sudo[87286]: pam_unix(sudo:session): session closed for user root
Dec 06 09:40:51 compute-0 systemd[1]: Started libpod-conmon-8fa81e4c333b640387055734be7d46d8cfc2c274c684055b31591acb26ee2204.scope.
Dec 06 09:40:51 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 06 09:40:51 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:40:51 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 06 09:40:51 compute-0 systemd[1]: Started libcrun container.
Dec 06 09:40:51 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:40:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d6955734231a935083861f1c21955730e3bcc9f63f3eb934bacfde6f1cae7a2/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 09:40:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d6955734231a935083861f1c21955730e3bcc9f63f3eb934bacfde6f1cae7a2/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec 06 09:40:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d6955734231a935083861f1c21955730e3bcc9f63f3eb934bacfde6f1cae7a2/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 09:40:51 compute-0 podman[87589]: 2025-12-06 09:40:51.520276581 +0000 UTC m=+0.497352689 container init 8fa81e4c333b640387055734be7d46d8cfc2c274c684055b31591acb26ee2204 (image=quay.io/ceph/ceph:v19, name=charming_galois, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 06 09:40:51 compute-0 podman[87589]: 2025-12-06 09:40:51.52826236 +0000 UTC m=+0.505338388 container start 8fa81e4c333b640387055734be7d46d8cfc2c274c684055b31591acb26ee2204 (image=quay.io/ceph/ceph:v19, name=charming_galois, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Dec 06 09:40:51 compute-0 podman[87589]: 2025-12-06 09:40:51.531860666 +0000 UTC m=+0.508936784 container attach 8fa81e4c333b640387055734be7d46d8cfc2c274c684055b31591acb26ee2204 (image=quay.io/ceph/ceph:v19, name=charming_galois, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Dec 06 09:40:51 compute-0 ceph-osd[82803]: log_channel(cluster) log [DBG] : 5.7 deep-scrub starts
Dec 06 09:40:51 compute-0 ceph-osd[82803]: log_channel(cluster) log [DBG] : 5.7 deep-scrub ok
Dec 06 09:40:51 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/ssl_server_port}] v 0)
Dec 06 09:40:51 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/821839877' entity='client.admin' 
Dec 06 09:40:51 compute-0 systemd[1]: libpod-8fa81e4c333b640387055734be7d46d8cfc2c274c684055b31591acb26ee2204.scope: Deactivated successfully.
Dec 06 09:40:51 compute-0 podman[87589]: 2025-12-06 09:40:51.938725091 +0000 UTC m=+0.915801189 container died 8fa81e4c333b640387055734be7d46d8cfc2c274c684055b31591acb26ee2204 (image=quay.io/ceph/ceph:v19, name=charming_galois, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Dec 06 09:40:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-4d6955734231a935083861f1c21955730e3bcc9f63f3eb934bacfde6f1cae7a2-merged.mount: Deactivated successfully.
Dec 06 09:40:51 compute-0 podman[87589]: 2025-12-06 09:40:51.992135242 +0000 UTC m=+0.969211310 container remove 8fa81e4c333b640387055734be7d46d8cfc2c274c684055b31591acb26ee2204 (image=quay.io/ceph/ceph:v19, name=charming_galois, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 09:40:52 compute-0 systemd[1]: libpod-conmon-8fa81e4c333b640387055734be7d46d8cfc2c274c684055b31591acb26ee2204.scope: Deactivated successfully.
Dec 06 09:40:52 compute-0 sudo[87580]: pam_unix(sudo:session): session closed for user root
Dec 06 09:40:52 compute-0 sudo[87680]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gejenluoyajwqhlkbtuyeiuscvfelmyb ; /usr/bin/python3'
Dec 06 09:40:52 compute-0 sudo[87680]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:40:52 compute-0 python3[87682]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 5ecd3f74-dade-5fc4-92ce-8950ae424258 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set mgr mgr/dashboard/ssl false _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 09:40:52 compute-0 podman[87683]: 2025-12-06 09:40:52.428325746 +0000 UTC m=+0.048694559 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 06 09:40:52 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v109: 131 pgs: 131 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec 06 09:40:52 compute-0 podman[87683]: 2025-12-06 09:40:52.745682609 +0000 UTC m=+0.366051422 container create 112ef3b4fa01409911e56c2cbba91f64d1c97c4139677cefa63d004aa2735e3e (image=quay.io/ceph/ceph:v19, name=sleepy_greider, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec 06 09:40:52 compute-0 ceph-mon[74327]: 5.3 deep-scrub starts
Dec 06 09:40:52 compute-0 ceph-mon[74327]: 5.3 deep-scrub ok
Dec 06 09:40:52 compute-0 ceph-mon[74327]: 5.2 scrub starts
Dec 06 09:40:52 compute-0 ceph-mon[74327]: 5.2 scrub ok
Dec 06 09:40:52 compute-0 ceph-mon[74327]: 4.2 scrub starts
Dec 06 09:40:52 compute-0 ceph-mon[74327]: 4.2 scrub ok
Dec 06 09:40:52 compute-0 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:40:52 compute-0 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:40:52 compute-0 ceph-mon[74327]: from='client.? 192.168.122.100:0/821839877' entity='client.admin' 
Dec 06 09:40:52 compute-0 systemd[1]: Started libpod-conmon-112ef3b4fa01409911e56c2cbba91f64d1c97c4139677cefa63d004aa2735e3e.scope.
Dec 06 09:40:52 compute-0 systemd[1]: Started libcrun container.
Dec 06 09:40:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f5f50b60b3848cac3f0aecb3510487d465ed547958780694d0350652363be10/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 09:40:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f5f50b60b3848cac3f0aecb3510487d465ed547958780694d0350652363be10/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec 06 09:40:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f5f50b60b3848cac3f0aecb3510487d465ed547958780694d0350652363be10/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 09:40:52 compute-0 ceph-osd[82803]: log_channel(cluster) log [DBG] : 5.1 scrub starts
Dec 06 09:40:52 compute-0 ceph-osd[82803]: log_channel(cluster) log [DBG] : 5.1 scrub ok
Dec 06 09:40:53 compute-0 podman[87683]: 2025-12-06 09:40:53.238914192 +0000 UTC m=+0.859283085 container init 112ef3b4fa01409911e56c2cbba91f64d1c97c4139677cefa63d004aa2735e3e (image=quay.io/ceph/ceph:v19, name=sleepy_greider, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 09:40:53 compute-0 podman[87683]: 2025-12-06 09:40:53.247504221 +0000 UTC m=+0.867873044 container start 112ef3b4fa01409911e56c2cbba91f64d1c97c4139677cefa63d004aa2735e3e (image=quay.io/ceph/ceph:v19, name=sleepy_greider, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 09:40:53 compute-0 podman[87683]: 2025-12-06 09:40:53.25271257 +0000 UTC m=+0.873081363 container attach 112ef3b4fa01409911e56c2cbba91f64d1c97c4139677cefa63d004aa2735e3e (image=quay.io/ceph/ceph:v19, name=sleepy_greider, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 09:40:53 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/ssl}] v 0)
Dec 06 09:40:53 compute-0 ceph-osd[82803]: log_channel(cluster) log [DBG] : 4.e scrub starts
Dec 06 09:40:53 compute-0 ceph-osd[82803]: log_channel(cluster) log [DBG] : 4.e scrub ok
Dec 06 09:40:53 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1482144347' entity='client.admin' 
Dec 06 09:40:53 compute-0 ceph-mon[74327]: 5.7 deep-scrub starts
Dec 06 09:40:53 compute-0 ceph-mon[74327]: 5.7 deep-scrub ok
Dec 06 09:40:53 compute-0 ceph-mon[74327]: 4.6 deep-scrub starts
Dec 06 09:40:53 compute-0 ceph-mon[74327]: 4.6 deep-scrub ok
Dec 06 09:40:53 compute-0 ceph-mon[74327]: pgmap v109: 131 pgs: 131 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec 06 09:40:53 compute-0 ceph-mon[74327]: 5.1 scrub starts
Dec 06 09:40:53 compute-0 ceph-mon[74327]: 5.1 scrub ok
Dec 06 09:40:53 compute-0 systemd[1]: libpod-112ef3b4fa01409911e56c2cbba91f64d1c97c4139677cefa63d004aa2735e3e.scope: Deactivated successfully.
Dec 06 09:40:53 compute-0 podman[87683]: 2025-12-06 09:40:53.936849284 +0000 UTC m=+1.557218067 container died 112ef3b4fa01409911e56c2cbba91f64d1c97c4139677cefa63d004aa2735e3e (image=quay.io/ceph/ceph:v19, name=sleepy_greider, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Dec 06 09:40:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-5f5f50b60b3848cac3f0aecb3510487d465ed547958780694d0350652363be10-merged.mount: Deactivated successfully.
Dec 06 09:40:53 compute-0 podman[87683]: 2025-12-06 09:40:53.974620659 +0000 UTC m=+1.594989442 container remove 112ef3b4fa01409911e56c2cbba91f64d1c97c4139677cefa63d004aa2735e3e (image=quay.io/ceph/ceph:v19, name=sleepy_greider, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 09:40:53 compute-0 systemd[1]: libpod-conmon-112ef3b4fa01409911e56c2cbba91f64d1c97c4139677cefa63d004aa2735e3e.scope: Deactivated successfully.
Dec 06 09:40:53 compute-0 sudo[87680]: pam_unix(sudo:session): session closed for user root
Dec 06 09:40:54 compute-0 sudo[87756]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-glgukpqmsciydqmjmpyhrwjmftpefevq ; /usr/bin/python3'
Dec 06 09:40:54 compute-0 sudo[87756]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:40:54 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v110: 131 pgs: 131 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec 06 09:40:54 compute-0 python3[87758]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a -f 'name=ceph-?(.*)-mgr.*' --format \{\{\.Command\}\} --no-trunc _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 09:40:54 compute-0 sudo[87756]: pam_unix(sudo:session): session closed for user root
Dec 06 09:40:54 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "osd.2"} v 0)
Dec 06 09:40:54 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch
Dec 06 09:40:54 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 06 09:40:54 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 09:40:54 compute-0 ceph-mgr[74618]: [cephadm INFO cephadm.serve] Deploying daemon osd.2 on compute-2
Dec 06 09:40:54 compute-0 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Deploying daemon osd.2 on compute-2
Dec 06 09:40:54 compute-0 ceph-osd[82803]: log_channel(cluster) log [DBG] : 3.9 scrub starts
Dec 06 09:40:54 compute-0 ceph-osd[82803]: log_channel(cluster) log [DBG] : 3.9 scrub ok
Dec 06 09:40:54 compute-0 sudo[87794]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xhxbfwhclsppadbxodepykqtueukqfus ; /usr/bin/python3'
Dec 06 09:40:54 compute-0 sudo[87794]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:40:55 compute-0 python3[87796]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 5ecd3f74-dade-5fc4-92ce-8950ae424258 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set mgr mgr/dashboard/compute-0.qhdjwa/server_addr 192.168.122.100 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 09:40:55 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e31 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 09:40:55 compute-0 ceph-mon[74327]: 3.2 scrub starts
Dec 06 09:40:55 compute-0 ceph-mon[74327]: 3.2 scrub ok
Dec 06 09:40:55 compute-0 ceph-mon[74327]: 4.e scrub starts
Dec 06 09:40:55 compute-0 ceph-mon[74327]: 4.e scrub ok
Dec 06 09:40:55 compute-0 ceph-mon[74327]: from='client.? 192.168.122.100:0/1482144347' entity='client.admin' 
Dec 06 09:40:55 compute-0 podman[87797]: 2025-12-06 09:40:55.283912817 +0000 UTC m=+0.102570318 container create 867626a09d8b86d039fac22e1ecfe0126fc68d73541ed528c2bb1932555b4980 (image=quay.io/ceph/ceph:v19, name=sharp_fermat, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec 06 09:40:55 compute-0 ceph-mon[74327]: pgmap v110: 131 pgs: 131 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec 06 09:40:55 compute-0 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch
Dec 06 09:40:55 compute-0 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 09:40:55 compute-0 podman[87797]: 2025-12-06 09:40:55.211792608 +0000 UTC m=+0.030450089 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 06 09:40:55 compute-0 systemd[1]: Started libpod-conmon-867626a09d8b86d039fac22e1ecfe0126fc68d73541ed528c2bb1932555b4980.scope.
Dec 06 09:40:55 compute-0 systemd[1]: Started libcrun container.
Dec 06 09:40:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/97926f6dea5f8007d17b8b0b8743f5fc27c35743e12e39b52a22d2713c12445d/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 09:40:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/97926f6dea5f8007d17b8b0b8743f5fc27c35743e12e39b52a22d2713c12445d/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec 06 09:40:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/97926f6dea5f8007d17b8b0b8743f5fc27c35743e12e39b52a22d2713c12445d/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 09:40:55 compute-0 podman[87797]: 2025-12-06 09:40:55.434703286 +0000 UTC m=+0.253360797 container init 867626a09d8b86d039fac22e1ecfe0126fc68d73541ed528c2bb1932555b4980 (image=quay.io/ceph/ceph:v19, name=sharp_fermat, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 09:40:55 compute-0 podman[87797]: 2025-12-06 09:40:55.441112824 +0000 UTC m=+0.259770325 container start 867626a09d8b86d039fac22e1ecfe0126fc68d73541ed528c2bb1932555b4980 (image=quay.io/ceph/ceph:v19, name=sharp_fermat, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 09:40:55 compute-0 podman[87797]: 2025-12-06 09:40:55.444729311 +0000 UTC m=+0.263386792 container attach 867626a09d8b86d039fac22e1ecfe0126fc68d73541ed528c2bb1932555b4980 (image=quay.io/ceph/ceph:v19, name=sharp_fermat, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, ceph=True, org.label-schema.license=GPLv2)
Dec 06 09:40:55 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/compute-0.qhdjwa/server_addr}] v 0)
Dec 06 09:40:55 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3512142115' entity='client.admin' 
Dec 06 09:40:55 compute-0 ceph-osd[82803]: log_channel(cluster) log [DBG] : 4.1 scrub starts
Dec 06 09:40:55 compute-0 systemd[1]: libpod-867626a09d8b86d039fac22e1ecfe0126fc68d73541ed528c2bb1932555b4980.scope: Deactivated successfully.
Dec 06 09:40:55 compute-0 podman[87797]: 2025-12-06 09:40:55.840169264 +0000 UTC m=+0.658826725 container died 867626a09d8b86d039fac22e1ecfe0126fc68d73541ed528c2bb1932555b4980 (image=quay.io/ceph/ceph:v19, name=sharp_fermat, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Dec 06 09:40:55 compute-0 ceph-osd[82803]: log_channel(cluster) log [DBG] : 4.1 scrub ok
Dec 06 09:40:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-97926f6dea5f8007d17b8b0b8743f5fc27c35743e12e39b52a22d2713c12445d-merged.mount: Deactivated successfully.
Dec 06 09:40:55 compute-0 podman[87797]: 2025-12-06 09:40:55.875949285 +0000 UTC m=+0.694606746 container remove 867626a09d8b86d039fac22e1ecfe0126fc68d73541ed528c2bb1932555b4980 (image=quay.io/ceph/ceph:v19, name=sharp_fermat, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 09:40:55 compute-0 systemd[1]: libpod-conmon-867626a09d8b86d039fac22e1ecfe0126fc68d73541ed528c2bb1932555b4980.scope: Deactivated successfully.
Dec 06 09:40:55 compute-0 sudo[87794]: pam_unix(sudo:session): session closed for user root
Dec 06 09:40:56 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v111: 131 pgs: 131 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec 06 09:40:56 compute-0 ceph-mon[74327]: 5.5 scrub starts
Dec 06 09:40:56 compute-0 ceph-mon[74327]: 5.5 scrub ok
Dec 06 09:40:56 compute-0 ceph-mon[74327]: Deploying daemon osd.2 on compute-2
Dec 06 09:40:56 compute-0 ceph-mon[74327]: 3.9 scrub starts
Dec 06 09:40:56 compute-0 ceph-mon[74327]: 3.9 scrub ok
Dec 06 09:40:56 compute-0 ceph-mon[74327]: 4.4 scrub starts
Dec 06 09:40:56 compute-0 ceph-mon[74327]: 4.4 scrub ok
Dec 06 09:40:56 compute-0 ceph-mon[74327]: from='client.? 192.168.122.100:0/3512142115' entity='client.admin' 
Dec 06 09:40:56 compute-0 ceph-osd[82803]: log_channel(cluster) log [DBG] : 3.5 scrub starts
Dec 06 09:40:56 compute-0 ceph-osd[82803]: log_channel(cluster) log [DBG] : 3.5 scrub ok
Dec 06 09:40:56 compute-0 sudo[87871]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rplzyecyeixoyzofxdkeglsigaooyxxg ; /usr/bin/python3'
Dec 06 09:40:56 compute-0 sudo[87871]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:40:57 compute-0 python3[87873]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 5ecd3f74-dade-5fc4-92ce-8950ae424258 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set mgr mgr/dashboard/compute-1.sauzid/server_addr 192.168.122.101 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 09:40:57 compute-0 podman[87874]: 2025-12-06 09:40:57.227860004 +0000 UTC m=+0.071479529 container create 8e32a17ea8e55dcd3883dc2441e5004ebf33d099f6575fc09d837bfb1b117b42 (image=quay.io/ceph/ceph:v19, name=romantic_hamilton, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 09:40:57 compute-0 systemd[1]: Started libpod-conmon-8e32a17ea8e55dcd3883dc2441e5004ebf33d099f6575fc09d837bfb1b117b42.scope.
Dec 06 09:40:57 compute-0 podman[87874]: 2025-12-06 09:40:57.196583969 +0000 UTC m=+0.040203544 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 06 09:40:57 compute-0 systemd[1]: Started libcrun container.
Dec 06 09:40:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ee08a90c562f9ffe70abbe78e86b997eebdda46832e62a0cde19dc0348a691b/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 09:40:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ee08a90c562f9ffe70abbe78e86b997eebdda46832e62a0cde19dc0348a691b/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 09:40:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ee08a90c562f9ffe70abbe78e86b997eebdda46832e62a0cde19dc0348a691b/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec 06 09:40:57 compute-0 podman[87874]: 2025-12-06 09:40:57.358321334 +0000 UTC m=+0.201940849 container init 8e32a17ea8e55dcd3883dc2441e5004ebf33d099f6575fc09d837bfb1b117b42 (image=quay.io/ceph/ceph:v19, name=romantic_hamilton, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 09:40:57 compute-0 podman[87874]: 2025-12-06 09:40:57.369042862 +0000 UTC m=+0.212662347 container start 8e32a17ea8e55dcd3883dc2441e5004ebf33d099f6575fc09d837bfb1b117b42 (image=quay.io/ceph/ceph:v19, name=romantic_hamilton, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 09:40:57 compute-0 podman[87874]: 2025-12-06 09:40:57.372800804 +0000 UTC m=+0.216420319 container attach 8e32a17ea8e55dcd3883dc2441e5004ebf33d099f6575fc09d837bfb1b117b42 (image=quay.io/ceph/ceph:v19, name=romantic_hamilton, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2)
Dec 06 09:40:57 compute-0 ceph-mon[74327]: 4.1 scrub starts
Dec 06 09:40:57 compute-0 ceph-mon[74327]: 4.1 scrub ok
Dec 06 09:40:57 compute-0 ceph-mon[74327]: 4.3 scrub starts
Dec 06 09:40:57 compute-0 ceph-mon[74327]: 4.3 scrub ok
Dec 06 09:40:57 compute-0 ceph-mon[74327]: pgmap v111: 131 pgs: 131 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec 06 09:40:57 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/compute-1.sauzid/server_addr}] v 0)
Dec 06 09:40:57 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2451230512' entity='client.admin' 
Dec 06 09:40:57 compute-0 systemd[1]: libpod-8e32a17ea8e55dcd3883dc2441e5004ebf33d099f6575fc09d837bfb1b117b42.scope: Deactivated successfully.
Dec 06 09:40:57 compute-0 podman[87874]: 2025-12-06 09:40:57.798788597 +0000 UTC m=+0.642408122 container died 8e32a17ea8e55dcd3883dc2441e5004ebf33d099f6575fc09d837bfb1b117b42 (image=quay.io/ceph/ceph:v19, name=romantic_hamilton, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Dec 06 09:40:57 compute-0 ceph-osd[82803]: log_channel(cluster) log [DBG] : 4.c scrub starts
Dec 06 09:40:57 compute-0 ceph-osd[82803]: log_channel(cluster) log [DBG] : 4.c scrub ok
Dec 06 09:40:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-5ee08a90c562f9ffe70abbe78e86b997eebdda46832e62a0cde19dc0348a691b-merged.mount: Deactivated successfully.
Dec 06 09:40:57 compute-0 podman[87874]: 2025-12-06 09:40:57.866488374 +0000 UTC m=+0.710107859 container remove 8e32a17ea8e55dcd3883dc2441e5004ebf33d099f6575fc09d837bfb1b117b42 (image=quay.io/ceph/ceph:v19, name=romantic_hamilton, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 09:40:57 compute-0 systemd[1]: libpod-conmon-8e32a17ea8e55dcd3883dc2441e5004ebf33d099f6575fc09d837bfb1b117b42.scope: Deactivated successfully.
Dec 06 09:40:57 compute-0 sudo[87871]: pam_unix(sudo:session): session closed for user root
Dec 06 09:40:58 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v112: 131 pgs: 131 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec 06 09:40:58 compute-0 ceph-mon[74327]: 3.5 scrub starts
Dec 06 09:40:58 compute-0 ceph-mon[74327]: 3.5 scrub ok
Dec 06 09:40:58 compute-0 ceph-mon[74327]: 3.1 scrub starts
Dec 06 09:40:58 compute-0 ceph-mon[74327]: 3.1 scrub ok
Dec 06 09:40:58 compute-0 ceph-mon[74327]: from='client.? 192.168.122.100:0/2451230512' entity='client.admin' 
Dec 06 09:40:58 compute-0 sudo[87949]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qhgfpxymwosyxlwtvyggxgzinnphzdjc ; /usr/bin/python3'
Dec 06 09:40:58 compute-0 sudo[87949]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:40:58 compute-0 python3[87951]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 5ecd3f74-dade-5fc4-92ce-8950ae424258 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set mgr mgr/dashboard/compute-2.oazbvn/server_addr 192.168.122.102 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 09:40:58 compute-0 ceph-osd[82803]: log_channel(cluster) log [DBG] : 5.e scrub starts
Dec 06 09:40:58 compute-0 ceph-osd[82803]: log_channel(cluster) log [DBG] : 5.e scrub ok
Dec 06 09:40:58 compute-0 podman[87952]: 2025-12-06 09:40:58.832091978 +0000 UTC m=+0.059509282 container create 2f0f245fd88209f84678d2f4a2f66dac440a740ccd701819337aa7a5c4b24a08 (image=quay.io/ceph/ceph:v19, name=inspiring_gates, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Dec 06 09:40:58 compute-0 systemd[1]: Started libpod-conmon-2f0f245fd88209f84678d2f4a2f66dac440a740ccd701819337aa7a5c4b24a08.scope.
Dec 06 09:40:58 compute-0 podman[87952]: 2025-12-06 09:40:58.810516756 +0000 UTC m=+0.037934100 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 06 09:40:58 compute-0 systemd[1]: Started libcrun container.
Dec 06 09:40:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d11acf241a7c34b80171333b6f1b5d0efd14d54885d7c12864545974d3d975ef/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 09:40:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d11acf241a7c34b80171333b6f1b5d0efd14d54885d7c12864545974d3d975ef/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 09:40:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d11acf241a7c34b80171333b6f1b5d0efd14d54885d7c12864545974d3d975ef/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec 06 09:40:58 compute-0 podman[87952]: 2025-12-06 09:40:58.92391401 +0000 UTC m=+0.151331334 container init 2f0f245fd88209f84678d2f4a2f66dac440a740ccd701819337aa7a5c4b24a08 (image=quay.io/ceph/ceph:v19, name=inspiring_gates, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Dec 06 09:40:58 compute-0 podman[87952]: 2025-12-06 09:40:58.931545712 +0000 UTC m=+0.158963036 container start 2f0f245fd88209f84678d2f4a2f66dac440a740ccd701819337aa7a5c4b24a08 (image=quay.io/ceph/ceph:v19, name=inspiring_gates, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 09:40:58 compute-0 podman[87952]: 2025-12-06 09:40:58.936077464 +0000 UTC m=+0.163494788 container attach 2f0f245fd88209f84678d2f4a2f66dac440a740ccd701819337aa7a5c4b24a08 (image=quay.io/ceph/ceph:v19, name=inspiring_gates, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 09:40:59 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec 06 09:40:59 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/compute-2.oazbvn/server_addr}] v 0)
Dec 06 09:40:59 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:40:59 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec 06 09:40:59 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2111286861' entity='client.admin' 
Dec 06 09:40:59 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:40:59 compute-0 systemd[1]: libpod-2f0f245fd88209f84678d2f4a2f66dac440a740ccd701819337aa7a5c4b24a08.scope: Deactivated successfully.
Dec 06 09:40:59 compute-0 podman[87952]: 2025-12-06 09:40:59.392989617 +0000 UTC m=+0.620406911 container died 2f0f245fd88209f84678d2f4a2f66dac440a740ccd701819337aa7a5c4b24a08 (image=quay.io/ceph/ceph:v19, name=inspiring_gates, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325)
Dec 06 09:40:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-d11acf241a7c34b80171333b6f1b5d0efd14d54885d7c12864545974d3d975ef-merged.mount: Deactivated successfully.
Dec 06 09:40:59 compute-0 podman[87952]: 2025-12-06 09:40:59.432521897 +0000 UTC m=+0.659939181 container remove 2f0f245fd88209f84678d2f4a2f66dac440a740ccd701819337aa7a5c4b24a08 (image=quay.io/ceph/ceph:v19, name=inspiring_gates, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 06 09:40:59 compute-0 systemd[1]: libpod-conmon-2f0f245fd88209f84678d2f4a2f66dac440a740ccd701819337aa7a5c4b24a08.scope: Deactivated successfully.
Dec 06 09:40:59 compute-0 sudo[87949]: pam_unix(sudo:session): session closed for user root
Dec 06 09:40:59 compute-0 ceph-mon[74327]: 4.c scrub starts
Dec 06 09:40:59 compute-0 ceph-mon[74327]: 4.c scrub ok
Dec 06 09:40:59 compute-0 ceph-mon[74327]: 3.8 scrub starts
Dec 06 09:40:59 compute-0 ceph-mon[74327]: 3.8 scrub ok
Dec 06 09:40:59 compute-0 ceph-mon[74327]: pgmap v112: 131 pgs: 131 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec 06 09:40:59 compute-0 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:40:59 compute-0 ceph-mon[74327]: from='client.? 192.168.122.100:0/2111286861' entity='client.admin' 
Dec 06 09:40:59 compute-0 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:40:59 compute-0 sudo[88027]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pznzssvhwjaxgvaacdlqqicskqvcomjd ; /usr/bin/python3'
Dec 06 09:40:59 compute-0 sudo[88027]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:40:59 compute-0 python3[88029]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 5ecd3f74-dade-5fc4-92ce-8950ae424258 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   mgr module disable dashboard _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 09:40:59 compute-0 ceph-osd[82803]: log_channel(cluster) log [DBG] : 5.1b scrub starts
Dec 06 09:40:59 compute-0 ceph-osd[82803]: log_channel(cluster) log [DBG] : 5.1b scrub ok
Dec 06 09:40:59 compute-0 podman[88030]: 2025-12-06 09:40:59.88441484 +0000 UTC m=+0.065975207 container create 64f0b33176a2cfa2ed70329411addd5ec8a80a6daf1a8276259b2975a83e201c (image=quay.io/ceph/ceph:v19, name=beautiful_yalow, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 06 09:40:59 compute-0 systemd[1]: Started libpod-conmon-64f0b33176a2cfa2ed70329411addd5ec8a80a6daf1a8276259b2975a83e201c.scope.
Dec 06 09:40:59 compute-0 systemd[1]: Started libcrun container.
Dec 06 09:40:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e16728378dbea3f116fd77d129b87a4100662921e92d7eac01ac2d07aeb411e/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec 06 09:40:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e16728378dbea3f116fd77d129b87a4100662921e92d7eac01ac2d07aeb411e/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 09:40:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e16728378dbea3f116fd77d129b87a4100662921e92d7eac01ac2d07aeb411e/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 09:40:59 compute-0 podman[88030]: 2025-12-06 09:40:59.857889151 +0000 UTC m=+0.039449598 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 06 09:40:59 compute-0 podman[88030]: 2025-12-06 09:40:59.98310536 +0000 UTC m=+0.164665797 container init 64f0b33176a2cfa2ed70329411addd5ec8a80a6daf1a8276259b2975a83e201c (image=quay.io/ceph/ceph:v19, name=beautiful_yalow, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_REF=squid, OSD_FLAVOR=default)
Dec 06 09:40:59 compute-0 podman[88030]: 2025-12-06 09:40:59.992445634 +0000 UTC m=+0.174006031 container start 64f0b33176a2cfa2ed70329411addd5ec8a80a6daf1a8276259b2975a83e201c (image=quay.io/ceph/ceph:v19, name=beautiful_yalow, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 06 09:40:59 compute-0 podman[88030]: 2025-12-06 09:40:59.996311927 +0000 UTC m=+0.177872334 container attach 64f0b33176a2cfa2ed70329411addd5ec8a80a6daf1a8276259b2975a83e201c (image=quay.io/ceph/ceph:v19, name=beautiful_yalow, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 09:41:00 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e31 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 09:41:00 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module disable", "module": "dashboard"} v 0)
Dec 06 09:41:00 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2854219236' entity='client.admin' cmd=[{"prefix": "mgr module disable", "module": "dashboard"}]: dispatch
Dec 06 09:41:00 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v113: 131 pgs: 131 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec 06 09:41:00 compute-0 ceph-mon[74327]: 5.e scrub starts
Dec 06 09:41:00 compute-0 ceph-mon[74327]: 5.e scrub ok
Dec 06 09:41:00 compute-0 ceph-mon[74327]: 4.1d scrub starts
Dec 06 09:41:00 compute-0 ceph-mon[74327]: 4.1d scrub ok
Dec 06 09:41:00 compute-0 ceph-mon[74327]: from='client.? 192.168.122.100:0/2854219236' entity='client.admin' cmd=[{"prefix": "mgr module disable", "module": "dashboard"}]: dispatch
Dec 06 09:41:00 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2854219236' entity='client.admin' cmd='[{"prefix": "mgr module disable", "module": "dashboard"}]': finished
Dec 06 09:41:00 compute-0 beautiful_yalow[88045]: module 'dashboard' is already disabled
Dec 06 09:41:00 compute-0 ceph-mon[74327]: log_channel(cluster) log [DBG] : mgrmap e12: compute-0.qhdjwa(active, since 2m), standbys: compute-1.sauzid, compute-2.oazbvn
Dec 06 09:41:00 compute-0 systemd[1]: libpod-64f0b33176a2cfa2ed70329411addd5ec8a80a6daf1a8276259b2975a83e201c.scope: Deactivated successfully.
Dec 06 09:41:00 compute-0 podman[88030]: 2025-12-06 09:41:00.679096539 +0000 UTC m=+0.860656936 container died 64f0b33176a2cfa2ed70329411addd5ec8a80a6daf1a8276259b2975a83e201c (image=quay.io/ceph/ceph:v19, name=beautiful_yalow, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 09:41:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-0e16728378dbea3f116fd77d129b87a4100662921e92d7eac01ac2d07aeb411e-merged.mount: Deactivated successfully.
Dec 06 09:41:00 compute-0 podman[88030]: 2025-12-06 09:41:00.73102867 +0000 UTC m=+0.912589027 container remove 64f0b33176a2cfa2ed70329411addd5ec8a80a6daf1a8276259b2975a83e201c (image=quay.io/ceph/ceph:v19, name=beautiful_yalow, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 09:41:00 compute-0 systemd[1]: libpod-conmon-64f0b33176a2cfa2ed70329411addd5ec8a80a6daf1a8276259b2975a83e201c.scope: Deactivated successfully.
Dec 06 09:41:00 compute-0 sudo[88027]: pam_unix(sudo:session): session closed for user root
Dec 06 09:41:00 compute-0 ceph-osd[82803]: log_channel(cluster) log [DBG] : 4.1a deep-scrub starts
Dec 06 09:41:00 compute-0 ceph-osd[82803]: log_channel(cluster) log [DBG] : 4.1a deep-scrub ok
Dec 06 09:41:00 compute-0 sudo[88103]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oxsgntqtwrmzakjlhesudkjjpimjqjen ; /usr/bin/python3'
Dec 06 09:41:00 compute-0 sudo[88103]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:41:01 compute-0 python3[88105]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 5ecd3f74-dade-5fc4-92ce-8950ae424258 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   mgr module enable dashboard _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 09:41:01 compute-0 podman[88106]: 2025-12-06 09:41:01.184591817 +0000 UTC m=+0.055343391 container create 2e63e8aa1610e8befa4a69cacba4896edcad0ee21b45b3d2042a9fc75fc2596f (image=quay.io/ceph/ceph:v19, name=brave_cartwright, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 09:41:01 compute-0 systemd[1]: Started libpod-conmon-2e63e8aa1610e8befa4a69cacba4896edcad0ee21b45b3d2042a9fc75fc2596f.scope.
Dec 06 09:41:01 compute-0 systemd[1]: Started libcrun container.
Dec 06 09:41:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ec6d0a28d9d1c91c266aeae4923e29123b3074d756b82bb6d83d21607f441dd/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 09:41:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ec6d0a28d9d1c91c266aeae4923e29123b3074d756b82bb6d83d21607f441dd/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec 06 09:41:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ec6d0a28d9d1c91c266aeae4923e29123b3074d756b82bb6d83d21607f441dd/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 09:41:01 compute-0 podman[88106]: 2025-12-06 09:41:01.160147154 +0000 UTC m=+0.030898738 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 06 09:41:01 compute-0 podman[88106]: 2025-12-06 09:41:01.279031792 +0000 UTC m=+0.149783396 container init 2e63e8aa1610e8befa4a69cacba4896edcad0ee21b45b3d2042a9fc75fc2596f (image=quay.io/ceph/ceph:v19, name=brave_cartwright, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 09:41:01 compute-0 podman[88106]: 2025-12-06 09:41:01.285267219 +0000 UTC m=+0.156018813 container start 2e63e8aa1610e8befa4a69cacba4896edcad0ee21b45b3d2042a9fc75fc2596f (image=quay.io/ceph/ceph:v19, name=brave_cartwright, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Dec 06 09:41:01 compute-0 podman[88106]: 2025-12-06 09:41:01.288834702 +0000 UTC m=+0.159586366 container attach 2e63e8aa1610e8befa4a69cacba4896edcad0ee21b45b3d2042a9fc75fc2596f (image=quay.io/ceph/ceph:v19, name=brave_cartwright, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec 06 09:41:01 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec 06 09:41:01 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:41:01 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec 06 09:41:01 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:41:01 compute-0 sudo[88125]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 09:41:01 compute-0 sudo[88125]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:41:01 compute-0 sudo[88125]: pam_unix(sudo:session): session closed for user root
Dec 06 09:41:01 compute-0 ceph-mon[74327]: 5.1b scrub starts
Dec 06 09:41:01 compute-0 ceph-mon[74327]: 5.1b scrub ok
Dec 06 09:41:01 compute-0 ceph-mon[74327]: 4.1c scrub starts
Dec 06 09:41:01 compute-0 ceph-mon[74327]: 4.1c scrub ok
Dec 06 09:41:01 compute-0 ceph-mon[74327]: pgmap v113: 131 pgs: 131 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec 06 09:41:01 compute-0 ceph-mon[74327]: from='client.? 192.168.122.100:0/2854219236' entity='client.admin' cmd='[{"prefix": "mgr module disable", "module": "dashboard"}]': finished
Dec 06 09:41:01 compute-0 ceph-mon[74327]: mgrmap e12: compute-0.qhdjwa(active, since 2m), standbys: compute-1.sauzid, compute-2.oazbvn
Dec 06 09:41:01 compute-0 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:41:01 compute-0 ceph-mon[74327]: from='mgr.14122 192.168.122.100:0/2031792771' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:41:01 compute-0 sudo[88169]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 09:41:01 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module enable", "module": "dashboard"} v 0)
Dec 06 09:41:01 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2146703949' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "dashboard"}]: dispatch
Dec 06 09:41:01 compute-0 sudo[88169]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:41:01 compute-0 sudo[88169]: pam_unix(sudo:session): session closed for user root
Dec 06 09:41:01 compute-0 sudo[88195]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Dec 06 09:41:01 compute-0 sudo[88195]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:41:01 compute-0 ceph-osd[82803]: log_channel(cluster) log [DBG] : 4.1b scrub starts
Dec 06 09:41:01 compute-0 ceph-osd[82803]: log_channel(cluster) log [DBG] : 4.1b scrub ok
Dec 06 09:41:02 compute-0 sudo[88195]: pam_unix(sudo:session): session closed for user root
Dec 06 09:41:02 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v114: 131 pgs: 131 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec 06 09:41:02 compute-0 ceph-mon[74327]: 4.1a deep-scrub starts
Dec 06 09:41:02 compute-0 ceph-mon[74327]: 4.1a deep-scrub ok
Dec 06 09:41:02 compute-0 ceph-mon[74327]: 5.1d scrub starts
Dec 06 09:41:02 compute-0 ceph-mon[74327]: 5.1d scrub ok
Dec 06 09:41:02 compute-0 ceph-mon[74327]: from='client.? 192.168.122.100:0/2146703949' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "dashboard"}]: dispatch
Dec 06 09:41:02 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2146703949' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "dashboard"}]': finished
Dec 06 09:41:02 compute-0 ceph-mgr[74618]: mgr handle_mgr_map respawning because set of enabled modules changed!
Dec 06 09:41:02 compute-0 ceph-mgr[74618]: mgr respawn  e: '/usr/bin/ceph-mgr'
Dec 06 09:41:02 compute-0 ceph-mgr[74618]: mgr respawn  0: '/usr/bin/ceph-mgr'
Dec 06 09:41:02 compute-0 ceph-mgr[74618]: mgr respawn  1: '-n'
Dec 06 09:41:02 compute-0 ceph-mgr[74618]: mgr respawn  2: 'mgr.compute-0.qhdjwa'
Dec 06 09:41:02 compute-0 ceph-mgr[74618]: mgr respawn  3: '-f'
Dec 06 09:41:02 compute-0 ceph-mgr[74618]: mgr respawn  4: '--setuser'
Dec 06 09:41:02 compute-0 ceph-mgr[74618]: mgr respawn  5: 'ceph'
Dec 06 09:41:02 compute-0 ceph-mgr[74618]: mgr respawn  6: '--setgroup'
Dec 06 09:41:02 compute-0 ceph-mgr[74618]: mgr respawn  7: 'ceph'
Dec 06 09:41:02 compute-0 ceph-mgr[74618]: mgr respawn  8: '--default-log-to-file=false'
Dec 06 09:41:02 compute-0 ceph-mgr[74618]: mgr respawn  9: '--default-log-to-journald=true'
Dec 06 09:41:02 compute-0 ceph-mgr[74618]: mgr respawn  10: '--default-log-to-stderr=false'
Dec 06 09:41:02 compute-0 ceph-mon[74327]: log_channel(cluster) log [DBG] : mgrmap e13: compute-0.qhdjwa(active, since 3m), standbys: compute-1.sauzid, compute-2.oazbvn
Dec 06 09:41:02 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]} v 0)
Dec 06 09:41:02 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
Dec 06 09:41:02 compute-0 systemd[1]: libpod-2e63e8aa1610e8befa4a69cacba4896edcad0ee21b45b3d2042a9fc75fc2596f.scope: Deactivated successfully.
Dec 06 09:41:02 compute-0 sshd-session[75961]: Connection closed by 192.168.122.100 port 38686
Dec 06 09:41:02 compute-0 sshd-session[75905]: Connection closed by 192.168.122.100 port 48006
Dec 06 09:41:02 compute-0 sshd-session[75760]: Connection closed by 192.168.122.100 port 47944
Dec 06 09:41:02 compute-0 sshd-session[75932]: Connection closed by 192.168.122.100 port 38678
Dec 06 09:41:02 compute-0 sshd-session[75876]: Connection closed by 192.168.122.100 port 47992
Dec 06 09:41:02 compute-0 sshd-session[75847]: Connection closed by 192.168.122.100 port 47976
Dec 06 09:41:02 compute-0 sshd-session[75818]: Connection closed by 192.168.122.100 port 47974
Dec 06 09:41:02 compute-0 sshd-session[75789]: Connection closed by 192.168.122.100 port 47960
Dec 06 09:41:02 compute-0 sshd-session[75958]: pam_unix(sshd:session): session closed for user ceph-admin
Dec 06 09:41:02 compute-0 sshd-session[75702]: Connection closed by 192.168.122.100 port 47924
Dec 06 09:41:02 compute-0 sshd-session[75673]: Connection closed by 192.168.122.100 port 47912
Dec 06 09:41:02 compute-0 sshd-session[75672]: Connection closed by 192.168.122.100 port 47908
Dec 06 09:41:02 compute-0 systemd[1]: session-33.scope: Deactivated successfully.
Dec 06 09:41:02 compute-0 sshd-session[75731]: Connection closed by 192.168.122.100 port 47940
Dec 06 09:41:02 compute-0 systemd[1]: session-33.scope: Consumed 25.110s CPU time.
Dec 06 09:41:02 compute-0 sshd-session[75757]: pam_unix(sshd:session): session closed for user ceph-admin
Dec 06 09:41:02 compute-0 sshd-session[75929]: pam_unix(sshd:session): session closed for user ceph-admin
Dec 06 09:41:02 compute-0 sshd-session[75873]: pam_unix(sshd:session): session closed for user ceph-admin
Dec 06 09:41:02 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ignoring --setuser ceph since I am not root
Dec 06 09:41:02 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ignoring --setgroup ceph since I am not root
Dec 06 09:41:02 compute-0 systemd[1]: session-32.scope: Deactivated successfully.
Dec 06 09:41:02 compute-0 systemd[1]: session-26.scope: Deactivated successfully.
Dec 06 09:41:02 compute-0 systemd-logind[795]: Session 33 logged out. Waiting for processes to exit.
Dec 06 09:41:02 compute-0 systemd[1]: session-30.scope: Deactivated successfully.
Dec 06 09:41:02 compute-0 systemd-logind[795]: Session 32 logged out. Waiting for processes to exit.
Dec 06 09:41:02 compute-0 podman[88253]: 2025-12-06 09:41:02.869298088 +0000 UTC m=+0.081109295 container died 2e63e8aa1610e8befa4a69cacba4896edcad0ee21b45b3d2042a9fc75fc2596f (image=quay.io/ceph/ceph:v19, name=brave_cartwright, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Dec 06 09:41:02 compute-0 systemd-logind[795]: Session 26 logged out. Waiting for processes to exit.
Dec 06 09:41:02 compute-0 systemd-logind[795]: Session 30 logged out. Waiting for processes to exit.
Dec 06 09:41:02 compute-0 sshd-session[75666]: pam_unix(sshd:session): session closed for user ceph-admin
Dec 06 09:41:02 compute-0 sshd-session[75699]: pam_unix(sshd:session): session closed for user ceph-admin
Dec 06 09:41:02 compute-0 ceph-mgr[74618]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mgr, pid 2
Dec 06 09:41:02 compute-0 ceph-mgr[74618]: pidfile_write: ignore empty --pid-file
Dec 06 09:41:02 compute-0 sshd-session[75902]: pam_unix(sshd:session): session closed for user ceph-admin
Dec 06 09:41:02 compute-0 sshd-session[75815]: pam_unix(sshd:session): session closed for user ceph-admin
Dec 06 09:41:02 compute-0 sshd-session[75649]: pam_unix(sshd:session): session closed for user ceph-admin
Dec 06 09:41:02 compute-0 systemd-logind[795]: Removed session 33.
Dec 06 09:41:02 compute-0 systemd[1]: session-23.scope: Deactivated successfully.
Dec 06 09:41:02 compute-0 sshd-session[75786]: pam_unix(sshd:session): session closed for user ceph-admin
Dec 06 09:41:02 compute-0 systemd[1]: session-24.scope: Deactivated successfully.
Dec 06 09:41:02 compute-0 systemd[1]: session-21.scope: Deactivated successfully.
Dec 06 09:41:02 compute-0 systemd[1]: session-28.scope: Deactivated successfully.
Dec 06 09:41:02 compute-0 systemd[1]: session-31.scope: Deactivated successfully.
Dec 06 09:41:02 compute-0 systemd-logind[795]: Session 23 logged out. Waiting for processes to exit.
Dec 06 09:41:02 compute-0 ceph-osd[82803]: log_channel(cluster) log [DBG] : 5.f deep-scrub starts
Dec 06 09:41:02 compute-0 systemd[1]: session-27.scope: Deactivated successfully.
Dec 06 09:41:02 compute-0 systemd-logind[795]: Session 24 logged out. Waiting for processes to exit.
Dec 06 09:41:02 compute-0 systemd-logind[795]: Session 28 logged out. Waiting for processes to exit.
Dec 06 09:41:02 compute-0 systemd-logind[795]: Session 31 logged out. Waiting for processes to exit.
Dec 06 09:41:02 compute-0 systemd-logind[795]: Session 21 logged out. Waiting for processes to exit.
Dec 06 09:41:02 compute-0 systemd-logind[795]: Session 27 logged out. Waiting for processes to exit.
Dec 06 09:41:02 compute-0 sshd-session[75844]: pam_unix(sshd:session): session closed for user ceph-admin
Dec 06 09:41:02 compute-0 systemd-logind[795]: Removed session 32.
Dec 06 09:41:02 compute-0 ceph-osd[82803]: log_channel(cluster) log [DBG] : 5.f deep-scrub ok
Dec 06 09:41:02 compute-0 systemd[1]: session-29.scope: Deactivated successfully.
Dec 06 09:41:02 compute-0 systemd-logind[795]: Removed session 26.
Dec 06 09:41:02 compute-0 systemd-logind[795]: Session 29 logged out. Waiting for processes to exit.
Dec 06 09:41:02 compute-0 sshd-session[75728]: pam_unix(sshd:session): session closed for user ceph-admin
Dec 06 09:41:02 compute-0 systemd-logind[795]: Removed session 30.
Dec 06 09:41:02 compute-0 systemd-logind[795]: Removed session 23.
Dec 06 09:41:02 compute-0 systemd[1]: session-25.scope: Deactivated successfully.
Dec 06 09:41:02 compute-0 systemd-logind[795]: Removed session 24.
Dec 06 09:41:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-1ec6d0a28d9d1c91c266aeae4923e29123b3074d756b82bb6d83d21607f441dd-merged.mount: Deactivated successfully.
Dec 06 09:41:02 compute-0 systemd-logind[795]: Session 25 logged out. Waiting for processes to exit.
Dec 06 09:41:02 compute-0 systemd-logind[795]: Removed session 21.
Dec 06 09:41:02 compute-0 systemd-logind[795]: Removed session 28.
Dec 06 09:41:02 compute-0 systemd-logind[795]: Removed session 31.
Dec 06 09:41:02 compute-0 systemd-logind[795]: Removed session 27.
Dec 06 09:41:02 compute-0 systemd-logind[795]: Removed session 29.
Dec 06 09:41:02 compute-0 systemd-logind[795]: Removed session 25.
Dec 06 09:41:02 compute-0 ceph-mgr[74618]: mgr[py] Loading python module 'alerts'
Dec 06 09:41:02 compute-0 podman[88253]: 2025-12-06 09:41:02.92441868 +0000 UTC m=+0.136229827 container remove 2e63e8aa1610e8befa4a69cacba4896edcad0ee21b45b3d2042a9fc75fc2596f (image=quay.io/ceph/ceph:v19, name=brave_cartwright, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Dec 06 09:41:02 compute-0 systemd[1]: libpod-conmon-2e63e8aa1610e8befa4a69cacba4896edcad0ee21b45b3d2042a9fc75fc2596f.scope: Deactivated successfully.
Dec 06 09:41:02 compute-0 sudo[88103]: pam_unix(sudo:session): session closed for user root
Dec 06 09:41:03 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:41:03.034+0000 7fe91853c140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Dec 06 09:41:03 compute-0 ceph-mgr[74618]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Dec 06 09:41:03 compute-0 ceph-mgr[74618]: mgr[py] Loading python module 'balancer'
Dec 06 09:41:03 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:41:03.114+0000 7fe91853c140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Dec 06 09:41:03 compute-0 ceph-mgr[74618]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Dec 06 09:41:03 compute-0 ceph-mgr[74618]: mgr[py] Loading python module 'cephadm'
Dec 06 09:41:03 compute-0 sudo[88309]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-amesdkaxhlbkdfeemkoxpforqgvopize ; /usr/bin/python3'
Dec 06 09:41:03 compute-0 sudo[88309]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:41:03 compute-0 python3[88311]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 5ecd3f74-dade-5fc4-92ce-8950ae424258 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   dashboard set-grafana-api-username admin _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 09:41:03 compute-0 podman[88312]: 2025-12-06 09:41:03.499397834 +0000 UTC m=+0.081255729 container create 7e5a9574a48f9b5450975b8c693e080208c921208400ea160220de229a3fdaa6 (image=quay.io/ceph/ceph:v19, name=fervent_hugle, io.buildah.version=1.40.1, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 09:41:03 compute-0 systemd[1]: Started libpod-conmon-7e5a9574a48f9b5450975b8c693e080208c921208400ea160220de229a3fdaa6.scope.
Dec 06 09:41:03 compute-0 podman[88312]: 2025-12-06 09:41:03.456415416 +0000 UTC m=+0.038273351 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 06 09:41:03 compute-0 systemd[1]: Started libcrun container.
Dec 06 09:41:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/041babab4756ad2ecd8fdafb3c956d7adc98cb75526efa0bee1013881aca8318/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec 06 09:41:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/041babab4756ad2ecd8fdafb3c956d7adc98cb75526efa0bee1013881aca8318/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 09:41:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/041babab4756ad2ecd8fdafb3c956d7adc98cb75526efa0bee1013881aca8318/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 09:41:03 compute-0 podman[88312]: 2025-12-06 09:41:03.585713833 +0000 UTC m=+0.167571748 container init 7e5a9574a48f9b5450975b8c693e080208c921208400ea160220de229a3fdaa6 (image=quay.io/ceph/ceph:v19, name=fervent_hugle, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 09:41:03 compute-0 podman[88312]: 2025-12-06 09:41:03.598761185 +0000 UTC m=+0.180619090 container start 7e5a9574a48f9b5450975b8c693e080208c921208400ea160220de229a3fdaa6 (image=quay.io/ceph/ceph:v19, name=fervent_hugle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec 06 09:41:03 compute-0 podman[88312]: 2025-12-06 09:41:03.603281018 +0000 UTC m=+0.185138913 container attach 7e5a9574a48f9b5450975b8c693e080208c921208400ea160220de229a3fdaa6 (image=quay.io/ceph/ceph:v19, name=fervent_hugle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Dec 06 09:41:03 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e31 do_prune osdmap full prune enabled
Dec 06 09:41:03 compute-0 ceph-mon[74327]: 4.1b scrub starts
Dec 06 09:41:03 compute-0 ceph-mon[74327]: 4.1b scrub ok
Dec 06 09:41:03 compute-0 ceph-mon[74327]: 3.1b scrub starts
Dec 06 09:41:03 compute-0 ceph-mon[74327]: 3.1b scrub ok
Dec 06 09:41:03 compute-0 ceph-mon[74327]: pgmap v114: 131 pgs: 131 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec 06 09:41:03 compute-0 ceph-mon[74327]: from='client.? 192.168.122.100:0/2146703949' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "dashboard"}]': finished
Dec 06 09:41:03 compute-0 ceph-mon[74327]: mgrmap e13: compute-0.qhdjwa(active, since 3m), standbys: compute-1.sauzid, compute-2.oazbvn
Dec 06 09:41:03 compute-0 ceph-mon[74327]: from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
Dec 06 09:41:03 compute-0 ceph-mon[74327]: from='osd.2 [v2:192.168.122.102:6800/709563040,v1:192.168.122.102:6801/709563040]' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
Dec 06 09:41:03 compute-0 ceph-mon[74327]: 5.f deep-scrub starts
Dec 06 09:41:03 compute-0 ceph-mon[74327]: 5.f deep-scrub ok
Dec 06 09:41:03 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
Dec 06 09:41:03 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e32 e32: 3 total, 2 up, 3 in
Dec 06 09:41:03 compute-0 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e32: 3 total, 2 up, 3 in
Dec 06 09:41:03 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-2", "root=default"]} v 0)
Dec 06 09:41:03 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-2", "root=default"]}]: dispatch
Dec 06 09:41:03 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e32 create-or-move crush item name 'osd.2' initial_weight 0.0195 at location {host=compute-2,root=default}
Dec 06 09:41:03 compute-0 ceph-mgr[74618]: mgr[py] Loading python module 'crash'
Dec 06 09:41:03 compute-0 ceph-osd[82803]: log_channel(cluster) log [DBG] : 3.1c scrub starts
Dec 06 09:41:03 compute-0 ceph-osd[82803]: log_channel(cluster) log [DBG] : 3.1c scrub ok
Dec 06 09:41:03 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:41:03.927+0000 7fe91853c140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Dec 06 09:41:03 compute-0 ceph-mgr[74618]: mgr[py] Module crash has missing NOTIFY_TYPES member
Dec 06 09:41:03 compute-0 ceph-mgr[74618]: mgr[py] Loading python module 'dashboard'
Dec 06 09:41:04 compute-0 ceph-mgr[74618]: mgr[py] Loading python module 'devicehealth'
Dec 06 09:41:04 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:41:04.527+0000 7fe91853c140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Dec 06 09:41:04 compute-0 ceph-mgr[74618]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Dec 06 09:41:04 compute-0 ceph-mgr[74618]: mgr[py] Loading python module 'diskprediction_local'
Dec 06 09:41:04 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Dec 06 09:41:04 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Dec 06 09:41:04 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]:   from numpy import show_config as show_numpy_config
Dec 06 09:41:04 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:41:04.702+0000 7fe91853c140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Dec 06 09:41:04 compute-0 ceph-mgr[74618]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Dec 06 09:41:04 compute-0 ceph-mgr[74618]: mgr[py] Loading python module 'influx'
Dec 06 09:41:04 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e32 do_prune osdmap full prune enabled
Dec 06 09:41:04 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-2", "root=default"]}]': finished
Dec 06 09:41:04 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e33 e33: 3 total, 2 up, 3 in
Dec 06 09:41:04 compute-0 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e33: 3 total, 2 up, 3 in
Dec 06 09:41:04 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 33 pg[2.18( empty local-lis/les=24/25 n=0 ec=24/13 lis/c=24/24 les/c/f=25/25/0 sis=33 pruub=14.392019272s) [] r=-1 lpr=33 pi=[24,33)/1 crt=0'0 mlcod 0'0 active pruub 101.481727600s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 06 09:41:04 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 33 pg[2.18( empty local-lis/les=24/25 n=0 ec=24/13 lis/c=24/24 les/c/f=25/25/0 sis=33 pruub=14.392019272s) [] r=-1 lpr=33 pi=[24,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 101.481727600s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 09:41:04 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 33 pg[3.15( empty local-lis/les=29/30 n=0 ec=24/15 lis/c=29/29 les/c/f=30/30/0 sis=33 pruub=11.088809967s) [] r=-1 lpr=33 pi=[29,33)/1 crt=0'0 mlcod 0'0 active pruub 98.178733826s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 06 09:41:04 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 33 pg[4.1f( empty local-lis/les=29/30 n=0 ec=25/17 lis/c=29/29 les/c/f=30/30/0 sis=33 pruub=11.082242012s) [] r=-1 lpr=33 pi=[29,33)/1 crt=0'0 mlcod 0'0 active pruub 98.172164917s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 06 09:41:04 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 33 pg[3.15( empty local-lis/les=29/30 n=0 ec=24/15 lis/c=29/29 les/c/f=30/30/0 sis=33 pruub=11.088809967s) [] r=-1 lpr=33 pi=[29,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 98.178733826s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 09:41:04 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 33 pg[4.1f( empty local-lis/les=29/30 n=0 ec=25/17 lis/c=29/29 les/c/f=30/30/0 sis=33 pruub=11.082242012s) [] r=-1 lpr=33 pi=[29,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 98.172164917s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 09:41:04 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 33 pg[4.15( empty local-lis/les=29/30 n=0 ec=25/17 lis/c=29/29 les/c/f=30/30/0 sis=33 pruub=11.604486465s) [] r=-1 lpr=33 pi=[29,33)/1 crt=0'0 mlcod 0'0 active pruub 98.694526672s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 06 09:41:04 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 33 pg[4.15( empty local-lis/les=29/30 n=0 ec=25/17 lis/c=29/29 les/c/f=30/30/0 sis=33 pruub=11.604486465s) [] r=-1 lpr=33 pi=[29,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 98.694526672s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 09:41:04 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 33 pg[2.12( empty local-lis/les=24/25 n=0 ec=24/13 lis/c=24/24 les/c/f=25/25/0 sis=33 pruub=14.391054153s) [] r=-1 lpr=33 pi=[24,33)/1 crt=0'0 mlcod 0'0 active pruub 101.481193542s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 06 09:41:04 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 33 pg[2.12( empty local-lis/les=24/25 n=0 ec=24/13 lis/c=24/24 les/c/f=25/25/0 sis=33 pruub=14.391054153s) [] r=-1 lpr=33 pi=[24,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 101.481193542s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 09:41:04 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 33 pg[3.11( empty local-lis/les=29/30 n=0 ec=24/15 lis/c=29/29 les/c/f=30/30/0 sis=33 pruub=11.087738991s) [] r=-1 lpr=33 pi=[29,33)/1 crt=0'0 mlcod 0'0 active pruub 98.177993774s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 06 09:41:04 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 33 pg[3.11( empty local-lis/les=29/30 n=0 ec=24/15 lis/c=29/29 les/c/f=30/30/0 sis=33 pruub=11.087738991s) [] r=-1 lpr=33 pi=[29,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 98.177993774s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 09:41:04 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 33 pg[2.f( empty local-lis/les=24/25 n=0 ec=24/13 lis/c=24/24 les/c/f=25/25/0 sis=33 pruub=14.390565872s) [] r=-1 lpr=33 pi=[24,33)/1 crt=0'0 mlcod 0'0 active pruub 101.480850220s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 06 09:41:04 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 33 pg[2.f( empty local-lis/les=24/25 n=0 ec=24/13 lis/c=24/24 les/c/f=25/25/0 sis=33 pruub=14.390565872s) [] r=-1 lpr=33 pi=[24,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 101.480850220s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 09:41:04 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 33 pg[4.9( empty local-lis/les=29/30 n=0 ec=25/17 lis/c=29/29 les/c/f=30/30/0 sis=33 pruub=11.087518692s) [] r=-1 lpr=33 pi=[29,33)/1 crt=0'0 mlcod 0'0 active pruub 98.177909851s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 06 09:41:04 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 33 pg[4.9( empty local-lis/les=29/30 n=0 ec=25/17 lis/c=29/29 les/c/f=30/30/0 sis=33 pruub=11.087518692s) [] r=-1 lpr=33 pi=[29,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 98.177909851s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 09:41:04 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 33 pg[3.e( empty local-lis/les=29/30 n=0 ec=24/15 lis/c=29/29 les/c/f=30/30/0 sis=33 pruub=11.087523460s) [] r=-1 lpr=33 pi=[29,33)/1 crt=0'0 mlcod 0'0 active pruub 98.177986145s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 06 09:41:04 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 33 pg[3.e( empty local-lis/les=29/30 n=0 ec=24/15 lis/c=29/29 les/c/f=30/30/0 sis=33 pruub=11.087523460s) [] r=-1 lpr=33 pi=[29,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 98.177986145s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 09:41:04 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 33 pg[4.8( empty local-lis/les=29/30 n=0 ec=25/17 lis/c=29/29 les/c/f=30/30/0 sis=33 pruub=11.087460518s) [] r=-1 lpr=33 pi=[29,33)/1 crt=0'0 mlcod 0'0 active pruub 98.178039551s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 06 09:41:04 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 33 pg[4.8( empty local-lis/les=29/30 n=0 ec=25/17 lis/c=29/29 les/c/f=30/30/0 sis=33 pruub=11.087460518s) [] r=-1 lpr=33 pi=[29,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 98.178039551s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 09:41:04 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 33 pg[2.b( empty local-lis/les=24/25 n=0 ec=24/13 lis/c=24/24 les/c/f=25/25/0 sis=33 pruub=14.390346527s) [] r=-1 lpr=33 pi=[24,33)/1 crt=0'0 mlcod 0'0 active pruub 101.481040955s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 06 09:41:04 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 33 pg[2.b( empty local-lis/les=24/25 n=0 ec=24/13 lis/c=24/24 les/c/f=25/25/0 sis=33 pruub=14.390346527s) [] r=-1 lpr=33 pi=[24,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 101.481040955s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 09:41:04 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 33 pg[5.4( empty local-lis/les=29/30 n=0 ec=27/19 lis/c=29/29 les/c/f=30/30/0 sis=33 pruub=11.087246895s) [] r=-1 lpr=33 pi=[29,33)/1 crt=0'0 mlcod 0'0 active pruub 98.178054810s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 06 09:41:04 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 33 pg[2.5( empty local-lis/les=24/25 n=0 ec=24/13 lis/c=24/24 les/c/f=25/25/0 sis=33 pruub=14.390001297s) [] r=-1 lpr=33 pi=[24,33)/1 crt=0'0 mlcod 0'0 active pruub 101.480850220s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 06 09:41:04 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 33 pg[2.5( empty local-lis/les=24/25 n=0 ec=24/13 lis/c=24/24 les/c/f=25/25/0 sis=33 pruub=14.390001297s) [] r=-1 lpr=33 pi=[24,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 101.480850220s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 09:41:04 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 33 pg[5.4( empty local-lis/les=29/30 n=0 ec=27/19 lis/c=29/29 les/c/f=30/30/0 sis=33 pruub=11.087246895s) [] r=-1 lpr=33 pi=[29,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 98.178054810s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 09:41:04 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 33 pg[4.1( empty local-lis/les=29/30 n=0 ec=25/17 lis/c=29/29 les/c/f=30/30/0 sis=33 pruub=11.087553978s) [] r=-1 lpr=33 pi=[29,33)/1 crt=0'0 mlcod 0'0 active pruub 98.178504944s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 06 09:41:04 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 33 pg[4.1( empty local-lis/les=29/30 n=0 ec=25/17 lis/c=29/29 les/c/f=30/30/0 sis=33 pruub=11.087553978s) [] r=-1 lpr=33 pi=[29,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 98.178504944s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 09:41:04 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 33 pg[3.9( empty local-lis/les=29/30 n=0 ec=24/15 lis/c=29/29 les/c/f=30/30/0 sis=33 pruub=11.087446213s) [] r=-1 lpr=33 pi=[29,33)/1 crt=0'0 mlcod 0'0 active pruub 98.178489685s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 06 09:41:04 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 33 pg[3.9( empty local-lis/les=29/30 n=0 ec=24/15 lis/c=29/29 les/c/f=30/30/0 sis=33 pruub=11.087446213s) [] r=-1 lpr=33 pi=[29,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 98.178489685s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 09:41:04 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 33 pg[5.e( empty local-lis/les=29/30 n=0 ec=27/19 lis/c=29/29 les/c/f=30/30/0 sis=33 pruub=11.087342262s) [] r=-1 lpr=33 pi=[29,33)/1 crt=0'0 mlcod 0'0 active pruub 98.178512573s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 06 09:41:04 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 33 pg[3.1a( empty local-lis/les=29/30 n=0 ec=24/15 lis/c=29/29 les/c/f=30/30/0 sis=33 pruub=11.087305069s) [] r=-1 lpr=33 pi=[29,33)/1 crt=0'0 mlcod 0'0 active pruub 98.178497314s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 06 09:41:04 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 33 pg[5.e( empty local-lis/les=29/30 n=0 ec=27/19 lis/c=29/29 les/c/f=30/30/0 sis=33 pruub=11.087342262s) [] r=-1 lpr=33 pi=[29,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 98.178512573s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 09:41:04 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 33 pg[3.1a( empty local-lis/les=29/30 n=0 ec=24/15 lis/c=29/29 les/c/f=30/30/0 sis=33 pruub=11.087305069s) [] r=-1 lpr=33 pi=[29,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 98.178497314s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 09:41:04 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 33 pg[3.1d( empty local-lis/les=29/30 n=0 ec=24/15 lis/c=29/29 les/c/f=30/30/0 sis=33 pruub=11.087290764s) [] r=-1 lpr=33 pi=[29,33)/1 crt=0'0 mlcod 0'0 active pruub 98.178657532s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 06 09:41:04 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 33 pg[3.1d( empty local-lis/les=29/30 n=0 ec=24/15 lis/c=29/29 les/c/f=30/30/0 sis=33 pruub=11.087290764s) [] r=-1 lpr=33 pi=[29,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 98.178657532s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 09:41:04 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 33 pg[2.1c( empty local-lis/les=24/25 n=0 ec=24/13 lis/c=24/24 les/c/f=25/25/0 sis=33 pruub=14.383893967s) [] r=-1 lpr=33 pi=[24,33)/1 crt=0'0 mlcod 0'0 active pruub 101.475387573s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 06 09:41:04 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 33 pg[2.1c( empty local-lis/les=24/25 n=0 ec=24/13 lis/c=24/24 les/c/f=25/25/0 sis=33 pruub=14.383893967s) [] r=-1 lpr=33 pi=[24,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 101.475387573s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 09:41:04 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 33 pg[5.1a( empty local-lis/les=29/30 n=0 ec=27/19 lis/c=29/29 les/c/f=30/30/0 sis=33 pruub=11.087006569s) [] r=-1 lpr=33 pi=[29,33)/1 crt=0'0 mlcod 0'0 active pruub 98.178642273s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 06 09:41:04 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 33 pg[5.1a( empty local-lis/les=29/30 n=0 ec=27/19 lis/c=29/29 les/c/f=30/30/0 sis=33 pruub=11.087006569s) [] r=-1 lpr=33 pi=[29,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 98.178642273s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 09:41:04 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 33 pg[2.1d( empty local-lis/les=24/25 n=0 ec=24/13 lis/c=24/24 les/c/f=25/25/0 sis=33 pruub=14.383605957s) [] r=-1 lpr=33 pi=[24,33)/1 crt=0'0 mlcod 0'0 active pruub 101.475349426s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 06 09:41:04 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 33 pg[2.1d( empty local-lis/les=24/25 n=0 ec=24/13 lis/c=24/24 les/c/f=25/25/0 sis=33 pruub=14.383605957s) [] r=-1 lpr=33 pi=[24,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 101.475349426s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 09:41:04 compute-0 ceph-mon[74327]: 4.19 deep-scrub starts
Dec 06 09:41:04 compute-0 ceph-mon[74327]: 4.19 deep-scrub ok
Dec 06 09:41:04 compute-0 ceph-mon[74327]: from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
Dec 06 09:41:04 compute-0 ceph-mon[74327]: osdmap e32: 3 total, 2 up, 3 in
Dec 06 09:41:04 compute-0 ceph-mon[74327]: from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-2", "root=default"]}]: dispatch
Dec 06 09:41:04 compute-0 ceph-mon[74327]: from='osd.2 [v2:192.168.122.102:6800/709563040,v1:192.168.122.102:6801/709563040]' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-2", "root=default"]}]: dispatch
Dec 06 09:41:04 compute-0 ceph-mon[74327]: 3.1c scrub starts
Dec 06 09:41:04 compute-0 ceph-mon[74327]: 3.1c scrub ok
Dec 06 09:41:04 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:41:04.783+0000 7fe91853c140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Dec 06 09:41:04 compute-0 ceph-mgr[74618]: mgr[py] Module influx has missing NOTIFY_TYPES member
Dec 06 09:41:04 compute-0 ceph-mgr[74618]: mgr[py] Loading python module 'insights'
Dec 06 09:41:04 compute-0 ceph-osd[82803]: log_channel(cluster) log [DBG] : 5.18 scrub starts
Dec 06 09:41:04 compute-0 ceph-mgr[74618]: mgr[py] Loading python module 'iostat'
Dec 06 09:41:04 compute-0 ceph-osd[82803]: log_channel(cluster) log [DBG] : 5.18 scrub ok
Dec 06 09:41:04 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:41:04.939+0000 7fe91853c140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Dec 06 09:41:04 compute-0 ceph-mgr[74618]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Dec 06 09:41:04 compute-0 ceph-mgr[74618]: mgr[py] Loading python module 'k8sevents'
Dec 06 09:41:05 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e33 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 09:41:05 compute-0 ceph-mgr[74618]: mgr[py] Loading python module 'localpool'
Dec 06 09:41:05 compute-0 ceph-mgr[74618]: mgr[py] Loading python module 'mds_autoscaler'
Dec 06 09:41:05 compute-0 ceph-mgr[74618]: mgr[py] Loading python module 'mirroring'
Dec 06 09:41:05 compute-0 ceph-mgr[74618]: mgr[py] Loading python module 'nfs'
Dec 06 09:41:05 compute-0 ceph-osd[82803]: log_channel(cluster) log [DBG] : 4.18 scrub starts
Dec 06 09:41:05 compute-0 ceph-osd[82803]: log_channel(cluster) log [DBG] : 4.18 scrub ok
Dec 06 09:41:05 compute-0 ceph-mon[74327]: purged_snaps scrub starts
Dec 06 09:41:05 compute-0 ceph-mon[74327]: purged_snaps scrub ok
Dec 06 09:41:05 compute-0 ceph-mon[74327]: 3.1f scrub starts
Dec 06 09:41:05 compute-0 ceph-mon[74327]: 3.1f scrub ok
Dec 06 09:41:05 compute-0 ceph-mon[74327]: from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-2", "root=default"]}]': finished
Dec 06 09:41:05 compute-0 ceph-mon[74327]: osdmap e33: 3 total, 2 up, 3 in
Dec 06 09:41:05 compute-0 ceph-mon[74327]: 5.18 scrub starts
Dec 06 09:41:05 compute-0 ceph-mon[74327]: 5.18 scrub ok
Dec 06 09:41:05 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:41:05.982+0000 7fe91853c140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Dec 06 09:41:05 compute-0 ceph-mgr[74618]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Dec 06 09:41:05 compute-0 ceph-mgr[74618]: mgr[py] Loading python module 'orchestrator'
Dec 06 09:41:06 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:41:06.236+0000 7fe91853c140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Dec 06 09:41:06 compute-0 ceph-mgr[74618]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Dec 06 09:41:06 compute-0 ceph-mgr[74618]: mgr[py] Loading python module 'osd_perf_query'
Dec 06 09:41:06 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:41:06.314+0000 7fe91853c140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Dec 06 09:41:06 compute-0 ceph-mgr[74618]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Dec 06 09:41:06 compute-0 ceph-mgr[74618]: mgr[py] Loading python module 'osd_support'
Dec 06 09:41:06 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:41:06.386+0000 7fe91853c140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Dec 06 09:41:06 compute-0 ceph-mgr[74618]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Dec 06 09:41:06 compute-0 ceph-mgr[74618]: mgr[py] Loading python module 'pg_autoscaler'
Dec 06 09:41:06 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:41:06.469+0000 7fe91853c140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Dec 06 09:41:06 compute-0 ceph-mgr[74618]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Dec 06 09:41:06 compute-0 ceph-mgr[74618]: mgr[py] Loading python module 'progress'
Dec 06 09:41:06 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:41:06.538+0000 7fe91853c140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Dec 06 09:41:06 compute-0 ceph-mgr[74618]: mgr[py] Module progress has missing NOTIFY_TYPES member
Dec 06 09:41:06 compute-0 ceph-mgr[74618]: mgr[py] Loading python module 'prometheus'
Dec 06 09:41:06 compute-0 ceph-osd[82803]: log_channel(cluster) log [DBG] : 3.3 scrub starts
Dec 06 09:41:06 compute-0 ceph-osd[82803]: log_channel(cluster) log [DBG] : 3.3 scrub ok
Dec 06 09:41:06 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:41:06.885+0000 7fe91853c140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Dec 06 09:41:06 compute-0 ceph-mgr[74618]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Dec 06 09:41:06 compute-0 ceph-mgr[74618]: mgr[py] Loading python module 'rbd_support'
Dec 06 09:41:06 compute-0 ceph-mon[74327]: 3.1e deep-scrub starts
Dec 06 09:41:06 compute-0 ceph-mon[74327]: 3.1e deep-scrub ok
Dec 06 09:41:06 compute-0 ceph-mon[74327]: 4.18 scrub starts
Dec 06 09:41:06 compute-0 ceph-mon[74327]: 4.18 scrub ok
Dec 06 09:41:06 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:41:06.986+0000 7fe91853c140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Dec 06 09:41:06 compute-0 ceph-mgr[74618]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Dec 06 09:41:06 compute-0 ceph-mgr[74618]: mgr[py] Loading python module 'restful'
Dec 06 09:41:07 compute-0 ceph-mgr[74618]: mgr[py] Loading python module 'rgw'
Dec 06 09:41:07 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:41:07.403+0000 7fe91853c140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Dec 06 09:41:07 compute-0 ceph-mgr[74618]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Dec 06 09:41:07 compute-0 ceph-mgr[74618]: mgr[py] Loading python module 'rook'
Dec 06 09:41:07 compute-0 ceph-osd[82803]: log_channel(cluster) log [DBG] : 5.1c scrub starts
Dec 06 09:41:07 compute-0 ceph-osd[82803]: log_channel(cluster) log [DBG] : 5.1c scrub ok
Dec 06 09:41:07 compute-0 ceph-mon[74327]: 4.f scrub starts
Dec 06 09:41:07 compute-0 ceph-mon[74327]: 4.f scrub ok
Dec 06 09:41:07 compute-0 ceph-mon[74327]: 3.3 scrub starts
Dec 06 09:41:07 compute-0 ceph-mon[74327]: 3.3 scrub ok
Dec 06 09:41:08 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:41:08.023+0000 7fe91853c140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Dec 06 09:41:08 compute-0 ceph-mgr[74618]: mgr[py] Module rook has missing NOTIFY_TYPES member
Dec 06 09:41:08 compute-0 ceph-mgr[74618]: mgr[py] Loading python module 'selftest'
Dec 06 09:41:08 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:41:08.097+0000 7fe91853c140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Dec 06 09:41:08 compute-0 ceph-mgr[74618]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Dec 06 09:41:08 compute-0 ceph-mgr[74618]: mgr[py] Loading python module 'snap_schedule'
Dec 06 09:41:08 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:41:08.174+0000 7fe91853c140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Dec 06 09:41:08 compute-0 ceph-mgr[74618]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Dec 06 09:41:08 compute-0 ceph-mgr[74618]: mgr[py] Loading python module 'stats'
Dec 06 09:41:08 compute-0 ceph-mgr[74618]: mgr[py] Loading python module 'status'
Dec 06 09:41:08 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:41:08.317+0000 7fe91853c140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Dec 06 09:41:08 compute-0 ceph-mgr[74618]: mgr[py] Module status has missing NOTIFY_TYPES member
Dec 06 09:41:08 compute-0 ceph-mgr[74618]: mgr[py] Loading python module 'telegraf'
Dec 06 09:41:08 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:41:08.382+0000 7fe91853c140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Dec 06 09:41:08 compute-0 ceph-mgr[74618]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Dec 06 09:41:08 compute-0 ceph-mgr[74618]: mgr[py] Loading python module 'telemetry'
Dec 06 09:41:08 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:41:08.533+0000 7fe91853c140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Dec 06 09:41:08 compute-0 ceph-mgr[74618]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Dec 06 09:41:08 compute-0 ceph-mgr[74618]: mgr[py] Loading python module 'test_orchestrator'
Dec 06 09:41:08 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:41:08.747+0000 7fe91853c140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Dec 06 09:41:08 compute-0 ceph-mgr[74618]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Dec 06 09:41:08 compute-0 ceph-mgr[74618]: mgr[py] Loading python module 'volumes'
Dec 06 09:41:08 compute-0 ceph-mon[74327]: log_channel(cluster) log [DBG] : Standby manager daemon compute-2.oazbvn restarted
Dec 06 09:41:08 compute-0 ceph-mon[74327]: log_channel(cluster) log [DBG] : Standby manager daemon compute-2.oazbvn started
Dec 06 09:41:08 compute-0 ceph-mon[74327]: 5.19 scrub starts
Dec 06 09:41:08 compute-0 ceph-mon[74327]: 5.19 scrub ok
Dec 06 09:41:08 compute-0 ceph-mon[74327]: 5.1c scrub starts
Dec 06 09:41:08 compute-0 ceph-mon[74327]: 5.1c scrub ok
Dec 06 09:41:08 compute-0 ceph-mon[74327]: Standby manager daemon compute-2.oazbvn restarted
Dec 06 09:41:08 compute-0 ceph-mon[74327]: Standby manager daemon compute-2.oazbvn started
Dec 06 09:41:09 compute-0 ceph-mon[74327]: log_channel(cluster) log [DBG] : mgrmap e14: compute-0.qhdjwa(active, since 3m), standbys: compute-1.sauzid, compute-2.oazbvn
Dec 06 09:41:09 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:41:09.010+0000 7fe91853c140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Dec 06 09:41:09 compute-0 ceph-mgr[74618]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Dec 06 09:41:09 compute-0 ceph-mgr[74618]: mgr[py] Loading python module 'zabbix'
Dec 06 09:41:09 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:41:09.082+0000 7fe91853c140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Dec 06 09:41:09 compute-0 ceph-mgr[74618]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Dec 06 09:41:09 compute-0 ceph-mon[74327]: log_channel(cluster) log [INF] : Active manager daemon compute-0.qhdjwa restarted
Dec 06 09:41:09 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e33 do_prune osdmap full prune enabled
Dec 06 09:41:09 compute-0 ceph-mgr[74618]: ms_deliver_dispatch: unhandled message 0x55b59c857860 mon_map magic: 0 from mon.2 v2:192.168.122.101:3300/0
Dec 06 09:41:09 compute-0 ceph-mon[74327]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.qhdjwa
Dec 06 09:41:09 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e34 e34: 3 total, 2 up, 3 in
Dec 06 09:41:09 compute-0 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e34: 3 total, 2 up, 3 in
Dec 06 09:41:09 compute-0 ceph-mon[74327]: log_channel(cluster) log [DBG] : mgrmap e15: compute-0.qhdjwa(active, starting, since 0.0389693s), standbys: compute-1.sauzid, compute-2.oazbvn
Dec 06 09:41:09 compute-0 ceph-mgr[74618]: mgr handle_mgr_map Activating!
Dec 06 09:41:09 compute-0 ceph-mgr[74618]: mgr handle_mgr_map I am now activating
Dec 06 09:41:09 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Dec 06 09:41:09 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.24116 192.168.122.100:0/4088948354' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Dec 06 09:41:09 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Dec 06 09:41:09 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.24116 192.168.122.100:0/4088948354' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec 06 09:41:09 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Dec 06 09:41:09 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.24116 192.168.122.100:0/4088948354' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Dec 06 09:41:09 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.qhdjwa", "id": "compute-0.qhdjwa"} v 0)
Dec 06 09:41:09 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.24116 192.168.122.100:0/4088948354' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "mgr metadata", "who": "compute-0.qhdjwa", "id": "compute-0.qhdjwa"}]: dispatch
Dec 06 09:41:09 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-1.sauzid", "id": "compute-1.sauzid"} v 0)
Dec 06 09:41:09 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.24116 192.168.122.100:0/4088948354' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "mgr metadata", "who": "compute-1.sauzid", "id": "compute-1.sauzid"}]: dispatch
Dec 06 09:41:09 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-2.oazbvn", "id": "compute-2.oazbvn"} v 0)
Dec 06 09:41:09 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.24116 192.168.122.100:0/4088948354' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "mgr metadata", "who": "compute-2.oazbvn", "id": "compute-2.oazbvn"}]: dispatch
Dec 06 09:41:09 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec 06 09:41:09 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.24116 192.168.122.100:0/4088948354' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec 06 09:41:09 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec 06 09:41:09 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.24116 192.168.122.100:0/4088948354' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 06 09:41:09 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec 06 09:41:09 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.24116 192.168.122.100:0/4088948354' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 06 09:41:09 compute-0 ceph-mgr[74618]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec 06 09:41:09 compute-0 ceph-mgr[74618]: mgr load_all_metadata Skipping incomplete metadata entry
Dec 06 09:41:09 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds metadata"} v 0)
Dec 06 09:41:09 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.24116 192.168.122.100:0/4088948354' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "mds metadata"}]: dispatch
Dec 06 09:41:09 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).mds e1 all = 1
Dec 06 09:41:09 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata"} v 0)
Dec 06 09:41:09 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.24116 192.168.122.100:0/4088948354' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd metadata"}]: dispatch
Dec 06 09:41:09 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata"} v 0)
Dec 06 09:41:09 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.24116 192.168.122.100:0/4088948354' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "mon metadata"}]: dispatch
Dec 06 09:41:09 compute-0 ceph-mgr[74618]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 06 09:41:09 compute-0 ceph-mgr[74618]: mgr load Constructed class from module: balancer
Dec 06 09:41:09 compute-0 ceph-mgr[74618]: [cephadm DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 06 09:41:09 compute-0 ceph-mgr[74618]: [balancer INFO root] Starting
Dec 06 09:41:09 compute-0 ceph-mon[74327]: log_channel(cluster) log [INF] : Manager daemon compute-0.qhdjwa is now available
Dec 06 09:41:09 compute-0 ceph-mgr[74618]: [balancer INFO root] Optimize plan auto_2025-12-06_09:41:09
Dec 06 09:41:09 compute-0 ceph-mgr[74618]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 06 09:41:09 compute-0 ceph-mgr[74618]: [balancer INFO root] Some PGs (1.000000) are unknown; try again later
Dec 06 09:41:09 compute-0 ceph-mgr[74618]: mgr load Constructed class from module: cephadm
Dec 06 09:41:09 compute-0 ceph-mgr[74618]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 06 09:41:09 compute-0 ceph-mgr[74618]: mgr load Constructed class from module: crash
Dec 06 09:41:09 compute-0 ceph-mgr[74618]: [dashboard DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 06 09:41:09 compute-0 ceph-mgr[74618]: mgr load Constructed class from module: dashboard
Dec 06 09:41:09 compute-0 ceph-mgr[74618]: [dashboard INFO access_control] Loading user roles DB version=2
Dec 06 09:41:09 compute-0 ceph-mgr[74618]: [dashboard INFO sso] Loading SSO DB version=1
Dec 06 09:41:09 compute-0 ceph-mgr[74618]: [dashboard INFO root] server: ssl=no host=192.168.122.100 port=8443
Dec 06 09:41:09 compute-0 ceph-mgr[74618]: [dashboard INFO root] Configured CherryPy, starting engine...
Dec 06 09:41:09 compute-0 ceph-mgr[74618]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 06 09:41:09 compute-0 ceph-mgr[74618]: mgr load Constructed class from module: devicehealth
Dec 06 09:41:09 compute-0 ceph-mgr[74618]: [devicehealth INFO root] Starting
Dec 06 09:41:09 compute-0 ceph-mgr[74618]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 06 09:41:09 compute-0 ceph-mgr[74618]: mgr load Constructed class from module: iostat
Dec 06 09:41:09 compute-0 ceph-mgr[74618]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 06 09:41:09 compute-0 ceph-mgr[74618]: mgr load Constructed class from module: nfs
Dec 06 09:41:09 compute-0 ceph-mgr[74618]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 06 09:41:09 compute-0 ceph-mgr[74618]: mgr load Constructed class from module: orchestrator
Dec 06 09:41:09 compute-0 ceph-mgr[74618]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 06 09:41:09 compute-0 ceph-mgr[74618]: mgr load Constructed class from module: pg_autoscaler
Dec 06 09:41:09 compute-0 ceph-mgr[74618]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 06 09:41:09 compute-0 ceph-mgr[74618]: mgr load Constructed class from module: progress
Dec 06 09:41:09 compute-0 ceph-mgr[74618]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 06 09:41:09 compute-0 ceph-mgr[74618]: [progress INFO root] Loading...
Dec 06 09:41:09 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] _maybe_adjust
Dec 06 09:41:09 compute-0 ceph-mgr[74618]: [rbd_support INFO root] recovery thread starting
Dec 06 09:41:09 compute-0 ceph-mgr[74618]: [rbd_support INFO root] starting setup
Dec 06 09:41:09 compute-0 ceph-mgr[74618]: mgr load Constructed class from module: rbd_support
Dec 06 09:41:09 compute-0 ceph-mgr[74618]: [restful DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 06 09:41:09 compute-0 ceph-mgr[74618]: mgr load Constructed class from module: restful
Dec 06 09:41:09 compute-0 ceph-mgr[74618]: [progress INFO root] Loaded [<progress.module.GhostEvent object at 0x7fe8958ae1c0>, <progress.module.GhostEvent object at 0x7fe8958ae1f0>, <progress.module.GhostEvent object at 0x7fe8958ae220>, <progress.module.GhostEvent object at 0x7fe8958ae250>, <progress.module.GhostEvent object at 0x7fe8958ae280>, <progress.module.GhostEvent object at 0x7fe8958ae2b0>, <progress.module.GhostEvent object at 0x7fe8958ae2e0>, <progress.module.GhostEvent object at 0x7fe8958ae310>, <progress.module.GhostEvent object at 0x7fe8958ae340>, <progress.module.GhostEvent object at 0x7fe8958ae370>] historic events
Dec 06 09:41:09 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.qhdjwa/mirror_snapshot_schedule"} v 0)
Dec 06 09:41:09 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.24116 192.168.122.100:0/4088948354' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.qhdjwa/mirror_snapshot_schedule"}]: dispatch
Dec 06 09:41:09 compute-0 ceph-mgr[74618]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 06 09:41:09 compute-0 ceph-mgr[74618]: mgr load Constructed class from module: status
Dec 06 09:41:09 compute-0 ceph-mgr[74618]: [progress INFO root] Loaded OSDMap, ready.
Dec 06 09:41:09 compute-0 ceph-mgr[74618]: [restful INFO root] server_addr: :: server_port: 8003
Dec 06 09:41:09 compute-0 ceph-mgr[74618]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 06 09:41:09 compute-0 ceph-mgr[74618]: mgr load Constructed class from module: telemetry
Dec 06 09:41:09 compute-0 ceph-mgr[74618]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 06 09:41:09 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 09:41:09 compute-0 ceph-mgr[74618]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 06 09:41:09 compute-0 ceph-mgr[74618]: [restful WARNING root] server not running: no certificate configured
Dec 06 09:41:09 compute-0 ceph-mgr[74618]: mgr load Constructed class from module: volumes
Dec 06 09:41:09 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFS -> /api/cephfs
Dec 06 09:41:09 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFsUi -> /ui-api/cephfs
Dec 06 09:41:09 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSubvolume -> /api/cephfs/subvolume
Dec 06 09:41:09 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSubvolumeGroups -> /api/cephfs/subvolume/group
Dec 06 09:41:09 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSubvolumeSnapshots -> /api/cephfs/subvolume/snapshot
Dec 06 09:41:09 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFsSnapshotClone -> /api/cephfs/subvolume/snapshot/clone
Dec 06 09:41:09 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSnapshotSchedule -> /api/cephfs/snapshot/schedule
Dec 06 09:41:09 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: IscsiUi -> /ui-api/iscsi
Dec 06 09:41:09 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Iscsi -> /api/iscsi
Dec 06 09:41:09 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: IscsiTarget -> /api/iscsi/target
Dec 06 09:41:09 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NFSGaneshaCluster -> /api/nfs-ganesha/cluster
Dec 06 09:41:09 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NFSGaneshaExports -> /api/nfs-ganesha/export
Dec 06 09:41:09 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NFSGaneshaUi -> /ui-api/nfs-ganesha
Dec 06 09:41:09 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Orchestrator -> /ui-api/orchestrator
Dec 06 09:41:09 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Service -> /api/service
Dec 06 09:41:09 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroring -> /api/block/mirroring
Dec 06 09:41:09 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringSummary -> /api/block/mirroring/summary
Dec 06 09:41:09 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringPoolMode -> /api/block/mirroring/pool
Dec 06 09:41:09 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringPoolBootstrap -> /api/block/mirroring/pool/{pool_name}/bootstrap
Dec 06 09:41:09 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringPoolPeer -> /api/block/mirroring/pool/{pool_name}/peer
Dec 06 09:41:09 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringStatus -> /ui-api/block/mirroring
Dec 06 09:41:09 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Pool -> /api/pool
Dec 06 09:41:09 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PoolUi -> /ui-api/pool
Dec 06 09:41:09 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RBDPool -> /api/pool
Dec 06 09:41:09 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Rbd -> /api/block/image
Dec 06 09:41:09 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdStatus -> /ui-api/block/rbd
Dec 06 09:41:09 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdSnapshot -> /api/block/image/{image_spec}/snap
Dec 06 09:41:09 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdTrash -> /api/block/image/trash
Dec 06 09:41:09 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdNamespace -> /api/block/pool/{pool_name}/namespace
Dec 06 09:41:09 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Rgw -> /ui-api/rgw
Dec 06 09:41:09 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwMultisiteStatus -> /ui-api/rgw/multisite
Dec 06 09:41:09 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwMultisiteController -> /api/rgw/multisite
Dec 06 09:41:09 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwDaemon -> /api/rgw/daemon
Dec 06 09:41:09 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwSite -> /api/rgw/site
Dec 06 09:41:09 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwBucket -> /api/rgw/bucket
Dec 06 09:41:09 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwBucketUi -> /ui-api/rgw/bucket
Dec 06 09:41:09 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwUser -> /api/rgw/user
Dec 06 09:41:09 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: rgwroles_CRUDClass -> /api/rgw/roles
Dec 06 09:41:09 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: rgwroles_CRUDClassMetadata -> /ui-api/rgw/roles
Dec 06 09:41:09 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwRealm -> /api/rgw/realm
Dec 06 09:41:09 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwZonegroup -> /api/rgw/zonegroup
Dec 06 09:41:09 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwZone -> /api/rgw/zone
Dec 06 09:41:09 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Auth -> /api/auth
Dec 06 09:41:09 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: clusteruser_CRUDClass -> /api/cluster/user
Dec 06 09:41:09 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: clusteruser_CRUDClassMetadata -> /ui-api/cluster/user
Dec 06 09:41:09 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Cluster -> /api/cluster
Dec 06 09:41:09 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ClusterUpgrade -> /api/cluster/upgrade
Dec 06 09:41:09 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ClusterConfiguration -> /api/cluster_conf
Dec 06 09:41:09 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CrushRule -> /api/crush_rule
Dec 06 09:41:09 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CrushRuleUi -> /ui-api/crush_rule
Dec 06 09:41:09 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Daemon -> /api/daemon
Dec 06 09:41:09 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Docs -> /docs
Dec 06 09:41:09 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ErasureCodeProfile -> /api/erasure_code_profile
Dec 06 09:41:09 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ErasureCodeProfileUi -> /ui-api/erasure_code_profile
Dec 06 09:41:09 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeedbackController -> /api/feedback
Dec 06 09:41:09 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeedbackApiController -> /api/feedback/api_key
Dec 06 09:41:09 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeedbackUiController -> /ui-api/feedback/api_key
Dec 06 09:41:09 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FrontendLogging -> /ui-api/logging
Dec 06 09:41:09 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Grafana -> /api/grafana
Dec 06 09:41:09 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Host -> /api/host
Dec 06 09:41:09 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: HostUi -> /ui-api/host
Dec 06 09:41:09 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Health -> /api/health
Dec 06 09:41:09 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: HomeController -> /
Dec 06 09:41:09 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: LangsController -> /ui-api/langs
Dec 06 09:41:09 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: LoginController -> /ui-api/login
Dec 06 09:41:09 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Logs -> /api/logs
Dec 06 09:41:09 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MgrModules -> /api/mgr/module
Dec 06 09:41:09 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Monitor -> /api/monitor
Dec 06 09:41:09 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFGateway -> /api/nvmeof/gateway
Dec 06 09:41:09 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFSpdk -> /api/nvmeof/spdk
Dec 06 09:41:09 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFSubsystem -> /api/nvmeof/subsystem
Dec 06 09:41:09 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFListener -> /api/nvmeof/subsystem/{nqn}/listener
Dec 06 09:41:09 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFNamespace -> /api/nvmeof/subsystem/{nqn}/namespace
Dec 06 09:41:09 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFHost -> /api/nvmeof/subsystem/{nqn}/host
Dec 06 09:41:09 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFConnection -> /api/nvmeof/subsystem/{nqn}/connection
Dec 06 09:41:09 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFTcpUI -> /ui-api/nvmeof
Dec 06 09:41:09 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Osd -> /api/osd
Dec 06 09:41:09 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: OsdUi -> /ui-api/osd
Dec 06 09:41:09 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: OsdFlagsController -> /api/osd/flags
Dec 06 09:41:09 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MdsPerfCounter -> /api/perf_counters/mds
Dec 06 09:41:09 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MonPerfCounter -> /api/perf_counters/mon
Dec 06 09:41:09 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: OsdPerfCounter -> /api/perf_counters/osd
Dec 06 09:41:09 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwPerfCounter -> /api/perf_counters/rgw
Dec 06 09:41:09 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirrorPerfCounter -> /api/perf_counters/rbd-mirror
Dec 06 09:41:09 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MgrPerfCounter -> /api/perf_counters/mgr
Dec 06 09:41:09 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: TcmuRunnerPerfCounter -> /api/perf_counters/tcmu-runner
Dec 06 09:41:09 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PerfCounters -> /api/perf_counters
Dec 06 09:41:09 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PrometheusReceiver -> /api/prometheus_receiver
Dec 06 09:41:09 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Prometheus -> /api/prometheus
Dec 06 09:41:09 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PrometheusNotifications -> /api/prometheus/notifications
Dec 06 09:41:09 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PrometheusSettings -> /ui-api/prometheus
Dec 06 09:41:09 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Role -> /api/role
Dec 06 09:41:09 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Scope -> /ui-api/scope
Dec 06 09:41:09 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Saml2 -> /auth/saml2
Dec 06 09:41:09 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Settings -> /api/settings
Dec 06 09:41:09 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: StandardSettings -> /ui-api/standard_settings
Dec 06 09:41:09 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Summary -> /api/summary
Dec 06 09:41:09 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Task -> /api/task
Dec 06 09:41:09 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Telemetry -> /api/telemetry
Dec 06 09:41:09 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: User -> /api/user
Dec 06 09:41:09 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: UserPasswordPolicy -> /api/user
Dec 06 09:41:09 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: UserChangePassword -> /api/user/{username}
Dec 06 09:41:09 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeatureTogglesEndpoint -> /api/feature_toggles
Dec 06 09:41:09 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MessageOfTheDay -> /ui-api/motd
Dec 06 09:41:09 compute-0 sshd-session[88475]: Accepted publickey for ceph-admin from 192.168.122.100 port 56302 ssh2: RSA SHA256:Gxeh0g0CuyN5zOpDUv+8o0JynyC1ASnaMny1857KGxo
Dec 06 09:41:09 compute-0 systemd-logind[795]: New session 34 of user ceph-admin.
Dec 06 09:41:09 compute-0 systemd[1]: Started Session 34 of User ceph-admin.
Dec 06 09:41:09 compute-0 sshd-session[88475]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Dec 06 09:41:09 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.module] Engine started.
Dec 06 09:41:09 compute-0 sudo[88491]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 09:41:09 compute-0 sudo[88491]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:41:09 compute-0 sudo[88491]: pam_unix(sudo:session): session closed for user root
Dec 06 09:41:10 compute-0 ceph-mon[74327]: 2.19 scrub starts
Dec 06 09:41:10 compute-0 ceph-mon[74327]: 2.19 scrub ok
Dec 06 09:41:10 compute-0 ceph-mon[74327]: mgrmap e14: compute-0.qhdjwa(active, since 3m), standbys: compute-1.sauzid, compute-2.oazbvn
Dec 06 09:41:10 compute-0 ceph-mon[74327]: Active manager daemon compute-0.qhdjwa restarted
Dec 06 09:41:10 compute-0 ceph-mon[74327]: Activating manager daemon compute-0.qhdjwa
Dec 06 09:41:10 compute-0 ceph-mon[74327]: osdmap e34: 3 total, 2 up, 3 in
Dec 06 09:41:10 compute-0 ceph-mon[74327]: mgrmap e15: compute-0.qhdjwa(active, starting, since 0.0389693s), standbys: compute-1.sauzid, compute-2.oazbvn
Dec 06 09:41:10 compute-0 ceph-mon[74327]: from='mgr.24116 192.168.122.100:0/4088948354' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Dec 06 09:41:10 compute-0 ceph-mon[74327]: from='mgr.24116 192.168.122.100:0/4088948354' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec 06 09:41:10 compute-0 ceph-mon[74327]: from='mgr.24116 192.168.122.100:0/4088948354' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Dec 06 09:41:10 compute-0 ceph-mon[74327]: from='mgr.24116 192.168.122.100:0/4088948354' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "mgr metadata", "who": "compute-0.qhdjwa", "id": "compute-0.qhdjwa"}]: dispatch
Dec 06 09:41:10 compute-0 ceph-mon[74327]: from='mgr.24116 192.168.122.100:0/4088948354' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "mgr metadata", "who": "compute-1.sauzid", "id": "compute-1.sauzid"}]: dispatch
Dec 06 09:41:10 compute-0 ceph-mon[74327]: from='mgr.24116 192.168.122.100:0/4088948354' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "mgr metadata", "who": "compute-2.oazbvn", "id": "compute-2.oazbvn"}]: dispatch
Dec 06 09:41:10 compute-0 ceph-mon[74327]: from='mgr.24116 192.168.122.100:0/4088948354' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec 06 09:41:10 compute-0 ceph-mon[74327]: from='mgr.24116 192.168.122.100:0/4088948354' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 06 09:41:10 compute-0 ceph-mon[74327]: from='mgr.24116 192.168.122.100:0/4088948354' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 06 09:41:10 compute-0 ceph-mon[74327]: from='mgr.24116 192.168.122.100:0/4088948354' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "mds metadata"}]: dispatch
Dec 06 09:41:10 compute-0 ceph-mon[74327]: from='mgr.24116 192.168.122.100:0/4088948354' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd metadata"}]: dispatch
Dec 06 09:41:10 compute-0 ceph-mon[74327]: from='mgr.24116 192.168.122.100:0/4088948354' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "mon metadata"}]: dispatch
Dec 06 09:41:10 compute-0 ceph-mon[74327]: Manager daemon compute-0.qhdjwa is now available
Dec 06 09:41:10 compute-0 ceph-mon[74327]: from='mgr.24116 192.168.122.100:0/4088948354' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.qhdjwa/mirror_snapshot_schedule"}]: dispatch
Dec 06 09:41:10 compute-0 sudo[88516]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ls
Dec 06 09:41:10 compute-0 sudo[88516]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:41:10 compute-0 ceph-mon[74327]: log_channel(cluster) log [DBG] : mgrmap e16: compute-0.qhdjwa(active, since 1.10991s), standbys: compute-1.sauzid, compute-2.oazbvn
Dec 06 09:41:10 compute-0 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.14313 -' entity='client.admin' cmd=[{"prefix": "dashboard set-grafana-api-username", "value": "admin", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 09:41:10 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/GRAFANA_API_USERNAME}] v 0)
Dec 06 09:41:10 compute-0 ceph-mgr[74618]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/709563040; not ready for session (expect reconnect)
Dec 06 09:41:10 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v3: 131 pgs: 87 active+clean, 44 unknown; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec 06 09:41:10 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec 06 09:41:10 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.24116 192.168.122.100:0/4088948354' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 06 09:41:10 compute-0 ceph-mgr[74618]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec 06 09:41:10 compute-0 ceph-mon[74327]: log_channel(cluster) log [DBG] : Standby manager daemon compute-1.sauzid restarted
Dec 06 09:41:10 compute-0 ceph-mon[74327]: log_channel(cluster) log [DBG] : Standby manager daemon compute-1.sauzid started
Dec 06 09:41:10 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.24116 192.168.122.100:0/4088948354' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:41:10 compute-0 fervent_hugle[88339]: Option GRAFANA_API_USERNAME updated
Dec 06 09:41:10 compute-0 systemd[1]: libpod-7e5a9574a48f9b5450975b8c693e080208c921208400ea160220de229a3fdaa6.scope: Deactivated successfully.
Dec 06 09:41:10 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e34 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 09:41:10 compute-0 podman[88549]: 2025-12-06 09:41:10.313395405 +0000 UTC m=+0.028701399 container died 7e5a9574a48f9b5450975b8c693e080208c921208400ea160220de229a3fdaa6 (image=quay.io/ceph/ceph:v19, name=fervent_hugle, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Dec 06 09:41:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-041babab4756ad2ecd8fdafb3c956d7adc98cb75526efa0bee1013881aca8318-merged.mount: Deactivated successfully.
Dec 06 09:41:10 compute-0 podman[88549]: 2025-12-06 09:41:10.36637539 +0000 UTC m=+0.081681404 container remove 7e5a9574a48f9b5450975b8c693e080208c921208400ea160220de229a3fdaa6 (image=quay.io/ceph/ceph:v19, name=fervent_hugle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325)
Dec 06 09:41:10 compute-0 systemd[1]: libpod-conmon-7e5a9574a48f9b5450975b8c693e080208c921208400ea160220de229a3fdaa6.scope: Deactivated successfully.
Dec 06 09:41:10 compute-0 sudo[88309]: pam_unix(sudo:session): session closed for user root
Dec 06 09:41:10 compute-0 sudo[88634]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tyfxaijsoconpyseazqioivlvwlyopju ; /usr/bin/python3'
Dec 06 09:41:10 compute-0 sudo[88634]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:41:10 compute-0 podman[88650]: 2025-12-06 09:41:10.664011647 +0000 UTC m=+0.051338184 container exec 484d6ed1039c50317cf4b6067525b7ed0f8de7c568c9445500e62194ab25d04d (image=quay.io/ceph/ceph:v19, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 06 09:41:10 compute-0 python3[88636]: ansible-ansible.legacy.command Invoked with stdin=/home/grafana_password.yml stdin_add_newline=False _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 5ecd3f74-dade-5fc4-92ce-8950ae424258 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   dashboard set-grafana-api-password -i - _uses_shell=False strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None
Dec 06 09:41:10 compute-0 podman[88670]: 2025-12-06 09:41:10.739128602 +0000 UTC m=+0.040441520 container create 90e266261e600d2256c080d83eb351fd92acee66b8dce287bc228e1d59df6c0f (image=quay.io/ceph/ceph:v19, name=trusting_panini, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Dec 06 09:41:10 compute-0 podman[88650]: 2025-12-06 09:41:10.768852251 +0000 UTC m=+0.156178758 container exec_died 484d6ed1039c50317cf4b6067525b7ed0f8de7c568c9445500e62194ab25d04d (image=quay.io/ceph/ceph:v19, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mon-compute-0, org.label-schema.build-date=20250325, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 09:41:10 compute-0 systemd[1]: Started libpod-conmon-90e266261e600d2256c080d83eb351fd92acee66b8dce287bc228e1d59df6c0f.scope.
Dec 06 09:41:10 compute-0 systemd[1]: Started libcrun container.
Dec 06 09:41:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/60f12ebb33d966a6447e3fa75371c8972853edd8a1190c4ab1e7a92ec04a5881/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 09:41:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/60f12ebb33d966a6447e3fa75371c8972853edd8a1190c4ab1e7a92ec04a5881/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec 06 09:41:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/60f12ebb33d966a6447e3fa75371c8972853edd8a1190c4ab1e7a92ec04a5881/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 09:41:10 compute-0 podman[88670]: 2025-12-06 09:41:10.719819281 +0000 UTC m=+0.021132219 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 06 09:41:10 compute-0 podman[88670]: 2025-12-06 09:41:10.822019801 +0000 UTC m=+0.123332759 container init 90e266261e600d2256c080d83eb351fd92acee66b8dce287bc228e1d59df6c0f (image=quay.io/ceph/ceph:v19, name=trusting_panini, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Dec 06 09:41:10 compute-0 podman[88670]: 2025-12-06 09:41:10.831666427 +0000 UTC m=+0.132979335 container start 90e266261e600d2256c080d83eb351fd92acee66b8dce287bc228e1d59df6c0f (image=quay.io/ceph/ceph:v19, name=trusting_panini, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 09:41:10 compute-0 podman[88670]: 2025-12-06 09:41:10.835533799 +0000 UTC m=+0.136846777 container attach 90e266261e600d2256c080d83eb351fd92acee66b8dce287bc228e1d59df6c0f (image=quay.io/ceph/ceph:v19, name=trusting_panini, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 09:41:11 compute-0 sudo[88516]: pam_unix(sudo:session): session closed for user root
Dec 06 09:41:11 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec 06 09:41:11 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 06 09:41:11 compute-0 ceph-mgr[74618]: [cephadm INFO cherrypy.error] [06/Dec/2025:09:41:11] ENGINE Bus STARTING
Dec 06 09:41:11 compute-0 ceph-mgr[74618]: log_channel(cephadm) log [INF] : [06/Dec/2025:09:41:11] ENGINE Bus STARTING
Dec 06 09:41:11 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.24116 192.168.122.100:0/4088948354' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:41:11 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec 06 09:41:11 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.24116 192.168.122.100:0/4088948354' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:41:11 compute-0 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.14337 -' entity='client.admin' cmd=[{"prefix": "dashboard set-grafana-api-password", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 09:41:11 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 06 09:41:11 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/GRAFANA_API_PASSWORD}] v 0)
Dec 06 09:41:11 compute-0 ceph-mgr[74618]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/709563040; not ready for session (expect reconnect)
Dec 06 09:41:11 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec 06 09:41:11 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.24116 192.168.122.100:0/4088948354' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 06 09:41:11 compute-0 ceph-mgr[74618]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec 06 09:41:11 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.24116 192.168.122.100:0/4088948354' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:41:11 compute-0 ceph-mgr[74618]: [cephadm INFO cherrypy.error] [06/Dec/2025:09:41:11] ENGINE Serving on https://192.168.122.100:7150
Dec 06 09:41:11 compute-0 ceph-mgr[74618]: log_channel(cephadm) log [INF] : [06/Dec/2025:09:41:11] ENGINE Serving on https://192.168.122.100:7150
Dec 06 09:41:11 compute-0 ceph-mgr[74618]: [cephadm INFO cherrypy.error] [06/Dec/2025:09:41:11] ENGINE Client ('192.168.122.100', 54474) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Dec 06 09:41:11 compute-0 ceph-mgr[74618]: log_channel(cephadm) log [INF] : [06/Dec/2025:09:41:11] ENGINE Client ('192.168.122.100', 54474) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Dec 06 09:41:11 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e34 do_prune osdmap full prune enabled
Dec 06 09:41:11 compute-0 ceph-mon[74327]: mgrmap e16: compute-0.qhdjwa(active, since 1.10991s), standbys: compute-1.sauzid, compute-2.oazbvn
Dec 06 09:41:11 compute-0 ceph-mon[74327]: from='client.14313 -' entity='client.admin' cmd=[{"prefix": "dashboard set-grafana-api-username", "value": "admin", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 09:41:11 compute-0 ceph-mon[74327]: pgmap v3: 131 pgs: 87 active+clean, 44 unknown; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec 06 09:41:11 compute-0 ceph-mon[74327]: from='mgr.24116 192.168.122.100:0/4088948354' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 06 09:41:11 compute-0 ceph-mon[74327]: Standby manager daemon compute-1.sauzid restarted
Dec 06 09:41:11 compute-0 ceph-mon[74327]: Standby manager daemon compute-1.sauzid started
Dec 06 09:41:11 compute-0 ceph-mon[74327]: OSD bench result of 3012.211775 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.2. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Dec 06 09:41:11 compute-0 ceph-mon[74327]: from='mgr.24116 192.168.122.100:0/4088948354' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:41:11 compute-0 ceph-mon[74327]: from='mgr.24116 192.168.122.100:0/4088948354' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:41:11 compute-0 ceph-mon[74327]: from='mgr.24116 192.168.122.100:0/4088948354' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:41:11 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.24116 192.168.122.100:0/4088948354' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:41:11 compute-0 ceph-mon[74327]: log_channel(cluster) log [DBG] : mgrmap e17: compute-0.qhdjwa(active, since 2s), standbys: compute-1.sauzid, compute-2.oazbvn
Dec 06 09:41:11 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e35 e35: 3 total, 3 up, 3 in
Dec 06 09:41:11 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.24116 192.168.122.100:0/4088948354' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:41:11 compute-0 ceph-mon[74327]: log_channel(cluster) log [INF] : osd.2 [v2:192.168.122.102:6800/709563040,v1:192.168.122.102:6801/709563040] boot
Dec 06 09:41:11 compute-0 trusting_panini[88692]: Option GRAFANA_API_PASSWORD updated
Dec 06 09:41:11 compute-0 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e35: 3 total, 3 up, 3 in
Dec 06 09:41:11 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec 06 09:41:11 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.24116 192.168.122.100:0/4088948354' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 06 09:41:11 compute-0 systemd[1]: libpod-90e266261e600d2256c080d83eb351fd92acee66b8dce287bc228e1d59df6c0f.scope: Deactivated successfully.
Dec 06 09:41:11 compute-0 podman[88670]: 2025-12-06 09:41:11.312323549 +0000 UTC m=+0.613636487 container died 90e266261e600d2256c080d83eb351fd92acee66b8dce287bc228e1d59df6c0f (image=quay.io/ceph/ceph:v19, name=trusting_panini, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Dec 06 09:41:11 compute-0 ceph-mgr[74618]: [cephadm INFO cherrypy.error] [06/Dec/2025:09:41:11] ENGINE Serving on http://192.168.122.100:8765
Dec 06 09:41:11 compute-0 ceph-mgr[74618]: log_channel(cephadm) log [INF] : [06/Dec/2025:09:41:11] ENGINE Serving on http://192.168.122.100:8765
Dec 06 09:41:11 compute-0 ceph-mgr[74618]: [cephadm INFO cherrypy.error] [06/Dec/2025:09:41:11] ENGINE Bus STARTED
Dec 06 09:41:11 compute-0 ceph-mgr[74618]: log_channel(cephadm) log [INF] : [06/Dec/2025:09:41:11] ENGINE Bus STARTED
Dec 06 09:41:11 compute-0 sudo[88793]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 09:41:11 compute-0 sudo[88793]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:41:11 compute-0 sudo[88793]: pam_unix(sudo:session): session closed for user root
Dec 06 09:41:11 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v5: 131 pgs: 87 active+clean, 44 unknown; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec 06 09:41:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-60f12ebb33d966a6447e3fa75371c8972853edd8a1190c4ab1e7a92ec04a5881-merged.mount: Deactivated successfully.
Dec 06 09:41:11 compute-0 podman[88670]: 2025-12-06 09:41:11.358305133 +0000 UTC m=+0.659618071 container remove 90e266261e600d2256c080d83eb351fd92acee66b8dce287bc228e1d59df6c0f (image=quay.io/ceph/ceph:v19, name=trusting_panini, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=squid)
Dec 06 09:41:11 compute-0 systemd[1]: libpod-conmon-90e266261e600d2256c080d83eb351fd92acee66b8dce287bc228e1d59df6c0f.scope: Deactivated successfully.
Dec 06 09:41:11 compute-0 sudo[88634]: pam_unix(sudo:session): session closed for user root
Dec 06 09:41:11 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 35 pg[4.1f( empty local-lis/les=29/30 n=0 ec=25/17 lis/c=29/29 les/c/f=30/30/0 sis=35 pruub=4.463917732s) [2] r=-1 lpr=35 pi=[29,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 98.172164917s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 06 09:41:11 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 35 pg[4.1f( empty local-lis/les=29/30 n=0 ec=25/17 lis/c=29/29 les/c/f=30/30/0 sis=35 pruub=4.463890076s) [2] r=-1 lpr=35 pi=[29,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 98.172164917s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 09:41:11 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 35 pg[2.18( empty local-lis/les=24/25 n=0 ec=24/13 lis/c=24/24 les/c/f=25/25/0 sis=35 pruub=7.773426533s) [2] r=-1 lpr=35 pi=[24,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 101.481727600s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 06 09:41:11 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 35 pg[2.18( empty local-lis/les=24/25 n=0 ec=24/13 lis/c=24/24 les/c/f=25/25/0 sis=35 pruub=7.773363113s) [2] r=-1 lpr=35 pi=[24,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 101.481727600s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 09:41:11 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 35 pg[3.15( empty local-lis/les=29/30 n=0 ec=24/15 lis/c=29/29 les/c/f=30/30/0 sis=35 pruub=4.470203400s) [2] r=-1 lpr=35 pi=[29,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 98.178733826s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 06 09:41:11 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 35 pg[3.15( empty local-lis/les=29/30 n=0 ec=24/15 lis/c=29/29 les/c/f=30/30/0 sis=35 pruub=4.470160484s) [2] r=-1 lpr=35 pi=[29,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 98.178733826s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 09:41:11 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 35 pg[2.12( empty local-lis/les=24/25 n=0 ec=24/13 lis/c=24/24 les/c/f=25/25/0 sis=35 pruub=7.772573471s) [2] r=-1 lpr=35 pi=[24,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 101.481193542s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 06 09:41:11 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 35 pg[2.12( empty local-lis/les=24/25 n=0 ec=24/13 lis/c=24/24 les/c/f=25/25/0 sis=35 pruub=7.772561550s) [2] r=-1 lpr=35 pi=[24,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 101.481193542s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 09:41:11 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 35 pg[2.f( empty local-lis/les=24/25 n=0 ec=24/13 lis/c=24/24 les/c/f=25/25/0 sis=35 pruub=7.772043228s) [2] r=-1 lpr=35 pi=[24,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 101.480850220s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 06 09:41:11 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 35 pg[2.f( empty local-lis/les=24/25 n=0 ec=24/13 lis/c=24/24 les/c/f=25/25/0 sis=35 pruub=7.772028446s) [2] r=-1 lpr=35 pi=[24,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 101.480850220s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 09:41:11 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 35 pg[3.e( empty local-lis/les=29/30 n=0 ec=24/15 lis/c=29/29 les/c/f=30/30/0 sis=35 pruub=4.469065666s) [2] r=-1 lpr=35 pi=[29,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 98.177986145s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 06 09:41:11 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 35 pg[3.e( empty local-lis/les=29/30 n=0 ec=24/15 lis/c=29/29 les/c/f=30/30/0 sis=35 pruub=4.469051361s) [2] r=-1 lpr=35 pi=[29,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 98.177986145s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 09:41:11 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 35 pg[3.11( empty local-lis/les=29/30 n=0 ec=24/15 lis/c=29/29 les/c/f=30/30/0 sis=35 pruub=4.468970776s) [2] r=-1 lpr=35 pi=[29,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 98.177993774s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 06 09:41:11 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 35 pg[4.8( empty local-lis/les=29/30 n=0 ec=25/17 lis/c=29/29 les/c/f=30/30/0 sis=35 pruub=4.469011784s) [2] r=-1 lpr=35 pi=[29,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 98.178039551s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 06 09:41:11 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 35 pg[4.8( empty local-lis/les=29/30 n=0 ec=25/17 lis/c=29/29 les/c/f=30/30/0 sis=35 pruub=4.469001770s) [2] r=-1 lpr=35 pi=[29,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 98.178039551s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 09:41:11 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 35 pg[3.11( empty local-lis/les=29/30 n=0 ec=24/15 lis/c=29/29 les/c/f=30/30/0 sis=35 pruub=4.468954563s) [2] r=-1 lpr=35 pi=[29,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 98.177993774s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 09:41:11 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 35 pg[4.15( empty local-lis/les=29/30 n=0 ec=25/17 lis/c=29/29 les/c/f=30/30/0 sis=35 pruub=4.985382080s) [2] r=-1 lpr=35 pi=[29,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 98.694526672s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 06 09:41:11 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 35 pg[4.9( empty local-lis/les=29/30 n=0 ec=25/17 lis/c=29/29 les/c/f=30/30/0 sis=35 pruub=4.468731880s) [2] r=-1 lpr=35 pi=[29,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 98.177909851s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 06 09:41:11 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 35 pg[4.9( empty local-lis/les=29/30 n=0 ec=25/17 lis/c=29/29 les/c/f=30/30/0 sis=35 pruub=4.468718529s) [2] r=-1 lpr=35 pi=[29,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 98.177909851s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 09:41:11 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 35 pg[4.15( empty local-lis/les=29/30 n=0 ec=25/17 lis/c=29/29 les/c/f=30/30/0 sis=35 pruub=4.985312939s) [2] r=-1 lpr=35 pi=[29,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 98.694526672s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 09:41:11 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 35 pg[5.4( empty local-lis/les=29/30 n=0 ec=27/19 lis/c=29/29 les/c/f=30/30/0 sis=35 pruub=4.468796253s) [2] r=-1 lpr=35 pi=[29,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 98.178054810s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 06 09:41:11 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 35 pg[2.b( empty local-lis/les=24/25 n=0 ec=24/13 lis/c=24/24 les/c/f=25/25/0 sis=35 pruub=7.771787167s) [2] r=-1 lpr=35 pi=[24,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 101.481040955s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 06 09:41:11 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 35 pg[5.4( empty local-lis/les=29/30 n=0 ec=27/19 lis/c=29/29 les/c/f=30/30/0 sis=35 pruub=4.468785763s) [2] r=-1 lpr=35 pi=[29,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 98.178054810s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 09:41:11 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 35 pg[2.b( empty local-lis/les=24/25 n=0 ec=24/13 lis/c=24/24 les/c/f=25/25/0 sis=35 pruub=7.771754265s) [2] r=-1 lpr=35 pi=[24,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 101.481040955s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 09:41:11 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 35 pg[2.5( empty local-lis/les=24/25 n=0 ec=24/13 lis/c=24/24 les/c/f=25/25/0 sis=35 pruub=7.771402836s) [2] r=-1 lpr=35 pi=[24,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 101.480850220s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 06 09:41:11 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 35 pg[2.5( empty local-lis/les=24/25 n=0 ec=24/13 lis/c=24/24 les/c/f=25/25/0 sis=35 pruub=7.771388054s) [2] r=-1 lpr=35 pi=[24,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 101.480850220s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 09:41:11 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 35 pg[4.1( empty local-lis/les=29/30 n=0 ec=25/17 lis/c=29/29 les/c/f=30/30/0 sis=35 pruub=4.468976498s) [2] r=-1 lpr=35 pi=[29,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 98.178504944s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 06 09:41:11 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 35 pg[4.1( empty local-lis/les=29/30 n=0 ec=25/17 lis/c=29/29 les/c/f=30/30/0 sis=35 pruub=4.468938828s) [2] r=-1 lpr=35 pi=[29,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 98.178504944s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 09:41:11 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 35 pg[3.9( empty local-lis/les=29/30 n=0 ec=24/15 lis/c=29/29 les/c/f=30/30/0 sis=35 pruub=4.468895435s) [2] r=-1 lpr=35 pi=[29,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 98.178489685s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 06 09:41:11 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 35 pg[3.9( empty local-lis/les=29/30 n=0 ec=24/15 lis/c=29/29 les/c/f=30/30/0 sis=35 pruub=4.468878746s) [2] r=-1 lpr=35 pi=[29,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 98.178489685s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 09:41:11 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 35 pg[5.e( empty local-lis/les=29/30 n=0 ec=27/19 lis/c=29/29 les/c/f=30/30/0 sis=35 pruub=4.468868732s) [2] r=-1 lpr=35 pi=[29,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 98.178512573s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 06 09:41:11 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 35 pg[3.1a( empty local-lis/les=29/30 n=0 ec=24/15 lis/c=29/29 les/c/f=30/30/0 sis=35 pruub=4.468836308s) [2] r=-1 lpr=35 pi=[29,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 98.178497314s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 06 09:41:11 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 35 pg[5.e( empty local-lis/les=29/30 n=0 ec=27/19 lis/c=29/29 les/c/f=30/30/0 sis=35 pruub=4.468852043s) [2] r=-1 lpr=35 pi=[29,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 98.178512573s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 09:41:11 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 35 pg[3.1a( empty local-lis/les=29/30 n=0 ec=24/15 lis/c=29/29 les/c/f=30/30/0 sis=35 pruub=4.468826771s) [2] r=-1 lpr=35 pi=[29,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 98.178497314s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 09:41:11 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 35 pg[3.1d( empty local-lis/les=29/30 n=0 ec=24/15 lis/c=29/29 les/c/f=30/30/0 sis=35 pruub=4.468916893s) [2] r=-1 lpr=35 pi=[29,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 98.178657532s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 06 09:41:11 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 35 pg[2.1c( empty local-lis/les=24/25 n=0 ec=24/13 lis/c=24/24 les/c/f=25/25/0 sis=35 pruub=7.765595436s) [2] r=-1 lpr=35 pi=[24,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 101.475387573s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 06 09:41:11 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 35 pg[5.1a( empty local-lis/les=29/30 n=0 ec=27/19 lis/c=29/29 les/c/f=30/30/0 sis=35 pruub=4.468838692s) [2] r=-1 lpr=35 pi=[29,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 98.178642273s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 06 09:41:11 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 35 pg[2.1c( empty local-lis/les=24/25 n=0 ec=24/13 lis/c=24/24 les/c/f=25/25/0 sis=35 pruub=7.765582561s) [2] r=-1 lpr=35 pi=[24,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 101.475387573s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 09:41:11 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 35 pg[5.1a( empty local-lis/les=29/30 n=0 ec=27/19 lis/c=29/29 les/c/f=30/30/0 sis=35 pruub=4.468828678s) [2] r=-1 lpr=35 pi=[29,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 98.178642273s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 09:41:11 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 35 pg[3.1d( empty local-lis/les=29/30 n=0 ec=24/15 lis/c=29/29 les/c/f=30/30/0 sis=35 pruub=4.468902588s) [2] r=-1 lpr=35 pi=[29,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 98.178657532s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 09:41:11 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 35 pg[2.1d( empty local-lis/les=24/25 n=0 ec=24/13 lis/c=24/24 les/c/f=25/25/0 sis=35 pruub=7.765411377s) [2] r=-1 lpr=35 pi=[24,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 101.475349426s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 06 09:41:11 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 35 pg[2.1d( empty local-lis/les=24/25 n=0 ec=24/13 lis/c=24/24 les/c/f=25/25/0 sis=35 pruub=7.765394211s) [2] r=-1 lpr=35 pi=[24,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 101.475349426s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 09:41:11 compute-0 sudo[88831]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Dec 06 09:41:11 compute-0 sudo[88831]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:41:11 compute-0 ceph-mgr[74618]: [devicehealth INFO root] Check health
Dec 06 09:41:11 compute-0 sudo[88890]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bmeafazkhfnrtmmtilyowfqdeommmata ; /usr/bin/python3'
Dec 06 09:41:11 compute-0 sudo[88890]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:41:11 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec 06 09:41:11 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.24116 192.168.122.100:0/4088948354' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:41:11 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec 06 09:41:11 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.24116 192.168.122.100:0/4088948354' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:41:11 compute-0 python3[88894]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 5ecd3f74-dade-5fc4-92ce-8950ae424258 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   dashboard set-alertmanager-api-host http://192.168.122.100:9093 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 09:41:11 compute-0 podman[88910]: 2025-12-06 09:41:11.782316185 +0000 UTC m=+0.045273582 container create c1caf1ac90d8bd74cdc0c1d7f0cb7bb22846ae23b336c52dc3c7b9738cf52deb (image=quay.io/ceph/ceph:v19, name=nervous_satoshi, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=squid)
Dec 06 09:41:11 compute-0 systemd[1]: Started libpod-conmon-c1caf1ac90d8bd74cdc0c1d7f0cb7bb22846ae23b336c52dc3c7b9738cf52deb.scope.
Dec 06 09:41:11 compute-0 podman[88910]: 2025-12-06 09:41:11.7609849 +0000 UTC m=+0.023942297 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 06 09:41:11 compute-0 systemd[1]: Started libcrun container.
Dec 06 09:41:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/988532eccd19b6a5d3c60c743a657ff11fa277ebda444ac03330415b43c609f1/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec 06 09:41:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/988532eccd19b6a5d3c60c743a657ff11fa277ebda444ac03330415b43c609f1/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 09:41:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/988532eccd19b6a5d3c60c743a657ff11fa277ebda444ac03330415b43c609f1/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 09:41:11 compute-0 podman[88910]: 2025-12-06 09:41:11.87834214 +0000 UTC m=+0.141299587 container init c1caf1ac90d8bd74cdc0c1d7f0cb7bb22846ae23b336c52dc3c7b9738cf52deb (image=quay.io/ceph/ceph:v19, name=nervous_satoshi, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Dec 06 09:41:11 compute-0 podman[88910]: 2025-12-06 09:41:11.891331951 +0000 UTC m=+0.154289358 container start c1caf1ac90d8bd74cdc0c1d7f0cb7bb22846ae23b336c52dc3c7b9738cf52deb (image=quay.io/ceph/ceph:v19, name=nervous_satoshi, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 06 09:41:11 compute-0 podman[88910]: 2025-12-06 09:41:11.895912626 +0000 UTC m=+0.158870033 container attach c1caf1ac90d8bd74cdc0c1d7f0cb7bb22846ae23b336c52dc3c7b9738cf52deb (image=quay.io/ceph/ceph:v19, name=nervous_satoshi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Dec 06 09:41:11 compute-0 sudo[88831]: pam_unix(sudo:session): session closed for user root
Dec 06 09:41:12 compute-0 sudo[88945]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 09:41:12 compute-0 sudo[88945]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:41:12 compute-0 sudo[88945]: pam_unix(sudo:session): session closed for user root
Dec 06 09:41:12 compute-0 sudo[88987]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 list-networks
Dec 06 09:41:12 compute-0 sudo[88987]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:41:12 compute-0 ceph-mon[74327]: [06/Dec/2025:09:41:11] ENGINE Bus STARTING
Dec 06 09:41:12 compute-0 ceph-mon[74327]: from='client.14337 -' entity='client.admin' cmd=[{"prefix": "dashboard set-grafana-api-password", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 09:41:12 compute-0 ceph-mon[74327]: from='mgr.24116 192.168.122.100:0/4088948354' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 06 09:41:12 compute-0 ceph-mon[74327]: from='mgr.24116 192.168.122.100:0/4088948354' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:41:12 compute-0 ceph-mon[74327]: [06/Dec/2025:09:41:11] ENGINE Serving on https://192.168.122.100:7150
Dec 06 09:41:12 compute-0 ceph-mon[74327]: [06/Dec/2025:09:41:11] ENGINE Client ('192.168.122.100', 54474) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Dec 06 09:41:12 compute-0 ceph-mon[74327]: from='mgr.24116 192.168.122.100:0/4088948354' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:41:12 compute-0 ceph-mon[74327]: mgrmap e17: compute-0.qhdjwa(active, since 2s), standbys: compute-1.sauzid, compute-2.oazbvn
Dec 06 09:41:12 compute-0 ceph-mon[74327]: from='mgr.24116 192.168.122.100:0/4088948354' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:41:12 compute-0 ceph-mon[74327]: osd.2 [v2:192.168.122.102:6800/709563040,v1:192.168.122.102:6801/709563040] boot
Dec 06 09:41:12 compute-0 ceph-mon[74327]: osdmap e35: 3 total, 3 up, 3 in
Dec 06 09:41:12 compute-0 ceph-mon[74327]: from='mgr.24116 192.168.122.100:0/4088948354' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 06 09:41:12 compute-0 ceph-mon[74327]: [06/Dec/2025:09:41:11] ENGINE Serving on http://192.168.122.100:8765
Dec 06 09:41:12 compute-0 ceph-mon[74327]: [06/Dec/2025:09:41:11] ENGINE Bus STARTED
Dec 06 09:41:12 compute-0 ceph-mon[74327]: pgmap v5: 131 pgs: 87 active+clean, 44 unknown; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec 06 09:41:12 compute-0 ceph-mon[74327]: from='mgr.24116 192.168.122.100:0/4088948354' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:41:12 compute-0 ceph-mon[74327]: from='mgr.24116 192.168.122.100:0/4088948354' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:41:12 compute-0 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.14349 -' entity='client.admin' cmd=[{"prefix": "dashboard set-alertmanager-api-host", "value": "http://192.168.122.100:9093", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 09:41:12 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/ALERTMANAGER_API_HOST}] v 0)
Dec 06 09:41:12 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e35 do_prune osdmap full prune enabled
Dec 06 09:41:12 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.24116 192.168.122.100:0/4088948354' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:41:12 compute-0 nervous_satoshi[88927]: Option ALERTMANAGER_API_HOST updated
Dec 06 09:41:12 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e36 e36: 3 total, 3 up, 3 in
Dec 06 09:41:12 compute-0 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e36: 3 total, 3 up, 3 in
Dec 06 09:41:12 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 09:41:12 compute-0 systemd[1]: libpod-c1caf1ac90d8bd74cdc0c1d7f0cb7bb22846ae23b336c52dc3c7b9738cf52deb.scope: Deactivated successfully.
Dec 06 09:41:12 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 09:41:12 compute-0 podman[88910]: 2025-12-06 09:41:12.315814858 +0000 UTC m=+0.578772225 container died c1caf1ac90d8bd74cdc0c1d7f0cb7bb22846ae23b336c52dc3c7b9738cf52deb (image=quay.io/ceph/ceph:v19, name=nervous_satoshi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 06 09:41:12 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 09:41:12 compute-0 ceph-mgr[74618]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Dec 06 09:41:12 compute-0 ceph-mgr[74618]: [rbd_support INFO root] PerfHandler: starting
Dec 06 09:41:12 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_task_task: vms, start_after=
Dec 06 09:41:12 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_task_task: volumes, start_after=
Dec 06 09:41:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-988532eccd19b6a5d3c60c743a657ff11fa277ebda444ac03330415b43c609f1-merged.mount: Deactivated successfully.
Dec 06 09:41:12 compute-0 podman[88910]: 2025-12-06 09:41:12.36522951 +0000 UTC m=+0.628186887 container remove c1caf1ac90d8bd74cdc0c1d7f0cb7bb22846ae23b336c52dc3c7b9738cf52deb (image=quay.io/ceph/ceph:v19, name=nervous_satoshi, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 09:41:12 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_task_task: backups, start_after=
Dec 06 09:41:12 compute-0 systemd[1]: libpod-conmon-c1caf1ac90d8bd74cdc0c1d7f0cb7bb22846ae23b336c52dc3c7b9738cf52deb.scope: Deactivated successfully.
Dec 06 09:41:12 compute-0 sudo[88890]: pam_unix(sudo:session): session closed for user root
Dec 06 09:41:12 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_task_task: images, start_after=
Dec 06 09:41:12 compute-0 ceph-mgr[74618]: [rbd_support INFO root] TaskHandler: starting
Dec 06 09:41:12 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.qhdjwa/trash_purge_schedule"} v 0)
Dec 06 09:41:12 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.24116 192.168.122.100:0/4088948354' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.qhdjwa/trash_purge_schedule"}]: dispatch
Dec 06 09:41:12 compute-0 ceph-mgr[74618]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 06 09:41:12 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 09:41:12 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 09:41:12 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 09:41:12 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec 06 09:41:12 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 09:41:12 compute-0 ceph-mgr[74618]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Dec 06 09:41:12 compute-0 ceph-mgr[74618]: [rbd_support INFO root] setup complete
Dec 06 09:41:12 compute-0 sudo[88987]: pam_unix(sudo:session): session closed for user root
Dec 06 09:41:12 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.24116 192.168.122.100:0/4088948354' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:41:12 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec 06 09:41:12 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 06 09:41:12 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.24116 192.168.122.100:0/4088948354' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:41:12 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.24116 192.168.122.100:0/4088948354' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:41:12 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 06 09:41:12 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.24116 192.168.122.100:0/4088948354' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:41:12 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} v 0)
Dec 06 09:41:12 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.24116 192.168.122.100:0/4088948354' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch
Dec 06 09:41:12 compute-0 ceph-mgr[74618]: [cephadm INFO root] Adjusting osd_memory_target on compute-0 to 128.0M
Dec 06 09:41:12 compute-0 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on compute-0 to 128.0M
Dec 06 09:41:12 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0)
Dec 06 09:41:12 compute-0 ceph-mgr[74618]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on compute-0 to 134217728: error parsing value: Value '134217728' is below minimum 939524096
Dec 06 09:41:12 compute-0 ceph-mgr[74618]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on compute-0 to 134217728: error parsing value: Value '134217728' is below minimum 939524096
Dec 06 09:41:12 compute-0 sudo[89066]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gdsasqocwswzmfqkiipmoflomziywhgl ; /usr/bin/python3'
Dec 06 09:41:12 compute-0 sudo[89066]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:41:12 compute-0 python3[89068]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 5ecd3f74-dade-5fc4-92ce-8950ae424258 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   dashboard set-prometheus-api-host http://192.168.122.100:9092 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 09:41:12 compute-0 podman[89069]: 2025-12-06 09:41:12.742450284 +0000 UTC m=+0.043199357 container create 4426762ab3bab132be4afa220329a9e57f4a68a3dfef754ec78ca18bd046f870 (image=quay.io/ceph/ceph:v19, name=admiring_hamilton, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Dec 06 09:41:12 compute-0 systemd[1]: Started libpod-conmon-4426762ab3bab132be4afa220329a9e57f4a68a3dfef754ec78ca18bd046f870.scope.
Dec 06 09:41:12 compute-0 systemd[1]: Started libcrun container.
Dec 06 09:41:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d61af60dc2f8f4f92b4f8bc84db59b9cadb1ff35ec84d33f061655cb452eb60/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 09:41:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d61af60dc2f8f4f92b4f8bc84db59b9cadb1ff35ec84d33f061655cb452eb60/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 09:41:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d61af60dc2f8f4f92b4f8bc84db59b9cadb1ff35ec84d33f061655cb452eb60/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec 06 09:41:12 compute-0 podman[89069]: 2025-12-06 09:41:12.724809395 +0000 UTC m=+0.025558468 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 06 09:41:12 compute-0 podman[89069]: 2025-12-06 09:41:12.839098808 +0000 UTC m=+0.139847891 container init 4426762ab3bab132be4afa220329a9e57f4a68a3dfef754ec78ca18bd046f870 (image=quay.io/ceph/ceph:v19, name=admiring_hamilton, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 09:41:12 compute-0 podman[89069]: 2025-12-06 09:41:12.84801889 +0000 UTC m=+0.148767943 container start 4426762ab3bab132be4afa220329a9e57f4a68a3dfef754ec78ca18bd046f870 (image=quay.io/ceph/ceph:v19, name=admiring_hamilton, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0)
Dec 06 09:41:12 compute-0 podman[89069]: 2025-12-06 09:41:12.851460889 +0000 UTC m=+0.152209952 container attach 4426762ab3bab132be4afa220329a9e57f4a68a3dfef754ec78ca18bd046f870 (image=quay.io/ceph/ceph:v19, name=admiring_hamilton, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 06 09:41:12 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec 06 09:41:12 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.24116 192.168.122.100:0/4088948354' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:41:12 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec 06 09:41:12 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.24116 192.168.122.100:0/4088948354' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:41:12 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} v 0)
Dec 06 09:41:12 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.24116 192.168.122.100:0/4088948354' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch
Dec 06 09:41:12 compute-0 ceph-mgr[74618]: [cephadm INFO root] Adjusting osd_memory_target on compute-1 to 127.9M
Dec 06 09:41:12 compute-0 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on compute-1 to 127.9M
Dec 06 09:41:12 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0)
Dec 06 09:41:12 compute-0 ceph-mgr[74618]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on compute-1 to 134211993: error parsing value: Value '134211993' is below minimum 939524096
Dec 06 09:41:12 compute-0 ceph-mgr[74618]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on compute-1 to 134211993: error parsing value: Value '134211993' is below minimum 939524096
Dec 06 09:41:13 compute-0 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.14355 -' entity='client.admin' cmd=[{"prefix": "dashboard set-prometheus-api-host", "value": "http://192.168.122.100:9092", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 09:41:13 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/PROMETHEUS_API_HOST}] v 0)
Dec 06 09:41:13 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.24116 192.168.122.100:0/4088948354' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:41:13 compute-0 admiring_hamilton[89085]: Option PROMETHEUS_API_HOST updated
Dec 06 09:41:13 compute-0 systemd[1]: libpod-4426762ab3bab132be4afa220329a9e57f4a68a3dfef754ec78ca18bd046f870.scope: Deactivated successfully.
Dec 06 09:41:13 compute-0 podman[89069]: 2025-12-06 09:41:13.232832094 +0000 UTC m=+0.533581147 container died 4426762ab3bab132be4afa220329a9e57f4a68a3dfef754ec78ca18bd046f870 (image=quay.io/ceph/ceph:v19, name=admiring_hamilton, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 09:41:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-3d61af60dc2f8f4f92b4f8bc84db59b9cadb1ff35ec84d33f061655cb452eb60-merged.mount: Deactivated successfully.
Dec 06 09:41:13 compute-0 podman[89069]: 2025-12-06 09:41:13.280768739 +0000 UTC m=+0.581517792 container remove 4426762ab3bab132be4afa220329a9e57f4a68a3dfef754ec78ca18bd046f870 (image=quay.io/ceph/ceph:v19, name=admiring_hamilton, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Dec 06 09:41:13 compute-0 ceph-mon[74327]: from='client.14349 -' entity='client.admin' cmd=[{"prefix": "dashboard set-alertmanager-api-host", "value": "http://192.168.122.100:9093", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 09:41:13 compute-0 ceph-mon[74327]: from='mgr.24116 192.168.122.100:0/4088948354' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:41:13 compute-0 ceph-mon[74327]: osdmap e36: 3 total, 3 up, 3 in
Dec 06 09:41:13 compute-0 ceph-mon[74327]: from='mgr.24116 192.168.122.100:0/4088948354' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.qhdjwa/trash_purge_schedule"}]: dispatch
Dec 06 09:41:13 compute-0 ceph-mon[74327]: from='mgr.24116 192.168.122.100:0/4088948354' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:41:13 compute-0 ceph-mon[74327]: from='mgr.24116 192.168.122.100:0/4088948354' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:41:13 compute-0 ceph-mon[74327]: from='mgr.24116 192.168.122.100:0/4088948354' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:41:13 compute-0 ceph-mon[74327]: from='mgr.24116 192.168.122.100:0/4088948354' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:41:13 compute-0 ceph-mon[74327]: from='mgr.24116 192.168.122.100:0/4088948354' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch
Dec 06 09:41:13 compute-0 ceph-mon[74327]: Adjusting osd_memory_target on compute-0 to 128.0M
Dec 06 09:41:13 compute-0 ceph-mon[74327]: Unable to set osd_memory_target on compute-0 to 134217728: error parsing value: Value '134217728' is below minimum 939524096
Dec 06 09:41:13 compute-0 ceph-mon[74327]: from='mgr.24116 192.168.122.100:0/4088948354' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:41:13 compute-0 ceph-mon[74327]: from='mgr.24116 192.168.122.100:0/4088948354' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:41:13 compute-0 ceph-mon[74327]: from='mgr.24116 192.168.122.100:0/4088948354' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch
Dec 06 09:41:13 compute-0 ceph-mon[74327]: from='mgr.24116 192.168.122.100:0/4088948354' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:41:13 compute-0 systemd[1]: libpod-conmon-4426762ab3bab132be4afa220329a9e57f4a68a3dfef754ec78ca18bd046f870.scope: Deactivated successfully.
Dec 06 09:41:13 compute-0 sudo[89066]: pam_unix(sudo:session): session closed for user root
Dec 06 09:41:13 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v7: 131 pgs: 44 peering, 87 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Dec 06 09:41:13 compute-0 sudo[89144]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gytwmpwdaztrkcvhteslwftgrmecqgvs ; /usr/bin/python3'
Dec 06 09:41:13 compute-0 sudo[89144]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:41:13 compute-0 python3[89146]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 5ecd3f74-dade-5fc4-92ce-8950ae424258 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   dashboard set-grafana-api-url http://192.168.122.100:3100 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 09:41:13 compute-0 podman[89147]: 2025-12-06 09:41:13.840255314 +0000 UTC m=+0.058424898 container create b7233b3b6570968434008feb73c061773ee0c1834e13b181c1f79b80c51e1405 (image=quay.io/ceph/ceph:v19, name=dazzling_chaplygin, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Dec 06 09:41:13 compute-0 systemd[1]: Started libpod-conmon-b7233b3b6570968434008feb73c061773ee0c1834e13b181c1f79b80c51e1405.scope.
Dec 06 09:41:13 compute-0 systemd[1]: Started libcrun container.
Dec 06 09:41:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fbc1c83bca79ca8cd8e6669acb7aef16c4b14abbc4661c8ff537a798c8da2932/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 09:41:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fbc1c83bca79ca8cd8e6669acb7aef16c4b14abbc4661c8ff537a798c8da2932/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec 06 09:41:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fbc1c83bca79ca8cd8e6669acb7aef16c4b14abbc4661c8ff537a798c8da2932/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 09:41:13 compute-0 podman[89147]: 2025-12-06 09:41:13.823128173 +0000 UTC m=+0.041297727 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 06 09:41:13 compute-0 podman[89147]: 2025-12-06 09:41:13.930162796 +0000 UTC m=+0.148332360 container init b7233b3b6570968434008feb73c061773ee0c1834e13b181c1f79b80c51e1405 (image=quay.io/ceph/ceph:v19, name=dazzling_chaplygin, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 06 09:41:13 compute-0 podman[89147]: 2025-12-06 09:41:13.938638983 +0000 UTC m=+0.156808527 container start b7233b3b6570968434008feb73c061773ee0c1834e13b181c1f79b80c51e1405 (image=quay.io/ceph/ceph:v19, name=dazzling_chaplygin, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Dec 06 09:41:13 compute-0 podman[89147]: 2025-12-06 09:41:13.944984244 +0000 UTC m=+0.163153888 container attach b7233b3b6570968434008feb73c061773ee0c1834e13b181c1f79b80c51e1405 (image=quay.io/ceph/ceph:v19, name=dazzling_chaplygin, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec 06 09:41:13 compute-0 ceph-mon[74327]: log_channel(cluster) log [DBG] : mgrmap e18: compute-0.qhdjwa(active, since 4s), standbys: compute-1.sauzid, compute-2.oazbvn
Dec 06 09:41:14 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec 06 09:41:14 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.24116 192.168.122.100:0/4088948354' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:41:14 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec 06 09:41:14 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.24116 192.168.122.100:0/4088948354' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:41:14 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} v 0)
Dec 06 09:41:14 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.24116 192.168.122.100:0/4088948354' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"}]: dispatch
Dec 06 09:41:14 compute-0 ceph-mgr[74618]: [cephadm INFO root] Adjusting osd_memory_target on compute-2 to 127.9M
Dec 06 09:41:14 compute-0 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on compute-2 to 127.9M
Dec 06 09:41:14 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0)
Dec 06 09:41:14 compute-0 ceph-mgr[74618]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on compute-2 to 134214860: error parsing value: Value '134214860' is below minimum 939524096
Dec 06 09:41:14 compute-0 ceph-mgr[74618]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on compute-2 to 134214860: error parsing value: Value '134214860' is below minimum 939524096
Dec 06 09:41:14 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 06 09:41:14 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.24116 192.168.122.100:0/4088948354' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 09:41:14 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 06 09:41:14 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.24116 192.168.122.100:0/4088948354' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 09:41:14 compute-0 ceph-mgr[74618]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.conf
Dec 06 09:41:14 compute-0 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.conf
Dec 06 09:41:14 compute-0 ceph-mgr[74618]: [cephadm INFO cephadm.serve] Updating compute-1:/etc/ceph/ceph.conf
Dec 06 09:41:14 compute-0 ceph-mgr[74618]: [cephadm INFO cephadm.serve] Updating compute-2:/etc/ceph/ceph.conf
Dec 06 09:41:14 compute-0 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Updating compute-1:/etc/ceph/ceph.conf
Dec 06 09:41:14 compute-0 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Updating compute-2:/etc/ceph/ceph.conf
Dec 06 09:41:14 compute-0 ceph-mon[74327]: 5.1a scrub starts
Dec 06 09:41:14 compute-0 ceph-mon[74327]: 5.1a scrub ok
Dec 06 09:41:14 compute-0 ceph-mon[74327]: Adjusting osd_memory_target on compute-1 to 127.9M
Dec 06 09:41:14 compute-0 ceph-mon[74327]: Unable to set osd_memory_target on compute-1 to 134211993: error parsing value: Value '134211993' is below minimum 939524096
Dec 06 09:41:14 compute-0 ceph-mon[74327]: from='client.14355 -' entity='client.admin' cmd=[{"prefix": "dashboard set-prometheus-api-host", "value": "http://192.168.122.100:9092", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 09:41:14 compute-0 ceph-mon[74327]: pgmap v7: 131 pgs: 44 peering, 87 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Dec 06 09:41:14 compute-0 ceph-mon[74327]: 2.13 scrub starts
Dec 06 09:41:14 compute-0 ceph-mon[74327]: mgrmap e18: compute-0.qhdjwa(active, since 4s), standbys: compute-1.sauzid, compute-2.oazbvn
Dec 06 09:41:14 compute-0 ceph-mon[74327]: from='mgr.24116 192.168.122.100:0/4088948354' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:41:14 compute-0 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.24160 -' entity='client.admin' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "http://192.168.122.100:3100", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 09:41:14 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/GRAFANA_API_URL}] v 0)
Dec 06 09:41:14 compute-0 sudo[89185]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Dec 06 09:41:14 compute-0 sudo[89185]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:41:14 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.24116 192.168.122.100:0/4088948354' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:41:14 compute-0 dazzling_chaplygin[89162]: Option GRAFANA_API_URL updated
Dec 06 09:41:14 compute-0 sudo[89185]: pam_unix(sudo:session): session closed for user root
Dec 06 09:41:14 compute-0 systemd[1]: libpod-b7233b3b6570968434008feb73c061773ee0c1834e13b181c1f79b80c51e1405.scope: Deactivated successfully.
Dec 06 09:41:14 compute-0 podman[89147]: 2025-12-06 09:41:14.401496513 +0000 UTC m=+0.619666067 container died b7233b3b6570968434008feb73c061773ee0c1834e13b181c1f79b80c51e1405 (image=quay.io/ceph/ceph:v19, name=dazzling_chaplygin, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True)
Dec 06 09:41:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-fbc1c83bca79ca8cd8e6669acb7aef16c4b14abbc4661c8ff537a798c8da2932-merged.mount: Deactivated successfully.
Dec 06 09:41:14 compute-0 podman[89147]: 2025-12-06 09:41:14.438048659 +0000 UTC m=+0.656218233 container remove b7233b3b6570968434008feb73c061773ee0c1834e13b181c1f79b80c51e1405 (image=quay.io/ceph/ceph:v19, name=dazzling_chaplygin, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 09:41:14 compute-0 systemd[1]: libpod-conmon-b7233b3b6570968434008feb73c061773ee0c1834e13b181c1f79b80c51e1405.scope: Deactivated successfully.
Dec 06 09:41:14 compute-0 sudo[89212]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-5ecd3f74-dade-5fc4-92ce-8950ae424258/etc/ceph
Dec 06 09:41:14 compute-0 sudo[89212]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:41:14 compute-0 sudo[89212]: pam_unix(sudo:session): session closed for user root
Dec 06 09:41:14 compute-0 sudo[89144]: pam_unix(sudo:session): session closed for user root
Dec 06 09:41:14 compute-0 sudo[89249]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-5ecd3f74-dade-5fc4-92ce-8950ae424258/etc/ceph/ceph.conf.new
Dec 06 09:41:14 compute-0 sudo[89249]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:41:14 compute-0 sudo[89249]: pam_unix(sudo:session): session closed for user root
Dec 06 09:41:14 compute-0 sudo[89274]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-5ecd3f74-dade-5fc4-92ce-8950ae424258
Dec 06 09:41:14 compute-0 sudo[89274]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:41:14 compute-0 sudo[89274]: pam_unix(sudo:session): session closed for user root
Dec 06 09:41:14 compute-0 sudo[89342]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rjruizohbzzvpwlbnkwbicdjdvkekxgu ; /usr/bin/python3'
Dec 06 09:41:14 compute-0 sudo[89342]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:41:14 compute-0 sudo[89304]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-5ecd3f74-dade-5fc4-92ce-8950ae424258/etc/ceph/ceph.conf.new
Dec 06 09:41:14 compute-0 sudo[89304]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:41:14 compute-0 sudo[89304]: pam_unix(sudo:session): session closed for user root
Dec 06 09:41:14 compute-0 python3[89347]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 5ecd3f74-dade-5fc4-92ce-8950ae424258 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   mgr module disable dashboard _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 09:41:14 compute-0 sudo[89373]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-5ecd3f74-dade-5fc4-92ce-8950ae424258/etc/ceph/ceph.conf.new
Dec 06 09:41:14 compute-0 sudo[89373]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:41:14 compute-0 sudo[89373]: pam_unix(sudo:session): session closed for user root
Dec 06 09:41:14 compute-0 podman[89380]: 2025-12-06 09:41:14.875588949 +0000 UTC m=+0.062539148 container create 002e3b359654679e9287b42a11ca9ef55af824ce9bb54298cf39a69bf56c3e32 (image=quay.io/ceph/ceph:v19, name=loving_cannon, CEPH_REF=squid, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 06 09:41:14 compute-0 systemd[1]: Started libpod-conmon-002e3b359654679e9287b42a11ca9ef55af824ce9bb54298cf39a69bf56c3e32.scope.
Dec 06 09:41:14 compute-0 sudo[89411]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-5ecd3f74-dade-5fc4-92ce-8950ae424258/etc/ceph/ceph.conf.new
Dec 06 09:41:14 compute-0 sudo[89411]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:41:14 compute-0 sudo[89411]: pam_unix(sudo:session): session closed for user root
Dec 06 09:41:14 compute-0 podman[89380]: 2025-12-06 09:41:14.847379387 +0000 UTC m=+0.034329666 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 06 09:41:14 compute-0 systemd[1]: Started libcrun container.
Dec 06 09:41:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1dd15d42d64ac9b6802fc55783bb78af269e2726804625093cb7ce8ccb95b376/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 09:41:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1dd15d42d64ac9b6802fc55783bb78af269e2726804625093cb7ce8ccb95b376/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec 06 09:41:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1dd15d42d64ac9b6802fc55783bb78af269e2726804625093cb7ce8ccb95b376/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 09:41:14 compute-0 podman[89380]: 2025-12-06 09:41:14.975886599 +0000 UTC m=+0.162836818 container init 002e3b359654679e9287b42a11ca9ef55af824ce9bb54298cf39a69bf56c3e32 (image=quay.io/ceph/ceph:v19, name=loving_cannon, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec 06 09:41:14 compute-0 podman[89380]: 2025-12-06 09:41:14.987647781 +0000 UTC m=+0.174597990 container start 002e3b359654679e9287b42a11ca9ef55af824ce9bb54298cf39a69bf56c3e32 (image=quay.io/ceph/ceph:v19, name=loving_cannon, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 09:41:14 compute-0 podman[89380]: 2025-12-06 09:41:14.991579225 +0000 UTC m=+0.178529444 container attach 002e3b359654679e9287b42a11ca9ef55af824ce9bb54298cf39a69bf56c3e32 (image=quay.io/ceph/ceph:v19, name=loving_cannon, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 09:41:15 compute-0 sudo[89441]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-5ecd3f74-dade-5fc4-92ce-8950ae424258/etc/ceph/ceph.conf.new /etc/ceph/ceph.conf
Dec 06 09:41:15 compute-0 sudo[89441]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:41:15 compute-0 sudo[89441]: pam_unix(sudo:session): session closed for user root
Dec 06 09:41:15 compute-0 ceph-mgr[74618]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/config/ceph.conf
Dec 06 09:41:15 compute-0 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/config/ceph.conf
Dec 06 09:41:15 compute-0 ceph-mgr[74618]: [cephadm INFO cephadm.serve] Updating compute-1:/var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/config/ceph.conf
Dec 06 09:41:15 compute-0 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Updating compute-1:/var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/config/ceph.conf
Dec 06 09:41:15 compute-0 ceph-mgr[74618]: [cephadm INFO cephadm.serve] Updating compute-2:/var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/config/ceph.conf
Dec 06 09:41:15 compute-0 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Updating compute-2:/var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/config/ceph.conf
Dec 06 09:41:15 compute-0 sudo[89467]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/config
Dec 06 09:41:15 compute-0 sudo[89467]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:41:15 compute-0 sudo[89467]: pam_unix(sudo:session): session closed for user root
Dec 06 09:41:15 compute-0 sudo[89502]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-5ecd3f74-dade-5fc4-92ce-8950ae424258/var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/config
Dec 06 09:41:15 compute-0 sudo[89502]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:41:15 compute-0 sudo[89502]: pam_unix(sudo:session): session closed for user root
Dec 06 09:41:15 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e36 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 09:41:15 compute-0 sudo[89536]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-5ecd3f74-dade-5fc4-92ce-8950ae424258/var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/config/ceph.conf.new
Dec 06 09:41:15 compute-0 sudo[89536]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:41:15 compute-0 sudo[89536]: pam_unix(sudo:session): session closed for user root
Dec 06 09:41:15 compute-0 ceph-mon[74327]: 2.13 scrub ok
Dec 06 09:41:15 compute-0 ceph-mon[74327]: from='mgr.24116 192.168.122.100:0/4088948354' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:41:15 compute-0 ceph-mon[74327]: from='mgr.24116 192.168.122.100:0/4088948354' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"}]: dispatch
Dec 06 09:41:15 compute-0 ceph-mon[74327]: Adjusting osd_memory_target on compute-2 to 127.9M
Dec 06 09:41:15 compute-0 ceph-mon[74327]: Unable to set osd_memory_target on compute-2 to 134214860: error parsing value: Value '134214860' is below minimum 939524096
Dec 06 09:41:15 compute-0 ceph-mon[74327]: from='mgr.24116 192.168.122.100:0/4088948354' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 09:41:15 compute-0 ceph-mon[74327]: from='mgr.24116 192.168.122.100:0/4088948354' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 09:41:15 compute-0 ceph-mon[74327]: Updating compute-0:/etc/ceph/ceph.conf
Dec 06 09:41:15 compute-0 ceph-mon[74327]: Updating compute-1:/etc/ceph/ceph.conf
Dec 06 09:41:15 compute-0 ceph-mon[74327]: Updating compute-2:/etc/ceph/ceph.conf
Dec 06 09:41:15 compute-0 ceph-mon[74327]: from='client.24160 -' entity='client.admin' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "http://192.168.122.100:3100", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 09:41:15 compute-0 ceph-mon[74327]: from='mgr.24116 192.168.122.100:0/4088948354' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:41:15 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v8: 131 pgs: 44 peering, 87 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail; 34 KiB/s rd, 0 B/s wr, 14 op/s
Dec 06 09:41:15 compute-0 sudo[89561]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-5ecd3f74-dade-5fc4-92ce-8950ae424258
Dec 06 09:41:15 compute-0 sudo[89561]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:41:15 compute-0 sudo[89561]: pam_unix(sudo:session): session closed for user root
Dec 06 09:41:15 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module disable", "module": "dashboard"} v 0)
Dec 06 09:41:15 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/986641805' entity='client.admin' cmd=[{"prefix": "mgr module disable", "module": "dashboard"}]: dispatch
Dec 06 09:41:15 compute-0 sudo[89586]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-5ecd3f74-dade-5fc4-92ce-8950ae424258/var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/config/ceph.conf.new
Dec 06 09:41:15 compute-0 sudo[89586]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:41:15 compute-0 sudo[89586]: pam_unix(sudo:session): session closed for user root
Dec 06 09:41:15 compute-0 sudo[89635]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-5ecd3f74-dade-5fc4-92ce-8950ae424258/var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/config/ceph.conf.new
Dec 06 09:41:15 compute-0 sudo[89635]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:41:15 compute-0 sudo[89635]: pam_unix(sudo:session): session closed for user root
Dec 06 09:41:15 compute-0 sudo[89660]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-5ecd3f74-dade-5fc4-92ce-8950ae424258/var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/config/ceph.conf.new
Dec 06 09:41:15 compute-0 sudo[89660]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:41:15 compute-0 sudo[89660]: pam_unix(sudo:session): session closed for user root
Dec 06 09:41:15 compute-0 ceph-mgr[74618]: [cephadm INFO cephadm.serve] Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Dec 06 09:41:15 compute-0 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Dec 06 09:41:15 compute-0 ceph-mgr[74618]: [cephadm INFO cephadm.serve] Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Dec 06 09:41:15 compute-0 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Dec 06 09:41:15 compute-0 sudo[89685]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-5ecd3f74-dade-5fc4-92ce-8950ae424258/var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/config/ceph.conf.new /var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/config/ceph.conf
Dec 06 09:41:15 compute-0 sudo[89685]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:41:15 compute-0 sudo[89685]: pam_unix(sudo:session): session closed for user root
Dec 06 09:41:15 compute-0 ceph-mgr[74618]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Dec 06 09:41:15 compute-0 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Dec 06 09:41:15 compute-0 sudo[89710]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Dec 06 09:41:15 compute-0 sudo[89710]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:41:15 compute-0 sudo[89710]: pam_unix(sudo:session): session closed for user root
Dec 06 09:41:15 compute-0 sudo[89735]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-5ecd3f74-dade-5fc4-92ce-8950ae424258/etc/ceph
Dec 06 09:41:15 compute-0 sudo[89735]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:41:15 compute-0 sudo[89735]: pam_unix(sudo:session): session closed for user root
Dec 06 09:41:15 compute-0 sudo[89760]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-5ecd3f74-dade-5fc4-92ce-8950ae424258/etc/ceph/ceph.client.admin.keyring.new
Dec 06 09:41:15 compute-0 sudo[89760]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:41:15 compute-0 sudo[89760]: pam_unix(sudo:session): session closed for user root
Dec 06 09:41:16 compute-0 sudo[89785]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-5ecd3f74-dade-5fc4-92ce-8950ae424258
Dec 06 09:41:16 compute-0 sudo[89785]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:41:16 compute-0 sudo[89785]: pam_unix(sudo:session): session closed for user root
Dec 06 09:41:16 compute-0 sudo[89810]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-5ecd3f74-dade-5fc4-92ce-8950ae424258/etc/ceph/ceph.client.admin.keyring.new
Dec 06 09:41:16 compute-0 sudo[89810]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:41:16 compute-0 sudo[89810]: pam_unix(sudo:session): session closed for user root
Dec 06 09:41:16 compute-0 sudo[89858]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-5ecd3f74-dade-5fc4-92ce-8950ae424258/etc/ceph/ceph.client.admin.keyring.new
Dec 06 09:41:16 compute-0 sudo[89858]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:41:16 compute-0 sudo[89858]: pam_unix(sudo:session): session closed for user root
Dec 06 09:41:16 compute-0 ceph-mon[74327]: 2.d scrub starts
Dec 06 09:41:16 compute-0 ceph-mon[74327]: 2.d scrub ok
Dec 06 09:41:16 compute-0 ceph-mon[74327]: Updating compute-0:/var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/config/ceph.conf
Dec 06 09:41:16 compute-0 ceph-mon[74327]: Updating compute-1:/var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/config/ceph.conf
Dec 06 09:41:16 compute-0 ceph-mon[74327]: Updating compute-2:/var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/config/ceph.conf
Dec 06 09:41:16 compute-0 ceph-mon[74327]: pgmap v8: 131 pgs: 44 peering, 87 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail; 34 KiB/s rd, 0 B/s wr, 14 op/s
Dec 06 09:41:16 compute-0 ceph-mon[74327]: from='client.? 192.168.122.100:0/986641805' entity='client.admin' cmd=[{"prefix": "mgr module disable", "module": "dashboard"}]: dispatch
Dec 06 09:41:16 compute-0 ceph-mon[74327]: 2.10 scrub starts
Dec 06 09:41:16 compute-0 ceph-mon[74327]: Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Dec 06 09:41:16 compute-0 ceph-mon[74327]: 2.10 scrub ok
Dec 06 09:41:16 compute-0 ceph-mon[74327]: Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Dec 06 09:41:16 compute-0 ceph-mon[74327]: Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Dec 06 09:41:16 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/986641805' entity='client.admin' cmd='[{"prefix": "mgr module disable", "module": "dashboard"}]': finished
Dec 06 09:41:16 compute-0 ceph-mgr[74618]: mgr handle_mgr_map respawning because set of enabled modules changed!
Dec 06 09:41:16 compute-0 ceph-mgr[74618]: mgr respawn  e: '/usr/bin/ceph-mgr'
Dec 06 09:41:16 compute-0 ceph-mgr[74618]: mgr respawn  0: '/usr/bin/ceph-mgr'
Dec 06 09:41:16 compute-0 ceph-mgr[74618]: mgr respawn  1: '-n'
Dec 06 09:41:16 compute-0 ceph-mgr[74618]: mgr respawn  2: 'mgr.compute-0.qhdjwa'
Dec 06 09:41:16 compute-0 ceph-mgr[74618]: mgr respawn  3: '-f'
Dec 06 09:41:16 compute-0 ceph-mgr[74618]: mgr respawn  4: '--setuser'
Dec 06 09:41:16 compute-0 ceph-mgr[74618]: mgr respawn  5: 'ceph'
Dec 06 09:41:16 compute-0 ceph-mgr[74618]: mgr respawn  6: '--setgroup'
Dec 06 09:41:16 compute-0 ceph-mgr[74618]: mgr respawn  7: 'ceph'
Dec 06 09:41:16 compute-0 ceph-mgr[74618]: mgr respawn  8: '--default-log-to-file=false'
Dec 06 09:41:16 compute-0 ceph-mgr[74618]: mgr respawn  9: '--default-log-to-journald=true'
Dec 06 09:41:16 compute-0 ceph-mgr[74618]: mgr respawn  10: '--default-log-to-stderr=false'
Dec 06 09:41:16 compute-0 ceph-mgr[74618]: mgr respawn respawning with exe /usr/bin/ceph-mgr
Dec 06 09:41:16 compute-0 ceph-mgr[74618]: mgr respawn  exe_path /proc/self/exe
Dec 06 09:41:16 compute-0 ceph-mon[74327]: log_channel(cluster) log [DBG] : mgrmap e19: compute-0.qhdjwa(active, since 7s), standbys: compute-1.sauzid, compute-2.oazbvn
Dec 06 09:41:16 compute-0 sudo[89883]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-5ecd3f74-dade-5fc4-92ce-8950ae424258/etc/ceph/ceph.client.admin.keyring.new
Dec 06 09:41:16 compute-0 sudo[89883]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:41:16 compute-0 sudo[89883]: pam_unix(sudo:session): session closed for user root
Dec 06 09:41:16 compute-0 systemd[1]: libpod-002e3b359654679e9287b42a11ca9ef55af824ce9bb54298cf39a69bf56c3e32.scope: Deactivated successfully.
Dec 06 09:41:16 compute-0 podman[89380]: 2025-12-06 09:41:16.380789496 +0000 UTC m=+1.567739695 container died 002e3b359654679e9287b42a11ca9ef55af824ce9bb54298cf39a69bf56c3e32 (image=quay.io/ceph/ceph:v19, name=loving_cannon, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 09:41:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-1dd15d42d64ac9b6802fc55783bb78af269e2726804625093cb7ce8ccb95b376-merged.mount: Deactivated successfully.
Dec 06 09:41:16 compute-0 podman[89380]: 2025-12-06 09:41:16.419109347 +0000 UTC m=+1.606059546 container remove 002e3b359654679e9287b42a11ca9ef55af824ce9bb54298cf39a69bf56c3e32 (image=quay.io/ceph/ceph:v19, name=loving_cannon, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 09:41:16 compute-0 systemd[1]: libpod-conmon-002e3b359654679e9287b42a11ca9ef55af824ce9bb54298cf39a69bf56c3e32.scope: Deactivated successfully.
Dec 06 09:41:16 compute-0 sudo[89342]: pam_unix(sudo:session): session closed for user root
Dec 06 09:41:16 compute-0 sshd-session[88489]: Read error from remote host 192.168.122.100 port 56302: Connection reset by peer
Dec 06 09:41:16 compute-0 sshd-session[88475]: pam_unix(sshd:session): session closed for user ceph-admin
Dec 06 09:41:16 compute-0 systemd[1]: session-34.scope: Deactivated successfully.
Dec 06 09:41:16 compute-0 systemd[1]: session-34.scope: Consumed 4.351s CPU time.
Dec 06 09:41:16 compute-0 systemd-logind[795]: Session 34 logged out. Waiting for processes to exit.
Dec 06 09:41:16 compute-0 systemd-logind[795]: Removed session 34.
Dec 06 09:41:16 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ignoring --setuser ceph since I am not root
Dec 06 09:41:16 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ignoring --setgroup ceph since I am not root
Dec 06 09:41:16 compute-0 ceph-mgr[74618]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mgr, pid 2
Dec 06 09:41:16 compute-0 ceph-mgr[74618]: pidfile_write: ignore empty --pid-file
Dec 06 09:41:16 compute-0 ceph-mgr[74618]: mgr[py] Loading python module 'alerts'
Dec 06 09:41:16 compute-0 sudo[89964]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-npadshkflanztzktgcjgliirblsdbcru ; /usr/bin/python3'
Dec 06 09:41:16 compute-0 sudo[89964]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:41:16 compute-0 ceph-mgr[74618]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Dec 06 09:41:16 compute-0 ceph-mgr[74618]: mgr[py] Loading python module 'balancer'
Dec 06 09:41:16 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:41:16.648+0000 7f0044f10140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Dec 06 09:41:16 compute-0 python3[89966]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 5ecd3f74-dade-5fc4-92ce-8950ae424258 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   mgr module enable dashboard _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 09:41:16 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:41:16.725+0000 7f0044f10140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Dec 06 09:41:16 compute-0 ceph-mgr[74618]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Dec 06 09:41:16 compute-0 ceph-mgr[74618]: mgr[py] Loading python module 'cephadm'
Dec 06 09:41:16 compute-0 podman[89967]: 2025-12-06 09:41:16.795613218 +0000 UTC m=+0.056832858 container create 1107ababf912b5e536fbe83be686c28b602e4b430be7f517dac9b14e7aa3eff8 (image=quay.io/ceph/ceph:v19, name=nostalgic_feynman, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 06 09:41:16 compute-0 systemd[1]: Started libpod-conmon-1107ababf912b5e536fbe83be686c28b602e4b430be7f517dac9b14e7aa3eff8.scope.
Dec 06 09:41:16 compute-0 podman[89967]: 2025-12-06 09:41:16.779838279 +0000 UTC m=+0.041057939 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 06 09:41:16 compute-0 systemd[1]: Started libcrun container.
Dec 06 09:41:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/40482bbf5ac643c9af361da303b180aeafa4a78fddb6f72c874c3529992460a4/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 09:41:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/40482bbf5ac643c9af361da303b180aeafa4a78fddb6f72c874c3529992460a4/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 09:41:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/40482bbf5ac643c9af361da303b180aeafa4a78fddb6f72c874c3529992460a4/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec 06 09:41:16 compute-0 podman[89967]: 2025-12-06 09:41:16.905225673 +0000 UTC m=+0.166445363 container init 1107ababf912b5e536fbe83be686c28b602e4b430be7f517dac9b14e7aa3eff8 (image=quay.io/ceph/ceph:v19, name=nostalgic_feynman, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 09:41:16 compute-0 podman[89967]: 2025-12-06 09:41:16.929601012 +0000 UTC m=+0.190820652 container start 1107ababf912b5e536fbe83be686c28b602e4b430be7f517dac9b14e7aa3eff8 (image=quay.io/ceph/ceph:v19, name=nostalgic_feynman, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 09:41:16 compute-0 podman[89967]: 2025-12-06 09:41:16.932898947 +0000 UTC m=+0.194118647 container attach 1107ababf912b5e536fbe83be686c28b602e4b430be7f517dac9b14e7aa3eff8 (image=quay.io/ceph/ceph:v19, name=nostalgic_feynman, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 09:41:17 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module enable", "module": "dashboard"} v 0)
Dec 06 09:41:17 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2772325777' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "dashboard"}]: dispatch
Dec 06 09:41:17 compute-0 ceph-mon[74327]: from='client.? 192.168.122.100:0/986641805' entity='client.admin' cmd='[{"prefix": "mgr module disable", "module": "dashboard"}]': finished
Dec 06 09:41:17 compute-0 ceph-mon[74327]: mgrmap e19: compute-0.qhdjwa(active, since 7s), standbys: compute-1.sauzid, compute-2.oazbvn
Dec 06 09:41:17 compute-0 ceph-mon[74327]: 3.1d scrub starts
Dec 06 09:41:17 compute-0 ceph-mon[74327]: 3.1d scrub ok
Dec 06 09:41:17 compute-0 ceph-mon[74327]: from='client.? 192.168.122.100:0/2772325777' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "dashboard"}]: dispatch
Dec 06 09:41:17 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2772325777' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "dashboard"}]': finished
Dec 06 09:41:17 compute-0 ceph-mon[74327]: log_channel(cluster) log [DBG] : mgrmap e20: compute-0.qhdjwa(active, since 8s), standbys: compute-1.sauzid, compute-2.oazbvn
Dec 06 09:41:17 compute-0 systemd[1]: libpod-1107ababf912b5e536fbe83be686c28b602e4b430be7f517dac9b14e7aa3eff8.scope: Deactivated successfully.
Dec 06 09:41:17 compute-0 podman[89967]: 2025-12-06 09:41:17.420738407 +0000 UTC m=+0.681958047 container died 1107ababf912b5e536fbe83be686c28b602e4b430be7f517dac9b14e7aa3eff8 (image=quay.io/ceph/ceph:v19, name=nostalgic_feynman, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 09:41:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-40482bbf5ac643c9af361da303b180aeafa4a78fddb6f72c874c3529992460a4-merged.mount: Deactivated successfully.
Dec 06 09:41:17 compute-0 podman[89967]: 2025-12-06 09:41:17.460833864 +0000 UTC m=+0.722053504 container remove 1107ababf912b5e536fbe83be686c28b602e4b430be7f517dac9b14e7aa3eff8 (image=quay.io/ceph/ceph:v19, name=nostalgic_feynman, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1)
Dec 06 09:41:17 compute-0 sudo[89964]: pam_unix(sudo:session): session closed for user root
Dec 06 09:41:17 compute-0 systemd[1]: libpod-conmon-1107ababf912b5e536fbe83be686c28b602e4b430be7f517dac9b14e7aa3eff8.scope: Deactivated successfully.
Dec 06 09:41:17 compute-0 ceph-mgr[74618]: mgr[py] Loading python module 'crash'
Dec 06 09:41:17 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:41:17.579+0000 7f0044f10140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Dec 06 09:41:17 compute-0 ceph-mgr[74618]: mgr[py] Module crash has missing NOTIFY_TYPES member
Dec 06 09:41:17 compute-0 ceph-mgr[74618]: mgr[py] Loading python module 'dashboard'
Dec 06 09:41:18 compute-0 ceph-mgr[74618]: mgr[py] Loading python module 'devicehealth'
Dec 06 09:41:18 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:41:18.217+0000 7f0044f10140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Dec 06 09:41:18 compute-0 ceph-mgr[74618]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Dec 06 09:41:18 compute-0 ceph-mgr[74618]: mgr[py] Loading python module 'diskprediction_local'
Dec 06 09:41:18 compute-0 python3[90107]: ansible-ansible.legacy.stat Invoked with path=/tmp/ceph_mds.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 06 09:41:18 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Dec 06 09:41:18 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Dec 06 09:41:18 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]:   from numpy import show_config as show_numpy_config
Dec 06 09:41:18 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:41:18.369+0000 7f0044f10140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Dec 06 09:41:18 compute-0 ceph-mgr[74618]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Dec 06 09:41:18 compute-0 ceph-mgr[74618]: mgr[py] Loading python module 'influx'
Dec 06 09:41:18 compute-0 ceph-mon[74327]: from='client.? 192.168.122.100:0/2772325777' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "dashboard"}]': finished
Dec 06 09:41:18 compute-0 ceph-mon[74327]: mgrmap e20: compute-0.qhdjwa(active, since 8s), standbys: compute-1.sauzid, compute-2.oazbvn
Dec 06 09:41:18 compute-0 ceph-mon[74327]: 2.15 scrub starts
Dec 06 09:41:18 compute-0 ceph-mon[74327]: 2.15 scrub ok
Dec 06 09:41:18 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:41:18.436+0000 7f0044f10140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Dec 06 09:41:18 compute-0 ceph-mgr[74618]: mgr[py] Module influx has missing NOTIFY_TYPES member
Dec 06 09:41:18 compute-0 ceph-mgr[74618]: mgr[py] Loading python module 'insights'
Dec 06 09:41:18 compute-0 ceph-mgr[74618]: mgr[py] Loading python module 'iostat'
Dec 06 09:41:18 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:41:18.563+0000 7f0044f10140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Dec 06 09:41:18 compute-0 ceph-mgr[74618]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Dec 06 09:41:18 compute-0 ceph-mgr[74618]: mgr[py] Loading python module 'k8sevents'
Dec 06 09:41:18 compute-0 python3[90178]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765014077.9581857-37343-69206165590557/source dest=/tmp/ceph_mds.yml mode=0644 force=True follow=False _original_basename=ceph_mds.yml.j2 checksum=b1f36629bdb347469f4890c95dfdef5abc68c3ae backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:41:18 compute-0 ceph-mgr[74618]: mgr[py] Loading python module 'localpool'
Dec 06 09:41:18 compute-0 ceph-mgr[74618]: mgr[py] Loading python module 'mds_autoscaler'
Dec 06 09:41:19 compute-0 sudo[90226]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vibjvorjnvglzhohrxkigawqyghkjqma ; /usr/bin/python3'
Dec 06 09:41:19 compute-0 sudo[90226]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:41:19 compute-0 python3[90228]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_mds.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 5ecd3f74-dade-5fc4-92ce-8950ae424258 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   fs volume create cephfs '--placement=compute-0 compute-1 compute-2 ' _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 09:41:19 compute-0 ceph-mgr[74618]: mgr[py] Loading python module 'mirroring'
Dec 06 09:41:19 compute-0 podman[90229]: 2025-12-06 09:41:19.298769859 +0000 UTC m=+0.050260950 container create e2aee9c20db80f9fc2d01a3c80b03bff6f5a000d2199e1f11d4419fa60e7e0f2 (image=quay.io/ceph/ceph:v19, name=sweet_nobel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec 06 09:41:19 compute-0 ceph-mgr[74618]: mgr[py] Loading python module 'nfs'
Dec 06 09:41:19 compute-0 systemd[1]: Started libpod-conmon-e2aee9c20db80f9fc2d01a3c80b03bff6f5a000d2199e1f11d4419fa60e7e0f2.scope.
Dec 06 09:41:19 compute-0 podman[90229]: 2025-12-06 09:41:19.277684473 +0000 UTC m=+0.029175594 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 06 09:41:19 compute-0 systemd[1]: Started libcrun container.
Dec 06 09:41:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b34b16967754a5a79087b9d7bb9fc049c609344b7371090995fd6861896bc1de/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec 06 09:41:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b34b16967754a5a79087b9d7bb9fc049c609344b7371090995fd6861896bc1de/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 09:41:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b34b16967754a5a79087b9d7bb9fc049c609344b7371090995fd6861896bc1de/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 09:41:19 compute-0 podman[90229]: 2025-12-06 09:41:19.4013035 +0000 UTC m=+0.152794611 container init e2aee9c20db80f9fc2d01a3c80b03bff6f5a000d2199e1f11d4419fa60e7e0f2 (image=quay.io/ceph/ceph:v19, name=sweet_nobel, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 09:41:19 compute-0 podman[90229]: 2025-12-06 09:41:19.409418737 +0000 UTC m=+0.160909838 container start e2aee9c20db80f9fc2d01a3c80b03bff6f5a000d2199e1f11d4419fa60e7e0f2 (image=quay.io/ceph/ceph:v19, name=sweet_nobel, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 09:41:19 compute-0 ceph-mon[74327]: 3.1a scrub starts
Dec 06 09:41:19 compute-0 ceph-mon[74327]: 3.1a scrub ok
Dec 06 09:41:19 compute-0 podman[90229]: 2025-12-06 09:41:19.417537603 +0000 UTC m=+0.169028695 container attach e2aee9c20db80f9fc2d01a3c80b03bff6f5a000d2199e1f11d4419fa60e7e0f2 (image=quay.io/ceph/ceph:v19, name=sweet_nobel, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 09:41:19 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:41:19.548+0000 7f0044f10140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Dec 06 09:41:19 compute-0 ceph-mgr[74618]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Dec 06 09:41:19 compute-0 ceph-mgr[74618]: mgr[py] Loading python module 'orchestrator'
Dec 06 09:41:19 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:41:19.758+0000 7f0044f10140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Dec 06 09:41:19 compute-0 ceph-mgr[74618]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Dec 06 09:41:19 compute-0 ceph-mgr[74618]: mgr[py] Loading python module 'osd_perf_query'
Dec 06 09:41:19 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:41:19.830+0000 7f0044f10140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Dec 06 09:41:19 compute-0 ceph-mgr[74618]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Dec 06 09:41:19 compute-0 ceph-mgr[74618]: mgr[py] Loading python module 'osd_support'
Dec 06 09:41:19 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:41:19.893+0000 7f0044f10140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Dec 06 09:41:19 compute-0 ceph-mgr[74618]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Dec 06 09:41:19 compute-0 ceph-mgr[74618]: mgr[py] Loading python module 'pg_autoscaler'
Dec 06 09:41:19 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:41:19.967+0000 7f0044f10140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Dec 06 09:41:19 compute-0 ceph-mgr[74618]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Dec 06 09:41:19 compute-0 ceph-mgr[74618]: mgr[py] Loading python module 'progress'
Dec 06 09:41:20 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:41:20.036+0000 7f0044f10140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Dec 06 09:41:20 compute-0 ceph-mgr[74618]: mgr[py] Module progress has missing NOTIFY_TYPES member
Dec 06 09:41:20 compute-0 ceph-mgr[74618]: mgr[py] Loading python module 'prometheus'
Dec 06 09:41:20 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e36 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 09:41:20 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:41:20.368+0000 7f0044f10140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Dec 06 09:41:20 compute-0 ceph-mgr[74618]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Dec 06 09:41:20 compute-0 ceph-mgr[74618]: mgr[py] Loading python module 'rbd_support'
Dec 06 09:41:20 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:41:20.476+0000 7f0044f10140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Dec 06 09:41:20 compute-0 ceph-mgr[74618]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Dec 06 09:41:20 compute-0 ceph-mgr[74618]: mgr[py] Loading python module 'restful'
Dec 06 09:41:20 compute-0 ceph-mgr[74618]: mgr[py] Loading python module 'rgw'
Dec 06 09:41:20 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:41:20.922+0000 7f0044f10140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Dec 06 09:41:20 compute-0 ceph-mgr[74618]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Dec 06 09:41:20 compute-0 ceph-mgr[74618]: mgr[py] Loading python module 'rook'
Dec 06 09:41:21 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:41:21.476+0000 7f0044f10140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Dec 06 09:41:21 compute-0 ceph-mgr[74618]: mgr[py] Module rook has missing NOTIFY_TYPES member
Dec 06 09:41:21 compute-0 ceph-mgr[74618]: mgr[py] Loading python module 'selftest'
Dec 06 09:41:21 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:41:21.553+0000 7f0044f10140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Dec 06 09:41:21 compute-0 ceph-mgr[74618]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Dec 06 09:41:21 compute-0 ceph-mgr[74618]: mgr[py] Loading python module 'snap_schedule'
Dec 06 09:41:21 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:41:21.641+0000 7f0044f10140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Dec 06 09:41:21 compute-0 ceph-mgr[74618]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Dec 06 09:41:21 compute-0 ceph-mgr[74618]: mgr[py] Loading python module 'stats'
Dec 06 09:41:21 compute-0 ceph-mgr[74618]: mgr[py] Loading python module 'status'
Dec 06 09:41:21 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:41:21.799+0000 7f0044f10140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Dec 06 09:41:21 compute-0 ceph-mgr[74618]: mgr[py] Module status has missing NOTIFY_TYPES member
Dec 06 09:41:21 compute-0 ceph-mgr[74618]: mgr[py] Loading python module 'telegraf'
Dec 06 09:41:21 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:41:21.870+0000 7f0044f10140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Dec 06 09:41:21 compute-0 ceph-mgr[74618]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Dec 06 09:41:21 compute-0 ceph-mgr[74618]: mgr[py] Loading python module 'telemetry'
Dec 06 09:41:22 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:41:22.031+0000 7f0044f10140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Dec 06 09:41:22 compute-0 ceph-mgr[74618]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Dec 06 09:41:22 compute-0 ceph-mgr[74618]: mgr[py] Loading python module 'test_orchestrator'
Dec 06 09:41:22 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:41:22.280+0000 7f0044f10140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Dec 06 09:41:22 compute-0 ceph-mgr[74618]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Dec 06 09:41:22 compute-0 ceph-mgr[74618]: mgr[py] Loading python module 'volumes'
Dec 06 09:41:22 compute-0 ceph-mon[74327]: log_channel(cluster) log [DBG] : Standby manager daemon compute-2.oazbvn restarted
Dec 06 09:41:22 compute-0 ceph-mon[74327]: log_channel(cluster) log [DBG] : Standby manager daemon compute-2.oazbvn started
Dec 06 09:41:22 compute-0 ceph-mon[74327]: log_channel(cluster) log [DBG] : mgrmap e21: compute-0.qhdjwa(active, since 13s), standbys: compute-1.sauzid, compute-2.oazbvn
Dec 06 09:41:22 compute-0 ceph-mon[74327]: Standby manager daemon compute-2.oazbvn restarted
Dec 06 09:41:22 compute-0 ceph-mon[74327]: Standby manager daemon compute-2.oazbvn started
Dec 06 09:41:22 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:41:22.553+0000 7f0044f10140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Dec 06 09:41:22 compute-0 ceph-mgr[74618]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Dec 06 09:41:22 compute-0 ceph-mgr[74618]: mgr[py] Loading python module 'zabbix'
Dec 06 09:41:22 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:41:22.643+0000 7f0044f10140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Dec 06 09:41:22 compute-0 ceph-mgr[74618]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Dec 06 09:41:22 compute-0 ceph-mon[74327]: log_channel(cluster) log [INF] : Active manager daemon compute-0.qhdjwa restarted
Dec 06 09:41:22 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e36 do_prune osdmap full prune enabled
Dec 06 09:41:22 compute-0 ceph-mon[74327]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.qhdjwa
Dec 06 09:41:22 compute-0 ceph-mgr[74618]: ms_deliver_dispatch: unhandled message 0x559abcb13860 mon_map magic: 0 from mon.0 v2:192.168.122.100:3300/0
Dec 06 09:41:22 compute-0 ceph-mgr[74618]: mgr handle_mgr_map respawning because set of enabled modules changed!
Dec 06 09:41:22 compute-0 ceph-mgr[74618]: mgr respawn  e: '/usr/bin/ceph-mgr'
Dec 06 09:41:22 compute-0 ceph-mgr[74618]: mgr respawn  0: '/usr/bin/ceph-mgr'
Dec 06 09:41:22 compute-0 ceph-mgr[74618]: mgr respawn  1: '-n'
Dec 06 09:41:22 compute-0 ceph-mgr[74618]: mgr respawn  2: 'mgr.compute-0.qhdjwa'
Dec 06 09:41:22 compute-0 ceph-mgr[74618]: mgr respawn  3: '-f'
Dec 06 09:41:22 compute-0 ceph-mgr[74618]: mgr respawn  4: '--setuser'
Dec 06 09:41:22 compute-0 ceph-mgr[74618]: mgr respawn  5: 'ceph'
Dec 06 09:41:22 compute-0 ceph-mgr[74618]: mgr respawn  6: '--setgroup'
Dec 06 09:41:22 compute-0 ceph-mgr[74618]: mgr respawn  7: 'ceph'
Dec 06 09:41:22 compute-0 ceph-mgr[74618]: mgr respawn  8: '--default-log-to-file=false'
Dec 06 09:41:22 compute-0 ceph-mgr[74618]: mgr respawn  9: '--default-log-to-journald=true'
Dec 06 09:41:22 compute-0 ceph-mgr[74618]: mgr respawn  10: '--default-log-to-stderr=false'
Dec 06 09:41:22 compute-0 ceph-mgr[74618]: mgr respawn respawning with exe /usr/bin/ceph-mgr
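
handle_mgr_map triggers a respawn whenever the mgrmap's set of enabled modules no longer matches what the running daemon loaded: rather than hot-unloading Python modules, ceph-mgr re-executes itself in place, which is why the same module-loading sequence repeats below under a new thread id (7f5366bee140). The argv it logs is exactly what it re-execs with; an illustrative Python analogue of that step (a re-exec keeps the PID, so journald keeps attributing output to ceph-mgr[74618]):

    # Illustrative analogue only: re-exec the current process with the
    # argv logged above (ceph-mgr itself does this internally via execv).
    import os

    argv = [
        "/usr/bin/ceph-mgr", "-n", "mgr.compute-0.qhdjwa", "-f",
        "--setuser", "ceph", "--setgroup", "ceph",
        "--default-log-to-file=false",
        "--default-log-to-journald=true",
        "--default-log-to-stderr=false",
    ]
    os.execv(argv[0], argv)  # never returns on success: same PID, new image

The "ignoring --setuser ceph since I am not root" lines that follow are expected inside the container, where the daemon already runs as the ceph user.
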
Dec 06 09:41:22 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e37 e37: 3 total, 3 up, 3 in
Dec 06 09:41:22 compute-0 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e37: 3 total, 3 up, 3 in
Dec 06 09:41:22 compute-0 ceph-mon[74327]: log_channel(cluster) log [DBG] : mgrmap e22: compute-0.qhdjwa(active, starting, since 0.0444664s), standbys: compute-1.sauzid, compute-2.oazbvn
Dec 06 09:41:22 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ignoring --setuser ceph since I am not root
Dec 06 09:41:22 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ignoring --setgroup ceph since I am not root
Dec 06 09:41:22 compute-0 ceph-mgr[74618]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mgr, pid 2
Dec 06 09:41:22 compute-0 ceph-mgr[74618]: pidfile_write: ignore empty --pid-file
Dec 06 09:41:22 compute-0 ceph-mgr[74618]: mgr[py] Loading python module 'alerts'
Dec 06 09:41:22 compute-0 ceph-mgr[74618]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Dec 06 09:41:22 compute-0 ceph-mgr[74618]: mgr[py] Loading python module 'balancer'
Dec 06 09:41:22 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:41:22.895+0000 7f5366bee140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Dec 06 09:41:22 compute-0 ceph-mgr[74618]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Dec 06 09:41:22 compute-0 ceph-mgr[74618]: mgr[py] Loading python module 'cephadm'
Dec 06 09:41:22 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:41:22.977+0000 7f5366bee140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Dec 06 09:41:23 compute-0 ceph-mon[74327]: log_channel(cluster) log [DBG] : Standby manager daemon compute-1.sauzid restarted
Dec 06 09:41:23 compute-0 ceph-mon[74327]: log_channel(cluster) log [DBG] : Standby manager daemon compute-1.sauzid started
Dec 06 09:41:23 compute-0 ceph-mon[74327]: mgrmap e21: compute-0.qhdjwa(active, since 13s), standbys: compute-1.sauzid, compute-2.oazbvn
Dec 06 09:41:23 compute-0 ceph-mon[74327]: Active manager daemon compute-0.qhdjwa restarted
Dec 06 09:41:23 compute-0 ceph-mon[74327]: Activating manager daemon compute-0.qhdjwa
Dec 06 09:41:23 compute-0 ceph-mon[74327]: osdmap e37: 3 total, 3 up, 3 in
Dec 06 09:41:23 compute-0 ceph-mon[74327]: mgrmap e22: compute-0.qhdjwa(active, starting, since 0.0444664s), standbys: compute-1.sauzid, compute-2.oazbvn
Dec 06 09:41:23 compute-0 ceph-mon[74327]: Standby manager daemon compute-1.sauzid restarted
Dec 06 09:41:23 compute-0 ceph-mon[74327]: Standby manager daemon compute-1.sauzid started
Dec 06 09:41:23 compute-0 ceph-mon[74327]: log_channel(cluster) log [DBG] : mgrmap e23: compute-0.qhdjwa(active, starting, since 1.05925s), standbys: compute-1.sauzid, compute-2.oazbvn
Dec 06 09:41:23 compute-0 ceph-mgr[74618]: mgr[py] Loading python module 'crash'
Dec 06 09:41:23 compute-0 ceph-mgr[74618]: mgr[py] Module crash has missing NOTIFY_TYPES member
Dec 06 09:41:23 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:41:23.832+0000 7f5366bee140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Dec 06 09:41:23 compute-0 ceph-mgr[74618]: mgr[py] Loading python module 'dashboard'
Dec 06 09:41:24 compute-0 ceph-mgr[74618]: mgr[py] Loading python module 'devicehealth'
Dec 06 09:41:24 compute-0 ceph-mgr[74618]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Dec 06 09:41:24 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:41:24.446+0000 7f5366bee140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Dec 06 09:41:24 compute-0 ceph-mgr[74618]: mgr[py] Loading python module 'diskprediction_local'
Dec 06 09:41:24 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Dec 06 09:41:24 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Dec 06 09:41:24 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]:   from numpy import show_config as show_numpy_config
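
The NumPy warning above is a side effect of how ceph-mgr hosts its modules: each Python module runs in its own CPython sub-interpreter inside the single mgr process, and importing scipy (pulled in here by diskprediction_local) drags in NumPy, which does not fully support sub-interpreters. It is a warning, not a failure; the module finishes loading on the next line.
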
Dec 06 09:41:24 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:41:24.611+0000 7f5366bee140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Dec 06 09:41:24 compute-0 ceph-mgr[74618]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Dec 06 09:41:24 compute-0 ceph-mgr[74618]: mgr[py] Loading python module 'influx'
Dec 06 09:41:24 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:41:24.681+0000 7f5366bee140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Dec 06 09:41:24 compute-0 ceph-mgr[74618]: mgr[py] Module influx has missing NOTIFY_TYPES member
Dec 06 09:41:24 compute-0 ceph-mgr[74618]: mgr[py] Loading python module 'insights'
Dec 06 09:41:24 compute-0 ceph-mon[74327]: mgrmap e23: compute-0.qhdjwa(active, starting, since 1.05925s), standbys: compute-1.sauzid, compute-2.oazbvn
Dec 06 09:41:24 compute-0 ceph-mgr[74618]: mgr[py] Loading python module 'iostat'
Dec 06 09:41:24 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:41:24.809+0000 7f5366bee140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Dec 06 09:41:24 compute-0 ceph-mgr[74618]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Dec 06 09:41:24 compute-0 ceph-mgr[74618]: mgr[py] Loading python module 'k8sevents'
Dec 06 09:41:25 compute-0 ceph-mgr[74618]: mgr[py] Loading python module 'localpool'
Dec 06 09:41:25 compute-0 ceph-mgr[74618]: mgr[py] Loading python module 'mds_autoscaler'
Dec 06 09:41:25 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e37 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 09:41:25 compute-0 ceph-mgr[74618]: mgr[py] Loading python module 'mirroring'
Dec 06 09:41:25 compute-0 ceph-mgr[74618]: mgr[py] Loading python module 'nfs'
Dec 06 09:41:25 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:41:25.825+0000 7f5366bee140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Dec 06 09:41:25 compute-0 ceph-mgr[74618]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Dec 06 09:41:25 compute-0 ceph-mgr[74618]: mgr[py] Loading python module 'orchestrator'
Dec 06 09:41:26 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:41:26.053+0000 7f5366bee140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Dec 06 09:41:26 compute-0 ceph-mgr[74618]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Dec 06 09:41:26 compute-0 ceph-mgr[74618]: mgr[py] Loading python module 'osd_perf_query'
Dec 06 09:41:26 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:41:26.134+0000 7f5366bee140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Dec 06 09:41:26 compute-0 ceph-mgr[74618]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Dec 06 09:41:26 compute-0 ceph-mgr[74618]: mgr[py] Loading python module 'osd_support'
Dec 06 09:41:26 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:41:26.203+0000 7f5366bee140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Dec 06 09:41:26 compute-0 ceph-mgr[74618]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Dec 06 09:41:26 compute-0 ceph-mgr[74618]: mgr[py] Loading python module 'pg_autoscaler'
Dec 06 09:41:26 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:41:26.281+0000 7f5366bee140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Dec 06 09:41:26 compute-0 ceph-mgr[74618]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Dec 06 09:41:26 compute-0 ceph-mgr[74618]: mgr[py] Loading python module 'progress'
Dec 06 09:41:26 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:41:26.364+0000 7f5366bee140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Dec 06 09:41:26 compute-0 ceph-mgr[74618]: mgr[py] Module progress has missing NOTIFY_TYPES member
Dec 06 09:41:26 compute-0 ceph-mgr[74618]: mgr[py] Loading python module 'prometheus'
Dec 06 09:41:26 compute-0 systemd[1]: Stopping User Manager for UID 42477...
Dec 06 09:41:26 compute-0 systemd[75653]: Activating special unit Exit the Session...
Dec 06 09:41:26 compute-0 systemd[75653]: Stopped target Main User Target.
Dec 06 09:41:26 compute-0 systemd[75653]: Stopped target Basic System.
Dec 06 09:41:26 compute-0 systemd[75653]: Stopped target Paths.
Dec 06 09:41:26 compute-0 systemd[75653]: Stopped target Sockets.
Dec 06 09:41:26 compute-0 systemd[75653]: Stopped target Timers.
Dec 06 09:41:26 compute-0 systemd[75653]: Stopped Mark boot as successful after the user session has run 2 minutes.
Dec 06 09:41:26 compute-0 systemd[75653]: Stopped Daily Cleanup of User's Temporary Directories.
Dec 06 09:41:26 compute-0 systemd[75653]: Closed D-Bus User Message Bus Socket.
Dec 06 09:41:26 compute-0 systemd[75653]: Stopped Create User's Volatile Files and Directories.
Dec 06 09:41:26 compute-0 systemd[75653]: Removed slice User Application Slice.
Dec 06 09:41:26 compute-0 systemd[75653]: Reached target Shutdown.
Dec 06 09:41:26 compute-0 systemd[75653]: Finished Exit the Session.
Dec 06 09:41:26 compute-0 systemd[75653]: Reached target Exit the Session.
Dec 06 09:41:26 compute-0 systemd[1]: user@42477.service: Deactivated successfully.
Dec 06 09:41:26 compute-0 systemd[1]: Stopped User Manager for UID 42477.
Dec 06 09:41:26 compute-0 systemd[1]: Stopping User Runtime Directory /run/user/42477...
Dec 06 09:41:26 compute-0 systemd[1]: run-user-42477.mount: Deactivated successfully.
Dec 06 09:41:26 compute-0 systemd[1]: user-runtime-dir@42477.service: Deactivated successfully.
Dec 06 09:41:26 compute-0 systemd[1]: Stopped User Runtime Directory /run/user/42477.
Dec 06 09:41:26 compute-0 systemd[1]: Removed slice User Slice of UID 42477.
Dec 06 09:41:26 compute-0 systemd[1]: user-42477.slice: Consumed 31.408s CPU time.
Dec 06 09:41:26 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:41:26.727+0000 7f5366bee140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Dec 06 09:41:26 compute-0 ceph-mgr[74618]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Dec 06 09:41:26 compute-0 ceph-mgr[74618]: mgr[py] Loading python module 'rbd_support'
Dec 06 09:41:26 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:41:26.818+0000 7f5366bee140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Dec 06 09:41:26 compute-0 ceph-mgr[74618]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Dec 06 09:41:26 compute-0 ceph-mgr[74618]: mgr[py] Loading python module 'restful'
Dec 06 09:41:27 compute-0 ceph-mgr[74618]: mgr[py] Loading python module 'rgw'
Dec 06 09:41:27 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:41:27.218+0000 7f5366bee140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Dec 06 09:41:27 compute-0 ceph-mgr[74618]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Dec 06 09:41:27 compute-0 ceph-mgr[74618]: mgr[py] Loading python module 'rook'
Dec 06 09:41:27 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:41:27.784+0000 7f5366bee140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Dec 06 09:41:27 compute-0 ceph-mgr[74618]: mgr[py] Module rook has missing NOTIFY_TYPES member
Dec 06 09:41:27 compute-0 ceph-mgr[74618]: mgr[py] Loading python module 'selftest'
Dec 06 09:41:27 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:41:27.849+0000 7f5366bee140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Dec 06 09:41:27 compute-0 ceph-mgr[74618]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Dec 06 09:41:27 compute-0 ceph-mgr[74618]: mgr[py] Loading python module 'snap_schedule'
Dec 06 09:41:27 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:41:27.921+0000 7f5366bee140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Dec 06 09:41:27 compute-0 ceph-mgr[74618]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Dec 06 09:41:27 compute-0 ceph-mgr[74618]: mgr[py] Loading python module 'stats'
Dec 06 09:41:27 compute-0 ceph-mgr[74618]: mgr[py] Loading python module 'status'
Dec 06 09:41:28 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:41:28.063+0000 7f5366bee140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Dec 06 09:41:28 compute-0 ceph-mgr[74618]: mgr[py] Module status has missing NOTIFY_TYPES member
Dec 06 09:41:28 compute-0 ceph-mgr[74618]: mgr[py] Loading python module 'telegraf'
Dec 06 09:41:28 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:41:28.132+0000 7f5366bee140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Dec 06 09:41:28 compute-0 ceph-mgr[74618]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Dec 06 09:41:28 compute-0 ceph-mgr[74618]: mgr[py] Loading python module 'telemetry'
Dec 06 09:41:28 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:41:28.286+0000 7f5366bee140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Dec 06 09:41:28 compute-0 ceph-mgr[74618]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Dec 06 09:41:28 compute-0 ceph-mgr[74618]: mgr[py] Loading python module 'test_orchestrator'
Dec 06 09:41:28 compute-0 ceph-mon[74327]: log_channel(cluster) log [DBG] : Standby manager daemon compute-2.oazbvn restarted
Dec 06 09:41:28 compute-0 ceph-mon[74327]: log_channel(cluster) log [DBG] : Standby manager daemon compute-2.oazbvn started
Dec 06 09:41:28 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:41:28.515+0000 7f5366bee140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Dec 06 09:41:28 compute-0 ceph-mgr[74618]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Dec 06 09:41:28 compute-0 ceph-mgr[74618]: mgr[py] Loading python module 'volumes'
Dec 06 09:41:28 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:41:28.783+0000 7f5366bee140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Dec 06 09:41:28 compute-0 ceph-mgr[74618]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Dec 06 09:41:28 compute-0 ceph-mgr[74618]: mgr[py] Loading python module 'zabbix'
Dec 06 09:41:28 compute-0 ceph-mon[74327]: log_channel(cluster) log [DBG] : mgrmap e24: compute-0.qhdjwa(active, starting, since 6s), standbys: compute-1.sauzid, compute-2.oazbvn
Dec 06 09:41:28 compute-0 ceph-mon[74327]: Standby manager daemon compute-2.oazbvn restarted
Dec 06 09:41:28 compute-0 ceph-mon[74327]: Standby manager daemon compute-2.oazbvn started
Dec 06 09:41:28 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:41:28.891+0000 7f5366bee140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Dec 06 09:41:28 compute-0 ceph-mgr[74618]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Dec 06 09:41:28 compute-0 ceph-mon[74327]: log_channel(cluster) log [INF] : Active manager daemon compute-0.qhdjwa restarted
Dec 06 09:41:28 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e37 do_prune osdmap full prune enabled
Dec 06 09:41:28 compute-0 ceph-mon[74327]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.qhdjwa
Dec 06 09:41:28 compute-0 ceph-mgr[74618]: ms_deliver_dispatch: unhandled message 0x56090a73d860 mon_map magic: 0 from mon.0 v2:192.168.122.100:3300/0
Dec 06 09:41:28 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e38 e38: 3 total, 3 up, 3 in
Dec 06 09:41:28 compute-0 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e38: 3 total, 3 up, 3 in
Dec 06 09:41:28 compute-0 ceph-mgr[74618]: mgr handle_mgr_map Activating!
Dec 06 09:41:28 compute-0 ceph-mgr[74618]: mgr handle_mgr_map I am now activating
Dec 06 09:41:28 compute-0 ceph-mon[74327]: log_channel(cluster) log [DBG] : mgrmap e25: compute-0.qhdjwa(active, starting, since 0.0265623s), standbys: compute-1.sauzid, compute-2.oazbvn
Dec 06 09:41:28 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Dec 06 09:41:28 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Dec 06 09:41:28 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Dec 06 09:41:28 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec 06 09:41:28 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Dec 06 09:41:28 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Dec 06 09:41:28 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.qhdjwa", "id": "compute-0.qhdjwa"} v 0)
Dec 06 09:41:28 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "mgr metadata", "who": "compute-0.qhdjwa", "id": "compute-0.qhdjwa"}]: dispatch
Dec 06 09:41:28 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-1.sauzid", "id": "compute-1.sauzid"} v 0)
Dec 06 09:41:28 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "mgr metadata", "who": "compute-1.sauzid", "id": "compute-1.sauzid"}]: dispatch
Dec 06 09:41:28 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-2.oazbvn", "id": "compute-2.oazbvn"} v 0)
Dec 06 09:41:28 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "mgr metadata", "who": "compute-2.oazbvn", "id": "compute-2.oazbvn"}]: dispatch
Dec 06 09:41:28 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec 06 09:41:28 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec 06 09:41:28 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec 06 09:41:28 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 06 09:41:28 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec 06 09:41:28 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 06 09:41:28 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds metadata"} v 0)
Dec 06 09:41:28 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "mds metadata"}]: dispatch
Dec 06 09:41:28 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).mds e1 all = 1
Dec 06 09:41:28 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata"} v 0)
Dec 06 09:41:28 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd metadata"}]: dispatch
Dec 06 09:41:28 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata"} v 0)
Dec 06 09:41:28 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "mon metadata"}]: dispatch
Dec 06 09:41:28 compute-0 ceph-mon[74327]: log_channel(cluster) log [INF] : Manager daemon compute-0.qhdjwa is now available
Dec 06 09:41:28 compute-0 ceph-mgr[74618]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 06 09:41:28 compute-0 ceph-mgr[74618]: mgr load Constructed class from module: balancer
Dec 06 09:41:28 compute-0 ceph-mgr[74618]: [balancer INFO root] Starting
Dec 06 09:41:28 compute-0 ceph-mgr[74618]: [cephadm DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 06 09:41:28 compute-0 ceph-mgr[74618]: [balancer INFO root] Optimize plan auto_2025-12-06_09:41:28
Dec 06 09:41:28 compute-0 ceph-mgr[74618]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 06 09:41:28 compute-0 ceph-mgr[74618]: [balancer INFO root] Some PGs (1.000000) are unknown; try again later
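
Here 1.000000 is a fraction, not a count: the freshly activated mgr has not yet received PG stats, so 100% of PGs are "unknown" and the balancer defers its first optimization pass rather than computing an upmap plan.
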
Dec 06 09:41:28 compute-0 ceph-mgr[74618]: mgr load Constructed class from module: cephadm
Dec 06 09:41:28 compute-0 ceph-mgr[74618]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 06 09:41:28 compute-0 ceph-mgr[74618]: mgr load Constructed class from module: crash
Dec 06 09:41:28 compute-0 ceph-mgr[74618]: [dashboard DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 06 09:41:28 compute-0 ceph-mgr[74618]: mgr load Constructed class from module: dashboard
Dec 06 09:41:28 compute-0 ceph-mgr[74618]: [dashboard INFO access_control] Loading user roles DB version=2
Dec 06 09:41:28 compute-0 ceph-mgr[74618]: [dashboard INFO sso] Loading SSO DB version=1
Dec 06 09:41:28 compute-0 ceph-mgr[74618]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 06 09:41:28 compute-0 ceph-mgr[74618]: mgr load Constructed class from module: devicehealth
Dec 06 09:41:28 compute-0 ceph-mgr[74618]: [dashboard INFO root] server: ssl=no host=192.168.122.100 port=8443
Dec 06 09:41:28 compute-0 ceph-mgr[74618]: [dashboard INFO root] Configured CherryPy, starting engine...
Dec 06 09:41:28 compute-0 ceph-mgr[74618]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 06 09:41:28 compute-0 ceph-mgr[74618]: mgr load Constructed class from module: iostat
Dec 06 09:41:28 compute-0 ceph-mgr[74618]: [devicehealth INFO root] Starting
Dec 06 09:41:28 compute-0 ceph-mgr[74618]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 06 09:41:28 compute-0 ceph-mgr[74618]: mgr load Constructed class from module: nfs
Dec 06 09:41:28 compute-0 ceph-mgr[74618]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 06 09:41:28 compute-0 ceph-mgr[74618]: mgr load Constructed class from module: orchestrator
Dec 06 09:41:28 compute-0 ceph-mgr[74618]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 06 09:41:28 compute-0 ceph-mgr[74618]: mgr load Constructed class from module: pg_autoscaler
Dec 06 09:41:28 compute-0 ceph-mgr[74618]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 06 09:41:28 compute-0 ceph-mgr[74618]: mgr load Constructed class from module: progress
Dec 06 09:41:28 compute-0 ceph-mgr[74618]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 06 09:41:28 compute-0 ceph-mgr[74618]: [progress INFO root] Loading...
Dec 06 09:41:28 compute-0 ceph-mgr[74618]: [progress INFO root] Loaded [<progress.module.GhostEvent object at 0x7f52e9783c40>, <progress.module.GhostEvent object at 0x7f52e9783f10>, <progress.module.GhostEvent object at 0x7f52e9783f40>, <progress.module.GhostEvent object at 0x7f52e9783f70>, <progress.module.GhostEvent object at 0x7f52e9783fa0>, <progress.module.GhostEvent object at 0x7f52e9783fd0>, <progress.module.GhostEvent object at 0x7f52e979d040>, <progress.module.GhostEvent object at 0x7f52e979d070>, <progress.module.GhostEvent object at 0x7f52e979d0a0>, <progress.module.GhostEvent object at 0x7f52e979d0d0>] historic events
Dec 06 09:41:28 compute-0 ceph-mgr[74618]: [progress INFO root] Loaded OSDMap, ready.
Dec 06 09:41:28 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] _maybe_adjust
Dec 06 09:41:28 compute-0 ceph-mgr[74618]: [rbd_support INFO root] recovery thread starting
Dec 06 09:41:28 compute-0 ceph-mgr[74618]: [rbd_support INFO root] starting setup
Dec 06 09:41:28 compute-0 ceph-mgr[74618]: mgr load Constructed class from module: rbd_support
Dec 06 09:41:28 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.qhdjwa/mirror_snapshot_schedule"} v 0)
Dec 06 09:41:28 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.qhdjwa/mirror_snapshot_schedule"}]: dispatch
Dec 06 09:41:28 compute-0 ceph-mgr[74618]: [restful DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 06 09:41:28 compute-0 ceph-mgr[74618]: mgr load Constructed class from module: restful
Dec 06 09:41:29 compute-0 ceph-mgr[74618]: [restful INFO root] server_addr: :: server_port: 8003
Dec 06 09:41:29 compute-0 ceph-mgr[74618]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 06 09:41:29 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 09:41:29 compute-0 ceph-mgr[74618]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 06 09:41:29 compute-0 ceph-mgr[74618]: mgr load Constructed class from module: status
Dec 06 09:41:29 compute-0 ceph-mgr[74618]: [restful WARNING root] server not running: no certificate configured
Dec 06 09:41:29 compute-0 ceph-mgr[74618]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 06 09:41:29 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 09:41:29 compute-0 ceph-mgr[74618]: mgr load Constructed class from module: telemetry
Dec 06 09:41:29 compute-0 ceph-mgr[74618]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 06 09:41:29 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 09:41:29 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 09:41:29 compute-0 ceph-mgr[74618]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Dec 06 09:41:29 compute-0 ceph-mgr[74618]: [rbd_support INFO root] PerfHandler: starting
Dec 06 09:41:29 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_task_task: vms, start_after=
Dec 06 09:41:29 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_task_task: volumes, start_after=
Dec 06 09:41:29 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_task_task: backups, start_after=
Dec 06 09:41:29 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_task_task: images, start_after=
Dec 06 09:41:29 compute-0 ceph-mgr[74618]: [rbd_support INFO root] TaskHandler: starting
Dec 06 09:41:29 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.qhdjwa/trash_purge_schedule"} v 0)
Dec 06 09:41:29 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.qhdjwa/trash_purge_schedule"}]: dispatch
Dec 06 09:41:29 compute-0 ceph-mgr[74618]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 06 09:41:29 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 09:41:29 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 09:41:29 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 09:41:29 compute-0 ceph-mgr[74618]: mgr load Constructed class from module: volumes
Dec 06 09:41:29 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 09:41:29 compute-0 ceph-mgr[74618]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Dec 06 09:41:29 compute-0 ceph-mgr[74618]: [rbd_support INFO root] setup complete
Dec 06 09:41:29 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFS -> /api/cephfs
Dec 06 09:41:29 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFsUi -> /ui-api/cephfs
Dec 06 09:41:29 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSubvolume -> /api/cephfs/subvolume
Dec 06 09:41:29 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSubvolumeGroups -> /api/cephfs/subvolume/group
Dec 06 09:41:29 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSubvolumeSnapshots -> /api/cephfs/subvolume/snapshot
Dec 06 09:41:29 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFsSnapshotClone -> /api/cephfs/subvolume/snapshot/clone
Dec 06 09:41:29 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSnapshotSchedule -> /api/cephfs/snapshot/schedule
Dec 06 09:41:29 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: IscsiUi -> /ui-api/iscsi
Dec 06 09:41:29 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Iscsi -> /api/iscsi
Dec 06 09:41:29 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: IscsiTarget -> /api/iscsi/target
Dec 06 09:41:29 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NFSGaneshaCluster -> /api/nfs-ganesha/cluster
Dec 06 09:41:29 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NFSGaneshaExports -> /api/nfs-ganesha/export
Dec 06 09:41:29 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NFSGaneshaUi -> /ui-api/nfs-ganesha
Dec 06 09:41:29 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Orchestrator -> /ui-api/orchestrator
Dec 06 09:41:29 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Service -> /api/service
Dec 06 09:41:29 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroring -> /api/block/mirroring
Dec 06 09:41:29 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringSummary -> /api/block/mirroring/summary
Dec 06 09:41:29 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringPoolMode -> /api/block/mirroring/pool
Dec 06 09:41:29 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringPoolBootstrap -> /api/block/mirroring/pool/{pool_name}/bootstrap
Dec 06 09:41:29 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringPoolPeer -> /api/block/mirroring/pool/{pool_name}/peer
Dec 06 09:41:29 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringStatus -> /ui-api/block/mirroring
Dec 06 09:41:29 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Pool -> /api/pool
Dec 06 09:41:29 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PoolUi -> /ui-api/pool
Dec 06 09:41:29 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RBDPool -> /api/pool
Dec 06 09:41:29 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Rbd -> /api/block/image
Dec 06 09:41:29 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdStatus -> /ui-api/block/rbd
Dec 06 09:41:29 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdSnapshot -> /api/block/image/{image_spec}/snap
Dec 06 09:41:29 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdTrash -> /api/block/image/trash
Dec 06 09:41:29 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdNamespace -> /api/block/pool/{pool_name}/namespace
Dec 06 09:41:29 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Rgw -> /ui-api/rgw
Dec 06 09:41:29 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwMultisiteStatus -> /ui-api/rgw/multisite
Dec 06 09:41:29 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwMultisiteController -> /api/rgw/multisite
Dec 06 09:41:29 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwDaemon -> /api/rgw/daemon
Dec 06 09:41:29 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwSite -> /api/rgw/site
Dec 06 09:41:29 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwBucket -> /api/rgw/bucket
Dec 06 09:41:29 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwBucketUi -> /ui-api/rgw/bucket
Dec 06 09:41:29 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwUser -> /api/rgw/user
Dec 06 09:41:29 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: rgwroles_CRUDClass -> /api/rgw/roles
Dec 06 09:41:29 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: rgwroles_CRUDClassMetadata -> /ui-api/rgw/roles
Dec 06 09:41:29 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwRealm -> /api/rgw/realm
Dec 06 09:41:29 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwZonegroup -> /api/rgw/zonegroup
Dec 06 09:41:29 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwZone -> /api/rgw/zone
Dec 06 09:41:29 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Auth -> /api/auth
Dec 06 09:41:29 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: clusteruser_CRUDClass -> /api/cluster/user
Dec 06 09:41:29 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: clusteruser_CRUDClassMetadata -> /ui-api/cluster/user
Dec 06 09:41:29 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Cluster -> /api/cluster
Dec 06 09:41:29 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ClusterUpgrade -> /api/cluster/upgrade
Dec 06 09:41:29 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ClusterConfiguration -> /api/cluster_conf
Dec 06 09:41:29 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CrushRule -> /api/crush_rule
Dec 06 09:41:29 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CrushRuleUi -> /ui-api/crush_rule
Dec 06 09:41:29 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Daemon -> /api/daemon
Dec 06 09:41:29 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Docs -> /docs
Dec 06 09:41:29 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ErasureCodeProfile -> /api/erasure_code_profile
Dec 06 09:41:29 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ErasureCodeProfileUi -> /ui-api/erasure_code_profile
Dec 06 09:41:29 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeedbackController -> /api/feedback
Dec 06 09:41:29 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeedbackApiController -> /api/feedback/api_key
Dec 06 09:41:29 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeedbackUiController -> /ui-api/feedback/api_key
Dec 06 09:41:29 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FrontendLogging -> /ui-api/logging
Dec 06 09:41:29 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Grafana -> /api/grafana
Dec 06 09:41:29 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Host -> /api/host
Dec 06 09:41:29 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: HostUi -> /ui-api/host
Dec 06 09:41:29 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Health -> /api/health
Dec 06 09:41:29 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: HomeController -> /
Dec 06 09:41:29 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: LangsController -> /ui-api/langs
Dec 06 09:41:29 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: LoginController -> /ui-api/login
Dec 06 09:41:29 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Logs -> /api/logs
Dec 06 09:41:29 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MgrModules -> /api/mgr/module
Dec 06 09:41:29 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Monitor -> /api/monitor
Dec 06 09:41:29 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFGateway -> /api/nvmeof/gateway
Dec 06 09:41:29 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFSpdk -> /api/nvmeof/spdk
Dec 06 09:41:29 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFSubsystem -> /api/nvmeof/subsystem
Dec 06 09:41:29 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFListener -> /api/nvmeof/subsystem/{nqn}/listener
Dec 06 09:41:29 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFNamespace -> /api/nvmeof/subsystem/{nqn}/namespace
Dec 06 09:41:29 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFHost -> /api/nvmeof/subsystem/{nqn}/host
Dec 06 09:41:29 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFConnection -> /api/nvmeof/subsystem/{nqn}/connection
Dec 06 09:41:29 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFTcpUI -> /ui-api/nvmeof
Dec 06 09:41:29 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Osd -> /api/osd
Dec 06 09:41:29 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: OsdUi -> /ui-api/osd
Dec 06 09:41:29 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: OsdFlagsController -> /api/osd/flags
Dec 06 09:41:29 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MdsPerfCounter -> /api/perf_counters/mds
Dec 06 09:41:29 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MonPerfCounter -> /api/perf_counters/mon
Dec 06 09:41:29 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: OsdPerfCounter -> /api/perf_counters/osd
Dec 06 09:41:29 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwPerfCounter -> /api/perf_counters/rgw
Dec 06 09:41:29 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirrorPerfCounter -> /api/perf_counters/rbd-mirror
Dec 06 09:41:29 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MgrPerfCounter -> /api/perf_counters/mgr
Dec 06 09:41:29 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: TcmuRunnerPerfCounter -> /api/perf_counters/tcmu-runner
Dec 06 09:41:29 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PerfCounters -> /api/perf_counters
Dec 06 09:41:29 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PrometheusReceiver -> /api/prometheus_receiver
Dec 06 09:41:29 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Prometheus -> /api/prometheus
Dec 06 09:41:29 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PrometheusNotifications -> /api/prometheus/notifications
Dec 06 09:41:29 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PrometheusSettings -> /ui-api/prometheus
Dec 06 09:41:29 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Role -> /api/role
Dec 06 09:41:29 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Scope -> /ui-api/scope
Dec 06 09:41:29 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Saml2 -> /auth/saml2
Dec 06 09:41:29 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Settings -> /api/settings
Dec 06 09:41:29 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: StandardSettings -> /ui-api/standard_settings
Dec 06 09:41:29 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Summary -> /api/summary
Dec 06 09:41:29 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Task -> /api/task
Dec 06 09:41:29 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Telemetry -> /api/telemetry
Dec 06 09:41:29 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: User -> /api/user
Dec 06 09:41:29 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: UserPasswordPolicy -> /api/user
Dec 06 09:41:29 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: UserChangePassword -> /api/user/{username}
Dec 06 09:41:29 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeatureTogglesEndpoint -> /api/feature_toggles
Dec 06 09:41:29 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MessageOfTheDay -> /ui-api/motd
Dec 06 09:41:29 compute-0 sshd-session[90417]: Accepted publickey for ceph-admin from 192.168.122.100 port 42686 ssh2: RSA SHA256:Gxeh0g0CuyN5zOpDUv+8o0JynyC1ASnaMny1857KGxo
Dec 06 09:41:29 compute-0 systemd-logind[795]: New session 35 of user ceph-admin.
Dec 06 09:41:29 compute-0 systemd[1]: Created slice User Slice of UID 42477.
Dec 06 09:41:29 compute-0 systemd[1]: Starting User Runtime Directory /run/user/42477...
Dec 06 09:41:29 compute-0 systemd[1]: Finished User Runtime Directory /run/user/42477.
Dec 06 09:41:29 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.module] Engine started.
Dec 06 09:41:29 compute-0 systemd[1]: Starting User Manager for UID 42477...
Dec 06 09:41:29 compute-0 systemd[90433]: pam_unix(systemd-user:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Dec 06 09:41:29 compute-0 systemd[90433]: Queued start job for default target Main User Target.
Dec 06 09:41:29 compute-0 systemd[90433]: Created slice User Application Slice.
Dec 06 09:41:29 compute-0 systemd[90433]: Started Mark boot as successful after the user session has run 2 minutes.
Dec 06 09:41:29 compute-0 systemd[90433]: Started Daily Cleanup of User's Temporary Directories.
Dec 06 09:41:29 compute-0 systemd[90433]: Reached target Paths.
Dec 06 09:41:29 compute-0 systemd[90433]: Reached target Timers.
Dec 06 09:41:29 compute-0 systemd[90433]: Starting D-Bus User Message Bus Socket...
Dec 06 09:41:29 compute-0 systemd[90433]: Starting Create User's Volatile Files and Directories...
Dec 06 09:41:29 compute-0 systemd[90433]: Finished Create User's Volatile Files and Directories.
Dec 06 09:41:29 compute-0 systemd[90433]: Listening on D-Bus User Message Bus Socket.
Dec 06 09:41:29 compute-0 systemd[90433]: Reached target Sockets.
Dec 06 09:41:29 compute-0 systemd[90433]: Reached target Basic System.
Dec 06 09:41:29 compute-0 systemd[90433]: Reached target Main User Target.
Dec 06 09:41:29 compute-0 systemd[90433]: Startup finished in 122ms.
Dec 06 09:41:29 compute-0 systemd[1]: Started User Manager for UID 42477.
Dec 06 09:41:29 compute-0 systemd[1]: Started Session 35 of User ceph-admin.
Dec 06 09:41:29 compute-0 sshd-session[90417]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Dec 06 09:41:29 compute-0 sudo[90449]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 09:41:29 compute-0 sudo[90449]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:41:29 compute-0 sudo[90449]: pam_unix(sudo:session): session closed for user root
Dec 06 09:41:29 compute-0 sudo[90474]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ls
Dec 06 09:41:29 compute-0 sudo[90474]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:41:29 compute-0 ceph-mon[74327]: mgrmap e24: compute-0.qhdjwa(active, starting, since 6s), standbys: compute-1.sauzid, compute-2.oazbvn
Dec 06 09:41:29 compute-0 ceph-mon[74327]: Active manager daemon compute-0.qhdjwa restarted
Dec 06 09:41:29 compute-0 ceph-mon[74327]: Activating manager daemon compute-0.qhdjwa
Dec 06 09:41:29 compute-0 ceph-mon[74327]: osdmap e38: 3 total, 3 up, 3 in
Dec 06 09:41:29 compute-0 ceph-mon[74327]: mgrmap e25: compute-0.qhdjwa(active, starting, since 0.0265623s), standbys: compute-1.sauzid, compute-2.oazbvn
Dec 06 09:41:29 compute-0 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Dec 06 09:41:29 compute-0 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec 06 09:41:29 compute-0 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Dec 06 09:41:29 compute-0 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "mgr metadata", "who": "compute-0.qhdjwa", "id": "compute-0.qhdjwa"}]: dispatch
Dec 06 09:41:29 compute-0 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "mgr metadata", "who": "compute-1.sauzid", "id": "compute-1.sauzid"}]: dispatch
Dec 06 09:41:29 compute-0 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "mgr metadata", "who": "compute-2.oazbvn", "id": "compute-2.oazbvn"}]: dispatch
Dec 06 09:41:29 compute-0 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec 06 09:41:29 compute-0 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 06 09:41:29 compute-0 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 06 09:41:29 compute-0 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "mds metadata"}]: dispatch
Dec 06 09:41:29 compute-0 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd metadata"}]: dispatch
Dec 06 09:41:29 compute-0 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "mon metadata"}]: dispatch
Dec 06 09:41:29 compute-0 ceph-mon[74327]: Manager daemon compute-0.qhdjwa is now available
Dec 06 09:41:29 compute-0 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.qhdjwa/mirror_snapshot_schedule"}]: dispatch
Dec 06 09:41:29 compute-0 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.qhdjwa/trash_purge_schedule"}]: dispatch
Dec 06 09:41:29 compute-0 ceph-mon[74327]: log_channel(cluster) log [DBG] : mgrmap e26: compute-0.qhdjwa(active, since 1.06111s), standbys: compute-1.sauzid, compute-2.oazbvn
Dec 06 09:41:29 compute-0 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.14391 -' entity='client.admin' cmd=[{"prefix": "fs volume create", "name": "cephfs", "placement": "compute-0 compute-1 compute-2 ", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 09:41:29 compute-0 ceph-mgr[74618]: [volumes INFO volumes.module] Starting _cmd_fs_volume_create(name:cephfs, placement:compute-0 compute-1 compute-2 , prefix:fs volume create, target:['mon-mgr', '']) < ""
Dec 06 09:41:29 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"} v 0)
Dec 06 09:41:29 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"}]: dispatch
Dec 06 09:41:29 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v3: 131 pgs: 131 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 06 09:41:29 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"} v 0)
Dec 06 09:41:29 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"}]: dispatch
Dec 06 09:41:29 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"} v 0)
Dec 06 09:41:29 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]: dispatch
Dec 06 09:41:29 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e38 do_prune osdmap full prune enabled
Dec 06 09:41:29 compute-0 ceph-mon[74327]: log_channel(cluster) log [ERR] : Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Dec 06 09:41:29 compute-0 ceph-mon[74327]: log_channel(cluster) log [WRN] : Health check failed: 1 filesystem is online with fewer MDS than max_mds (MDS_UP_LESS_THAN_MAX)
Dec 06 09:41:29 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mon-compute-0[74323]: 2025-12-06T09:41:29.966+0000 7f374f329640 -1 log_channel(cluster) log [ERR] : Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Dec 06 09:41:29 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]': finished
Dec 06 09:41:29 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).mds e2 new map
Dec 06 09:41:29 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).mds e2 print_map
                                           e2
                                           btime 2025-12-06T09:41:29.967825+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        2
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2025-12-06T09:41:29.967778+0000
                                           modified        2025-12-06T09:41:29.967778+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           max_mds        1
                                           in        
                                           up        {}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        0
                                           qdb_cluster        leader: 0 members: 
                                            
                                            
Dec 06 09:41:29 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e39 e39: 3 total, 3 up, 3 in
Dec 06 09:41:29 compute-0 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e39: 3 total, 3 up, 3 in
Dec 06 09:41:29 compute-0 ceph-mon[74327]: log_channel(cluster) log [DBG] : fsmap cephfs:0
Dec 06 09:41:29 compute-0 ceph-mgr[74618]: [cephadm INFO root] Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Dec 06 09:41:29 compute-0 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Dec 06 09:41:29 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Dec 06 09:41:30 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:41:30 compute-0 ceph-mgr[74618]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_create(name:cephfs, placement:compute-0 compute-1 compute-2 , prefix:fs volume create, target:['mon-mgr', '']) < ""
Dec 06 09:41:30 compute-0 systemd[1]: libpod-e2aee9c20db80f9fc2d01a3c80b03bff6f5a000d2199e1f11d4419fa60e7e0f2.scope: Deactivated successfully.
Dec 06 09:41:30 compute-0 podman[90511]: 2025-12-06 09:41:30.09590233 +0000 UTC m=+0.039280502 container died e2aee9c20db80f9fc2d01a3c80b03bff6f5a000d2199e1f11d4419fa60e7e0f2 (image=quay.io/ceph/ceph:v19, name=sweet_nobel, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 09:41:30 compute-0 ceph-mgr[74618]: [cephadm INFO cherrypy.error] [06/Dec/2025:09:41:30] ENGINE Bus STARTING
Dec 06 09:41:30 compute-0 ceph-mgr[74618]: log_channel(cephadm) log [INF] : [06/Dec/2025:09:41:30] ENGINE Bus STARTING
Dec 06 09:41:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-b34b16967754a5a79087b9d7bb9fc049c609344b7371090995fd6861896bc1de-merged.mount: Deactivated successfully.
Dec 06 09:41:30 compute-0 podman[90511]: 2025-12-06 09:41:30.149915808 +0000 UTC m=+0.093293980 container remove e2aee9c20db80f9fc2d01a3c80b03bff6f5a000d2199e1f11d4419fa60e7e0f2 (image=quay.io/ceph/ceph:v19, name=sweet_nobel, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec 06 09:41:30 compute-0 systemd[1]: libpod-conmon-e2aee9c20db80f9fc2d01a3c80b03bff6f5a000d2199e1f11d4419fa60e7e0f2.scope: Deactivated successfully.
Dec 06 09:41:30 compute-0 sudo[90226]: pam_unix(sudo:session): session closed for user root
Dec 06 09:41:30 compute-0 ceph-mgr[74618]: [cephadm INFO cherrypy.error] [06/Dec/2025:09:41:30] ENGINE Serving on https://192.168.122.100:7150
Dec 06 09:41:30 compute-0 ceph-mgr[74618]: log_channel(cephadm) log [INF] : [06/Dec/2025:09:41:30] ENGINE Serving on https://192.168.122.100:7150
Dec 06 09:41:30 compute-0 ceph-mgr[74618]: [cephadm INFO cherrypy.error] [06/Dec/2025:09:41:30] ENGINE Client ('192.168.122.100', 59318) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Dec 06 09:41:30 compute-0 ceph-mgr[74618]: log_channel(cephadm) log [INF] : [06/Dec/2025:09:41:30] ENGINE Client ('192.168.122.100', 59318) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Dec 06 09:41:30 compute-0 ceph-mon[74327]: log_channel(cluster) log [DBG] : Standby manager daemon compute-1.sauzid restarted
Dec 06 09:41:30 compute-0 ceph-mon[74327]: log_channel(cluster) log [DBG] : Standby manager daemon compute-1.sauzid started
Dec 06 09:41:30 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 09:41:30 compute-0 ceph-mgr[74618]: [cephadm INFO cherrypy.error] [06/Dec/2025:09:41:30] ENGINE Serving on http://192.168.122.100:8765
Dec 06 09:41:30 compute-0 ceph-mgr[74618]: log_channel(cephadm) log [INF] : [06/Dec/2025:09:41:30] ENGINE Serving on http://192.168.122.100:8765
Dec 06 09:41:30 compute-0 ceph-mgr[74618]: [cephadm INFO cherrypy.error] [06/Dec/2025:09:41:30] ENGINE Bus STARTED
Dec 06 09:41:30 compute-0 ceph-mgr[74618]: log_channel(cephadm) log [INF] : [06/Dec/2025:09:41:30] ENGINE Bus STARTED
Dec 06 09:41:30 compute-0 sudo[90614]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-auegcvtsyvzmpthpzycdkwavdrozfytc ; /usr/bin/python3'
Dec 06 09:41:30 compute-0 sudo[90614]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:41:30 compute-0 podman[90633]: 2025-12-06 09:41:30.48944357 +0000 UTC m=+0.072556515 container exec 484d6ed1039c50317cf4b6067525b7ed0f8de7c568c9445500e62194ab25d04d (image=quay.io/ceph/ceph:v19, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mon-compute-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 09:41:30 compute-0 python3[90618]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_mds.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 5ecd3f74-dade-5fc4-92ce-8950ae424258 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 09:41:30 compute-0 podman[90656]: 2025-12-06 09:41:30.559567706 +0000 UTC m=+0.055044331 container create 1b1626270134ec769da2721ff3e2ae0ef95a30863be653c311053c643c1695e4 (image=quay.io/ceph/ceph:v19, name=great_curie, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Dec 06 09:41:30 compute-0 podman[90633]: 2025-12-06 09:41:30.585925339 +0000 UTC m=+0.169038244 container exec_died 484d6ed1039c50317cf4b6067525b7ed0f8de7c568c9445500e62194ab25d04d (image=quay.io/ceph/ceph:v19, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mon-compute-0, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Dec 06 09:41:30 compute-0 systemd[1]: Started libpod-conmon-1b1626270134ec769da2721ff3e2ae0ef95a30863be653c311053c643c1695e4.scope.
Dec 06 09:41:30 compute-0 podman[90656]: 2025-12-06 09:41:30.529468665 +0000 UTC m=+0.024945330 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 06 09:41:30 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec 06 09:41:30 compute-0 systemd[1]: Started libcrun container.
Dec 06 09:41:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/911e2a99adf09573c6a23d7c2cd13a5492054047244274e562403b36ae89a199/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 09:41:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/911e2a99adf09573c6a23d7c2cd13a5492054047244274e562403b36ae89a199/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec 06 09:41:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/911e2a99adf09573c6a23d7c2cd13a5492054047244274e562403b36ae89a199/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 09:41:30 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:41:30 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec 06 09:41:30 compute-0 podman[90656]: 2025-12-06 09:41:30.656172179 +0000 UTC m=+0.151648784 container init 1b1626270134ec769da2721ff3e2ae0ef95a30863be653c311053c643c1695e4 (image=quay.io/ceph/ceph:v19, name=great_curie, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Dec 06 09:41:30 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:41:30 compute-0 podman[90656]: 2025-12-06 09:41:30.663989937 +0000 UTC m=+0.159466522 container start 1b1626270134ec769da2721ff3e2ae0ef95a30863be653c311053c643c1695e4 (image=quay.io/ceph/ceph:v19, name=great_curie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 09:41:30 compute-0 podman[90656]: 2025-12-06 09:41:30.667737945 +0000 UTC m=+0.163214530 container attach 1b1626270134ec769da2721ff3e2ae0ef95a30863be653c311053c643c1695e4 (image=quay.io/ceph/ceph:v19, name=great_curie, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Dec 06 09:41:30 compute-0 sudo[90474]: pam_unix(sudo:session): session closed for user root
Dec 06 09:41:30 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 06 09:41:30 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:41:30 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 06 09:41:30 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:41:30 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v5: 131 pgs: 131 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 06 09:41:30 compute-0 ceph-mon[74327]: mgrmap e26: compute-0.qhdjwa(active, since 1.06111s), standbys: compute-1.sauzid, compute-2.oazbvn
Dec 06 09:41:30 compute-0 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"}]: dispatch
Dec 06 09:41:30 compute-0 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"}]: dispatch
Dec 06 09:41:30 compute-0 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]: dispatch
Dec 06 09:41:30 compute-0 ceph-mon[74327]: Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Dec 06 09:41:30 compute-0 ceph-mon[74327]: Health check failed: 1 filesystem is online with fewer MDS than max_mds (MDS_UP_LESS_THAN_MAX)
Dec 06 09:41:30 compute-0 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]': finished
Dec 06 09:41:30 compute-0 ceph-mon[74327]: osdmap e39: 3 total, 3 up, 3 in
Dec 06 09:41:30 compute-0 ceph-mon[74327]: fsmap cephfs:0
Dec 06 09:41:30 compute-0 ceph-mon[74327]: Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Dec 06 09:41:30 compute-0 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:41:30 compute-0 ceph-mon[74327]: [06/Dec/2025:09:41:30] ENGINE Bus STARTING
Dec 06 09:41:30 compute-0 ceph-mon[74327]: [06/Dec/2025:09:41:30] ENGINE Serving on https://192.168.122.100:7150
Dec 06 09:41:30 compute-0 ceph-mon[74327]: [06/Dec/2025:09:41:30] ENGINE Client ('192.168.122.100', 59318) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Dec 06 09:41:30 compute-0 ceph-mon[74327]: Standby manager daemon compute-1.sauzid restarted
Dec 06 09:41:30 compute-0 ceph-mon[74327]: Standby manager daemon compute-1.sauzid started
Dec 06 09:41:30 compute-0 ceph-mon[74327]: [06/Dec/2025:09:41:30] ENGINE Serving on http://192.168.122.100:8765
Dec 06 09:41:30 compute-0 ceph-mon[74327]: [06/Dec/2025:09:41:30] ENGINE Bus STARTED
Dec 06 09:41:30 compute-0 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:41:30 compute-0 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:41:30 compute-0 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:41:30 compute-0 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:41:30 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec 06 09:41:30 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:41:30 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec 06 09:41:30 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:41:30 compute-0 sudo[90761]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 09:41:31 compute-0 sudo[90761]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:41:31 compute-0 sudo[90761]: pam_unix(sudo:session): session closed for user root
Dec 06 09:41:31 compute-0 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.14424 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 09:41:31 compute-0 ceph-mgr[74618]: [cephadm INFO root] Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Dec 06 09:41:31 compute-0 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Dec 06 09:41:31 compute-0 ceph-mon[74327]: log_channel(cluster) log [DBG] : mgrmap e27: compute-0.qhdjwa(active, since 2s), standbys: compute-1.sauzid, compute-2.oazbvn
Dec 06 09:41:31 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Dec 06 09:41:31 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:41:31 compute-0 great_curie[90681]: Scheduled mds.cephfs update...
Dec 06 09:41:31 compute-0 ceph-mgr[74618]: [devicehealth INFO root] Check health
Dec 06 09:41:31 compute-0 systemd[1]: libpod-1b1626270134ec769da2721ff3e2ae0ef95a30863be653c311053c643c1695e4.scope: Deactivated successfully.
Dec 06 09:41:31 compute-0 podman[90656]: 2025-12-06 09:41:31.080019477 +0000 UTC m=+0.575496062 container died 1b1626270134ec769da2721ff3e2ae0ef95a30863be653c311053c643c1695e4 (image=quay.io/ceph/ceph:v19, name=great_curie, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Dec 06 09:41:31 compute-0 sudo[90791]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Dec 06 09:41:31 compute-0 sudo[90791]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:41:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-911e2a99adf09573c6a23d7c2cd13a5492054047244274e562403b36ae89a199-merged.mount: Deactivated successfully.
Dec 06 09:41:31 compute-0 podman[90656]: 2025-12-06 09:41:31.121917712 +0000 UTC m=+0.617394307 container remove 1b1626270134ec769da2721ff3e2ae0ef95a30863be653c311053c643c1695e4 (image=quay.io/ceph/ceph:v19, name=great_curie, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0)
Dec 06 09:41:31 compute-0 systemd[1]: libpod-conmon-1b1626270134ec769da2721ff3e2ae0ef95a30863be653c311053c643c1695e4.scope: Deactivated successfully.
Dec 06 09:41:31 compute-0 sudo[90614]: pam_unix(sudo:session): session closed for user root
Dec 06 09:41:31 compute-0 sudo[90866]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cvmykpjwpiqeatvedyxtjwojpbcynwbf ; /usr/bin/python3'
Dec 06 09:41:31 compute-0 sudo[90866]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:41:31 compute-0 python3[90871]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_mds.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 5ecd3f74-dade-5fc4-92ce-8950ae424258 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   nfs cluster create cephfs --ingress --virtual-ip=192.168.122.2/24 --ingress-mode=haproxy-protocol '--placement=compute-0 compute-1 compute-2 '
                                           _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 09:41:31 compute-0 podman[90879]: 2025-12-06 09:41:31.511937709 +0000 UTC m=+0.037011940 container create 632a78f140b1ef82ead610dc02e0059d3299783968b922c43049035ae8b701e1 (image=quay.io/ceph/ceph:v19, name=infallible_jang, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 09:41:31 compute-0 systemd[1]: Started libpod-conmon-632a78f140b1ef82ead610dc02e0059d3299783968b922c43049035ae8b701e1.scope.
Dec 06 09:41:31 compute-0 systemd[1]: Started libcrun container.
Dec 06 09:41:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec05436ebdb698f5e908365bdbba6fad3ff748ad35383fd102d8ab48ac0daa5b/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 09:41:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec05436ebdb698f5e908365bdbba6fad3ff748ad35383fd102d8ab48ac0daa5b/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec 06 09:41:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec05436ebdb698f5e908365bdbba6fad3ff748ad35383fd102d8ab48ac0daa5b/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 09:41:31 compute-0 podman[90879]: 2025-12-06 09:41:31.495000304 +0000 UTC m=+0.020074565 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 06 09:41:31 compute-0 podman[90879]: 2025-12-06 09:41:31.599329792 +0000 UTC m=+0.124404083 container init 632a78f140b1ef82ead610dc02e0059d3299783968b922c43049035ae8b701e1 (image=quay.io/ceph/ceph:v19, name=infallible_jang, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 09:41:31 compute-0 podman[90879]: 2025-12-06 09:41:31.606659494 +0000 UTC m=+0.131733755 container start 632a78f140b1ef82ead610dc02e0059d3299783968b922c43049035ae8b701e1 (image=quay.io/ceph/ceph:v19, name=infallible_jang, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Dec 06 09:41:31 compute-0 podman[90879]: 2025-12-06 09:41:31.611457765 +0000 UTC m=+0.136532026 container attach 632a78f140b1ef82ead610dc02e0059d3299783968b922c43049035ae8b701e1 (image=quay.io/ceph/ceph:v19, name=infallible_jang, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec 06 09:41:31 compute-0 sudo[90791]: pam_unix(sudo:session): session closed for user root
Dec 06 09:41:31 compute-0 sudo[90912]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 09:41:31 compute-0 sudo[90912]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:41:31 compute-0 sudo[90912]: pam_unix(sudo:session): session closed for user root
Dec 06 09:41:31 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec 06 09:41:31 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:41:31 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec 06 09:41:31 compute-0 sudo[90947]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 list-networks
Dec 06 09:41:31 compute-0 sudo[90947]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:41:31 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:41:31 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} v 0)
Dec 06 09:41:31 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"}]: dispatch
Dec 06 09:41:31 compute-0 ceph-mgr[74618]: [cephadm INFO root] Adjusting osd_memory_target on compute-2 to 127.9M
Dec 06 09:41:31 compute-0 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on compute-2 to 127.9M
Dec 06 09:41:31 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0)
Dec 06 09:41:31 compute-0 ceph-mgr[74618]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on compute-2 to 134214860: error parsing value: Value '134214860' is below minimum 939524096
Dec 06 09:41:31 compute-0 ceph-mgr[74618]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on compute-2 to 134214860: error parsing value: Value '134214860' is below minimum 939524096
Dec 06 09:41:31 compute-0 ceph-mon[74327]: pgmap v5: 131 pgs: 131 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 06 09:41:31 compute-0 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:41:31 compute-0 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:41:31 compute-0 ceph-mon[74327]: from='client.14424 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 09:41:31 compute-0 ceph-mon[74327]: Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Dec 06 09:41:31 compute-0 ceph-mon[74327]: mgrmap e27: compute-0.qhdjwa(active, since 2s), standbys: compute-1.sauzid, compute-2.oazbvn
Dec 06 09:41:31 compute-0 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:41:31 compute-0 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:41:31 compute-0 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:41:31 compute-0 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"}]: dispatch
Dec 06 09:41:32 compute-0 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.14433 -' entity='client.admin' cmd=[{"prefix": "nfs cluster create", "cluster_id": "cephfs", "ingress": true, "virtual_ip": "192.168.122.2/24", "ingress_mode": "haproxy-protocol", "placement": "compute-0 compute-1 compute-2 ", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 09:41:32 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool create", "pool": ".nfs", "yes_i_really_mean_it": true} v 0)
Dec 06 09:41:32 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool create", "pool": ".nfs", "yes_i_really_mean_it": true}]: dispatch
Dec 06 09:41:32 compute-0 sudo[90947]: pam_unix(sudo:session): session closed for user root
Dec 06 09:41:32 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 06 09:41:32 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:41:32 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 06 09:41:32 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:41:32 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} v 0)
Dec 06 09:41:32 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch
Dec 06 09:41:32 compute-0 ceph-mgr[74618]: [cephadm INFO root] Adjusting osd_memory_target on compute-0 to 128.0M
Dec 06 09:41:32 compute-0 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on compute-0 to 128.0M
Dec 06 09:41:32 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0)
Dec 06 09:41:32 compute-0 ceph-mgr[74618]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on compute-0 to 134217728: error parsing value: Value '134217728' is below minimum 939524096
Dec 06 09:41:32 compute-0 ceph-mgr[74618]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on compute-0 to 134217728: error parsing value: Value '134217728' is below minimum 939524096
Dec 06 09:41:32 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec 06 09:41:32 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:41:32 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec 06 09:41:32 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:41:32 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} v 0)
Dec 06 09:41:32 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch
Dec 06 09:41:32 compute-0 ceph-mgr[74618]: [cephadm INFO root] Adjusting osd_memory_target on compute-1 to 127.9M
Dec 06 09:41:32 compute-0 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on compute-1 to 127.9M
Dec 06 09:41:32 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0)
Dec 06 09:41:32 compute-0 ceph-mgr[74618]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on compute-1 to 134211993: error parsing value: Value '134211993' is below minimum 939524096
Dec 06 09:41:32 compute-0 ceph-mgr[74618]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on compute-1 to 134211993: error parsing value: Value '134211993' is below minimum 939524096
Dec 06 09:41:32 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 06 09:41:32 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 09:41:32 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 06 09:41:32 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 09:41:32 compute-0 ceph-mgr[74618]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.conf
Dec 06 09:41:32 compute-0 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.conf
Dec 06 09:41:32 compute-0 ceph-mgr[74618]: [cephadm INFO cephadm.serve] Updating compute-1:/etc/ceph/ceph.conf
Dec 06 09:41:32 compute-0 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Updating compute-1:/etc/ceph/ceph.conf
Dec 06 09:41:32 compute-0 ceph-mgr[74618]: [cephadm INFO cephadm.serve] Updating compute-2:/etc/ceph/ceph.conf
Dec 06 09:41:32 compute-0 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Updating compute-2:/etc/ceph/ceph.conf
Dec 06 09:41:32 compute-0 sudo[91000]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Dec 06 09:41:32 compute-0 sudo[91000]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:41:32 compute-0 sudo[91000]: pam_unix(sudo:session): session closed for user root
Dec 06 09:41:32 compute-0 sudo[91025]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-5ecd3f74-dade-5fc4-92ce-8950ae424258/etc/ceph
Dec 06 09:41:32 compute-0 sudo[91025]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:41:32 compute-0 sudo[91025]: pam_unix(sudo:session): session closed for user root
Dec 06 09:41:32 compute-0 sudo[91050]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-5ecd3f74-dade-5fc4-92ce-8950ae424258/etc/ceph/ceph.conf.new
Dec 06 09:41:32 compute-0 sudo[91050]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:41:32 compute-0 sudo[91050]: pam_unix(sudo:session): session closed for user root
Dec 06 09:41:32 compute-0 sudo[91075]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-5ecd3f74-dade-5fc4-92ce-8950ae424258
Dec 06 09:41:32 compute-0 sudo[91075]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:41:32 compute-0 sudo[91075]: pam_unix(sudo:session): session closed for user root
Dec 06 09:41:32 compute-0 sudo[91100]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-5ecd3f74-dade-5fc4-92ce-8950ae424258/etc/ceph/ceph.conf.new
Dec 06 09:41:32 compute-0 sudo[91100]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:41:32 compute-0 sudo[91100]: pam_unix(sudo:session): session closed for user root
Dec 06 09:41:32 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v6: 131 pgs: 131 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 06 09:41:32 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e39 do_prune osdmap full prune enabled
Dec 06 09:41:32 compute-0 ceph-mon[74327]: Adjusting osd_memory_target on compute-2 to 127.9M
Dec 06 09:41:32 compute-0 sudo[91148]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-5ecd3f74-dade-5fc4-92ce-8950ae424258/etc/ceph/ceph.conf.new
Dec 06 09:41:32 compute-0 ceph-mon[74327]: Unable to set osd_memory_target on compute-2 to 134214860: error parsing value: Value '134214860' is below minimum 939524096
Dec 06 09:41:32 compute-0 ceph-mon[74327]: from='client.14433 -' entity='client.admin' cmd=[{"prefix": "nfs cluster create", "cluster_id": "cephfs", "ingress": true, "virtual_ip": "192.168.122.2/24", "ingress_mode": "haproxy-protocol", "placement": "compute-0 compute-1 compute-2 ", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 09:41:32 compute-0 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool create", "pool": ".nfs", "yes_i_really_mean_it": true}]: dispatch
Dec 06 09:41:32 compute-0 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:41:32 compute-0 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:41:32 compute-0 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch
Dec 06 09:41:32 compute-0 ceph-mon[74327]: Adjusting osd_memory_target on compute-0 to 128.0M
Dec 06 09:41:32 compute-0 ceph-mon[74327]: Unable to set osd_memory_target on compute-0 to 134217728: error parsing value: Value '134217728' is below minimum 939524096
Dec 06 09:41:32 compute-0 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:41:32 compute-0 sudo[91148]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:41:32 compute-0 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:41:32 compute-0 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch
Dec 06 09:41:32 compute-0 ceph-mon[74327]: Adjusting osd_memory_target on compute-1 to 127.9M
Dec 06 09:41:32 compute-0 ceph-mon[74327]: Unable to set osd_memory_target on compute-1 to 134211993: error parsing value: Value '134211993' is below minimum 939524096
Dec 06 09:41:32 compute-0 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 09:41:32 compute-0 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 09:41:32 compute-0 ceph-mon[74327]: Updating compute-0:/etc/ceph/ceph.conf
Dec 06 09:41:32 compute-0 ceph-mon[74327]: Updating compute-1:/etc/ceph/ceph.conf
Dec 06 09:41:32 compute-0 ceph-mon[74327]: Updating compute-2:/etc/ceph/ceph.conf
Dec 06 09:41:33 compute-0 sudo[91148]: pam_unix(sudo:session): session closed for user root
Dec 06 09:41:33 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool create", "pool": ".nfs", "yes_i_really_mean_it": true}]': finished
Dec 06 09:41:33 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e40 e40: 3 total, 3 up, 3 in
Dec 06 09:41:33 compute-0 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e40: 3 total, 3 up, 3 in
Dec 06 09:41:33 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable", "pool": ".nfs", "app": "nfs"} v 0)
Dec 06 09:41:33 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool application enable", "pool": ".nfs", "app": "nfs"}]: dispatch
Dec 06 09:41:33 compute-0 ceph-mgr[74618]: [cephadm INFO cephadm.serve] Updating compute-2:/var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/config/ceph.conf
Dec 06 09:41:33 compute-0 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Updating compute-2:/var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/config/ceph.conf
Dec 06 09:41:33 compute-0 sudo[91173]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-5ecd3f74-dade-5fc4-92ce-8950ae424258/etc/ceph/ceph.conf.new
Dec 06 09:41:33 compute-0 sudo[91173]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:41:33 compute-0 sudo[91173]: pam_unix(sudo:session): session closed for user root
Dec 06 09:41:33 compute-0 sudo[91198]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-5ecd3f74-dade-5fc4-92ce-8950ae424258/etc/ceph/ceph.conf.new /etc/ceph/ceph.conf
Dec 06 09:41:33 compute-0 sudo[91198]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:41:33 compute-0 sudo[91198]: pam_unix(sudo:session): session closed for user root
Dec 06 09:41:33 compute-0 ceph-mgr[74618]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/config/ceph.conf
Dec 06 09:41:33 compute-0 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/config/ceph.conf
Dec 06 09:41:33 compute-0 sudo[91223]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/config
Dec 06 09:41:33 compute-0 sudo[91223]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:41:33 compute-0 sudo[91223]: pam_unix(sudo:session): session closed for user root
Dec 06 09:41:33 compute-0 sudo[91248]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-5ecd3f74-dade-5fc4-92ce-8950ae424258/var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/config
Dec 06 09:41:33 compute-0 sudo[91248]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:41:33 compute-0 sudo[91248]: pam_unix(sudo:session): session closed for user root
Dec 06 09:41:33 compute-0 ceph-mgr[74618]: [cephadm INFO cephadm.serve] Updating compute-1:/var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/config/ceph.conf
Dec 06 09:41:33 compute-0 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Updating compute-1:/var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/config/ceph.conf
Dec 06 09:41:33 compute-0 sudo[91273]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-5ecd3f74-dade-5fc4-92ce-8950ae424258/var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/config/ceph.conf.new
Dec 06 09:41:33 compute-0 sudo[91273]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:41:33 compute-0 sudo[91273]: pam_unix(sudo:session): session closed for user root
Dec 06 09:41:33 compute-0 ceph-mon[74327]: log_channel(cluster) log [DBG] : mgrmap e28: compute-0.qhdjwa(active, since 4s), standbys: compute-1.sauzid, compute-2.oazbvn
Dec 06 09:41:33 compute-0 sudo[91298]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-5ecd3f74-dade-5fc4-92ce-8950ae424258
Dec 06 09:41:33 compute-0 sudo[91298]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:41:33 compute-0 sudo[91298]: pam_unix(sudo:session): session closed for user root
Dec 06 09:41:33 compute-0 sudo[91323]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-5ecd3f74-dade-5fc4-92ce-8950ae424258/var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/config/ceph.conf.new
Dec 06 09:41:33 compute-0 sudo[91323]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:41:33 compute-0 sudo[91323]: pam_unix(sudo:session): session closed for user root
Dec 06 09:41:33 compute-0 sudo[91371]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-5ecd3f74-dade-5fc4-92ce-8950ae424258/var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/config/ceph.conf.new
Dec 06 09:41:33 compute-0 sudo[91371]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:41:33 compute-0 sudo[91371]: pam_unix(sudo:session): session closed for user root
Dec 06 09:41:33 compute-0 ceph-mgr[74618]: [cephadm INFO cephadm.serve] Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Dec 06 09:41:33 compute-0 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Dec 06 09:41:33 compute-0 sudo[91396]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-5ecd3f74-dade-5fc4-92ce-8950ae424258/var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/config/ceph.conf.new
Dec 06 09:41:33 compute-0 sudo[91396]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:41:33 compute-0 sudo[91396]: pam_unix(sudo:session): session closed for user root
Dec 06 09:41:33 compute-0 sudo[91421]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-5ecd3f74-dade-5fc4-92ce-8950ae424258/var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/config/ceph.conf.new /var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/config/ceph.conf
Dec 06 09:41:33 compute-0 sudo[91421]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:41:33 compute-0 sudo[91421]: pam_unix(sudo:session): session closed for user root
Dec 06 09:41:33 compute-0 ceph-mgr[74618]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Dec 06 09:41:33 compute-0 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Dec 06 09:41:33 compute-0 sudo[91446]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Dec 06 09:41:33 compute-0 sudo[91446]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:41:33 compute-0 sudo[91446]: pam_unix(sudo:session): session closed for user root
Dec 06 09:41:33 compute-0 sudo[91471]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-5ecd3f74-dade-5fc4-92ce-8950ae424258/etc/ceph
Dec 06 09:41:33 compute-0 sudo[91471]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:41:33 compute-0 sudo[91471]: pam_unix(sudo:session): session closed for user root
Dec 06 09:41:33 compute-0 sudo[91496]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-5ecd3f74-dade-5fc4-92ce-8950ae424258/etc/ceph/ceph.client.admin.keyring.new
Dec 06 09:41:33 compute-0 sudo[91496]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:41:33 compute-0 sudo[91496]: pam_unix(sudo:session): session closed for user root
Dec 06 09:41:33 compute-0 sudo[91521]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-5ecd3f74-dade-5fc4-92ce-8950ae424258
Dec 06 09:41:33 compute-0 sudo[91521]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:41:33 compute-0 sudo[91521]: pam_unix(sudo:session): session closed for user root
Dec 06 09:41:34 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e40 do_prune osdmap full prune enabled
Dec 06 09:41:34 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool application enable", "pool": ".nfs", "app": "nfs"}]': finished
Dec 06 09:41:34 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e41 e41: 3 total, 3 up, 3 in
Dec 06 09:41:34 compute-0 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e41: 3 total, 3 up, 3 in
Dec 06 09:41:34 compute-0 ceph-mon[74327]: pgmap v6: 131 pgs: 131 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 06 09:41:34 compute-0 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool create", "pool": ".nfs", "yes_i_really_mean_it": true}]': finished
Dec 06 09:41:34 compute-0 ceph-mon[74327]: osdmap e40: 3 total, 3 up, 3 in
Dec 06 09:41:34 compute-0 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool application enable", "pool": ".nfs", "app": "nfs"}]: dispatch
Dec 06 09:41:34 compute-0 ceph-mon[74327]: Updating compute-2:/var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/config/ceph.conf
Dec 06 09:41:34 compute-0 ceph-mon[74327]: Updating compute-0:/var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/config/ceph.conf
Dec 06 09:41:34 compute-0 ceph-mon[74327]: Updating compute-1:/var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/config/ceph.conf
Dec 06 09:41:34 compute-0 ceph-mon[74327]: mgrmap e28: compute-0.qhdjwa(active, since 4s), standbys: compute-1.sauzid, compute-2.oazbvn
Dec 06 09:41:34 compute-0 ceph-mon[74327]: Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Dec 06 09:41:34 compute-0 ceph-mon[74327]: Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Dec 06 09:41:34 compute-0 sudo[91546]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-5ecd3f74-dade-5fc4-92ce-8950ae424258/etc/ceph/ceph.client.admin.keyring.new
Dec 06 09:41:34 compute-0 sudo[91546]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:41:34 compute-0 sudo[91546]: pam_unix(sudo:session): session closed for user root
Dec 06 09:41:34 compute-0 ceph-mgr[74618]: [cephadm INFO cephadm.serve] Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Dec 06 09:41:34 compute-0 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Dec 06 09:41:34 compute-0 ceph-mgr[74618]: [nfs INFO nfs.cluster] Created empty object:conf-nfs.cephfs
Dec 06 09:41:34 compute-0 ceph-mgr[74618]: [cephadm INFO root] Saving service nfs.cephfs spec with placement compute-0;compute-1;compute-2
Dec 06 09:41:34 compute-0 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Saving service nfs.cephfs spec with placement compute-0;compute-1;compute-2
Dec 06 09:41:34 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Dec 06 09:41:34 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:41:34 compute-0 ceph-mgr[74618]: [cephadm INFO root] Saving service ingress.nfs.cephfs spec with placement compute-0;compute-1;compute-2
Dec 06 09:41:34 compute-0 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Saving service ingress.nfs.cephfs spec with placement compute-0;compute-1;compute-2
Dec 06 09:41:34 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.nfs.cephfs}] v 0)
Dec 06 09:41:34 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:41:34 compute-0 ceph-mgr[74618]: [cephadm INFO cephadm.serve] Updating compute-2:/var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/config/ceph.client.admin.keyring
Dec 06 09:41:34 compute-0 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Updating compute-2:/var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/config/ceph.client.admin.keyring
Dec 06 09:41:34 compute-0 systemd[1]: libpod-632a78f140b1ef82ead610dc02e0059d3299783968b922c43049035ae8b701e1.scope: Deactivated successfully.
Dec 06 09:41:34 compute-0 podman[91606]: 2025-12-06 09:41:34.177503074 +0000 UTC m=+0.024787735 container died 632a78f140b1ef82ead610dc02e0059d3299783968b922c43049035ae8b701e1 (image=quay.io/ceph/ceph:v19, name=infallible_jang, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 09:41:34 compute-0 sudo[91605]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-5ecd3f74-dade-5fc4-92ce-8950ae424258/etc/ceph/ceph.client.admin.keyring.new
Dec 06 09:41:34 compute-0 sudo[91605]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:41:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-ec05436ebdb698f5e908365bdbba6fad3ff748ad35383fd102d8ab48ac0daa5b-merged.mount: Deactivated successfully.
Dec 06 09:41:34 compute-0 sudo[91605]: pam_unix(sudo:session): session closed for user root
Dec 06 09:41:34 compute-0 podman[91606]: 2025-12-06 09:41:34.217978273 +0000 UTC m=+0.065262914 container remove 632a78f140b1ef82ead610dc02e0059d3299783968b922c43049035ae8b701e1 (image=quay.io/ceph/ceph:v19, name=infallible_jang, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid)
Dec 06 09:41:34 compute-0 systemd[1]: libpod-conmon-632a78f140b1ef82ead610dc02e0059d3299783968b922c43049035ae8b701e1.scope: Deactivated successfully.
Dec 06 09:41:34 compute-0 sudo[90866]: pam_unix(sudo:session): session closed for user root
Dec 06 09:41:34 compute-0 sudo[91643]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-5ecd3f74-dade-5fc4-92ce-8950ae424258/etc/ceph/ceph.client.admin.keyring.new
Dec 06 09:41:34 compute-0 sudo[91643]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:41:34 compute-0 sudo[91643]: pam_unix(sudo:session): session closed for user root
Dec 06 09:41:34 compute-0 sudo[91668]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-5ecd3f74-dade-5fc4-92ce-8950ae424258/etc/ceph/ceph.client.admin.keyring.new /etc/ceph/ceph.client.admin.keyring
Dec 06 09:41:34 compute-0 sudo[91668]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:41:34 compute-0 sudo[91668]: pam_unix(sudo:session): session closed for user root
Dec 06 09:41:34 compute-0 ceph-mgr[74618]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/config/ceph.client.admin.keyring
Dec 06 09:41:34 compute-0 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/config/ceph.client.admin.keyring
Dec 06 09:41:34 compute-0 sudo[91693]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/config
Dec 06 09:41:34 compute-0 sudo[91693]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:41:34 compute-0 sudo[91693]: pam_unix(sudo:session): session closed for user root
Dec 06 09:41:34 compute-0 sudo[91718]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-5ecd3f74-dade-5fc4-92ce-8950ae424258/var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/config
Dec 06 09:41:34 compute-0 sudo[91718]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:41:34 compute-0 sudo[91718]: pam_unix(sudo:session): session closed for user root
Dec 06 09:41:34 compute-0 sudo[91743]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-5ecd3f74-dade-5fc4-92ce-8950ae424258/var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/config/ceph.client.admin.keyring.new
Dec 06 09:41:34 compute-0 sudo[91743]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:41:34 compute-0 sudo[91743]: pam_unix(sudo:session): session closed for user root
Dec 06 09:41:34 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec 06 09:41:34 compute-0 sudo[91768]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-5ecd3f74-dade-5fc4-92ce-8950ae424258
Dec 06 09:41:34 compute-0 sudo[91768]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:41:34 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:41:34 compute-0 sudo[91768]: pam_unix(sudo:session): session closed for user root
Dec 06 09:41:34 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec 06 09:41:34 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:41:34 compute-0 sudo[91819]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-5ecd3f74-dade-5fc4-92ce-8950ae424258/var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/config/ceph.client.admin.keyring.new
Dec 06 09:41:34 compute-0 sudo[91819]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:41:34 compute-0 sudo[91819]: pam_unix(sudo:session): session closed for user root
Dec 06 09:41:34 compute-0 ceph-mgr[74618]: [cephadm INFO cephadm.serve] Updating compute-1:/var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/config/ceph.client.admin.keyring
Dec 06 09:41:34 compute-0 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Updating compute-1:/var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/config/ceph.client.admin.keyring
Dec 06 09:41:34 compute-0 sudo[91916]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-byglkebgxwvoqszodfujkcfrrdpsqsdg ; /usr/bin/python3'
Dec 06 09:41:34 compute-0 sudo[91916]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:41:34 compute-0 sudo[91917]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-5ecd3f74-dade-5fc4-92ce-8950ae424258/var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/config/ceph.client.admin.keyring.new
Dec 06 09:41:34 compute-0 sudo[91917]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:41:34 compute-0 sudo[91917]: pam_unix(sudo:session): session closed for user root
Dec 06 09:41:34 compute-0 sudo[91944]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-5ecd3f74-dade-5fc4-92ce-8950ae424258/var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/config/ceph.client.admin.keyring.new
Dec 06 09:41:34 compute-0 sudo[91944]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:41:34 compute-0 sudo[91944]: pam_unix(sudo:session): session closed for user root
Dec 06 09:41:34 compute-0 python3[91931]: ansible-ansible.legacy.stat Invoked with path=/etc/ceph/ceph.client.openstack.keyring follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 06 09:41:34 compute-0 sudo[91916]: pam_unix(sudo:session): session closed for user root
Dec 06 09:41:34 compute-0 sudo[91969]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-5ecd3f74-dade-5fc4-92ce-8950ae424258/var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/config/ceph.client.admin.keyring.new /var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/config/ceph.client.admin.keyring
Dec 06 09:41:34 compute-0 sudo[91969]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:41:34 compute-0 sudo[91969]: pam_unix(sudo:session): session closed for user root
Dec 06 09:41:34 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 06 09:41:34 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:41:34 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 06 09:41:34 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:41:34 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v9: 132 pgs: 132 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 35 KiB/s rd, 0 B/s wr, 14 op/s
Dec 06 09:41:35 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e41 do_prune osdmap full prune enabled
Dec 06 09:41:35 compute-0 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool application enable", "pool": ".nfs", "app": "nfs"}]': finished
Dec 06 09:41:35 compute-0 ceph-mon[74327]: osdmap e41: 3 total, 3 up, 3 in
Dec 06 09:41:35 compute-0 ceph-mon[74327]: Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Dec 06 09:41:35 compute-0 ceph-mon[74327]: Saving service nfs.cephfs spec with placement compute-0;compute-1;compute-2
Dec 06 09:41:35 compute-0 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:41:35 compute-0 ceph-mon[74327]: Saving service ingress.nfs.cephfs spec with placement compute-0;compute-1;compute-2
Dec 06 09:41:35 compute-0 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:41:35 compute-0 ceph-mon[74327]: Updating compute-2:/var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/config/ceph.client.admin.keyring
Dec 06 09:41:35 compute-0 ceph-mon[74327]: Updating compute-0:/var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/config/ceph.client.admin.keyring
Dec 06 09:41:35 compute-0 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:41:35 compute-0 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:41:35 compute-0 ceph-mon[74327]: Updating compute-1:/var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/config/ceph.client.admin.keyring
Dec 06 09:41:35 compute-0 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:41:35 compute-0 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:41:35 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e42 e42: 3 total, 3 up, 3 in
Dec 06 09:41:35 compute-0 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e42: 3 total, 3 up, 3 in
Dec 06 09:41:35 compute-0 sudo[92064]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-icowjvdximkhxvdmimwmrjoktyucucru ; /usr/bin/python3'
Dec 06 09:41:35 compute-0 sudo[92064]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:41:35 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e42 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 09:41:35 compute-0 python3[92066]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765014094.6265876-37374-18139629387268/source dest=/etc/ceph/ceph.client.openstack.keyring mode=0644 force=True owner=167 group=167 follow=False _original_basename=ceph_key.j2 checksum=944de880f37676f80f6e04a4864888bf3f7decbf backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:41:35 compute-0 sudo[92064]: pam_unix(sudo:session): session closed for user root
Dec 06 09:41:35 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec 06 09:41:35 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:41:35 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec 06 09:41:35 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:41:35 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 06 09:41:35 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:41:35 compute-0 ceph-mgr[74618]: [progress INFO root] update: starting ev 718093b7-ae24-4ca4-868b-ad896e0c544f (Updating node-exporter deployment (+3 -> 3))
Dec 06 09:41:35 compute-0 ceph-mgr[74618]: [cephadm INFO cephadm.serve] Deploying daemon node-exporter.compute-0 on compute-0
Dec 06 09:41:35 compute-0 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Deploying daemon node-exporter.compute-0 on compute-0
Dec 06 09:41:35 compute-0 sudo[92081]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 09:41:35 compute-0 sudo[92081]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:41:35 compute-0 sudo[92081]: pam_unix(sudo:session): session closed for user root
Dec 06 09:41:35 compute-0 sudo[92116]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/prometheus/node-exporter:v1.7.0 --timeout 895 _orch deploy --fsid 5ecd3f74-dade-5fc4-92ce-8950ae424258
Dec 06 09:41:35 compute-0 sudo[92116]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:41:35 compute-0 ceph-mon[74327]: log_channel(cluster) log [DBG] : mgrmap e29: compute-0.qhdjwa(active, since 6s), standbys: compute-1.sauzid, compute-2.oazbvn
Dec 06 09:41:35 compute-0 sudo[92164]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-amqiqyogwiwkggowenmtwesdbdojdjee ; /usr/bin/python3'
Dec 06 09:41:35 compute-0 sudo[92164]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:41:35 compute-0 python3[92166]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 5ecd3f74-dade-5fc4-92ce-8950ae424258 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   auth import -i /etc/ceph/ceph.client.openstack.keyring _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 09:41:35 compute-0 podman[92190]: 2025-12-06 09:41:35.946541431 +0000 UTC m=+0.056939672 container create 7d9adae41c89b431bb3048a31230bb45dbebd0e6a416b2306069ee9bb7e9bdc7 (image=quay.io/ceph/ceph:v19, name=eager_gauss, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 09:41:35 compute-0 systemd[1]: Started libpod-conmon-7d9adae41c89b431bb3048a31230bb45dbebd0e6a416b2306069ee9bb7e9bdc7.scope.
Dec 06 09:41:35 compute-0 systemd[1]: Reloading.
Dec 06 09:41:36 compute-0 podman[92190]: 2025-12-06 09:41:35.923903045 +0000 UTC m=+0.034301306 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 06 09:41:36 compute-0 systemd-rc-local-generator[92254]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 09:41:36 compute-0 systemd-sysv-generator[92257]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 09:41:36 compute-0 ceph-mon[74327]: pgmap v9: 132 pgs: 132 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 35 KiB/s rd, 0 B/s wr, 14 op/s
Dec 06 09:41:36 compute-0 ceph-mon[74327]: osdmap e42: 3 total, 3 up, 3 in
Dec 06 09:41:36 compute-0 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:41:36 compute-0 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:41:36 compute-0 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:41:36 compute-0 ceph-mon[74327]: Deploying daemon node-exporter.compute-0 on compute-0
Dec 06 09:41:36 compute-0 ceph-mon[74327]: mgrmap e29: compute-0.qhdjwa(active, since 6s), standbys: compute-1.sauzid, compute-2.oazbvn
Dec 06 09:41:36 compute-0 systemd[1]: Started libcrun container.
Dec 06 09:41:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/590daf0929948d12f51863f1bed825aa95e20812ca7a613eadc2e28f1ca041df/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 09:41:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/590daf0929948d12f51863f1bed825aa95e20812ca7a613eadc2e28f1ca041df/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 09:41:36 compute-0 podman[92190]: 2025-12-06 09:41:36.248914198 +0000 UTC m=+0.359312459 container init 7d9adae41c89b431bb3048a31230bb45dbebd0e6a416b2306069ee9bb7e9bdc7 (image=quay.io/ceph/ceph:v19, name=eager_gauss, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec 06 09:41:36 compute-0 podman[92190]: 2025-12-06 09:41:36.257713426 +0000 UTC m=+0.368111667 container start 7d9adae41c89b431bb3048a31230bb45dbebd0e6a416b2306069ee9bb7e9bdc7 (image=quay.io/ceph/ceph:v19, name=eager_gauss, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 06 09:41:36 compute-0 podman[92190]: 2025-12-06 09:41:36.26130255 +0000 UTC m=+0.371700801 container attach 7d9adae41c89b431bb3048a31230bb45dbebd0e6a416b2306069ee9bb7e9bdc7 (image=quay.io/ceph/ceph:v19, name=eager_gauss, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2)
Dec 06 09:41:36 compute-0 systemd[1]: Reloading.
Dec 06 09:41:36 compute-0 systemd-sysv-generator[92298]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 09:41:36 compute-0 systemd-rc-local-generator[92295]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 09:41:36 compute-0 systemd[1]: Starting Ceph node-exporter.compute-0 for 5ecd3f74-dade-5fc4-92ce-8950ae424258...
Dec 06 09:41:36 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth import"} v 0)
Dec 06 09:41:36 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/351927990' entity='client.admin' cmd=[{"prefix": "auth import"}]: dispatch
Dec 06 09:41:36 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/351927990' entity='client.admin' cmd='[{"prefix": "auth import"}]': finished
Dec 06 09:41:36 compute-0 systemd[1]: libpod-7d9adae41c89b431bb3048a31230bb45dbebd0e6a416b2306069ee9bb7e9bdc7.scope: Deactivated successfully.
Dec 06 09:41:36 compute-0 podman[92190]: 2025-12-06 09:41:36.7450313 +0000 UTC m=+0.855429581 container died 7d9adae41c89b431bb3048a31230bb45dbebd0e6a416b2306069ee9bb7e9bdc7 (image=quay.io/ceph/ceph:v19, name=eager_gauss, CEPH_REF=squid, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 09:41:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-590daf0929948d12f51863f1bed825aa95e20812ca7a613eadc2e28f1ca041df-merged.mount: Deactivated successfully.
Dec 06 09:41:36 compute-0 podman[92190]: 2025-12-06 09:41:36.791920262 +0000 UTC m=+0.902318523 container remove 7d9adae41c89b431bb3048a31230bb45dbebd0e6a416b2306069ee9bb7e9bdc7 (image=quay.io/ceph/ceph:v19, name=eager_gauss, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325)
Dec 06 09:41:36 compute-0 systemd[1]: libpod-conmon-7d9adae41c89b431bb3048a31230bb45dbebd0e6a416b2306069ee9bb7e9bdc7.scope: Deactivated successfully.
Dec 06 09:41:36 compute-0 bash[92371]: Trying to pull quay.io/prometheus/node-exporter:v1.7.0...
Dec 06 09:41:36 compute-0 sudo[92164]: pam_unix(sudo:session): session closed for user root
Dec 06 09:41:36 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v11: 132 pgs: 132 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 29 KiB/s rd, 0 B/s wr, 12 op/s
Dec 06 09:41:37 compute-0 bash[92371]: Getting image source signatures
Dec 06 09:41:37 compute-0 bash[92371]: Copying blob sha256:2abcce694348cd2c949c0e98a7400ebdfd8341021bcf6b541bc72033ce982510
Dec 06 09:41:37 compute-0 bash[92371]: Copying blob sha256:324153f2810a9927fcce320af9e4e291e0b6e805cbdd1f338386c756b9defa24
Dec 06 09:41:37 compute-0 bash[92371]: Copying blob sha256:455fd88e5221bc1e278ef2d059cd70e4df99a24e5af050ede621534276f6cf9a
Dec 06 09:41:37 compute-0 ceph-mon[74327]: from='client.? 192.168.122.100:0/351927990' entity='client.admin' cmd=[{"prefix": "auth import"}]: dispatch
Dec 06 09:41:37 compute-0 ceph-mon[74327]: from='client.? 192.168.122.100:0/351927990' entity='client.admin' cmd='[{"prefix": "auth import"}]': finished
Dec 06 09:41:37 compute-0 sudo[92462]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nasziyqtjlevosiobrxnajtjrkqkaetu ; /usr/bin/python3'
Dec 06 09:41:37 compute-0 sudo[92462]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:41:37 compute-0 python3[92464]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 5ecd3f74-dade-5fc4-92ce-8950ae424258 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .monmap.num_mons _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 09:41:37 compute-0 bash[92371]: Copying config sha256:72c9c208898624938c9e4183d6686ea4a5fd3f912bc29bc3f00147924c521a3e
Dec 06 09:41:37 compute-0 bash[92371]: Writing manifest to image destination
Dec 06 09:41:37 compute-0 podman[92466]: 2025-12-06 09:41:37.757621796 +0000 UTC m=+0.145972405 container create 74a92804fa4378c049c895eb92ab3dca796c963bbc659773e283d6526f36f4c8 (image=quay.io/ceph/ceph:v19, name=adoring_haibt, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 09:41:37 compute-0 podman[92371]: 2025-12-06 09:41:37.789635938 +0000 UTC m=+1.036713850 container create 43e1f8986e07f4e6b99d6750812eff4d21013fd9f773d9f6d6eef82549df3333 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 06 09:41:37 compute-0 systemd[1]: Started libpod-conmon-74a92804fa4378c049c895eb92ab3dca796c963bbc659773e283d6526f36f4c8.scope.
Dec 06 09:41:37 compute-0 podman[92371]: 2025-12-06 09:41:37.77323374 +0000 UTC m=+1.020311682 image pull 72c9c208898624938c9e4183d6686ea4a5fd3f912bc29bc3f00147924c521a3e quay.io/prometheus/node-exporter:v1.7.0
Dec 06 09:41:37 compute-0 systemd[1]: Started libcrun container.
Dec 06 09:41:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ddccd74815bd653ea0db1a678ef9c01e55697a06a91ab1f9e3536113257628cf/merged/etc/node-exporter supports timestamps until 2038 (0x7fffffff)
Dec 06 09:41:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/099bd82de2037c329caa3e9388cd4eebd587128552f3c7ab50078257366b7227/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 09:41:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/099bd82de2037c329caa3e9388cd4eebd587128552f3c7ab50078257366b7227/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 09:41:37 compute-0 podman[92371]: 2025-12-06 09:41:37.830598803 +0000 UTC m=+1.077676735 container init 43e1f8986e07f4e6b99d6750812eff4d21013fd9f773d9f6d6eef82549df3333 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 06 09:41:37 compute-0 podman[92466]: 2025-12-06 09:41:37.836641953 +0000 UTC m=+0.224992592 container init 74a92804fa4378c049c895eb92ab3dca796c963bbc659773e283d6526f36f4c8 (image=quay.io/ceph/ceph:v19, name=adoring_haibt, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 09:41:37 compute-0 podman[92371]: 2025-12-06 09:41:37.836945784 +0000 UTC m=+1.084023696 container start 43e1f8986e07f4e6b99d6750812eff4d21013fd9f773d9f6d6eef82549df3333 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 06 09:41:37 compute-0 podman[92466]: 2025-12-06 09:41:37.741455546 +0000 UTC m=+0.129806185 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 06 09:41:37 compute-0 bash[92371]: 43e1f8986e07f4e6b99d6750812eff4d21013fd9f773d9f6d6eef82549df3333
Dec 06 09:41:37 compute-0 podman[92466]: 2025-12-06 09:41:37.842942163 +0000 UTC m=+0.231292782 container start 74a92804fa4378c049c895eb92ab3dca796c963bbc659773e283d6526f36f4c8 (image=quay.io/ceph/ceph:v19, name=adoring_haibt, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, OSD_FLAVOR=default)
Dec 06 09:41:37 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-node-exporter-compute-0[92499]: ts=2025-12-06T09:41:37.845Z caller=node_exporter.go:192 level=info msg="Starting node_exporter" version="(version=1.7.0, branch=HEAD, revision=7333465abf9efba81876303bb57e6fadb946041b)"
Dec 06 09:41:37 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-node-exporter-compute-0[92499]: ts=2025-12-06T09:41:37.845Z caller=node_exporter.go:193 level=info msg="Build context" build_context="(go=go1.21.4, platform=linux/amd64, user=root@35918982f6d8, date=20231112-23:53:35, tags=netgo osusergo static_build)"
Dec 06 09:41:37 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-node-exporter-compute-0[92499]: ts=2025-12-06T09:41:37.846Z caller=diskstats_common.go:111 level=info collector=diskstats msg="Parsed flag --collector.diskstats.device-exclude" flag=^(ram|loop|fd|(h|s|v|xv)d[a-z]|nvme\d+n\d+p)\d+$
Dec 06 09:41:37 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-node-exporter-compute-0[92499]: ts=2025-12-06T09:41:37.846Z caller=diskstats_linux.go:265 level=error collector=diskstats msg="Failed to open directory, disabling udev device properties" path=/run/udev/data
Dec 06 09:41:37 compute-0 systemd[1]: Started Ceph node-exporter.compute-0 for 5ecd3f74-dade-5fc4-92ce-8950ae424258.
Dec 06 09:41:37 compute-0 podman[92466]: 2025-12-06 09:41:37.846713412 +0000 UTC m=+0.235064041 container attach 74a92804fa4378c049c895eb92ab3dca796c963bbc659773e283d6526f36f4c8 (image=quay.io/ceph/ceph:v19, name=adoring_haibt, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 06 09:41:37 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-node-exporter-compute-0[92499]: ts=2025-12-06T09:41:37.850Z caller=filesystem_common.go:111 level=info collector=filesystem msg="Parsed flag --collector.filesystem.mount-points-exclude" flag=^/(dev|proc|run/credentials/.+|sys|var/lib/docker/.+|var/lib/containers/storage/.+)($|/)
Dec 06 09:41:37 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-node-exporter-compute-0[92499]: ts=2025-12-06T09:41:37.850Z caller=filesystem_common.go:113 level=info collector=filesystem msg="Parsed flag --collector.filesystem.fs-types-exclude" flag=^(autofs|binfmt_misc|bpf|cgroup2?|configfs|debugfs|devpts|devtmpfs|fusectl|hugetlbfs|iso9660|mqueue|nsfs|overlay|proc|procfs|pstore|rpc_pipefs|securityfs|selinuxfs|squashfs|sysfs|tracefs)$
Dec 06 09:41:37 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-node-exporter-compute-0[92499]: ts=2025-12-06T09:41:37.850Z caller=node_exporter.go:110 level=info msg="Enabled collectors"
Dec 06 09:41:37 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-node-exporter-compute-0[92499]: ts=2025-12-06T09:41:37.850Z caller=node_exporter.go:117 level=info collector=arp
Dec 06 09:41:37 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-node-exporter-compute-0[92499]: ts=2025-12-06T09:41:37.850Z caller=node_exporter.go:117 level=info collector=bcache
Dec 06 09:41:37 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-node-exporter-compute-0[92499]: ts=2025-12-06T09:41:37.850Z caller=node_exporter.go:117 level=info collector=bonding
Dec 06 09:41:37 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-node-exporter-compute-0[92499]: ts=2025-12-06T09:41:37.851Z caller=node_exporter.go:117 level=info collector=btrfs
Dec 06 09:41:37 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-node-exporter-compute-0[92499]: ts=2025-12-06T09:41:37.851Z caller=node_exporter.go:117 level=info collector=conntrack
Dec 06 09:41:37 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-node-exporter-compute-0[92499]: ts=2025-12-06T09:41:37.851Z caller=node_exporter.go:117 level=info collector=cpu
Dec 06 09:41:37 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-node-exporter-compute-0[92499]: ts=2025-12-06T09:41:37.851Z caller=node_exporter.go:117 level=info collector=cpufreq
Dec 06 09:41:37 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-node-exporter-compute-0[92499]: ts=2025-12-06T09:41:37.851Z caller=node_exporter.go:117 level=info collector=diskstats
Dec 06 09:41:37 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-node-exporter-compute-0[92499]: ts=2025-12-06T09:41:37.851Z caller=node_exporter.go:117 level=info collector=dmi
Dec 06 09:41:37 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-node-exporter-compute-0[92499]: ts=2025-12-06T09:41:37.851Z caller=node_exporter.go:117 level=info collector=edac
Dec 06 09:41:37 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-node-exporter-compute-0[92499]: ts=2025-12-06T09:41:37.851Z caller=node_exporter.go:117 level=info collector=entropy
Dec 06 09:41:37 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-node-exporter-compute-0[92499]: ts=2025-12-06T09:41:37.851Z caller=node_exporter.go:117 level=info collector=fibrechannel
Dec 06 09:41:37 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-node-exporter-compute-0[92499]: ts=2025-12-06T09:41:37.851Z caller=node_exporter.go:117 level=info collector=filefd
Dec 06 09:41:37 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-node-exporter-compute-0[92499]: ts=2025-12-06T09:41:37.851Z caller=node_exporter.go:117 level=info collector=filesystem
Dec 06 09:41:37 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-node-exporter-compute-0[92499]: ts=2025-12-06T09:41:37.851Z caller=node_exporter.go:117 level=info collector=hwmon
Dec 06 09:41:37 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-node-exporter-compute-0[92499]: ts=2025-12-06T09:41:37.851Z caller=node_exporter.go:117 level=info collector=infiniband
Dec 06 09:41:37 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-node-exporter-compute-0[92499]: ts=2025-12-06T09:41:37.851Z caller=node_exporter.go:117 level=info collector=ipvs
Dec 06 09:41:37 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-node-exporter-compute-0[92499]: ts=2025-12-06T09:41:37.851Z caller=node_exporter.go:117 level=info collector=loadavg
Dec 06 09:41:37 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-node-exporter-compute-0[92499]: ts=2025-12-06T09:41:37.851Z caller=node_exporter.go:117 level=info collector=mdadm
Dec 06 09:41:37 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-node-exporter-compute-0[92499]: ts=2025-12-06T09:41:37.851Z caller=node_exporter.go:117 level=info collector=meminfo
Dec 06 09:41:37 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-node-exporter-compute-0[92499]: ts=2025-12-06T09:41:37.851Z caller=node_exporter.go:117 level=info collector=netclass
Dec 06 09:41:37 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-node-exporter-compute-0[92499]: ts=2025-12-06T09:41:37.851Z caller=node_exporter.go:117 level=info collector=netdev
Dec 06 09:41:37 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-node-exporter-compute-0[92499]: ts=2025-12-06T09:41:37.851Z caller=node_exporter.go:117 level=info collector=netstat
Dec 06 09:41:37 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-node-exporter-compute-0[92499]: ts=2025-12-06T09:41:37.851Z caller=node_exporter.go:117 level=info collector=nfs
Dec 06 09:41:37 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-node-exporter-compute-0[92499]: ts=2025-12-06T09:41:37.851Z caller=node_exporter.go:117 level=info collector=nfsd
Dec 06 09:41:37 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-node-exporter-compute-0[92499]: ts=2025-12-06T09:41:37.852Z caller=node_exporter.go:117 level=info collector=nvme
Dec 06 09:41:37 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-node-exporter-compute-0[92499]: ts=2025-12-06T09:41:37.852Z caller=node_exporter.go:117 level=info collector=os
Dec 06 09:41:37 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-node-exporter-compute-0[92499]: ts=2025-12-06T09:41:37.852Z caller=node_exporter.go:117 level=info collector=powersupplyclass
Dec 06 09:41:37 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-node-exporter-compute-0[92499]: ts=2025-12-06T09:41:37.852Z caller=node_exporter.go:117 level=info collector=pressure
Dec 06 09:41:37 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-node-exporter-compute-0[92499]: ts=2025-12-06T09:41:37.852Z caller=node_exporter.go:117 level=info collector=rapl
Dec 06 09:41:37 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-node-exporter-compute-0[92499]: ts=2025-12-06T09:41:37.852Z caller=node_exporter.go:117 level=info collector=schedstat
Dec 06 09:41:37 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-node-exporter-compute-0[92499]: ts=2025-12-06T09:41:37.852Z caller=node_exporter.go:117 level=info collector=selinux
Dec 06 09:41:37 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-node-exporter-compute-0[92499]: ts=2025-12-06T09:41:37.852Z caller=node_exporter.go:117 level=info collector=sockstat
Dec 06 09:41:37 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-node-exporter-compute-0[92499]: ts=2025-12-06T09:41:37.852Z caller=node_exporter.go:117 level=info collector=softnet
Dec 06 09:41:37 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-node-exporter-compute-0[92499]: ts=2025-12-06T09:41:37.852Z caller=node_exporter.go:117 level=info collector=stat
Dec 06 09:41:37 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-node-exporter-compute-0[92499]: ts=2025-12-06T09:41:37.852Z caller=node_exporter.go:117 level=info collector=tapestats
Dec 06 09:41:37 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-node-exporter-compute-0[92499]: ts=2025-12-06T09:41:37.852Z caller=node_exporter.go:117 level=info collector=textfile
Dec 06 09:41:37 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-node-exporter-compute-0[92499]: ts=2025-12-06T09:41:37.852Z caller=node_exporter.go:117 level=info collector=thermal_zone
Dec 06 09:41:37 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-node-exporter-compute-0[92499]: ts=2025-12-06T09:41:37.852Z caller=node_exporter.go:117 level=info collector=time
Dec 06 09:41:37 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-node-exporter-compute-0[92499]: ts=2025-12-06T09:41:37.852Z caller=node_exporter.go:117 level=info collector=udp_queues
Dec 06 09:41:37 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-node-exporter-compute-0[92499]: ts=2025-12-06T09:41:37.852Z caller=node_exporter.go:117 level=info collector=uname
Dec 06 09:41:37 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-node-exporter-compute-0[92499]: ts=2025-12-06T09:41:37.852Z caller=node_exporter.go:117 level=info collector=vmstat
Dec 06 09:41:37 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-node-exporter-compute-0[92499]: ts=2025-12-06T09:41:37.852Z caller=node_exporter.go:117 level=info collector=xfs
Dec 06 09:41:37 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-node-exporter-compute-0[92499]: ts=2025-12-06T09:41:37.852Z caller=node_exporter.go:117 level=info collector=zfs
Dec 06 09:41:37 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-node-exporter-compute-0[92499]: ts=2025-12-06T09:41:37.853Z caller=tls_config.go:274 level=info msg="Listening on" address=[::]:9100
Dec 06 09:41:37 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-node-exporter-compute-0[92499]: ts=2025-12-06T09:41:37.853Z caller=tls_config.go:277 level=info msg="TLS is disabled." http2=false address=[::]:9100
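The node-exporter daemon above has finished registering its collectors and is serving plain HTTP on port 9100 (TLS disabled, no http2). A minimal liveness probe, assuming the check runs on the host itself; the two metric names shown correspond to the loadavg and meminfo collectors enabled above:

    import urllib.request

    # TLS is disabled per the log line above, so plain http:// is correct.
    with urllib.request.urlopen("http://localhost:9100/metrics", timeout=5) as resp:
        body = resp.read().decode("utf-8", errors="replace")

    # Show a couple of metrics from collectors the exporter just enabled.
    for line in body.splitlines():
        if line.startswith(("node_load1", "node_memory_MemAvailable_bytes")):
            print(line)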
Dec 06 09:41:37 compute-0 sudo[92116]: pam_unix(sudo:session): session closed for user root
Dec 06 09:41:37 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 06 09:41:38 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:41:38 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 06 09:41:38 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:41:38 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.node-exporter}] v 0)
Dec 06 09:41:38 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:41:38 compute-0 ceph-mgr[74618]: [cephadm INFO cephadm.serve] Deploying daemon node-exporter.compute-1 on compute-1
Dec 06 09:41:38 compute-0 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Deploying daemon node-exporter.compute-1 on compute-1
Dec 06 09:41:38 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "status", "format": "json"} v 0)
Dec 06 09:41:38 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/824556430' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Dec 06 09:41:38 compute-0 adoring_haibt[92495]: 
Dec 06 09:41:38 compute-0 adoring_haibt[92495]: {"fsid":"5ecd3f74-dade-5fc4-92ce-8950ae424258","health":{"status":"HEALTH_ERR","checks":{"MDS_ALL_DOWN":{"severity":"HEALTH_ERR","summary":{"message":"1 filesystem is offline","count":1},"muted":false},"MDS_UP_LESS_THAN_MAX":{"severity":"HEALTH_WARN","summary":{"message":"1 filesystem is online with fewer MDS than max_mds","count":1},"muted":false}},"mutes":[]},"election_epoch":14,"quorum":[0,1,2],"quorum_names":["compute-0","compute-2","compute-1"],"quorum_age":72,"monmap":{"epoch":3,"min_mon_release_name":"squid","num_mons":3},"osdmap":{"epoch":42,"num_osds":3,"num_up_osds":3,"osd_up_since":1765014071,"num_in_osds":3,"osd_in_since":1765014049,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"active+clean","count":132}],"num_pgs":132,"num_pools":8,"num_objects":3,"data_bytes":459280,"bytes_used":84193280,"bytes_avail":64327733248,"bytes_total":64411926528,"read_bytes_sec":29820,"write_bytes_sec":0,"read_op_per_sec":9,"write_op_per_sec":2},"fsmap":{"epoch":2,"btime":"2025-12-06T09:41:29:967825+0000","id":1,"up":0,"in":0,"max":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":2,"modules":["cephadm","dashboard","iostat","nfs","restful"],"services":{"dashboard":"http://192.168.122.100:8443/"}},"servicemap":{"epoch":4,"modified":"2025-12-06T09:40:50.551863+0000","services":{"mgr":{"daemons":{"summary":"","compute-1.sauzid":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"compute-2.oazbvn":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}}}},"mon":{"daemons":{"summary":"","compute-1":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"compute-2":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}}}}}},"progress_events":{"718093b7-ae24-4ca4-868b-ad896e0c544f":{"message":"Updating node-exporter deployment (+3 -> 3) (0s)\n      [............................] ","progress":0,"add_to_ceph_s":true}}}
Dec 06 09:41:38 compute-0 systemd[1]: libpod-74a92804fa4378c049c895eb92ab3dca796c963bbc659773e283d6526f36f4c8.scope: Deactivated successfully.
Dec 06 09:41:38 compute-0 podman[92466]: 2025-12-06 09:41:38.500563769 +0000 UTC m=+0.888914418 container died 74a92804fa4378c049c895eb92ab3dca796c963bbc659773e283d6526f36f4c8 (image=quay.io/ceph/ceph:v19, name=adoring_haibt, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 06 09:41:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-099bd82de2037c329caa3e9388cd4eebd587128552f3c7ab50078257366b7227-merged.mount: Deactivated successfully.
Dec 06 09:41:38 compute-0 podman[92466]: 2025-12-06 09:41:38.552562833 +0000 UTC m=+0.940913452 container remove 74a92804fa4378c049c895eb92ab3dca796c963bbc659773e283d6526f36f4c8 (image=quay.io/ceph/ceph:v19, name=adoring_haibt, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Dec 06 09:41:38 compute-0 systemd[1]: libpod-conmon-74a92804fa4378c049c895eb92ab3dca796c963bbc659773e283d6526f36f4c8.scope: Deactivated successfully.
Dec 06 09:41:38 compute-0 sudo[92462]: pam_unix(sudo:session): session closed for user root
Dec 06 09:41:38 compute-0 sudo[92564]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ehjmlxashcsrmfylzwokcpvxhhxihlwn ; /usr/bin/python3'
Dec 06 09:41:38 compute-0 sudo[92564]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:41:38 compute-0 ceph-mon[74327]: pgmap v11: 132 pgs: 132 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 29 KiB/s rd, 0 B/s wr, 12 op/s
Dec 06 09:41:38 compute-0 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:41:38 compute-0 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:41:38 compute-0 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:41:38 compute-0 ceph-mon[74327]: from='client.? 192.168.122.100:0/824556430' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Dec 06 09:41:38 compute-0 python3[92566]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 5ecd3f74-dade-5fc4-92ce-8950ae424258 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   mon dump --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
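This Ansible task shows the pattern the job uses for every cluster query: run the ceph CLI in a throwaway quay.io/ceph/ceph:v19 container with /etc/ceph bind-mounted, then parse the JSON it prints. A minimal sketch of the same call scripted directly (fsid, image, and paths copied from the command above; the subprocess wrapper itself is illustrative, not how Ansible runs it):

    import json
    import subprocess

    FSID = "5ecd3f74-dade-5fc4-92ce-8950ae424258"
    IMAGE = "quay.io/ceph/ceph:v19"

    def ceph(*args):
        """Run one ceph subcommand in a disposable container, parse its JSON."""
        cmd = [
            "podman", "run", "--rm", "--net=host", "--ipc=host",
            "--volume", "/etc/ceph:/etc/ceph:z",
            "--entrypoint", "ceph", IMAGE,
            "--fsid", FSID,
            "-c", "/etc/ceph/ceph.conf",
            "-k", "/etc/ceph/ceph.client.admin.keyring",
            *args, "--format", "json",
        ]
        out = subprocess.run(cmd, check=True, capture_output=True, text=True)
        return json.loads(out.stdout)

    monmap = ceph("mon", "dump")  # same query as the task above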
Dec 06 09:41:38 compute-0 podman[92567]: 2025-12-06 09:41:38.934979361 +0000 UTC m=+0.042707512 container create 65363f0b8b5c187ab1d88af7bd36af6ac9bfc7782852d5ad7f3ecf7fbbb33a0f (image=quay.io/ceph/ceph:v19, name=nostalgic_sutherland, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec 06 09:41:38 compute-0 systemd[1]: Started libpod-conmon-65363f0b8b5c187ab1d88af7bd36af6ac9bfc7782852d5ad7f3ecf7fbbb33a0f.scope.
Dec 06 09:41:38 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v12: 132 pgs: 132 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 29 KiB/s rd, 0 B/s wr, 12 op/s
Dec 06 09:41:39 compute-0 systemd[1]: Started libcrun container.
Dec 06 09:41:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6658d303c14ec7d0de36ee661266a09745ad324cdc6232a6935fddf50716fc6/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 09:41:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6658d303c14ec7d0de36ee661266a09745ad324cdc6232a6935fddf50716fc6/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 09:41:39 compute-0 podman[92567]: 2025-12-06 09:41:38.918927623 +0000 UTC m=+0.026655774 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 06 09:41:39 compute-0 podman[92567]: 2025-12-06 09:41:39.027213196 +0000 UTC m=+0.134941427 container init 65363f0b8b5c187ab1d88af7bd36af6ac9bfc7782852d5ad7f3ecf7fbbb33a0f (image=quay.io/ceph/ceph:v19, name=nostalgic_sutherland, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 09:41:39 compute-0 podman[92567]: 2025-12-06 09:41:39.034565028 +0000 UTC m=+0.142293219 container start 65363f0b8b5c187ab1d88af7bd36af6ac9bfc7782852d5ad7f3ecf7fbbb33a0f (image=quay.io/ceph/ceph:v19, name=nostalgic_sutherland, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec 06 09:41:39 compute-0 podman[92567]: 2025-12-06 09:41:39.038209233 +0000 UTC m=+0.145937464 container attach 65363f0b8b5c187ab1d88af7bd36af6ac9bfc7782852d5ad7f3ecf7fbbb33a0f (image=quay.io/ceph/ceph:v19, name=nostalgic_sutherland, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 09:41:39 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 06 09:41:39 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/917045225' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 09:41:39 compute-0 nostalgic_sutherland[92583]: 
Dec 06 09:41:39 compute-0 nostalgic_sutherland[92583]: {"epoch":3,"fsid":"5ecd3f74-dade-5fc4-92ce-8950ae424258","modified":"2025-12-06T09:40:20.714037Z","created":"2025-12-06T09:37:38.663870Z","min_mon_release":19,"min_mon_release_name":"squid","election_strategy":1,"disallowed_leaders":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef","squid"],"optional":[]},"mons":[{"rank":0,"name":"compute-0","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.122.100:3300","nonce":0},{"type":"v1","addr":"192.168.122.100:6789","nonce":0}]},"addr":"192.168.122.100:6789/0","public_addr":"192.168.122.100:6789/0","priority":0,"weight":0,"crush_location":"{}"},{"rank":1,"name":"compute-2","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.122.102:3300","nonce":0},{"type":"v1","addr":"192.168.122.102:6789","nonce":0}]},"addr":"192.168.122.102:6789/0","public_addr":"192.168.122.102:6789/0","priority":0,"weight":0,"crush_location":"{}"},{"rank":2,"name":"compute-1","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.122.101:3300","nonce":0},{"type":"v1","addr":"192.168.122.101:6789","nonce":0}]},"addr":"192.168.122.101:6789/0","public_addr":"192.168.122.101:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0,1,2]}
Dec 06 09:41:39 compute-0 nostalgic_sutherland[92583]: dumped monmap epoch 3
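The monmap above advertises two endpoints per mon: msgr v2 on port 3300 and legacy v1 on 6789. A minimal sketch of listing them from that JSON (Python, reading the document from stdin):

    import json
    import sys

    monmap = json.load(sys.stdin)  # the `mon dump` JSON printed above
    print("epoch", monmap["epoch"], "release", monmap["min_mon_release_name"])
    for mon in monmap["mons"]:
        addrs = {a["type"]: a["addr"] for a in mon["public_addrs"]["addrvec"]}
        print("rank", mon["rank"], mon["name"],
              "v2=" + addrs.get("v2", "-"), "v1=" + addrs.get("v1", "-"))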
Dec 06 09:41:39 compute-0 systemd[1]: libpod-65363f0b8b5c187ab1d88af7bd36af6ac9bfc7782852d5ad7f3ecf7fbbb33a0f.scope: Deactivated successfully.
Dec 06 09:41:39 compute-0 podman[92608]: 2025-12-06 09:41:39.58036144 +0000 UTC m=+0.020914132 container died 65363f0b8b5c187ab1d88af7bd36af6ac9bfc7782852d5ad7f3ecf7fbbb33a0f (image=quay.io/ceph/ceph:v19, name=nostalgic_sutherland, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS)
Dec 06 09:41:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-a6658d303c14ec7d0de36ee661266a09745ad324cdc6232a6935fddf50716fc6-merged.mount: Deactivated successfully.
Dec 06 09:41:39 compute-0 podman[92608]: 2025-12-06 09:41:39.617292628 +0000 UTC m=+0.057845340 container remove 65363f0b8b5c187ab1d88af7bd36af6ac9bfc7782852d5ad7f3ecf7fbbb33a0f (image=quay.io/ceph/ceph:v19, name=nostalgic_sutherland, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 09:41:39 compute-0 systemd[1]: libpod-conmon-65363f0b8b5c187ab1d88af7bd36af6ac9bfc7782852d5ad7f3ecf7fbbb33a0f.scope: Deactivated successfully.
Dec 06 09:41:39 compute-0 sudo[92564]: pam_unix(sudo:session): session closed for user root
Dec 06 09:41:39 compute-0 ceph-mon[74327]: Deploying daemon node-exporter.compute-1 on compute-1
Dec 06 09:41:39 compute-0 ceph-mon[74327]: from='client.? 192.168.122.100:0/917045225' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 09:41:40 compute-0 sudo[92646]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ublhiomfpntctycwdcitgmgtzhlbumti ; /usr/bin/python3'
Dec 06 09:41:40 compute-0 sudo[92646]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:41:40 compute-0 python3[92648]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 5ecd3f74-dade-5fc4-92ce-8950ae424258 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   auth get client.openstack _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 09:41:40 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e42 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 09:41:40 compute-0 podman[92649]: 2025-12-06 09:41:40.362831833 +0000 UTC m=+0.071762769 container create d6f07bec683c0bb2e6e6c208b310b402d626a8141e040baa1d7b9da23b602b4c (image=quay.io/ceph/ceph:v19, name=angry_keller, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 06 09:41:40 compute-0 systemd[1]: Started libpod-conmon-d6f07bec683c0bb2e6e6c208b310b402d626a8141e040baa1d7b9da23b602b4c.scope.
Dec 06 09:41:40 compute-0 podman[92649]: 2025-12-06 09:41:40.332757193 +0000 UTC m=+0.041688199 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 06 09:41:40 compute-0 systemd[1]: Started libcrun container.
Dec 06 09:41:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8487a8ee3ad9b2c6e0b42f4f150ada8c0db349fb0beeb5725c65f96e42ce4a99/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 09:41:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8487a8ee3ad9b2c6e0b42f4f150ada8c0db349fb0beeb5725c65f96e42ce4a99/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 09:41:40 compute-0 podman[92649]: 2025-12-06 09:41:40.46366624 +0000 UTC m=+0.172597246 container init d6f07bec683c0bb2e6e6c208b310b402d626a8141e040baa1d7b9da23b602b4c (image=quay.io/ceph/ceph:v19, name=angry_keller, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True)
Dec 06 09:41:40 compute-0 podman[92649]: 2025-12-06 09:41:40.472513819 +0000 UTC m=+0.181444755 container start d6f07bec683c0bb2e6e6c208b310b402d626a8141e040baa1d7b9da23b602b4c (image=quay.io/ceph/ceph:v19, name=angry_keller, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 09:41:40 compute-0 podman[92649]: 2025-12-06 09:41:40.47789938 +0000 UTC m=+0.186830326 container attach d6f07bec683c0bb2e6e6c208b310b402d626a8141e040baa1d7b9da23b602b4c (image=quay.io/ceph/ceph:v19, name=angry_keller, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS)
Dec 06 09:41:40 compute-0 ceph-mon[74327]: pgmap v12: 132 pgs: 132 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 29 KiB/s rd, 0 B/s wr, 12 op/s
Dec 06 09:41:40 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.openstack"} v 0)
Dec 06 09:41:40 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1032166629' entity='client.admin' cmd=[{"prefix": "auth get", "entity": "client.openstack"}]: dispatch
Dec 06 09:41:40 compute-0 angry_keller[92664]: [client.openstack]
Dec 06 09:41:40 compute-0 angry_keller[92664]:         key = AQA7+TNpAAAAABAABZDZy1tS5Qay3mTps8dAWg==
Dec 06 09:41:40 compute-0 angry_keller[92664]:         caps mgr = "allow *"
Dec 06 09:41:40 compute-0 angry_keller[92664]:         caps mon = "profile rbd"
Dec 06 09:41:40 compute-0 angry_keller[92664]:         caps osd = "profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups, profile rbd pool=images, profile rbd pool=cephfs.cephfs.meta, profile rbd pool=cephfs.cephfs.data"
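The client.openstack keyring printed above is plain INI: one [entity] section, a key line, and one caps line per daemon type, with quoted values. Python's stock configparser reads it unchanged, since option names may contain spaces ("caps mon" and friends). A sketch, assuming the keyring has been written to a file; the path shown is illustrative:

    import configparser

    cp = configparser.ConfigParser()
    cp.read("/etc/ceph/ceph.client.openstack.keyring")  # illustrative path

    sect = cp["client.openstack"]
    print("key:", sect["key"])
    for opt in ("caps mon", "caps mgr", "caps osd"):
        print(opt, "=", sect[opt].strip('"'))  # values are quoted in the file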
Dec 06 09:41:40 compute-0 systemd[1]: libpod-d6f07bec683c0bb2e6e6c208b310b402d626a8141e040baa1d7b9da23b602b4c.scope: Deactivated successfully.
Dec 06 09:41:40 compute-0 podman[92649]: 2025-12-06 09:41:40.943809197 +0000 UTC m=+0.652740103 container died d6f07bec683c0bb2e6e6c208b310b402d626a8141e040baa1d7b9da23b602b4c (image=quay.io/ceph/ceph:v19, name=angry_keller, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 06 09:41:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-8487a8ee3ad9b2c6e0b42f4f150ada8c0db349fb0beeb5725c65f96e42ce4a99-merged.mount: Deactivated successfully.
Dec 06 09:41:40 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v13: 132 pgs: 132 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 0 B/s wr, 9 op/s
Dec 06 09:41:40 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec 06 09:41:40 compute-0 podman[92649]: 2025-12-06 09:41:40.990704039 +0000 UTC m=+0.699634945 container remove d6f07bec683c0bb2e6e6c208b310b402d626a8141e040baa1d7b9da23b602b4c (image=quay.io/ceph/ceph:v19, name=angry_keller, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 06 09:41:40 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:41:40 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec 06 09:41:41 compute-0 sudo[92646]: pam_unix(sudo:session): session closed for user root
Dec 06 09:41:41 compute-0 systemd[1]: libpod-conmon-d6f07bec683c0bb2e6e6c208b310b402d626a8141e040baa1d7b9da23b602b4c.scope: Deactivated successfully.
Dec 06 09:41:41 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:41:41 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.node-exporter}] v 0)
Dec 06 09:41:41 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:41:41 compute-0 ceph-mgr[74618]: [cephadm INFO cephadm.serve] Deploying daemon node-exporter.compute-2 on compute-2
Dec 06 09:41:41 compute-0 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Deploying daemon node-exporter.compute-2 on compute-2
Dec 06 09:41:41 compute-0 ceph-mon[74327]: from='client.? 192.168.122.100:0/1032166629' entity='client.admin' cmd=[{"prefix": "auth get", "entity": "client.openstack"}]: dispatch
Dec 06 09:41:41 compute-0 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:41:41 compute-0 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:41:41 compute-0 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:41:42 compute-0 sudo[92848]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kbylwiqrcsrktacrgkylsxyupznbdvkc ; ANSIBLE_ASYNC_DIR=\'~/.ansible_async\' /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1765014102.0664124-37446-62494592332673/async_wrapper.py j901279167954 30 /home/zuul/.ansible/tmp/ansible-tmp-1765014102.0664124-37446-62494592332673/AnsiballZ_command.py _'
Dec 06 09:41:42 compute-0 sudo[92848]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:41:42 compute-0 ansible-async_wrapper.py[92850]: Invoked with j901279167954 30 /home/zuul/.ansible/tmp/ansible-tmp-1765014102.0664124-37446-62494592332673/AnsiballZ_command.py _
Dec 06 09:41:42 compute-0 ansible-async_wrapper.py[92854]: Starting module and watcher
Dec 06 09:41:42 compute-0 ansible-async_wrapper.py[92854]: Start watching 92855 (30)
Dec 06 09:41:42 compute-0 ansible-async_wrapper.py[92855]: Start module (92855)
Dec 06 09:41:42 compute-0 ansible-async_wrapper.py[92850]: Return async_wrapper task started.
Dec 06 09:41:42 compute-0 sudo[92848]: pam_unix(sudo:session): session closed for user root
Dec 06 09:41:42 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v14: 132 pgs: 132 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 0 B/s wr, 8 op/s
Dec 06 09:41:43 compute-0 ceph-mon[74327]: pgmap v13: 132 pgs: 132 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 0 B/s wr, 9 op/s
Dec 06 09:41:43 compute-0 ceph-mon[74327]: Deploying daemon node-exporter.compute-2 on compute-2
Dec 06 09:41:43 compute-0 python3[92856]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 5ecd3f74-dade-5fc4-92ce-8950ae424258 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 09:41:43 compute-0 podman[92857]: 2025-12-06 09:41:43.144867278 +0000 UTC m=+0.059905554 container create f9d5d2eeabdf06da1a96a49d9b899c216fa97f047ef9ee04cc40cb94dd0e0f3e (image=quay.io/ceph/ceph:v19, name=great_mendeleev, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 09:41:43 compute-0 systemd[1]: Started libpod-conmon-f9d5d2eeabdf06da1a96a49d9b899c216fa97f047ef9ee04cc40cb94dd0e0f3e.scope.
Dec 06 09:41:43 compute-0 podman[92857]: 2025-12-06 09:41:43.119986942 +0000 UTC m=+0.035025258 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 06 09:41:43 compute-0 systemd[1]: Started libcrun container.
Dec 06 09:41:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5b316994d80ec05135bdf36b2151c6640fca4a4e58290e000e3f30d526b330d/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 09:41:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5b316994d80ec05135bdf36b2151c6640fca4a4e58290e000e3f30d526b330d/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 09:41:43 compute-0 podman[92857]: 2025-12-06 09:41:43.239751098 +0000 UTC m=+0.154789384 container init f9d5d2eeabdf06da1a96a49d9b899c216fa97f047ef9ee04cc40cb94dd0e0f3e (image=quay.io/ceph/ceph:v19, name=great_mendeleev, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 09:41:43 compute-0 podman[92857]: 2025-12-06 09:41:43.247255755 +0000 UTC m=+0.162294021 container start f9d5d2eeabdf06da1a96a49d9b899c216fa97f047ef9ee04cc40cb94dd0e0f3e (image=quay.io/ceph/ceph:v19, name=great_mendeleev, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec 06 09:41:43 compute-0 podman[92857]: 2025-12-06 09:41:43.251412156 +0000 UTC m=+0.166450422 container attach f9d5d2eeabdf06da1a96a49d9b899c216fa97f047ef9ee04cc40cb94dd0e0f3e (image=quay.io/ceph/ceph:v19, name=great_mendeleev, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 06 09:41:43 compute-0 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.14466 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec 06 09:41:43 compute-0 great_mendeleev[92872]: 
Dec 06 09:41:43 compute-0 great_mendeleev[92872]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Dec 06 09:41:43 compute-0 podman[92857]: 2025-12-06 09:41:43.635757325 +0000 UTC m=+0.550795621 container died f9d5d2eeabdf06da1a96a49d9b899c216fa97f047ef9ee04cc40cb94dd0e0f3e (image=quay.io/ceph/ceph:v19, name=great_mendeleev, CEPH_REF=squid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec 06 09:41:43 compute-0 systemd[1]: libpod-f9d5d2eeabdf06da1a96a49d9b899c216fa97f047ef9ee04cc40cb94dd0e0f3e.scope: Deactivated successfully.
Dec 06 09:41:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-b5b316994d80ec05135bdf36b2151c6640fca4a4e58290e000e3f30d526b330d-merged.mount: Deactivated successfully.
Dec 06 09:41:43 compute-0 podman[92857]: 2025-12-06 09:41:43.682027718 +0000 UTC m=+0.597066024 container remove f9d5d2eeabdf06da1a96a49d9b899c216fa97f047ef9ee04cc40cb94dd0e0f3e (image=quay.io/ceph/ceph:v19, name=great_mendeleev, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2)
Dec 06 09:41:43 compute-0 systemd[1]: libpod-conmon-f9d5d2eeabdf06da1a96a49d9b899c216fa97f047ef9ee04cc40cb94dd0e0f3e.scope: Deactivated successfully.
Dec 06 09:41:43 compute-0 ansible-async_wrapper.py[92855]: Module complete (92855)
Dec 06 09:41:43 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec 06 09:41:43 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:41:43 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec 06 09:41:43 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:41:43 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.node-exporter}] v 0)
Dec 06 09:41:43 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:41:43 compute-0 ceph-mgr[74618]: [progress INFO root] complete: finished ev 718093b7-ae24-4ca4-868b-ad896e0c544f (Updating node-exporter deployment (+3 -> 3))
Dec 06 09:41:43 compute-0 ceph-mgr[74618]: [progress INFO root] Completed event 718093b7-ae24-4ca4-868b-ad896e0c544f (Updating node-exporter deployment (+3 -> 3)) in 8 seconds
Dec 06 09:41:43 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.node-exporter}] v 0)
Dec 06 09:41:43 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:41:43 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 06 09:41:43 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 09:41:43 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 06 09:41:43 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 09:41:43 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 06 09:41:43 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 09:41:43 compute-0 sudo[92909]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 09:41:43 compute-0 sudo[92909]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:41:43 compute-0 sudo[92909]: pam_unix(sudo:session): session closed for user root
Dec 06 09:41:43 compute-0 ceph-mgr[74618]: [progress INFO root] Writing back 11 completed events
Dec 06 09:41:43 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Dec 06 09:41:44 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:41:44 compute-0 sudo[92953]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 5ecd3f74-dade-5fc4-92ce-8950ae424258 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Dec 06 09:41:44 compute-0 sudo[92953]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:41:44 compute-0 sudo[93005]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-htmxwsukkojpqscqyfaewnrvvlhlzswk ; /usr/bin/python3'
Dec 06 09:41:44 compute-0 sudo[93005]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:41:44 compute-0 ceph-mon[74327]: pgmap v14: 132 pgs: 132 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 0 B/s wr, 8 op/s
Dec 06 09:41:44 compute-0 ceph-mon[74327]: from='client.14466 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec 06 09:41:44 compute-0 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:41:44 compute-0 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:41:44 compute-0 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:41:44 compute-0 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:41:44 compute-0 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 09:41:44 compute-0 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 09:41:44 compute-0 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 09:41:44 compute-0 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:41:44 compute-0 python3[93007]: ansible-ansible.legacy.async_status Invoked with jid=j901279167954.92850 mode=status _async_dir=/root/.ansible_async
Dec 06 09:41:44 compute-0 sudo[93005]: pam_unix(sudo:session): session closed for user root
Dec 06 09:41:44 compute-0 sudo[93079]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tslgeoxhllpjxfuucymgekjinbzjhjqv ; /usr/bin/python3'
Dec 06 09:41:44 compute-0 sudo[93079]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:41:44 compute-0 podman[93095]: 2025-12-06 09:41:44.433407437 +0000 UTC m=+0.046433098 container create 46fab7013354eeb114aa62a694999d310980eb50d56a2a1d620ac32e3a3d098f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_sutherland, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0)
Dec 06 09:41:44 compute-0 systemd[1]: Started libpod-conmon-46fab7013354eeb114aa62a694999d310980eb50d56a2a1d620ac32e3a3d098f.scope.
Dec 06 09:41:44 compute-0 systemd[1]: Started libcrun container.
Dec 06 09:41:44 compute-0 python3[93087]: ansible-ansible.legacy.async_status Invoked with jid=j901279167954.92850 mode=cleanup _async_dir=/root/.ansible_async
Dec 06 09:41:44 compute-0 podman[93095]: 2025-12-06 09:41:44.411412672 +0000 UTC m=+0.024438333 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 06 09:41:44 compute-0 podman[93095]: 2025-12-06 09:41:44.507245472 +0000 UTC m=+0.120271143 container init 46fab7013354eeb114aa62a694999d310980eb50d56a2a1d620ac32e3a3d098f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_sutherland, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 09:41:44 compute-0 sudo[93079]: pam_unix(sudo:session): session closed for user root
Dec 06 09:41:44 compute-0 podman[93095]: 2025-12-06 09:41:44.517032471 +0000 UTC m=+0.130058132 container start 46fab7013354eeb114aa62a694999d310980eb50d56a2a1d620ac32e3a3d098f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_sutherland, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 09:41:44 compute-0 podman[93095]: 2025-12-06 09:41:44.520960145 +0000 UTC m=+0.133985796 container attach 46fab7013354eeb114aa62a694999d310980eb50d56a2a1d620ac32e3a3d098f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_sutherland, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 06 09:41:44 compute-0 eloquent_sutherland[93111]: 167 167
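The bare "167 167" printed by this short-lived container looks like cephadm's uid/gid probe: 167 is the ceph user and group id inside these images, and cephadm needs it before writing daemon files on the host. A sketch of the same probe; that cephadm stats /var/lib/ceph for this is an assumption, though the stat invocation itself is standard:

    import subprocess

    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec")

    # Ask the image which uid/gid owns /var/lib/ceph inside it.
    out = subprocess.run(
        ["podman", "run", "--rm", "--entrypoint", "stat", IMAGE,
         "-c", "%u %g", "/var/lib/ceph"],
        check=True, capture_output=True, text=True,
    ).stdout.split()
    uid, gid = (int(x) for x in out)
    print(uid, gid)  # expected 167 167, matching the container output above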
Dec 06 09:41:44 compute-0 podman[93095]: 2025-12-06 09:41:44.523114553 +0000 UTC m=+0.136140224 container died 46fab7013354eeb114aa62a694999d310980eb50d56a2a1d620ac32e3a3d098f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_sutherland, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec 06 09:41:44 compute-0 systemd[1]: libpod-46fab7013354eeb114aa62a694999d310980eb50d56a2a1d620ac32e3a3d098f.scope: Deactivated successfully.
Dec 06 09:41:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-c8269a7b2b5d7079e5c988dc19efeed900be050d8eb8d864b089ca104e866929-merged.mount: Deactivated successfully.
Dec 06 09:41:44 compute-0 podman[93095]: 2025-12-06 09:41:44.566605168 +0000 UTC m=+0.179630839 container remove 46fab7013354eeb114aa62a694999d310980eb50d56a2a1d620ac32e3a3d098f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_sutherland, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 09:41:44 compute-0 systemd[1]: libpod-conmon-46fab7013354eeb114aa62a694999d310980eb50d56a2a1d620ac32e3a3d098f.scope: Deactivated successfully.
Dec 06 09:41:44 compute-0 podman[93136]: 2025-12-06 09:41:44.716045301 +0000 UTC m=+0.041143381 container create a761253c90c45d31f6f220e17783c6cc42018ad8f4481be9d630de82a24e482e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_lehmann, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True)
Dec 06 09:41:44 compute-0 systemd[1]: Started libpod-conmon-a761253c90c45d31f6f220e17783c6cc42018ad8f4481be9d630de82a24e482e.scope.
Dec 06 09:41:44 compute-0 systemd[1]: Started libcrun container.
Dec 06 09:41:44 compute-0 podman[93136]: 2025-12-06 09:41:44.697131053 +0000 UTC m=+0.022229173 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 06 09:41:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/651a38bf7dd1e882b770b9d478879242b6d26bc484f0b9c089d51080186f7c83/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 09:41:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/651a38bf7dd1e882b770b9d478879242b6d26bc484f0b9c089d51080186f7c83/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 09:41:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/651a38bf7dd1e882b770b9d478879242b6d26bc484f0b9c089d51080186f7c83/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 09:41:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/651a38bf7dd1e882b770b9d478879242b6d26bc484f0b9c089d51080186f7c83/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 09:41:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/651a38bf7dd1e882b770b9d478879242b6d26bc484f0b9c089d51080186f7c83/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 06 09:41:44 compute-0 podman[93136]: 2025-12-06 09:41:44.807911575 +0000 UTC m=+0.133009655 container init a761253c90c45d31f6f220e17783c6cc42018ad8f4481be9d630de82a24e482e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_lehmann, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 06 09:41:44 compute-0 podman[93136]: 2025-12-06 09:41:44.814204924 +0000 UTC m=+0.139303014 container start a761253c90c45d31f6f220e17783c6cc42018ad8f4481be9d630de82a24e482e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_lehmann, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Dec 06 09:41:44 compute-0 podman[93136]: 2025-12-06 09:41:44.819347327 +0000 UTC m=+0.144445417 container attach a761253c90c45d31f6f220e17783c6cc42018ad8f4481be9d630de82a24e482e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_lehmann, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Dec 06 09:41:44 compute-0 sudo[93181]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sxmwjpdvoglnkltawliulynsmbxruack ; /usr/bin/python3'
Dec 06 09:41:44 compute-0 sudo[93181]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:41:44 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v15: 132 pgs: 132 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 06 09:41:45 compute-0 python3[93183]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 5ecd3f74-dade-5fc4-92ce-8950ae424258 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 09:41:45 compute-0 crazy_lehmann[93153]: --> passed data devices: 0 physical, 1 LVM
Dec 06 09:41:45 compute-0 crazy_lehmann[93153]: --> All data devices are unavailable
Dec 06 09:41:45 compute-0 systemd[1]: libpod-a761253c90c45d31f6f220e17783c6cc42018ad8f4481be9d630de82a24e482e.scope: Deactivated successfully.
Dec 06 09:41:45 compute-0 podman[93136]: 2025-12-06 09:41:45.24781908 +0000 UTC m=+0.572917200 container died a761253c90c45d31f6f220e17783c6cc42018ad8f4481be9d630de82a24e482e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_lehmann, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True)
Dec 06 09:41:45 compute-0 podman[93193]: 2025-12-06 09:41:45.266344936 +0000 UTC m=+0.079252187 container create 96e7b80aec0c3be5abda8449b7c14bb4f8435efbe518e571509e2429522f9f60 (image=quay.io/ceph/ceph:v19, name=strange_driscoll, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325)
Dec 06 09:41:45 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e42 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 09:41:45 compute-0 systemd[1]: Started libpod-conmon-96e7b80aec0c3be5abda8449b7c14bb4f8435efbe518e571509e2429522f9f60.scope.
Dec 06 09:41:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-651a38bf7dd1e882b770b9d478879242b6d26bc484f0b9c089d51080186f7c83-merged.mount: Deactivated successfully.
Dec 06 09:41:45 compute-0 podman[93136]: 2025-12-06 09:41:45.314512888 +0000 UTC m=+0.639611008 container remove a761253c90c45d31f6f220e17783c6cc42018ad8f4481be9d630de82a24e482e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_lehmann, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325)
Dec 06 09:41:45 compute-0 podman[93193]: 2025-12-06 09:41:45.234636663 +0000 UTC m=+0.047543974 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 06 09:41:45 compute-0 systemd[1]: Started libcrun container.
Dec 06 09:41:45 compute-0 systemd[1]: libpod-conmon-a761253c90c45d31f6f220e17783c6cc42018ad8f4481be9d630de82a24e482e.scope: Deactivated successfully.
Dec 06 09:41:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d9ca6e139fa24d20d0b3da5cba9ecf5d2acf78da76ce88dc44d02f4bab32941/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 09:41:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d9ca6e139fa24d20d0b3da5cba9ecf5d2acf78da76ce88dc44d02f4bab32941/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 09:41:45 compute-0 podman[93193]: 2025-12-06 09:41:45.353999236 +0000 UTC m=+0.166906467 container init 96e7b80aec0c3be5abda8449b7c14bb4f8435efbe518e571509e2429522f9f60 (image=quay.io/ceph/ceph:v19, name=strange_driscoll, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 09:41:45 compute-0 sudo[92953]: pam_unix(sudo:session): session closed for user root
Dec 06 09:41:45 compute-0 podman[93193]: 2025-12-06 09:41:45.362880137 +0000 UTC m=+0.175787348 container start 96e7b80aec0c3be5abda8449b7c14bb4f8435efbe518e571509e2429522f9f60 (image=quay.io/ceph/ceph:v19, name=strange_driscoll, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 09:41:45 compute-0 podman[93193]: 2025-12-06 09:41:45.36616793 +0000 UTC m=+0.179075161 container attach 96e7b80aec0c3be5abda8449b7c14bb4f8435efbe518e571509e2429522f9f60 (image=quay.io/ceph/ceph:v19, name=strange_driscoll, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 09:41:45 compute-0 sudo[93225]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 09:41:45 compute-0 sudo[93225]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:41:45 compute-0 sudo[93225]: pam_unix(sudo:session): session closed for user root
Dec 06 09:41:45 compute-0 sudo[93250]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 5ecd3f74-dade-5fc4-92ce-8950ae424258 -- lvm list --format json
Dec 06 09:41:45 compute-0 sudo[93250]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:41:45 compute-0 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.14472 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec 06 09:41:45 compute-0 strange_driscoll[93221]: 
Dec 06 09:41:45 compute-0 strange_driscoll[93221]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Dec 06 09:41:45 compute-0 podman[93193]: 2025-12-06 09:41:45.747905407 +0000 UTC m=+0.560812638 container died 96e7b80aec0c3be5abda8449b7c14bb4f8435efbe518e571509e2429522f9f60 (image=quay.io/ceph/ceph:v19, name=strange_driscoll, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 09:41:45 compute-0 systemd[1]: libpod-96e7b80aec0c3be5abda8449b7c14bb4f8435efbe518e571509e2429522f9f60.scope: Deactivated successfully.
Dec 06 09:41:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-6d9ca6e139fa24d20d0b3da5cba9ecf5d2acf78da76ce88dc44d02f4bab32941-merged.mount: Deactivated successfully.
Dec 06 09:41:45 compute-0 podman[93193]: 2025-12-06 09:41:45.830541939 +0000 UTC m=+0.643449200 container remove 96e7b80aec0c3be5abda8449b7c14bb4f8435efbe518e571509e2429522f9f60 (image=quay.io/ceph/ceph:v19, name=strange_driscoll, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Dec 06 09:41:45 compute-0 systemd[1]: libpod-conmon-96e7b80aec0c3be5abda8449b7c14bb4f8435efbe518e571509e2429522f9f60.scope: Deactivated successfully.
Dec 06 09:41:45 compute-0 sudo[93181]: pam_unix(sudo:session): session closed for user root
Dec 06 09:41:45 compute-0 podman[93349]: 2025-12-06 09:41:45.999552601 +0000 UTC m=+0.063711194 container create c8f5403724eee4f1d147729c7549c6b3ad539387795a1973616b7f8367443603 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_elgamal, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 09:41:46 compute-0 systemd[1]: Started libpod-conmon-c8f5403724eee4f1d147729c7549c6b3ad539387795a1973616b7f8367443603.scope.
Dec 06 09:41:46 compute-0 systemd[1]: Started libcrun container.
Dec 06 09:41:46 compute-0 podman[93349]: 2025-12-06 09:41:45.978220197 +0000 UTC m=+0.042378830 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 06 09:41:46 compute-0 podman[93349]: 2025-12-06 09:41:46.082649918 +0000 UTC m=+0.146808601 container init c8f5403724eee4f1d147729c7549c6b3ad539387795a1973616b7f8367443603 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_elgamal, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 09:41:46 compute-0 podman[93349]: 2025-12-06 09:41:46.091716014 +0000 UTC m=+0.155874637 container start c8f5403724eee4f1d147729c7549c6b3ad539387795a1973616b7f8367443603 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_elgamal, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1)
Dec 06 09:41:46 compute-0 podman[93349]: 2025-12-06 09:41:46.095448433 +0000 UTC m=+0.159607056 container attach c8f5403724eee4f1d147729c7549c6b3ad539387795a1973616b7f8367443603 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_elgamal, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 06 09:41:46 compute-0 suspicious_elgamal[93365]: 167 167
Dec 06 09:41:46 compute-0 systemd[1]: libpod-c8f5403724eee4f1d147729c7549c6b3ad539387795a1973616b7f8367443603.scope: Deactivated successfully.
Dec 06 09:41:46 compute-0 podman[93349]: 2025-12-06 09:41:46.099761848 +0000 UTC m=+0.163920471 container died c8f5403724eee4f1d147729c7549c6b3ad539387795a1973616b7f8367443603 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_elgamal, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 09:41:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-12c5ede26755ddfacf199d4f4e403d6f4b35837d17982fccba6e5d0c37d239d1-merged.mount: Deactivated successfully.
Dec 06 09:41:46 compute-0 podman[93349]: 2025-12-06 09:41:46.146644871 +0000 UTC m=+0.210803474 container remove c8f5403724eee4f1d147729c7549c6b3ad539387795a1973616b7f8367443603 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_elgamal, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 09:41:46 compute-0 systemd[1]: libpod-conmon-c8f5403724eee4f1d147729c7549c6b3ad539387795a1973616b7f8367443603.scope: Deactivated successfully.
Dec 06 09:41:46 compute-0 ceph-mon[74327]: pgmap v15: 132 pgs: 132 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 06 09:41:46 compute-0 ceph-mon[74327]: from='client.14472 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec 06 09:41:46 compute-0 podman[93388]: 2025-12-06 09:41:46.358262669 +0000 UTC m=+0.050404884 container create 25eac422e75af831cc2bb6cff94ec9df9de72aa92c14fd222e9958db57eb4985 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_ptolemy, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 09:41:46 compute-0 systemd[1]: Started libpod-conmon-25eac422e75af831cc2bb6cff94ec9df9de72aa92c14fd222e9958db57eb4985.scope.
Dec 06 09:41:46 compute-0 podman[93388]: 2025-12-06 09:41:46.333929591 +0000 UTC m=+0.026071806 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 06 09:41:46 compute-0 systemd[1]: Started libcrun container.
Dec 06 09:41:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b96caad3c9968dd317a829cec4be08598ad886cf524083da3039c60e1092a204/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 09:41:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b96caad3c9968dd317a829cec4be08598ad886cf524083da3039c60e1092a204/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 09:41:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b96caad3c9968dd317a829cec4be08598ad886cf524083da3039c60e1092a204/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 09:41:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b96caad3c9968dd317a829cec4be08598ad886cf524083da3039c60e1092a204/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 09:41:46 compute-0 podman[93388]: 2025-12-06 09:41:46.462690741 +0000 UTC m=+0.154832986 container init 25eac422e75af831cc2bb6cff94ec9df9de72aa92c14fd222e9958db57eb4985 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_ptolemy, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec 06 09:41:46 compute-0 podman[93388]: 2025-12-06 09:41:46.472262413 +0000 UTC m=+0.164404618 container start 25eac422e75af831cc2bb6cff94ec9df9de72aa92c14fd222e9958db57eb4985 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_ptolemy, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0)
Dec 06 09:41:46 compute-0 podman[93388]: 2025-12-06 09:41:46.475758633 +0000 UTC m=+0.167900848 container attach 25eac422e75af831cc2bb6cff94ec9df9de72aa92c14fd222e9958db57eb4985 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_ptolemy, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 09:41:46 compute-0 sudo[93432]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tmaedcjuokdhlyhzvwuwfxisdakdrcql ; /usr/bin/python3'
Dec 06 09:41:46 compute-0 sudo[93432]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:41:46 compute-0 vigilant_ptolemy[93404]: {
Dec 06 09:41:46 compute-0 vigilant_ptolemy[93404]:     "1": [
Dec 06 09:41:46 compute-0 vigilant_ptolemy[93404]:         {
Dec 06 09:41:46 compute-0 vigilant_ptolemy[93404]:             "devices": [
Dec 06 09:41:46 compute-0 vigilant_ptolemy[93404]:                 "/dev/loop3"
Dec 06 09:41:46 compute-0 vigilant_ptolemy[93404]:             ],
Dec 06 09:41:46 compute-0 vigilant_ptolemy[93404]:             "lv_name": "ceph_lv0",
Dec 06 09:41:46 compute-0 vigilant_ptolemy[93404]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 09:41:46 compute-0 vigilant_ptolemy[93404]:             "lv_size": "21470642176",
Dec 06 09:41:46 compute-0 vigilant_ptolemy[93404]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=a55pcS-v3Vt-CJ4W-fqCV-Baze-9VlY-jG9prS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=5ecd3f74-dade-5fc4-92ce-8950ae424258,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=7899c4d8-edb4-4836-b838-c4aa702ad7af,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 06 09:41:46 compute-0 vigilant_ptolemy[93404]:             "lv_uuid": "a55pcS-v3Vt-CJ4W-fqCV-Baze-9VlY-jG9prS",
Dec 06 09:41:46 compute-0 vigilant_ptolemy[93404]:             "name": "ceph_lv0",
Dec 06 09:41:46 compute-0 vigilant_ptolemy[93404]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 09:41:46 compute-0 vigilant_ptolemy[93404]:             "tags": {
Dec 06 09:41:46 compute-0 vigilant_ptolemy[93404]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 06 09:41:46 compute-0 vigilant_ptolemy[93404]:                 "ceph.block_uuid": "a55pcS-v3Vt-CJ4W-fqCV-Baze-9VlY-jG9prS",
Dec 06 09:41:46 compute-0 vigilant_ptolemy[93404]:                 "ceph.cephx_lockbox_secret": "",
Dec 06 09:41:46 compute-0 vigilant_ptolemy[93404]:                 "ceph.cluster_fsid": "5ecd3f74-dade-5fc4-92ce-8950ae424258",
Dec 06 09:41:46 compute-0 vigilant_ptolemy[93404]:                 "ceph.cluster_name": "ceph",
Dec 06 09:41:46 compute-0 vigilant_ptolemy[93404]:                 "ceph.crush_device_class": "",
Dec 06 09:41:46 compute-0 vigilant_ptolemy[93404]:                 "ceph.encrypted": "0",
Dec 06 09:41:46 compute-0 vigilant_ptolemy[93404]:                 "ceph.osd_fsid": "7899c4d8-edb4-4836-b838-c4aa702ad7af",
Dec 06 09:41:46 compute-0 vigilant_ptolemy[93404]:                 "ceph.osd_id": "1",
Dec 06 09:41:46 compute-0 vigilant_ptolemy[93404]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 06 09:41:46 compute-0 vigilant_ptolemy[93404]:                 "ceph.type": "block",
Dec 06 09:41:46 compute-0 vigilant_ptolemy[93404]:                 "ceph.vdo": "0",
Dec 06 09:41:46 compute-0 vigilant_ptolemy[93404]:                 "ceph.with_tpm": "0"
Dec 06 09:41:46 compute-0 vigilant_ptolemy[93404]:             },
Dec 06 09:41:46 compute-0 vigilant_ptolemy[93404]:             "type": "block",
Dec 06 09:41:46 compute-0 vigilant_ptolemy[93404]:             "vg_name": "ceph_vg0"
Dec 06 09:41:46 compute-0 vigilant_ptolemy[93404]:         }
Dec 06 09:41:46 compute-0 vigilant_ptolemy[93404]:     ]
Dec 06 09:41:46 compute-0 vigilant_ptolemy[93404]: }
Dec 06 09:41:46 compute-0 systemd[1]: libpod-25eac422e75af831cc2bb6cff94ec9df9de72aa92c14fd222e9958db57eb4985.scope: Deactivated successfully.
Dec 06 09:41:46 compute-0 podman[93388]: 2025-12-06 09:41:46.755765804 +0000 UTC m=+0.447908089 container died 25eac422e75af831cc2bb6cff94ec9df9de72aa92c14fd222e9958db57eb4985 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_ptolemy, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 09:41:46 compute-0 python3[93434]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 5ecd3f74-dade-5fc4-92ce-8950ae424258 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch ls --export -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 09:41:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-b96caad3c9968dd317a829cec4be08598ad886cf524083da3039c60e1092a204-merged.mount: Deactivated successfully.
Dec 06 09:41:46 compute-0 podman[93388]: 2025-12-06 09:41:46.822186843 +0000 UTC m=+0.514329038 container remove 25eac422e75af831cc2bb6cff94ec9df9de72aa92c14fd222e9958db57eb4985 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_ptolemy, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 09:41:46 compute-0 systemd[1]: libpod-conmon-25eac422e75af831cc2bb6cff94ec9df9de72aa92c14fd222e9958db57eb4985.scope: Deactivated successfully.
Dec 06 09:41:46 compute-0 sudo[93250]: pam_unix(sudo:session): session closed for user root
Dec 06 09:41:46 compute-0 podman[93449]: 2025-12-06 09:41:46.912465477 +0000 UTC m=+0.112925220 container create a6a208f6212b2b9afc79a8f5b9c864cb9a413b5818eed2a517d9ff68fd3cead2 (image=quay.io/ceph/ceph:v19, name=great_bohr, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Dec 06 09:41:46 compute-0 podman[93449]: 2025-12-06 09:41:46.853938217 +0000 UTC m=+0.054397980 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 06 09:41:46 compute-0 systemd[1]: Started libpod-conmon-a6a208f6212b2b9afc79a8f5b9c864cb9a413b5818eed2a517d9ff68fd3cead2.scope.
Dec 06 09:41:46 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v16: 132 pgs: 132 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 06 09:41:46 compute-0 systemd[1]: Started libcrun container.
Dec 06 09:41:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b394a236b692e5c816f5affce081b32c58f35c50aec856ee4a0981c0fe63e74/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 09:41:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b394a236b692e5c816f5affce081b32c58f35c50aec856ee4a0981c0fe63e74/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 09:41:46 compute-0 sudo[93463]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 09:41:47 compute-0 sudo[93463]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:41:47 compute-0 sudo[93463]: pam_unix(sudo:session): session closed for user root
Dec 06 09:41:47 compute-0 podman[93449]: 2025-12-06 09:41:47.007952765 +0000 UTC m=+0.208412508 container init a6a208f6212b2b9afc79a8f5b9c864cb9a413b5818eed2a517d9ff68fd3cead2 (image=quay.io/ceph/ceph:v19, name=great_bohr, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 09:41:47 compute-0 podman[93449]: 2025-12-06 09:41:47.017927181 +0000 UTC m=+0.218386944 container start a6a208f6212b2b9afc79a8f5b9c864cb9a413b5818eed2a517d9ff68fd3cead2 (image=quay.io/ceph/ceph:v19, name=great_bohr, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 09:41:47 compute-0 podman[93449]: 2025-12-06 09:41:47.021944057 +0000 UTC m=+0.222403820 container attach a6a208f6212b2b9afc79a8f5b9c864cb9a413b5818eed2a517d9ff68fd3cead2 (image=quay.io/ceph/ceph:v19, name=great_bohr, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Dec 06 09:41:47 compute-0 sudo[93495]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 5ecd3f74-dade-5fc4-92ce-8950ae424258 -- raw list --format json
Dec 06 09:41:47 compute-0 sudo[93495]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:41:47 compute-0 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.14478 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec 06 09:41:47 compute-0 great_bohr[93487]: 
Dec 06 09:41:47 compute-0 great_bohr[93487]: [{"networks": ["192.168.122.0/24"], "placement": {"count": 1, "hosts": ["compute-0"]}, "service_name": "alertmanager", "service_type": "alertmanager"}, {"placement": {"host_pattern": "*"}, "service_name": "crash", "service_type": "crash"}, {"networks": ["192.168.122.0/24"], "placement": {"count": 1, "hosts": ["compute-0"]}, "service_name": "grafana", "service_type": "grafana", "spec": {"anonymous_access": true, "protocol": "https"}}, {"placement": {"hosts": ["compute-0", "compute-1", "compute-2"]}, "service_id": "nfs.cephfs", "service_name": "ingress.nfs.cephfs", "service_type": "ingress", "spec": {"backend_service": "nfs.cephfs", "enable_haproxy_protocol": true, "first_virtual_router_id": 50, "frontend_port": 2049, "monitor_port": 9049, "virtual_ip": "192.168.122.2/24"}}, {"placement": {"count": 2}, "service_id": "rgw.default", "service_name": "ingress.rgw.default", "service_type": "ingress", "spec": {"backend_service": "rgw.rgw", "first_virtual_router_id": 50, "frontend_port": 8080, "monitor_port": 8999, "virtual_interface_networks": ["192.168.122.0/24"], "virtual_ip": "192.168.122.2/24"}}, {"placement": {"hosts": ["compute-0", "compute-1", "compute-2"]}, "service_id": "cephfs", "service_name": "mds.cephfs", "service_type": "mds"}, {"placement": {"hosts": ["compute-0", "compute-1", "compute-2"]}, "service_name": "mgr", "service_type": "mgr"}, {"placement": {"hosts": ["compute-0", "compute-1", "compute-2"]}, "service_name": "mon", "service_type": "mon"}, {"placement": {"hosts": ["compute-0", "compute-1", "compute-2"]}, "service_id": "cephfs", "service_name": "nfs.cephfs", "service_type": "nfs", "spec": {"enable_haproxy_protocol": true, "port": 12049}}, {"placement": {"host_pattern": "*"}, "service_name": "node-exporter", "service_type": "node-exporter"}, {"placement": {"hosts": ["compute-0", "compute-1", "compute-2"]}, "service_id": "default_drive_group", "service_name": "osd.default_drive_group", "service_type": "osd", "spec": {"data_devices": {"paths": ["/dev/ceph_vg0/ceph_lv0"]}, "filter_logic": "AND", "objectstore": "bluestore"}}, {"networks": ["192.168.122.0/24"], "placement": {"count": 1, "hosts": ["compute-0"]}, "service_name": "prometheus", "service_type": "prometheus"}, {"networks": ["192.168.122.0/24"], "placement": {"hosts": ["compute-0", "compute-1", "compute-2"]}, "service_id": "rgw", "service_name": "rgw.rgw", "service_type": "rgw", "spec": {"rgw_frontend_port": 8082}}]
Dec 06 09:41:47 compute-0 systemd[1]: libpod-a6a208f6212b2b9afc79a8f5b9c864cb9a413b5818eed2a517d9ff68fd3cead2.scope: Deactivated successfully.
Dec 06 09:41:47 compute-0 podman[93449]: 2025-12-06 09:41:47.493265576 +0000 UTC m=+0.693725359 container died a6a208f6212b2b9afc79a8f5b9c864cb9a413b5818eed2a517d9ff68fd3cead2 (image=quay.io/ceph/ceph:v19, name=great_bohr, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 09:41:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-9b394a236b692e5c816f5affce081b32c58f35c50aec856ee4a0981c0fe63e74-merged.mount: Deactivated successfully.
Dec 06 09:41:47 compute-0 podman[93449]: 2025-12-06 09:41:47.549681449 +0000 UTC m=+0.750141232 container remove a6a208f6212b2b9afc79a8f5b9c864cb9a413b5818eed2a517d9ff68fd3cead2 (image=quay.io/ceph/ceph:v19, name=great_bohr, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Dec 06 09:41:47 compute-0 systemd[1]: libpod-conmon-a6a208f6212b2b9afc79a8f5b9c864cb9a413b5818eed2a517d9ff68fd3cead2.scope: Deactivated successfully.
Dec 06 09:41:47 compute-0 podman[93580]: 2025-12-06 09:41:47.579875542 +0000 UTC m=+0.061093251 container create 97888e2dd7998a5938a5f600f5dc2d1e06f03f21cccb4124811e9ec927b23ed3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_shtern, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 09:41:47 compute-0 sudo[93432]: pam_unix(sudo:session): session closed for user root
Dec 06 09:41:47 compute-0 systemd[1]: Started libpod-conmon-97888e2dd7998a5938a5f600f5dc2d1e06f03f21cccb4124811e9ec927b23ed3.scope.
Dec 06 09:41:47 compute-0 systemd[1]: Started libcrun container.
Dec 06 09:41:47 compute-0 podman[93580]: 2025-12-06 09:41:47.5604654 +0000 UTC m=+0.041683089 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 06 09:41:47 compute-0 podman[93580]: 2025-12-06 09:41:47.65918513 +0000 UTC m=+0.140402829 container init 97888e2dd7998a5938a5f600f5dc2d1e06f03f21cccb4124811e9ec927b23ed3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_shtern, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325)
Dec 06 09:41:47 compute-0 podman[93580]: 2025-12-06 09:41:47.66900683 +0000 UTC m=+0.150224509 container start 97888e2dd7998a5938a5f600f5dc2d1e06f03f21cccb4124811e9ec927b23ed3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_shtern, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 09:41:47 compute-0 quirky_shtern[93609]: 167 167
Dec 06 09:41:47 compute-0 podman[93580]: 2025-12-06 09:41:47.672942635 +0000 UTC m=+0.154160334 container attach 97888e2dd7998a5938a5f600f5dc2d1e06f03f21cccb4124811e9ec927b23ed3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_shtern, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 09:41:47 compute-0 systemd[1]: libpod-97888e2dd7998a5938a5f600f5dc2d1e06f03f21cccb4124811e9ec927b23ed3.scope: Deactivated successfully.
Dec 06 09:41:47 compute-0 podman[93580]: 2025-12-06 09:41:47.67437783 +0000 UTC m=+0.155595499 container died 97888e2dd7998a5938a5f600f5dc2d1e06f03f21cccb4124811e9ec927b23ed3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_shtern, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Dec 06 09:41:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-c754b53438597553d2d807def5819f252153e98b20fad63ebed6602a0d6ebabd-merged.mount: Deactivated successfully.
Dec 06 09:41:47 compute-0 podman[93580]: 2025-12-06 09:41:47.719506176 +0000 UTC m=+0.200723885 container remove 97888e2dd7998a5938a5f600f5dc2d1e06f03f21cccb4124811e9ec927b23ed3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_shtern, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2)
Dec 06 09:41:47 compute-0 systemd[1]: libpod-conmon-97888e2dd7998a5938a5f600f5dc2d1e06f03f21cccb4124811e9ec927b23ed3.scope: Deactivated successfully.
Dec 06 09:41:47 compute-0 podman[93633]: 2025-12-06 09:41:47.867745852 +0000 UTC m=+0.044486808 container create eaa07fe38304432785733100d0941ef0989bbc50c67fa49296444c18dd30eeca (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_colden, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, OSD_FLAVOR=default)
Dec 06 09:41:47 compute-0 ansible-async_wrapper.py[92854]: Done in kid B.
Dec 06 09:41:47 compute-0 systemd[1]: Started libpod-conmon-eaa07fe38304432785733100d0941ef0989bbc50c67fa49296444c18dd30eeca.scope.
Dec 06 09:41:47 compute-0 podman[93633]: 2025-12-06 09:41:47.847728089 +0000 UTC m=+0.024469095 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 06 09:41:47 compute-0 systemd[1]: Started libcrun container.
Dec 06 09:41:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ca79c6e5413a858c07c02d44e50cda67c7fe147699e42a33e9cdb5d80ed2940/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 09:41:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ca79c6e5413a858c07c02d44e50cda67c7fe147699e42a33e9cdb5d80ed2940/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 09:41:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ca79c6e5413a858c07c02d44e50cda67c7fe147699e42a33e9cdb5d80ed2940/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 09:41:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ca79c6e5413a858c07c02d44e50cda67c7fe147699e42a33e9cdb5d80ed2940/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 09:41:47 compute-0 podman[93633]: 2025-12-06 09:41:47.967919068 +0000 UTC m=+0.144660084 container init eaa07fe38304432785733100d0941ef0989bbc50c67fa49296444c18dd30eeca (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_colden, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 09:41:47 compute-0 podman[93633]: 2025-12-06 09:41:47.978864134 +0000 UTC m=+0.155605080 container start eaa07fe38304432785733100d0941ef0989bbc50c67fa49296444c18dd30eeca (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_colden, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, org.label-schema.build-date=20250325)
Dec 06 09:41:47 compute-0 podman[93633]: 2025-12-06 09:41:47.983860922 +0000 UTC m=+0.160601968 container attach eaa07fe38304432785733100d0941ef0989bbc50c67fa49296444c18dd30eeca (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_colden, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325)
Dec 06 09:41:48 compute-0 sudo[93699]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hqyzulojdkbiiqowbqxvedqhkxxgbiea ; /usr/bin/python3'
Dec 06 09:41:48 compute-0 sudo[93699]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:41:48 compute-0 ceph-mon[74327]: pgmap v16: 132 pgs: 132 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 06 09:41:48 compute-0 ceph-mon[74327]: from='client.14478 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec 06 09:41:48 compute-0 python3[93703]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 5ecd3f74-dade-5fc4-92ce-8950ae424258 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch ps -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 09:41:48 compute-0 podman[93748]: 2025-12-06 09:41:48.643761021 +0000 UTC m=+0.053025067 container create 1ce09c652ecac27322196c81ade91fbb0ac1dbaa5393f03644cdbecaee3857e5 (image=quay.io/ceph/ceph:v19, name=hopeful_pascal, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1)
Dec 06 09:41:48 compute-0 lvm[93763]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 06 09:41:48 compute-0 lvm[93763]: VG ceph_vg0 finished
Dec 06 09:41:48 compute-0 busy_colden[93649]: {}
Dec 06 09:41:48 compute-0 systemd[1]: Started libpod-conmon-1ce09c652ecac27322196c81ade91fbb0ac1dbaa5393f03644cdbecaee3857e5.scope.
Dec 06 09:41:48 compute-0 systemd[1]: libpod-eaa07fe38304432785733100d0941ef0989bbc50c67fa49296444c18dd30eeca.scope: Deactivated successfully.
Dec 06 09:41:48 compute-0 podman[93748]: 2025-12-06 09:41:48.62192041 +0000 UTC m=+0.031184506 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 06 09:41:48 compute-0 conmon[93649]: conmon eaa07fe3830443278573 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-eaa07fe38304432785733100d0941ef0989bbc50c67fa49296444c18dd30eeca.scope/container/memory.events
Dec 06 09:41:48 compute-0 systemd[1]: libpod-eaa07fe38304432785733100d0941ef0989bbc50c67fa49296444c18dd30eeca.scope: Consumed 1.179s CPU time.
Dec 06 09:41:48 compute-0 podman[93633]: 2025-12-06 09:41:48.717675428 +0000 UTC m=+0.894416404 container died eaa07fe38304432785733100d0941ef0989bbc50c67fa49296444c18dd30eeca (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_colden, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 09:41:48 compute-0 systemd[1]: Started libcrun container.
Dec 06 09:41:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/389c61b318ffe409d59c8cb9ab1140c3d98e4ae3ec830093742d971cb02a636f/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 09:41:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/389c61b318ffe409d59c8cb9ab1140c3d98e4ae3ec830093742d971cb02a636f/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 09:41:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-1ca79c6e5413a858c07c02d44e50cda67c7fe147699e42a33e9cdb5d80ed2940-merged.mount: Deactivated successfully.
Dec 06 09:41:48 compute-0 podman[93748]: 2025-12-06 09:41:48.754542013 +0000 UTC m=+0.163806059 container init 1ce09c652ecac27322196c81ade91fbb0ac1dbaa5393f03644cdbecaee3857e5 (image=quay.io/ceph/ceph:v19, name=hopeful_pascal, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default)
Dec 06 09:41:48 compute-0 podman[93748]: 2025-12-06 09:41:48.7601454 +0000 UTC m=+0.169409446 container start 1ce09c652ecac27322196c81ade91fbb0ac1dbaa5393f03644cdbecaee3857e5 (image=quay.io/ceph/ceph:v19, name=hopeful_pascal, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True)
Dec 06 09:41:48 compute-0 podman[93748]: 2025-12-06 09:41:48.763904678 +0000 UTC m=+0.173168724 container attach 1ce09c652ecac27322196c81ade91fbb0ac1dbaa5393f03644cdbecaee3857e5 (image=quay.io/ceph/ceph:v19, name=hopeful_pascal, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 09:41:48 compute-0 podman[93633]: 2025-12-06 09:41:48.769790464 +0000 UTC m=+0.946531420 container remove eaa07fe38304432785733100d0941ef0989bbc50c67fa49296444c18dd30eeca (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_colden, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Dec 06 09:41:48 compute-0 systemd[1]: libpod-conmon-eaa07fe38304432785733100d0941ef0989bbc50c67fa49296444c18dd30eeca.scope: Deactivated successfully.
Dec 06 09:41:48 compute-0 sudo[93495]: pam_unix(sudo:session): session closed for user root
Dec 06 09:41:48 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 06 09:41:48 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:41:48 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 06 09:41:48 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:41:48 compute-0 ceph-mgr[74618]: [progress INFO root] update: starting ev b1a841d8-e71a-43d3-ad28-5a44e75485bf (Updating rgw.rgw deployment (+3 -> 3))
Dec 06 09:41:48 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-2.qizhkr", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]} v 0)
Dec 06 09:41:48 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-2.qizhkr", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Dec 06 09:41:48 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-2.qizhkr", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Dec 06 09:41:48 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=rgw_frontends}] v 0)
Dec 06 09:41:48 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v17: 132 pgs: 132 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 06 09:41:48 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:41:48 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 06 09:41:48 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 09:41:48 compute-0 ceph-mgr[74618]: [cephadm INFO cephadm.serve] Deploying daemon rgw.rgw.compute-2.qizhkr on compute-2
Dec 06 09:41:48 compute-0 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Deploying daemon rgw.rgw.compute-2.qizhkr on compute-2
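The mon commands from 09:41:48 above form one complete cephadm deployment cycle for an RGW daemon: create its client.rgw.rgw.<host>.<suffix> key with auth get-or-create (caps mon "allow *", mgr "allow rw", osd "allow rwx tag rgw *=*"), set its rgw_frontends option, generate a minimal ceph.conf for the container, then deploy the daemon on the target host. The same sequence repeats below at 09:41:50 for compute-1 and at 09:41:52 for compute-0.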
Dec 06 09:41:49 compute-0 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.14484 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec 06 09:41:49 compute-0 hopeful_pascal[93769]: 
Dec 06 09:41:49 compute-0 hopeful_pascal[93769]: [{"container_id": "aa22500c4f14", "container_image_digests": ["quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec", "quay.io/ceph/ceph@sha256:af0c5903e901e329adabe219dfc8d0c3efc1f05102a753902f33ee16c26b6cee"], "container_image_id": "aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c", "container_image_name": "quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec", "cpu_percentage": "0.12%", "created": "2025-12-06T09:38:26.308407Z", "daemon_id": "compute-0", "daemon_name": "crash.compute-0", "daemon_type": "crash", "hostname": "compute-0", "is_active": false, "last_refresh": "2025-12-06T09:41:30.897361Z", "memory_usage": 7799308, "ports": [], "service_name": "crash", "started": "2025-12-06T09:38:26.201101Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258@crash.compute-0", "version": "19.2.3"}, {"container_id": "500f8c89b5c2", "container_image_digests": ["quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec"], "container_image_id": "aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c", "container_image_name": "quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec", "cpu_percentage": "0.41%", "created": "2025-12-06T09:39:19.960123Z", "daemon_id": "compute-1", "daemon_name": "crash.compute-1", "daemon_type": "crash", "hostname": "compute-1", "is_active": false, "last_refresh": "2025-12-06T09:41:30.961429Z", "memory_usage": 7812939, "ports": [], "service_name": "crash", "started": "2025-12-06T09:39:19.844753Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258@crash.compute-1", "version": "19.2.3"}, {"container_id": "29aae73f62af", "container_image_digests": ["quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec"], "container_image_id": "aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c", "container_image_name": "quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec", "cpu_percentage": "0.38%", "created": "2025-12-06T09:40:42.007499Z", "daemon_id": "compute-2", "daemon_name": "crash.compute-2", "daemon_type": "crash", "hostname": "compute-2", "is_active": false, "last_refresh": "2025-12-06T09:41:30.631523Z", "memory_usage": 7808745, "ports": [], "service_name": "crash", "started": "2025-12-06T09:40:41.208240Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258@crash.compute-2", "version": "19.2.3"}, {"container_id": "815d2c9c324f", "container_image_digests": ["quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec", "quay.io/ceph/ceph@sha256:af0c5903e901e329adabe219dfc8d0c3efc1f05102a753902f33ee16c26b6cee"], "container_image_id": "aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c", "container_image_name": "quay.io/ceph/ceph:v19", "cpu_percentage": "24.20%", "created": "2025-12-06T09:37:46.726645Z", "daemon_id": "compute-0.qhdjwa", "daemon_name": "mgr.compute-0.qhdjwa", "daemon_type": "mgr", "hostname": "compute-0", "is_active": false, "last_refresh": "2025-12-06T09:41:30.897149Z", "memory_usage": 543686656, "ports": [9283, 8765], "service_name": "mgr", "started": "2025-12-06T09:37:46.196694Z", "status": 1, "status_desc": "running", "systemd_unit": 
"ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258@mgr.compute-0.qhdjwa", "version": "19.2.3"}, {"container_id": "66d946b34f90", "container_image_digests": ["quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec"], "container_image_id": "aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c", "container_image_name": "quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec", "cpu_percentage": "49.99%", "created": "2025-12-06T09:40:29.037886Z", "daemon_id": "compute-1.sauzid", "daemon_name": "mgr.compute-1.sauzid", "daemon_type": "mgr", "hostname": "compute-1", "is_active": false, "last_refresh": "2025-12-06T09:41:30.961804Z", "memory_usage": 496186163, "ports": [8765], "service_name": "mgr", "started": "2025-12-06T09:40:28.901022Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258@mgr.compute-1.sauzid", "version": "19.2.3"}, {"container_id": "4821735c9154", "container_image_digests": ["quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec"], "container_image_id": "aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c", "container_image_name": "quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec", "cpu_percentage": "35.81%", "created": "2025-12-06T09:40:21.865862Z", "daemon_id": "compute-2.oazbvn", "daemon_name": "mgr.compute-2.oazbvn", "daemon_type": "mgr", "hostname": "compute-2", "is_active": false, "last_refresh": "2025-12-06T09:41:30.631430Z", "memory_usage": 504574771, "ports": [8765], "service_name": "mgr", "started": "2025-12-06T09:40:21.774842Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258@mgr.compute-2.oazbvn", "version": "19.2.3"}, {"container_id": "484d6ed1039c", "container_image_digests": ["quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec", "quay.io/ceph/ceph@sha256:af0c5903e901e329adabe219dfc8d0c3efc1f05102a753902f33ee16c26b6cee"], "container_image_id": "aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c", "container_image_name": "quay.io/ceph/ceph:v19", "cpu_percentage": "2.48%", "created": "2025-12-06T09:37:41.200513Z", "daemon_id": "compute-0", "daemon_name": "mon.compute-0", "daemon_type": "mon", "hostname": "compute-0", "is_active": false, "last_refresh": "2025-12-06T09:41:30.896943Z", "memory_request": 2147483648, "memory_usage": 61886955, "ports": [], "service_name": "mon", "started": "2025-12-06T09:37:43.583790Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258@mon.compute-0", "version": "19.2.3"}, {"container_id": "d320de814b27", "container_image_digests": ["quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec"], "container_image_id": "aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c", "container_image_name": "quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec", "cpu_percentage": "2.40%", "created": "2025-12-06T09:40:12.577566Z", "daemon_id": "compute-1", "daemon_name": "mon.compute-1", "daemon_type": "mon", "hostname": "compute-1", "is_active": false, "last_refresh": "2025-12-06T09:41:30.961697Z", "memory_request": 2147483648, "memory_usage": 45529169, "ports": [], "service_name": "mon", "started": "2025-12-06T09:40:12.471377Z", "status": 1, "status_desc": "running", "systemd_unit": 
"ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258@mon.compute-1", "version": "19.2.3"}, {"container_id": "9800312b2542", "container_image_digests": ["quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec"], "container_image_id": "aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c", "container_image_name": "quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec", "cpu_percentage": "1.78%", "created": "2025-12-06T09:40:09.874483Z", "daemon_id": "compute-2", "daemon_name": "mon.compute-2", "daemon_type": "mon", "hostname": "compute-2", "is_active": false, "last_refresh": "2025-12-06T09:41:30.631326Z", "memory_request": 2147483648, "memory_usage": 49398415, "ports": [], "service_name": "mon", "started": "2025-12-06T09:40:08.953427Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258@mon.compute-2", "version": "19.2.3"}, {"daemon_id": "compute-0", "daemon_name": "node-exporter.compute-0", "daemon_type": "node-exporter", "events": ["2025-12-06T09:41:38.394457Z daemon:node-exporter.compute-0 [INFO] \"Deployed node-exporter.compute-0 on host 'compute-0'\""], "hostname": "compute-0",
Dec 06 09:41:49 compute-0 hopeful_pascal[93769]:  "is_active": false, "ports": [9100], "service_name": "node-exporter", "status": 2, "status_desc": "starting"}, {"daemon_id": "compute-1", "daemon_name": "node-exporter.compute-1", "daemon_type": "node-exporter", "events": ["2025-12-06T09:41:41.015939Z daemon:node-exporter.compute-1 [INFO] \"Deployed node-exporter.compute-1 on host 'compute-1'\""], "hostname": "compute-1", "is_active": false, "ports": [9100], "service_name": "node-exporter", "status": 2, "status_desc": "starting"}, {"daemon_id": "compute-2", "daemon_name": "node-exporter.compute-2", "daemon_type": "node-exporter", "events": ["2025-12-06T09:41:43.837357Z daemon:node-exporter.compute-2 [INFO] \"Deployed node-exporter.compute-2 on host 'compute-2'\""], "hostname": "compute-2", "is_active": false, "ports": [9100], "service_name": "node-exporter", "status": 2, "status_desc": "starting"}, {"container_id": "1aa09529261e", "container_image_digests": ["quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec", "quay.io/ceph/ceph@sha256:af0c5903e901e329adabe219dfc8d0c3efc1f05102a753902f33ee16c26b6cee"], "container_image_id": "aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c", "container_image_name": "quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec", "cpu_percentage": "1.69%", "created": "2025-12-06T09:39:35.584331Z", "daemon_id": "1", "daemon_name": "osd.1", "daemon_type": "osd", "hostname": "compute-0", "is_active": false, "last_refresh": "2025-12-06T09:41:30.897606Z", "memory_request": 4294967296, "memory_usage": 68933386, "ports": [], "service_name": "osd.default_drive_group", "started": "2025-12-06T09:39:34.881862Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258@osd.1", "version": "19.2.3"}, {"container_id": "0f0393491dd0", "container_image_digests": ["quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec"], "container_image_id": "aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c", "container_image_name": "quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec", "cpu_percentage": "2.17%", "created": "2025-12-06T09:39:32.740564Z", "daemon_id": "0", "daemon_name": "osd.0", "daemon_type": "osd", "hostname": "compute-1", "is_active": false, "last_refresh": "2025-12-06T09:41:30.961579Z", "memory_request": 5502921113, "memory_usage": 70925680, "ports": [], "service_name": "osd.default_drive_group", "started": "2025-12-06T09:39:32.604587Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258@osd.0", "version": "19.2.3"}, {"container_id": "446ec9caaae7", "container_image_digests": ["quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec"], "container_image_id": "aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c", "container_image_name": "quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec", "cpu_percentage": "3.28%", "created": "2025-12-06T09:40:59.214000Z", "daemon_id": "2", "daemon_name": "osd.2", "daemon_type": "osd", "hostname": "compute-2", "is_active": false, "last_refresh": "2025-12-06T09:41:30.631597Z", "memory_request": 4294967296, "memory_usage": 64529367, "ports": [], "service_name": "osd.default_drive_group", "started": "2025-12-06T09:40:59.105219Z", "status": 1, "status_desc": "running", "systemd_unit": 
"ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258@osd.2", "version": "19.2.3"}]
Dec 06 09:41:49 compute-0 systemd[1]: libpod-1ce09c652ecac27322196c81ade91fbb0ac1dbaa5393f03644cdbecaee3857e5.scope: Deactivated successfully.
Dec 06 09:41:49 compute-0 podman[93748]: 2025-12-06 09:41:49.248616219 +0000 UTC m=+0.657880265 container died 1ce09c652ecac27322196c81ade91fbb0ac1dbaa5393f03644cdbecaee3857e5 (image=quay.io/ceph/ceph:v19, name=hopeful_pascal, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Dec 06 09:41:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-389c61b318ffe409d59c8cb9ab1140c3d98e4ae3ec830093742d971cb02a636f-merged.mount: Deactivated successfully.
Dec 06 09:41:49 compute-0 podman[93748]: 2025-12-06 09:41:49.301560493 +0000 UTC m=+0.710824549 container remove 1ce09c652ecac27322196c81ade91fbb0ac1dbaa5393f03644cdbecaee3857e5 (image=quay.io/ceph/ceph:v19, name=hopeful_pascal, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 09:41:49 compute-0 systemd[1]: libpod-conmon-1ce09c652ecac27322196c81ade91fbb0ac1dbaa5393f03644cdbecaee3857e5.scope: Deactivated successfully.
Dec 06 09:41:49 compute-0 sudo[93699]: pam_unix(sudo:session): session closed for user root
Dec 06 09:41:49 compute-0 rsyslogd[1004]: message too long (8192) with configured size 8096, begin of message is: [{"container_id": "aa22500c4f14", "container_image_digests": ["quay.io/ceph/ceph [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
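The rsyslogd complaint above is a direct consequence of the oversized orch ps payload a few lines earlier: the JSON array exceeds rsyslog's configured 8096-byte limit, so rsyslog truncates its copy of the message (the journal keeps the pieces it was handed). If full payloads are needed in syslog files, the limit can be raised in /etc/rsyslog.conf; on rsyslog 8.x that is the global maxMessageSize parameter, for example global(maxMessageSize="64k"), which must be set before any input modules are loaded.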
Dec 06 09:41:49 compute-0 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:41:49 compute-0 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:41:49 compute-0 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-2.qizhkr", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Dec 06 09:41:49 compute-0 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-2.qizhkr", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Dec 06 09:41:49 compute-0 ceph-mon[74327]: pgmap v17: 132 pgs: 132 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 06 09:41:49 compute-0 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:41:49 compute-0 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 09:41:49 compute-0 ceph-mon[74327]: Deploying daemon rgw.rgw.compute-2.qizhkr on compute-2
Dec 06 09:41:49 compute-0 ceph-mon[74327]: from='client.14484 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec 06 09:41:50 compute-0 sudo[93840]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gqdedyeheqrabayetxfwcrddjjsluzgb ; /usr/bin/python3'
Dec 06 09:41:50 compute-0 sudo[93840]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:41:50 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e42 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 09:41:50 compute-0 python3[93842]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 5ecd3f74-dade-5fc4-92ce-8950ae424258 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   -s -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 09:41:50 compute-0 podman[93843]: 2025-12-06 09:41:50.438708566 +0000 UTC m=+0.068360882 container create c72f1264af97f6674db2cda084ccb62199e7d7dba7def8dc96707f0968034462 (image=quay.io/ceph/ceph:v19, name=sharp_beaver, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1)
Dec 06 09:41:50 compute-0 podman[93843]: 2025-12-06 09:41:50.406665283 +0000 UTC m=+0.036317669 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 06 09:41:50 compute-0 systemd[1]: Started libpod-conmon-c72f1264af97f6674db2cda084ccb62199e7d7dba7def8dc96707f0968034462.scope.
Dec 06 09:41:50 compute-0 systemd[1]: Started libcrun container.
Dec 06 09:41:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f2b6da106f69b92129fdeb0d774d400de61dbe4c21747582cf5b5048cc0d80e6/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 09:41:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f2b6da106f69b92129fdeb0d774d400de61dbe4c21747582cf5b5048cc0d80e6/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 09:41:50 compute-0 podman[93843]: 2025-12-06 09:41:50.559896516 +0000 UTC m=+0.189548912 container init c72f1264af97f6674db2cda084ccb62199e7d7dba7def8dc96707f0968034462 (image=quay.io/ceph/ceph:v19, name=sharp_beaver, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True)
Dec 06 09:41:50 compute-0 podman[93843]: 2025-12-06 09:41:50.567956661 +0000 UTC m=+0.197609007 container start c72f1264af97f6674db2cda084ccb62199e7d7dba7def8dc96707f0968034462 (image=quay.io/ceph/ceph:v19, name=sharp_beaver, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 06 09:41:50 compute-0 podman[93843]: 2025-12-06 09:41:50.572124293 +0000 UTC m=+0.201776629 container attach c72f1264af97f6674db2cda084ccb62199e7d7dba7def8dc96707f0968034462 (image=quay.io/ceph/ceph:v19, name=sharp_beaver, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 09:41:50 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec 06 09:41:50 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:41:50 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec 06 09:41:50 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:41:50 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0)
Dec 06 09:41:50 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:41:50 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-1.oqhsdh", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]} v 0)
Dec 06 09:41:50 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-1.oqhsdh", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Dec 06 09:41:50 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-1.oqhsdh", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Dec 06 09:41:50 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=rgw_frontends}] v 0)
Dec 06 09:41:50 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:41:50 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 06 09:41:50 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 09:41:50 compute-0 ceph-mgr[74618]: [cephadm INFO cephadm.serve] Deploying daemon rgw.rgw.compute-1.oqhsdh on compute-1
Dec 06 09:41:50 compute-0 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Deploying daemon rgw.rgw.compute-1.oqhsdh on compute-1
Dec 06 09:41:50 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v18: 132 pgs: 132 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 06 09:41:51 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "status", "format": "json"} v 0)
Dec 06 09:41:51 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2506900584' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Dec 06 09:41:51 compute-0 sharp_beaver[93858]: 
Dec 06 09:41:51 compute-0 sharp_beaver[93858]: {"fsid":"5ecd3f74-dade-5fc4-92ce-8950ae424258","health":{"status":"HEALTH_ERR","checks":{"MDS_ALL_DOWN":{"severity":"HEALTH_ERR","summary":{"message":"1 filesystem is offline","count":1},"muted":false},"MDS_UP_LESS_THAN_MAX":{"severity":"HEALTH_WARN","summary":{"message":"1 filesystem is online with fewer MDS than max_mds","count":1},"muted":false}},"mutes":[]},"election_epoch":14,"quorum":[0,1,2],"quorum_names":["compute-0","compute-2","compute-1"],"quorum_age":85,"monmap":{"epoch":3,"min_mon_release_name":"squid","num_mons":3},"osdmap":{"epoch":42,"num_osds":3,"num_up_osds":3,"osd_up_since":1765014071,"num_in_osds":3,"osd_in_since":1765014049,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"active+clean","count":132}],"num_pgs":132,"num_pools":8,"num_objects":3,"data_bytes":459280,"bytes_used":84246528,"bytes_avail":64327680000,"bytes_total":64411926528},"fsmap":{"epoch":2,"btime":"2025-12-06T09:41:29:967825+0000","id":1,"up":0,"in":0,"max":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":2,"modules":["cephadm","dashboard","iostat","nfs","restful"],"services":{"dashboard":"http://192.168.122.100:8443/"}},"servicemap":{"epoch":4,"modified":"2025-12-06T09:40:50.551863+0000","services":{"mgr":{"daemons":{"summary":"","compute-1.sauzid":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"compute-2.oazbvn":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}}}},"mon":{"daemons":{"summary":"","compute-1":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"compute-2":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}}}}}},"progress_events":{"b1a841d8-e71a-43d3-ad28-5a44e75485bf":{"message":"Updating rgw.rgw deployment (+3 -> 3) (0s)\n      [............................] ","progress":0,"add_to_ceph_s":true}}}
Dec 06 09:41:51 compute-0 systemd[1]: libpod-c72f1264af97f6674db2cda084ccb62199e7d7dba7def8dc96707f0968034462.scope: Deactivated successfully.
Dec 06 09:41:51 compute-0 podman[93843]: 2025-12-06 09:41:51.053463897 +0000 UTC m=+0.683116243 container died c72f1264af97f6674db2cda084ccb62199e7d7dba7def8dc96707f0968034462 (image=quay.io/ceph/ceph:v19, name=sharp_beaver, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 09:41:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-f2b6da106f69b92129fdeb0d774d400de61dbe4c21747582cf5b5048cc0d80e6-merged.mount: Deactivated successfully.
Dec 06 09:41:51 compute-0 podman[93843]: 2025-12-06 09:41:51.112566516 +0000 UTC m=+0.742218862 container remove c72f1264af97f6674db2cda084ccb62199e7d7dba7def8dc96707f0968034462 (image=quay.io/ceph/ceph:v19, name=sharp_beaver, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1)
Dec 06 09:41:51 compute-0 systemd[1]: libpod-conmon-c72f1264af97f6674db2cda084ccb62199e7d7dba7def8dc96707f0968034462.scope: Deactivated successfully.
Dec 06 09:41:51 compute-0 sudo[93840]: pam_unix(sudo:session): session closed for user root
Dec 06 09:41:51 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e42 do_prune osdmap full prune enabled
Dec 06 09:41:51 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e43 e43: 3 total, 3 up, 3 in
Dec 06 09:41:51 compute-0 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e43: 3 total, 3 up, 3 in
Dec 06 09:41:51 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"} v 0)
Dec 06 09:41:51 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.qizhkr' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
Dec 06 09:41:51 compute-0 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:41:51 compute-0 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:41:51 compute-0 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:41:51 compute-0 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-1.oqhsdh", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Dec 06 09:41:51 compute-0 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-1.oqhsdh", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Dec 06 09:41:51 compute-0 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:41:51 compute-0 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 09:41:51 compute-0 ceph-mon[74327]: Deploying daemon rgw.rgw.compute-1.oqhsdh on compute-1
Dec 06 09:41:51 compute-0 ceph-mon[74327]: from='client.? 192.168.122.100:0/2506900584' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Dec 06 09:41:51 compute-0 sudo[93920]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ctpzvxrjmtxupwetvebezkrnasnovhdc ; /usr/bin/python3'
Dec 06 09:41:51 compute-0 sudo[93920]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:41:52 compute-0 python3[93922]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 5ecd3f74-dade-5fc4-92ce-8950ae424258 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config dump -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 09:41:52 compute-0 podman[93923]: 2025-12-06 09:41:52.179165359 +0000 UTC m=+0.063183318 container create 0e6954b0c58bb59e20bc0e85c8197af98d6234f5a6f728bb865efe92e45eba0f (image=quay.io/ceph/ceph:v19, name=nice_johnson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_REF=squid, ceph=True)
Dec 06 09:41:52 compute-0 systemd[1]: Started libpod-conmon-0e6954b0c58bb59e20bc0e85c8197af98d6234f5a6f728bb865efe92e45eba0f.scope.
Dec 06 09:41:52 compute-0 podman[93923]: 2025-12-06 09:41:52.149187562 +0000 UTC m=+0.033205591 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 06 09:41:52 compute-0 systemd[1]: Started libcrun container.
Dec 06 09:41:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f74762e91d63a927082935ea18b2985efbf631256528391147f248ed5108898/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 09:41:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f74762e91d63a927082935ea18b2985efbf631256528391147f248ed5108898/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 09:41:52 compute-0 podman[93923]: 2025-12-06 09:41:52.289448615 +0000 UTC m=+0.173466614 container init 0e6954b0c58bb59e20bc0e85c8197af98d6234f5a6f728bb865efe92e45eba0f (image=quay.io/ceph/ceph:v19, name=nice_johnson, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec 06 09:41:52 compute-0 podman[93923]: 2025-12-06 09:41:52.297759148 +0000 UTC m=+0.181777107 container start 0e6954b0c58bb59e20bc0e85c8197af98d6234f5a6f728bb865efe92e45eba0f (image=quay.io/ceph/ceph:v19, name=nice_johnson, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1)
Dec 06 09:41:52 compute-0 podman[93923]: 2025-12-06 09:41:52.301369592 +0000 UTC m=+0.185387561 container attach 0e6954b0c58bb59e20bc0e85c8197af98d6234f5a6f728bb865efe92e45eba0f (image=quay.io/ceph/ceph:v19, name=nice_johnson, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec 06 09:41:52 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec 06 09:41:52 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:41:52 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec 06 09:41:52 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:41:52 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0)
Dec 06 09:41:52 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:41:52 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.zktslo", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]} v 0)
Dec 06 09:41:52 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.zktslo", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Dec 06 09:41:52 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.zktslo", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Dec 06 09:41:52 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=rgw_frontends}] v 0)
Dec 06 09:41:52 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:41:52 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 06 09:41:52 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 09:41:52 compute-0 ceph-mgr[74618]: [cephadm INFO cephadm.serve] Deploying daemon rgw.rgw.compute-0.zktslo on compute-0
Dec 06 09:41:52 compute-0 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Deploying daemon rgw.rgw.compute-0.zktslo on compute-0
Dec 06 09:41:52 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e43 do_prune osdmap full prune enabled
Dec 06 09:41:52 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.qizhkr' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished
Dec 06 09:41:52 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e44 e44: 3 total, 3 up, 3 in
Dec 06 09:41:52 compute-0 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e44: 3 total, 3 up, 3 in
Dec 06 09:41:52 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Dec 06 09:41:52 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1220877648' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Dec 06 09:41:52 compute-0 nice_johnson[93938]: 
Dec 06 09:41:52 compute-0 ceph-mon[74327]: pgmap v18: 132 pgs: 132 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 06 09:41:52 compute-0 ceph-mon[74327]: osdmap e43: 3 total, 3 up, 3 in
Dec 06 09:41:52 compute-0 ceph-mon[74327]: from='client.? ' entity='client.rgw.rgw.compute-2.qizhkr' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
Dec 06 09:41:52 compute-0 ceph-mon[74327]: from='client.? 192.168.122.102:0/3027759423' entity='client.rgw.rgw.compute-2.qizhkr' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
Dec 06 09:41:52 compute-0 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:41:52 compute-0 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:41:52 compute-0 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:41:52 compute-0 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.zktslo", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Dec 06 09:41:52 compute-0 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.zktslo", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Dec 06 09:41:52 compute-0 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:41:52 compute-0 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 09:41:52 compute-0 ceph-mon[74327]: from='client.? ' entity='client.rgw.rgw.compute-2.qizhkr' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished
Dec 06 09:41:52 compute-0 ceph-mon[74327]: osdmap e44: 3 total, 3 up, 3 in
Dec 06 09:41:52 compute-0 systemd[1]: libpod-0e6954b0c58bb59e20bc0e85c8197af98d6234f5a6f728bb865efe92e45eba0f.scope: Deactivated successfully.
Dec 06 09:41:52 compute-0 nice_johnson[93938]: [{"section":"global","name":"cluster_network","value":"172.20.0.0/24","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"container_image","value":"quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec","level":"basic","can_update_at_runtime":false,"mask":""},{"section":"global","name":"log_to_file","value":"true","level":"basic","can_update_at_runtime":true,"mask":""},{"section":"global","name":"mon_cluster_log_to_file","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"ms_bind_ipv4","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"ms_bind_ipv6","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"osd_pool_default_size","value":"1","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"public_network","value":"192.168.122.0/24","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_accepted_admin_roles","value":"ResellerAdmin, swiftoperator","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_accepted_roles","value":"member, Member, admin","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_domain","value":"default","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_password","value":"12345678","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_project","value":"service","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_user","value":"swift","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_api_version","value":"3","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_keystone_implicit_tenants","value":"true","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_url","value":"https://keystone-internal.openstack.svc:5000","level":"basic","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_verify_ssl","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attr_name_len","value":"128","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attr_size","value":"1024","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attrs_num_in_req","value":"90","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_s3_auth_use_keystone","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_account_in_url","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_enforce_content_length","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_versioning_enabled","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_trust_forwarded_https","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mon","name":"auth_allow_ins
ecure_global_id_reclaim","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mon","name":"mon_warn_on_pool_no_redundancy","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mgr","name":"mgr/cephadm/container_init","value":"True","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/cephadm/migration_current","value":"7","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/cephadm/use_repo_digest","value":"false","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/dashboard/ALERTMANAGER_API_HOST","value":"http://192.168.122.100:9093","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/dashboard/GRAFANA_API_PASSWORD","value":"/home/grafana_password.yml","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/dashboard/GRAFANA_API_URL","value":"http://192.168.122.100:3100","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/dashboard/GRAFANA_API_USERNAME","value":"admin","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/dashboard/PROMETHEUS_API_HOST","value":"http://192.168.122.100:9092","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/dashboard/compute-0.qhdjwa/server_addr","value":"192.168.122.100","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/dashboard/compute-1.sauzid/server_addr","value":"192.168.122.101","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/dashboard/compute-2.oazbvn/server_addr","value":"192.168.122.102","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/dashboard/server_port","value":"8443","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/dashboard/ssl","value":"false","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/dashboard/ssl_server_port","value":"8443","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/orchestrator/orchestrator","value":"cephadm","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"osd","name":"osd_memory_target","value":"5502921113","level":"basic","can_update_at_runtime":true,"mask":"host:compute-1","location_type":"host","location_value":"compute-1"},{"section":"osd","name":"osd_memory_target_autotune","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"client.rgw.rgw.compute-0.zktslo","name":"rgw_frontends","value":"beast endpoint=192.168.122.100:8082","level":"basic","can_update_at_runtime":false,"mask":""},{"section":"client.rgw.rgw.compute-1.oqhsdh","name":"rgw_frontends","value":"beast endpoint=192.168.122.101:8082","level":"basic","can_update_at_runtime":false,"mask":""},{"section":"client.rgw.rgw.compute-2.qizhkr","name":"rgw_frontends","value":"beast endpoint=192.168.122.102:8082","level":"basic","can_update_at_runtime":false,"mask":""}]
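The config dump above is where this job's RGW and Keystone integration is visible: per-daemon rgw_frontends bindings and the rgw_keystone_* options pointing at https://keystone-internal.openstack.svc:5000. A short sketch to list just the frontend bindings (config_dump.json is a hypothetical saved copy):

    import json

    # Print each RGW daemon's beast endpoint from `ceph config dump -f json`.
    with open("config_dump.json") as f:  # hypothetical capture of the output above
        options = json.load(f)

    for opt in options:
        if opt["name"] == "rgw_frontends":
            print(opt["section"], "->", opt["value"])

For the dump above this yields one beast endpoint per node: 192.168.122.100:8082, 192.168.122.101:8082 and 192.168.122.102:8082.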
Dec 06 09:41:52 compute-0 podman[93923]: 2025-12-06 09:41:52.728269895 +0000 UTC m=+0.612287874 container died 0e6954b0c58bb59e20bc0e85c8197af98d6234f5a6f728bb865efe92e45eba0f (image=quay.io/ceph/ceph:v19, name=nice_johnson, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 09:41:52 compute-0 sudo[93961]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 09:41:52 compute-0 sudo[93961]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:41:52 compute-0 sudo[93961]: pam_unix(sudo:session): session closed for user root
Dec 06 09:41:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-6f74762e91d63a927082935ea18b2985efbf631256528391147f248ed5108898-merged.mount: Deactivated successfully.
Dec 06 09:41:52 compute-0 podman[93923]: 2025-12-06 09:41:52.769148558 +0000 UTC m=+0.653166537 container remove 0e6954b0c58bb59e20bc0e85c8197af98d6234f5a6f728bb865efe92e45eba0f (image=quay.io/ceph/ceph:v19, name=nice_johnson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, ceph=True)
Dec 06 09:41:52 compute-0 systemd[1]: libpod-conmon-0e6954b0c58bb59e20bc0e85c8197af98d6234f5a6f728bb865efe92e45eba0f.scope: Deactivated successfully.
Dec 06 09:41:52 compute-0 sudo[93920]: pam_unix(sudo:session): session closed for user root
Dec 06 09:41:52 compute-0 sudo[93995]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 _orch deploy --fsid 5ecd3f74-dade-5fc4-92ce-8950ae424258
Dec 06 09:41:52 compute-0 sudo[93995]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:41:52 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v21: 133 pgs: 1 unknown, 132 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 06 09:41:53 compute-0 podman[94068]: 2025-12-06 09:41:53.21562217 +0000 UTC m=+0.051304393 container create 46905ce3cb21bee24715e4991279fcb947287514318bc71b0aa7540f96f3a1c2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_swartz, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 09:41:53 compute-0 systemd[1]: Started libpod-conmon-46905ce3cb21bee24715e4991279fcb947287514318bc71b0aa7540f96f3a1c2.scope.
Dec 06 09:41:53 compute-0 systemd[1]: Started libcrun container.
Dec 06 09:41:53 compute-0 podman[94068]: 2025-12-06 09:41:53.198939022 +0000 UTC m=+0.034621275 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 06 09:41:53 compute-0 podman[94068]: 2025-12-06 09:41:53.306363279 +0000 UTC m=+0.142045522 container init 46905ce3cb21bee24715e4991279fcb947287514318bc71b0aa7540f96f3a1c2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_swartz, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 06 09:41:53 compute-0 podman[94068]: 2025-12-06 09:41:53.318261454 +0000 UTC m=+0.153943717 container start 46905ce3cb21bee24715e4991279fcb947287514318bc71b0aa7540f96f3a1c2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_swartz, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Dec 06 09:41:53 compute-0 podman[94068]: 2025-12-06 09:41:53.322812969 +0000 UTC m=+0.158495222 container attach 46905ce3cb21bee24715e4991279fcb947287514318bc71b0aa7540f96f3a1c2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_swartz, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325)
Dec 06 09:41:53 compute-0 eager_swartz[94085]: 167 167
Dec 06 09:41:53 compute-0 systemd[1]: libpod-46905ce3cb21bee24715e4991279fcb947287514318bc71b0aa7540f96f3a1c2.scope: Deactivated successfully.
Dec 06 09:41:53 compute-0 podman[94068]: 2025-12-06 09:41:53.32698037 +0000 UTC m=+0.162662673 container died 46905ce3cb21bee24715e4991279fcb947287514318bc71b0aa7540f96f3a1c2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_swartz, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 09:41:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-be0623897aff7406f3c0d3c7d8f301cfb511ba108babb159bcb53e2cef3c846b-merged.mount: Deactivated successfully.
Dec 06 09:41:53 compute-0 podman[94068]: 2025-12-06 09:41:53.378969734 +0000 UTC m=+0.214651977 container remove 46905ce3cb21bee24715e4991279fcb947287514318bc71b0aa7540f96f3a1c2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_swartz, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 09:41:53 compute-0 systemd[1]: libpod-conmon-46905ce3cb21bee24715e4991279fcb947287514318bc71b0aa7540f96f3a1c2.scope: Deactivated successfully.
Dec 06 09:41:53 compute-0 systemd[1]: Reloading.
Dec 06 09:41:53 compute-0 systemd-rc-local-generator[94131]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 09:41:53 compute-0 systemd-sysv-generator[94136]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 09:41:53 compute-0 sudo[94163]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tlwbhhatrzzyculdobqhuukcbpypbxyl ; /usr/bin/python3'
Dec 06 09:41:53 compute-0 sudo[94163]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:41:53 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e44 do_prune osdmap full prune enabled
Dec 06 09:41:53 compute-0 ceph-mon[74327]: Deploying daemon rgw.rgw.compute-0.zktslo on compute-0
Dec 06 09:41:53 compute-0 ceph-mon[74327]: from='client.? 192.168.122.100:0/1220877648' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Dec 06 09:41:53 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e45 e45: 3 total, 3 up, 3 in
Dec 06 09:41:53 compute-0 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e45: 3 total, 3 up, 3 in
Dec 06 09:41:53 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 45 pg[10.0( empty local-lis/les=0/0 n=0 ec=45/45 lis/c=0/0 les/c/f=0/0/0 sis=45) [1] r=0 lpr=45 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 09:41:53 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"} v 0)
Dec 06 09:41:53 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.oqhsdh' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Dec 06 09:41:53 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"} v 0)
Dec 06 09:41:53 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.qizhkr' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Dec 06 09:41:53 compute-0 systemd[1]: Reloading.
Dec 06 09:41:53 compute-0 systemd-sysv-generator[94199]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 09:41:53 compute-0 systemd-rc-local-generator[94195]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 09:41:53 compute-0 python3[94167]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 5ecd3f74-dade-5fc4-92ce-8950ae424258 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd get-require-min-compat-client _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 09:41:53 compute-0 podman[94205]: 2025-12-06 09:41:53.936344232 +0000 UTC m=+0.044081105 container create dffbd2f038672180d7de3d54a0f92ba106a2496e4306df2ad4ce3d2006836fa5 (image=quay.io/ceph/ceph:v19, name=priceless_franklin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec 06 09:41:54 compute-0 ceph-mgr[74618]: [progress WARNING root] Starting Global Recovery Event,2 pgs not in active + clean state
Dec 06 09:41:54 compute-0 podman[94205]: 2025-12-06 09:41:53.919160507 +0000 UTC m=+0.026897400 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 06 09:41:54 compute-0 systemd[1]: Started libpod-conmon-dffbd2f038672180d7de3d54a0f92ba106a2496e4306df2ad4ce3d2006836fa5.scope.
Dec 06 09:41:54 compute-0 systemd[1]: Starting Ceph rgw.rgw.compute-0.zktslo for 5ecd3f74-dade-5fc4-92ce-8950ae424258...
Dec 06 09:41:54 compute-0 systemd[1]: Started libcrun container.
Dec 06 09:41:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/39486e697181db7e753b1bcfd97b5d4957667be1f4386fcd828e84ba9f1d4042/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 09:41:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/39486e697181db7e753b1bcfd97b5d4957667be1f4386fcd828e84ba9f1d4042/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 09:41:54 compute-0 podman[94205]: 2025-12-06 09:41:54.096892716 +0000 UTC m=+0.204629609 container init dffbd2f038672180d7de3d54a0f92ba106a2496e4306df2ad4ce3d2006836fa5 (image=quay.io/ceph/ceph:v19, name=priceless_franklin, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 09:41:54 compute-0 podman[94205]: 2025-12-06 09:41:54.107948655 +0000 UTC m=+0.215685538 container start dffbd2f038672180d7de3d54a0f92ba106a2496e4306df2ad4ce3d2006836fa5 (image=quay.io/ceph/ceph:v19, name=priceless_franklin, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 09:41:54 compute-0 podman[94205]: 2025-12-06 09:41:54.111731985 +0000 UTC m=+0.219468878 container attach dffbd2f038672180d7de3d54a0f92ba106a2496e4306df2ad4ce3d2006836fa5 (image=quay.io/ceph/ceph:v19, name=priceless_franklin, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, ceph=True)
Dec 06 09:41:54 compute-0 podman[94288]: 2025-12-06 09:41:54.34410557 +0000 UTC m=+0.060390770 container create cd7b3967b1bb2f029aa8c00ef30e195138373f6bcb66e3b5e086c9bb835b3595 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-rgw-rgw-compute-0-zktslo, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1)
Dec 06 09:41:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d18200210550dac29a943ea61688b85dcf2ec2ac002e352616922dae8472389d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 09:41:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d18200210550dac29a943ea61688b85dcf2ec2ac002e352616922dae8472389d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 09:41:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d18200210550dac29a943ea61688b85dcf2ec2ac002e352616922dae8472389d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 09:41:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d18200210550dac29a943ea61688b85dcf2ec2ac002e352616922dae8472389d/merged/var/lib/ceph/radosgw/ceph-rgw.rgw.compute-0.zktslo supports timestamps until 2038 (0x7fffffff)
Dec 06 09:41:54 compute-0 podman[94288]: 2025-12-06 09:41:54.311408276 +0000 UTC m=+0.027693476 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 06 09:41:54 compute-0 podman[94288]: 2025-12-06 09:41:54.41718979 +0000 UTC m=+0.133474950 container init cd7b3967b1bb2f029aa8c00ef30e195138373f6bcb66e3b5e086c9bb835b3595 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-rgw-rgw-compute-0-zktslo, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Dec 06 09:41:54 compute-0 podman[94288]: 2025-12-06 09:41:54.430035366 +0000 UTC m=+0.146320526 container start cd7b3967b1bb2f029aa8c00ef30e195138373f6bcb66e3b5e086c9bb835b3595 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-rgw-rgw-compute-0-zktslo, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Dec 06 09:41:54 compute-0 bash[94288]: cd7b3967b1bb2f029aa8c00ef30e195138373f6bcb66e3b5e086c9bb835b3595
Dec 06 09:41:54 compute-0 systemd[1]: Started Ceph rgw.rgw.compute-0.zktslo for 5ecd3f74-dade-5fc4-92ce-8950ae424258.
Dec 06 09:41:54 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd get-require-min-compat-client"} v 0)
Dec 06 09:41:54 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/702722184' entity='client.admin' cmd=[{"prefix": "osd get-require-min-compat-client"}]: dispatch
Dec 06 09:41:54 compute-0 priceless_franklin[94222]: mimic
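The "mimic" printed by priceless_franklin is the output of the osd get-require-min-compat-client query that the Ansible task launched above (the podman run ... --entrypoint ceph ... osd get-require-min-compat-client invocation). A sketch of the same check run directly against the cluster, assuming the ceph CLI and the client.admin keyring are available on the host:

    import subprocess

    # Same query the playbook ran inside a container, issued directly here.
    out = subprocess.run(
        ["ceph", "osd", "get-require-min-compat-client"],
        capture_output=True, text=True, check=True,
    ).stdout.strip()
    print("require-min-compat-client:", out)   # -> "mimic" on this cluster

Raising the floor later, for example before enabling upmap-based balancing, would use ceph osd set-require-min-compat-client <release>.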
Dec 06 09:41:54 compute-0 sudo[93995]: pam_unix(sudo:session): session closed for user root
Dec 06 09:41:54 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 06 09:41:54 compute-0 systemd[1]: libpod-dffbd2f038672180d7de3d54a0f92ba106a2496e4306df2ad4ce3d2006836fa5.scope: Deactivated successfully.
Dec 06 09:41:54 compute-0 radosgw[94308]: deferred set uid:gid to 167:167 (ceph:ceph)
Dec 06 09:41:54 compute-0 radosgw[94308]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process radosgw, pid 2
Dec 06 09:41:54 compute-0 radosgw[94308]: framework: beast
Dec 06 09:41:54 compute-0 radosgw[94308]: framework conf key: endpoint, val: 192.168.122.100:8082
Dec 06 09:41:54 compute-0 radosgw[94308]: init_numa not setting numa affinity
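At this point radosgw pid 94308 is up inside the ceph-5ecd3f74-...-rgw-rgw-compute-0-zktslo container with the beast frontend bound to 192.168.122.100:8082, matching the rgw_frontends value recorded in the config dump. A quick reachability sketch, assuming the endpoint is routable from wherever the check runs (an anonymous GET to the root returns an S3-style XML body):

    import urllib.request

    # Endpoint taken from the rgw_frontends value logged above.
    with urllib.request.urlopen("http://192.168.122.100:8082/", timeout=5) as resp:
        print(resp.status)
        print(resp.read(300).decode(errors="replace"))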
Dec 06 09:41:54 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:41:54 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 06 09:41:54 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:41:54 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0)
Dec 06 09:41:54 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:41:54 compute-0 ceph-mgr[74618]: [progress INFO root] complete: finished ev b1a841d8-e71a-43d3-ad28-5a44e75485bf (Updating rgw.rgw deployment (+3 -> 3))
Dec 06 09:41:54 compute-0 ceph-mgr[74618]: [progress INFO root] Completed event b1a841d8-e71a-43d3-ad28-5a44e75485bf (Updating rgw.rgw deployment (+3 -> 3)) in 6 seconds
Dec 06 09:41:54 compute-0 ceph-mgr[74618]: [cephadm INFO cephadm.services.cephadmservice] Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Dec 06 09:41:54 compute-0 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Dec 06 09:41:54 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0)
Dec 06 09:41:54 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:41:54 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0)
Dec 06 09:41:54 compute-0 podman[94330]: 2025-12-06 09:41:54.560319144 +0000 UTC m=+0.036012760 container died dffbd2f038672180d7de3d54a0f92ba106a2496e4306df2ad4ce3d2006836fa5 (image=quay.io/ceph/ceph:v19, name=priceless_franklin, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec 06 09:41:54 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:41:54 compute-0 ceph-mgr[74618]: [progress INFO root] update: starting ev 0e6a1a60-47ae-48b4-a96e-88b7fa58d89d (Updating mds.cephfs deployment (+3 -> 3))
Dec 06 09:41:54 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-2.czucwy", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0)
Dec 06 09:41:54 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-2.czucwy", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Dec 06 09:41:54 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-2.czucwy", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Dec 06 09:41:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-39486e697181db7e753b1bcfd97b5d4957667be1f4386fcd828e84ba9f1d4042-merged.mount: Deactivated successfully.
Dec 06 09:41:54 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 06 09:41:54 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 09:41:54 compute-0 ceph-mgr[74618]: [cephadm INFO cephadm.serve] Deploying daemon mds.cephfs.compute-2.czucwy on compute-2
Dec 06 09:41:54 compute-0 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Deploying daemon mds.cephfs.compute-2.czucwy on compute-2
Dec 06 09:41:54 compute-0 podman[94330]: 2025-12-06 09:41:54.603877521 +0000 UTC m=+0.079571137 container remove dffbd2f038672180d7de3d54a0f92ba106a2496e4306df2ad4ce3d2006836fa5 (image=quay.io/ceph/ceph:v19, name=priceless_franklin, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 09:41:54 compute-0 systemd[1]: libpod-conmon-dffbd2f038672180d7de3d54a0f92ba106a2496e4306df2ad4ce3d2006836fa5.scope: Deactivated successfully.
Dec 06 09:41:54 compute-0 sudo[94163]: pam_unix(sudo:session): session closed for user root
Dec 06 09:41:54 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e45 do_prune osdmap full prune enabled
Dec 06 09:41:54 compute-0 ceph-mon[74327]: pgmap v21: 133 pgs: 1 unknown, 132 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 06 09:41:54 compute-0 ceph-mon[74327]: osdmap e45: 3 total, 3 up, 3 in
Dec 06 09:41:54 compute-0 ceph-mon[74327]: from='client.? 192.168.122.101:0/4120731466' entity='client.rgw.rgw.compute-1.oqhsdh' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Dec 06 09:41:54 compute-0 ceph-mon[74327]: from='client.? ' entity='client.rgw.rgw.compute-1.oqhsdh' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Dec 06 09:41:54 compute-0 ceph-mon[74327]: from='client.? ' entity='client.rgw.rgw.compute-2.qizhkr' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Dec 06 09:41:54 compute-0 ceph-mon[74327]: from='client.? 192.168.122.102:0/827372016' entity='client.rgw.rgw.compute-2.qizhkr' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Dec 06 09:41:54 compute-0 ceph-mon[74327]: from='client.? 192.168.122.100:0/702722184' entity='client.admin' cmd=[{"prefix": "osd get-require-min-compat-client"}]: dispatch
Dec 06 09:41:54 compute-0 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:41:54 compute-0 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:41:54 compute-0 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:41:54 compute-0 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:41:54 compute-0 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:41:54 compute-0 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-2.czucwy", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Dec 06 09:41:54 compute-0 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-2.czucwy", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Dec 06 09:41:54 compute-0 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 09:41:54 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.oqhsdh' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Dec 06 09:41:54 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.qizhkr' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Dec 06 09:41:54 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e46 e46: 3 total, 3 up, 3 in
Dec 06 09:41:54 compute-0 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e46: 3 total, 3 up, 3 in
Dec 06 09:41:54 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 46 pg[10.0( empty local-lis/les=45/46 n=0 ec=45/45 lis/c=0/0 les/c/f=0/0/0 sis=45) [1] r=0 lpr=45 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 09:41:54 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v24: 134 pgs: 134 active+clean; 450 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 7.0 KiB/s rd, 2.0 KiB/s wr, 11 op/s
Dec 06 09:41:55 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e46 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 09:41:55 compute-0 sudo[94942]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ivaqnakmsrtiaxmbmzcqvjsuroruuyxr ; /usr/bin/python3'
Dec 06 09:41:55 compute-0 sudo[94942]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:41:55 compute-0 python3[94944]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 5ecd3f74-dade-5fc4-92ce-8950ae424258 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   versions -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 09:41:55 compute-0 podman[94945]: 2025-12-06 09:41:55.694837984 +0000 UTC m=+0.051202339 container create 7ddff261b96667bfbd29c48acc0764ef3ab3a4aaa8ae1417c1038d06ed7b32d5 (image=quay.io/ceph/ceph:v19, name=agitated_mirzakhani, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 09:41:55 compute-0 systemd[1]: Started libpod-conmon-7ddff261b96667bfbd29c48acc0764ef3ab3a4aaa8ae1417c1038d06ed7b32d5.scope.
Dec 06 09:41:55 compute-0 systemd[1]: Started libcrun container.
Dec 06 09:41:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5c35ce07fa37af7d2f4a0cc093a2ee278c7fcf237bead2739d303d3cdce3e15/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 09:41:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5c35ce07fa37af7d2f4a0cc093a2ee278c7fcf237bead2739d303d3cdce3e15/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 09:41:55 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e46 do_prune osdmap full prune enabled
Dec 06 09:41:55 compute-0 podman[94945]: 2025-12-06 09:41:55.676049901 +0000 UTC m=+0.032414266 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 06 09:41:55 compute-0 podman[94945]: 2025-12-06 09:41:55.778514999 +0000 UTC m=+0.134879374 container init 7ddff261b96667bfbd29c48acc0764ef3ab3a4aaa8ae1417c1038d06ed7b32d5 (image=quay.io/ceph/ceph:v19, name=agitated_mirzakhani, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid)
Dec 06 09:41:55 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e47 e47: 3 total, 3 up, 3 in
Dec 06 09:41:55 compute-0 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e47: 3 total, 3 up, 3 in
Dec 06 09:41:55 compute-0 podman[94945]: 2025-12-06 09:41:55.78739103 +0000 UTC m=+0.143755385 container start 7ddff261b96667bfbd29c48acc0764ef3ab3a4aaa8ae1417c1038d06ed7b32d5 (image=quay.io/ceph/ceph:v19, name=agitated_mirzakhani, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Dec 06 09:41:55 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"} v 0)
Dec 06 09:41:55 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.qizhkr' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Dec 06 09:41:55 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"} v 0)
Dec 06 09:41:55 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1940551259' entity='client.rgw.rgw.compute-0.zktslo' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Dec 06 09:41:55 compute-0 podman[94945]: 2025-12-06 09:41:55.791410757 +0000 UTC m=+0.147775132 container attach 7ddff261b96667bfbd29c48acc0764ef3ab3a4aaa8ae1417c1038d06ed7b32d5 (image=quay.io/ceph/ceph:v19, name=agitated_mirzakhani, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec 06 09:41:55 compute-0 ceph-mon[74327]: Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Dec 06 09:41:55 compute-0 ceph-mon[74327]: Deploying daemon mds.cephfs.compute-2.czucwy on compute-2
Dec 06 09:41:55 compute-0 ceph-mon[74327]: from='client.? ' entity='client.rgw.rgw.compute-1.oqhsdh' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Dec 06 09:41:55 compute-0 ceph-mon[74327]: from='client.? ' entity='client.rgw.rgw.compute-2.qizhkr' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Dec 06 09:41:55 compute-0 ceph-mon[74327]: osdmap e46: 3 total, 3 up, 3 in
Dec 06 09:41:55 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"} v 0)
Dec 06 09:41:55 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.oqhsdh' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Dec 06 09:41:56 compute-0 agitated_mirzakhani[94960]: 
Dec 06 09:41:56 compute-0 agitated_mirzakhani[94960]: {"mon":{"ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable)":3},"mgr":{"ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable)":3},"osd":{"ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable)":3},"overall":{"ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable)":9}}
Dec 06 09:41:56 compute-0 systemd[1]: libpod-7ddff261b96667bfbd29c48acc0764ef3ab3a4aaa8ae1417c1038d06ed7b32d5.scope: Deactivated successfully.
Dec 06 09:41:56 compute-0 podman[94945]: 2025-12-06 09:41:56.251892732 +0000 UTC m=+0.608257087 container died 7ddff261b96667bfbd29c48acc0764ef3ab3a4aaa8ae1417c1038d06ed7b32d5 (image=quay.io/ceph/ceph:v19, name=agitated_mirzakhani, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec 06 09:41:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-b5c35ce07fa37af7d2f4a0cc093a2ee278c7fcf237bead2739d303d3cdce3e15-merged.mount: Deactivated successfully.
Dec 06 09:41:56 compute-0 podman[94945]: 2025-12-06 09:41:56.296010137 +0000 UTC m=+0.652374522 container remove 7ddff261b96667bfbd29c48acc0764ef3ab3a4aaa8ae1417c1038d06ed7b32d5 (image=quay.io/ceph/ceph:v19, name=agitated_mirzakhani, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 09:41:56 compute-0 systemd[1]: libpod-conmon-7ddff261b96667bfbd29c48acc0764ef3ab3a4aaa8ae1417c1038d06ed7b32d5.scope: Deactivated successfully.
Dec 06 09:41:56 compute-0 sudo[94942]: pam_unix(sudo:session): session closed for user root
Dec 06 09:41:56 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec 06 09:41:56 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:41:56 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec 06 09:41:56 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:41:56 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Dec 06 09:41:56 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:41:56 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.ujokui", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0)
Dec 06 09:41:56 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.ujokui", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Dec 06 09:41:56 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.ujokui", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Dec 06 09:41:56 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 06 09:41:56 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 09:41:56 compute-0 ceph-mgr[74618]: [cephadm INFO cephadm.serve] Deploying daemon mds.cephfs.compute-0.ujokui on compute-0
Dec 06 09:41:56 compute-0 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Deploying daemon mds.cephfs.compute-0.ujokui on compute-0
Dec 06 09:41:56 compute-0 sudo[94996]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 09:41:56 compute-0 sudo[94996]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:41:56 compute-0 sudo[94996]: pam_unix(sudo:session): session closed for user root
Dec 06 09:41:56 compute-0 sudo[95021]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 _orch deploy --fsid 5ecd3f74-dade-5fc4-92ce-8950ae424258
Dec 06 09:41:56 compute-0 sudo[95021]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:41:56 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e47 do_prune osdmap full prune enabled
Dec 06 09:41:56 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.qizhkr' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Dec 06 09:41:56 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1940551259' entity='client.rgw.rgw.compute-0.zktslo' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Dec 06 09:41:56 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.oqhsdh' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Dec 06 09:41:56 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e48 e48: 3 total, 3 up, 3 in
Dec 06 09:41:56 compute-0 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e48: 3 total, 3 up, 3 in
Dec 06 09:41:56 compute-0 ceph-mon[74327]: pgmap v24: 134 pgs: 134 active+clean; 450 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 7.0 KiB/s rd, 2.0 KiB/s wr, 11 op/s
Dec 06 09:41:56 compute-0 ceph-mon[74327]: osdmap e47: 3 total, 3 up, 3 in
Dec 06 09:41:56 compute-0 ceph-mon[74327]: from='client.? ' entity='client.rgw.rgw.compute-2.qizhkr' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Dec 06 09:41:56 compute-0 ceph-mon[74327]: from='client.? 192.168.122.100:0/1940551259' entity='client.rgw.rgw.compute-0.zktslo' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Dec 06 09:41:56 compute-0 ceph-mon[74327]: from='client.? 192.168.122.101:0/4120731466' entity='client.rgw.rgw.compute-1.oqhsdh' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Dec 06 09:41:56 compute-0 ceph-mon[74327]: from='client.? 192.168.122.102:0/827372016' entity='client.rgw.rgw.compute-2.qizhkr' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Dec 06 09:41:56 compute-0 ceph-mon[74327]: from='client.? ' entity='client.rgw.rgw.compute-1.oqhsdh' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Dec 06 09:41:56 compute-0 ceph-mon[74327]: from='client.? 192.168.122.100:0/607080093' entity='client.admin' cmd=[{"prefix": "versions", "format": "json"}]: dispatch
Dec 06 09:41:56 compute-0 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:41:56 compute-0 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:41:56 compute-0 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:41:56 compute-0 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.ujokui", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Dec 06 09:41:56 compute-0 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.ujokui", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Dec 06 09:41:56 compute-0 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 09:41:56 compute-0 ceph-mon[74327]: Deploying daemon mds.cephfs.compute-0.ujokui on compute-0
Dec 06 09:41:56 compute-0 ceph-mon[74327]: from='client.? ' entity='client.rgw.rgw.compute-2.qizhkr' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Dec 06 09:41:56 compute-0 ceph-mon[74327]: from='client.? 192.168.122.100:0/1940551259' entity='client.rgw.rgw.compute-0.zktslo' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Dec 06 09:41:56 compute-0 ceph-mon[74327]: from='client.? ' entity='client.rgw.rgw.compute-1.oqhsdh' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Dec 06 09:41:56 compute-0 ceph-mon[74327]: osdmap e48: 3 total, 3 up, 3 in
Dec 06 09:41:56 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).mds e3 new map
Dec 06 09:41:56 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).mds e3 print_map
                                           e3
                                           btime 2025-12-06T09:41:56.804272+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        2
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2025-12-06T09:41:29.967778+0000
                                           modified        2025-12-06T09:41:29.967778+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           max_mds        1
                                           in        
                                           up        {}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        0
                                           qdb_cluster        leader: 0 members: 
                                            
                                            
                                           Standby daemons:
                                            
                                           [mds.cephfs.compute-2.czucwy{-1:24274} state up:standby seq 1 addr [v2:192.168.122.102:6804/1500676117,v1:192.168.122.102:6805/1500676117] compat {c=[1],r=[1],i=[1fff]}]
Dec 06 09:41:56 compute-0 ceph-mon[74327]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.102:6804/1500676117,v1:192.168.122.102:6805/1500676117] up:boot
Dec 06 09:41:56 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).mds e3 assigned standby [v2:192.168.122.102:6804/1500676117,v1:192.168.122.102:6805/1500676117] as mds.0
Dec 06 09:41:56 compute-0 ceph-mon[74327]: log_channel(cluster) log [INF] : daemon mds.cephfs.compute-2.czucwy assigned to filesystem cephfs as rank 0 (now has 1 ranks)
Dec 06 09:41:56 compute-0 ceph-mon[74327]: log_channel(cluster) log [INF] : Health check cleared: MDS_ALL_DOWN (was: 1 filesystem is offline)
Dec 06 09:41:56 compute-0 ceph-mon[74327]: log_channel(cluster) log [INF] : Health check cleared: MDS_UP_LESS_THAN_MAX (was: 1 filesystem is online with fewer MDS than max_mds)
Dec 06 09:41:56 compute-0 ceph-mon[74327]: log_channel(cluster) log [INF] : Cluster is now healthy
Dec 06 09:41:56 compute-0 ceph-mon[74327]: log_channel(cluster) log [DBG] : fsmap cephfs:0 1 up:standby
Dec 06 09:41:56 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds metadata", "who": "cephfs.compute-2.czucwy"} v 0)
Dec 06 09:41:56 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-2.czucwy"}]: dispatch
Dec 06 09:41:56 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).mds e3 all = 0
Dec 06 09:41:56 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).mds e4 new map
Dec 06 09:41:56 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).mds e4 print_map
                                           e4
                                           btime 2025-12-06T09:41:56.835698+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        4
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2025-12-06T09:41:29.967778+0000
                                           modified        2025-12-06T09:41:56.835690+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           max_mds        1
                                           in        0
                                           up        {0=24274}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        0
                                           qdb_cluster        leader: 0 members: 
                                           [mds.cephfs.compute-2.czucwy{0:24274} state up:creating seq 1 addr [v2:192.168.122.102:6804/1500676117,v1:192.168.122.102:6805/1500676117] compat {c=[1],r=[1],i=[1fff]}]
                                            
                                            
Dec 06 09:41:56 compute-0 ceph-mon[74327]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.czucwy=up:creating}
Dec 06 09:41:56 compute-0 ceph-mon[74327]: log_channel(cluster) log [INF] : daemon mds.cephfs.compute-2.czucwy is now active in filesystem cephfs as rank 0
Dec 06 09:41:56 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v27: 135 pgs: 1 unknown, 134 active+clean; 450 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 7.0 KiB/s rd, 2.0 KiB/s wr, 11 op/s
Dec 06 09:41:57 compute-0 podman[95091]: 2025-12-06 09:41:57.043710291 +0000 UTC m=+0.043034822 container create c50990d7ee56093081df38c11a451cd30fd179669b3d77cbe1a0e8ba7eee57de (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_bartik, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 09:41:57 compute-0 systemd[1]: Started libpod-conmon-c50990d7ee56093081df38c11a451cd30fd179669b3d77cbe1a0e8ba7eee57de.scope.
Dec 06 09:41:57 compute-0 systemd[1]: Started libcrun container.
Dec 06 09:41:57 compute-0 podman[95091]: 2025-12-06 09:41:57.026151345 +0000 UTC m=+0.025475916 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 06 09:41:57 compute-0 podman[95091]: 2025-12-06 09:41:57.136317028 +0000 UTC m=+0.135641609 container init c50990d7ee56093081df38c11a451cd30fd179669b3d77cbe1a0e8ba7eee57de (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_bartik, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Dec 06 09:41:57 compute-0 podman[95091]: 2025-12-06 09:41:57.14809963 +0000 UTC m=+0.147424181 container start c50990d7ee56093081df38c11a451cd30fd179669b3d77cbe1a0e8ba7eee57de (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_bartik, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 09:41:57 compute-0 podman[95091]: 2025-12-06 09:41:57.151630151 +0000 UTC m=+0.150954722 container attach c50990d7ee56093081df38c11a451cd30fd179669b3d77cbe1a0e8ba7eee57de (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_bartik, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Dec 06 09:41:57 compute-0 xenodochial_bartik[95109]: 167 167
Dec 06 09:41:57 compute-0 systemd[1]: libpod-c50990d7ee56093081df38c11a451cd30fd179669b3d77cbe1a0e8ba7eee57de.scope: Deactivated successfully.
Dec 06 09:41:57 compute-0 podman[95091]: 2025-12-06 09:41:57.155566296 +0000 UTC m=+0.154890867 container died c50990d7ee56093081df38c11a451cd30fd179669b3d77cbe1a0e8ba7eee57de (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_bartik, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 09:41:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-42e985aa3c60d1b258a21f70173d5e202c9e7822cd17a389b3f18401030939c9-merged.mount: Deactivated successfully.
Dec 06 09:41:57 compute-0 podman[95091]: 2025-12-06 09:41:57.204262885 +0000 UTC m=+0.203587426 container remove c50990d7ee56093081df38c11a451cd30fd179669b3d77cbe1a0e8ba7eee57de (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_bartik, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS)
Dec 06 09:41:57 compute-0 systemd[1]: libpod-conmon-c50990d7ee56093081df38c11a451cd30fd179669b3d77cbe1a0e8ba7eee57de.scope: Deactivated successfully.
Dec 06 09:41:57 compute-0 systemd[1]: Reloading.
Dec 06 09:41:57 compute-0 systemd-rc-local-generator[95152]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 09:41:57 compute-0 systemd-sysv-generator[95156]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 09:41:57 compute-0 systemd[1]: Reloading.
Dec 06 09:41:57 compute-0 systemd-rc-local-generator[95195]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 09:41:57 compute-0 systemd-sysv-generator[95199]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 09:41:57 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e48 do_prune osdmap full prune enabled
Dec 06 09:41:57 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e49 e49: 3 total, 3 up, 3 in
Dec 06 09:41:57 compute-0 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e49: 3 total, 3 up, 3 in
Dec 06 09:41:57 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"} v 0)
Dec 06 09:41:57 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.qizhkr' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Dec 06 09:41:57 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"} v 0)
Dec 06 09:41:57 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.oqhsdh' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Dec 06 09:41:57 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"} v 0)
Dec 06 09:41:57 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1940551259' entity='client.rgw.rgw.compute-0.zktslo' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Dec 06 09:41:57 compute-0 ceph-mon[74327]: mds.? [v2:192.168.122.102:6804/1500676117,v1:192.168.122.102:6805/1500676117] up:boot
Dec 06 09:41:57 compute-0 ceph-mon[74327]: daemon mds.cephfs.compute-2.czucwy assigned to filesystem cephfs as rank 0 (now has 1 ranks)
Dec 06 09:41:57 compute-0 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 06 09:41:57 compute-0 ceph-mon[74327]: Health check cleared: MDS_ALL_DOWN (was: 1 filesystem is offline)
Dec 06 09:41:57 compute-0 ceph-mon[74327]: Health check cleared: MDS_UP_LESS_THAN_MAX (was: 1 filesystem is online with fewer MDS than max_mds)
Dec 06 09:41:57 compute-0 ceph-mon[74327]: Cluster is now healthy
Dec 06 09:41:57 compute-0 ceph-mon[74327]: fsmap cephfs:0 1 up:standby
Dec 06 09:41:57 compute-0 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-2.czucwy"}]: dispatch
Dec 06 09:41:57 compute-0 ceph-mon[74327]: fsmap cephfs:1 {0=cephfs.compute-2.czucwy=up:creating}
Dec 06 09:41:57 compute-0 ceph-mon[74327]: daemon mds.cephfs.compute-2.czucwy is now active in filesystem cephfs as rank 0
Dec 06 09:41:57 compute-0 ceph-mon[74327]: pgmap v27: 135 pgs: 1 unknown, 134 active+clean; 450 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 7.0 KiB/s rd, 2.0 KiB/s wr, 11 op/s
Dec 06 09:41:57 compute-0 ceph-mon[74327]: osdmap e49: 3 total, 3 up, 3 in
Dec 06 09:41:57 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).mds e5 new map
Dec 06 09:41:57 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).mds e5 print_map
                                           e5
                                           btime 2025-12-06T09:41:57.856282+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        5
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2025-12-06T09:41:29.967778+0000
                                           modified        2025-12-06T09:41:57.856277+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           max_mds        1
                                           in        0
                                           up        {0=24274}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        0
                                           qdb_cluster        leader: 24274 members: 24274
                                           [mds.cephfs.compute-2.czucwy{0:24274} state up:active seq 2 addr [v2:192.168.122.102:6804/1500676117,v1:192.168.122.102:6805/1500676117] compat {c=[1],r=[1],i=[1fff]}]
                                            
                                            
Dec 06 09:41:57 compute-0 ceph-mon[74327]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.102:6804/1500676117,v1:192.168.122.102:6805/1500676117] up:active
Dec 06 09:41:57 compute-0 ceph-mon[74327]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.czucwy=up:active}
Dec 06 09:41:57 compute-0 systemd[1]: Starting Ceph mds.cephfs.compute-0.ujokui for 5ecd3f74-dade-5fc4-92ce-8950ae424258...
Dec 06 09:41:57 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 49 pg[12.0( empty local-lis/les=0/0 n=0 ec=49/49 lis/c=0/0 les/c/f=0/0/0 sis=49) [1] r=0 lpr=49 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 09:41:58 compute-0 podman[95252]: 2025-12-06 09:41:58.286215024 +0000 UTC m=+0.057058174 container create 015a304559ae181e1b0642f0ff1f7e69af56fbc7a58f131509cd368a144f8717 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mds-cephfs-compute-0-ujokui, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 09:41:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20aaacd9fbc52ba455391e1c964509042df65092e3cbfe559b2a43f528052014/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 09:41:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20aaacd9fbc52ba455391e1c964509042df65092e3cbfe559b2a43f528052014/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 09:41:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20aaacd9fbc52ba455391e1c964509042df65092e3cbfe559b2a43f528052014/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 09:41:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20aaacd9fbc52ba455391e1c964509042df65092e3cbfe559b2a43f528052014/merged/var/lib/ceph/mds/ceph-cephfs.compute-0.ujokui supports timestamps until 2038 (0x7fffffff)
Dec 06 09:41:58 compute-0 podman[95252]: 2025-12-06 09:41:58.263140355 +0000 UTC m=+0.033983505 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 06 09:41:58 compute-0 podman[95252]: 2025-12-06 09:41:58.36140247 +0000 UTC m=+0.132245621 container init 015a304559ae181e1b0642f0ff1f7e69af56fbc7a58f131509cd368a144f8717 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mds-cephfs-compute-0-ujokui, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 09:41:58 compute-0 podman[95252]: 2025-12-06 09:41:58.371391727 +0000 UTC m=+0.142234877 container start 015a304559ae181e1b0642f0ff1f7e69af56fbc7a58f131509cd368a144f8717 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mds-cephfs-compute-0-ujokui, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=squid, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Dec 06 09:41:58 compute-0 bash[95252]: 015a304559ae181e1b0642f0ff1f7e69af56fbc7a58f131509cd368a144f8717
Dec 06 09:41:58 compute-0 systemd[1]: Started Ceph mds.cephfs.compute-0.ujokui for 5ecd3f74-dade-5fc4-92ce-8950ae424258.
Dec 06 09:41:58 compute-0 ceph-mds[95272]: set uid:gid to 167:167 (ceph:ceph)
Dec 06 09:41:58 compute-0 ceph-mds[95272]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mds, pid 2
Dec 06 09:41:58 compute-0 ceph-mds[95272]: main not setting numa affinity
Dec 06 09:41:58 compute-0 ceph-mds[95272]: pidfile_write: ignore empty --pid-file
Dec 06 09:41:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mds-cephfs-compute-0-ujokui[95268]: starting mds.cephfs.compute-0.ujokui at 
Dec 06 09:41:58 compute-0 sudo[95021]: pam_unix(sudo:session): session closed for user root
Dec 06 09:41:58 compute-0 ceph-mds[95272]: mds.cephfs.compute-0.ujokui Updating MDS map to version 5 from mon.0
Dec 06 09:41:58 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 06 09:41:58 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:41:58 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 06 09:41:58 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:41:58 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Dec 06 09:41:58 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:41:58 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-1.fpvjgb", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0)
Dec 06 09:41:58 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-1.fpvjgb", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Dec 06 09:41:58 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-1.fpvjgb", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Dec 06 09:41:58 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 06 09:41:58 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 09:41:58 compute-0 ceph-mgr[74618]: [cephadm INFO cephadm.serve] Deploying daemon mds.cephfs.compute-1.fpvjgb on compute-1
Dec 06 09:41:58 compute-0 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Deploying daemon mds.cephfs.compute-1.fpvjgb on compute-1
Dec 06 09:41:58 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e49 do_prune osdmap full prune enabled
Dec 06 09:41:58 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.qizhkr' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Dec 06 09:41:58 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.oqhsdh' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Dec 06 09:41:58 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1940551259' entity='client.rgw.rgw.compute-0.zktslo' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Dec 06 09:41:58 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e50 e50: 3 total, 3 up, 3 in
Dec 06 09:41:58 compute-0 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e50: 3 total, 3 up, 3 in
Dec 06 09:41:58 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"} v 0)
Dec 06 09:41:58 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1940551259' entity='client.rgw.rgw.compute-0.zktslo' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Dec 06 09:41:58 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"} v 0)
Dec 06 09:41:58 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.qizhkr' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Dec 06 09:41:58 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 50 pg[12.0( empty local-lis/les=49/50 n=0 ec=49/49 lis/c=0/0 les/c/f=0/0/0 sis=49) [1] r=0 lpr=49 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 09:41:58 compute-0 ceph-mon[74327]: from='client.? 192.168.122.101:0/4120731466' entity='client.rgw.rgw.compute-1.oqhsdh' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Dec 06 09:41:58 compute-0 ceph-mon[74327]: from='client.? ' entity='client.rgw.rgw.compute-2.qizhkr' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Dec 06 09:41:58 compute-0 ceph-mon[74327]: from='client.? ' entity='client.rgw.rgw.compute-1.oqhsdh' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Dec 06 09:41:58 compute-0 ceph-mon[74327]: from='client.? 192.168.122.100:0/1940551259' entity='client.rgw.rgw.compute-0.zktslo' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Dec 06 09:41:58 compute-0 ceph-mon[74327]: from='client.? 192.168.122.102:0/827372016' entity='client.rgw.rgw.compute-2.qizhkr' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Dec 06 09:41:58 compute-0 ceph-mon[74327]: mds.? [v2:192.168.122.102:6804/1500676117,v1:192.168.122.102:6805/1500676117] up:active
Dec 06 09:41:58 compute-0 ceph-mon[74327]: fsmap cephfs:1 {0=cephfs.compute-2.czucwy=up:active}
Dec 06 09:41:58 compute-0 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:41:58 compute-0 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:41:58 compute-0 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:41:58 compute-0 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-1.fpvjgb", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Dec 06 09:41:58 compute-0 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-1.fpvjgb", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Dec 06 09:41:58 compute-0 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 09:41:58 compute-0 ceph-mon[74327]: Deploying daemon mds.cephfs.compute-1.fpvjgb on compute-1
Dec 06 09:41:58 compute-0 ceph-mon[74327]: from='client.? ' entity='client.rgw.rgw.compute-2.qizhkr' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Dec 06 09:41:58 compute-0 ceph-mon[74327]: from='client.? ' entity='client.rgw.rgw.compute-1.oqhsdh' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Dec 06 09:41:58 compute-0 ceph-mon[74327]: from='client.? 192.168.122.100:0/1940551259' entity='client.rgw.rgw.compute-0.zktslo' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Dec 06 09:41:58 compute-0 ceph-mon[74327]: osdmap e50: 3 total, 3 up, 3 in
Dec 06 09:41:58 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"} v 0)
Dec 06 09:41:58 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.oqhsdh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Dec 06 09:41:58 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).mds e6 new map
Dec 06 09:41:58 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).mds e6 print_map
                                           e6
                                           btime 2025-12-06T09:41:58.872230+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        5
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2025-12-06T09:41:29.967778+0000
                                           modified        2025-12-06T09:41:57.856277+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           max_mds        1
                                           in        0
                                           up        {0=24274}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        0
                                           qdb_cluster        leader: 24274 members: 24274
                                           [mds.cephfs.compute-2.czucwy{0:24274} state up:active seq 2 addr [v2:192.168.122.102:6804/1500676117,v1:192.168.122.102:6805/1500676117] compat {c=[1],r=[1],i=[1fff]}]
                                            
                                            
                                           Standby daemons:
                                            
                                           [mds.cephfs.compute-0.ujokui{-1:14544} state up:standby seq 1 addr [v2:192.168.122.100:6806/2465826838,v1:192.168.122.100:6807/2465826838] compat {c=[1],r=[1],i=[1fff]}]
Dec 06 09:41:58 compute-0 ceph-mds[95272]: mds.cephfs.compute-0.ujokui Updating MDS map to version 6 from mon.0
Dec 06 09:41:58 compute-0 ceph-mds[95272]: mds.cephfs.compute-0.ujokui Monitors have assigned me to become a standby
Dec 06 09:41:58 compute-0 ceph-mon[74327]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.100:6806/2465826838,v1:192.168.122.100:6807/2465826838] up:boot
Dec 06 09:41:58 compute-0 ceph-mon[74327]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.czucwy=up:active} 1 up:standby
Dec 06 09:41:58 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds metadata", "who": "cephfs.compute-0.ujokui"} v 0)
Dec 06 09:41:58 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-0.ujokui"}]: dispatch
Dec 06 09:41:58 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).mds e6 all = 0
Dec 06 09:41:58 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).mds e7 new map
Dec 06 09:41:58 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).mds e7 print_map
                                           e7
                                           btime 2025-12-06T09:41:58.889029+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        5
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2025-12-06T09:41:29.967778+0000
                                           modified        2025-12-06T09:41:57.856277+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           max_mds        1
                                           in        0
                                           up        {0=24274}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        1
                                           qdb_cluster        leader: 24274 members: 24274
                                           [mds.cephfs.compute-2.czucwy{0:24274} state up:active seq 2 addr [v2:192.168.122.102:6804/1500676117,v1:192.168.122.102:6805/1500676117] compat {c=[1],r=[1],i=[1fff]}]
                                            
                                            
                                           Standby daemons:
                                            
                                           [mds.cephfs.compute-0.ujokui{-1:14544} state up:standby seq 1 addr [v2:192.168.122.100:6806/2465826838,v1:192.168.122.100:6807/2465826838] compat {c=[1],r=[1],i=[1fff]}]
Dec 06 09:41:58 compute-0 ceph-mon[74327]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.czucwy=up:active} 1 up:standby
Dec 06 09:41:58 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v30: 136 pgs: 2 unknown, 134 active+clean; 450 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 06 09:41:59 compute-0 ceph-mgr[74618]: [progress INFO root] Writing back 12 completed events
Dec 06 09:41:59 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 09:41:59 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 09:41:59 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Dec 06 09:41:59 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 09:41:59 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 09:41:59 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:41:59 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 09:41:59 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 09:41:59 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e50 do_prune osdmap full prune enabled
Dec 06 09:41:59 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1940551259' entity='client.rgw.rgw.compute-0.zktslo' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Dec 06 09:41:59 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.qizhkr' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Dec 06 09:41:59 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.oqhsdh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Dec 06 09:41:59 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e51 e51: 3 total, 3 up, 3 in
Dec 06 09:41:59 compute-0 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e51: 3 total, 3 up, 3 in
Dec 06 09:41:59 compute-0 ceph-mon[74327]: from='client.? 192.168.122.100:0/1940551259' entity='client.rgw.rgw.compute-0.zktslo' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Dec 06 09:41:59 compute-0 ceph-mon[74327]: from='client.? ' entity='client.rgw.rgw.compute-2.qizhkr' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Dec 06 09:41:59 compute-0 ceph-mon[74327]: from='client.? 192.168.122.101:0/4120731466' entity='client.rgw.rgw.compute-1.oqhsdh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Dec 06 09:41:59 compute-0 ceph-mon[74327]: from='client.? 192.168.122.102:0/827372016' entity='client.rgw.rgw.compute-2.qizhkr' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Dec 06 09:41:59 compute-0 ceph-mon[74327]: from='client.? ' entity='client.rgw.rgw.compute-1.oqhsdh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Dec 06 09:41:59 compute-0 ceph-mon[74327]: mds.? [v2:192.168.122.100:6806/2465826838,v1:192.168.122.100:6807/2465826838] up:boot
Dec 06 09:41:59 compute-0 ceph-mon[74327]: fsmap cephfs:1 {0=cephfs.compute-2.czucwy=up:active} 1 up:standby
Dec 06 09:41:59 compute-0 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-0.ujokui"}]: dispatch
Dec 06 09:41:59 compute-0 ceph-mon[74327]: fsmap cephfs:1 {0=cephfs.compute-2.czucwy=up:active} 1 up:standby
Dec 06 09:41:59 compute-0 ceph-mon[74327]: pgmap v30: 136 pgs: 2 unknown, 134 active+clean; 450 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 06 09:41:59 compute-0 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:41:59 compute-0 ceph-mon[74327]: from='client.? 192.168.122.100:0/1940551259' entity='client.rgw.rgw.compute-0.zktslo' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Dec 06 09:41:59 compute-0 ceph-mon[74327]: from='client.? ' entity='client.rgw.rgw.compute-2.qizhkr' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Dec 06 09:41:59 compute-0 ceph-mon[74327]: from='client.? ' entity='client.rgw.rgw.compute-1.oqhsdh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Dec 06 09:41:59 compute-0 ceph-mon[74327]: osdmap e51: 3 total, 3 up, 3 in
Dec 06 09:42:00 compute-0 radosgw[94308]: v1 topic migration: starting v1 topic migration..
Dec 06 09:42:00 compute-0 radosgw[94308]: LDAP not started since no server URIs were provided in the configuration.
Dec 06 09:42:00 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-rgw-rgw-compute-0-zktslo[94304]: 2025-12-06T09:42:00.101+0000 7f551790e980 -1 LDAP not started since no server URIs were provided in the configuration.
Dec 06 09:42:00 compute-0 radosgw[94308]: v1 topic migration: finished v1 topic migration
Dec 06 09:42:00 compute-0 radosgw[94308]: INFO: RGWReshardLock::lock found lock on reshard.0000000000 to be held by another RGW process; skipping for now
Dec 06 09:42:00 compute-0 radosgw[94308]: INFO: RGWReshardLock::lock found lock on reshard.0000000002 to be held by another RGW process; skipping for now
Dec 06 09:42:00 compute-0 radosgw[94308]: framework: beast
Dec 06 09:42:00 compute-0 radosgw[94308]: framework conf key: ssl_certificate, val: config://rgw/cert/$realm/$zone.crt
Dec 06 09:42:00 compute-0 radosgw[94308]: framework conf key: ssl_private_key, val: config://rgw/cert/$realm/$zone.key
Dec 06 09:42:00 compute-0 radosgw[94308]: INFO: RGWReshardLock::lock found lock on reshard.0000000004 to be held by another RGW process; skipping for now
Dec 06 09:42:00 compute-0 radosgw[94308]: starting handler: beast
Dec 06 09:42:00 compute-0 radosgw[94308]: set uid:gid to 167:167 (ceph:ceph)
Dec 06 09:42:00 compute-0 radosgw[94308]: mgrc service_daemon_register rgw.14532 metadata {arch=x86_64,ceph_release=squid,ceph_version=ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable),ceph_version_short=19.2.3,container_hostname=compute-0,container_image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec,cpu=AMD EPYC-Rome Processor,distro=centos,distro_description=CentOS Stream 9,distro_version=9,frontend_config#0=beast endpoint=192.168.122.100:8082,frontend_type#0=beast,hostname=compute-0,id=rgw.compute-0.zktslo,kernel_description=#1 SMP PREEMPT_DYNAMIC Fri Nov 28 14:01:17 UTC 2025,kernel_version=5.14.0-645.el9.x86_64,mem_swap_kb=1048572,mem_total_kb=7864320,num_handles=1,os=Linux,pid=2,realm_id=,realm_name=,zone_id=d81f60a3-cfd4-40b3-a809-ad3aae1b1fd0,zone_name=default,zonegroup_id=75773215-ab74-4afd-a4c0-f777a01e4a1a,zonegroup_name=default}
Dec 06 09:42:00 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e51 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 09:42:00 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec 06 09:42:00 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:42:00 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec 06 09:42:00 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:42:00 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Dec 06 09:42:00 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:42:00 compute-0 ceph-mgr[74618]: [progress INFO root] complete: finished ev 0e6a1a60-47ae-48b4-a96e-88b7fa58d89d (Updating mds.cephfs deployment (+3 -> 3))
Dec 06 09:42:00 compute-0 ceph-mgr[74618]: [progress INFO root] Completed event 0e6a1a60-47ae-48b4-a96e-88b7fa58d89d (Updating mds.cephfs deployment (+3 -> 3)) in 6 seconds
Dec 06 09:42:00 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mds_join_fs}] v 0)
Dec 06 09:42:00 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:42:00 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Dec 06 09:42:00 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:42:00 compute-0 ceph-mgr[74618]: [progress INFO root] update: starting ev 9a6db2cb-2f95-4ec5-a56e-3692847dbc20 (Updating nfs.cephfs deployment (+3 -> 3))
Dec 06 09:42:00 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Dec 06 09:42:00 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:42:00 compute-0 ceph-mgr[74618]: [cephadm INFO cephadm.services.nfs] Creating key for client.nfs.cephfs.0.0.compute-1.djsnbu
Dec 06 09:42:00 compute-0 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Creating key for client.nfs.cephfs.0.0.compute-1.djsnbu
Dec 06 09:42:00 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.djsnbu", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]} v 0)
Dec 06 09:42:00 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.djsnbu", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]: dispatch
Dec 06 09:42:00 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.djsnbu", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]': finished
Dec 06 09:42:00 compute-0 ceph-mgr[74618]: [cephadm INFO root] Ensuring nfs.cephfs.0 is in the ganesha grace table
Dec 06 09:42:00 compute-0 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Ensuring nfs.cephfs.0 is in the ganesha grace table
Dec 06 09:42:00 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]} v 0)
Dec 06 09:42:00 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]: dispatch
Dec 06 09:42:00 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]': finished
Dec 06 09:42:00 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 06 09:42:00 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 09:42:00 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).mds e8 new map
Dec 06 09:42:00 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).mds e8 print_map
                                           e8
                                           btime 2025-12-06T09:42:00.908587+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        8
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2025-12-06T09:41:29.967778+0000
                                           modified        2025-12-06T09:42:00.880325+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           max_mds        1
                                           in        0
                                           up        {0=24274}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        1
                                           qdb_cluster        leader: 24274 members: 24274
                                           [mds.cephfs.compute-2.czucwy{0:24274} state up:active seq 3 join_fscid=1 addr [v2:192.168.122.102:6804/1500676117,v1:192.168.122.102:6805/1500676117] compat {c=[1],r=[1],i=[1fff]}]
                                            
                                            
                                           Standby daemons:
                                            
                                           [mds.cephfs.compute-0.ujokui{-1:14544} state up:standby seq 1 addr [v2:192.168.122.100:6806/2465826838,v1:192.168.122.100:6807/2465826838] compat {c=[1],r=[1],i=[1fff]}]
                                           [mds.cephfs.compute-1.fpvjgb{-1:24215} state up:standby seq 1 addr [v2:192.168.122.101:6804/2619956440,v1:192.168.122.101:6805/2619956440] compat {c=[1],r=[1],i=[1fff]}]
Dec 06 09:42:00 compute-0 ceph-mon[74327]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.101:6804/2619956440,v1:192.168.122.101:6805/2619956440] up:boot
Dec 06 09:42:00 compute-0 ceph-mon[74327]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.102:6804/1500676117,v1:192.168.122.102:6805/1500676117] up:active
Dec 06 09:42:00 compute-0 ceph-mon[74327]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.czucwy=up:active} 2 up:standby
Dec 06 09:42:00 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds metadata", "who": "cephfs.compute-1.fpvjgb"} v 0)
Dec 06 09:42:00 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-1.fpvjgb"}]: dispatch
Dec 06 09:42:00 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).mds e8 all = 0
Dec 06 09:42:00 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v32: 136 pgs: 136 active+clean; 454 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 2.6 KiB/s rd, 7.4 KiB/s wr, 29 op/s
Dec 06 09:42:01 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"} v 0)
Dec 06 09:42:01 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]: dispatch
Dec 06 09:42:01 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]': finished
Dec 06 09:42:01 compute-0 ceph-mgr[74618]: [cephadm INFO cephadm.services.nfs] Rados config object exists: conf-nfs.cephfs
Dec 06 09:42:01 compute-0 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Rados config object exists: conf-nfs.cephfs
Dec 06 09:42:01 compute-0 ceph-mgr[74618]: [cephadm INFO cephadm.services.nfs] Creating key for client.nfs.cephfs.0.0.compute-1.djsnbu-rgw
Dec 06 09:42:01 compute-0 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Creating key for client.nfs.cephfs.0.0.compute-1.djsnbu-rgw
Dec 06 09:42:01 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.djsnbu-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]} v 0)
Dec 06 09:42:01 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.djsnbu-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Dec 06 09:42:01 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.djsnbu-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]': finished
Dec 06 09:42:01 compute-0 ceph-mgr[74618]: [cephadm WARNING cephadm.services.nfs] Bind address in nfs.cephfs.0.0.compute-1.djsnbu's ganesha conf is defaulting to empty
Dec 06 09:42:01 compute-0 ceph-mgr[74618]: log_channel(cephadm) log [WRN] : Bind address in nfs.cephfs.0.0.compute-1.djsnbu's ganesha conf is defaulting to empty
Dec 06 09:42:01 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 06 09:42:01 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 09:42:01 compute-0 ceph-mgr[74618]: [cephadm INFO cephadm.serve] Deploying daemon nfs.cephfs.0.0.compute-1.djsnbu on compute-1
Dec 06 09:42:01 compute-0 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Deploying daemon nfs.cephfs.0.0.compute-1.djsnbu on compute-1
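Once cephadm reports "Deploying daemon ... on compute-1", the placement can be checked from the orchestrator side. A sketch (standard mgr commands, not from this log; the cluster id "cephfs" is taken from the nfs.cephfs.* daemon names):

    ceph orch ps --daemon_type nfs    # one ganesha daemon per placement host
    ceph nfs cluster info cephfs      # mgr/nfs module's view of the same cluster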
Dec 06 09:42:01 compute-0 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:42:01 compute-0 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:42:01 compute-0 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:42:01 compute-0 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:42:01 compute-0 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:42:01 compute-0 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:42:01 compute-0 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.djsnbu", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]: dispatch
Dec 06 09:42:01 compute-0 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.djsnbu", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]': finished
Dec 06 09:42:01 compute-0 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]: dispatch
Dec 06 09:42:01 compute-0 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]': finished
Dec 06 09:42:01 compute-0 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 09:42:01 compute-0 ceph-mon[74327]: mds.? [v2:192.168.122.101:6804/2619956440,v1:192.168.122.101:6805/2619956440] up:boot
Dec 06 09:42:01 compute-0 ceph-mon[74327]: mds.? [v2:192.168.122.102:6804/1500676117,v1:192.168.122.102:6805/1500676117] up:active
Dec 06 09:42:01 compute-0 ceph-mon[74327]: fsmap cephfs:1 {0=cephfs.compute-2.czucwy=up:active} 2 up:standby
Dec 06 09:42:01 compute-0 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-1.fpvjgb"}]: dispatch
Dec 06 09:42:01 compute-0 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]: dispatch
Dec 06 09:42:01 compute-0 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]': finished
Dec 06 09:42:01 compute-0 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.djsnbu-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Dec 06 09:42:01 compute-0 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.djsnbu-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]': finished
Dec 06 09:42:01 compute-0 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 09:42:02 compute-0 ceph-mon[74327]: Creating key for client.nfs.cephfs.0.0.compute-1.djsnbu
Dec 06 09:42:02 compute-0 ceph-mon[74327]: Ensuring nfs.cephfs.0 is in the ganesha grace table
Dec 06 09:42:02 compute-0 ceph-mon[74327]: pgmap v32: 136 pgs: 136 active+clean; 454 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 2.6 KiB/s rd, 7.4 KiB/s wr, 29 op/s
Dec 06 09:42:02 compute-0 ceph-mon[74327]: Rados config object exists: conf-nfs.cephfs
Dec 06 09:42:02 compute-0 ceph-mon[74327]: Creating key for client.nfs.cephfs.0.0.compute-1.djsnbu-rgw
Dec 06 09:42:02 compute-0 ceph-mon[74327]: Bind address in nfs.cephfs.0.0.compute-1.djsnbu's ganesha conf is defaulting to empty
Dec 06 09:42:02 compute-0 ceph-mon[74327]: Deploying daemon nfs.cephfs.0.0.compute-1.djsnbu on compute-1
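The "Ensuring nfs.cephfs.0 is in the ganesha grace table" step edits a shared grace database kept in the .nfs pool, using the short-lived client.mgr.nfs.grace.nfs.cephfs key created and removed above. The same table can be dumped with the ganesha-rados-grace utility; a sketch, with pool and namespace taken from the caps in the audit lines and the default grace object assumed:

    ganesha-rados-grace --pool .nfs --ns cephfs dump   # current/recovery epochs plus per-node enforcing/need-grace flags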
Dec 06 09:42:02 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).mds e9 new map
Dec 06 09:42:02 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).mds e9 print_map
                                           e9
                                           btime 2025-12-06T09:42:02.933823+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        8
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2025-12-06T09:41:29.967778+0000
                                           modified        2025-12-06T09:42:00.880325+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           max_mds        1
                                           in        0
                                           up        {0=24274}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        1
                                           qdb_cluster        leader: 24274 members: 24274
                                           [mds.cephfs.compute-2.czucwy{0:24274} state up:active seq 3 join_fscid=1 addr [v2:192.168.122.102:6804/1500676117,v1:192.168.122.102:6805/1500676117] compat {c=[1],r=[1],i=[1fff]}]
                                            
                                            
                                           Standby daemons:
                                            
                                           [mds.cephfs.compute-0.ujokui{-1:14544} state up:standby seq 2 join_fscid=1 addr [v2:192.168.122.100:6806/2465826838,v1:192.168.122.100:6807/2465826838] compat {c=[1],r=[1],i=[1fff]}]
                                           [mds.cephfs.compute-1.fpvjgb{-1:24215} state up:standby seq 1 addr [v2:192.168.122.101:6804/2619956440,v1:192.168.122.101:6805/2619956440] compat {c=[1],r=[1],i=[1fff]}]
Dec 06 09:42:02 compute-0 ceph-mds[95272]: mds.cephfs.compute-0.ujokui Updating MDS map to version 9 from mon.0
Dec 06 09:42:02 compute-0 ceph-mon[74327]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.100:6806/2465826838,v1:192.168.122.100:6807/2465826838] up:standby
Dec 06 09:42:02 compute-0 ceph-mon[74327]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.czucwy=up:active} 2 up:standby
Dec 06 09:42:02 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v33: 136 pgs: 136 active+clean; 454 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 5.2 KiB/s wr, 20 op/s
Dec 06 09:42:03 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec 06 09:42:03 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:42:03 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec 06 09:42:03 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:42:03 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Dec 06 09:42:03 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:42:03 compute-0 ceph-mgr[74618]: [cephadm INFO cephadm.services.nfs] Creating key for client.nfs.cephfs.1.0.compute-2.sseuqb
Dec 06 09:42:03 compute-0 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Creating key for client.nfs.cephfs.1.0.compute-2.sseuqb
Dec 06 09:42:03 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.sseuqb", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]} v 0)
Dec 06 09:42:03 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.sseuqb", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]: dispatch
Dec 06 09:42:03 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.sseuqb", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]': finished
Dec 06 09:42:03 compute-0 ceph-mgr[74618]: [cephadm INFO root] Ensuring nfs.cephfs.1 is in the ganesha grace table
Dec 06 09:42:03 compute-0 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Ensuring nfs.cephfs.1 is in the ganesha grace table
Dec 06 09:42:03 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]} v 0)
Dec 06 09:42:03 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]: dispatch
Dec 06 09:42:03 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]': finished
Dec 06 09:42:03 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 06 09:42:03 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
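Each mon_command audited above maps one-to-one onto a ceph CLI call. A sketch of the same two operations issued by hand (caps copied verbatim from the audit lines):

    ceph auth get-or-create client.mgr.nfs.grace.nfs.cephfs \
        mon 'allow r' osd 'allow rwx pool .nfs'
    ceph config generate-minimal-conf    # minimal ceph.conf handed to the new daemon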
Dec 06 09:42:03 compute-0 ceph-mon[74327]: mds.? [v2:192.168.122.100:6806/2465826838,v1:192.168.122.100:6807/2465826838] up:standby
Dec 06 09:42:03 compute-0 ceph-mon[74327]: fsmap cephfs:1 {0=cephfs.compute-2.czucwy=up:active} 2 up:standby
Dec 06 09:42:03 compute-0 ceph-mon[74327]: pgmap v33: 136 pgs: 136 active+clean; 454 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 5.2 KiB/s wr, 20 op/s
Dec 06 09:42:03 compute-0 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:42:03 compute-0 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:42:03 compute-0 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:42:03 compute-0 ceph-mon[74327]: Creating key for client.nfs.cephfs.1.0.compute-2.sseuqb
Dec 06 09:42:03 compute-0 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.sseuqb", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]: dispatch
Dec 06 09:42:03 compute-0 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.sseuqb", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]': finished
Dec 06 09:42:03 compute-0 ceph-mon[74327]: Ensuring nfs.cephfs.1 is in the ganesha grace table
Dec 06 09:42:03 compute-0 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]: dispatch
Dec 06 09:42:03 compute-0 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]': finished
Dec 06 09:42:03 compute-0 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 09:42:04 compute-0 ceph-mgr[74618]: [progress INFO root] Writing back 13 completed events
Dec 06 09:42:04 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Dec 06 09:42:04 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:42:04 compute-0 ceph-mgr[74618]: [progress INFO root] Completed event 4852788a-1dca-45d5-abe3-a4fe57183f8b (Global Recovery Event) in 10 seconds
Dec 06 09:42:04 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v34: 136 pgs: 136 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 183 KiB/s rd, 8.0 KiB/s wr, 343 op/s
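The "Completed event ... (Global Recovery Event) in 10 seconds" line comes from the mgr progress module, which can also be queried directly; a sketch (standard progress-module commands, not from this log):

    ceph progress         # active and recently completed events
    ceph progress json    # machine-readable dump of the same events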
Dec 06 09:42:05 compute-0 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:42:05 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).mds e10 new map
Dec 06 09:42:05 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).mds e10 print_map
                                           e10
                                           btime 2025-12-06T09:42:05.044345+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        8
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2025-12-06T09:41:29.967778+0000
                                           modified        2025-12-06T09:42:00.880325+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           max_mds        1
                                           in        0
                                           up        {0=24274}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        1
                                           qdb_cluster        leader: 24274 members: 24274
                                           [mds.cephfs.compute-2.czucwy{0:24274} state up:active seq 3 join_fscid=1 addr [v2:192.168.122.102:6804/1500676117,v1:192.168.122.102:6805/1500676117] compat {c=[1],r=[1],i=[1fff]}]
                                            
                                            
                                           Standby daemons:
                                            
                                           [mds.cephfs.compute-0.ujokui{-1:14544} state up:standby seq 2 join_fscid=1 addr [v2:192.168.122.100:6806/2465826838,v1:192.168.122.100:6807/2465826838] compat {c=[1],r=[1],i=[1fff]}]
                                           [mds.cephfs.compute-1.fpvjgb{-1:24215} state up:standby seq 2 join_fscid=1 addr [v2:192.168.122.101:6804/2619956440,v1:192.168.122.101:6805/2619956440] compat {c=[1],r=[1],i=[1fff]}]
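In epoch e10 both standbys now carry join_fscid=1, while the per-filesystem "epoch 8" is unchanged: the inner epoch apparently tracks the last change to the filesystem itself, and only standby bookkeeping moved here. The daemon metadata the mgr polled with "mds metadata" above can be fetched the same way; a sketch:

    ceph mds metadata cephfs.compute-1.fpvjgb    # host, addrs, ceph_version for one MDS daemon
    ceph fs dump --format json-pretty            # standbys array with join_fscid and seq as JSON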
Dec 06 09:42:05 compute-0 ceph-mon[74327]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.101:6804/2619956440,v1:192.168.122.101:6805/2619956440] up:standby
Dec 06 09:42:05 compute-0 ceph-mon[74327]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.czucwy=up:active} 2 up:standby
Dec 06 09:42:05 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e51 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 09:42:06 compute-0 ceph-mon[74327]: pgmap v34: 136 pgs: 136 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 183 KiB/s rd, 8.0 KiB/s wr, 343 op/s
Dec 06 09:42:06 compute-0 ceph-mon[74327]: mds.? [v2:192.168.122.101:6804/2619956440,v1:192.168.122.101:6805/2619956440] up:standby
Dec 06 09:42:06 compute-0 ceph-mon[74327]: fsmap cephfs:1 {0=cephfs.compute-2.czucwy=up:active} 2 up:standby
Dec 06 09:42:06 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"} v 0)
Dec 06 09:42:06 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]: dispatch
Dec 06 09:42:06 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]': finished
Dec 06 09:42:06 compute-0 ceph-mgr[74618]: [cephadm INFO cephadm.services.nfs] Rados config object exists: conf-nfs.cephfs
Dec 06 09:42:06 compute-0 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Rados config object exists: conf-nfs.cephfs
Dec 06 09:42:06 compute-0 ceph-mgr[74618]: [cephadm INFO cephadm.services.nfs] Creating key for client.nfs.cephfs.1.0.compute-2.sseuqb-rgw
Dec 06 09:42:06 compute-0 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Creating key for client.nfs.cephfs.1.0.compute-2.sseuqb-rgw
Dec 06 09:42:06 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.sseuqb-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]} v 0)
Dec 06 09:42:06 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.sseuqb-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Dec 06 09:42:06 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.sseuqb-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]': finished
Dec 06 09:42:06 compute-0 ceph-mgr[74618]: [cephadm WARNING cephadm.services.nfs] Bind address in nfs.cephfs.1.0.compute-2.sseuqb's ganesha conf is defaulting to empty
Dec 06 09:42:06 compute-0 ceph-mgr[74618]: log_channel(cephadm) log [WRN] : Bind address in nfs.cephfs.1.0.compute-2.sseuqb's ganesha conf is defaulting to empty
Dec 06 09:42:06 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 06 09:42:06 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 09:42:06 compute-0 ceph-mgr[74618]: [cephadm INFO cephadm.serve] Deploying daemon nfs.cephfs.1.0.compute-2.sseuqb on compute-2
Dec 06 09:42:06 compute-0 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Deploying daemon nfs.cephfs.1.0.compute-2.sseuqb on compute-2
Dec 06 09:42:06 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v35: 136 pgs: 136 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 161 KiB/s rd, 7.0 KiB/s wr, 301 op/s
Dec 06 09:42:07 compute-0 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]: dispatch
Dec 06 09:42:07 compute-0 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]': finished
Dec 06 09:42:07 compute-0 ceph-mon[74327]: Rados config object exists: conf-nfs.cephfs
Dec 06 09:42:07 compute-0 ceph-mon[74327]: Creating key for client.nfs.cephfs.1.0.compute-2.sseuqb-rgw
Dec 06 09:42:07 compute-0 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.sseuqb-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Dec 06 09:42:07 compute-0 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.sseuqb-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]': finished
Dec 06 09:42:07 compute-0 ceph-mon[74327]: Bind address in nfs.cephfs.1.0.compute-2.sseuqb's ganesha conf is defaulting to empty
Dec 06 09:42:07 compute-0 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 09:42:07 compute-0 ceph-mon[74327]: Deploying daemon nfs.cephfs.1.0.compute-2.sseuqb on compute-2
Dec 06 09:42:08 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec 06 09:42:08 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:42:08 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec 06 09:42:08 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:42:08 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Dec 06 09:42:08 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:42:08 compute-0 ceph-mgr[74618]: [cephadm INFO cephadm.services.nfs] Creating key for client.nfs.cephfs.2.0.compute-0.dfwxck
Dec 06 09:42:08 compute-0 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Creating key for client.nfs.cephfs.2.0.compute-0.dfwxck
Dec 06 09:42:08 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.dfwxck", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]} v 0)
Dec 06 09:42:08 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.dfwxck", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]: dispatch
Dec 06 09:42:08 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.dfwxck", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]': finished
Dec 06 09:42:08 compute-0 ceph-mgr[74618]: [cephadm INFO root] Ensuring nfs.cephfs.2 is in the ganesha grace table
Dec 06 09:42:08 compute-0 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Ensuring nfs.cephfs.2 is in the ganesha grace table
Dec 06 09:42:08 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]} v 0)
Dec 06 09:42:08 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]: dispatch
Dec 06 09:42:08 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]': finished
Dec 06 09:42:08 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 06 09:42:08 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 09:42:08 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"} v 0)
Dec 06 09:42:08 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]: dispatch
Dec 06 09:42:08 compute-0 ceph-mon[74327]: pgmap v35: 136 pgs: 136 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 161 KiB/s rd, 7.0 KiB/s wr, 301 op/s
Dec 06 09:42:08 compute-0 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:42:08 compute-0 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:42:08 compute-0 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:42:08 compute-0 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.dfwxck", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]: dispatch
Dec 06 09:42:08 compute-0 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.dfwxck", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]': finished
Dec 06 09:42:08 compute-0 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]: dispatch
Dec 06 09:42:08 compute-0 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]': finished
Dec 06 09:42:08 compute-0 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 09:42:08 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]': finished
Dec 06 09:42:08 compute-0 ceph-mgr[74618]: [cephadm INFO cephadm.services.nfs] Rados config object exists: conf-nfs.cephfs
Dec 06 09:42:08 compute-0 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Rados config object exists: conf-nfs.cephfs
Dec 06 09:42:08 compute-0 ceph-mgr[74618]: [cephadm INFO cephadm.services.nfs] Creating key for client.nfs.cephfs.2.0.compute-0.dfwxck-rgw
Dec 06 09:42:08 compute-0 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Creating key for client.nfs.cephfs.2.0.compute-0.dfwxck-rgw
Dec 06 09:42:08 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.dfwxck-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]} v 0)
Dec 06 09:42:08 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.dfwxck-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Dec 06 09:42:08 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.dfwxck-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]': finished
Dec 06 09:42:08 compute-0 ceph-mgr[74618]: [cephadm WARNING cephadm.services.nfs] Bind address in nfs.cephfs.2.0.compute-0.dfwxck's ganesha conf is defaulting to empty
Dec 06 09:42:08 compute-0 ceph-mgr[74618]: log_channel(cephadm) log [WRN] : Bind address in nfs.cephfs.2.0.compute-0.dfwxck's ganesha conf is defaulting to empty
Dec 06 09:42:08 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 06 09:42:08 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 09:42:08 compute-0 ceph-mgr[74618]: [cephadm INFO cephadm.serve] Deploying daemon nfs.cephfs.2.0.compute-0.dfwxck on compute-0
Dec 06 09:42:08 compute-0 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Deploying daemon nfs.cephfs.2.0.compute-0.dfwxck on compute-0
Dec 06 09:42:08 compute-0 sudo[95434]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 09:42:08 compute-0 sudo[95434]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:42:08 compute-0 sudo[95434]: pam_unix(sudo:session): session closed for user root
Dec 06 09:42:08 compute-0 sudo[95459]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 _orch deploy --fsid 5ecd3f74-dade-5fc4-92ce-8950ae424258
Dec 06 09:42:08 compute-0 sudo[95459]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:42:08 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v36: 136 pgs: 136 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 131 KiB/s rd, 5.7 KiB/s wr, 245 op/s
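The sudo lines show how the mgr actually deploys: it ships a content-addressed copy of the cephadm binary under /var/lib/ceph/<fsid>/ and runs it as root via python3 with "_orch deploy". The result can be inspected on the host with the same tool; a sketch (cephadm CLI, daemon name taken from this log):

    cephadm ls                                            # daemons cephadm knows about on this host
    cephadm logs --name nfs.cephfs.2.0.compute-0.dfwxck   # journalctl for the daemon's systemd unit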
Dec 06 09:42:09 compute-0 ceph-mgr[74618]: [progress INFO root] Writing back 14 completed events
Dec 06 09:42:09 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Dec 06 09:42:09 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:42:09 compute-0 podman[95524]: 2025-12-06 09:42:09.169338658 +0000 UTC m=+0.047063863 container create 009a5cb039a0cf6de21f6b6f9afe35739d1944fe137e8aea7c4e975ffdd97d7d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_payne, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True)
Dec 06 09:42:09 compute-0 systemd[1]: Started libpod-conmon-009a5cb039a0cf6de21f6b6f9afe35739d1944fe137e8aea7c4e975ffdd97d7d.scope.
Dec 06 09:42:09 compute-0 systemd[1]: Started libcrun container.
Dec 06 09:42:09 compute-0 podman[95524]: 2025-12-06 09:42:09.151578162 +0000 UTC m=+0.029303387 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 06 09:42:09 compute-0 podman[95524]: 2025-12-06 09:42:09.256970577 +0000 UTC m=+0.134695822 container init 009a5cb039a0cf6de21f6b6f9afe35739d1944fe137e8aea7c4e975ffdd97d7d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_payne, io.buildah.version=1.40.1, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 06 09:42:09 compute-0 podman[95524]: 2025-12-06 09:42:09.269659388 +0000 UTC m=+0.147384593 container start 009a5cb039a0cf6de21f6b6f9afe35739d1944fe137e8aea7c4e975ffdd97d7d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_payne, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 09:42:09 compute-0 podman[95524]: 2025-12-06 09:42:09.27385025 +0000 UTC m=+0.151575565 container attach 009a5cb039a0cf6de21f6b6f9afe35739d1944fe137e8aea7c4e975ffdd97d7d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_payne, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Dec 06 09:42:09 compute-0 busy_payne[95541]: 167 167
Dec 06 09:42:09 compute-0 systemd[1]: libpod-009a5cb039a0cf6de21f6b6f9afe35739d1944fe137e8aea7c4e975ffdd97d7d.scope: Deactivated successfully.
Dec 06 09:42:09 compute-0 podman[95524]: 2025-12-06 09:42:09.277089487 +0000 UTC m=+0.154814702 container died 009a5cb039a0cf6de21f6b6f9afe35739d1944fe137e8aea7c4e975ffdd97d7d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_payne, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 09:42:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-80de7da242f5161fc7924c7f28ab500e85b39c14727e8901fe8247af94671aa9-merged.mount: Deactivated successfully.
Dec 06 09:42:09 compute-0 podman[95524]: 2025-12-06 09:42:09.322115005 +0000 UTC m=+0.199840200 container remove 009a5cb039a0cf6de21f6b6f9afe35739d1944fe137e8aea7c4e975ffdd97d7d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_payne, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 09:42:09 compute-0 systemd[1]: libpod-conmon-009a5cb039a0cf6de21f6b6f9afe35739d1944fe137e8aea7c4e975ffdd97d7d.scope: Deactivated successfully.
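The short-lived busy_payne container (create, init, start, attach, died, remove within roughly 150 ms) is cephadm probing the image rather than running a service; the "167 167" it prints is consistent with the uid/gid probe cephadm performs before writing daemon files. A hedged reconstruction of that probe (the exact entrypoint is an assumption):

    podman run --rm quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec \
        stat -c '%u %g' /var/lib/ceph    # 167 167: the ceph user/group inside the image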
Dec 06 09:42:09 compute-0 systemd[1]: Reloading.
Dec 06 09:42:09 compute-0 ceph-mon[74327]: Creating key for client.nfs.cephfs.2.0.compute-0.dfwxck
Dec 06 09:42:09 compute-0 ceph-mon[74327]: Ensuring nfs.cephfs.2 is in the ganesha grace table
Dec 06 09:42:09 compute-0 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]: dispatch
Dec 06 09:42:09 compute-0 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]': finished
Dec 06 09:42:09 compute-0 ceph-mon[74327]: Rados config object exists: conf-nfs.cephfs
Dec 06 09:42:09 compute-0 ceph-mon[74327]: Creating key for client.nfs.cephfs.2.0.compute-0.dfwxck-rgw
Dec 06 09:42:09 compute-0 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.dfwxck-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Dec 06 09:42:09 compute-0 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.dfwxck-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]': finished
Dec 06 09:42:09 compute-0 ceph-mon[74327]: Bind address in nfs.cephfs.2.0.compute-0.dfwxck's ganesha conf is defaulting to empty
Dec 06 09:42:09 compute-0 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 09:42:09 compute-0 ceph-mon[74327]: Deploying daemon nfs.cephfs.2.0.compute-0.dfwxck on compute-0
Dec 06 09:42:09 compute-0 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:42:09 compute-0 systemd-rc-local-generator[95582]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 09:42:09 compute-0 systemd-sysv-generator[95587]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 09:42:09 compute-0 systemd[1]: Reloading.
Dec 06 09:42:09 compute-0 systemd-sysv-generator[95630]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 09:42:09 compute-0 systemd-rc-local-generator[95624]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 09:42:09 compute-0 systemd[1]: Starting Ceph nfs.cephfs.2.0.compute-0.dfwxck for 5ecd3f74-dade-5fc4-92ce-8950ae424258...
Dec 06 09:42:10 compute-0 podman[95682]: 2025-12-06 09:42:10.295149655 +0000 UTC m=+0.048161922 container create f137658eeed93d56ee9d8ac7b6445e7acce26a24ed156c5e4e3e69a13e4abbd2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325)
Dec 06 09:42:10 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e51 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 09:42:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d86bdd0ce43374acaba3604e849843759161821197ac361242f7e120fad089e4/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Dec 06 09:42:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d86bdd0ce43374acaba3604e849843759161821197ac361242f7e120fad089e4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 09:42:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d86bdd0ce43374acaba3604e849843759161821197ac361242f7e120fad089e4/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Dec 06 09:42:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d86bdd0ce43374acaba3604e849843759161821197ac361242f7e120fad089e4/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.2.0.compute-0.dfwxck-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Dec 06 09:42:10 compute-0 podman[95682]: 2025-12-06 09:42:10.275054616 +0000 UTC m=+0.028066863 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 06 09:42:10 compute-0 podman[95682]: 2025-12-06 09:42:10.388316513 +0000 UTC m=+0.141328820 container init f137658eeed93d56ee9d8ac7b6445e7acce26a24ed156c5e4e3e69a13e4abbd2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 09:42:10 compute-0 podman[95682]: 2025-12-06 09:42:10.398844005 +0000 UTC m=+0.151856272 container start f137658eeed93d56ee9d8ac7b6445e7acce26a24ed156c5e4e3e69a13e4abbd2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 09:42:10 compute-0 bash[95682]: f137658eeed93d56ee9d8ac7b6445e7acce26a24ed156c5e4e3e69a13e4abbd2
Dec 06 09:42:10 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:42:10 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Dec 06 09:42:10 compute-0 systemd[1]: Started Ceph nfs.cephfs.2.0.compute-0.dfwxck for 5ecd3f74-dade-5fc4-92ce-8950ae424258.
Dec 06 09:42:10 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:42:10 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Dec 06 09:42:10 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:42:10 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Dec 06 09:42:10 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:42:10 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Dec 06 09:42:10 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:42:10 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Dec 06 09:42:10 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:42:10 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Dec 06 09:42:10 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:42:10 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Dec 06 09:42:10 compute-0 sudo[95459]: pam_unix(sudo:session): session closed for user root
Dec 06 09:42:10 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 06 09:42:10 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:42:10 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 06 09:42:10 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:42:10 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 06 09:42:10 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:42:10 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Dec 06 09:42:10 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:42:10 compute-0 ceph-mgr[74618]: [progress INFO root] complete: finished ev 9a6db2cb-2f95-4ec5-a56e-3692847dbc20 (Updating nfs.cephfs deployment (+3 -> 3))
Dec 06 09:42:10 compute-0 ceph-mgr[74618]: [progress INFO root] Completed event 9a6db2cb-2f95-4ec5-a56e-3692847dbc20 (Updating nfs.cephfs deployment (+3 -> 3)) in 10 seconds
Dec 06 09:42:10 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:42:10 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[main] rados_kv_traverse :CLIENT ID :EVENT :Failed to lst kv ret=-2
Dec 06 09:42:10 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:42:10 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[main] rados_cluster_read_clids :CLIENT ID :EVENT :Failed to traverse recovery db: -2
Dec 06 09:42:10 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:42:10 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 06 09:42:10 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:42:10 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 06 09:42:10 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:42:10 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 06 09:42:10 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:42:10 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 06 09:42:10 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:42:10 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 06 09:42:10 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:42:10 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 06 09:42:10 compute-0 ceph-mon[74327]: pgmap v36: 136 pgs: 136 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 131 KiB/s rd, 5.7 KiB/s wr, 245 op/s
Dec 06 09:42:10 compute-0 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:42:10 compute-0 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:42:10 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Dec 06 09:42:10 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:42:10 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[main] rados_cluster_end_grace :CLIENT ID :EVENT :Failed to remove rec-0000000000000003:nfs.cephfs.2: -2
Dec 06 09:42:10 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:42:10 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
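[annotation] The grace-period exchange above is what a first start looks like: the rados_kv recovery backend keeps client reclaim state as omap keys on per-node objects in the .nfs pool, so on a fresh deployment the traverse fails with -2 (ENOENT), there is no rec-0000000000000003:nfs.cephfs.2 object to clean up, and grace is lifted as soon as the empty client list loads. A sketch with the librados Python binding that lists those omap keys; the pool and object names are copied from this log, while the namespace and conffile are assumptions:

    # Sketch: list recovery omap keys the way ganesha's rados_kv backend keeps
    # them. Pool ".nfs" and object "rec-0000000000000003:nfs.cephfs.2" come
    # from the log; the "cephfs" namespace and conffile path are assumptions.
    import rados

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
    cluster.connect()
    ioctx = cluster.open_ioctx(".nfs")
    ioctx.set_namespace("cephfs")
    with rados.ReadOpCtx() as op:
        it, ret = ioctx.get_omap_vals(op, "", "", 100)
        ioctx.operate_read_op(op, "rec-0000000000000003:nfs.cephfs.2")
        for key, val in it:
            print(key, val)
    ioctx.close()
    cluster.shutdown()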
Dec 06 09:42:10 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:42:10 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Dec 06 09:42:10 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:42:10 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Dec 06 09:42:10 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:42:10 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
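[annotation] The two "Unknown block" warnings mean the parser met RADOS_URLS and RGW sections in the rendered /etc/ganesha/ganesha.conf for which no loaded component registered a handler in this build; they are skipped, not fatal. The "No export entries" warning is likewise expected with cephadm, which serves exports from RADOS config objects rather than static EXPORT blocks. A sketch that fetches the generated config the daemon actually consumes; the object name conf-nfs.cephfs and the namespace follow cephadm's usual layout and are assumptions here:

    # Sketch: read the cephadm-generated ganesha config object from RADOS.
    # Object "conf-nfs.cephfs", pool ".nfs", and namespace "cephfs" are
    # assumptions based on cephadm's usual layout, not shown in this log.
    import rados

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
    cluster.connect()
    ioctx = cluster.open_ioctx(".nfs")
    ioctx.set_namespace("cephfs")
    print(ioctx.read("conf-nfs.cephfs", 65536).decode())
    ioctx.close()
    cluster.shutdown()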
Dec 06 09:42:10 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:42:10 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Dec 06 09:42:10 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:42:10 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Dec 06 09:42:10 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:42:10 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Dec 06 09:42:10 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:42:10 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec 06 09:42:10 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:42:10 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec 06 09:42:10 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:42:10 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
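[annotation] The three DBUS CRIT lines follow from the container having no host D-Bus socket bind-mounted: dbus_bus_get cannot reach /run/dbus/system_bus_socket, so none of ganesha's D-Bus object paths register. The daemon keeps serving NFS; only the D-Bus-driven admin and stats interfaces are lost. A quick check for the socket from inside the same mount namespace:

    # Quick check: is the system D-Bus socket present and connectable? In this
    # container it is absent, matching the dbus_bus_get failure above.
    import os
    import socket

    path = "/run/dbus/system_bus_socket"
    if not os.path.exists(path):
        print("no D-Bus socket at", path)
    else:
        s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        s.connect(path)
        print("connected to", path)
        s.close()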
Dec 06 09:42:10 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:42:10 compute-0 ceph-mgr[74618]: [progress INFO root] update: starting ev de016995-7e5d-4275-960f-5b2b33bc5989 (Updating ingress.nfs.cephfs deployment (+6 -> 6))
Dec 06 09:42:10 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ingress.nfs.cephfs/monitor_password}] v 0)
Dec 06 09:42:10 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:42:10 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:42:10 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Dec 06 09:42:10 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:42:10 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec 06 09:42:10 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:42:10 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Dec 06 09:42:10 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:42:10 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Dec 06 09:42:10 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:42:10 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Dec 06 09:42:10 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:42:10 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Dec 06 09:42:10 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:42:10 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
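[annotation] The callback channel tries to acquire accept credentials for the nfs service principal out of /etc/krb5.keytab; with no default realm configured and no usable keytab entry, acquisition fails, and the minor code -1765328160 lines up with the "does not specify default realm" complaint two lines up. This only matters for sec=krb5* mounts; AUTH_SYS clients are unaffected. A sketch of the same acquisition using the python-gssapi package (the package and the compute-0 hostname form are assumptions):

    # Sketch: attempt the credential acquisition ganesha performs, via
    # python-gssapi (an assumption; not part of this deployment).
    import gssapi

    name = gssapi.Name("nfs@compute-0", gssapi.NameType.hostbased_service)
    try:
        creds = gssapi.Credentials(name=name, usage="accept")
        print("acquired, lifetime:", creds.lifetime)
    except gssapi.exceptions.GSSError as exc:
        print("failed like ganesha did:", exc)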
Dec 06 09:42:10 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:42:10 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Dec 06 09:42:10 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:42:10 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Dec 06 09:42:10 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:42:10 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Dec 06 09:42:10 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:42:10 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Dec 06 09:42:10 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:42:10 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Dec 06 09:42:10 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:42:10 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Dec 06 09:42:10 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:42:10 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Dec 06 09:42:10 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:42:10 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Dec 06 09:42:10 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:42:10 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Dec 06 09:42:10 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:42:10 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Dec 06 09:42:10 compute-0 ceph-mgr[74618]: [cephadm INFO cephadm.serve] Deploying daemon haproxy.nfs.cephfs.compute-1.jmdafd on compute-1
Dec 06 09:42:10 compute-0 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Deploying daemon haproxy.nfs.cephfs.compute-1.jmdafd on compute-1
Dec 06 09:42:10 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v37: 136 pgs: 136 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 120 KiB/s rd, 6.0 KiB/s wr, 224 op/s
Dec 06 09:42:11 compute-0 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:42:11 compute-0 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:42:11 compute-0 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:42:11 compute-0 ceph-mon[74327]: Deploying daemon haproxy.nfs.cephfs.compute-1.jmdafd on compute-1
Dec 06 09:42:12 compute-0 ceph-mon[74327]: pgmap v37: 136 pgs: 136 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 120 KiB/s rd, 6.0 KiB/s wr, 224 op/s
Dec 06 09:42:12 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v38: 136 pgs: 136 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 110 KiB/s rd, 3.0 KiB/s wr, 197 op/s
Dec 06 09:42:14 compute-0 ceph-mgr[74618]: [progress INFO root] Writing back 15 completed events
Dec 06 09:42:14 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Dec 06 09:42:14 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:42:14 compute-0 ceph-mon[74327]: pgmap v38: 136 pgs: 136 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 110 KiB/s rd, 3.0 KiB/s wr, 197 op/s
Dec 06 09:42:14 compute-0 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:42:14 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v39: 136 pgs: 136 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 113 KiB/s rd, 4.0 KiB/s wr, 201 op/s
Dec 06 09:42:15 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e51 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 09:42:16 compute-0 ceph-mon[74327]: pgmap v39: 136 pgs: 136 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 113 KiB/s rd, 4.0 KiB/s wr, 201 op/s
Dec 06 09:42:16 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v40: 136 pgs: 136 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 5.1 KiB/s rd, 1.8 KiB/s wr, 7 op/s
Dec 06 09:42:17 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec 06 09:42:17 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:42:17 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec 06 09:42:17 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:42:17 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.nfs.cephfs}] v 0)
Dec 06 09:42:17 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:42:17 compute-0 ceph-mgr[74618]: [cephadm INFO cephadm.serve] Deploying daemon haproxy.nfs.cephfs.compute-0.fzuvue on compute-0
Dec 06 09:42:17 compute-0 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Deploying daemon haproxy.nfs.cephfs.compute-0.fzuvue on compute-0
Dec 06 09:42:17 compute-0 sudo[95750]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 09:42:17 compute-0 sudo[95750]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:42:17 compute-0 sudo[95750]: pam_unix(sudo:session): session closed for user root
Dec 06 09:42:17 compute-0 sudo[95775]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/haproxy:2.3 --timeout 895 _orch deploy --fsid 5ecd3f74-dade-5fc4-92ce-8950ae424258
Dec 06 09:42:17 compute-0 sudo[95775]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:42:18 compute-0 ceph-mon[74327]: pgmap v40: 136 pgs: 136 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 5.1 KiB/s rd, 1.8 KiB/s wr, 7 op/s
Dec 06 09:42:18 compute-0 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:42:18 compute-0 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:42:18 compute-0 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:42:18 compute-0 ceph-mon[74327]: Deploying daemon haproxy.nfs.cephfs.compute-0.fzuvue on compute-0
Dec 06 09:42:18 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:42:18 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f924c000df0 fd 37 proxy header rest len failed header rlen = % (will set dead)
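[annotation] This svc_vc_recv event, repeating below on a steady cadence, is the ingress health checks arriving: ganesha's NFS port is configured for HAProxy's PROXY protocol, and haproxy's TCP checks open a bare connection and close it, so the PROXY header read fails and the transport is marked dead (the literal "rlen = %" is a format-specifier glitch in the upstream message, reproduced verbatim here). For contrast, a minimal well-formed PROXY v2 preamble per the proxy-protocol spec; the target address and port are assumptions:

    # Sketch: send a minimal PROXY protocol v2 preamble (signature + LOCAL
    # command, empty address block). 127.0.0.1:2049 is an assumption.
    import socket
    import struct

    SIG = b"\r\n\r\n\x00\r\nQUIT\n"                 # 12-byte PROXY v2 signature
    hdr = SIG + struct.pack("!BBH", 0x20, 0x00, 0)  # ver 2 | cmd LOCAL, AF_UNSPEC, len 0

    s = socket.create_connection(("127.0.0.1", 2049), timeout=5)
    s.sendall(hdr)   # a bare connect that sends nothing triggers the log above
    s.close()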
Dec 06 09:42:18 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v41: 136 pgs: 136 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 5.1 KiB/s rd, 1.8 KiB/s wr, 7 op/s
Dec 06 09:42:20 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e51 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 09:42:20 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:42:20 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9240001c00 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:42:20 compute-0 ceph-mon[74327]: pgmap v41: 136 pgs: 136 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 5.1 KiB/s rd, 1.8 KiB/s wr, 7 op/s
Dec 06 09:42:20 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v42: 136 pgs: 136 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 5.1 KiB/s rd, 1.8 KiB/s wr, 7 op/s
Dec 06 09:42:21 compute-0 podman[95842]: 2025-12-06 09:42:21.23411749 +0000 UTC m=+3.216548728 container create a829bebe1dce47dc94b7cde39a960aca27093416bcf814a84a0f8366163691be (image=quay.io/ceph/haproxy:2.3, name=optimistic_turing)
Dec 06 09:42:21 compute-0 systemd[1]: Started libpod-conmon-a829bebe1dce47dc94b7cde39a960aca27093416bcf814a84a0f8366163691be.scope.
Dec 06 09:42:21 compute-0 podman[95842]: 2025-12-06 09:42:21.211918534 +0000 UTC m=+3.194349812 image pull e85424b0d443f37ddd2dd8a3bb2ef6f18dd352b987723a921b64289023af2914 quay.io/ceph/haproxy:2.3
Dec 06 09:42:21 compute-0 systemd[1]: Started libcrun container.
Dec 06 09:42:21 compute-0 podman[95842]: 2025-12-06 09:42:21.341398286 +0000 UTC m=+3.323829554 container init a829bebe1dce47dc94b7cde39a960aca27093416bcf814a84a0f8366163691be (image=quay.io/ceph/haproxy:2.3, name=optimistic_turing)
Dec 06 09:42:21 compute-0 podman[95842]: 2025-12-06 09:42:21.355383221 +0000 UTC m=+3.337814469 container start a829bebe1dce47dc94b7cde39a960aca27093416bcf814a84a0f8366163691be (image=quay.io/ceph/haproxy:2.3, name=optimistic_turing)
Dec 06 09:42:21 compute-0 podman[95842]: 2025-12-06 09:42:21.358801462 +0000 UTC m=+3.341232690 container attach a829bebe1dce47dc94b7cde39a960aca27093416bcf814a84a0f8366163691be (image=quay.io/ceph/haproxy:2.3, name=optimistic_turing)
Dec 06 09:42:21 compute-0 optimistic_turing[95966]: 0 0
Dec 06 09:42:21 compute-0 systemd[1]: libpod-a829bebe1dce47dc94b7cde39a960aca27093416bcf814a84a0f8366163691be.scope: Deactivated successfully.
Dec 06 09:42:21 compute-0 podman[95842]: 2025-12-06 09:42:21.366208291 +0000 UTC m=+3.348639549 container died a829bebe1dce47dc94b7cde39a960aca27093416bcf814a84a0f8366163691be (image=quay.io/ceph/haproxy:2.3, name=optimistic_turing)
Dec 06 09:42:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-7539887866d61a7afe3d8aeb8f6af24f2bd3292e84750680226c1a53a2e49768-merged.mount: Deactivated successfully.
Dec 06 09:42:21 compute-0 podman[95842]: 2025-12-06 09:42:21.453173153 +0000 UTC m=+3.435604421 container remove a829bebe1dce47dc94b7cde39a960aca27093416bcf814a84a0f8366163691be (image=quay.io/ceph/haproxy:2.3, name=optimistic_turing)
Dec 06 09:42:21 compute-0 systemd[1]: libpod-conmon-a829bebe1dce47dc94b7cde39a960aca27093416bcf814a84a0f8366163691be.scope: Deactivated successfully.
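[annotation] The short-lived optimistic_turing container above (create, start, one line of output, die, remove, all within a second) is cephadm probing the freshly pulled haproxy image before writing unit files; the "0 0" it printed looks like a uid/gid probe of a path inside the image. A hedged reproduction; the stat entrypoint is an assumption about what cephadm runs, though /var/lib/haproxy itself is visible in the xfs remount line further down:

    # Sketch: probe uid/gid inside the image, consistent with the "0 0"
    # output above. The stat entrypoint is an assumption.
    import subprocess

    out = subprocess.run(
        ["podman", "run", "--rm", "--entrypoint", "stat",
         "quay.io/ceph/haproxy:2.3", "-c", "%u %g", "/var/lib/haproxy"],
        capture_output=True, text=True, check=True,
    )
    print(out.stdout.strip())  # expected: 0 0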
Dec 06 09:42:21 compute-0 systemd[1]: Reloading.
Dec 06 09:42:21 compute-0 systemd-rc-local-generator[96017]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 09:42:21 compute-0 systemd-sysv-generator[96020]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 09:42:21 compute-0 systemd[1]: Reloading.
Dec 06 09:42:21 compute-0 systemd-rc-local-generator[96053]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 09:42:21 compute-0 systemd-sysv-generator[96057]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 09:42:22 compute-0 systemd[1]: Starting Ceph haproxy.nfs.cephfs.compute-0.fzuvue for 5ecd3f74-dade-5fc4-92ce-8950ae424258...
Dec 06 09:42:22 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:42:22 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9248001ac0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:42:22 compute-0 podman[96112]: 2025-12-06 09:42:22.575789585 +0000 UTC m=+0.076452511 container create 0300cb0bc272de309f3d242ba0627369d0948f1b63b3476dccdba4375a8e539d (image=quay.io/ceph/haproxy:2.3, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-haproxy-nfs-cephfs-compute-0-fzuvue)
Dec 06 09:42:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/693a501dd3650eb7d1c9ee2f1a762126a1db1583eff246bb8100dbca7914988a/merged/var/lib/haproxy supports timestamps until 2038 (0x7fffffff)
Dec 06 09:42:22 compute-0 podman[96112]: 2025-12-06 09:42:22.543517069 +0000 UTC m=+0.044180045 image pull e85424b0d443f37ddd2dd8a3bb2ef6f18dd352b987723a921b64289023af2914 quay.io/ceph/haproxy:2.3
Dec 06 09:42:22 compute-0 podman[96112]: 2025-12-06 09:42:22.64830391 +0000 UTC m=+0.148966826 container init 0300cb0bc272de309f3d242ba0627369d0948f1b63b3476dccdba4375a8e539d (image=quay.io/ceph/haproxy:2.3, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-haproxy-nfs-cephfs-compute-0-fzuvue)
Dec 06 09:42:22 compute-0 podman[96112]: 2025-12-06 09:42:22.659584672 +0000 UTC m=+0.160247598 container start 0300cb0bc272de309f3d242ba0627369d0948f1b63b3476dccdba4375a8e539d (image=quay.io/ceph/haproxy:2.3, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-haproxy-nfs-cephfs-compute-0-fzuvue)
Dec 06 09:42:22 compute-0 bash[96112]: 0300cb0bc272de309f3d242ba0627369d0948f1b63b3476dccdba4375a8e539d
Dec 06 09:42:22 compute-0 systemd[1]: Started Ceph haproxy.nfs.cephfs.compute-0.fzuvue for 5ecd3f74-dade-5fc4-92ce-8950ae424258.
Dec 06 09:42:22 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-haproxy-nfs-cephfs-compute-0-fzuvue[96127]: [NOTICE] 339/094222 (2) : New worker #1 (4) forked
Dec 06 09:42:22 compute-0 sudo[95775]: pam_unix(sudo:session): session closed for user root
Dec 06 09:42:22 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 06 09:42:22 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:42:22 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 06 09:42:22 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:42:22 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.nfs.cephfs}] v 0)
Dec 06 09:42:22 compute-0 ceph-mon[74327]: pgmap v42: 136 pgs: 136 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 5.1 KiB/s rd, 1.8 KiB/s wr, 7 op/s
Dec 06 09:42:22 compute-0 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:42:22 compute-0 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:42:22 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:42:22 compute-0 ceph-mgr[74618]: [cephadm INFO cephadm.serve] Deploying daemon haproxy.nfs.cephfs.compute-2.voodna on compute-2
Dec 06 09:42:22 compute-0 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Deploying daemon haproxy.nfs.cephfs.compute-2.voodna on compute-2
Dec 06 09:42:22 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v43: 136 pgs: 136 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 2.8 KiB/s rd, 1023 B/s wr, 4 op/s
Dec 06 09:42:23 compute-0 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:42:23 compute-0 ceph-mon[74327]: Deploying daemon haproxy.nfs.cephfs.compute-2.voodna on compute-2
Dec 06 09:42:23 compute-0 ceph-mon[74327]: pgmap v43: 136 pgs: 136 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 2.8 KiB/s rd, 1023 B/s wr, 4 op/s
Dec 06 09:42:24 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:42:24 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9228000b60 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:42:24 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:42:24 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9230000fa0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:42:24 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v44: 136 pgs: 136 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 3.1 KiB/s rd, 1023 B/s wr, 4 op/s
Dec 06 09:42:25 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e51 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 09:42:26 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:42:26 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9240001c00 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:42:26 compute-0 ceph-mon[74327]: pgmap v44: 136 pgs: 136 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 3.1 KiB/s rd, 1023 B/s wr, 4 op/s
Dec 06 09:42:26 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:42:26 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f92480025c0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:42:26 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v45: 136 pgs: 136 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:42:27 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec 06 09:42:27 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:42:27 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec 06 09:42:27 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:42:27 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.nfs.cephfs}] v 0)
Dec 06 09:42:27 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:42:27 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ingress.nfs.cephfs/keepalived_password}] v 0)
Dec 06 09:42:27 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:42:27 compute-0 ceph-mgr[74618]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Dec 06 09:42:27 compute-0 ceph-mgr[74618]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Dec 06 09:42:27 compute-0 ceph-mgr[74618]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Dec 06 09:42:27 compute-0 ceph-mgr[74618]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Dec 06 09:42:27 compute-0 ceph-mgr[74618]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Dec 06 09:42:27 compute-0 ceph-mgr[74618]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Dec 06 09:42:27 compute-0 ceph-mgr[74618]: [cephadm INFO cephadm.serve] Deploying daemon keepalived.nfs.cephfs.compute-1.uzbtlt on compute-1
Dec 06 09:42:27 compute-0 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Deploying daemon keepalived.nfs.cephfs.compute-1.uzbtlt on compute-1
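[annotation] Before placing keepalived daemons, cephadm confirms on each candidate host that the ingress virtual IP 192.168.122.2 lies inside a directly connected subnet (192.168.122.0/24 on br-ex for all three hosts above), since VRRP can only claim an address on an attached network. The check reduces to a containment test:

    # The containment test behind "192.168.122.2 is in 192.168.122.0/24";
    # values are taken verbatim from the log lines above.
    import ipaddress

    vip = ipaddress.ip_address("192.168.122.2")
    net = ipaddress.ip_network("192.168.122.0/24")
    print(vip in net)  # True -> keepalived may hold the VIP on this interface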
Dec 06 09:42:28 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:42:28 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f92280016a0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:42:28 compute-0 ceph-mon[74327]: pgmap v45: 136 pgs: 136 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:42:28 compute-0 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:42:28 compute-0 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:42:28 compute-0 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:42:28 compute-0 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:42:28 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:42:28 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9230001ac0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:42:28 compute-0 ceph-mgr[74618]: [balancer INFO root] Optimize plan auto_2025-12-06_09:42:28
Dec 06 09:42:28 compute-0 ceph-mgr[74618]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 06 09:42:28 compute-0 ceph-mgr[74618]: [balancer INFO root] do_upmap
Dec 06 09:42:28 compute-0 ceph-mgr[74618]: [balancer INFO root] pools ['.rgw.root', '.nfs', 'backups', 'default.rgw.log', 'default.rgw.meta', 'vms', 'volumes', 'cephfs.cephfs.data', '.mgr', 'cephfs.cephfs.meta', 'default.rgw.control', 'images']
Dec 06 09:42:28 compute-0 ceph-mgr[74618]: [balancer INFO root] prepared 0/10 upmap changes
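[annotation] A no-op balancer pass: in upmap mode it may prepare up to 10 remaps per cycle, and "prepared 0/10" says the 136 active+clean PGs are already spread evenly over the 3 OSDs. The shape of the decision, with an invented per-OSD distribution since the log does not print one:

    # Rough shape of "prepared 0/10 upmap changes". The per-OSD PG counts are
    # hypothetical; pgs/osds/max_changes come from the surrounding log.
    pgs, osds, max_changes = 136, 3, 10
    per_osd = {0: 45, 1: 46, 2: 45}   # hypothetical even spread

    target = pgs / osds
    deviation = {o: n - target for o, n in per_osd.items()}
    changes = []                      # nothing exceeds the deviation threshold
    print(f"prepared {len(changes)}/{max_changes} upmap changes", deviation)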
Dec 06 09:42:28 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v46: 136 pgs: 136 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:42:29 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] _maybe_adjust
Dec 06 09:42:29 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 09:42:29 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 06 09:42:29 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 09:42:29 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 09:42:29 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 09:42:29 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 09:42:29 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 09:42:29 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 09:42:29 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 09:42:29 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 09:42:29 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 09:42:29 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 1)
Dec 06 09:42:29 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 09:42:29 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Dec 06 09:42:29 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 09:42:29 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 1)
Dec 06 09:42:29 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 09:42:29 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 1)
Dec 06 09:42:29 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 09:42:29 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 1)
Dec 06 09:42:29 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 09:42:29 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Dec 06 09:42:29 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 09:42:29 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 0.0 of space, bias 4.0, pg target 0.0 quantized to 32 (current 1)
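[annotation] Every pg_autoscaler pair of lines above follows the same arithmetic: capacity ratio x bias x (target PGs per OSD x OSD count) gives the raw target, which is then quantized to a power of two, subject to per-pool floors; e.g. 7.185749983720779e-06 x 1.0 x 300 reproduces the 0.0021557... printed for .mgr. A reconstruction, where the 100-PGs-per-OSD default and the floors are assumptions and the ratios and biases are copied from the log:

    # Reconstruction of the pg_autoscaler arithmetic printed above.
    import math

    def pg_target(ratio, bias, osds=3, per_osd=100, floor=1):
        raw = ratio * bias * per_osd * osds
        quantized = (1 << max(0, math.ceil(math.log2(raw)))) if raw >= 1 else 1
        return raw, max(quantized, floor)

    print(pg_target(7.185749983720779e-06, 1.0))            # .mgr -> 1
    print(pg_target(5.087256625643029e-07, 4.0, floor=16))  # cephfs.cephfs.meta -> 16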
Dec 06 09:42:29 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"} v 0)
Dec 06 09:42:29 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"}]: dispatch
Dec 06 09:42:29 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 09:42:29 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 09:42:29 compute-0 ceph-mgr[74618]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 06 09:42:29 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 09:42:29 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 09:42:29 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 09:42:29 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 09:42:29 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 09:42:29 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 09:42:29 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 09:42:29 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 09:42:29 compute-0 ceph-mgr[74618]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 06 09:42:29 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 09:42:29 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 09:42:29 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 09:42:29 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 09:42:29 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:42:29 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9240001c00 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:42:29 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e51 do_prune osdmap full prune enabled
Dec 06 09:42:29 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"}]': finished
Dec 06 09:42:29 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e52 e52: 3 total, 3 up, 3 in
Dec 06 09:42:29 compute-0 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e52: 3 total, 3 up, 3 in
Dec 06 09:42:29 compute-0 ceph-mgr[74618]: [progress INFO root] update: starting ev cdc8c502-ca9c-4899-a366-64f1ce8e52db (PG autoscaler increasing pool 6 PGs from 1 to 16)
Dec 06 09:42:29 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"} v 0)
Dec 06 09:42:29 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]: dispatch
Dec 06 09:42:29 compute-0 ceph-mon[74327]: 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Dec 06 09:42:29 compute-0 ceph-mon[74327]: 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Dec 06 09:42:29 compute-0 ceph-mon[74327]: 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Dec 06 09:42:29 compute-0 ceph-mon[74327]: Deploying daemon keepalived.nfs.cephfs.compute-1.uzbtlt on compute-1
Dec 06 09:42:29 compute-0 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"}]: dispatch
Dec 06 09:42:30 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:42:30 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f92480025c0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:42:30 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e52 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 09:42:30 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e52 do_prune osdmap full prune enabled
Dec 06 09:42:30 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]': finished
Dec 06 09:42:30 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e53 e53: 3 total, 3 up, 3 in
Dec 06 09:42:30 compute-0 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e53: 3 total, 3 up, 3 in
Dec 06 09:42:30 compute-0 ceph-mgr[74618]: [progress INFO root] update: starting ev bfa2975c-8764-478e-9bbb-5a32e2b80a95 (PG autoscaler increasing pool 7 PGs from 1 to 32)
Dec 06 09:42:30 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num", "val": "32"} v 0)
Dec 06 09:42:30 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num", "val": "32"}]: dispatch
Dec 06 09:42:30 compute-0 ceph-mon[74327]: pgmap v46: 136 pgs: 136 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:42:30 compute-0 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"}]': finished
Dec 06 09:42:30 compute-0 ceph-mon[74327]: osdmap e52: 3 total, 3 up, 3 in
Dec 06 09:42:30 compute-0 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]: dispatch
Dec 06 09:42:30 compute-0 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]': finished
Dec 06 09:42:30 compute-0 ceph-mon[74327]: osdmap e53: 3 total, 3 up, 3 in
Dec 06 09:42:30 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:42:30 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f92280016a0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:42:30 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v49: 136 pgs: 136 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 383 B/s rd, 0 op/s
Dec 06 09:42:30 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"} v 0)
Dec 06 09:42:30 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec 06 09:42:30 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"} v 0)
Dec 06 09:42:30 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]: dispatch
Dec 06 09:42:31 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:42:31 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9230001ac0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:42:31 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e53 do_prune osdmap full prune enabled
Dec 06 09:42:31 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num", "val": "32"}]': finished
Dec 06 09:42:31 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]': finished
Dec 06 09:42:31 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]': finished
Dec 06 09:42:31 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e54 e54: 3 total, 3 up, 3 in
Dec 06 09:42:31 compute-0 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e54: 3 total, 3 up, 3 in
Dec 06 09:42:31 compute-0 ceph-mgr[74618]: [progress INFO root] update: starting ev 60d5fd36-984b-4eab-a302-a71ae27a4250 (PG autoscaler increasing pool 8 PGs from 1 to 32)
Dec 06 09:42:31 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"} v 0)
Dec 06 09:42:31 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]: dispatch
Dec 06 09:42:32 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:42:32 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9240001c00 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:42:32 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 54 pg[7.0( empty local-lis/les=22/23 n=0 ec=22/22 lis/c=22/22 les/c/f=23/23/0 sis=54 pruub=12.839952469s) [1] r=0 lpr=54 pi=[22,54)/1 crt=0'0 mlcod 0'0 active pruub 187.441436768s@ mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 06 09:42:32 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 54 pg[7.0( empty local-lis/les=22/23 n=0 ec=22/22 lis/c=22/22 les/c/f=23/23/0 sis=54 pruub=12.839952469s) [1] r=0 lpr=54 pi=[22,54)/1 crt=0'0 mlcod 0'0 unknown pruub 187.441436768s@ mbc={}] state<Start>: transitioning to Primary
Dec 06 09:42:32 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e54 do_prune osdmap full prune enabled
Dec 06 09:42:32 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:42:32 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f92480032d0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:42:32 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]': finished
Dec 06 09:42:32 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e55 e55: 3 total, 3 up, 3 in
Dec 06 09:42:32 compute-0 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num", "val": "32"}]: dispatch
Dec 06 09:42:32 compute-0 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec 06 09:42:32 compute-0 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]: dispatch
Dec 06 09:42:32 compute-0 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num", "val": "32"}]': finished
Dec 06 09:42:32 compute-0 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]': finished
Dec 06 09:42:32 compute-0 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]': finished
Dec 06 09:42:32 compute-0 ceph-mon[74327]: osdmap e54: 3 total, 3 up, 3 in
Dec 06 09:42:32 compute-0 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]: dispatch
Dec 06 09:42:32 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 55 pg[7.16( empty local-lis/les=22/23 n=0 ec=54/22 lis/c=22/22 les/c/f=23/23/0 sis=54) [1] r=0 lpr=54 pi=[22,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 09:42:32 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 55 pg[7.a( empty local-lis/les=22/23 n=0 ec=54/22 lis/c=22/22 les/c/f=23/23/0 sis=54) [1] r=0 lpr=54 pi=[22,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 09:42:32 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 55 pg[7.15( empty local-lis/les=22/23 n=0 ec=54/22 lis/c=22/22 les/c/f=23/23/0 sis=54) [1] r=0 lpr=54 pi=[22,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 09:42:32 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 55 pg[7.c( empty local-lis/les=22/23 n=0 ec=54/22 lis/c=22/22 les/c/f=23/23/0 sis=54) [1] r=0 lpr=54 pi=[22,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 09:42:32 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 55 pg[7.4( empty local-lis/les=22/23 n=0 ec=54/22 lis/c=22/22 les/c/f=23/23/0 sis=54) [1] r=0 lpr=54 pi=[22,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 09:42:32 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 55 pg[7.1c( empty local-lis/les=22/23 n=0 ec=54/22 lis/c=22/22 les/c/f=23/23/0 sis=54) [1] r=0 lpr=54 pi=[22,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 09:42:32 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 55 pg[7.1f( empty local-lis/les=22/23 n=0 ec=54/22 lis/c=22/22 les/c/f=23/23/0 sis=54) [1] r=0 lpr=54 pi=[22,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 09:42:32 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 55 pg[7.12( empty local-lis/les=22/23 n=0 ec=54/22 lis/c=22/22 les/c/f=23/23/0 sis=54) [1] r=0 lpr=54 pi=[22,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 09:42:32 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 55 pg[7.1d( empty local-lis/les=22/23 n=0 ec=54/22 lis/c=22/22 les/c/f=23/23/0 sis=54) [1] r=0 lpr=54 pi=[22,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 09:42:32 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 55 pg[7.13( empty local-lis/les=22/23 n=0 ec=54/22 lis/c=22/22 les/c/f=23/23/0 sis=54) [1] r=0 lpr=54 pi=[22,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 09:42:32 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 55 pg[7.11( empty local-lis/les=22/23 n=0 ec=54/22 lis/c=22/22 les/c/f=23/23/0 sis=54) [1] r=0 lpr=54 pi=[22,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 09:42:32 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 55 pg[7.17( empty local-lis/les=22/23 n=0 ec=54/22 lis/c=22/22 les/c/f=23/23/0 sis=54) [1] r=0 lpr=54 pi=[22,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 09:42:32 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 55 pg[7.14( empty local-lis/les=22/23 n=0 ec=54/22 lis/c=22/22 les/c/f=23/23/0 sis=54) [1] r=0 lpr=54 pi=[22,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 09:42:32 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 55 pg[7.10( empty local-lis/les=22/23 n=0 ec=54/22 lis/c=22/22 les/c/f=23/23/0 sis=54) [1] r=0 lpr=54 pi=[22,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 09:42:32 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 55 pg[7.b( empty local-lis/les=22/23 n=0 ec=54/22 lis/c=22/22 les/c/f=23/23/0 sis=54) [1] r=0 lpr=54 pi=[22,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 09:42:32 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 55 pg[7.8( empty local-lis/les=22/23 n=0 ec=54/22 lis/c=22/22 les/c/f=23/23/0 sis=54) [1] r=0 lpr=54 pi=[22,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 09:42:32 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 55 pg[7.9( empty local-lis/les=22/23 n=0 ec=54/22 lis/c=22/22 les/c/f=23/23/0 sis=54) [1] r=0 lpr=54 pi=[22,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 09:42:32 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 55 pg[7.e( empty local-lis/les=22/23 n=0 ec=54/22 lis/c=22/22 les/c/f=23/23/0 sis=54) [1] r=0 lpr=54 pi=[22,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 09:42:32 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 55 pg[7.6( empty local-lis/les=22/23 n=0 ec=54/22 lis/c=22/22 les/c/f=23/23/0 sis=54) [1] r=0 lpr=54 pi=[22,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 09:42:32 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 55 pg[7.5( empty local-lis/les=22/23 n=0 ec=54/22 lis/c=22/22 les/c/f=23/23/0 sis=54) [1] r=0 lpr=54 pi=[22,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 09:42:32 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 55 pg[7.7( empty local-lis/les=22/23 n=0 ec=54/22 lis/c=22/22 les/c/f=23/23/0 sis=54) [1] r=0 lpr=54 pi=[22,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 09:42:32 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 55 pg[7.1( empty local-lis/les=22/23 n=0 ec=54/22 lis/c=22/22 les/c/f=23/23/0 sis=54) [1] r=0 lpr=54 pi=[22,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 09:42:32 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 55 pg[7.2( empty local-lis/les=22/23 n=0 ec=54/22 lis/c=22/22 les/c/f=23/23/0 sis=54) [1] r=0 lpr=54 pi=[22,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 09:42:32 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 55 pg[7.d( empty local-lis/les=22/23 n=0 ec=54/22 lis/c=22/22 les/c/f=23/23/0 sis=54) [1] r=0 lpr=54 pi=[22,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 09:42:32 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 55 pg[7.f( empty local-lis/les=22/23 n=0 ec=54/22 lis/c=22/22 les/c/f=23/23/0 sis=54) [1] r=0 lpr=54 pi=[22,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 09:42:32 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 55 pg[7.3( empty local-lis/les=22/23 n=0 ec=54/22 lis/c=22/22 les/c/f=23/23/0 sis=54) [1] r=0 lpr=54 pi=[22,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 09:42:32 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 55 pg[7.1e( empty local-lis/les=22/23 n=0 ec=54/22 lis/c=22/22 les/c/f=23/23/0 sis=54) [1] r=0 lpr=54 pi=[22,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 09:42:32 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 55 pg[7.18( empty local-lis/les=22/23 n=0 ec=54/22 lis/c=22/22 les/c/f=23/23/0 sis=54) [1] r=0 lpr=54 pi=[22,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 09:42:32 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 55 pg[7.1b( empty local-lis/les=22/23 n=0 ec=54/22 lis/c=22/22 les/c/f=23/23/0 sis=54) [1] r=0 lpr=54 pi=[22,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 09:42:32 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 55 pg[7.1a( empty local-lis/les=22/23 n=0 ec=54/22 lis/c=22/22 les/c/f=23/23/0 sis=54) [1] r=0 lpr=54 pi=[22,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 09:42:32 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 55 pg[7.16( empty local-lis/les=54/55 n=0 ec=54/22 lis/c=22/22 les/c/f=23/23/0 sis=54) [1] r=0 lpr=54 pi=[22,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 09:42:32 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 55 pg[7.19( empty local-lis/les=22/23 n=0 ec=54/22 lis/c=22/22 les/c/f=23/23/0 sis=54) [1] r=0 lpr=54 pi=[22,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 09:42:32 compute-0 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e55: 3 total, 3 up, 3 in
Dec 06 09:42:32 compute-0 ceph-mgr[74618]: [progress INFO root] update: starting ev 0c16cd8d-5e6d-4b8a-aca7-6500be337c49 (PG autoscaler increasing pool 9 PGs from 1 to 32)
Dec 06 09:42:32 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 55 pg[7.c( empty local-lis/les=54/55 n=0 ec=54/22 lis/c=22/22 les/c/f=23/23/0 sis=54) [1] r=0 lpr=54 pi=[22,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 09:42:32 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 55 pg[7.15( empty local-lis/les=54/55 n=0 ec=54/22 lis/c=22/22 les/c/f=23/23/0 sis=54) [1] r=0 lpr=54 pi=[22,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 09:42:32 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 55 pg[7.4( empty local-lis/les=54/55 n=0 ec=54/22 lis/c=22/22 les/c/f=23/23/0 sis=54) [1] r=0 lpr=54 pi=[22,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 09:42:32 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 55 pg[7.1f( empty local-lis/les=54/55 n=0 ec=54/22 lis/c=22/22 les/c/f=23/23/0 sis=54) [1] r=0 lpr=54 pi=[22,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 09:42:32 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 55 pg[7.a( empty local-lis/les=54/55 n=0 ec=54/22 lis/c=22/22 les/c/f=23/23/0 sis=54) [1] r=0 lpr=54 pi=[22,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 09:42:32 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 55 pg[7.1d( empty local-lis/les=54/55 n=0 ec=54/22 lis/c=22/22 les/c/f=23/23/0 sis=54) [1] r=0 lpr=54 pi=[22,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 09:42:32 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 55 pg[7.12( empty local-lis/les=54/55 n=0 ec=54/22 lis/c=22/22 les/c/f=23/23/0 sis=54) [1] r=0 lpr=54 pi=[22,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 09:42:32 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 55 pg[7.11( empty local-lis/les=54/55 n=0 ec=54/22 lis/c=22/22 les/c/f=23/23/0 sis=54) [1] r=0 lpr=54 pi=[22,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 09:42:32 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 55 pg[7.14( empty local-lis/les=54/55 n=0 ec=54/22 lis/c=22/22 les/c/f=23/23/0 sis=54) [1] r=0 lpr=54 pi=[22,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 09:42:32 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 55 pg[7.10( empty local-lis/les=54/55 n=0 ec=54/22 lis/c=22/22 les/c/f=23/23/0 sis=54) [1] r=0 lpr=54 pi=[22,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 09:42:32 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 55 pg[7.17( empty local-lis/les=54/55 n=0 ec=54/22 lis/c=22/22 les/c/f=23/23/0 sis=54) [1] r=0 lpr=54 pi=[22,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 09:42:32 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 55 pg[7.1c( empty local-lis/les=54/55 n=0 ec=54/22 lis/c=22/22 les/c/f=23/23/0 sis=54) [1] r=0 lpr=54 pi=[22,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 09:42:32 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 55 pg[7.8( empty local-lis/les=54/55 n=0 ec=54/22 lis/c=22/22 les/c/f=23/23/0 sis=54) [1] r=0 lpr=54 pi=[22,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 09:42:32 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 55 pg[7.13( empty local-lis/les=54/55 n=0 ec=54/22 lis/c=22/22 les/c/f=23/23/0 sis=54) [1] r=0 lpr=54 pi=[22,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 09:42:32 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 55 pg[7.e( empty local-lis/les=54/55 n=0 ec=54/22 lis/c=22/22 les/c/f=23/23/0 sis=54) [1] r=0 lpr=54 pi=[22,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 09:42:32 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 55 pg[7.9( empty local-lis/les=54/55 n=0 ec=54/22 lis/c=22/22 les/c/f=23/23/0 sis=54) [1] r=0 lpr=54 pi=[22,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 09:42:32 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"} v 0)
Dec 06 09:42:32 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]: dispatch
Dec 06 09:42:32 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 55 pg[7.5( empty local-lis/les=54/55 n=0 ec=54/22 lis/c=22/22 les/c/f=23/23/0 sis=54) [1] r=0 lpr=54 pi=[22,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 09:42:32 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 55 pg[7.7( empty local-lis/les=54/55 n=0 ec=54/22 lis/c=22/22 les/c/f=23/23/0 sis=54) [1] r=0 lpr=54 pi=[22,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 09:42:32 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 55 pg[7.0( empty local-lis/les=54/55 n=0 ec=22/22 lis/c=22/22 les/c/f=23/23/0 sis=54) [1] r=0 lpr=54 pi=[22,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 09:42:32 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 55 pg[7.1( empty local-lis/les=54/55 n=0 ec=54/22 lis/c=22/22 les/c/f=23/23/0 sis=54) [1] r=0 lpr=54 pi=[22,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 09:42:32 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 55 pg[7.2( empty local-lis/les=54/55 n=0 ec=54/22 lis/c=22/22 les/c/f=23/23/0 sis=54) [1] r=0 lpr=54 pi=[22,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 09:42:32 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 55 pg[7.d( empty local-lis/les=54/55 n=0 ec=54/22 lis/c=22/22 les/c/f=23/23/0 sis=54) [1] r=0 lpr=54 pi=[22,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 09:42:32 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 55 pg[7.1e( empty local-lis/les=54/55 n=0 ec=54/22 lis/c=22/22 les/c/f=23/23/0 sis=54) [1] r=0 lpr=54 pi=[22,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 09:42:32 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 55 pg[7.f( empty local-lis/les=54/55 n=0 ec=54/22 lis/c=22/22 les/c/f=23/23/0 sis=54) [1] r=0 lpr=54 pi=[22,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 09:42:32 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 55 pg[7.6( empty local-lis/les=54/55 n=0 ec=54/22 lis/c=22/22 les/c/f=23/23/0 sis=54) [1] r=0 lpr=54 pi=[22,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 09:42:32 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 55 pg[7.b( empty local-lis/les=54/55 n=0 ec=54/22 lis/c=22/22 les/c/f=23/23/0 sis=54) [1] r=0 lpr=54 pi=[22,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 09:42:32 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 55 pg[7.3( empty local-lis/les=54/55 n=0 ec=54/22 lis/c=22/22 les/c/f=23/23/0 sis=54) [1] r=0 lpr=54 pi=[22,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 09:42:32 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 55 pg[7.1b( empty local-lis/les=54/55 n=0 ec=54/22 lis/c=22/22 les/c/f=23/23/0 sis=54) [1] r=0 lpr=54 pi=[22,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 09:42:32 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 55 pg[7.1a( empty local-lis/les=54/55 n=0 ec=54/22 lis/c=22/22 les/c/f=23/23/0 sis=54) [1] r=0 lpr=54 pi=[22,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 09:42:32 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 55 pg[7.18( empty local-lis/les=54/55 n=0 ec=54/22 lis/c=22/22 les/c/f=23/23/0 sis=54) [1] r=0 lpr=54 pi=[22,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 09:42:32 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 55 pg[7.19( empty local-lis/les=54/55 n=0 ec=54/22 lis/c=22/22 les/c/f=23/23/0 sis=54) [1] r=0 lpr=54 pi=[22,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 09:42:32 compute-0 ceph-osd[82803]: log_channel(cluster) log [DBG] : 7.16 scrub starts
Dec 06 09:42:32 compute-0 ceph-osd[82803]: log_channel(cluster) log [DBG] : 7.16 scrub ok
Dec 06 09:42:32 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v52: 182 pgs: 46 unknown, 136 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Dec 06 09:42:33 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num_actual", "val": "32"} v 0)
Dec 06 09:42:33 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec 06 09:42:33 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"} v 0)
Dec 06 09:42:33 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec 06 09:42:33 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:42:33 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f92280016a0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:42:33 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e55 do_prune osdmap full prune enabled
Dec 06 09:42:33 compute-0 ceph-mon[74327]: pgmap v49: 136 pgs: 136 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 383 B/s rd, 0 op/s
Dec 06 09:42:33 compute-0 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]': finished
Dec 06 09:42:33 compute-0 ceph-mon[74327]: osdmap e55: 3 total, 3 up, 3 in
Dec 06 09:42:33 compute-0 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]: dispatch
Dec 06 09:42:33 compute-0 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec 06 09:42:33 compute-0 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec 06 09:42:33 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]': finished
Dec 06 09:42:33 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num_actual", "val": "32"}]': finished
Dec 06 09:42:33 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]': finished
Dec 06 09:42:33 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e56 e56: 3 total, 3 up, 3 in
Dec 06 09:42:33 compute-0 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e56: 3 total, 3 up, 3 in
Dec 06 09:42:33 compute-0 ceph-mgr[74618]: [progress INFO root] update: starting ev 1bc71b19-98e7-4226-b8eb-91d69d843741 (PG autoscaler increasing pool 10 PGs from 1 to 32)
Dec 06 09:42:33 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"} v 0)
Dec 06 09:42:33 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]: dispatch
Dec 06 09:42:33 compute-0 ceph-osd[82803]: log_channel(cluster) log [DBG] : 7.c scrub starts
Dec 06 09:42:33 compute-0 ceph-osd[82803]: log_channel(cluster) log [DBG] : 7.c scrub ok
Dec 06 09:42:34 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:42:34 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9240001c00 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:42:34 compute-0 ceph-mgr[74618]: [progress WARNING root] Starting Global Recovery Event,108 pgs not in active + clean state
Dec 06 09:42:34 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec 06 09:42:34 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:42:34 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec 06 09:42:34 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:42:34 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.nfs.cephfs}] v 0)
Dec 06 09:42:34 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:42:34 compute-0 ceph-mgr[74618]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Dec 06 09:42:34 compute-0 ceph-mgr[74618]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Dec 06 09:42:34 compute-0 ceph-mgr[74618]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Dec 06 09:42:34 compute-0 ceph-mgr[74618]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Dec 06 09:42:34 compute-0 ceph-mgr[74618]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Dec 06 09:42:34 compute-0 ceph-mgr[74618]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Dec 06 09:42:34 compute-0 ceph-mgr[74618]: [cephadm INFO cephadm.serve] Deploying daemon keepalived.nfs.cephfs.compute-0.ylrrzf on compute-0
Dec 06 09:42:34 compute-0 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Deploying daemon keepalived.nfs.cephfs.compute-0.ylrrzf on compute-0
Dec 06 09:42:34 compute-0 sudo[96141]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 09:42:34 compute-0 sudo[96141]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:42:34 compute-0 sudo[96141]: pam_unix(sudo:session): session closed for user root
Dec 06 09:42:34 compute-0 sudo[96166]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/keepalived:2.2.4 --timeout 895 _orch deploy --fsid 5ecd3f74-dade-5fc4-92ce-8950ae424258
Dec 06 09:42:34 compute-0 sudo[96166]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:42:34 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:42:34 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9230001ac0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:42:34 compute-0 ceph-mon[74327]: 7.16 scrub starts
Dec 06 09:42:34 compute-0 ceph-mon[74327]: 7.16 scrub ok
Dec 06 09:42:34 compute-0 ceph-mon[74327]: pgmap v52: 182 pgs: 46 unknown, 136 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Dec 06 09:42:34 compute-0 ceph-mon[74327]: 6.c scrub starts
Dec 06 09:42:34 compute-0 ceph-mon[74327]: 6.c scrub ok
Dec 06 09:42:34 compute-0 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]': finished
Dec 06 09:42:34 compute-0 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num_actual", "val": "32"}]': finished
Dec 06 09:42:34 compute-0 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]': finished
Dec 06 09:42:34 compute-0 ceph-mon[74327]: osdmap e56: 3 total, 3 up, 3 in
Dec 06 09:42:34 compute-0 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]: dispatch
Dec 06 09:42:34 compute-0 ceph-mon[74327]: 7.c scrub starts
Dec 06 09:42:34 compute-0 ceph-mon[74327]: 7.c scrub ok
Dec 06 09:42:34 compute-0 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:42:34 compute-0 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:42:34 compute-0 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:42:34 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e56 do_prune osdmap full prune enabled
Dec 06 09:42:34 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]': finished
Dec 06 09:42:34 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e57 e57: 3 total, 3 up, 3 in
Dec 06 09:42:34 compute-0 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e57: 3 total, 3 up, 3 in
Dec 06 09:42:34 compute-0 ceph-mgr[74618]: [progress INFO root] update: starting ev 4397fc66-c55d-479e-ab63-e1f82d644844 (PG autoscaler increasing pool 11 PGs from 1 to 32)
Dec 06 09:42:34 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"} v 0)
Dec 06 09:42:34 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]: dispatch
Dec 06 09:42:34 compute-0 ceph-osd[82803]: log_channel(cluster) log [DBG] : 7.15 scrub starts
Dec 06 09:42:34 compute-0 ceph-osd[82803]: log_channel(cluster) log [DBG] : 7.15 scrub ok
Dec 06 09:42:34 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v55: 244 pgs: 244 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec 06 09:42:35 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"} v 0)
Dec 06 09:42:35 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec 06 09:42:35 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"} v 0)
Dec 06 09:42:35 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec 06 09:42:35 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": ".nfs", "var": "pgp_num_actual", "val": "32"} v 0)
Dec 06 09:42:35 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": ".nfs", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec 06 09:42:35 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"} v 0)
Dec 06 09:42:35 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec 06 09:42:35 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"} v 0)
Dec 06 09:42:35 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec 06 09:42:35 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"} v 0)
Dec 06 09:42:35 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"}]: dispatch
Dec 06 09:42:35 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:42:35 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f92480032d0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:42:35 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e57 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 09:42:35 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e57 do_prune osdmap full prune enabled
Dec 06 09:42:35 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]': finished
Dec 06 09:42:35 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]': finished
Dec 06 09:42:35 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]': finished
Dec 06 09:42:35 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": ".nfs", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 06 09:42:35 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 06 09:42:35 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 06 09:42:35 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"}]': finished
Dec 06 09:42:35 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e58 e58: 3 total, 3 up, 3 in
Dec 06 09:42:35 compute-0 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e58: 3 total, 3 up, 3 in
Dec 06 09:42:35 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 58 pg[7.4( empty local-lis/les=54/55 n=0 ec=54/22 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=12.953510284s) [0] r=-1 lpr=58 pi=[54,58)/1 crt=0'0 mlcod 0'0 active pruub 191.057052612s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 06 09:42:35 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 58 pg[7.16( empty local-lis/les=54/55 n=0 ec=54/22 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=12.949063301s) [2] r=-1 lpr=58 pi=[54,58)/1 crt=0'0 mlcod 0'0 active pruub 191.052627563s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 06 09:42:35 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 58 pg[7.a( empty local-lis/les=54/55 n=0 ec=54/22 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=12.953651428s) [2] r=-1 lpr=58 pi=[54,58)/1 crt=0'0 mlcod 0'0 active pruub 191.057296753s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 06 09:42:35 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 58 pg[7.4( empty local-lis/les=54/55 n=0 ec=54/22 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=12.953392982s) [0] r=-1 lpr=58 pi=[54,58)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 191.057052612s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 09:42:35 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 58 pg[7.16( empty local-lis/les=54/55 n=0 ec=54/22 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=12.948952675s) [2] r=-1 lpr=58 pi=[54,58)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 191.052627563s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 09:42:35 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 58 pg[7.a( empty local-lis/les=54/55 n=0 ec=54/22 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=12.953621864s) [2] r=-1 lpr=58 pi=[54,58)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 191.057296753s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 09:42:35 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 58 pg[7.1d( empty local-lis/les=54/55 n=0 ec=54/22 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=12.953528404s) [2] r=-1 lpr=58 pi=[54,58)/1 crt=0'0 mlcod 0'0 active pruub 191.057434082s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 06 09:42:35 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 58 pg[7.13( empty local-lis/les=54/55 n=0 ec=54/22 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=12.953548431s) [0] r=-1 lpr=58 pi=[54,58)/1 crt=0'0 mlcod 0'0 active pruub 191.057525635s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 06 09:42:35 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 58 pg[7.1d( empty local-lis/les=54/55 n=0 ec=54/22 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=12.953459740s) [2] r=-1 lpr=58 pi=[54,58)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 191.057434082s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 09:42:35 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 58 pg[7.13( empty local-lis/les=54/55 n=0 ec=54/22 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=12.953528404s) [0] r=-1 lpr=58 pi=[54,58)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 191.057525635s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 09:42:35 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 58 pg[7.10( empty local-lis/les=54/55 n=0 ec=54/22 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=12.953630447s) [0] r=-1 lpr=58 pi=[54,58)/1 crt=0'0 mlcod 0'0 active pruub 191.057693481s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 06 09:42:35 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 58 pg[7.10( empty local-lis/les=54/55 n=0 ec=54/22 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=12.953598022s) [0] r=-1 lpr=58 pi=[54,58)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 191.057693481s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 09:42:35 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 58 pg[7.11( empty local-lis/les=54/55 n=0 ec=54/22 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=12.953407288s) [2] r=-1 lpr=58 pi=[54,58)/1 crt=0'0 mlcod 0'0 active pruub 191.057586670s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 06 09:42:35 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 58 pg[7.11( empty local-lis/les=54/55 n=0 ec=54/22 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=12.953390121s) [2] r=-1 lpr=58 pi=[54,58)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 191.057586670s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 09:42:35 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 58 pg[7.14( empty local-lis/les=54/55 n=0 ec=54/22 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=12.953093529s) [2] r=-1 lpr=58 pi=[54,58)/1 crt=0'0 mlcod 0'0 active pruub 191.057647705s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 06 09:42:35 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 58 pg[7.1f( empty local-lis/les=54/55 n=0 ec=54/22 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=12.952907562s) [2] r=-1 lpr=58 pi=[54,58)/1 crt=0'0 mlcod 0'0 active pruub 191.057083130s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 06 09:42:35 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 58 pg[7.b( empty local-lis/les=54/55 n=0 ec=54/22 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=12.957175255s) [0] r=-1 lpr=58 pi=[54,58)/1 crt=0'0 mlcod 0'0 active pruub 191.061782837s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 06 09:42:35 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 58 pg[7.14( empty local-lis/les=54/55 n=0 ec=54/22 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=12.953067780s) [2] r=-1 lpr=58 pi=[54,58)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 191.057647705s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 09:42:35 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 58 pg[7.b( empty local-lis/les=54/55 n=0 ec=54/22 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=12.957158089s) [0] r=-1 lpr=58 pi=[54,58)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 191.061782837s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 09:42:35 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 58 pg[7.e( empty local-lis/les=54/55 n=0 ec=54/22 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=12.952960014s) [0] r=-1 lpr=58 pi=[54,58)/1 crt=0'0 mlcod 0'0 active pruub 191.057785034s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 06 09:42:35 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 58 pg[7.1f( empty local-lis/les=54/55 n=0 ec=54/22 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=12.952484131s) [2] r=-1 lpr=58 pi=[54,58)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 191.057083130s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 09:42:35 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 58 pg[7.8( empty local-lis/les=54/55 n=0 ec=54/22 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=12.952932358s) [0] r=-1 lpr=58 pi=[54,58)/1 crt=0'0 mlcod 0'0 active pruub 191.057769775s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 06 09:42:35 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 58 pg[7.e( empty local-lis/les=54/55 n=0 ec=54/22 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=12.952944756s) [0] r=-1 lpr=58 pi=[54,58)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 191.057785034s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 09:42:35 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 58 pg[7.8( empty local-lis/les=54/55 n=0 ec=54/22 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=12.952902794s) [0] r=-1 lpr=58 pi=[54,58)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 191.057769775s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 09:42:35 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 58 pg[7.6( empty local-lis/les=54/55 n=0 ec=54/22 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=12.956356049s) [0] r=-1 lpr=58 pi=[54,58)/1 crt=0'0 mlcod 0'0 active pruub 191.061447144s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 06 09:42:35 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 58 pg[7.6( empty local-lis/les=54/55 n=0 ec=54/22 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=12.956339836s) [0] r=-1 lpr=58 pi=[54,58)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 191.061447144s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 09:42:35 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 58 pg[7.5( empty local-lis/les=54/55 n=0 ec=54/22 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=12.955930710s) [2] r=-1 lpr=58 pi=[54,58)/1 crt=0'0 mlcod 0'0 active pruub 191.061218262s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 06 09:42:35 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 58 pg[7.5( empty local-lis/les=54/55 n=0 ec=54/22 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=12.955874443s) [2] r=-1 lpr=58 pi=[54,58)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 191.061218262s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 09:42:35 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 58 pg[7.2( empty local-lis/les=54/55 n=0 ec=54/22 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=12.955976486s) [0] r=-1 lpr=58 pi=[54,58)/1 crt=0'0 mlcod 0'0 active pruub 191.061584473s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 06 09:42:35 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 58 pg[7.3( empty local-lis/les=54/55 n=0 ec=54/22 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=12.956527710s) [0] r=-1 lpr=58 pi=[54,58)/1 crt=0'0 mlcod 0'0 active pruub 191.062194824s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 06 09:42:35 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 58 pg[7.2( empty local-lis/les=54/55 n=0 ec=54/22 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=12.955955505s) [0] r=-1 lpr=58 pi=[54,58)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 191.061584473s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 09:42:35 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 58 pg[7.3( empty local-lis/les=54/55 n=0 ec=54/22 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=12.956453323s) [0] r=-1 lpr=58 pi=[54,58)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 191.062194824s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 09:42:35 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 58 pg[7.f( empty local-lis/les=54/55 n=0 ec=54/22 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=12.955749512s) [0] r=-1 lpr=58 pi=[54,58)/1 crt=0'0 mlcod 0'0 active pruub 191.061660767s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 06 09:42:35 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 58 pg[7.f( empty local-lis/les=54/55 n=0 ec=54/22 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=12.955728531s) [0] r=-1 lpr=58 pi=[54,58)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 191.061660767s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 09:42:35 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 58 pg[7.9( empty local-lis/les=54/55 n=0 ec=54/22 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=12.951803207s) [0] r=-1 lpr=58 pi=[54,58)/1 crt=0'0 mlcod 0'0 active pruub 191.057800293s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 06 09:42:35 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 58 pg[7.9( empty local-lis/les=54/55 n=0 ec=54/22 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=12.951780319s) [0] r=-1 lpr=58 pi=[54,58)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 191.057800293s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 09:42:35 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 58 pg[7.18( empty local-lis/les=54/55 n=0 ec=54/22 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=12.956185341s) [0] r=-1 lpr=58 pi=[54,58)/1 crt=0'0 mlcod 0'0 active pruub 191.062377930s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 06 09:42:35 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 58 pg[7.1e( empty local-lis/les=54/55 n=0 ec=54/22 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=12.955442429s) [0] r=-1 lpr=58 pi=[54,58)/1 crt=0'0 mlcod 0'0 active pruub 191.061645508s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 06 09:42:35 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 58 pg[7.18( empty local-lis/les=54/55 n=0 ec=54/22 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=12.956168175s) [0] r=-1 lpr=58 pi=[54,58)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 191.062377930s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 09:42:35 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 58 pg[10.0( v 51'1027 (0'0,51'1027] local-lis/les=45/46 n=178 ec=45/45 lis/c=45/45 les/c/f=46/46/0 sis=58 pruub=14.998137474s) [1] r=0 lpr=58 pi=[45,58)/1 crt=51'1027 lcod 51'1026 mlcod 51'1026 active pruub 193.104400635s@ mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 06 09:42:35 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 58 pg[7.1e( empty local-lis/les=54/55 n=0 ec=54/22 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=12.955418587s) [0] r=-1 lpr=58 pi=[54,58)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 191.061645508s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 09:42:35 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 58 pg[7.1b( empty local-lis/les=54/55 n=0 ec=54/22 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=12.955781937s) [0] r=-1 lpr=58 pi=[54,58)/1 crt=0'0 mlcod 0'0 active pruub 191.062271118s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 06 09:42:35 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 58 pg[7.1b( empty local-lis/les=54/55 n=0 ec=54/22 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=12.955761909s) [0] r=-1 lpr=58 pi=[54,58)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 191.062271118s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 09:42:35 compute-0 ceph-mgr[74618]: [progress INFO root] update: starting ev ae258c83-ccb1-42e3-8a21-9cea967bd3ac (PG autoscaler increasing pool 12 PGs from 1 to 32)
Dec 06 09:42:35 compute-0 ceph-mgr[74618]: [progress INFO root] complete: finished ev cdc8c502-ca9c-4899-a366-64f1ce8e52db (PG autoscaler increasing pool 6 PGs from 1 to 16)
Dec 06 09:42:35 compute-0 ceph-mgr[74618]: [progress INFO root] Completed event cdc8c502-ca9c-4899-a366-64f1ce8e52db (PG autoscaler increasing pool 6 PGs from 1 to 16) in 6 seconds
Dec 06 09:42:35 compute-0 ceph-mgr[74618]: [progress INFO root] complete: finished ev bfa2975c-8764-478e-9bbb-5a32e2b80a95 (PG autoscaler increasing pool 7 PGs from 1 to 32)
Dec 06 09:42:35 compute-0 ceph-mgr[74618]: [progress INFO root] Completed event bfa2975c-8764-478e-9bbb-5a32e2b80a95 (PG autoscaler increasing pool 7 PGs from 1 to 32) in 5 seconds
Dec 06 09:42:35 compute-0 ceph-mgr[74618]: [progress INFO root] complete: finished ev 60d5fd36-984b-4eab-a302-a71ae27a4250 (PG autoscaler increasing pool 8 PGs from 1 to 32)
Dec 06 09:42:35 compute-0 ceph-mgr[74618]: [progress INFO root] Completed event 60d5fd36-984b-4eab-a302-a71ae27a4250 (PG autoscaler increasing pool 8 PGs from 1 to 32) in 4 seconds
Dec 06 09:42:35 compute-0 ceph-mgr[74618]: [progress INFO root] complete: finished ev 0c16cd8d-5e6d-4b8a-aca7-6500be337c49 (PG autoscaler increasing pool 9 PGs from 1 to 32)
Dec 06 09:42:35 compute-0 ceph-mgr[74618]: [progress INFO root] Completed event 0c16cd8d-5e6d-4b8a-aca7-6500be337c49 (PG autoscaler increasing pool 9 PGs from 1 to 32) in 3 seconds
Dec 06 09:42:35 compute-0 ceph-mgr[74618]: [progress INFO root] complete: finished ev 1bc71b19-98e7-4226-b8eb-91d69d843741 (PG autoscaler increasing pool 10 PGs from 1 to 32)
Dec 06 09:42:35 compute-0 ceph-mgr[74618]: [progress INFO root] Completed event 1bc71b19-98e7-4226-b8eb-91d69d843741 (PG autoscaler increasing pool 10 PGs from 1 to 32) in 2 seconds
Dec 06 09:42:35 compute-0 ceph-mgr[74618]: [progress INFO root] complete: finished ev 4397fc66-c55d-479e-ab63-e1f82d644844 (PG autoscaler increasing pool 11 PGs from 1 to 32)
Dec 06 09:42:35 compute-0 ceph-mgr[74618]: [progress INFO root] Completed event 4397fc66-c55d-479e-ab63-e1f82d644844 (PG autoscaler increasing pool 11 PGs from 1 to 32) in 1 seconds
Dec 06 09:42:35 compute-0 ceph-mgr[74618]: [progress INFO root] complete: finished ev ae258c83-ccb1-42e3-8a21-9cea967bd3ac (PG autoscaler increasing pool 12 PGs from 1 to 32)
Dec 06 09:42:35 compute-0 ceph-mgr[74618]: [progress INFO root] Completed event ae258c83-ccb1-42e3-8a21-9cea967bd3ac (PG autoscaler increasing pool 12 PGs from 1 to 32) in 0 seconds
Dec 06 09:42:35 compute-0 ceph-mon[74327]: 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Dec 06 09:42:35 compute-0 ceph-mon[74327]: 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Dec 06 09:42:35 compute-0 ceph-mon[74327]: 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Dec 06 09:42:35 compute-0 ceph-mon[74327]: Deploying daemon keepalived.nfs.cephfs.compute-0.ylrrzf on compute-0
Dec 06 09:42:35 compute-0 ceph-mon[74327]: 6.8 scrub starts
Dec 06 09:42:35 compute-0 ceph-mon[74327]: 6.8 scrub ok
Dec 06 09:42:35 compute-0 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]': finished
Dec 06 09:42:35 compute-0 ceph-mon[74327]: osdmap e57: 3 total, 3 up, 3 in
Dec 06 09:42:35 compute-0 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]: dispatch
Dec 06 09:42:35 compute-0 ceph-mon[74327]: 7.15 scrub starts
Dec 06 09:42:35 compute-0 ceph-mon[74327]: 7.15 scrub ok
Dec 06 09:42:35 compute-0 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec 06 09:42:35 compute-0 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec 06 09:42:35 compute-0 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": ".nfs", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec 06 09:42:35 compute-0 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec 06 09:42:35 compute-0 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec 06 09:42:35 compute-0 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"}]: dispatch
Dec 06 09:42:35 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 58 pg[8.14( empty local-lis/les=0/0 n=0 ec=56/40 lis/c=56/56 les/c/f=57/57/0 sis=58) [1] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 09:42:35 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 58 pg[8.10( empty local-lis/les=0/0 n=0 ec=56/40 lis/c=56/56 les/c/f=57/57/0 sis=58) [1] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 09:42:35 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 58 pg[9.11( empty local-lis/les=0/0 n=0 ec=56/43 lis/c=56/56 les/c/f=57/57/0 sis=58) [1] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 09:42:35 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 58 pg[8.8( empty local-lis/les=0/0 n=0 ec=56/40 lis/c=56/56 les/c/f=57/57/0 sis=58) [1] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 09:42:35 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 58 pg[9.f( empty local-lis/les=0/0 n=0 ec=56/43 lis/c=56/56 les/c/f=57/57/0 sis=58) [1] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 09:42:35 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 58 pg[9.e( empty local-lis/les=0/0 n=0 ec=56/43 lis/c=56/56 les/c/f=57/57/0 sis=58) [1] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 09:42:35 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 58 pg[9.15( empty local-lis/les=0/0 n=0 ec=56/43 lis/c=56/56 les/c/f=57/57/0 sis=58) [1] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 09:42:35 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 58 pg[9.a( empty local-lis/les=0/0 n=0 ec=56/43 lis/c=56/56 les/c/f=57/57/0 sis=58) [1] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 09:42:35 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 58 pg[9.d( empty local-lis/les=0/0 n=0 ec=56/43 lis/c=56/56 les/c/f=57/57/0 sis=58) [1] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 09:42:35 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 58 pg[9.6( empty local-lis/les=0/0 n=0 ec=56/43 lis/c=56/56 les/c/f=57/57/0 sis=58) [1] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 09:42:35 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 58 pg[8.1b( empty local-lis/les=0/0 n=0 ec=56/40 lis/c=56/56 les/c/f=57/57/0 sis=58) [1] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 09:42:35 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 58 pg[8.4( empty local-lis/les=0/0 n=0 ec=56/40 lis/c=56/56 les/c/f=57/57/0 sis=58) [1] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 09:42:35 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 58 pg[8.19( empty local-lis/les=0/0 n=0 ec=56/40 lis/c=56/56 les/c/f=57/57/0 sis=58) [1] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 09:42:35 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 58 pg[9.12( empty local-lis/les=0/0 n=0 ec=56/43 lis/c=56/56 les/c/f=57/57/0 sis=58) [1] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 09:42:35 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 58 pg[8.12( empty local-lis/les=0/0 n=0 ec=56/40 lis/c=56/56 les/c/f=57/57/0 sis=58) [1] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 09:42:35 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 58 pg[9.10( empty local-lis/les=0/0 n=0 ec=56/43 lis/c=56/56 les/c/f=57/57/0 sis=58) [1] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 09:42:35 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 58 pg[8.17( empty local-lis/les=0/0 n=0 ec=56/40 lis/c=56/56 les/c/f=57/57/0 sis=58) [1] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 09:42:35 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 58 pg[8.18( empty local-lis/les=0/0 n=0 ec=56/40 lis/c=56/56 les/c/f=57/57/0 sis=58) [1] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 09:42:35 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 58 pg[10.0( v 51'1027 lc 0'0 (0'0,51'1027] local-lis/les=45/46 n=5 ec=45/45 lis/c=45/45 les/c/f=46/46/0 sis=58 pruub=14.998137474s) [1] r=0 lpr=58 pi=[45,58)/1 crt=51'1027 lcod 51'1026 mlcod 0'0 unknown pruub 193.104400635s@ mbc={}] state<Start>: transitioning to Primary
Dec 06 09:42:35 compute-0 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1).collection(10.0_head 0x55fcdf128900) operator()   moving buffer(0x55fcdfa79f68 space 0x55fcdf8f2eb0 0x0~1000 clean)
Dec 06 09:42:35 compute-0 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1).collection(10.0_head 0x55fcdf128900) operator()   moving buffer(0x55fcdfa79ce8 space 0x55fcdf36e9d0 0x0~1000 clean)
Dec 06 09:42:35 compute-0 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1).collection(10.0_head 0x55fcdf128900) operator()   moving buffer(0x55fcdfa62e88 space 0x55fcdf5b0d10 0x0~1000 clean)
Dec 06 09:42:35 compute-0 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1).collection(10.0_head 0x55fcdf128900) operator()   moving buffer(0x55fcdfa49748 space 0x55fcdf9bb6d0 0x0~1000 clean)
Dec 06 09:42:35 compute-0 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1).collection(10.0_head 0x55fcdf128900) operator()   moving buffer(0x55fcdfa5e2a8 space 0x55fcdf9bba10 0x0~1000 clean)
Dec 06 09:42:35 compute-0 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1).collection(10.0_head 0x55fcdf128900) operator()   moving buffer(0x55fcdf7dee88 space 0x55fcdf5b0aa0 0x0~1000 clean)
Dec 06 09:42:35 compute-0 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1).collection(10.0_head 0x55fcdf128900) operator()   moving buffer(0x55fcdfa91c48 space 0x55fcdf9ec760 0x0~1000 clean)
Dec 06 09:42:35 compute-0 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1).collection(10.0_head 0x55fcdf128900) operator()   moving buffer(0x55fcdfa5f428 space 0x55fcdf9bbae0 0x0~1000 clean)
Dec 06 09:42:35 compute-0 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1).collection(10.0_head 0x55fcdf128900) operator()   moving buffer(0x55fcdfa5e348 space 0x55fcdf39f600 0x0~1000 clean)
Dec 06 09:42:35 compute-0 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1).collection(10.0_head 0x55fcdf128900) operator()   moving buffer(0x55fcdfa5f1a8 space 0x55fcdf9ec0e0 0x0~1000 clean)
Dec 06 09:42:35 compute-0 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1).collection(10.0_head 0x55fcdf128900) operator()   moving buffer(0x55fcdfa62668 space 0x55fcdf5b0f80 0x0~1000 clean)
Dec 06 09:42:35 compute-0 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1).collection(10.0_head 0x55fcdf128900) operator()   moving buffer(0x55fcdfa5fec8 space 0x55fcdf9ec5c0 0x0~1000 clean)
Dec 06 09:42:35 compute-0 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1).collection(10.0_head 0x55fcdf128900) operator()   moving buffer(0x55fcdfa91ba8 space 0x55fcdf9ecde0 0x0~1000 clean)
Dec 06 09:42:35 compute-0 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1).collection(10.0_head 0x55fcdf128900) operator()   moving buffer(0x55fcdfa78168 space 0x55fcdf9bb600 0x0~1000 clean)
Dec 06 09:42:35 compute-0 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1).collection(10.0_head 0x55fcdf128900) operator()   moving buffer(0x55fcdfa5fa68 space 0x55fcdf9bb940 0x0~1000 clean)
Dec 06 09:42:35 compute-0 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1).collection(10.0_head 0x55fcdf128900) operator()   moving buffer(0x55fcdfa63a68 space 0x55fcdfad7870 0x0~1000 clean)
Dec 06 09:42:35 compute-0 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1).collection(10.0_head 0x55fcdf128900) operator()   moving buffer(0x55fcdfa90a28 space 0x55fcdf9ed390 0x0~1000 clean)
Dec 06 09:42:35 compute-0 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1).collection(10.0_head 0x55fcdf128900) operator()   moving buffer(0x55fcdfa5f928 space 0x55fcdf9ed1f0 0x0~1000 clean)
Dec 06 09:42:35 compute-0 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1).collection(10.0_head 0x55fcdf128900) operator()   moving buffer(0x55fcdfa62ca8 space 0x55fcdf5b0de0 0x0~1000 clean)
Dec 06 09:42:35 compute-0 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1).collection(10.0_head 0x55fcdf128900) operator()   moving buffer(0x55fcdfa40028 space 0x55fcdf5b09d0 0x0~1000 clean)
Dec 06 09:42:35 compute-0 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1).collection(10.0_head 0x55fcdf128900) operator()   moving buffer(0x55fcdf3a00c8 space 0x55fcdfad7ae0 0x0~1000 clean)
Dec 06 09:42:35 compute-0 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1).collection(10.0_head 0x55fcdf128900) operator()   moving buffer(0x55fcdfa5e3e8 space 0x55fcdf9ec350 0x0~1000 clean)
Dec 06 09:42:35 compute-0 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1).collection(10.0_head 0x55fcdf128900) operator()   moving buffer(0x55fcdfa79d88 space 0x55fcdf5b0b70 0x0~1000 clean)
Dec 06 09:42:35 compute-0 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1).collection(10.0_head 0x55fcdf128900) operator()   moving buffer(0x55fcdede5568 space 0x55fcdf9bb530 0x0~1000 clean)
Dec 06 09:42:35 compute-0 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1).collection(10.0_head 0x55fcdf128900) operator()   moving buffer(0x55fcdfa63248 space 0x55fcdf9ec900 0x0~1000 clean)
Dec 06 09:42:35 compute-0 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1).collection(10.0_head 0x55fcdf128900) operator()   moving buffer(0x55fcdfa5f888 space 0x55fcdfad7530 0x0~1000 clean)
Dec 06 09:42:35 compute-0 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1).collection(10.0_head 0x55fcdf128900) operator()   moving buffer(0x55fcdfa417e8 space 0x55fcdf998de0 0x0~1000 clean)
Dec 06 09:42:35 compute-0 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1).collection(10.0_head 0x55fcdf128900) operator()   moving buffer(0x55fcdfa91608 space 0x55fcdf9ec690 0x0~1000 clean)
Dec 06 09:42:35 compute-0 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1).collection(10.0_head 0x55fcdf128900) operator()   moving buffer(0x55fcdfa48988 space 0x55fcdf7ef6d0 0x0~1000 clean)
Dec 06 09:42:35 compute-0 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1).collection(10.0_head 0x55fcdf128900) operator()   moving buffer(0x55fcdfa63748 space 0x55fcdf5b0eb0 0x0~1000 clean)
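The `moving buffer` burst above is BlueStore on osd.1 rehoming clean cache buffers for collection 10.0_head, consistent with the pool 10 PG splits peering later in this journal. Each record carries the buffer address, its cache-space address, and an offset~length extent in hex (`0x0~1000`, i.e. 4 KiB at offset 0). A minimal tallying sketch, assuming only the field layout visible in these lines (the helper name is ours):

```python
import re
from collections import Counter

# Tally BlueStore "moving buffer" records per (collection, state) and sum
# the bytes relocated. Extent "0x0~1000" is offset~length in hex.
BUF_RE = re.compile(
    r"collection\((?P<coll>\S+) 0x[0-9a-f]+\) operator\(\)\s+moving buffer"
    r"\(0x[0-9a-f]+ space 0x[0-9a-f]+ 0x[0-9a-f]+~(?P<len>[0-9a-f]+) (?P<state>\w+)\)"
)

def tally_buffer_moves(lines):
    moved, nbytes = Counter(), 0
    for line in lines:
        m = BUF_RE.search(line)
        if m:
            moved[m.group("coll"), m.group("state")] += 1
            nbytes += int(m.group("len"), 16)  # extent length is hex
    return moved, nbytes

sample = ("bluestore(/var/lib/ceph/osd/ceph-1).collection(10.0_head "
          "0x55fcdf128900) operator()   moving buffer(0x55fcdfa5f428 "
          "space 0x55fcdf9bbae0 0x0~1000 clean)")
print(tally_buffer_moves([sample]))
# (Counter({('10.0_head', 'clean'): 1}), 4096)
```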
Dec 06 09:42:35 compute-0 ceph-osd[82803]: log_channel(cluster) log [DBG] : 7.1c scrub starts
Dec 06 09:42:35 compute-0 ceph-osd[82803]: log_channel(cluster) log [DBG] : 7.1c scrub ok
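Scrubs log as `starts`/`ok` pairs per placement group, as with 7.1c just above; deep-scrubs later in this journal use the same shape. A small pairing sketch, under the assumption (which holds here) that the same PG is not scrubbed twice concurrently:

```python
import re

# Pair "scrub starts" / "scrub ok" per PG; anything left in `in_flight`
# started but never logged ok. PG ids like 7.1c are <pool>.<hex-shard>.
SCRUB_RE = re.compile(
    r"(?P<pg>\d+\.[0-9a-f]+) (?P<kind>(?:deep-)?scrub) (?P<event>starts|ok)\s*$")

def scrub_status(lines):
    in_flight, completed = set(), []
    for line in lines:
        m = SCRUB_RE.search(line)
        if not m:
            continue
        key = (m.group("pg"), m.group("kind"))
        if m.group("event") == "starts":
            in_flight.add(key)
        elif key in in_flight:
            in_flight.remove(key)
            completed.append(key)
    return completed, in_flight

logs = ["log_channel(cluster) log [DBG] : 7.1c scrub starts",
        "log_channel(cluster) log [DBG] : 7.1c scrub ok"]
print(scrub_status(logs))  # ([('7.1c', 'scrub')], set())
```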
Dec 06 09:42:36 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:42:36 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9228002b10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:42:36 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:42:36 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9240001c00 fd 37 proxy header rest len failed header rlen = % (will set dead)
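These ganesha.nfsd records recur throughout the journal: svc_vc_recv fails to read an RPC record-marking header on a client connection and marks it dead. The bare `%` appears to be a mangled format token in the upstream ntirpc message itself, so the actual length never prints; treat it as opaque rather than trying to parse a value out of it. A counting sketch (helper name is ours):

```python
import re
from collections import Counter

# Count "(will set dead)" connection drops per ganesha worker thread and fd.
RPC_RE = re.compile(
    r"ganesha\.nfsd-\d+\[(?P<worker>\w+)\] .*svc_vc_recv: 0x[0-9a-f]+ "
    r"fd (?P<fd>\d+).*\(will set dead\)")

def count_dead_conns(lines):
    matches = (RPC_RE.search(line) for line in lines)
    return Counter(m.groups() for m in matches if m)

line = ("ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9228002b10 "
        "fd 37 proxy header rest len failed header rlen = % (will set dead)")
print(count_dead_conns([line]))  # Counter({('svc_3', '37'): 1})
```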
Dec 06 09:42:36 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e58 do_prune osdmap full prune enabled
Dec 06 09:42:36 compute-0 ceph-osd[82803]: log_channel(cluster) log [DBG] : 7.12 scrub starts
Dec 06 09:42:36 compute-0 ceph-osd[82803]: log_channel(cluster) log [DBG] : 7.12 scrub ok
Dec 06 09:42:36 compute-0 ceph-mon[74327]: pgmap v55: 244 pgs: 244 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
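The pgmap summaries condense cluster state into one line each; the `unknown` PGs reported a moment later (62 at pgmap v58, 93 at v60) appear to be the freshly split PGs that have not yet peered and reported in. A rough parser, assuming the layout shown in this journal:

```python
import re

# Parse "pgmap v55: 244 pgs: 244 active+clean; 456 KiB data, ..." lines.
PGMAP_RE = re.compile(
    r"pgmap v(?P<ver>\d+): (?P<total>\d+) pgs: (?P<states>[^;]+); "
    r"(?P<data>[^,]+) data, (?P<used>[^,]+) used, (?P<avail>[^;]+) avail")

def parse_pgmap(line):
    m = PGMAP_RE.search(line)
    if not m:
        return None
    states = {}
    for part in m.group("states").split(","):
        count, state = part.strip().split(" ", 1)
        states[state] = int(count)
    return {"version": int(m.group("ver")),
            "total_pgs": int(m.group("total")),
            "states": states}

line = ("pgmap v58: 306 pgs: 62 unknown, 244 active+clean; 456 KiB data, "
        "85 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s")
print(parse_pgmap(line))
# {'version': 58, 'total_pgs': 306, 'states': {'unknown': 62, 'active+clean': 244}}
```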
Dec 06 09:42:36 compute-0 ceph-mon[74327]: 6.9 scrub starts
Dec 06 09:42:36 compute-0 ceph-mon[74327]: 6.9 scrub ok
Dec 06 09:42:36 compute-0 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]': finished
Dec 06 09:42:36 compute-0 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]': finished
Dec 06 09:42:36 compute-0 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]': finished
Dec 06 09:42:36 compute-0 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": ".nfs", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 06 09:42:36 compute-0 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 06 09:42:36 compute-0 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 06 09:42:36 compute-0 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"}]': finished
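The `finished` records above are the mgr stepping pool PG counts: `pg_num` sets the target, while the `pg_num_actual`/`pgp_num_actual` variants appear to be the mgr nudging the actual and placement PG counts toward it, one osdmap epoch at a time. The commands are embedded as JSON, so they parse directly; a minimal sketch (helper name is ours) that also handles the unquoted `cmd=[...]: dispatch` form seen below:

```python
import json
import re

# Extract the JSON command from mon audit records and note its phase.
CMD_RE = re.compile(r"cmd='?(?P<json>\[.*\])'?: (?P<phase>dispatch|finished)")

def parse_audit(line):
    m = CMD_RE.search(line)
    if not m:
        return None
    cmd = json.loads(m.group("json"))[0]  # one command per record here
    return (cmd.get("prefix"), cmd.get("pool"), cmd.get("var"),
            cmd.get("val"), m.group("phase"))

line = ("from='mgr.14400 192.168.122.100:0/3311628268' "
        "entity='mgr.compute-0.qhdjwa' cmd='[{\"prefix\": \"osd pool set\", "
        "\"pool\": \".nfs\", \"var\": \"pgp_num_actual\", \"val\": \"32\"}]': finished")
print(parse_audit(line))
# ('osd pool set', '.nfs', 'pgp_num_actual', '32', 'finished')
```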
Dec 06 09:42:36 compute-0 ceph-mon[74327]: osdmap e58: 3 total, 3 up, 3 in
Dec 06 09:42:36 compute-0 ceph-mon[74327]: 7.1c scrub starts
Dec 06 09:42:36 compute-0 ceph-mon[74327]: 7.1c scrub ok
Dec 06 09:42:36 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e59 e59: 3 total, 3 up, 3 in
Dec 06 09:42:36 compute-0 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e59: 3 total, 3 up, 3 in
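Each accepted pool change commits a new osdmap epoch (e58 → e59 here, e60 and e61 below), which the mon then publishes. A tiny follower for those lines, assuming the one-line format shown; note that relayed cluster-log copies can repeat an older epoch after a newer one (e59 reappears after e60 below), so track the newest epoch rather than assuming journal order:

```python
import re

# Follow "osdmap eNN: X total, Y up, Z in" lines.
OSDMAP_RE = re.compile(
    r"osdmap e(?P<epoch>\d+): (?P<total>\d+) total, "
    r"(?P<up>\d+) up, (?P<n_in>\d+) in")

def follow_epochs(lines):
    """Yield (epoch, newest_seen, total, up, in) per matching line."""
    newest = 0
    for line in lines:
        m = OSDMAP_RE.search(line)
        if m:
            epoch = int(m.group("epoch"))
            newest = max(newest, epoch)
            yield (epoch, newest, int(m.group("total")),
                   int(m.group("up")), int(m.group("n_in")))

src = ["log_channel(cluster) log [DBG] : osdmap e59: 3 total, 3 up, 3 in"]
for rec in follow_epochs(src):
    print(rec)  # (59, 59, 3, 3, 3)
```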
Dec 06 09:42:36 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 59 pg[10.1b( v 51'1027 lc 0'0 (0'0,51'1027] local-lis/les=45/46 n=5 ec=58/45 lis/c=45/45 les/c/f=46/46/0 sis=58) [1] r=0 lpr=58 pi=[45,58)/1 crt=51'1027 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 09:42:36 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 59 pg[10.18( v 51'1027 lc 0'0 (0'0,51'1027] local-lis/les=45/46 n=5 ec=58/45 lis/c=45/45 les/c/f=46/46/0 sis=58) [1] r=0 lpr=58 pi=[45,58)/1 crt=51'1027 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 09:42:36 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 59 pg[10.11( v 51'1027 lc 0'0 (0'0,51'1027] local-lis/les=45/46 n=6 ec=58/45 lis/c=45/45 les/c/f=46/46/0 sis=58) [1] r=0 lpr=58 pi=[45,58)/1 crt=51'1027 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 09:42:36 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 59 pg[10.1( v 51'1027 lc 0'0 (0'0,51'1027] local-lis/les=45/46 n=6 ec=58/45 lis/c=45/45 les/c/f=46/46/0 sis=58) [1] r=0 lpr=58 pi=[45,58)/1 crt=51'1027 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 09:42:36 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 59 pg[10.7( v 51'1027 lc 0'0 (0'0,51'1027] local-lis/les=45/46 n=6 ec=58/45 lis/c=45/45 les/c/f=46/46/0 sis=58) [1] r=0 lpr=58 pi=[45,58)/1 crt=51'1027 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 09:42:36 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 59 pg[10.9( v 51'1027 lc 0'0 (0'0,51'1027] local-lis/les=45/46 n=6 ec=58/45 lis/c=45/45 les/c/f=46/46/0 sis=58) [1] r=0 lpr=58 pi=[45,58)/1 crt=51'1027 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 09:42:36 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 59 pg[10.12( v 51'1027 lc 0'0 (0'0,51'1027] local-lis/les=45/46 n=6 ec=58/45 lis/c=45/45 les/c/f=46/46/0 sis=58) [1] r=0 lpr=58 pi=[45,58)/1 crt=51'1027 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 09:42:36 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 59 pg[10.10( v 51'1027 lc 0'0 (0'0,51'1027] local-lis/les=45/46 n=6 ec=58/45 lis/c=45/45 les/c/f=46/46/0 sis=58) [1] r=0 lpr=58 pi=[45,58)/1 crt=51'1027 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 09:42:36 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 59 pg[10.1f( v 51'1027 lc 0'0 (0'0,51'1027] local-lis/les=45/46 n=5 ec=58/45 lis/c=45/45 les/c/f=46/46/0 sis=58) [1] r=0 lpr=58 pi=[45,58)/1 crt=51'1027 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 09:42:36 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 59 pg[10.1e( v 51'1027 lc 0'0 (0'0,51'1027] local-lis/les=45/46 n=5 ec=58/45 lis/c=45/45 les/c/f=46/46/0 sis=58) [1] r=0 lpr=58 pi=[45,58)/1 crt=51'1027 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 09:42:36 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 59 pg[10.1d( v 51'1027 lc 0'0 (0'0,51'1027] local-lis/les=45/46 n=5 ec=58/45 lis/c=45/45 les/c/f=46/46/0 sis=58) [1] r=0 lpr=58 pi=[45,58)/1 crt=51'1027 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 09:42:36 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 59 pg[10.1c( v 51'1027 lc 0'0 (0'0,51'1027] local-lis/les=45/46 n=5 ec=58/45 lis/c=45/45 les/c/f=46/46/0 sis=58) [1] r=0 lpr=58 pi=[45,58)/1 crt=51'1027 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 09:42:36 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 59 pg[10.1a( v 51'1027 lc 0'0 (0'0,51'1027] local-lis/les=45/46 n=5 ec=58/45 lis/c=45/45 les/c/f=46/46/0 sis=58) [1] r=0 lpr=58 pi=[45,58)/1 crt=51'1027 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 09:42:36 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 59 pg[10.19( v 51'1027 lc 0'0 (0'0,51'1027] local-lis/les=45/46 n=5 ec=58/45 lis/c=45/45 les/c/f=46/46/0 sis=58) [1] r=0 lpr=58 pi=[45,58)/1 crt=51'1027 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 09:42:36 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 59 pg[10.6( v 51'1027 lc 0'0 (0'0,51'1027] local-lis/les=45/46 n=6 ec=58/45 lis/c=45/45 les/c/f=46/46/0 sis=58) [1] r=0 lpr=58 pi=[45,58)/1 crt=51'1027 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 09:42:36 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 59 pg[10.5( v 51'1027 lc 0'0 (0'0,51'1027] local-lis/les=45/46 n=6 ec=58/45 lis/c=45/45 les/c/f=46/46/0 sis=58) [1] r=0 lpr=58 pi=[45,58)/1 crt=51'1027 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 09:42:36 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 59 pg[10.4( v 51'1027 lc 0'0 (0'0,51'1027] local-lis/les=45/46 n=6 ec=58/45 lis/c=45/45 les/c/f=46/46/0 sis=58) [1] r=0 lpr=58 pi=[45,58)/1 crt=51'1027 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 09:42:36 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 59 pg[10.b( v 51'1027 lc 0'0 (0'0,51'1027] local-lis/les=45/46 n=6 ec=58/45 lis/c=45/45 les/c/f=46/46/0 sis=58) [1] r=0 lpr=58 pi=[45,58)/1 crt=51'1027 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 09:42:36 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 59 pg[10.3( v 51'1027 lc 0'0 (0'0,51'1027] local-lis/les=45/46 n=6 ec=58/45 lis/c=45/45 les/c/f=46/46/0 sis=58) [1] r=0 lpr=58 pi=[45,58)/1 crt=51'1027 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 09:42:36 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 59 pg[10.8( v 51'1027 lc 0'0 (0'0,51'1027] local-lis/les=45/46 n=6 ec=58/45 lis/c=45/45 les/c/f=46/46/0 sis=58) [1] r=0 lpr=58 pi=[45,58)/1 crt=51'1027 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 09:42:36 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 59 pg[10.d( v 51'1027 lc 0'0 (0'0,51'1027] local-lis/les=45/46 n=6 ec=58/45 lis/c=45/45 les/c/f=46/46/0 sis=58) [1] r=0 lpr=58 pi=[45,58)/1 crt=51'1027 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 09:42:36 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 59 pg[10.a( v 51'1027 lc 0'0 (0'0,51'1027] local-lis/les=45/46 n=6 ec=58/45 lis/c=45/45 les/c/f=46/46/0 sis=58) [1] r=0 lpr=58 pi=[45,58)/1 crt=51'1027 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 09:42:36 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 59 pg[10.c( v 51'1027 lc 0'0 (0'0,51'1027] local-lis/les=45/46 n=6 ec=58/45 lis/c=45/45 les/c/f=46/46/0 sis=58) [1] r=0 lpr=58 pi=[45,58)/1 crt=51'1027 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 09:42:36 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 59 pg[10.e( v 51'1027 lc 0'0 (0'0,51'1027] local-lis/les=45/46 n=6 ec=58/45 lis/c=45/45 les/c/f=46/46/0 sis=58) [1] r=0 lpr=58 pi=[45,58)/1 crt=51'1027 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 09:42:36 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 59 pg[10.f( v 51'1027 lc 0'0 (0'0,51'1027] local-lis/les=45/46 n=6 ec=58/45 lis/c=45/45 les/c/f=46/46/0 sis=58) [1] r=0 lpr=58 pi=[45,58)/1 crt=51'1027 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 09:42:36 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 59 pg[10.2( v 51'1027 lc 0'0 (0'0,51'1027] local-lis/les=45/46 n=6 ec=58/45 lis/c=45/45 les/c/f=46/46/0 sis=58) [1] r=0 lpr=58 pi=[45,58)/1 crt=51'1027 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 09:42:36 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 59 pg[10.13( v 51'1027 lc 0'0 (0'0,51'1027] local-lis/les=45/46 n=5 ec=58/45 lis/c=45/45 les/c/f=46/46/0 sis=58) [1] r=0 lpr=58 pi=[45,58)/1 crt=51'1027 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 09:42:36 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 59 pg[10.14( v 51'1027 lc 0'0 (0'0,51'1027] local-lis/les=45/46 n=5 ec=58/45 lis/c=45/45 les/c/f=46/46/0 sis=58) [1] r=0 lpr=58 pi=[45,58)/1 crt=51'1027 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 09:42:36 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 59 pg[10.15( v 51'1027 lc 0'0 (0'0,51'1027] local-lis/les=45/46 n=5 ec=58/45 lis/c=45/45 les/c/f=46/46/0 sis=58) [1] r=0 lpr=58 pi=[45,58)/1 crt=51'1027 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 09:42:36 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 59 pg[10.17( v 51'1027 lc 0'0 (0'0,51'1027] local-lis/les=45/46 n=5 ec=58/45 lis/c=45/45 les/c/f=46/46/0 sis=58) [1] r=0 lpr=58 pi=[45,58)/1 crt=51'1027 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 09:42:36 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 59 pg[10.16( v 51'1027 lc 0'0 (0'0,51'1027] local-lis/les=45/46 n=5 ec=58/45 lis/c=45/45 les/c/f=46/46/0 sis=58) [1] r=0 lpr=58 pi=[45,58)/1 crt=51'1027 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 09:42:36 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 59 pg[9.e( v 44'12 (0'0,44'12] local-lis/les=58/59 n=0 ec=56/43 lis/c=56/56 les/c/f=57/57/0 sis=58) [1] r=0 lpr=58 pi=[56,58)/1 crt=44'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 09:42:36 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 59 pg[8.14( v 51'44 (0'0,51'44] local-lis/les=58/59 n=0 ec=56/40 lis/c=56/56 les/c/f=57/57/0 sis=58) [1] r=0 lpr=58 pi=[56,58)/1 crt=51'44 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 09:42:36 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 59 pg[8.8( v 51'44 (0'0,51'44] local-lis/les=58/59 n=0 ec=56/40 lis/c=56/56 les/c/f=57/57/0 sis=58) [1] r=0 lpr=58 pi=[56,58)/1 crt=51'44 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 09:42:36 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 59 pg[9.11( v 44'12 (0'0,44'12] local-lis/les=58/59 n=0 ec=56/43 lis/c=56/56 les/c/f=57/57/0 sis=58) [1] r=0 lpr=58 pi=[56,58)/1 crt=44'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 09:42:36 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 59 pg[8.10( v 57'45 lc 51'14 (0'0,57'45] local-lis/les=58/59 n=0 ec=56/40 lis/c=56/56 les/c/f=57/57/0 sis=58) [1] r=0 lpr=58 pi=[56,58)/1 crt=57'45 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 09:42:36 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 59 pg[9.12( v 44'12 (0'0,44'12] local-lis/les=58/59 n=0 ec=56/43 lis/c=56/56 les/c/f=57/57/0 sis=58) [1] r=0 lpr=58 pi=[56,58)/1 crt=44'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 09:42:36 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 59 pg[9.15( v 44'12 (0'0,44'12] local-lis/les=58/59 n=0 ec=56/43 lis/c=56/56 les/c/f=57/57/0 sis=58) [1] r=0 lpr=58 pi=[56,58)/1 crt=44'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 09:42:36 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 59 pg[9.d( v 44'12 (0'0,44'12] local-lis/les=58/59 n=0 ec=56/43 lis/c=56/56 les/c/f=57/57/0 sis=58) [1] r=0 lpr=58 pi=[56,58)/1 crt=44'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 09:42:36 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 59 pg[8.19( v 51'44 (0'0,51'44] local-lis/les=58/59 n=0 ec=56/40 lis/c=56/56 les/c/f=57/57/0 sis=58) [1] r=0 lpr=58 pi=[56,58)/1 crt=51'44 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 09:42:36 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 59 pg[10.11( v 51'1027 (0'0,51'1027] local-lis/les=58/59 n=6 ec=58/45 lis/c=45/45 les/c/f=46/46/0 sis=58) [1] r=0 lpr=58 pi=[45,58)/1 crt=51'1027 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 09:42:36 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 59 pg[10.18( v 51'1027 (0'0,51'1027] local-lis/les=58/59 n=5 ec=58/45 lis/c=45/45 les/c/f=46/46/0 sis=58) [1] r=0 lpr=58 pi=[45,58)/1 crt=51'1027 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 09:42:36 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 59 pg[9.f( v 44'12 lc 0'0 (0'0,44'12] local-lis/les=58/59 n=0 ec=56/43 lis/c=56/56 les/c/f=57/57/0 sis=58) [1] r=0 lpr=58 pi=[56,58)/1 crt=44'12 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 09:42:36 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 59 pg[10.1( v 51'1027 (0'0,51'1027] local-lis/les=58/59 n=6 ec=58/45 lis/c=45/45 les/c/f=46/46/0 sis=58) [1] r=0 lpr=58 pi=[45,58)/1 crt=51'1027 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 09:42:36 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 59 pg[10.1b( v 51'1027 (0'0,51'1027] local-lis/les=58/59 n=5 ec=58/45 lis/c=45/45 les/c/f=46/46/0 sis=58) [1] r=0 lpr=58 pi=[45,58)/1 crt=51'1027 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 09:42:36 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 59 pg[10.7( v 51'1027 (0'0,51'1027] local-lis/les=58/59 n=6 ec=58/45 lis/c=45/45 les/c/f=46/46/0 sis=58) [1] r=0 lpr=58 pi=[45,58)/1 crt=51'1027 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 09:42:36 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 59 pg[9.a( v 44'12 (0'0,44'12] local-lis/les=58/59 n=0 ec=56/43 lis/c=56/56 les/c/f=57/57/0 sis=58) [1] r=0 lpr=58 pi=[56,58)/1 crt=44'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 09:42:36 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 59 pg[8.12( v 51'44 (0'0,51'44] local-lis/les=58/59 n=0 ec=56/40 lis/c=56/56 les/c/f=57/57/0 sis=58) [1] r=0 lpr=58 pi=[56,58)/1 crt=51'44 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 09:42:36 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 59 pg[10.12( v 51'1027 (0'0,51'1027] local-lis/les=58/59 n=6 ec=58/45 lis/c=45/45 les/c/f=46/46/0 sis=58) [1] r=0 lpr=58 pi=[45,58)/1 crt=51'1027 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 09:42:36 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 59 pg[10.1f( v 51'1027 (0'0,51'1027] local-lis/les=58/59 n=5 ec=58/45 lis/c=45/45 les/c/f=46/46/0 sis=58) [1] r=0 lpr=58 pi=[45,58)/1 crt=51'1027 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 09:42:36 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 59 pg[10.9( v 51'1027 (0'0,51'1027] local-lis/les=58/59 n=6 ec=58/45 lis/c=45/45 les/c/f=46/46/0 sis=58) [1] r=0 lpr=58 pi=[45,58)/1 crt=51'1027 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 09:42:36 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 59 pg[10.10( v 51'1027 (0'0,51'1027] local-lis/les=58/59 n=6 ec=58/45 lis/c=45/45 les/c/f=46/46/0 sis=58) [1] r=0 lpr=58 pi=[45,58)/1 crt=51'1027 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 09:42:36 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 59 pg[10.1e( v 51'1027 (0'0,51'1027] local-lis/les=58/59 n=5 ec=58/45 lis/c=45/45 les/c/f=46/46/0 sis=58) [1] r=0 lpr=58 pi=[45,58)/1 crt=51'1027 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 09:42:36 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 59 pg[10.1d( v 51'1027 (0'0,51'1027] local-lis/les=58/59 n=5 ec=58/45 lis/c=45/45 les/c/f=46/46/0 sis=58) [1] r=0 lpr=58 pi=[45,58)/1 crt=51'1027 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 09:42:36 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 59 pg[10.1c( v 51'1027 (0'0,51'1027] local-lis/les=58/59 n=5 ec=58/45 lis/c=45/45 les/c/f=46/46/0 sis=58) [1] r=0 lpr=58 pi=[45,58)/1 crt=51'1027 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 09:42:36 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 59 pg[8.18( v 51'44 lc 51'18 (0'0,51'44] local-lis/les=58/59 n=0 ec=56/40 lis/c=56/56 les/c/f=57/57/0 sis=58) [1] r=0 lpr=58 pi=[56,58)/1 crt=51'44 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 09:42:36 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 59 pg[8.1b( v 51'44 lc 51'8 (0'0,51'44] local-lis/les=58/59 n=0 ec=56/40 lis/c=56/56 les/c/f=57/57/0 sis=58) [1] r=0 lpr=58 pi=[56,58)/1 crt=51'44 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 09:42:36 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 59 pg[8.4( v 51'44 (0'0,51'44] local-lis/les=58/59 n=1 ec=56/40 lis/c=56/56 les/c/f=57/57/0 sis=58) [1] r=0 lpr=58 pi=[56,58)/1 crt=51'44 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 09:42:36 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 59 pg[10.1a( v 51'1027 (0'0,51'1027] local-lis/les=58/59 n=5 ec=58/45 lis/c=45/45 les/c/f=46/46/0 sis=58) [1] r=0 lpr=58 pi=[45,58)/1 crt=51'1027 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 09:42:36 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 59 pg[10.6( v 51'1027 (0'0,51'1027] local-lis/les=58/59 n=6 ec=58/45 lis/c=45/45 les/c/f=46/46/0 sis=58) [1] r=0 lpr=58 pi=[45,58)/1 crt=51'1027 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 09:42:36 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 59 pg[10.19( v 51'1027 (0'0,51'1027] local-lis/les=58/59 n=5 ec=58/45 lis/c=45/45 les/c/f=46/46/0 sis=58) [1] r=0 lpr=58 pi=[45,58)/1 crt=51'1027 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 09:42:36 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 59 pg[9.6( v 44'12 lc 44'8 (0'0,44'12] local-lis/les=58/59 n=1 ec=56/43 lis/c=56/56 les/c/f=57/57/0 sis=58) [1] r=0 lpr=58 pi=[56,58)/1 crt=44'12 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 09:42:36 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 59 pg[10.5( v 51'1027 (0'0,51'1027] local-lis/les=58/59 n=6 ec=58/45 lis/c=45/45 les/c/f=46/46/0 sis=58) [1] r=0 lpr=58 pi=[45,58)/1 crt=51'1027 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 09:42:36 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 59 pg[10.4( v 51'1027 (0'0,51'1027] local-lis/les=58/59 n=6 ec=58/45 lis/c=45/45 les/c/f=46/46/0 sis=58) [1] r=0 lpr=58 pi=[45,58)/1 crt=51'1027 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 09:42:36 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 59 pg[10.8( v 51'1027 (0'0,51'1027] local-lis/les=58/59 n=6 ec=58/45 lis/c=45/45 les/c/f=46/46/0 sis=58) [1] r=0 lpr=58 pi=[45,58)/1 crt=51'1027 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 09:42:36 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 59 pg[10.b( v 51'1027 (0'0,51'1027] local-lis/les=58/59 n=6 ec=58/45 lis/c=45/45 les/c/f=46/46/0 sis=58) [1] r=0 lpr=58 pi=[45,58)/1 crt=51'1027 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 09:42:36 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 59 pg[10.d( v 51'1027 (0'0,51'1027] local-lis/les=58/59 n=6 ec=58/45 lis/c=45/45 les/c/f=46/46/0 sis=58) [1] r=0 lpr=58 pi=[45,58)/1 crt=51'1027 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 09:42:36 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 59 pg[10.3( v 51'1027 (0'0,51'1027] local-lis/les=58/59 n=6 ec=58/45 lis/c=45/45 les/c/f=46/46/0 sis=58) [1] r=0 lpr=58 pi=[45,58)/1 crt=51'1027 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 09:42:36 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 59 pg[10.a( v 51'1027 (0'0,51'1027] local-lis/les=58/59 n=6 ec=58/45 lis/c=45/45 les/c/f=46/46/0 sis=58) [1] r=0 lpr=58 pi=[45,58)/1 crt=51'1027 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 09:42:36 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 59 pg[10.c( v 51'1027 (0'0,51'1027] local-lis/les=58/59 n=6 ec=58/45 lis/c=45/45 les/c/f=46/46/0 sis=58) [1] r=0 lpr=58 pi=[45,58)/1 crt=51'1027 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 09:42:36 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 59 pg[10.0( v 51'1027 (0'0,51'1027] local-lis/les=58/59 n=5 ec=45/45 lis/c=45/45 les/c/f=46/46/0 sis=58) [1] r=0 lpr=58 pi=[45,58)/1 crt=51'1027 lcod 51'1026 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 09:42:36 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 59 pg[10.e( v 51'1027 (0'0,51'1027] local-lis/les=58/59 n=6 ec=58/45 lis/c=45/45 les/c/f=46/46/0 sis=58) [1] r=0 lpr=58 pi=[45,58)/1 crt=51'1027 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 09:42:36 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 59 pg[10.f( v 51'1027 (0'0,51'1027] local-lis/les=58/59 n=6 ec=58/45 lis/c=45/45 les/c/f=46/46/0 sis=58) [1] r=0 lpr=58 pi=[45,58)/1 crt=51'1027 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 09:42:36 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 59 pg[10.2( v 51'1027 (0'0,51'1027] local-lis/les=58/59 n=6 ec=58/45 lis/c=45/45 les/c/f=46/46/0 sis=58) [1] r=0 lpr=58 pi=[45,58)/1 crt=51'1027 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 09:42:36 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 59 pg[10.13( v 51'1027 (0'0,51'1027] local-lis/les=58/59 n=5 ec=58/45 lis/c=45/45 les/c/f=46/46/0 sis=58) [1] r=0 lpr=58 pi=[45,58)/1 crt=51'1027 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 09:42:36 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 59 pg[8.17( v 51'44 (0'0,51'44] local-lis/les=58/59 n=0 ec=56/40 lis/c=56/56 les/c/f=57/57/0 sis=58) [1] r=0 lpr=58 pi=[56,58)/1 crt=51'44 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 09:42:36 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 59 pg[10.15( v 51'1027 (0'0,51'1027] local-lis/les=58/59 n=5 ec=58/45 lis/c=45/45 les/c/f=46/46/0 sis=58) [1] r=0 lpr=58 pi=[45,58)/1 crt=51'1027 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 09:42:36 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 59 pg[10.14( v 51'1027 (0'0,51'1027] local-lis/les=58/59 n=5 ec=58/45 lis/c=45/45 les/c/f=46/46/0 sis=58) [1] r=0 lpr=58 pi=[45,58)/1 crt=51'1027 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 09:42:36 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 59 pg[10.17( v 51'1027 (0'0,51'1027] local-lis/les=58/59 n=5 ec=58/45 lis/c=45/45 les/c/f=46/46/0 sis=58) [1] r=0 lpr=58 pi=[45,58)/1 crt=51'1027 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 09:42:36 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 59 pg[10.16( v 51'1027 (0'0,51'1027] local-lis/les=58/59 n=5 ec=58/45 lis/c=45/45 les/c/f=46/46/0 sis=58) [1] r=0 lpr=58 pi=[45,58)/1 crt=51'1027 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 09:42:36 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 59 pg[9.10( v 44'12 (0'0,44'12] local-lis/les=58/59 n=0 ec=56/43 lis/c=56/56 les/c/f=57/57/0 sis=58) [1] r=0 lpr=58 pi=[56,58)/1 crt=44'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
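The osd.1 burst above is the peering state machine walking every pool 8/9/10 PG through the new interval under osdmap e59: first `state<Start>: transitioning to Primary` as osd.1 takes the primary role, then `react AllReplicasActivated ... Activating complete` once activation finishes (with `[1]` as the sole acting OSD, the primary is its own quorum). A toy model of just those two transitions, not Ceph's real PeeringState implementation:

```python
from enum import Enum, auto

# Toy model of the transitions visible above. Ceph's actual machine
# (src/osd/PeeringState.h) is far richer; this only mirrors the log flow.
class PgState(Enum):
    START = auto()
    PRIMARY = auto()
    ACTIVE = auto()

class ToyPg:
    def __init__(self, pgid, epoch):
        self.pgid, self.epoch, self.state = pgid, epoch, PgState.START

    def on_activate_map(self):
        # "state<Start>: transitioning to Primary"
        assert self.state is PgState.START
        self.state = PgState.PRIMARY

    def on_all_replicas_activated(self):
        # "state<Started/Primary/Active>: react AllReplicasActivated"
        assert self.state is PgState.PRIMARY
        self.state = PgState.ACTIVE
        print(f"pg[{self.pgid}] epoch {self.epoch}: activating complete")

pg = ToyPg("10.1b", 59)
pg.on_activate_map()
pg.on_all_replicas_activated()  # pg[10.1b] epoch 59: activating complete
```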
Dec 06 09:42:37 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v58: 306 pgs: 62 unknown, 244 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec 06 09:42:37 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"} v 0)
Dec 06 09:42:37 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec 06 09:42:37 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:42:37 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9240001c00 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:42:37 compute-0 podman[96233]: 2025-12-06 09:42:37.653237268 +0000 UTC m=+2.792303254 container create 536cb7bd77ebe1b662b9aded4d82d76fdb4f02ecfd1d132b2ada79dbfc1ab990 (image=quay.io/ceph/keepalived:2.2.4, name=sleepy_pike, io.k8s.display-name=Keepalived on RHEL 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., summary=Provides keepalived on RHEL 9 for Ceph., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, release=1793, version=2.2.4, build-date=2023-02-22T09:23:20, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, description=keepalived for Ceph, io.buildah.version=1.28.2, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, name=keepalived, distribution-scope=public, io.openshift.tags=Ceph keepalived, io.openshift.expose-services=, com.redhat.component=keepalived-container, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-type=git)
Dec 06 09:42:37 compute-0 podman[96233]: 2025-12-06 09:42:37.63837918 +0000 UTC m=+2.777445196 image pull 4a3a1ff181d97c6dcfa9138ad76eb99fa2c1e840298461d5a7a56133bc05b9a2 quay.io/ceph/keepalived:2.2.4
Dec 06 09:42:37 compute-0 systemd[1]: Started libpod-conmon-536cb7bd77ebe1b662b9aded4d82d76fdb4f02ecfd1d132b2ada79dbfc1ab990.scope.
Dec 06 09:42:37 compute-0 systemd[1]: Started libcrun container.
Dec 06 09:42:37 compute-0 podman[96233]: 2025-12-06 09:42:37.751122043 +0000 UTC m=+2.890188129 container init 536cb7bd77ebe1b662b9aded4d82d76fdb4f02ecfd1d132b2ada79dbfc1ab990 (image=quay.io/ceph/keepalived:2.2.4, name=sleepy_pike, name=keepalived, distribution-scope=public, io.buildah.version=1.28.2, release=1793, version=2.2.4, vendor=Red Hat, Inc., architecture=x86_64, description=keepalived for Ceph, com.redhat.component=keepalived-container, io.k8s.display-name=Keepalived on RHEL 9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=Ceph keepalived, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, summary=Provides keepalived on RHEL 9 for Ceph., build-date=2023-02-22T09:23:20, io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>)
Dec 06 09:42:37 compute-0 podman[96233]: 2025-12-06 09:42:37.765739114 +0000 UTC m=+2.904805130 container start 536cb7bd77ebe1b662b9aded4d82d76fdb4f02ecfd1d132b2ada79dbfc1ab990 (image=quay.io/ceph/keepalived:2.2.4, name=sleepy_pike, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=2.2.4, summary=Provides keepalived on RHEL 9 for Ceph., io.k8s.display-name=Keepalived on RHEL 9, distribution-scope=public, description=keepalived for Ceph, io.buildah.version=1.28.2, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.openshift.tags=Ceph keepalived, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2023-02-22T09:23:20, com.redhat.component=keepalived-container, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, name=keepalived, vendor=Red Hat, Inc., architecture=x86_64, io.openshift.expose-services=, release=1793)
Dec 06 09:42:37 compute-0 podman[96233]: 2025-12-06 09:42:37.770131182 +0000 UTC m=+2.909197258 container attach 536cb7bd77ebe1b662b9aded4d82d76fdb4f02ecfd1d132b2ada79dbfc1ab990 (image=quay.io/ceph/keepalived:2.2.4, name=sleepy_pike, description=keepalived for Ceph, io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.display-name=Keepalived on RHEL 9, name=keepalived, distribution-scope=public, architecture=x86_64, com.redhat.component=keepalived-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.openshift.tags=Ceph keepalived, version=2.2.4, summary=Provides keepalived on RHEL 9 for Ceph., io.buildah.version=1.28.2, release=1793, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2023-02-22T09:23:20, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vcs-type=git, vendor=Red Hat, Inc.)
Dec 06 09:42:37 compute-0 sleepy_pike[96329]: 0 0
Dec 06 09:42:37 compute-0 systemd[1]: libpod-536cb7bd77ebe1b662b9aded4d82d76fdb4f02ecfd1d132b2ada79dbfc1ab990.scope: Deactivated successfully.
Dec 06 09:42:37 compute-0 podman[96233]: 2025-12-06 09:42:37.774871469 +0000 UTC m=+2.913937475 container died 536cb7bd77ebe1b662b9aded4d82d76fdb4f02ecfd1d132b2ada79dbfc1ab990 (image=quay.io/ceph/keepalived:2.2.4, name=sleepy_pike, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, build-date=2023-02-22T09:23:20, release=1793, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, distribution-scope=public, io.openshift.tags=Ceph keepalived, architecture=x86_64, description=keepalived for Ceph, io.buildah.version=1.28.2, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides keepalived on RHEL 9 for Ceph., io.openshift.expose-services=, io.k8s.display-name=Keepalived on RHEL 9, version=2.2.4, name=keepalived, vendor=Red Hat, Inc., com.redhat.component=keepalived-container)
Dec 06 09:42:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-3c7f203e7f23d4a74df742a6178e9bec97dd0ab9ef690e177861e2eb1170b30e-merged.mount: Deactivated successfully.
Dec 06 09:42:37 compute-0 podman[96233]: 2025-12-06 09:42:37.835430643 +0000 UTC m=+2.974496659 container remove 536cb7bd77ebe1b662b9aded4d82d76fdb4f02ecfd1d132b2ada79dbfc1ab990 (image=quay.io/ceph/keepalived:2.2.4, name=sleepy_pike, io.openshift.expose-services=, io.openshift.tags=Ceph keepalived, io.buildah.version=1.28.2, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=keepalived for Ceph, summary=Provides keepalived on RHEL 9 for Ceph., io.k8s.display-name=Keepalived on RHEL 9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., build-date=2023-02-22T09:23:20, vcs-type=git, name=keepalived, com.redhat.component=keepalived-container, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, release=1793, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, distribution-scope=public, version=2.2.4, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, architecture=x86_64)
Dec 06 09:42:37 compute-0 systemd[1]: libpod-conmon-536cb7bd77ebe1b662b9aded4d82d76fdb4f02ecfd1d132b2ada79dbfc1ab990.scope: Deactivated successfully.
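The podman records above trace one short-lived keepalived probe container (the auto-generated name `sleepy_pike`) through its whole lifecycle: image pull, create, init, start, attach, exit (`died`), remove, bracketed by the libpod-conmon scope. Note the journal shows `create` before `image pull` even though the pull's embedded timestamp is earlier, so order by the timestamp podman embeds, not by journal position. A reconstruction sketch, assuming the event layout shown:

```python
import re
from datetime import datetime

# Rebuild container lifecycles from podman journal events, sorted on the
# embedded timestamp (events can reach the journal out of order).
EVENT_RE = re.compile(
    r"(?P<ts>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d+) \+0000 UTC "
    r"m=\+[\d.]+ (?P<kind>image pull|container \w+) (?P<id>[0-9a-f]+)")

def lifecycle(lines):
    events = []
    for line in lines:
        m = EVENT_RE.search(line)
        if m:
            # strptime's %f takes at most 6 fractional digits; trim the rest.
            ts = datetime.strptime(m.group("ts")[:26], "%Y-%m-%d %H:%M:%S.%f")
            events.append((ts, m.group("kind"), m.group("id")[:12]))
    return sorted(events)

sample = [
    "2025-12-06 09:42:37.653237268 +0000 UTC m=+2.792303254 container create 536cb7bd77eb",
    "2025-12-06 09:42:37.63837918 +0000 UTC m=+2.777445196 image pull 4a3a1ff181d9",
]
for ts, kind, cid in lifecycle(sample):
    print(ts, kind, cid)  # the pull sorts before the create
```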
Dec 06 09:42:37 compute-0 ceph-osd[82803]: log_channel(cluster) log [DBG] : 7.17 deep-scrub starts
Dec 06 09:42:37 compute-0 ceph-osd[82803]: log_channel(cluster) log [DBG] : 7.17 deep-scrub ok
Dec 06 09:42:37 compute-0 systemd[1]: Reloading.
Dec 06 09:42:37 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e59 do_prune osdmap full prune enabled
Dec 06 09:42:37 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]': finished
Dec 06 09:42:37 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e60 e60: 3 total, 3 up, 3 in
Dec 06 09:42:37 compute-0 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e60: 3 total, 3 up, 3 in
Dec 06 09:42:37 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 60 pg[12.0( empty local-lis/les=49/50 n=0 ec=49/49 lis/c=49/49 les/c/f=50/50/0 sis=60 pruub=8.863116264s) [1] r=0 lpr=60 pi=[49,60)/1 crt=0'0 mlcod 0'0 active pruub 189.172805786s@ mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 06 09:42:37 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 60 pg[12.0( empty local-lis/les=49/50 n=0 ec=49/49 lis/c=49/49 les/c/f=50/50/0 sis=60 pruub=8.863116264s) [1] r=0 lpr=60 pi=[49,60)/1 crt=0'0 mlcod 0'0 unknown pruub 189.172805786s@ mbc={}] state<Start>: transitioning to Primary
Dec 06 09:42:38 compute-0 ceph-mon[74327]: 9.14 scrub starts
Dec 06 09:42:38 compute-0 ceph-mon[74327]: 9.14 scrub ok
Dec 06 09:42:38 compute-0 ceph-mon[74327]: 7.12 scrub starts
Dec 06 09:42:38 compute-0 ceph-mon[74327]: 7.12 scrub ok
Dec 06 09:42:38 compute-0 ceph-mon[74327]: osdmap e59: 3 total, 3 up, 3 in
Dec 06 09:42:38 compute-0 ceph-mon[74327]: pgmap v58: 306 pgs: 62 unknown, 244 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec 06 09:42:38 compute-0 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec 06 09:42:38 compute-0 systemd-rc-local-generator[96376]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 09:42:38 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:42:38 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9248003fe0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:42:38 compute-0 systemd-sysv-generator[96380]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 09:42:38 compute-0 systemd[1]: Reloading.
Dec 06 09:42:38 compute-0 systemd-rc-local-generator[96415]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 09:42:38 compute-0 systemd-sysv-generator[96420]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 09:42:38 compute-0 systemd[1]: Starting Ceph keepalived.nfs.cephfs.compute-0.ylrrzf for 5ecd3f74-dade-5fc4-92ce-8950ae424258...
Dec 06 09:42:38 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:42:38 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9228002b10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:42:38 compute-0 podman[96477]: 2025-12-06 09:42:38.838195271 +0000 UTC m=+0.063274778 container create d7d5239f75d84aa9a07cad1cdfa31e3b4f3983263aaaa27687e6c7454ab8fe3f (image=quay.io/ceph/keepalived:2.2.4, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-keepalived-nfs-cephfs-compute-0-ylrrzf, io.openshift.tags=Ceph keepalived, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.k8s.display-name=Keepalived on RHEL 9, io.openshift.expose-services=, name=keepalived, description=keepalived for Ceph, summary=Provides keepalived on RHEL 9 for Ceph., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, com.redhat.component=keepalived-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, release=1793, distribution-scope=public, io.buildah.version=1.28.2, build-date=2023-02-22T09:23:20, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vcs-type=git, version=2.2.4)
Dec 06 09:42:38 compute-0 ceph-osd[82803]: log_channel(cluster) log [DBG] : 7.0 deep-scrub starts
Dec 06 09:42:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3e6fb7e0d8db1de1a51ec46a29d871d2c8acb20ef652492c70bda017a34640e/merged/etc/keepalived/keepalived.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 09:42:38 compute-0 ceph-osd[82803]: log_channel(cluster) log [DBG] : 7.0 deep-scrub ok
Dec 06 09:42:38 compute-0 podman[96477]: 2025-12-06 09:42:38.907172891 +0000 UTC m=+0.132252488 container init d7d5239f75d84aa9a07cad1cdfa31e3b4f3983263aaaa27687e6c7454ab8fe3f (image=quay.io/ceph/keepalived:2.2.4, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-keepalived-nfs-cephfs-compute-0-ylrrzf, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1793, build-date=2023-02-22T09:23:20, summary=Provides keepalived on RHEL 9 for Ceph., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, distribution-scope=public, version=2.2.4, vcs-type=git, vendor=Red Hat, Inc., io.k8s.display-name=Keepalived on RHEL 9, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.openshift.expose-services=, com.redhat.component=keepalived-container, io.openshift.tags=Ceph keepalived, name=keepalived, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=keepalived for Ceph, io.buildah.version=1.28.2, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793)
Dec 06 09:42:38 compute-0 podman[96477]: 2025-12-06 09:42:38.817991789 +0000 UTC m=+0.043071336 image pull 4a3a1ff181d97c6dcfa9138ad76eb99fa2c1e840298461d5a7a56133bc05b9a2 quay.io/ceph/keepalived:2.2.4
Dec 06 09:42:38 compute-0 podman[96477]: 2025-12-06 09:42:38.916898781 +0000 UTC m=+0.141978308 container start d7d5239f75d84aa9a07cad1cdfa31e3b4f3983263aaaa27687e6c7454ab8fe3f (image=quay.io/ceph/keepalived:2.2.4, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-keepalived-nfs-cephfs-compute-0-ylrrzf, name=keepalived, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2023-02-22T09:23:20, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vcs-type=git, io.openshift.tags=Ceph keepalived, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=keepalived for Ceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, release=1793, summary=Provides keepalived on RHEL 9 for Ceph., io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vendor=Red Hat, Inc., io.k8s.display-name=Keepalived on RHEL 9, distribution-scope=public, io.buildah.version=1.28.2, version=2.2.4, com.redhat.component=keepalived-container, architecture=x86_64)
Dec 06 09:42:38 compute-0 bash[96477]: d7d5239f75d84aa9a07cad1cdfa31e3b4f3983263aaaa27687e6c7454ab8fe3f
Dec 06 09:42:38 compute-0 systemd[1]: Started Ceph keepalived.nfs.cephfs.compute-0.ylrrzf for 5ecd3f74-dade-5fc4-92ce-8950ae424258.
Dec 06 09:42:38 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-keepalived-nfs-cephfs-compute-0-ylrrzf[96493]: Sat Dec  6 09:42:38 2025: Starting Keepalived v2.2.4 (08/21,2021)
Dec 06 09:42:38 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-keepalived-nfs-cephfs-compute-0-ylrrzf[96493]: Sat Dec  6 09:42:38 2025: Running on Linux 5.14.0-645.el9.x86_64 #1 SMP PREEMPT_DYNAMIC Fri Nov 28 14:01:17 UTC 2025 (built for Linux 5.14.0)
Dec 06 09:42:38 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-keepalived-nfs-cephfs-compute-0-ylrrzf[96493]: Sat Dec  6 09:42:38 2025: Command line: '/usr/sbin/keepalived' '-n' '-l' '-f' '/etc/keepalived/keepalived.conf'
Dec 06 09:42:38 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-keepalived-nfs-cephfs-compute-0-ylrrzf[96493]: Sat Dec  6 09:42:38 2025: Configuration file /etc/keepalived/keepalived.conf
Dec 06 09:42:38 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-keepalived-nfs-cephfs-compute-0-ylrrzf[96493]: Sat Dec  6 09:42:38 2025: NOTICE: setting config option max_auto_priority should result in better keepalived performance
Dec 06 09:42:38 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-keepalived-nfs-cephfs-compute-0-ylrrzf[96493]: Sat Dec  6 09:42:38 2025: Starting VRRP child process, pid=4
Dec 06 09:42:38 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-keepalived-nfs-cephfs-compute-0-ylrrzf[96493]: Sat Dec  6 09:42:38 2025: Startup complete
Dec 06 09:42:38 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-keepalived-nfs-cephfs-compute-0-ylrrzf[96493]: Sat Dec  6 09:42:38 2025: (VI_0) Entering BACKUP STATE (init)
Dec 06 09:42:38 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-keepalived-nfs-cephfs-compute-0-ylrrzf[96493]: Sat Dec  6 09:42:38 2025: VRRP_Script(check_backend) succeeded
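Keepalived for the NFS ingress comes up cleanly here: config read, VRRP child started, instance VI_0 entering BACKUP until the `check_backend` script result and VRRP priority decide mastership. A small watcher sketch for those state lines; the BACKUP form is taken from this journal, while the MASTER/FAULT variants are assumed from keepalived's usual wording:

```python
import re

# Track the last announced VRRP state per instance, e.g.
# "(VI_0) Entering BACKUP STATE (init)" -> {"VI_0": "BACKUP"}.
STATE_RE = re.compile(r"\((?P<inst>\w+)\) Entering (?P<state>BACKUP|MASTER|FAULT) STATE")

def vrrp_states(lines, current=None):
    current = dict(current or {})
    for line in lines:
        m = STATE_RE.search(line)
        if m:
            current[m.group("inst")] = m.group("state")
    return current

print(vrrp_states(["... (VI_0) Entering BACKUP STATE (init)"]))  # {'VI_0': 'BACKUP'}
```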
Dec 06 09:42:38 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e60 do_prune osdmap full prune enabled
Dec 06 09:42:38 compute-0 sudo[96166]: pam_unix(sudo:session): session closed for user root
Dec 06 09:42:38 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 06 09:42:39 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v60: 337 pgs: 93 unknown, 244 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 725 B/s rd, 0 op/s
Dec 06 09:42:39 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e61 e61: 3 total, 3 up, 3 in
Dec 06 09:42:39 compute-0 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e61: 3 total, 3 up, 3 in
Dec 06 09:42:39 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 61 pg[12.11( empty local-lis/les=49/50 n=0 ec=60/49 lis/c=49/49 les/c/f=50/50/0 sis=60) [1] r=0 lpr=60 pi=[49,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 09:42:39 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 61 pg[12.13( empty local-lis/les=49/50 n=0 ec=60/49 lis/c=49/49 les/c/f=50/50/0 sis=60) [1] r=0 lpr=60 pi=[49,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 09:42:39 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 61 pg[12.12( empty local-lis/les=49/50 n=0 ec=60/49 lis/c=49/49 les/c/f=50/50/0 sis=60) [1] r=0 lpr=60 pi=[49,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 09:42:39 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 61 pg[12.15( empty local-lis/les=49/50 n=0 ec=60/49 lis/c=49/49 les/c/f=50/50/0 sis=60) [1] r=0 lpr=60 pi=[49,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 09:42:39 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 61 pg[12.10( empty local-lis/les=49/50 n=0 ec=60/49 lis/c=49/49 les/c/f=50/50/0 sis=60) [1] r=0 lpr=60 pi=[49,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 09:42:39 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 61 pg[12.4( empty local-lis/les=49/50 n=0 ec=60/49 lis/c=49/49 les/c/f=50/50/0 sis=60) [1] r=0 lpr=60 pi=[49,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 09:42:39 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 61 pg[12.6( empty local-lis/les=49/50 n=0 ec=60/49 lis/c=49/49 les/c/f=50/50/0 sis=60) [1] r=0 lpr=60 pi=[49,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 09:42:39 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 61 pg[12.9( empty local-lis/les=49/50 n=0 ec=60/49 lis/c=49/49 les/c/f=50/50/0 sis=60) [1] r=0 lpr=60 pi=[49,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 09:42:39 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 61 pg[12.8( empty local-lis/les=49/50 n=0 ec=60/49 lis/c=49/49 les/c/f=50/50/0 sis=60) [1] r=0 lpr=60 pi=[49,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 09:42:39 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 61 pg[12.a( empty local-lis/les=49/50 n=0 ec=60/49 lis/c=49/49 les/c/f=50/50/0 sis=60) [1] r=0 lpr=60 pi=[49,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 09:42:39 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 61 pg[12.c( empty local-lis/les=49/50 n=0 ec=60/49 lis/c=49/49 les/c/f=50/50/0 sis=60) [1] r=0 lpr=60 pi=[49,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 09:42:39 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 61 pg[12.b( empty local-lis/les=49/50 n=0 ec=60/49 lis/c=49/49 les/c/f=50/50/0 sis=60) [1] r=0 lpr=60 pi=[49,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 09:42:39 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 61 pg[12.e( empty local-lis/les=49/50 n=0 ec=60/49 lis/c=49/49 les/c/f=50/50/0 sis=60) [1] r=0 lpr=60 pi=[49,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 09:42:39 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 61 pg[12.d( empty local-lis/les=49/50 n=0 ec=60/49 lis/c=49/49 les/c/f=50/50/0 sis=60) [1] r=0 lpr=60 pi=[49,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 09:42:39 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 61 pg[12.5( empty local-lis/les=49/50 n=0 ec=60/49 lis/c=49/49 les/c/f=50/50/0 sis=60) [1] r=0 lpr=60 pi=[49,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 09:42:39 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 61 pg[12.2( empty local-lis/les=49/50 n=0 ec=60/49 lis/c=49/49 les/c/f=50/50/0 sis=60) [1] r=0 lpr=60 pi=[49,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 09:42:39 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 61 pg[12.3( empty local-lis/les=49/50 n=0 ec=60/49 lis/c=49/49 les/c/f=50/50/0 sis=60) [1] r=0 lpr=60 pi=[49,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 09:42:39 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 61 pg[12.1f( empty local-lis/les=49/50 n=0 ec=60/49 lis/c=49/49 les/c/f=50/50/0 sis=60) [1] r=0 lpr=60 pi=[49,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 09:42:39 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 61 pg[12.1c( empty local-lis/les=49/50 n=0 ec=60/49 lis/c=49/49 les/c/f=50/50/0 sis=60) [1] r=0 lpr=60 pi=[49,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 09:42:39 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 61 pg[12.1a( empty local-lis/les=49/50 n=0 ec=60/49 lis/c=49/49 les/c/f=50/50/0 sis=60) [1] r=0 lpr=60 pi=[49,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 09:42:39 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 61 pg[12.1b( empty local-lis/les=49/50 n=0 ec=60/49 lis/c=49/49 les/c/f=50/50/0 sis=60) [1] r=0 lpr=60 pi=[49,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 09:42:39 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 61 pg[12.18( empty local-lis/les=49/50 n=0 ec=60/49 lis/c=49/49 les/c/f=50/50/0 sis=60) [1] r=0 lpr=60 pi=[49,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 09:42:39 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 61 pg[12.19( empty local-lis/les=49/50 n=0 ec=60/49 lis/c=49/49 les/c/f=50/50/0 sis=60) [1] r=0 lpr=60 pi=[49,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 09:42:39 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 61 pg[12.16( empty local-lis/les=49/50 n=0 ec=60/49 lis/c=49/49 les/c/f=50/50/0 sis=60) [1] r=0 lpr=60 pi=[49,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 09:42:39 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 61 pg[12.14( empty local-lis/les=49/50 n=0 ec=60/49 lis/c=49/49 les/c/f=50/50/0 sis=60) [1] r=0 lpr=60 pi=[49,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 09:42:39 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 61 pg[12.f( empty local-lis/les=49/50 n=0 ec=60/49 lis/c=49/49 les/c/f=50/50/0 sis=60) [1] r=0 lpr=60 pi=[49,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 09:42:39 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 61 pg[12.1( empty local-lis/les=49/50 n=0 ec=60/49 lis/c=49/49 les/c/f=50/50/0 sis=60) [1] r=0 lpr=60 pi=[49,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 09:42:39 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 61 pg[12.7( empty local-lis/les=49/50 n=0 ec=60/49 lis/c=49/49 les/c/f=50/50/0 sis=60) [1] r=0 lpr=60 pi=[49,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 09:42:39 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 61 pg[12.1e( empty local-lis/les=49/50 n=0 ec=60/49 lis/c=49/49 les/c/f=50/50/0 sis=60) [1] r=0 lpr=60 pi=[49,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 09:42:39 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 61 pg[12.1d( empty local-lis/les=49/50 n=0 ec=60/49 lis/c=49/49 les/c/f=50/50/0 sis=60) [1] r=0 lpr=60 pi=[49,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 09:42:39 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 61 pg[12.17( empty local-lis/les=49/50 n=0 ec=60/49 lis/c=49/49 les/c/f=50/50/0 sis=60) [1] r=0 lpr=60 pi=[49,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 09:42:39 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 61 pg[12.11( empty local-lis/les=60/61 n=0 ec=60/49 lis/c=49/49 les/c/f=50/50/0 sis=60) [1] r=0 lpr=60 pi=[49,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 09:42:39 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 61 pg[12.15( empty local-lis/les=60/61 n=0 ec=60/49 lis/c=49/49 les/c/f=50/50/0 sis=60) [1] r=0 lpr=60 pi=[49,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 09:42:39 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 61 pg[12.12( empty local-lis/les=60/61 n=0 ec=60/49 lis/c=49/49 les/c/f=50/50/0 sis=60) [1] r=0 lpr=60 pi=[49,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 09:42:39 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 61 pg[12.13( empty local-lis/les=60/61 n=0 ec=60/49 lis/c=49/49 les/c/f=50/50/0 sis=60) [1] r=0 lpr=60 pi=[49,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 09:42:39 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 61 pg[12.10( empty local-lis/les=60/61 n=0 ec=60/49 lis/c=49/49 les/c/f=50/50/0 sis=60) [1] r=0 lpr=60 pi=[49,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 09:42:39 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 61 pg[12.9( empty local-lis/les=60/61 n=0 ec=60/49 lis/c=49/49 les/c/f=50/50/0 sis=60) [1] r=0 lpr=60 pi=[49,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 09:42:39 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 61 pg[12.4( empty local-lis/les=60/61 n=0 ec=60/49 lis/c=49/49 les/c/f=50/50/0 sis=60) [1] r=0 lpr=60 pi=[49,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 09:42:39 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 61 pg[12.a( empty local-lis/les=60/61 n=0 ec=60/49 lis/c=49/49 les/c/f=50/50/0 sis=60) [1] r=0 lpr=60 pi=[49,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 09:42:39 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 61 pg[12.c( empty local-lis/les=60/61 n=0 ec=60/49 lis/c=49/49 les/c/f=50/50/0 sis=60) [1] r=0 lpr=60 pi=[49,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 09:42:39 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 61 pg[12.8( empty local-lis/les=60/61 n=0 ec=60/49 lis/c=49/49 les/c/f=50/50/0 sis=60) [1] r=0 lpr=60 pi=[49,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 09:42:39 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 61 pg[12.b( empty local-lis/les=60/61 n=0 ec=60/49 lis/c=49/49 les/c/f=50/50/0 sis=60) [1] r=0 lpr=60 pi=[49,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 09:42:39 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 61 pg[12.e( empty local-lis/les=60/61 n=0 ec=60/49 lis/c=49/49 les/c/f=50/50/0 sis=60) [1] r=0 lpr=60 pi=[49,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 09:42:39 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 61 pg[12.5( empty local-lis/les=60/61 n=0 ec=60/49 lis/c=49/49 les/c/f=50/50/0 sis=60) [1] r=0 lpr=60 pi=[49,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 09:42:39 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 61 pg[12.2( empty local-lis/les=60/61 n=0 ec=60/49 lis/c=49/49 les/c/f=50/50/0 sis=60) [1] r=0 lpr=60 pi=[49,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 09:42:39 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 61 pg[12.d( empty local-lis/les=60/61 n=0 ec=60/49 lis/c=49/49 les/c/f=50/50/0 sis=60) [1] r=0 lpr=60 pi=[49,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 09:42:39 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 61 pg[12.0( empty local-lis/les=60/61 n=0 ec=49/49 lis/c=49/49 les/c/f=50/50/0 sis=60) [1] r=0 lpr=60 pi=[49,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 09:42:39 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 61 pg[12.3( empty local-lis/les=60/61 n=0 ec=60/49 lis/c=49/49 les/c/f=50/50/0 sis=60) [1] r=0 lpr=60 pi=[49,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 09:42:39 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 61 pg[12.6( empty local-lis/les=60/61 n=0 ec=60/49 lis/c=49/49 les/c/f=50/50/0 sis=60) [1] r=0 lpr=60 pi=[49,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 09:42:39 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 61 pg[12.1a( empty local-lis/les=60/61 n=0 ec=60/49 lis/c=49/49 les/c/f=50/50/0 sis=60) [1] r=0 lpr=60 pi=[49,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 09:42:39 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 61 pg[12.1f( empty local-lis/les=60/61 n=0 ec=60/49 lis/c=49/49 les/c/f=50/50/0 sis=60) [1] r=0 lpr=60 pi=[49,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 09:42:39 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 61 pg[12.1c( empty local-lis/les=60/61 n=0 ec=60/49 lis/c=49/49 les/c/f=50/50/0 sis=60) [1] r=0 lpr=60 pi=[49,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 09:42:39 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 61 pg[12.1b( empty local-lis/les=60/61 n=0 ec=60/49 lis/c=49/49 les/c/f=50/50/0 sis=60) [1] r=0 lpr=60 pi=[49,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 09:42:39 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 61 pg[12.16( empty local-lis/les=60/61 n=0 ec=60/49 lis/c=49/49 les/c/f=50/50/0 sis=60) [1] r=0 lpr=60 pi=[49,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 09:42:39 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 61 pg[12.14( empty local-lis/les=60/61 n=0 ec=60/49 lis/c=49/49 les/c/f=50/50/0 sis=60) [1] r=0 lpr=60 pi=[49,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 09:42:39 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 61 pg[12.1( empty local-lis/les=60/61 n=0 ec=60/49 lis/c=49/49 les/c/f=50/50/0 sis=60) [1] r=0 lpr=60 pi=[49,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 09:42:39 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 61 pg[12.19( empty local-lis/les=60/61 n=0 ec=60/49 lis/c=49/49 les/c/f=50/50/0 sis=60) [1] r=0 lpr=60 pi=[49,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 09:42:39 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 61 pg[12.7( empty local-lis/les=60/61 n=0 ec=60/49 lis/c=49/49 les/c/f=50/50/0 sis=60) [1] r=0 lpr=60 pi=[49,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 09:42:39 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 61 pg[12.f( empty local-lis/les=60/61 n=0 ec=60/49 lis/c=49/49 les/c/f=50/50/0 sis=60) [1] r=0 lpr=60 pi=[49,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 09:42:39 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 61 pg[12.18( empty local-lis/les=60/61 n=0 ec=60/49 lis/c=49/49 les/c/f=50/50/0 sis=60) [1] r=0 lpr=60 pi=[49,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 09:42:39 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 61 pg[12.1e( empty local-lis/les=60/61 n=0 ec=60/49 lis/c=49/49 les/c/f=50/50/0 sis=60) [1] r=0 lpr=60 pi=[49,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 09:42:39 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 61 pg[12.17( empty local-lis/les=60/61 n=0 ec=60/49 lis/c=49/49 les/c/f=50/50/0 sis=60) [1] r=0 lpr=60 pi=[49,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 09:42:39 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 61 pg[12.1d( empty local-lis/les=60/61 n=0 ec=60/49 lis/c=49/49 les/c/f=50/50/0 sis=60) [1] r=0 lpr=60 pi=[49,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
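The burst above shows the newly created pool 12 finishing peering on osd.1: each PG first enters the Start state and transitions to Primary (acting set [1], role r=0), and once every member of the acting set has activated, the Started/Primary/Active state reacts to AllReplicasActivated and logs "Activating complete". A minimal sketch for tallying these state-machine events out of journal lines like the ones above (the regex shape is an assumption fitted to this log's format, not Ceph tooling):

    #!/usr/bin/env python3
    # Tally PG state-machine events from ceph-osd journal lines, e.g.
    # "... mbc={}] state<Start>: transitioning to Primary".
    import re
    import sys
    from collections import Counter

    PG_EVENT = re.compile(
        r"pg\[(?P<pgid>\d+\.[0-9a-f]+)\(.*?\] "
        r"state<(?P<state>[^>]+)>: (?P<event>.+)$"
    )

    counts = Counter()
    for line in sys.stdin:
        m = PG_EVENT.search(line)
        if m:
            counts[(m.group("state"), m.group("event"))] += 1

    for (state, event), n in counts.most_common():
        print(f"{n:5d}  {state}: {event}")

Fed this section on stdin (e.g. journalctl | python3 pg_events.py), it would group the Start -> Primary transitions and the AllReplicasActivated reactions per event.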
Dec 06 09:42:39 compute-0 ceph-mgr[74618]: [progress INFO root] Writing back 22 completed events
Dec 06 09:42:39 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Dec 06 09:42:39 compute-0 ceph-mon[74327]: 9.2 scrub starts
Dec 06 09:42:39 compute-0 ceph-mon[74327]: 9.2 scrub ok
Dec 06 09:42:39 compute-0 ceph-mon[74327]: 7.17 deep-scrub starts
Dec 06 09:42:39 compute-0 ceph-mon[74327]: 7.17 deep-scrub ok
Dec 06 09:42:39 compute-0 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]': finished
Dec 06 09:42:39 compute-0 ceph-mon[74327]: osdmap e60: 3 total, 3 up, 3 in
Dec 06 09:42:39 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:42:39 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 06 09:42:39 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:42:39 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9240001c00 fd 37 proxy header rest len failed header rlen = % (will set dead)
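The ganesha "svc_vc_recv ... (will set dead)" events recur throughout this section. The message text itself says the daemon failed while reading a proxy header length on an incoming TCP connection, so ntirpc marks that transport dead; the bare "%" looks like a mangled format specifier in the daemon's own log call, so the offending record length is never printed. Given that this NFS service sits behind a cephadm ingress (haproxy plus the keepalived deployment a few lines below), load-balancer health probes that connect without sending the expected PROXY header are a plausible source, though that is an inference, not something the log states. A quick sketch to see whether the events cluster on particular worker threads (the svc_N thread names are taken from the log; nothing else is assumed):

    #!/usr/bin/env python3
    # Count ganesha TIRPC "will set dead" events per svc_N worker thread.
    import re
    import sys
    from collections import Counter

    pat = re.compile(r"ganesha\.nfsd-\d+\[(svc_\d+)\].*will set dead")
    hits = Counter(m.group(1) for line in sys.stdin for m in pat.finditer(line))
    for thread, n in hits.most_common():
        print(thread, n)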
Dec 06 09:42:39 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:42:39 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:42:39 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.nfs.cephfs}] v 0)
Dec 06 09:42:39 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:42:39 compute-0 ceph-mgr[74618]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Dec 06 09:42:39 compute-0 ceph-mgr[74618]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Dec 06 09:42:39 compute-0 ceph-mgr[74618]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Dec 06 09:42:39 compute-0 ceph-mgr[74618]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Dec 06 09:42:39 compute-0 ceph-mgr[74618]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Dec 06 09:42:39 compute-0 ceph-mgr[74618]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Dec 06 09:42:39 compute-0 ceph-mgr[74618]: [cephadm INFO cephadm.serve] Deploying daemon keepalived.nfs.cephfs.compute-2.whsrlg on compute-2
Dec 06 09:42:39 compute-0 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Deploying daemon keepalived.nfs.cephfs.compute-2.whsrlg on compute-2
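Before deploying keepalived, cephadm checks every candidate host for an interface whose configured subnet contains the virtual IP; the three INFO pairs above record that 192.168.122.2 falls inside 192.168.122.0/24 on br-ex of all three compute hosts. Roughly the same membership test, sketched with the standard ipaddress module (the host/interface table is illustrative, copied from these log lines; this is not cephadm's actual code):

    #!/usr/bin/env python3
    # Subnet-membership check behind the cephadm ingress INFO lines above.
    import ipaddress

    virtual_ip = ipaddress.ip_address("192.168.122.2")

    # host -> {interface: configured subnet}, as reported in the log
    host_ifaces = {
        "compute-0": {"br-ex": "192.168.122.0/24"},
        "compute-1": {"br-ex": "192.168.122.0/24"},
        "compute-2": {"br-ex": "192.168.122.0/24"},
    }

    for host, ifaces in host_ifaces.items():
        for iface, subnet in ifaces.items():
            net = ipaddress.ip_network(subnet)
            if virtual_ip in net:
                print(f"{virtual_ip} is in {net} on {host} interface {iface}")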
Dec 06 09:42:39 compute-0 ceph-osd[82803]: log_channel(cluster) log [DBG] : 7.7 scrub starts
Dec 06 09:42:39 compute-0 ceph-osd[82803]: log_channel(cluster) log [DBG] : 7.7 scrub ok
Dec 06 09:42:40 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:42:40 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9240001c00 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:42:40 compute-0 ceph-mon[74327]: 6.6 scrub starts
Dec 06 09:42:40 compute-0 ceph-mon[74327]: 6.6 scrub ok
Dec 06 09:42:40 compute-0 ceph-mon[74327]: 7.0 deep-scrub starts
Dec 06 09:42:40 compute-0 ceph-mon[74327]: 7.0 deep-scrub ok
Dec 06 09:42:40 compute-0 ceph-mon[74327]: pgmap v60: 337 pgs: 93 unknown, 244 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 725 B/s rd, 0 op/s
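The pgmap summary is the mgr's one-line census of PG states: here 93 of 337 PGs are still "unknown" (likely the just-created pool's PGs the mgr has not yet heard from) and 244 are active+clean; by pgmap v62 further down, all 337 are active+clean. A small parse of that census line (the string is copied from the entry above; the regex is an assumption about this one format):

    #!/usr/bin/env python3
    # Parse a mgr pgmap census line into {state: count} and sanity-check it.
    import re

    line = "pgmap v60: 337 pgs: 93 unknown, 244 active+clean; 456 KiB data"
    total = int(re.search(r"(\d+) pgs:", line).group(1))
    states = {
        state: int(n)
        for n, state in re.findall(r"(\d+) ([a-z+_]+)[,;]", line)
    }
    assert sum(states.values()) == total  # 93 + 244 == 337
    print(states)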
Dec 06 09:42:40 compute-0 ceph-mon[74327]: osdmap e61: 3 total, 3 up, 3 in
Dec 06 09:42:40 compute-0 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:42:40 compute-0 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:42:40 compute-0 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:42:40 compute-0 ceph-mon[74327]: 6.f scrub starts
Dec 06 09:42:40 compute-0 ceph-mon[74327]: 6.f scrub ok
Dec 06 09:42:40 compute-0 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:42:40 compute-0 ceph-mon[74327]: 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Dec 06 09:42:40 compute-0 ceph-mon[74327]: 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Dec 06 09:42:40 compute-0 ceph-mon[74327]: 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Dec 06 09:42:40 compute-0 ceph-mon[74327]: Deploying daemon keepalived.nfs.cephfs.compute-2.whsrlg on compute-2
Dec 06 09:42:40 compute-0 ceph-mon[74327]: 6.4 scrub starts
Dec 06 09:42:40 compute-0 ceph-mon[74327]: 6.4 scrub ok
Dec 06 09:42:40 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e61 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 09:42:40 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:42:40 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9248003fe0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:42:40 compute-0 ceph-osd[82803]: log_channel(cluster) log [DBG] : 7.1 scrub starts
Dec 06 09:42:40 compute-0 ceph-osd[82803]: log_channel(cluster) log [DBG] : 7.1 scrub ok
Dec 06 09:42:41 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v62: 337 pgs: 337 active+clean; 457 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 63 B/s, 1 keys/s, 3 objects/s recovering
Dec 06 09:42:41 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"} v 0)
Dec 06 09:42:41 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"}]: dispatch
Dec 06 09:42:41 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"} v 0)
Dec 06 09:42:41 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec 06 09:42:41 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"} v 0)
Dec 06 09:42:41 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]: dispatch
Dec 06 09:42:41 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"} v 0)
Dec 06 09:42:41 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec 06 09:42:41 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e61 do_prune osdmap full prune enabled
Dec 06 09:42:41 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"}]': finished
Dec 06 09:42:41 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 06 09:42:41 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:42:41 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9228002b10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:42:41 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]': finished
Dec 06 09:42:41 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]': finished
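Each of the pgp_num_actual adjustments above appears twice in the audit channel, once at dispatch and once at finished; the mgr steps pgp_num_actual toward pgp_num so placement catches up with the pg_num changes made earlier (compare the pg_num_actual command for default.rgw.meta at 09:42:39). The same adjustments can be issued by hand with the ceph CLI; a sketch replaying exactly the four commands audited above (test clusters only):

    #!/usr/bin/env python3
    # Replay the four "osd pool set ... pgp_num_actual" commands from the
    # audit entries above via the ceph CLI.
    import subprocess

    pools = {
        "cephfs.cephfs.meta": 3,
        "default.rgw.control": 32,
        "default.rgw.log": 2,
        "default.rgw.meta": 32,
    }

    for pool, pgp in pools.items():
        subprocess.run(
            ["ceph", "osd", "pool", "set", pool, "pgp_num_actual", str(pgp)],
            check=True,
        )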
Dec 06 09:42:41 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e62 e62: 3 total, 3 up, 3 in
Dec 06 09:42:41 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 62 pg[12.11( empty local-lis/les=60/61 n=0 ec=60/49 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=13.885075569s) [2] r=-1 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 active pruub 197.367706299s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 06 09:42:41 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 62 pg[12.11( empty local-lis/les=60/61 n=0 ec=60/49 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=13.885017395s) [2] r=-1 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 197.367706299s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 09:42:41 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 62 pg[10.17( v 51'1027 (0'0,51'1027] local-lis/les=58/59 n=5 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=62 pruub=11.828448296s) [2] r=-1 lpr=62 pi=[58,62)/1 crt=51'1027 lcod 0'0 mlcod 0'0 active pruub 195.311187744s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 06 09:42:41 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 62 pg[10.17( v 51'1027 (0'0,51'1027] local-lis/les=58/59 n=5 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=62 pruub=11.828311920s) [2] r=-1 lpr=62 pi=[58,62)/1 crt=51'1027 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 195.311187744s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 09:42:41 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 62 pg[12.13( empty local-lis/les=60/61 n=0 ec=60/49 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=13.887252808s) [2] r=-1 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 active pruub 197.370223999s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 06 09:42:41 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 62 pg[12.13( empty local-lis/les=60/61 n=0 ec=60/49 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=13.887207031s) [2] r=-1 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 197.370223999s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 09:42:41 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 62 pg[10.15( v 51'1027 (0'0,51'1027] local-lis/les=58/59 n=5 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=62 pruub=11.828056335s) [2] r=-1 lpr=62 pi=[58,62)/1 crt=51'1027 lcod 0'0 mlcod 0'0 active pruub 195.311126709s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 06 09:42:41 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 62 pg[10.15( v 51'1027 (0'0,51'1027] local-lis/les=58/59 n=5 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=62 pruub=11.828003883s) [2] r=-1 lpr=62 pi=[58,62)/1 crt=51'1027 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 195.311126709s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 09:42:41 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 62 pg[12.12( empty local-lis/les=60/61 n=0 ec=60/49 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=13.886998177s) [0] r=-1 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 active pruub 197.370239258s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 06 09:42:41 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 62 pg[12.12( empty local-lis/les=60/61 n=0 ec=60/49 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=13.886981964s) [0] r=-1 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 197.370239258s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 09:42:41 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 62 pg[10.13( v 51'1027 (0'0,51'1027] local-lis/les=58/59 n=5 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=62 pruub=11.827672958s) [2] r=-1 lpr=62 pi=[58,62)/1 crt=51'1027 lcod 0'0 mlcod 0'0 active pruub 195.311065674s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 06 09:42:41 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 62 pg[10.13( v 51'1027 (0'0,51'1027] local-lis/les=58/59 n=5 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=62 pruub=11.827654839s) [2] r=-1 lpr=62 pi=[58,62)/1 crt=51'1027 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 195.311065674s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 09:42:41 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 62 pg[12.4( empty local-lis/les=60/61 n=0 ec=60/49 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=13.887239456s) [2] r=-1 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 active pruub 197.370712280s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 06 09:42:41 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 62 pg[12.10( empty local-lis/les=60/61 n=0 ec=60/49 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=13.886943817s) [0] r=-1 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 active pruub 197.370452881s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 06 09:42:41 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 62 pg[12.4( empty local-lis/les=60/61 n=0 ec=60/49 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=13.887214661s) [2] r=-1 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 197.370712280s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 09:42:41 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 62 pg[12.10( empty local-lis/les=60/61 n=0 ec=60/49 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=13.886919975s) [0] r=-1 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 197.370452881s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 09:42:41 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 62 pg[12.6( empty local-lis/les=60/61 n=0 ec=60/49 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=13.887412071s) [0] r=-1 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 active pruub 197.371078491s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 06 09:42:41 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 62 pg[12.6( empty local-lis/les=60/61 n=0 ec=60/49 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=13.887392998s) [0] r=-1 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 197.371078491s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 09:42:41 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 62 pg[10.f( v 51'1027 (0'0,51'1027] local-lis/les=58/59 n=6 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=62 pruub=11.827157974s) [2] r=-1 lpr=62 pi=[58,62)/1 crt=51'1027 lcod 0'0 mlcod 0'0 active pruub 195.311004639s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 06 09:42:41 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 62 pg[12.8( empty local-lis/les=60/61 n=0 ec=60/49 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=13.887278557s) [0] r=-1 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 active pruub 197.371185303s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 06 09:42:41 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 62 pg[10.f( v 51'1027 (0'0,51'1027] local-lis/les=58/59 n=6 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=62 pruub=11.827139854s) [2] r=-1 lpr=62 pi=[58,62)/1 crt=51'1027 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 195.311004639s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 09:42:41 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 62 pg[12.a( empty local-lis/les=60/61 n=0 ec=60/49 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=13.887115479s) [0] r=-1 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 active pruub 197.371124268s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 06 09:42:41 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 62 pg[12.8( empty local-lis/les=60/61 n=0 ec=60/49 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=13.887250900s) [0] r=-1 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 197.371185303s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 09:42:41 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 62 pg[12.a( empty local-lis/les=60/61 n=0 ec=60/49 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=13.887094498s) [0] r=-1 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 197.371124268s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 09:42:41 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 62 pg[12.9( empty local-lis/les=60/61 n=0 ec=60/49 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=13.886641502s) [2] r=-1 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 active pruub 197.370697021s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 06 09:42:41 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 62 pg[12.9( empty local-lis/les=60/61 n=0 ec=60/49 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=13.886605263s) [2] r=-1 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 197.370697021s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 09:42:41 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 62 pg[12.c( empty local-lis/les=60/61 n=0 ec=60/49 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=13.886935234s) [0] r=-1 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 active pruub 197.371154785s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 06 09:42:41 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 62 pg[12.c( empty local-lis/les=60/61 n=0 ec=60/49 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=13.886913300s) [0] r=-1 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 197.371154785s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 09:42:41 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 62 pg[10.d( v 51'1027 (0'0,51'1027] local-lis/les=58/59 n=6 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=62 pruub=11.826405525s) [2] r=-1 lpr=62 pi=[58,62)/1 crt=51'1027 lcod 0'0 mlcod 0'0 active pruub 195.310913086s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 06 09:42:41 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 62 pg[10.d( v 51'1027 (0'0,51'1027] local-lis/les=58/59 n=6 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=62 pruub=11.826385498s) [2] r=-1 lpr=62 pi=[58,62)/1 crt=51'1027 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 195.310913086s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 09:42:41 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 62 pg[12.e( empty local-lis/les=60/61 n=0 ec=60/49 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=13.886591911s) [0] r=-1 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 active pruub 197.371231079s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 06 09:42:41 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 62 pg[12.b( empty local-lis/les=60/61 n=0 ec=60/49 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=13.886525154s) [0] r=-1 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 active pruub 197.371200562s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 06 09:42:41 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 62 pg[12.e( empty local-lis/les=60/61 n=0 ec=60/49 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=13.886573792s) [0] r=-1 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 197.371231079s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 09:42:41 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 62 pg[12.b( empty local-lis/les=60/61 n=0 ec=60/49 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=13.886501312s) [0] r=-1 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 197.371200562s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 09:42:41 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 62 pg[10.b( v 51'1027 (0'0,51'1027] local-lis/les=58/59 n=6 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=62 pruub=11.826035500s) [2] r=-1 lpr=62 pi=[58,62)/1 crt=51'1027 lcod 0'0 mlcod 0'0 active pruub 195.310882568s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 06 09:42:41 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 62 pg[10.b( v 51'1027 (0'0,51'1027] local-lis/les=58/59 n=6 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=62 pruub=11.826019287s) [2] r=-1 lpr=62 pi=[58,62)/1 crt=51'1027 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 195.310882568s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 09:42:41 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 62 pg[10.3( v 51'1027 (0'0,51'1027] local-lis/les=58/59 n=6 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=62 pruub=11.825989723s) [2] r=-1 lpr=62 pi=[58,62)/1 crt=51'1027 lcod 0'0 mlcod 0'0 active pruub 195.310928345s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 06 09:42:41 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 62 pg[10.3( v 51'1027 (0'0,51'1027] local-lis/les=58/59 n=6 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=62 pruub=11.825959206s) [2] r=-1 lpr=62 pi=[58,62)/1 crt=51'1027 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 195.310928345s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 09:42:41 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 62 pg[12.2( empty local-lis/les=60/61 n=0 ec=60/49 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=13.886281967s) [2] r=-1 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 active pruub 197.371292114s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 06 09:42:41 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 62 pg[12.3( empty local-lis/les=60/61 n=0 ec=60/49 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=13.886356354s) [2] r=-1 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 active pruub 197.371398926s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 06 09:42:41 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 62 pg[12.3( empty local-lis/les=60/61 n=0 ec=60/49 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=13.886339188s) [2] r=-1 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 197.371398926s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 09:42:41 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 62 pg[10.5( v 61'1030 (0'0,61'1030] local-lis/les=58/59 n=6 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=62 pruub=11.825742722s) [2] r=-1 lpr=62 pi=[58,62)/1 crt=59'1028 lcod 59'1029 mlcod 59'1029 active pruub 195.310867310s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 06 09:42:41 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 62 pg[10.5( v 61'1030 (0'0,61'1030] local-lis/les=58/59 n=6 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=62 pruub=11.825701714s) [2] r=-1 lpr=62 pi=[58,62)/1 crt=59'1028 lcod 59'1029 mlcod 0'0 unknown NOTIFY pruub 195.310867310s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 09:42:41 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 62 pg[10.19( v 51'1027 (0'0,51'1027] local-lis/les=58/59 n=5 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=62 pruub=11.825399399s) [2] r=-1 lpr=62 pi=[58,62)/1 crt=51'1027 lcod 0'0 mlcod 0'0 active pruub 195.310806274s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 06 09:42:41 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 62 pg[12.2( empty local-lis/les=60/61 n=0 ec=60/49 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=13.886258125s) [2] r=-1 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 197.371292114s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 09:42:41 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 62 pg[12.1c( empty local-lis/les=60/61 n=0 ec=60/49 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=13.886066437s) [0] r=-1 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 active pruub 197.371490479s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 06 09:42:41 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 62 pg[10.19( v 51'1027 (0'0,51'1027] local-lis/les=58/59 n=5 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=62 pruub=11.825379372s) [2] r=-1 lpr=62 pi=[58,62)/1 crt=51'1027 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 195.310806274s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 09:42:41 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 62 pg[12.1c( empty local-lis/les=60/61 n=0 ec=60/49 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=13.886047363s) [0] r=-1 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 197.371490479s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 09:42:41 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 62 pg[12.1a( empty local-lis/les=60/61 n=0 ec=60/49 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=13.885918617s) [2] r=-1 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 active pruub 197.371459961s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 06 09:42:41 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 62 pg[12.1a( empty local-lis/les=60/61 n=0 ec=60/49 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=13.885900497s) [2] r=-1 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 197.371459961s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 09:42:41 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 62 pg[10.1d( v 51'1027 (0'0,51'1027] local-lis/les=58/59 n=5 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=62 pruub=11.824880600s) [2] r=-1 lpr=62 pi=[58,62)/1 crt=51'1027 lcod 0'0 mlcod 0'0 active pruub 195.310684204s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 06 09:42:41 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 62 pg[12.18( empty local-lis/les=60/61 n=0 ec=60/49 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=13.891217232s) [2] r=-1 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 active pruub 197.377014160s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 06 09:42:41 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 62 pg[10.1d( v 51'1027 (0'0,51'1027] local-lis/les=58/59 n=5 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=62 pruub=11.824854851s) [2] r=-1 lpr=62 pi=[58,62)/1 crt=51'1027 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 195.310684204s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 09:42:41 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 62 pg[12.18( empty local-lis/les=60/61 n=0 ec=60/49 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=13.891183853s) [2] r=-1 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 197.377014160s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 09:42:41 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 62 pg[10.1f( v 51'1027 (0'0,51'1027] local-lis/les=58/59 n=5 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=62 pruub=11.824689865s) [2] r=-1 lpr=62 pi=[58,62)/1 crt=51'1027 lcod 0'0 mlcod 0'0 active pruub 195.310638428s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 06 09:42:41 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 62 pg[10.1f( v 51'1027 (0'0,51'1027] local-lis/les=58/59 n=5 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=62 pruub=11.824667931s) [2] r=-1 lpr=62 pi=[58,62)/1 crt=51'1027 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 195.310638428s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 09:42:41 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 62 pg[12.19( empty local-lis/les=60/61 n=0 ec=60/49 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=13.890976906s) [0] r=-1 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 active pruub 197.376953125s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 06 09:42:41 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 62 pg[10.9( v 51'1027 (0'0,51'1027] local-lis/les=58/59 n=6 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=62 pruub=11.824485779s) [2] r=-1 lpr=62 pi=[58,62)/1 crt=51'1027 lcod 0'0 mlcod 0'0 active pruub 195.310668945s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 06 09:42:41 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 62 pg[12.19( empty local-lis/les=60/61 n=0 ec=60/49 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=13.890837669s) [0] r=-1 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 197.376953125s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 09:42:41 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 62 pg[10.9( v 51'1027 (0'0,51'1027] local-lis/les=58/59 n=6 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=62 pruub=11.824462891s) [2] r=-1 lpr=62 pi=[58,62)/1 crt=51'1027 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 195.310668945s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 09:42:41 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 62 pg[12.7( empty local-lis/les=60/61 n=0 ec=60/49 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=13.890755653s) [2] r=-1 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 active pruub 197.376983643s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 06 09:42:41 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 62 pg[12.7( empty local-lis/les=60/61 n=0 ec=60/49 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=13.890736580s) [2] r=-1 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 197.376983643s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 09:42:41 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 62 pg[10.7( v 51'1027 (0'0,51'1027] local-lis/les=58/59 n=6 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=62 pruub=11.824155807s) [2] r=-1 lpr=62 pi=[58,62)/1 crt=51'1027 lcod 0'0 mlcod 0'0 active pruub 195.310455322s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 06 09:42:41 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 62 pg[10.7( v 51'1027 (0'0,51'1027] local-lis/les=58/59 n=6 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=62 pruub=11.824110031s) [2] r=-1 lpr=62 pi=[58,62)/1 crt=51'1027 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 195.310455322s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 09:42:41 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 62 pg[10.1( v 51'1027 (0'0,51'1027] local-lis/les=58/59 n=6 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=62 pruub=11.820496559s) [2] r=-1 lpr=62 pi=[58,62)/1 crt=51'1027 lcod 0'0 mlcod 0'0 active pruub 195.306900024s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 06 09:42:41 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 62 pg[10.1( v 51'1027 (0'0,51'1027] local-lis/les=58/59 n=6 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=62 pruub=11.820478439s) [2] r=-1 lpr=62 pi=[58,62)/1 crt=51'1027 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 195.306900024s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 09:42:41 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 62 pg[12.1e( empty local-lis/les=60/61 n=0 ec=60/49 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=13.890796661s) [2] r=-1 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 active pruub 197.377243042s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 06 09:42:41 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 62 pg[12.1e( empty local-lis/les=60/61 n=0 ec=60/49 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=13.890778542s) [2] r=-1 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 197.377243042s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 09:42:41 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 62 pg[12.1d( empty local-lis/les=60/61 n=0 ec=60/49 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=13.891049385s) [2] r=-1 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 active pruub 197.377578735s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 06 09:42:41 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 62 pg[10.1b( v 51'1027 (0'0,51'1027] local-lis/les=58/59 n=5 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=62 pruub=11.823896408s) [2] r=-1 lpr=62 pi=[58,62)/1 crt=51'1027 lcod 0'0 mlcod 0'0 active pruub 195.310440063s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 06 09:42:41 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 62 pg[12.1d( empty local-lis/les=60/61 n=0 ec=60/49 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=13.891033173s) [2] r=-1 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 197.377578735s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 09:42:41 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 62 pg[10.1b( v 51'1027 (0'0,51'1027] local-lis/les=58/59 n=5 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=62 pruub=11.823879242s) [2] r=-1 lpr=62 pi=[58,62)/1 crt=51'1027 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 195.310440063s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 09:42:41 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 62 pg[10.11( v 51'1027 (0'0,51'1027] local-lis/les=58/59 n=6 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=62 pruub=11.820156097s) [2] r=-1 lpr=62 pi=[58,62)/1 crt=51'1027 lcod 0'0 mlcod 0'0 active pruub 195.306838989s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 06 09:42:41 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 62 pg[10.11( v 51'1027 (0'0,51'1027] local-lis/les=58/59 n=6 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=62 pruub=11.820139885s) [2] r=-1 lpr=62 pi=[58,62)/1 crt=51'1027 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 195.306838989s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 09:42:41 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 62 pg[12.17( empty local-lis/les=60/61 n=0 ec=60/49 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=13.889799118s) [2] r=-1 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 active pruub 197.377243042s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 06 09:42:41 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 62 pg[12.17( empty local-lis/les=60/61 n=0 ec=60/49 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=13.889700890s) [2] r=-1 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 197.377243042s@ mbc={}] state<Start>: transitioning to Stray
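Osdmap e62 moves these pool 10 and pool 12 PGs off osd.1 (up/acting [1] -> [2] or [0]), so osd.1 restarts each PG's peering interval, drops from primary (role 0) to non-member (role -1, r=-1), and parks the PG in Stray until the new primary decides whether the local copy is still needed. A sketch pulling the acting-set changes out of the start_peering_interval lines (regex shaped after this log's format only):

    #!/usr/bin/env python3
    # Extract "up [a] -> [b], acting [c] -> [d]" from start_peering_interval
    # lines like the ceph-osd entries above.
    import re
    import sys

    pat = re.compile(
        r"pg\[(?P<pgid>\d+\.[0-9a-f]+)\(.*?"
        r"up \[(?P<up_old>[\d,]*)\] -> \[(?P<up_new>[\d,]*)\], "
        r"acting \[(?P<act_old>[\d,]*)\] -> \[(?P<act_new>[\d,]*)\]"
    )

    for line in sys.stdin:
        m = pat.search(line)
        if m:
            print(f"{m['pgid']}: acting [{m['act_old']}] -> [{m['act_new']}]")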
Dec 06 09:42:41 compute-0 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e62: 3 total, 3 up, 3 in
Dec 06 09:42:41 compute-0 ceph-mon[74327]: 7.7 scrub starts
Dec 06 09:42:41 compute-0 ceph-mon[74327]: 7.7 scrub ok
Dec 06 09:42:41 compute-0 ceph-mon[74327]: 8.d scrub starts
Dec 06 09:42:41 compute-0 ceph-mon[74327]: 8.d scrub ok
Dec 06 09:42:41 compute-0 ceph-mon[74327]: 6.0 scrub starts
Dec 06 09:42:41 compute-0 ceph-mon[74327]: 6.0 scrub ok
Dec 06 09:42:41 compute-0 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"}]: dispatch
Dec 06 09:42:41 compute-0 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec 06 09:42:41 compute-0 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]: dispatch
Dec 06 09:42:41 compute-0 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec 06 09:42:41 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 62 pg[6.6( empty local-lis/les=0/0 n=0 ec=54/21 lis/c=54/54 les/c/f=55/55/0 sis=62) [1] r=0 lpr=62 pi=[54,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 09:42:41 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 62 pg[11.f( empty local-lis/les=0/0 n=0 ec=58/47 lis/c=58/58 les/c/f=59/59/0 sis=62) [1] r=0 lpr=62 pi=[58,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 09:42:41 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 62 pg[6.2( empty local-lis/les=0/0 n=0 ec=54/21 lis/c=54/54 les/c/f=55/55/0 sis=62) [1] r=0 lpr=62 pi=[54,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 09:42:41 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 62 pg[11.4( empty local-lis/les=0/0 n=0 ec=58/47 lis/c=58/58 les/c/f=59/59/0 sis=62) [1] r=0 lpr=62 pi=[58,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 09:42:41 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 62 pg[6.e( empty local-lis/les=0/0 n=0 ec=54/21 lis/c=54/54 les/c/f=55/55/0 sis=62) [1] r=0 lpr=62 pi=[54,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 09:42:41 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 62 pg[11.7( empty local-lis/les=0/0 n=0 ec=58/47 lis/c=58/58 les/c/f=59/59/0 sis=62) [1] r=0 lpr=62 pi=[58,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 09:42:41 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 62 pg[6.a( empty local-lis/les=0/0 n=0 ec=54/21 lis/c=54/54 les/c/f=55/55/0 sis=62) [1] r=0 lpr=62 pi=[54,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 09:42:41 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 62 pg[11.1d( empty local-lis/les=0/0 n=0 ec=58/47 lis/c=58/58 les/c/f=59/59/0 sis=62) [1] r=0 lpr=62 pi=[58,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 09:42:41 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 62 pg[11.1( empty local-lis/les=0/0 n=0 ec=58/47 lis/c=58/58 les/c/f=59/59/0 sis=62) [1] r=0 lpr=62 pi=[58,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 09:42:41 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 62 pg[11.1e( empty local-lis/les=0/0 n=0 ec=58/47 lis/c=58/58 les/c/f=59/59/0 sis=62) [1] r=0 lpr=62 pi=[58,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 09:42:41 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 62 pg[11.12( empty local-lis/les=0/0 n=0 ec=58/47 lis/c=58/58 les/c/f=59/59/0 sis=62) [1] r=0 lpr=62 pi=[58,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 09:42:41 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 62 pg[11.14( empty local-lis/les=0/0 n=0 ec=58/47 lis/c=58/58 les/c/f=59/59/0 sis=62) [1] r=0 lpr=62 pi=[58,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 09:42:41 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 62 pg[11.1b( empty local-lis/les=0/0 n=0 ec=58/47 lis/c=58/58 les/c/f=59/59/0 sis=62) [1] r=0 lpr=62 pi=[58,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 09:42:41 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 62 pg[11.1a( empty local-lis/les=0/0 n=0 ec=58/47 lis/c=58/58 les/c/f=59/59/0 sis=62) [1] r=0 lpr=62 pi=[58,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 09:42:41 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 62 pg[11.1c( empty local-lis/les=0/0 n=0 ec=58/47 lis/c=58/58 les/c/f=59/59/0 sis=62) [1] r=0 lpr=62 pi=[58,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 09:42:41 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 62 pg[11.5( empty local-lis/les=0/0 n=0 ec=58/47 lis/c=58/58 les/c/f=59/59/0 sis=62) [1] r=0 lpr=62 pi=[58,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 09:42:41 compute-0 ceph-osd[82803]: log_channel(cluster) log [DBG] : 7.d scrub starts
Dec 06 09:42:41 compute-0 ceph-osd[82803]: log_channel(cluster) log [DBG] : 7.d scrub ok
Dec 06 09:42:42 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:42:42 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9230002f50 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:42:42 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e62 do_prune osdmap full prune enabled
Dec 06 09:42:42 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e63 e63: 3 total, 3 up, 3 in
Dec 06 09:42:42 compute-0 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e63: 3 total, 3 up, 3 in
Dec 06 09:42:42 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 63 pg[10.11( v 51'1027 (0'0,51'1027] local-lis/les=58/59 n=6 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=63) [2]/[1] r=0 lpr=63 pi=[58,63)/1 crt=51'1027 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 06 09:42:42 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 63 pg[10.11( v 51'1027 (0'0,51'1027] local-lis/les=58/59 n=6 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=63) [2]/[1] r=0 lpr=63 pi=[58,63)/1 crt=51'1027 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 06 09:42:42 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 63 pg[10.1( v 51'1027 (0'0,51'1027] local-lis/les=58/59 n=6 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=63) [2]/[1] r=0 lpr=63 pi=[58,63)/1 crt=51'1027 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 06 09:42:42 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 63 pg[10.1( v 51'1027 (0'0,51'1027] local-lis/les=58/59 n=6 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=63) [2]/[1] r=0 lpr=63 pi=[58,63)/1 crt=51'1027 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 06 09:42:42 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 63 pg[10.7( v 51'1027 (0'0,51'1027] local-lis/les=58/59 n=6 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=63) [2]/[1] r=0 lpr=63 pi=[58,63)/1 crt=51'1027 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 06 09:42:42 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 63 pg[10.1b( v 51'1027 (0'0,51'1027] local-lis/les=58/59 n=5 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=63) [2]/[1] r=0 lpr=63 pi=[58,63)/1 crt=51'1027 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 06 09:42:42 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 63 pg[10.1b( v 51'1027 (0'0,51'1027] local-lis/les=58/59 n=5 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=63) [2]/[1] r=0 lpr=63 pi=[58,63)/1 crt=51'1027 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 06 09:42:42 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 63 pg[10.7( v 51'1027 (0'0,51'1027] local-lis/les=58/59 n=6 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=63) [2]/[1] r=0 lpr=63 pi=[58,63)/1 crt=51'1027 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 06 09:42:42 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 63 pg[10.1f( v 51'1027 (0'0,51'1027] local-lis/les=58/59 n=5 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=63) [2]/[1] r=0 lpr=63 pi=[58,63)/1 crt=51'1027 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 06 09:42:42 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 63 pg[10.9( v 51'1027 (0'0,51'1027] local-lis/les=58/59 n=6 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=63) [2]/[1] r=0 lpr=63 pi=[58,63)/1 crt=51'1027 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 06 09:42:42 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 63 pg[10.9( v 51'1027 (0'0,51'1027] local-lis/les=58/59 n=6 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=63) [2]/[1] r=0 lpr=63 pi=[58,63)/1 crt=51'1027 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 06 09:42:42 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 63 pg[10.1f( v 51'1027 (0'0,51'1027] local-lis/les=58/59 n=5 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=63) [2]/[1] r=0 lpr=63 pi=[58,63)/1 crt=51'1027 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 06 09:42:42 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 63 pg[10.1d( v 51'1027 (0'0,51'1027] local-lis/les=58/59 n=5 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=63) [2]/[1] r=0 lpr=63 pi=[58,63)/1 crt=51'1027 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 06 09:42:42 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 63 pg[10.1d( v 51'1027 (0'0,51'1027] local-lis/les=58/59 n=5 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=63) [2]/[1] r=0 lpr=63 pi=[58,63)/1 crt=51'1027 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 06 09:42:42 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 63 pg[10.19( v 51'1027 (0'0,51'1027] local-lis/les=58/59 n=5 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=63) [2]/[1] r=0 lpr=63 pi=[58,63)/1 crt=51'1027 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 06 09:42:42 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 63 pg[10.19( v 51'1027 (0'0,51'1027] local-lis/les=58/59 n=5 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=63) [2]/[1] r=0 lpr=63 pi=[58,63)/1 crt=51'1027 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 06 09:42:42 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 63 pg[10.5( v 61'1030 (0'0,61'1030] local-lis/les=58/59 n=6 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=63) [2]/[1] r=0 lpr=63 pi=[58,63)/1 crt=59'1028 lcod 59'1029 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 06 09:42:42 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 63 pg[10.5( v 61'1030 (0'0,61'1030] local-lis/les=58/59 n=6 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=63) [2]/[1] r=0 lpr=63 pi=[58,63)/1 crt=59'1028 lcod 59'1029 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 06 09:42:42 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 63 pg[10.3( v 51'1027 (0'0,51'1027] local-lis/les=58/59 n=6 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=63) [2]/[1] r=0 lpr=63 pi=[58,63)/1 crt=51'1027 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 06 09:42:42 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 63 pg[10.3( v 51'1027 (0'0,51'1027] local-lis/les=58/59 n=6 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=63) [2]/[1] r=0 lpr=63 pi=[58,63)/1 crt=51'1027 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 06 09:42:42 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 63 pg[10.b( v 51'1027 (0'0,51'1027] local-lis/les=58/59 n=6 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=63) [2]/[1] r=0 lpr=63 pi=[58,63)/1 crt=51'1027 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 06 09:42:42 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 63 pg[10.b( v 51'1027 (0'0,51'1027] local-lis/les=58/59 n=6 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=63) [2]/[1] r=0 lpr=63 pi=[58,63)/1 crt=51'1027 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 06 09:42:42 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 63 pg[10.d( v 51'1027 (0'0,51'1027] local-lis/les=58/59 n=6 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=63) [2]/[1] r=0 lpr=63 pi=[58,63)/1 crt=51'1027 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 06 09:42:42 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 63 pg[10.d( v 51'1027 (0'0,51'1027] local-lis/les=58/59 n=6 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=63) [2]/[1] r=0 lpr=63 pi=[58,63)/1 crt=51'1027 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 06 09:42:42 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 63 pg[10.f( v 51'1027 (0'0,51'1027] local-lis/les=58/59 n=6 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=63) [2]/[1] r=0 lpr=63 pi=[58,63)/1 crt=51'1027 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 06 09:42:42 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 63 pg[10.f( v 51'1027 (0'0,51'1027] local-lis/les=58/59 n=6 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=63) [2]/[1] r=0 lpr=63 pi=[58,63)/1 crt=51'1027 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 06 09:42:42 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 63 pg[10.13( v 51'1027 (0'0,51'1027] local-lis/les=58/59 n=5 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=63) [2]/[1] r=0 lpr=63 pi=[58,63)/1 crt=51'1027 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 06 09:42:42 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 63 pg[10.13( v 51'1027 (0'0,51'1027] local-lis/les=58/59 n=5 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=63) [2]/[1] r=0 lpr=63 pi=[58,63)/1 crt=51'1027 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 06 09:42:42 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 63 pg[10.15( v 51'1027 (0'0,51'1027] local-lis/les=58/59 n=5 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=63) [2]/[1] r=0 lpr=63 pi=[58,63)/1 crt=51'1027 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 06 09:42:42 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 63 pg[10.15( v 51'1027 (0'0,51'1027] local-lis/les=58/59 n=5 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=63) [2]/[1] r=0 lpr=63 pi=[58,63)/1 crt=51'1027 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 06 09:42:42 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 63 pg[10.17( v 51'1027 (0'0,51'1027] local-lis/les=58/59 n=5 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=63) [2]/[1] r=0 lpr=63 pi=[58,63)/1 crt=51'1027 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 06 09:42:42 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 63 pg[10.17( v 51'1027 (0'0,51'1027] local-lis/les=58/59 n=5 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=63) [2]/[1] r=0 lpr=63 pi=[58,63)/1 crt=51'1027 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 06 09:42:42 compute-0 ceph-mon[74327]: 7.1 scrub starts
Dec 06 09:42:42 compute-0 ceph-mon[74327]: 7.1 scrub ok
Dec 06 09:42:42 compute-0 ceph-mon[74327]: pgmap v62: 337 pgs: 337 active+clean; 457 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 63 B/s, 1 keys/s, 3 objects/s recovering
Dec 06 09:42:42 compute-0 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"}]': finished
Dec 06 09:42:42 compute-0 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 06 09:42:42 compute-0 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]': finished
Dec 06 09:42:42 compute-0 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 06 09:42:42 compute-0 ceph-mon[74327]: osdmap e62: 3 total, 3 up, 3 in
Dec 06 09:42:42 compute-0 ceph-mon[74327]: 9.18 scrub starts
Dec 06 09:42:42 compute-0 ceph-mon[74327]: 9.18 scrub ok
Dec 06 09:42:42 compute-0 ceph-mon[74327]: 8.e scrub starts
Dec 06 09:42:42 compute-0 ceph-mon[74327]: 8.e scrub ok
Dec 06 09:42:42 compute-0 ceph-mon[74327]: osdmap e63: 3 total, 3 up, 3 in
Dec 06 09:42:42 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 63 pg[11.1a( v 48'48 (0'0,48'48] local-lis/les=62/63 n=0 ec=58/47 lis/c=58/58 les/c/f=59/59/0 sis=62) [1] r=0 lpr=62 pi=[58,62)/1 crt=48'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 09:42:42 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 63 pg[11.1e( v 48'48 (0'0,48'48] local-lis/les=62/63 n=0 ec=58/47 lis/c=58/58 les/c/f=59/59/0 sis=62) [1] r=0 lpr=62 pi=[58,62)/1 crt=48'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 09:42:42 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 63 pg[11.1c( v 48'48 (0'0,48'48] local-lis/les=62/63 n=0 ec=58/47 lis/c=58/58 les/c/f=59/59/0 sis=62) [1] r=0 lpr=62 pi=[58,62)/1 crt=48'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 09:42:42 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 63 pg[11.7( v 48'48 (0'0,48'48] local-lis/les=62/63 n=1 ec=58/47 lis/c=58/58 les/c/f=59/59/0 sis=62) [1] r=0 lpr=62 pi=[58,62)/1 crt=48'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 09:42:42 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 63 pg[6.a( v 50'39 (0'0,50'39] local-lis/les=62/63 n=1 ec=54/21 lis/c=54/54 les/c/f=55/55/0 sis=62) [1] r=0 lpr=62 pi=[54,62)/1 crt=50'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 09:42:42 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 63 pg[11.1b( v 48'48 (0'0,48'48] local-lis/les=62/63 n=0 ec=58/47 lis/c=58/58 les/c/f=59/59/0 sis=62) [1] r=0 lpr=62 pi=[58,62)/1 crt=48'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 09:42:42 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 63 pg[11.4( v 48'48 (0'0,48'48] local-lis/les=62/63 n=1 ec=58/47 lis/c=58/58 les/c/f=59/59/0 sis=62) [1] r=0 lpr=62 pi=[58,62)/1 crt=48'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 09:42:42 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 63 pg[11.5( v 48'48 (0'0,48'48] local-lis/les=62/63 n=1 ec=58/47 lis/c=58/58 les/c/f=59/59/0 sis=62) [1] r=0 lpr=62 pi=[58,62)/1 crt=48'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 09:42:42 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 63 pg[11.f( v 48'48 (0'0,48'48] local-lis/les=62/63 n=0 ec=58/47 lis/c=58/58 les/c/f=59/59/0 sis=62) [1] r=0 lpr=62 pi=[58,62)/1 crt=48'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 09:42:42 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 63 pg[6.2( v 50'39 (0'0,50'39] local-lis/les=62/63 n=2 ec=54/21 lis/c=54/54 les/c/f=55/55/0 sis=62) [1] r=0 lpr=62 pi=[54,62)/1 crt=50'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 09:42:42 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 63 pg[11.1d( v 48'48 (0'0,48'48] local-lis/les=62/63 n=0 ec=58/47 lis/c=58/58 les/c/f=59/59/0 sis=62) [1] r=0 lpr=62 pi=[58,62)/1 crt=48'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 09:42:42 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 63 pg[11.1( v 48'48 (0'0,48'48] local-lis/les=62/63 n=1 ec=58/47 lis/c=58/58 les/c/f=59/59/0 sis=62) [1] r=0 lpr=62 pi=[58,62)/1 crt=48'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 09:42:42 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 63 pg[11.12( v 48'48 (0'0,48'48] local-lis/les=62/63 n=0 ec=58/47 lis/c=58/58 les/c/f=59/59/0 sis=62) [1] r=0 lpr=62 pi=[58,62)/1 crt=48'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 09:42:42 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 63 pg[6.e( v 50'39 lc 48'19 (0'0,50'39] local-lis/les=62/63 n=1 ec=54/21 lis/c=54/54 les/c/f=55/55/0 sis=62) [1] r=0 lpr=62 pi=[54,62)/1 crt=50'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 09:42:42 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 63 pg[11.14( v 61'51 lc 48'43 (0'0,61'51] local-lis/les=62/63 n=0 ec=58/47 lis/c=58/58 les/c/f=59/59/0 sis=62) [1] r=0 lpr=62 pi=[58,62)/1 crt=61'51 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 09:42:42 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 63 pg[6.6( v 50'39 lc 0'0 (0'0,50'39] local-lis/les=62/63 n=1 ec=54/21 lis/c=54/54 les/c/f=55/55/0 sis=62) [1] r=0 lpr=62 pi=[54,62)/1 crt=50'39 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 09:42:42 compute-0 sudo[96525]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eqbrgberllxxjhkhqaikrjetwsrpemxb ; /usr/bin/python3'
Dec 06 09:42:42 compute-0 sudo[96525]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:42:42 compute-0 python3[96527]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint radosgw-admin quay.io/ceph/ceph:v19 --fsid 5ecd3f74-dade-5fc4-92ce-8950ae424258 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   user info --uid openstack _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 09:42:42 compute-0 podman[96528]: 2025-12-06 09:42:42.437783489 +0000 UTC m=+0.043319563 container create 0d3d6bdb46ceb7f67e9c3ef521bb909e8ff9497f9f38afc2527db0cea5120f3d (image=quay.io/ceph/ceph:v19, name=silly_noether, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 06 09:42:42 compute-0 systemd[1]: Started libpod-conmon-0d3d6bdb46ceb7f67e9c3ef521bb909e8ff9497f9f38afc2527db0cea5120f3d.scope.
Dec 06 09:42:42 compute-0 podman[96528]: 2025-12-06 09:42:42.420080155 +0000 UTC m=+0.025616249 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 06 09:42:42 compute-0 systemd[1]: Started libcrun container.
Dec 06 09:42:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a2c6db28623a256a659c78dd640bed5d1bdc5318f86bb4efb5d886f330cc9d3/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 09:42:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a2c6db28623a256a659c78dd640bed5d1bdc5318f86bb4efb5d886f330cc9d3/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 09:42:42 compute-0 podman[96528]: 2025-12-06 09:42:42.556231225 +0000 UTC m=+0.161767299 container init 0d3d6bdb46ceb7f67e9c3ef521bb909e8ff9497f9f38afc2527db0cea5120f3d (image=quay.io/ceph/ceph:v19, name=silly_noether, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 09:42:42 compute-0 podman[96528]: 2025-12-06 09:42:42.564416885 +0000 UTC m=+0.169952959 container start 0d3d6bdb46ceb7f67e9c3ef521bb909e8ff9497f9f38afc2527db0cea5120f3d (image=quay.io/ceph/ceph:v19, name=silly_noether, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 09:42:42 compute-0 podman[96528]: 2025-12-06 09:42:42.56869778 +0000 UTC m=+0.174233854 container attach 0d3d6bdb46ceb7f67e9c3ef521bb909e8ff9497f9f38afc2527db0cea5120f3d (image=quay.io/ceph/ceph:v19, name=silly_noether, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 09:42:42 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-keepalived-nfs-cephfs-compute-0-ylrrzf[96493]: Sat Dec  6 09:42:42 2025: (VI_0) Entering MASTER STATE
Dec 06 09:42:42 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:42:42 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9240001c00 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:42:42 compute-0 ceph-osd[82803]: log_channel(cluster) log [DBG] : 7.19 deep-scrub starts
Dec 06 09:42:42 compute-0 ceph-osd[82803]: log_channel(cluster) log [DBG] : 7.19 deep-scrub ok
Dec 06 09:42:43 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v65: 337 pgs: 337 active+clean; 457 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 66 B/s, 1 keys/s, 3 objects/s recovering
Dec 06 09:42:43 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"} v 0)
Dec 06 09:42:43 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"}]: dispatch
Dec 06 09:42:43 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"} v 0)
Dec 06 09:42:43 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]: dispatch
Dec 06 09:42:43 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:42:43 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9248003fe0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:42:43 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e63 do_prune osdmap full prune enabled
Dec 06 09:42:43 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"}]': finished
Dec 06 09:42:43 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]': finished
Dec 06 09:42:43 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e64 e64: 3 total, 3 up, 3 in
Dec 06 09:42:43 compute-0 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e64: 3 total, 3 up, 3 in
Dec 06 09:42:43 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 64 pg[6.b( empty local-lis/les=0/0 n=0 ec=54/21 lis/c=58/58 les/c/f=59/59/0 sis=64) [1] r=0 lpr=64 pi=[58,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 09:42:43 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 64 pg[6.f( empty local-lis/les=0/0 n=0 ec=54/21 lis/c=58/58 les/c/f=59/60/0 sis=64) [1] r=0 lpr=64 pi=[58,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 09:42:43 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 64 pg[6.7( empty local-lis/les=0/0 n=0 ec=54/21 lis/c=58/58 les/c/f=59/59/0 sis=64) [1] r=0 lpr=64 pi=[58,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 09:42:43 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 64 pg[6.3( empty local-lis/les=0/0 n=0 ec=54/21 lis/c=58/58 les/c/f=59/59/0 sis=64) [1] r=0 lpr=64 pi=[58,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 09:42:43 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 64 pg[10.16( v 51'1027 (0'0,51'1027] local-lis/les=58/59 n=5 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=64 pruub=9.789395332s) [0] r=-1 lpr=64 pi=[58,64)/1 crt=51'1027 lcod 0'0 mlcod 0'0 active pruub 195.311187744s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 06 09:42:43 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 64 pg[10.16( v 51'1027 (0'0,51'1027] local-lis/les=58/59 n=5 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=64 pruub=9.789361000s) [0] r=-1 lpr=64 pi=[58,64)/1 crt=51'1027 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 195.311187744s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 09:42:43 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 64 pg[10.2( v 51'1027 (0'0,51'1027] local-lis/les=58/59 n=6 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=64 pruub=9.788603783s) [0] r=-1 lpr=64 pi=[58,64)/1 crt=51'1027 lcod 0'0 mlcod 0'0 active pruub 195.311019897s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 06 09:42:43 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 64 pg[10.2( v 51'1027 (0'0,51'1027] local-lis/les=58/59 n=6 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=64 pruub=9.788549423s) [0] r=-1 lpr=64 pi=[58,64)/1 crt=51'1027 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 195.311019897s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 09:42:43 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 64 pg[10.e( v 51'1027 (0'0,51'1027] local-lis/les=58/59 n=6 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=64 pruub=9.788349152s) [0] r=-1 lpr=64 pi=[58,64)/1 crt=51'1027 lcod 0'0 mlcod 0'0 active pruub 195.311004639s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 06 09:42:43 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 64 pg[10.e( v 51'1027 (0'0,51'1027] local-lis/les=58/59 n=6 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=64 pruub=9.788297653s) [0] r=-1 lpr=64 pi=[58,64)/1 crt=51'1027 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 195.311004639s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 09:42:43 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 64 pg[10.a( v 51'1027 (0'0,51'1027] local-lis/les=58/59 n=6 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=64 pruub=9.787719727s) [0] r=-1 lpr=64 pi=[58,64)/1 crt=51'1027 lcod 0'0 mlcod 0'0 active pruub 195.310928345s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 06 09:42:43 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 64 pg[10.a( v 51'1027 (0'0,51'1027] local-lis/les=58/59 n=6 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=64 pruub=9.787698746s) [0] r=-1 lpr=64 pi=[58,64)/1 crt=51'1027 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 195.310928345s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 09:42:43 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 64 pg[10.6( v 51'1027 (0'0,51'1027] local-lis/les=58/59 n=6 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=64 pruub=9.786976814s) [0] r=-1 lpr=64 pi=[58,64)/1 crt=51'1027 lcod 0'0 mlcod 0'0 active pruub 195.310775757s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 06 09:42:43 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 64 pg[10.6( v 51'1027 (0'0,51'1027] local-lis/les=58/59 n=6 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=64 pruub=9.786928177s) [0] r=-1 lpr=64 pi=[58,64)/1 crt=51'1027 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 195.310775757s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 09:42:43 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 64 pg[10.1a( v 51'1027 (0'0,51'1027] local-lis/les=58/59 n=5 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=64 pruub=9.786486626s) [0] r=-1 lpr=64 pi=[58,64)/1 crt=51'1027 lcod 0'0 mlcod 0'0 active pruub 195.310760498s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 06 09:42:43 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 64 pg[10.1e( v 51'1027 (0'0,51'1027] local-lis/les=58/59 n=5 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=64 pruub=9.786152840s) [0] r=-1 lpr=64 pi=[58,64)/1 crt=51'1027 lcod 0'0 mlcod 0'0 active pruub 195.310668945s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 06 09:42:43 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 64 pg[10.1e( v 51'1027 (0'0,51'1027] local-lis/les=58/59 n=5 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=64 pruub=9.786112785s) [0] r=-1 lpr=64 pi=[58,64)/1 crt=51'1027 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 195.310668945s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 09:42:43 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 64 pg[10.1a( v 51'1027 (0'0,51'1027] local-lis/les=58/59 n=5 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=64 pruub=9.786048889s) [0] r=-1 lpr=64 pi=[58,64)/1 crt=51'1027 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 195.310760498s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 09:42:43 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 64 pg[10.12( v 51'1027 (0'0,51'1027] local-lis/les=58/59 n=6 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=64 pruub=9.785719872s) [0] r=-1 lpr=64 pi=[58,64)/1 crt=51'1027 lcod 0'0 mlcod 0'0 active pruub 195.310623169s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 06 09:42:43 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 64 pg[10.12( v 51'1027 (0'0,51'1027] local-lis/les=58/59 n=6 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=64 pruub=9.785683632s) [0] r=-1 lpr=64 pi=[58,64)/1 crt=51'1027 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 195.310623169s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 09:42:43 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 64 pg[10.1( v 51'1027 (0'0,51'1027] local-lis/les=63/64 n=6 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=63) [2]/[1] async=[2] r=0 lpr=63 pi=[58,63)/1 crt=51'1027 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 09:42:43 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 64 pg[10.7( v 51'1027 (0'0,51'1027] local-lis/les=63/64 n=6 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=63) [2]/[1] async=[2] r=0 lpr=63 pi=[58,63)/1 crt=51'1027 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 09:42:43 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 64 pg[10.11( v 51'1027 (0'0,51'1027] local-lis/les=63/64 n=6 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=63) [2]/[1] async=[2] r=0 lpr=63 pi=[58,63)/1 crt=51'1027 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 09:42:43 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 64 pg[10.5( v 61'1030 (0'0,61'1030] local-lis/les=63/64 n=6 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=63) [2]/[1] async=[2] r=0 lpr=63 pi=[58,63)/1 crt=61'1030 lcod 59'1029 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 09:42:43 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 64 pg[10.19( v 51'1027 (0'0,51'1027] local-lis/les=63/64 n=5 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=63) [2]/[1] async=[2] r=0 lpr=63 pi=[58,63)/1 crt=51'1027 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 09:42:43 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 64 pg[10.1b( v 51'1027 (0'0,51'1027] local-lis/les=63/64 n=5 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=63) [2]/[1] async=[2] r=0 lpr=63 pi=[58,63)/1 crt=51'1027 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 09:42:43 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 64 pg[10.1d( v 51'1027 (0'0,51'1027] local-lis/les=63/64 n=5 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=63) [2]/[1] async=[2] r=0 lpr=63 pi=[58,63)/1 crt=51'1027 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 09:42:43 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 64 pg[10.1f( v 51'1027 (0'0,51'1027] local-lis/les=63/64 n=5 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=63) [2]/[1] async=[2] r=0 lpr=63 pi=[58,63)/1 crt=51'1027 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 09:42:43 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 64 pg[10.9( v 51'1027 (0'0,51'1027] local-lis/les=63/64 n=6 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=63) [2]/[1] async=[2] r=0 lpr=63 pi=[58,63)/1 crt=51'1027 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 09:42:43 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 64 pg[10.3( v 51'1027 (0'0,51'1027] local-lis/les=63/64 n=6 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=63) [2]/[1] async=[2] r=0 lpr=63 pi=[58,63)/1 crt=51'1027 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 09:42:43 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 64 pg[10.d( v 51'1027 (0'0,51'1027] local-lis/les=63/64 n=6 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=63) [2]/[1] async=[2] r=0 lpr=63 pi=[58,63)/1 crt=51'1027 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 09:42:43 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 64 pg[10.f( v 51'1027 (0'0,51'1027] local-lis/les=63/64 n=6 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=63) [2]/[1] async=[2] r=0 lpr=63 pi=[58,63)/1 crt=51'1027 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 09:42:43 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 64 pg[10.15( v 51'1027 (0'0,51'1027] local-lis/les=63/64 n=5 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=63) [2]/[1] async=[2] r=0 lpr=63 pi=[58,63)/1 crt=51'1027 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 09:42:43 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 64 pg[10.13( v 51'1027 (0'0,51'1027] local-lis/les=63/64 n=5 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=63) [2]/[1] async=[2] r=0 lpr=63 pi=[58,63)/1 crt=51'1027 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 09:42:43 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 64 pg[10.17( v 51'1027 (0'0,51'1027] local-lis/les=63/64 n=5 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=63) [2]/[1] async=[2] r=0 lpr=63 pi=[58,63)/1 crt=51'1027 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 09:42:43 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 64 pg[10.b( v 51'1027 (0'0,51'1027] local-lis/les=63/64 n=6 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=63) [2]/[1] async=[2] r=0 lpr=63 pi=[58,63)/1 crt=51'1027 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 09:42:43 compute-0 ceph-mon[74327]: 7.d scrub starts
Dec 06 09:42:43 compute-0 ceph-mon[74327]: 7.d scrub ok
Dec 06 09:42:43 compute-0 ceph-mon[74327]: 8.16 scrub starts
Dec 06 09:42:43 compute-0 ceph-mon[74327]: 8.16 scrub ok
Dec 06 09:42:43 compute-0 ceph-mon[74327]: 9.c deep-scrub starts
Dec 06 09:42:43 compute-0 ceph-mon[74327]: 9.c deep-scrub ok
Dec 06 09:42:43 compute-0 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"}]: dispatch
Dec 06 09:42:43 compute-0 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]: dispatch
Dec 06 09:42:43 compute-0 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"}]': finished
Dec 06 09:42:43 compute-0 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]': finished
Dec 06 09:42:43 compute-0 ceph-mon[74327]: osdmap e64: 3 total, 3 up, 3 in
Dec 06 09:42:43 compute-0 silly_noether[96543]: could not fetch user info: no user info saved
Dec 06 09:42:43 compute-0 systemd[1]: libpod-0d3d6bdb46ceb7f67e9c3ef521bb909e8ff9497f9f38afc2527db0cea5120f3d.scope: Deactivated successfully.
Dec 06 09:42:43 compute-0 conmon[96543]: conmon 0d3d6bdb46ceb7f67e9c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-0d3d6bdb46ceb7f67e9c3ef521bb909e8ff9497f9f38afc2527db0cea5120f3d.scope/container/memory.events
Dec 06 09:42:43 compute-0 podman[96528]: 2025-12-06 09:42:43.338392637 +0000 UTC m=+0.943928721 container died 0d3d6bdb46ceb7f67e9c3ef521bb909e8ff9497f9f38afc2527db0cea5120f3d (image=quay.io/ceph/ceph:v19, name=silly_noether, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 09:42:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-9a2c6db28623a256a659c78dd640bed5d1bdc5318f86bb4efb5d886f330cc9d3-merged.mount: Deactivated successfully.
Dec 06 09:42:43 compute-0 podman[96528]: 2025-12-06 09:42:43.390883385 +0000 UTC m=+0.996419469 container remove 0d3d6bdb46ceb7f67e9c3ef521bb909e8ff9497f9f38afc2527db0cea5120f3d (image=quay.io/ceph/ceph:v19, name=silly_noether, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Dec 06 09:42:43 compute-0 systemd[1]: libpod-conmon-0d3d6bdb46ceb7f67e9c3ef521bb909e8ff9497f9f38afc2527db0cea5120f3d.scope: Deactivated successfully.
Dec 06 09:42:43 compute-0 sudo[96525]: pam_unix(sudo:session): session closed for user root
Dec 06 09:42:43 compute-0 sudo[96664]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bxbzycdabhuthsofilopnpgdjgsmzmhs ; /usr/bin/python3'
Dec 06 09:42:43 compute-0 sudo[96664]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:42:43 compute-0 python3[96666]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint radosgw-admin quay.io/ceph/ceph:v19 --fsid 5ecd3f74-dade-5fc4-92ce-8950ae424258 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   user create --uid="openstack" --display-name "openstack" _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 09:42:43 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec 06 09:42:43 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:42:43 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec 06 09:42:43 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:42:43 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.nfs.cephfs}] v 0)
Dec 06 09:42:43 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:42:43 compute-0 ceph-mgr[74618]: [progress INFO root] complete: finished ev de016995-7e5d-4275-960f-5b2b33bc5989 (Updating ingress.nfs.cephfs deployment (+6 -> 6))
Dec 06 09:42:43 compute-0 ceph-mgr[74618]: [progress INFO root] Completed event de016995-7e5d-4275-960f-5b2b33bc5989 (Updating ingress.nfs.cephfs deployment (+6 -> 6)) in 33 seconds
Dec 06 09:42:43 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.nfs.cephfs}] v 0)
Dec 06 09:42:43 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:42:43 compute-0 ceph-mgr[74618]: [progress INFO root] update: starting ev c64c0c91-5f30-4161-a910-ead2e7fb7a40 (Updating alertmanager deployment (+1 -> 1))
Dec 06 09:42:43 compute-0 ceph-mgr[74618]: [cephadm INFO cephadm.serve] Deploying daemon alertmanager.compute-0 on compute-0
Dec 06 09:42:43 compute-0 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Deploying daemon alertmanager.compute-0 on compute-0
Dec 06 09:42:43 compute-0 podman[96667]: 2025-12-06 09:42:43.855067502 +0000 UTC m=+0.072364792 container create bbe7d9d752ca6b28c786193e748b4599e62ae6cc0b8b6d09bb9a379ecb2618e2 (image=quay.io/ceph/ceph:v19, name=gracious_blackwell, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Dec 06 09:42:43 compute-0 systemd[1]: Started libpod-conmon-bbe7d9d752ca6b28c786193e748b4599e62ae6cc0b8b6d09bb9a379ecb2618e2.scope.
Dec 06 09:42:43 compute-0 podman[96667]: 2025-12-06 09:42:43.829033084 +0000 UTC m=+0.046330414 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 06 09:42:43 compute-0 systemd[1]: Started libcrun container.
Dec 06 09:42:43 compute-0 sudo[96680]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 09:42:43 compute-0 sudo[96680]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:42:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e07861bd24af8f284c0de9b8507b945a64416f6028d60acd26b2f9e8825ddd79/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 09:42:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e07861bd24af8f284c0de9b8507b945a64416f6028d60acd26b2f9e8825ddd79/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 09:42:43 compute-0 sudo[96680]: pam_unix(sudo:session): session closed for user root
Dec 06 09:42:43 compute-0 podman[96667]: 2025-12-06 09:42:43.956711957 +0000 UTC m=+0.174009277 container init bbe7d9d752ca6b28c786193e748b4599e62ae6cc0b8b6d09bb9a379ecb2618e2 (image=quay.io/ceph/ceph:v19, name=gracious_blackwell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 06 09:42:43 compute-0 podman[96667]: 2025-12-06 09:42:43.969293305 +0000 UTC m=+0.186590595 container start bbe7d9d752ca6b28c786193e748b4599e62ae6cc0b8b6d09bb9a379ecb2618e2 (image=quay.io/ceph/ceph:v19, name=gracious_blackwell, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Dec 06 09:42:43 compute-0 podman[96667]: 2025-12-06 09:42:43.974798513 +0000 UTC m=+0.192095833 container attach bbe7d9d752ca6b28c786193e748b4599e62ae6cc0b8b6d09bb9a379ecb2618e2 (image=quay.io/ceph/ceph:v19, name=gracious_blackwell, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 06 09:42:44 compute-0 sudo[96710]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/prometheus/alertmanager:v0.25.0 --timeout 895 _orch deploy --fsid 5ecd3f74-dade-5fc4-92ce-8950ae424258
Dec 06 09:42:44 compute-0 sudo[96710]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:42:44 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:42:44 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9228003c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:42:44 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e64 do_prune osdmap full prune enabled
Dec 06 09:42:44 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e65 e65: 3 total, 3 up, 3 in
Dec 06 09:42:44 compute-0 ceph-mgr[74618]: [progress INFO root] Writing back 23 completed events
Dec 06 09:42:44 compute-0 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e65: 3 total, 3 up, 3 in
Dec 06 09:42:44 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Dec 06 09:42:44 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 65 pg[10.15( v 51'1027 (0'0,51'1027] local-lis/les=63/64 n=5 ec=58/45 lis/c=63/58 les/c/f=64/59/0 sis=65 pruub=15.009078026s) [2] async=[2] r=-1 lpr=65 pi=[58,65)/1 crt=51'1027 lcod 0'0 mlcod 0'0 active pruub 201.531784058s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 06 09:42:44 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 65 pg[10.15( v 51'1027 (0'0,51'1027] local-lis/les=63/64 n=5 ec=58/45 lis/c=63/58 les/c/f=64/59/0 sis=65 pruub=15.008981705s) [2] r=-1 lpr=65 pi=[58,65)/1 crt=51'1027 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 201.531784058s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 09:42:44 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 65 pg[10.13( v 51'1027 (0'0,51'1027] local-lis/les=63/64 n=5 ec=58/45 lis/c=63/58 les/c/f=64/59/0 sis=65 pruub=15.009040833s) [2] async=[2] r=-1 lpr=65 pi=[58,65)/1 crt=51'1027 lcod 0'0 mlcod 0'0 active pruub 201.531906128s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 06 09:42:44 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 65 pg[10.13( v 51'1027 (0'0,51'1027] local-lis/les=63/64 n=5 ec=58/45 lis/c=63/58 les/c/f=64/59/0 sis=65 pruub=15.008938789s) [2] r=-1 lpr=65 pi=[58,65)/1 crt=51'1027 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 201.531906128s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 09:42:44 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 65 pg[10.16( v 51'1027 (0'0,51'1027] local-lis/les=58/59 n=5 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=65) [0]/[1] r=0 lpr=65 pi=[58,65)/1 crt=51'1027 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 06 09:42:44 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 65 pg[10.16( v 51'1027 (0'0,51'1027] local-lis/les=58/59 n=5 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=65) [0]/[1] r=0 lpr=65 pi=[58,65)/1 crt=51'1027 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 06 09:42:44 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 65 pg[10.17( v 51'1027 (0'0,51'1027] local-lis/les=63/64 n=5 ec=58/45 lis/c=63/58 les/c/f=64/59/0 sis=65 pruub=15.009285927s) [2] async=[2] r=-1 lpr=65 pi=[58,65)/1 crt=51'1027 lcod 0'0 mlcod 0'0 active pruub 201.531967163s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 06 09:42:44 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 65 pg[10.2( v 51'1027 (0'0,51'1027] local-lis/les=58/59 n=6 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=65) [0]/[1] r=0 lpr=65 pi=[58,65)/1 crt=51'1027 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 06 09:42:44 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 65 pg[10.2( v 51'1027 (0'0,51'1027] local-lis/les=58/59 n=6 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=65) [0]/[1] r=0 lpr=65 pi=[58,65)/1 crt=51'1027 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 06 09:42:44 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 65 pg[10.f( v 51'1027 (0'0,51'1027] local-lis/les=63/64 n=6 ec=58/45 lis/c=63/58 les/c/f=64/59/0 sis=65 pruub=15.008082390s) [2] async=[2] r=-1 lpr=65 pi=[58,65)/1 crt=51'1027 lcod 0'0 mlcod 0'0 active pruub 201.531616211s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 06 09:42:44 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 65 pg[10.f( v 51'1027 (0'0,51'1027] local-lis/les=63/64 n=6 ec=58/45 lis/c=63/58 les/c/f=64/59/0 sis=65 pruub=15.008004189s) [2] r=-1 lpr=65 pi=[58,65)/1 crt=51'1027 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 201.531616211s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 09:42:44 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 65 pg[10.17( v 51'1027 (0'0,51'1027] local-lis/les=63/64 n=5 ec=58/45 lis/c=63/58 les/c/f=64/59/0 sis=65 pruub=15.008171082s) [2] r=-1 lpr=65 pi=[58,65)/1 crt=51'1027 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 201.531967163s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 09:42:44 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 65 pg[10.e( v 51'1027 (0'0,51'1027] local-lis/les=58/59 n=6 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=65) [0]/[1] r=0 lpr=65 pi=[58,65)/1 crt=51'1027 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 06 09:42:44 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 65 pg[10.a( v 51'1027 (0'0,51'1027] local-lis/les=58/59 n=6 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=65) [0]/[1] r=0 lpr=65 pi=[58,65)/1 crt=51'1027 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 06 09:42:44 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 65 pg[10.a( v 51'1027 (0'0,51'1027] local-lis/les=58/59 n=6 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=65) [0]/[1] r=0 lpr=65 pi=[58,65)/1 crt=51'1027 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 06 09:42:44 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 65 pg[10.e( v 51'1027 (0'0,51'1027] local-lis/les=58/59 n=6 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=65) [0]/[1] r=0 lpr=65 pi=[58,65)/1 crt=51'1027 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 06 09:42:44 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 65 pg[10.d( v 51'1027 (0'0,51'1027] local-lis/les=63/64 n=6 ec=58/45 lis/c=63/58 les/c/f=64/59/0 sis=65 pruub=15.007225990s) [2] async=[2] r=-1 lpr=65 pi=[58,65)/1 crt=51'1027 lcod 0'0 mlcod 0'0 active pruub 201.531600952s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 06 09:42:44 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 65 pg[10.d( v 51'1027 (0'0,51'1027] local-lis/les=63/64 n=6 ec=58/45 lis/c=63/58 les/c/f=64/59/0 sis=65 pruub=15.007121086s) [2] r=-1 lpr=65 pi=[58,65)/1 crt=51'1027 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 201.531600952s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 09:42:44 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 65 pg[10.b( v 51'1027 (0'0,51'1027] local-lis/les=63/64 n=6 ec=58/45 lis/c=63/58 les/c/f=64/59/0 sis=65 pruub=15.007177353s) [2] async=[2] r=-1 lpr=65 pi=[58,65)/1 crt=51'1027 lcod 0'0 mlcod 0'0 active pruub 201.531723022s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 06 09:42:44 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 65 pg[10.b( v 51'1027 (0'0,51'1027] local-lis/les=63/64 n=6 ec=58/45 lis/c=63/58 les/c/f=64/59/0 sis=65 pruub=15.006881714s) [2] r=-1 lpr=65 pi=[58,65)/1 crt=51'1027 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 201.531723022s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 09:42:44 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 65 pg[10.5( v 64'1034 (0'0,64'1034] local-lis/les=63/64 n=6 ec=58/45 lis/c=63/58 les/c/f=64/59/0 sis=65 pruub=15.002076149s) [2] async=[2] r=-1 lpr=65 pi=[58,65)/1 crt=61'1030 lcod 64'1033 mlcod 64'1033 active pruub 201.527008057s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 06 09:42:44 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 65 pg[10.5( v 64'1034 (0'0,64'1034] local-lis/les=63/64 n=6 ec=58/45 lis/c=63/58 les/c/f=64/59/0 sis=65 pruub=15.001951218s) [2] r=-1 lpr=65 pi=[58,65)/1 crt=61'1030 lcod 64'1033 mlcod 0'0 unknown NOTIFY pruub 201.527008057s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 09:42:44 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 65 pg[10.6( v 51'1027 (0'0,51'1027] local-lis/les=58/59 n=6 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=65) [0]/[1] r=0 lpr=65 pi=[58,65)/1 crt=51'1027 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 06 09:42:44 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 65 pg[10.6( v 51'1027 (0'0,51'1027] local-lis/les=58/59 n=6 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=65) [0]/[1] r=0 lpr=65 pi=[58,65)/1 crt=51'1027 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 06 09:42:44 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 65 pg[10.19( v 51'1027 (0'0,51'1027] local-lis/les=63/64 n=5 ec=58/45 lis/c=63/58 les/c/f=64/59/0 sis=65 pruub=15.001643181s) [2] async=[2] r=-1 lpr=65 pi=[58,65)/1 crt=51'1027 lcod 0'0 mlcod 0'0 active pruub 201.527099609s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 06 09:42:44 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 65 pg[10.19( v 51'1027 (0'0,51'1027] local-lis/les=63/64 n=5 ec=58/45 lis/c=63/58 les/c/f=64/59/0 sis=65 pruub=15.001572609s) [2] r=-1 lpr=65 pi=[58,65)/1 crt=51'1027 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 201.527099609s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 09:42:44 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 65 pg[10.1a( v 51'1027 (0'0,51'1027] local-lis/les=58/59 n=5 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=65) [0]/[1] r=0 lpr=65 pi=[58,65)/1 crt=51'1027 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 06 09:42:44 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 65 pg[10.1a( v 51'1027 (0'0,51'1027] local-lis/les=58/59 n=5 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=65) [0]/[1] r=0 lpr=65 pi=[58,65)/1 crt=51'1027 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 06 09:42:44 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 65 pg[10.3( v 51'1027 (0'0,51'1027] local-lis/les=63/64 n=6 ec=58/45 lis/c=63/58 les/c/f=64/59/0 sis=65 pruub=15.002022743s) [2] async=[2] r=-1 lpr=65 pi=[58,65)/1 crt=51'1027 lcod 0'0 mlcod 0'0 active pruub 201.527618408s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 06 09:42:44 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 65 pg[10.1d( v 51'1027 (0'0,51'1027] local-lis/les=63/64 n=5 ec=58/45 lis/c=63/58 les/c/f=64/59/0 sis=65 pruub=15.000745773s) [2] async=[2] r=-1 lpr=65 pi=[58,65)/1 crt=51'1027 lcod 0'0 mlcod 0'0 active pruub 201.526901245s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 06 09:42:44 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 65 pg[10.3( v 51'1027 (0'0,51'1027] local-lis/les=63/64 n=6 ec=58/45 lis/c=63/58 les/c/f=64/59/0 sis=65 pruub=15.001730919s) [2] r=-1 lpr=65 pi=[58,65)/1 crt=51'1027 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 201.527618408s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 09:42:44 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 65 pg[10.1e( v 51'1027 (0'0,51'1027] local-lis/les=58/59 n=5 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=65) [0]/[1] r=0 lpr=65 pi=[58,65)/1 crt=51'1027 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 06 09:42:44 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 65 pg[10.1d( v 51'1027 (0'0,51'1027] local-lis/les=63/64 n=5 ec=58/45 lis/c=63/58 les/c/f=64/59/0 sis=65 pruub=15.000676155s) [2] r=-1 lpr=65 pi=[58,65)/1 crt=51'1027 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 201.526901245s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 09:42:44 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 65 pg[10.1e( v 51'1027 (0'0,51'1027] local-lis/les=58/59 n=5 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=65) [0]/[1] r=0 lpr=65 pi=[58,65)/1 crt=51'1027 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 06 09:42:44 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 65 pg[10.12( v 51'1027 (0'0,51'1027] local-lis/les=58/59 n=6 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=65) [0]/[1] r=0 lpr=65 pi=[58,65)/1 crt=51'1027 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 06 09:42:44 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 65 pg[10.12( v 51'1027 (0'0,51'1027] local-lis/les=58/59 n=6 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=65) [0]/[1] r=0 lpr=65 pi=[58,65)/1 crt=51'1027 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 06 09:42:44 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 65 pg[10.9( v 51'1027 (0'0,51'1027] local-lis/les=63/64 n=6 ec=58/45 lis/c=63/58 les/c/f=64/59/0 sis=65 pruub=15.000605583s) [2] async=[2] r=-1 lpr=65 pi=[58,65)/1 crt=51'1027 lcod 0'0 mlcod 0'0 active pruub 201.527404785s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 06 09:42:44 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 65 pg[10.1( v 51'1027 (0'0,51'1027] local-lis/les=63/64 n=6 ec=58/45 lis/c=63/58 les/c/f=64/59/0 sis=65 pruub=14.998142242s) [2] async=[2] r=-1 lpr=65 pi=[58,65)/1 crt=51'1027 lcod 0'0 mlcod 0'0 active pruub 201.525177002s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 06 09:42:44 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 65 pg[10.9( v 51'1027 (0'0,51'1027] local-lis/les=63/64 n=6 ec=58/45 lis/c=63/58 les/c/f=64/59/0 sis=65 pruub=15.000528336s) [2] r=-1 lpr=65 pi=[58,65)/1 crt=51'1027 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 201.527404785s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 09:42:44 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 65 pg[10.1( v 51'1027 (0'0,51'1027] local-lis/les=63/64 n=6 ec=58/45 lis/c=63/58 les/c/f=64/59/0 sis=65 pruub=14.998067856s) [2] r=-1 lpr=65 pi=[58,65)/1 crt=51'1027 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 201.525177002s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 09:42:44 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 65 pg[10.1b( v 51'1027 (0'0,51'1027] local-lis/les=63/64 n=5 ec=58/45 lis/c=63/58 les/c/f=64/59/0 sis=65 pruub=14.999908447s) [2] async=[2] r=-1 lpr=65 pi=[58,65)/1 crt=51'1027 lcod 0'0 mlcod 0'0 active pruub 201.527114868s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 06 09:42:44 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 65 pg[10.1b( v 51'1027 (0'0,51'1027] local-lis/les=63/64 n=5 ec=58/45 lis/c=63/58 les/c/f=64/59/0 sis=65 pruub=14.999868393s) [2] r=-1 lpr=65 pi=[58,65)/1 crt=51'1027 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 201.527114868s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 09:42:44 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 65 pg[10.11( v 51'1027 (0'0,51'1027] local-lis/les=63/64 n=6 ec=58/45 lis/c=63/58 les/c/f=64/59/0 sis=65 pruub=14.999295235s) [2] async=[2] r=-1 lpr=65 pi=[58,65)/1 crt=51'1027 lcod 0'0 mlcod 0'0 active pruub 201.526809692s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 06 09:42:44 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 65 pg[10.11( v 51'1027 (0'0,51'1027] local-lis/les=63/64 n=6 ec=58/45 lis/c=63/58 les/c/f=64/59/0 sis=65 pruub=14.999224663s) [2] r=-1 lpr=65 pi=[58,65)/1 crt=51'1027 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 201.526809692s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 09:42:44 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 65 pg[10.1f( v 51'1027 (0'0,51'1027] local-lis/les=63/64 n=5 ec=58/45 lis/c=63/58 les/c/f=64/59/0 sis=65 pruub=14.999085426s) [2] async=[2] r=-1 lpr=65 pi=[58,65)/1 crt=51'1027 lcod 0'0 mlcod 0'0 active pruub 201.527191162s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 06 09:42:44 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 65 pg[10.7( v 51'1027 (0'0,51'1027] local-lis/les=63/64 n=6 ec=58/45 lis/c=63/58 les/c/f=64/59/0 sis=65 pruub=14.998382568s) [2] async=[2] r=-1 lpr=65 pi=[58,65)/1 crt=51'1027 lcod 0'0 mlcod 0'0 active pruub 201.526733398s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 06 09:42:44 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 65 pg[10.1f( v 51'1027 (0'0,51'1027] local-lis/les=63/64 n=5 ec=58/45 lis/c=63/58 les/c/f=64/59/0 sis=65 pruub=14.998795509s) [2] r=-1 lpr=65 pi=[58,65)/1 crt=51'1027 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 201.527191162s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 09:42:44 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 65 pg[10.7( v 51'1027 (0'0,51'1027] local-lis/les=63/64 n=6 ec=58/45 lis/c=63/58 les/c/f=64/59/0 sis=65 pruub=14.998186111s) [2] r=-1 lpr=65 pi=[58,65)/1 crt=51'1027 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 201.526733398s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 09:42:44 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 65 pg[6.3( v 50'39 lc 0'0 (0'0,50'39] local-lis/les=64/65 n=2 ec=54/21 lis/c=58/58 les/c/f=59/59/0 sis=64) [1] r=0 lpr=64 pi=[58,64)/1 crt=50'39 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 09:42:44 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 65 pg[6.7( v 50'39 lc 48'20 (0'0,50'39] local-lis/les=64/65 n=1 ec=54/21 lis/c=58/58 les/c/f=59/59/0 sis=64) [1] r=0 lpr=64 pi=[58,64)/1 crt=50'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 09:42:44 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 65 pg[6.f( v 50'39 lc 48'1 (0'0,50'39] local-lis/les=64/65 n=3 ec=54/21 lis/c=58/58 les/c/f=59/60/0 sis=64) [1] r=0 lpr=64 pi=[58,64)/1 crt=50'39 lcod 0'0 mlcod 0'0 active+degraded m=3 mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 09:42:44 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 65 pg[6.b( v 50'39 lc 0'0 (0'0,50'39] local-lis/les=64/65 n=1 ec=54/21 lis/c=58/58 les/c/f=59/59/0 sis=64) [1] r=0 lpr=64 pi=[58,64)/1 crt=50'39 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
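
The burst above is osd.1 re-peering its placement groups after the epoch-65 map change: PGs whose new acting set dropped osd.1 (r=-1) go state<Start> -> Stray, while PGs where osd.1 became acting primary ([0]/[1], r=0) go state<Start> -> Primary through PeeringState::start_peering_interval, and the pool 6 PGs finish activating. A minimal watcher for this settling process, assuming a reachable cluster, the ceph CLI in PATH, and the pgmap JSON layout of recent releases (pgs_by_state under pgmap in `ceph status -f json`):

    #!/usr/bin/env python3
    # Poll cluster status until every PG reports active+clean, mirroring the
    # "337 pgs: 337 active+clean" pgmap lines in this log.
    import json
    import subprocess
    import time

    def pg_states():
        out = subprocess.check_output(["ceph", "status", "-f", "json"])
        pgmap = json.loads(out)["pgmap"]
        states = {s["state_name"]: s["count"] for s in pgmap["pgs_by_state"]}
        return pgmap["num_pgs"], states

    while True:
        total, states = pg_states()
        print(states)
        if states.get("active+clean", 0) == total:
            break
        time.sleep(5)  # the peering intervals above settle within seconds
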
Dec 06 09:42:44 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:42:44 compute-0 ceph-mgr[74618]: [progress INFO root] Completed event 40220b6f-6097-4335-9da7-9e13df932a5c (Global Recovery Event) in 10 seconds
Dec 06 09:42:44 compute-0 ceph-mon[74327]: 7.19 deep-scrub starts
Dec 06 09:42:44 compute-0 ceph-mon[74327]: 7.19 deep-scrub ok
Dec 06 09:42:44 compute-0 ceph-mon[74327]: pgmap v65: 337 pgs: 337 active+clean; 457 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 66 B/s, 1 keys/s, 3 objects/s recovering
Dec 06 09:42:44 compute-0 ceph-mon[74327]: 9.0 scrub starts
Dec 06 09:42:44 compute-0 ceph-mon[74327]: 9.0 scrub ok
Dec 06 09:42:44 compute-0 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:42:44 compute-0 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:42:44 compute-0 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:42:44 compute-0 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:42:44 compute-0 ceph-mon[74327]: osdmap e65: 3 total, 3 up, 3 in
Dec 06 09:42:44 compute-0 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:42:44 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:42:44 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9230003870 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:42:45 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v68: 337 pgs: 8 remapped+peering, 16 peering, 1 active+recovering, 312 active+clean; 458 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 3.7 KiB/s rd, 3 op/s; 1/226 objects misplaced (0.442%); 722 B/s, 2 keys/s, 23 objects/s recovering
Dec 06 09:42:45 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:42:45 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9240001c00 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:42:45 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e65 do_prune osdmap full prune enabled
Dec 06 09:42:45 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e66 e66: 3 total, 3 up, 3 in
Dec 06 09:42:45 compute-0 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e66: 3 total, 3 up, 3 in
Dec 06 09:42:45 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 66 pg[10.1a( v 51'1027 (0'0,51'1027] local-lis/les=65/66 n=5 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=65) [0]/[1] async=[0] r=0 lpr=65 pi=[58,65)/1 crt=51'1027 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 09:42:45 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 66 pg[10.12( v 51'1027 (0'0,51'1027] local-lis/les=65/66 n=6 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=65) [0]/[1] async=[0] r=0 lpr=65 pi=[58,65)/1 crt=51'1027 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 09:42:45 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 66 pg[10.6( v 51'1027 (0'0,51'1027] local-lis/les=65/66 n=6 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=65) [0]/[1] async=[0] r=0 lpr=65 pi=[58,65)/1 crt=51'1027 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 09:42:45 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 66 pg[10.16( v 51'1027 (0'0,51'1027] local-lis/les=65/66 n=5 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=65) [0]/[1] async=[0] r=0 lpr=65 pi=[58,65)/1 crt=51'1027 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 09:42:45 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 66 pg[10.e( v 51'1027 (0'0,51'1027] local-lis/les=65/66 n=6 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=65) [0]/[1] async=[0] r=0 lpr=65 pi=[58,65)/1 crt=51'1027 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 09:42:45 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 66 pg[10.a( v 51'1027 (0'0,51'1027] local-lis/les=65/66 n=6 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=65) [0]/[1] async=[0] r=0 lpr=65 pi=[58,65)/1 crt=51'1027 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 09:42:45 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 66 pg[10.2( v 51'1027 (0'0,51'1027] local-lis/les=65/66 n=6 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=65) [0]/[1] async=[0] r=0 lpr=65 pi=[58,65)/1 crt=51'1027 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 09:42:45 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 66 pg[10.1e( v 51'1027 (0'0,51'1027] local-lis/les=65/66 n=5 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=65) [0]/[1] async=[0] r=0 lpr=65 pi=[58,65)/1 crt=51'1027 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 09:42:45 compute-0 ceph-mon[74327]: Deploying daemon alertmanager.compute-0 on compute-0
Dec 06 09:42:45 compute-0 ceph-mon[74327]: 9.1 deep-scrub starts
Dec 06 09:42:45 compute-0 ceph-mon[74327]: 9.1 deep-scrub ok
Dec 06 09:42:45 compute-0 ceph-mon[74327]: osdmap e66: 3 total, 3 up, 3 in
Dec 06 09:42:45 compute-0 gracious_blackwell[96705]: {
Dec 06 09:42:45 compute-0 gracious_blackwell[96705]:     "user_id": "openstack",
Dec 06 09:42:45 compute-0 gracious_blackwell[96705]:     "display_name": "openstack",
Dec 06 09:42:45 compute-0 gracious_blackwell[96705]:     "email": "",
Dec 06 09:42:45 compute-0 gracious_blackwell[96705]:     "suspended": 0,
Dec 06 09:42:45 compute-0 gracious_blackwell[96705]:     "max_buckets": 1000,
Dec 06 09:42:45 compute-0 gracious_blackwell[96705]:     "subusers": [],
Dec 06 09:42:45 compute-0 gracious_blackwell[96705]:     "keys": [
Dec 06 09:42:45 compute-0 gracious_blackwell[96705]:         {
Dec 06 09:42:45 compute-0 gracious_blackwell[96705]:             "user": "openstack",
Dec 06 09:42:45 compute-0 gracious_blackwell[96705]:             "access_key": "Y0BEIM7RZZC67P1B4QTT",
Dec 06 09:42:45 compute-0 gracious_blackwell[96705]:             "secret_key": "QWkZChaKG8LtAwCXnQ83vi9JO4rkOzAfCx5grxQK",
Dec 06 09:42:45 compute-0 gracious_blackwell[96705]:             "active": true,
Dec 06 09:42:45 compute-0 gracious_blackwell[96705]:             "create_date": "2025-12-06T09:42:45.291408Z"
Dec 06 09:42:45 compute-0 gracious_blackwell[96705]:         }
Dec 06 09:42:45 compute-0 gracious_blackwell[96705]:     ],
Dec 06 09:42:45 compute-0 gracious_blackwell[96705]:     "swift_keys": [],
Dec 06 09:42:45 compute-0 gracious_blackwell[96705]:     "caps": [],
Dec 06 09:42:45 compute-0 gracious_blackwell[96705]:     "op_mask": "read, write, delete",
Dec 06 09:42:45 compute-0 gracious_blackwell[96705]:     "default_placement": "",
Dec 06 09:42:45 compute-0 gracious_blackwell[96705]:     "default_storage_class": "",
Dec 06 09:42:45 compute-0 gracious_blackwell[96705]:     "placement_tags": [],
Dec 06 09:42:45 compute-0 gracious_blackwell[96705]:     "bucket_quota": {
Dec 06 09:42:45 compute-0 gracious_blackwell[96705]:         "enabled": false,
Dec 06 09:42:45 compute-0 gracious_blackwell[96705]:         "check_on_raw": false,
Dec 06 09:42:45 compute-0 gracious_blackwell[96705]:         "max_size": -1,
Dec 06 09:42:45 compute-0 gracious_blackwell[96705]:         "max_size_kb": 0,
Dec 06 09:42:45 compute-0 gracious_blackwell[96705]:         "max_objects": -1
Dec 06 09:42:45 compute-0 gracious_blackwell[96705]:     },
Dec 06 09:42:45 compute-0 gracious_blackwell[96705]:     "user_quota": {
Dec 06 09:42:45 compute-0 gracious_blackwell[96705]:         "enabled": false,
Dec 06 09:42:45 compute-0 gracious_blackwell[96705]:         "check_on_raw": false,
Dec 06 09:42:45 compute-0 gracious_blackwell[96705]:         "max_size": -1,
Dec 06 09:42:45 compute-0 gracious_blackwell[96705]:         "max_size_kb": 0,
Dec 06 09:42:45 compute-0 gracious_blackwell[96705]:         "max_objects": -1
Dec 06 09:42:45 compute-0 gracious_blackwell[96705]:     },
Dec 06 09:42:45 compute-0 gracious_blackwell[96705]:     "temp_url_keys": [],
Dec 06 09:42:45 compute-0 gracious_blackwell[96705]:     "type": "rgw",
Dec 06 09:42:45 compute-0 gracious_blackwell[96705]:     "mfa_ids": [],
Dec 06 09:42:45 compute-0 gracious_blackwell[96705]:     "account_id": "",
Dec 06 09:42:45 compute-0 gracious_blackwell[96705]:     "path": "/",
Dec 06 09:42:45 compute-0 gracious_blackwell[96705]:     "create_date": "2025-12-06T09:42:45.290435Z",
Dec 06 09:42:45 compute-0 gracious_blackwell[96705]:     "tags": [],
Dec 06 09:42:45 compute-0 gracious_blackwell[96705]:     "group_ids": []
Dec 06 09:42:45 compute-0 gracious_blackwell[96705]: }
Dec 06 09:42:45 compute-0 gracious_blackwell[96705]: 
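
The JSON block printed by the gracious_blackwell container matches the document radosgw-admin emits for user operations (note the Squid-era account_id and group_ids fields and the per-key active flag), so this is very likely an RGW user create/info call for the "openstack" user, with the S3 credential pair under keys[]. A minimal consumer of such a document, assuming it arrives on stdin; the pipeline in the comment is an illustration, not the command the container actually ran:

    #!/usr/bin/env python3
    # Pull the active S3 credential pair out of radosgw-admin user JSON,
    # e.g.:  radosgw-admin user info --uid openstack | ./extract_keys.py
    import json
    import sys

    user = json.load(sys.stdin)
    key = next(k for k in user["keys"] if k.get("active", True))
    print("access_key:", key["access_key"])
    print("secret_key:", key["secret_key"])
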
Dec 06 09:42:45 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e66 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 09:42:45 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e66 do_prune osdmap full prune enabled
Dec 06 09:42:45 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e67 e67: 3 total, 3 up, 3 in
Dec 06 09:42:45 compute-0 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e67: 3 total, 3 up, 3 in
Dec 06 09:42:45 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 67 pg[10.1a( v 51'1027 (0'0,51'1027] local-lis/les=65/66 n=5 ec=58/45 lis/c=65/58 les/c/f=66/59/0 sis=67 pruub=15.884645462s) [0] async=[0] r=-1 lpr=67 pi=[58,67)/1 crt=51'1027 lcod 0'0 mlcod 0'0 active pruub 203.544784546s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 06 09:42:45 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 67 pg[10.1a( v 51'1027 (0'0,51'1027] local-lis/les=65/66 n=5 ec=58/45 lis/c=65/58 les/c/f=66/59/0 sis=67 pruub=15.884576797s) [0] r=-1 lpr=67 pi=[58,67)/1 crt=51'1027 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 203.544784546s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 09:42:45 compute-0 systemd[1]: libpod-bbe7d9d752ca6b28c786193e748b4599e62ae6cc0b8b6d09bb9a379ecb2618e2.scope: Deactivated successfully.
Dec 06 09:42:45 compute-0 podman[96667]: 2025-12-06 09:42:45.967743441 +0000 UTC m=+2.185040721 container died bbe7d9d752ca6b28c786193e748b4599e62ae6cc0b8b6d09bb9a379ecb2618e2 (image=quay.io/ceph/ceph:v19, name=gracious_blackwell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_REF=squid)
Dec 06 09:42:46 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:42:46 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9248003fe0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:42:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-e07861bd24af8f284c0de9b8507b945a64416f6028d60acd26b2f9e8825ddd79-merged.mount: Deactivated successfully.
Dec 06 09:42:46 compute-0 podman[96667]: 2025-12-06 09:42:46.293664819 +0000 UTC m=+2.510962099 container remove bbe7d9d752ca6b28c786193e748b4599e62ae6cc0b8b6d09bb9a379ecb2618e2 (image=quay.io/ceph/ceph:v19, name=gracious_blackwell, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 09:42:46 compute-0 systemd[1]: libpod-conmon-bbe7d9d752ca6b28c786193e748b4599e62ae6cc0b8b6d09bb9a379ecb2618e2.scope: Deactivated successfully.
Dec 06 09:42:46 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e67 do_prune osdmap full prune enabled
Dec 06 09:42:46 compute-0 sudo[96664]: pam_unix(sudo:session): session closed for user root
Dec 06 09:42:46 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e68 e68: 3 total, 3 up, 3 in
Dec 06 09:42:46 compute-0 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e68: 3 total, 3 up, 3 in
Dec 06 09:42:46 compute-0 ceph-mon[74327]: pgmap v68: 337 pgs: 8 remapped+peering, 16 peering, 1 active+recovering, 312 active+clean; 458 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 3.7 KiB/s rd, 3 op/s; 1/226 objects misplaced (0.442%); 722 B/s, 2 keys/s, 23 objects/s recovering
Dec 06 09:42:46 compute-0 ceph-mon[74327]: osdmap e67: 3 total, 3 up, 3 in
Dec 06 09:42:46 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 68 pg[10.6( v 51'1027 (0'0,51'1027] local-lis/les=65/66 n=6 ec=58/45 lis/c=65/58 les/c/f=66/59/0 sis=68 pruub=14.878108978s) [0] async=[0] r=-1 lpr=68 pi=[58,68)/1 crt=51'1027 lcod 0'0 mlcod 0'0 active pruub 203.548782349s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 06 09:42:46 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 68 pg[10.12( v 51'1027 (0'0,51'1027] local-lis/les=65/66 n=6 ec=58/45 lis/c=65/58 les/c/f=66/59/0 sis=68 pruub=14.877959251s) [0] async=[0] r=-1 lpr=68 pi=[58,68)/1 crt=51'1027 lcod 0'0 mlcod 0'0 active pruub 203.548721313s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 06 09:42:46 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 68 pg[10.a( v 51'1027 (0'0,51'1027] local-lis/les=65/66 n=6 ec=58/45 lis/c=65/58 les/c/f=66/59/0 sis=68 pruub=14.878348351s) [0] async=[0] r=-1 lpr=68 pi=[58,68)/1 crt=51'1027 lcod 0'0 mlcod 0'0 active pruub 203.549118042s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 06 09:42:46 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 68 pg[10.6( v 51'1027 (0'0,51'1027] local-lis/les=65/66 n=6 ec=58/45 lis/c=65/58 les/c/f=66/59/0 sis=68 pruub=14.878028870s) [0] r=-1 lpr=68 pi=[58,68)/1 crt=51'1027 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 203.548782349s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 09:42:46 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 68 pg[10.1e( v 51'1027 (0'0,51'1027] local-lis/les=65/66 n=5 ec=58/45 lis/c=65/58 les/c/f=66/59/0 sis=68 pruub=14.878499985s) [0] async=[0] r=-1 lpr=68 pi=[58,68)/1 crt=51'1027 lcod 0'0 mlcod 0'0 active pruub 203.549407959s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 06 09:42:46 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 68 pg[10.12( v 51'1027 (0'0,51'1027] local-lis/les=65/66 n=6 ec=58/45 lis/c=65/58 les/c/f=66/59/0 sis=68 pruub=14.877865791s) [0] r=-1 lpr=68 pi=[58,68)/1 crt=51'1027 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 203.548721313s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 09:42:46 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 68 pg[10.1e( v 51'1027 (0'0,51'1027] local-lis/les=65/66 n=5 ec=58/45 lis/c=65/58 les/c/f=66/59/0 sis=68 pruub=14.878448486s) [0] r=-1 lpr=68 pi=[58,68)/1 crt=51'1027 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 203.549407959s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 09:42:46 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 68 pg[10.e( v 51'1027 (0'0,51'1027] local-lis/les=65/66 n=6 ec=58/45 lis/c=65/58 les/c/f=66/59/0 sis=68 pruub=14.878067017s) [0] async=[0] r=-1 lpr=68 pi=[58,68)/1 crt=51'1027 lcod 0'0 mlcod 0'0 active pruub 203.549087524s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 06 09:42:46 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 68 pg[10.e( v 51'1027 (0'0,51'1027] local-lis/les=65/66 n=6 ec=58/45 lis/c=65/58 les/c/f=66/59/0 sis=68 pruub=14.878017426s) [0] r=-1 lpr=68 pi=[58,68)/1 crt=51'1027 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 203.549087524s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 09:42:46 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 68 pg[10.a( v 51'1027 (0'0,51'1027] local-lis/les=65/66 n=6 ec=58/45 lis/c=65/58 les/c/f=66/59/0 sis=68 pruub=14.877964020s) [0] r=-1 lpr=68 pi=[58,68)/1 crt=51'1027 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 203.549118042s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 09:42:46 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 68 pg[10.2( v 51'1027 (0'0,51'1027] local-lis/les=65/66 n=6 ec=58/45 lis/c=65/58 les/c/f=66/59/0 sis=68 pruub=14.877978325s) [0] async=[0] r=-1 lpr=68 pi=[58,68)/1 crt=51'1027 lcod 0'0 mlcod 0'0 active pruub 203.549209595s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 06 09:42:46 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 68 pg[10.16( v 51'1027 (0'0,51'1027] local-lis/les=65/66 n=5 ec=58/45 lis/c=65/58 les/c/f=66/59/0 sis=68 pruub=14.877589226s) [0] async=[0] r=-1 lpr=68 pi=[58,68)/1 crt=51'1027 lcod 0'0 mlcod 0'0 active pruub 203.548934937s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 06 09:42:46 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 68 pg[10.2( v 51'1027 (0'0,51'1027] local-lis/les=65/66 n=6 ec=58/45 lis/c=65/58 les/c/f=66/59/0 sis=68 pruub=14.877914429s) [0] r=-1 lpr=68 pi=[58,68)/1 crt=51'1027 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 203.549209595s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 09:42:46 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 68 pg[10.16( v 51'1027 (0'0,51'1027] local-lis/les=65/66 n=5 ec=58/45 lis/c=65/58 les/c/f=66/59/0 sis=68 pruub=14.877550125s) [0] r=-1 lpr=68 pi=[58,68)/1 crt=51'1027 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 203.548934937s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 09:42:46 compute-0 podman[96854]: 2025-12-06 09:42:46.360200624 +0000 UTC m=+1.920910748 volume create 55bf1e0cfc98ad90888b42fa4ce9bd26d0941c436cb72af7b9d3cb62ff298b73
Dec 06 09:42:46 compute-0 podman[96854]: 2025-12-06 09:42:46.368348832 +0000 UTC m=+1.929058956 container create 4a3db88b47865d815e5b5ef61e08dd7b0a13878f5794c99012c308ac35151575 (image=quay.io/prometheus/alertmanager:v0.25.0, name=vibrant_bhabha, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 06 09:42:46 compute-0 podman[96854]: 2025-12-06 09:42:46.338349918 +0000 UTC m=+1.899060122 image pull c8568f914cd25b2062c44e9f79f9c18da6e3b85fe0c47a12a2191c61426c2b19 quay.io/prometheus/alertmanager:v0.25.0
Dec 06 09:42:46 compute-0 systemd[1]: Started libpod-conmon-4a3db88b47865d815e5b5ef61e08dd7b0a13878f5794c99012c308ac35151575.scope.
Dec 06 09:42:46 compute-0 systemd[1]: Started libcrun container.
Dec 06 09:42:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d491c645902ec7875796dbf6576d3bd2d10093445a7eeec1f1abef0ca1976926/merged/alertmanager supports timestamps until 2038 (0x7fffffff)
Dec 06 09:42:46 compute-0 podman[96854]: 2025-12-06 09:42:46.45401817 +0000 UTC m=+2.014728304 container init 4a3db88b47865d815e5b5ef61e08dd7b0a13878f5794c99012c308ac35151575 (image=quay.io/prometheus/alertmanager:v0.25.0, name=vibrant_bhabha, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 06 09:42:46 compute-0 podman[96854]: 2025-12-06 09:42:46.460642427 +0000 UTC m=+2.021352551 container start 4a3db88b47865d815e5b5ef61e08dd7b0a13878f5794c99012c308ac35151575 (image=quay.io/prometheus/alertmanager:v0.25.0, name=vibrant_bhabha, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 06 09:42:46 compute-0 vibrant_bhabha[97017]: 65534 65534
Dec 06 09:42:46 compute-0 systemd[1]: libpod-4a3db88b47865d815e5b5ef61e08dd7b0a13878f5794c99012c308ac35151575.scope: Deactivated successfully.
Dec 06 09:42:46 compute-0 podman[96854]: 2025-12-06 09:42:46.466546845 +0000 UTC m=+2.027256969 container attach 4a3db88b47865d815e5b5ef61e08dd7b0a13878f5794c99012c308ac35151575 (image=quay.io/prometheus/alertmanager:v0.25.0, name=vibrant_bhabha, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 06 09:42:46 compute-0 podman[96854]: 2025-12-06 09:42:46.467208893 +0000 UTC m=+2.027919077 container died 4a3db88b47865d815e5b5ef61e08dd7b0a13878f5794c99012c308ac35151575 (image=quay.io/prometheus/alertmanager:v0.25.0, name=vibrant_bhabha, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 06 09:42:46 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:42:46 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9228003c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:42:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-d491c645902ec7875796dbf6576d3bd2d10093445a7eeec1f1abef0ca1976926-merged.mount: Deactivated successfully.
Dec 06 09:42:46 compute-0 podman[96854]: 2025-12-06 09:42:46.900110341 +0000 UTC m=+2.460820465 container remove 4a3db88b47865d815e5b5ef61e08dd7b0a13878f5794c99012c308ac35151575 (image=quay.io/prometheus/alertmanager:v0.25.0, name=vibrant_bhabha, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 06 09:42:46 compute-0 podman[96854]: 2025-12-06 09:42:46.905937237 +0000 UTC m=+2.466647431 volume remove 55bf1e0cfc98ad90888b42fa4ce9bd26d0941c436cb72af7b9d3cb62ff298b73
Dec 06 09:42:46 compute-0 systemd[1]: libpod-conmon-4a3db88b47865d815e5b5ef61e08dd7b0a13878f5794c99012c308ac35151575.scope: Deactivated successfully.
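
The short-lived vibrant_bhabha container above starts, prints only "65534 65534", and is immediately removed along with its volume; this is consistent with cephadm probing the image for the uid/gid it should own the daemon's files as before writing the real systemd unit (65534 is the nobody user the upstream Prometheus images run as). A hedged reproduction of such a probe, assuming podman, the same image tag, and a busybox-style stat inside the image; the probed path and the entrypoint override are assumptions, not cephadm's exact invocation:

    #!/usr/bin/env python3
    # Run the alertmanager image once just to report the numeric uid/gid
    # of its config directory, then let --rm discard the container.
    import subprocess

    out = subprocess.check_output(
        ["podman", "run", "--rm", "--entrypoint", "stat",
         "quay.io/prometheus/alertmanager:v0.25.0",
         "-c", "%u %g", "/etc/alertmanager"])
    print(out.decode().strip())  # expected here: "65534 65534"
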
Dec 06 09:42:47 compute-0 podman[97058]: 2025-12-06 09:42:47.00488526 +0000 UTC m=+0.062972379 volume create 0e5088438877dddb2afc78eb30ca20bf07027d633261d26e8773c98535dd080e
Dec 06 09:42:47 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v72: 337 pgs: 8 remapped+peering, 16 peering, 1 active+recovering, 312 active+clean; 458 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 3.9 KiB/s rd, 3 op/s; 1/226 objects misplaced (0.442%); 757 B/s, 2 keys/s, 24 objects/s recovering
Dec 06 09:42:47 compute-0 podman[97058]: 2025-12-06 09:42:47.022013299 +0000 UTC m=+0.080100378 container create dfb5f1c99dcc4d3051894fd4cde580b2aac11d754d677f986e928a4423ed9324 (image=quay.io/prometheus/alertmanager:v0.25.0, name=happy_chatterjee, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 06 09:42:47 compute-0 python3[97056]: ansible-ansible.builtin.get_url Invoked with url=http://192.168.122.100:8443 dest=/tmp/dash_response mode=0644 validate_certs=False force=False http_agent=ansible-httpget use_proxy=True force_basic_auth=False use_gssapi=False backup=False checksum= timeout=10 unredirected_headers=[] decompress=True use_netrc=True unsafe_writes=False url_username=None url_password=NOT_LOGGING_PARAMETER client_cert=None client_key=None headers=None tmp_dest=None ciphers=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:42:47 compute-0 systemd[1]: Started libpod-conmon-dfb5f1c99dcc4d3051894fd4cde580b2aac11d754d677f986e928a4423ed9324.scope.
Dec 06 09:42:47 compute-0 podman[97058]: 2025-12-06 09:42:46.986079785 +0000 UTC m=+0.044166904 image pull c8568f914cd25b2062c44e9f79f9c18da6e3b85fe0c47a12a2191c61426c2b19 quay.io/prometheus/alertmanager:v0.25.0
Dec 06 09:42:47 compute-0 systemd[1]: Started libcrun container.
Dec 06 09:42:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/603cea872761c38c779da728e65dac381864c16a1e05c3dd744cf4f1a8953f17/merged/alertmanager supports timestamps until 2038 (0x7fffffff)
Dec 06 09:42:47 compute-0 podman[97058]: 2025-12-06 09:42:47.121514768 +0000 UTC m=+0.179601857 container init dfb5f1c99dcc4d3051894fd4cde580b2aac11d754d677f986e928a4423ed9324 (image=quay.io/prometheus/alertmanager:v0.25.0, name=happy_chatterjee, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 06 09:42:47 compute-0 podman[97058]: 2025-12-06 09:42:47.128085333 +0000 UTC m=+0.186172462 container start dfb5f1c99dcc4d3051894fd4cde580b2aac11d754d677f986e928a4423ed9324 (image=quay.io/prometheus/alertmanager:v0.25.0, name=happy_chatterjee, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 06 09:42:47 compute-0 happy_chatterjee[97076]: 65534 65534
Dec 06 09:42:47 compute-0 systemd[1]: libpod-dfb5f1c99dcc4d3051894fd4cde580b2aac11d754d677f986e928a4423ed9324.scope: Deactivated successfully.
Dec 06 09:42:47 compute-0 podman[97058]: 2025-12-06 09:42:47.13316056 +0000 UTC m=+0.191247659 container attach dfb5f1c99dcc4d3051894fd4cde580b2aac11d754d677f986e928a4423ed9324 (image=quay.io/prometheus/alertmanager:v0.25.0, name=happy_chatterjee, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 06 09:42:47 compute-0 podman[97058]: 2025-12-06 09:42:47.133956441 +0000 UTC m=+0.192043560 container died dfb5f1c99dcc4d3051894fd4cde580b2aac11d754d677f986e928a4423ed9324 (image=quay.io/prometheus/alertmanager:v0.25.0, name=happy_chatterjee, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 06 09:42:47 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:42:47 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9230003870 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:42:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-603cea872761c38c779da728e65dac381864c16a1e05c3dd744cf4f1a8953f17-merged.mount: Deactivated successfully.
Dec 06 09:42:47 compute-0 podman[97058]: 2025-12-06 09:42:47.197759182 +0000 UTC m=+0.255846311 container remove dfb5f1c99dcc4d3051894fd4cde580b2aac11d754d677f986e928a4423ed9324 (image=quay.io/prometheus/alertmanager:v0.25.0, name=happy_chatterjee, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 06 09:42:47 compute-0 podman[97058]: 2025-12-06 09:42:47.203831965 +0000 UTC m=+0.261919134 volume remove 0e5088438877dddb2afc78eb30ca20bf07027d633261d26e8773c98535dd080e
Dec 06 09:42:47 compute-0 ceph-mgr[74618]: [dashboard INFO request] [192.168.122.100:43702] [GET] [200] [0.129s] [6.3K] [1d75e518-79d6-4695-9ca7-e976e7bffe43] /
Dec 06 09:42:47 compute-0 systemd[1]: libpod-conmon-dfb5f1c99dcc4d3051894fd4cde580b2aac11d754d677f986e928a4423ed9324.scope: Deactivated successfully.
Dec 06 09:42:47 compute-0 systemd[1]: Reloading.
Dec 06 09:42:47 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e68 do_prune osdmap full prune enabled
Dec 06 09:42:47 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e69 e69: 3 total, 3 up, 3 in
Dec 06 09:42:47 compute-0 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e69: 3 total, 3 up, 3 in
Dec 06 09:42:47 compute-0 ceph-mon[74327]: 10.1f scrub starts
Dec 06 09:42:47 compute-0 ceph-mon[74327]: 10.1f scrub ok
Dec 06 09:42:47 compute-0 ceph-mon[74327]: osdmap e68: 3 total, 3 up, 3 in
Dec 06 09:42:47 compute-0 ceph-mon[74327]: 10.1a scrub starts
Dec 06 09:42:47 compute-0 ceph-mon[74327]: 10.1a scrub ok
Dec 06 09:42:47 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-keepalived-nfs-cephfs-compute-0-ylrrzf[96493]: Sat Dec  6 09:42:47 2025: (VI_0) Received advert from 192.168.122.102 with lower priority 90, ours 100, forcing new election
Dec 06 09:42:47 compute-0 systemd-rc-local-generator[97126]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 09:42:47 compute-0 systemd-sysv-generator[97130]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 09:42:47 compute-0 python3[97152]: ansible-ansible.builtin.get_url Invoked with url=http://192.168.122.100:8443 dest=/tmp/dash_http_response mode=0644 validate_certs=False username=VALUE_SPECIFIED_IN_NO_LOG_PARAMETER password=NOT_LOGGING_PARAMETER url_username=VALUE_SPECIFIED_IN_NO_LOG_PARAMETER url_password=NOT_LOGGING_PARAMETER force=False http_agent=ansible-httpget use_proxy=True force_basic_auth=False use_gssapi=False backup=False checksum= timeout=10 unredirected_headers=[] decompress=True use_netrc=True unsafe_writes=False client_cert=None client_key=None headers=None tmp_dest=None ciphers=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:42:47 compute-0 ceph-mgr[74618]: [dashboard INFO request] [192.168.122.100:43712] [GET] [200] [0.003s] [6.3K] [a77a6587-a07c-471d-8155-1790bf33a6b0] /
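
The two ansible get_url invocations above are reachability checks against the mgr dashboard at http://192.168.122.100:8443 (first anonymous, then with credentials masked as NOT_LOGGING_PARAMETER), and the mgr's own access log confirms both with [GET] [200]. A minimal equivalent of the anonymous probe, assuming only the URL and destination file taken from the logged call:

    #!/usr/bin/env python3
    # Equivalent of the ansible.builtin.get_url dashboard probe above.
    import urllib.request

    URL = "http://192.168.122.100:8443"  # mgr dashboard, from the log
    with urllib.request.urlopen(URL, timeout=10) as resp:
        body = resp.read()
        status = resp.status
    with open("/tmp/dash_response", "wb") as f:
        f.write(body)
    print(status, len(body))  # mgr logged 200 and ~6.3K for this request
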
Dec 06 09:42:47 compute-0 systemd[1]: Reloading.
Dec 06 09:42:47 compute-0 systemd-rc-local-generator[97186]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 09:42:47 compute-0 systemd-sysv-generator[97190]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 09:42:47 compute-0 systemd[1]: Starting Ceph alertmanager.compute-0 for 5ecd3f74-dade-5fc4-92ce-8950ae424258...
Dec 06 09:42:48 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:42:48 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9240001c00 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:42:48 compute-0 podman[97244]: 2025-12-06 09:42:48.316282593 +0000 UTC m=+0.059299671 volume create cc9140d1b399a34df664d17bf3d5da457ec5a14a1279788aa2852185673a3bfd
Dec 06 09:42:48 compute-0 podman[97244]: 2025-12-06 09:42:48.329363744 +0000 UTC m=+0.072380802 container create b475766d055cff0f70d7ce61dd24d5c1939b80e781c2c628ce05f8102b0c9b5b (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 06 09:42:48 compute-0 ceph-mon[74327]: pgmap v72: 337 pgs: 8 remapped+peering, 16 peering, 1 active+recovering, 312 active+clean; 458 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 3.9 KiB/s rd, 3 op/s; 1/226 objects misplaced (0.442%); 757 B/s, 2 keys/s, 24 objects/s recovering
Dec 06 09:42:48 compute-0 ceph-mon[74327]: 10.11 scrub starts
Dec 06 09:42:48 compute-0 ceph-mon[74327]: 10.11 scrub ok
Dec 06 09:42:48 compute-0 ceph-mon[74327]: osdmap e69: 3 total, 3 up, 3 in
Dec 06 09:42:48 compute-0 ceph-mon[74327]: 10.16 scrub starts
Dec 06 09:42:48 compute-0 ceph-mon[74327]: 10.16 scrub ok
Dec 06 09:42:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bb2a73ca3b14a2c20beb30faadb6ace12cd5adb72f156644e5801ee5b84b2c3c/merged/alertmanager supports timestamps until 2038 (0x7fffffff)
Dec 06 09:42:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bb2a73ca3b14a2c20beb30faadb6ace12cd5adb72f156644e5801ee5b84b2c3c/merged/etc/alertmanager supports timestamps until 2038 (0x7fffffff)
Dec 06 09:42:48 compute-0 podman[97244]: 2025-12-06 09:42:48.299734509 +0000 UTC m=+0.042751607 image pull c8568f914cd25b2062c44e9f79f9c18da6e3b85fe0c47a12a2191c61426c2b19 quay.io/prometheus/alertmanager:v0.25.0
Dec 06 09:42:48 compute-0 podman[97244]: 2025-12-06 09:42:48.410835419 +0000 UTC m=+0.153852497 container init b475766d055cff0f70d7ce61dd24d5c1939b80e781c2c628ce05f8102b0c9b5b (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 06 09:42:48 compute-0 podman[97244]: 2025-12-06 09:42:48.418300679 +0000 UTC m=+0.161317737 container start b475766d055cff0f70d7ce61dd24d5c1939b80e781c2c628ce05f8102b0c9b5b (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 06 09:42:48 compute-0 bash[97244]: b475766d055cff0f70d7ce61dd24d5c1939b80e781c2c628ce05f8102b0c9b5b
Dec 06 09:42:48 compute-0 systemd[1]: Started Ceph alertmanager.compute-0 for 5ecd3f74-dade-5fc4-92ce-8950ae424258.
Dec 06 09:42:48 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[97259]: ts=2025-12-06T09:42:48.462Z caller=main.go:240 level=info msg="Starting Alertmanager" version="(version=0.25.0, branch=HEAD, revision=258fab7cdd551f2cf251ed0348f0ad7289aee789)"
Dec 06 09:42:48 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[97259]: ts=2025-12-06T09:42:48.462Z caller=main.go:241 level=info build_context="(go=go1.19.4, user=root@abe866dd5717, date=20221222-14:51:36)"
Dec 06 09:42:48 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[97259]: ts=2025-12-06T09:42:48.476Z caller=cluster.go:185 level=info component=cluster msg="setting advertise address explicitly" addr=192.168.122.100 port=9094
Dec 06 09:42:48 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[97259]: ts=2025-12-06T09:42:48.478Z caller=cluster.go:681 level=info component=cluster msg="Waiting for gossip to settle..." interval=2s
Dec 06 09:42:48 compute-0 sudo[96710]: pam_unix(sudo:session): session closed for user root
Dec 06 09:42:48 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 06 09:42:48 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[97259]: ts=2025-12-06T09:42:48.525Z caller=coordinator.go:113 level=info component=configuration msg="Loading configuration file" file=/etc/alertmanager/alertmanager.yml
Dec 06 09:42:48 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[97259]: ts=2025-12-06T09:42:48.527Z caller=coordinator.go:126 level=info component=configuration msg="Completed loading of configuration file" file=/etc/alertmanager/alertmanager.yml
Dec 06 09:42:48 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[97259]: ts=2025-12-06T09:42:48.533Z caller=tls_config.go:232 level=info msg="Listening on" address=192.168.122.100:9093
Dec 06 09:42:48 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[97259]: ts=2025-12-06T09:42:48.533Z caller=tls_config.go:235 level=info msg="TLS is disabled." http2=false address=192.168.122.100:9093
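
At this point alertmanager.compute-0 is serving plain HTTP on 192.168.122.100:9093 (TLS disabled) while it waits for cluster gossip on 9094 to settle. A quick liveness check against it, using Alertmanager's standard /-/healthy endpoint (present in v0.25.0):

    #!/usr/bin/env python3
    # Probe the Alertmanager started above; /-/healthy answers 200 as soon
    # as the process is up, independent of gossip settling.
    import urllib.request

    with urllib.request.urlopen("http://192.168.122.100:9093/-/healthy",
                                timeout=5) as resp:
        print(resp.status, resp.read().decode().strip())  # expect: 200 OK
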
Dec 06 09:42:48 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:42:48 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 06 09:42:48 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:42:48 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.alertmanager}] v 0)
Dec 06 09:42:48 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:42:48 compute-0 ceph-mgr[74618]: [progress INFO root] complete: finished ev c64c0c91-5f30-4161-a910-ead2e7fb7a40 (Updating alertmanager deployment (+1 -> 1))
Dec 06 09:42:48 compute-0 ceph-mgr[74618]: [progress INFO root] Completed event c64c0c91-5f30-4161-a910-ead2e7fb7a40 (Updating alertmanager deployment (+1 -> 1)) in 5 seconds
Dec 06 09:42:48 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.alertmanager}] v 0)
Dec 06 09:42:48 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:42:48 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9230003870 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:42:48 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:42:48 compute-0 ceph-mgr[74618]: [progress INFO root] update: starting ev 0b16bde7-b1bb-4174-ba29-7d221cc5d567 (Updating grafana deployment (+1 -> 1))
Dec 06 09:42:48 compute-0 ceph-mgr[74618]: [cephadm INFO cephadm.services.monitoring] Regenerating cephadm self-signed grafana TLS certificates
Dec 06 09:42:48 compute-0 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Regenerating cephadm self-signed grafana TLS certificates
Dec 06 09:42:48 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/cert_store.cert.grafana_cert}] v 0)
Dec 06 09:42:48 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:42:48 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/cert_store.key.grafana_key}] v 0)
Dec 06 09:42:48 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:42:48 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"} v 0)
Dec 06 09:42:48 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch
Dec 06 09:42:48 compute-0 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch
Dec 06 09:42:48 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/GRAFANA_API_SSL_VERIFY}] v 0)
Dec 06 09:42:48 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:42:48 compute-0 ceph-mgr[74618]: [cephadm INFO cephadm.serve] Deploying daemon grafana.compute-0 on compute-0
Dec 06 09:42:48 compute-0 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Deploying daemon grafana.compute-0 on compute-0
Dec 06 09:42:48 compute-0 sudo[97280]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 09:42:48 compute-0 sudo[97280]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:42:48 compute-0 sudo[97280]: pam_unix(sudo:session): session closed for user root
Dec 06 09:42:48 compute-0 sudo[97305]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/grafana:10.4.0 --timeout 895 _orch deploy --fsid 5ecd3f74-dade-5fc4-92ce-8950ae424258
Dec 06 09:42:48 compute-0 sudo[97305]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:42:48 compute-0 ceph-osd[82803]: log_channel(cluster) log [DBG] : 7.1a scrub starts
Dec 06 09:42:49 compute-0 ceph-osd[82803]: log_channel(cluster) log [DBG] : 7.1a scrub ok
Dec 06 09:42:49 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v74: 337 pgs: 8 remapped+peering, 16 peering, 1 active+recovering, 312 active+clean; 458 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 1/226 objects misplaced (0.442%)
Dec 06 09:42:49 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:42:49 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9248003fe0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:42:49 compute-0 ceph-mgr[74618]: [progress INFO root] Writing back 25 completed events
Dec 06 09:42:49 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Dec 06 09:42:50 compute-0 ceph-osd[82803]: log_channel(cluster) log [DBG] : 8.8 scrub starts
Dec 06 09:42:50 compute-0 ceph-osd[82803]: log_channel(cluster) log [DBG] : 8.8 scrub ok
Dec 06 09:42:50 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:42:50 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9228003c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:42:50 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:42:50 compute-0 ceph-mgr[74618]: [progress WARNING root] Starting Global Recovery Event,25 pgs not in active + clean state
Dec 06 09:42:50 compute-0 ceph-mon[74327]: 10.13 scrub starts
Dec 06 09:42:50 compute-0 ceph-mon[74327]: 10.13 scrub ok
Dec 06 09:42:50 compute-0 ceph-mon[74327]: 10.e scrub starts
Dec 06 09:42:50 compute-0 ceph-mon[74327]: 10.e scrub ok
Dec 06 09:42:50 compute-0 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:42:50 compute-0 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:42:50 compute-0 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:42:50 compute-0 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:42:50 compute-0 ceph-mon[74327]: Regenerating cephadm self-signed grafana TLS certificates
Dec 06 09:42:50 compute-0 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:42:50 compute-0 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:42:50 compute-0 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch
Dec 06 09:42:50 compute-0 ceph-mon[74327]: from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch
Dec 06 09:42:50 compute-0 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:42:50 compute-0 ceph-mon[74327]: Deploying daemon grafana.compute-0 on compute-0
Dec 06 09:42:50 compute-0 ceph-mon[74327]: 7.1a scrub starts
Dec 06 09:42:50 compute-0 ceph-mon[74327]: 7.1a scrub ok
Dec 06 09:42:50 compute-0 ceph-mon[74327]: pgmap v74: 337 pgs: 8 remapped+peering, 16 peering, 1 active+recovering, 312 active+clean; 458 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 1/226 objects misplaced (0.442%)
Dec 06 09:42:50 compute-0 ceph-mon[74327]: 8.15 scrub starts
Dec 06 09:42:50 compute-0 ceph-mon[74327]: 8.15 scrub ok
Dec 06 09:42:50 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e69 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 09:42:50 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[97259]: ts=2025-12-06T09:42:50.479Z caller=cluster.go:706 level=info component=cluster msg="gossip not settled" polls=0 before=0 now=1 elapsed=2.00046291s
Dec 06 09:42:50 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:42:50 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9230003870 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:42:50 compute-0 ceph-osd[82803]: log_channel(cluster) log [DBG] : 9.11 scrub starts
Dec 06 09:42:50 compute-0 ceph-osd[82803]: log_channel(cluster) log [DBG] : 9.11 scrub ok
Dec 06 09:42:51 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v75: 337 pgs: 337 active+clean; 458 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 1 objects/s recovering
Dec 06 09:42:51 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"} v 0)
Dec 06 09:42:51 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"}]: dispatch
Dec 06 09:42:51 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"} v 0)
Dec 06 09:42:51 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]: dispatch
Dec 06 09:42:51 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:42:51 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9228003c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:42:51 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e69 do_prune osdmap full prune enabled
Dec 06 09:42:51 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"}]': finished
Dec 06 09:42:51 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]': finished
Dec 06 09:42:51 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e70 e70: 3 total, 3 up, 3 in
Dec 06 09:42:51 compute-0 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e70: 3 total, 3 up, 3 in
Dec 06 09:42:51 compute-0 ceph-mon[74327]: 10.a deep-scrub starts
Dec 06 09:42:51 compute-0 ceph-mon[74327]: 10.a deep-scrub ok
Dec 06 09:42:51 compute-0 ceph-mon[74327]: 8.8 scrub starts
Dec 06 09:42:51 compute-0 ceph-mon[74327]: 8.8 scrub ok
Dec 06 09:42:51 compute-0 ceph-mon[74327]: 8.f scrub starts
Dec 06 09:42:51 compute-0 ceph-mon[74327]: 8.f scrub ok
Dec 06 09:42:51 compute-0 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:42:51 compute-0 ceph-mon[74327]: 10.2 scrub starts
Dec 06 09:42:51 compute-0 ceph-mon[74327]: 10.2 scrub ok
Dec 06 09:42:51 compute-0 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"}]: dispatch
Dec 06 09:42:51 compute-0 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]: dispatch
Dec 06 09:42:51 compute-0 ceph-osd[82803]: log_channel(cluster) log [DBG] : 8.14 scrub starts
Dec 06 09:42:51 compute-0 ceph-osd[82803]: log_channel(cluster) log [DBG] : 8.14 scrub ok
Dec 06 09:42:52 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:42:52 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9224000b60 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:42:52 compute-0 ceph-mon[74327]: 9.11 scrub starts
Dec 06 09:42:52 compute-0 ceph-mon[74327]: 9.11 scrub ok
Dec 06 09:42:52 compute-0 ceph-mon[74327]: pgmap v75: 337 pgs: 337 active+clean; 458 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 1 objects/s recovering
Dec 06 09:42:52 compute-0 ceph-mon[74327]: 9.1d scrub starts
Dec 06 09:42:52 compute-0 ceph-mon[74327]: 9.1d scrub ok
Dec 06 09:42:52 compute-0 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"}]': finished
Dec 06 09:42:52 compute-0 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]': finished
Dec 06 09:42:52 compute-0 ceph-mon[74327]: osdmap e70: 3 total, 3 up, 3 in
Dec 06 09:42:52 compute-0 ceph-mon[74327]: 8.1 scrub starts
Dec 06 09:42:52 compute-0 ceph-mon[74327]: 8.1 scrub ok
Dec 06 09:42:52 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:42:52 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f921c000d00 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:42:52 compute-0 ceph-osd[82803]: log_channel(cluster) log [DBG] : 9.12 scrub starts
Dec 06 09:42:52 compute-0 ceph-osd[82803]: log_channel(cluster) log [DBG] : 9.12 scrub ok
Dec 06 09:42:53 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v77: 337 pgs: 337 active+clean; 458 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 0 objects/s recovering
Dec 06 09:42:53 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"} v 0)
Dec 06 09:42:53 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"}]: dispatch
Dec 06 09:42:53 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"} v 0)
Dec 06 09:42:53 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]: dispatch
Dec 06 09:42:53 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:42:53 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9230003870 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:42:53 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e70 do_prune osdmap full prune enabled
Dec 06 09:42:53 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"}]': finished
Dec 06 09:42:53 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]': finished
Dec 06 09:42:53 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e71 e71: 3 total, 3 up, 3 in
Dec 06 09:42:53 compute-0 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e71: 3 total, 3 up, 3 in
Dec 06 09:42:53 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 71 pg[10.14( v 51'1027 (0'0,51'1027] local-lis/les=58/59 n=5 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=71 pruub=15.601285934s) [2] r=-1 lpr=71 pi=[58,71)/1 crt=51'1027 lcod 0'0 mlcod 0'0 active pruub 211.311798096s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 06 09:42:53 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 71 pg[10.14( v 51'1027 (0'0,51'1027] local-lis/les=58/59 n=5 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=71 pruub=15.601237297s) [2] r=-1 lpr=71 pi=[58,71)/1 crt=51'1027 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 211.311798096s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 09:42:53 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 71 pg[10.c( v 51'1027 (0'0,51'1027] local-lis/les=58/59 n=6 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=71 pruub=15.600256920s) [2] r=-1 lpr=71 pi=[58,71)/1 crt=51'1027 lcod 0'0 mlcod 0'0 active pruub 211.311447144s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 06 09:42:53 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 71 pg[10.c( v 51'1027 (0'0,51'1027] local-lis/les=58/59 n=6 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=71 pruub=15.600227356s) [2] r=-1 lpr=71 pi=[58,71)/1 crt=51'1027 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 211.311447144s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 09:42:53 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 71 pg[10.4( v 66'1034 (0'0,66'1034] local-lis/les=58/59 n=6 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=71 pruub=15.599624634s) [2] r=-1 lpr=71 pi=[58,71)/1 crt=66'1034 lcod 66'1033 mlcod 66'1033 active pruub 211.311111450s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 06 09:42:53 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 71 pg[10.4( v 66'1034 (0'0,66'1034] local-lis/les=58/59 n=6 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=71 pruub=15.599576950s) [2] r=-1 lpr=71 pi=[58,71)/1 crt=66'1034 lcod 66'1033 mlcod 0'0 unknown NOTIFY pruub 211.311111450s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 09:42:53 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 71 pg[10.1c( v 51'1027 (0'0,51'1027] local-lis/les=58/59 n=5 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=71 pruub=15.599024773s) [2] r=-1 lpr=71 pi=[58,71)/1 crt=51'1027 lcod 0'0 mlcod 0'0 active pruub 211.311080933s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 06 09:42:53 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 71 pg[10.1c( v 51'1027 (0'0,51'1027] local-lis/les=58/59 n=5 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=71 pruub=15.598998070s) [2] r=-1 lpr=71 pi=[58,71)/1 crt=51'1027 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 211.311080933s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 09:42:53 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 71 pg[6.5( empty local-lis/les=0/0 n=0 ec=54/21 lis/c=58/58 les/c/f=59/60/0 sis=71) [1] r=0 lpr=71 pi=[58,71)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 09:42:53 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 71 pg[6.d( empty local-lis/les=0/0 n=0 ec=54/21 lis/c=58/58 les/c/f=59/59/0 sis=71) [1] r=0 lpr=71 pi=[58,71)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 09:42:53 compute-0 ceph-mon[74327]: 8.14 scrub starts
Dec 06 09:42:53 compute-0 ceph-mon[74327]: 8.14 scrub ok
Dec 06 09:42:53 compute-0 ceph-mon[74327]: 9.13 deep-scrub starts
Dec 06 09:42:53 compute-0 ceph-mon[74327]: 9.13 deep-scrub ok
Dec 06 09:42:53 compute-0 ceph-mon[74327]: 8.0 scrub starts
Dec 06 09:42:53 compute-0 ceph-mon[74327]: 8.0 scrub ok
Dec 06 09:42:53 compute-0 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"}]: dispatch
Dec 06 09:42:53 compute-0 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]: dispatch
Dec 06 09:42:54 compute-0 ceph-osd[82803]: log_channel(cluster) log [DBG] : 9.d scrub starts
Dec 06 09:42:54 compute-0 ceph-osd[82803]: log_channel(cluster) log [DBG] : 9.d scrub ok
Dec 06 09:42:54 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:42:54 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9228003c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:42:54 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e71 do_prune osdmap full prune enabled
Dec 06 09:42:54 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:42:54 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f92240016a0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:42:55 compute-0 ceph-osd[82803]: log_channel(cluster) log [DBG] : 9.15 deep-scrub starts
Dec 06 09:42:55 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v79: 337 pgs: 4 unknown, 2 peering, 331 active+clean; 457 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 0 objects/s recovering
Dec 06 09:42:55 compute-0 ceph-osd[82803]: log_channel(cluster) log [DBG] : 9.15 deep-scrub ok
Dec 06 09:42:55 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e72 e72: 3 total, 3 up, 3 in
Dec 06 09:42:55 compute-0 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e72: 3 total, 3 up, 3 in
Dec 06 09:42:55 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 72 pg[10.1c( v 51'1027 (0'0,51'1027] local-lis/les=58/59 n=5 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=72) [2]/[1] r=0 lpr=72 pi=[58,72)/1 crt=51'1027 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 06 09:42:55 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 72 pg[10.1c( v 51'1027 (0'0,51'1027] local-lis/les=58/59 n=5 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=72) [2]/[1] r=0 lpr=72 pi=[58,72)/1 crt=51'1027 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 06 09:42:55 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 72 pg[10.14( v 51'1027 (0'0,51'1027] local-lis/les=58/59 n=5 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=72) [2]/[1] r=0 lpr=72 pi=[58,72)/1 crt=51'1027 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 06 09:42:55 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 72 pg[10.14( v 51'1027 (0'0,51'1027] local-lis/les=58/59 n=5 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=72) [2]/[1] r=0 lpr=72 pi=[58,72)/1 crt=51'1027 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 06 09:42:55 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 72 pg[10.c( v 51'1027 (0'0,51'1027] local-lis/les=58/59 n=6 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=72) [2]/[1] r=0 lpr=72 pi=[58,72)/1 crt=51'1027 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 06 09:42:55 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 72 pg[10.c( v 51'1027 (0'0,51'1027] local-lis/les=58/59 n=6 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=72) [2]/[1] r=0 lpr=72 pi=[58,72)/1 crt=51'1027 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 06 09:42:55 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 72 pg[10.4( v 66'1034 (0'0,66'1034] local-lis/les=58/59 n=6 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=72) [2]/[1] r=0 lpr=72 pi=[58,72)/1 crt=66'1034 lcod 66'1033 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 06 09:42:55 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 72 pg[10.4( v 66'1034 (0'0,66'1034] local-lis/les=58/59 n=6 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=72) [2]/[1] r=0 lpr=72 pi=[58,72)/1 crt=66'1034 lcod 66'1033 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 06 09:42:55 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 72 pg[6.5( v 50'39 lc 48'11 (0'0,50'39] local-lis/les=71/72 n=2 ec=54/21 lis/c=58/58 les/c/f=59/60/0 sis=71) [1] r=0 lpr=71 pi=[58,71)/1 crt=50'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 09:42:55 compute-0 ceph-mon[74327]: 9.12 scrub starts
Dec 06 09:42:55 compute-0 ceph-mon[74327]: 9.12 scrub ok
Dec 06 09:42:55 compute-0 ceph-mon[74327]: pgmap v77: 337 pgs: 337 active+clean; 458 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 0 objects/s recovering
Dec 06 09:42:55 compute-0 ceph-mon[74327]: 8.c scrub starts
Dec 06 09:42:55 compute-0 ceph-mon[74327]: 8.c scrub ok
Dec 06 09:42:55 compute-0 ceph-mon[74327]: 8.7 scrub starts
Dec 06 09:42:55 compute-0 ceph-mon[74327]: 8.7 scrub ok
Dec 06 09:42:55 compute-0 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"}]': finished
Dec 06 09:42:55 compute-0 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]': finished
Dec 06 09:42:55 compute-0 ceph-mon[74327]: osdmap e71: 3 total, 3 up, 3 in
Dec 06 09:42:55 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 72 pg[6.d( v 50'39 lc 48'13 (0'0,50'39] local-lis/les=71/72 n=1 ec=54/21 lis/c=58/58 les/c/f=59/59/0 sis=71) [1] r=0 lpr=71 pi=[58,71)/1 crt=50'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 09:42:55 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:42:55 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f921c001820 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:42:55 compute-0 podman[97372]: 2025-12-06 09:42:55.308596724 +0000 UTC m=+5.757647785 container create 006844a330d1a7996ae9b7680e398963b951186b885ee0d7a7854889567bdd72 (image=quay.io/ceph/grafana:10.4.0, name=quirky_shockley, maintainer=Grafana Labs <hello@grafana.com>)
Dec 06 09:42:55 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e72 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 09:42:55 compute-0 systemd[1]: Started libpod-conmon-006844a330d1a7996ae9b7680e398963b951186b885ee0d7a7854889567bdd72.scope.
Dec 06 09:42:55 compute-0 podman[97372]: 2025-12-06 09:42:55.291871815 +0000 UTC m=+5.740922896 image pull c8b91775d855b99270fc5d22f3c6737e8cca01ef4c25c8b0362295e0746fa39b quay.io/ceph/grafana:10.4.0
Dec 06 09:42:55 compute-0 systemd[1]: Started libcrun container.
Dec 06 09:42:55 compute-0 podman[97372]: 2025-12-06 09:42:55.417113354 +0000 UTC m=+5.866164515 container init 006844a330d1a7996ae9b7680e398963b951186b885ee0d7a7854889567bdd72 (image=quay.io/ceph/grafana:10.4.0, name=quirky_shockley, maintainer=Grafana Labs <hello@grafana.com>)
Dec 06 09:42:55 compute-0 podman[97372]: 2025-12-06 09:42:55.431793427 +0000 UTC m=+5.880844488 container start 006844a330d1a7996ae9b7680e398963b951186b885ee0d7a7854889567bdd72 (image=quay.io/ceph/grafana:10.4.0, name=quirky_shockley, maintainer=Grafana Labs <hello@grafana.com>)
Dec 06 09:42:55 compute-0 podman[97372]: 2025-12-06 09:42:55.435809406 +0000 UTC m=+5.884860467 container attach 006844a330d1a7996ae9b7680e398963b951186b885ee0d7a7854889567bdd72 (image=quay.io/ceph/grafana:10.4.0, name=quirky_shockley, maintainer=Grafana Labs <hello@grafana.com>)
Dec 06 09:42:55 compute-0 quirky_shockley[97593]: 472 0
Dec 06 09:42:55 compute-0 systemd[1]: libpod-006844a330d1a7996ae9b7680e398963b951186b885ee0d7a7854889567bdd72.scope: Deactivated successfully.
Dec 06 09:42:55 compute-0 podman[97372]: 2025-12-06 09:42:55.439654318 +0000 UTC m=+5.888705419 container died 006844a330d1a7996ae9b7680e398963b951186b885ee0d7a7854889567bdd72 (image=quay.io/ceph/grafana:10.4.0, name=quirky_shockley, maintainer=Grafana Labs <hello@grafana.com>)
Dec 06 09:42:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-7345b0e7728ef9c5d02e03a7c10198624581e37250cf94428cdd6be56d331e4c-merged.mount: Deactivated successfully.
Dec 06 09:42:55 compute-0 podman[97372]: 2025-12-06 09:42:55.494177341 +0000 UTC m=+5.943228432 container remove 006844a330d1a7996ae9b7680e398963b951186b885ee0d7a7854889567bdd72 (image=quay.io/ceph/grafana:10.4.0, name=quirky_shockley, maintainer=Grafana Labs <hello@grafana.com>)
Dec 06 09:42:55 compute-0 systemd[1]: libpod-conmon-006844a330d1a7996ae9b7680e398963b951186b885ee0d7a7854889567bdd72.scope: Deactivated successfully.
Dec 06 09:42:55 compute-0 podman[97609]: 2025-12-06 09:42:55.581709608 +0000 UTC m=+0.054104572 container create b7150d97b195544a67273e8c8a7bcc507c8d3f7bb87488c70061cd9f2739e6e1 (image=quay.io/ceph/grafana:10.4.0, name=happy_raman, maintainer=Grafana Labs <hello@grafana.com>)
Dec 06 09:42:55 compute-0 systemd[1]: Started libpod-conmon-b7150d97b195544a67273e8c8a7bcc507c8d3f7bb87488c70061cd9f2739e6e1.scope.
Dec 06 09:42:55 compute-0 systemd[1]: Started libcrun container.
Dec 06 09:42:55 compute-0 podman[97609]: 2025-12-06 09:42:55.647286966 +0000 UTC m=+0.119682000 container init b7150d97b195544a67273e8c8a7bcc507c8d3f7bb87488c70061cd9f2739e6e1 (image=quay.io/ceph/grafana:10.4.0, name=happy_raman, maintainer=Grafana Labs <hello@grafana.com>)
Dec 06 09:42:55 compute-0 podman[97609]: 2025-12-06 09:42:55.559538383 +0000 UTC m=+0.031933407 image pull c8b91775d855b99270fc5d22f3c6737e8cca01ef4c25c8b0362295e0746fa39b quay.io/ceph/grafana:10.4.0
Dec 06 09:42:55 compute-0 happy_raman[97627]: 472 0
Dec 06 09:42:55 compute-0 podman[97609]: 2025-12-06 09:42:55.659653298 +0000 UTC m=+0.132048262 container start b7150d97b195544a67273e8c8a7bcc507c8d3f7bb87488c70061cd9f2739e6e1 (image=quay.io/ceph/grafana:10.4.0, name=happy_raman, maintainer=Grafana Labs <hello@grafana.com>)
Dec 06 09:42:55 compute-0 systemd[1]: libpod-b7150d97b195544a67273e8c8a7bcc507c8d3f7bb87488c70061cd9f2739e6e1.scope: Deactivated successfully.
Dec 06 09:42:55 compute-0 podman[97609]: 2025-12-06 09:42:55.668823513 +0000 UTC m=+0.141218587 container attach b7150d97b195544a67273e8c8a7bcc507c8d3f7bb87488c70061cd9f2739e6e1 (image=quay.io/ceph/grafana:10.4.0, name=happy_raman, maintainer=Grafana Labs <hello@grafana.com>)
Dec 06 09:42:55 compute-0 podman[97609]: 2025-12-06 09:42:55.669199043 +0000 UTC m=+0.141594027 container died b7150d97b195544a67273e8c8a7bcc507c8d3f7bb87488c70061cd9f2739e6e1 (image=quay.io/ceph/grafana:10.4.0, name=happy_raman, maintainer=Grafana Labs <hello@grafana.com>)
Dec 06 09:42:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-28aca0a3e4dd3272a5f192e6d68c442eda19a31b5ea64d4a15cffa5a861c0179-merged.mount: Deactivated successfully.
Dec 06 09:42:55 compute-0 podman[97609]: 2025-12-06 09:42:55.717647942 +0000 UTC m=+0.190042946 container remove b7150d97b195544a67273e8c8a7bcc507c8d3f7bb87488c70061cd9f2739e6e1 (image=quay.io/ceph/grafana:10.4.0, name=happy_raman, maintainer=Grafana Labs <hello@grafana.com>)
Dec 06 09:42:55 compute-0 systemd[1]: libpod-conmon-b7150d97b195544a67273e8c8a7bcc507c8d3f7bb87488c70061cd9f2739e6e1.scope: Deactivated successfully.
Dec 06 09:42:55 compute-0 systemd[1]: Reloading.
Dec 06 09:42:55 compute-0 systemd-sysv-generator[97677]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 09:42:55 compute-0 systemd-rc-local-generator[97672]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 09:42:56 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:42:56 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9230003870 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:42:56 compute-0 ceph-osd[82803]: log_channel(cluster) log [DBG] : 9.e scrub starts
Dec 06 09:42:56 compute-0 ceph-osd[82803]: log_channel(cluster) log [DBG] : 9.e scrub ok
Dec 06 09:42:56 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e72 do_prune osdmap full prune enabled
Dec 06 09:42:56 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e73 e73: 3 total, 3 up, 3 in
Dec 06 09:42:56 compute-0 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e73: 3 total, 3 up, 3 in
Dec 06 09:42:56 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 73 pg[10.14( v 51'1027 (0'0,51'1027] local-lis/les=72/73 n=5 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=72) [2]/[1] async=[2] r=0 lpr=72 pi=[58,72)/1 crt=51'1027 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 09:42:56 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 73 pg[10.1c( v 51'1027 (0'0,51'1027] local-lis/les=72/73 n=5 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=72) [2]/[1] async=[2] r=0 lpr=72 pi=[58,72)/1 crt=51'1027 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 09:42:56 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 73 pg[10.4( v 66'1034 (0'0,66'1034] local-lis/les=72/73 n=6 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=72) [2]/[1] async=[2] r=0 lpr=72 pi=[58,72)/1 crt=66'1034 lcod 66'1033 mlcod 0'0 active+remapped mbc={255={(0+1)=10}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 09:42:56 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 73 pg[10.c( v 51'1027 (0'0,51'1027] local-lis/les=72/73 n=6 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=72) [2]/[1] async=[2] r=0 lpr=72 pi=[58,72)/1 crt=51'1027 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 09:42:56 compute-0 ceph-mon[74327]: 9.d scrub starts
Dec 06 09:42:56 compute-0 ceph-mon[74327]: 9.d scrub ok
Dec 06 09:42:56 compute-0 ceph-mon[74327]: 8.1c scrub starts
Dec 06 09:42:56 compute-0 ceph-mon[74327]: 8.1c scrub ok
Dec 06 09:42:56 compute-0 ceph-mon[74327]: 9.1a scrub starts
Dec 06 09:42:56 compute-0 ceph-mon[74327]: 9.1a scrub ok
Dec 06 09:42:56 compute-0 ceph-mon[74327]: 9.15 deep-scrub starts
Dec 06 09:42:56 compute-0 ceph-mon[74327]: pgmap v79: 337 pgs: 4 unknown, 2 peering, 331 active+clean; 457 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 0 objects/s recovering
Dec 06 09:42:56 compute-0 ceph-mon[74327]: 9.15 deep-scrub ok
Dec 06 09:42:56 compute-0 ceph-mon[74327]: 9.9 deep-scrub starts
Dec 06 09:42:56 compute-0 ceph-mon[74327]: 9.9 deep-scrub ok
Dec 06 09:42:56 compute-0 ceph-mon[74327]: osdmap e72: 3 total, 3 up, 3 in
Dec 06 09:42:56 compute-0 ceph-mon[74327]: 8.1a scrub starts
Dec 06 09:42:56 compute-0 ceph-mon[74327]: 8.1a scrub ok
Dec 06 09:42:56 compute-0 ceph-mon[74327]: osdmap e73: 3 total, 3 up, 3 in
Dec 06 09:42:56 compute-0 systemd[1]: Reloading.
Dec 06 09:42:56 compute-0 systemd-rc-local-generator[97706]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 09:42:56 compute-0 systemd-sysv-generator[97710]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 09:42:56 compute-0 systemd[1]: Starting Ceph grafana.compute-0 for 5ecd3f74-dade-5fc4-92ce-8950ae424258...
Dec 06 09:42:56 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:42:56 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9228003c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:42:56 compute-0 podman[97771]: 2025-12-06 09:42:56.723141443 +0000 UTC m=+0.068061256 container create cf4c3ab223ccab5449a54ab666c56f3b34eab35d7e3fb2f84c99b865ca2fcfb2 (image=quay.io/ceph/grafana:10.4.0, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Dec 06 09:42:56 compute-0 podman[97771]: 2025-12-06 09:42:56.686546512 +0000 UTC m=+0.031466405 image pull c8b91775d855b99270fc5d22f3c6737e8cca01ef4c25c8b0362295e0746fa39b quay.io/ceph/grafana:10.4.0
Dec 06 09:42:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62646ffda72f68277eee1ddb53fbcad0d452c3540e217585dbd2633e8332ac48/merged/etc/grafana/certs supports timestamps until 2038 (0x7fffffff)
Dec 06 09:42:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62646ffda72f68277eee1ddb53fbcad0d452c3540e217585dbd2633e8332ac48/merged/etc/grafana/grafana.ini supports timestamps until 2038 (0x7fffffff)
Dec 06 09:42:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62646ffda72f68277eee1ddb53fbcad0d452c3540e217585dbd2633e8332ac48/merged/etc/grafana/provisioning/dashboards supports timestamps until 2038 (0x7fffffff)
Dec 06 09:42:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62646ffda72f68277eee1ddb53fbcad0d452c3540e217585dbd2633e8332ac48/merged/var/lib/grafana/grafana.db supports timestamps until 2038 (0x7fffffff)
Dec 06 09:42:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62646ffda72f68277eee1ddb53fbcad0d452c3540e217585dbd2633e8332ac48/merged/etc/grafana/provisioning/datasources supports timestamps until 2038 (0x7fffffff)
Dec 06 09:42:56 compute-0 podman[97771]: 2025-12-06 09:42:56.820626577 +0000 UTC m=+0.165546390 container init cf4c3ab223ccab5449a54ab666c56f3b34eab35d7e3fb2f84c99b865ca2fcfb2 (image=quay.io/ceph/grafana:10.4.0, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Dec 06 09:42:56 compute-0 podman[97771]: 2025-12-06 09:42:56.825751565 +0000 UTC m=+0.170671378 container start cf4c3ab223ccab5449a54ab666c56f3b34eab35d7e3fb2f84c99b865ca2fcfb2 (image=quay.io/ceph/grafana:10.4.0, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Dec 06 09:42:56 compute-0 bash[97771]: cf4c3ab223ccab5449a54ab666c56f3b34eab35d7e3fb2f84c99b865ca2fcfb2
Dec 06 09:42:56 compute-0 systemd[1]: Started Ceph grafana.compute-0 for 5ecd3f74-dade-5fc4-92ce-8950ae424258.
Dec 06 09:42:56 compute-0 sudo[97305]: pam_unix(sudo:session): session closed for user root
Dec 06 09:42:56 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 06 09:42:56 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:42:56 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 06 09:42:56 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:42:56 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.grafana}] v 0)
Dec 06 09:42:56 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:42:56 compute-0 ceph-mgr[74618]: [progress INFO root] complete: finished ev 0b16bde7-b1bb-4174-ba29-7d221cc5d567 (Updating grafana deployment (+1 -> 1))
Dec 06 09:42:56 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.grafana}] v 0)
Dec 06 09:42:56 compute-0 ceph-mgr[74618]: [progress INFO root] Completed event 0b16bde7-b1bb-4174-ba29-7d221cc5d567 (Updating grafana deployment (+1 -> 1)) in 8 seconds
Dec 06 09:42:56 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:42:56 compute-0 ceph-mgr[74618]: [progress INFO root] update: starting ev 8b961573-5d9a-4966-9430-80966b578f70 (Updating ingress.rgw.default deployment (+4 -> 4))
Dec 06 09:42:56 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ingress.rgw.default/monitor_password}] v 0)
Dec 06 09:42:57 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:42:57 compute-0 ceph-mgr[74618]: [cephadm INFO cephadm.serve] Deploying daemon haproxy.rgw.default.compute-0.vhqyer on compute-0
Dec 06 09:42:57 compute-0 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Deploying daemon haproxy.rgw.default.compute-0.vhqyer on compute-0
Dec 06 09:42:57 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v82: 337 pgs: 4 unknown, 2 peering, 331 active+clean; 457 KiB data, 125 MiB used, 60 GiB / 60 GiB avail
Dec 06 09:42:57 compute-0 ceph-osd[82803]: log_channel(cluster) log [DBG] : 8.19 scrub starts
Dec 06 09:42:57 compute-0 ceph-osd[82803]: log_channel(cluster) log [DBG] : 8.19 scrub ok
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=settings t=2025-12-06T09:42:57.093638548Z level=info msg="Starting Grafana" version=10.4.0 commit=03f502a94d17f7dc4e6c34acdf8428aedd986e4c branch=HEAD compiled=2025-12-06T09:42:57Z
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=settings t=2025-12-06T09:42:57.093899425Z level=info msg="Config loaded from" file=/usr/share/grafana/conf/defaults.ini
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=settings t=2025-12-06T09:42:57.093906605Z level=info msg="Config loaded from" file=/etc/grafana/grafana.ini
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=settings t=2025-12-06T09:42:57.093910435Z level=info msg="Config overridden from command line" arg="default.paths.data=/var/lib/grafana"
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=settings t=2025-12-06T09:42:57.093914116Z level=info msg="Config overridden from command line" arg="default.paths.logs=/var/log/grafana"
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=settings t=2025-12-06T09:42:57.093917466Z level=info msg="Config overridden from command line" arg="default.paths.plugins=/var/lib/grafana/plugins"
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=settings t=2025-12-06T09:42:57.093920666Z level=info msg="Config overridden from command line" arg="default.paths.provisioning=/etc/grafana/provisioning"
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=settings t=2025-12-06T09:42:57.093927016Z level=info msg="Config overridden from command line" arg="default.log.mode=console"
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=settings t=2025-12-06T09:42:57.093931276Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_DATA=/var/lib/grafana"
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=settings t=2025-12-06T09:42:57.093935396Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_LOGS=/var/log/grafana"
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=settings t=2025-12-06T09:42:57.093938996Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PLUGINS=/var/lib/grafana/plugins"
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=settings t=2025-12-06T09:42:57.093942156Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PROVISIONING=/etc/grafana/provisioning"
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=settings t=2025-12-06T09:42:57.093945666Z level=info msg=Target target=[all]
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=settings t=2025-12-06T09:42:57.093955967Z level=info msg="Path Home" path=/usr/share/grafana
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=settings t=2025-12-06T09:42:57.093959437Z level=info msg="Path Data" path=/var/lib/grafana
Dec 06 09:42:57 compute-0 sudo[97807]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=settings t=2025-12-06T09:42:57.093962477Z level=info msg="Path Logs" path=/var/log/grafana
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=settings t=2025-12-06T09:42:57.093965487Z level=info msg="Path Plugins" path=/var/lib/grafana/plugins
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=settings t=2025-12-06T09:42:57.093968757Z level=info msg="Path Provisioning" path=/etc/grafana/provisioning
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=settings t=2025-12-06T09:42:57.093971897Z level=info msg="App mode production"
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=sqlstore t=2025-12-06T09:42:57.096010491Z level=info msg="Connecting to DB" dbtype=sqlite3
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=sqlstore t=2025-12-06T09:42:57.096034572Z level=warn msg="SQLite database file has broader permissions than it should" path=/var/lib/grafana/grafana.db mode=-rw-r--r-- expected=-rw-r-----
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.096915376Z level=info msg="Starting DB migrations"
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.098119488Z level=info msg="Executing migration" id="create migration_log table"
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.099325261Z level=info msg="Migration successfully executed" id="create migration_log table" duration=1.205483ms
Dec 06 09:42:57 compute-0 sudo[97807]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.1019407Z level=info msg="Executing migration" id="create user table"
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.102684521Z level=info msg="Migration successfully executed" id="create user table" duration=743.681µs
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.10454477Z level=info msg="Executing migration" id="add unique index user.login"
Dec 06 09:42:57 compute-0 sudo[97807]: pam_unix(sudo:session): session closed for user root
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.107591292Z level=info msg="Migration successfully executed" id="add unique index user.login" duration=3.039582ms
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.111067865Z level=info msg="Executing migration" id="add unique index user.email"
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.112458312Z level=info msg="Migration successfully executed" id="add unique index user.email" duration=1.395467ms
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.115079703Z level=info msg="Executing migration" id="drop index UQE_user_login - v1"
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.118620277Z level=info msg="Migration successfully executed" id="drop index UQE_user_login - v1" duration=3.538974ms
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.12059818Z level=info msg="Executing migration" id="drop index UQE_user_email - v1"
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.121213937Z level=info msg="Migration successfully executed" id="drop index UQE_user_email - v1" duration=615.827µs
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.125066411Z level=info msg="Executing migration" id="Rename table user to user_v1 - v1"
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.128348179Z level=info msg="Migration successfully executed" id="Rename table user to user_v1 - v1" duration=3.284098ms
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.130866326Z level=info msg="Executing migration" id="create user table v2"
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.13178245Z level=info msg="Migration successfully executed" id="create user table v2" duration=915.894µs
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.133902327Z level=info msg="Executing migration" id="create index UQE_user_login - v2"
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.134709949Z level=info msg="Migration successfully executed" id="create index UQE_user_login - v2" duration=805.922µs
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.136660151Z level=info msg="Executing migration" id="create index UQE_user_email - v2"
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.137463683Z level=info msg="Migration successfully executed" id="create index UQE_user_email - v2" duration=802.722µs
Dec 06 09:42:57 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e73 do_prune osdmap full prune enabled
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.140139295Z level=info msg="Executing migration" id="copy data_source v1 to v2"
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.140660948Z level=info msg="Migration successfully executed" id="copy data_source v1 to v2" duration=539.144µs
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.143420182Z level=info msg="Executing migration" id="Drop old table user_v1"
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.144941604Z level=info msg="Migration successfully executed" id="Drop old table user_v1" duration=1.521351ms
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.14741478Z level=info msg="Executing migration" id="Add column help_flags1 to user table"
Dec 06 09:42:57 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e74 e74: 3 total, 3 up, 3 in
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.149951178Z level=info msg="Migration successfully executed" id="Add column help_flags1 to user table" duration=2.535518ms
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.153079742Z level=info msg="Executing migration" id="Update user table charset"
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.153163614Z level=info msg="Migration successfully executed" id="Update user table charset" duration=86.382µs
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.156119513Z level=info msg="Executing migration" id="Add last_seen_at column to user"
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.158351093Z level=info msg="Migration successfully executed" id="Add last_seen_at column to user" duration=2.23061ms
Dec 06 09:42:57 compute-0 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e74: 3 total, 3 up, 3 in
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.161393295Z level=info msg="Executing migration" id="Add missing user data"
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.161939479Z level=info msg="Migration successfully executed" id="Add missing user data" duration=547.034µs
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.164730674Z level=info msg="Executing migration" id="Add is_disabled column to user"
Dec 06 09:42:57 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 74 pg[10.c( v 51'1027 (0'0,51'1027] local-lis/les=72/73 n=6 ec=58/45 lis/c=72/58 les/c/f=73/59/0 sis=74 pruub=14.971620560s) [2] async=[2] r=-1 lpr=74 pi=[58,74)/1 crt=51'1027 lcod 0'0 mlcod 0'0 active pruub 214.454605103s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 06 09:42:57 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 74 pg[10.c( v 51'1027 (0'0,51'1027] local-lis/les=72/73 n=6 ec=58/45 lis/c=72/58 les/c/f=73/59/0 sis=74 pruub=14.971525192s) [2] r=-1 lpr=74 pi=[58,74)/1 crt=51'1027 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 214.454605103s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.166351757Z level=info msg="Migration successfully executed" id="Add is_disabled column to user" duration=1.620473ms
Dec 06 09:42:57 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 74 pg[10.14( v 51'1027 (0'0,51'1027] local-lis/les=72/73 n=5 ec=58/45 lis/c=72/58 les/c/f=73/59/0 sis=74 pruub=14.966494560s) [2] async=[2] r=-1 lpr=74 pi=[58,74)/1 crt=51'1027 lcod 0'0 mlcod 0'0 active pruub 214.449996948s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 06 09:42:57 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 74 pg[10.14( v 51'1027 (0'0,51'1027] local-lis/les=72/73 n=5 ec=58/45 lis/c=72/58 les/c/f=73/59/0 sis=74 pruub=14.966411591s) [2] r=-1 lpr=74 pi=[58,74)/1 crt=51'1027 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 214.449996948s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 09:42:57 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 74 pg[10.4( v 73'1035 (0'0,73'1035] local-lis/les=72/73 n=6 ec=58/45 lis/c=72/58 les/c/f=73/59/0 sis=74 pruub=14.970807076s) [2] async=[2] r=-1 lpr=74 pi=[58,74)/1 crt=66'1034 lcod 66'1034 mlcod 66'1034 active pruub 214.454681396s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 06 09:42:57 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 74 pg[10.4( v 73'1035 (0'0,73'1035] local-lis/les=72/73 n=6 ec=58/45 lis/c=72/58 les/c/f=73/59/0 sis=74 pruub=14.970702171s) [2] r=-1 lpr=74 pi=[58,74)/1 crt=66'1034 lcod 66'1034 mlcod 0'0 unknown NOTIFY pruub 214.454681396s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 09:42:57 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 74 pg[10.1c( v 51'1027 (0'0,51'1027] local-lis/les=72/73 n=5 ec=58/45 lis/c=72/58 les/c/f=73/59/0 sis=74 pruub=14.969923019s) [2] async=[2] r=-1 lpr=74 pi=[58,74)/1 crt=51'1027 lcod 0'0 mlcod 0'0 active pruub 214.454483032s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 06 09:42:57 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 74 pg[10.1c( v 51'1027 (0'0,51'1027] local-lis/les=72/73 n=5 ec=58/45 lis/c=72/58 les/c/f=73/59/0 sis=74 pruub=14.969791412s) [2] r=-1 lpr=74 pi=[58,74)/1 crt=51'1027 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 214.454483032s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.168362761Z level=info msg="Executing migration" id="Add index user.login/user.email"
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.169613505Z level=info msg="Migration successfully executed" id="Add index user.login/user.email" duration=1.248044ms
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:42:57 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f92240016a0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.17169424Z level=info msg="Executing migration" id="Add is_service_account column to user"
Dec 06 09:42:57 compute-0 ceph-mon[74327]: 8.3 scrub starts
Dec 06 09:42:57 compute-0 ceph-mon[74327]: 9.e scrub starts
Dec 06 09:42:57 compute-0 ceph-mon[74327]: 8.3 scrub ok
Dec 06 09:42:57 compute-0 ceph-mon[74327]: 9.e scrub ok
Dec 06 09:42:57 compute-0 ceph-mon[74327]: 9.1b scrub starts
Dec 06 09:42:57 compute-0 ceph-mon[74327]: 9.1b scrub ok
Dec 06 09:42:57 compute-0 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:42:57 compute-0 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:42:57 compute-0 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:42:57 compute-0 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:42:57 compute-0 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.172961635Z level=info msg="Migration successfully executed" id="Add is_service_account column to user" duration=1.266975ms
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.174913328Z level=info msg="Executing migration" id="Update is_service_account column to nullable"
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.181603917Z level=info msg="Migration successfully executed" id="Update is_service_account column to nullable" duration=6.69036ms
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.183172428Z level=info msg="Executing migration" id="Add uid column to user"
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.184100144Z level=info msg="Migration successfully executed" id="Add uid column to user" duration=927.036µs
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.186067977Z level=info msg="Executing migration" id="Update uid column values for users"
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.186229971Z level=info msg="Migration successfully executed" id="Update uid column values for users" duration=161.974µs
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.187887545Z level=info msg="Executing migration" id="Add unique index user_uid"
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.18846563Z level=info msg="Migration successfully executed" id="Add unique index user_uid" duration=577.575µs
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.190860424Z level=info msg="Executing migration" id="create temp user table v1-7"
Dec 06 09:42:57 compute-0 sudo[97832]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/haproxy:2.3 --timeout 895 _orch deploy --fsid 5ecd3f74-dade-5fc4-92ce-8950ae424258
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.191597164Z level=info msg="Migration successfully executed" id="create temp user table v1-7" duration=619.497µs
Dec 06 09:42:57 compute-0 sudo[97832]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.194286826Z level=info msg="Executing migration" id="create index IDX_temp_user_email - v1-7"
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.195563541Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_email - v1-7" duration=1.280985ms
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.19814104Z level=info msg="Executing migration" id="create index IDX_temp_user_org_id - v1-7"
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.198849889Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_org_id - v1-7" duration=739.42µs
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.202010563Z level=info msg="Executing migration" id="create index IDX_temp_user_code - v1-7"
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.202669661Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_code - v1-7" duration=658.988µs
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.204898621Z level=info msg="Executing migration" id="create index IDX_temp_user_status - v1-7"
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.205906408Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_status - v1-7" duration=1.008057ms
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.208681813Z level=info msg="Executing migration" id="Update temp_user table charset"
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.208718694Z level=info msg="Migration successfully executed" id="Update temp_user table charset" duration=37.761µs
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.211041425Z level=info msg="Executing migration" id="drop index IDX_temp_user_email - v1"
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.211959971Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_email - v1" duration=918.866µs
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.213972884Z level=info msg="Executing migration" id="drop index IDX_temp_user_org_id - v1"
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.214611412Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_org_id - v1" duration=638.708µs
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.216576044Z level=info msg="Executing migration" id="drop index IDX_temp_user_code - v1"
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.217257123Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_code - v1" duration=681.359µs
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.219314857Z level=info msg="Executing migration" id="drop index IDX_temp_user_status - v1"
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.219987196Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_status - v1" duration=672.949µs
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.221940608Z level=info msg="Executing migration" id="Rename table temp_user to temp_user_tmp_qwerty - v1"
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.224749403Z level=info msg="Migration successfully executed" id="Rename table temp_user to temp_user_tmp_qwerty - v1" duration=2.808635ms
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.227102866Z level=info msg="Executing migration" id="create temp_user v2"
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.228659558Z level=info msg="Migration successfully executed" id="create temp_user v2" duration=1.558562ms
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.231270419Z level=info msg="Executing migration" id="create index IDX_temp_user_email - v2"
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.232336777Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_email - v2" duration=1.078049ms
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.234550856Z level=info msg="Executing migration" id="create index IDX_temp_user_org_id - v2"
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.236079107Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_org_id - v2" duration=1.537121ms
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.238467641Z level=info msg="Executing migration" id="create index IDX_temp_user_code - v2"
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.239365746Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_code - v2" duration=906.505µs
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.241355018Z level=info msg="Executing migration" id="create index IDX_temp_user_status - v2"
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.242227472Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_status - v2" duration=871.954µs
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.24510303Z level=info msg="Executing migration" id="copy temp_user v1 to v2"
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.245798838Z level=info msg="Migration successfully executed" id="copy temp_user v1 to v2" duration=695.979µs
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.247759611Z level=info msg="Executing migration" id="drop temp_user_tmp_qwerty"
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.248527481Z level=info msg="Migration successfully executed" id="drop temp_user_tmp_qwerty" duration=767.68µs
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.250876004Z level=info msg="Executing migration" id="Set created for temp users that will otherwise prematurely expire"
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.251408618Z level=info msg="Migration successfully executed" id="Set created for temp users that will otherwise prematurely expire" duration=532.164µs
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.253844544Z level=info msg="Executing migration" id="create star table"
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.25484166Z level=info msg="Migration successfully executed" id="create star table" duration=997.386µs
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.257014429Z level=info msg="Executing migration" id="add unique index star.user_id_dashboard_id"
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.257963804Z level=info msg="Migration successfully executed" id="add unique index star.user_id_dashboard_id" duration=948.345µs
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.260704557Z level=info msg="Executing migration" id="create org table v1"
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.261619281Z level=info msg="Migration successfully executed" id="create org table v1" duration=914.544µs
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.264224571Z level=info msg="Executing migration" id="create index UQE_org_name - v1"
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.265103386Z level=info msg="Migration successfully executed" id="create index UQE_org_name - v1" duration=878.265µs
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.267461619Z level=info msg="Executing migration" id="create org_user table v1"
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.26824087Z level=info msg="Migration successfully executed" id="create org_user table v1" duration=779.021µs
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.270392398Z level=info msg="Executing migration" id="create index IDX_org_user_org_id - v1"
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.271328522Z level=info msg="Migration successfully executed" id="create index IDX_org_user_org_id - v1" duration=936.225µs
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.273670605Z level=info msg="Executing migration" id="create index UQE_org_user_org_id_user_id - v1"
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.27461569Z level=info msg="Migration successfully executed" id="create index UQE_org_user_org_id_user_id - v1" duration=940.995µs
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.278407002Z level=info msg="Executing migration" id="create index IDX_org_user_user_id - v1"
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.279299966Z level=info msg="Migration successfully executed" id="create index IDX_org_user_user_id - v1" duration=892.774µs
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.281733472Z level=info msg="Executing migration" id="Update org table charset"
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.281762352Z level=info msg="Migration successfully executed" id="Update org table charset" duration=30.03µs
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.283716324Z level=info msg="Executing migration" id="Update org_user table charset"
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.283755095Z level=info msg="Migration successfully executed" id="Update org_user table charset" duration=40.141µs
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.285997095Z level=info msg="Executing migration" id="Migrate all Read Only Viewers to Viewers"
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.286238182Z level=info msg="Migration successfully executed" id="Migrate all Read Only Viewers to Viewers" duration=240.847µs
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.289219422Z level=info msg="Executing migration" id="create dashboard table"
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.290554248Z level=info msg="Migration successfully executed" id="create dashboard table" duration=1.335006ms
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.293259791Z level=info msg="Executing migration" id="add index dashboard.account_id"
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.29435753Z level=info msg="Migration successfully executed" id="add index dashboard.account_id" duration=1.097429ms
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.296817526Z level=info msg="Executing migration" id="add unique index dashboard_account_id_slug"
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.297895234Z level=info msg="Migration successfully executed" id="add unique index dashboard_account_id_slug" duration=1.077128ms
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.301515332Z level=info msg="Executing migration" id="create dashboard_tag table"
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.302339704Z level=info msg="Migration successfully executed" id="create dashboard_tag table" duration=813.712µs
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.304704577Z level=info msg="Executing migration" id="add unique index dashboard_tag.dasboard_id_term"
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.305648582Z level=info msg="Migration successfully executed" id="add unique index dashboard_tag.dasboard_id_term" duration=943.835µs
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.307950354Z level=info msg="Executing migration" id="drop index UQE_dashboard_tag_dashboard_id_term - v1"
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.308948951Z level=info msg="Migration successfully executed" id="drop index UQE_dashboard_tag_dashboard_id_term - v1" duration=1.010307ms
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.310701218Z level=info msg="Executing migration" id="Rename table dashboard to dashboard_v1 - v1"
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.317426548Z level=info msg="Migration successfully executed" id="Rename table dashboard to dashboard_v1 - v1" duration=6.72422ms
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.319414812Z level=info msg="Executing migration" id="create dashboard v2"
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.320325656Z level=info msg="Migration successfully executed" id="create dashboard v2" duration=910.274µs
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.322615687Z level=info msg="Executing migration" id="create index IDX_dashboard_org_id - v2"
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.323538733Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_org_id - v2" duration=922.496µs
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.32641934Z level=info msg="Executing migration" id="create index UQE_dashboard_org_id_slug - v2"
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.327365915Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_org_id_slug - v2" duration=945.915µs
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.3297798Z level=info msg="Executing migration" id="copy dashboard v1 to v2"
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.330219531Z level=info msg="Migration successfully executed" id="copy dashboard v1 to v2" duration=448.321µs
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.332941144Z level=info msg="Executing migration" id="drop table dashboard_v1"
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.334212118Z level=info msg="Migration successfully executed" id="drop table dashboard_v1" duration=1.270894ms
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.336864919Z level=info msg="Executing migration" id="alter dashboard.data to mediumtext v1"
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.337004803Z level=info msg="Migration successfully executed" id="alter dashboard.data to mediumtext v1" duration=95.003µs
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.339429278Z level=info msg="Executing migration" id="Add column updated_by in dashboard - v2"
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.342001807Z level=info msg="Migration successfully executed" id="Add column updated_by in dashboard - v2" duration=2.571929ms
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.343815676Z level=info msg="Executing migration" id="Add column created_by in dashboard - v2"
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.345778439Z level=info msg="Migration successfully executed" id="Add column created_by in dashboard - v2" duration=1.962163ms
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.348173392Z level=info msg="Executing migration" id="Add column gnetId in dashboard"
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.350105335Z level=info msg="Migration successfully executed" id="Add column gnetId in dashboard" duration=1.931923ms
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.352394866Z level=info msg="Executing migration" id="Add index for gnetId in dashboard"
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.353360862Z level=info msg="Migration successfully executed" id="Add index for gnetId in dashboard" duration=965.696µs
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.355399797Z level=info msg="Executing migration" id="Add column plugin_id in dashboard"
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.35740026Z level=info msg="Migration successfully executed" id="Add column plugin_id in dashboard" duration=1.999963ms
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.359247599Z level=info msg="Executing migration" id="Add index for plugin_id in dashboard"
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.360216756Z level=info msg="Migration successfully executed" id="Add index for plugin_id in dashboard" duration=968.467µs
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.362706233Z level=info msg="Executing migration" id="Add index for dashboard_id in dashboard_tag"
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.363686849Z level=info msg="Migration successfully executed" id="Add index for dashboard_id in dashboard_tag" duration=981.216µs
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.366190986Z level=info msg="Executing migration" id="Update dashboard table charset"
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.366231857Z level=info msg="Migration successfully executed" id="Update dashboard table charset" duration=39.031µs
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.36858332Z level=info msg="Executing migration" id="Update dashboard_tag table charset"
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.368624392Z level=info msg="Migration successfully executed" id="Update dashboard_tag table charset" duration=44.131µs
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.370595274Z level=info msg="Executing migration" id="Add column folder_id in dashboard"
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.372826984Z level=info msg="Migration successfully executed" id="Add column folder_id in dashboard" duration=2.23236ms
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.374891329Z level=info msg="Executing migration" id="Add column isFolder in dashboard"
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.376932194Z level=info msg="Migration successfully executed" id="Add column isFolder in dashboard" duration=2.040495ms
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.379228865Z level=info msg="Executing migration" id="Add column has_acl in dashboard"
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.381300281Z level=info msg="Migration successfully executed" id="Add column has_acl in dashboard" duration=2.068286ms
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.383576812Z level=info msg="Executing migration" id="Add column uid in dashboard"
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.38571603Z level=info msg="Migration successfully executed" id="Add column uid in dashboard" duration=2.138808ms
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.387896378Z level=info msg="Executing migration" id="Update uid column values in dashboard"
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.38833811Z level=info msg="Migration successfully executed" id="Update uid column values in dashboard" duration=441.982µs
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.390481587Z level=info msg="Executing migration" id="Add unique index dashboard_org_id_uid"
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.391405342Z level=info msg="Migration successfully executed" id="Add unique index dashboard_org_id_uid" duration=924.095µs
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.39393114Z level=info msg="Executing migration" id="Remove unique index org_id_slug"
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.394875255Z level=info msg="Migration successfully executed" id="Remove unique index org_id_slug" duration=943.815µs
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.39767276Z level=info msg="Executing migration" id="Update dashboard title length"
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.397699771Z level=info msg="Migration successfully executed" id="Update dashboard title length" duration=27.151µs
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.399790907Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_title_folder_id"
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.400736502Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_title_folder_id" duration=948.235µs
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.402471719Z level=info msg="Executing migration" id="create dashboard_provisioning"
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.403311291Z level=info msg="Migration successfully executed" id="create dashboard_provisioning" duration=840.212µs
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.405696035Z level=info msg="Executing migration" id="Rename table dashboard_provisioning to dashboard_provisioning_tmp_qwerty - v1"
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.411447459Z level=info msg="Migration successfully executed" id="Rename table dashboard_provisioning to dashboard_provisioning_tmp_qwerty - v1" duration=5.750674ms
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.413767592Z level=info msg="Executing migration" id="create dashboard_provisioning v2"
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.414611954Z level=info msg="Migration successfully executed" id="create dashboard_provisioning v2" duration=844.342µs
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.41705137Z level=info msg="Executing migration" id="create index IDX_dashboard_provisioning_dashboard_id - v2"
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.417975554Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_provisioning_dashboard_id - v2" duration=924.484µs
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.42078922Z level=info msg="Executing migration" id="create index IDX_dashboard_provisioning_dashboard_id_name - v2"
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.421707645Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_provisioning_dashboard_id_name - v2" duration=917.945µs
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.424367166Z level=info msg="Executing migration" id="copy dashboard_provisioning v1 to v2"
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.424844569Z level=info msg="Migration successfully executed" id="copy dashboard_provisioning v1 to v2" duration=477.053µs
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.426745219Z level=info msg="Executing migration" id="drop dashboard_provisioning_tmp_qwerty"
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.42750263Z level=info msg="Migration successfully executed" id="drop dashboard_provisioning_tmp_qwerty" duration=732.55µs
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.430830949Z level=info msg="Executing migration" id="Add check_sum column"
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.433060729Z level=info msg="Migration successfully executed" id="Add check_sum column" duration=2.22924ms
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.43609196Z level=info msg="Executing migration" id="Add index for dashboard_title"
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.437358514Z level=info msg="Migration successfully executed" id="Add index for dashboard_title" duration=1.267614ms
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.440222351Z level=info msg="Executing migration" id="delete tags for deleted dashboards"
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.440448467Z level=info msg="Migration successfully executed" id="delete tags for deleted dashboards" duration=226.526µs
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.442354698Z level=info msg="Executing migration" id="delete stars for deleted dashboards"
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.442572674Z level=info msg="Migration successfully executed" id="delete stars for deleted dashboards" duration=232.816µs
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.444346242Z level=info msg="Executing migration" id="Add index for dashboard_is_folder"
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.445336778Z level=info msg="Migration successfully executed" id="Add index for dashboard_is_folder" duration=989.936µs
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.448467162Z level=info msg="Executing migration" id="Add isPublic for dashboard"
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.451242146Z level=info msg="Migration successfully executed" id="Add isPublic for dashboard" duration=2.774474ms
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.453410935Z level=info msg="Executing migration" id="create data_source table"
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.454572225Z level=info msg="Migration successfully executed" id="create data_source table" duration=1.16125ms
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.45697152Z level=info msg="Executing migration" id="add index data_source.account_id"
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.457878305Z level=info msg="Migration successfully executed" id="add index data_source.account_id" duration=905.915µs
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.460184116Z level=info msg="Executing migration" id="add unique index data_source.account_id_name"
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.461112471Z level=info msg="Migration successfully executed" id="add unique index data_source.account_id_name" duration=930.945µs
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.463910686Z level=info msg="Executing migration" id="drop index IDX_data_source_account_id - v1"
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.464821761Z level=info msg="Migration successfully executed" id="drop index IDX_data_source_account_id - v1" duration=907.934µs
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.466812694Z level=info msg="Executing migration" id="drop index UQE_data_source_account_id_name - v1"
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.46780164Z level=info msg="Migration successfully executed" id="drop index UQE_data_source_account_id_name - v1" duration=987.696µs
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.46964891Z level=info msg="Executing migration" id="Rename table data_source to data_source_v1 - v1"
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.476099533Z level=info msg="Migration successfully executed" id="Rename table data_source to data_source_v1 - v1" duration=6.446153ms
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.478087416Z level=info msg="Executing migration" id="create data_source table v2"
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.479170675Z level=info msg="Migration successfully executed" id="create data_source table v2" duration=1.082769ms
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.481225511Z level=info msg="Executing migration" id="create index IDX_data_source_org_id - v2"
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.482232487Z level=info msg="Migration successfully executed" id="create index IDX_data_source_org_id - v2" duration=1.006597ms
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.484413146Z level=info msg="Executing migration" id="create index UQE_data_source_org_id_name - v2"
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.485800594Z level=info msg="Migration successfully executed" id="create index UQE_data_source_org_id_name - v2" duration=1.388348ms
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.489922934Z level=info msg="Executing migration" id="Drop old table data_source_v1 #2"
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.491197837Z level=info msg="Migration successfully executed" id="Drop old table data_source_v1 #2" duration=1.276303ms
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.493265413Z level=info msg="Executing migration" id="Add column with_credentials"
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.495942715Z level=info msg="Migration successfully executed" id="Add column with_credentials" duration=2.676532ms
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.498674618Z level=info msg="Executing migration" id="Add secure json data column"
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.50133254Z level=info msg="Migration successfully executed" id="Add secure json data column" duration=2.654351ms
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.504121114Z level=info msg="Executing migration" id="Update data_source table charset"
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.504154305Z level=info msg="Migration successfully executed" id="Update data_source table charset" duration=31.691µs
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.506135709Z level=info msg="Executing migration" id="Update initial version to 1"
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.506386505Z level=info msg="Migration successfully executed" id="Update initial version to 1" duration=251.696µs
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.508539493Z level=info msg="Executing migration" id="Add read_only data column"
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.511135203Z level=info msg="Migration successfully executed" id="Add read_only data column" duration=2.59459ms
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.513049623Z level=info msg="Executing migration" id="Migrate logging ds to loki ds"
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.513322431Z level=info msg="Migration successfully executed" id="Migrate logging ds to loki ds" duration=272.308µs
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.51552801Z level=info msg="Executing migration" id="Update json_data with nulls"
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.515854919Z level=info msg="Migration successfully executed" id="Update json_data with nulls" duration=334.519µs
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.51776128Z level=info msg="Executing migration" id="Add uid column"
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.519835756Z level=info msg="Migration successfully executed" id="Add uid column" duration=2.074486ms
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.522007314Z level=info msg="Executing migration" id="Update uid value"
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.52221645Z level=info msg="Migration successfully executed" id="Update uid value" duration=209.877µs
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.524199753Z level=info msg="Executing migration" id="Add unique index datasource_org_id_uid"
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.524998675Z level=info msg="Migration successfully executed" id="Add unique index datasource_org_id_uid" duration=798.912µs
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.527197183Z level=info msg="Executing migration" id="add unique index datasource_org_id_is_default"
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.527854971Z level=info msg="Migration successfully executed" id="add unique index datasource_org_id_is_default" duration=657.618µs
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.529976287Z level=info msg="Executing migration" id="create api_key table"
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.530673587Z level=info msg="Migration successfully executed" id="create api_key table" duration=696.78µs
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.534000956Z level=info msg="Executing migration" id="add index api_key.account_id"
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.534635003Z level=info msg="Migration successfully executed" id="add index api_key.account_id" duration=633.357µs
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.536845382Z level=info msg="Executing migration" id="add index api_key.key"
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.537473469Z level=info msg="Migration successfully executed" id="add index api_key.key" duration=627.747µs
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.539853913Z level=info msg="Executing migration" id="add index api_key.account_id_name"
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.540546221Z level=info msg="Migration successfully executed" id="add index api_key.account_id_name" duration=691.988µs
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.543012017Z level=info msg="Executing migration" id="drop index IDX_api_key_account_id - v1"
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.543674045Z level=info msg="Migration successfully executed" id="drop index IDX_api_key_account_id - v1" duration=661.828µs
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.545749301Z level=info msg="Executing migration" id="drop index UQE_api_key_key - v1"
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.546392038Z level=info msg="Migration successfully executed" id="drop index UQE_api_key_key - v1" duration=642.467µs
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.548610777Z level=info msg="Executing migration" id="drop index UQE_api_key_account_id_name - v1"
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.550339764Z level=info msg="Migration successfully executed" id="drop index UQE_api_key_account_id_name - v1" duration=1.733237ms
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.552931453Z level=info msg="Executing migration" id="Rename table api_key to api_key_v1 - v1"
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.562097509Z level=info msg="Migration successfully executed" id="Rename table api_key to api_key_v1 - v1" duration=9.165356ms
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.564674658Z level=info msg="Executing migration" id="create api_key table v2"
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.566046195Z level=info msg="Migration successfully executed" id="create api_key table v2" duration=1.369756ms
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.568673615Z level=info msg="Executing migration" id="create index IDX_api_key_org_id - v2"
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.570062843Z level=info msg="Migration successfully executed" id="create index IDX_api_key_org_id - v2" duration=1.389368ms
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.572325963Z level=info msg="Executing migration" id="create index UQE_api_key_key - v2"
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.573739961Z level=info msg="Migration successfully executed" id="create index UQE_api_key_key - v2" duration=1.412988ms
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.57591108Z level=info msg="Executing migration" id="create index UQE_api_key_org_id_name - v2"
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.577258565Z level=info msg="Migration successfully executed" id="create index UQE_api_key_org_id_name - v2" duration=1.344135ms
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.581122279Z level=info msg="Executing migration" id="copy api_key v1 to v2"
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.581828638Z level=info msg="Migration successfully executed" id="copy api_key v1 to v2" duration=705.609µs
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.584152741Z level=info msg="Executing migration" id="Drop old table api_key_v1"
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.585322212Z level=info msg="Migration successfully executed" id="Drop old table api_key_v1" duration=1.169001ms
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.588055585Z level=info msg="Executing migration" id="Update api_key table charset"
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.588122757Z level=info msg="Migration successfully executed" id="Update api_key table charset" duration=68.412µs
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.590573503Z level=info msg="Executing migration" id="Add expires to api_key table"
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.594916688Z level=info msg="Migration successfully executed" id="Add expires to api_key table" duration=4.342576ms
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.596887742Z level=info msg="Executing migration" id="Add service account foreign key"
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.6012929Z level=info msg="Migration successfully executed" id="Add service account foreign key" duration=4.404179ms
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.603750096Z level=info msg="Executing migration" id="set service account foreign key to nil if 0"
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.604069595Z level=info msg="Migration successfully executed" id="set service account foreign key to nil if 0" duration=318.618µs
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.606710386Z level=info msg="Executing migration" id="Add last_used_at to api_key table"
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.612020208Z level=info msg="Migration successfully executed" id="Add last_used_at to api_key table" duration=5.308433ms
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.614981957Z level=info msg="Executing migration" id="Add is_revoked column to api_key table"
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.619880278Z level=info msg="Migration successfully executed" id="Add is_revoked column to api_key table" duration=4.897441ms
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.623165467Z level=info msg="Executing migration" id="create dashboard_snapshot table v4"
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.62479754Z level=info msg="Migration successfully executed" id="create dashboard_snapshot table v4" duration=1.630283ms
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.627349068Z level=info msg="Executing migration" id="drop table dashboard_snapshot_v4 #1"
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.628614642Z level=info msg="Migration successfully executed" id="drop table dashboard_snapshot_v4 #1" duration=1.264844ms
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.631288484Z level=info msg="Executing migration" id="create dashboard_snapshot table v5 #2"
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.63301452Z level=info msg="Migration successfully executed" id="create dashboard_snapshot table v5 #2" duration=1.724626ms
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.635260571Z level=info msg="Executing migration" id="create index UQE_dashboard_snapshot_key - v5"
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.637366858Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_snapshot_key - v5" duration=2.103096ms
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.653993903Z level=info msg="Executing migration" id="create index UQE_dashboard_snapshot_delete_key - v5"
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.656152511Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_snapshot_delete_key - v5" duration=2.160167ms
Dec 06 09:42:57 compute-0 podman[97899]: 2025-12-06 09:42:57.623151516 +0000 UTC m=+0.031425944 image pull e85424b0d443f37ddd2dd8a3bb2ef6f18dd352b987723a921b64289023af2914 quay.io/ceph/haproxy:2.3
Dec 06 09:42:57 compute-0 podman[97899]: 2025-12-06 09:42:57.924824495 +0000 UTC m=+0.333098863 container create a170ea3136f56c65dc5ab6c0b08440e101d37dbdd5d5ad066502761e6c62b20e (image=quay.io/ceph/haproxy:2.3, name=serene_bhabha)
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.926381717Z level=info msg="Executing migration" id="create index IDX_dashboard_snapshot_user_id - v5"
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.930104027Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_snapshot_user_id - v5" duration=3.723829ms
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.936853018Z level=info msg="Executing migration" id="alter dashboard_snapshot to mediumtext v2"
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.937012542Z level=info msg="Migration successfully executed" id="alter dashboard_snapshot to mediumtext v2" duration=157.544µs
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.940164697Z level=info msg="Executing migration" id="Update dashboard_snapshot table charset"
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.940233938Z level=info msg="Migration successfully executed" id="Update dashboard_snapshot table charset" duration=61.901µs
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.94365511Z level=info msg="Executing migration" id="Add column external_delete_url to dashboard_snapshots table"
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.948063189Z level=info msg="Migration successfully executed" id="Add column external_delete_url to dashboard_snapshots table" duration=4.408929ms
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.951203833Z level=info msg="Executing migration" id="Add encrypted dashboard json column"
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.954513181Z level=info msg="Migration successfully executed" id="Add encrypted dashboard json column" duration=3.308468ms
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.96714544Z level=info msg="Executing migration" id="Change dashboard_encrypted column to MEDIUMBLOB"
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.967689495Z level=info msg="Migration successfully executed" id="Change dashboard_encrypted column to MEDIUMBLOB" duration=547.766µs
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.969966245Z level=info msg="Executing migration" id="create quota table v1"
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.971338852Z level=info msg="Migration successfully executed" id="create quota table v1" duration=1.371997ms
Dec 06 09:42:57 compute-0 systemd[1]: Started libpod-conmon-a170ea3136f56c65dc5ab6c0b08440e101d37dbdd5d5ad066502761e6c62b20e.scope.
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.975412111Z level=info msg="Executing migration" id="create index UQE_quota_org_id_user_id_target - v1"
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.976656495Z level=info msg="Migration successfully executed" id="create index UQE_quota_org_id_user_id_target - v1" duration=1.248554ms
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.979348677Z level=info msg="Executing migration" id="Update quota table charset"
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.979373508Z level=info msg="Migration successfully executed" id="Update quota table charset" duration=25.001µs
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.981460263Z level=info msg="Executing migration" id="create plugin_setting table"
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.98244076Z level=info msg="Migration successfully executed" id="create plugin_setting table" duration=979.747µs
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.985183904Z level=info msg="Executing migration" id="create index UQE_plugin_setting_org_id_plugin_id - v1"
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.986235782Z level=info msg="Migration successfully executed" id="create index UQE_plugin_setting_org_id_plugin_id - v1" duration=1.052437ms
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.991191315Z level=info msg="Executing migration" id="Add column plugin_version to plugin_settings"
Dec 06 09:42:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:57.996932779Z level=info msg="Migration successfully executed" id="Add column plugin_version to plugin_settings" duration=5.744154ms
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.001721777Z level=info msg="Executing migration" id="Update plugin_setting table charset"
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.001769729Z level=info msg="Migration successfully executed" id="Update plugin_setting table charset" duration=48.822µs
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.004699787Z level=info msg="Executing migration" id="create session table"
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.006643298Z level=info msg="Migration successfully executed" id="create session table" duration=1.939771ms
Dec 06 09:42:58 compute-0 systemd[1]: Started libcrun container.
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.01155712Z level=info msg="Executing migration" id="Drop old table playlist table"
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.01190941Z level=info msg="Migration successfully executed" id="Drop old table playlist table" duration=351.96µs
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.014592292Z level=info msg="Executing migration" id="Drop old table playlist_item table"
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.014780667Z level=info msg="Migration successfully executed" id="Drop old table playlist_item table" duration=186.245µs
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.017654494Z level=info msg="Executing migration" id="create playlist table v2"
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.019339109Z level=info msg="Migration successfully executed" id="create playlist table v2" duration=1.684655ms
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.022844513Z level=info msg="Executing migration" id="create playlist item table v2"
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.024437646Z level=info msg="Migration successfully executed" id="create playlist item table v2" duration=1.593153ms
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.028788142Z level=info msg="Executing migration" id="Update playlist table charset"
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.028830383Z level=info msg="Migration successfully executed" id="Update playlist table charset" duration=43.551µs
Dec 06 09:42:58 compute-0 podman[97899]: 2025-12-06 09:42:58.030315724 +0000 UTC m=+0.438590132 container init a170ea3136f56c65dc5ab6c0b08440e101d37dbdd5d5ad066502761e6c62b20e (image=quay.io/ceph/haproxy:2.3, name=serene_bhabha)
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.03130977Z level=info msg="Executing migration" id="Update playlist_item table charset"
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.031353681Z level=info msg="Migration successfully executed" id="Update playlist_item table charset" duration=45.091µs
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.033812567Z level=info msg="Executing migration" id="Add playlist column created_at"
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:42:58 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9230003870 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:42:58 compute-0 ceph-osd[82803]: log_channel(cluster) log [DBG] : 9.a scrub starts
Dec 06 09:42:58 compute-0 podman[97899]: 2025-12-06 09:42:58.039357216 +0000 UTC m=+0.447631554 container start a170ea3136f56c65dc5ab6c0b08440e101d37dbdd5d5ad066502761e6c62b20e (image=quay.io/ceph/haproxy:2.3, name=serene_bhabha)
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.040006533Z level=info msg="Migration successfully executed" id="Add playlist column created_at" duration=6.191076ms
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.044110864Z level=info msg="Executing migration" id="Add playlist column updated_at"
Dec 06 09:42:58 compute-0 podman[97899]: 2025-12-06 09:42:58.045519851 +0000 UTC m=+0.453794279 container attach a170ea3136f56c65dc5ab6c0b08440e101d37dbdd5d5ad066502761e6c62b20e (image=quay.io/ceph/haproxy:2.3, name=serene_bhabha)
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.047411042Z level=info msg="Migration successfully executed" id="Add playlist column updated_at" duration=3.300088ms
Dec 06 09:42:58 compute-0 serene_bhabha[97916]: 0 0
Dec 06 09:42:58 compute-0 systemd[1]: libpod-a170ea3136f56c65dc5ab6c0b08440e101d37dbdd5d5ad066502761e6c62b20e.scope: Deactivated successfully.
Dec 06 09:42:58 compute-0 podman[97899]: 2025-12-06 09:42:58.04917962 +0000 UTC m=+0.457453948 container died a170ea3136f56c65dc5ab6c0b08440e101d37dbdd5d5ad066502761e6c62b20e (image=quay.io/ceph/haproxy:2.3, name=serene_bhabha)
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.049462187Z level=info msg="Executing migration" id="drop preferences table v2"
Dec 06 09:42:58 compute-0 ceph-osd[82803]: log_channel(cluster) log [DBG] : 9.a scrub ok
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.049610421Z level=info msg="Migration successfully executed" id="drop preferences table v2" duration=149.944µs
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.051854962Z level=info msg="Executing migration" id="drop preferences table v3"
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.051966995Z level=info msg="Migration successfully executed" id="drop preferences table v3" duration=105.702µs
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.054352168Z level=info msg="Executing migration" id="create preferences table v3"
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.055278353Z level=info msg="Migration successfully executed" id="create preferences table v3" duration=926.655µs
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.057736749Z level=info msg="Executing migration" id="Update preferences table charset"
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.05776355Z level=info msg="Migration successfully executed" id="Update preferences table charset" duration=28.281µs
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.060532364Z level=info msg="Executing migration" id="Add column team_id in preferences"
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.063906004Z level=info msg="Migration successfully executed" id="Add column team_id in preferences" duration=3.3561ms
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.067568132Z level=info msg="Executing migration" id="Update team_id column values in preferences"
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.06784867Z level=info msg="Migration successfully executed" id="Update team_id column values in preferences" duration=277.868µs
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.070734307Z level=info msg="Executing migration" id="Add column week_start in preferences"
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.074367655Z level=info msg="Migration successfully executed" id="Add column week_start in preferences" duration=3.634188ms
Dec 06 09:42:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-1397aec040c9da5e5432c37bf5af7406d5f723d1f473b391beb1d7118b9381ef-merged.mount: Deactivated successfully.
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.076178313Z level=info msg="Executing migration" id="Add column preferences.json_data"
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.079358978Z level=info msg="Migration successfully executed" id="Add column preferences.json_data" duration=3.180825ms
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.08128909Z level=info msg="Executing migration" id="alter preferences.json_data to mediumtext v1"
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.081429094Z level=info msg="Migration successfully executed" id="alter preferences.json_data to mediumtext v1" duration=138.294µs
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.084096586Z level=info msg="Executing migration" id="Add preferences index org_id"
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.085282498Z level=info msg="Migration successfully executed" id="Add preferences index org_id" duration=1.184072ms
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.088196106Z level=info msg="Executing migration" id="Add preferences index user_id"
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.089273295Z level=info msg="Migration successfully executed" id="Add preferences index user_id" duration=1.078218ms
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.093559009Z level=info msg="Executing migration" id="create alert table v1"
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.095261935Z level=info msg="Migration successfully executed" id="create alert table v1" duration=1.702516ms
Dec 06 09:42:58 compute-0 podman[97899]: 2025-12-06 09:42:58.095930473 +0000 UTC m=+0.504204811 container remove a170ea3136f56c65dc5ab6c0b08440e101d37dbdd5d5ad066502761e6c62b20e (image=quay.io/ceph/haproxy:2.3, name=serene_bhabha)
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.097861825Z level=info msg="Executing migration" id="add index alert org_id & id "
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.098989075Z level=info msg="Migration successfully executed" id="add index alert org_id & id " duration=1.12588ms
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.101371349Z level=info msg="Executing migration" id="add index alert state"
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.102313135Z level=info msg="Migration successfully executed" id="add index alert state" duration=941.496µs
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.104735479Z level=info msg="Executing migration" id="add index alert dashboard_id"
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.105750796Z level=info msg="Migration successfully executed" id="add index alert dashboard_id" duration=1.015107ms
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.109280591Z level=info msg="Executing migration" id="Create alert_rule_tag table v1"
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.110801442Z level=info msg="Migration successfully executed" id="Create alert_rule_tag table v1" duration=1.530531ms
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.114513221Z level=info msg="Executing migration" id="Add unique index alert_rule_tag.alert_id_tag_id"
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.115392965Z level=info msg="Migration successfully executed" id="Add unique index alert_rule_tag.alert_id_tag_id" duration=880.044µs
Dec 06 09:42:58 compute-0 systemd[1]: libpod-conmon-a170ea3136f56c65dc5ab6c0b08440e101d37dbdd5d5ad066502761e6c62b20e.scope: Deactivated successfully.
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.11891776Z level=info msg="Executing migration" id="drop index UQE_alert_rule_tag_alert_id_tag_id - v1"
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.119741592Z level=info msg="Migration successfully executed" id="drop index UQE_alert_rule_tag_alert_id_tag_id - v1" duration=824.082µs
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.121597001Z level=info msg="Executing migration" id="Rename table alert_rule_tag to alert_rule_tag_v1 - v1"
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.129160354Z level=info msg="Migration successfully executed" id="Rename table alert_rule_tag to alert_rule_tag_v1 - v1" duration=7.561613ms
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.130732856Z level=info msg="Executing migration" id="Create alert_rule_tag table v2"
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.131330952Z level=info msg="Migration successfully executed" id="Create alert_rule_tag table v2" duration=597.686µs
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.132976977Z level=info msg="Executing migration" id="create index UQE_alert_rule_tag_alert_id_tag_id - Add unique index alert_rule_tag.alert_id_tag_id V2"
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.133643845Z level=info msg="Migration successfully executed" id="create index UQE_alert_rule_tag_alert_id_tag_id - Add unique index alert_rule_tag.alert_id_tag_id V2" duration=669.827µs
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.136940223Z level=info msg="Executing migration" id="copy alert_rule_tag v1 to v2"
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.13720623Z level=info msg="Migration successfully executed" id="copy alert_rule_tag v1 to v2" duration=266.347µs
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.138850864Z level=info msg="Executing migration" id="drop table alert_rule_tag_v1"
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.139399669Z level=info msg="Migration successfully executed" id="drop table alert_rule_tag_v1" duration=545.335µs
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.141196426Z level=info msg="Executing migration" id="create alert_notification table v1"
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.141859265Z level=info msg="Migration successfully executed" id="create alert_notification table v1" duration=661.659µs
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.144013732Z level=info msg="Executing migration" id="Add column is_default"
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.148369219Z level=info msg="Migration successfully executed" id="Add column is_default" duration=4.353887ms
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.150764023Z level=info msg="Executing migration" id="Add column frequency"
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.15438794Z level=info msg="Migration successfully executed" id="Add column frequency" duration=3.626957ms
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.156469876Z level=info msg="Executing migration" id="Add column send_reminder"
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.159384404Z level=info msg="Migration successfully executed" id="Add column send_reminder" duration=2.914058ms
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.161507312Z level=info msg="Executing migration" id="Add column disable_resolve_message"
Dec 06 09:42:58 compute-0 systemd[1]: Reloading.
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.223870594Z level=info msg="Migration successfully executed" id="Add column disable_resolve_message" duration=62.350871ms
Dec 06 09:42:58 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e74 do_prune osdmap full prune enabled
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.227390928Z level=info msg="Executing migration" id="add index alert_notification org_id & name"
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.2285796Z level=info msg="Migration successfully executed" id="add index alert_notification org_id & name" duration=1.192742ms
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.231099308Z level=info msg="Executing migration" id="Update alert table charset"
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.231129789Z level=info msg="Migration successfully executed" id="Update alert table charset" duration=31.551µs
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.233176713Z level=info msg="Executing migration" id="Update alert_notification table charset"
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.233200383Z level=info msg="Migration successfully executed" id="Update alert_notification table charset" duration=25.55µs
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.234810017Z level=info msg="Executing migration" id="create notification_journal table v1"
Dec 06 09:42:58 compute-0 ceph-mon[74327]: Deploying daemon haproxy.rgw.default.compute-0.vhqyer on compute-0
Dec 06 09:42:58 compute-0 ceph-mon[74327]: pgmap v82: 337 pgs: 4 unknown, 2 peering, 331 active+clean; 457 KiB data, 125 MiB used, 60 GiB / 60 GiB avail
Dec 06 09:42:58 compute-0 ceph-mon[74327]: 8.19 scrub starts
Dec 06 09:42:58 compute-0 ceph-mon[74327]: 8.19 scrub ok
Dec 06 09:42:58 compute-0 ceph-mon[74327]: 7.1d scrub starts
Dec 06 09:42:58 compute-0 ceph-mon[74327]: 7.1d scrub ok
Dec 06 09:42:58 compute-0 ceph-mon[74327]: osdmap e74: 3 total, 3 up, 3 in
Dec 06 09:42:58 compute-0 ceph-mon[74327]: 8.1e deep-scrub starts
Dec 06 09:42:58 compute-0 ceph-mon[74327]: 8.1e deep-scrub ok
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.235765952Z level=info msg="Migration successfully executed" id="create notification_journal table v1" duration=955.455µs
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.23938684Z level=info msg="Executing migration" id="add index notification_journal org_id & alert_id & notifier_id"
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.240450058Z level=info msg="Migration successfully executed" id="add index notification_journal org_id & alert_id & notifier_id" duration=1.064758ms
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.243591512Z level=info msg="Executing migration" id="drop alert_notification_journal"
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.245025711Z level=info msg="Migration successfully executed" id="drop alert_notification_journal" duration=1.436679ms
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.247465866Z level=info msg="Executing migration" id="create alert_notification_state table v1"
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.248270568Z level=info msg="Migration successfully executed" id="create alert_notification_state table v1" duration=803.782µs
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.251664269Z level=info msg="Executing migration" id="add index alert_notification_state org_id & alert_id & notifier_id"
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.252708147Z level=info msg="Migration successfully executed" id="add index alert_notification_state org_id & alert_id & notifier_id" duration=1.042898ms
Dec 06 09:42:58 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e75 e75: 3 total, 3 up, 3 in
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.257116385Z level=info msg="Executing migration" id="Add for to alert table"
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.261520653Z level=info msg="Migration successfully executed" id="Add for to alert table" duration=4.397887ms
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.263903057Z level=info msg="Executing migration" id="Add column uid in alert_notification"
Dec 06 09:42:58 compute-0 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e75: 3 total, 3 up, 3 in
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.267108693Z level=info msg="Migration successfully executed" id="Add column uid in alert_notification" duration=3.203195ms
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.268988613Z level=info msg="Executing migration" id="Update uid column values in alert_notification"
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.269163868Z level=info msg="Migration successfully executed" id="Update uid column values in alert_notification" duration=175.405µs
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.27109062Z level=info msg="Executing migration" id="Add unique index alert_notification_org_id_uid"
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.27185155Z level=info msg="Migration successfully executed" id="Add unique index alert_notification_org_id_uid" duration=762.93µs
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.273868514Z level=info msg="Executing migration" id="Remove unique index org_id_name"
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.274687826Z level=info msg="Migration successfully executed" id="Remove unique index org_id_name" duration=818.842µs
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.276220618Z level=info msg="Executing migration" id="Add column secure_settings in alert_notification"
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.279321291Z level=info msg="Migration successfully executed" id="Add column secure_settings in alert_notification" duration=3.100523ms
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.282408783Z level=info msg="Executing migration" id="alter alert.settings to mediumtext"
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.282622129Z level=info msg="Migration successfully executed" id="alter alert.settings to mediumtext" duration=219.556µs
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.284745646Z level=info msg="Executing migration" id="Add non-unique index alert_notification_state_alert_id"
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.285881936Z level=info msg="Migration successfully executed" id="Add non-unique index alert_notification_state_alert_id" duration=1.13634ms
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.287649034Z level=info msg="Executing migration" id="Add non-unique index alert_rule_tag_alert_id"
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.288439135Z level=info msg="Migration successfully executed" id="Add non-unique index alert_rule_tag_alert_id" duration=793.621µs
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.291004944Z level=info msg="Executing migration" id="Drop old annotation table v4"
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.291085496Z level=info msg="Migration successfully executed" id="Drop old annotation table v4" duration=80.992µs
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.293592173Z level=info msg="Executing migration" id="create annotation table v5"
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.294406465Z level=info msg="Migration successfully executed" id="create annotation table v5" duration=814.162µs
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.296747078Z level=info msg="Executing migration" id="add index annotation 0 v3"
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.297426556Z level=info msg="Migration successfully executed" id="add index annotation 0 v3" duration=678.918µs
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.300090147Z level=info msg="Executing migration" id="add index annotation 1 v3"
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.300760955Z level=info msg="Migration successfully executed" id="add index annotation 1 v3" duration=670.978µs
Dec 06 09:42:58 compute-0 systemd-rc-local-generator[97965]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 09:42:58 compute-0 systemd-sysv-generator[97969]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.303669893Z level=info msg="Executing migration" id="add index annotation 2 v3"
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.304807664Z level=info msg="Migration successfully executed" id="add index annotation 2 v3" duration=1.137391ms
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.307671061Z level=info msg="Executing migration" id="add index annotation 3 v3"
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.308451871Z level=info msg="Migration successfully executed" id="add index annotation 3 v3" duration=780.79µs
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.31061592Z level=info msg="Executing migration" id="add index annotation 4 v3"
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.31137597Z level=info msg="Migration successfully executed" id="add index annotation 4 v3" duration=756.29µs
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.313544178Z level=info msg="Executing migration" id="Update annotation table charset"
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.313570509Z level=info msg="Migration successfully executed" id="Update annotation table charset" duration=27.751µs
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.31507853Z level=info msg="Executing migration" id="Add column region_id to annotation table"
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.318291116Z level=info msg="Migration successfully executed" id="Add column region_id to annotation table" duration=3.211286ms
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.320440493Z level=info msg="Executing migration" id="Drop category_id index"
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.321235485Z level=info msg="Migration successfully executed" id="Drop category_id index" duration=789.791µs
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.323113284Z level=info msg="Executing migration" id="Add column tags to annotation table"
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.325963491Z level=info msg="Migration successfully executed" id="Add column tags to annotation table" duration=2.846307ms
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.327472942Z level=info msg="Executing migration" id="Create annotation_tag table v2"
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.328043777Z level=info msg="Migration successfully executed" id="Create annotation_tag table v2" duration=570.225µs
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.32964684Z level=info msg="Executing migration" id="Add unique index annotation_tag.annotation_id_tag_id"
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.330472782Z level=info msg="Migration successfully executed" id="Add unique index annotation_tag.annotation_id_tag_id" duration=827.682µs
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.332429155Z level=info msg="Executing migration" id="drop index UQE_annotation_tag_annotation_id_tag_id - v2"
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.333170594Z level=info msg="Migration successfully executed" id="drop index UQE_annotation_tag_annotation_id_tag_id - v2" duration=741.479µs
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.334971123Z level=info msg="Executing migration" id="Rename table annotation_tag to annotation_tag_v2 - v2"
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.344007745Z level=info msg="Migration successfully executed" id="Rename table annotation_tag to annotation_tag_v2 - v2" duration=9.031242ms
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.345811624Z level=info msg="Executing migration" id="Create annotation_tag table v3"
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.346498652Z level=info msg="Migration successfully executed" id="Create annotation_tag table v3" duration=662.167µs
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.348071274Z level=info msg="Executing migration" id="create index UQE_annotation_tag_annotation_id_tag_id - Add unique index annotation_tag.annotation_id_tag_id V3"
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.348793634Z level=info msg="Migration successfully executed" id="create index UQE_annotation_tag_annotation_id_tag_id - Add unique index annotation_tag.annotation_id_tag_id V3" duration=722.08µs
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.351755163Z level=info msg="Executing migration" id="copy annotation_tag v2 to v3"
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.35202095Z level=info msg="Migration successfully executed" id="copy annotation_tag v2 to v3" duration=265.997µs
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.353972712Z level=info msg="Executing migration" id="drop table annotation_tag_v2"
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.354653251Z level=info msg="Migration successfully executed" id="drop table annotation_tag_v2" duration=680.419µs
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.35649734Z level=info msg="Executing migration" id="Update alert annotations and set TEXT to empty"
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.356668424Z level=info msg="Migration successfully executed" id="Update alert annotations and set TEXT to empty" duration=171.215µs
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.359388857Z level=info msg="Executing migration" id="Add created time to annotation table"
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.362630835Z level=info msg="Migration successfully executed" id="Add created time to annotation table" duration=3.243518ms
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.364648068Z level=info msg="Executing migration" id="Add updated time to annotation table"
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.367569196Z level=info msg="Migration successfully executed" id="Add updated time to annotation table" duration=2.923128ms
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.369605781Z level=info msg="Executing migration" id="Add index for created in annotation table"
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.370377052Z level=info msg="Migration successfully executed" id="Add index for created in annotation table" duration=771.231µs
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.372350435Z level=info msg="Executing migration" id="Add index for updated in annotation table"
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.373021943Z level=info msg="Migration successfully executed" id="Add index for updated in annotation table" duration=716.569µs
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.375477749Z level=info msg="Executing migration" id="Convert existing annotations from seconds to milliseconds"
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.375687844Z level=info msg="Migration successfully executed" id="Convert existing annotations from seconds to milliseconds" duration=216.075µs
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.377449672Z level=info msg="Executing migration" id="Add epoch_end column"
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.380569195Z level=info msg="Migration successfully executed" id="Add epoch_end column" duration=3.117843ms
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.382340863Z level=info msg="Executing migration" id="Add index for epoch_end"
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.383129284Z level=info msg="Migration successfully executed" id="Add index for epoch_end" duration=787.861µs
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.385322763Z level=info msg="Executing migration" id="Make epoch_end the same as epoch"
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.385506128Z level=info msg="Migration successfully executed" id="Make epoch_end the same as epoch" duration=183.715µs
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.388283582Z level=info msg="Executing migration" id="Move region to single row"
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.388603441Z level=info msg="Migration successfully executed" id="Move region to single row" duration=320.529µs
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.390585033Z level=info msg="Executing migration" id="Remove index org_id_epoch from annotation table"
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.391380065Z level=info msg="Migration successfully executed" id="Remove index org_id_epoch from annotation table" duration=794.032µs
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.393125152Z level=info msg="Executing migration" id="Remove index org_id_dashboard_id_panel_id_epoch from annotation table"
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.393920594Z level=info msg="Migration successfully executed" id="Remove index org_id_dashboard_id_panel_id_epoch from annotation table" duration=798.963µs
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.399714849Z level=info msg="Executing migration" id="Add index for org_id_dashboard_id_epoch_end_epoch on annotation table"
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.400588212Z level=info msg="Migration successfully executed" id="Add index for org_id_dashboard_id_epoch_end_epoch on annotation table" duration=873.613µs
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.40238227Z level=info msg="Executing migration" id="Add index for org_id_epoch_end_epoch on annotation table"
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.403077909Z level=info msg="Migration successfully executed" id="Add index for org_id_epoch_end_epoch on annotation table" duration=695.579µs
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.40572268Z level=info msg="Executing migration" id="Remove index org_id_epoch_epoch_end from annotation table"
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.406526431Z level=info msg="Migration successfully executed" id="Remove index org_id_epoch_epoch_end from annotation table" duration=803.031µs
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.408836293Z level=info msg="Executing migration" id="Add index for alert_id on annotation table"
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.409707686Z level=info msg="Migration successfully executed" id="Add index for alert_id on annotation table" duration=871.133µs
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.411350091Z level=info msg="Executing migration" id="Increase tags column to length 4096"
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.411400182Z level=info msg="Migration successfully executed" id="Increase tags column to length 4096" duration=48.451µs
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.413879719Z level=info msg="Executing migration" id="create test_data table"
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.414608168Z level=info msg="Migration successfully executed" id="create test_data table" duration=727.809µs
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.416668863Z level=info msg="Executing migration" id="create dashboard_version table v1"
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.41728757Z level=info msg="Migration successfully executed" id="create dashboard_version table v1" duration=618.767µs
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.420227879Z level=info msg="Executing migration" id="add index dashboard_version.dashboard_id"
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.420896676Z level=info msg="Migration successfully executed" id="add index dashboard_version.dashboard_id" duration=668.767µs
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.423385763Z level=info msg="Executing migration" id="add unique index dashboard_version.dashboard_id and dashboard_version.version"
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.424096542Z level=info msg="Migration successfully executed" id="add unique index dashboard_version.dashboard_id and dashboard_version.version" duration=710.229µs
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.427290089Z level=info msg="Executing migration" id="Set dashboard version to 1 where 0"
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.427507854Z level=info msg="Migration successfully executed" id="Set dashboard version to 1 where 0" duration=219.896µs
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.430526855Z level=info msg="Executing migration" id="save existing dashboard data in dashboard_version table v1"
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.430914415Z level=info msg="Migration successfully executed" id="save existing dashboard data in dashboard_version table v1" duration=389.73µs
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.434548692Z level=info msg="Executing migration" id="alter dashboard_version.data to mediumtext v1"
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.434606834Z level=info msg="Migration successfully executed" id="alter dashboard_version.data to mediumtext v1" duration=58.342µs
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.436342411Z level=info msg="Executing migration" id="create team table"
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.437111691Z level=info msg="Migration successfully executed" id="create team table" duration=768.87µs
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.439323691Z level=info msg="Executing migration" id="add index team.org_id"
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.440192584Z level=info msg="Migration successfully executed" id="add index team.org_id" duration=865.323µs
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.442984389Z level=info msg="Executing migration" id="add unique index team_org_id_name"
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.443984266Z level=info msg="Migration successfully executed" id="add unique index team_org_id_name" duration=999.897µs
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.446290817Z level=info msg="Executing migration" id="Add column uid in team"
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.449564596Z level=info msg="Migration successfully executed" id="Add column uid in team" duration=3.273839ms
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.451589739Z level=info msg="Executing migration" id="Update uid column values in team"
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.451730153Z level=info msg="Migration successfully executed" id="Update uid column values in team" duration=139.594µs
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.454327303Z level=info msg="Executing migration" id="Add unique index team_org_id_uid"
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.455031762Z level=info msg="Migration successfully executed" id="Add unique index team_org_id_uid" duration=707.059µs
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.457140118Z level=info msg="Executing migration" id="create team member table"
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.457774775Z level=info msg="Migration successfully executed" id="create team member table" duration=632.077µs
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.460027426Z level=info msg="Executing migration" id="add index team_member.org_id"
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.460722585Z level=info msg="Migration successfully executed" id="add index team_member.org_id" duration=694.139µs
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.463648853Z level=info msg="Executing migration" id="add unique index team_member_org_id_team_id_user_id"
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.470758994Z level=info msg="Migration successfully executed" id="add unique index team_member_org_id_team_id_user_id" duration=7.10911ms
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.473947459Z level=info msg="Executing migration" id="add index team_member.team_id"
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.475229944Z level=info msg="Migration successfully executed" id="add index team_member.team_id" duration=1.286685ms
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.477810253Z level=info msg="Executing migration" id="Add column email to team table"
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[97259]: ts=2025-12-06T09:42:58.480Z caller=cluster.go:698 level=info component=cluster msg="gossip settled; proceeding" elapsed=10.001687872s
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.48220578Z level=info msg="Migration successfully executed" id="Add column email to team table" duration=4.394607ms
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.484213244Z level=info msg="Executing migration" id="Add column external to team_member table"
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.487597445Z level=info msg="Migration successfully executed" id="Add column external to team_member table" duration=3.383781ms
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.490583485Z level=info msg="Executing migration" id="Add column permission to team_member table"
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.493931665Z level=info msg="Migration successfully executed" id="Add column permission to team_member table" duration=3.34754ms
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.495576149Z level=info msg="Executing migration" id="create dashboard acl table"
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.49638166Z level=info msg="Migration successfully executed" id="create dashboard acl table" duration=805.211µs
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.499308609Z level=info msg="Executing migration" id="add index dashboard_acl_dashboard_id"
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.500049229Z level=info msg="Migration successfully executed" id="add index dashboard_acl_dashboard_id" duration=740.1µs
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.502300889Z level=info msg="Executing migration" id="add unique index dashboard_acl_dashboard_id_user_id"
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.503336577Z level=info msg="Migration successfully executed" id="add unique index dashboard_acl_dashboard_id_user_id" duration=1.035448ms
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.505391912Z level=info msg="Executing migration" id="add unique index dashboard_acl_dashboard_id_team_id"
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.506632086Z level=info msg="Migration successfully executed" id="add unique index dashboard_acl_dashboard_id_team_id" duration=1.238074ms
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.510658753Z level=info msg="Executing migration" id="add index dashboard_acl_user_id"
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.511430395Z level=info msg="Migration successfully executed" id="add index dashboard_acl_user_id" duration=771.022µs
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.513628813Z level=info msg="Executing migration" id="add index dashboard_acl_team_id"
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.514433095Z level=info msg="Migration successfully executed" id="add index dashboard_acl_team_id" duration=804.522µs
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.516572462Z level=info msg="Executing migration" id="add index dashboard_acl_org_id_role"
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.517286201Z level=info msg="Migration successfully executed" id="add index dashboard_acl_org_id_role" duration=713.249µs
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.519193232Z level=info msg="Executing migration" id="add index dashboard_permission"
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.519931572Z level=info msg="Migration successfully executed" id="add index dashboard_permission" duration=737.86µs
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.521567086Z level=info msg="Executing migration" id="save default acl rules in dashboard_acl table"
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.521996357Z level=info msg="Migration successfully executed" id="save default acl rules in dashboard_acl table" duration=429.191µs
Dec 06 09:42:58 compute-0 systemd[1]: Reloading.
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.526401166Z level=info msg="Executing migration" id="delete acl rules for deleted dashboards and folders"
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.526629442Z level=info msg="Migration successfully executed" id="delete acl rules for deleted dashboards and folders" duration=228.126µs
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.528464981Z level=info msg="Executing migration" id="create tag table"
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.529110578Z level=info msg="Migration successfully executed" id="create tag table" duration=645.537µs
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.531478862Z level=info msg="Executing migration" id="add index tag.key_value"
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.532183461Z level=info msg="Migration successfully executed" id="add index tag.key_value" duration=706.709µs
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.534449142Z level=info msg="Executing migration" id="create login attempt table"
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.535092529Z level=info msg="Migration successfully executed" id="create login attempt table" duration=640.557µs
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.537068012Z level=info msg="Executing migration" id="add index login_attempt.username"
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.53777184Z level=info msg="Migration successfully executed" id="add index login_attempt.username" duration=703.038µs
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.540435542Z level=info msg="Executing migration" id="drop index IDX_login_attempt_username - v1"
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.541160101Z level=info msg="Migration successfully executed" id="drop index IDX_login_attempt_username - v1" duration=723.999µs
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.543057972Z level=info msg="Executing migration" id="Rename table login_attempt to login_attempt_tmp_qwerty - v1"
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.555370962Z level=info msg="Migration successfully executed" id="Rename table login_attempt to login_attempt_tmp_qwerty - v1" duration=12.31172ms
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.55753007Z level=info msg="Executing migration" id="create login_attempt v2"
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.558227619Z level=info msg="Migration successfully executed" id="create login_attempt v2" duration=696.959µs
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.560188242Z level=info msg="Executing migration" id="create index IDX_login_attempt_username - v2"
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.561020714Z level=info msg="Migration successfully executed" id="create index IDX_login_attempt_username - v2" duration=829.842µs
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.563462409Z level=info msg="Executing migration" id="copy login_attempt v1 to v2"
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.563843829Z level=info msg="Migration successfully executed" id="copy login_attempt v1 to v2" duration=381.47µs
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.565562376Z level=info msg="Executing migration" id="drop login_attempt_tmp_qwerty"
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.566279445Z level=info msg="Migration successfully executed" id="drop login_attempt_tmp_qwerty" duration=717.069µs
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.56910884Z level=info msg="Executing migration" id="create user auth table"
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.569935073Z level=info msg="Migration successfully executed" id="create user auth table" duration=825.683µs
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.572621304Z level=info msg="Executing migration" id="create index IDX_user_auth_auth_module_auth_id - v1"
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.573662613Z level=info msg="Migration successfully executed" id="create index IDX_user_auth_auth_module_auth_id - v1" duration=1.040899ms
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.576072928Z level=info msg="Executing migration" id="alter user_auth.auth_id to length 190"
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.576137169Z level=info msg="Migration successfully executed" id="alter user_auth.auth_id to length 190" duration=64.041µs
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.580032564Z level=info msg="Executing migration" id="Add OAuth access token to user_auth"
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.587679319Z level=info msg="Migration successfully executed" id="Add OAuth access token to user_auth" duration=7.646726ms
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.592779536Z level=info msg="Executing migration" id="Add OAuth refresh token to user_auth"
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.597307367Z level=info msg="Migration successfully executed" id="Add OAuth refresh token to user_auth" duration=4.534921ms
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.599430314Z level=info msg="Executing migration" id="Add OAuth token type to user_auth"
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.603174424Z level=info msg="Migration successfully executed" id="Add OAuth token type to user_auth" duration=3.74346ms
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:42:58 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f921c001820 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.604930851Z level=info msg="Executing migration" id="Add OAuth expiry to user_auth"
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.608719292Z level=info msg="Migration successfully executed" id="Add OAuth expiry to user_auth" duration=3.785531ms
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.61083981Z level=info msg="Executing migration" id="Add index to user_id column in user_auth"
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.611763314Z level=info msg="Migration successfully executed" id="Add index to user_id column in user_auth" duration=923.764µs
Dec 06 09:42:58 compute-0 systemd-rc-local-generator[98008]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.614193209Z level=info msg="Executing migration" id="Add OAuth ID token to user_auth"
Dec 06 09:42:58 compute-0 systemd-sysv-generator[98011]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.617875009Z level=info msg="Migration successfully executed" id="Add OAuth ID token to user_auth" duration=3.6783ms
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.619545183Z level=info msg="Executing migration" id="create server_lock table"
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.620298853Z level=info msg="Migration successfully executed" id="create server_lock table" duration=753.49µs
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.622543083Z level=info msg="Executing migration" id="add index server_lock.operation_uid"
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.623377846Z level=info msg="Migration successfully executed" id="add index server_lock.operation_uid" duration=836.973µs
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.625361759Z level=info msg="Executing migration" id="create user auth token table"
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.626081419Z level=info msg="Migration successfully executed" id="create user auth token table" duration=719.75µs
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.628454292Z level=info msg="Executing migration" id="add unique index user_auth_token.auth_token"
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.629960432Z level=info msg="Migration successfully executed" id="add unique index user_auth_token.auth_token" duration=1.50491ms
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.632166292Z level=info msg="Executing migration" id="add unique index user_auth_token.prev_auth_token"
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.632915962Z level=info msg="Migration successfully executed" id="add unique index user_auth_token.prev_auth_token" duration=752.12µs
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.635245194Z level=info msg="Executing migration" id="add index user_auth_token.user_id"
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.636239431Z level=info msg="Migration successfully executed" id="add index user_auth_token.user_id" duration=995.537µs
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.638644055Z level=info msg="Executing migration" id="Add revoked_at to the user auth token"
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.643524246Z level=info msg="Migration successfully executed" id="Add revoked_at to the user auth token" duration=4.880621ms
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.645825458Z level=info msg="Executing migration" id="add index user_auth_token.revoked_at"
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.646744203Z level=info msg="Migration successfully executed" id="add index user_auth_token.revoked_at" duration=918.384µs
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.649058255Z level=info msg="Executing migration" id="create cache_data table"
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.649772394Z level=info msg="Migration successfully executed" id="create cache_data table" duration=713.959µs
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.6518856Z level=info msg="Executing migration" id="add unique index cache_data.cache_key"
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.652596919Z level=info msg="Migration successfully executed" id="add unique index cache_data.cache_key" duration=711.339µs
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.654665095Z level=info msg="Executing migration" id="create short_url table v1"
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.655376514Z level=info msg="Migration successfully executed" id="create short_url table v1" duration=711.589µs
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.658055896Z level=info msg="Executing migration" id="add index short_url.org_id-uid"
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.658839186Z level=info msg="Migration successfully executed" id="add index short_url.org_id-uid" duration=783.48µs
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.661129858Z level=info msg="Executing migration" id="alter table short_url alter column created_by type to bigint"
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.661173859Z level=info msg="Migration successfully executed" id="alter table short_url alter column created_by type to bigint" duration=44.501µs
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.662948717Z level=info msg="Executing migration" id="delete alert_definition table"
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.66303542Z level=info msg="Migration successfully executed" id="delete alert_definition table" duration=89.333µs
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.665007802Z level=info msg="Executing migration" id="recreate alert_definition table"
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.665706221Z level=info msg="Migration successfully executed" id="recreate alert_definition table" duration=695.129µs
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.669130743Z level=info msg="Executing migration" id="add index in alert_definition on org_id and title columns"
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.670189851Z level=info msg="Migration successfully executed" id="add index in alert_definition on org_id and title columns" duration=1.058768ms
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.673209222Z level=info msg="Executing migration" id="add index in alert_definition on org_id and uid columns"
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.674038435Z level=info msg="Migration successfully executed" id="add index in alert_definition on org_id and uid columns" duration=828.693µs
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.675982546Z level=info msg="Executing migration" id="alter alert_definition table data column to mediumtext in mysql"
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.676051378Z level=info msg="Migration successfully executed" id="alter alert_definition table data column to mediumtext in mysql" duration=69.482µs
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.678154325Z level=info msg="Executing migration" id="drop index in alert_definition on org_id and title columns"
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.678939425Z level=info msg="Migration successfully executed" id="drop index in alert_definition on org_id and title columns" duration=785.221µs
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.680827076Z level=info msg="Executing migration" id="drop index in alert_definition on org_id and uid columns"
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.681594907Z level=info msg="Migration successfully executed" id="drop index in alert_definition on org_id and uid columns" duration=767.291µs
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.68393757Z level=info msg="Executing migration" id="add unique index in alert_definition on org_id and title columns"
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.68471445Z level=info msg="Migration successfully executed" id="add unique index in alert_definition on org_id and title columns" duration=776.55µs
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.6869483Z level=info msg="Executing migration" id="add unique index in alert_definition on org_id and uid columns"
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.687839565Z level=info msg="Migration successfully executed" id="add unique index in alert_definition on org_id and uid columns" duration=890.885µs
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.689674253Z level=info msg="Executing migration" id="Add column paused in alert_definition"
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.695878249Z level=info msg="Migration successfully executed" id="Add column paused in alert_definition" duration=6.197996ms
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.698219412Z level=info msg="Executing migration" id="drop alert_definition table"
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.699652971Z level=info msg="Migration successfully executed" id="drop alert_definition table" duration=1.433359ms
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.702191259Z level=info msg="Executing migration" id="delete alert_definition_version table"
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.702260921Z level=info msg="Migration successfully executed" id="delete alert_definition_version table" duration=70.172µs
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.703810912Z level=info msg="Executing migration" id="recreate alert_definition_version table"
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.704561813Z level=info msg="Migration successfully executed" id="recreate alert_definition_version table" duration=748.091µs
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.706805613Z level=info msg="Executing migration" id="add index in alert_definition_version table on alert_definition_id and version columns"
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.707607535Z level=info msg="Migration successfully executed" id="add index in alert_definition_version table on alert_definition_id and version columns" duration=801.983µs
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.709657389Z level=info msg="Executing migration" id="add index in alert_definition_version table on alert_definition_uid and version columns"
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.710435871Z level=info msg="Migration successfully executed" id="add index in alert_definition_version table on alert_definition_uid and version columns" duration=778.322µs
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.712210038Z level=info msg="Executing migration" id="alter alert_definition_version table data column to mediumtext in mysql"
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.712259049Z level=info msg="Migration successfully executed" id="alter alert_definition_version table data column to mediumtext in mysql" duration=51.942µs
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.713904823Z level=info msg="Executing migration" id="drop alert_definition_version table"
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.714832308Z level=info msg="Migration successfully executed" id="drop alert_definition_version table" duration=924.755µs
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.716620866Z level=info msg="Executing migration" id="create alert_instance table"
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.717433098Z level=info msg="Migration successfully executed" id="create alert_instance table" duration=811.212µs
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.719117423Z level=info msg="Executing migration" id="add index in alert_instance table on def_org_id, def_uid and current_state columns"
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.719936315Z level=info msg="Migration successfully executed" id="add index in alert_instance table on def_org_id, def_uid and current_state columns" duration=818.562µs
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.72198566Z level=info msg="Executing migration" id="add index in alert_instance table on def_org_id, current_state columns"
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.723159231Z level=info msg="Migration successfully executed" id="add index in alert_instance table on def_org_id, current_state columns" duration=1.171151ms
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.72610445Z level=info msg="Executing migration" id="add column current_state_end to alert_instance"
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.730864798Z level=info msg="Migration successfully executed" id="add column current_state_end to alert_instance" duration=4.759988ms
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.732655596Z level=info msg="Executing migration" id="remove index def_org_id, def_uid, current_state on alert_instance"
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.733576611Z level=info msg="Migration successfully executed" id="remove index def_org_id, def_uid, current_state on alert_instance" duration=920.125µs
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.735982385Z level=info msg="Executing migration" id="remove index def_org_id, current_state on alert_instance"
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.737000032Z level=info msg="Migration successfully executed" id="remove index def_org_id, current_state on alert_instance" duration=1.019377ms
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.738977736Z level=info msg="Executing migration" id="rename def_org_id to rule_org_id in alert_instance"
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.762441064Z level=info msg="Migration successfully executed" id="rename def_org_id to rule_org_id in alert_instance" duration=23.415607ms
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.764947802Z level=info msg="Executing migration" id="rename def_uid to rule_uid in alert_instance"
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.787464055Z level=info msg="Migration successfully executed" id="rename def_uid to rule_uid in alert_instance" duration=22.469212ms
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.789944072Z level=info msg="Executing migration" id="add index rule_org_id, rule_uid, current_state on alert_instance"
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.791109073Z level=info msg="Migration successfully executed" id="add index rule_org_id, rule_uid, current_state on alert_instance" duration=1.164661ms
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.793091436Z level=info msg="Executing migration" id="add index rule_org_id, current_state on alert_instance"
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.794034392Z level=info msg="Migration successfully executed" id="add index rule_org_id, current_state on alert_instance" duration=941.946µs
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.796807707Z level=info msg="Executing migration" id="add current_reason column related to current_state"
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.801562414Z level=info msg="Migration successfully executed" id="add current_reason column related to current_state" duration=4.754326ms
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.803539507Z level=info msg="Executing migration" id="add result_fingerprint column to alert_instance"
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.807534873Z level=info msg="Migration successfully executed" id="add result_fingerprint column to alert_instance" duration=3.995106ms
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.809909787Z level=info msg="Executing migration" id="create alert_rule table"
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.810691478Z level=info msg="Migration successfully executed" id="create alert_rule table" duration=781.131µs
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.813537434Z level=info msg="Executing migration" id="add index in alert_rule on org_id and title columns"
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.814597873Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id and title columns" duration=1.060159ms
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.817430209Z level=info msg="Executing migration" id="add index in alert_rule on org_id and uid columns"
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.818400895Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id and uid columns" duration=970.536µs
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.821554989Z level=info msg="Executing migration" id="add index in alert_rule on org_id, namespace_uid, group_uid columns"
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.822470485Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, namespace_uid, group_uid columns" duration=914.906µs
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.824839298Z level=info msg="Executing migration" id="alter alert_rule table data column to mediumtext in mysql"
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.824924731Z level=info msg="Migration successfully executed" id="alter alert_rule table data column to mediumtext in mysql" duration=85.243µs
Dec 06 09:42:58 compute-0 systemd[1]: Starting Ceph haproxy.rgw.default.compute-0.vhqyer for 5ecd3f74-dade-5fc4-92ce-8950ae424258...
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.826841871Z level=info msg="Executing migration" id="add column for to alert_rule"
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.831167888Z level=info msg="Migration successfully executed" id="add column for to alert_rule" duration=4.326507ms
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.833019507Z level=info msg="Executing migration" id="add column annotations to alert_rule"
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.838129994Z level=info msg="Migration successfully executed" id="add column annotations to alert_rule" duration=5.107107ms
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.84060066Z level=info msg="Executing migration" id="add column labels to alert_rule"
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.845624135Z level=info msg="Migration successfully executed" id="add column labels to alert_rule" duration=5.017405ms
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.848919683Z level=info msg="Executing migration" id="remove unique index from alert_rule on org_id, title columns"
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.84991243Z level=info msg="Migration successfully executed" id="remove unique index from alert_rule on org_id, title columns" duration=993.897µs
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.85178519Z level=info msg="Executing migration" id="add index in alert_rule on org_id, namespase_uid and title columns"
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.852724345Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, namespase_uid and title columns" duration=939.935µs
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.854472532Z level=info msg="Executing migration" id="add dashboard_uid column to alert_rule"
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.858814969Z level=info msg="Migration successfully executed" id="add dashboard_uid column to alert_rule" duration=4.342077ms
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.861575333Z level=info msg="Executing migration" id="add panel_id column to alert_rule"
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.868038046Z level=info msg="Migration successfully executed" id="add panel_id column to alert_rule" duration=6.454793ms
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.870631986Z level=info msg="Executing migration" id="add index in alert_rule on org_id, dashboard_uid and panel_id columns"
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.871876819Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, dashboard_uid and panel_id columns" duration=1.244953ms
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.8745324Z level=info msg="Executing migration" id="add rule_group_idx column to alert_rule"
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.880255913Z level=info msg="Migration successfully executed" id="add rule_group_idx column to alert_rule" duration=5.722643ms
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.882405281Z level=info msg="Executing migration" id="add is_paused column to alert_rule table"
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.886859641Z level=info msg="Migration successfully executed" id="add is_paused column to alert_rule table" duration=4.45427ms
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.888707Z level=info msg="Executing migration" id="fix is_paused column for alert_rule table"
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.888793252Z level=info msg="Migration successfully executed" id="fix is_paused column for alert_rule table" duration=86.782µs
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.891381082Z level=info msg="Executing migration" id="create alert_rule_version table"
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.89239671Z level=info msg="Migration successfully executed" id="create alert_rule_version table" duration=1.015788ms
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.895080322Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_org_id, rule_uid and version columns"
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.896012706Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_org_id, rule_uid and version columns" duration=932.224µs
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.898119283Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_org_id, rule_namespace_uid and rule_group columns"
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.899082129Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_org_id, rule_namespace_uid and rule_group columns" duration=960.125µs
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.9013789Z level=info msg="Executing migration" id="alter alert_rule_version table data column to mediumtext in mysql"
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.901423642Z level=info msg="Migration successfully executed" id="alter alert_rule_version table data column to mediumtext in mysql" duration=45.051µs
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.903216139Z level=info msg="Executing migration" id="add column for to alert_rule_version"
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.907859204Z level=info msg="Migration successfully executed" id="add column for to alert_rule_version" duration=4.643275ms
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.910116964Z level=info msg="Executing migration" id="add column annotations to alert_rule_version"
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.914741389Z level=info msg="Migration successfully executed" id="add column annotations to alert_rule_version" duration=4.622335ms
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.916594528Z level=info msg="Executing migration" id="add column labels to alert_rule_version"
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.9226625Z level=info msg="Migration successfully executed" id="add column labels to alert_rule_version" duration=6.063702ms
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.924645584Z level=info msg="Executing migration" id="add rule_group_idx column to alert_rule_version"
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.929041762Z level=info msg="Migration successfully executed" id="add rule_group_idx column to alert_rule_version" duration=4.396798ms
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.93084082Z level=info msg="Executing migration" id="add is_paused column to alert_rule_versions table"
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.935866045Z level=info msg="Migration successfully executed" id="add is_paused column to alert_rule_versions table" duration=5.021265ms
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.938413823Z level=info msg="Executing migration" id="fix is_paused column for alert_rule_version table"
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.938471165Z level=info msg="Migration successfully executed" id="fix is_paused column for alert_rule_version table" duration=57.882µs
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.940459848Z level=info msg="Executing migration" id=create_alert_configuration_table
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.941151667Z level=info msg="Migration successfully executed" id=create_alert_configuration_table duration=694.249µs
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.943274674Z level=info msg="Executing migration" id="Add column default in alert_configuration"
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.94801103Z level=info msg="Migration successfully executed" id="Add column default in alert_configuration" duration=4.733506ms
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.950385304Z level=info msg="Executing migration" id="alert alert_configuration alertmanager_configuration column from TEXT to MEDIUMTEXT if mysql"
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.950459896Z level=info msg="Migration successfully executed" id="alert alert_configuration alertmanager_configuration column from TEXT to MEDIUMTEXT if mysql" duration=78.272µs
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.952266025Z level=info msg="Executing migration" id="add column org_id in alert_configuration"
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.957912846Z level=info msg="Migration successfully executed" id="add column org_id in alert_configuration" duration=5.645101ms
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.959764356Z level=info msg="Executing migration" id="add index in alert_configuration table on org_id column"
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.96068465Z level=info msg="Migration successfully executed" id="add index in alert_configuration table on org_id column" duration=920.054µs
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.963565668Z level=info msg="Executing migration" id="add configuration_hash column to alert_configuration"
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.969054035Z level=info msg="Migration successfully executed" id="add configuration_hash column to alert_configuration" duration=5.487047ms
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.970969196Z level=info msg="Executing migration" id=create_ngalert_configuration_table
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.971682325Z level=info msg="Migration successfully executed" id=create_ngalert_configuration_table duration=712.609µs
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.974334667Z level=info msg="Executing migration" id="add index in ngalert_configuration on org_id column"
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.975581739Z level=info msg="Migration successfully executed" id="add index in ngalert_configuration on org_id column" duration=1.249363ms
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.977865621Z level=info msg="Executing migration" id="add column send_alerts_to in ngalert_configuration"
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.984161309Z level=info msg="Migration successfully executed" id="add column send_alerts_to in ngalert_configuration" duration=6.293068ms
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.986552404Z level=info msg="Executing migration" id="create provenance_type table"
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.987386037Z level=info msg="Migration successfully executed" id="create provenance_type table" duration=833.243µs
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.989861602Z level=info msg="Executing migration" id="add index to uniquify (record_key, record_type, org_id) columns"
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.990783618Z level=info msg="Migration successfully executed" id="add index to uniquify (record_key, record_type, org_id) columns" duration=921.156µs
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.993148381Z level=info msg="Executing migration" id="create alert_image table"
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.993964173Z level=info msg="Migration successfully executed" id="create alert_image table" duration=817.792µs
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.996634874Z level=info msg="Executing migration" id="add unique index on token to alert_image table"
Dec 06 09:42:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.997361164Z level=info msg="Migration successfully executed" id="add unique index on token to alert_image table" duration=726.2µs
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.999796299Z level=info msg="Executing migration" id="support longer URLs in alert_image table"
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:58.99984309Z level=info msg="Migration successfully executed" id="support longer URLs in alert_image table" duration=47.451µs
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.001797123Z level=info msg="Executing migration" id=create_alert_configuration_history_table
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.002619525Z level=info msg="Migration successfully executed" id=create_alert_configuration_history_table duration=822.342µs
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.005070531Z level=info msg="Executing migration" id="drop non-unique orgID index on alert_configuration"
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.006310884Z level=info msg="Migration successfully executed" id="drop non-unique orgID index on alert_configuration" duration=1.240632ms
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.008324408Z level=info msg="Executing migration" id="drop unique orgID index on alert_configuration if exists"
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.008744559Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop unique orgID index on alert_configuration if exists"
Dec 06 09:42:59 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v85: 337 pgs: 4 unknown, 2 peering, 331 active+clean; 457 KiB data, 125 MiB used, 60 GiB / 60 GiB avail
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.010608009Z level=info msg="Executing migration" id="extract alertmanager configuration history to separate table"
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.011066371Z level=info msg="Migration successfully executed" id="extract alertmanager configuration history to separate table" duration=456.062µs
Dec 06 09:42:59 compute-0 ceph-osd[82803]: log_channel(cluster) log [DBG] : 8.12 scrub starts
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.012629694Z level=info msg="Executing migration" id="add unique index on orgID to alert_configuration"
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.013534468Z level=info msg="Migration successfully executed" id="add unique index on orgID to alert_configuration" duration=903.944µs
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.015371737Z level=info msg="Executing migration" id="add last_applied column to alert_configuration_history"
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.020904385Z level=info msg="Migration successfully executed" id="add last_applied column to alert_configuration_history" duration=5.532148ms
Dec 06 09:42:59 compute-0 ceph-osd[82803]: log_channel(cluster) log [DBG] : 8.12 scrub ok
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.022720754Z level=info msg="Executing migration" id="create library_element table v1"
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.023865935Z level=info msg="Migration successfully executed" id="create library_element table v1" duration=1.144961ms
Dec 06 09:42:59 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 09:42:59 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.026805084Z level=info msg="Executing migration" id="add index library_element org_id-folder_id-name-kind"
Dec 06 09:42:59 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 09:42:59 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.027916663Z level=info msg="Migration successfully executed" id="add index library_element org_id-folder_id-name-kind" duration=1.111609ms
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.030295077Z level=info msg="Executing migration" id="create library_element_connection table v1"
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.03117137Z level=info msg="Migration successfully executed" id="create library_element_connection table v1" duration=875.893µs
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.033744419Z level=info msg="Executing migration" id="add index library_element_connection element_id-kind-connection_id"
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.035274421Z level=info msg="Migration successfully executed" id="add index library_element_connection element_id-kind-connection_id" duration=1.529442ms
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.037921482Z level=info msg="Executing migration" id="add unique index library_element org_id_uid"
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.03897535Z level=info msg="Migration successfully executed" id="add unique index library_element org_id_uid" duration=1.051608ms
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.041064926Z level=info msg="Executing migration" id="increase max description length to 2048"
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.041166929Z level=info msg="Migration successfully executed" id="increase max description length to 2048" duration=104.293µs
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.042927645Z level=info msg="Executing migration" id="alter library_element model to mediumtext"
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.043035198Z level=info msg="Migration successfully executed" id="alter library_element model to mediumtext" duration=105.223µs
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.044907759Z level=info msg="Executing migration" id="clone move dashboard alerts to unified alerting"
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.045267778Z level=info msg="Migration successfully executed" id="clone move dashboard alerts to unified alerting" duration=360.329µs
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.047078297Z level=info msg="Executing migration" id="create data_keys table"
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.048179836Z level=info msg="Migration successfully executed" id="create data_keys table" duration=1.101779ms
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.051279739Z level=info msg="Executing migration" id="create secrets table"
Dec 06 09:42:59 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 09:42:59 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.051998799Z level=info msg="Migration successfully executed" id="create secrets table" duration=721.27µs
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.054161356Z level=info msg="Executing migration" id="rename data_keys name column to id"
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.082802405Z level=info msg="Migration successfully executed" id="rename data_keys name column to id" duration=28.633809ms
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.085137187Z level=info msg="Executing migration" id="add name column into data_keys"
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.09194744Z level=info msg="Migration successfully executed" id="add name column into data_keys" duration=6.808203ms
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.094253942Z level=info msg="Executing migration" id="copy data_keys id column values into name"
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.094473968Z level=info msg="Migration successfully executed" id="copy data_keys id column values into name" duration=220.186µs
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.096277706Z level=info msg="Executing migration" id="rename data_keys name column to label"
Dec 06 09:42:59 compute-0 podman[98065]: 2025-12-06 09:42:59.114645789 +0000 UTC m=+0.052768216 container create 8307d569d32f641dfd216329bf28a6dd6c231023fe8a6bc71cdd2d75ff9fd46f (image=quay.io/ceph/haproxy:2.3, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-haproxy-rgw-default-compute-0-vhqyer)
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.128468149Z level=info msg="Migration successfully executed" id="rename data_keys name column to label" duration=32.182723ms
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.130246167Z level=info msg="Executing migration" id="rename data_keys id column back to name"
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.16204448Z level=info msg="Migration successfully executed" id="rename data_keys id column back to name" duration=31.792783ms
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.163969362Z level=info msg="Executing migration" id="create kv_store table v1"
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.164861775Z level=info msg="Migration successfully executed" id="create kv_store table v1" duration=891.623µs
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.167128196Z level=info msg="Executing migration" id="add index kv_store.org_id-namespace-key"
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.168065601Z level=info msg="Migration successfully executed" id="add index kv_store.org_id-namespace-key" duration=937.135µs
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.170130557Z level=info msg="Executing migration" id="update dashboard_uid and panel_id from existing annotations"
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.170367183Z level=info msg="Migration successfully executed" id="update dashboard_uid and panel_id from existing annotations" duration=237.576µs
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.171938734Z level=info msg="Executing migration" id="create permission table"
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.173058935Z level=info msg="Migration successfully executed" id="create permission table" duration=1.119781ms
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:42:59 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9228003c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:42:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8b58fcfff812beea46e10342d748115dd64bff4593d725f7ba67cb37c86b189/merged/var/lib/haproxy supports timestamps until 2038 (0x7fffffff)
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.176207949Z level=info msg="Executing migration" id="add unique index permission.role_id"
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.177113273Z level=info msg="Migration successfully executed" id="add unique index permission.role_id" duration=907.514µs
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.179814046Z level=info msg="Executing migration" id="add unique index role_id_action_scope"
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.180920006Z level=info msg="Migration successfully executed" id="add unique index role_id_action_scope" duration=1.11014ms
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.183255198Z level=info msg="Executing migration" id="create role table"
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.184169353Z level=info msg="Migration successfully executed" id="create role table" duration=914.685µs
Dec 06 09:42:59 compute-0 podman[98065]: 2025-12-06 09:42:59.092554026 +0000 UTC m=+0.030676433 image pull e85424b0d443f37ddd2dd8a3bb2ef6f18dd352b987723a921b64289023af2914 quay.io/ceph/haproxy:2.3
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.186608169Z level=info msg="Executing migration" id="add column display_name"
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.192952138Z level=info msg="Migration successfully executed" id="add column display_name" duration=6.340619ms
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.195100906Z level=info msg="Executing migration" id="add column group_name"
Dec 06 09:42:59 compute-0 podman[98065]: 2025-12-06 09:42:59.200274905 +0000 UTC m=+0.138397392 container init 8307d569d32f641dfd216329bf28a6dd6c231023fe8a6bc71cdd2d75ff9fd46f (image=quay.io/ceph/haproxy:2.3, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-haproxy-rgw-default-compute-0-vhqyer)
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.200302306Z level=info msg="Migration successfully executed" id="add column group_name" duration=5.19997ms
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.20233508Z level=info msg="Executing migration" id="add index role.org_id"
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.203381678Z level=info msg="Migration successfully executed" id="add index role.org_id" duration=1.046778ms
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.206170443Z level=info msg="Executing migration" id="add unique index role_org_id_name"
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.207275372Z level=info msg="Migration successfully executed" id="add unique index role_org_id_name" duration=1.105109ms
Dec 06 09:42:59 compute-0 podman[98065]: 2025-12-06 09:42:59.209349958 +0000 UTC m=+0.147472375 container start 8307d569d32f641dfd216329bf28a6dd6c231023fe8a6bc71cdd2d75ff9fd46f (image=quay.io/ceph/haproxy:2.3, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-haproxy-rgw-default-compute-0-vhqyer)
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.210616522Z level=info msg="Executing migration" id="add index role_org_id_uid"
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.211532497Z level=info msg="Migration successfully executed" id="add index role_org_id_uid" duration=913.385µs
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.214067624Z level=info msg="Executing migration" id="create team role table"
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.214795594Z level=info msg="Migration successfully executed" id="create team role table" duration=727.87µs
Dec 06 09:42:59 compute-0 bash[98065]: 8307d569d32f641dfd216329bf28a6dd6c231023fe8a6bc71cdd2d75ff9fd46f
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.217017104Z level=info msg="Executing migration" id="add index team_role.org_id"
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.218050341Z level=info msg="Migration successfully executed" id="add index team_role.org_id" duration=1.033107ms
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.220325233Z level=info msg="Executing migration" id="add unique index team_role_org_id_team_id_role_id"
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.221230147Z level=info msg="Migration successfully executed" id="add unique index team_role_org_id_team_id_role_id" duration=904.114µs
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.223684182Z level=info msg="Executing migration" id="add index team_role.team_id"
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.224530445Z level=info msg="Migration successfully executed" id="add index team_role.team_id" duration=845.893µs
Dec 06 09:42:59 compute-0 systemd[1]: Started Ceph haproxy.rgw.default.compute-0.vhqyer for 5ecd3f74-dade-5fc4-92ce-8950ae424258.
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.22733554Z level=info msg="Executing migration" id="create user role table"
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-haproxy-rgw-default-compute-0-vhqyer[98080]: [NOTICE] 339/094259 (2) : New worker #1 (4) forked
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.228220855Z level=info msg="Migration successfully executed" id="create user role table" duration=885.435µs
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.23071082Z level=info msg="Executing migration" id="add index user_role.org_id"
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.232470998Z level=info msg="Migration successfully executed" id="add index user_role.org_id" duration=1.762458ms
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.235705725Z level=info msg="Executing migration" id="add unique index user_role_org_id_user_id_role_id"
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.237336789Z level=info msg="Migration successfully executed" id="add unique index user_role_org_id_user_id_role_id" duration=1.630575ms
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.240290188Z level=info msg="Executing migration" id="add index user_role.user_id"
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.242060785Z level=info msg="Migration successfully executed" id="add index user_role.user_id" duration=1.772097ms
Dec 06 09:42:59 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.244817599Z level=info msg="Executing migration" id="create builtin role table"
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.245876348Z level=info msg="Migration successfully executed" id="create builtin role table" duration=1.060639ms
Dec 06 09:42:59 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.004000106s ======
Dec 06 09:42:59 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:42:59.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.004000106s
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.248666112Z level=info msg="Executing migration" id="add index builtin_role.role_id"
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.249623568Z level=info msg="Migration successfully executed" id="add index builtin_role.role_id" duration=957.806µs
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.251629382Z level=info msg="Executing migration" id="add index builtin_role.name"
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.252456254Z level=info msg="Migration successfully executed" id="add index builtin_role.name" duration=826.712µs
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.255134586Z level=info msg="Executing migration" id="Add column org_id to builtin_role table"
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.261094355Z level=info msg="Migration successfully executed" id="Add column org_id to builtin_role table" duration=5.960049ms
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.262802711Z level=info msg="Executing migration" id="add index builtin_role.org_id"
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.263664634Z level=info msg="Migration successfully executed" id="add index builtin_role.org_id" duration=859.473µs
Dec 06 09:42:59 compute-0 ceph-mon[74327]: 9.a scrub starts
Dec 06 09:42:59 compute-0 ceph-mon[74327]: 9.a scrub ok
Dec 06 09:42:59 compute-0 ceph-mon[74327]: 7.1f scrub starts
Dec 06 09:42:59 compute-0 ceph-mon[74327]: 7.1f scrub ok
Dec 06 09:42:59 compute-0 ceph-mon[74327]: 9.1f scrub starts
Dec 06 09:42:59 compute-0 ceph-mon[74327]: 9.1f scrub ok
Dec 06 09:42:59 compute-0 ceph-mon[74327]: osdmap e75: 3 total, 3 up, 3 in
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.266029748Z level=info msg="Executing migration" id="add unique index builtin_role_org_id_role_id_role"
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.266913721Z level=info msg="Migration successfully executed" id="add unique index builtin_role_org_id_role_id_role" duration=883.403µs
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.269331456Z level=info msg="Executing migration" id="Remove unique index role_org_id_uid"
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.27021817Z level=info msg="Migration successfully executed" id="Remove unique index role_org_id_uid" duration=886.854µs
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.271845524Z level=info msg="Executing migration" id="add unique index role.uid"
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.272661935Z level=info msg="Migration successfully executed" id="add unique index role.uid" duration=816.271µs
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.274997198Z level=info msg="Executing migration" id="create seed assignment table"
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.275694717Z level=info msg="Migration successfully executed" id="create seed assignment table" duration=696.628µs
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.277574577Z level=info msg="Executing migration" id="add unique index builtin_role_role_name"
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.27841982Z level=info msg="Migration successfully executed" id="add unique index builtin_role_role_name" duration=845.173µs
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.286400984Z level=info msg="Executing migration" id="add column hidden to role table"
Dec 06 09:42:59 compute-0 sudo[97832]: pam_unix(sudo:session): session closed for user root
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.29260137Z level=info msg="Migration successfully executed" id="add column hidden to role table" duration=6.196636ms
Dec 06 09:42:59 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.294581873Z level=info msg="Executing migration" id="permission kind migration"
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.30042309Z level=info msg="Migration successfully executed" id="permission kind migration" duration=5.840517ms
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.302147636Z level=info msg="Executing migration" id="permission attribute migration"
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.308084085Z level=info msg="Migration successfully executed" id="permission attribute migration" duration=5.935049ms
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.309832253Z level=info msg="Executing migration" id="permission identifier migration"
Dec 06 09:42:59 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:42:59 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.315806573Z level=info msg="Migration successfully executed" id="permission identifier migration" duration=5.97375ms
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.317820846Z level=info msg="Executing migration" id="add permission identifier index"
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.31870364Z level=info msg="Migration successfully executed" id="add permission identifier index" duration=882.854µs
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.322135862Z level=info msg="Executing migration" id="add permission action scope role_id index"
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.323640213Z level=info msg="Migration successfully executed" id="add permission action scope role_id index" duration=1.503811ms
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.32577804Z level=info msg="Executing migration" id="remove permission role_id action scope index"
Dec 06 09:42:59 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.326690485Z level=info msg="Migration successfully executed" id="remove permission role_id action scope index" duration=912.505µs
Dec 06 09:42:59 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.rgw.default}] v 0)
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.328635416Z level=info msg="Executing migration" id="create query_history table v1"
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.329436158Z level=info msg="Migration successfully executed" id="create query_history table v1" duration=804.822µs
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.331579835Z level=info msg="Executing migration" id="add index query_history.org_id-created_by-datasource_uid"
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.332453109Z level=info msg="Migration successfully executed" id="add index query_history.org_id-created_by-datasource_uid" duration=870.634µs
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.334478753Z level=info msg="Executing migration" id="alter table query_history alter column created_by type to bigint"
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.334614926Z level=info msg="Migration successfully executed" id="alter table query_history alter column created_by type to bigint" duration=136.813µs
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.336181989Z level=info msg="Executing migration" id="rbac disabled migrator"
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.336255561Z level=info msg="Migration successfully executed" id="rbac disabled migrator" duration=73.982µs
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.337771642Z level=info msg="Executing migration" id="teams permissions migration"
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.338239124Z level=info msg="Migration successfully executed" id="teams permissions migration" duration=467.752µs
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.33995988Z level=info msg="Executing migration" id="dashboard permissions"
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.340413763Z level=info msg="Migration successfully executed" id="dashboard permissions" duration=454.713µs
Dec 06 09:42:59 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.342139298Z level=info msg="Executing migration" id="dashboard permissions uid scopes"
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.342726504Z level=info msg="Migration successfully executed" id="dashboard permissions uid scopes" duration=587.156µs
Dec 06 09:42:59 compute-0 ceph-mgr[74618]: [cephadm INFO cephadm.serve] Deploying daemon haproxy.rgw.default.compute-2.mwbfro on compute-2
Dec 06 09:42:59 compute-0 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Deploying daemon haproxy.rgw.default.compute-2.mwbfro on compute-2
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.345526999Z level=info msg="Executing migration" id="drop managed folder create actions"
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.345747705Z level=info msg="Migration successfully executed" id="drop managed folder create actions" duration=222.746µs
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.34740795Z level=info msg="Executing migration" id="alerting notification permissions"
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.347940484Z level=info msg="Migration successfully executed" id="alerting notification permissions" duration=532.004µs
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.350048771Z level=info msg="Executing migration" id="create query_history_star table v1"
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.350937394Z level=info msg="Migration successfully executed" id="create query_history_star table v1" duration=888.213µs
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.352913797Z level=info msg="Executing migration" id="add index query_history.user_id-query_uid"
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.353811111Z level=info msg="Migration successfully executed" id="add index query_history.user_id-query_uid" duration=897.044µs
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.355673151Z level=info msg="Executing migration" id="add column org_id in query_history_star"
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.361330923Z level=info msg="Migration successfully executed" id="add column org_id in query_history_star" duration=5.657282ms
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.363393459Z level=info msg="Executing migration" id="alter table query_history_star_mig column user_id type to bigint"
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.363501072Z level=info msg="Migration successfully executed" id="alter table query_history_star_mig column user_id type to bigint" duration=108.203µs
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.365136085Z level=info msg="Executing migration" id="create correlation table v1"
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.366244515Z level=info msg="Migration successfully executed" id="create correlation table v1" duration=1.10791ms
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.368743542Z level=info msg="Executing migration" id="add index correlations.uid"
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.370061977Z level=info msg="Migration successfully executed" id="add index correlations.uid" duration=1.319535ms
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.372424281Z level=info msg="Executing migration" id="add index correlations.source_uid"
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.373744456Z level=info msg="Migration successfully executed" id="add index correlations.source_uid" duration=1.319845ms
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.37614828Z level=info msg="Executing migration" id="add correlation config column"
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.385384939Z level=info msg="Migration successfully executed" id="add correlation config column" duration=9.235978ms
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.387303409Z level=info msg="Executing migration" id="drop index IDX_correlation_uid - v1"
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.388307157Z level=info msg="Migration successfully executed" id="drop index IDX_correlation_uid - v1" duration=999.078µs
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.390013502Z level=info msg="Executing migration" id="drop index IDX_correlation_source_uid - v1"
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.390957918Z level=info msg="Migration successfully executed" id="drop index IDX_correlation_source_uid - v1" duration=944.686µs
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.393002622Z level=info msg="Executing migration" id="Rename table correlation to correlation_tmp_qwerty - v1"
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.413646306Z level=info msg="Migration successfully executed" id="Rename table correlation to correlation_tmp_qwerty - v1" duration=20.632344ms
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.41601305Z level=info msg="Executing migration" id="create correlation v2"
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.417423857Z level=info msg="Migration successfully executed" id="create correlation v2" duration=1.402777ms
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.419536014Z level=info msg="Executing migration" id="create index IDX_correlation_uid - v2"
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.420777297Z level=info msg="Migration successfully executed" id="create index IDX_correlation_uid - v2" duration=1.241263ms
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.423621843Z level=info msg="Executing migration" id="create index IDX_correlation_source_uid - v2"
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.424664922Z level=info msg="Migration successfully executed" id="create index IDX_correlation_source_uid - v2" duration=1.040819ms
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.427095666Z level=info msg="Executing migration" id="create index IDX_correlation_org_id - v2"
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.427993811Z level=info msg="Migration successfully executed" id="create index IDX_correlation_org_id - v2" duration=897.975µs
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.430325494Z level=info msg="Executing migration" id="copy correlation v1 to v2"
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.430598131Z level=info msg="Migration successfully executed" id="copy correlation v1 to v2" duration=273.117µs
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.432170252Z level=info msg="Executing migration" id="drop correlation_tmp_qwerty"
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.433031326Z level=info msg="Migration successfully executed" id="drop correlation_tmp_qwerty" duration=861.104µs
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.434857335Z level=info msg="Executing migration" id="add provisioning column"
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.440760433Z level=info msg="Migration successfully executed" id="add provisioning column" duration=5.899878ms
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.442952282Z level=info msg="Executing migration" id="create entity_events table"
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.443866366Z level=info msg="Migration successfully executed" id="create entity_events table" duration=917.975µs
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.445632394Z level=info msg="Executing migration" id="create dashboard public config v1"
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.446542978Z level=info msg="Migration successfully executed" id="create dashboard public config v1" duration=909.985µs
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.449147068Z level=info msg="Executing migration" id="drop index UQE_dashboard_public_config_uid - v1"
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.449686483Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop index UQE_dashboard_public_config_uid - v1"
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.451643795Z level=info msg="Executing migration" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v1"
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.452014054Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v1"
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.453787392Z level=info msg="Executing migration" id="Drop old dashboard public config table"
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.454913982Z level=info msg="Migration successfully executed" id="Drop old dashboard public config table" duration=1.1267ms
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.457032679Z level=info msg="Executing migration" id="recreate dashboard public config v1"
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.458119269Z level=info msg="Migration successfully executed" id="recreate dashboard public config v1" duration=1.08502ms
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.460679267Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_uid - v1"
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.461931431Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_uid - v1" duration=1.252284ms
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.464188651Z level=info msg="Executing migration" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v1"
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.465540437Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v1" duration=1.352016ms
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.46788469Z level=info msg="Executing migration" id="drop index UQE_dashboard_public_config_uid - v2"
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.469102943Z level=info msg="Migration successfully executed" id="drop index UQE_dashboard_public_config_uid - v2" duration=1.237412ms
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.471119087Z level=info msg="Executing migration" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v2"
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.472309889Z level=info msg="Migration successfully executed" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v2" duration=1.190942ms
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.474084006Z level=info msg="Executing migration" id="Drop public config table"
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.475368941Z level=info msg="Migration successfully executed" id="Drop public config table" duration=1.285555ms
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.477618772Z level=info msg="Executing migration" id="Recreate dashboard public config v2"
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.478866375Z level=info msg="Migration successfully executed" id="Recreate dashboard public config v2" duration=1.247923ms
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.480751236Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_uid - v2"
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.481863815Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_uid - v2" duration=1.112599ms
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.483610832Z level=info msg="Executing migration" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v2"
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.484769333Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v2" duration=1.160151ms
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.486706485Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_access_token - v2"
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.487946598Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_access_token - v2" duration=1.240783ms
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.490445935Z level=info msg="Executing migration" id="Rename table dashboard_public_config to dashboard_public - v2"
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.513469032Z level=info msg="Migration successfully executed" id="Rename table dashboard_public_config to dashboard_public - v2" duration=22.993716ms
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.516343009Z level=info msg="Executing migration" id="add annotations_enabled column"
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.52528717Z level=info msg="Migration successfully executed" id="add annotations_enabled column" duration=8.944861ms
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.527810067Z level=info msg="Executing migration" id="add time_selection_enabled column"
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.535845152Z level=info msg="Migration successfully executed" id="add time_selection_enabled column" duration=8.009725ms
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.537862257Z level=info msg="Executing migration" id="delete orphaned public dashboards"
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.538169005Z level=info msg="Migration successfully executed" id="delete orphaned public dashboards" duration=307.998µs
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.539997954Z level=info msg="Executing migration" id="add share column"
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.547735232Z level=info msg="Migration successfully executed" id="add share column" duration=7.729058ms
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.550153546Z level=info msg="Executing migration" id="backfill empty share column fields with default of public"
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.550554357Z level=info msg="Migration successfully executed" id="backfill empty share column fields with default of public" duration=402.041µs
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.552263193Z level=info msg="Executing migration" id="create file table"
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.553436355Z level=info msg="Migration successfully executed" id="create file table" duration=1.170872ms
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.557077572Z level=info msg="Executing migration" id="file table idx: path natural pk"
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.558603882Z level=info msg="Migration successfully executed" id="file table idx: path natural pk" duration=1.52891ms
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.561248453Z level=info msg="Executing migration" id="file table idx: parent_folder_path_hash fast folder retrieval"
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.562649651Z level=info msg="Migration successfully executed" id="file table idx: parent_folder_path_hash fast folder retrieval" duration=1.403218ms
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.565140508Z level=info msg="Executing migration" id="create file_meta table"
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.566189326Z level=info msg="Migration successfully executed" id="create file_meta table" duration=1.049368ms
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.568526629Z level=info msg="Executing migration" id="file table idx: path key"
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.569945216Z level=info msg="Migration successfully executed" id="file table idx: path key" duration=1.419017ms
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.572062573Z level=info msg="Executing migration" id="set path collation in file table"
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.572205167Z level=info msg="Migration successfully executed" id="set path collation in file table" duration=144.544µs
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.574360425Z level=info msg="Executing migration" id="migrate contents column to mediumblob for MySQL"
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.57452655Z level=info msg="Migration successfully executed" id="migrate contents column to mediumblob for MySQL" duration=167.385µs
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.576348719Z level=info msg="Executing migration" id="managed permissions migration"
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.577054748Z level=info msg="Migration successfully executed" id="managed permissions migration" duration=703.008µs
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.57937809Z level=info msg="Executing migration" id="managed folder permissions alert actions migration"
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.579828172Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions migration" duration=455.382µs
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.581796025Z level=info msg="Executing migration" id="RBAC action name migrator"
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.583322905Z level=info msg="Migration successfully executed" id="RBAC action name migrator" duration=1.52723ms
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.585511454Z level=info msg="Executing migration" id="Add UID column to playlist"
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.593625242Z level=info msg="Migration successfully executed" id="Add UID column to playlist" duration=8.107358ms
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.595952965Z level=info msg="Executing migration" id="Update uid column values in playlist"
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.596300344Z level=info msg="Migration successfully executed" id="Update uid column values in playlist" duration=349.68µs
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.598262877Z level=info msg="Executing migration" id="Add index for uid in playlist"
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.599843818Z level=info msg="Migration successfully executed" id="Add index for uid in playlist" duration=1.580981ms
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.602560151Z level=info msg="Executing migration" id="update group index for alert rules"
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.603151907Z level=info msg="Migration successfully executed" id="update group index for alert rules" duration=592.556µs
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.605525491Z level=info msg="Executing migration" id="managed folder permissions alert actions repeated migration"
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.605920441Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions repeated migration" duration=468.312µs
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.608656735Z level=info msg="Executing migration" id="admin only folder/dashboard permission"
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.609414546Z level=info msg="Migration successfully executed" id="admin only folder/dashboard permission" duration=760.352µs
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.611794199Z level=info msg="Executing migration" id="add action column to seed_assignment"
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.620714349Z level=info msg="Migration successfully executed" id="add action column to seed_assignment" duration=8.902989ms
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.623404821Z level=info msg="Executing migration" id="add scope column to seed_assignment"
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.631750075Z level=info msg="Migration successfully executed" id="add scope column to seed_assignment" duration=8.340053ms
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.633891071Z level=info msg="Executing migration" id="remove unique index builtin_role_role_name before nullable update"
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.635134605Z level=info msg="Migration successfully executed" id="remove unique index builtin_role_role_name before nullable update" duration=1.243584ms
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.636988634Z level=info msg="Executing migration" id="update seed_assignment role_name column to nullable"
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.720094973Z level=info msg="Migration successfully executed" id="update seed_assignment role_name column to nullable" duration=83.098199ms
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.72336375Z level=info msg="Executing migration" id="add unique index builtin_role_name back"
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.72445706Z level=info msg="Migration successfully executed" id="add unique index builtin_role_name back" duration=1.09442ms
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.726417072Z level=info msg="Executing migration" id="add unique index builtin_role_action_scope"
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.727329257Z level=info msg="Migration successfully executed" id="add unique index builtin_role_action_scope" duration=911.835µs
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.730132442Z level=info msg="Executing migration" id="add primary key to seed_assigment"
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.753853298Z level=info msg="Migration successfully executed" id="add primary key to seed_assigment" duration=23.715386ms
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.757843306Z level=info msg="Executing migration" id="add origin column to seed_assignment"
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.765573462Z level=info msg="Migration successfully executed" id="add origin column to seed_assignment" duration=7.731927ms
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.767967876Z level=info msg="Executing migration" id="add origin to plugin seed_assignment"
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.768343176Z level=info msg="Migration successfully executed" id="add origin to plugin seed_assignment" duration=375.8µs
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.770041492Z level=info msg="Executing migration" id="prevent seeding OnCall access"
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.770278639Z level=info msg="Migration successfully executed" id="prevent seeding OnCall access" duration=237.137µs
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.771865942Z level=info msg="Executing migration" id="managed folder permissions alert actions repeated fixed migration"
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.772106438Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions repeated fixed migration" duration=240.926µs
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.77406474Z level=info msg="Executing migration" id="managed folder permissions library panel actions migration"
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.774347797Z level=info msg="Migration successfully executed" id="managed folder permissions library panel actions migration" duration=283.317µs
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.776124185Z level=info msg="Executing migration" id="migrate external alertmanagers to datsourcse"
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.776387592Z level=info msg="Migration successfully executed" id="migrate external alertmanagers to datsourcse" duration=260.657µs
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.778305714Z level=info msg="Executing migration" id="create folder table"
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.779225488Z level=info msg="Migration successfully executed" id="create folder table" duration=920.174µs
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.780982816Z level=info msg="Executing migration" id="Add index for parent_uid"
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.782026213Z level=info msg="Migration successfully executed" id="Add index for parent_uid" duration=1.043127ms
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.784423958Z level=info msg="Executing migration" id="Add unique index for folder.uid and folder.org_id"
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.785356733Z level=info msg="Migration successfully executed" id="Add unique index for folder.uid and folder.org_id" duration=933.144µs
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.787604793Z level=info msg="Executing migration" id="Update folder title length"
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.787664425Z level=info msg="Migration successfully executed" id="Update folder title length" duration=60.372µs
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.789250648Z level=info msg="Executing migration" id="Add unique index for folder.title and folder.parent_uid"
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.790244174Z level=info msg="Migration successfully executed" id="Add unique index for folder.title and folder.parent_uid" duration=996.157µs
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.792560676Z level=info msg="Executing migration" id="Remove unique index for folder.title and folder.parent_uid"
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.793874941Z level=info msg="Migration successfully executed" id="Remove unique index for folder.title and folder.parent_uid" duration=1.315025ms
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.795752922Z level=info msg="Executing migration" id="Add unique index for title, parent_uid, and org_id"
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.79684552Z level=info msg="Migration successfully executed" id="Add unique index for title, parent_uid, and org_id" duration=1.091608ms
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.798938777Z level=info msg="Executing migration" id="Sync dashboard and folder table"
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.799412549Z level=info msg="Migration successfully executed" id="Sync dashboard and folder table" duration=473.902µs
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.80092198Z level=info msg="Executing migration" id="Remove ghost folders from the folder table"
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.801174677Z level=info msg="Migration successfully executed" id="Remove ghost folders from the folder table" duration=252.927µs
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.802970226Z level=info msg="Executing migration" id="Remove unique index UQE_folder_uid_org_id"
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.803991273Z level=info msg="Migration successfully executed" id="Remove unique index UQE_folder_uid_org_id" duration=1.020908ms
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.805971326Z level=info msg="Executing migration" id="Add unique index UQE_folder_org_id_uid"
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.806994553Z level=info msg="Migration successfully executed" id="Add unique index UQE_folder_org_id_uid" duration=1.023077ms
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.808517184Z level=info msg="Executing migration" id="Remove unique index UQE_folder_title_parent_uid_org_id"
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.809404278Z level=info msg="Migration successfully executed" id="Remove unique index UQE_folder_title_parent_uid_org_id" duration=887.354µs
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.811185166Z level=info msg="Executing migration" id="Add unique index UQE_folder_org_id_parent_uid_title"
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.812167702Z level=info msg="Migration successfully executed" id="Add unique index UQE_folder_org_id_parent_uid_title" duration=981.756µs
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.813908529Z level=info msg="Executing migration" id="Remove index IDX_folder_parent_uid_org_id"
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.814895415Z level=info msg="Migration successfully executed" id="Remove index IDX_folder_parent_uid_org_id" duration=987.346µs
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.816686143Z level=info msg="Executing migration" id="create anon_device table"
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.81769831Z level=info msg="Migration successfully executed" id="create anon_device table" duration=1.011967ms
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.819663783Z level=info msg="Executing migration" id="add unique index anon_device.device_id"
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.820913716Z level=info msg="Migration successfully executed" id="add unique index anon_device.device_id" duration=1.249723ms
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.823820764Z level=info msg="Executing migration" id="add index anon_device.updated_at"
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.824734768Z level=info msg="Migration successfully executed" id="add index anon_device.updated_at" duration=913.544µs
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.827086092Z level=info msg="Executing migration" id="create signing_key table"
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.828028457Z level=info msg="Migration successfully executed" id="create signing_key table" duration=944.425µs
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.830886614Z level=info msg="Executing migration" id="add unique index signing_key.key_id"
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.831777508Z level=info msg="Migration successfully executed" id="add unique index signing_key.key_id" duration=892.404µs
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.834134301Z level=info msg="Executing migration" id="set legacy alert migration status in kvstore"
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.835173199Z level=info msg="Migration successfully executed" id="set legacy alert migration status in kvstore" duration=1.039007ms
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.836886945Z level=info msg="Executing migration" id="migrate record of created folders during legacy migration to kvstore"
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.837133411Z level=info msg="Migration successfully executed" id="migrate record of created folders during legacy migration to kvstore" duration=247.206µs
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.83967395Z level=info msg="Executing migration" id="Add folder_uid for dashboard"
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.846227305Z level=info msg="Migration successfully executed" id="Add folder_uid for dashboard" duration=6.552475ms
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.848393783Z level=info msg="Executing migration" id="Populate dashboard folder_uid column"
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.849053271Z level=info msg="Migration successfully executed" id="Populate dashboard folder_uid column" duration=659.688µs
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.851178998Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_folder_uid_title"
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.852157444Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_folder_uid_title" duration=975.586µs
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.854305882Z level=info msg="Executing migration" id="Delete unique index for dashboard_org_id_folder_id_title"
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.855761101Z level=info msg="Migration successfully executed" id="Delete unique index for dashboard_org_id_folder_id_title" duration=1.417218ms
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.857564799Z level=info msg="Executing migration" id="Delete unique index for dashboard_org_id_folder_uid_title"
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.858627338Z level=info msg="Migration successfully executed" id="Delete unique index for dashboard_org_id_folder_uid_title" duration=1.060989ms
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.860667922Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_folder_uid_title_is_folder"
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.861715751Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_folder_uid_title_is_folder" duration=1.047529ms
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.863718874Z level=info msg="Executing migration" id="Restore index for dashboard_org_id_folder_id_title"
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.86465876Z level=info msg="Migration successfully executed" id="Restore index for dashboard_org_id_folder_id_title" duration=940.156µs
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.866391365Z level=info msg="Executing migration" id="create sso_setting table"
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.867329701Z level=info msg="Migration successfully executed" id="create sso_setting table" duration=938.616µs
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.869605761Z level=info msg="Executing migration" id="copy kvstore migration status to each org"
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.87026493Z level=info msg="Migration successfully executed" id="copy kvstore migration status to each org" duration=659.589µs
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.872114219Z level=info msg="Executing migration" id="add back entry for orgid=0 migrated status"
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.872355845Z level=info msg="Migration successfully executed" id="add back entry for orgid=0 migrated status" duration=241.966µs
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.874268247Z level=info msg="Executing migration" id="alter kv_store.value to longtext"
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.874357859Z level=info msg="Migration successfully executed" id="alter kv_store.value to longtext" duration=90.252µs
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.876749274Z level=info msg="Executing migration" id="add notification_settings column to alert_rule table"
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.883297029Z level=info msg="Migration successfully executed" id="add notification_settings column to alert_rule table" duration=6.546986ms
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.885252511Z level=info msg="Executing migration" id="add notification_settings column to alert_rule_version table"
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.892154577Z level=info msg="Migration successfully executed" id="add notification_settings column to alert_rule_version table" duration=6.902316ms
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.893892073Z level=info msg="Executing migration" id="removing scope from alert.instances:read action migration"
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.894254603Z level=info msg="Migration successfully executed" id="removing scope from alert.instances:read action migration" duration=362.69µs
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=migrator t=2025-12-06T09:42:59.896099262Z level=info msg="migrations completed" performed=547 skipped=0 duration=2.798020745s
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=sqlstore t=2025-12-06T09:42:59.897254273Z level=info msg="Created default organization"
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=secrets t=2025-12-06T09:42:59.899418391Z level=info msg="Envelope encryption state" enabled=true currentprovider=secretKey.v1
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=plugin.store t=2025-12-06T09:42:59.917468316Z level=info msg="Loading plugins..."
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=local.finder t=2025-12-06T09:42:59.993614757Z level=warn msg="Skipping finding plugins as directory does not exist" path=/usr/share/grafana/plugins-bundled
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=plugin.store t=2025-12-06T09:42:59.993737981Z level=info msg="Plugins loaded" count=55 duration=76.270195ms
Dec 06 09:42:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=query_data t=2025-12-06T09:42:59.996431233Z level=info msg="Query Service initialization"
Dec 06 09:43:00 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=live.push_http t=2025-12-06T09:43:00.00978379Z level=info msg="Live Push Gateway initialization"
Dec 06 09:43:00 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=ngalert.migration t=2025-12-06T09:43:00.013936102Z level=info msg=Starting
Dec 06 09:43:00 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=ngalert.migration t=2025-12-06T09:43:00.014668581Z level=info msg="Applying transition" currentType=Legacy desiredType=UnifiedAlerting cleanOnDowngrade=false cleanOnUpgrade=false
Dec 06 09:43:00 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=ngalert.migration orgID=1 t=2025-12-06T09:43:00.015390931Z level=info msg="Migrating alerts for organisation"
Dec 06 09:43:00 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=ngalert.migration orgID=1 t=2025-12-06T09:43:00.016744487Z level=info msg="Alerts found to migrate" alerts=0
Dec 06 09:43:00 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=ngalert.migration t=2025-12-06T09:43:00.019956383Z level=info msg="Completed alerting migration"
Dec 06 09:43:00 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:43:00 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f921c001820 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:43:00 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=ngalert.state.manager t=2025-12-06T09:43:00.055274721Z level=info msg="Running in alternative execution of Error/NoData mode"
Dec 06 09:43:00 compute-0 ceph-osd[82803]: log_channel(cluster) log [DBG] : 8.4 deep-scrub starts
Dec 06 09:43:00 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=infra.usagestats.collector t=2025-12-06T09:43:00.058940349Z level=info msg="registering usage stat providers" usageStatsProvidersLen=2
Dec 06 09:43:00 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=provisioning.datasources t=2025-12-06T09:43:00.06120423Z level=info msg="inserting datasource from configuration" name=Loki uid=P8E80F9AEF21F6940
Dec 06 09:43:00 compute-0 ceph-osd[82803]: log_channel(cluster) log [DBG] : 8.4 deep-scrub ok
Dec 06 09:43:00 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=provisioning.alerting t=2025-12-06T09:43:00.07838636Z level=info msg="starting to provision alerting"
Dec 06 09:43:00 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=provisioning.alerting t=2025-12-06T09:43:00.078416071Z level=info msg="finished to provision alerting"
Dec 06 09:43:00 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=grafanaStorageLogger t=2025-12-06T09:43:00.078647107Z level=info msg="Storage starting"
Dec 06 09:43:00 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=ngalert.state.manager t=2025-12-06T09:43:00.079259293Z level=info msg="Warming state cache for startup"
Dec 06 09:43:00 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=ngalert.multiorg.alertmanager t=2025-12-06T09:43:00.081109203Z level=info msg="Starting MultiOrg Alertmanager"
Dec 06 09:43:00 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=http.server t=2025-12-06T09:43:00.085730677Z level=info msg="HTTP Server TLS settings" MinTLSVersion=TLS1.2 configuredciphers=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,TLS_RSA_WITH_AES_128_GCM_SHA256,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_256_CBC_SHA
Dec 06 09:43:00 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=http.server t=2025-12-06T09:43:00.086271172Z level=info msg="HTTP Server Listen" address=192.168.122.100:3000 protocol=https subUrl= socket=
Dec 06 09:43:00 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=sqlstore.transactions t=2025-12-06T09:43:00.091083671Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=0 code="database is locked"
Dec 06 09:43:00 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=sqlstore.transactions t=2025-12-06T09:43:00.102466036Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=1 code="database is locked"
Dec 06 09:43:00 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=ngalert.state.manager t=2025-12-06T09:43:00.124714623Z level=info msg="State cache has been initialized" states=0 duration=45.45078ms
Dec 06 09:43:00 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=ngalert.scheduler t=2025-12-06T09:43:00.124781575Z level=info msg="Starting scheduler" tickInterval=10s maxAttempts=1
Dec 06 09:43:00 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=ticker t=2025-12-06T09:43:00.124864357Z level=info msg=starting first_tick=2025-12-06T09:43:10Z
Dec 06 09:43:00 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=provisioning.dashboard t=2025-12-06T09:43:00.14328442Z level=info msg="starting to provision dashboards"
Dec 06 09:43:00 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=plugins.update.checker t=2025-12-06T09:43:00.202660192Z level=info msg="Update check succeeded" duration=119.973726ms
Dec 06 09:43:00 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=grafana.update.checker t=2025-12-06T09:43:00.206254509Z level=info msg="Update check succeeded" duration=125.855205ms
Dec 06 09:43:00 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=sqlstore.transactions t=2025-12-06T09:43:00.218804785Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=0 code="database is locked"
Dec 06 09:43:00 compute-0 ceph-mgr[74618]: [progress INFO root] Writing back 26 completed events
Dec 06 09:43:00 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Dec 06 09:43:00 compute-0 ceph-mon[74327]: pgmap v85: 337 pgs: 4 unknown, 2 peering, 331 active+clean; 457 KiB data, 125 MiB used, 60 GiB / 60 GiB avail
Dec 06 09:43:00 compute-0 ceph-mon[74327]: 8.12 scrub starts
Dec 06 09:43:00 compute-0 ceph-mon[74327]: 8.12 scrub ok
Dec 06 09:43:00 compute-0 ceph-mon[74327]: 8.9 scrub starts
Dec 06 09:43:00 compute-0 ceph-mon[74327]: 8.9 scrub ok
Dec 06 09:43:00 compute-0 ceph-mon[74327]: 8.1d scrub starts
Dec 06 09:43:00 compute-0 ceph-mon[74327]: 8.1d scrub ok
Dec 06 09:43:00 compute-0 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:43:00 compute-0 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:43:00 compute-0 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:43:00 compute-0 ceph-mon[74327]: Deploying daemon haproxy.rgw.default.compute-2.mwbfro on compute-2
Dec 06 09:43:00 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e75 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 09:43:00 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:43:00 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=grafana-apiserver t=2025-12-06T09:43:00.353347223Z level=info msg="Adding GroupVersion playlist.grafana.app v0alpha1 to ResourceManager"
Dec 06 09:43:00 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=grafana-apiserver t=2025-12-06T09:43:00.354184165Z level=info msg="Adding GroupVersion featuretoggle.grafana.app v0alpha1 to ResourceManager"
Dec 06 09:43:00 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=provisioning.dashboard t=2025-12-06T09:43:00.372278071Z level=info msg="finished to provision dashboards"
Dec 06 09:43:00 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:43:00 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9230003870 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:43:01 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v86: 337 pgs: 337 active+clean; 458 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 0 B/s wr, 45 op/s; 106 B/s, 5 objects/s recovering
Dec 06 09:43:01 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"} v 0)
Dec 06 09:43:01 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"}]: dispatch
Dec 06 09:43:01 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"} v 0)
Dec 06 09:43:01 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]: dispatch
Dec 06 09:43:01 compute-0 ceph-osd[82803]: log_channel(cluster) log [DBG] : 8.17 scrub starts
Dec 06 09:43:01 compute-0 ceph-osd[82803]: log_channel(cluster) log [DBG] : 8.17 scrub ok
Dec 06 09:43:01 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:43:01 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9230003870 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:43:01 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:43:01 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 09:43:01 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:43:01.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 09:43:02 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:43:02 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9228003c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:43:02 compute-0 ceph-osd[82803]: log_channel(cluster) log [DBG] : 9.10 scrub starts
Dec 06 09:43:02 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:43:02 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f921c001820 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:43:02 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e75 do_prune osdmap full prune enabled
Dec 06 09:43:02 compute-0 ceph-osd[82803]: log_channel(cluster) log [DBG] : 9.10 scrub ok
Dec 06 09:43:02 compute-0 ceph-mon[74327]: 9.b scrub starts
Dec 06 09:43:02 compute-0 ceph-mon[74327]: 8.4 deep-scrub starts
Dec 06 09:43:02 compute-0 ceph-mon[74327]: 9.b scrub ok
Dec 06 09:43:02 compute-0 ceph-mon[74327]: 8.4 deep-scrub ok
Dec 06 09:43:02 compute-0 ceph-mon[74327]: 9.1c scrub starts
Dec 06 09:43:02 compute-0 ceph-mon[74327]: 9.1c scrub ok
Dec 06 09:43:02 compute-0 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:43:02 compute-0 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"}]: dispatch
Dec 06 09:43:02 compute-0 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]: dispatch
Dec 06 09:43:02 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"}]': finished
Dec 06 09:43:02 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]': finished
Dec 06 09:43:02 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e76 e76: 3 total, 3 up, 3 in
Dec 06 09:43:02 compute-0 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e76: 3 total, 3 up, 3 in
Dec 06 09:43:03 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v88: 337 pgs: 337 active+clean; 458 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 0 B/s wr, 44 op/s; 104 B/s, 5 objects/s recovering
Dec 06 09:43:03 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"} v 0)
Dec 06 09:43:03 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"}]: dispatch
Dec 06 09:43:03 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"} v 0)
Dec 06 09:43:03 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]: dispatch
Dec 06 09:43:03 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:43:03 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:43:03 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:43:03.050 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:43:03 compute-0 ceph-osd[82803]: log_channel(cluster) log [DBG] : 8.10 scrub starts
Dec 06 09:43:03 compute-0 ceph-osd[82803]: log_channel(cluster) log [DBG] : 8.10 scrub ok
Dec 06 09:43:03 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec 06 09:43:03 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:43:03 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec 06 09:43:03 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:43:03 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.rgw.default}] v 0)
Dec 06 09:43:03 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:43:03 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ingress.rgw.default/keepalived_password}] v 0)
Dec 06 09:43:03 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:43:03 compute-0 ceph-mgr[74618]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Dec 06 09:43:03 compute-0 ceph-mgr[74618]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Dec 06 09:43:03 compute-0 ceph-mgr[74618]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Dec 06 09:43:03 compute-0 ceph-mgr[74618]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Dec 06 09:43:03 compute-0 ceph-mgr[74618]: [cephadm INFO cephadm.serve] Deploying daemon keepalived.rgw.default.compute-0.mycoxk on compute-0
Dec 06 09:43:03 compute-0 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Deploying daemon keepalived.rgw.default.compute-0.mycoxk on compute-0
Dec 06 09:43:03 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:43:03 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9230003870 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:43:03 compute-0 sudo[98100]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 09:43:03 compute-0 sudo[98100]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:43:03 compute-0 sudo[98100]: pam_unix(sudo:session): session closed for user root
Dec 06 09:43:03 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:43:03 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:43:03 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:43:03.250 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:43:03 compute-0 sudo[98125]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/keepalived:2.2.4 --timeout 895 _orch deploy --fsid 5ecd3f74-dade-5fc4-92ce-8950ae424258
Dec 06 09:43:03 compute-0 sudo[98125]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:43:03 compute-0 podman[98194]: 2025-12-06 09:43:03.758241522 +0000 UTC m=+0.027501769 image pull 4a3a1ff181d97c6dcfa9138ad76eb99fa2c1e840298461d5a7a56133bc05b9a2 quay.io/ceph/keepalived:2.2.4
Dec 06 09:43:04 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:43:04 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9230003870 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:43:04 compute-0 ceph-osd[82803]: log_channel(cluster) log [DBG] : 9.f scrub starts
Dec 06 09:43:04 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:43:04 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9228003c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:43:05 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v89: 337 pgs: 4 unknown, 333 active+clean; 457 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 4 B/s, 0 objects/s recovering
Dec 06 09:43:05 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:43:05 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:43:05 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:43:05.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:43:05 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:43:05 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9228003c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:43:05 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:43:05 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:43:05 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:43:05.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:43:05 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e76 do_prune osdmap full prune enabled
Dec 06 09:43:05 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 76 pg[6.e( v 50'39 (0'0,50'39] local-lis/les=62/63 n=1 ec=54/21 lis/c=62/62 les/c/f=63/63/0 sis=76 pruub=8.332633018s) [0] r=-1 lpr=76 pi=[62,76)/1 crt=50'39 mlcod 50'39 active pruub 216.538146973s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 06 09:43:05 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 76 pg[6.e( v 50'39 (0'0,50'39] local-lis/les=62/63 n=1 ec=54/21 lis/c=62/62 les/c/f=63/63/0 sis=76 pruub=8.332432747s) [0] r=-1 lpr=76 pi=[62,76)/1 crt=50'39 mlcod 0'0 unknown NOTIFY pruub 216.538146973s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 09:43:05 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 76 pg[6.6( v 50'39 (0'0,50'39] local-lis/les=62/63 n=1 ec=54/21 lis/c=62/62 les/c/f=63/63/0 sis=76 pruub=8.332157135s) [0] r=-1 lpr=76 pi=[62,76)/1 crt=50'39 mlcod 50'39 active pruub 216.538192749s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 06 09:43:05 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 76 pg[6.6( v 50'39 (0'0,50'39] local-lis/les=62/63 n=1 ec=54/21 lis/c=62/62 les/c/f=63/63/0 sis=76 pruub=8.332110405s) [0] r=-1 lpr=76 pi=[62,76)/1 crt=50'39 mlcod 0'0 unknown NOTIFY pruub 216.538192749s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 09:43:05 compute-0 ceph-osd[82803]: log_channel(cluster) log [DBG] : 8.18 scrub starts
Dec 06 09:43:05 compute-0 podman[98194]: 2025-12-06 09:43:05.894884632 +0000 UTC m=+2.164144849 container create 97cac1bf2414976eea5b5c6cd6aa0b5c55ff90eab7f5d173223699dfcbdee8ae (image=quay.io/ceph/keepalived:2.2.4, name=wizardly_brahmagupta, io.buildah.version=1.28.2, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, version=2.2.4, distribution-scope=public, summary=Provides keepalived on RHEL 9 for Ceph., vendor=Red Hat, Inc., name=keepalived, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.openshift.tags=Ceph keepalived, io.k8s.display-name=Keepalived on RHEL 9, io.openshift.expose-services=, release=1793, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, build-date=2023-02-22T09:23:20, com.redhat.component=keepalived-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, description=keepalived for Ceph)
Dec 06 09:43:05 compute-0 ceph-osd[82803]: log_channel(cluster) log [DBG] : 9.f scrub ok
Dec 06 09:43:05 compute-0 ceph-mon[74327]: pgmap v86: 337 pgs: 337 active+clean; 458 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 0 B/s wr, 45 op/s; 106 B/s, 5 objects/s recovering
Dec 06 09:43:05 compute-0 ceph-mon[74327]: 8.11 scrub starts
Dec 06 09:43:05 compute-0 ceph-mon[74327]: 8.11 scrub ok
Dec 06 09:43:05 compute-0 ceph-mon[74327]: 8.17 scrub starts
Dec 06 09:43:05 compute-0 ceph-mon[74327]: 8.17 scrub ok
Dec 06 09:43:05 compute-0 ceph-mon[74327]: 8.13 scrub starts
Dec 06 09:43:05 compute-0 ceph-mon[74327]: 8.13 scrub ok
Dec 06 09:43:05 compute-0 ceph-mon[74327]: 9.10 scrub starts
Dec 06 09:43:05 compute-0 ceph-mon[74327]: 8.a scrub starts
Dec 06 09:43:05 compute-0 ceph-mon[74327]: 9.4 scrub starts
Dec 06 09:43:05 compute-0 ceph-mon[74327]: 9.4 scrub ok
Dec 06 09:43:05 compute-0 ceph-mon[74327]: 9.10 scrub ok
Dec 06 09:43:05 compute-0 ceph-mon[74327]: 8.a scrub ok
Dec 06 09:43:05 compute-0 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"}]': finished
Dec 06 09:43:05 compute-0 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]': finished
Dec 06 09:43:05 compute-0 ceph-mon[74327]: osdmap e76: 3 total, 3 up, 3 in
Dec 06 09:43:05 compute-0 ceph-mon[74327]: pgmap v88: 337 pgs: 337 active+clean; 458 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 0 B/s wr, 44 op/s; 104 B/s, 5 objects/s recovering
Dec 06 09:43:05 compute-0 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"}]: dispatch
Dec 06 09:43:05 compute-0 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]: dispatch
Dec 06 09:43:05 compute-0 ceph-mon[74327]: 7.5 deep-scrub starts
Dec 06 09:43:05 compute-0 ceph-mon[74327]: 7.5 deep-scrub ok
Dec 06 09:43:05 compute-0 ceph-mon[74327]: 8.10 scrub starts
Dec 06 09:43:05 compute-0 ceph-mon[74327]: 8.10 scrub ok
Dec 06 09:43:05 compute-0 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:43:05 compute-0 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:43:05 compute-0 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:43:05 compute-0 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:43:05 compute-0 ceph-mon[74327]: 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Dec 06 09:43:05 compute-0 ceph-mon[74327]: 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Dec 06 09:43:05 compute-0 ceph-mon[74327]: Deploying daemon keepalived.rgw.default.compute-0.mycoxk on compute-0
Dec 06 09:43:05 compute-0 ceph-osd[82803]: log_channel(cluster) log [DBG] : 8.18 scrub ok
Dec 06 09:43:05 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"}]': finished
Dec 06 09:43:05 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]': finished
Dec 06 09:43:05 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e77 e77: 3 total, 3 up, 3 in
Dec 06 09:43:05 compute-0 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e77: 3 total, 3 up, 3 in
Dec 06 09:43:05 compute-0 systemd[1]: Started libpod-conmon-97cac1bf2414976eea5b5c6cd6aa0b5c55ff90eab7f5d173223699dfcbdee8ae.scope.
Dec 06 09:43:05 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 77 pg[10.16( empty local-lis/les=0/0 n=0 ec=58/45 lis/c=68/68 les/c/f=69/69/0 sis=77) [1] r=0 lpr=77 pi=[68,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 09:43:05 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 77 pg[10.e( empty local-lis/les=0/0 n=0 ec=58/45 lis/c=68/68 les/c/f=69/69/0 sis=77) [1] r=0 lpr=77 pi=[68,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 09:43:05 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 77 pg[10.6( empty local-lis/les=0/0 n=0 ec=58/45 lis/c=68/68 les/c/f=69/69/0 sis=77) [1] r=0 lpr=77 pi=[68,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 09:43:05 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 77 pg[10.1e( empty local-lis/les=0/0 n=0 ec=58/45 lis/c=68/68 les/c/f=69/69/0 sis=77) [1] r=0 lpr=77 pi=[68,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 09:43:05 compute-0 systemd[1]: Started libcrun container.
Dec 06 09:43:05 compute-0 podman[98194]: 2025-12-06 09:43:05.999092946 +0000 UTC m=+2.268353193 container init 97cac1bf2414976eea5b5c6cd6aa0b5c55ff90eab7f5d173223699dfcbdee8ae (image=quay.io/ceph/keepalived:2.2.4, name=wizardly_brahmagupta, build-date=2023-02-22T09:23:20, com.redhat.component=keepalived-container, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, description=keepalived for Ceph, io.openshift.expose-services=, version=2.2.4, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=Ceph keepalived, io.k8s.display-name=Keepalived on RHEL 9, release=1793, vcs-type=git, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, name=keepalived, io.buildah.version=1.28.2, summary=Provides keepalived on RHEL 9 for Ceph., vendor=Red Hat, Inc.)
Dec 06 09:43:06 compute-0 podman[98194]: 2025-12-06 09:43:06.01373983 +0000 UTC m=+2.283000057 container start 97cac1bf2414976eea5b5c6cd6aa0b5c55ff90eab7f5d173223699dfcbdee8ae (image=quay.io/ceph/keepalived:2.2.4, name=wizardly_brahmagupta, io.openshift.tags=Ceph keepalived, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.display-name=Keepalived on RHEL 9, summary=Provides keepalived on RHEL 9 for Ceph., com.redhat.component=keepalived-container, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.buildah.version=1.28.2, io.openshift.expose-services=, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, version=2.2.4, vcs-type=git, name=keepalived, description=keepalived for Ceph, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2023-02-22T09:23:20, release=1793)
Dec 06 09:43:06 compute-0 podman[98194]: 2025-12-06 09:43:06.017639074 +0000 UTC m=+2.286899331 container attach 97cac1bf2414976eea5b5c6cd6aa0b5c55ff90eab7f5d173223699dfcbdee8ae (image=quay.io/ceph/keepalived:2.2.4, name=wizardly_brahmagupta, version=2.2.4, distribution-scope=public, summary=Provides keepalived on RHEL 9 for Ceph., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, build-date=2023-02-22T09:23:20, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, io.openshift.tags=Ceph keepalived, description=keepalived for Ceph, release=1793, com.redhat.component=keepalived-container, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, name=keepalived, io.buildah.version=1.28.2, io.k8s.display-name=Keepalived on RHEL 9, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, io.openshift.expose-services=)
Dec 06 09:43:06 compute-0 wizardly_brahmagupta[98213]: 0 0
Dec 06 09:43:06 compute-0 systemd[1]: libpod-97cac1bf2414976eea5b5c6cd6aa0b5c55ff90eab7f5d173223699dfcbdee8ae.scope: Deactivated successfully.
Dec 06 09:43:06 compute-0 podman[98194]: 2025-12-06 09:43:06.020199863 +0000 UTC m=+2.289460090 container died 97cac1bf2414976eea5b5c6cd6aa0b5c55ff90eab7f5d173223699dfcbdee8ae (image=quay.io/ceph/keepalived:2.2.4, name=wizardly_brahmagupta, io.openshift.tags=Ceph keepalived, architecture=x86_64, description=keepalived for Ceph, io.k8s.display-name=Keepalived on RHEL 9, vendor=Red Hat, Inc., release=1793, io.buildah.version=1.28.2, distribution-scope=public, name=keepalived, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, version=2.2.4, build-date=2023-02-22T09:23:20, com.redhat.component=keepalived-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.openshift.expose-services=, summary=Provides keepalived on RHEL 9 for Ceph., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9)
Dec 06 09:43:06 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:43:06 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9218000b60 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:43:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-f93fe5beea4581dd2195d9d8fb382a78eb78182bf2aaf7926f27d05b24685476-merged.mount: Deactivated successfully.
Dec 06 09:43:06 compute-0 podman[98194]: 2025-12-06 09:43:06.070637275 +0000 UTC m=+2.339897502 container remove 97cac1bf2414976eea5b5c6cd6aa0b5c55ff90eab7f5d173223699dfcbdee8ae (image=quay.io/ceph/keepalived:2.2.4, name=wizardly_brahmagupta, io.k8s.display-name=Keepalived on RHEL 9, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.openshift.tags=Ceph keepalived, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, vendor=Red Hat, Inc., description=keepalived for Ceph, name=keepalived, summary=Provides keepalived on RHEL 9 for Ceph., release=1793, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.buildah.version=1.28.2, version=2.2.4, vcs-type=git, build-date=2023-02-22T09:23:20, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.openshift.expose-services=, com.redhat.component=keepalived-container)
Dec 06 09:43:06 compute-0 systemd[1]: libpod-conmon-97cac1bf2414976eea5b5c6cd6aa0b5c55ff90eab7f5d173223699dfcbdee8ae.scope: Deactivated successfully.
Dec 06 09:43:06 compute-0 systemd[1]: Reloading.
Dec 06 09:43:06 compute-0 systemd-rc-local-generator[98262]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 09:43:06 compute-0 systemd-sysv-generator[98266]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 09:43:06 compute-0 systemd[1]: Reloading.
Dec 06 09:43:06 compute-0 systemd-rc-local-generator[98302]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 09:43:06 compute-0 systemd-sysv-generator[98307]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 09:43:06 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:43:06 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f921c0030a0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:43:06 compute-0 systemd[1]: Starting Ceph keepalived.rgw.default.compute-0.mycoxk for 5ecd3f74-dade-5fc4-92ce-8950ae424258...
Dec 06 09:43:06 compute-0 ceph-mon[74327]: 9.19 scrub starts
Dec 06 09:43:06 compute-0 ceph-mon[74327]: 9.19 scrub ok
Dec 06 09:43:06 compute-0 ceph-mon[74327]: 8.1f scrub starts
Dec 06 09:43:06 compute-0 ceph-mon[74327]: 8.1f scrub ok
Dec 06 09:43:06 compute-0 ceph-mon[74327]: 9.f scrub starts
Dec 06 09:43:06 compute-0 ceph-mon[74327]: 9.1e scrub starts
Dec 06 09:43:06 compute-0 ceph-mon[74327]: 9.1e scrub ok
Dec 06 09:43:06 compute-0 ceph-mon[74327]: 8.5 scrub starts
Dec 06 09:43:06 compute-0 ceph-mon[74327]: 8.5 scrub ok
Dec 06 09:43:06 compute-0 ceph-mon[74327]: pgmap v89: 337 pgs: 4 unknown, 333 active+clean; 457 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 4 B/s, 0 objects/s recovering
Dec 06 09:43:06 compute-0 ceph-mon[74327]: 7.1b scrub starts
Dec 06 09:43:06 compute-0 ceph-mon[74327]: 7.1b scrub ok
Dec 06 09:43:06 compute-0 ceph-mon[74327]: 8.18 scrub starts
Dec 06 09:43:06 compute-0 ceph-mon[74327]: 9.f scrub ok
Dec 06 09:43:06 compute-0 ceph-mon[74327]: 8.18 scrub ok
Dec 06 09:43:06 compute-0 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"}]': finished
Dec 06 09:43:06 compute-0 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]': finished
Dec 06 09:43:06 compute-0 ceph-mon[74327]: osdmap e77: 3 total, 3 up, 3 in
Dec 06 09:43:06 compute-0 ceph-mon[74327]: 8.b scrub starts
Dec 06 09:43:06 compute-0 ceph-mon[74327]: 8.b scrub ok
Dec 06 09:43:06 compute-0 ceph-osd[82803]: log_channel(cluster) log [DBG] : 8.1b scrub starts
Dec 06 09:43:06 compute-0 ceph-osd[82803]: log_channel(cluster) log [DBG] : 8.1b scrub ok
Dec 06 09:43:06 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e77 do_prune osdmap full prune enabled
Dec 06 09:43:06 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e78 e78: 3 total, 3 up, 3 in
Dec 06 09:43:06 compute-0 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e78: 3 total, 3 up, 3 in
Dec 06 09:43:06 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 78 pg[10.16( empty local-lis/les=0/0 n=0 ec=58/45 lis/c=68/68 les/c/f=69/69/0 sis=78) [1]/[0] r=-1 lpr=78 pi=[68,78)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 06 09:43:06 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 78 pg[10.16( empty local-lis/les=0/0 n=0 ec=58/45 lis/c=68/68 les/c/f=69/69/0 sis=78) [1]/[0] r=-1 lpr=78 pi=[68,78)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 06 09:43:06 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 78 pg[10.e( empty local-lis/les=0/0 n=0 ec=58/45 lis/c=68/68 les/c/f=69/69/0 sis=78) [1]/[0] r=-1 lpr=78 pi=[68,78)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 06 09:43:06 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 78 pg[10.e( empty local-lis/les=0/0 n=0 ec=58/45 lis/c=68/68 les/c/f=69/69/0 sis=78) [1]/[0] r=-1 lpr=78 pi=[68,78)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 06 09:43:06 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 78 pg[10.6( empty local-lis/les=0/0 n=0 ec=58/45 lis/c=68/68 les/c/f=69/69/0 sis=78) [1]/[0] r=-1 lpr=78 pi=[68,78)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 06 09:43:06 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 78 pg[10.6( empty local-lis/les=0/0 n=0 ec=58/45 lis/c=68/68 les/c/f=69/69/0 sis=78) [1]/[0] r=-1 lpr=78 pi=[68,78)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 06 09:43:06 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 78 pg[10.1e( empty local-lis/les=0/0 n=0 ec=58/45 lis/c=68/68 les/c/f=69/69/0 sis=78) [1]/[0] r=-1 lpr=78 pi=[68,78)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 06 09:43:06 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 78 pg[10.1e( empty local-lis/les=0/0 n=0 ec=58/45 lis/c=68/68 les/c/f=69/69/0 sis=78) [1]/[0] r=-1 lpr=78 pi=[68,78)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 06 09:43:07 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v92: 337 pgs: 2 active+clean+scrubbing, 4 unknown, 331 active+clean; 457 KiB data, 125 MiB used, 60 GiB / 60 GiB avail
Dec 06 09:43:07 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:43:07 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:43:07 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:43:07.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:43:07 compute-0 podman[98359]: 2025-12-06 09:43:07.080142983 +0000 UTC m=+0.054269775 container create 2a2c7e80a0d1eda405007bea3b6eab51637a7245fe52791289026e5bfa50f99c (image=quay.io/ceph/keepalived:2.2.4, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-keepalived-rgw-default-compute-0-mycoxk, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=keepalived, io.k8s.display-name=Keepalived on RHEL 9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.28.2, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, com.redhat.component=keepalived-container, vcs-type=git, io.openshift.tags=Ceph keepalived, version=2.2.4, summary=Provides keepalived on RHEL 9 for Ceph., vendor=Red Hat, Inc., build-date=2023-02-22T09:23:20, release=1793, architecture=x86_64, io.openshift.expose-services=, description=keepalived for Ceph)
Dec 06 09:43:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e4084d4ffe5a66ebdaa93523f2bb714525829da8d3d9e0aaaea12ffcc4dfb0c/merged/etc/keepalived/keepalived.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 09:43:07 compute-0 podman[98359]: 2025-12-06 09:43:07.059136471 +0000 UTC m=+0.033263233 image pull 4a3a1ff181d97c6dcfa9138ad76eb99fa2c1e840298461d5a7a56133bc05b9a2 quay.io/ceph/keepalived:2.2.4
Dec 06 09:43:07 compute-0 podman[98359]: 2025-12-06 09:43:07.159399079 +0000 UTC m=+0.133525911 container init 2a2c7e80a0d1eda405007bea3b6eab51637a7245fe52791289026e5bfa50f99c (image=quay.io/ceph/keepalived:2.2.4, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-keepalived-rgw-default-compute-0-mycoxk, vcs-type=git, io.openshift.expose-services=, summary=Provides keepalived on RHEL 9 for Ceph., name=keepalived, description=keepalived for Ceph, io.k8s.display-name=Keepalived on RHEL 9, com.redhat.component=keepalived-container, io.openshift.tags=Ceph keepalived, io.buildah.version=1.28.2, release=1793, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, architecture=x86_64, version=2.2.4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vendor=Red Hat, Inc., build-date=2023-02-22T09:23:20, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec 06 09:43:07 compute-0 podman[98359]: 2025-12-06 09:43:07.164805784 +0000 UTC m=+0.138932566 container start 2a2c7e80a0d1eda405007bea3b6eab51637a7245fe52791289026e5bfa50f99c (image=quay.io/ceph/keepalived:2.2.4, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-keepalived-rgw-default-compute-0-mycoxk, version=2.2.4, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, description=keepalived for Ceph, vcs-type=git, io.openshift.tags=Ceph keepalived, io.openshift.expose-services=, vendor=Red Hat, Inc., build-date=2023-02-22T09:23:20, io.buildah.version=1.28.2, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=keepalived, io.k8s.display-name=Keepalived on RHEL 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=keepalived-container, summary=Provides keepalived on RHEL 9 for Ceph., distribution-scope=public, release=1793, architecture=x86_64)
Dec 06 09:43:07 compute-0 bash[98359]: 2a2c7e80a0d1eda405007bea3b6eab51637a7245fe52791289026e5bfa50f99c
Dec 06 09:43:07 compute-0 systemd[1]: Started Ceph keepalived.rgw.default.compute-0.mycoxk for 5ecd3f74-dade-5fc4-92ce-8950ae424258.
Dec 06 09:43:07 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:43:07 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9248003fe0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:43:07 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-keepalived-rgw-default-compute-0-mycoxk[98374]: Sat Dec  6 09:43:07 2025: Starting Keepalived v2.2.4 (08/21,2021)
Dec 06 09:43:07 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-keepalived-rgw-default-compute-0-mycoxk[98374]: Sat Dec  6 09:43:07 2025: Running on Linux 5.14.0-645.el9.x86_64 #1 SMP PREEMPT_DYNAMIC Fri Nov 28 14:01:17 UTC 2025 (built for Linux 5.14.0)
Dec 06 09:43:07 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-keepalived-rgw-default-compute-0-mycoxk[98374]: Sat Dec  6 09:43:07 2025: Command line: '/usr/sbin/keepalived' '-n' '-l' '-f' '/etc/keepalived/keepalived.conf'
Dec 06 09:43:07 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-keepalived-rgw-default-compute-0-mycoxk[98374]: Sat Dec  6 09:43:07 2025: Configuration file /etc/keepalived/keepalived.conf
Dec 06 09:43:07 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-keepalived-rgw-default-compute-0-mycoxk[98374]: Sat Dec  6 09:43:07 2025: Failed to bind to process monitoring socket - errno 98 - Address already in use
Dec 06 09:43:07 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-keepalived-rgw-default-compute-0-mycoxk[98374]: Sat Dec  6 09:43:07 2025: NOTICE: setting config option max_auto_priority should result in better keepalived performance
Dec 06 09:43:07 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-keepalived-rgw-default-compute-0-mycoxk[98374]: Sat Dec  6 09:43:07 2025: Starting VRRP child process, pid=4
Dec 06 09:43:07 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-keepalived-rgw-default-compute-0-mycoxk[98374]: Sat Dec  6 09:43:07 2025: Startup complete
Dec 06 09:43:07 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-keepalived-nfs-cephfs-compute-0-ylrrzf[96493]: Sat Dec  6 09:43:07 2025: (VI_0) Entering BACKUP STATE
Dec 06 09:43:07 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-keepalived-rgw-default-compute-0-mycoxk[98374]: Sat Dec  6 09:43:07 2025: (VI_0) Entering BACKUP STATE (init)
Dec 06 09:43:07 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-keepalived-rgw-default-compute-0-mycoxk[98374]: Sat Dec  6 09:43:07 2025: VRRP_Script(check_backend) succeeded
Dec 06 09:43:07 compute-0 sudo[98125]: pam_unix(sudo:session): session closed for user root
Dec 06 09:43:07 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 06 09:43:07 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:43:07 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:43:07 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:43:07.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:43:07 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:43:07 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 06 09:43:07 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:43:07 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.rgw.default}] v 0)
Dec 06 09:43:07 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:43:07 compute-0 ceph-mgr[74618]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Dec 06 09:43:07 compute-0 ceph-mgr[74618]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Dec 06 09:43:07 compute-0 ceph-mgr[74618]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Dec 06 09:43:07 compute-0 ceph-mgr[74618]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Dec 06 09:43:07 compute-0 ceph-mgr[74618]: [cephadm INFO cephadm.serve] Deploying daemon keepalived.rgw.default.compute-2.yurwwh on compute-2
Dec 06 09:43:07 compute-0 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Deploying daemon keepalived.rgw.default.compute-2.yurwwh on compute-2
Dec 06 09:43:07 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-keepalived-nfs-cephfs-compute-0-ylrrzf[96493]: Sat Dec  6 09:43:07 2025: (VI_0) Entering MASTER STATE
Dec 06 09:43:07 compute-0 ceph-osd[82803]: log_channel(cluster) log [DBG] : 9.6 scrub starts
Dec 06 09:43:07 compute-0 ceph-osd[82803]: log_channel(cluster) log [DBG] : 9.6 scrub ok
Dec 06 09:43:07 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e78 do_prune osdmap full prune enabled
Dec 06 09:43:07 compute-0 ceph-mon[74327]: 7.6 scrub starts
Dec 06 09:43:07 compute-0 ceph-mon[74327]: 7.6 scrub ok
Dec 06 09:43:07 compute-0 ceph-mon[74327]: 8.1b scrub starts
Dec 06 09:43:07 compute-0 ceph-mon[74327]: 8.1b scrub ok
Dec 06 09:43:07 compute-0 ceph-mon[74327]: osdmap e78: 3 total, 3 up, 3 in
Dec 06 09:43:07 compute-0 ceph-mon[74327]: 7.11 scrub starts
Dec 06 09:43:07 compute-0 ceph-mon[74327]: 7.11 scrub ok
Dec 06 09:43:07 compute-0 ceph-mon[74327]: pgmap v92: 337 pgs: 2 active+clean+scrubbing, 4 unknown, 331 active+clean; 457 KiB data, 125 MiB used, 60 GiB / 60 GiB avail
Dec 06 09:43:07 compute-0 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:43:07 compute-0 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:43:07 compute-0 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:43:07 compute-0 ceph-mon[74327]: 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Dec 06 09:43:07 compute-0 ceph-mon[74327]: 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Dec 06 09:43:07 compute-0 ceph-mon[74327]: Deploying daemon keepalived.rgw.default.compute-2.yurwwh on compute-2
Dec 06 09:43:07 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e79 e79: 3 total, 3 up, 3 in
Dec 06 09:43:07 compute-0 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e79: 3 total, 3 up, 3 in
Dec 06 09:43:08 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:43:08 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f921c0030a0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:43:08 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:43:08 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9228003c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:43:08 compute-0 ceph-osd[82803]: log_channel(cluster) log [DBG] : 12.15 scrub starts
Dec 06 09:43:08 compute-0 ceph-osd[82803]: log_channel(cluster) log [DBG] : 12.15 scrub ok
Dec 06 09:43:08 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e79 do_prune osdmap full prune enabled
Dec 06 09:43:08 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e80 e80: 3 total, 3 up, 3 in
Dec 06 09:43:08 compute-0 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e80: 3 total, 3 up, 3 in
Dec 06 09:43:09 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 80 pg[10.1e( v 51'1027 (0'0,51'1027] local-lis/les=0/0 n=5 ec=58/45 lis/c=78/68 les/c/f=79/69/0 sis=80) [1] r=0 lpr=80 pi=[68,80)/1 luod=0'0 crt=51'1027 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 06 09:43:09 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 80 pg[10.e( v 51'1027 (0'0,51'1027] local-lis/les=0/0 n=5 ec=58/45 lis/c=78/68 les/c/f=79/69/0 sis=80) [1] r=0 lpr=80 pi=[68,80)/1 luod=0'0 crt=51'1027 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 06 09:43:09 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 80 pg[10.1e( v 51'1027 (0'0,51'1027] local-lis/les=0/0 n=5 ec=58/45 lis/c=78/68 les/c/f=79/69/0 sis=80) [1] r=0 lpr=80 pi=[68,80)/1 crt=51'1027 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 09:43:09 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 80 pg[10.e( v 51'1027 (0'0,51'1027] local-lis/les=0/0 n=5 ec=58/45 lis/c=78/68 les/c/f=79/69/0 sis=80) [1] r=0 lpr=80 pi=[68,80)/1 crt=51'1027 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 09:43:09 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 80 pg[10.16( v 51'1027 (0'0,51'1027] local-lis/les=0/0 n=4 ec=58/45 lis/c=78/68 les/c/f=79/69/0 sis=80) [1] r=0 lpr=80 pi=[68,80)/1 luod=0'0 crt=51'1027 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 06 09:43:09 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 80 pg[10.16( v 51'1027 (0'0,51'1027] local-lis/les=0/0 n=4 ec=58/45 lis/c=78/68 les/c/f=79/69/0 sis=80) [1] r=0 lpr=80 pi=[68,80)/1 crt=51'1027 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 09:43:09 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 80 pg[10.6( v 51'1027 (0'0,51'1027] local-lis/les=0/0 n=6 ec=58/45 lis/c=78/68 les/c/f=79/69/0 sis=80) [1] r=0 lpr=80 pi=[68,80)/1 luod=0'0 crt=51'1027 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 06 09:43:09 compute-0 ceph-mon[74327]: 7.2 scrub starts
Dec 06 09:43:09 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 80 pg[10.6( v 51'1027 (0'0,51'1027] local-lis/les=0/0 n=6 ec=58/45 lis/c=78/68 les/c/f=79/69/0 sis=80) [1] r=0 lpr=80 pi=[68,80)/1 crt=51'1027 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 09:43:09 compute-0 ceph-mon[74327]: 7.2 scrub ok
Dec 06 09:43:09 compute-0 ceph-mon[74327]: 9.6 scrub starts
Dec 06 09:43:09 compute-0 ceph-mon[74327]: 9.6 scrub ok
Dec 06 09:43:09 compute-0 ceph-mon[74327]: osdmap e79: 3 total, 3 up, 3 in
Dec 06 09:43:09 compute-0 ceph-mon[74327]: 7.a scrub starts
Dec 06 09:43:09 compute-0 ceph-mon[74327]: 7.a scrub ok
Dec 06 09:43:09 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v95: 337 pgs: 2 active+clean+scrubbing, 4 unknown, 331 active+clean; 457 KiB data, 125 MiB used, 60 GiB / 60 GiB avail
Dec 06 09:43:09 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:43:09 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:43:09 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:43:09.059 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:43:09 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec 06 09:43:09 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:43:09 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec 06 09:43:09 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:43:09 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.rgw.default}] v 0)
Dec 06 09:43:09 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:43:09 compute-0 ceph-mgr[74618]: [progress INFO root] complete: finished ev 8b961573-5d9a-4966-9430-80966b578f70 (Updating ingress.rgw.default deployment (+4 -> 4))
Dec 06 09:43:09 compute-0 ceph-mgr[74618]: [progress INFO root] Completed event 8b961573-5d9a-4966-9430-80966b578f70 (Updating ingress.rgw.default deployment (+4 -> 4)) in 12 seconds
Dec 06 09:43:09 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.rgw.default}] v 0)
Dec 06 09:43:09 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:43:09 compute-0 ceph-mgr[74618]: [progress INFO root] update: starting ev 12403888-f638-4724-bb9d-df3242ef47cd (Updating prometheus deployment (+1 -> 1))
Dec 06 09:43:09 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:43:09 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f92180016a0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:43:09 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:43:09 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:43:09 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:43:09.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:43:09 compute-0 ceph-mgr[74618]: [cephadm INFO cephadm.serve] Deploying daemon prometheus.compute-0 on compute-0
Dec 06 09:43:09 compute-0 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Deploying daemon prometheus.compute-0 on compute-0
Dec 06 09:43:09 compute-0 sshd-session[98384]: Accepted publickey for zuul from 192.168.122.30 port 34942 ssh2: ECDSA SHA256:r1j7aLsKAM+XxDNbzEU5vWGpGNCOaIBwc7FZdATPttA
Dec 06 09:43:09 compute-0 systemd-logind[795]: New session 37 of user zuul.
Dec 06 09:43:09 compute-0 systemd[1]: Started Session 37 of User zuul.
Dec 06 09:43:09 compute-0 sudo[98386]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 09:43:09 compute-0 sshd-session[98384]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 06 09:43:09 compute-0 sudo[98386]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:43:09 compute-0 sudo[98386]: pam_unix(sudo:session): session closed for user root
Dec 06 09:43:09 compute-0 sudo[98413]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/prometheus/prometheus:v2.51.0 --timeout 895 _orch deploy --fsid 5ecd3f74-dade-5fc4-92ce-8950ae424258
Dec 06 09:43:09 compute-0 sudo[98413]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:43:10 compute-0 ceph-osd[82803]: log_channel(cluster) log [DBG] : 12.d scrub starts
Dec 06 09:43:10 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e80 do_prune osdmap full prune enabled
Dec 06 09:43:10 compute-0 ceph-mon[74327]: 7.3 deep-scrub starts
Dec 06 09:43:10 compute-0 ceph-mon[74327]: 7.3 deep-scrub ok
Dec 06 09:43:10 compute-0 ceph-mon[74327]: 12.15 scrub starts
Dec 06 09:43:10 compute-0 ceph-mon[74327]: 12.15 scrub ok
Dec 06 09:43:10 compute-0 ceph-mon[74327]: osdmap e80: 3 total, 3 up, 3 in
Dec 06 09:43:10 compute-0 ceph-mon[74327]: pgmap v95: 337 pgs: 2 active+clean+scrubbing, 4 unknown, 331 active+clean; 457 KiB data, 125 MiB used, 60 GiB / 60 GiB avail
Dec 06 09:43:10 compute-0 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:43:10 compute-0 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:43:10 compute-0 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:43:10 compute-0 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:43:10 compute-0 ceph-mon[74327]: 8.6 scrub starts
Dec 06 09:43:10 compute-0 ceph-mon[74327]: 8.6 scrub ok
Dec 06 09:43:10 compute-0 ceph-mon[74327]: Deploying daemon prometheus.compute-0 on compute-0
Dec 06 09:43:10 compute-0 ceph-osd[82803]: log_channel(cluster) log [DBG] : 12.d scrub ok
Dec 06 09:43:10 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e81 e81: 3 total, 3 up, 3 in
Dec 06 09:43:10 compute-0 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e81: 3 total, 3 up, 3 in
Dec 06 09:43:10 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 81 pg[10.16( v 51'1027 (0'0,51'1027] local-lis/les=80/81 n=4 ec=58/45 lis/c=78/68 les/c/f=79/69/0 sis=80) [1] r=0 lpr=80 pi=[68,80)/1 crt=51'1027 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 09:43:10 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:43:10 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9248003fe0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:43:10 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 81 pg[10.6( v 51'1027 (0'0,51'1027] local-lis/les=80/81 n=6 ec=58/45 lis/c=78/68 les/c/f=79/69/0 sis=80) [1] r=0 lpr=80 pi=[68,80)/1 crt=51'1027 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 09:43:10 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 81 pg[10.1e( v 51'1027 (0'0,51'1027] local-lis/les=80/81 n=5 ec=58/45 lis/c=78/68 les/c/f=79/69/0 sis=80) [1] r=0 lpr=80 pi=[68,80)/1 crt=51'1027 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 09:43:10 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 81 pg[10.e( v 51'1027 (0'0,51'1027] local-lis/les=80/81 n=5 ec=58/45 lis/c=78/68 les/c/f=79/69/0 sis=80) [1] r=0 lpr=80 pi=[68,80)/1 crt=51'1027 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 09:43:10 compute-0 ceph-mgr[74618]: [progress INFO root] Writing back 27 completed events
Dec 06 09:43:10 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Dec 06 09:43:10 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:43:10 compute-0 ceph-mgr[74618]: [progress INFO root] Completed event ab5dd157-2fbc-4f6e-89d0-89ada306a67b (Global Recovery Event) in 20 seconds
Dec 06 09:43:10 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:43:10 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f921c003db0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:43:10 compute-0 python3.9[98645]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 06 09:43:10 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-keepalived-rgw-default-compute-0-mycoxk[98374]: Sat Dec  6 09:43:10 2025: (VI_0) Entering MASTER STATE
Dec 06 09:43:10 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e81 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 09:43:10 compute-0 ceph-osd[82803]: log_channel(cluster) log [DBG] : 12.5 scrub starts
Dec 06 09:43:11 compute-0 ceph-osd[82803]: log_channel(cluster) log [DBG] : 12.5 scrub ok
Dec 06 09:43:11 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v97: 337 pgs: 337 active+clean; 458 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 34 KiB/s rd, 0 B/s wr, 66 op/s; 312 B/s, 16 objects/s recovering
Dec 06 09:43:11 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"} v 0)
Dec 06 09:43:11 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"}]: dispatch
Dec 06 09:43:11 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"} v 0)
Dec 06 09:43:11 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]: dispatch
Dec 06 09:43:11 compute-0 ceph-mon[74327]: 7.4 scrub starts
Dec 06 09:43:11 compute-0 ceph-mon[74327]: 7.4 scrub ok
Dec 06 09:43:11 compute-0 ceph-mon[74327]: 12.d scrub starts
Dec 06 09:43:11 compute-0 ceph-mon[74327]: 12.d scrub ok
Dec 06 09:43:11 compute-0 ceph-mon[74327]: 9.8 deep-scrub starts
Dec 06 09:43:11 compute-0 ceph-mon[74327]: 9.8 deep-scrub ok
Dec 06 09:43:11 compute-0 ceph-mon[74327]: osdmap e81: 3 total, 3 up, 3 in
Dec 06 09:43:11 compute-0 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:43:11 compute-0 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"}]: dispatch
Dec 06 09:43:11 compute-0 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]: dispatch
Dec 06 09:43:11 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:43:11 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000029s ======
Dec 06 09:43:11 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:43:11.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Dec 06 09:43:11 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:43:11 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9228003db0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:43:11 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:43:11 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000029s ======
Dec 06 09:43:11 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:43:11.260 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Dec 06 09:43:11 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e81 do_prune osdmap full prune enabled
Dec 06 09:43:11 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"}]': finished
Dec 06 09:43:11 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]': finished
Dec 06 09:43:11 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e82 e82: 3 total, 3 up, 3 in
Dec 06 09:43:11 compute-0 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e82: 3 total, 3 up, 3 in
Dec 06 09:43:11 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 82 pg[10.7( empty local-lis/les=0/0 n=0 ec=58/45 lis/c=65/65 les/c/f=66/66/0 sis=82) [1] r=0 lpr=82 pi=[65,82)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 09:43:11 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 82 pg[10.1f( empty local-lis/les=0/0 n=0 ec=58/45 lis/c=65/65 les/c/f=66/66/0 sis=82) [1] r=0 lpr=82 pi=[65,82)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 09:43:11 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 82 pg[6.8( empty local-lis/les=0/0 n=0 ec=54/21 lis/c=54/54 les/c/f=55/55/0 sis=82) [1] r=0 lpr=82 pi=[54,82)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 09:43:11 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 82 pg[10.f( empty local-lis/les=0/0 n=0 ec=58/45 lis/c=65/65 les/c/f=66/66/0 sis=82) [1] r=0 lpr=82 pi=[65,82)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 09:43:11 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 82 pg[10.17( empty local-lis/les=0/0 n=0 ec=58/45 lis/c=65/65 les/c/f=66/66/0 sis=82) [1] r=0 lpr=82 pi=[65,82)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 09:43:12 compute-0 ceph-osd[82803]: log_channel(cluster) log [DBG] : 12.0 scrub starts
Dec 06 09:43:12 compute-0 ceph-osd[82803]: log_channel(cluster) log [DBG] : 12.0 scrub ok
Dec 06 09:43:12 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:43:12 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f92180016a0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:43:12 compute-0 ceph-mon[74327]: 7.e scrub starts
Dec 06 09:43:12 compute-0 ceph-mon[74327]: 7.e scrub ok
Dec 06 09:43:12 compute-0 ceph-mon[74327]: 12.5 scrub starts
Dec 06 09:43:12 compute-0 ceph-mon[74327]: 12.5 scrub ok
Dec 06 09:43:12 compute-0 ceph-mon[74327]: pgmap v97: 337 pgs: 337 active+clean; 458 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 34 KiB/s rd, 0 B/s wr, 66 op/s; 312 B/s, 16 objects/s recovering
Dec 06 09:43:12 compute-0 ceph-mon[74327]: 9.5 scrub starts
Dec 06 09:43:12 compute-0 ceph-mon[74327]: 9.5 scrub ok
Dec 06 09:43:12 compute-0 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"}]': finished
Dec 06 09:43:12 compute-0 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]': finished
Dec 06 09:43:12 compute-0 ceph-mon[74327]: osdmap e82: 3 total, 3 up, 3 in
Dec 06 09:43:12 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e82 do_prune osdmap full prune enabled
Dec 06 09:43:12 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e83 e83: 3 total, 3 up, 3 in
Dec 06 09:43:12 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 83 pg[10.17( empty local-lis/les=0/0 n=0 ec=58/45 lis/c=65/65 les/c/f=66/66/0 sis=83) [1]/[2] r=-1 lpr=83 pi=[65,83)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 06 09:43:12 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 83 pg[10.17( empty local-lis/les=0/0 n=0 ec=58/45 lis/c=65/65 les/c/f=66/66/0 sis=83) [1]/[2] r=-1 lpr=83 pi=[65,83)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 06 09:43:12 compute-0 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e83: 3 total, 3 up, 3 in
Dec 06 09:43:12 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 83 pg[10.f( empty local-lis/les=0/0 n=0 ec=58/45 lis/c=65/65 les/c/f=66/66/0 sis=83) [1]/[2] r=-1 lpr=83 pi=[65,83)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 06 09:43:12 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 83 pg[10.f( empty local-lis/les=0/0 n=0 ec=58/45 lis/c=65/65 les/c/f=66/66/0 sis=83) [1]/[2] r=-1 lpr=83 pi=[65,83)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 06 09:43:12 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 83 pg[10.1f( empty local-lis/les=0/0 n=0 ec=58/45 lis/c=65/65 les/c/f=66/66/0 sis=83) [1]/[2] r=-1 lpr=83 pi=[65,83)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 06 09:43:12 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 83 pg[10.1f( empty local-lis/les=0/0 n=0 ec=58/45 lis/c=65/65 les/c/f=66/66/0 sis=83) [1]/[2] r=-1 lpr=83 pi=[65,83)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 06 09:43:12 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 83 pg[10.7( empty local-lis/les=0/0 n=0 ec=58/45 lis/c=65/65 les/c/f=66/66/0 sis=83) [1]/[2] r=-1 lpr=83 pi=[65,83)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 06 09:43:12 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 83 pg[10.7( empty local-lis/les=0/0 n=0 ec=58/45 lis/c=65/65 les/c/f=66/66/0 sis=83) [1]/[2] r=-1 lpr=83 pi=[65,83)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 06 09:43:12 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 83 pg[6.8( v 50'39 (0'0,50'39] local-lis/les=82/83 n=0 ec=54/21 lis/c=54/54 les/c/f=55/55/0 sis=82) [1] r=0 lpr=82 pi=[54,82)/1 crt=50'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 09:43:12 compute-0 sudo[98975]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-onkqjvcwnvtufbczjsujqpltfwohrzvm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014191.9385624-56-234296251912038/AnsiballZ_command.py'
Dec 06 09:43:12 compute-0 sudo[98975]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:43:12 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:43:12 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9248003fe0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:43:12 compute-0 python3.9[98977]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail
                                            pushd /var/tmp
                                            curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz
                                            pushd repo-setup-main
                                            python3 -m venv ./venv
                                            PBR_VERSION=0.0.0 ./venv/bin/pip install ./
                                            ./venv/bin/repo-setup current-podified -b antelope
                                            popd
                                            rm -rf repo-setup-main
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 09:43:12 compute-0 ceph-osd[82803]: log_channel(cluster) log [DBG] : 12.1f scrub starts
Dec 06 09:43:13 compute-0 ceph-osd[82803]: log_channel(cluster) log [DBG] : 12.1f scrub ok
Dec 06 09:43:13 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v100: 337 pgs: 337 active+clean; 458 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 35 KiB/s rd, 0 B/s wr, 66 op/s; 315 B/s, 16 objects/s recovering
Dec 06 09:43:13 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"} v 0)
Dec 06 09:43:13 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"}]: dispatch
Dec 06 09:43:13 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"} v 0)
Dec 06 09:43:13 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]: dispatch
Dec 06 09:43:13 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:43:13 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000028s ======
Dec 06 09:43:13 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:43:13.065 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 06 09:43:13 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:43:13 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f921c003db0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:43:13 compute-0 ceph-mon[74327]: 7.f scrub starts
Dec 06 09:43:13 compute-0 ceph-mon[74327]: 7.f scrub ok
Dec 06 09:43:13 compute-0 ceph-mon[74327]: 9.3 scrub starts
Dec 06 09:43:13 compute-0 ceph-mon[74327]: 9.3 scrub ok
Dec 06 09:43:13 compute-0 ceph-mon[74327]: 12.0 scrub starts
Dec 06 09:43:13 compute-0 ceph-mon[74327]: 12.0 scrub ok
Dec 06 09:43:13 compute-0 ceph-mon[74327]: 7.8 scrub starts
Dec 06 09:43:13 compute-0 ceph-mon[74327]: 7.8 scrub ok
Dec 06 09:43:13 compute-0 ceph-mon[74327]: osdmap e83: 3 total, 3 up, 3 in
Dec 06 09:43:13 compute-0 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"}]: dispatch
Dec 06 09:43:13 compute-0 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]: dispatch
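
The pgp_num_actual commands dispatched by mgr.14400 are the mgr gradually stepping placement counts after a pg_num change, one increment per pool per pass. They correspond to this CLI form (pool names and values taken from the dispatch lines above):

    # Equivalent CLI for the mon_commands dispatched above
    ceph osd pool set cephfs.cephfs.meta pgp_num_actual 10
    ceph osd pool set default.rgw.log pgp_num_actual 9
    # Watch the placement count converge toward pg_num
    ceph osd pool get cephfs.cephfs.meta pgp_num
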
Dec 06 09:43:13 compute-0 podman[98534]: 2025-12-06 09:43:13.261314882 +0000 UTC m=+3.296148292 volume create d2cfd66e88c0603ca7839a87101328dec9ac72785f536bb929e344840b6b9a1d
Dec 06 09:43:13 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:43:13 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:43:13 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:43:13.262 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
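
The anonymous "HEAD / HTTP/1.0" 200 entries with sub-millisecond latency look like periodic load-balancer health probes against radosgw. A hypothetical manual probe, assuming the beast frontend listens on port 8080 on this host (the log does not show the port):

    # Assumed endpoint: host from the beast lines above; port 8080 is a guess, not confirmed by the log
    curl -sI -o /dev/null -w '%{http_code}\n' http://192.168.122.100:8080/
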
Dec 06 09:43:13 compute-0 podman[98534]: 2025-12-06 09:43:13.270331532 +0000 UTC m=+3.305164942 container create 02bc224714446a4369b848a08ed8ff6ca6312db5db57d772ccbce9b91d3c37bd (image=quay.io/prometheus/prometheus:v2.51.0, name=pensive_taussig, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 06 09:43:13 compute-0 podman[98534]: 2025-12-06 09:43:13.247249457 +0000 UTC m=+3.282082887 image pull 1d3b7f56885b6dd623f1785be963aa9c195f86bc256ea454e8d02a7980b79c53 quay.io/prometheus/prometheus:v2.51.0
Dec 06 09:43:13 compute-0 systemd[1]: Started libpod-conmon-02bc224714446a4369b848a08ed8ff6ca6312db5db57d772ccbce9b91d3c37bd.scope.
Dec 06 09:43:13 compute-0 systemd[1]: Started libcrun container.
Dec 06 09:43:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8be2ea4f9e55598e22892264736e544d463b2f026eba43a357ac8056878ac7f7/merged/prometheus supports timestamps until 2038 (0x7fffffff)
Dec 06 09:43:13 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e83 do_prune osdmap full prune enabled
Dec 06 09:43:13 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"}]': finished
Dec 06 09:43:13 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]': finished
Dec 06 09:43:13 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e84 e84: 3 total, 3 up, 3 in
Dec 06 09:43:13 compute-0 podman[98534]: 2025-12-06 09:43:13.412077042 +0000 UTC m=+3.446910472 container init 02bc224714446a4369b848a08ed8ff6ca6312db5db57d772ccbce9b91d3c37bd (image=quay.io/prometheus/prometheus:v2.51.0, name=pensive_taussig, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 06 09:43:13 compute-0 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e84: 3 total, 3 up, 3 in
Dec 06 09:43:13 compute-0 podman[98534]: 2025-12-06 09:43:13.425825419 +0000 UTC m=+3.460658839 container start 02bc224714446a4369b848a08ed8ff6ca6312db5db57d772ccbce9b91d3c37bd (image=quay.io/prometheus/prometheus:v2.51.0, name=pensive_taussig, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 06 09:43:13 compute-0 pensive_taussig[99118]: 65534 65534
Dec 06 09:43:13 compute-0 podman[98534]: 2025-12-06 09:43:13.43075447 +0000 UTC m=+3.465587880 container attach 02bc224714446a4369b848a08ed8ff6ca6312db5db57d772ccbce9b91d3c37bd (image=quay.io/prometheus/prometheus:v2.51.0, name=pensive_taussig, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 06 09:43:13 compute-0 systemd[1]: libpod-02bc224714446a4369b848a08ed8ff6ca6312db5db57d772ccbce9b91d3c37bd.scope: Deactivated successfully.
Dec 06 09:43:13 compute-0 podman[98534]: 2025-12-06 09:43:13.433221841 +0000 UTC m=+3.468055271 container died 02bc224714446a4369b848a08ed8ff6ca6312db5db57d772ccbce9b91d3c37bd (image=quay.io/prometheus/prometheus:v2.51.0, name=pensive_taussig, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 06 09:43:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-8be2ea4f9e55598e22892264736e544d463b2f026eba43a357ac8056878ac7f7-merged.mount: Deactivated successfully.
Dec 06 09:43:13 compute-0 podman[98534]: 2025-12-06 09:43:13.488213608 +0000 UTC m=+3.523047018 container remove 02bc224714446a4369b848a08ed8ff6ca6312db5db57d772ccbce9b91d3c37bd (image=quay.io/prometheus/prometheus:v2.51.0, name=pensive_taussig, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 06 09:43:13 compute-0 podman[98534]: 2025-12-06 09:43:13.492363087 +0000 UTC m=+3.527196497 volume remove d2cfd66e88c0603ca7839a87101328dec9ac72785f536bb929e344840b6b9a1d
Dec 06 09:43:13 compute-0 systemd[1]: libpod-conmon-02bc224714446a4369b848a08ed8ff6ca6312db5db57d772ccbce9b91d3c37bd.scope: Deactivated successfully.
Dec 06 09:43:13 compute-0 podman[99134]: 2025-12-06 09:43:13.568712871 +0000 UTC m=+0.041667313 volume create 092d808e2d3d866ade41ca0fff0584cbdcc946bbcbc37d6e9af37621503982e7
Dec 06 09:43:13 compute-0 podman[99134]: 2025-12-06 09:43:13.583397684 +0000 UTC m=+0.056352106 container create a33ed0ed7cfe5c68dfdf07c1d835613e1f84c930e23c7e692c41cb901812b369 (image=quay.io/prometheus/prometheus:v2.51.0, name=dreamy_payne, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 06 09:43:13 compute-0 systemd[1]: Started libpod-conmon-a33ed0ed7cfe5c68dfdf07c1d835613e1f84c930e23c7e692c41cb901812b369.scope.
Dec 06 09:43:13 compute-0 podman[99134]: 2025-12-06 09:43:13.553634265 +0000 UTC m=+0.026588707 image pull 1d3b7f56885b6dd623f1785be963aa9c195f86bc256ea454e8d02a7980b79c53 quay.io/prometheus/prometheus:v2.51.0
Dec 06 09:43:13 compute-0 systemd[1]: Started libcrun container.
Dec 06 09:43:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3a48a945713c1757c1192bfe2eaa73ff3f7feaf7af35dce3ff8af657c3ec64f/merged/prometheus supports timestamps until 2038 (0x7fffffff)
Dec 06 09:43:13 compute-0 podman[99134]: 2025-12-06 09:43:13.668400386 +0000 UTC m=+0.141354848 container init a33ed0ed7cfe5c68dfdf07c1d835613e1f84c930e23c7e692c41cb901812b369 (image=quay.io/prometheus/prometheus:v2.51.0, name=dreamy_payne, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 06 09:43:13 compute-0 podman[99134]: 2025-12-06 09:43:13.677465007 +0000 UTC m=+0.150419439 container start a33ed0ed7cfe5c68dfdf07c1d835613e1f84c930e23c7e692c41cb901812b369 (image=quay.io/prometheus/prometheus:v2.51.0, name=dreamy_payne, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 06 09:43:13 compute-0 dreamy_payne[99150]: 65534 65534
Dec 06 09:43:13 compute-0 systemd[1]: libpod-a33ed0ed7cfe5c68dfdf07c1d835613e1f84c930e23c7e692c41cb901812b369.scope: Deactivated successfully.
Dec 06 09:43:13 compute-0 podman[99134]: 2025-12-06 09:43:13.681632798 +0000 UTC m=+0.154587280 container attach a33ed0ed7cfe5c68dfdf07c1d835613e1f84c930e23c7e692c41cb901812b369 (image=quay.io/prometheus/prometheus:v2.51.0, name=dreamy_payne, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 06 09:43:13 compute-0 podman[99134]: 2025-12-06 09:43:13.682158803 +0000 UTC m=+0.155113235 container died a33ed0ed7cfe5c68dfdf07c1d835613e1f84c930e23c7e692c41cb901812b369 (image=quay.io/prometheus/prometheus:v2.51.0, name=dreamy_payne, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 06 09:43:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-f3a48a945713c1757c1192bfe2eaa73ff3f7feaf7af35dce3ff8af657c3ec64f-merged.mount: Deactivated successfully.
Dec 06 09:43:13 compute-0 podman[99134]: 2025-12-06 09:43:13.727068749 +0000 UTC m=+0.200023181 container remove a33ed0ed7cfe5c68dfdf07c1d835613e1f84c930e23c7e692c41cb901812b369 (image=quay.io/prometheus/prometheus:v2.51.0, name=dreamy_payne, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 06 09:43:13 compute-0 podman[99134]: 2025-12-06 09:43:13.733424512 +0000 UTC m=+0.206378944 volume remove 092d808e2d3d866ade41ca0fff0584cbdcc946bbcbc37d6e9af37621503982e7
Dec 06 09:43:13 compute-0 systemd[1]: libpod-conmon-a33ed0ed7cfe5c68dfdf07c1d835613e1f84c930e23c7e692c41cb901812b369.scope: Deactivated successfully.
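
The two short-lived prometheus containers above (create, start, print "65534 65534", die, remove, all within a second) are consistent with cephadm probing the image for the uid/gid to run the daemon as. A rough manual equivalent, assuming a stat entrypoint; the exact invocation cephadm used is not in the log:

    # Hypothetical re-run of the uid/gid probe; prints "65534 65534" (the nobody user),
    # matching the container output logged above
    podman run --rm --entrypoint stat quay.io/prometheus/prometheus:v2.51.0 -c '%u %g' /etc/prometheus
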
Dec 06 09:43:13 compute-0 systemd[1]: Reloading.
Dec 06 09:43:13 compute-0 systemd-sysv-generator[99198]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 09:43:13 compute-0 systemd-rc-local-generator[99189]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 09:43:14 compute-0 ceph-osd[82803]: log_channel(cluster) log [DBG] : 12.1b deep-scrub starts
Dec 06 09:43:14 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:43:14 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9228003db0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:43:14 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 84 pg[10.8( v 51'1027 (0'0,51'1027] local-lis/les=58/59 n=6 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=84 pruub=10.920730591s) [0] r=-1 lpr=84 pi=[58,84)/1 crt=51'1027 lcod 0'0 mlcod 0'0 active pruub 227.311737061s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 06 09:43:14 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 84 pg[10.8( v 51'1027 (0'0,51'1027] local-lis/les=58/59 n=6 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=84 pruub=10.919921875s) [0] r=-1 lpr=84 pi=[58,84)/1 crt=51'1027 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 227.311737061s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 09:43:14 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 84 pg[10.18( v 51'1027 (0'0,51'1027] local-lis/les=58/59 n=5 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=84 pruub=10.915806770s) [0] r=-1 lpr=84 pi=[58,84)/1 crt=51'1027 lcod 0'0 mlcod 0'0 active pruub 227.307769775s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 06 09:43:14 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 84 pg[10.18( v 51'1027 (0'0,51'1027] local-lis/les=58/59 n=5 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=84 pruub=10.915764809s) [0] r=-1 lpr=84 pi=[58,84)/1 crt=51'1027 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 227.307769775s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 09:43:14 compute-0 ceph-osd[82803]: log_channel(cluster) log [DBG] : 12.1b deep-scrub ok
Dec 06 09:43:14 compute-0 systemd[1]: Reloading.
Dec 06 09:43:14 compute-0 systemd-rc-local-generator[99234]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 09:43:14 compute-0 systemd-sysv-generator[99237]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 09:43:14 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e84 do_prune osdmap full prune enabled
Dec 06 09:43:14 compute-0 ceph-mon[74327]: 12.1f scrub starts
Dec 06 09:43:14 compute-0 ceph-mon[74327]: 12.1f scrub ok
Dec 06 09:43:14 compute-0 ceph-mon[74327]: 9.17 scrub starts
Dec 06 09:43:14 compute-0 ceph-mon[74327]: pgmap v100: 337 pgs: 337 active+clean; 458 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 35 KiB/s rd, 0 B/s wr, 66 op/s; 315 B/s, 16 objects/s recovering
Dec 06 09:43:14 compute-0 ceph-mon[74327]: 9.17 scrub ok
Dec 06 09:43:14 compute-0 ceph-mon[74327]: 7.b scrub starts
Dec 06 09:43:14 compute-0 ceph-mon[74327]: 7.b scrub ok
Dec 06 09:43:14 compute-0 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"}]': finished
Dec 06 09:43:14 compute-0 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]': finished
Dec 06 09:43:14 compute-0 ceph-mon[74327]: osdmap e84: 3 total, 3 up, 3 in
Dec 06 09:43:14 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e85 e85: 3 total, 3 up, 3 in
Dec 06 09:43:14 compute-0 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e85: 3 total, 3 up, 3 in
Dec 06 09:43:14 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 85 pg[10.17( v 51'1027 (0'0,51'1027] local-lis/les=0/0 n=5 ec=58/45 lis/c=83/65 les/c/f=84/66/0 sis=85) [1] r=0 lpr=85 pi=[65,85)/1 luod=0'0 crt=51'1027 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 06 09:43:14 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 85 pg[10.17( v 51'1027 (0'0,51'1027] local-lis/les=0/0 n=5 ec=58/45 lis/c=83/65 les/c/f=84/66/0 sis=85) [1] r=0 lpr=85 pi=[65,85)/1 crt=51'1027 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 09:43:14 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 85 pg[10.8( v 51'1027 (0'0,51'1027] local-lis/les=58/59 n=6 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=85) [0]/[1] r=0 lpr=85 pi=[58,85)/1 crt=51'1027 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 06 09:43:14 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 85 pg[10.8( v 51'1027 (0'0,51'1027] local-lis/les=58/59 n=6 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=85) [0]/[1] r=0 lpr=85 pi=[58,85)/1 crt=51'1027 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 06 09:43:14 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 85 pg[10.1f( v 51'1027 (0'0,51'1027] local-lis/les=0/0 n=5 ec=58/45 lis/c=83/65 les/c/f=84/66/0 sis=85) [1] r=0 lpr=85 pi=[65,85)/1 luod=0'0 crt=51'1027 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 06 09:43:14 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 85 pg[10.1f( v 51'1027 (0'0,51'1027] local-lis/les=0/0 n=5 ec=58/45 lis/c=83/65 les/c/f=84/66/0 sis=85) [1] r=0 lpr=85 pi=[65,85)/1 crt=51'1027 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 09:43:14 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 85 pg[10.18( v 51'1027 (0'0,51'1027] local-lis/les=58/59 n=5 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=85) [0]/[1] r=0 lpr=85 pi=[58,85)/1 crt=51'1027 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 06 09:43:14 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 85 pg[10.18( v 51'1027 (0'0,51'1027] local-lis/les=58/59 n=5 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=85) [0]/[1] r=0 lpr=85 pi=[58,85)/1 crt=51'1027 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 06 09:43:14 compute-0 ceph-mon[74327]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #18. Immutable memtables: 0.
Dec 06 09:43:14 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-09:43:14.510307) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 06 09:43:14 compute-0 ceph-mon[74327]: rocksdb: [db/flush_job.cc:856] [default] [JOB 3] Flushing memtable with next log file: 18
Dec 06 09:43:14 compute-0 ceph-mon[74327]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765014194510442, "job": 3, "event": "flush_started", "num_memtables": 1, "num_entries": 7741, "num_deletes": 251, "total_data_size": 15010882, "memory_usage": 15756392, "flush_reason": "Manual Compaction"}
Dec 06 09:43:14 compute-0 ceph-mon[74327]: rocksdb: [db/flush_job.cc:885] [default] [JOB 3] Level-0 flush table #19: started
Dec 06 09:43:14 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:43:14 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9210000b60 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:43:14 compute-0 systemd[1]: Starting Ceph prometheus.compute-0 for 5ecd3f74-dade-5fc4-92ce-8950ae424258...
Dec 06 09:43:14 compute-0 ceph-mon[74327]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765014194673234, "cf_name": "default", "job": 3, "event": "table_file_creation", "file_number": 19, "file_size": 12985715, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 146, "largest_seqno": 7878, "table_properties": {"data_size": 12957477, "index_size": 18011, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 9157, "raw_key_size": 87831, "raw_average_key_size": 24, "raw_value_size": 12887823, "raw_average_value_size": 3544, "num_data_blocks": 798, "num_entries": 3636, "num_filter_entries": 3636, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765013863, "oldest_key_time": 1765013863, "file_creation_time": 1765014194, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "423e8366-3852-4d2b-aa53-87abab31aff3", "db_session_id": "4WBX5WA2U4DRQ0QUUFCR", "orig_file_number": 19, "seqno_to_time_mapping": "N/A"}}
Dec 06 09:43:14 compute-0 ceph-mon[74327]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 3] Flush lasted 162970 microseconds, and 23958 cpu microseconds.
Dec 06 09:43:14 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-09:43:14.673291) [db/flush_job.cc:967] [default] [JOB 3] Level-0 flush table #19: 12985715 bytes OK
Dec 06 09:43:14 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-09:43:14.673313) [db/memtable_list.cc:519] [default] Level-0 commit table #19 started
Dec 06 09:43:14 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-09:43:14.678578) [db/memtable_list.cc:722] [default] Level-0 commit table #19: memtable #1 done
Dec 06 09:43:14 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-09:43:14.678629) EVENT_LOG_v1 {"time_micros": 1765014194678618, "job": 3, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [3, 0, 0, 0, 0, 0, 0], "immutable_memtables": 0}
Dec 06 09:43:14 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-09:43:14.678664) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: files[3 0 0 0 0 0 0] max score 0.75
Dec 06 09:43:14 compute-0 ceph-mon[74327]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 3] Try to delete WAL files size 14975896, prev total WAL file size 14975896, number of live WAL files 2.
Dec 06 09:43:14 compute-0 ceph-mon[74327]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000014.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 09:43:14 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-09:43:14.683083) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730030' seq:72057594037927935, type:22 .. '7061786F7300323532' seq:0, type:0; will stop at (end)
Dec 06 09:43:14 compute-0 ceph-mon[74327]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 4] Compacting 3@0 files to L6, score -1.00
Dec 06 09:43:14 compute-0 ceph-mon[74327]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 3 Base level 0, inputs: [19(12MB) 13(57KB) 8(1944B)]
Dec 06 09:43:14 compute-0 ceph-mon[74327]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765014194683304, "job": 4, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [19, 13, 8], "score": -1, "input_data_size": 13046145, "oldest_snapshot_seqno": -1}
Dec 06 09:43:14 compute-0 ceph-mon[74327]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 4] Generated table #20: 3453 keys, 12999783 bytes, temperature: kUnknown
Dec 06 09:43:14 compute-0 ceph-mon[74327]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765014194881597, "cf_name": "default", "job": 4, "event": "table_file_creation", "file_number": 20, "file_size": 12999783, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12972032, "index_size": 18041, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 8645, "raw_key_size": 85976, "raw_average_key_size": 24, "raw_value_size": 12903994, "raw_average_value_size": 3737, "num_data_blocks": 801, "num_entries": 3453, "num_filter_entries": 3453, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765013861, "oldest_key_time": 0, "file_creation_time": 1765014194, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "423e8366-3852-4d2b-aa53-87abab31aff3", "db_session_id": "4WBX5WA2U4DRQ0QUUFCR", "orig_file_number": 20, "seqno_to_time_mapping": "N/A"}}
Dec 06 09:43:14 compute-0 ceph-mon[74327]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 09:43:14 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-09:43:14.882025) [db/compaction/compaction_job.cc:1663] [default] [JOB 4] Compacted 3@0 files to L6 => 12999783 bytes
Dec 06 09:43:14 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-09:43:14.888520) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 65.8 rd, 65.5 wr, level 6, files in(3, 0) out(1 +0 blob) MB in(12.4, 0.0 +0.0 blob) out(12.4 +0.0 blob), read-write-amplify(2.0) write-amplify(1.0) OK, records in: 3745, records dropped: 292 output_compression: NoCompression
Dec 06 09:43:14 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-09:43:14.888587) EVENT_LOG_v1 {"time_micros": 1765014194888558, "job": 4, "event": "compaction_finished", "compaction_time_micros": 198418, "compaction_time_cpu_micros": 38953, "output_level": 6, "num_output_files": 1, "total_output_size": 12999783, "num_input_records": 3745, "num_output_records": 3453, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 06 09:43:14 compute-0 ceph-mon[74327]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000019.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 09:43:14 compute-0 ceph-mon[74327]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765014194892970, "job": 4, "event": "table_file_deletion", "file_number": 19}
Dec 06 09:43:14 compute-0 ceph-mon[74327]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000013.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 09:43:14 compute-0 ceph-mon[74327]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765014194893140, "job": 4, "event": "table_file_deletion", "file_number": 13}
Dec 06 09:43:14 compute-0 ceph-mon[74327]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000008.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 09:43:14 compute-0 ceph-mon[74327]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765014194893257, "job": 4, "event": "table_file_deletion", "file_number": 8}
Dec 06 09:43:14 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-09:43:14.682852) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
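
The rocksdb jobs tagged "Manual Compaction" above are the monitor compacting its own store after trimming old maps. The same compaction can be requested on demand, and the store size checked at the path named in the delete_scheduler lines:

    # Ask this monitor to compact its RocksDB store
    ceph tell mon.compute-0 compact
    # On-disk store size (path from the delete_scheduler lines above)
    du -sh /var/lib/ceph/mon/ceph-compute-0/store.db
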
Dec 06 09:43:14 compute-0 podman[99297]: 2025-12-06 09:43:14.918424368 +0000 UTC m=+0.049865189 container create cfe4d69091434e5154fa760292bba767b8875965fa71cf21268b9ec1632f0d9e (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 06 09:43:14 compute-0 podman[99297]: 2025-12-06 09:43:14.895149027 +0000 UTC m=+0.026589898 image pull 1d3b7f56885b6dd623f1785be963aa9c195f86bc256ea454e8d02a7980b79c53 quay.io/prometheus/prometheus:v2.51.0
Dec 06 09:43:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d1dad53dfb2f070967372d35b973ddc922d0cc08f86e02ac04dcdb3044413b5e/merged/prometheus supports timestamps until 2038 (0x7fffffff)
Dec 06 09:43:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d1dad53dfb2f070967372d35b973ddc922d0cc08f86e02ac04dcdb3044413b5e/merged/etc/prometheus supports timestamps until 2038 (0x7fffffff)
Dec 06 09:43:15 compute-0 podman[99297]: 2025-12-06 09:43:15.009722402 +0000 UTC m=+0.141163263 container init cfe4d69091434e5154fa760292bba767b8875965fa71cf21268b9ec1632f0d9e (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 06 09:43:15 compute-0 podman[99297]: 2025-12-06 09:43:15.016362133 +0000 UTC m=+0.147802954 container start cfe4d69091434e5154fa760292bba767b8875965fa71cf21268b9ec1632f0d9e (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 06 09:43:15 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v103: 337 pgs: 2 active+remapped, 335 active+clean; 458 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 109 B/s, 2 objects/s recovering
Dec 06 09:43:15 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"} v 0)
Dec 06 09:43:15 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"}]: dispatch
Dec 06 09:43:15 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"} v 0)
Dec 06 09:43:15 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]: dispatch
Dec 06 09:43:15 compute-0 bash[99297]: cfe4d69091434e5154fa760292bba767b8875965fa71cf21268b9ec1632f0d9e
Dec 06 09:43:15 compute-0 ceph-osd[82803]: log_channel(cluster) log [DBG] : 12.16 scrub starts
Dec 06 09:43:15 compute-0 systemd[1]: Started Ceph prometheus.compute-0 for 5ecd3f74-dade-5fc4-92ce-8950ae424258.
Dec 06 09:43:15 compute-0 ceph-osd[82803]: log_channel(cluster) log [DBG] : 12.16 scrub ok
Dec 06 09:43:15 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-prometheus-compute-0[99313]: ts=2025-12-06T09:43:15.061Z caller=main.go:617 level=info msg="Starting Prometheus Server" mode=server version="(version=2.51.0, branch=HEAD, revision=c05c15512acb675e3f6cd662a6727854e93fc024)"
Dec 06 09:43:15 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-prometheus-compute-0[99313]: ts=2025-12-06T09:43:15.061Z caller=main.go:622 level=info build_context="(go=go1.22.1, platform=linux/amd64, user=root@b5723e458358, date=20240319-10:54:45, tags=netgo,builtinassets,stringlabels)"
Dec 06 09:43:15 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-prometheus-compute-0[99313]: ts=2025-12-06T09:43:15.061Z caller=main.go:623 level=info host_details="(Linux 5.14.0-645.el9.x86_64 #1 SMP PREEMPT_DYNAMIC Fri Nov 28 14:01:17 UTC 2025 x86_64 compute-0 (none))"
Dec 06 09:43:15 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-prometheus-compute-0[99313]: ts=2025-12-06T09:43:15.061Z caller=main.go:624 level=info fd_limits="(soft=1048576, hard=1048576)"
Dec 06 09:43:15 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-prometheus-compute-0[99313]: ts=2025-12-06T09:43:15.061Z caller=main.go:625 level=info vm_limits="(soft=unlimited, hard=unlimited)"
Dec 06 09:43:15 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-prometheus-compute-0[99313]: ts=2025-12-06T09:43:15.068Z caller=web.go:568 level=info component=web msg="Start listening for connections" address=192.168.122.100:9095
Dec 06 09:43:15 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-prometheus-compute-0[99313]: ts=2025-12-06T09:43:15.068Z caller=main.go:1129 level=info msg="Starting TSDB ..."
Dec 06 09:43:15 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:43:15 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:43:15 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:43:15.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:43:15 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-prometheus-compute-0[99313]: ts=2025-12-06T09:43:15.072Z caller=head.go:616 level=info component=tsdb msg="Replaying on-disk memory mappable chunks if any"
Dec 06 09:43:15 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-prometheus-compute-0[99313]: ts=2025-12-06T09:43:15.072Z caller=head.go:698 level=info component=tsdb msg="On-disk memory mappable chunks replay completed" duration=2.53µs
Dec 06 09:43:15 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-prometheus-compute-0[99313]: ts=2025-12-06T09:43:15.072Z caller=head.go:706 level=info component=tsdb msg="Replaying WAL, this may take a while"
Dec 06 09:43:15 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-prometheus-compute-0[99313]: ts=2025-12-06T09:43:15.072Z caller=head.go:778 level=info component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0
Dec 06 09:43:15 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-prometheus-compute-0[99313]: ts=2025-12-06T09:43:15.072Z caller=head.go:815 level=info component=tsdb msg="WAL replay completed" checkpoint_replay_duration=35.021µs wal_replay_duration=593.417µs wbl_replay_duration=160ns total_replay_duration=656.729µs
Dec 06 09:43:15 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-prometheus-compute-0[99313]: ts=2025-12-06T09:43:15.073Z caller=tls_config.go:313 level=info component=web msg="Listening on" address=192.168.122.100:9095
Dec 06 09:43:15 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-prometheus-compute-0[99313]: ts=2025-12-06T09:43:15.073Z caller=tls_config.go:316 level=info component=web msg="TLS is disabled." http2=false address=192.168.122.100:9095
Dec 06 09:43:15 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-prometheus-compute-0[99313]: ts=2025-12-06T09:43:15.077Z caller=main.go:1150 level=info fs_type=XFS_SUPER_MAGIC
Dec 06 09:43:15 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-prometheus-compute-0[99313]: ts=2025-12-06T09:43:15.077Z caller=main.go:1153 level=info msg="TSDB started"
Dec 06 09:43:15 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-prometheus-compute-0[99313]: ts=2025-12-06T09:43:15.077Z caller=main.go:1335 level=info msg="Loading configuration file" filename=/etc/prometheus/prometheus.yml
Dec 06 09:43:15 compute-0 sudo[98413]: pam_unix(sudo:session): session closed for user root
Dec 06 09:43:15 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 06 09:43:15 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-prometheus-compute-0[99313]: ts=2025-12-06T09:43:15.110Z caller=main.go:1372 level=info msg="Completed loading of configuration file" filename=/etc/prometheus/prometheus.yml totalDuration=33.444545ms db_storage=1.84µs remote_storage=3.76µs web_handler=1.15µs query_engine=1.96µs scrape=2.841362ms scrape_sd=478.114µs notify=49.852µs notify_sd=267.367µs rules=28.715908ms tracing=18.2µs
Dec 06 09:43:15 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-prometheus-compute-0[99313]: ts=2025-12-06T09:43:15.110Z caller=main.go:1114 level=info msg="Server is ready to receive web requests."
Dec 06 09:43:15 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-prometheus-compute-0[99313]: ts=2025-12-06T09:43:15.110Z caller=manager.go:163 level=info component="rule manager" msg="Starting rule manager..."
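
With "Server is ready to receive web requests" logged, the instance at 192.168.122.100:9095 (address from the listener lines above; TLS disabled per the tls_config lines) answers the standard Prometheus endpoints:

    # Liveness and readiness checks
    curl -s http://192.168.122.100:9095/-/healthy
    curl -s http://192.168.122.100:9095/-/ready
    # Sample instant query against the HTTP API
    curl -s 'http://192.168.122.100:9095/api/v1/query?query=up'
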
Dec 06 09:43:15 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:43:15 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 06 09:43:15 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:43:15 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.prometheus}] v 0)
Dec 06 09:43:15 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:43:15 compute-0 ceph-mgr[74618]: [progress INFO root] complete: finished ev 12403888-f638-4724-bb9d-df3242ef47cd (Updating prometheus deployment (+1 -> 1))
Dec 06 09:43:15 compute-0 ceph-mgr[74618]: [progress INFO root] Completed event 12403888-f638-4724-bb9d-df3242ef47cd (Updating prometheus deployment (+1 -> 1)) in 6 seconds
Dec 06 09:43:15 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module enable", "module": "prometheus"} v 0)
Dec 06 09:43:15 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "mgr module enable", "module": "prometheus"}]: dispatch
Dec 06 09:43:15 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:43:15 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f92180016a0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:43:15 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:43:15 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000028s ======
Dec 06 09:43:15 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:43:15.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 06 09:43:15 compute-0 ceph-mgr[74618]: [progress INFO root] Writing back 29 completed events
Dec 06 09:43:15 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Dec 06 09:43:15 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:43:15 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e85 do_prune osdmap full prune enabled
Dec 06 09:43:15 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"}]': finished
Dec 06 09:43:15 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]': finished
Dec 06 09:43:15 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e86 e86: 3 total, 3 up, 3 in
Dec 06 09:43:15 compute-0 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e86: 3 total, 3 up, 3 in
Dec 06 09:43:15 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 86 pg[10.f( v 51'1027 (0'0,51'1027] local-lis/les=0/0 n=6 ec=58/45 lis/c=83/65 les/c/f=84/66/0 sis=86) [1] r=0 lpr=86 pi=[65,86)/1 luod=0'0 crt=51'1027 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 06 09:43:15 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 86 pg[10.f( v 51'1027 (0'0,51'1027] local-lis/les=0/0 n=6 ec=58/45 lis/c=83/65 les/c/f=84/66/0 sis=86) [1] r=0 lpr=86 pi=[65,86)/1 crt=51'1027 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 09:43:15 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 86 pg[10.7( v 51'1027 (0'0,51'1027] local-lis/les=0/0 n=6 ec=58/45 lis/c=83/65 les/c/f=84/66/0 sis=86) [1] r=0 lpr=86 pi=[65,86)/1 luod=0'0 crt=51'1027 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 06 09:43:15 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 86 pg[10.7( v 51'1027 (0'0,51'1027] local-lis/les=0/0 n=6 ec=58/45 lis/c=83/65 les/c/f=84/66/0 sis=86) [1] r=0 lpr=86 pi=[65,86)/1 crt=51'1027 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 09:43:15 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 86 pg[10.9( empty local-lis/les=0/0 n=0 ec=58/45 lis/c=65/65 les/c/f=66/66/0 sis=86) [1] r=0 lpr=86 pi=[65,86)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 09:43:15 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 86 pg[10.19( empty local-lis/les=0/0 n=0 ec=58/45 lis/c=65/65 les/c/f=66/66/0 sis=86) [1] r=0 lpr=86 pi=[65,86)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 09:43:15 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 86 pg[10.17( v 51'1027 (0'0,51'1027] local-lis/les=85/86 n=5 ec=58/45 lis/c=83/65 les/c/f=84/66/0 sis=85) [1] r=0 lpr=85 pi=[65,85)/1 crt=51'1027 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 09:43:15 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 86 pg[10.1f( v 51'1027 (0'0,51'1027] local-lis/les=85/86 n=5 ec=58/45 lis/c=83/65 les/c/f=84/66/0 sis=85) [1] r=0 lpr=85 pi=[65,85)/1 crt=51'1027 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 09:43:15 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 86 pg[10.18( v 51'1027 (0'0,51'1027] local-lis/les=85/86 n=5 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=85) [0]/[1] async=[0] r=0 lpr=85 pi=[58,85)/1 crt=51'1027 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 09:43:15 compute-0 ceph-mon[74327]: 7.14 scrub starts
Dec 06 09:43:15 compute-0 ceph-mon[74327]: 7.14 scrub ok
Dec 06 09:43:15 compute-0 ceph-mon[74327]: 12.1b deep-scrub starts
Dec 06 09:43:15 compute-0 ceph-mon[74327]: 12.1b deep-scrub ok
Dec 06 09:43:15 compute-0 ceph-mon[74327]: 7.13 scrub starts
Dec 06 09:43:15 compute-0 ceph-mon[74327]: 7.13 scrub ok
Dec 06 09:43:15 compute-0 ceph-mon[74327]: osdmap e85: 3 total, 3 up, 3 in
Dec 06 09:43:15 compute-0 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"}]: dispatch
Dec 06 09:43:15 compute-0 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]: dispatch
Dec 06 09:43:15 compute-0 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:43:15 compute-0 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:43:15 compute-0 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:43:15 compute-0 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "mgr module enable", "module": "prometheus"}]: dispatch
Dec 06 09:43:15 compute-0 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:43:15 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 86 pg[10.8( v 51'1027 (0'0,51'1027] local-lis/les=85/86 n=6 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=85) [0]/[1] async=[0] r=0 lpr=85 pi=[58,85)/1 crt=51'1027 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 09:43:15 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e86 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 09:43:15 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e86 do_prune osdmap full prune enabled
Dec 06 09:43:15 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e87 e87: 3 total, 3 up, 3 in
Dec 06 09:43:15 compute-0 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e87: 3 total, 3 up, 3 in
Dec 06 09:43:16 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 87 pg[10.19( empty local-lis/les=0/0 n=0 ec=58/45 lis/c=65/65 les/c/f=66/66/0 sis=87) [1]/[2] r=-1 lpr=87 pi=[65,87)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 06 09:43:16 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 87 pg[10.19( empty local-lis/les=0/0 n=0 ec=58/45 lis/c=65/65 les/c/f=66/66/0 sis=87) [1]/[2] r=-1 lpr=87 pi=[65,87)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 06 09:43:16 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 87 pg[10.8( v 51'1027 (0'0,51'1027] local-lis/les=85/86 n=6 ec=58/45 lis/c=85/58 les/c/f=86/59/0 sis=87 pruub=15.525048256s) [0] async=[0] r=-1 lpr=87 pi=[58,87)/1 crt=51'1027 lcod 0'0 mlcod 0'0 active pruub 233.847198486s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 06 09:43:16 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 87 pg[10.8( v 51'1027 (0'0,51'1027] local-lis/les=85/86 n=6 ec=58/45 lis/c=85/58 les/c/f=86/59/0 sis=87 pruub=15.524926186s) [0] r=-1 lpr=87 pi=[58,87)/1 crt=51'1027 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 233.847198486s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 09:43:16 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 87 pg[10.18( v 51'1027 (0'0,51'1027] local-lis/les=85/86 n=5 ec=58/45 lis/c=85/58 les/c/f=86/59/0 sis=87 pruub=15.522101402s) [0] async=[0] r=-1 lpr=87 pi=[58,87)/1 crt=51'1027 lcod 0'0 mlcod 0'0 active pruub 233.843963623s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 06 09:43:16 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 87 pg[10.18( v 51'1027 (0'0,51'1027] local-lis/les=85/86 n=5 ec=58/45 lis/c=85/58 les/c/f=86/59/0 sis=87 pruub=15.521376610s) [0] r=-1 lpr=87 pi=[58,87)/1 crt=51'1027 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 233.843963623s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 09:43:16 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 87 pg[10.9( empty local-lis/les=0/0 n=0 ec=58/45 lis/c=65/65 les/c/f=66/66/0 sis=87) [1]/[2] r=-1 lpr=87 pi=[65,87)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 06 09:43:16 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 87 pg[10.9( empty local-lis/les=0/0 n=0 ec=58/45 lis/c=65/65 les/c/f=66/66/0 sis=87) [1]/[2] r=-1 lpr=87 pi=[65,87)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 06 09:43:16 compute-0 ceph-mon[74327]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #21. Immutable memtables: 0.
Dec 06 09:43:16 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-09:43:16.007025) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 06 09:43:16 compute-0 ceph-mon[74327]: rocksdb: [db/flush_job.cc:856] [default] [JOB 5] Flushing memtable with next log file: 21
Dec 06 09:43:16 compute-0 ceph-mon[74327]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765014196007205, "job": 5, "event": "flush_started", "num_memtables": 1, "num_entries": 334, "num_deletes": 253, "total_data_size": 165864, "memory_usage": 173816, "flush_reason": "Manual Compaction"}
Dec 06 09:43:16 compute-0 ceph-mon[74327]: rocksdb: [db/flush_job.cc:885] [default] [JOB 5] Level-0 flush table #22: started
Dec 06 09:43:16 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 87 pg[10.7( v 51'1027 (0'0,51'1027] local-lis/les=86/87 n=6 ec=58/45 lis/c=83/65 les/c/f=84/66/0 sis=86) [1] r=0 lpr=86 pi=[65,86)/1 crt=51'1027 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 09:43:16 compute-0 ceph-mon[74327]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765014196011262, "cf_name": "default", "job": 5, "event": "table_file_creation", "file_number": 22, "file_size": 165969, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 7879, "largest_seqno": 8212, "table_properties": {"data_size": 163775, "index_size": 358, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 773, "raw_key_size": 4699, "raw_average_key_size": 15, "raw_value_size": 159314, "raw_average_value_size": 525, "num_data_blocks": 16, "num_entries": 303, "num_filter_entries": 303, "num_deletions": 253, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765014195, "oldest_key_time": 1765014195, "file_creation_time": 1765014196, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "423e8366-3852-4d2b-aa53-87abab31aff3", "db_session_id": "4WBX5WA2U4DRQ0QUUFCR", "orig_file_number": 22, "seqno_to_time_mapping": "N/A"}}
Dec 06 09:43:16 compute-0 ceph-mon[74327]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 5] Flush lasted 4251 microseconds, and 1396 cpu microseconds.
Dec 06 09:43:16 compute-0 ceph-mon[74327]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 09:43:16 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 87 pg[10.f( v 51'1027 (0'0,51'1027] local-lis/les=86/87 n=6 ec=58/45 lis/c=83/65 les/c/f=84/66/0 sis=86) [1] r=0 lpr=86 pi=[65,86)/1 crt=51'1027 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 09:43:16 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-09:43:16.011296) [db/flush_job.cc:967] [default] [JOB 5] Level-0 flush table #22: 165969 bytes OK
Dec 06 09:43:16 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-09:43:16.011312) [db/memtable_list.cc:519] [default] Level-0 commit table #22 started
Dec 06 09:43:16 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-09:43:16.012959) [db/memtable_list.cc:722] [default] Level-0 commit table #22: memtable #1 done
Dec 06 09:43:16 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-09:43:16.012974) EVENT_LOG_v1 {"time_micros": 1765014196012970, "job": 5, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 06 09:43:16 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-09:43:16.013018) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 06 09:43:16 compute-0 ceph-mon[74327]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 5] Try to delete WAL files size 163525, prev total WAL file size 163525, number of live WAL files 2.
Dec 06 09:43:16 compute-0 ceph-mon[74327]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000018.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 09:43:16 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-09:43:16.013537) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6B760030' seq:72057594037927935, type:22 .. '6B7600323534' seq:0, type:0; will stop at (end)
Dec 06 09:43:16 compute-0 ceph-mon[74327]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 6] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 06 09:43:16 compute-0 ceph-mon[74327]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 5 Base level 0, inputs: [22(162KB)], [20(12MB)]
Dec 06 09:43:16 compute-0 ceph-mon[74327]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765014196013603, "job": 6, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [22], "files_L6": [20], "score": -1, "input_data_size": 13165752, "oldest_snapshot_seqno": -1}
Dec 06 09:43:16 compute-0 ceph-osd[82803]: log_channel(cluster) log [DBG] : 12.14 scrub starts
Dec 06 09:43:16 compute-0 ceph-osd[82803]: log_channel(cluster) log [DBG] : 12.14 scrub ok
Dec 06 09:43:16 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:43:16 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f921c003db0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:43:16 compute-0 ceph-mon[74327]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 6] Generated table #23: 3233 keys, 12743077 bytes, temperature: kUnknown
Dec 06 09:43:16 compute-0 ceph-mon[74327]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765014196148111, "cf_name": "default", "job": 6, "event": "table_file_creation", "file_number": 23, "file_size": 12743077, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12716654, "index_size": 17225, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 8133, "raw_key_size": 83256, "raw_average_key_size": 25, "raw_value_size": 12652293, "raw_average_value_size": 3913, "num_data_blocks": 748, "num_entries": 3233, "num_filter_entries": 3233, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765013861, "oldest_key_time": 0, "file_creation_time": 1765014196, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "423e8366-3852-4d2b-aa53-87abab31aff3", "db_session_id": "4WBX5WA2U4DRQ0QUUFCR", "orig_file_number": 23, "seqno_to_time_mapping": "N/A"}}
Dec 06 09:43:16 compute-0 ceph-mon[74327]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 09:43:16 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-09:43:16.150395) [db/compaction/compaction_job.cc:1663] [default] [JOB 6] Compacted 1@0 + 1@6 files to L6 => 12743077 bytes
Dec 06 09:43:16 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-09:43:16.153172) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 97.8 rd, 94.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.2, 12.4 +0.0 blob) out(12.2 +0.0 blob), read-write-amplify(156.1) write-amplify(76.8) OK, records in: 3756, records dropped: 523 output_compression: NoCompression
Dec 06 09:43:16 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-09:43:16.153201) EVENT_LOG_v1 {"time_micros": 1765014196153188, "job": 6, "event": "compaction_finished", "compaction_time_micros": 134608, "compaction_time_cpu_micros": 45149, "output_level": 6, "num_output_files": 1, "total_output_size": 12743077, "num_input_records": 3756, "num_output_records": 3233, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 06 09:43:16 compute-0 ceph-mon[74327]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000022.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 09:43:16 compute-0 ceph-mon[74327]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765014196153380, "job": 6, "event": "table_file_deletion", "file_number": 22}
Dec 06 09:43:16 compute-0 ceph-mon[74327]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000020.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 09:43:16 compute-0 ceph-mon[74327]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765014196155610, "job": 6, "event": "table_file_deletion", "file_number": 20}
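
The JOB 6 trail above is self-consistent: the rd/wr throughput RocksDB prints (97.8 rd, 94.7 wr MB/sec) follows directly from the EVENT_LOG_v1 fields, input_data_size and total_output_size divided by compaction_time_micros. A minimal sketch that recomputes those rates from journal text on stdin; the field names are exactly those visible above, everything else is illustrative:

    # Recompute RocksDB's per-job rd/wr MB/s from EVENT_LOG_v1 lines.
    # For JOB 6 above: 13165752 B / 0.134608 s = 97.8 MB/s read,
    #                  12743077 B / 0.134608 s = 94.7 MB/s written.
    import json, sys

    started = {}
    for line in sys.stdin:
        if "EVENT_LOG_v1" not in line:
            continue
        ev = json.loads(line[line.index("{"):])  # JSON starts at first brace
        if ev.get("event") == "compaction_started":
            started[ev["job"]] = ev
        elif ev.get("event") == "compaction_finished" and ev["job"] in started:
            s = started[ev["job"]]
            secs = ev["compaction_time_micros"] / 1e6
            rd = s["input_data_size"] / secs / 1e6
            wr = ev["total_output_size"] / secs / 1e6
            print(f'job {ev["job"]}: {rd:.1f} MB/s rd, {wr:.1f} MB/s wr')
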
Dec 06 09:43:16 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-09:43:16.013399) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 09:43:16 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-09:43:16.155663) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 09:43:16 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-09:43:16.155667) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 09:43:16 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-09:43:16.155669) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 09:43:16 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-09:43:16.155670) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 09:43:16 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-09:43:16.155672) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 09:43:16 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "mgr module enable", "module": "prometheus"}]': finished
Dec 06 09:43:16 compute-0 ceph-mgr[74618]: mgr handle_mgr_map respawning because set of enabled modules changed!
Dec 06 09:43:16 compute-0 ceph-mgr[74618]: mgr respawn  e: '/usr/bin/ceph-mgr'
Dec 06 09:43:16 compute-0 ceph-mgr[74618]: mgr respawn  0: '/usr/bin/ceph-mgr'
Dec 06 09:43:16 compute-0 ceph-mgr[74618]: mgr respawn  1: '-n'
Dec 06 09:43:16 compute-0 ceph-mgr[74618]: mgr respawn  2: 'mgr.compute-0.qhdjwa'
Dec 06 09:43:16 compute-0 ceph-mgr[74618]: mgr respawn  3: '-f'
Dec 06 09:43:16 compute-0 ceph-mgr[74618]: mgr respawn  4: '--setuser'
Dec 06 09:43:16 compute-0 ceph-mgr[74618]: mgr respawn  5: 'ceph'
Dec 06 09:43:16 compute-0 ceph-mgr[74618]: mgr respawn  6: '--setgroup'
Dec 06 09:43:16 compute-0 ceph-mgr[74618]: mgr respawn  7: 'ceph'
Dec 06 09:43:16 compute-0 ceph-mgr[74618]: mgr respawn  8: '--default-log-to-file=false'
Dec 06 09:43:16 compute-0 ceph-mgr[74618]: mgr respawn  9: '--default-log-to-journald=true'
Dec 06 09:43:16 compute-0 ceph-mgr[74618]: mgr respawn  10: '--default-log-to-stderr=false'
Dec 06 09:43:16 compute-0 ceph-mgr[74618]: mgr respawn respawning with exe /usr/bin/ceph-mgr
Dec 06 09:43:16 compute-0 ceph-mgr[74618]: mgr respawn  exe_path /proc/self/exe
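
The argv echo and the exe_path /proc/self/exe line show how the manager restarts itself once the set of enabled modules changes: it re-executes its own still-open binary image through /proc/self/exe with the original argument vector, so the respawn succeeds even if /usr/bin/ceph-mgr was replaced on disk in the meantime. ceph-mgr does this in C++; the sketch below is the same idiom in Python, for illustration only:

    # Illustrative self-respawn idiom, mirroring the argv dump above.
    import os, sys

    def respawn(argv):
        # /proc/self/exe names the running image even if the on-disk
        # path was replaced (e.g. by an upgrade) after process start.
        os.execv("/proc/self/exe", argv)

    # respawn(sys.argv)  # never returns on success
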
Dec 06 09:43:16 compute-0 ceph-mon[74327]: log_channel(cluster) log [DBG] : mgrmap e30: compute-0.qhdjwa(active, since 107s), standbys: compute-1.sauzid, compute-2.oazbvn
Dec 06 09:43:16 compute-0 sshd-session[90448]: Connection closed by 192.168.122.100 port 42686
Dec 06 09:43:16 compute-0 sshd-session[90417]: pam_unix(sshd:session): session closed for user ceph-admin
Dec 06 09:43:16 compute-0 systemd[1]: session-35.scope: Deactivated successfully.
Dec 06 09:43:16 compute-0 systemd[1]: session-35.scope: Consumed 55.388s CPU time.
Dec 06 09:43:16 compute-0 systemd-logind[795]: Session 35 logged out. Waiting for processes to exit.
Dec 06 09:43:16 compute-0 systemd-logind[795]: Removed session 35.
Dec 06 09:43:16 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ignoring --setuser ceph since I am not root
Dec 06 09:43:16 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ignoring --setgroup ceph since I am not root
Dec 06 09:43:16 compute-0 ceph-mgr[74618]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mgr, pid 2
Dec 06 09:43:16 compute-0 ceph-mgr[74618]: pidfile_write: ignore empty --pid-file
Dec 06 09:43:16 compute-0 ceph-mgr[74618]: mgr[py] Loading python module 'alerts'
Dec 06 09:43:16 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:43:16.427+0000 7f364ff49140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Dec 06 09:43:16 compute-0 ceph-mgr[74618]: mgr[py] Module alerts has missing NOTIFY_TYPES member
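
The "missing NOTIFY_TYPES member" messages that recur for most modules below are warnings, not load failures: the manager expects each Python module to declare which notification types it consumes and, when the attribute is absent, falls back to delivering all notifications to it. A hypothetical module skeleton with the attribute declared, assuming Ceph's in-tree mgr_module.py API; treat the sketch as illustrative, not as any shipped module:

    # Hypothetical mgr module skeleton (assumes Ceph's mgr_module.py).
    from mgr_module import MgrModule, NotifyType

    class Module(MgrModule):
        # Declare the notifications we want; silences the warning above.
        NOTIFY_TYPES = [NotifyType.mon_map, NotifyType.osd_map]

        def notify(self, notify_type, notify_id):
            if notify_type == NotifyType.osd_map:
                self.log.info("osdmap changed")
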
Dec 06 09:43:16 compute-0 ceph-mgr[74618]: mgr[py] Loading python module 'balancer'
Dec 06 09:43:16 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:43:16.507+0000 7f364ff49140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Dec 06 09:43:16 compute-0 ceph-mgr[74618]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Dec 06 09:43:16 compute-0 ceph-mgr[74618]: mgr[py] Loading python module 'cephadm'
Dec 06 09:43:16 compute-0 ceph-mon[74327]: pgmap v103: 337 pgs: 2 active+remapped, 335 active+clean; 458 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 109 B/s, 2 objects/s recovering
Dec 06 09:43:16 compute-0 ceph-mon[74327]: 9.16 scrub starts
Dec 06 09:43:16 compute-0 ceph-mon[74327]: 12.16 scrub starts
Dec 06 09:43:16 compute-0 ceph-mon[74327]: 9.16 scrub ok
Dec 06 09:43:16 compute-0 ceph-mon[74327]: 12.16 scrub ok
Dec 06 09:43:16 compute-0 ceph-mon[74327]: 7.9 scrub starts
Dec 06 09:43:16 compute-0 ceph-mon[74327]: 7.9 scrub ok
Dec 06 09:43:16 compute-0 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"}]': finished
Dec 06 09:43:16 compute-0 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]': finished
Dec 06 09:43:16 compute-0 ceph-mon[74327]: osdmap e86: 3 total, 3 up, 3 in
Dec 06 09:43:16 compute-0 ceph-mon[74327]: osdmap e87: 3 total, 3 up, 3 in
Dec 06 09:43:16 compute-0 ceph-mon[74327]: from='mgr.14400 192.168.122.100:0/3311628268' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "mgr module enable", "module": "prometheus"}]': finished
Dec 06 09:43:16 compute-0 ceph-mon[74327]: mgrmap e30: compute-0.qhdjwa(active, since 107s), standbys: compute-1.sauzid, compute-2.oazbvn
Dec 06 09:43:16 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:43:16 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9228003db0 fd 14 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:43:16 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e87 do_prune osdmap full prune enabled
Dec 06 09:43:17 compute-0 ceph-osd[82803]: log_channel(cluster) log [DBG] : 12.f scrub starts
Dec 06 09:43:17 compute-0 ceph-osd[82803]: log_channel(cluster) log [DBG] : 12.f scrub ok
Dec 06 09:43:17 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:43:17 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000028s ======
Dec 06 09:43:17 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:43:17.070 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 06 09:43:17 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:43:17 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f92100016a0 fd 14 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:43:17 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:43:17 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000028s ======
Dec 06 09:43:17 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:43:17.267 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
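
The anonymous "HEAD / HTTP/1.0" 200 pairs arriving every couple of seconds from 192.168.122.100 and 192.168.122.102 read like load-balancer health probes against the beast frontend. A sketch of such a probe; the target port is an assumption, since beast does not log it here:

    # Assumed health probe against this node's radosgw beast frontend.
    # Port 80 is a guess; the beast lines above do not record it.
    import http.client

    conn = http.client.HTTPConnection("192.168.122.100", 80, timeout=2)
    conn.request("HEAD", "/")
    print(conn.getresponse().status)  # the log shows these returning 200
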
Dec 06 09:43:17 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e88 e88: 3 total, 3 up, 3 in
Dec 06 09:43:17 compute-0 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e88: 3 total, 3 up, 3 in
Dec 06 09:43:17 compute-0 ceph-mgr[74618]: mgr[py] Loading python module 'crash'
Dec 06 09:43:17 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:43:17.422+0000 7f364ff49140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Dec 06 09:43:17 compute-0 ceph-mgr[74618]: mgr[py] Module crash has missing NOTIFY_TYPES member
Dec 06 09:43:17 compute-0 ceph-mgr[74618]: mgr[py] Loading python module 'dashboard'
Dec 06 09:43:17 compute-0 ceph-mon[74327]: 12.14 scrub starts
Dec 06 09:43:17 compute-0 ceph-mon[74327]: 12.14 scrub ok
Dec 06 09:43:17 compute-0 ceph-mon[74327]: 9.7 scrub starts
Dec 06 09:43:17 compute-0 ceph-mon[74327]: 9.7 scrub ok
Dec 06 09:43:17 compute-0 ceph-mon[74327]: 7.1e scrub starts
Dec 06 09:43:17 compute-0 ceph-mon[74327]: 7.1e scrub ok
Dec 06 09:43:17 compute-0 ceph-mon[74327]: osdmap e88: 3 total, 3 up, 3 in
Dec 06 09:43:18 compute-0 ceph-osd[82803]: log_channel(cluster) log [DBG] : 12.1 scrub starts
Dec 06 09:43:18 compute-0 ceph-mgr[74618]: mgr[py] Loading python module 'devicehealth'
Dec 06 09:43:18 compute-0 ceph-osd[82803]: log_channel(cluster) log [DBG] : 12.1 scrub ok
Dec 06 09:43:18 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:43:18 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9218002b10 fd 14 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:43:18 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:43:18.105+0000 7f364ff49140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Dec 06 09:43:18 compute-0 ceph-mgr[74618]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Dec 06 09:43:18 compute-0 ceph-mgr[74618]: mgr[py] Loading python module 'diskprediction_local'
Dec 06 09:43:18 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Dec 06 09:43:18 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Dec 06 09:43:18 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]:   from numpy import show_config as show_numpy_config
Dec 06 09:43:18 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:43:18.288+0000 7f364ff49140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Dec 06 09:43:18 compute-0 ceph-mgr[74618]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Dec 06 09:43:18 compute-0 ceph-mgr[74618]: mgr[py] Loading python module 'influx'
Dec 06 09:43:18 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e88 do_prune osdmap full prune enabled
Dec 06 09:43:18 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e89 e89: 3 total, 3 up, 3 in
Dec 06 09:43:18 compute-0 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e89: 3 total, 3 up, 3 in
Dec 06 09:43:18 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 89 pg[10.19( v 51'1027 (0'0,51'1027] local-lis/les=0/0 n=5 ec=58/45 lis/c=87/65 les/c/f=88/66/0 sis=89) [1] r=0 lpr=89 pi=[65,89)/1 luod=0'0 crt=51'1027 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 06 09:43:18 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 89 pg[10.9( v 51'1027 (0'0,51'1027] local-lis/les=0/0 n=6 ec=58/45 lis/c=87/65 les/c/f=88/66/0 sis=89) [1] r=0 lpr=89 pi=[65,89)/1 luod=0'0 crt=51'1027 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 06 09:43:18 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 89 pg[10.19( v 51'1027 (0'0,51'1027] local-lis/les=0/0 n=5 ec=58/45 lis/c=87/65 les/c/f=88/66/0 sis=89) [1] r=0 lpr=89 pi=[65,89)/1 crt=51'1027 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 09:43:18 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 89 pg[10.9( v 51'1027 (0'0,51'1027] local-lis/les=0/0 n=6 ec=58/45 lis/c=87/65 les/c/f=88/66/0 sis=89) [1] r=0 lpr=89 pi=[65,89)/1 crt=51'1027 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 09:43:18 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:43:18.367+0000 7f364ff49140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Dec 06 09:43:18 compute-0 ceph-mgr[74618]: mgr[py] Module influx has missing NOTIFY_TYPES member
Dec 06 09:43:18 compute-0 ceph-mgr[74618]: mgr[py] Loading python module 'insights'
Dec 06 09:43:18 compute-0 ceph-mgr[74618]: mgr[py] Loading python module 'iostat'
Dec 06 09:43:18 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:43:18.540+0000 7f364ff49140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Dec 06 09:43:18 compute-0 ceph-mgr[74618]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Dec 06 09:43:18 compute-0 ceph-mgr[74618]: mgr[py] Loading python module 'k8sevents'
Dec 06 09:43:18 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:43:18 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f921c003db0 fd 14 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:43:18 compute-0 ceph-mon[74327]: 12.f scrub starts
Dec 06 09:43:18 compute-0 ceph-mon[74327]: 12.f scrub ok
Dec 06 09:43:18 compute-0 ceph-mon[74327]: 8.2 scrub starts
Dec 06 09:43:18 compute-0 ceph-mon[74327]: 8.2 scrub ok
Dec 06 09:43:18 compute-0 ceph-mon[74327]: 7.18 scrub starts
Dec 06 09:43:18 compute-0 ceph-mon[74327]: 7.18 scrub ok
Dec 06 09:43:18 compute-0 ceph-mon[74327]: osdmap e89: 3 total, 3 up, 3 in
Dec 06 09:43:19 compute-0 ceph-mgr[74618]: mgr[py] Loading python module 'localpool'
Dec 06 09:43:19 compute-0 ceph-osd[82803]: log_channel(cluster) log [DBG] : 11.1a scrub starts
Dec 06 09:43:19 compute-0 ceph-osd[82803]: log_channel(cluster) log [DBG] : 11.1a scrub ok
Dec 06 09:43:19 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:43:19 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:43:19 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:43:19.073 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:43:19 compute-0 ceph-mgr[74618]: mgr[py] Loading python module 'mds_autoscaler'
Dec 06 09:43:19 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:43:19 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9228003db0 fd 14 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:43:19 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:43:19 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000029s ======
Dec 06 09:43:19 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:43:19.269 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Dec 06 09:43:19 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e89 do_prune osdmap full prune enabled
Dec 06 09:43:19 compute-0 ceph-mgr[74618]: mgr[py] Loading python module 'mirroring'
Dec 06 09:43:19 compute-0 ceph-mgr[74618]: mgr[py] Loading python module 'nfs'
Dec 06 09:43:19 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:43:19.674+0000 7f364ff49140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Dec 06 09:43:19 compute-0 ceph-mgr[74618]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Dec 06 09:43:19 compute-0 ceph-mgr[74618]: mgr[py] Loading python module 'orchestrator'
Dec 06 09:43:19 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:43:19.903+0000 7f364ff49140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Dec 06 09:43:19 compute-0 ceph-mgr[74618]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Dec 06 09:43:19 compute-0 ceph-mgr[74618]: mgr[py] Loading python module 'osd_perf_query'
Dec 06 09:43:19 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:43:19.984+0000 7f364ff49140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Dec 06 09:43:19 compute-0 ceph-mgr[74618]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Dec 06 09:43:19 compute-0 ceph-mgr[74618]: mgr[py] Loading python module 'osd_support'
Dec 06 09:43:20 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:43:20.054+0000 7f364ff49140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Dec 06 09:43:20 compute-0 ceph-mgr[74618]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Dec 06 09:43:20 compute-0 ceph-mgr[74618]: mgr[py] Loading python module 'pg_autoscaler'
Dec 06 09:43:20 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:43:20 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f92100016a0 fd 14 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:43:20 compute-0 ceph-osd[82803]: log_channel(cluster) log [DBG] : 11.1e scrub starts
Dec 06 09:43:20 compute-0 ceph-osd[82803]: log_channel(cluster) log [DBG] : 11.1e scrub ok
Dec 06 09:43:20 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:43:20.139+0000 7f364ff49140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Dec 06 09:43:20 compute-0 ceph-mgr[74618]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Dec 06 09:43:20 compute-0 ceph-mgr[74618]: mgr[py] Loading python module 'progress'
Dec 06 09:43:20 compute-0 sudo[98975]: pam_unix(sudo:session): session closed for user root
Dec 06 09:43:20 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:43:20.227+0000 7f364ff49140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Dec 06 09:43:20 compute-0 ceph-mgr[74618]: mgr[py] Module progress has missing NOTIFY_TYPES member
Dec 06 09:43:20 compute-0 ceph-mgr[74618]: mgr[py] Loading python module 'prometheus'
Dec 06 09:43:20 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e90 e90: 3 total, 3 up, 3 in
Dec 06 09:43:20 compute-0 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e90: 3 total, 3 up, 3 in
Dec 06 09:43:20 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 90 pg[10.9( v 51'1027 (0'0,51'1027] local-lis/les=89/90 n=6 ec=58/45 lis/c=87/65 les/c/f=88/66/0 sis=89) [1] r=0 lpr=89 pi=[65,89)/1 crt=51'1027 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 09:43:20 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 90 pg[10.19( v 51'1027 (0'0,51'1027] local-lis/les=89/90 n=5 ec=58/45 lis/c=87/65 les/c/f=88/66/0 sis=89) [1] r=0 lpr=89 pi=[65,89)/1 crt=51'1027 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 09:43:20 compute-0 ceph-mon[74327]: 12.1 scrub starts
Dec 06 09:43:20 compute-0 ceph-mon[74327]: 11.17 scrub starts
Dec 06 09:43:20 compute-0 ceph-mon[74327]: 12.1 scrub ok
Dec 06 09:43:20 compute-0 ceph-mon[74327]: 11.17 scrub ok
Dec 06 09:43:20 compute-0 ceph-mon[74327]: 7.10 scrub starts
Dec 06 09:43:20 compute-0 ceph-mon[74327]: 7.10 scrub ok
Dec 06 09:43:20 compute-0 sshd-session[98411]: Connection closed by 192.168.122.30 port 34942
Dec 06 09:43:20 compute-0 sshd-session[98384]: pam_unix(sshd:session): session closed for user zuul
Dec 06 09:43:20 compute-0 systemd[1]: session-37.scope: Deactivated successfully.
Dec 06 09:43:20 compute-0 systemd[1]: session-37.scope: Consumed 8.697s CPU time.
Dec 06 09:43:20 compute-0 systemd-logind[795]: Session 37 logged out. Waiting for processes to exit.
Dec 06 09:43:20 compute-0 systemd-logind[795]: Removed session 37.
Dec 06 09:43:20 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:43:20.617+0000 7f364ff49140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Dec 06 09:43:20 compute-0 ceph-mgr[74618]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Dec 06 09:43:20 compute-0 ceph-mgr[74618]: mgr[py] Loading python module 'rbd_support'
Dec 06 09:43:20 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:43:20 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9218002b10 fd 14 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:43:20 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:43:20.718+0000 7f364ff49140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Dec 06 09:43:20 compute-0 ceph-mgr[74618]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Dec 06 09:43:20 compute-0 ceph-mgr[74618]: mgr[py] Loading python module 'restful'
Dec 06 09:43:20 compute-0 ceph-mgr[74618]: mgr[py] Loading python module 'rgw'
Dec 06 09:43:21 compute-0 ceph-osd[82803]: log_channel(cluster) log [DBG] : 11.1c scrub starts
Dec 06 09:43:21 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:43:21 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:43:21 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:43:21.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:43:21 compute-0 ceph-osd[82803]: log_channel(cluster) log [DBG] : 11.1c scrub ok
Dec 06 09:43:21 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:43:21.179+0000 7f364ff49140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Dec 06 09:43:21 compute-0 ceph-mgr[74618]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Dec 06 09:43:21 compute-0 ceph-mgr[74618]: mgr[py] Loading python module 'rook'
Dec 06 09:43:21 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:43:21 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f921c003db0 fd 14 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:43:21 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e90 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 09:43:21 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:43:21 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:43:21 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:43:21.272 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:43:21 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:43:21.807+0000 7f364ff49140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Dec 06 09:43:21 compute-0 ceph-mgr[74618]: mgr[py] Module rook has missing NOTIFY_TYPES member
Dec 06 09:43:21 compute-0 ceph-mgr[74618]: mgr[py] Loading python module 'selftest'
Dec 06 09:43:21 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:43:21.889+0000 7f364ff49140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Dec 06 09:43:21 compute-0 ceph-mgr[74618]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Dec 06 09:43:21 compute-0 ceph-mgr[74618]: mgr[py] Loading python module 'snap_schedule'
Dec 06 09:43:21 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:43:21.992+0000 7f364ff49140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Dec 06 09:43:21 compute-0 ceph-mgr[74618]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Dec 06 09:43:21 compute-0 ceph-mgr[74618]: mgr[py] Loading python module 'stats'
Dec 06 09:43:22 compute-0 ceph-mon[74327]: 11.13 scrub starts
Dec 06 09:43:22 compute-0 ceph-mon[74327]: 11.13 scrub ok
Dec 06 09:43:22 compute-0 ceph-mon[74327]: 11.1a scrub starts
Dec 06 09:43:22 compute-0 ceph-mon[74327]: 11.1a scrub ok
Dec 06 09:43:22 compute-0 ceph-mon[74327]: 11.0 scrub starts
Dec 06 09:43:22 compute-0 ceph-mon[74327]: 11.0 scrub ok
Dec 06 09:43:22 compute-0 ceph-mon[74327]: 11.a scrub starts
Dec 06 09:43:22 compute-0 ceph-mon[74327]: 11.a scrub ok
Dec 06 09:43:22 compute-0 ceph-mon[74327]: 11.1e scrub starts
Dec 06 09:43:22 compute-0 ceph-mon[74327]: 11.1e scrub ok
Dec 06 09:43:22 compute-0 ceph-mon[74327]: 11.c scrub starts
Dec 06 09:43:22 compute-0 ceph-mon[74327]: 11.c scrub ok
Dec 06 09:43:22 compute-0 ceph-mon[74327]: osdmap e90: 3 total, 3 up, 3 in
Dec 06 09:43:22 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:43:22 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9228003db0 fd 14 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:43:22 compute-0 ceph-mgr[74618]: mgr[py] Loading python module 'status'
Dec 06 09:43:22 compute-0 ceph-osd[82803]: log_channel(cluster) log [DBG] : 11.7 scrub starts
Dec 06 09:43:22 compute-0 ceph-osd[82803]: log_channel(cluster) log [DBG] : 11.7 scrub ok
Dec 06 09:43:22 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:43:22.158+0000 7f364ff49140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Dec 06 09:43:22 compute-0 ceph-mgr[74618]: mgr[py] Module status has missing NOTIFY_TYPES member
Dec 06 09:43:22 compute-0 ceph-mgr[74618]: mgr[py] Loading python module 'telegraf'
Dec 06 09:43:22 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:43:22.238+0000 7f364ff49140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Dec 06 09:43:22 compute-0 ceph-mgr[74618]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Dec 06 09:43:22 compute-0 ceph-mgr[74618]: mgr[py] Loading python module 'telemetry'
Dec 06 09:43:22 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:43:22.416+0000 7f364ff49140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Dec 06 09:43:22 compute-0 ceph-mgr[74618]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Dec 06 09:43:22 compute-0 ceph-mgr[74618]: mgr[py] Loading python module 'test_orchestrator'
Dec 06 09:43:22 compute-0 ceph-mon[74327]: log_channel(cluster) log [DBG] : Standby manager daemon compute-2.oazbvn restarted
Dec 06 09:43:22 compute-0 ceph-mon[74327]: log_channel(cluster) log [DBG] : Standby manager daemon compute-2.oazbvn started
Dec 06 09:43:22 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:43:22 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f92100016a0 fd 14 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:43:22 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:43:22.654+0000 7f364ff49140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Dec 06 09:43:22 compute-0 ceph-mgr[74618]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Dec 06 09:43:22 compute-0 ceph-mgr[74618]: mgr[py] Loading python module 'volumes'
Dec 06 09:43:22 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:43:22.945+0000 7f364ff49140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Dec 06 09:43:22 compute-0 ceph-mgr[74618]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Dec 06 09:43:22 compute-0 ceph-mgr[74618]: mgr[py] Loading python module 'zabbix'
Dec 06 09:43:23 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:43:23.018+0000 7f364ff49140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Dec 06 09:43:23 compute-0 ceph-mgr[74618]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Dec 06 09:43:23 compute-0 ceph-mon[74327]: log_channel(cluster) log [INF] : Active manager daemon compute-0.qhdjwa restarted
Dec 06 09:43:23 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e90 do_prune osdmap full prune enabled
Dec 06 09:43:23 compute-0 ceph-mon[74327]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.qhdjwa
Dec 06 09:43:23 compute-0 ceph-mgr[74618]: ms_deliver_dispatch: unhandled message 0x555807345860 mon_map magic: 0 from mon.0 v2:192.168.122.100:3300/0
Dec 06 09:43:23 compute-0 ceph-osd[82803]: log_channel(cluster) log [DBG] : 11.5 deep-scrub starts
Dec 06 09:43:23 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:43:23 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:43:23 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:43:23.079 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:43:23 compute-0 ceph-osd[82803]: log_channel(cluster) log [DBG] : 11.5 deep-scrub ok
Dec 06 09:43:23 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:43:23 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9218002b10 fd 14 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:43:23 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:43:23 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:43:23 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:43:23.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:43:23 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e91 e91: 3 total, 3 up, 3 in
Dec 06 09:43:23 compute-0 ceph-mgr[74618]: mgr handle_mgr_map Activating!
Dec 06 09:43:23 compute-0 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e91: 3 total, 3 up, 3 in
Dec 06 09:43:23 compute-0 ceph-mgr[74618]: mgr handle_mgr_map I am now activating
Dec 06 09:43:23 compute-0 ceph-mon[74327]: log_channel(cluster) log [DBG] : mgrmap e31: compute-0.qhdjwa(active, starting, since 0.658318s), standbys: compute-1.sauzid, compute-2.oazbvn
Dec 06 09:43:23 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Dec 06 09:43:23 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Dec 06 09:43:23 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Dec 06 09:43:23 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec 06 09:43:23 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Dec 06 09:43:23 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Dec 06 09:43:23 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds metadata", "who": "cephfs.compute-0.ujokui"} v 0)
Dec 06 09:43:23 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-0.ujokui"}]: dispatch
Dec 06 09:43:23 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).mds e10 all = 0
Dec 06 09:43:23 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds metadata", "who": "cephfs.compute-1.fpvjgb"} v 0)
Dec 06 09:43:23 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-1.fpvjgb"}]: dispatch
Dec 06 09:43:23 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).mds e10 all = 0
Dec 06 09:43:23 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds metadata", "who": "cephfs.compute-2.czucwy"} v 0)
Dec 06 09:43:23 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-2.czucwy"}]: dispatch
Dec 06 09:43:23 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).mds e10 all = 0
Dec 06 09:43:23 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.qhdjwa", "id": "compute-0.qhdjwa"} v 0)
Dec 06 09:43:23 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "mgr metadata", "who": "compute-0.qhdjwa", "id": "compute-0.qhdjwa"}]: dispatch
Dec 06 09:43:23 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-1.sauzid", "id": "compute-1.sauzid"} v 0)
Dec 06 09:43:23 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "mgr metadata", "who": "compute-1.sauzid", "id": "compute-1.sauzid"}]: dispatch
Dec 06 09:43:23 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-2.oazbvn", "id": "compute-2.oazbvn"} v 0)
Dec 06 09:43:23 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "mgr metadata", "who": "compute-2.oazbvn", "id": "compute-2.oazbvn"}]: dispatch
Dec 06 09:43:23 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec 06 09:43:23 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec 06 09:43:23 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec 06 09:43:23 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 06 09:43:23 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec 06 09:43:23 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 06 09:43:23 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds metadata"} v 0)
Dec 06 09:43:23 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "mds metadata"}]: dispatch
Dec 06 09:43:23 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).mds e10 all = 1
Dec 06 09:43:23 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata"} v 0)
Dec 06 09:43:23 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd metadata"}]: dispatch
Dec 06 09:43:23 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata"} v 0)
Dec 06 09:43:23 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "mon metadata"}]: dispatch
Dec 06 09:43:23 compute-0 ceph-mgr[74618]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 06 09:43:23 compute-0 ceph-mgr[74618]: mgr load Constructed class from module: balancer
Dec 06 09:43:23 compute-0 ceph-mgr[74618]: [balancer INFO root] Starting
Dec 06 09:43:23 compute-0 ceph-mgr[74618]: [cephadm DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 06 09:43:23 compute-0 ceph-mon[74327]: log_channel(cluster) log [INF] : Manager daemon compute-0.qhdjwa is now available
Dec 06 09:43:23 compute-0 ceph-mgr[74618]: [balancer INFO root] Optimize plan auto_2025-12-06_09:43:23
Dec 06 09:43:23 compute-0 ceph-mgr[74618]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 06 09:43:23 compute-0 ceph-mgr[74618]: [balancer INFO root] Some PGs (1.000000) are unknown; try again later
Dec 06 09:43:23 compute-0 ceph-mgr[74618]: mgr load Constructed class from module: cephadm
Dec 06 09:43:23 compute-0 ceph-mgr[74618]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 06 09:43:23 compute-0 ceph-mgr[74618]: mgr load Constructed class from module: crash
Dec 06 09:43:23 compute-0 ceph-mgr[74618]: [dashboard DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 06 09:43:23 compute-0 ceph-mgr[74618]: mgr load Constructed class from module: dashboard
Dec 06 09:43:23 compute-0 ceph-mgr[74618]: [dashboard INFO access_control] Loading user roles DB version=2
Dec 06 09:43:23 compute-0 ceph-mgr[74618]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 06 09:43:23 compute-0 ceph-mgr[74618]: mgr load Constructed class from module: devicehealth
Dec 06 09:43:23 compute-0 ceph-mgr[74618]: [dashboard INFO sso] Loading SSO DB version=1
Dec 06 09:43:23 compute-0 ceph-mgr[74618]: [dashboard INFO root] server: ssl=no host=192.168.122.100 port=8443
Dec 06 09:43:23 compute-0 ceph-mgr[74618]: [dashboard INFO root] Configured CherryPy, starting engine...
Dec 06 09:43:23 compute-0 ceph-mgr[74618]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 06 09:43:23 compute-0 ceph-mgr[74618]: mgr load Constructed class from module: iostat
Dec 06 09:43:23 compute-0 ceph-mgr[74618]: [devicehealth INFO root] Starting
Dec 06 09:43:23 compute-0 ceph-mgr[74618]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 06 09:43:23 compute-0 ceph-mgr[74618]: mgr load Constructed class from module: nfs
Dec 06 09:43:23 compute-0 ceph-mgr[74618]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 06 09:43:23 compute-0 ceph-mgr[74618]: mgr load Constructed class from module: orchestrator
Dec 06 09:43:23 compute-0 ceph-mgr[74618]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 06 09:43:23 compute-0 ceph-mgr[74618]: mgr load Constructed class from module: pg_autoscaler
Dec 06 09:43:23 compute-0 ceph-mgr[74618]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 06 09:43:23 compute-0 ceph-mgr[74618]: mgr load Constructed class from module: progress
Dec 06 09:43:23 compute-0 ceph-mgr[74618]: [prometheus DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 06 09:43:23 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/prometheus/health_history}] v 0)
Dec 06 09:43:23 compute-0 ceph-mgr[74618]: [progress INFO root] Loading...
Dec 06 09:43:23 compute-0 ceph-mgr[74618]: [progress INFO root] Loaded [<progress.module.GhostEvent object at 0x7f35cf29ad00>, <progress.module.GhostEvent object at 0x7f35cf29af40>, <progress.module.GhostEvent object at 0x7f35cf29af70>, <progress.module.GhostEvent object at 0x7f35cf29afa0>, <progress.module.GhostEvent object at 0x7f35cf29afd0>, <progress.module.GhostEvent object at 0x7f35cf2a8040>, <progress.module.GhostEvent object at 0x7f35cf2a8070>, <progress.module.GhostEvent object at 0x7f35cf2a80a0>, <progress.module.GhostEvent object at 0x7f35cf2a80d0>, <progress.module.GhostEvent object at 0x7f35cf2a8100>, <progress.module.GhostEvent object at 0x7f35cf2a8130>, <progress.module.GhostEvent object at 0x7f35cf2a8160>, <progress.module.GhostEvent object at 0x7f35cf2a8190>, <progress.module.GhostEvent object at 0x7f35cf2a81c0>, <progress.module.GhostEvent object at 0x7f35cf2a81f0>, <progress.module.GhostEvent object at 0x7f35cf2a8220>, <progress.module.GhostEvent object at 0x7f35cf2a8250>, <progress.module.GhostEvent object at 0x7f35cf2a8280>, <progress.module.GhostEvent object at 0x7f35cf2a82b0>, <progress.module.GhostEvent object at 0x7f35cf2a82e0>, <progress.module.GhostEvent object at 0x7f35cf2a8310>, <progress.module.GhostEvent object at 0x7f35cf2a8340>, <progress.module.GhostEvent object at 0x7f35cf2a8370>, <progress.module.GhostEvent object at 0x7f35cf2a83a0>, <progress.module.GhostEvent object at 0x7f35cf2a83d0>, <progress.module.GhostEvent object at 0x7f35cf2a8400>, <progress.module.GhostEvent object at 0x7f35cf2a8430>, <progress.module.GhostEvent object at 0x7f35cf2a8460>, <progress.module.GhostEvent object at 0x7f35cf2a8490>] historic events
Dec 06 09:43:23 compute-0 ceph-mgr[74618]: [progress INFO root] Loaded OSDMap, ready.
Dec 06 09:43:23 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] _maybe_adjust
Dec 06 09:43:23 compute-0 ceph-mon[74327]: 11.16 scrub starts
Dec 06 09:43:23 compute-0 ceph-mon[74327]: 11.16 scrub ok
Dec 06 09:43:23 compute-0 ceph-mon[74327]: 11.1c scrub starts
Dec 06 09:43:23 compute-0 ceph-mon[74327]: 11.1c scrub ok
Dec 06 09:43:23 compute-0 ceph-mon[74327]: 11.b scrub starts
Dec 06 09:43:23 compute-0 ceph-mon[74327]: 11.b scrub ok
Dec 06 09:43:23 compute-0 ceph-mon[74327]: 12.17 scrub starts
Dec 06 09:43:23 compute-0 ceph-mon[74327]: 12.17 scrub ok
Dec 06 09:43:23 compute-0 ceph-mon[74327]: 11.7 scrub starts
Dec 06 09:43:23 compute-0 ceph-mon[74327]: 11.7 scrub ok
Dec 06 09:43:23 compute-0 ceph-mon[74327]: 11.9 scrub starts
Dec 06 09:43:23 compute-0 ceph-mon[74327]: 11.9 scrub ok
Dec 06 09:43:23 compute-0 ceph-mon[74327]: Standby manager daemon compute-2.oazbvn restarted
Dec 06 09:43:23 compute-0 ceph-mon[74327]: Standby manager daemon compute-2.oazbvn started
Dec 06 09:43:23 compute-0 ceph-mon[74327]: Active manager daemon compute-0.qhdjwa restarted
Dec 06 09:43:23 compute-0 ceph-mon[74327]: Activating manager daemon compute-0.qhdjwa
Dec 06 09:43:23 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:43:23 compute-0 ceph-mgr[74618]: mgr load Constructed class from module: prometheus
Dec 06 09:43:23 compute-0 ceph-mgr[74618]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 06 09:43:23 compute-0 ceph-mgr[74618]: [prometheus INFO root] server_addr: :: server_port: 9283
Dec 06 09:43:23 compute-0 ceph-mgr[74618]: [prometheus INFO root] Cache enabled
Dec 06 09:43:23 compute-0 ceph-mgr[74618]: [prometheus INFO root] starting metric collection thread
Dec 06 09:43:23 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 06 09:43:23 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 09:43:23 compute-0 ceph-mgr[74618]: [prometheus INFO root] Starting engine...
Dec 06 09:43:23 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: [06/Dec/2025:09:43:23] ENGINE Bus STARTING
Dec 06 09:43:23 compute-0 ceph-mgr[74618]: [prometheus INFO cherrypy.error] [06/Dec/2025:09:43:23] ENGINE Bus STARTING
Dec 06 09:43:23 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: CherryPy Checker:
Dec 06 09:43:23 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: The Application mounted at '' has an empty config.
Dec 06 09:43:23 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 
Dec 06 09:43:23 compute-0 ceph-mgr[74618]: [rbd_support INFO root] recovery thread starting
Dec 06 09:43:23 compute-0 ceph-mgr[74618]: [rbd_support INFO root] starting setup
Dec 06 09:43:23 compute-0 ceph-mgr[74618]: mgr load Constructed class from module: rbd_support
Dec 06 09:43:23 compute-0 ceph-mgr[74618]: [restful DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 06 09:43:23 compute-0 ceph-mgr[74618]: mgr load Constructed class from module: restful
Dec 06 09:43:23 compute-0 ceph-mon[74327]: log_channel(cluster) log [DBG] : Standby manager daemon compute-1.sauzid restarted
Dec 06 09:43:23 compute-0 ceph-mon[74327]: log_channel(cluster) log [DBG] : Standby manager daemon compute-1.sauzid started
Dec 06 09:43:23 compute-0 ceph-mgr[74618]: [restful INFO root] server_addr: :: server_port: 8003
Dec 06 09:43:23 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.qhdjwa/mirror_snapshot_schedule"} v 0)
Dec 06 09:43:23 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.qhdjwa/mirror_snapshot_schedule"}]: dispatch
Dec 06 09:43:23 compute-0 ceph-mgr[74618]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 06 09:43:23 compute-0 ceph-mgr[74618]: mgr load Constructed class from module: status
Dec 06 09:43:23 compute-0 ceph-mgr[74618]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 06 09:43:23 compute-0 ceph-mgr[74618]: mgr load Constructed class from module: telemetry
Dec 06 09:43:23 compute-0 ceph-mgr[74618]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 06 09:43:23 compute-0 ceph-mgr[74618]: [restful WARNING root] server not running: no certificate configured
Dec 06 09:43:23 compute-0 ceph-mgr[74618]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 06 09:43:23 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 09:43:23 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 09:43:23 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 09:43:23 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 09:43:23 compute-0 ceph-mgr[74618]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Dec 06 09:43:23 compute-0 ceph-mgr[74618]: [rbd_support INFO root] PerfHandler: starting
Dec 06 09:43:23 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_task_task: vms, start_after=
Dec 06 09:43:23 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_task_task: volumes, start_after=
Dec 06 09:43:23 compute-0 ceph-mgr[74618]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec 06 09:43:23 compute-0 ceph-mgr[74618]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec 06 09:43:23 compute-0 ceph-mgr[74618]: mgr load Constructed class from module: volumes
Dec 06 09:43:24 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:43:24.002+0000 7f35bc622640 -1 client.0 error registering admin socket command: (17) File exists
Dec 06 09:43:24 compute-0 ceph-mgr[74618]: client.0 error registering admin socket command: (17) File exists
Dec 06 09:43:24 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:43:24.004+0000 7f35b3e11640 -1 client.0 error registering admin socket command: (17) File exists
Dec 06 09:43:24 compute-0 ceph-mgr[74618]: client.0 error registering admin socket command: (17) File exists
Dec 06 09:43:24 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:43:24.004+0000 7f35b3e11640 -1 client.0 error registering admin socket command: (17) File exists
Dec 06 09:43:24 compute-0 ceph-mgr[74618]: client.0 error registering admin socket command: (17) File exists
Dec 06 09:43:24 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:43:24.004+0000 7f35b3e11640 -1 client.0 error registering admin socket command: (17) File exists
Dec 06 09:43:24 compute-0 ceph-mgr[74618]: client.0 error registering admin socket command: (17) File exists
Dec 06 09:43:24 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:43:24.004+0000 7f35b3e11640 -1 client.0 error registering admin socket command: (17) File exists
Dec 06 09:43:24 compute-0 ceph-mgr[74618]: client.0 error registering admin socket command: (17) File exists
Dec 06 09:43:24 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T09:43:24.004+0000 7f35b3e11640 -1 client.0 error registering admin socket command: (17) File exists
Dec 06 09:43:24 compute-0 ceph-mgr[74618]: client.0 error registering admin socket command: (17) File exists
Dec 06 09:43:24 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:43:24 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f921c003db0 fd 14 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:43:24 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: [06/Dec/2025:09:43:24] ENGINE Serving on http://:::9283
Dec 06 09:43:24 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: [06/Dec/2025:09:43:24] ENGINE Bus STARTED
Dec 06 09:43:24 compute-0 ceph-mgr[74618]: [prometheus INFO cherrypy.error] [06/Dec/2025:09:43:24] ENGINE Serving on http://:::9283
Dec 06 09:43:24 compute-0 ceph-mgr[74618]: [prometheus INFO cherrypy.error] [06/Dec/2025:09:43:24] ENGINE Bus STARTED
Dec 06 09:43:24 compute-0 ceph-mgr[74618]: [prometheus INFO root] Engine started.
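
With the bus started, the prometheus module is scrapeable on port 9283 (server_addr :: means all interfaces), and /metrics is the module's scrape path. A minimal fetch against this node, for illustration:

    # Fetch the first few exposition lines from the mgr prometheus module
    # that just started above (host is this node per the mgrmap lines).
    from urllib.request import urlopen

    with urlopen("http://192.168.122.100:9283/metrics", timeout=5) as r:
        for line in r.read().decode().splitlines()[:5]:
            print(line)
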
Dec 06 09:43:24 compute-0 ceph-osd[82803]: log_channel(cluster) log [DBG] : 11.4 scrub starts
Dec 06 09:43:24 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFS -> /api/cephfs
Dec 06 09:43:24 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFsUi -> /ui-api/cephfs
Dec 06 09:43:24 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSubvolume -> /api/cephfs/subvolume
Dec 06 09:43:24 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSubvolumeGroups -> /api/cephfs/subvolume/group
Dec 06 09:43:24 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSubvolumeSnapshots -> /api/cephfs/subvolume/snapshot
Dec 06 09:43:24 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFsSnapshotClone -> /api/cephfs/subvolume/snapshot/clone
Dec 06 09:43:24 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSnapshotSchedule -> /api/cephfs/snapshot/schedule
Dec 06 09:43:24 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: IscsiUi -> /ui-api/iscsi
Dec 06 09:43:24 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Iscsi -> /api/iscsi
Dec 06 09:43:24 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: IscsiTarget -> /api/iscsi/target
Dec 06 09:43:24 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NFSGaneshaCluster -> /api/nfs-ganesha/cluster
Dec 06 09:43:24 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NFSGaneshaExports -> /api/nfs-ganesha/export
Dec 06 09:43:24 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NFSGaneshaUi -> /ui-api/nfs-ganesha
Dec 06 09:43:24 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Orchestrator -> /ui-api/orchestrator
Dec 06 09:43:24 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Service -> /api/service
Dec 06 09:43:24 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroring -> /api/block/mirroring
Dec 06 09:43:24 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringSummary -> /api/block/mirroring/summary
Dec 06 09:43:24 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringPoolMode -> /api/block/mirroring/pool
Dec 06 09:43:24 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringPoolBootstrap -> /api/block/mirroring/pool/{pool_name}/bootstrap
Dec 06 09:43:24 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringPoolPeer -> /api/block/mirroring/pool/{pool_name}/peer
Dec 06 09:43:24 compute-0 sshd-session[99569]: Accepted publickey for ceph-admin from 192.168.122.100 port 53852 ssh2: RSA SHA256:Gxeh0g0CuyN5zOpDUv+8o0JynyC1ASnaMny1857KGxo
Dec 06 09:43:24 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringStatus -> /ui-api/block/mirroring
Dec 06 09:43:24 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Pool -> /api/pool
Dec 06 09:43:24 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PoolUi -> /ui-api/pool
Dec 06 09:43:24 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RBDPool -> /api/pool
Dec 06 09:43:24 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Rbd -> /api/block/image
Dec 06 09:43:24 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdStatus -> /ui-api/block/rbd
Dec 06 09:43:24 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdSnapshot -> /api/block/image/{image_spec}/snap
Dec 06 09:43:24 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdTrash -> /api/block/image/trash
Dec 06 09:43:24 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdNamespace -> /api/block/pool/{pool_name}/namespace
Dec 06 09:43:24 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Rgw -> /ui-api/rgw
Dec 06 09:43:24 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwMultisiteStatus -> /ui-api/rgw/multisite
Dec 06 09:43:24 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwMultisiteController -> /api/rgw/multisite
Dec 06 09:43:24 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwDaemon -> /api/rgw/daemon
Dec 06 09:43:24 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwSite -> /api/rgw/site
Dec 06 09:43:24 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwBucket -> /api/rgw/bucket
Dec 06 09:43:24 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwBucketUi -> /ui-api/rgw/bucket
Dec 06 09:43:24 compute-0 systemd-logind[795]: New session 38 of user ceph-admin.
Dec 06 09:43:24 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwUser -> /api/rgw/user
Dec 06 09:43:24 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: rgwroles_CRUDClass -> /api/rgw/roles
Dec 06 09:43:24 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: rgwroles_CRUDClassMetadata -> /ui-api/rgw/roles
Dec 06 09:43:24 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwRealm -> /api/rgw/realm
Dec 06 09:43:24 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwZonegroup -> /api/rgw/zonegroup
Dec 06 09:43:24 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwZone -> /api/rgw/zone
Dec 06 09:43:24 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Auth -> /api/auth
Dec 06 09:43:24 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: clusteruser_CRUDClass -> /api/cluster/user
Dec 06 09:43:24 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: clusteruser_CRUDClassMetadata -> /ui-api/cluster/user
Dec 06 09:43:24 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Cluster -> /api/cluster
Dec 06 09:43:24 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ClusterUpgrade -> /api/cluster/upgrade
Dec 06 09:43:24 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ClusterConfiguration -> /api/cluster_conf
Dec 06 09:43:24 compute-0 systemd[1]: Started Session 38 of User ceph-admin.
Dec 06 09:43:24 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CrushRule -> /api/crush_rule
Dec 06 09:43:24 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CrushRuleUi -> /ui-api/crush_rule
Dec 06 09:43:24 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Daemon -> /api/daemon
Dec 06 09:43:24 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Docs -> /docs
Dec 06 09:43:24 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ErasureCodeProfile -> /api/erasure_code_profile
Dec 06 09:43:24 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ErasureCodeProfileUi -> /ui-api/erasure_code_profile
Dec 06 09:43:24 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeedbackController -> /api/feedback
Dec 06 09:43:24 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeedbackApiController -> /api/feedback/api_key
Dec 06 09:43:24 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeedbackUiController -> /ui-api/feedback/api_key
Dec 06 09:43:24 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FrontendLogging -> /ui-api/logging
Dec 06 09:43:24 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Grafana -> /api/grafana
Dec 06 09:43:24 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Host -> /api/host
Dec 06 09:43:24 compute-0 sshd-session[99569]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Dec 06 09:43:24 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: HostUi -> /ui-api/host
Dec 06 09:43:24 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Health -> /api/health
Dec 06 09:43:24 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: HomeController -> /
Dec 06 09:43:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_task_task: backups, start_after=
Dec 06 09:43:24 compute-0 ceph-osd[82803]: log_channel(cluster) log [DBG] : 11.4 scrub ok
Dec 06 09:43:24 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: LangsController -> /ui-api/langs
Dec 06 09:43:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_task_task: images, start_after=
Dec 06 09:43:24 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: LoginController -> /ui-api/login
Dec 06 09:43:24 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Logs -> /api/logs
Dec 06 09:43:24 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MgrModules -> /api/mgr/module
Dec 06 09:43:24 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Monitor -> /api/monitor
Dec 06 09:43:24 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFGateway -> /api/nvmeof/gateway
Dec 06 09:43:24 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFSpdk -> /api/nvmeof/spdk
Dec 06 09:43:24 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFSubsystem -> /api/nvmeof/subsystem
Dec 06 09:43:24 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFListener -> /api/nvmeof/subsystem/{nqn}/listener
Dec 06 09:43:24 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFNamespace -> /api/nvmeof/subsystem/{nqn}/namespace
Dec 06 09:43:24 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFHost -> /api/nvmeof/subsystem/{nqn}/host
Dec 06 09:43:24 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFConnection -> /api/nvmeof/subsystem/{nqn}/connection
Dec 06 09:43:24 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFTcpUI -> /ui-api/nvmeof
Dec 06 09:43:24 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Osd -> /api/osd
Dec 06 09:43:24 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: OsdUi -> /ui-api/osd
Dec 06 09:43:24 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: OsdFlagsController -> /api/osd/flags
Dec 06 09:43:24 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MdsPerfCounter -> /api/perf_counters/mds
Dec 06 09:43:24 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MonPerfCounter -> /api/perf_counters/mon
Dec 06 09:43:24 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: OsdPerfCounter -> /api/perf_counters/osd
Dec 06 09:43:24 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwPerfCounter -> /api/perf_counters/rgw
Dec 06 09:43:24 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirrorPerfCounter -> /api/perf_counters/rbd-mirror
Dec 06 09:43:24 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MgrPerfCounter -> /api/perf_counters/mgr
Dec 06 09:43:24 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: TcmuRunnerPerfCounter -> /api/perf_counters/tcmu-runner
Dec 06 09:43:24 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PerfCounters -> /api/perf_counters
Dec 06 09:43:24 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PrometheusReceiver -> /api/prometheus_receiver
Dec 06 09:43:24 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Prometheus -> /api/prometheus
Dec 06 09:43:24 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PrometheusNotifications -> /api/prometheus/notifications
Dec 06 09:43:24 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PrometheusSettings -> /ui-api/prometheus
Dec 06 09:43:24 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Role -> /api/role
Dec 06 09:43:24 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Scope -> /ui-api/scope
Dec 06 09:43:24 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Saml2 -> /auth/saml2
Dec 06 09:43:24 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Settings -> /api/settings
Dec 06 09:43:24 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: StandardSettings -> /ui-api/standard_settings
Dec 06 09:43:24 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Summary -> /api/summary
Dec 06 09:43:24 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Task -> /api/task
Dec 06 09:43:24 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Telemetry -> /api/telemetry
Dec 06 09:43:24 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: User -> /api/user
Dec 06 09:43:24 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: UserPasswordPolicy -> /api/user
Dec 06 09:43:24 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: UserChangePassword -> /api/user/{username}
Dec 06 09:43:24 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeatureTogglesEndpoint -> /api/feature_toggles
Dec 06 09:43:24 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MessageOfTheDay -> /ui-api/motd
Dec 06 09:43:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] TaskHandler: starting
Dec 06 09:43:24 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.qhdjwa/trash_purge_schedule"} v 0)
Dec 06 09:43:24 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.qhdjwa/trash_purge_schedule"}]: dispatch
Dec 06 09:43:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 06 09:43:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 09:43:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 09:43:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 09:43:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 09:43:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Dec 06 09:43:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] setup complete
Dec 06 09:43:24 compute-0 sudo[99587]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 09:43:24 compute-0 sudo[99587]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:43:24 compute-0 sudo[99587]: pam_unix(sudo:session): session closed for user root
Dec 06 09:43:24 compute-0 ceph-mgr[74618]: [dashboard INFO dashboard.module] Engine started.
Dec 06 09:43:24 compute-0 sudo[99612]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ls
Dec 06 09:43:24 compute-0 sudo[99612]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:43:24 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:43:24 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9228003db0 fd 14 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:43:25 compute-0 ceph-osd[82803]: log_channel(cluster) log [DBG] : 11.f scrub starts
Dec 06 09:43:25 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:43:25 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:43:25 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:43:25.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:43:25 compute-0 ceph-osd[82803]: log_channel(cluster) log [DBG] : 11.f scrub ok
Dec 06 09:43:25 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:43:25 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9210002b10 fd 14 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:43:25 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:43:25 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000028s ======
Dec 06 09:43:25 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:43:25.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 06 09:43:25 compute-0 ceph-mgr[74618]: [cephadm INFO cherrypy.error] [06/Dec/2025:09:43:25] ENGINE Bus STARTING
Dec 06 09:43:25 compute-0 ceph-mgr[74618]: log_channel(cephadm) log [INF] : [06/Dec/2025:09:43:25] ENGINE Bus STARTING
Dec 06 09:43:25 compute-0 ceph-mgr[74618]: [cephadm INFO cherrypy.error] [06/Dec/2025:09:43:25] ENGINE Serving on https://192.168.122.100:7150
Dec 06 09:43:25 compute-0 ceph-mgr[74618]: log_channel(cephadm) log [INF] : [06/Dec/2025:09:43:25] ENGINE Serving on https://192.168.122.100:7150
Dec 06 09:43:25 compute-0 ceph-mgr[74618]: [cephadm INFO cherrypy.error] [06/Dec/2025:09:43:25] ENGINE Client ('192.168.122.100', 44988) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Dec 06 09:43:25 compute-0 ceph-mgr[74618]: log_channel(cephadm) log [INF] : [06/Dec/2025:09:43:25] ENGINE Client ('192.168.122.100', 44988) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Dec 06 09:43:25 compute-0 ceph-mgr[74618]: [cephadm INFO cherrypy.error] [06/Dec/2025:09:43:25] ENGINE Serving on http://192.168.122.100:8765
Dec 06 09:43:25 compute-0 ceph-mgr[74618]: log_channel(cephadm) log [INF] : [06/Dec/2025:09:43:25] ENGINE Serving on http://192.168.122.100:8765
Dec 06 09:43:25 compute-0 ceph-mgr[74618]: [cephadm INFO cherrypy.error] [06/Dec/2025:09:43:25] ENGINE Bus STARTED
Dec 06 09:43:25 compute-0 ceph-mgr[74618]: log_channel(cephadm) log [INF] : [06/Dec/2025:09:43:25] ENGINE Bus STARTED
Dec 06 09:43:25 compute-0 ceph-mgr[74618]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Dec 06 09:43:26 compute-0 ceph-osd[82803]: log_channel(cluster) log [DBG] : 11.1d scrub starts
Dec 06 09:43:26 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:43:26 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9218003c10 fd 14 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:43:26 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:43:26 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f921c003db0 fd 14 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:43:27 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:43:27 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:43:27 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:43:27.083 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:43:27 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:43:27 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9228003dd0 fd 14 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:43:27 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:43:27 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:43:27 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:43:27.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:43:27 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e91 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 09:43:27 compute-0 ceph-osd[82803]: log_channel(cluster) log [DBG] : 11.1d scrub ok
Dec 06 09:43:27 compute-0 ceph-osd[82803]: log_channel(cluster) log [DBG] : 11.12 scrub starts
Dec 06 09:43:27 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec 06 09:43:27 compute-0 ceph-osd[82803]: log_channel(cluster) log [DBG] : 11.12 scrub ok
Dec 06 09:43:27 compute-0 ceph-mon[74327]: log_channel(cluster) log [DBG] : mgrmap e32: compute-0.qhdjwa(active, since 4s), standbys: compute-1.sauzid, compute-2.oazbvn
Dec 06 09:43:27 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v3: 337 pgs: 337 active+clean; 458 KiB data, 143 MiB used, 60 GiB / 60 GiB avail
Dec 06 09:43:27 compute-0 ceph-mon[74327]: 12.9 scrub starts
Dec 06 09:43:27 compute-0 ceph-mon[74327]: 12.9 scrub ok
Dec 06 09:43:27 compute-0 ceph-mon[74327]: 11.5 deep-scrub starts
Dec 06 09:43:27 compute-0 ceph-mon[74327]: 11.5 deep-scrub ok
Dec 06 09:43:27 compute-0 ceph-mon[74327]: 11.d scrub starts
Dec 06 09:43:27 compute-0 ceph-mon[74327]: 11.d scrub ok
Dec 06 09:43:27 compute-0 ceph-mon[74327]: osdmap e91: 3 total, 3 up, 3 in
Dec 06 09:43:27 compute-0 ceph-mon[74327]: mgrmap e31: compute-0.qhdjwa(active, starting, since 0.658318s), standbys: compute-1.sauzid, compute-2.oazbvn
Dec 06 09:43:27 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Dec 06 09:43:27 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec 06 09:43:27 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Dec 06 09:43:27 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-0.ujokui"}]: dispatch
Dec 06 09:43:27 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-1.fpvjgb"}]: dispatch
Dec 06 09:43:27 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-2.czucwy"}]: dispatch
Dec 06 09:43:27 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "mgr metadata", "who": "compute-0.qhdjwa", "id": "compute-0.qhdjwa"}]: dispatch
Dec 06 09:43:27 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "mgr metadata", "who": "compute-1.sauzid", "id": "compute-1.sauzid"}]: dispatch
Dec 06 09:43:27 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "mgr metadata", "who": "compute-2.oazbvn", "id": "compute-2.oazbvn"}]: dispatch
Dec 06 09:43:27 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec 06 09:43:27 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 06 09:43:27 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 06 09:43:27 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "mds metadata"}]: dispatch
Dec 06 09:43:27 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd metadata"}]: dispatch
Dec 06 09:43:27 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "mon metadata"}]: dispatch
Dec 06 09:43:27 compute-0 ceph-mon[74327]: Manager daemon compute-0.qhdjwa is now available
Dec 06 09:43:27 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:43:27 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 09:43:27 compute-0 ceph-mon[74327]: Standby manager daemon compute-1.sauzid restarted
Dec 06 09:43:27 compute-0 ceph-mon[74327]: Standby manager daemon compute-1.sauzid started
Dec 06 09:43:27 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.qhdjwa/mirror_snapshot_schedule"}]: dispatch
Dec 06 09:43:27 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.qhdjwa/trash_purge_schedule"}]: dispatch
Dec 06 09:43:27 compute-0 podman[99710]: 2025-12-06 09:43:27.675511237 +0000 UTC m=+2.710197087 container exec 484d6ed1039c50317cf4b6067525b7ed0f8de7c568c9445500e62194ab25d04d (image=quay.io/ceph/ceph:v19, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mon-compute-0, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 09:43:27 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:43:27 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec 06 09:43:27 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:43:27 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v4: 337 pgs: 337 active+clean; 458 KiB data, 143 MiB used, 60 GiB / 60 GiB avail
Dec 06 09:43:27 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"} v 0)
Dec 06 09:43:27 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"}]: dispatch
Dec 06 09:43:27 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"} v 0)
Dec 06 09:43:27 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]: dispatch
Dec 06 09:43:27 compute-0 podman[99710]: 2025-12-06 09:43:27.77858439 +0000 UTC m=+2.813270210 container exec_died 484d6ed1039c50317cf4b6067525b7ed0f8de7c568c9445500e62194ab25d04d (image=quay.io/ceph/ceph:v19, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mon-compute-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Dec 06 09:43:27 compute-0 ceph-mgr[74618]: [devicehealth INFO root] Check health
Dec 06 09:43:28 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:43:28 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9210002b10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:43:28 compute-0 podman[99868]: 2025-12-06 09:43:28.275924468 +0000 UTC m=+0.079167865 container exec 43e1f8986e07f4e6b99d6750812eff4d21013fd9f773d9f6d6eef82549df3333 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 06 09:43:28 compute-0 podman[99868]: 2025-12-06 09:43:28.313891443 +0000 UTC m=+0.117134850 container exec_died 43e1f8986e07f4e6b99d6750812eff4d21013fd9f773d9f6d6eef82549df3333 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 06 09:43:28 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:43:28 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9218003c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:43:28 compute-0 ceph-osd[82803]: log_channel(cluster) log [DBG] : 11.1 scrub starts
Dec 06 09:43:28 compute-0 ceph-osd[82803]: log_channel(cluster) log [DBG] : 11.1 scrub ok
Dec 06 09:43:28 compute-0 ceph-mon[74327]: log_channel(cluster) log [DBG] : mgrmap e33: compute-0.qhdjwa(active, since 5s), standbys: compute-1.sauzid, compute-2.oazbvn
Dec 06 09:43:28 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e91 do_prune osdmap full prune enabled
Dec 06 09:43:28 compute-0 ceph-mon[74327]: 12.7 deep-scrub starts
Dec 06 09:43:28 compute-0 ceph-mon[74327]: 12.7 deep-scrub ok
Dec 06 09:43:28 compute-0 ceph-mon[74327]: 11.4 scrub starts
Dec 06 09:43:28 compute-0 ceph-mon[74327]: 11.4 scrub ok
Dec 06 09:43:28 compute-0 ceph-mon[74327]: 11.2 scrub starts
Dec 06 09:43:28 compute-0 ceph-mon[74327]: 11.2 scrub ok
Dec 06 09:43:28 compute-0 ceph-mon[74327]: 12.11 scrub starts
Dec 06 09:43:28 compute-0 ceph-mon[74327]: 12.11 scrub ok
Dec 06 09:43:28 compute-0 ceph-mon[74327]: 11.f scrub starts
Dec 06 09:43:28 compute-0 ceph-mon[74327]: 11.f scrub ok
Dec 06 09:43:28 compute-0 ceph-mon[74327]: [06/Dec/2025:09:43:25] ENGINE Bus STARTING
Dec 06 09:43:28 compute-0 ceph-mon[74327]: 11.18 scrub starts
Dec 06 09:43:28 compute-0 ceph-mon[74327]: 11.18 scrub ok
Dec 06 09:43:28 compute-0 ceph-mon[74327]: [06/Dec/2025:09:43:25] ENGINE Serving on https://192.168.122.100:7150
Dec 06 09:43:28 compute-0 ceph-mon[74327]: [06/Dec/2025:09:43:25] ENGINE Client ('192.168.122.100', 44988) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Dec 06 09:43:28 compute-0 ceph-mon[74327]: [06/Dec/2025:09:43:25] ENGINE Serving on http://192.168.122.100:8765
Dec 06 09:43:28 compute-0 ceph-mon[74327]: [06/Dec/2025:09:43:25] ENGINE Bus STARTED
Dec 06 09:43:28 compute-0 ceph-mon[74327]: 12.1a deep-scrub starts
Dec 06 09:43:28 compute-0 ceph-mon[74327]: 11.1d scrub starts
Dec 06 09:43:28 compute-0 ceph-mon[74327]: 11.1f scrub starts
Dec 06 09:43:28 compute-0 podman[99961]: 2025-12-06 09:43:28.724688435 +0000 UTC m=+0.100449049 container exec f137658eeed93d56ee9d8ac7b6445e7acce26a24ed156c5e4e3e69a13e4abbd2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Dec 06 09:43:28 compute-0 ceph-mon[74327]: 11.1f scrub ok
Dec 06 09:43:28 compute-0 ceph-mon[74327]: 12.1a deep-scrub ok
Dec 06 09:43:28 compute-0 ceph-mon[74327]: 12.2 scrub starts
Dec 06 09:43:28 compute-0 ceph-mon[74327]: 11.10 deep-scrub starts
Dec 06 09:43:28 compute-0 ceph-mon[74327]: 11.10 deep-scrub ok
Dec 06 09:43:28 compute-0 ceph-mon[74327]: 11.1d scrub ok
Dec 06 09:43:28 compute-0 ceph-mon[74327]: 12.2 scrub ok
Dec 06 09:43:28 compute-0 ceph-mon[74327]: 11.12 scrub starts
Dec 06 09:43:28 compute-0 ceph-mon[74327]: 11.12 scrub ok
Dec 06 09:43:28 compute-0 ceph-mon[74327]: mgrmap e32: compute-0.qhdjwa(active, since 4s), standbys: compute-1.sauzid, compute-2.oazbvn
Dec 06 09:43:28 compute-0 ceph-mon[74327]: pgmap v3: 337 pgs: 337 active+clean; 458 KiB data, 143 MiB used, 60 GiB / 60 GiB avail
Dec 06 09:43:28 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:43:28 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:43:28 compute-0 ceph-mon[74327]: pgmap v4: 337 pgs: 337 active+clean; 458 KiB data, 143 MiB used, 60 GiB / 60 GiB avail
Dec 06 09:43:28 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"}]: dispatch
Dec 06 09:43:28 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]: dispatch
Dec 06 09:43:28 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"}]': finished
Dec 06 09:43:28 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]': finished
Dec 06 09:43:28 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e92 e92: 3 total, 3 up, 3 in
Dec 06 09:43:28 compute-0 podman[99961]: 2025-12-06 09:43:28.740106579 +0000 UTC m=+0.115867183 container exec_died f137658eeed93d56ee9d8ac7b6445e7acce26a24ed156c5e4e3e69a13e4abbd2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 09:43:28 compute-0 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e92: 3 total, 3 up, 3 in
Dec 06 09:43:28 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 92 pg[10.1a( empty local-lis/les=0/0 n=0 ec=58/45 lis/c=67/67 les/c/f=68/68/0 sis=92) [1] r=0 lpr=92 pi=[67,92)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 09:43:28 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 92 pg[10.a( empty local-lis/les=0/0 n=0 ec=58/45 lis/c=68/68 les/c/f=69/69/0 sis=92) [1] r=0 lpr=92 pi=[68,92)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 09:43:28 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 92 pg[6.b( v 50'39 (0'0,50'39] local-lis/les=64/65 n=1 ec=54/21 lis/c=64/64 les/c/f=65/65/0 sis=92 pruub=11.464945793s) [0] r=-1 lpr=92 pi=[64,92)/1 crt=50'39 mlcod 50'39 active pruub 242.538116455s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 06 09:43:28 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 92 pg[6.b( v 50'39 (0'0,50'39] local-lis/les=64/65 n=1 ec=54/21 lis/c=64/64 les/c/f=65/65/0 sis=92 pruub=11.464897156s) [0] r=-1 lpr=92 pi=[64,92)/1 crt=50'39 mlcod 0'0 unknown NOTIFY pruub 242.538116455s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 09:43:28 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec 06 09:43:28 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:43:28 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec 06 09:43:28 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:43:28 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0)
Dec 06 09:43:28 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Dec 06 09:43:28 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]': finished
Dec 06 09:43:28 compute-0 podman[100025]: 2025-12-06 09:43:28.965581794 +0000 UTC m=+0.064011068 container exec 0300cb0bc272de309f3d242ba0627369d0948f1b63b3476dccdba4375a8e539d (image=quay.io/ceph/haproxy:2.3, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-haproxy-nfs-cephfs-compute-0-fzuvue)
Dec 06 09:43:28 compute-0 podman[100025]: 2025-12-06 09:43:28.982872413 +0000 UTC m=+0.081301667 container exec_died 0300cb0bc272de309f3d242ba0627369d0948f1b63b3476dccdba4375a8e539d (image=quay.io/ceph/haproxy:2.3, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-haproxy-nfs-cephfs-compute-0-fzuvue)
Dec 06 09:43:29 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:43:29 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:43:29 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:43:29.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:43:29 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec 06 09:43:29 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:43:29 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec 06 09:43:29 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:43:29 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:43:29 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f921c003db0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:43:29 compute-0 podman[100093]: 2025-12-06 09:43:29.250915606 +0000 UTC m=+0.076826938 container exec d7d5239f75d84aa9a07cad1cdfa31e3b4f3983263aaaa27687e6c7454ab8fe3f (image=quay.io/ceph/keepalived:2.2.4, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-keepalived-nfs-cephfs-compute-0-ylrrzf, build-date=2023-02-22T09:23:20, name=keepalived, vendor=Red Hat, Inc., architecture=x86_64, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides keepalived on RHEL 9 for Ceph., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, com.redhat.component=keepalived-container, release=1793, version=2.2.4, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.k8s.display-name=Keepalived on RHEL 9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, description=keepalived for Ceph, distribution-scope=public, io.buildah.version=1.28.2, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, io.openshift.tags=Ceph keepalived)
Dec 06 09:43:29 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:43:29 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:43:29 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:43:29.282 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:43:29 compute-0 podman[100093]: 2025-12-06 09:43:29.297437077 +0000 UTC m=+0.123348349 container exec_died d7d5239f75d84aa9a07cad1cdfa31e3b4f3983263aaaa27687e6c7454ab8fe3f (image=quay.io/ceph/keepalived:2.2.4, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-keepalived-nfs-cephfs-compute-0-ylrrzf, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vendor=Red Hat, Inc., distribution-scope=public, version=2.2.4, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=keepalived, io.k8s.display-name=Keepalived on RHEL 9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=keepalived for Ceph, summary=Provides keepalived on RHEL 9 for Ceph., build-date=2023-02-22T09:23:20, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, architecture=x86_64, io.openshift.expose-services=, io.openshift.tags=Ceph keepalived, vcs-type=git, com.redhat.component=keepalived-container, io.buildah.version=1.28.2, release=1793)
Dec 06 09:43:29 compute-0 podman[100159]: 2025-12-06 09:43:29.563707339 +0000 UTC m=+0.055732619 container exec b475766d055cff0f70d7ce61dd24d5c1939b80e781c2c628ce05f8102b0c9b5b (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 06 09:43:29 compute-0 podman[100159]: 2025-12-06 09:43:29.609003396 +0000 UTC m=+0.101028666 container exec_died b475766d055cff0f70d7ce61dd24d5c1939b80e781c2c628ce05f8102b0c9b5b (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 06 09:43:29 compute-0 ceph-osd[82803]: log_channel(cluster) log [DBG] : 11.1b scrub starts
Dec 06 09:43:29 compute-0 ceph-osd[82803]: log_channel(cluster) log [DBG] : 11.1b scrub ok
Dec 06 09:43:29 compute-0 ceph-mon[74327]: 11.8 scrub starts
Dec 06 09:43:29 compute-0 ceph-mon[74327]: 11.8 scrub ok
Dec 06 09:43:29 compute-0 ceph-mon[74327]: 11.11 scrub starts
Dec 06 09:43:29 compute-0 ceph-mon[74327]: 11.11 scrub ok
Dec 06 09:43:29 compute-0 ceph-mon[74327]: 11.1 scrub starts
Dec 06 09:43:29 compute-0 ceph-mon[74327]: 11.1 scrub ok
Dec 06 09:43:29 compute-0 ceph-mon[74327]: mgrmap e33: compute-0.qhdjwa(active, since 5s), standbys: compute-1.sauzid, compute-2.oazbvn
Dec 06 09:43:29 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"}]': finished
Dec 06 09:43:29 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]': finished
Dec 06 09:43:29 compute-0 ceph-mon[74327]: osdmap e92: 3 total, 3 up, 3 in
Dec 06 09:43:29 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:43:29 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:43:29 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Dec 06 09:43:29 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]': finished
Dec 06 09:43:29 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:43:29 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:43:29 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e92 do_prune osdmap full prune enabled
Dec 06 09:43:29 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e93 e93: 3 total, 3 up, 3 in
Dec 06 09:43:29 compute-0 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e93: 3 total, 3 up, 3 in
Dec 06 09:43:29 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v7: 337 pgs: 337 active+clean; 458 KiB data, 143 MiB used, 60 GiB / 60 GiB avail
Dec 06 09:43:29 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 93 pg[10.a( empty local-lis/les=0/0 n=0 ec=58/45 lis/c=68/68 les/c/f=69/69/0 sis=93) [1]/[0] r=-1 lpr=93 pi=[68,93)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 06 09:43:29 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"} v 0)
Dec 06 09:43:29 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"}]: dispatch
Dec 06 09:43:29 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 93 pg[10.a( empty local-lis/les=0/0 n=0 ec=58/45 lis/c=68/68 les/c/f=69/69/0 sis=93) [1]/[0] r=-1 lpr=93 pi=[68,93)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 06 09:43:29 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 93 pg[10.1a( empty local-lis/les=0/0 n=0 ec=58/45 lis/c=67/67 les/c/f=68/68/0 sis=93) [1]/[0] r=-1 lpr=93 pi=[67,93)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 06 09:43:29 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"} v 0)
Dec 06 09:43:29 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]: dispatch
Dec 06 09:43:29 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 93 pg[10.1a( empty local-lis/les=0/0 n=0 ec=58/45 lis/c=67/67 les/c/f=68/68/0 sis=93) [1]/[0] r=-1 lpr=93 pi=[67,93)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 06 09:43:29 compute-0 podman[100233]: 2025-12-06 09:43:29.827364245 +0000 UTC m=+0.057811048 container exec cf4c3ab223ccab5449a54ab666c56f3b34eab35d7e3fb2f84c99b865ca2fcfb2 (image=quay.io/ceph/grafana:10.4.0, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Dec 06 09:43:29 compute-0 ceph-mon[74327]: log_channel(cluster) log [DBG] : mgrmap e34: compute-0.qhdjwa(active, since 6s), standbys: compute-1.sauzid, compute-2.oazbvn
Dec 06 09:43:30 compute-0 podman[100233]: 2025-12-06 09:43:30.00396946 +0000 UTC m=+0.234416243 container exec_died cf4c3ab223ccab5449a54ab666c56f3b34eab35d7e3fb2f84c99b865ca2fcfb2 (image=quay.io/ceph/grafana:10.4.0, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Dec 06 09:43:30 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:43:30 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9228003df0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:43:30 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec 06 09:43:30 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:43:30 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec 06 09:43:30 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:43:30 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0)
Dec 06 09:43:30 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Dec 06 09:43:30 compute-0 podman[100341]: 2025-12-06 09:43:30.497943661 +0000 UTC m=+0.065721457 container exec cfe4d69091434e5154fa760292bba767b8875965fa71cf21268b9ec1632f0d9e (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 06 09:43:30 compute-0 podman[100341]: 2025-12-06 09:43:30.551120645 +0000 UTC m=+0.118898411 container exec_died cfe4d69091434e5154fa760292bba767b8875965fa71cf21268b9ec1632f0d9e (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 06 09:43:30 compute-0 sudo[99612]: pam_unix(sudo:session): session closed for user root
Dec 06 09:43:30 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 06 09:43:30 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:43:30 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 06 09:43:30 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:43:30 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9210002b10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:43:30 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:43:30 compute-0 ceph-osd[82803]: log_channel(cluster) log [DBG] : 11.14 scrub starts
Dec 06 09:43:30 compute-0 ceph-osd[82803]: log_channel(cluster) log [DBG] : 11.14 scrub ok
Dec 06 09:43:30 compute-0 sudo[100382]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 09:43:30 compute-0 sudo[100382]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:43:30 compute-0 sudo[100382]: pam_unix(sudo:session): session closed for user root
Dec 06 09:43:30 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e93 do_prune osdmap full prune enabled
Dec 06 09:43:30 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"}]': finished
Dec 06 09:43:30 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]': finished
Dec 06 09:43:30 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e94 e94: 3 total, 3 up, 3 in
Dec 06 09:43:30 compute-0 ceph-mon[74327]: 11.19 scrub starts
Dec 06 09:43:30 compute-0 ceph-mon[74327]: 11.19 scrub ok
Dec 06 09:43:30 compute-0 ceph-mon[74327]: 11.6 scrub starts
Dec 06 09:43:30 compute-0 ceph-mon[74327]: 11.6 scrub ok
Dec 06 09:43:30 compute-0 ceph-mon[74327]: 11.1b scrub starts
Dec 06 09:43:30 compute-0 ceph-mon[74327]: 11.1b scrub ok
Dec 06 09:43:30 compute-0 ceph-mon[74327]: osdmap e93: 3 total, 3 up, 3 in
Dec 06 09:43:30 compute-0 ceph-mon[74327]: pgmap v7: 337 pgs: 337 active+clean; 458 KiB data, 143 MiB used, 60 GiB / 60 GiB avail
Dec 06 09:43:30 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"}]: dispatch
Dec 06 09:43:30 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]: dispatch
Dec 06 09:43:30 compute-0 ceph-mon[74327]: mgrmap e34: compute-0.qhdjwa(active, since 6s), standbys: compute-1.sauzid, compute-2.oazbvn
Dec 06 09:43:30 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:43:30 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:43:30 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Dec 06 09:43:30 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:43:30 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:43:30 compute-0 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e94: 3 total, 3 up, 3 in
Dec 06 09:43:30 compute-0 sudo[100407]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Dec 06 09:43:30 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 94 pg[10.1b( empty local-lis/les=0/0 n=0 ec=58/45 lis/c=65/65 les/c/f=66/66/0 sis=94) [1] r=0 lpr=94 pi=[65,94)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 09:43:30 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 94 pg[10.b( empty local-lis/les=0/0 n=0 ec=58/45 lis/c=65/65 les/c/f=66/66/0 sis=94) [1] r=0 lpr=94 pi=[65,94)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 09:43:30 compute-0 sudo[100407]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:43:30 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:09:43:30] "GET /metrics HTTP/1.1" 200 46583 "" "Prometheus/2.51.0"
Dec 06 09:43:30 compute-0 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:09:43:30] "GET /metrics HTTP/1.1" 200 46583 "" "Prometheus/2.51.0"
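The two ceph-mgr lines above record Prometheus v2.51.0 scraping the active mgr's prometheus module, which answered GET /metrics with a 46583-byte exposition-format payload. A minimal sketch of an equivalent scrape in Python follows; the endpoint URL is an assumption, since the listen address and the module's usual default port (9283) do not appear in the log:

    # Scrape the ceph-mgr prometheus module, mirroring the
    # "GET /metrics HTTP/1.1" 200 46583 entries logged above.
    import urllib.request

    URL = "http://compute-0:9283/metrics"  # hypothetical; port not shown in the log

    with urllib.request.urlopen(URL, timeout=5) as resp:
        body = resp.read().decode("utf-8")

    # Count exposition-format samples (non-comment, non-blank lines),
    # roughly what Prometheus ingests from each scrape.
    samples = [line for line in body.splitlines() if line and not line.startswith("#")]
    print(f"{len(samples)} samples, {len(body)} bytes")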
Dec 06 09:43:31 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:43:31 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000029s ======
Dec 06 09:43:31 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:43:31.088 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Dec 06 09:43:31 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:43:31 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9218003c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:43:31 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:43:31 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000028s ======
Dec 06 09:43:31 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:43:31.284 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
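The beast access-log entries above show anonymous "HEAD / HTTP/1.0" requests arriving on a two-second cadence from 192.168.122.100 and 192.168.122.102 and completing in about a millisecond with an empty 200 response, the signature of load-balancer health checks rather than S3 traffic. A sketch of such a probe, assuming a hypothetical listen port (radosgw's frontend port is not visible in these lines):

    # Liveness probe against radosgw, matching the anonymous
    # "HEAD / HTTP/1.0" 200 0 entries in the beast access log above.
    import http.client

    conn = http.client.HTTPConnection("192.168.122.100", 8080, timeout=2)  # port assumed
    conn.request("HEAD", "/")
    resp = conn.getresponse()
    print(resp.status)  # expect 200 with an empty body, as logged
    conn.close()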
Dec 06 09:43:31 compute-0 sudo[100407]: pam_unix(sudo:session): session closed for user root
Dec 06 09:43:31 compute-0 sudo[100467]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 09:43:31 compute-0 sudo[100467]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:43:31 compute-0 sudo[100467]: pam_unix(sudo:session): session closed for user root
Dec 06 09:43:31 compute-0 sudo[100492]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 list-networks
Dec 06 09:43:31 compute-0 sudo[100492]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:43:31 compute-0 ceph-osd[82803]: log_channel(cluster) log [DBG] : 10.17 scrub starts
Dec 06 09:43:31 compute-0 ceph-osd[82803]: log_channel(cluster) log [DBG] : 10.17 scrub ok
Dec 06 09:43:31 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v9: 337 pgs: 337 active+clean; 458 KiB data, 143 MiB used, 60 GiB / 60 GiB avail
Dec 06 09:43:31 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"} v 0)
Dec 06 09:43:31 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"}]: dispatch
Dec 06 09:43:31 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"} v 0)
Dec 06 09:43:31 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]: dispatch
Dec 06 09:43:31 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e94 do_prune osdmap full prune enabled
Dec 06 09:43:31 compute-0 ceph-mon[74327]: 12.3 scrub starts
Dec 06 09:43:31 compute-0 ceph-mon[74327]: 12.3 scrub ok
Dec 06 09:43:31 compute-0 ceph-mon[74327]: 11.15 scrub starts
Dec 06 09:43:31 compute-0 ceph-mon[74327]: 11.15 scrub ok
Dec 06 09:43:31 compute-0 ceph-mon[74327]: 11.14 scrub starts
Dec 06 09:43:31 compute-0 ceph-mon[74327]: 11.14 scrub ok
Dec 06 09:43:31 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"}]': finished
Dec 06 09:43:31 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]': finished
Dec 06 09:43:31 compute-0 ceph-mon[74327]: osdmap e94: 3 total, 3 up, 3 in
Dec 06 09:43:31 compute-0 ceph-mon[74327]: 10.17 scrub starts
Dec 06 09:43:31 compute-0 ceph-mon[74327]: 10.17 scrub ok
Dec 06 09:43:31 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"}]: dispatch
Dec 06 09:43:31 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]: dispatch
Dec 06 09:43:31 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"}]': finished
Dec 06 09:43:31 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]': finished
Dec 06 09:43:31 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e95 e95: 3 total, 3 up, 3 in
Dec 06 09:43:31 compute-0 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e95: 3 total, 3 up, 3 in
Dec 06 09:43:31 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 95 pg[10.1c( empty local-lis/les=0/0 n=0 ec=58/45 lis/c=74/74 les/c/f=75/75/0 sis=95) [1] r=0 lpr=95 pi=[74,95)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 09:43:31 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 95 pg[10.1b( empty local-lis/les=0/0 n=0 ec=58/45 lis/c=65/65 les/c/f=66/66/0 sis=95) [1]/[2] r=-1 lpr=95 pi=[65,95)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 06 09:43:31 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 95 pg[10.b( empty local-lis/les=0/0 n=0 ec=58/45 lis/c=65/65 les/c/f=66/66/0 sis=95) [1]/[2] r=-1 lpr=95 pi=[65,95)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 06 09:43:31 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 95 pg[10.1b( empty local-lis/les=0/0 n=0 ec=58/45 lis/c=65/65 les/c/f=66/66/0 sis=95) [1]/[2] r=-1 lpr=95 pi=[65,95)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 06 09:43:31 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 95 pg[10.b( empty local-lis/les=0/0 n=0 ec=58/45 lis/c=65/65 les/c/f=66/66/0 sis=95) [1]/[2] r=-1 lpr=95 pi=[65,95)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 06 09:43:31 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 95 pg[10.c( empty local-lis/les=0/0 n=0 ec=58/45 lis/c=74/74 les/c/f=75/75/0 sis=95) [1] r=0 lpr=95 pi=[74,95)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 09:43:31 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 95 pg[10.1a( v 51'1027 (0'0,51'1027] local-lis/les=0/0 n=4 ec=58/45 lis/c=93/67 les/c/f=94/68/0 sis=95) [1] r=0 lpr=95 pi=[67,95)/1 luod=0'0 crt=51'1027 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 06 09:43:31 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 95 pg[10.1a( v 51'1027 (0'0,51'1027] local-lis/les=0/0 n=4 ec=58/45 lis/c=93/67 les/c/f=94/68/0 sis=95) [1] r=0 lpr=95 pi=[67,95)/1 crt=51'1027 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 09:43:31 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 95 pg[10.a( v 51'1027 (0'0,51'1027] local-lis/les=0/0 n=9 ec=58/45 lis/c=93/68 les/c/f=94/69/0 sis=95) [1] r=0 lpr=95 pi=[68,95)/1 luod=0'0 crt=51'1027 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 06 09:43:31 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 95 pg[10.a( v 51'1027 (0'0,51'1027] local-lis/les=0/0 n=9 ec=58/45 lis/c=93/68 les/c/f=94/69/0 sis=95) [1] r=0 lpr=95 pi=[68,95)/1 crt=51'1027 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 09:43:31 compute-0 sudo[100492]: pam_unix(sudo:session): session closed for user root
Dec 06 09:43:31 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 06 09:43:31 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:43:31 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 06 09:43:31 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:43:31 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Dec 06 09:43:31 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Dec 06 09:43:31 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 06 09:43:31 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 09:43:31 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 06 09:43:31 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 09:43:31 compute-0 ceph-mgr[74618]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.conf
Dec 06 09:43:31 compute-0 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.conf
Dec 06 09:43:31 compute-0 ceph-mgr[74618]: [cephadm INFO cephadm.serve] Updating compute-1:/etc/ceph/ceph.conf
Dec 06 09:43:31 compute-0 ceph-mgr[74618]: [cephadm INFO cephadm.serve] Updating compute-2:/etc/ceph/ceph.conf
Dec 06 09:43:31 compute-0 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Updating compute-1:/etc/ceph/ceph.conf
Dec 06 09:43:31 compute-0 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Updating compute-2:/etc/ceph/ceph.conf
Dec 06 09:43:32 compute-0 sudo[100534]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Dec 06 09:43:32 compute-0 sudo[100534]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:43:32 compute-0 sudo[100534]: pam_unix(sudo:session): session closed for user root
Dec 06 09:43:32 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:43:32 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f921c003db0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:43:32 compute-0 sudo[100559]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-5ecd3f74-dade-5fc4-92ce-8950ae424258/etc/ceph
Dec 06 09:43:32 compute-0 sudo[100559]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:43:32 compute-0 sudo[100559]: pam_unix(sudo:session): session closed for user root
Dec 06 09:43:32 compute-0 sudo[100584]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-5ecd3f74-dade-5fc4-92ce-8950ae424258/etc/ceph/ceph.conf.new
Dec 06 09:43:32 compute-0 sudo[100584]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:43:32 compute-0 sudo[100584]: pam_unix(sudo:session): session closed for user root
Dec 06 09:43:32 compute-0 sudo[100609]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-5ecd3f74-dade-5fc4-92ce-8950ae424258
Dec 06 09:43:32 compute-0 sudo[100609]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:43:32 compute-0 sudo[100609]: pam_unix(sudo:session): session closed for user root
Dec 06 09:43:32 compute-0 sudo[100634]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-5ecd3f74-dade-5fc4-92ce-8950ae424258/etc/ceph/ceph.conf.new
Dec 06 09:43:32 compute-0 sudo[100634]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:43:32 compute-0 sudo[100634]: pam_unix(sudo:session): session closed for user root
Dec 06 09:43:32 compute-0 sudo[100682]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-5ecd3f74-dade-5fc4-92ce-8950ae424258/etc/ceph/ceph.conf.new
Dec 06 09:43:32 compute-0 sudo[100682]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:43:32 compute-0 sudo[100682]: pam_unix(sudo:session): session closed for user root
Dec 06 09:43:32 compute-0 sudo[100707]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-5ecd3f74-dade-5fc4-92ce-8950ae424258/etc/ceph/ceph.conf.new
Dec 06 09:43:32 compute-0 sudo[100707]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:43:32 compute-0 sudo[100707]: pam_unix(sudo:session): session closed for user root
Dec 06 09:43:32 compute-0 ceph-mgr[74618]: [cephadm INFO cephadm.serve] Updating compute-2:/var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/config/ceph.conf
Dec 06 09:43:32 compute-0 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Updating compute-2:/var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/config/ceph.conf
Dec 06 09:43:32 compute-0 ceph-mgr[74618]: [cephadm INFO cephadm.serve] Updating compute-1:/var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/config/ceph.conf
Dec 06 09:43:32 compute-0 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Updating compute-1:/var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/config/ceph.conf
Dec 06 09:43:32 compute-0 sudo[100732]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-5ecd3f74-dade-5fc4-92ce-8950ae424258/etc/ceph/ceph.conf.new /etc/ceph/ceph.conf
Dec 06 09:43:32 compute-0 sudo[100732]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:43:32 compute-0 sudo[100732]: pam_unix(sudo:session): session closed for user root
Dec 06 09:43:32 compute-0 ceph-mgr[74618]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/config/ceph.conf
Dec 06 09:43:32 compute-0 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/config/ceph.conf
Dec 06 09:43:32 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e95 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 09:43:32 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e95 do_prune osdmap full prune enabled
Dec 06 09:43:32 compute-0 sudo[100757]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/config
Dec 06 09:43:32 compute-0 sudo[100757]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:43:32 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:43:32 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9228003e10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:43:32 compute-0 sudo[100757]: pam_unix(sudo:session): session closed for user root
Dec 06 09:43:32 compute-0 ceph-osd[82803]: log_channel(cluster) log [DBG] : 10.0 scrub starts
Dec 06 09:43:32 compute-0 ceph-osd[82803]: log_channel(cluster) log [DBG] : 10.0 scrub ok
Dec 06 09:43:32 compute-0 sudo[100782]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-5ecd3f74-dade-5fc4-92ce-8950ae424258/var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/config
Dec 06 09:43:32 compute-0 sudo[100782]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:43:32 compute-0 sudo[100782]: pam_unix(sudo:session): session closed for user root
Dec 06 09:43:32 compute-0 sudo[100807]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-5ecd3f74-dade-5fc4-92ce-8950ae424258/var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/config/ceph.conf.new
Dec 06 09:43:32 compute-0 sudo[100807]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:43:32 compute-0 sudo[100807]: pam_unix(sudo:session): session closed for user root
Dec 06 09:43:32 compute-0 sudo[100832]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-5ecd3f74-dade-5fc4-92ce-8950ae424258
Dec 06 09:43:32 compute-0 sudo[100832]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:43:32 compute-0 sudo[100832]: pam_unix(sudo:session): session closed for user root
Dec 06 09:43:32 compute-0 sudo[100857]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-5ecd3f74-dade-5fc4-92ce-8950ae424258/var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/config/ceph.conf.new
Dec 06 09:43:32 compute-0 sudo[100857]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:43:32 compute-0 sudo[100857]: pam_unix(sudo:session): session closed for user root
Dec 06 09:43:33 compute-0 ceph-mgr[74618]: [cephadm INFO cephadm.serve] Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Dec 06 09:43:33 compute-0 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Dec 06 09:43:33 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:43:33 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:43:33 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:43:33.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:43:33 compute-0 sudo[100906]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-5ecd3f74-dade-5fc4-92ce-8950ae424258/var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/config/ceph.conf.new
Dec 06 09:43:33 compute-0 sudo[100906]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:43:33 compute-0 sudo[100906]: pam_unix(sudo:session): session closed for user root
Dec 06 09:43:33 compute-0 ceph-mgr[74618]: [cephadm INFO cephadm.serve] Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Dec 06 09:43:33 compute-0 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Dec 06 09:43:33 compute-0 sudo[100931]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-5ecd3f74-dade-5fc4-92ce-8950ae424258/var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/config/ceph.conf.new
Dec 06 09:43:33 compute-0 sudo[100931]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:43:33 compute-0 sudo[100931]: pam_unix(sudo:session): session closed for user root
Dec 06 09:43:33 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:43:33 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9210003c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:43:33 compute-0 sudo[100957]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-5ecd3f74-dade-5fc4-92ce-8950ae424258/var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/config/ceph.conf.new /var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/config/ceph.conf
Dec 06 09:43:33 compute-0 sudo[100957]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:43:33 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:43:33 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000029s ======
Dec 06 09:43:33 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:43:33.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Dec 06 09:43:33 compute-0 sudo[100957]: pam_unix(sudo:session): session closed for user root
Dec 06 09:43:33 compute-0 ceph-mgr[74618]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Dec 06 09:43:33 compute-0 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Dec 06 09:43:33 compute-0 sudo[100982]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Dec 06 09:43:33 compute-0 sudo[100982]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:43:33 compute-0 sudo[100982]: pam_unix(sudo:session): session closed for user root
Dec 06 09:43:33 compute-0 sudo[101007]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-5ecd3f74-dade-5fc4-92ce-8950ae424258/etc/ceph
Dec 06 09:43:33 compute-0 sudo[101007]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:43:33 compute-0 sudo[101007]: pam_unix(sudo:session): session closed for user root
Dec 06 09:43:33 compute-0 sudo[101032]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-5ecd3f74-dade-5fc4-92ce-8950ae424258/etc/ceph/ceph.client.admin.keyring.new
Dec 06 09:43:33 compute-0 sudo[101032]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:43:33 compute-0 sudo[101032]: pam_unix(sudo:session): session closed for user root
Dec 06 09:43:33 compute-0 sudo[101057]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-5ecd3f74-dade-5fc4-92ce-8950ae424258
Dec 06 09:43:33 compute-0 sudo[101057]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:43:33 compute-0 sudo[101057]: pam_unix(sudo:session): session closed for user root
Dec 06 09:43:33 compute-0 ceph-osd[82803]: log_channel(cluster) log [DBG] : 10.f scrub starts
Dec 06 09:43:33 compute-0 ceph-mgr[74618]: [cephadm INFO cephadm.serve] Updating compute-2:/var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/config/ceph.client.admin.keyring
Dec 06 09:43:33 compute-0 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Updating compute-2:/var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/config/ceph.client.admin.keyring
Dec 06 09:43:33 compute-0 sudo[101082]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-5ecd3f74-dade-5fc4-92ce-8950ae424258/etc/ceph/ceph.client.admin.keyring.new
Dec 06 09:43:33 compute-0 sudo[101082]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:43:33 compute-0 sudo[101082]: pam_unix(sudo:session): session closed for user root
Dec 06 09:43:33 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v11: 337 pgs: 2 unknown, 2 remapped+peering, 2 peering, 1 active+clean+scrubbing, 330 active+clean; 458 KiB data, 143 MiB used, 60 GiB / 60 GiB avail; 21 B/s, 0 objects/s recovering
Dec 06 09:43:33 compute-0 ceph-mgr[74618]: [cephadm INFO cephadm.serve] Updating compute-1:/var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/config/ceph.client.admin.keyring
Dec 06 09:43:33 compute-0 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Updating compute-1:/var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/config/ceph.client.admin.keyring
Dec 06 09:43:33 compute-0 sudo[101130]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-5ecd3f74-dade-5fc4-92ce-8950ae424258/etc/ceph/ceph.client.admin.keyring.new
Dec 06 09:43:33 compute-0 sudo[101130]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:43:33 compute-0 sudo[101130]: pam_unix(sudo:session): session closed for user root
Dec 06 09:43:33 compute-0 sudo[101155]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-5ecd3f74-dade-5fc4-92ce-8950ae424258/etc/ceph/ceph.client.admin.keyring.new
Dec 06 09:43:34 compute-0 sudo[101155]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:43:34 compute-0 sudo[101155]: pam_unix(sudo:session): session closed for user root
Dec 06 09:43:34 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:43:34 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9218003c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:43:34 compute-0 sudo[101180]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-5ecd3f74-dade-5fc4-92ce-8950ae424258/etc/ceph/ceph.client.admin.keyring.new /etc/ceph/ceph.client.admin.keyring
Dec 06 09:43:34 compute-0 sudo[101180]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:43:34 compute-0 sudo[101180]: pam_unix(sudo:session): session closed for user root
Dec 06 09:43:34 compute-0 ceph-mgr[74618]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/config/ceph.client.admin.keyring
Dec 06 09:43:34 compute-0 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/config/ceph.client.admin.keyring
Dec 06 09:43:34 compute-0 sudo[101205]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/config
Dec 06 09:43:34 compute-0 sudo[101205]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:43:34 compute-0 sudo[101205]: pam_unix(sudo:session): session closed for user root
Dec 06 09:43:34 compute-0 sudo[101230]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-5ecd3f74-dade-5fc4-92ce-8950ae424258/var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/config
Dec 06 09:43:34 compute-0 sudo[101230]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:43:34 compute-0 sudo[101230]: pam_unix(sudo:session): session closed for user root
Dec 06 09:43:34 compute-0 sudo[101255]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-5ecd3f74-dade-5fc4-92ce-8950ae424258/var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/config/ceph.client.admin.keyring.new
Dec 06 09:43:34 compute-0 sudo[101255]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:43:34 compute-0 sudo[101255]: pam_unix(sudo:session): session closed for user root
Dec 06 09:43:34 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec 06 09:43:34 compute-0 sudo[101280]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-5ecd3f74-dade-5fc4-92ce-8950ae424258
Dec 06 09:43:34 compute-0 sudo[101280]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:43:34 compute-0 sudo[101280]: pam_unix(sudo:session): session closed for user root
Dec 06 09:43:34 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec 06 09:43:34 compute-0 sudo[101305]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-5ecd3f74-dade-5fc4-92ce-8950ae424258/var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/config/ceph.client.admin.keyring.new
Dec 06 09:43:34 compute-0 sudo[101305]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:43:34 compute-0 sudo[101305]: pam_unix(sudo:session): session closed for user root
Dec 06 09:43:34 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:43:34 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f921c003db0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:43:34 compute-0 ceph-osd[82803]: log_channel(cluster) log [DBG] : 6.3 deep-scrub starts
Dec 06 09:43:34 compute-0 ceph-osd[82803]: log_channel(cluster) log [DBG] : 10.f scrub ok
Dec 06 09:43:34 compute-0 sudo[101353]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-5ecd3f74-dade-5fc4-92ce-8950ae424258/var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/config/ceph.client.admin.keyring.new
Dec 06 09:43:34 compute-0 sudo[101353]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:43:34 compute-0 sudo[101353]: pam_unix(sudo:session): session closed for user root
Dec 06 09:43:34 compute-0 sudo[101378]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-5ecd3f74-dade-5fc4-92ce-8950ae424258/var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/config/ceph.client.admin.keyring.new
Dec 06 09:43:34 compute-0 sudo[101378]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:43:34 compute-0 sudo[101378]: pam_unix(sudo:session): session closed for user root
Dec 06 09:43:34 compute-0 sudo[101403]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-5ecd3f74-dade-5fc4-92ce-8950ae424258/var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/config/ceph.client.admin.keyring.new /var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/config/ceph.client.admin.keyring
Dec 06 09:43:34 compute-0 sudo[101403]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:43:34 compute-0 sudo[101403]: pam_unix(sudo:session): session closed for user root
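The sudo chains from 09:43:32 onward are cephadm distributing ceph.conf and ceph.client.admin.keyring: each file is built under /tmp/cephadm-<fsid>/, given its final owner (0:0) and mode (644 for the conf, 600 for the keyring), and then moved into place with /bin/mv, so readers never observe a half-written file. A minimal sketch of the same write-then-rename pattern, with placeholder content (the actual file bodies are not in the log); it must run as root for the chown:

    # Staged write-then-rename, as performed by the mkdir/touch/chown/
    # chmod/mv sudo chains above. os.replace() is rename(2), which is
    # atomic when source and destination are on the same filesystem.
    import os

    FSID = "5ecd3f74-dade-5fc4-92ce-8950ae424258"
    STAGE = f"/tmp/cephadm-{FSID}/etc/ceph"      # scratch root, as in the log
    FINAL = "/etc/ceph/ceph.conf"                # destination, as in the log
    CONTENT = "[global]\n# placeholder config body\n"

    os.makedirs(STAGE, exist_ok=True)            # /bin/mkdir -p
    tmp = os.path.join(STAGE, "ceph.conf.new")
    with open(tmp, "w") as f:                    # /bin/touch + write
        f.write(CONTENT)
    os.chown(tmp, 0, 0)                          # /bin/chown -R 0:0 (requires root)
    os.chmod(tmp, 0o644)                         # 644 for the conf; keyrings get 600
    os.replace(tmp, FINAL)                       # /bin/mv, atomic replacement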
Dec 06 09:43:34 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 06 09:43:35 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e96 e96: 3 total, 3 up, 3 in
Dec 06 09:43:35 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:43:35 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:43:35 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:43:35.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:43:35 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:43:35 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9228003e30 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:43:35 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:43:35 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000029s ======
Dec 06 09:43:35 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:43:35.288 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Dec 06 09:43:35 compute-0 ceph-osd[82803]: log_channel(cluster) log [DBG] : 6.3 deep-scrub ok
Dec 06 09:43:35 compute-0 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e96: 3 total, 3 up, 3 in
Dec 06 09:43:35 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 96 pg[10.c( empty local-lis/les=0/0 n=0 ec=58/45 lis/c=74/74 les/c/f=75/75/0 sis=96) [1]/[2] r=-1 lpr=96 pi=[74,96)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 06 09:43:35 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 96 pg[10.c( empty local-lis/les=0/0 n=0 ec=58/45 lis/c=74/74 les/c/f=75/75/0 sis=96) [1]/[2] r=-1 lpr=96 pi=[74,96)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 06 09:43:35 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 96 pg[10.1c( empty local-lis/les=0/0 n=0 ec=58/45 lis/c=74/74 les/c/f=75/75/0 sis=96) [1]/[2] r=-1 lpr=96 pi=[74,96)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 06 09:43:35 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 96 pg[10.1c( empty local-lis/les=0/0 n=0 ec=58/45 lis/c=74/74 les/c/f=75/75/0 sis=96) [1]/[2] r=-1 lpr=96 pi=[74,96)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 06 09:43:35 compute-0 ceph-mon[74327]: 12.1d scrub starts
Dec 06 09:43:35 compute-0 ceph-mon[74327]: 12.1d scrub ok
Dec 06 09:43:35 compute-0 ceph-mon[74327]: 12.10 scrub starts
Dec 06 09:43:35 compute-0 ceph-mon[74327]: 12.10 scrub ok
Dec 06 09:43:35 compute-0 ceph-mon[74327]: pgmap v9: 337 pgs: 337 active+clean; 458 KiB data, 143 MiB used, 60 GiB / 60 GiB avail
Dec 06 09:43:35 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"}]': finished
Dec 06 09:43:35 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]': finished
Dec 06 09:43:35 compute-0 ceph-mon[74327]: osdmap e95: 3 total, 3 up, 3 in
Dec 06 09:43:35 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:43:35 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:43:35 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Dec 06 09:43:35 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 09:43:35 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 09:43:35 compute-0 ceph-mon[74327]: Updating compute-0:/etc/ceph/ceph.conf
Dec 06 09:43:35 compute-0 ceph-mon[74327]: Updating compute-1:/etc/ceph/ceph.conf
Dec 06 09:43:35 compute-0 ceph-mon[74327]: Updating compute-2:/etc/ceph/ceph.conf
Dec 06 09:43:35 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 96 pg[10.a( v 51'1027 (0'0,51'1027] local-lis/les=95/96 n=9 ec=58/45 lis/c=93/68 les/c/f=94/69/0 sis=95) [1] r=0 lpr=95 pi=[68,95)/1 crt=51'1027 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 09:43:35 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 96 pg[10.1a( v 51'1027 (0'0,51'1027] local-lis/les=95/96 n=4 ec=58/45 lis/c=93/67 les/c/f=94/68/0 sis=95) [1] r=0 lpr=95 pi=[67,95)/1 crt=51'1027 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 09:43:35 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:43:35 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v13: 337 pgs: 2 unknown, 2 remapped+peering, 2 peering, 1 active+clean+scrubbing, 330 active+clean; 458 KiB data, 143 MiB used, 60 GiB / 60 GiB avail; 18 B/s, 0 objects/s recovering
Dec 06 09:43:35 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec 06 09:43:35 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:43:35 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec 06 09:43:35 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:43:36 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:43:36 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9210003c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:43:36 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 06 09:43:36 compute-0 sshd-session[101430]: Accepted publickey for zuul from 192.168.122.30 port 48974 ssh2: ECDSA SHA256:r1j7aLsKAM+XxDNbzEU5vWGpGNCOaIBwc7FZdATPttA
Dec 06 09:43:36 compute-0 systemd-logind[795]: New session 39 of user zuul.
Dec 06 09:43:36 compute-0 systemd[1]: Started Session 39 of User zuul.
Dec 06 09:43:36 compute-0 sshd-session[101430]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 06 09:43:36 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:43:36 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e96 do_prune osdmap full prune enabled
Dec 06 09:43:36 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:43:36 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:43:36 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9218003c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:43:36 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e97 e97: 3 total, 3 up, 3 in
Dec 06 09:43:36 compute-0 ceph-mon[74327]: 12.4 deep-scrub starts
Dec 06 09:43:36 compute-0 ceph-mon[74327]: 12.4 deep-scrub ok
Dec 06 09:43:36 compute-0 ceph-mon[74327]: 12.c scrub starts
Dec 06 09:43:36 compute-0 ceph-mon[74327]: 12.c scrub ok
Dec 06 09:43:36 compute-0 ceph-mon[74327]: Updating compute-2:/var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/config/ceph.conf
Dec 06 09:43:36 compute-0 ceph-mon[74327]: Updating compute-1:/var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/config/ceph.conf
Dec 06 09:43:36 compute-0 ceph-mon[74327]: Updating compute-0:/var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/config/ceph.conf
Dec 06 09:43:36 compute-0 ceph-mon[74327]: 10.0 scrub starts
Dec 06 09:43:36 compute-0 ceph-mon[74327]: 10.0 scrub ok
Dec 06 09:43:36 compute-0 ceph-mon[74327]: 12.1e scrub starts
Dec 06 09:43:36 compute-0 ceph-mon[74327]: 12.1e scrub ok
Dec 06 09:43:36 compute-0 ceph-mon[74327]: Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Dec 06 09:43:36 compute-0 ceph-mon[74327]: Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Dec 06 09:43:36 compute-0 ceph-mon[74327]: Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Dec 06 09:43:36 compute-0 ceph-mon[74327]: 12.a scrub starts
Dec 06 09:43:36 compute-0 ceph-mon[74327]: 12.a scrub ok
Dec 06 09:43:36 compute-0 ceph-mon[74327]: 10.f scrub starts
Dec 06 09:43:36 compute-0 ceph-mon[74327]: Updating compute-2:/var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/config/ceph.client.admin.keyring
Dec 06 09:43:36 compute-0 ceph-mon[74327]: pgmap v11: 337 pgs: 2 unknown, 2 remapped+peering, 2 peering, 1 active+clean+scrubbing, 330 active+clean; 458 KiB data, 143 MiB used, 60 GiB / 60 GiB avail; 21 B/s, 0 objects/s recovering
Dec 06 09:43:36 compute-0 ceph-mon[74327]: Updating compute-1:/var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/config/ceph.client.admin.keyring
Dec 06 09:43:36 compute-0 ceph-mon[74327]: 12.13 scrub starts
Dec 06 09:43:36 compute-0 ceph-mon[74327]: Updating compute-0:/var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/config/ceph.client.admin.keyring
Dec 06 09:43:36 compute-0 ceph-mon[74327]: 12.13 scrub ok
Dec 06 09:43:36 compute-0 ceph-mon[74327]: 12.b scrub starts
Dec 06 09:43:36 compute-0 ceph-mon[74327]: 12.b scrub ok
Dec 06 09:43:36 compute-0 ceph-mon[74327]: 6.3 deep-scrub starts
Dec 06 09:43:36 compute-0 ceph-mon[74327]: 10.f scrub ok
Dec 06 09:43:36 compute-0 ceph-mon[74327]: 12.18 scrub starts
Dec 06 09:43:36 compute-0 ceph-mon[74327]: 12.18 scrub ok
Dec 06 09:43:36 compute-0 ceph-mon[74327]: 6.3 deep-scrub ok
Dec 06 09:43:36 compute-0 ceph-mon[74327]: osdmap e96: 3 total, 3 up, 3 in
Dec 06 09:43:36 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:43:36 compute-0 ceph-mon[74327]: 12.e scrub starts
Dec 06 09:43:36 compute-0 ceph-mon[74327]: 12.e scrub ok
Dec 06 09:43:36 compute-0 ceph-mon[74327]: pgmap v13: 337 pgs: 2 unknown, 2 remapped+peering, 2 peering, 1 active+clean+scrubbing, 330 active+clean; 458 KiB data, 143 MiB used, 60 GiB / 60 GiB avail; 18 B/s, 0 objects/s recovering
Dec 06 09:43:36 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:43:36 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:43:36 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:43:36 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 97 pg[10.1b( v 51'1027 (0'0,51'1027] local-lis/les=0/0 n=5 ec=58/45 lis/c=95/65 les/c/f=96/66/0 sis=97) [1] r=0 lpr=97 pi=[65,97)/1 luod=0'0 crt=51'1027 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 06 09:43:36 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 97 pg[10.1b( v 51'1027 (0'0,51'1027] local-lis/les=0/0 n=5 ec=58/45 lis/c=95/65 les/c/f=96/66/0 sis=97) [1] r=0 lpr=97 pi=[65,97)/1 crt=51'1027 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 09:43:36 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:43:36 compute-0 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e97: 3 total, 3 up, 3 in
Dec 06 09:43:36 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 06 09:43:36 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:43:36 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Dec 06 09:43:36 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:43:36 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 06 09:43:36 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 09:43:36 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 06 09:43:36 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 09:43:36 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 06 09:43:36 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 09:43:37 compute-0 sudo[101585]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 09:43:37 compute-0 sudo[101585]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:43:37 compute-0 sudo[101585]: pam_unix(sudo:session): session closed for user root
Dec 06 09:43:37 compute-0 python3.9[101584]: ansible-ansible.legacy.ping Invoked with data=pong
Dec 06 09:43:37 compute-0 sudo[101611]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 5ecd3f74-dade-5fc4-92ce-8950ae424258 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Dec 06 09:43:37 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:43:37 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:43:37 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:43:37.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:43:37 compute-0 sudo[101611]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:43:37 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:43:37 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9224001040 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:43:37 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:43:37 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:43:37 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:43:37.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:43:37 compute-0 podman[101756]: 2025-12-06 09:43:37.570842967 +0000 UTC m=+0.065305406 container create 06f64820532842837780e2172bddb9ffaa221de98e25abf6e5b8e9aceb2200fe (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_kepler, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 06 09:43:37 compute-0 systemd[90433]: Starting Mark boot as successful...
Dec 06 09:43:37 compute-0 systemd[1]: Started libpod-conmon-06f64820532842837780e2172bddb9ffaa221de98e25abf6e5b8e9aceb2200fe.scope.
Dec 06 09:43:37 compute-0 systemd[90433]: Finished Mark boot as successful.
Dec 06 09:43:37 compute-0 systemd[1]: Started libcrun container.
Dec 06 09:43:37 compute-0 podman[101756]: 2025-12-06 09:43:37.542252612 +0000 UTC m=+0.036715051 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 06 09:43:37 compute-0 podman[101756]: 2025-12-06 09:43:37.642632358 +0000 UTC m=+0.137094777 container init 06f64820532842837780e2172bddb9ffaa221de98e25abf6e5b8e9aceb2200fe (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_kepler, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 06 09:43:37 compute-0 podman[101756]: 2025-12-06 09:43:37.650954188 +0000 UTC m=+0.145416587 container start 06f64820532842837780e2172bddb9ffaa221de98e25abf6e5b8e9aceb2200fe (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_kepler, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 09:43:37 compute-0 clever_kepler[101774]: 167 167
Dec 06 09:43:37 compute-0 podman[101756]: 2025-12-06 09:43:37.657164987 +0000 UTC m=+0.151627396 container attach 06f64820532842837780e2172bddb9ffaa221de98e25abf6e5b8e9aceb2200fe (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_kepler, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec 06 09:43:37 compute-0 systemd[1]: libpod-06f64820532842837780e2172bddb9ffaa221de98e25abf6e5b8e9aceb2200fe.scope: Deactivated successfully.
Dec 06 09:43:37 compute-0 podman[101756]: 2025-12-06 09:43:37.659188595 +0000 UTC m=+0.153651014 container died 06f64820532842837780e2172bddb9ffaa221de98e25abf6e5b8e9aceb2200fe (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_kepler, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 09:43:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-02035fe57f41a745e6feba48b0d1867781b6ea1d261ab74011252ce2cd9a8c72-merged.mount: Deactivated successfully.
Dec 06 09:43:37 compute-0 podman[101756]: 2025-12-06 09:43:37.711746462 +0000 UTC m=+0.206208861 container remove 06f64820532842837780e2172bddb9ffaa221de98e25abf6e5b8e9aceb2200fe (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_kepler, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec 06 09:43:37 compute-0 systemd[1]: libpod-conmon-06f64820532842837780e2172bddb9ffaa221de98e25abf6e5b8e9aceb2200fe.scope: Deactivated successfully.
Dec 06 09:43:37 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v15: 337 pgs: 1 peering, 3 active+remapped, 333 active+clean; 458 KiB data, 143 MiB used, 60 GiB / 60 GiB avail; 43 B/s, 1 objects/s recovering
Dec 06 09:43:37 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e97 do_prune osdmap full prune enabled
Dec 06 09:43:37 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e98 e98: 3 total, 3 up, 3 in
Dec 06 09:43:37 compute-0 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e98: 3 total, 3 up, 3 in
Dec 06 09:43:37 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 98 pg[10.b( v 51'1027 (0'0,51'1027] local-lis/les=0/0 n=6 ec=58/45 lis/c=95/65 les/c/f=96/66/0 sis=98) [1] r=0 lpr=98 pi=[65,98)/1 luod=0'0 crt=51'1027 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 06 09:43:37 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 98 pg[10.1c( v 51'1027 (0'0,51'1027] local-lis/les=0/0 n=5 ec=58/45 lis/c=96/74 les/c/f=97/75/0 sis=98) [1] r=0 lpr=98 pi=[74,98)/1 luod=0'0 crt=51'1027 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 06 09:43:37 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 98 pg[10.c( v 51'1027 (0'0,51'1027] local-lis/les=0/0 n=6 ec=58/45 lis/c=96/74 les/c/f=97/75/0 sis=98) [1] r=0 lpr=98 pi=[74,98)/1 luod=0'0 crt=51'1027 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 06 09:43:37 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 98 pg[10.1c( v 51'1027 (0'0,51'1027] local-lis/les=0/0 n=5 ec=58/45 lis/c=96/74 les/c/f=97/75/0 sis=98) [1] r=0 lpr=98 pi=[74,98)/1 crt=51'1027 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 09:43:37 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 98 pg[10.c( v 51'1027 (0'0,51'1027] local-lis/les=0/0 n=6 ec=58/45 lis/c=96/74 les/c/f=97/75/0 sis=98) [1] r=0 lpr=98 pi=[74,98)/1 crt=51'1027 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 09:43:37 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 98 pg[10.b( v 51'1027 (0'0,51'1027] local-lis/les=0/0 n=6 ec=58/45 lis/c=95/65 les/c/f=96/66/0 sis=98) [1] r=0 lpr=98 pi=[65,98)/1 crt=51'1027 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 09:43:37 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 98 pg[10.1b( v 51'1027 (0'0,51'1027] local-lis/les=97/98 n=5 ec=58/45 lis/c=95/65 les/c/f=96/66/0 sis=97) [1] r=0 lpr=97 pi=[65,97)/1 crt=51'1027 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 09:43:37 compute-0 ceph-mon[74327]: 12.19 scrub starts
Dec 06 09:43:37 compute-0 ceph-mon[74327]: 12.19 scrub ok
Dec 06 09:43:37 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:43:37 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:43:37 compute-0 ceph-mon[74327]: osdmap e97: 3 total, 3 up, 3 in
Dec 06 09:43:37 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:43:37 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:43:37 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 09:43:37 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 09:43:37 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 09:43:37 compute-0 podman[101845]: 2025-12-06 09:43:37.887263044 +0000 UTC m=+0.059591949 container create ce5c9e0469bcbeec2f832a06477bcdc3527f060496e3e4972dc15e1a50fe8783 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_borg, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_REF=squid)
Dec 06 09:43:37 compute-0 systemd[1]: Started libpod-conmon-ce5c9e0469bcbeec2f832a06477bcdc3527f060496e3e4972dc15e1a50fe8783.scope.
Dec 06 09:43:37 compute-0 podman[101845]: 2025-12-06 09:43:37.862344897 +0000 UTC m=+0.034673822 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 06 09:43:37 compute-0 systemd[1]: Started libcrun container.
Dec 06 09:43:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f663061767eb1bc4c0fc0d186ffbb53828a1861da9cbcfee94d561f0db2371c4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 09:43:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f663061767eb1bc4c0fc0d186ffbb53828a1861da9cbcfee94d561f0db2371c4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 09:43:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f663061767eb1bc4c0fc0d186ffbb53828a1861da9cbcfee94d561f0db2371c4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 09:43:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f663061767eb1bc4c0fc0d186ffbb53828a1861da9cbcfee94d561f0db2371c4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 09:43:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f663061767eb1bc4c0fc0d186ffbb53828a1861da9cbcfee94d561f0db2371c4/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 06 09:43:37 compute-0 podman[101845]: 2025-12-06 09:43:37.986358064 +0000 UTC m=+0.158686989 container init ce5c9e0469bcbeec2f832a06477bcdc3527f060496e3e4972dc15e1a50fe8783 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_borg, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Dec 06 09:43:37 compute-0 podman[101845]: 2025-12-06 09:43:37.997027552 +0000 UTC m=+0.169356467 container start ce5c9e0469bcbeec2f832a06477bcdc3527f060496e3e4972dc15e1a50fe8783 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_borg, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 09:43:38 compute-0 podman[101845]: 2025-12-06 09:43:38.000879003 +0000 UTC m=+0.173207968 container attach ce5c9e0469bcbeec2f832a06477bcdc3527f060496e3e4972dc15e1a50fe8783 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_borg, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 09:43:38 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:43:38 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9228003ec0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:43:38 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e98 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 09:43:38 compute-0 python3.9[101914]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 06 09:43:38 compute-0 priceless_borg[101909]: --> passed data devices: 0 physical, 1 LVM
Dec 06 09:43:38 compute-0 priceless_borg[101909]: --> All data devices are unavailable
Dec 06 09:43:38 compute-0 systemd[1]: libpod-ce5c9e0469bcbeec2f832a06477bcdc3527f060496e3e4972dc15e1a50fe8783.scope: Deactivated successfully.
Dec 06 09:43:38 compute-0 podman[101845]: 2025-12-06 09:43:38.388442243 +0000 UTC m=+0.560771178 container died ce5c9e0469bcbeec2f832a06477bcdc3527f060496e3e4972dc15e1a50fe8783 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_borg, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid)
Dec 06 09:43:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-f663061767eb1bc4c0fc0d186ffbb53828a1861da9cbcfee94d561f0db2371c4-merged.mount: Deactivated successfully.
Dec 06 09:43:38 compute-0 podman[101845]: 2025-12-06 09:43:38.442337809 +0000 UTC m=+0.614666714 container remove ce5c9e0469bcbeec2f832a06477bcdc3527f060496e3e4972dc15e1a50fe8783 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_borg, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Dec 06 09:43:38 compute-0 systemd[1]: libpod-conmon-ce5c9e0469bcbeec2f832a06477bcdc3527f060496e3e4972dc15e1a50fe8783.scope: Deactivated successfully.
Dec 06 09:43:38 compute-0 sudo[101611]: pam_unix(sudo:session): session closed for user root
Dec 06 09:43:38 compute-0 sudo[101964]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 09:43:38 compute-0 sudo[101964]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:43:38 compute-0 sudo[101964]: pam_unix(sudo:session): session closed for user root
Dec 06 09:43:38 compute-0 sudo[101991]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 5ecd3f74-dade-5fc4-92ce-8950ae424258 -- lvm list --format json
Dec 06 09:43:38 compute-0 sudo[101991]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:43:38 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:43:38 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9210003c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:43:38 compute-0 ceph-osd[82803]: log_channel(cluster) log [DBG] : 10.1b scrub starts
Dec 06 09:43:38 compute-0 ceph-osd[82803]: log_channel(cluster) log [DBG] : 10.1b scrub ok
Dec 06 09:43:38 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e98 do_prune osdmap full prune enabled
Dec 06 09:43:38 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e99 e99: 3 total, 3 up, 3 in
Dec 06 09:43:38 compute-0 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e99: 3 total, 3 up, 3 in
Dec 06 09:43:38 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 99 pg[10.c( v 51'1027 (0'0,51'1027] local-lis/les=98/99 n=6 ec=58/45 lis/c=96/74 les/c/f=97/75/0 sis=98) [1] r=0 lpr=98 pi=[74,98)/1 crt=51'1027 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 09:43:38 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 99 pg[10.b( v 51'1027 (0'0,51'1027] local-lis/les=98/99 n=6 ec=58/45 lis/c=95/65 les/c/f=96/66/0 sis=98) [1] r=0 lpr=98 pi=[65,98)/1 crt=51'1027 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 09:43:38 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 06 09:43:38 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 09:43:38 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 99 pg[10.1c( v 51'1027 (0'0,51'1027] local-lis/les=98/99 n=5 ec=58/45 lis/c=96/74 les/c/f=97/75/0 sis=98) [1] r=0 lpr=98 pi=[74,98)/1 crt=51'1027 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 09:43:38 compute-0 ceph-mon[74327]: 12.8 scrub starts
Dec 06 09:43:38 compute-0 ceph-mon[74327]: 12.8 scrub ok
Dec 06 09:43:38 compute-0 ceph-mon[74327]: pgmap v15: 337 pgs: 1 peering, 3 active+remapped, 333 active+clean; 458 KiB data, 143 MiB used, 60 GiB / 60 GiB avail; 43 B/s, 1 objects/s recovering
Dec 06 09:43:38 compute-0 ceph-mon[74327]: osdmap e98: 3 total, 3 up, 3 in
Dec 06 09:43:38 compute-0 ceph-mon[74327]: 10.1b scrub starts
Dec 06 09:43:38 compute-0 ceph-mon[74327]: 10.1b scrub ok
Dec 06 09:43:38 compute-0 ceph-mon[74327]: osdmap e99: 3 total, 3 up, 3 in
Dec 06 09:43:39 compute-0 podman[102119]: 2025-12-06 09:43:39.049203946 +0000 UTC m=+0.042998732 container create 7beac17a8404e53095acbd7edfed04f56ff035efe97f82babfa684745d5226a5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigorous_heisenberg, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Dec 06 09:43:39 compute-0 systemd[1]: Started libpod-conmon-7beac17a8404e53095acbd7edfed04f56ff035efe97f82babfa684745d5226a5.scope.
Dec 06 09:43:39 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:43:39 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000029s ======
Dec 06 09:43:39 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:43:39.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Dec 06 09:43:39 compute-0 podman[102119]: 2025-12-06 09:43:39.033056189 +0000 UTC m=+0.026850995 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 06 09:43:39 compute-0 systemd[1]: Started libcrun container.
Dec 06 09:43:39 compute-0 podman[102119]: 2025-12-06 09:43:39.167971442 +0000 UTC m=+0.161766278 container init 7beac17a8404e53095acbd7edfed04f56ff035efe97f82babfa684745d5226a5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigorous_heisenberg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec 06 09:43:39 compute-0 podman[102119]: 2025-12-06 09:43:39.174533361 +0000 UTC m=+0.168328697 container start 7beac17a8404e53095acbd7edfed04f56ff035efe97f82babfa684745d5226a5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigorous_heisenberg, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 09:43:39 compute-0 podman[102119]: 2025-12-06 09:43:39.178375182 +0000 UTC m=+0.172170008 container attach 7beac17a8404e53095acbd7edfed04f56ff035efe97f82babfa684745d5226a5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigorous_heisenberg, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 09:43:39 compute-0 vigorous_heisenberg[102172]: 167 167
Dec 06 09:43:39 compute-0 systemd[1]: libpod-7beac17a8404e53095acbd7edfed04f56ff035efe97f82babfa684745d5226a5.scope: Deactivated successfully.
Dec 06 09:43:39 compute-0 podman[102119]: 2025-12-06 09:43:39.182589614 +0000 UTC m=+0.176384410 container died 7beac17a8404e53095acbd7edfed04f56ff035efe97f82babfa684745d5226a5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigorous_heisenberg, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 09:43:39 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:43:39 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9218003c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:43:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-c0204f9b44ea9921a3499e2e127016faeaebe6dc7340aa36dea2588e2370cab5-merged.mount: Deactivated successfully.
Dec 06 09:43:39 compute-0 sudo[102211]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cuxqiaybletedywsyyuzobemeceqlzuv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014218.755813-93-120001985956485/AnsiballZ_command.py'
Dec 06 09:43:39 compute-0 podman[102119]: 2025-12-06 09:43:39.232849864 +0000 UTC m=+0.226644650 container remove 7beac17a8404e53095acbd7edfed04f56ff035efe97f82babfa684745d5226a5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigorous_heisenberg, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec 06 09:43:39 compute-0 sudo[102211]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:43:39 compute-0 systemd[1]: libpod-conmon-7beac17a8404e53095acbd7edfed04f56ff035efe97f82babfa684745d5226a5.scope: Deactivated successfully.
Dec 06 09:43:39 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:43:39 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000029s ======
Dec 06 09:43:39 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:43:39.291 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Dec 06 09:43:39 compute-0 python3.9[102219]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 09:43:39 compute-0 sudo[102211]: pam_unix(sudo:session): session closed for user root
Dec 06 09:43:39 compute-0 podman[102227]: 2025-12-06 09:43:39.448691061 +0000 UTC m=+0.073819532 container create f070e952adf6f63027ff768052b9e77b2fd2372b526d66f68f2cb998b47bbb2b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_pascal, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 06 09:43:39 compute-0 systemd[1]: Started libpod-conmon-f070e952adf6f63027ff768052b9e77b2fd2372b526d66f68f2cb998b47bbb2b.scope.
Dec 06 09:43:39 compute-0 podman[102227]: 2025-12-06 09:43:39.420512958 +0000 UTC m=+0.045641449 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 06 09:43:39 compute-0 systemd[1]: Started libcrun container.
Dec 06 09:43:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/71804d84db8b3a28b833fc5e4be69f8e7aebd794d7ab060f109a7a640fdf1c4a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 09:43:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/71804d84db8b3a28b833fc5e4be69f8e7aebd794d7ab060f109a7a640fdf1c4a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 09:43:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/71804d84db8b3a28b833fc5e4be69f8e7aebd794d7ab060f109a7a640fdf1c4a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 09:43:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/71804d84db8b3a28b833fc5e4be69f8e7aebd794d7ab060f109a7a640fdf1c4a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 09:43:39 compute-0 podman[102227]: 2025-12-06 09:43:39.538885152 +0000 UTC m=+0.164013643 container init f070e952adf6f63027ff768052b9e77b2fd2372b526d66f68f2cb998b47bbb2b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_pascal, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 09:43:39 compute-0 podman[102227]: 2025-12-06 09:43:39.551654441 +0000 UTC m=+0.176782912 container start f070e952adf6f63027ff768052b9e77b2fd2372b526d66f68f2cb998b47bbb2b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_pascal, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec 06 09:43:39 compute-0 podman[102227]: 2025-12-06 09:43:39.554246866 +0000 UTC m=+0.179375337 container attach f070e952adf6f63027ff768052b9e77b2fd2372b526d66f68f2cb998b47bbb2b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_pascal, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 09:43:39 compute-0 ceph-osd[82803]: log_channel(cluster) log [DBG] : 10.b scrub starts
Dec 06 09:43:39 compute-0 ceph-osd[82803]: log_channel(cluster) log [DBG] : 10.b scrub ok
Dec 06 09:43:39 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v18: 337 pgs: 1 peering, 3 active+remapped, 333 active+clean; 458 KiB data, 143 MiB used, 60 GiB / 60 GiB avail; 34 B/s, 2 objects/s recovering
Dec 06 09:43:39 compute-0 quirky_pascal[102257]: {
Dec 06 09:43:39 compute-0 quirky_pascal[102257]:     "1": [
Dec 06 09:43:39 compute-0 quirky_pascal[102257]:         {
Dec 06 09:43:39 compute-0 quirky_pascal[102257]:             "devices": [
Dec 06 09:43:39 compute-0 quirky_pascal[102257]:                 "/dev/loop3"
Dec 06 09:43:39 compute-0 quirky_pascal[102257]:             ],
Dec 06 09:43:39 compute-0 quirky_pascal[102257]:             "lv_name": "ceph_lv0",
Dec 06 09:43:39 compute-0 quirky_pascal[102257]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 09:43:39 compute-0 quirky_pascal[102257]:             "lv_size": "21470642176",
Dec 06 09:43:39 compute-0 quirky_pascal[102257]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=a55pcS-v3Vt-CJ4W-fqCV-Baze-9VlY-jG9prS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=5ecd3f74-dade-5fc4-92ce-8950ae424258,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=7899c4d8-edb4-4836-b838-c4aa702ad7af,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 06 09:43:39 compute-0 quirky_pascal[102257]:             "lv_uuid": "a55pcS-v3Vt-CJ4W-fqCV-Baze-9VlY-jG9prS",
Dec 06 09:43:39 compute-0 quirky_pascal[102257]:             "name": "ceph_lv0",
Dec 06 09:43:39 compute-0 quirky_pascal[102257]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 09:43:39 compute-0 quirky_pascal[102257]:             "tags": {
Dec 06 09:43:39 compute-0 quirky_pascal[102257]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 06 09:43:39 compute-0 quirky_pascal[102257]:                 "ceph.block_uuid": "a55pcS-v3Vt-CJ4W-fqCV-Baze-9VlY-jG9prS",
Dec 06 09:43:39 compute-0 quirky_pascal[102257]:                 "ceph.cephx_lockbox_secret": "",
Dec 06 09:43:39 compute-0 quirky_pascal[102257]:                 "ceph.cluster_fsid": "5ecd3f74-dade-5fc4-92ce-8950ae424258",
Dec 06 09:43:39 compute-0 quirky_pascal[102257]:                 "ceph.cluster_name": "ceph",
Dec 06 09:43:39 compute-0 quirky_pascal[102257]:                 "ceph.crush_device_class": "",
Dec 06 09:43:39 compute-0 quirky_pascal[102257]:                 "ceph.encrypted": "0",
Dec 06 09:43:39 compute-0 quirky_pascal[102257]:                 "ceph.osd_fsid": "7899c4d8-edb4-4836-b838-c4aa702ad7af",
Dec 06 09:43:39 compute-0 quirky_pascal[102257]:                 "ceph.osd_id": "1",
Dec 06 09:43:39 compute-0 quirky_pascal[102257]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 06 09:43:39 compute-0 quirky_pascal[102257]:                 "ceph.type": "block",
Dec 06 09:43:39 compute-0 quirky_pascal[102257]:                 "ceph.vdo": "0",
Dec 06 09:43:39 compute-0 quirky_pascal[102257]:                 "ceph.with_tpm": "0"
Dec 06 09:43:39 compute-0 quirky_pascal[102257]:             },
Dec 06 09:43:39 compute-0 quirky_pascal[102257]:             "type": "block",
Dec 06 09:43:39 compute-0 quirky_pascal[102257]:             "vg_name": "ceph_vg0"
Dec 06 09:43:39 compute-0 quirky_pascal[102257]:         }
Dec 06 09:43:39 compute-0 quirky_pascal[102257]:     ]
Dec 06 09:43:39 compute-0 quirky_pascal[102257]: }
Dec 06 09:43:39 compute-0 systemd[1]: libpod-f070e952adf6f63027ff768052b9e77b2fd2372b526d66f68f2cb998b47bbb2b.scope: Deactivated successfully.
Dec 06 09:43:39 compute-0 podman[102227]: 2025-12-06 09:43:39.852760237 +0000 UTC m=+0.477888708 container died f070e952adf6f63027ff768052b9e77b2fd2372b526d66f68f2cb998b47bbb2b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_pascal, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1)
Dec 06 09:43:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-71804d84db8b3a28b833fc5e4be69f8e7aebd794d7ab060f109a7a640fdf1c4a-merged.mount: Deactivated successfully.
Dec 06 09:43:39 compute-0 ceph-mon[74327]: 12.6 deep-scrub starts
Dec 06 09:43:39 compute-0 ceph-mon[74327]: 12.6 deep-scrub ok
Dec 06 09:43:39 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 09:43:39 compute-0 ceph-mon[74327]: 10.b scrub starts
Dec 06 09:43:39 compute-0 ceph-mon[74327]: 10.b scrub ok
Dec 06 09:43:39 compute-0 podman[102227]: 2025-12-06 09:43:39.902650647 +0000 UTC m=+0.527779118 container remove f070e952adf6f63027ff768052b9e77b2fd2372b526d66f68f2cb998b47bbb2b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_pascal, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 06 09:43:39 compute-0 systemd[1]: libpod-conmon-f070e952adf6f63027ff768052b9e77b2fd2372b526d66f68f2cb998b47bbb2b.scope: Deactivated successfully.
Dec 06 09:43:39 compute-0 sudo[101991]: pam_unix(sudo:session): session closed for user root
Dec 06 09:43:40 compute-0 sudo[102341]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 09:43:40 compute-0 sudo[102341]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:43:40 compute-0 sudo[102341]: pam_unix(sudo:session): session closed for user root
Dec 06 09:43:40 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:43:40 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9224001040 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:43:40 compute-0 sudo[102366]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 5ecd3f74-dade-5fc4-92ce-8950ae424258 -- raw list --format json
Dec 06 09:43:40 compute-0 sudo[102366]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:43:40 compute-0 sudo[102478]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-flwqvyilqwopirkmzyvhldlaaaugoofv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014219.855096-129-53346991085649/AnsiballZ_stat.py'
Dec 06 09:43:40 compute-0 sudo[102478]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:43:40 compute-0 podman[102507]: 2025-12-06 09:43:40.571068219 +0000 UTC m=+0.063907563 container create e7dc12dcc5e4a2d6714a09e01bec4d3eb0642b7b971c008570a8ff460f9e4156 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_ramanujan, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 06 09:43:40 compute-0 python3.9[102489]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 06 09:43:40 compute-0 systemd[1]: Started libpod-conmon-e7dc12dcc5e4a2d6714a09e01bec4d3eb0642b7b971c008570a8ff460f9e4156.scope.
Dec 06 09:43:40 compute-0 podman[102507]: 2025-12-06 09:43:40.540587501 +0000 UTC m=+0.033426905 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 06 09:43:40 compute-0 systemd[1]: Started libcrun container.
Dec 06 09:43:40 compute-0 sudo[102478]: pam_unix(sudo:session): session closed for user root
Dec 06 09:43:40 compute-0 podman[102507]: 2025-12-06 09:43:40.664717642 +0000 UTC m=+0.157557006 container init e7dc12dcc5e4a2d6714a09e01bec4d3eb0642b7b971c008570a8ff460f9e4156 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_ramanujan, ceph=True, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 09:43:40 compute-0 podman[102507]: 2025-12-06 09:43:40.673048762 +0000 UTC m=+0.165888096 container start e7dc12dcc5e4a2d6714a09e01bec4d3eb0642b7b971c008570a8ff460f9e4156 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_ramanujan, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Dec 06 09:43:40 compute-0 podman[102507]: 2025-12-06 09:43:40.676614545 +0000 UTC m=+0.169453919 container attach e7dc12dcc5e4a2d6714a09e01bec4d3eb0642b7b971c008570a8ff460f9e4156 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_ramanujan, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Dec 06 09:43:40 compute-0 systemd[1]: libpod-e7dc12dcc5e4a2d6714a09e01bec4d3eb0642b7b971c008570a8ff460f9e4156.scope: Deactivated successfully.
Dec 06 09:43:40 compute-0 epic_ramanujan[102526]: 167 167
Dec 06 09:43:40 compute-0 conmon[102526]: conmon e7dc12dcc5e4a2d6714a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e7dc12dcc5e4a2d6714a09e01bec4d3eb0642b7b971c008570a8ff460f9e4156.scope/container/memory.events
Dec 06 09:43:40 compute-0 podman[102507]: 2025-12-06 09:43:40.681082524 +0000 UTC m=+0.173921868 container died e7dc12dcc5e4a2d6714a09e01bec4d3eb0642b7b971c008570a8ff460f9e4156 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_ramanujan, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 09:43:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-33c835eb5d49df1f217685264f2d3ce3297695a0f3af6ef8b56ee8cef0429021-merged.mount: Deactivated successfully.
Dec 06 09:43:40 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:43:40 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9228003ee0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:43:40 compute-0 podman[102507]: 2025-12-06 09:43:40.729692306 +0000 UTC m=+0.222531670 container remove e7dc12dcc5e4a2d6714a09e01bec4d3eb0642b7b971c008570a8ff460f9e4156 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_ramanujan, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Dec 06 09:43:40 compute-0 ceph-osd[82803]: log_channel(cluster) log [DBG] : 6.2 deep-scrub starts
Dec 06 09:43:40 compute-0 systemd[1]: libpod-conmon-e7dc12dcc5e4a2d6714a09e01bec4d3eb0642b7b971c008570a8ff460f9e4156.scope: Deactivated successfully.
Dec 06 09:43:40 compute-0 ceph-osd[82803]: log_channel(cluster) log [DBG] : 6.2 deep-scrub ok
Dec 06 09:43:40 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:09:43:40] "GET /metrics HTTP/1.1" 200 48276 "" "Prometheus/2.51.0"
Dec 06 09:43:40 compute-0 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:09:43:40] "GET /metrics HTTP/1.1" 200 48276 "" "Prometheus/2.51.0"
Dec 06 09:43:40 compute-0 ceph-mon[74327]: 12.12 scrub starts
Dec 06 09:43:40 compute-0 ceph-mon[74327]: 12.12 scrub ok
Dec 06 09:43:40 compute-0 ceph-mon[74327]: pgmap v18: 337 pgs: 1 peering, 3 active+remapped, 333 active+clean; 458 KiB data, 143 MiB used, 60 GiB / 60 GiB avail; 34 B/s, 2 objects/s recovering
Dec 06 09:43:40 compute-0 ceph-mon[74327]: 6.2 deep-scrub starts
Dec 06 09:43:40 compute-0 ceph-mon[74327]: 6.2 deep-scrub ok
Dec 06 09:43:40 compute-0 podman[102573]: 2025-12-06 09:43:40.963387838 +0000 UTC m=+0.076635752 container create 81d202f129476f440855812e23f92d207c8fa03af17c9f1207ed45f81d7b6d53 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_shamir, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True)
Dec 06 09:43:41 compute-0 systemd[1]: Started libpod-conmon-81d202f129476f440855812e23f92d207c8fa03af17c9f1207ed45f81d7b6d53.scope.
Dec 06 09:43:41 compute-0 podman[102573]: 2025-12-06 09:43:40.929892312 +0000 UTC m=+0.043140306 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 06 09:43:41 compute-0 systemd[1]: Started libcrun container.
Dec 06 09:43:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/631b47ad139d32564852a5b24cff0bcd65a7f4a27e703775e9287a1c93dc79db/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 09:43:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/631b47ad139d32564852a5b24cff0bcd65a7f4a27e703775e9287a1c93dc79db/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 09:43:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/631b47ad139d32564852a5b24cff0bcd65a7f4a27e703775e9287a1c93dc79db/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 09:43:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/631b47ad139d32564852a5b24cff0bcd65a7f4a27e703775e9287a1c93dc79db/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 09:43:41 compute-0 podman[102573]: 2025-12-06 09:43:41.059238783 +0000 UTC m=+0.172486717 container init 81d202f129476f440855812e23f92d207c8fa03af17c9f1207ed45f81d7b6d53 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_shamir, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 06 09:43:41 compute-0 podman[102573]: 2025-12-06 09:43:41.067578024 +0000 UTC m=+0.180825938 container start 81d202f129476f440855812e23f92d207c8fa03af17c9f1207ed45f81d7b6d53 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_shamir, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 09:43:41 compute-0 podman[102573]: 2025-12-06 09:43:41.070809237 +0000 UTC m=+0.184057151 container attach 81d202f129476f440855812e23f92d207c8fa03af17c9f1207ed45f81d7b6d53 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_shamir, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 09:43:41 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:43:41 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000029s ======
Dec 06 09:43:41 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:43:41.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Dec 06 09:43:41 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:43:41 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9210003c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:43:41 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:43:41 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000029s ======
Dec 06 09:43:41 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:43:41.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Dec 06 09:43:41 compute-0 sudo[102763]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-deeuumxbpqrpyizjfilkergiqpfxufyj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014221.169495-162-266577993256566/AnsiballZ_file.py'
Dec 06 09:43:41 compute-0 sudo[102763]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:43:41 compute-0 ceph-osd[82803]: log_channel(cluster) log [DBG] : 6.7 scrub starts
Dec 06 09:43:41 compute-0 ceph-osd[82803]: log_channel(cluster) log [DBG] : 6.7 scrub ok
Dec 06 09:43:41 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v19: 337 pgs: 1 peering, 3 active+remapped, 333 active+clean; 458 KiB data, 143 MiB used, 60 GiB / 60 GiB avail; 25 B/s, 1 objects/s recovering
Dec 06 09:43:41 compute-0 python3.9[102771]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 06 09:43:41 compute-0 lvm[102794]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 06 09:43:41 compute-0 lvm[102794]: VG ceph_vg0 finished
Dec 06 09:43:41 compute-0 sudo[102763]: pam_unix(sudo:session): session closed for user root
Dec 06 09:43:41 compute-0 jovial_shamir[102591]: {}
Dec 06 09:43:41 compute-0 systemd[1]: libpod-81d202f129476f440855812e23f92d207c8fa03af17c9f1207ed45f81d7b6d53.scope: Deactivated successfully.
Dec 06 09:43:41 compute-0 systemd[1]: libpod-81d202f129476f440855812e23f92d207c8fa03af17c9f1207ed45f81d7b6d53.scope: Consumed 1.319s CPU time.
Dec 06 09:43:41 compute-0 podman[102573]: 2025-12-06 09:43:41.905346983 +0000 UTC m=+1.018594897 container died 81d202f129476f440855812e23f92d207c8fa03af17c9f1207ed45f81d7b6d53 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_shamir, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325)
Dec 06 09:43:42 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:43:42 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9218003c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:43:42 compute-0 ceph-mon[74327]: 12.1c scrub starts
Dec 06 09:43:42 compute-0 ceph-mon[74327]: 12.1c scrub ok
Dec 06 09:43:42 compute-0 ceph-mon[74327]: 11.e scrub starts
Dec 06 09:43:42 compute-0 ceph-mon[74327]: 11.e scrub ok
Dec 06 09:43:42 compute-0 ceph-mon[74327]: 6.7 scrub starts
Dec 06 09:43:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-631b47ad139d32564852a5b24cff0bcd65a7f4a27e703775e9287a1c93dc79db-merged.mount: Deactivated successfully.
Dec 06 09:43:42 compute-0 podman[102573]: 2025-12-06 09:43:42.321860068 +0000 UTC m=+1.435107982 container remove 81d202f129476f440855812e23f92d207c8fa03af17c9f1207ed45f81d7b6d53 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_shamir, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 09:43:42 compute-0 sudo[102960]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iqpseyennznhhgrgvjnpywmegffoxyvx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014222.040664-189-86303179047926/AnsiballZ_file.py'
Dec 06 09:43:42 compute-0 sudo[102960]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:43:42 compute-0 sudo[102366]: pam_unix(sudo:session): session closed for user root
Dec 06 09:43:42 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 06 09:43:42 compute-0 systemd[1]: libpod-conmon-81d202f129476f440855812e23f92d207c8fa03af17c9f1207ed45f81d7b6d53.scope: Deactivated successfully.
Dec 06 09:43:42 compute-0 ceph-osd[82803]: log_channel(cluster) log [DBG] : 6.a scrub starts
Dec 06 09:43:42 compute-0 python3.9[102962]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/config-data/ansible-generated recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 06 09:43:42 compute-0 ceph-osd[82803]: log_channel(cluster) log [DBG] : 6.a scrub ok
Dec 06 09:43:42 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:43:42 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9224001040 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:43:42 compute-0 sudo[102960]: pam_unix(sudo:session): session closed for user root
Dec 06 09:43:43 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:43:43 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 06 09:43:43 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:43:43 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000029s ======
Dec 06 09:43:43 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:43:43.102 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Dec 06 09:43:43 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:43:43 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9224001040 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:43:43 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e99 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 09:43:43 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:43:43 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:43:43 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:43:43.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:43:43 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:43:43 compute-0 ceph-osd[82803]: log_channel(cluster) log [DBG] : 10.6 scrub starts
Dec 06 09:43:43 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.prometheus}] v 0)
Dec 06 09:43:43 compute-0 ceph-osd[82803]: log_channel(cluster) log [DBG] : 10.6 scrub ok
Dec 06 09:43:43 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v20: 337 pgs: 337 active+clean; 458 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Dec 06 09:43:43 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"} v 0)
Dec 06 09:43:43 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"}]: dispatch
Dec 06 09:43:43 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"} v 0)
Dec 06 09:43:43 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]: dispatch
Dec 06 09:43:44 compute-0 ceph-mon[74327]: 10.12 scrub starts
Dec 06 09:43:44 compute-0 ceph-mon[74327]: 10.12 scrub ok
Dec 06 09:43:44 compute-0 ceph-mon[74327]: 6.7 scrub ok
Dec 06 09:43:44 compute-0 ceph-mon[74327]: pgmap v19: 337 pgs: 1 peering, 3 active+remapped, 333 active+clean; 458 KiB data, 143 MiB used, 60 GiB / 60 GiB avail; 25 B/s, 1 objects/s recovering
Dec 06 09:43:44 compute-0 ceph-mon[74327]: 11.3 scrub starts
Dec 06 09:43:44 compute-0 ceph-mon[74327]: 11.3 scrub ok
Dec 06 09:43:44 compute-0 ceph-mon[74327]: 10.d deep-scrub starts
Dec 06 09:43:44 compute-0 ceph-mon[74327]: 10.d deep-scrub ok
Dec 06 09:43:44 compute-0 ceph-mon[74327]: 6.a scrub starts
Dec 06 09:43:44 compute-0 ceph-mon[74327]: 6.a scrub ok
Dec 06 09:43:44 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:43:44 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:43:44 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9228003f00 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:43:44 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e99 do_prune osdmap full prune enabled
Dec 06 09:43:44 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:43:44 compute-0 sudo[103065]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 09:43:44 compute-0 sudo[103067]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 09:43:44 compute-0 sudo[103067]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:43:44 compute-0 sudo[103065]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:43:44 compute-0 sudo[103067]: pam_unix(sudo:session): session closed for user root
Dec 06 09:43:44 compute-0 sudo[103065]: pam_unix(sudo:session): session closed for user root
Dec 06 09:43:44 compute-0 ceph-osd[82803]: log_channel(cluster) log [DBG] : 10.19 scrub starts
Dec 06 09:43:44 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:43:44 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9218003c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:43:44 compute-0 ceph-osd[82803]: log_channel(cluster) log [DBG] : 10.19 scrub ok
Dec 06 09:43:44 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"}]': finished
Dec 06 09:43:44 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]': finished
Dec 06 09:43:44 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e100 e100: 3 total, 3 up, 3 in
Dec 06 09:43:44 compute-0 ceph-mgr[74618]: [cephadm INFO cephadm.serve] Reconfiguring mon.compute-0 (monmap changed)...
Dec 06 09:43:44 compute-0 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Reconfiguring mon.compute-0 (monmap changed)...
Dec 06 09:43:44 compute-0 python3.9[103164]: ansible-ansible.builtin.service_facts Invoked
Dec 06 09:43:44 compute-0 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e100: 3 total, 3 up, 3 in
Dec 06 09:43:44 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0)
Dec 06 09:43:44 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Dec 06 09:43:44 compute-0 network[103182]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec 06 09:43:44 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0)
Dec 06 09:43:44 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Dec 06 09:43:44 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 06 09:43:44 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 09:43:44 compute-0 ceph-mgr[74618]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.compute-0 on compute-0
Dec 06 09:43:44 compute-0 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.compute-0 on compute-0
Dec 06 09:43:44 compute-0 network[103183]: 'network-scripts' will be removed from distribution in near future.
Dec 06 09:43:44 compute-0 network[103184]: It is advised to switch to 'NetworkManager' instead for network management.
Dec 06 09:43:45 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:43:45 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000028s ======
Dec 06 09:43:45 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:43:45.106 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 06 09:43:45 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 100 pg[10.1d( empty local-lis/les=0/0 n=0 ec=58/45 lis/c=79/79 les/c/f=80/80/0 sis=100) [1] r=0 lpr=100 pi=[79,100)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 09:43:45 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 100 pg[10.d( empty local-lis/les=0/0 n=0 ec=58/45 lis/c=79/79 les/c/f=80/80/0 sis=100) [1] r=0 lpr=100 pi=[79,100)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 09:43:45 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 100 pg[6.e( empty local-lis/les=0/0 n=0 ec=54/21 lis/c=76/76 les/c/f=77/77/0 sis=100) [1] r=0 lpr=100 pi=[76,100)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 09:43:45 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:43:45 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9224001040 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:43:45 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:43:45 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:43:45 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:43:45.297 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:43:45 compute-0 ceph-mon[74327]: 6.1 scrub starts
Dec 06 09:43:45 compute-0 ceph-mon[74327]: 6.1 scrub ok
Dec 06 09:43:45 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:43:45 compute-0 ceph-mon[74327]: 10.8 deep-scrub starts
Dec 06 09:43:45 compute-0 ceph-mon[74327]: 10.8 deep-scrub ok
Dec 06 09:43:45 compute-0 ceph-mon[74327]: 10.6 scrub starts
Dec 06 09:43:45 compute-0 ceph-mon[74327]: 10.6 scrub ok
Dec 06 09:43:45 compute-0 ceph-mon[74327]: pgmap v20: 337 pgs: 337 active+clean; 458 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Dec 06 09:43:45 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"}]: dispatch
Dec 06 09:43:45 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]: dispatch
Dec 06 09:43:45 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:43:45 compute-0 ceph-mon[74327]: 10.19 scrub starts
Dec 06 09:43:45 compute-0 ceph-mon[74327]: 10.19 scrub ok
Dec 06 09:43:45 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"}]': finished
Dec 06 09:43:45 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]': finished
Dec 06 09:43:45 compute-0 ceph-mon[74327]: osdmap e100: 3 total, 3 up, 3 in
Dec 06 09:43:45 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Dec 06 09:43:45 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Dec 06 09:43:45 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 09:43:45 compute-0 sudo[103189]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 09:43:45 compute-0 ceph-osd[82803]: log_channel(cluster) log [DBG] : 10.c scrub starts
Dec 06 09:43:45 compute-0 sudo[103189]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:43:45 compute-0 sudo[103189]: pam_unix(sudo:session): session closed for user root
Dec 06 09:43:45 compute-0 ceph-osd[82803]: log_channel(cluster) log [DBG] : 10.c scrub ok
Dec 06 09:43:45 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v22: 337 pgs: 337 active+clean; 458 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Dec 06 09:43:45 compute-0 sudo[103218]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph:v19 --timeout 895 _orch deploy --fsid 5ecd3f74-dade-5fc4-92ce-8950ae424258
Dec 06 09:43:45 compute-0 sudo[103218]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:43:46 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e100 do_prune osdmap full prune enabled
Dec 06 09:43:46 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:43:46 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9248001ff0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:43:46 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"} v 0)
Dec 06 09:43:46 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]: dispatch
Dec 06 09:43:46 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"} v 0)
Dec 06 09:43:46 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]: dispatch
Dec 06 09:43:46 compute-0 podman[103275]: 2025-12-06 09:43:46.146158006 +0000 UTC m=+0.025059374 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 06 09:43:46 compute-0 ceph-osd[82803]: log_channel(cluster) log [DBG] : 10.1c scrub starts
Dec 06 09:43:46 compute-0 ceph-osd[82803]: log_channel(cluster) log [DBG] : 10.1c scrub ok
Dec 06 09:43:46 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-haproxy-nfs-cephfs-compute-0-fzuvue[96127]: [WARNING] 339/094346 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Dec 06 09:43:46 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:43:46 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9210003c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:43:46 compute-0 podman[103275]: 2025-12-06 09:43:46.913393549 +0000 UTC m=+0.792294937 container create 84f1608657b77805602d3b167cdc51ad12fa111ce5b1ce2563116e69b4317d14 (image=quay.io/ceph/ceph:v19, name=agitated_edison, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 09:43:47 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e101 e101: 3 total, 3 up, 3 in
Dec 06 09:43:47 compute-0 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e101: 3 total, 3 up, 3 in
Dec 06 09:43:47 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 101 pg[10.1d( empty local-lis/les=0/0 n=0 ec=58/45 lis/c=79/79 les/c/f=80/80/0 sis=101) [1]/[0] r=-1 lpr=101 pi=[79,101)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 06 09:43:47 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 101 pg[10.1d( empty local-lis/les=0/0 n=0 ec=58/45 lis/c=79/79 les/c/f=80/80/0 sis=101) [1]/[0] r=-1 lpr=101 pi=[79,101)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 06 09:43:47 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 101 pg[10.d( empty local-lis/les=0/0 n=0 ec=58/45 lis/c=79/79 les/c/f=80/80/0 sis=101) [1]/[0] r=-1 lpr=101 pi=[79,101)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 06 09:43:47 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 101 pg[10.d( empty local-lis/les=0/0 n=0 ec=58/45 lis/c=79/79 les/c/f=80/80/0 sis=101) [1]/[0] r=-1 lpr=101 pi=[79,101)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 06 09:43:47 compute-0 ceph-mon[74327]: 10.1 scrub starts
Dec 06 09:43:47 compute-0 ceph-mon[74327]: 10.1 scrub ok
Dec 06 09:43:47 compute-0 ceph-mon[74327]: 6.e scrub starts
Dec 06 09:43:47 compute-0 ceph-mon[74327]: 6.e scrub ok
Dec 06 09:43:47 compute-0 ceph-mon[74327]: Reconfiguring mon.compute-0 (monmap changed)...
Dec 06 09:43:47 compute-0 ceph-mon[74327]: Reconfiguring daemon mon.compute-0 on compute-0
Dec 06 09:43:47 compute-0 ceph-mon[74327]: 10.5 scrub starts
Dec 06 09:43:47 compute-0 ceph-mon[74327]: 10.5 scrub ok
Dec 06 09:43:47 compute-0 ceph-mon[74327]: 10.c scrub starts
Dec 06 09:43:47 compute-0 ceph-mon[74327]: 10.c scrub ok
Dec 06 09:43:47 compute-0 ceph-mon[74327]: pgmap v22: 337 pgs: 337 active+clean; 458 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Dec 06 09:43:47 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]: dispatch
Dec 06 09:43:47 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]: dispatch
Dec 06 09:43:47 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 101 pg[6.e( v 50'39 lc 48'19 (0'0,50'39] local-lis/les=100/101 n=1 ec=54/21 lis/c=76/76 les/c/f=77/77/0 sis=100) [1] r=0 lpr=100 pi=[76,100)/1 crt=50'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 09:43:47 compute-0 systemd[1]: Started libpod-conmon-84f1608657b77805602d3b167cdc51ad12fa111ce5b1ce2563116e69b4317d14.scope.
Dec 06 09:43:47 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:43:47 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000028s ======
Dec 06 09:43:47 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:43:47.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 06 09:43:47 compute-0 systemd[1]: Started libcrun container.
Dec 06 09:43:47 compute-0 podman[103275]: 2025-12-06 09:43:47.180162986 +0000 UTC m=+1.059064354 container init 84f1608657b77805602d3b167cdc51ad12fa111ce5b1ce2563116e69b4317d14 (image=quay.io/ceph/ceph:v19, name=agitated_edison, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid)
Dec 06 09:43:47 compute-0 podman[103275]: 2025-12-06 09:43:47.191985657 +0000 UTC m=+1.070886995 container start 84f1608657b77805602d3b167cdc51ad12fa111ce5b1ce2563116e69b4317d14 (image=quay.io/ceph/ceph:v19, name=agitated_edison, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 09:43:47 compute-0 agitated_edison[103342]: 167 167
Dec 06 09:43:47 compute-0 systemd[1]: libpod-84f1608657b77805602d3b167cdc51ad12fa111ce5b1ce2563116e69b4317d14.scope: Deactivated successfully.
Dec 06 09:43:47 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:43:47 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9218003c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:43:47 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:43:47 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:43:47 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:43:47.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:43:47 compute-0 ceph-osd[82803]: log_channel(cluster) log [DBG] : 10.1e scrub starts
Dec 06 09:43:47 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v24: 337 pgs: 2 remapped+peering, 335 active+clean; 457 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Dec 06 09:43:47 compute-0 ceph-osd[82803]: log_channel(cluster) log [DBG] : 10.1e scrub ok
Dec 06 09:43:48 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e101 do_prune osdmap full prune enabled
Dec 06 09:43:48 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:43:48 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9210003c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:43:48 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:43:48 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9224003120 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:43:48 compute-0 ceph-osd[82803]: log_channel(cluster) log [DBG] : 10.10 scrub starts
Dec 06 09:43:48 compute-0 podman[103275]: 2025-12-06 09:43:48.803057656 +0000 UTC m=+2.681959024 container attach 84f1608657b77805602d3b167cdc51ad12fa111ce5b1ce2563116e69b4317d14 (image=quay.io/ceph/ceph:v19, name=agitated_edison, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 09:43:48 compute-0 podman[103275]: 2025-12-06 09:43:48.804697664 +0000 UTC m=+2.683599052 container died 84f1608657b77805602d3b167cdc51ad12fa111ce5b1ce2563116e69b4317d14 (image=quay.io/ceph/ceph:v19, name=agitated_edison, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Dec 06 09:43:48 compute-0 ceph-osd[82803]: log_channel(cluster) log [DBG] : 10.10 scrub ok
Dec 06 09:43:49 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:43:49 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000028s ======
Dec 06 09:43:49 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:43:49.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 06 09:43:49 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:43:49 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9248002ee0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:43:49 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:43:49 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000028s ======
Dec 06 09:43:49 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:43:49.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 06 09:43:49 compute-0 ceph-osd[82803]: log_channel(cluster) log [DBG] : 10.9 scrub starts
Dec 06 09:43:49 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v25: 337 pgs: 2 remapped+peering, 335 active+clean; 457 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Dec 06 09:43:50 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:43:50 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9218003c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:43:50 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]': finished
Dec 06 09:43:50 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]': finished
Dec 06 09:43:50 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e102 e102: 3 total, 3 up, 3 in
Dec 06 09:43:50 compute-0 ceph-osd[82803]: log_channel(cluster) log [DBG] : 10.9 scrub ok
Dec 06 09:43:50 compute-0 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e102: 3 total, 3 up, 3 in
Dec 06 09:43:50 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:43:50 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9210003c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:43:50 compute-0 ceph-osd[82803]: log_channel(cluster) log [DBG] : 6.5 scrub starts
Dec 06 09:43:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-1db180d0f7ace9ce4bf8917b1ef307d85eb931e1c865b5599773133fc94b2561-merged.mount: Deactivated successfully.
Dec 06 09:43:50 compute-0 ceph-osd[82803]: log_channel(cluster) log [DBG] : 6.5 scrub ok
Dec 06 09:43:50 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:09:43:50] "GET /metrics HTTP/1.1" 200 48276 "" "Prometheus/2.51.0"
Dec 06 09:43:50 compute-0 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:09:43:50] "GET /metrics HTTP/1.1" 200 48276 "" "Prometheus/2.51.0"
Dec 06 09:43:50 compute-0 ceph-mon[74327]: 10.4 scrub starts
Dec 06 09:43:50 compute-0 ceph-mon[74327]: 10.4 scrub ok
Dec 06 09:43:50 compute-0 ceph-mon[74327]: 10.3 scrub starts
Dec 06 09:43:50 compute-0 ceph-mon[74327]: 10.3 scrub ok
Dec 06 09:43:50 compute-0 ceph-mon[74327]: 10.18 scrub starts
Dec 06 09:43:50 compute-0 ceph-mon[74327]: 10.18 scrub ok
Dec 06 09:43:50 compute-0 ceph-mon[74327]: 10.1c scrub starts
Dec 06 09:43:50 compute-0 ceph-mon[74327]: 10.1c scrub ok
Dec 06 09:43:50 compute-0 ceph-mon[74327]: osdmap e101: 3 total, 3 up, 3 in
Dec 06 09:43:50 compute-0 ceph-mon[74327]: 10.14 scrub starts
Dec 06 09:43:50 compute-0 ceph-mon[74327]: 10.14 scrub ok
Dec 06 09:43:50 compute-0 ceph-mon[74327]: 6.b scrub starts
Dec 06 09:43:50 compute-0 ceph-mon[74327]: 6.b scrub ok
Dec 06 09:43:50 compute-0 ceph-mon[74327]: 10.1e scrub starts
Dec 06 09:43:50 compute-0 ceph-mon[74327]: pgmap v24: 337 pgs: 2 remapped+peering, 335 active+clean; 457 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Dec 06 09:43:50 compute-0 ceph-mon[74327]: 10.1e scrub ok
Dec 06 09:43:50 compute-0 ceph-mon[74327]: 10.10 scrub starts
Dec 06 09:43:50 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 102 pg[6.f( v 50'39 (0'0,50'39] local-lis/les=64/65 n=3 ec=54/21 lis/c=64/64 les/c/f=65/65/0 sis=102 pruub=13.283234596s) [0] r=-1 lpr=102 pi=[64,102)/1 crt=50'39 mlcod 50'39 active pruub 266.538940430s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 06 09:43:50 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 102 pg[6.f( v 50'39 (0'0,50'39] local-lis/les=64/65 n=3 ec=54/21 lis/c=64/64 les/c/f=65/65/0 sis=102 pruub=13.283174515s) [0] r=-1 lpr=102 pi=[64,102)/1 crt=50'39 mlcod 0'0 unknown NOTIFY pruub 266.538940430s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 09:43:50 compute-0 podman[103275]: 2025-12-06 09:43:50.962720359 +0000 UTC m=+4.841621697 container remove 84f1608657b77805602d3b167cdc51ad12fa111ce5b1ce2563116e69b4317d14 (image=quay.io/ceph/ceph:v19, name=agitated_edison, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 09:43:50 compute-0 systemd[1]: libpod-conmon-84f1608657b77805602d3b167cdc51ad12fa111ce5b1ce2563116e69b4317d14.scope: Deactivated successfully.
Dec 06 09:43:51 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:43:51 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:43:51 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:43:51.115 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:43:51 compute-0 sudo[103218]: pam_unix(sudo:session): session closed for user root
Dec 06 09:43:51 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 06 09:43:51 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:43:51 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9224003120 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:43:51 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:43:51 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000029s ======
Dec 06 09:43:51 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:43:51.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Dec 06 09:43:51 compute-0 python3.9[103548]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:43:51 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:43:51 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 06 09:43:51 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:43:51 compute-0 ceph-mgr[74618]: [cephadm INFO cephadm.serve] Reconfiguring mgr.compute-0.qhdjwa (monmap changed)...
Dec 06 09:43:51 compute-0 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Reconfiguring mgr.compute-0.qhdjwa (monmap changed)...
Dec 06 09:43:51 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-0.qhdjwa", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0)
Dec 06 09:43:51 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.qhdjwa", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Dec 06 09:43:51 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services"} v 0)
Dec 06 09:43:51 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "mgr services"}]: dispatch
Dec 06 09:43:51 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 06 09:43:51 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 09:43:51 compute-0 ceph-mgr[74618]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.compute-0.qhdjwa on compute-0
Dec 06 09:43:51 compute-0 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.compute-0.qhdjwa on compute-0
Dec 06 09:43:51 compute-0 sudo[103597]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 09:43:51 compute-0 sudo[103597]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:43:51 compute-0 sudo[103597]: pam_unix(sudo:session): session closed for user root
Dec 06 09:43:51 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v27: 337 pgs: 2 remapped+peering, 335 active+clean; 457 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Dec 06 09:43:51 compute-0 sudo[103651]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph:v19 --timeout 895 _orch deploy --fsid 5ecd3f74-dade-5fc4-92ce-8950ae424258
Dec 06 09:43:51 compute-0 sudo[103651]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:43:52 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:43:52 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9248002ee0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:43:52 compute-0 podman[103767]: 2025-12-06 09:43:52.121015864 +0000 UTC m=+0.029713108 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec 06 09:43:52 compute-0 python3.9[103749]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 06 09:43:52 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e102 do_prune osdmap full prune enabled
Dec 06 09:43:52 compute-0 podman[103767]: 2025-12-06 09:43:52.609448225 +0000 UTC m=+0.518145399 container create 3c3c38c3d9c91f6af619398c9bd9d048f4f4b0b4156806b05f9c3f18730ad587 (image=quay.io/ceph/ceph:v19, name=intelligent_swartz, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 09:43:52 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:43:52 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9218003c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:43:52 compute-0 systemd[1]: Started libpod-conmon-3c3c38c3d9c91f6af619398c9bd9d048f4f4b0b4156806b05f9c3f18730ad587.scope.
Dec 06 09:43:53 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e103 e103: 3 total, 3 up, 3 in
Dec 06 09:43:53 compute-0 systemd[1]: Started libcrun container.
Dec 06 09:43:53 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:43:53 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000029s ======
Dec 06 09:43:53 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:43:53.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Dec 06 09:43:53 compute-0 ceph-mon[74327]: 10.15 scrub starts
Dec 06 09:43:53 compute-0 ceph-mon[74327]: 10.15 scrub ok
Dec 06 09:43:53 compute-0 ceph-mon[74327]: 10.10 scrub ok
Dec 06 09:43:53 compute-0 ceph-mon[74327]: 10.9 scrub starts
Dec 06 09:43:53 compute-0 ceph-mon[74327]: pgmap v25: 337 pgs: 2 remapped+peering, 335 active+clean; 457 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Dec 06 09:43:53 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]': finished
Dec 06 09:43:53 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]': finished
Dec 06 09:43:53 compute-0 ceph-mon[74327]: 10.9 scrub ok
Dec 06 09:43:53 compute-0 ceph-mon[74327]: osdmap e102: 3 total, 3 up, 3 in
Dec 06 09:43:53 compute-0 ceph-mon[74327]: 6.5 scrub starts
Dec 06 09:43:53 compute-0 ceph-mon[74327]: 6.5 scrub ok
Dec 06 09:43:53 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:43:53 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:43:53 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.qhdjwa", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Dec 06 09:43:53 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "mgr services"}]: dispatch
Dec 06 09:43:53 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 09:43:53 compute-0 podman[103767]: 2025-12-06 09:43:53.195884513 +0000 UTC m=+1.104581757 container init 3c3c38c3d9c91f6af619398c9bd9d048f4f4b0b4156806b05f9c3f18730ad587 (image=quay.io/ceph/ceph:v19, name=intelligent_swartz, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, OSD_FLAVOR=default)
Dec 06 09:43:53 compute-0 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e103: 3 total, 3 up, 3 in
Dec 06 09:43:53 compute-0 podman[103767]: 2025-12-06 09:43:53.211675109 +0000 UTC m=+1.120372293 container start 3c3c38c3d9c91f6af619398c9bd9d048f4f4b0b4156806b05f9c3f18730ad587 (image=quay.io/ceph/ceph:v19, name=intelligent_swartz, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Dec 06 09:43:53 compute-0 intelligent_swartz[103812]: 167 167
Dec 06 09:43:53 compute-0 systemd[1]: libpod-3c3c38c3d9c91f6af619398c9bd9d048f4f4b0b4156806b05f9c3f18730ad587.scope: Deactivated successfully.
Dec 06 09:43:53 compute-0 podman[103767]: 2025-12-06 09:43:53.218045963 +0000 UTC m=+1.126743137 container attach 3c3c38c3d9c91f6af619398c9bd9d048f4f4b0b4156806b05f9c3f18730ad587 (image=quay.io/ceph/ceph:v19, name=intelligent_swartz, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Dec 06 09:43:53 compute-0 podman[103767]: 2025-12-06 09:43:53.218838805 +0000 UTC m=+1.127535979 container died 3c3c38c3d9c91f6af619398c9bd9d048f4f4b0b4156806b05f9c3f18730ad587 (image=quay.io/ceph/ceph:v19, name=intelligent_swartz, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True)
Dec 06 09:43:53 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:43:53 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9210003c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:43:53 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:43:53 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.002000057s ======
Dec 06 09:43:53 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:43:53.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000057s
Dec 06 09:43:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-821a6ef3bb6356adb22830808dc4926ac091537afd201d4a06090c79b5ef9ed5-merged.mount: Deactivated successfully.
Dec 06 09:43:53 compute-0 podman[103767]: 2025-12-06 09:43:53.354565161 +0000 UTC m=+1.263262305 container remove 3c3c38c3d9c91f6af619398c9bd9d048f4f4b0b4156806b05f9c3f18730ad587 (image=quay.io/ceph/ceph:v19, name=intelligent_swartz, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 09:43:53 compute-0 systemd[1]: libpod-conmon-3c3c38c3d9c91f6af619398c9bd9d048f4f4b0b4156806b05f9c3f18730ad587.scope: Deactivated successfully.
Dec 06 09:43:53 compute-0 sudo[103651]: pam_unix(sudo:session): session closed for user root
Dec 06 09:43:53 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 06 09:43:53 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:43:53 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 06 09:43:53 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:43:53 compute-0 ceph-mgr[74618]: [cephadm INFO cephadm.serve] Reconfiguring crash.compute-0 (monmap changed)...
Dec 06 09:43:53 compute-0 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Reconfiguring crash.compute-0 (monmap changed)...
Dec 06 09:43:53 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0)
Dec 06 09:43:53 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Dec 06 09:43:53 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 06 09:43:53 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 09:43:53 compute-0 ceph-mgr[74618]: [cephadm INFO cephadm.serve] Reconfiguring daemon crash.compute-0 on compute-0
Dec 06 09:43:53 compute-0 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Reconfiguring daemon crash.compute-0 on compute-0
Dec 06 09:43:53 compute-0 sudo[103956]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 09:43:53 compute-0 sudo[103956]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:43:53 compute-0 sudo[103956]: pam_unix(sudo:session): session closed for user root
Dec 06 09:43:53 compute-0 sudo[103981]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 _orch deploy --fsid 5ecd3f74-dade-5fc4-92ce-8950ae424258
Dec 06 09:43:53 compute-0 sudo[103981]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:43:53 compute-0 python3.9[103955]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 06 09:43:53 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v29: 337 pgs: 1 active+recovering+remapped, 1 active+remapped, 1 peering, 334 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 5/223 objects misplaced (2.242%)
Dec 06 09:43:53 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e103 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 09:43:53 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e103 do_prune osdmap full prune enabled
Dec 06 09:43:53 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e104 e104: 3 total, 3 up, 3 in
Dec 06 09:43:53 compute-0 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e104: 3 total, 3 up, 3 in
Dec 06 09:43:53 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 104 pg[10.d( v 51'1027 (0'0,51'1027] local-lis/les=0/0 n=8 ec=58/45 lis/c=101/79 les/c/f=102/80/0 sis=104) [1] r=0 lpr=104 pi=[79,104)/1 luod=0'0 crt=51'1027 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 06 09:43:53 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 104 pg[10.d( v 51'1027 (0'0,51'1027] local-lis/les=0/0 n=8 ec=58/45 lis/c=101/79 les/c/f=102/80/0 sis=104) [1] r=0 lpr=104 pi=[79,104)/1 crt=51'1027 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 09:43:53 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 104 pg[10.1d( v 51'1027 (0'0,51'1027] local-lis/les=0/0 n=5 ec=58/45 lis/c=101/79 les/c/f=102/80/0 sis=104) [1] r=0 lpr=104 pi=[79,104)/1 luod=0'0 crt=51'1027 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 06 09:43:53 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 104 pg[10.1d( v 51'1027 (0'0,51'1027] local-lis/les=0/0 n=5 ec=58/45 lis/c=101/79 les/c/f=102/80/0 sis=104) [1] r=0 lpr=104 pi=[79,104)/1 crt=51'1027 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 09:43:53 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 06 09:43:53 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 09:43:53 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 09:43:53 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 09:43:53 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 09:43:53 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 09:43:53 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 09:43:53 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 09:43:53 compute-0 podman[104027]: 2025-12-06 09:43:53.98022347 +0000 UTC m=+0.056065768 container create 4f73ff80d7fb5c126a6c1ae99d8de387f18287205a9622db7875537920bba003 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_jackson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Dec 06 09:43:54 compute-0 systemd[1]: Started libpod-conmon-4f73ff80d7fb5c126a6c1ae99d8de387f18287205a9622db7875537920bba003.scope.
Dec 06 09:43:54 compute-0 ceph-mon[74327]: Reconfiguring mgr.compute-0.qhdjwa (monmap changed)...
Dec 06 09:43:54 compute-0 ceph-mon[74327]: Reconfiguring daemon mgr.compute-0.qhdjwa on compute-0
Dec 06 09:43:54 compute-0 ceph-mon[74327]: pgmap v27: 337 pgs: 2 remapped+peering, 335 active+clean; 457 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Dec 06 09:43:54 compute-0 ceph-mon[74327]: osdmap e103: 3 total, 3 up, 3 in
Dec 06 09:43:54 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:43:54 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:43:54 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Dec 06 09:43:54 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 09:43:54 compute-0 ceph-mon[74327]: osdmap e104: 3 total, 3 up, 3 in
Dec 06 09:43:54 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 09:43:54 compute-0 podman[104027]: 2025-12-06 09:43:53.951913624 +0000 UTC m=+0.027756012 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 06 09:43:54 compute-0 systemd[1]: Started libcrun container.
Dec 06 09:43:54 compute-0 podman[104027]: 2025-12-06 09:43:54.070027901 +0000 UTC m=+0.145870239 container init 4f73ff80d7fb5c126a6c1ae99d8de387f18287205a9622db7875537920bba003 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_jackson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 09:43:54 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:43:54 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9224003120 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:43:54 compute-0 podman[104027]: 2025-12-06 09:43:54.081509502 +0000 UTC m=+0.157351810 container start 4f73ff80d7fb5c126a6c1ae99d8de387f18287205a9622db7875537920bba003 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_jackson, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 09:43:54 compute-0 podman[104027]: 2025-12-06 09:43:54.085388165 +0000 UTC m=+0.161230513 container attach 4f73ff80d7fb5c126a6c1ae99d8de387f18287205a9622db7875537920bba003 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_jackson, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec 06 09:43:54 compute-0 upbeat_jackson[104044]: 167 167
Dec 06 09:43:54 compute-0 podman[104027]: 2025-12-06 09:43:54.087731082 +0000 UTC m=+0.163573380 container died 4f73ff80d7fb5c126a6c1ae99d8de387f18287205a9622db7875537920bba003 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_jackson, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 06 09:43:54 compute-0 systemd[1]: libpod-4f73ff80d7fb5c126a6c1ae99d8de387f18287205a9622db7875537920bba003.scope: Deactivated successfully.
Dec 06 09:43:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-d40f1315612256cf2011f7e2e3418ec77243a61d2b4f1bde08cea13ddfbff4ce-merged.mount: Deactivated successfully.
Dec 06 09:43:54 compute-0 podman[104027]: 2025-12-06 09:43:54.142266425 +0000 UTC m=+0.218108763 container remove 4f73ff80d7fb5c126a6c1ae99d8de387f18287205a9622db7875537920bba003 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_jackson, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec 06 09:43:54 compute-0 systemd[1]: libpod-conmon-4f73ff80d7fb5c126a6c1ae99d8de387f18287205a9622db7875537920bba003.scope: Deactivated successfully.
Dec 06 09:43:54 compute-0 sudo[103981]: pam_unix(sudo:session): session closed for user root
Dec 06 09:43:54 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 06 09:43:54 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:43:54 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 06 09:43:54 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:43:54 compute-0 ceph-mgr[74618]: [cephadm INFO cephadm.serve] Reconfiguring osd.1 (monmap changed)...
Dec 06 09:43:54 compute-0 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Reconfiguring osd.1 (monmap changed)...
Dec 06 09:43:54 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "osd.1"} v 0)
Dec 06 09:43:54 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
Dec 06 09:43:54 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 06 09:43:54 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 09:43:54 compute-0 ceph-mgr[74618]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.1 on compute-0
Dec 06 09:43:54 compute-0 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.1 on compute-0
Dec 06 09:43:54 compute-0 sudo[104084]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 09:43:54 compute-0 sudo[104084]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:43:54 compute-0 sudo[104084]: pam_unix(sudo:session): session closed for user root
Dec 06 09:43:54 compute-0 sudo[104109]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 _orch deploy --fsid 5ecd3f74-dade-5fc4-92ce-8950ae424258
Dec 06 09:43:54 compute-0 sudo[104109]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:43:54 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:43:54 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9248002ee0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:43:54 compute-0 podman[104226]: 2025-12-06 09:43:54.808443374 +0000 UTC m=+0.046853973 container create 0201ecc721b4a8b418f7d547b671038c8f6dd77ce21ebc07cb359b970ae88fe3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_antonelli, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 09:43:54 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e104 do_prune osdmap full prune enabled
Dec 06 09:43:54 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e105 e105: 3 total, 3 up, 3 in
Dec 06 09:43:54 compute-0 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e105: 3 total, 3 up, 3 in
Dec 06 09:43:54 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 105 pg[10.d( v 51'1027 (0'0,51'1027] local-lis/les=104/105 n=8 ec=58/45 lis/c=101/79 les/c/f=102/80/0 sis=104) [1] r=0 lpr=104 pi=[79,104)/1 crt=51'1027 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 09:43:54 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 105 pg[10.1d( v 51'1027 (0'0,51'1027] local-lis/les=104/105 n=5 ec=58/45 lis/c=101/79 les/c/f=102/80/0 sis=104) [1] r=0 lpr=104 pi=[79,104)/1 crt=51'1027 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 09:43:54 compute-0 systemd[1]: Started libpod-conmon-0201ecc721b4a8b418f7d547b671038c8f6dd77ce21ebc07cb359b970ae88fe3.scope.
Dec 06 09:43:54 compute-0 systemd[1]: Started libcrun container.
Dec 06 09:43:54 compute-0 podman[104226]: 2025-12-06 09:43:54.786537642 +0000 UTC m=+0.024948251 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 06 09:43:54 compute-0 podman[104226]: 2025-12-06 09:43:54.892780747 +0000 UTC m=+0.131191366 container init 0201ecc721b4a8b418f7d547b671038c8f6dd77ce21ebc07cb359b970ae88fe3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_antonelli, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1)
Dec 06 09:43:54 compute-0 podman[104226]: 2025-12-06 09:43:54.899174441 +0000 UTC m=+0.137585030 container start 0201ecc721b4a8b418f7d547b671038c8f6dd77ce21ebc07cb359b970ae88fe3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_antonelli, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Dec 06 09:43:54 compute-0 podman[104226]: 2025-12-06 09:43:54.902422985 +0000 UTC m=+0.140833574 container attach 0201ecc721b4a8b418f7d547b671038c8f6dd77ce21ebc07cb359b970ae88fe3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_antonelli, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 09:43:54 compute-0 stupefied_antonelli[104267]: 167 167
Dec 06 09:43:54 compute-0 systemd[1]: libpod-0201ecc721b4a8b418f7d547b671038c8f6dd77ce21ebc07cb359b970ae88fe3.scope: Deactivated successfully.
Dec 06 09:43:54 compute-0 podman[104226]: 2025-12-06 09:43:54.904834134 +0000 UTC m=+0.143244723 container died 0201ecc721b4a8b418f7d547b671038c8f6dd77ce21ebc07cb359b970ae88fe3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_antonelli, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True)
Dec 06 09:43:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-ab48666baafdb3d1f259bc11dbdc4bd1a7e4f9b5c8374efaaa3afa503246fbbe-merged.mount: Deactivated successfully.
Dec 06 09:43:54 compute-0 podman[104226]: 2025-12-06 09:43:54.947456504 +0000 UTC m=+0.185867093 container remove 0201ecc721b4a8b418f7d547b671038c8f6dd77ce21ebc07cb359b970ae88fe3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_antonelli, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec 06 09:43:54 compute-0 sudo[104305]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hgimqygtupbwrtswtadigbzwrvamqfit ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014234.5646703-333-83359031135156/AnsiballZ_setup.py'
Dec 06 09:43:54 compute-0 sudo[104305]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:43:54 compute-0 systemd[1]: libpod-conmon-0201ecc721b4a8b418f7d547b671038c8f6dd77ce21ebc07cb359b970ae88fe3.scope: Deactivated successfully.
Dec 06 09:43:55 compute-0 sudo[104109]: pam_unix(sudo:session): session closed for user root
Dec 06 09:43:55 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 06 09:43:55 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:43:55 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 06 09:43:55 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:43:55 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:43:55 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:43:55 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:43:55.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:43:55 compute-0 ceph-mgr[74618]: [cephadm INFO cephadm.serve] Reconfiguring alertmanager.compute-0 (dependencies changed)...
Dec 06 09:43:55 compute-0 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Reconfiguring alertmanager.compute-0 (dependencies changed)...
Dec 06 09:43:55 compute-0 ceph-mgr[74618]: [cephadm INFO cephadm.serve] Reconfiguring daemon alertmanager.compute-0 on compute-0
Dec 06 09:43:55 compute-0 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Reconfiguring daemon alertmanager.compute-0 on compute-0
Dec 06 09:43:55 compute-0 sudo[104320]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 09:43:55 compute-0 ceph-mon[74327]: Reconfiguring crash.compute-0 (monmap changed)...
Dec 06 09:43:55 compute-0 ceph-mon[74327]: Reconfiguring daemon crash.compute-0 on compute-0
Dec 06 09:43:55 compute-0 ceph-mon[74327]: pgmap v29: 337 pgs: 1 active+recovering+remapped, 1 active+remapped, 1 peering, 334 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 5/223 objects misplaced (2.242%)
Dec 06 09:43:55 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:43:55 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:43:55 compute-0 ceph-mon[74327]: Reconfiguring osd.1 (monmap changed)...
Dec 06 09:43:55 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
Dec 06 09:43:55 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 09:43:55 compute-0 ceph-mon[74327]: Reconfiguring daemon osd.1 on compute-0
Dec 06 09:43:55 compute-0 ceph-mon[74327]: osdmap e105: 3 total, 3 up, 3 in
Dec 06 09:43:55 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:43:55 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:43:55 compute-0 sudo[104320]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:43:55 compute-0 sudo[104320]: pam_unix(sudo:session): session closed for user root
Dec 06 09:43:55 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:43:55 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9248002ee0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:43:55 compute-0 sudo[104346]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/prometheus/alertmanager:v0.25.0 --timeout 895 _orch deploy --fsid 5ecd3f74-dade-5fc4-92ce-8950ae424258
Dec 06 09:43:55 compute-0 sudo[104346]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:43:55 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:43:55 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000028s ======
Dec 06 09:43:55 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:43:55.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 06 09:43:55 compute-0 python3.9[104311]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 06 09:43:55 compute-0 sudo[104305]: pam_unix(sudo:session): session closed for user root
Dec 06 09:43:55 compute-0 podman[104397]: 2025-12-06 09:43:55.602268235 +0000 UTC m=+0.040850139 volume create 29d79d266cc39bf95c8993cfd6612f2cad172b127cec8360714d431d53d0e93e
Dec 06 09:43:55 compute-0 podman[104397]: 2025-12-06 09:43:55.609559506 +0000 UTC m=+0.048141410 container create d711b54a98bbb979190c0d5a88441552d1b35abcf0fe32f6a0b9dc3e408f7d27 (image=quay.io/prometheus/alertmanager:v0.25.0, name=strange_saha, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 06 09:43:55 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:43:55 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 06 09:43:55 compute-0 systemd[1]: Started libpod-conmon-d711b54a98bbb979190c0d5a88441552d1b35abcf0fe32f6a0b9dc3e408f7d27.scope.
Dec 06 09:43:55 compute-0 podman[104397]: 2025-12-06 09:43:55.585077039 +0000 UTC m=+0.023658963 image pull c8568f914cd25b2062c44e9f79f9c18da6e3b85fe0c47a12a2191c61426c2b19 quay.io/prometheus/alertmanager:v0.25.0
Dec 06 09:43:55 compute-0 systemd[1]: Started libcrun container.
Dec 06 09:43:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fbe834a8e4dd3f0cca394b1bce23940316eea2237a86389cf744fd58ed9c7647/merged/alertmanager supports timestamps until 2038 (0x7fffffff)
Dec 06 09:43:55 compute-0 podman[104397]: 2025-12-06 09:43:55.719107556 +0000 UTC m=+0.157689500 container init d711b54a98bbb979190c0d5a88441552d1b35abcf0fe32f6a0b9dc3e408f7d27 (image=quay.io/prometheus/alertmanager:v0.25.0, name=strange_saha, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 06 09:43:55 compute-0 podman[104397]: 2025-12-06 09:43:55.72828835 +0000 UTC m=+0.166870284 container start d711b54a98bbb979190c0d5a88441552d1b35abcf0fe32f6a0b9dc3e408f7d27 (image=quay.io/prometheus/alertmanager:v0.25.0, name=strange_saha, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 06 09:43:55 compute-0 strange_saha[104412]: 65534 65534
Dec 06 09:43:55 compute-0 systemd[1]: libpod-d711b54a98bbb979190c0d5a88441552d1b35abcf0fe32f6a0b9dc3e408f7d27.scope: Deactivated successfully.
Dec 06 09:43:55 compute-0 podman[104397]: 2025-12-06 09:43:55.731971706 +0000 UTC m=+0.170553640 container attach d711b54a98bbb979190c0d5a88441552d1b35abcf0fe32f6a0b9dc3e408f7d27 (image=quay.io/prometheus/alertmanager:v0.25.0, name=strange_saha, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 06 09:43:55 compute-0 podman[104397]: 2025-12-06 09:43:55.733611184 +0000 UTC m=+0.172193158 container died d711b54a98bbb979190c0d5a88441552d1b35abcf0fe32f6a0b9dc3e408f7d27 (image=quay.io/prometheus/alertmanager:v0.25.0, name=strange_saha, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 06 09:43:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-fbe834a8e4dd3f0cca394b1bce23940316eea2237a86389cf744fd58ed9c7647-merged.mount: Deactivated successfully.
Dec 06 09:43:55 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v32: 337 pgs: 1 active+recovering+remapped, 1 active+remapped, 1 peering, 334 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 5/223 objects misplaced (2.242%)
Dec 06 09:43:55 compute-0 podman[104397]: 2025-12-06 09:43:55.796957132 +0000 UTC m=+0.235539046 container remove d711b54a98bbb979190c0d5a88441552d1b35abcf0fe32f6a0b9dc3e408f7d27 (image=quay.io/prometheus/alertmanager:v0.25.0, name=strange_saha, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 06 09:43:55 compute-0 systemd[1]: libpod-conmon-d711b54a98bbb979190c0d5a88441552d1b35abcf0fe32f6a0b9dc3e408f7d27.scope: Deactivated successfully.
Dec 06 09:43:55 compute-0 podman[104397]: 2025-12-06 09:43:55.801000068 +0000 UTC m=+0.239581982 volume remove 29d79d266cc39bf95c8993cfd6612f2cad172b127cec8360714d431d53d0e93e
Dec 06 09:43:55 compute-0 ceph-osd[82803]: log_channel(cluster) log [DBG] : 10.1d scrub starts
Dec 06 09:43:55 compute-0 podman[104429]: 2025-12-06 09:43:55.883324173 +0000 UTC m=+0.052868896 volume create 4fa083cffff371ad2291549d3b09dafbd3a482881401c5129fd56cea005ed736
Dec 06 09:43:55 compute-0 podman[104429]: 2025-12-06 09:43:55.890390847 +0000 UTC m=+0.059935570 container create c1b652bc13e26b407f02b5cad26004c17d521fcdc9eb13d00be5bc37bdb42050 (image=quay.io/prometheus/alertmanager:v0.25.0, name=elastic_nobel, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 06 09:43:55 compute-0 ceph-osd[82803]: log_channel(cluster) log [DBG] : 10.1d scrub ok
Dec 06 09:43:55 compute-0 systemd[1]: Started libpod-conmon-c1b652bc13e26b407f02b5cad26004c17d521fcdc9eb13d00be5bc37bdb42050.scope.
Dec 06 09:43:55 compute-0 podman[104429]: 2025-12-06 09:43:55.862865653 +0000 UTC m=+0.032410416 image pull c8568f914cd25b2062c44e9f79f9c18da6e3b85fe0c47a12a2191c61426c2b19 quay.io/prometheus/alertmanager:v0.25.0
Dec 06 09:43:55 compute-0 systemd[1]: Started libcrun container.
Dec 06 09:43:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e583021cfab7258ce36587c0be6f879760fcc5ed97c287aa838965959860b2b/merged/alertmanager supports timestamps until 2038 (0x7fffffff)
Dec 06 09:43:55 compute-0 podman[104429]: 2025-12-06 09:43:55.985515921 +0000 UTC m=+0.155060694 container init c1b652bc13e26b407f02b5cad26004c17d521fcdc9eb13d00be5bc37bdb42050 (image=quay.io/prometheus/alertmanager:v0.25.0, name=elastic_nobel, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 06 09:43:55 compute-0 podman[104429]: 2025-12-06 09:43:55.993758659 +0000 UTC m=+0.163303392 container start c1b652bc13e26b407f02b5cad26004c17d521fcdc9eb13d00be5bc37bdb42050 (image=quay.io/prometheus/alertmanager:v0.25.0, name=elastic_nobel, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 06 09:43:55 compute-0 elastic_nobel[104458]: 65534 65534
Dec 06 09:43:55 compute-0 systemd[1]: libpod-c1b652bc13e26b407f02b5cad26004c17d521fcdc9eb13d00be5bc37bdb42050.scope: Deactivated successfully.
Dec 06 09:43:55 compute-0 podman[104429]: 2025-12-06 09:43:55.997647151 +0000 UTC m=+0.167191884 container attach c1b652bc13e26b407f02b5cad26004c17d521fcdc9eb13d00be5bc37bdb42050 (image=quay.io/prometheus/alertmanager:v0.25.0, name=elastic_nobel, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 06 09:43:55 compute-0 podman[104429]: 2025-12-06 09:43:55.998278579 +0000 UTC m=+0.167823322 container died c1b652bc13e26b407f02b5cad26004c17d521fcdc9eb13d00be5bc37bdb42050 (image=quay.io/prometheus/alertmanager:v0.25.0, name=elastic_nobel, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 06 09:43:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-6e583021cfab7258ce36587c0be6f879760fcc5ed97c287aa838965959860b2b-merged.mount: Deactivated successfully.
Dec 06 09:43:56 compute-0 podman[104429]: 2025-12-06 09:43:56.034513854 +0000 UTC m=+0.204058577 container remove c1b652bc13e26b407f02b5cad26004c17d521fcdc9eb13d00be5bc37bdb42050 (image=quay.io/prometheus/alertmanager:v0.25.0, name=elastic_nobel, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 06 09:43:56 compute-0 podman[104429]: 2025-12-06 09:43:56.038174051 +0000 UTC m=+0.207718794 volume remove 4fa083cffff371ad2291549d3b09dafbd3a482881401c5129fd56cea005ed736
Dec 06 09:43:56 compute-0 systemd[1]: libpod-conmon-c1b652bc13e26b407f02b5cad26004c17d521fcdc9eb13d00be5bc37bdb42050.scope: Deactivated successfully.
Dec 06 09:43:56 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:43:56 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9210003c30 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:43:56 compute-0 systemd[1]: Stopping Ceph alertmanager.compute-0 for 5ecd3f74-dade-5fc4-92ce-8950ae424258...
Dec 06 09:43:56 compute-0 sudo[104550]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ggtncclbovddmnurmdouvxyajsdxkuhm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014234.5646703-333-83359031135156/AnsiballZ_dnf.py'
Dec 06 09:43:56 compute-0 sudo[104550]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:43:56 compute-0 ceph-mon[74327]: Reconfiguring alertmanager.compute-0 (dependencies changed)...
Dec 06 09:43:56 compute-0 ceph-mon[74327]: Reconfiguring daemon alertmanager.compute-0 on compute-0
Dec 06 09:43:56 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[97259]: ts=2025-12-06T09:43:56.321Z caller=main.go:583 level=info msg="Received SIGTERM, exiting gracefully..."
Dec 06 09:43:56 compute-0 podman[104565]: 2025-12-06 09:43:56.331840863 +0000 UTC m=+0.060479536 container died b475766d055cff0f70d7ce61dd24d5c1939b80e781c2c628ce05f8102b0c9b5b (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 06 09:43:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-bb2a73ca3b14a2c20beb30faadb6ace12cd5adb72f156644e5801ee5b84b2c3c-merged.mount: Deactivated successfully.
Dec 06 09:43:56 compute-0 podman[104565]: 2025-12-06 09:43:56.376394267 +0000 UTC m=+0.105032930 container remove b475766d055cff0f70d7ce61dd24d5c1939b80e781c2c628ce05f8102b0c9b5b (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 06 09:43:56 compute-0 podman[104565]: 2025-12-06 09:43:56.380201988 +0000 UTC m=+0.108840661 volume remove cc9140d1b399a34df664d17bf3d5da457ec5a14a1279788aa2852185673a3bfd
Dec 06 09:43:56 compute-0 bash[104565]: ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0
Dec 06 09:43:56 compute-0 python3.9[104553]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 06 09:43:56 compute-0 systemd[1]: ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258@alertmanager.compute-0.service: Deactivated successfully.
Dec 06 09:43:56 compute-0 systemd[1]: Stopped Ceph alertmanager.compute-0 for 5ecd3f74-dade-5fc4-92ce-8950ae424258.
Dec 06 09:43:56 compute-0 systemd[1]: ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258@alertmanager.compute-0.service: Consumed 1.420s CPU time.
Dec 06 09:43:56 compute-0 systemd[1]: Starting Ceph alertmanager.compute-0 for 5ecd3f74-dade-5fc4-92ce-8950ae424258...
Dec 06 09:43:56 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:43:56 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9224003120 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:43:56 compute-0 ceph-osd[82803]: log_channel(cluster) log [DBG] : 6.d scrub starts
Dec 06 09:43:56 compute-0 ceph-osd[82803]: log_channel(cluster) log [DBG] : 6.d scrub ok
Dec 06 09:43:56 compute-0 podman[104672]: 2025-12-06 09:43:56.893123725 +0000 UTC m=+0.068454877 volume create df96191fbc5e25dde6954322f5c80fec8b2a1ece9bff16e83ede1b379e193dc2
Dec 06 09:43:56 compute-0 podman[104672]: 2025-12-06 09:43:56.908776796 +0000 UTC m=+0.084107918 container create b0127b2874845862d1ff8231029cda7f8d9811cefe028a677c06060e923a3641 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 06 09:43:56 compute-0 podman[104672]: 2025-12-06 09:43:56.869898535 +0000 UTC m=+0.045229657 image pull c8568f914cd25b2062c44e9f79f9c18da6e3b85fe0c47a12a2191c61426c2b19 quay.io/prometheus/alertmanager:v0.25.0
Dec 06 09:43:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/534b49d6523b540f1172e3c7a1e9796019831d81e6906f4fcfaa0985e2a9f95c/merged/alertmanager supports timestamps until 2038 (0x7fffffff)
Dec 06 09:43:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/534b49d6523b540f1172e3c7a1e9796019831d81e6906f4fcfaa0985e2a9f95c/merged/etc/alertmanager supports timestamps until 2038 (0x7fffffff)
Dec 06 09:43:56 compute-0 podman[104672]: 2025-12-06 09:43:56.999201895 +0000 UTC m=+0.174533087 container init b0127b2874845862d1ff8231029cda7f8d9811cefe028a677c06060e923a3641 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 06 09:43:57 compute-0 podman[104672]: 2025-12-06 09:43:57.004098697 +0000 UTC m=+0.179429849 container start b0127b2874845862d1ff8231029cda7f8d9811cefe028a677c06060e923a3641 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 06 09:43:57 compute-0 bash[104672]: b0127b2874845862d1ff8231029cda7f8d9811cefe028a677c06060e923a3641
Dec 06 09:43:57 compute-0 systemd[1]: Started Ceph alertmanager.compute-0 for 5ecd3f74-dade-5fc4-92ce-8950ae424258.
Dec 06 09:43:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:43:57.036Z caller=main.go:240 level=info msg="Starting Alertmanager" version="(version=0.25.0, branch=HEAD, revision=258fab7cdd551f2cf251ed0348f0ad7289aee789)"
Dec 06 09:43:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:43:57.036Z caller=main.go:241 level=info build_context="(go=go1.19.4, user=root@abe866dd5717, date=20221222-14:51:36)"
Dec 06 09:43:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:43:57.048Z caller=cluster.go:185 level=info component=cluster msg="setting advertise address explicitly" addr=192.168.122.100 port=9094
Dec 06 09:43:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:43:57.050Z caller=cluster.go:681 level=info component=cluster msg="Waiting for gossip to settle..." interval=2s
Dec 06 09:43:57 compute-0 sudo[104346]: pam_unix(sudo:session): session closed for user root
Dec 06 09:43:57 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 06 09:43:57 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:43:57 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 06 09:43:57 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:43:57 compute-0 ceph-mgr[74618]: [cephadm INFO cephadm.serve] Reconfiguring grafana.compute-0 (dependencies changed)...
Dec 06 09:43:57 compute-0 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Reconfiguring grafana.compute-0 (dependencies changed)...
Dec 06 09:43:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:43:57.107Z caller=coordinator.go:113 level=info component=configuration msg="Loading configuration file" file=/etc/alertmanager/alertmanager.yml
Dec 06 09:43:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:43:57.107Z caller=coordinator.go:126 level=info component=configuration msg="Completed loading of configuration file" file=/etc/alertmanager/alertmanager.yml
Dec 06 09:43:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:43:57.113Z caller=tls_config.go:232 level=info msg="Listening on" address=192.168.122.100:9093
Dec 06 09:43:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:43:57.113Z caller=tls_config.go:235 level=info msg="TLS is disabled." http2=false address=192.168.122.100:9093
Dec 06 09:43:57 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:43:57 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:43:57 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:43:57.124 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:43:57 compute-0 ceph-mgr[74618]: [cephadm INFO cephadm.serve] Reconfiguring daemon grafana.compute-0 on compute-0
Dec 06 09:43:57 compute-0 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Reconfiguring daemon grafana.compute-0 on compute-0
Dec 06 09:43:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:43:57 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9248002ee0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:43:57 compute-0 ceph-mon[74327]: pgmap v32: 337 pgs: 1 active+recovering+remapped, 1 active+remapped, 1 peering, 334 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 5/223 objects misplaced (2.242%)
Dec 06 09:43:57 compute-0 ceph-mon[74327]: 10.1d scrub starts
Dec 06 09:43:57 compute-0 ceph-mon[74327]: 10.1d scrub ok
Dec 06 09:43:57 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:43:57 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:43:57 compute-0 sudo[104718]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 09:43:57 compute-0 sudo[104718]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:43:57 compute-0 sudo[104718]: pam_unix(sudo:session): session closed for user root
Dec 06 09:43:57 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:43:57 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000029s ======
Dec 06 09:43:57 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:43:57.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Dec 06 09:43:57 compute-0 sudo[104747]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/grafana:10.4.0 --timeout 895 _orch deploy --fsid 5ecd3f74-dade-5fc4-92ce-8950ae424258
Dec 06 09:43:57 compute-0 sudo[104747]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:43:57 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v33: 337 pgs: 337 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 06 09:43:57 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"} v 0)
Dec 06 09:43:57 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]: dispatch
Dec 06 09:43:57 compute-0 podman[104801]: 2025-12-06 09:43:57.824598886 +0000 UTC m=+0.062518494 container create e0d8d6a272641fc792df4175a8aa23979f39e19cd82f874c4c46cb7938eae55b (image=quay.io/ceph/grafana:10.4.0, name=magical_feynman, maintainer=Grafana Labs <hello@grafana.com>)
Dec 06 09:43:57 compute-0 ceph-osd[82803]: log_channel(cluster) log [DBG] : 10.7 scrub starts
Dec 06 09:43:57 compute-0 ceph-osd[82803]: log_channel(cluster) log [DBG] : 10.7 scrub ok
Dec 06 09:43:57 compute-0 systemd[1]: Started libpod-conmon-e0d8d6a272641fc792df4175a8aa23979f39e19cd82f874c4c46cb7938eae55b.scope.
Dec 06 09:43:57 compute-0 podman[104801]: 2025-12-06 09:43:57.796754613 +0000 UTC m=+0.034674271 image pull c8b91775d855b99270fc5d22f3c6737e8cca01ef4c25c8b0362295e0746fa39b quay.io/ceph/grafana:10.4.0
Dec 06 09:43:57 compute-0 systemd[1]: Started libcrun container.
Dec 06 09:43:57 compute-0 podman[104801]: 2025-12-06 09:43:57.941219591 +0000 UTC m=+0.179139289 container init e0d8d6a272641fc792df4175a8aa23979f39e19cd82f874c4c46cb7938eae55b (image=quay.io/ceph/grafana:10.4.0, name=magical_feynman, maintainer=Grafana Labs <hello@grafana.com>)
Dec 06 09:43:57 compute-0 podman[104801]: 2025-12-06 09:43:57.951155008 +0000 UTC m=+0.189074626 container start e0d8d6a272641fc792df4175a8aa23979f39e19cd82f874c4c46cb7938eae55b (image=quay.io/ceph/grafana:10.4.0, name=magical_feynman, maintainer=Grafana Labs <hello@grafana.com>)
Dec 06 09:43:57 compute-0 podman[104801]: 2025-12-06 09:43:57.954999649 +0000 UTC m=+0.192919257 container attach e0d8d6a272641fc792df4175a8aa23979f39e19cd82f874c4c46cb7938eae55b (image=quay.io/ceph/grafana:10.4.0, name=magical_feynman, maintainer=Grafana Labs <hello@grafana.com>)
Dec 06 09:43:57 compute-0 magical_feynman[104822]: 472 0
Dec 06 09:43:57 compute-0 systemd[1]: libpod-e0d8d6a272641fc792df4175a8aa23979f39e19cd82f874c4c46cb7938eae55b.scope: Deactivated successfully.
Dec 06 09:43:57 compute-0 podman[104801]: 2025-12-06 09:43:57.957657836 +0000 UTC m=+0.195577474 container died e0d8d6a272641fc792df4175a8aa23979f39e19cd82f874c4c46cb7938eae55b (image=quay.io/ceph/grafana:10.4.0, name=magical_feynman, maintainer=Grafana Labs <hello@grafana.com>)
Dec 06 09:43:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-53e3c1d3d02db252fde23f1b46ec1038f726088626234ad94d2002df85f270f0-merged.mount: Deactivated successfully.
Dec 06 09:43:57 compute-0 podman[104801]: 2025-12-06 09:43:57.994053396 +0000 UTC m=+0.231973024 container remove e0d8d6a272641fc792df4175a8aa23979f39e19cd82f874c4c46cb7938eae55b (image=quay.io/ceph/grafana:10.4.0, name=magical_feynman, maintainer=Grafana Labs <hello@grafana.com>)
Dec 06 09:43:58 compute-0 systemd[1]: libpod-conmon-e0d8d6a272641fc792df4175a8aa23979f39e19cd82f874c4c46cb7938eae55b.scope: Deactivated successfully.
Dec 06 09:43:58 compute-0 podman[104844]: 2025-12-06 09:43:58.081016594 +0000 UTC m=+0.058053366 container create 69cf27de02ef24ba2e3f9faff6add8ad82268b019eae0cc06db52e26abdecbe7 (image=quay.io/ceph/grafana:10.4.0, name=nostalgic_moore, maintainer=Grafana Labs <hello@grafana.com>)
Dec 06 09:43:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:43:58 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9248002ee0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:43:58 compute-0 systemd[1]: Started libpod-conmon-69cf27de02ef24ba2e3f9faff6add8ad82268b019eae0cc06db52e26abdecbe7.scope.
Dec 06 09:43:58 compute-0 systemd[1]: Started libcrun container.
Dec 06 09:43:58 compute-0 podman[104844]: 2025-12-06 09:43:58.054252972 +0000 UTC m=+0.031289794 image pull c8b91775d855b99270fc5d22f3c6737e8cca01ef4c25c8b0362295e0746fa39b quay.io/ceph/grafana:10.4.0
Dec 06 09:43:58 compute-0 podman[104844]: 2025-12-06 09:43:58.160548128 +0000 UTC m=+0.137584930 container init 69cf27de02ef24ba2e3f9faff6add8ad82268b019eae0cc06db52e26abdecbe7 (image=quay.io/ceph/grafana:10.4.0, name=nostalgic_moore, maintainer=Grafana Labs <hello@grafana.com>)
Dec 06 09:43:58 compute-0 podman[104844]: 2025-12-06 09:43:58.169212878 +0000 UTC m=+0.146249660 container start 69cf27de02ef24ba2e3f9faff6add8ad82268b019eae0cc06db52e26abdecbe7 (image=quay.io/ceph/grafana:10.4.0, name=nostalgic_moore, maintainer=Grafana Labs <hello@grafana.com>)
Dec 06 09:43:58 compute-0 nostalgic_moore[104863]: 472 0
Dec 06 09:43:58 compute-0 systemd[1]: libpod-69cf27de02ef24ba2e3f9faff6add8ad82268b019eae0cc06db52e26abdecbe7.scope: Deactivated successfully.
Dec 06 09:43:58 compute-0 podman[104844]: 2025-12-06 09:43:58.173494322 +0000 UTC m=+0.150531124 container attach 69cf27de02ef24ba2e3f9faff6add8ad82268b019eae0cc06db52e26abdecbe7 (image=quay.io/ceph/grafana:10.4.0, name=nostalgic_moore, maintainer=Grafana Labs <hello@grafana.com>)
Dec 06 09:43:58 compute-0 podman[104844]: 2025-12-06 09:43:58.174148271 +0000 UTC m=+0.151185073 container died 69cf27de02ef24ba2e3f9faff6add8ad82268b019eae0cc06db52e26abdecbe7 (image=quay.io/ceph/grafana:10.4.0, name=nostalgic_moore, maintainer=Grafana Labs <hello@grafana.com>)
Dec 06 09:43:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-40cd99596f7ba43ebf5b2afe178e68d864e1bf59405efc44c8e2b54688fb35f4-merged.mount: Deactivated successfully.
Dec 06 09:43:58 compute-0 podman[104844]: 2025-12-06 09:43:58.224156444 +0000 UTC m=+0.201193216 container remove 69cf27de02ef24ba2e3f9faff6add8ad82268b019eae0cc06db52e26abdecbe7 (image=quay.io/ceph/grafana:10.4.0, name=nostalgic_moore, maintainer=Grafana Labs <hello@grafana.com>)
Dec 06 09:43:58 compute-0 systemd[1]: libpod-conmon-69cf27de02ef24ba2e3f9faff6add8ad82268b019eae0cc06db52e26abdecbe7.scope: Deactivated successfully.
Dec 06 09:43:58 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e105 do_prune osdmap full prune enabled
Dec 06 09:43:58 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]': finished
Dec 06 09:43:58 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e106 e106: 3 total, 3 up, 3 in
Dec 06 09:43:58 compute-0 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e106: 3 total, 3 up, 3 in
Dec 06 09:43:58 compute-0 ceph-mon[74327]: 6.d scrub starts
Dec 06 09:43:58 compute-0 ceph-mon[74327]: 6.d scrub ok
Dec 06 09:43:58 compute-0 ceph-mon[74327]: Reconfiguring grafana.compute-0 (dependencies changed)...
Dec 06 09:43:58 compute-0 ceph-mon[74327]: Reconfiguring daemon grafana.compute-0 on compute-0
Dec 06 09:43:58 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]: dispatch
Dec 06 09:43:58 compute-0 systemd[1]: Stopping Ceph grafana.compute-0 for 5ecd3f74-dade-5fc4-92ce-8950ae424258...
Dec 06 09:43:58 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 106 pg[10.f( v 51'1027 (0'0,51'1027] local-lis/les=86/87 n=7 ec=58/45 lis/c=86/86 les/c/f=87/87/0 sis=106 pruub=13.671803474s) [2] r=-1 lpr=106 pi=[86,106)/1 crt=51'1027 mlcod 0'0 active pruub 274.329833984s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 06 09:43:58 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 106 pg[10.f( v 51'1027 (0'0,51'1027] local-lis/les=86/87 n=7 ec=58/45 lis/c=86/86 les/c/f=87/87/0 sis=106 pruub=13.671212196s) [2] r=-1 lpr=106 pi=[86,106)/1 crt=51'1027 mlcod 0'0 unknown NOTIFY pruub 274.329833984s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 09:43:58 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 106 pg[10.1f( v 51'1027 (0'0,51'1027] local-lis/les=85/86 n=5 ec=58/45 lis/c=85/85 les/c/f=86/86/0 sis=106 pruub=13.179901123s) [2] r=-1 lpr=106 pi=[85,106)/1 crt=51'1027 mlcod 0'0 active pruub 273.838836670s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 06 09:43:58 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 106 pg[10.1f( v 51'1027 (0'0,51'1027] local-lis/les=85/86 n=5 ec=58/45 lis/c=85/85 les/c/f=86/86/0 sis=106 pruub=13.179830551s) [2] r=-1 lpr=106 pi=[85,106)/1 crt=51'1027 mlcod 0'0 unknown NOTIFY pruub 273.838836670s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 09:43:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=server t=2025-12-06T09:43:58.516369113Z level=info msg="Shutdown started" reason="System signal: terminated"
Dec 06 09:43:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=ticker t=2025-12-06T09:43:58.516541678Z level=info msg=stopped last_tick=2025-12-06T09:43:50Z
Dec 06 09:43:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=tracing t=2025-12-06T09:43:58.516624821Z level=info msg="Closing tracing"
Dec 06 09:43:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[97786]: logger=grafana-apiserver t=2025-12-06T09:43:58.516792565Z level=info msg="StorageObjectCountTracker pruner is exiting"
Dec 06 09:43:58 compute-0 podman[104923]: 2025-12-06 09:43:58.53702808 +0000 UTC m=+0.058652263 container died cf4c3ab223ccab5449a54ab666c56f3b34eab35d7e3fb2f84c99b865ca2fcfb2 (image=quay.io/ceph/grafana:10.4.0, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Dec 06 09:43:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-62646ffda72f68277eee1ddb53fbcad0d452c3540e217585dbd2633e8332ac48-merged.mount: Deactivated successfully.
Dec 06 09:43:58 compute-0 podman[104923]: 2025-12-06 09:43:58.579525706 +0000 UTC m=+0.101149879 container remove cf4c3ab223ccab5449a54ab666c56f3b34eab35d7e3fb2f84c99b865ca2fcfb2 (image=quay.io/ceph/grafana:10.4.0, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Dec 06 09:43:58 compute-0 bash[104923]: ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0
Dec 06 09:43:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:43:58 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 06 09:43:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:43:58 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 06 09:43:58 compute-0 systemd[1]: ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258@grafana.compute-0.service: Deactivated successfully.
Dec 06 09:43:58 compute-0 systemd[1]: Stopped Ceph grafana.compute-0 for 5ecd3f74-dade-5fc4-92ce-8950ae424258.
Dec 06 09:43:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:43:58 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9210003c50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:43:58 compute-0 systemd[1]: ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258@grafana.compute-0.service: Consumed 4.949s CPU time.
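systemd's "Consumed 4.949s CPU time" line is the unit's CPU accounting reported at stop. The same figure can be read live, in nanoseconds, from the unit's CPUUsageNSec property; a minimal sketch, assuming the unit name exactly as systemd prints it above:

    import subprocess

    # Read systemd's CPU accounting for the cephadm-managed Grafana unit.
    unit = ("ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258"
            "@grafana.compute-0.service")
    out = subprocess.run(
        ["systemctl", "show", "-p", "CPUUsageNSec", unit],
        check=True, capture_output=True, text=True,
    ).stdout
    print(out.strip())  # e.g. CPUUsageNSec=4949000000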
Dec 06 09:43:58 compute-0 systemd[1]: Starting Ceph grafana.compute-0 for 5ecd3f74-dade-5fc4-92ce-8950ae424258...
Dec 06 09:43:58 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e106 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 09:43:58 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e106 do_prune osdmap full prune enabled
Dec 06 09:43:58 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e107 e107: 3 total, 3 up, 3 in
Dec 06 09:43:58 compute-0 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e107: 3 total, 3 up, 3 in
Dec 06 09:43:58 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 107 pg[10.1f( v 51'1027 (0'0,51'1027] local-lis/les=85/86 n=5 ec=58/45 lis/c=85/85 les/c/f=86/86/0 sis=107) [2]/[1] r=0 lpr=107 pi=[85,107)/1 crt=51'1027 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 06 09:43:58 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 107 pg[10.f( v 51'1027 (0'0,51'1027] local-lis/les=86/87 n=7 ec=58/45 lis/c=86/86 les/c/f=87/87/0 sis=107) [2]/[1] r=0 lpr=107 pi=[86,107)/1 crt=51'1027 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 06 09:43:58 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 107 pg[10.f( v 51'1027 (0'0,51'1027] local-lis/les=86/87 n=7 ec=58/45 lis/c=86/86 les/c/f=87/87/0 sis=107) [2]/[1] r=0 lpr=107 pi=[86,107)/1 crt=51'1027 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 06 09:43:58 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 107 pg[10.1f( v 51'1027 (0'0,51'1027] local-lis/les=85/86 n=5 ec=58/45 lis/c=85/85 les/c/f=86/86/0 sis=107) [2]/[1] r=0 lpr=107 pi=[85,107)/1 crt=51'1027 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 06 09:43:59 compute-0 podman[105031]: 2025-12-06 09:43:59.023603527 +0000 UTC m=+0.065762488 container create fc223e2a5fd06c66f839f6f48305e72a1403c44b345b53752763fbbf064c41b3 (image=quay.io/ceph/grafana:10.4.0, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Dec 06 09:43:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:43:59.051Z caller=cluster.go:706 level=info component=cluster msg="gossip not settled" polls=0 before=0 now=1 elapsed=2.000611495s
Dec 06 09:43:59 compute-0 podman[105031]: 2025-12-06 09:43:58.989273876 +0000 UTC m=+0.031432927 image pull c8b91775d855b99270fc5d22f3c6737e8cca01ef4c25c8b0362295e0746fa39b quay.io/ceph/grafana:10.4.0
Dec 06 09:43:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b5c6cd98788ce1db69298fdd871fee591f9145ae1f808ebeb8ae8a42a3e31ed/merged/etc/grafana/grafana.ini supports timestamps until 2038 (0x7fffffff)
Dec 06 09:43:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b5c6cd98788ce1db69298fdd871fee591f9145ae1f808ebeb8ae8a42a3e31ed/merged/etc/grafana/certs supports timestamps until 2038 (0x7fffffff)
Dec 06 09:43:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b5c6cd98788ce1db69298fdd871fee591f9145ae1f808ebeb8ae8a42a3e31ed/merged/etc/grafana/provisioning/datasources supports timestamps until 2038 (0x7fffffff)
Dec 06 09:43:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b5c6cd98788ce1db69298fdd871fee591f9145ae1f808ebeb8ae8a42a3e31ed/merged/etc/grafana/provisioning/dashboards supports timestamps until 2038 (0x7fffffff)
Dec 06 09:43:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b5c6cd98788ce1db69298fdd871fee591f9145ae1f808ebeb8ae8a42a3e31ed/merged/var/lib/grafana/grafana.db supports timestamps until 2038 (0x7fffffff)
Dec 06 09:43:59 compute-0 podman[105031]: 2025-12-06 09:43:59.101122563 +0000 UTC m=+0.143281544 container init fc223e2a5fd06c66f839f6f48305e72a1403c44b345b53752763fbbf064c41b3 (image=quay.io/ceph/grafana:10.4.0, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Dec 06 09:43:59 compute-0 podman[105031]: 2025-12-06 09:43:59.113712497 +0000 UTC m=+0.155871458 container start fc223e2a5fd06c66f839f6f48305e72a1403c44b345b53752763fbbf064c41b3 (image=quay.io/ceph/grafana:10.4.0, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Dec 06 09:43:59 compute-0 bash[105031]: fc223e2a5fd06c66f839f6f48305e72a1403c44b345b53752763fbbf064c41b3
Dec 06 09:43:59 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:43:59 compute-0 systemd[1]: Started Ceph grafana.compute-0 for 5ecd3f74-dade-5fc4-92ce-8950ae424258.
Dec 06 09:43:59 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:43:59 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:43:59.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
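The radosgw "beast" access lines above recur every couple of seconds: anonymous HEAD / probes from 192.168.122.100 and 192.168.122.102, always answered 200. The cadence and the anonymous user suggest external health checks (for example from a load balancer), though the log itself does not identify the sender. A sketch for splitting such a line into fields, with the layout inferred from these entries only:

    import re

    # Field layout inferred from the beast access lines in this log.
    BEAST = re.compile(
        r'beast: \S+: (?P<client>\S+) - (?P<user>\S+) '
        r'\[(?P<ts>[^\]]+)\] "(?P<req>[^"]+)" '
        r'(?P<status>\d+) (?P<bytes>\d+).*latency=(?P<latency>[\d.]+)s'
    )

    line = ('beast: 0x7f53e66225d0: 192.168.122.102 - anonymous '
            '[06/Dec/2025:09:43:59.126 +0000] "HEAD / HTTP/1.0" 200 0 '
            '- - - latency=0.000000000s')
    print(BEAST.search(line).groupdict())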
Dec 06 09:43:59 compute-0 sudo[104747]: pam_unix(sudo:session): session closed for user root
Dec 06 09:43:59 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 06 09:43:59 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:43:59 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 06 09:43:59 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:43:59 compute-0 ceph-mgr[74618]: [cephadm INFO cephadm.serve] Reconfiguring crash.compute-1 (monmap changed)...
Dec 06 09:43:59 compute-0 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Reconfiguring crash.compute-1 (monmap changed)...
Dec 06 09:43:59 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0)
Dec 06 09:43:59 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Dec 06 09:43:59 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 06 09:43:59 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 09:43:59 compute-0 ceph-mgr[74618]: [cephadm INFO cephadm.serve] Reconfiguring daemon crash.compute-1 on compute-1
Dec 06 09:43:59 compute-0 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Reconfiguring daemon crash.compute-1 on compute-1
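Each cephadm reconfigure above follows the same shape: the mgr fetches the daemon's keyring ("auth get-or-create"), renders a minimal ceph.conf ("config generate-minimal-conf"), and pushes both to the target host. The conf half can be reproduced by hand; a sketch assuming a working ceph CLI and admin keyring on the host:

    import subprocess

    # Same call the mgr dispatches above; prints the minimal ceph.conf
    # (fsid plus mon addresses) that cephadm distributes to each host.
    conf = subprocess.run(
        ["ceph", "config", "generate-minimal-conf"],
        check=True, capture_output=True, text=True,
    ).stdout
    print(conf)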
Dec 06 09:43:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:43:59 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9224003120 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:43:59 compute-0 ceph-mon[74327]: pgmap v33: 337 pgs: 337 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 06 09:43:59 compute-0 ceph-mon[74327]: 10.7 scrub starts
Dec 06 09:43:59 compute-0 ceph-mon[74327]: 10.7 scrub ok
Dec 06 09:43:59 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]': finished
Dec 06 09:43:59 compute-0 ceph-mon[74327]: osdmap e106: 3 total, 3 up, 3 in
Dec 06 09:43:59 compute-0 ceph-mon[74327]: osdmap e107: 3 total, 3 up, 3 in
Dec 06 09:43:59 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:43:59 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:43:59 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Dec 06 09:43:59 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 09:43:59 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:43:59 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:43:59 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:43:59.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:43:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[105048]: logger=settings t=2025-12-06T09:43:59.336511944Z level=info msg="Starting Grafana" version=10.4.0 commit=03f502a94d17f7dc4e6c34acdf8428aedd986e4c branch=HEAD compiled=2025-12-06T09:43:59Z
Dec 06 09:43:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[105048]: logger=settings t=2025-12-06T09:43:59.337129252Z level=info msg="Config loaded from" file=/usr/share/grafana/conf/defaults.ini
Dec 06 09:43:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[105048]: logger=settings t=2025-12-06T09:43:59.337158152Z level=info msg="Config loaded from" file=/etc/grafana/grafana.ini
Dec 06 09:43:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[105048]: logger=settings t=2025-12-06T09:43:59.337168253Z level=info msg="Config overridden from command line" arg="default.paths.data=/var/lib/grafana"
Dec 06 09:43:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[105048]: logger=settings t=2025-12-06T09:43:59.337176893Z level=info msg="Config overridden from command line" arg="default.paths.logs=/var/log/grafana"
Dec 06 09:43:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[105048]: logger=settings t=2025-12-06T09:43:59.337185533Z level=info msg="Config overridden from command line" arg="default.paths.plugins=/var/lib/grafana/plugins"
Dec 06 09:43:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[105048]: logger=settings t=2025-12-06T09:43:59.337193863Z level=info msg="Config overridden from command line" arg="default.paths.provisioning=/etc/grafana/provisioning"
Dec 06 09:43:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[105048]: logger=settings t=2025-12-06T09:43:59.337202414Z level=info msg="Config overridden from command line" arg="default.log.mode=console"
Dec 06 09:43:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[105048]: logger=settings t=2025-12-06T09:43:59.337211314Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_DATA=/var/lib/grafana"
Dec 06 09:43:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[105048]: logger=settings t=2025-12-06T09:43:59.337222294Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_LOGS=/var/log/grafana"
Dec 06 09:43:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[105048]: logger=settings t=2025-12-06T09:43:59.337231344Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PLUGINS=/var/lib/grafana/plugins"
Dec 06 09:43:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[105048]: logger=settings t=2025-12-06T09:43:59.337239715Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PROVISIONING=/etc/grafana/provisioning"
Dec 06 09:43:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[105048]: logger=settings t=2025-12-06T09:43:59.337250485Z level=info msg=Target target=[all]
Dec 06 09:43:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[105048]: logger=settings t=2025-12-06T09:43:59.337267965Z level=info msg="Path Home" path=/usr/share/grafana
Dec 06 09:43:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[105048]: logger=settings t=2025-12-06T09:43:59.337276326Z level=info msg="Path Data" path=/var/lib/grafana
Dec 06 09:43:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[105048]: logger=settings t=2025-12-06T09:43:59.337283996Z level=info msg="Path Logs" path=/var/log/grafana
Dec 06 09:43:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[105048]: logger=settings t=2025-12-06T09:43:59.337291616Z level=info msg="Path Plugins" path=/var/lib/grafana/plugins
Dec 06 09:43:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[105048]: logger=settings t=2025-12-06T09:43:59.337299796Z level=info msg="Path Provisioning" path=/etc/grafana/provisioning
Dec 06 09:43:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[105048]: logger=settings t=2025-12-06T09:43:59.337308487Z level=info msg="App mode production"
Dec 06 09:43:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[105048]: logger=sqlstore t=2025-12-06T09:43:59.337935666Z level=info msg="Connecting to DB" dbtype=sqlite3
Dec 06 09:43:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[105048]: logger=sqlstore t=2025-12-06T09:43:59.337979547Z level=warn msg="SQLite database file has broader permissions than it should" path=/var/lib/grafana/grafana.db mode=-rw-r--r-- expected=-rw-r-----
Dec 06 09:43:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[105048]: logger=migrator t=2025-12-06T09:43:59.339338635Z level=info msg="Starting DB migrations"
Dec 06 09:43:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[105048]: logger=migrator t=2025-12-06T09:43:59.374220432Z level=info msg="migrations completed" performed=0 skipped=547 duration=903.967µs
Dec 06 09:43:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[105048]: logger=sqlstore t=2025-12-06T09:43:59.375870079Z level=info msg="Created default organization"
Dec 06 09:43:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[105048]: logger=secrets t=2025-12-06T09:43:59.377284231Z level=info msg="Envelope encryption state" enabled=true currentprovider=secretKey.v1
Dec 06 09:43:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[105048]: logger=plugin.store t=2025-12-06T09:43:59.403575419Z level=info msg="Loading plugins..."
Dec 06 09:43:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[105048]: logger=local.finder t=2025-12-06T09:43:59.491227327Z level=warn msg="Skipping finding plugins as directory does not exist" path=/usr/share/grafana/plugins-bundled
Dec 06 09:43:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[105048]: logger=plugin.store t=2025-12-06T09:43:59.491295889Z level=info msg="Plugins loaded" count=55 duration=87.72101ms
Dec 06 09:43:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[105048]: logger=query_data t=2025-12-06T09:43:59.497184279Z level=info msg="Query Service initialization"
Dec 06 09:43:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[105048]: logger=live.push_http t=2025-12-06T09:43:59.502282636Z level=info msg="Live Push Gateway initialization"
Dec 06 09:43:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[105048]: logger=ngalert.migration t=2025-12-06T09:43:59.506386245Z level=info msg=Starting
Dec 06 09:43:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[105048]: logger=ngalert.state.manager t=2025-12-06T09:43:59.524453596Z level=info msg="Running in alternative execution of Error/NoData mode"
Dec 06 09:43:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[105048]: logger=infra.usagestats.collector t=2025-12-06T09:43:59.527835483Z level=info msg="registering usage stat providers" usageStatsProvidersLen=2
Dec 06 09:43:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[105048]: logger=provisioning.datasources t=2025-12-06T09:43:59.532945241Z level=info msg="inserting datasource from configuration" name=Dashboard1 uid=P43CA22E17D0F9596
Dec 06 09:43:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[105048]: logger=provisioning.alerting t=2025-12-06T09:43:59.562793142Z level=info msg="starting to provision alerting"
Dec 06 09:43:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[105048]: logger=provisioning.alerting t=2025-12-06T09:43:59.562825053Z level=info msg="finished to provision alerting"
Dec 06 09:43:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[105048]: logger=ngalert.state.manager t=2025-12-06T09:43:59.562949147Z level=info msg="Warming state cache for startup"
Dec 06 09:43:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[105048]: logger=ngalert.state.manager t=2025-12-06T09:43:59.563561284Z level=info msg="State cache has been initialized" states=0 duration=611.788µs
Dec 06 09:43:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[105048]: logger=ngalert.multiorg.alertmanager t=2025-12-06T09:43:59.563654017Z level=info msg="Starting MultiOrg Alertmanager"
Dec 06 09:43:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[105048]: logger=ngalert.scheduler t=2025-12-06T09:43:59.563698708Z level=info msg="Starting scheduler" tickInterval=10s maxAttempts=1
Dec 06 09:43:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[105048]: logger=ticker t=2025-12-06T09:43:59.56377418Z level=info msg=starting first_tick=2025-12-06T09:44:00Z
Dec 06 09:43:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[105048]: logger=grafanaStorageLogger t=2025-12-06T09:43:59.56688829Z level=info msg="Storage starting"
Dec 06 09:43:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[105048]: logger=http.server t=2025-12-06T09:43:59.571645517Z level=info msg="HTTP Server TLS settings" MinTLSVersion=TLS1.2 configuredciphers=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,TLS_RSA_WITH_AES_128_GCM_SHA256,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_256_CBC_SHA
Dec 06 09:43:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[105048]: logger=http.server t=2025-12-06T09:43:59.572370578Z level=info msg="HTTP Server Listen" address=192.168.122.100:3000 protocol=https subUrl= socket=
Dec 06 09:43:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[105048]: logger=provisioning.dashboard t=2025-12-06T09:43:59.613143035Z level=info msg="starting to provision dashboards"
Dec 06 09:43:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[105048]: logger=plugins.update.checker t=2025-12-06T09:43:59.625985605Z level=info msg="Update check succeeded" duration=62.040069ms
Dec 06 09:43:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[105048]: logger=provisioning.dashboard t=2025-12-06T09:43:59.646783395Z level=info msg="finished to provision dashboards"
Dec 06 09:43:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[105048]: logger=grafana.update.checker t=2025-12-06T09:43:59.66465165Z level=info msg="Update check succeeded" duration=101.083516ms
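One item in the Grafana start-up above is worth acting on: the sqlstore warning flags /var/lib/grafana/grafana.db as mode -rw-r--r-- where Grafana expects -rw-r-----. A minimal sketch of the fix, run as the file's owner (inside the container, or against the bind-mounted path on the host):

    import os
    import stat

    # Tighten grafana.db to the 0640 mode the sqlstore warning expects.
    db = "/var/lib/grafana/grafana.db"
    mode = stat.S_IMODE(os.stat(db).st_mode)
    if mode != 0o640:
        os.chmod(db, 0o640)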
Dec 06 09:43:59 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v36: 337 pgs: 337 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 5 B/s, 0 objects/s recovering
Dec 06 09:43:59 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"} v 0)
Dec 06 09:43:59 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]: dispatch
Dec 06 09:43:59 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e107 do_prune osdmap full prune enabled
Dec 06 09:43:59 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]': finished
Dec 06 09:43:59 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e108 e108: 3 total, 3 up, 3 in
Dec 06 09:43:59 compute-0 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e108: 3 total, 3 up, 3 in
Dec 06 09:43:59 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 108 pg[10.10( v 51'1027 (0'0,51'1027] local-lis/les=58/59 n=2 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=108 pruub=13.148490906s) [2] r=-1 lpr=108 pi=[58,108)/1 crt=51'1027 lcod 0'0 mlcod 0'0 active pruub 275.312438965s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 06 09:43:59 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 108 pg[10.10( v 51'1027 (0'0,51'1027] local-lis/les=58/59 n=2 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=108 pruub=13.148456573s) [2] r=-1 lpr=108 pi=[58,108)/1 crt=51'1027 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 275.312438965s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 09:43:59 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 108 pg[10.1f( v 51'1027 (0'0,51'1027] local-lis/les=107/108 n=5 ec=58/45 lis/c=85/85 les/c/f=86/86/0 sis=107) [2]/[1] async=[2] r=0 lpr=107 pi=[85,107)/1 crt=51'1027 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 09:43:59 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 108 pg[10.f( v 51'1027 (0'0,51'1027] local-lis/les=107/108 n=7 ec=58/45 lis/c=86/86 les/c/f=87/87/0 sis=107) [2]/[1] async=[2] r=0 lpr=107 pi=[86,107)/1 crt=51'1027 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 09:43:59 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec 06 09:44:00 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:44:00 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9248002ee0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:44:00 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[105048]: logger=grafana-apiserver t=2025-12-06T09:44:00.098588839Z level=info msg="Adding GroupVersion playlist.grafana.app v0alpha1 to ResourceManager"
Dec 06 09:44:00 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[105048]: logger=grafana-apiserver t=2025-12-06T09:44:00.099196286Z level=info msg="Adding GroupVersion featuretoggle.grafana.app v0alpha1 to ResourceManager"
Dec 06 09:44:00 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:44:00 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec 06 09:44:00 compute-0 ceph-mon[74327]: Reconfiguring crash.compute-1 (monmap changed)...
Dec 06 09:44:00 compute-0 ceph-mon[74327]: Reconfiguring daemon crash.compute-1 on compute-1
Dec 06 09:44:00 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]: dispatch
Dec 06 09:44:00 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]': finished
Dec 06 09:44:00 compute-0 ceph-mon[74327]: osdmap e108: 3 total, 3 up, 3 in
Dec 06 09:44:00 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:44:00 compute-0 ceph-mgr[74618]: [cephadm INFO cephadm.serve] Reconfiguring osd.0 (monmap changed)...
Dec 06 09:44:00 compute-0 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Reconfiguring osd.0 (monmap changed)...
Dec 06 09:44:00 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "osd.0"} v 0)
Dec 06 09:44:00 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
Dec 06 09:44:00 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 06 09:44:00 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 09:44:00 compute-0 ceph-mgr[74618]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.0 on compute-1
Dec 06 09:44:00 compute-0 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.0 on compute-1
Dec 06 09:44:00 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:44:00 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9248002ee0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:44:00 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e108 do_prune osdmap full prune enabled
Dec 06 09:44:00 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e109 e109: 3 total, 3 up, 3 in
Dec 06 09:44:00 compute-0 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e109: 3 total, 3 up, 3 in
Dec 06 09:44:00 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 109 pg[10.f( v 51'1027 (0'0,51'1027] local-lis/les=107/108 n=7 ec=58/45 lis/c=107/86 les/c/f=108/87/0 sis=109 pruub=14.998575211s) [2] async=[2] r=-1 lpr=109 pi=[86,109)/1 crt=51'1027 mlcod 51'1027 active pruub 278.167938232s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 06 09:44:00 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 109 pg[10.f( v 51'1027 (0'0,51'1027] local-lis/les=107/108 n=7 ec=58/45 lis/c=107/86 les/c/f=108/87/0 sis=109 pruub=14.997790337s) [2] r=-1 lpr=109 pi=[86,109)/1 crt=51'1027 mlcod 0'0 unknown NOTIFY pruub 278.167938232s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 09:44:00 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 109 pg[10.1f( v 51'1027 (0'0,51'1027] local-lis/les=107/108 n=5 ec=58/45 lis/c=107/85 les/c/f=108/86/0 sis=109 pruub=14.995147705s) [2] async=[2] r=-1 lpr=109 pi=[85,109)/1 crt=51'1027 mlcod 51'1027 active pruub 278.166046143s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 06 09:44:00 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 109 pg[10.1f( v 51'1027 (0'0,51'1027] local-lis/les=107/108 n=5 ec=58/45 lis/c=107/85 les/c/f=108/86/0 sis=109 pruub=14.995041847s) [2] r=-1 lpr=109 pi=[85,109)/1 crt=51'1027 mlcod 0'0 unknown NOTIFY pruub 278.166046143s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 09:44:00 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 109 pg[10.10( v 51'1027 (0'0,51'1027] local-lis/les=58/59 n=2 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=109) [2]/[1] r=0 lpr=109 pi=[58,109)/1 crt=51'1027 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 06 09:44:00 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 109 pg[10.10( v 51'1027 (0'0,51'1027] local-lis/les=58/59 n=2 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=109) [2]/[1] r=0 lpr=109 pi=[58,109)/1 crt=51'1027 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 06 09:44:00 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:09:44:00] "GET /metrics HTTP/1.1" 200 48270 "" "Prometheus/2.51.0"
Dec 06 09:44:00 compute-0 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:09:44:00] "GET /metrics HTTP/1.1" 200 48270 "" "Prometheus/2.51.0"
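The GET /metrics above is Prometheus scraping the mgr's prometheus module; the module's listener is shown later in this log restarting on port 9283. A sketch of the same scrape by hand, assuming the mgr answers on the 192.168.122.100 address seen elsewhere in these entries:

    import urllib.request

    # Fetch the mgr prometheus endpoint and print the health gauge.
    url = "http://192.168.122.100:9283/metrics"
    with urllib.request.urlopen(url, timeout=5) as resp:
        for raw in resp.read().decode().splitlines():
            if raw.startswith("ceph_health_status"):
                print(raw)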
Dec 06 09:44:01 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:44:01 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:44:01 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:44:01.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:44:01 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:44:01 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9210003c70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:44:01 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:44:01 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000029s ======
Dec 06 09:44:01 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:44:01.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Dec 06 09:44:01 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec 06 09:44:01 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:44:01 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec 06 09:44:01 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:44:01 compute-0 ceph-mgr[74618]: [cephadm INFO cephadm.serve] Reconfiguring mon.compute-1 (monmap changed)...
Dec 06 09:44:01 compute-0 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Reconfiguring mon.compute-1 (monmap changed)...
Dec 06 09:44:01 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0)
Dec 06 09:44:01 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Dec 06 09:44:01 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0)
Dec 06 09:44:01 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Dec 06 09:44:01 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 06 09:44:01 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 09:44:01 compute-0 ceph-mgr[74618]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.compute-1 on compute-1
Dec 06 09:44:01 compute-0 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.compute-1 on compute-1
Dec 06 09:44:01 compute-0 ceph-mon[74327]: pgmap v36: 337 pgs: 337 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 5 B/s, 0 objects/s recovering
Dec 06 09:44:01 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:44:01 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:44:01 compute-0 ceph-mon[74327]: Reconfiguring osd.0 (monmap changed)...
Dec 06 09:44:01 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
Dec 06 09:44:01 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 09:44:01 compute-0 ceph-mon[74327]: Reconfiguring daemon osd.0 on compute-1
Dec 06 09:44:01 compute-0 ceph-mon[74327]: osdmap e109: 3 total, 3 up, 3 in
Dec 06 09:44:01 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:44:01 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v39: 337 pgs: 337 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 06 09:44:01 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"} v 0)
Dec 06 09:44:01 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]: dispatch
Dec 06 09:44:01 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e109 do_prune osdmap full prune enabled
Dec 06 09:44:02 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]': finished
Dec 06 09:44:02 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e110 e110: 3 total, 3 up, 3 in
Dec 06 09:44:02 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:44:02 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Dec 06 09:44:02 compute-0 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e110: 3 total, 3 up, 3 in
Dec 06 09:44:02 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 110 pg[10.10( v 51'1027 (0'0,51'1027] local-lis/les=109/110 n=2 ec=58/45 lis/c=58/58 les/c/f=59/59/0 sis=109) [2]/[1] async=[2] r=0 lpr=109 pi=[58,109)/1 crt=51'1027 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 09:44:02 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:44:02 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9224003120 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:44:02 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec 06 09:44:02 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:44:02 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec 06 09:44:02 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:44:02 compute-0 ceph-mgr[74618]: [cephadm INFO cephadm.serve] Reconfiguring mon.compute-2 (monmap changed)...
Dec 06 09:44:02 compute-0 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Reconfiguring mon.compute-2 (monmap changed)...
Dec 06 09:44:02 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0)
Dec 06 09:44:02 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Dec 06 09:44:02 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0)
Dec 06 09:44:02 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Dec 06 09:44:02 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 06 09:44:02 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 09:44:02 compute-0 ceph-mgr[74618]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.compute-2 on compute-2
Dec 06 09:44:02 compute-0 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.compute-2 on compute-2
Dec 06 09:44:02 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:44:02 compute-0 ceph-mon[74327]: Reconfiguring mon.compute-1 (monmap changed)...
Dec 06 09:44:02 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Dec 06 09:44:02 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Dec 06 09:44:02 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 09:44:02 compute-0 ceph-mon[74327]: Reconfiguring daemon mon.compute-1 on compute-1
Dec 06 09:44:02 compute-0 ceph-mon[74327]: pgmap v39: 337 pgs: 337 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 06 09:44:02 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]: dispatch
Dec 06 09:44:02 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]': finished
Dec 06 09:44:02 compute-0 ceph-mon[74327]: osdmap e110: 3 total, 3 up, 3 in
Dec 06 09:44:02 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:44:02 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:44:02 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Dec 06 09:44:02 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Dec 06 09:44:02 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 09:44:02 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:44:02 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9248002ee0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:44:03 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:44:03 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000028s ======
Dec 06 09:44:03 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:44:03.132 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 06 09:44:03 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:44:03 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9218003dd0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:44:03 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:44:03 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:44:03 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:44:03.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:44:03 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e110 do_prune osdmap full prune enabled
Dec 06 09:44:03 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e111 e111: 3 total, 3 up, 3 in
Dec 06 09:44:03 compute-0 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e111: 3 total, 3 up, 3 in
Dec 06 09:44:03 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 111 pg[10.10( v 51'1027 (0'0,51'1027] local-lis/les=109/110 n=2 ec=58/45 lis/c=109/58 les/c/f=110/59/0 sis=111 pruub=14.630587578s) [2] async=[2] r=-1 lpr=111 pi=[58,111)/1 crt=51'1027 lcod 0'0 mlcod 0'0 active pruub 280.370971680s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 06 09:44:03 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 111 pg[10.10( v 51'1027 (0'0,51'1027] local-lis/les=109/110 n=2 ec=58/45 lis/c=109/58 les/c/f=110/59/0 sis=111 pruub=14.630526543s) [2] r=-1 lpr=111 pi=[58,111)/1 crt=51'1027 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 280.370971680s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 09:44:03 compute-0 ceph-mon[74327]: Reconfiguring mon.compute-2 (monmap changed)...
Dec 06 09:44:03 compute-0 ceph-mon[74327]: Reconfiguring daemon mon.compute-2 on compute-2
Dec 06 09:44:03 compute-0 ceph-mon[74327]: osdmap e111: 3 total, 3 up, 3 in
Dec 06 09:44:03 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec 06 09:44:03 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:44:03 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec 06 09:44:03 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:44:03 compute-0 ceph-mgr[74618]: [cephadm INFO cephadm.serve] Reconfiguring mgr.compute-2.oazbvn (monmap changed)...
Dec 06 09:44:03 compute-0 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Reconfiguring mgr.compute-2.oazbvn (monmap changed)...
Dec 06 09:44:03 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-2.oazbvn", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0)
Dec 06 09:44:03 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-2.oazbvn", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Dec 06 09:44:03 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services"} v 0)
Dec 06 09:44:03 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "mgr services"}]: dispatch
Dec 06 09:44:03 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 06 09:44:03 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 09:44:03 compute-0 ceph-mgr[74618]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.compute-2.oazbvn on compute-2
Dec 06 09:44:03 compute-0 ceph-mgr[74618]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.compute-2.oazbvn on compute-2
Dec 06 09:44:03 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v42: 337 pgs: 1 active+remapped, 336 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 0 objects/s recovering
Dec 06 09:44:03 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"} v 0)
Dec 06 09:44:03 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]: dispatch
Dec 06 09:44:03 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e111 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 09:44:04 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:44:04 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9218003dd0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:44:04 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec 06 09:44:04 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:44:04 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec 06 09:44:04 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:44:04 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "dashboard get-alertmanager-api-host"} v 0)
Dec 06 09:44:04 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch
Dec 06 09:44:04 compute-0 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch
Dec 06 09:44:04 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "dashboard get-grafana-api-url"} v 0)
Dec 06 09:44:04 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch
Dec 06 09:44:04 compute-0 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch
Dec 06 09:44:04 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "dashboard set-grafana-api-url", "value": "https://192.168.122.100:3000"} v 0)
Dec 06 09:44:04 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://192.168.122.100:3000"}]: dispatch
Dec 06 09:44:04 compute-0 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://192.168.122.100:3000"}]: dispatch
Dec 06 09:44:04 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/GRAFANA_API_URL}] v 0)
Dec 06 09:44:04 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
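Here cephadm wires the dashboard to the Grafana instance it just restarted: get-grafana-api-url, then set-grafana-api-url with the new address, stored under mgr/dashboard/GRAFANA_API_URL. The same update can be issued directly; the command and value below are exactly those in the audit entries above:

    import subprocess

    # Point the dashboard at the restarted Grafana, as the mgr does above.
    subprocess.run(
        ["ceph", "dashboard", "set-grafana-api-url",
         "https://192.168.122.100:3000"],
        check=True,
    )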
Dec 06 09:44:04 compute-0 ceph-mgr[74618]: [prometheus INFO root] Restarting engine...
Dec 06 09:44:04 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: [06/Dec/2025:09:44:04] ENGINE Bus STOPPING
Dec 06 09:44:04 compute-0 ceph-mgr[74618]: [prometheus INFO cherrypy.error] [06/Dec/2025:09:44:04] ENGINE Bus STOPPING
Dec 06 09:44:04 compute-0 sudo[105088]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 09:44:04 compute-0 sudo[105088]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:44:04 compute-0 sudo[105088]: pam_unix(sudo:session): session closed for user root
Dec 06 09:44:04 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e111 do_prune osdmap full prune enabled
Dec 06 09:44:04 compute-0 sudo[105113]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ls
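The sudo entry above is the cephadm serve loop taking inventory: it runs the copied cephadm binary with "ls", which prints a JSON array describing every daemon on the host. A sketch of the same call (root required; the "name"/"state" keys reflect typical cephadm ls output and are an assumption here):

    import json
    import subprocess

    # "cephadm ls" inventories the host's cephadm-managed daemons.
    out = subprocess.run(
        ["cephadm", "ls"],
        check=True, capture_output=True, text=True,
    ).stdout
    for daemon in json.loads(out):
        print(daemon.get("name"), daemon.get("state"))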
Dec 06 09:44:04 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]': finished
Dec 06 09:44:04 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e112 e112: 3 total, 3 up, 3 in
Dec 06 09:44:04 compute-0 sudo[105113]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:44:04 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: [06/Dec/2025:09:44:04] ENGINE HTTP Server cherrypy._cpwsgi_server.CPWSGIServer(('::', 9283)) shut down
Dec 06 09:44:04 compute-0 ceph-mgr[74618]: [prometheus INFO cherrypy.error] [06/Dec/2025:09:44:04] ENGINE HTTP Server cherrypy._cpwsgi_server.CPWSGIServer(('::', 9283)) shut down
Dec 06 09:44:04 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: [06/Dec/2025:09:44:04] ENGINE Bus STOPPED
Dec 06 09:44:04 compute-0 ceph-mgr[74618]: [prometheus INFO cherrypy.error] [06/Dec/2025:09:44:04] ENGINE Bus STOPPED
Dec 06 09:44:04 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: [06/Dec/2025:09:44:04] ENGINE Bus STARTING
Dec 06 09:44:04 compute-0 ceph-mgr[74618]: [prometheus INFO cherrypy.error] [06/Dec/2025:09:44:04] ENGINE Bus STARTING
Dec 06 09:44:04 compute-0 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e112: 3 total, 3 up, 3 in
Dec 06 09:44:04 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:44:04 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:44:04 compute-0 ceph-mon[74327]: Reconfiguring mgr.compute-2.oazbvn (monmap changed)...
Dec 06 09:44:04 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-2.oazbvn", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Dec 06 09:44:04 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "mgr services"}]: dispatch
Dec 06 09:44:04 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 09:44:04 compute-0 ceph-mon[74327]: Reconfiguring daemon mgr.compute-2.oazbvn on compute-2
Dec 06 09:44:04 compute-0 ceph-mon[74327]: pgmap v42: 337 pgs: 1 active+remapped, 336 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 0 objects/s recovering
Dec 06 09:44:04 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]: dispatch
Dec 06 09:44:04 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:44:04 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:44:04 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch
Dec 06 09:44:04 compute-0 ceph-mon[74327]: from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch
Dec 06 09:44:04 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch
Dec 06 09:44:04 compute-0 ceph-mon[74327]: from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch
Dec 06 09:44:04 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://192.168.122.100:3000"}]: dispatch
Dec 06 09:44:04 compute-0 ceph-mon[74327]: from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://192.168.122.100:3000"}]: dispatch
Dec 06 09:44:04 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:44:04 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]': finished
Dec 06 09:44:04 compute-0 ceph-mon[74327]: osdmap e112: 3 total, 3 up, 3 in
Dec 06 09:44:04 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: [06/Dec/2025:09:44:04] ENGINE Serving on http://:::9283
Dec 06 09:44:04 compute-0 ceph-mgr[74618]: [prometheus INFO cherrypy.error] [06/Dec/2025:09:44:04] ENGINE Serving on http://:::9283
Dec 06 09:44:04 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: [06/Dec/2025:09:44:04] ENGINE Bus STARTED
Dec 06 09:44:04 compute-0 ceph-mgr[74618]: [prometheus INFO cherrypy.error] [06/Dec/2025:09:44:04] ENGINE Bus STARTED
Dec 06 09:44:04 compute-0 ceph-mgr[74618]: [prometheus INFO root] Engine started.
Dec 06 09:44:04 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:44:04 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9224003120 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:44:04 compute-0 sudo[105173]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 09:44:04 compute-0 sudo[105173]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:44:04 compute-0 sudo[105173]: pam_unix(sudo:session): session closed for user root
Dec 06 09:44:05 compute-0 podman[105245]: 2025-12-06 09:44:05.044914046 +0000 UTC m=+0.086824816 container exec 484d6ed1039c50317cf4b6067525b7ed0f8de7c568c9445500e62194ab25d04d (image=quay.io/ceph/ceph:v19, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mon-compute-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 06 09:44:05 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:44:05 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000028s ======
Dec 06 09:44:05 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:44:05.135 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 06 09:44:05 compute-0 podman[105245]: 2025-12-06 09:44:05.147550096 +0000 UTC m=+0.189460856 container exec_died 484d6ed1039c50317cf4b6067525b7ed0f8de7c568c9445500e62194ab25d04d (image=quay.io/ceph/ceph:v19, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Dec 06 09:44:05 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:44:05 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9248002ee0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:44:05 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:44:05 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000029s ======
Dec 06 09:44:05 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:44:05.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
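
The paired requests from 192.168.122.102 and 192.168.122.100 are the ingress peers health-checking radosgw: an anonymous `HEAD / HTTP/1.0`, healthy on a 200. A sketch of the same probe, assuming radosgw listens on port 8080 (the port is not visible in this excerpt and may differ):

    import http.client

    def rgw_alive(host: str, port: int = 8080) -> bool:
        # Same anonymous "HEAD /" probe the load balancers send every ~2s above.
        conn = http.client.HTTPConnection(host, port, timeout=2)
        try:
            conn.request("HEAD", "/")
            return conn.getresponse().status == 200
        except OSError:
            return False
        finally:
            conn.close()

    print(rgw_alive("192.168.122.100"))
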
Dec 06 09:44:05 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e112 do_prune osdmap full prune enabled
Dec 06 09:44:05 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e113 e113: 3 total, 3 up, 3 in
Dec 06 09:44:05 compute-0 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e113: 3 total, 3 up, 3 in
Dec 06 09:44:05 compute-0 podman[105364]: 2025-12-06 09:44:05.783644508 +0000 UTC m=+0.073792101 container exec 43e1f8986e07f4e6b99d6750812eff4d21013fd9f773d9f6d6eef82549df3333 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 06 09:44:05 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v45: 337 pgs: 1 active+remapped, 336 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 0 objects/s recovering
Dec 06 09:44:05 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"} v 0)
Dec 06 09:44:05 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]: dispatch
Dec 06 09:44:05 compute-0 podman[105364]: 2025-12-06 09:44:05.795328855 +0000 UTC m=+0.085476508 container exec_died 43e1f8986e07f4e6b99d6750812eff4d21013fd9f773d9f6d6eef82549df3333 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 06 09:44:06 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:44:06 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9210003cb0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:44:06 compute-0 podman[105456]: 2025-12-06 09:44:06.271642446 +0000 UTC m=+0.075992073 container exec f137658eeed93d56ee9d8ac7b6445e7acce26a24ed156c5e4e3e69a13e4abbd2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 09:44:06 compute-0 podman[105456]: 2025-12-06 09:44:06.287940686 +0000 UTC m=+0.092290083 container exec_died f137658eeed93d56ee9d8ac7b6445e7acce26a24ed156c5e4e3e69a13e4abbd2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec 06 09:44:06 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e113 do_prune osdmap full prune enabled
Dec 06 09:44:06 compute-0 ceph-mon[74327]: osdmap e113: 3 total, 3 up, 3 in
Dec 06 09:44:06 compute-0 ceph-mon[74327]: pgmap v45: 337 pgs: 1 active+remapped, 336 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 0 objects/s recovering
Dec 06 09:44:06 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]: dispatch
Dec 06 09:44:06 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]': finished
Dec 06 09:44:06 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e114 e114: 3 total, 3 up, 3 in
Dec 06 09:44:06 compute-0 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e114: 3 total, 3 up, 3 in
Dec 06 09:44:06 compute-0 podman[105521]: 2025-12-06 09:44:06.596297002 +0000 UTC m=+0.070615429 container exec 0300cb0bc272de309f3d242ba0627369d0948f1b63b3476dccdba4375a8e539d (image=quay.io/ceph/haproxy:2.3, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-haproxy-nfs-cephfs-compute-0-fzuvue)
Dec 06 09:44:06 compute-0 podman[105521]: 2025-12-06 09:44:06.607762802 +0000 UTC m=+0.082081229 container exec_died 0300cb0bc272de309f3d242ba0627369d0948f1b63b3476dccdba4375a8e539d (image=quay.io/ceph/haproxy:2.3, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-haproxy-nfs-cephfs-compute-0-fzuvue)
Dec 06 09:44:06 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:44:06 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9210003cb0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:44:06 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-haproxy-nfs-cephfs-compute-0-fzuvue[96127]: [WARNING] 339/094406 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
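
haproxy marks backend nfs.cephfs.0 UP on a "Layer4 check" — a bare TCP connect that passes as soon as the handshake completes, with no protocol exchange. A sketch of such a check, assuming the NFS backend listens on 2049 (the port is not shown in this excerpt):

    import socket

    def layer4_check(host: str, port: int = 2049, timeout: float = 1.0) -> bool:
        # A Layer4 check succeeds if the TCP handshake completes within timeout.
        try:
            socket.create_connection((host, port), timeout=timeout).close()
            return True
        except OSError:
            return False

    print(layer4_check("192.168.122.100"))
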
Dec 06 09:44:06 compute-0 podman[105588]: 2025-12-06 09:44:06.903906945 +0000 UTC m=+0.070727011 container exec d7d5239f75d84aa9a07cad1cdfa31e3b4f3983263aaaa27687e6c7454ab8fe3f (image=quay.io/ceph/keepalived:2.2.4, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-keepalived-nfs-cephfs-compute-0-ylrrzf, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=keepalived, com.redhat.component=keepalived-container, io.k8s.display-name=Keepalived on RHEL 9, io.openshift.tags=Ceph keepalived, io.buildah.version=1.28.2, summary=Provides keepalived on RHEL 9 for Ceph., description=keepalived for Ceph, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vendor=Red Hat, Inc., release=1793, version=2.2.4, io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, architecture=x86_64, build-date=2023-02-22T09:23:20)
Dec 06 09:44:06 compute-0 podman[105588]: 2025-12-06 09:44:06.924075458 +0000 UTC m=+0.090895524 container exec_died d7d5239f75d84aa9a07cad1cdfa31e3b4f3983263aaaa27687e6c7454ab8fe3f (image=quay.io/ceph/keepalived:2.2.4, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-keepalived-nfs-cephfs-compute-0-ylrrzf, io.buildah.version=1.28.2, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Keepalived on RHEL 9, io.openshift.tags=Ceph keepalived, distribution-scope=public, summary=Provides keepalived on RHEL 9 for Ceph., vcs-type=git, name=keepalived, com.redhat.component=keepalived-container, version=2.2.4, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vendor=Red Hat, Inc., description=keepalived for Ceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, architecture=x86_64, build-date=2023-02-22T09:23:20, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1793)
Dec 06 09:44:07 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:44:07.054Z caller=cluster.go:698 level=info component=cluster msg="gossip settled; proceeding" elapsed=10.003790559s
Dec 06 09:44:07 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:44:07 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000029s ======
Dec 06 09:44:07 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:44:07.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Dec 06 09:44:07 compute-0 podman[105655]: 2025-12-06 09:44:07.232259929 +0000 UTC m=+0.079415623 container exec b0127b2874845862d1ff8231029cda7f8d9811cefe028a677c06060e923a3641 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 06 09:44:07 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:44:07 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9224003120 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:44:07 compute-0 podman[105655]: 2025-12-06 09:44:07.273425635 +0000 UTC m=+0.120581319 container exec_died b0127b2874845862d1ff8231029cda7f8d9811cefe028a677c06060e923a3641 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 06 09:44:07 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:44:07 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:44:07 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:44:07.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:44:07 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e114 do_prune osdmap full prune enabled
Dec 06 09:44:07 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]': finished
Dec 06 09:44:07 compute-0 ceph-mon[74327]: osdmap e114: 3 total, 3 up, 3 in
Dec 06 09:44:07 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e115 e115: 3 total, 3 up, 3 in
Dec 06 09:44:07 compute-0 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e115: 3 total, 3 up, 3 in
Dec 06 09:44:07 compute-0 podman[105727]: 2025-12-06 09:44:07.545652029 +0000 UTC m=+0.074620373 container exec fc223e2a5fd06c66f839f6f48305e72a1403c44b345b53752763fbbf064c41b3 (image=quay.io/ceph/grafana:10.4.0, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Dec 06 09:44:07 compute-0 podman[105727]: 2025-12-06 09:44:07.760849977 +0000 UTC m=+0.289818301 container exec_died fc223e2a5fd06c66f839f6f48305e72a1403c44b345b53752763fbbf064c41b3 (image=quay.io/ceph/grafana:10.4.0, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Dec 06 09:44:07 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v48: 337 pgs: 1 remapped+peering, 1 peering, 335 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s; 27 B/s, 0 objects/s recovering
Dec 06 09:44:08 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:44:08 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f92480049e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:44:08 compute-0 podman[105839]: 2025-12-06 09:44:08.335613058 +0000 UTC m=+0.101497169 container exec cfe4d69091434e5154fa760292bba767b8875965fa71cf21268b9ec1632f0d9e (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 06 09:44:08 compute-0 podman[105839]: 2025-12-06 09:44:08.384666574 +0000 UTC m=+0.150550685 container exec_died cfe4d69091434e5154fa760292bba767b8875965fa71cf21268b9ec1632f0d9e (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 06 09:44:08 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e115 do_prune osdmap full prune enabled
Dec 06 09:44:08 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e116 e116: 3 total, 3 up, 3 in
Dec 06 09:44:08 compute-0 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e116: 3 total, 3 up, 3 in
Dec 06 09:44:08 compute-0 sudo[105113]: pam_unix(sudo:session): session closed for user root
Dec 06 09:44:08 compute-0 ceph-mon[74327]: osdmap e115: 3 total, 3 up, 3 in
Dec 06 09:44:08 compute-0 ceph-mon[74327]: pgmap v48: 337 pgs: 1 remapped+peering, 1 peering, 335 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s; 27 B/s, 0 objects/s recovering
Dec 06 09:44:08 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 06 09:44:08 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:44:08 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 06 09:44:08 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:44:08 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 06 09:44:08 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
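
The recurring `config generate-minimal-conf` dispatches are cephadm fetching the minimal client ceph.conf (fsid and mon_host) that it distributes to managed hosts. A sketch of the same fetch, assuming admin credentials on the node:

    import subprocess

    # "ceph config generate-minimal-conf" prints a minimal client ceph.conf
    # (fsid and mon_host) that cephadm writes under /etc/ceph on each host.
    minimal_conf = subprocess.run(
        ["ceph", "config", "generate-minimal-conf"],
        check=True, capture_output=True, text=True).stdout
    print(minimal_conf)
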
Dec 06 09:44:08 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 06 09:44:08 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 09:44:08 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 06 09:44:08 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:44:08 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f921c001230 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:44:08 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:44:08 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 09:44:08 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Dec 06 09:44:08 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 06 09:44:08 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 09:44:09 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:44:09 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 06 09:44:09 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 09:44:09 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 06 09:44:09 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 09:44:09 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 06 09:44:09 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 09:44:09 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:44:09 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:44:09 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:44:09.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:44:09 compute-0 sudo[105885]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 09:44:09 compute-0 sudo[105885]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:44:09 compute-0 sudo[105885]: pam_unix(sudo:session): session closed for user root
Dec 06 09:44:09 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:44:09 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9210003cb0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:44:09 compute-0 sudo[105910]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 5ecd3f74-dade-5fc4-92ce-8950ae424258 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Dec 06 09:44:09 compute-0 sudo[105910]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
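
The sudo line above is the mgr re-invoking its checksummed cephadm copy under root to run `ceph-volume lvm batch` inside a container, with the cluster config and keyring piped in as JSON on stdin (`--config-json -`). The argv rebuilt in Python for readability, every value copied verbatim from that log line; printed rather than executed, since the real call needs the mgr-supplied config JSON:

    import shlex

    FSID = "5ecd3f74-dade-5fc4-92ce-8950ae424258"
    CEPHADM = ("/var/lib/ceph/" + FSID + "/cephadm."
               "1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36")
    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec")

    argv = ["sudo", "/bin/python3", CEPHADM,
            "--env", "CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group",
            "--image", IMAGE, "--timeout", "895",
            "ceph-volume", "--fsid", FSID, "--config-json", "-", "--",
            "lvm", "batch", "--no-auto", "/dev/ceph_vg0/ceph_lv0",
            "--yes", "--no-systemd"]
    print(shlex.join(argv))   # the mgr pipes the config JSON into stdin here
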
Dec 06 09:44:09 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:44:09 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:44:09 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:44:09.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:44:09 compute-0 ceph-mon[74327]: osdmap e116: 3 total, 3 up, 3 in
Dec 06 09:44:09 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:44:09 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:44:09 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 09:44:09 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 09:44:09 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:44:09 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 09:44:09 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:44:09 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 09:44:09 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 09:44:09 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 09:44:09 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v50: 337 pgs: 1 remapped+peering, 1 peering, 335 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 234 B/s rd, 0 B/s wr, 0 op/s; 25 B/s, 0 objects/s recovering
Dec 06 09:44:09 compute-0 podman[105978]: 2025-12-06 09:44:09.836646232 +0000 UTC m=+0.065631064 container create 79357c9d0739325bd8f62eb651d6bb3763e46f98a7fe0387bce2e8d8a28aba7e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_maxwell, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 09:44:09 compute-0 systemd[1]: Started libpod-conmon-79357c9d0739325bd8f62eb651d6bb3763e46f98a7fe0387bce2e8d8a28aba7e.scope.
Dec 06 09:44:09 compute-0 podman[105978]: 2025-12-06 09:44:09.816196622 +0000 UTC m=+0.045181484 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 06 09:44:09 compute-0 systemd[1]: Started libcrun container.
Dec 06 09:44:09 compute-0 podman[105978]: 2025-12-06 09:44:09.939011425 +0000 UTC m=+0.167996267 container init 79357c9d0739325bd8f62eb651d6bb3763e46f98a7fe0387bce2e8d8a28aba7e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_maxwell, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, ceph=True)
Dec 06 09:44:09 compute-0 podman[105978]: 2025-12-06 09:44:09.946365907 +0000 UTC m=+0.175350739 container start 79357c9d0739325bd8f62eb651d6bb3763e46f98a7fe0387bce2e8d8a28aba7e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_maxwell, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 09:44:09 compute-0 podman[105978]: 2025-12-06 09:44:09.950499967 +0000 UTC m=+0.179484829 container attach 79357c9d0739325bd8f62eb651d6bb3763e46f98a7fe0387bce2e8d8a28aba7e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_maxwell, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 09:44:09 compute-0 magical_maxwell[105994]: 167 167
Dec 06 09:44:09 compute-0 systemd[1]: libpod-79357c9d0739325bd8f62eb651d6bb3763e46f98a7fe0387bce2e8d8a28aba7e.scope: Deactivated successfully.
Dec 06 09:44:09 compute-0 podman[105978]: 2025-12-06 09:44:09.956605033 +0000 UTC m=+0.185589865 container died 79357c9d0739325bd8f62eb651d6bb3763e46f98a7fe0387bce2e8d8a28aba7e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_maxwell, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec 06 09:44:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-eb4aa4c0529a18da4e4e5e88b518ec15d8c325752d837b34cb601be999ceba9a-merged.mount: Deactivated successfully.
Dec 06 09:44:10 compute-0 podman[105978]: 2025-12-06 09:44:10.00260513 +0000 UTC m=+0.231589962 container remove 79357c9d0739325bd8f62eb651d6bb3763e46f98a7fe0387bce2e8d8a28aba7e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_maxwell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Dec 06 09:44:10 compute-0 systemd[1]: libpod-conmon-79357c9d0739325bd8f62eb651d6bb3763e46f98a7fe0387bce2e8d8a28aba7e.scope: Deactivated successfully.
Dec 06 09:44:10 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e116 do_prune osdmap full prune enabled
Dec 06 09:44:10 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e117 e117: 3 total, 3 up, 3 in
Dec 06 09:44:10 compute-0 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e117: 3 total, 3 up, 3 in
Dec 06 09:44:10 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:44:10 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9224003120 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:44:10 compute-0 podman[106017]: 2025-12-06 09:44:10.243943143 +0000 UTC m=+0.078421584 container create c011261ca63e766b1cd5117201b0fae45ea36c9af6dc79c6ff4957be1c9d3ffb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_sammet, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 09:44:10 compute-0 systemd[1]: Started libpod-conmon-c011261ca63e766b1cd5117201b0fae45ea36c9af6dc79c6ff4957be1c9d3ffb.scope.
Dec 06 09:44:10 compute-0 podman[106017]: 2025-12-06 09:44:10.213854224 +0000 UTC m=+0.048332715 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 06 09:44:10 compute-0 systemd[1]: Started libcrun container.
Dec 06 09:44:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/404fb9de33c4795d06d8b306877cf9f6ef8b2cd1aa92a7518e0d7b47e4ee5743/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 09:44:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/404fb9de33c4795d06d8b306877cf9f6ef8b2cd1aa92a7518e0d7b47e4ee5743/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 09:44:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/404fb9de33c4795d06d8b306877cf9f6ef8b2cd1aa92a7518e0d7b47e4ee5743/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 09:44:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/404fb9de33c4795d06d8b306877cf9f6ef8b2cd1aa92a7518e0d7b47e4ee5743/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 09:44:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/404fb9de33c4795d06d8b306877cf9f6ef8b2cd1aa92a7518e0d7b47e4ee5743/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 06 09:44:10 compute-0 podman[106017]: 2025-12-06 09:44:10.359715922 +0000 UTC m=+0.194194413 container init c011261ca63e766b1cd5117201b0fae45ea36c9af6dc79c6ff4957be1c9d3ffb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_sammet, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Dec 06 09:44:10 compute-0 podman[106017]: 2025-12-06 09:44:10.378600347 +0000 UTC m=+0.213078758 container start c011261ca63e766b1cd5117201b0fae45ea36c9af6dc79c6ff4957be1c9d3ffb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_sammet, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 06 09:44:10 compute-0 podman[106017]: 2025-12-06 09:44:10.382546611 +0000 UTC m=+0.217025102 container attach c011261ca63e766b1cd5117201b0fae45ea36c9af6dc79c6ff4957be1c9d3ffb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_sammet, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Dec 06 09:44:10 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:44:10 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f92480049e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:44:10 compute-0 romantic_sammet[106033]: --> passed data devices: 0 physical, 1 LVM
Dec 06 09:44:10 compute-0 romantic_sammet[106033]: --> All data devices are unavailable
Dec 06 09:44:10 compute-0 systemd[1]: libpod-c011261ca63e766b1cd5117201b0fae45ea36c9af6dc79c6ff4957be1c9d3ffb.scope: Deactivated successfully.
Dec 06 09:44:10 compute-0 podman[106017]: 2025-12-06 09:44:10.81757354 +0000 UTC m=+0.652051981 container died c011261ca63e766b1cd5117201b0fae45ea36c9af6dc79c6ff4957be1c9d3ffb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_sammet, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 09:44:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-404fb9de33c4795d06d8b306877cf9f6ef8b2cd1aa92a7518e0d7b47e4ee5743-merged.mount: Deactivated successfully.
Dec 06 09:44:10 compute-0 podman[106017]: 2025-12-06 09:44:10.87510364 +0000 UTC m=+0.709582031 container remove c011261ca63e766b1cd5117201b0fae45ea36c9af6dc79c6ff4957be1c9d3ffb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_sammet, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS)
Dec 06 09:44:10 compute-0 systemd[1]: libpod-conmon-c011261ca63e766b1cd5117201b0fae45ea36c9af6dc79c6ff4957be1c9d3ffb.scope: Deactivated successfully.
Dec 06 09:44:10 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:09:44:10] "GET /metrics HTTP/1.1" 200 48257 "" "Prometheus/2.51.0"
Dec 06 09:44:10 compute-0 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:09:44:10] "GET /metrics HTTP/1.1" 200 48257 "" "Prometheus/2.51.0"
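
This access-log pair is Prometheus scraping the mgr's prometheus module, which the earlier "ENGINE Serving on http://:::9283" lines show listening on port 9283. A sketch of one such scrape against the exposition endpoint (the host and port are taken from the log; the parsing is only a rough line count):

    import urllib.request

    # The mgr prometheus module serves plaintext metrics on :9283 (ENGINE lines above).
    with urllib.request.urlopen("http://192.168.122.100:9283/metrics",
                                timeout=5) as resp:
        body = resp.read().decode()

    # Each non-comment line is "metric_name{labels} value".
    samples = [l for l in body.splitlines() if l and not l.startswith("#")]
    print(len(samples), "samples;", samples[0] if samples else "none")
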
Dec 06 09:44:10 compute-0 sudo[105910]: pam_unix(sudo:session): session closed for user root
Dec 06 09:44:11 compute-0 sudo[106063]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 09:44:11 compute-0 sudo[106063]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:44:11 compute-0 sudo[106063]: pam_unix(sudo:session): session closed for user root
Dec 06 09:44:11 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e117 do_prune osdmap full prune enabled
Dec 06 09:44:11 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e118 e118: 3 total, 3 up, 3 in
Dec 06 09:44:11 compute-0 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e118: 3 total, 3 up, 3 in
Dec 06 09:44:11 compute-0 ceph-mon[74327]: pgmap v50: 337 pgs: 1 remapped+peering, 1 peering, 335 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 234 B/s rd, 0 B/s wr, 0 op/s; 25 B/s, 0 objects/s recovering
Dec 06 09:44:11 compute-0 ceph-mon[74327]: osdmap e117: 3 total, 3 up, 3 in
Dec 06 09:44:11 compute-0 sudo[106089]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 5ecd3f74-dade-5fc4-92ce-8950ae424258 -- lvm list --format json
Dec 06 09:44:11 compute-0 sudo[106089]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:44:11 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:44:11 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:44:11 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:44:11.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:44:11 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:44:11 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f921c002c20 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:44:11 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:44:11 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:44:11 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:44:11.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:44:11 compute-0 podman[106157]: 2025-12-06 09:44:11.554605284 +0000 UTC m=+0.051486377 container create cf6975042acd2fa19e6387893d216802bffeebf76fff928add049c2cc663fe27 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_meitner, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1)
Dec 06 09:44:11 compute-0 systemd[1]: Started libpod-conmon-cf6975042acd2fa19e6387893d216802bffeebf76fff928add049c2cc663fe27.scope.
Dec 06 09:44:11 compute-0 podman[106157]: 2025-12-06 09:44:11.535919164 +0000 UTC m=+0.032800267 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 06 09:44:11 compute-0 systemd[1]: Started libcrun container.
Dec 06 09:44:11 compute-0 podman[106157]: 2025-12-06 09:44:11.647568576 +0000 UTC m=+0.144449749 container init cf6975042acd2fa19e6387893d216802bffeebf76fff928add049c2cc663fe27 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_meitner, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1)
Dec 06 09:44:11 compute-0 podman[106157]: 2025-12-06 09:44:11.659552831 +0000 UTC m=+0.156433924 container start cf6975042acd2fa19e6387893d216802bffeebf76fff928add049c2cc663fe27 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_meitner, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec 06 09:44:11 compute-0 podman[106157]: 2025-12-06 09:44:11.663521045 +0000 UTC m=+0.160402218 container attach cf6975042acd2fa19e6387893d216802bffeebf76fff928add049c2cc663fe27 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_meitner, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 09:44:11 compute-0 romantic_meitner[106174]: 167 167
Dec 06 09:44:11 compute-0 systemd[1]: libpod-cf6975042acd2fa19e6387893d216802bffeebf76fff928add049c2cc663fe27.scope: Deactivated successfully.
Dec 06 09:44:11 compute-0 podman[106157]: 2025-12-06 09:44:11.672808734 +0000 UTC m=+0.169689847 container died cf6975042acd2fa19e6387893d216802bffeebf76fff928add049c2cc663fe27 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_meitner, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec 06 09:44:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-5317eb87989b42d36bd195eea2855d8b89587beaebe60ee619d0458b116567a7-merged.mount: Deactivated successfully.
Dec 06 09:44:11 compute-0 podman[106157]: 2025-12-06 09:44:11.723754533 +0000 UTC m=+0.220635656 container remove cf6975042acd2fa19e6387893d216802bffeebf76fff928add049c2cc663fe27 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_meitner, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 09:44:11 compute-0 systemd[1]: libpod-conmon-cf6975042acd2fa19e6387893d216802bffeebf76fff928add049c2cc663fe27.scope: Deactivated successfully.
Dec 06 09:44:11 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v53: 337 pgs: 1 remapped+peering, 1 peering, 335 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 237 B/s rd, 0 B/s wr, 0 op/s; 25 B/s, 0 objects/s recovering
Dec 06 09:44:11 compute-0 podman[106203]: 2025-12-06 09:44:11.94063393 +0000 UTC m=+0.059567330 container create 66b6620f24c083ed35045758ed8f5cd0015b40cbd100afec0dc4ab54a21a8bf4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_cerf, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 09:44:11 compute-0 systemd[1]: Started libpod-conmon-66b6620f24c083ed35045758ed8f5cd0015b40cbd100afec0dc4ab54a21a8bf4.scope.
Dec 06 09:44:12 compute-0 podman[106203]: 2025-12-06 09:44:11.922811016 +0000 UTC m=+0.041744376 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 06 09:44:12 compute-0 systemd[1]: Started libcrun container.
Dec 06 09:44:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2b954f5b6dfc4603852d7e9ad4cb0faeff377408757d2e6cfbc3df8d619d266e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 09:44:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2b954f5b6dfc4603852d7e9ad4cb0faeff377408757d2e6cfbc3df8d619d266e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 09:44:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2b954f5b6dfc4603852d7e9ad4cb0faeff377408757d2e6cfbc3df8d619d266e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 09:44:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2b954f5b6dfc4603852d7e9ad4cb0faeff377408757d2e6cfbc3df8d619d266e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 09:44:12 compute-0 podman[106203]: 2025-12-06 09:44:12.056089241 +0000 UTC m=+0.175022701 container init 66b6620f24c083ed35045758ed8f5cd0015b40cbd100afec0dc4ab54a21a8bf4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_cerf, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 09:44:12 compute-0 podman[106203]: 2025-12-06 09:44:12.075238083 +0000 UTC m=+0.194171443 container start 66b6620f24c083ed35045758ed8f5cd0015b40cbd100afec0dc4ab54a21a8bf4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_cerf, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 09:44:12 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:44:12 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f921c002c20 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:44:12 compute-0 podman[106203]: 2025-12-06 09:44:12.112324073 +0000 UTC m=+0.231257473 container attach 66b6620f24c083ed35045758ed8f5cd0015b40cbd100afec0dc4ab54a21a8bf4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_cerf, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Dec 06 09:44:12 compute-0 ceph-mon[74327]: osdmap e118: 3 total, 3 up, 3 in
Dec 06 09:44:12 compute-0 stupefied_cerf[106219]: {
Dec 06 09:44:12 compute-0 stupefied_cerf[106219]:     "1": [
Dec 06 09:44:12 compute-0 stupefied_cerf[106219]:         {
Dec 06 09:44:12 compute-0 stupefied_cerf[106219]:             "devices": [
Dec 06 09:44:12 compute-0 stupefied_cerf[106219]:                 "/dev/loop3"
Dec 06 09:44:12 compute-0 stupefied_cerf[106219]:             ],
Dec 06 09:44:12 compute-0 stupefied_cerf[106219]:             "lv_name": "ceph_lv0",
Dec 06 09:44:12 compute-0 stupefied_cerf[106219]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 09:44:12 compute-0 stupefied_cerf[106219]:             "lv_size": "21470642176",
Dec 06 09:44:12 compute-0 stupefied_cerf[106219]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=a55pcS-v3Vt-CJ4W-fqCV-Baze-9VlY-jG9prS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=5ecd3f74-dade-5fc4-92ce-8950ae424258,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=7899c4d8-edb4-4836-b838-c4aa702ad7af,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 06 09:44:12 compute-0 stupefied_cerf[106219]:             "lv_uuid": "a55pcS-v3Vt-CJ4W-fqCV-Baze-9VlY-jG9prS",
Dec 06 09:44:12 compute-0 stupefied_cerf[106219]:             "name": "ceph_lv0",
Dec 06 09:44:12 compute-0 stupefied_cerf[106219]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 09:44:12 compute-0 stupefied_cerf[106219]:             "tags": {
Dec 06 09:44:12 compute-0 stupefied_cerf[106219]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 06 09:44:12 compute-0 stupefied_cerf[106219]:                 "ceph.block_uuid": "a55pcS-v3Vt-CJ4W-fqCV-Baze-9VlY-jG9prS",
Dec 06 09:44:12 compute-0 stupefied_cerf[106219]:                 "ceph.cephx_lockbox_secret": "",
Dec 06 09:44:12 compute-0 stupefied_cerf[106219]:                 "ceph.cluster_fsid": "5ecd3f74-dade-5fc4-92ce-8950ae424258",
Dec 06 09:44:12 compute-0 stupefied_cerf[106219]:                 "ceph.cluster_name": "ceph",
Dec 06 09:44:12 compute-0 stupefied_cerf[106219]:                 "ceph.crush_device_class": "",
Dec 06 09:44:12 compute-0 stupefied_cerf[106219]:                 "ceph.encrypted": "0",
Dec 06 09:44:12 compute-0 stupefied_cerf[106219]:                 "ceph.osd_fsid": "7899c4d8-edb4-4836-b838-c4aa702ad7af",
Dec 06 09:44:12 compute-0 stupefied_cerf[106219]:                 "ceph.osd_id": "1",
Dec 06 09:44:12 compute-0 stupefied_cerf[106219]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 06 09:44:12 compute-0 stupefied_cerf[106219]:                 "ceph.type": "block",
Dec 06 09:44:12 compute-0 stupefied_cerf[106219]:                 "ceph.vdo": "0",
Dec 06 09:44:12 compute-0 stupefied_cerf[106219]:                 "ceph.with_tpm": "0"
Dec 06 09:44:12 compute-0 stupefied_cerf[106219]:             },
Dec 06 09:44:12 compute-0 stupefied_cerf[106219]:             "type": "block",
Dec 06 09:44:12 compute-0 stupefied_cerf[106219]:             "vg_name": "ceph_vg0"
Dec 06 09:44:12 compute-0 stupefied_cerf[106219]:         }
Dec 06 09:44:12 compute-0 stupefied_cerf[106219]:     ]
Dec 06 09:44:12 compute-0 stupefied_cerf[106219]: }
Dec 06 09:44:12 compute-0 systemd[1]: libpod-66b6620f24c083ed35045758ed8f5cd0015b40cbd100afec0dc4ab54a21a8bf4.scope: Deactivated successfully.
Dec 06 09:44:12 compute-0 podman[106240]: 2025-12-06 09:44:12.471031132 +0000 UTC m=+0.025987472 container died 66b6620f24c083ed35045758ed8f5cd0015b40cbd100afec0dc4ab54a21a8bf4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_cerf, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 09:44:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-2b954f5b6dfc4603852d7e9ad4cb0faeff377408757d2e6cfbc3df8d619d266e-merged.mount: Deactivated successfully.
Dec 06 09:44:12 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:44:12 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9210003cd0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:44:13 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:44:13 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:44:13 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:44:13.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:44:13 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:44:13 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f92480049e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:44:13 compute-0 ceph-mon[74327]: pgmap v53: 337 pgs: 1 remapped+peering, 1 peering, 335 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 237 B/s rd, 0 B/s wr, 0 op/s; 25 B/s, 0 objects/s recovering
Dec 06 09:44:13 compute-0 podman[106240]: 2025-12-06 09:44:13.276411114 +0000 UTC m=+0.831367444 container remove 66b6620f24c083ed35045758ed8f5cd0015b40cbd100afec0dc4ab54a21a8bf4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_cerf, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Dec 06 09:44:13 compute-0 systemd[1]: libpod-conmon-66b6620f24c083ed35045758ed8f5cd0015b40cbd100afec0dc4ab54a21a8bf4.scope: Deactivated successfully.
Dec 06 09:44:13 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:44:13 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 09:44:13 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:44:13.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 09:44:13 compute-0 sudo[106089]: pam_unix(sudo:session): session closed for user root
Dec 06 09:44:13 compute-0 sudo[106264]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 09:44:13 compute-0 sudo[106264]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:44:13 compute-0 sudo[106264]: pam_unix(sudo:session): session closed for user root
Dec 06 09:44:13 compute-0 sudo[106291]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 5ecd3f74-dade-5fc4-92ce-8950ae424258 -- raw list --format json
Dec 06 09:44:13 compute-0 sudo[106291]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:44:13 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v54: 337 pgs: 337 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s; 18 B/s, 0 objects/s recovering
Dec 06 09:44:13 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"} v 0)
Dec 06 09:44:13 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]: dispatch
Dec 06 09:44:13 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 09:44:13 compute-0 podman[106366]: 2025-12-06 09:44:13.93693823 +0000 UTC m=+0.051092737 container create 864841bb5d69267caad98fc5a37fe70a6fee79546cd889f3e9822725b097dbe4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_volhard, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 09:44:13 compute-0 systemd[1]: Started libpod-conmon-864841bb5d69267caad98fc5a37fe70a6fee79546cd889f3e9822725b097dbe4.scope.
Dec 06 09:44:13 compute-0 systemd[1]: Started libcrun container.
Dec 06 09:44:14 compute-0 podman[106366]: 2025-12-06 09:44:13.915425509 +0000 UTC m=+0.029580036 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 06 09:44:14 compute-0 podman[106366]: 2025-12-06 09:44:14.013901622 +0000 UTC m=+0.128056199 container init 864841bb5d69267caad98fc5a37fe70a6fee79546cd889f3e9822725b097dbe4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_volhard, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec 06 09:44:14 compute-0 podman[106366]: 2025-12-06 09:44:14.019689825 +0000 UTC m=+0.133844342 container start 864841bb5d69267caad98fc5a37fe70a6fee79546cd889f3e9822725b097dbe4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_volhard, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325)
Dec 06 09:44:14 compute-0 podman[106366]: 2025-12-06 09:44:14.023082116 +0000 UTC m=+0.137236723 container attach 864841bb5d69267caad98fc5a37fe70a6fee79546cd889f3e9822725b097dbe4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_volhard, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec 06 09:44:14 compute-0 keen_volhard[106383]: 167 167
Dec 06 09:44:14 compute-0 systemd[1]: libpod-864841bb5d69267caad98fc5a37fe70a6fee79546cd889f3e9822725b097dbe4.scope: Deactivated successfully.
Dec 06 09:44:14 compute-0 conmon[106383]: conmon 864841bb5d69267caad9 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-864841bb5d69267caad98fc5a37fe70a6fee79546cd889f3e9822725b097dbe4.scope/container/memory.events
Dec 06 09:44:14 compute-0 podman[106366]: 2025-12-06 09:44:14.028445208 +0000 UTC m=+0.142599725 container died 864841bb5d69267caad98fc5a37fe70a6fee79546cd889f3e9822725b097dbe4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_volhard, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 09:44:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-a3d3c7249c99ce26d04507b5a3f7206697f441047ddbdf459ec816e682df2b09-merged.mount: Deactivated successfully.
Dec 06 09:44:14 compute-0 podman[106366]: 2025-12-06 09:44:14.07071892 +0000 UTC m=+0.184873427 container remove 864841bb5d69267caad98fc5a37fe70a6fee79546cd889f3e9822725b097dbe4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_volhard, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS)
Dec 06 09:44:14 compute-0 systemd[1]: libpod-conmon-864841bb5d69267caad98fc5a37fe70a6fee79546cd889f3e9822725b097dbe4.scope: Deactivated successfully.
Dec 06 09:44:14 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:44:14 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9224003120 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:44:14 compute-0 podman[106406]: 2025-12-06 09:44:14.245558508 +0000 UTC m=+0.054588379 container create 969096abe6bab6aa6ec2ba85a37d2ba5e3e3fbaa8910aba35dbfc59178b36fc9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_ritchie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1)
Dec 06 09:44:14 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e118 do_prune osdmap full prune enabled
Dec 06 09:44:14 compute-0 systemd[1]: Started libpod-conmon-969096abe6bab6aa6ec2ba85a37d2ba5e3e3fbaa8910aba35dbfc59178b36fc9.scope.
Dec 06 09:44:14 compute-0 podman[106406]: 2025-12-06 09:44:14.217179875 +0000 UTC m=+0.026209746 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 06 09:44:14 compute-0 systemd[1]: Started libcrun container.
Dec 06 09:44:14 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]': finished
Dec 06 09:44:14 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e119 e119: 3 total, 3 up, 3 in
Dec 06 09:44:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86163c71c3ef66d222140f509d688f133c012e5e3cc89276ee76f6f5bcc8ae94/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 09:44:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86163c71c3ef66d222140f509d688f133c012e5e3cc89276ee76f6f5bcc8ae94/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 09:44:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86163c71c3ef66d222140f509d688f133c012e5e3cc89276ee76f6f5bcc8ae94/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 09:44:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86163c71c3ef66d222140f509d688f133c012e5e3cc89276ee76f6f5bcc8ae94/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 09:44:14 compute-0 podman[106406]: 2025-12-06 09:44:14.34925631 +0000 UTC m=+0.158286211 container init 969096abe6bab6aa6ec2ba85a37d2ba5e3e3fbaa8910aba35dbfc59178b36fc9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_ritchie, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 09:44:14 compute-0 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e119: 3 total, 3 up, 3 in
Dec 06 09:44:14 compute-0 podman[106406]: 2025-12-06 09:44:14.358836744 +0000 UTC m=+0.167866625 container start 969096abe6bab6aa6ec2ba85a37d2ba5e3e3fbaa8910aba35dbfc59178b36fc9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_ritchie, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 06 09:44:14 compute-0 podman[106406]: 2025-12-06 09:44:14.363105427 +0000 UTC m=+0.172135298 container attach 969096abe6bab6aa6ec2ba85a37d2ba5e3e3fbaa8910aba35dbfc59178b36fc9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_ritchie, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 06 09:44:14 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]: dispatch
Dec 06 09:44:14 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:44:14 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f921c002c20 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:44:15 compute-0 lvm[106497]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 06 09:44:15 compute-0 lvm[106497]: VG ceph_vg0 finished
Dec 06 09:44:15 compute-0 charming_ritchie[106422]: {}
Dec 06 09:44:15 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:44:15 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:44:15 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:44:15.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:44:15 compute-0 systemd[1]: libpod-969096abe6bab6aa6ec2ba85a37d2ba5e3e3fbaa8910aba35dbfc59178b36fc9.scope: Deactivated successfully.
Dec 06 09:44:15 compute-0 podman[106406]: 2025-12-06 09:44:15.185271912 +0000 UTC m=+0.994301753 container died 969096abe6bab6aa6ec2ba85a37d2ba5e3e3fbaa8910aba35dbfc59178b36fc9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_ritchie, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 06 09:44:15 compute-0 systemd[1]: libpod-969096abe6bab6aa6ec2ba85a37d2ba5e3e3fbaa8910aba35dbfc59178b36fc9.scope: Consumed 1.349s CPU time.
Dec 06 09:44:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-86163c71c3ef66d222140f509d688f133c012e5e3cc89276ee76f6f5bcc8ae94-merged.mount: Deactivated successfully.
Dec 06 09:44:15 compute-0 podman[106406]: 2025-12-06 09:44:15.239650784 +0000 UTC m=+1.048680625 container remove 969096abe6bab6aa6ec2ba85a37d2ba5e3e3fbaa8910aba35dbfc59178b36fc9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_ritchie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 09:44:15 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:44:15 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9210003cf0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:44:15 compute-0 systemd[1]: libpod-conmon-969096abe6bab6aa6ec2ba85a37d2ba5e3e3fbaa8910aba35dbfc59178b36fc9.scope: Deactivated successfully.
Dec 06 09:44:15 compute-0 sudo[106291]: pam_unix(sudo:session): session closed for user root
Dec 06 09:44:15 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 06 09:44:15 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:44:15 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 06 09:44:15 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e119 do_prune osdmap full prune enabled
Dec 06 09:44:15 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:44:15 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:44:15 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:44:15.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:44:15 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:44:15 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e120 e120: 3 total, 3 up, 3 in
Dec 06 09:44:15 compute-0 sudo[106512]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 09:44:15 compute-0 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e120: 3 total, 3 up, 3 in
Dec 06 09:44:15 compute-0 sudo[106512]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:44:15 compute-0 sudo[106512]: pam_unix(sudo:session): session closed for user root
Dec 06 09:44:15 compute-0 ceph-mon[74327]: pgmap v54: 337 pgs: 337 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s; 18 B/s, 0 objects/s recovering
Dec 06 09:44:15 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]': finished
Dec 06 09:44:15 compute-0 ceph-mon[74327]: osdmap e119: 3 total, 3 up, 3 in
Dec 06 09:44:15 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:44:15 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:44:15 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v57: 337 pgs: 337 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 357 B/s rd, 0 op/s; 19 B/s, 0 objects/s recovering
Dec 06 09:44:15 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"} v 0)
Dec 06 09:44:15 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]: dispatch
Dec 06 09:44:16 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:44:16 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f92480049e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:44:16 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e120 do_prune osdmap full prune enabled
Dec 06 09:44:16 compute-0 ceph-mon[74327]: osdmap e120: 3 total, 3 up, 3 in
Dec 06 09:44:16 compute-0 ceph-mon[74327]: pgmap v57: 337 pgs: 337 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 357 B/s rd, 0 op/s; 19 B/s, 0 objects/s recovering
Dec 06 09:44:16 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]: dispatch
Dec 06 09:44:16 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]': finished
Dec 06 09:44:16 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e121 e121: 3 total, 3 up, 3 in
Dec 06 09:44:16 compute-0 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e121: 3 total, 3 up, 3 in
Dec 06 09:44:16 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:44:16 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9224003120 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:44:17 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:44:17 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 09:44:17 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:44:17.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 09:44:17 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:44:17 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9228002690 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:44:17 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:44:17 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:44:17 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:44:17.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:44:17 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e121 do_prune osdmap full prune enabled
Dec 06 09:44:17 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]': finished
Dec 06 09:44:17 compute-0 ceph-mon[74327]: osdmap e121: 3 total, 3 up, 3 in
Dec 06 09:44:17 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e122 e122: 3 total, 3 up, 3 in
Dec 06 09:44:17 compute-0 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e122: 3 total, 3 up, 3 in
Dec 06 09:44:17 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v60: 337 pgs: 1 active+remapped, 336 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s; 0 B/s, 1 objects/s recovering
Dec 06 09:44:17 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"} v 0)
Dec 06 09:44:17 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]: dispatch
Dec 06 09:44:18 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:44:18 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9210003d80 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:44:18 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e122 do_prune osdmap full prune enabled
Dec 06 09:44:18 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]': finished
Dec 06 09:44:18 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e123 e123: 3 total, 3 up, 3 in
Dec 06 09:44:18 compute-0 ceph-mon[74327]: osdmap e122: 3 total, 3 up, 3 in
Dec 06 09:44:18 compute-0 ceph-mon[74327]: pgmap v60: 337 pgs: 1 active+remapped, 336 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s; 0 B/s, 1 objects/s recovering
Dec 06 09:44:18 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]: dispatch
Dec 06 09:44:18 compute-0 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e123: 3 total, 3 up, 3 in
Dec 06 09:44:18 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:44:18 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f92480049e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:44:18 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 09:44:19 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:44:19 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:44:19 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:44:19.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:44:19 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:44:19 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9224003120 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:44:19 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:44:19 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:44:19 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:44:19.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:44:19 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v62: 337 pgs: 1 active+remapped, 336 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 234 B/s rd, 0 op/s; 0 B/s, 1 objects/s recovering
Dec 06 09:44:19 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"} v 0)
Dec 06 09:44:19 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]: dispatch
Dec 06 09:44:19 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e123 do_prune osdmap full prune enabled
Dec 06 09:44:20 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:44:20 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9228002690 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:44:20 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:44:20 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9210003da0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:44:20 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:09:44:20] "GET /metrics HTTP/1.1" 200 48257 "" "Prometheus/2.51.0"
Dec 06 09:44:20 compute-0 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:09:44:20] "GET /metrics HTTP/1.1" 200 48257 "" "Prometheus/2.51.0"
Dec 06 09:44:21 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:44:21 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 09:44:21 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:44:21.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 09:44:21 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:44:21 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f92480049e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:44:21 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:44:21 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 09:44:21 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:44:21.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 09:44:21 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]': finished
Dec 06 09:44:21 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v63: 337 pgs: 1 active+remapped, 336 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s; 0 B/s, 0 objects/s recovering
Dec 06 09:44:21 compute-0 ceph-mon[74327]: osdmap e123: 3 total, 3 up, 3 in
Dec 06 09:44:21 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"} v 0)
Dec 06 09:44:21 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]: dispatch
Dec 06 09:44:21 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]': finished
Dec 06 09:44:21 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e124 e124: 3 total, 3 up, 3 in
Dec 06 09:44:21 compute-0 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e124: 3 total, 3 up, 3 in
Dec 06 09:44:22 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:44:22 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9224003120 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:44:22 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:44:22 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9228002690 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:44:22 compute-0 ceph-mon[74327]: pgmap v62: 337 pgs: 1 active+remapped, 336 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 234 B/s rd, 0 op/s; 0 B/s, 1 objects/s recovering
Dec 06 09:44:22 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]: dispatch
Dec 06 09:44:22 compute-0 ceph-mon[74327]: pgmap v63: 337 pgs: 1 active+remapped, 336 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s; 0 B/s, 0 objects/s recovering
Dec 06 09:44:22 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]: dispatch
Dec 06 09:44:22 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]': finished
Dec 06 09:44:22 compute-0 ceph-mon[74327]: osdmap e124: 3 total, 3 up, 3 in
Dec 06 09:44:22 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e124 do_prune osdmap full prune enabled
Dec 06 09:44:22 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]': finished
Dec 06 09:44:22 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e125 e125: 3 total, 3 up, 3 in
Dec 06 09:44:22 compute-0 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e125: 3 total, 3 up, 3 in
Dec 06 09:44:23 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:44:23 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:44:23 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:44:23.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:44:23 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:44:23 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9210003dc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:44:23 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:44:23 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 09:44:23 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:44:23.341 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 09:44:23 compute-0 ceph-mgr[74618]: [balancer INFO root] Optimize plan auto_2025-12-06_09:44:23
Dec 06 09:44:23 compute-0 ceph-mgr[74618]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 06 09:44:23 compute-0 ceph-mgr[74618]: [balancer INFO root] do_upmap
Dec 06 09:44:23 compute-0 ceph-mgr[74618]: [balancer INFO root] pools ['vms', '.nfs', 'images', 'cephfs.cephfs.data', '.mgr', 'default.rgw.log', 'cephfs.cephfs.meta', 'default.rgw.meta', 'backups', 'volumes', '.rgw.root', 'default.rgw.control']
Dec 06 09:44:23 compute-0 ceph-mgr[74618]: [balancer INFO root] prepared 0/10 upmap changes
Dec 06 09:44:23 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v66: 337 pgs: 337 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Dec 06 09:44:23 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"} v 0)
Dec 06 09:44:23 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]: dispatch
Dec 06 09:44:23 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 09:44:23 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]': finished
Dec 06 09:44:23 compute-0 ceph-mon[74327]: osdmap e125: 3 total, 3 up, 3 in
Dec 06 09:44:23 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]: dispatch
Dec 06 09:44:23 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] _maybe_adjust
Dec 06 09:44:23 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e125 do_prune osdmap full prune enabled
Dec 06 09:44:23 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 09:44:23 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 06 09:44:23 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 09:44:23 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 09:44:23 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 09:44:23 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 09:44:23 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 09:44:23 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 09:44:23 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 09:44:23 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 09:44:23 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 09:44:23 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Dec 06 09:44:23 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 09:44:23 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 09:44:23 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 09:44:23 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec 06 09:44:23 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 09:44:23 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 06 09:44:23 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 09:44:23 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec 06 09:44:23 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 09:44:23 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 09:44:23 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 09:44:23 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 06 09:44:23 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]': finished
Dec 06 09:44:23 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e126 e126: 3 total, 3 up, 3 in
Dec 06 09:44:23 compute-0 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e126: 3 total, 3 up, 3 in
Dec 06 09:44:23 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 06 09:44:23 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 09:44:23 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 09:44:23 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 09:44:23 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 09:44:23 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 09:44:23 compute-0 ceph-mgr[74618]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 06 09:44:23 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 09:44:23 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 09:44:23 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 09:44:23 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 09:44:23 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 09:44:23 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 09:44:24 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:44:24 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f92480049e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:44:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 06 09:44:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 09:44:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 09:44:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 09:44:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 09:44:24 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:44:24 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9224003120 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:44:24 compute-0 sudo[106570]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 09:44:24 compute-0 sudo[106570]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:44:24 compute-0 sudo[106570]: pam_unix(sudo:session): session closed for user root
Dec 06 09:44:24 compute-0 ceph-mon[74327]: pgmap v66: 337 pgs: 337 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Dec 06 09:44:24 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]': finished
Dec 06 09:44:24 compute-0 ceph-mon[74327]: osdmap e126: 3 total, 3 up, 3 in
Dec 06 09:44:24 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 09:44:25 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:44:25 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 09:44:25 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:44:25.160 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 09:44:25 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:44:25 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9228002690 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:44:25 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:44:25 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:44:25 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:44:25.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:44:25 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v68: 337 pgs: 337 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Dec 06 09:44:25 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"} v 0)
Dec 06 09:44:25 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]: dispatch
Dec 06 09:44:25 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e126 do_prune osdmap full prune enabled
Dec 06 09:44:25 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]: dispatch
Dec 06 09:44:25 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]': finished
Dec 06 09:44:25 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e127 e127: 3 total, 3 up, 3 in
Dec 06 09:44:25 compute-0 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e127: 3 total, 3 up, 3 in
Dec 06 09:44:25 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 127 pg[10.19( v 51'1027 (0'0,51'1027] local-lis/les=89/90 n=7 ec=58/45 lis/c=89/89 les/c/f=90/90/0 sis=127 pruub=14.457291603s) [0] r=-1 lpr=127 pi=[89,127)/1 crt=51'1027 mlcod 0'0 active pruub 302.696716309s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 06 09:44:25 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 127 pg[10.19( v 51'1027 (0'0,51'1027] local-lis/les=89/90 n=7 ec=58/45 lis/c=89/89 les/c/f=90/90/0 sis=127 pruub=14.457220078s) [0] r=-1 lpr=127 pi=[89,127)/1 crt=51'1027 mlcod 0'0 unknown NOTIFY pruub 302.696716309s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 09:44:26 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:44:26 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9210003de0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:44:26 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:44:26 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9210003de0 fd 48 proxy header rest len failed header rlen = % (will set dead)
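
The recurring ganesha.nfsd svc_vc_recv events track the ingress health probes: this ganesha instance evidently expects a PROXY protocol header on every new connection, and probes that close without delivering a complete header get their transport marked dead. The stray '%' is an unexpanded format specifier in ganesha's own log template, not journal corruption. For reference, a well-formed PROXY protocol v2 header for TCP over IPv4 can be built like this (illustrative only; the addresses are placeholders):

import socket
import struct

def proxy_v2_header(src_ip: str, src_port: int,
                    dst_ip: str, dst_port: int) -> bytes:
    """Build a PROXY protocol v2 header for a TCP/IPv4 connection."""
    sig = b"\x0d\x0a\x0d\x0a\x00\x0d\x0a\x51\x55\x49\x54\x0a"  # fixed signature
    ver_cmd = 0x21        # version 2, command PROXY
    family = 0x11         # AF_INET, SOCK_STREAM
    addrs = (socket.inet_aton(src_ip) + socket.inet_aton(dst_ip)
             + struct.pack("!HH", src_port, dst_port))
    return sig + struct.pack("!BBH", ver_cmd, family, len(addrs)) + addrs

hdr = proxy_v2_header("192.168.122.100", 40000, "192.168.122.102", 2049)
print(len(hdr), hdr[:12].hex())  # 28-byte header; first 12 bytes are the signature
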
Dec 06 09:44:26 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e127 do_prune osdmap full prune enabled
Dec 06 09:44:26 compute-0 ceph-mon[74327]: pgmap v68: 337 pgs: 337 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Dec 06 09:44:26 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]': finished
Dec 06 09:44:26 compute-0 ceph-mon[74327]: osdmap e127: 3 total, 3 up, 3 in
Dec 06 09:44:26 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e128 e128: 3 total, 3 up, 3 in
Dec 06 09:44:27 compute-0 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e128: 3 total, 3 up, 3 in
Dec 06 09:44:27 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 128 pg[10.19( v 51'1027 (0'0,51'1027] local-lis/les=89/90 n=7 ec=58/45 lis/c=89/89 les/c/f=90/90/0 sis=128) [0]/[1] r=0 lpr=128 pi=[89,128)/1 crt=51'1027 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 06 09:44:27 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 128 pg[10.19( v 51'1027 (0'0,51'1027] local-lis/les=89/90 n=7 ec=58/45 lis/c=89/89 les/c/f=90/90/0 sis=128) [0]/[1] r=0 lpr=128 pi=[89,128)/1 crt=51'1027 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
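
The two ceph-osd lines above are a normal peering interval for PG 10.19 after the pgp_num step: '[0]/[1]' means the up set and acting set diverge (the PG is remapped), and r= is this OSD's role, i.e. its index in the acting set or -1 when it is not a member. Role decides the target state: role 0 goes Primary, role -1 goes Stray, exactly as logged here. A minimal restatement of that rule:

def pg_role(osd: int, acting: list[int]) -> int:
    """Role is the OSD's index in the acting set, or -1 if it is not a member."""
    return acting.index(osd) if osd in acting else -1

# osd.1 in the peering events above:
print(pg_role(1, [0]))      # -1 -> not in the acting set -> Stray
print(pg_role(1, [1]))      #  0 -> head of the acting set -> Primary
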
Dec 06 09:44:27 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:44:27 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:44:27 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:44:27.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:44:27 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:44:27 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9224003120 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:44:27 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:44:27 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:44:27 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:44:27.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:44:27 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v71: 337 pgs: 1 remapped+peering, 336 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 207 B/s rd, 0 op/s
Dec 06 09:44:27 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e128 do_prune osdmap full prune enabled
Dec 06 09:44:28 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e129 e129: 3 total, 3 up, 3 in
Dec 06 09:44:28 compute-0 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e129: 3 total, 3 up, 3 in
Dec 06 09:44:28 compute-0 ceph-mon[74327]: osdmap e128: 3 total, 3 up, 3 in
Dec 06 09:44:28 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:44:28 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9228002690 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:44:28 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 129 pg[10.19( v 51'1027 (0'0,51'1027] local-lis/les=128/129 n=7 ec=58/45 lis/c=89/89 les/c/f=90/90/0 sis=128) [0]/[1] async=[0] r=0 lpr=128 pi=[89,128)/1 crt=51'1027 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 09:44:28 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:44:28 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f92480049e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:44:28 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
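
_set_new_cache_sizes is the mon's periodic cache retune; the values are bytes. Converted for readability:

MiB = 1024 ** 2
for name, val in [("cache_size", 1020054731),
                  ("inc_alloc", 348127232),
                  ("full_alloc", 348127232),
                  ("kv_alloc", 318767104)]:
    print(f"{name:10s} {val / MiB:7.1f} MiB")
# cache_size ~972.8 MiB; inc/full 332.0 MiB each; kv 304.0 MiB
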
Dec 06 09:44:28 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e129 do_prune osdmap full prune enabled
Dec 06 09:44:28 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e130 e130: 3 total, 3 up, 3 in
Dec 06 09:44:28 compute-0 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e130: 3 total, 3 up, 3 in
Dec 06 09:44:28 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 130 pg[10.19( v 51'1027 (0'0,51'1027] local-lis/les=128/129 n=7 ec=58/45 lis/c=128/89 les/c/f=129/90/0 sis=130 pruub=15.355307579s) [0] async=[0] r=-1 lpr=130 pi=[89,130)/1 crt=51'1027 mlcod 51'1027 active pruub 306.535247803s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 06 09:44:28 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 130 pg[10.19( v 51'1027 (0'0,51'1027] local-lis/les=128/129 n=7 ec=58/45 lis/c=128/89 les/c/f=129/90/0 sis=130 pruub=15.355225563s) [0] r=-1 lpr=130 pi=[89,130)/1 crt=51'1027 mlcod 0'0 unknown NOTIFY pruub 306.535247803s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 09:44:29 compute-0 ceph-mon[74327]: pgmap v71: 337 pgs: 1 remapped+peering, 336 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 207 B/s rd, 0 op/s
Dec 06 09:44:29 compute-0 ceph-mon[74327]: osdmap e129: 3 total, 3 up, 3 in
Dec 06 09:44:29 compute-0 ceph-mon[74327]: osdmap e130: 3 total, 3 up, 3 in
Dec 06 09:44:29 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:44:29 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:44:29 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:44:29.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:44:29 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:44:29 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9210003f80 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:44:29 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:44:29 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:44:29 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:44:29.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:44:29 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v74: 337 pgs: 1 remapped+peering, 336 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:44:29 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e130 do_prune osdmap full prune enabled
Dec 06 09:44:29 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e131 e131: 3 total, 3 up, 3 in
Dec 06 09:44:29 compute-0 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e131: 3 total, 3 up, 3 in
Dec 06 09:44:30 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:44:30 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9224003120 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:44:30 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:44:30 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9228003690 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:44:30 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:09:44:30] "GET /metrics HTTP/1.1" 200 48257 "" "Prometheus/2.51.0"
Dec 06 09:44:30 compute-0 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:09:44:30] "GET /metrics HTTP/1.1" 200 48257 "" "Prometheus/2.51.0"
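
The paired cherrypy access lines record Prometheus 2.51.0 scraping the mgr prometheus module every 10 seconds (48257 bytes of exposition per scrape). A minimal client-side fetch of the same endpoint; the port is an assumption (9283 is the module's default), since the journal only shows the server-side access log:

import urllib.request

# Assumed endpoint: the mgr prometheus module's default port is 9283.
URL = "http://192.168.122.100:9283/metrics"

with urllib.request.urlopen(URL, timeout=5) as resp:
    body = resp.read().decode()

families = {line.split()[2] for line in body.splitlines()
            if line.startswith("# TYPE")}
print(f"{len(body)} bytes, {len(families)} metric families")
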
Dec 06 09:44:30 compute-0 ceph-mon[74327]: pgmap v74: 337 pgs: 1 remapped+peering, 336 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:44:30 compute-0 ceph-mon[74327]: osdmap e131: 3 total, 3 up, 3 in
Dec 06 09:44:31 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:44:31 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 09:44:31 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:44:31.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 09:44:31 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:44:31 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f92480049e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:44:31 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:44:31 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:44:31 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:44:31.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:44:31 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v76: 337 pgs: 1 remapped+peering, 336 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 213 B/s rd, 0 op/s
Dec 06 09:44:32 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:44:32 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9210003f80 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:44:32 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:44:32 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9224003120 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:44:33 compute-0 ceph-mon[74327]: pgmap v76: 337 pgs: 1 remapped+peering, 336 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 213 B/s rd, 0 op/s
Dec 06 09:44:33 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:44:33 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 09:44:33 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:44:33.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 09:44:33 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:44:33 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9228003690 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:44:33 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:44:33 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 09:44:33 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:44:33.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 09:44:33 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v77: 337 pgs: 337 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s; 18 B/s, 1 objects/s recovering
Dec 06 09:44:33 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"} v 0)
Dec 06 09:44:33 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]: dispatch
Dec 06 09:44:33 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 09:44:34 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e131 do_prune osdmap full prune enabled
Dec 06 09:44:34 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:44:34 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f92480049e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:44:34 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]': finished
Dec 06 09:44:34 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e132 e132: 3 total, 3 up, 3 in
Dec 06 09:44:34 compute-0 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e132: 3 total, 3 up, 3 in
Dec 06 09:44:34 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]: dispatch
Dec 06 09:44:34 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:44:34 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9210003f80 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:44:34 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-haproxy-nfs-cephfs-compute-0-fzuvue[96127]: [WARNING] 339/094434 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 1ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
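
Here the ingress haproxy takes backend server nfs.cephfs.0 out of rotation after a Layer4 failure, meaning the TCP handshake itself was refused; "check duration: 1ms" is how long the probe took, and two other backends remain in service. A Layer4 check reduces to a timed connect attempt, roughly as below (host and port are placeholders):

import socket

def l4_check(host: str, port: int, timeout: float = 1.0) -> bool:
    """Layer-4 health check: can we complete a TCP handshake in time?"""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:   # ConnectionRefusedError, timeout, unreachable, ...
        return False

print(l4_check("192.168.122.102", 2049))  # placeholder backend address
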
Dec 06 09:44:35 compute-0 ceph-mon[74327]: pgmap v77: 337 pgs: 337 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s; 18 B/s, 1 objects/s recovering
Dec 06 09:44:35 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]': finished
Dec 06 09:44:35 compute-0 ceph-mon[74327]: osdmap e132: 3 total, 3 up, 3 in
Dec 06 09:44:35 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:44:35 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:44:35 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:44:35.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:44:35 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:44:35 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9224003120 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:44:35 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:44:35 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:44:35 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:44:35.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:44:35 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v79: 337 pgs: 337 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 294 B/s rd, 0 op/s; 15 B/s, 1 objects/s recovering
Dec 06 09:44:35 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"} v 0)
Dec 06 09:44:35 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]: dispatch
Dec 06 09:44:36 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:44:36 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9228003690 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:44:36 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e132 do_prune osdmap full prune enabled
Dec 06 09:44:36 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]: dispatch
Dec 06 09:44:36 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Dec 06 09:44:36 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e133 e133: 3 total, 3 up, 3 in
Dec 06 09:44:36 compute-0 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e133: 3 total, 3 up, 3 in
Dec 06 09:44:36 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 133 pg[10.1b( v 51'1027 (0'0,51'1027] local-lis/les=97/98 n=2 ec=58/45 lis/c=97/97 les/c/f=98/98/0 sis=133 pruub=13.667246819s) [0] r=-1 lpr=133 pi=[97,133)/1 crt=51'1027 mlcod 0'0 active pruub 312.198425293s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 06 09:44:36 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 133 pg[10.1b( v 51'1027 (0'0,51'1027] local-lis/les=97/98 n=2 ec=58/45 lis/c=97/97 les/c/f=98/98/0 sis=133 pruub=13.667204857s) [0] r=-1 lpr=133 pi=[97,133)/1 crt=51'1027 mlcod 0'0 unknown NOTIFY pruub 312.198425293s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 09:44:36 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:44:36 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f92480049e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:44:37 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:44:37 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:44:37 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:44:37.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:44:37 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e133 do_prune osdmap full prune enabled
Dec 06 09:44:37 compute-0 ceph-mon[74327]: pgmap v79: 337 pgs: 337 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 294 B/s rd, 0 op/s; 15 B/s, 1 objects/s recovering
Dec 06 09:44:37 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Dec 06 09:44:37 compute-0 ceph-mon[74327]: osdmap e133: 3 total, 3 up, 3 in
Dec 06 09:44:37 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:44:37 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9210003f80 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:44:37 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:44:37 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:44:37 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:44:37.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:44:37 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e134 e134: 3 total, 3 up, 3 in
Dec 06 09:44:37 compute-0 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e134: 3 total, 3 up, 3 in
Dec 06 09:44:37 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 134 pg[10.1b( v 51'1027 (0'0,51'1027] local-lis/les=97/98 n=2 ec=58/45 lis/c=97/97 les/c/f=98/98/0 sis=134) [0]/[1] r=0 lpr=134 pi=[97,134)/1 crt=51'1027 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 06 09:44:37 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 134 pg[10.1b( v 51'1027 (0'0,51'1027] local-lis/les=97/98 n=2 ec=58/45 lis/c=97/97 les/c/f=98/98/0 sis=134) [0]/[1] r=0 lpr=134 pi=[97,134)/1 crt=51'1027 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 06 09:44:37 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v82: 337 pgs: 1 remapped+peering, 336 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s; 18 B/s, 1 objects/s recovering
Dec 06 09:44:38 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:44:38 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9224003120 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:44:38 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e134 do_prune osdmap full prune enabled
Dec 06 09:44:38 compute-0 ceph-mon[74327]: osdmap e134: 3 total, 3 up, 3 in
Dec 06 09:44:38 compute-0 ceph-mon[74327]: pgmap v82: 337 pgs: 1 remapped+peering, 336 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s; 18 B/s, 1 objects/s recovering
Dec 06 09:44:38 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e135 e135: 3 total, 3 up, 3 in
Dec 06 09:44:38 compute-0 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e135: 3 total, 3 up, 3 in
Dec 06 09:44:38 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:44:38 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9228003690 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:44:38 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 09:44:38 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 06 09:44:38 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
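
The mgr also polls the OSD blocklist on a timer, logged at audit [DBG] rather than [INF] because it is read-only. The same query from a shell is sketched below; the addr/until field names in the JSON are my assumption about the usual output, hence the defensive .get():

import json
import subprocess

# Same query the mgr dispatches above, run via the CLI (admin keyring required).
out = subprocess.run(["ceph", "osd", "blocklist", "ls", "--format", "json"],
                     check=True, capture_output=True, text=True).stdout
for entry in json.loads(out):
    print(entry.get("addr"), entry.get("until"))
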
Dec 06 09:44:39 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:44:39 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:44:39 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:44:39.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:44:39 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:44:39 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f92480049e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:44:39 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 135 pg[10.1b( v 51'1027 (0'0,51'1027] local-lis/les=134/135 n=2 ec=58/45 lis/c=97/97 les/c/f=98/98/0 sis=134) [0]/[1] async=[0] r=0 lpr=134 pi=[97,134)/1 crt=51'1027 mlcod 0'0 active+remapped mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 09:44:39 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:44:39 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:44:39 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:44:39.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:44:39 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e135 do_prune osdmap full prune enabled
Dec 06 09:44:39 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e136 e136: 3 total, 3 up, 3 in
Dec 06 09:44:39 compute-0 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e136: 3 total, 3 up, 3 in
Dec 06 09:44:39 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 136 pg[10.1b( v 51'1027 (0'0,51'1027] local-lis/les=134/135 n=2 ec=58/45 lis/c=134/97 les/c/f=135/98/0 sis=136 pruub=15.714330673s) [0] async=[0] r=-1 lpr=136 pi=[97,136)/1 crt=51'1027 mlcod 51'1027 active pruub 317.670806885s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 06 09:44:39 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 136 pg[10.1b( v 51'1027 (0'0,51'1027] local-lis/les=134/135 n=2 ec=58/45 lis/c=134/97 les/c/f=135/98/0 sis=136 pruub=15.714257240s) [0] r=-1 lpr=136 pi=[97,136)/1 crt=51'1027 mlcod 0'0 unknown NOTIFY pruub 317.670806885s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 09:44:39 compute-0 ceph-mon[74327]: osdmap e135: 3 total, 3 up, 3 in
Dec 06 09:44:39 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 09:44:39 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v85: 337 pgs: 1 remapped+peering, 336 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Dec 06 09:44:40 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:44:40 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9218002690 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:44:40 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e136 do_prune osdmap full prune enabled
Dec 06 09:44:40 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e137 e137: 3 total, 3 up, 3 in
Dec 06 09:44:40 compute-0 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e137: 3 total, 3 up, 3 in
Dec 06 09:44:40 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:44:40 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9224003120 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:44:40 compute-0 ceph-mon[74327]: osdmap e136: 3 total, 3 up, 3 in
Dec 06 09:44:40 compute-0 ceph-mon[74327]: pgmap v85: 337 pgs: 1 remapped+peering, 336 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Dec 06 09:44:40 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:09:44:40] "GET /metrics HTTP/1.1" 200 48257 "" "Prometheus/2.51.0"
Dec 06 09:44:40 compute-0 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:09:44:40] "GET /metrics HTTP/1.1" 200 48257 "" "Prometheus/2.51.0"
Dec 06 09:44:41 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:44:41 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 09:44:41 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:44:41.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 09:44:41 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:44:41 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9228003690 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:44:41 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:44:41 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:44:41 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:44:41.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:44:41 compute-0 sudo[104550]: pam_unix(sudo:session): session closed for user root
Dec 06 09:44:41 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v87: 337 pgs: 1 remapped+peering, 336 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Dec 06 09:44:41 compute-0 ceph-mon[74327]: osdmap e137: 3 total, 3 up, 3 in
Dec 06 09:44:42 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:44:42 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f92480049e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:44:42 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:44:42 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9218001090 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:44:43 compute-0 ceph-mon[74327]: pgmap v87: 337 pgs: 1 remapped+peering, 336 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Dec 06 09:44:43 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:44:43 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:44:43 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:44:43.187 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:44:43 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:44:43 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9224003120 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:44:43 compute-0 sudo[106770]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tglgdyopjsunjzuzgpzfcmvrbjediipg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014282.989746-369-147300997262041/AnsiballZ_command.py'
Dec 06 09:44:43 compute-0 sudo[106770]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:44:43 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:44:43 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 09:44:43 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:44:43.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 09:44:43 compute-0 python3.9[106772]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
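
In parallel with the Ceph activity, a Zuul-driven Ansible play is validating the host: this task runs rpm -V over a package list. rpm -V emits one line per file that diverges from the package database, led by a nine-character vector (S size, M mode, 5 digest, D device, L link target, U user, G group, T mtime, P capabilities; '.' means unchanged). A sketch of invoking and decoding it for one package:

import subprocess

# Meaning of each position in rpm -V's nine-character result vector.
FLAGS = {"S": "size", "M": "mode", "5": "digest", "D": "device",
         "L": "link", "U": "user", "G": "group", "T": "mtime",
         "P": "capabilities"}

def verify(pkg: str):
    """Yield (path, [changed attributes]) for files that fail rpm -V."""
    proc = subprocess.run(["rpm", "-V", pkg], capture_output=True, text=True)
    for line in proc.stdout.splitlines():
        vector, _, rest = line.partition(" ")
        path = rest.split()[-1] if rest else ""
        yield path, [FLAGS[c] for c in vector if c in FLAGS]

for path, changes in verify("nftables"):
    print(path, changes)
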
Dec 06 09:44:43 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v88: 337 pgs: 337 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 852 B/s rd, 170 B/s wr, 1 op/s; 18 B/s, 0 objects/s recovering
Dec 06 09:44:44 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:44:44 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9224003120 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:44:44 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:44:44 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
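
ganesha restarts into a 90-second grace window so NFSv4 clients can reclaim their locks before new state is handed out. The reaper lines that follow poll whether grace can be lifted early, and with "reclaim complete(0) clid count(0)" it is released at 09:44:50, well before the 90 s expire. A paraphrase of that decision (function and parameter names are mine, not ganesha's):

def can_lift_grace(reclaim_complete: int, clid_count: int) -> bool:
    """Lift grace early if no client holds state, or all finished reclaiming."""
    return clid_count == 0 or reclaim_complete >= clid_count

# Values from the reaper lines in this section: reclaim complete(0), clid count(0).
print(can_lift_grace(0, 0))   # True -> "NFS Server Now NOT IN GRACE"
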
Dec 06 09:44:44 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"} v 0)
Dec 06 09:44:44 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]: dispatch
Dec 06 09:44:44 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 09:44:44 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e137 do_prune osdmap full prune enabled
Dec 06 09:44:44 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]: dispatch
Dec 06 09:44:44 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]': finished
Dec 06 09:44:44 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e138 e138: 3 total, 3 up, 3 in
Dec 06 09:44:44 compute-0 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e138: 3 total, 3 up, 3 in
Dec 06 09:44:44 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:44:44 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f92480049e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:44:44 compute-0 sudo[106908]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 09:44:44 compute-0 sudo[106908]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:44:44 compute-0 sudo[106908]: pam_unix(sudo:session): session closed for user root
Dec 06 09:44:45 compute-0 sudo[106770]: pam_unix(sudo:session): session closed for user root
Dec 06 09:44:45 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:44:45 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 09:44:45 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:44:45.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 09:44:45 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:44:45 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9218001090 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:44:45 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:44:45 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 09:44:45 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:44:45.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 09:44:45 compute-0 ceph-mon[74327]: pgmap v88: 337 pgs: 337 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 852 B/s rd, 170 B/s wr, 1 op/s; 18 B/s, 0 objects/s recovering
Dec 06 09:44:45 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]': finished
Dec 06 09:44:45 compute-0 ceph-mon[74327]: osdmap e138: 3 total, 3 up, 3 in
Dec 06 09:44:45 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v90: 337 pgs: 337 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 829 B/s rd, 165 B/s wr, 1 op/s; 17 B/s, 0 objects/s recovering
Dec 06 09:44:45 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"} v 0)
Dec 06 09:44:45 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]: dispatch
Dec 06 09:44:46 compute-0 sudo[107084]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-singvcgtmvgvzjllwmhtlnxreaspzvho ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014285.3857665-393-157671963342629/AnsiballZ_selinux.py'
Dec 06 09:44:46 compute-0 sudo[107084]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:44:46 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:44:46 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9224003120 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:44:46 compute-0 python3.9[107086]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
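
This task pins SELinux to the targeted policy in enforcing mode, updating /etc/selinux/config and the runtime state together. On a SELinux-enabled host the two can be checked independently, for example:

from pathlib import Path

def runtime_state() -> str:
    """Report the runtime SELinux mode via the selinuxfs 'enforce' flag."""
    enforce = Path("/sys/fs/selinux/enforce")
    if not enforce.exists():
        return "disabled"
    return "enforcing" if enforce.read_text().strip() == "1" else "permissive"

def configured_state(path: str = "/etc/selinux/config") -> str:
    """Report the boot-time mode from the SELINUX= line in the config file."""
    for line in Path(path).read_text().splitlines():
        if line.startswith("SELINUX="):
            return line.split("=", 1)[1].strip()
    return "unknown"

print(runtime_state(), configured_state())
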
Dec 06 09:44:46 compute-0 sudo[107084]: pam_unix(sudo:session): session closed for user root
Dec 06 09:44:46 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e138 do_prune osdmap full prune enabled
Dec 06 09:44:46 compute-0 ceph-mon[74327]: pgmap v90: 337 pgs: 337 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 829 B/s rd, 165 B/s wr, 1 op/s; 17 B/s, 0 objects/s recovering
Dec 06 09:44:46 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]: dispatch
Dec 06 09:44:46 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]': finished
Dec 06 09:44:46 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e139 e139: 3 total, 3 up, 3 in
Dec 06 09:44:46 compute-0 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e139: 3 total, 3 up, 3 in
Dec 06 09:44:46 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:44:46 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9228003690 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:44:47 compute-0 sudo[107237]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hzdszfjndcrzwmxuqbjflhanvevsyjrl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014286.8525763-426-40938164179199/AnsiballZ_command.py'
Dec 06 09:44:47 compute-0 sudo[107237]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:44:47 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:44:47 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 09:44:47 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:44:47.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 09:44:47 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:44:47 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 06 09:44:47 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:44:47 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 06 09:44:47 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:44:47 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f92480049e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:44:47 compute-0 python3.9[107239]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Dec 06 09:44:47 compute-0 sudo[107237]: pam_unix(sudo:session): session closed for user root
Dec 06 09:44:47 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:44:47 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 06 09:44:47 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:44:47 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:44:47 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:44:47.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:44:47 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]': finished
Dec 06 09:44:47 compute-0 ceph-mon[74327]: osdmap e139: 3 total, 3 up, 3 in
Dec 06 09:44:47 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v92: 337 pgs: 337 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 4.3 KiB/s rd, 1.6 KiB/s wr, 5 op/s; 15 B/s, 0 objects/s recovering
Dec 06 09:44:47 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"} v 0)
Dec 06 09:44:47 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]: dispatch
Dec 06 09:44:47 compute-0 sudo[107391]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ulzfaadcnaklrlgwayknqkfjadzmbect ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014287.5965407-450-144548253882243/AnsiballZ_file.py'
Dec 06 09:44:47 compute-0 sudo[107391]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:44:48 compute-0 python3.9[107393]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:44:48 compute-0 sudo[107391]: pam_unix(sudo:session): session closed for user root
Dec 06 09:44:48 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:44:48 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f92480049e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:44:48 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e139 do_prune osdmap full prune enabled
Dec 06 09:44:48 compute-0 ceph-mon[74327]: pgmap v92: 337 pgs: 337 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 4.3 KiB/s rd, 1.6 KiB/s wr, 5 op/s; 15 B/s, 0 objects/s recovering
Dec 06 09:44:48 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]: dispatch
Dec 06 09:44:48 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]': finished
Dec 06 09:44:48 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e140 e140: 3 total, 3 up, 3 in
Dec 06 09:44:48 compute-0 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e140: 3 total, 3 up, 3 in
Dec 06 09:44:48 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 140 pg[10.1e( v 51'1027 (0'0,51'1027] local-lis/les=80/81 n=5 ec=58/45 lis/c=80/80 les/c/f=81/81/0 sis=140 pruub=13.577693939s) [2] r=-1 lpr=140 pi=[80,140)/1 crt=51'1027 mlcod 0'0 active pruub 324.368438721s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 06 09:44:48 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 140 pg[10.1e( v 51'1027 (0'0,51'1027] local-lis/les=80/81 n=5 ec=58/45 lis/c=80/80 les/c/f=81/81/0 sis=140 pruub=13.577651024s) [2] r=-1 lpr=140 pi=[80,140)/1 crt=51'1027 mlcod 0'0 unknown NOTIFY pruub 324.368438721s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 09:44:48 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:44:48 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f921c001230 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:44:48 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 09:44:48 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e140 do_prune osdmap full prune enabled
Dec 06 09:44:48 compute-0 sudo[107543]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ithqrkuwjyujdgyvutofwxfqeokpungj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014288.3207002-474-202763404616121/AnsiballZ_mount.py'
Dec 06 09:44:48 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e141 e141: 3 total, 3 up, 3 in
Dec 06 09:44:48 compute-0 sudo[107543]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:44:48 compute-0 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e141: 3 total, 3 up, 3 in
Dec 06 09:44:48 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 141 pg[10.1e( v 51'1027 (0'0,51'1027] local-lis/les=80/81 n=5 ec=58/45 lis/c=80/80 les/c/f=81/81/0 sis=141) [2]/[1] r=0 lpr=141 pi=[80,141)/1 crt=51'1027 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 06 09:44:48 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 141 pg[10.1e( v 51'1027 (0'0,51'1027] local-lis/les=80/81 n=5 ec=58/45 lis/c=80/80 les/c/f=81/81/0 sis=141) [2]/[1] r=0 lpr=141 pi=[80,141)/1 crt=51'1027 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 06 09:44:49 compute-0 python3.9[107545]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Dec 06 09:44:49 compute-0 sudo[107543]: pam_unix(sudo:session): session closed for user root
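
The last three tasks assemble swap: dd writes a 1 GiB /swap file (1024 blocks of 1 MiB, skipped when the file already exists via creates=/swap), the file module clamps it to root:root 0600, and ansible.posix.mount with state=present persists an fstab line without mounting anything. A standalone sketch of the same sequence; mkswap/swapon are presumably handled by later tasks not captured here:

import subprocess
from pathlib import Path

SWAP = Path("/swap")

if not SWAP.exists():                       # 'creates=/swap' semantics
    subprocess.run(["dd", "if=/dev/zero", f"of={SWAP}",
                    "count=1024", "bs=1M"], check=True)
SWAP.chmod(0o600)                           # swap files must not be world-readable

entry = f"{SWAP} none swap sw 0 0\n"        # what state=present writes here
fstab = Path("/etc/fstab")
if entry not in fstab.read_text():          # append once, keep idempotent
    with fstab.open("a") as f:
        f.write(entry)
# mkswap/swapon presumably follow in later tasks not shown in this section.
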
Dec 06 09:44:49 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:44:49 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:44:49 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:44:49.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:44:49 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:44:49 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9228003690 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:44:49 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:44:49 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 09:44:49 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:44:49.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 09:44:49 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]': finished
Dec 06 09:44:49 compute-0 ceph-mon[74327]: osdmap e140: 3 total, 3 up, 3 in
Dec 06 09:44:49 compute-0 ceph-mon[74327]: osdmap e141: 3 total, 3 up, 3 in
Dec 06 09:44:49 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v95: 337 pgs: 337 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 4.6 KiB/s rd, 1.9 KiB/s wr, 6 op/s
Dec 06 09:44:49 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"} v 0)
Dec 06 09:44:49 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec 06 09:44:49 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e141 do_prune osdmap full prune enabled
Dec 06 09:44:49 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 06 09:44:49 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e142 e142: 3 total, 3 up, 3 in
Dec 06 09:44:49 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 142 pg[10.1f( empty local-lis/les=0/0 n=0 ec=58/45 lis/c=109/109 les/c/f=110/110/0 sis=142) [1] r=0 lpr=142 pi=[109,142)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 09:44:49 compute-0 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e142: 3 total, 3 up, 3 in
Dec 06 09:44:49 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 142 pg[10.1e( v 51'1027 (0'0,51'1027] local-lis/les=141/142 n=5 ec=58/45 lis/c=80/80 les/c/f=81/81/0 sis=141) [2]/[1] async=[2] r=0 lpr=141 pi=[80,141)/1 crt=51'1027 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 09:44:50 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:44:50 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f92480049e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:44:50 compute-0 sudo[107697]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vfdeqkrivegcgqfpicyjcalghhxbawpi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014290.0148141-558-268403742819027/AnsiballZ_file.py'
Dec 06 09:44:50 compute-0 sudo[107697]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:44:50 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:44:50 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Dec 06 09:44:50 compute-0 ceph-mon[74327]: pgmap v95: 337 pgs: 337 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 4.6 KiB/s rd, 1.9 KiB/s wr, 6 op/s
Dec 06 09:44:50 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec 06 09:44:50 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 06 09:44:50 compute-0 ceph-mon[74327]: osdmap e142: 3 total, 3 up, 3 in
Dec 06 09:44:50 compute-0 python3.9[107699]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 06 09:44:50 compute-0 sudo[107697]: pam_unix(sudo:session): session closed for user root
Dec 06 09:44:50 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:44:50 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f92480049e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:44:50 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e142 do_prune osdmap full prune enabled
Dec 06 09:44:50 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e143 e143: 3 total, 3 up, 3 in
Dec 06 09:44:50 compute-0 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e143: 3 total, 3 up, 3 in
Dec 06 09:44:50 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 143 pg[10.1e( v 51'1027 (0'0,51'1027] local-lis/les=141/142 n=5 ec=58/45 lis/c=141/80 les/c/f=142/81/0 sis=143 pruub=15.002619743s) [2] async=[2] r=-1 lpr=143 pi=[80,143)/1 crt=51'1027 mlcod 51'1027 active pruub 328.184936523s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 06 09:44:50 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 143 pg[10.1e( v 51'1027 (0'0,51'1027] local-lis/les=141/142 n=5 ec=58/45 lis/c=141/80 les/c/f=142/81/0 sis=143 pruub=15.002544403s) [2] r=-1 lpr=143 pi=[80,143)/1 crt=51'1027 mlcod 0'0 unknown NOTIFY pruub 328.184936523s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 09:44:50 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 143 pg[10.1f( empty local-lis/les=0/0 n=0 ec=58/45 lis/c=109/109 les/c/f=110/110/0 sis=143) [1]/[2] r=-1 lpr=143 pi=[109,143)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 06 09:44:50 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 143 pg[10.1f( empty local-lis/les=0/0 n=0 ec=58/45 lis/c=109/109 les/c/f=110/110/0 sis=143) [1]/[2] r=-1 lpr=143 pi=[109,143)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 06 09:44:50 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:09:44:50] "GET /metrics HTTP/1.1" 200 48257 "" "Prometheus/2.51.0"
Dec 06 09:44:50 compute-0 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:09:44:50] "GET /metrics HTTP/1.1" 200 48257 "" "Prometheus/2.51.0"
Dec 06 09:44:51 compute-0 sudo[107850]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cfjzqvonidbxnatgritgvqsskwbmhxdb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014290.774892-582-35976293941818/AnsiballZ_stat.py'
Dec 06 09:44:51 compute-0 sudo[107850]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:44:51 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:44:51 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 09:44:51 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:44:51.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 09:44:51 compute-0 python3.9[107852]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:44:51 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:44:51 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f921c002140 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:44:51 compute-0 sudo[107850]: pam_unix(sudo:session): session closed for user root
Dec 06 09:44:51 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:44:51 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:44:51 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:44:51.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:44:51 compute-0 sudo[107929]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eeouwqvkhtdqnnqdvvqgdeuvditfaayb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014290.774892-582-35976293941818/AnsiballZ_file.py'
Dec 06 09:44:51 compute-0 sudo[107929]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:44:51 compute-0 python3.9[107931]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem _original_basename=tls-ca-bundle.pem recurse=False state=file path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:44:51 compute-0 sudo[107929]: pam_unix(sudo:session): session closed for user root
Dec 06 09:44:51 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v98: 337 pgs: 337 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Dec 06 09:44:51 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e143 do_prune osdmap full prune enabled
Dec 06 09:44:52 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e144 e144: 3 total, 3 up, 3 in
Dec 06 09:44:52 compute-0 ceph-mon[74327]: osdmap e143: 3 total, 3 up, 3 in
Dec 06 09:44:52 compute-0 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e144: 3 total, 3 up, 3 in
Dec 06 09:44:52 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:44:52 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f92480049e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:44:52 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:44:52 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9228003690 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:44:53 compute-0 sudo[108082]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jxzrbnqmyqglwxqwuygvmibcjwesbphl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014292.6995506-645-119285152652290/AnsiballZ_stat.py'
Dec 06 09:44:53 compute-0 sudo[108082]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:44:53 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e144 do_prune osdmap full prune enabled
Dec 06 09:44:53 compute-0 ceph-mon[74327]: pgmap v98: 337 pgs: 337 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Dec 06 09:44:53 compute-0 ceph-mon[74327]: osdmap e144: 3 total, 3 up, 3 in
Dec 06 09:44:53 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e145 e145: 3 total, 3 up, 3 in
Dec 06 09:44:53 compute-0 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e145: 3 total, 3 up, 3 in
Dec 06 09:44:53 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 145 pg[10.1f( v 51'1027 (0'0,51'1027] local-lis/les=0/0 n=5 ec=58/45 lis/c=143/109 les/c/f=144/110/0 sis=145) [1] r=0 lpr=145 pi=[109,145)/1 luod=0'0 crt=51'1027 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 06 09:44:53 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 145 pg[10.1f( v 51'1027 (0'0,51'1027] local-lis/les=0/0 n=5 ec=58/45 lis/c=143/109 les/c/f=144/110/0 sis=145) [1] r=0 lpr=145 pi=[109,145)/1 crt=51'1027 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 09:44:53 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:44:53 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:44:53 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:44:53.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:44:53 compute-0 python3.9[108084]: ansible-ansible.builtin.stat Invoked with path=/etc/lvm/devices/system.devices follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 06 09:44:53 compute-0 sudo[108082]: pam_unix(sudo:session): session closed for user root
Dec 06 09:44:53 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:44:53 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9218001090 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:44:53 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:44:53 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:44:53 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:44:53.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:44:53 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v101: 337 pgs: 1 activating+remapped, 336 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 255 B/s wr, 1 op/s; 5/224 objects misplaced (2.232%); 0 B/s, 1 objects/s recovering
Dec 06 09:44:53 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 09:44:53 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 06 09:44:53 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 09:44:53 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 09:44:53 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 09:44:53 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 09:44:53 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: [('cephfs', <mgr_util.CephfsConnectionPool.Connection object at 0x7f35c6140610>)]
Dec 06 09:44:53 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] disconnecting from cephfs 'cephfs'
Dec 06 09:44:53 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 09:44:53 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: [('cephfs', <mgr_util.CephfsConnectionPool.Connection object at 0x7f35c6140700>)]
Dec 06 09:44:53 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] disconnecting from cephfs 'cephfs'
Dec 06 09:44:54 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e145 do_prune osdmap full prune enabled
Dec 06 09:44:54 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 e146: 3 total, 3 up, 3 in
Dec 06 09:44:54 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:44:54 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f921c002140 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:44:54 compute-0 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e146: 3 total, 3 up, 3 in
Dec 06 09:44:54 compute-0 ceph-osd[82803]: osd.1 pg_epoch: 146 pg[10.1f( v 51'1027 (0'0,51'1027] local-lis/les=145/146 n=5 ec=58/45 lis/c=143/109 les/c/f=144/110/0 sis=145) [1] r=0 lpr=145 pi=[109,145)/1 crt=51'1027 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 09:44:54 compute-0 ceph-mon[74327]: osdmap e145: 3 total, 3 up, 3 in
Dec 06 09:44:54 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 09:44:54 compute-0 sudo[108237]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-urtzcznbmkgdyxkfibgztsqvfdmfwyrq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014293.9350321-684-3459028779265/AnsiballZ_getent.py'
Dec 06 09:44:54 compute-0 sudo[108237]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:44:54 compute-0 python3.9[108239]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Dec 06 09:44:54 compute-0 sudo[108237]: pam_unix(sudo:session): session closed for user root
Dec 06 09:44:54 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:44:54 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f92480049e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:44:55 compute-0 ceph-mon[74327]: pgmap v101: 337 pgs: 1 activating+remapped, 336 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 255 B/s wr, 1 op/s; 5/224 objects misplaced (2.232%); 0 B/s, 1 objects/s recovering
Dec 06 09:44:55 compute-0 ceph-mon[74327]: osdmap e146: 3 total, 3 up, 3 in
Dec 06 09:44:55 compute-0 ceph-mon[74327]: log_channel(cluster) log [DBG] : mgrmap e35: compute-0.qhdjwa(active, since 92s), standbys: compute-1.sauzid, compute-2.oazbvn
Dec 06 09:44:55 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:44:55 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:44:55 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:44:55.202 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:44:55 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:44:55 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9228003690 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:44:55 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:44:55 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 09:44:55 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:44:55.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 09:44:55 compute-0 sudo[108392]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wfvltfptkoyfgpkcthqbtmqdtwkqynyp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014295.0819674-714-176108456593383/AnsiballZ_getent.py'
Dec 06 09:44:55 compute-0 sudo[108392]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:44:55 compute-0 python3.9[108394]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Dec 06 09:44:55 compute-0 sudo[108392]: pam_unix(sudo:session): session closed for user root
Dec 06 09:44:55 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v103: 337 pgs: 1 activating+remapped, 336 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 413 B/s rd, 206 B/s wr, 1 op/s; 5/224 objects misplaced (2.232%); 0 B/s, 1 objects/s recovering
Dec 06 09:44:56 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:44:56 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9218003430 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:44:56 compute-0 ceph-mon[74327]: mgrmap e35: compute-0.qhdjwa(active, since 92s), standbys: compute-1.sauzid, compute-2.oazbvn
Dec 06 09:44:56 compute-0 sudo[108545]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bjiqsfvuaalzjwfnlsvtbeiyzgwnmvdf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014295.8865016-738-61595277776213/AnsiballZ_group.py'
Dec 06 09:44:56 compute-0 sudo[108545]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:44:56 compute-0 python3.9[108547]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Dec 06 09:44:56 compute-0 sudo[108545]: pam_unix(sudo:session): session closed for user root
Dec 06 09:44:56 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:44:56 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f921c002c20 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:44:56 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-haproxy-nfs-cephfs-compute-0-fzuvue[96127]: [WARNING] 339/094456 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Dec 06 09:44:57 compute-0 ceph-mgr[74618]: [dashboard INFO request] [192.168.122.100:58224] [POST] [200] [0.148s] [4.0B] [3cb85339-82e1-47be-b992-8a94186ac764] /api/prometheus_receiver
Dec 06 09:44:57 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:44:57 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:44:57 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:44:57.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:44:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:44:57 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f92480049e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:44:57 compute-0 sudo[108702]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kglgoukjotrngatfeeflcymbowkmfutt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014296.9685578-765-81341617625772/AnsiballZ_file.py'
Dec 06 09:44:57 compute-0 ceph-mon[74327]: pgmap v103: 337 pgs: 1 activating+remapped, 336 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 413 B/s rd, 206 B/s wr, 1 op/s; 5/224 objects misplaced (2.232%); 0 B/s, 1 objects/s recovering
Dec 06 09:44:57 compute-0 sudo[108702]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:44:57 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:44:57 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:44:57 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:44:57.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:44:57 compute-0 python3.9[108704]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Dec 06 09:44:57 compute-0 sudo[108702]: pam_unix(sudo:session): session closed for user root
Dec 06 09:44:57 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v104: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 682 B/s wr, 1 op/s; 18 B/s, 1 objects/s recovering
Dec 06 09:44:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:44:58 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9228003690 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:44:58 compute-0 ceph-mon[74327]: pgmap v104: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 682 B/s wr, 1 op/s; 18 B/s, 1 objects/s recovering
Dec 06 09:44:58 compute-0 sudo[108854]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hfubduzvchtxalrqodgzaxvrdatnaloo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014298.2058933-798-160663123656436/AnsiballZ_dnf.py'
Dec 06 09:44:58 compute-0 sudo[108854]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:44:58 compute-0 python3.9[108856]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 06 09:44:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:44:58 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9218003430 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:44:58 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 09:44:59 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:44:59 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 09:44:59 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:44:59.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 09:44:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:44:59 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f921c002c20 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:44:59 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:44:59 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 09:44:59 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:44:59.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 09:44:59 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v105: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 397 B/s rd, 529 B/s wr, 1 op/s; 14 B/s, 1 objects/s recovering
Dec 06 09:45:00 compute-0 sudo[108854]: pam_unix(sudo:session): session closed for user root
Dec 06 09:45:00 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:45:00 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f92480049e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:45:00 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:45:00 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9228003690 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:45:00 compute-0 sudo[109009]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-drarodmcusrdydjuousavzlcuiqjwlgn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014300.526644-822-4640585167338/AnsiballZ_file.py'
Dec 06 09:45:00 compute-0 sudo[109009]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:45:00 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:09:45:00] "GET /metrics HTTP/1.1" 200 48251 "" "Prometheus/2.51.0"
Dec 06 09:45:00 compute-0 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:09:45:00] "GET /metrics HTTP/1.1" 200 48251 "" "Prometheus/2.51.0"
Dec 06 09:45:01 compute-0 python3.9[109011]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 06 09:45:01 compute-0 sudo[109009]: pam_unix(sudo:session): session closed for user root
Dec 06 09:45:01 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:45:01 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 09:45:01 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:45:01.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 09:45:01 compute-0 ceph-mon[74327]: pgmap v105: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 397 B/s rd, 529 B/s wr, 1 op/s; 14 B/s, 1 objects/s recovering
Dec 06 09:45:01 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:45:01 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9218003430 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:45:01 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:45:01 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 09:45:01 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:45:01.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 09:45:01 compute-0 sudo[109163]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nzbqyqrquzoobhzxevoqiwfdatanqhgd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014301.2846448-846-181130309209982/AnsiballZ_stat.py'
Dec 06 09:45:01 compute-0 sudo[109163]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:45:01 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v106: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 117 B/s rd, 353 B/s wr, 0 op/s; 12 B/s, 0 objects/s recovering
Dec 06 09:45:01 compute-0 python3.9[109165]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:45:01 compute-0 sudo[109163]: pam_unix(sudo:session): session closed for user root
Dec 06 09:45:02 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:45:02 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f921c002c20 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:45:02 compute-0 sudo[109241]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vgeanhvsbhkzsqsfnwxclevexwpshjaq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014301.2846448-846-181130309209982/AnsiballZ_file.py'
Dec 06 09:45:02 compute-0 sudo[109241]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:45:02 compute-0 python3.9[109243]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/modules-load.d/99-edpm.conf _original_basename=edpm-modprobe.conf.j2 recurse=False state=file path=/etc/modules-load.d/99-edpm.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 06 09:45:02 compute-0 sudo[109241]: pam_unix(sudo:session): session closed for user root
Dec 06 09:45:02 compute-0 ceph-mon[74327]: pgmap v106: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 117 B/s rd, 353 B/s wr, 0 op/s; 12 B/s, 0 objects/s recovering
Dec 06 09:45:02 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:45:02 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f92480049e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:45:02 compute-0 sudo[109393]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jbruiisyddmkxucrumjtwesfmoifjszc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014302.6534712-885-121604203468157/AnsiballZ_stat.py'
Dec 06 09:45:02 compute-0 sudo[109393]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:45:03 compute-0 python3.9[109396]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:45:03 compute-0 sudo[109393]: pam_unix(sudo:session): session closed for user root
Dec 06 09:45:03 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:45:03 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:45:03 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:45:03.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:45:03 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:45:03 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9228003690 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:45:03 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:45:03 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 09:45:03 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:45:03.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 09:45:03 compute-0 sudo[109473]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zgtisfuszgvghvwmsdfmalahzzkvzrmj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014302.6534712-885-121604203468157/AnsiballZ_file.py'
Dec 06 09:45:03 compute-0 sudo[109473]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:45:03 compute-0 python3.9[109475]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/sysctl.d/99-edpm.conf _original_basename=edpm-sysctl.conf.j2 recurse=False state=file path=/etc/sysctl.d/99-edpm.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 06 09:45:03 compute-0 sudo[109473]: pam_unix(sudo:session): session closed for user root
Dec 06 09:45:03 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v107: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 307 B/s rd, 307 B/s wr, 0 op/s; 10 B/s, 0 objects/s recovering
Dec 06 09:45:03 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 09:45:04 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:45:04 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9218004610 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:45:04 compute-0 sudo[109625]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dsxfzhfcndskoykukgmqksiwoyqvvywh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014304.2590044-930-79806952229151/AnsiballZ_dnf.py'
Dec 06 09:45:04 compute-0 sudo[109625]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:45:04 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:45:04 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9218004610 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:45:04 compute-0 python3.9[109627]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 06 09:45:04 compute-0 ceph-mon[74327]: pgmap v107: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 307 B/s rd, 307 B/s wr, 0 op/s; 10 B/s, 0 objects/s recovering
Dec 06 09:45:05 compute-0 sudo[109630]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 09:45:05 compute-0 sudo[109630]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:45:05 compute-0 sudo[109630]: pam_unix(sudo:session): session closed for user root
Dec 06 09:45:05 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:45:05 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:45:05 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:45:05.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:45:05 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:45:05 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9218004610 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:45:05 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:45:05 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:45:05 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:45:05.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:45:05 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v108: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 262 B/s rd, 262 B/s wr, 0 op/s; 9 B/s, 0 objects/s recovering
Dec 06 09:45:06 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:45:06 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9228003690 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:45:06 compute-0 sudo[109625]: pam_unix(sudo:session): session closed for user root
Dec 06 09:45:06 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:45:06 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9218004610 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:45:06 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:45:06.953Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 06 09:45:07 compute-0 ceph-mon[74327]: pgmap v108: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 262 B/s rd, 262 B/s wr, 0 op/s; 9 B/s, 0 objects/s recovering
Dec 06 09:45:07 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:45:07 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:45:07 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:45:07.217 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:45:07 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:45:07 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9218004610 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:45:07 compute-0 python3.9[109806]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 06 09:45:07 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:45:07 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 09:45:07 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:45:07.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 09:45:07 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v109: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 255 B/s wr, 0 op/s; 9 B/s, 0 objects/s recovering
Dec 06 09:45:08 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:45:08 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9218004610 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:45:08 compute-0 python3.9[109959]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Dec 06 09:45:08 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[105048]: logger=infra.usagestats t=2025-12-06T09:45:08.573091254Z level=info msg="Usage stats are ready to report"
Dec 06 09:45:08 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:45:08 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9228003690 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:45:08 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 09:45:08 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 06 09:45:08 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 09:45:08 compute-0 python3.9[110109]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 06 09:45:09 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:45:09 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 09:45:09 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:45:09.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 09:45:09 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:45:09 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f92480049e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:45:09 compute-0 ceph-mon[74327]: pgmap v109: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 255 B/s wr, 0 op/s; 9 B/s, 0 objects/s recovering
Dec 06 09:45:09 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 09:45:09 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:45:09 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 09:45:09 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:45:09.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 09:45:09 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v110: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:45:10 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:45:10 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9218004610 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:45:10 compute-0 sudo[110262]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cxvmyrmcypywafyxvxbucemhsjzybvfb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014309.537852-1053-275702186763684/AnsiballZ_systemd.py'
Dec 06 09:45:10 compute-0 sudo[110262]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:45:10 compute-0 python3.9[110264]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 06 09:45:10 compute-0 systemd[1]: Stopping Dynamic System Tuning Daemon...
Dec 06 09:45:10 compute-0 systemd[1]: tuned.service: Deactivated successfully.
Dec 06 09:45:10 compute-0 systemd[1]: Stopped Dynamic System Tuning Daemon.
Dec 06 09:45:10 compute-0 systemd[1]: Starting Dynamic System Tuning Daemon...
Dec 06 09:45:10 compute-0 ceph-mon[74327]: pgmap v110: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:45:10 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:45:10 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f921c004010 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:45:10 compute-0 systemd[1]: Started Dynamic System Tuning Daemon.
Dec 06 09:45:10 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:09:45:10] "GET /metrics HTTP/1.1" 200 48253 "" "Prometheus/2.51.0"
Dec 06 09:45:10 compute-0 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:09:45:10] "GET /metrics HTTP/1.1" 200 48253 "" "Prometheus/2.51.0"
Dec 06 09:45:10 compute-0 sudo[110262]: pam_unix(sudo:session): session closed for user root
Dec 06 09:45:11 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:45:11 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 09:45:11 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:45:11.224 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 09:45:11 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:45:11 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9210001090 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:45:11 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:45:11 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:45:11 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:45:11.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:45:11 compute-0 python3.9[110427]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Dec 06 09:45:11 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v111: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:45:12 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:45:12 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f92480049e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:45:12 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:45:12 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9218004610 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:45:13 compute-0 ceph-mon[74327]: pgmap v111: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:45:13 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:45:13 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:45:13 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:45:13.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:45:13 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:45:13 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f921c004010 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:45:13 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:45:13 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:45:13 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:45:13.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:45:13 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v112: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Dec 06 09:45:13 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 09:45:14 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:45:14 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9210001090 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:45:14 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:45:14 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9210001090 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:45:14 compute-0 sudo[110579]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ilkysyokcwxrigvoresraibsztamljls ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014314.6860933-1224-48121434707203/AnsiballZ_systemd.py'
Dec 06 09:45:14 compute-0 sudo[110579]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:45:15 compute-0 ceph-mon[74327]: pgmap v112: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Dec 06 09:45:15 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:45:15 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 09:45:15 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:45:15.230 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 09:45:15 compute-0 python3.9[110582]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 06 09:45:15 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:45:15 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9218004610 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:45:15 compute-0 sudo[110579]: pam_unix(sudo:session): session closed for user root
Dec 06 09:45:15 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:45:15 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:45:15 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:45:15.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:45:15 compute-0 sudo[110709]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 09:45:15 compute-0 sudo[110709]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:45:15 compute-0 sudo[110709]: pam_unix(sudo:session): session closed for user root
Dec 06 09:45:15 compute-0 sudo[110760]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aicmgucrkzccrjuvymwxqyaqnmbphnca ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014315.4980896-1224-52242040636688/AnsiballZ_systemd.py'
Dec 06 09:45:15 compute-0 sudo[110760]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:45:15 compute-0 sudo[110761]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ls
Dec 06 09:45:15 compute-0 sudo[110761]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:45:15 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v113: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:45:16 compute-0 python3.9[110768]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 06 09:45:16 compute-0 sudo[110760]: pam_unix(sudo:session): session closed for user root
Dec 06 09:45:16 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:45:16 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f921c004010 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:45:16 compute-0 podman[110884]: 2025-12-06 09:45:16.355815321 +0000 UTC m=+0.063097275 container exec 484d6ed1039c50317cf4b6067525b7ed0f8de7c568c9445500e62194ab25d04d (image=quay.io/ceph/ceph:v19, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mon-compute-0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1)
Dec 06 09:45:16 compute-0 podman[110884]: 2025-12-06 09:45:16.450790491 +0000 UTC m=+0.158072425 container exec_died 484d6ed1039c50317cf4b6067525b7ed0f8de7c568c9445500e62194ab25d04d (image=quay.io/ceph/ceph:v19, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mon-compute-0, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 09:45:16 compute-0 sshd-session[101434]: Connection closed by 192.168.122.30 port 48974
Dec 06 09:45:16 compute-0 sshd-session[101430]: pam_unix(sshd:session): session closed for user zuul
Dec 06 09:45:16 compute-0 systemd[1]: session-39.scope: Deactivated successfully.
Dec 06 09:45:16 compute-0 systemd[1]: session-39.scope: Consumed 1min 8.492s CPU time.
Dec 06 09:45:16 compute-0 systemd-logind[795]: Session 39 logged out. Waiting for processes to exit.
Dec 06 09:45:16 compute-0 systemd-logind[795]: Removed session 39.
Dec 06 09:45:16 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:45:16 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9210001090 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:45:16 compute-0 podman[111002]: 2025-12-06 09:45:16.931365192 +0000 UTC m=+0.059583551 container exec 43e1f8986e07f4e6b99d6750812eff4d21013fd9f773d9f6d6eef82549df3333 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 06 09:45:16 compute-0 podman[111002]: 2025-12-06 09:45:16.938918043 +0000 UTC m=+0.067136352 container exec_died 43e1f8986e07f4e6b99d6750812eff4d21013fd9f773d9f6d6eef82549df3333 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 06 09:45:16 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:45:16.953Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 06 09:45:17 compute-0 ceph-mon[74327]: pgmap v113: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:45:17 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:45:17 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:45:17 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:45:17.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:45:17 compute-0 podman[111095]: 2025-12-06 09:45:17.300878407 +0000 UTC m=+0.053166182 container exec f137658eeed93d56ee9d8ac7b6445e7acce26a24ed156c5e4e3e69a13e4abbd2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 09:45:17 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:45:17 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9248004b80 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:45:17 compute-0 podman[111095]: 2025-12-06 09:45:17.308807227 +0000 UTC m=+0.061094982 container exec_died f137658eeed93d56ee9d8ac7b6445e7acce26a24ed156c5e4e3e69a13e4abbd2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 06 09:45:17 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:45:17 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:45:17 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:45:17.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:45:17 compute-0 podman[111159]: 2025-12-06 09:45:17.582209951 +0000 UTC m=+0.085802797 container exec 0300cb0bc272de309f3d242ba0627369d0948f1b63b3476dccdba4375a8e539d (image=quay.io/ceph/haproxy:2.3, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-haproxy-nfs-cephfs-compute-0-fzuvue)
Dec 06 09:45:17 compute-0 podman[111159]: 2025-12-06 09:45:17.592867304 +0000 UTC m=+0.096460150 container exec_died 0300cb0bc272de309f3d242ba0627369d0948f1b63b3476dccdba4375a8e539d (image=quay.io/ceph/haproxy:2.3, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-haproxy-nfs-cephfs-compute-0-fzuvue)
Dec 06 09:45:17 compute-0 podman[111225]: 2025-12-06 09:45:17.810969244 +0000 UTC m=+0.053137423 container exec d7d5239f75d84aa9a07cad1cdfa31e3b4f3983263aaaa27687e6c7454ab8fe3f (image=quay.io/ceph/keepalived:2.2.4, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-keepalived-nfs-cephfs-compute-0-ylrrzf, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=keepalived for Ceph, version=2.2.4, summary=Provides keepalived on RHEL 9 for Ceph., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, name=keepalived, release=1793, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.display-name=Keepalived on RHEL 9, distribution-scope=public, io.openshift.tags=Ceph keepalived, io.openshift.expose-services=, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, com.redhat.component=keepalived-container, io.buildah.version=1.28.2, build-date=2023-02-22T09:23:20, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., architecture=x86_64, vcs-type=git)
Dec 06 09:45:17 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v114: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Dec 06 09:45:17 compute-0 podman[111225]: 2025-12-06 09:45:17.828980101 +0000 UTC m=+0.071148230 container exec_died d7d5239f75d84aa9a07cad1cdfa31e3b4f3983263aaaa27687e6c7454ab8fe3f (image=quay.io/ceph/keepalived:2.2.4, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-keepalived-nfs-cephfs-compute-0-ylrrzf, build-date=2023-02-22T09:23:20, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=keepalived, description=keepalived for Ceph, distribution-scope=public, version=2.2.4, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.buildah.version=1.28.2, io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, architecture=x86_64, io.k8s.display-name=Keepalived on RHEL 9, vcs-type=git, release=1793, io.openshift.tags=Ceph keepalived, vendor=Red Hat, Inc., com.redhat.component=keepalived-container, summary=Provides keepalived on RHEL 9 for Ceph.)
Dec 06 09:45:18 compute-0 podman[111290]: 2025-12-06 09:45:18.02397101 +0000 UTC m=+0.046290411 container exec b0127b2874845862d1ff8231029cda7f8d9811cefe028a677c06060e923a3641 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 06 09:45:18 compute-0 podman[111290]: 2025-12-06 09:45:18.044938246 +0000 UTC m=+0.067257637 container exec_died b0127b2874845862d1ff8231029cda7f8d9811cefe028a677c06060e923a3641 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 06 09:45:18 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:45:18 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9218004610 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:45:18 compute-0 podman[111365]: 2025-12-06 09:45:18.274189495 +0000 UTC m=+0.055511816 container exec fc223e2a5fd06c66f839f6f48305e72a1403c44b345b53752763fbbf064c41b3 (image=quay.io/ceph/grafana:10.4.0, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Dec 06 09:45:18 compute-0 podman[111365]: 2025-12-06 09:45:18.446919482 +0000 UTC m=+0.228241783 container exec_died fc223e2a5fd06c66f839f6f48305e72a1403c44b345b53752763fbbf064c41b3 (image=quay.io/ceph/grafana:10.4.0, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Dec 06 09:45:18 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:45:18 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f921c004010 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:45:18 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 09:45:18 compute-0 podman[111477]: 2025-12-06 09:45:18.872594006 +0000 UTC m=+0.067569865 container exec cfe4d69091434e5154fa760292bba767b8875965fa71cf21268b9ec1632f0d9e (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 06 09:45:18 compute-0 podman[111477]: 2025-12-06 09:45:18.917380086 +0000 UTC m=+0.112355955 container exec_died cfe4d69091434e5154fa760292bba767b8875965fa71cf21268b9ec1632f0d9e (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 06 09:45:18 compute-0 sudo[110761]: pam_unix(sudo:session): session closed for user root
Dec 06 09:45:18 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 06 09:45:18 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:45:18 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 06 09:45:19 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:45:19 compute-0 sudo[111520]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 09:45:19 compute-0 sudo[111520]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:45:19 compute-0 sudo[111520]: pam_unix(sudo:session): session closed for user root
Dec 06 09:45:19 compute-0 sudo[111545]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Dec 06 09:45:19 compute-0 sudo[111545]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:45:19 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:45:19 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 09:45:19 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:45:19.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 09:45:19 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:45:19 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9210001090 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:45:19 compute-0 ceph-mon[74327]: pgmap v114: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Dec 06 09:45:19 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:45:19 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:45:19 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:45:19 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:45:19 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:45:19.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:45:19 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-haproxy-nfs-cephfs-compute-0-fzuvue[96127]: [WARNING] 339/094519 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Dec 06 09:45:19 compute-0 sudo[111545]: pam_unix(sudo:session): session closed for user root
Dec 06 09:45:19 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 06 09:45:19 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 09:45:19 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 06 09:45:19 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 09:45:19 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 06 09:45:19 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:45:19 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Dec 06 09:45:19 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:45:19 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 06 09:45:19 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 09:45:19 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 06 09:45:19 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 09:45:19 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 06 09:45:19 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 09:45:19 compute-0 sudo[111601]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 09:45:19 compute-0 sudo[111601]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:45:19 compute-0 sudo[111601]: pam_unix(sudo:session): session closed for user root
Dec 06 09:45:19 compute-0 sudo[111626]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 5ecd3f74-dade-5fc4-92ce-8950ae424258 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Dec 06 09:45:19 compute-0 sudo[111626]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:45:19 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v115: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:45:20 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:45:20 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9210001090 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:45:20 compute-0 podman[111693]: 2025-12-06 09:45:20.138152575 +0000 UTC m=+0.040640931 container create 477586be6417cb1edbf9fa47ec0802d1d41e83e73e773ef8463078f407d5c24c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_nobel, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 09:45:20 compute-0 systemd[1]: Started libpod-conmon-477586be6417cb1edbf9fa47ec0802d1d41e83e73e773ef8463078f407d5c24c.scope.
Dec 06 09:45:20 compute-0 systemd[1]: Started libcrun container.
Dec 06 09:45:20 compute-0 podman[111693]: 2025-12-06 09:45:20.12106165 +0000 UTC m=+0.023550016 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 06 09:45:20 compute-0 podman[111693]: 2025-12-06 09:45:20.216893536 +0000 UTC m=+0.119381902 container init 477586be6417cb1edbf9fa47ec0802d1d41e83e73e773ef8463078f407d5c24c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_nobel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_REF=squid, ceph=True)
Dec 06 09:45:20 compute-0 podman[111693]: 2025-12-06 09:45:20.224686192 +0000 UTC m=+0.127174538 container start 477586be6417cb1edbf9fa47ec0802d1d41e83e73e773ef8463078f407d5c24c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_nobel, io.buildah.version=1.40.1, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Dec 06 09:45:20 compute-0 podman[111693]: 2025-12-06 09:45:20.227638351 +0000 UTC m=+0.130126697 container attach 477586be6417cb1edbf9fa47ec0802d1d41e83e73e773ef8463078f407d5c24c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_nobel, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 09:45:20 compute-0 jovial_nobel[111709]: 167 167
Dec 06 09:45:20 compute-0 systemd[1]: libpod-477586be6417cb1edbf9fa47ec0802d1d41e83e73e773ef8463078f407d5c24c.scope: Deactivated successfully.
Dec 06 09:45:20 compute-0 podman[111693]: 2025-12-06 09:45:20.229981783 +0000 UTC m=+0.132470119 container died 477586be6417cb1edbf9fa47ec0802d1d41e83e73e773ef8463078f407d5c24c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_nobel, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec 06 09:45:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-81f86bb6b20a460998717dbe826df23c7509eb14f42214d1291577976353c2ac-merged.mount: Deactivated successfully.
Dec 06 09:45:20 compute-0 podman[111693]: 2025-12-06 09:45:20.273651053 +0000 UTC m=+0.176139399 container remove 477586be6417cb1edbf9fa47ec0802d1d41e83e73e773ef8463078f407d5c24c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_nobel, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Dec 06 09:45:20 compute-0 systemd[1]: libpod-conmon-477586be6417cb1edbf9fa47ec0802d1d41e83e73e773ef8463078f407d5c24c.scope: Deactivated successfully.
Dec 06 09:45:20 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 09:45:20 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 09:45:20 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:45:20 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:45:20 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 09:45:20 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 09:45:20 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 09:45:20 compute-0 podman[111733]: 2025-12-06 09:45:20.414284227 +0000 UTC m=+0.039467269 container create 41b20ba51ae2db01e58543ffc392e5df540c12f61216497f0bff36f68765289b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_poitras, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 09:45:20 compute-0 systemd[1]: Started libpod-conmon-41b20ba51ae2db01e58543ffc392e5df540c12f61216497f0bff36f68765289b.scope.
Dec 06 09:45:20 compute-0 systemd[1]: Started libcrun container.
Dec 06 09:45:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c19da158c533f40e17284f27148715ebf949c7f51d479e2e0d005325cb1ba34c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 09:45:20 compute-0 podman[111733]: 2025-12-06 09:45:20.399581837 +0000 UTC m=+0.024764899 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 06 09:45:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c19da158c533f40e17284f27148715ebf949c7f51d479e2e0d005325cb1ba34c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 09:45:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c19da158c533f40e17284f27148715ebf949c7f51d479e2e0d005325cb1ba34c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 09:45:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c19da158c533f40e17284f27148715ebf949c7f51d479e2e0d005325cb1ba34c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 09:45:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c19da158c533f40e17284f27148715ebf949c7f51d479e2e0d005325cb1ba34c/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 06 09:45:20 compute-0 podman[111733]: 2025-12-06 09:45:20.506938808 +0000 UTC m=+0.132121880 container init 41b20ba51ae2db01e58543ffc392e5df540c12f61216497f0bff36f68765289b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_poitras, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec 06 09:45:20 compute-0 podman[111733]: 2025-12-06 09:45:20.515391202 +0000 UTC m=+0.140574244 container start 41b20ba51ae2db01e58543ffc392e5df540c12f61216497f0bff36f68765289b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_poitras, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325, ceph=True, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec 06 09:45:20 compute-0 podman[111733]: 2025-12-06 09:45:20.518601047 +0000 UTC m=+0.143784109 container attach 41b20ba51ae2db01e58543ffc392e5df540c12f61216497f0bff36f68765289b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_poitras, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Dec 06 09:45:20 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:45:20 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9218004610 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:45:20 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:09:45:20] "GET /metrics HTTP/1.1" 200 48253 "" "Prometheus/2.51.0"
Dec 06 09:45:20 compute-0 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:09:45:20] "GET /metrics HTTP/1.1" 200 48253 "" "Prometheus/2.51.0"
Dec 06 09:45:20 compute-0 eloquent_poitras[111749]: --> passed data devices: 0 physical, 1 LVM
Dec 06 09:45:20 compute-0 eloquent_poitras[111749]: --> All data devices are unavailable
Dec 06 09:45:20 compute-0 systemd[1]: libpod-41b20ba51ae2db01e58543ffc392e5df540c12f61216497f0bff36f68765289b.scope: Deactivated successfully.
Dec 06 09:45:21 compute-0 podman[111764]: 2025-12-06 09:45:21.002997691 +0000 UTC m=+0.028578290 container died 41b20ba51ae2db01e58543ffc392e5df540c12f61216497f0bff36f68765289b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_poitras, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Dec 06 09:45:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-c19da158c533f40e17284f27148715ebf949c7f51d479e2e0d005325cb1ba34c-merged.mount: Deactivated successfully.
Dec 06 09:45:21 compute-0 podman[111764]: 2025-12-06 09:45:21.046911128 +0000 UTC m=+0.072491717 container remove 41b20ba51ae2db01e58543ffc392e5df540c12f61216497f0bff36f68765289b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_poitras, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 06 09:45:21 compute-0 systemd[1]: libpod-conmon-41b20ba51ae2db01e58543ffc392e5df540c12f61216497f0bff36f68765289b.scope: Deactivated successfully.
Dec 06 09:45:21 compute-0 sudo[111626]: pam_unix(sudo:session): session closed for user root
Dec 06 09:45:21 compute-0 sudo[111780]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 09:45:21 compute-0 sudo[111780]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:45:21 compute-0 sudo[111780]: pam_unix(sudo:session): session closed for user root
Dec 06 09:45:21 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:45:21 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 09:45:21 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:45:21.240 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 09:45:21 compute-0 sudo[111805]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 5ecd3f74-dade-5fc4-92ce-8950ae424258 -- lvm list --format json
Dec 06 09:45:21 compute-0 sudo[111805]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:45:21 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:45:21 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f921c004010 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:45:21 compute-0 ceph-mon[74327]: pgmap v115: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:45:21 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:45:21 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:45:21 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:45:21.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:45:21 compute-0 podman[111871]: 2025-12-06 09:45:21.657616496 +0000 UTC m=+0.049223819 container create 57f2885ef5b3c63cb54b26f4dfee7042457af84bf6b0f867d0870b1492e523d6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_roentgen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Dec 06 09:45:21 compute-0 systemd[1]: Started libpod-conmon-57f2885ef5b3c63cb54b26f4dfee7042457af84bf6b0f867d0870b1492e523d6.scope.
Dec 06 09:45:21 compute-0 systemd[1]: Started libcrun container.
Dec 06 09:45:21 compute-0 podman[111871]: 2025-12-06 09:45:21.638754754 +0000 UTC m=+0.030362107 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 06 09:45:21 compute-0 podman[111871]: 2025-12-06 09:45:21.743122876 +0000 UTC m=+0.134730199 container init 57f2885ef5b3c63cb54b26f4dfee7042457af84bf6b0f867d0870b1492e523d6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_roentgen, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1)
Dec 06 09:45:21 compute-0 podman[111871]: 2025-12-06 09:45:21.748607242 +0000 UTC m=+0.140214565 container start 57f2885ef5b3c63cb54b26f4dfee7042457af84bf6b0f867d0870b1492e523d6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_roentgen, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_REF=squid, OSD_FLAVOR=default)
Dec 06 09:45:21 compute-0 podman[111871]: 2025-12-06 09:45:21.752181127 +0000 UTC m=+0.143788460 container attach 57f2885ef5b3c63cb54b26f4dfee7042457af84bf6b0f867d0870b1492e523d6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_roentgen, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Dec 06 09:45:21 compute-0 crazy_roentgen[111887]: 167 167
Dec 06 09:45:21 compute-0 systemd[1]: libpod-57f2885ef5b3c63cb54b26f4dfee7042457af84bf6b0f867d0870b1492e523d6.scope: Deactivated successfully.
Dec 06 09:45:21 compute-0 podman[111871]: 2025-12-06 09:45:21.75531766 +0000 UTC m=+0.146924953 container died 57f2885ef5b3c63cb54b26f4dfee7042457af84bf6b0f867d0870b1492e523d6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_roentgen, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Dec 06 09:45:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-c673074b7af76dd53150732d4348de36afd92f23ebe08569ec1c0fb69c33f3d4-merged.mount: Deactivated successfully.
Dec 06 09:45:21 compute-0 podman[111871]: 2025-12-06 09:45:21.788625365 +0000 UTC m=+0.180232668 container remove 57f2885ef5b3c63cb54b26f4dfee7042457af84bf6b0f867d0870b1492e523d6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_roentgen, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 09:45:21 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v116: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:45:21 compute-0 systemd[1]: libpod-conmon-57f2885ef5b3c63cb54b26f4dfee7042457af84bf6b0f867d0870b1492e523d6.scope: Deactivated successfully.
Dec 06 09:45:21 compute-0 podman[111911]: 2025-12-06 09:45:21.952679091 +0000 UTC m=+0.039274204 container create aa986bf0d49b9228ff8069e079b1da6bb3402ea74782d22c6ff9149b224ae2af (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_jemison, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Dec 06 09:45:21 compute-0 systemd[1]: Started libpod-conmon-aa986bf0d49b9228ff8069e079b1da6bb3402ea74782d22c6ff9149b224ae2af.scope.
Dec 06 09:45:22 compute-0 systemd[1]: Started libcrun container.
Dec 06 09:45:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64529ca356e6a5ea4f99c2f5175114ada3ee9750663f1772d2583c44d7c68ae6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 09:45:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64529ca356e6a5ea4f99c2f5175114ada3ee9750663f1772d2583c44d7c68ae6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 09:45:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64529ca356e6a5ea4f99c2f5175114ada3ee9750663f1772d2583c44d7c68ae6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 09:45:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64529ca356e6a5ea4f99c2f5175114ada3ee9750663f1772d2583c44d7c68ae6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 09:45:22 compute-0 podman[111911]: 2025-12-06 09:45:22.025720751 +0000 UTC m=+0.112315884 container init aa986bf0d49b9228ff8069e079b1da6bb3402ea74782d22c6ff9149b224ae2af (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_jemison, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 09:45:22 compute-0 podman[111911]: 2025-12-06 09:45:21.93679815 +0000 UTC m=+0.023393283 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 06 09:45:22 compute-0 podman[111911]: 2025-12-06 09:45:22.036857997 +0000 UTC m=+0.123453100 container start aa986bf0d49b9228ff8069e079b1da6bb3402ea74782d22c6ff9149b224ae2af (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_jemison, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Dec 06 09:45:22 compute-0 podman[111911]: 2025-12-06 09:45:22.040601277 +0000 UTC m=+0.127196410 container attach aa986bf0d49b9228ff8069e079b1da6bb3402ea74782d22c6ff9149b224ae2af (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_jemison, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, io.buildah.version=1.40.1)
Dec 06 09:45:22 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:45:22 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f92240019e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:45:22 compute-0 distracted_jemison[111928]: {
Dec 06 09:45:22 compute-0 distracted_jemison[111928]:     "1": [
Dec 06 09:45:22 compute-0 distracted_jemison[111928]:         {
Dec 06 09:45:22 compute-0 distracted_jemison[111928]:             "devices": [
Dec 06 09:45:22 compute-0 distracted_jemison[111928]:                 "/dev/loop3"
Dec 06 09:45:22 compute-0 distracted_jemison[111928]:             ],
Dec 06 09:45:22 compute-0 distracted_jemison[111928]:             "lv_name": "ceph_lv0",
Dec 06 09:45:22 compute-0 distracted_jemison[111928]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 09:45:22 compute-0 distracted_jemison[111928]:             "lv_size": "21470642176",
Dec 06 09:45:22 compute-0 distracted_jemison[111928]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=a55pcS-v3Vt-CJ4W-fqCV-Baze-9VlY-jG9prS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=5ecd3f74-dade-5fc4-92ce-8950ae424258,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=7899c4d8-edb4-4836-b838-c4aa702ad7af,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 06 09:45:22 compute-0 distracted_jemison[111928]:             "lv_uuid": "a55pcS-v3Vt-CJ4W-fqCV-Baze-9VlY-jG9prS",
Dec 06 09:45:22 compute-0 distracted_jemison[111928]:             "name": "ceph_lv0",
Dec 06 09:45:22 compute-0 distracted_jemison[111928]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 09:45:22 compute-0 distracted_jemison[111928]:             "tags": {
Dec 06 09:45:22 compute-0 distracted_jemison[111928]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 06 09:45:22 compute-0 distracted_jemison[111928]:                 "ceph.block_uuid": "a55pcS-v3Vt-CJ4W-fqCV-Baze-9VlY-jG9prS",
Dec 06 09:45:22 compute-0 distracted_jemison[111928]:                 "ceph.cephx_lockbox_secret": "",
Dec 06 09:45:22 compute-0 distracted_jemison[111928]:                 "ceph.cluster_fsid": "5ecd3f74-dade-5fc4-92ce-8950ae424258",
Dec 06 09:45:22 compute-0 distracted_jemison[111928]:                 "ceph.cluster_name": "ceph",
Dec 06 09:45:22 compute-0 distracted_jemison[111928]:                 "ceph.crush_device_class": "",
Dec 06 09:45:22 compute-0 distracted_jemison[111928]:                 "ceph.encrypted": "0",
Dec 06 09:45:22 compute-0 distracted_jemison[111928]:                 "ceph.osd_fsid": "7899c4d8-edb4-4836-b838-c4aa702ad7af",
Dec 06 09:45:22 compute-0 distracted_jemison[111928]:                 "ceph.osd_id": "1",
Dec 06 09:45:22 compute-0 distracted_jemison[111928]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 06 09:45:22 compute-0 distracted_jemison[111928]:                 "ceph.type": "block",
Dec 06 09:45:22 compute-0 distracted_jemison[111928]:                 "ceph.vdo": "0",
Dec 06 09:45:22 compute-0 distracted_jemison[111928]:                 "ceph.with_tpm": "0"
Dec 06 09:45:22 compute-0 distracted_jemison[111928]:             },
Dec 06 09:45:22 compute-0 distracted_jemison[111928]:             "type": "block",
Dec 06 09:45:22 compute-0 distracted_jemison[111928]:             "vg_name": "ceph_vg0"
Dec 06 09:45:22 compute-0 distracted_jemison[111928]:         }
Dec 06 09:45:22 compute-0 distracted_jemison[111928]:     ]
Dec 06 09:45:22 compute-0 distracted_jemison[111928]: }
Dec 06 09:45:22 compute-0 systemd[1]: libpod-aa986bf0d49b9228ff8069e079b1da6bb3402ea74782d22c6ff9149b224ae2af.scope: Deactivated successfully.
Dec 06 09:45:22 compute-0 podman[111911]: 2025-12-06 09:45:22.36761195 +0000 UTC m=+0.454207103 container died aa986bf0d49b9228ff8069e079b1da6bb3402ea74782d22c6ff9149b224ae2af (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_jemison, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325, io.buildah.version=1.40.1)
Dec 06 09:45:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-64529ca356e6a5ea4f99c2f5175114ada3ee9750663f1772d2583c44d7c68ae6-merged.mount: Deactivated successfully.
Dec 06 09:45:22 compute-0 podman[111911]: 2025-12-06 09:45:22.418965884 +0000 UTC m=+0.505561017 container remove aa986bf0d49b9228ff8069e079b1da6bb3402ea74782d22c6ff9149b224ae2af (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_jemison, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 09:45:22 compute-0 systemd[1]: libpod-conmon-aa986bf0d49b9228ff8069e079b1da6bb3402ea74782d22c6ff9149b224ae2af.scope: Deactivated successfully.
Dec 06 09:45:22 compute-0 sudo[111805]: pam_unix(sudo:session): session closed for user root
Dec 06 09:45:22 compute-0 sudo[111949]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 09:45:22 compute-0 sudo[111949]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:45:22 compute-0 sudo[111949]: pam_unix(sudo:session): session closed for user root
Dec 06 09:45:22 compute-0 sudo[111974]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 5ecd3f74-dade-5fc4-92ce-8950ae424258 -- raw list --format json
Dec 06 09:45:22 compute-0 sudo[111974]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:45:22 compute-0 sshd-session[111998]: Accepted publickey for zuul from 192.168.122.30 port 36922 ssh2: ECDSA SHA256:r1j7aLsKAM+XxDNbzEU5vWGpGNCOaIBwc7FZdATPttA
Dec 06 09:45:22 compute-0 systemd-logind[795]: New session 40 of user zuul.
Dec 06 09:45:22 compute-0 systemd[1]: Started Session 40 of User zuul.
Dec 06 09:45:22 compute-0 sshd-session[111998]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 06 09:45:22 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:45:22 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9218004610 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:45:22 compute-0 podman[112097]: 2025-12-06 09:45:22.985067008 +0000 UTC m=+0.045711285 container create 1b48435c51ef45993c7c76155d6a29b828d4d396f6f8ea208371e99a4d8308c6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_jang, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, ceph=True)
Dec 06 09:45:23 compute-0 systemd[1]: Started libpod-conmon-1b48435c51ef45993c7c76155d6a29b828d4d396f6f8ea208371e99a4d8308c6.scope.
Dec 06 09:45:23 compute-0 systemd[1]: Started libcrun container.
Dec 06 09:45:23 compute-0 podman[112097]: 2025-12-06 09:45:22.969043532 +0000 UTC m=+0.029687829 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 06 09:45:23 compute-0 podman[112097]: 2025-12-06 09:45:23.07027081 +0000 UTC m=+0.130915147 container init 1b48435c51ef45993c7c76155d6a29b828d4d396f6f8ea208371e99a4d8308c6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_jang, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Dec 06 09:45:23 compute-0 podman[112097]: 2025-12-06 09:45:23.07664472 +0000 UTC m=+0.137288997 container start 1b48435c51ef45993c7c76155d6a29b828d4d396f6f8ea208371e99a4d8308c6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_jang, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 09:45:23 compute-0 podman[112097]: 2025-12-06 09:45:23.080472281 +0000 UTC m=+0.141116598 container attach 1b48435c51ef45993c7c76155d6a29b828d4d396f6f8ea208371e99a4d8308c6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_jang, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Dec 06 09:45:23 compute-0 jovial_jang[112114]: 167 167
Dec 06 09:45:23 compute-0 systemd[1]: libpod-1b48435c51ef45993c7c76155d6a29b828d4d396f6f8ea208371e99a4d8308c6.scope: Deactivated successfully.
Dec 06 09:45:23 compute-0 podman[112097]: 2025-12-06 09:45:23.082022052 +0000 UTC m=+0.142666329 container died 1b48435c51ef45993c7c76155d6a29b828d4d396f6f8ea208371e99a4d8308c6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_jang, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 06 09:45:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-ede4d6a50d2390706b3694e995d68729d3d5a026e76e57b9e1e756f1dd5a40d2-merged.mount: Deactivated successfully.
Dec 06 09:45:23 compute-0 podman[112097]: 2025-12-06 09:45:23.118570533 +0000 UTC m=+0.179214810 container remove 1b48435c51ef45993c7c76155d6a29b828d4d396f6f8ea208371e99a4d8308c6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_jang, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Dec 06 09:45:23 compute-0 systemd[1]: libpod-conmon-1b48435c51ef45993c7c76155d6a29b828d4d396f6f8ea208371e99a4d8308c6.scope: Deactivated successfully.
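
The podman lines above (create → start → attach → died → remove inside ~150 ms, with stdout "167 167") are consistent with one of cephadm's short-lived probe containers: a one-shot command run in the ceph image whose output is collected and the container discarded; 167:167 is the uid/gid of the ceph user on these images. A minimal sketch of such a probe, assuming a stat-based uid/gid check (the exact command cephadm runs is not in the log):

    import subprocess

    # Image digest copied verbatim from the log above.
    IMAGE = "quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec"

    # One-shot container: print the ceph user's uid/gid and exit; podman
    # removes it afterwards, matching the created/died/removed lines above.
    out = subprocess.run(
        ["podman", "run", "--rm", "--entrypoint", "stat", IMAGE,
         "-c", "%u %g", "/var/lib/ceph"],
        capture_output=True, text=True, check=True)
    print(out.stdout.strip())  # expected "167 167", as logged by jovial_jang
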
Dec 06 09:45:23 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:45:23 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 09:45:23 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:45:23.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 09:45:23 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:45:23 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9210001090 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:45:23 compute-0 podman[112163]: 2025-12-06 09:45:23.258282873 +0000 UTC m=+0.024488881 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 06 09:45:23 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:45:23 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:45:23 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:45:23.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
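
The beast access-log lines here and throughout this section are anonymous "HEAD / HTTP/1.0" probes from 192.168.122.100 and 192.168.122.102 on a ~2 s cadence, which looks like load-balancer health checking of radosgw rather than client traffic. A small parser for this line format (the sample is copied from the log; the regex is an assumption about which fields stay stable):

    import re

    LINE = ('beast: 0x7f53e66225d0: 192.168.122.100 - anonymous '
            '[06/Dec/2025:09:45:23.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - '
            'latency=0.000000000s')

    m = re.search(r'beast: \S+: (\S+) - (\S+) \[([^\]]+)\] '
                  r'"([^"]+)" (\d+) (\d+).*latency=([\d.]+)s', LINE)
    client, user, ts, request, status, size, latency = m.groups()
    print(client, request, status, f"{float(latency) * 1000:.3f} ms")
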
Dec 06 09:45:23 compute-0 podman[112163]: 2025-12-06 09:45:23.441676134 +0000 UTC m=+0.207882132 container create 774a25210f1bbe7fa5bb344d82e3f1f313beab442beddfe5d996caad271b9d93 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_engelbart, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 09:45:23 compute-0 ceph-mon[74327]: pgmap v116: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:45:23 compute-0 systemd[1]: Started libpod-conmon-774a25210f1bbe7fa5bb344d82e3f1f313beab442beddfe5d996caad271b9d93.scope.
Dec 06 09:45:23 compute-0 systemd[1]: Started libcrun container.
Dec 06 09:45:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/30d2974db8a986fd96dcc7c95636657d41b23906f2449e56a081bc8071882242/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 09:45:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/30d2974db8a986fd96dcc7c95636657d41b23906f2449e56a081bc8071882242/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 09:45:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/30d2974db8a986fd96dcc7c95636657d41b23906f2449e56a081bc8071882242/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 09:45:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/30d2974db8a986fd96dcc7c95636657d41b23906f2449e56a081bc8071882242/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 09:45:23 compute-0 podman[112163]: 2025-12-06 09:45:23.535350021 +0000 UTC m=+0.301556029 container init 774a25210f1bbe7fa5bb344d82e3f1f313beab442beddfe5d996caad271b9d93 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_engelbart, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 09:45:23 compute-0 podman[112163]: 2025-12-06 09:45:23.547359611 +0000 UTC m=+0.313565579 container start 774a25210f1bbe7fa5bb344d82e3f1f313beab442beddfe5d996caad271b9d93 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_engelbart, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 09:45:23 compute-0 podman[112163]: 2025-12-06 09:45:23.55074384 +0000 UTC m=+0.316949818 container attach 774a25210f1bbe7fa5bb344d82e3f1f313beab442beddfe5d996caad271b9d93 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_engelbart, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325)
Dec 06 09:45:23 compute-0 ceph-mgr[74618]: [balancer INFO root] Optimize plan auto_2025-12-06_09:45:23
Dec 06 09:45:23 compute-0 ceph-mgr[74618]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 06 09:45:23 compute-0 ceph-mgr[74618]: [balancer INFO root] do_upmap
Dec 06 09:45:23 compute-0 ceph-mgr[74618]: [balancer INFO root] pools ['.nfs', 'cephfs.cephfs.meta', '.rgw.root', '.mgr', 'default.rgw.control', 'vms', 'volumes', 'images', 'backups', 'default.rgw.log', 'cephfs.cephfs.data', 'default.rgw.meta']
Dec 06 09:45:23 compute-0 ceph-mgr[74618]: [balancer INFO root] prepared 0/10 upmap changes
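
The balancer pass above ran in upmap mode with a 5% misplaced ceiling and prepared 0 of a possible 10 upmap changes, i.e. the 337 PGs are already balanced across the listed pools. The same state is queryable on demand; a sketch, assuming an admin keyring is available on this node:

    import json, subprocess

    # `ceph balancer status` reports the mode, whether the module is active,
    # and any outstanding plans -- the same state the mgr logged above.
    status = json.loads(subprocess.run(
        ["ceph", "balancer", "status", "--format", "json"],
        capture_output=True, text=True, check=True).stdout)
    print(status["mode"], status["active"])
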
Dec 06 09:45:23 compute-0 python3.9[112250]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 06 09:45:23 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v117: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Dec 06 09:45:23 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 09:45:23 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] _maybe_adjust
Dec 06 09:45:23 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 09:45:23 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 06 09:45:23 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 09:45:23 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 09:45:23 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 09:45:23 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 09:45:23 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 09:45:23 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 09:45:23 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 09:45:23 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 09:45:23 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 09:45:23 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Dec 06 09:45:23 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 09:45:23 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 09:45:23 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 09:45:23 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec 06 09:45:23 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 09:45:23 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 06 09:45:23 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 09:45:23 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec 06 09:45:23 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 09:45:23 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 09:45:23 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 09:45:23 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
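
Each pg_autoscaler line above is internally consistent: pg target = (fraction of space used) × bias × a per-cluster PG budget. Dividing any logged target by ratio × bias gives exactly 300, which would match 3 OSDs at the default mon_target_pg_per_osd of 100 (an inference from the 60 GiB cluster, not stated in the log). A worked check against three of the logged lines:

    # Recompute the autoscaler's logged pg targets from its logged inputs.
    POOL_PG_BUDGET = 300  # assumption: 3 OSDs x mon_target_pg_per_osd=100

    for pool, ratio, bias, logged in [
        (".mgr",               7.185749983720779e-06, 1.0, 0.0021557249951162337),
        ("cephfs.cephfs.meta", 5.087256625643029e-07, 4.0, 0.0006104707950771635),
        ("default.rgw.meta",   1.2718141564107572e-07, 4.0, 0.00015261769876929088),
    ]:
        target = ratio * bias * POOL_PG_BUDGET
        assert abs(target - logged) < 1e-12  # matches the log to float precision
        print(f"{pool}: pg target {target:.6g}")

The tiny targets are then quantized ("quantized to 1/16/32" above), which equals each pool's current pg_num, so no pool is resized.
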
Dec 06 09:45:23 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 06 09:45:23 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 09:45:23 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 09:45:23 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 09:45:23 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 09:45:23 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 09:45:23 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 09:45:23 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 09:45:23 compute-0 ceph-mgr[74618]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 06 09:45:23 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 09:45:23 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 09:45:23 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 09:45:23 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 09:45:24 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:45:24 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f921c004010 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:45:24 compute-0 lvm[112357]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 06 09:45:24 compute-0 lvm[112357]: VG ceph_vg0 finished
Dec 06 09:45:24 compute-0 youthful_engelbart[112255]: {}
Dec 06 09:45:24 compute-0 systemd[1]: libpod-774a25210f1bbe7fa5bb344d82e3f1f313beab442beddfe5d996caad271b9d93.scope: Deactivated successfully.
Dec 06 09:45:24 compute-0 systemd[1]: libpod-774a25210f1bbe7fa5bb344d82e3f1f313beab442beddfe5d996caad271b9d93.scope: Consumed 1.182s CPU time.
Dec 06 09:45:24 compute-0 podman[112163]: 2025-12-06 09:45:24.327909778 +0000 UTC m=+1.094115756 container died 774a25210f1bbe7fa5bb344d82e3f1f313beab442beddfe5d996caad271b9d93 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_engelbart, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 09:45:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 06 09:45:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 09:45:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 09:45:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 09:45:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 09:45:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-30d2974db8a986fd96dcc7c95636657d41b23906f2449e56a081bc8071882242-merged.mount: Deactivated successfully.
Dec 06 09:45:24 compute-0 podman[112163]: 2025-12-06 09:45:24.374082555 +0000 UTC m=+1.140288523 container remove 774a25210f1bbe7fa5bb344d82e3f1f313beab442beddfe5d996caad271b9d93 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_engelbart, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 09:45:24 compute-0 systemd[1]: libpod-conmon-774a25210f1bbe7fa5bb344d82e3f1f313beab442beddfe5d996caad271b9d93.scope: Deactivated successfully.
Dec 06 09:45:24 compute-0 sudo[111974]: pam_unix(sudo:session): session closed for user root
Dec 06 09:45:24 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 06 09:45:24 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:45:24 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 06 09:45:24 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:45:24 compute-0 ceph-mon[74327]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #24. Immutable memtables: 0.
Dec 06 09:45:24 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-09:45:24.468881) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 06 09:45:24 compute-0 ceph-mon[74327]: rocksdb: [db/flush_job.cc:856] [default] [JOB 7] Flushing memtable with next log file: 24
Dec 06 09:45:24 compute-0 ceph-mon[74327]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765014324468907, "job": 7, "event": "flush_started", "num_memtables": 1, "num_entries": 2731, "num_deletes": 252, "total_data_size": 7144487, "memory_usage": 7337744, "flush_reason": "Manual Compaction"}
Dec 06 09:45:24 compute-0 ceph-mon[74327]: rocksdb: [db/flush_job.cc:885] [default] [JOB 7] Level-0 flush table #25: started
Dec 06 09:45:24 compute-0 sudo[112395]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 09:45:24 compute-0 sudo[112395]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:45:24 compute-0 sudo[112395]: pam_unix(sudo:session): session closed for user root
Dec 06 09:45:24 compute-0 ceph-mon[74327]: pgmap v117: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Dec 06 09:45:24 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 09:45:24 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:45:24 compute-0 ceph-mon[74327]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765014324536525, "cf_name": "default", "job": 7, "event": "table_file_creation", "file_number": 25, "file_size": 6722042, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 8213, "largest_seqno": 10943, "table_properties": {"data_size": 6708894, "index_size": 8490, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3589, "raw_key_size": 31688, "raw_average_key_size": 22, "raw_value_size": 6680720, "raw_average_value_size": 4688, "num_data_blocks": 370, "num_entries": 1425, "num_filter_entries": 1425, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765014196, "oldest_key_time": 1765014196, "file_creation_time": 1765014324, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "423e8366-3852-4d2b-aa53-87abab31aff3", "db_session_id": "4WBX5WA2U4DRQ0QUUFCR", "orig_file_number": 25, "seqno_to_time_mapping": "N/A"}}
Dec 06 09:45:24 compute-0 ceph-mon[74327]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 7] Flush lasted 67786 microseconds, and 12382 cpu microseconds.
Dec 06 09:45:24 compute-0 ceph-mon[74327]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 09:45:24 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-09:45:24.536660) [db/flush_job.cc:967] [default] [JOB 7] Level-0 flush table #25: 6722042 bytes OK
Dec 06 09:45:24 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-09:45:24.536713) [db/memtable_list.cc:519] [default] Level-0 commit table #25 started
Dec 06 09:45:24 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-09:45:24.539334) [db/memtable_list.cc:722] [default] Level-0 commit table #25: memtable #1 done
Dec 06 09:45:24 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-09:45:24.539372) EVENT_LOG_v1 {"time_micros": 1765014324539362, "job": 7, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 06 09:45:24 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-09:45:24.539398) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 06 09:45:24 compute-0 ceph-mon[74327]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 7] Try to delete WAL files size 7131934, prev total WAL file size 7167784, number of live WAL files 2.
Dec 06 09:45:24 compute-0 ceph-mon[74327]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000021.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 09:45:24 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-09:45:24.542647) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F7300323531' seq:72057594037927935, type:22 .. '7061786F7300353033' seq:0, type:0; will stop at (end)
Dec 06 09:45:24 compute-0 ceph-mon[74327]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 8] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 06 09:45:24 compute-0 ceph-mon[74327]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 7 Base level 0, inputs: [25(6564KB)], [23(12MB)]
Dec 06 09:45:24 compute-0 ceph-mon[74327]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765014324542682, "job": 8, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [25], "files_L6": [23], "score": -1, "input_data_size": 19465119, "oldest_snapshot_seqno": -1}
Dec 06 09:45:24 compute-0 ceph-mon[74327]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 8] Generated table #26: 4122 keys, 14793120 bytes, temperature: kUnknown
Dec 06 09:45:24 compute-0 ceph-mon[74327]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765014324704689, "cf_name": "default", "job": 8, "event": "table_file_creation", "file_number": 26, "file_size": 14793120, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 14759382, "index_size": 22300, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10373, "raw_key_size": 105124, "raw_average_key_size": 25, "raw_value_size": 14677813, "raw_average_value_size": 3560, "num_data_blocks": 957, "num_entries": 4122, "num_filter_entries": 4122, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765013861, "oldest_key_time": 0, "file_creation_time": 1765014324, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "423e8366-3852-4d2b-aa53-87abab31aff3", "db_session_id": "4WBX5WA2U4DRQ0QUUFCR", "orig_file_number": 26, "seqno_to_time_mapping": "N/A"}}
Dec 06 09:45:24 compute-0 ceph-mon[74327]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 09:45:24 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-09:45:24.704970) [db/compaction/compaction_job.cc:1663] [default] [JOB 8] Compacted 1@0 + 1@6 files to L6 => 14793120 bytes
Dec 06 09:45:24 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-09:45:24.706605) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 120.1 rd, 91.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(6.4, 12.2 +0.0 blob) out(14.1 +0.0 blob), read-write-amplify(5.1) write-amplify(2.2) OK, records in: 4658, records dropped: 536 output_compression: NoCompression
Dec 06 09:45:24 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-09:45:24.706637) EVENT_LOG_v1 {"time_micros": 1765014324706622, "job": 8, "event": "compaction_finished", "compaction_time_micros": 162088, "compaction_time_cpu_micros": 31255, "output_level": 6, "num_output_files": 1, "total_output_size": 14793120, "num_input_records": 4658, "num_output_records": 4122, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 06 09:45:24 compute-0 ceph-mon[74327]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000025.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 09:45:24 compute-0 ceph-mon[74327]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765014324708539, "job": 8, "event": "table_file_deletion", "file_number": 25}
Dec 06 09:45:24 compute-0 ceph-mon[74327]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000023.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 09:45:24 compute-0 ceph-mon[74327]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765014324712535, "job": 8, "event": "table_file_deletion", "file_number": 23}
Dec 06 09:45:24 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-09:45:24.542582) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 09:45:24 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-09:45:24.712622) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 09:45:24 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-09:45:24.712629) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 09:45:24 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-09:45:24.712632) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 09:45:24 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-09:45:24.712634) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 09:45:24 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-09:45:24.712637) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
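
The compaction summary at 09:45:24.706605 (job 8) can be re-derived from the EVENT_LOG_v1 records above it, which is a quick way to sanity-check rocksdb's reported throughput and amplification figures:

    # Figures copied from the job-8 EVENT_LOG_v1 records and table #25 above.
    input_bytes  = 19_465_119   # "input_data_size" (L0 table #25 + L6 table #23)
    output_bytes = 14_793_120   # "total_output_size" (table #26)
    micros       = 162_088      # "compaction_time_micros"
    l0_bytes     = 6_722_042    # flushed table #25: the newly written data

    print(f"rd {input_bytes / micros:.1f} MB/s")    # ~120.1, as logged
    print(f"wr {output_bytes / micros:.1f} MB/s")   # ~91.3, as logged
    print(f"write-amplify {output_bytes / l0_bytes:.1f}")                       # ~2.2
    print(f"read-write-amplify {(input_bytes + output_bytes) / l0_bytes:.1f}")  # ~5.1
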
Dec 06 09:45:24 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:45:24 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f92240019e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:45:24 compute-0 sudo[112522]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-olhznomtwcjhkocosmxdnxywpjddmtni ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014324.4625466-68-253650900572344/AnsiballZ_getent.py'
Dec 06 09:45:24 compute-0 sudo[112522]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:45:25 compute-0 python3.9[112524]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Dec 06 09:45:25 compute-0 sudo[112522]: pam_unix(sudo:session): session closed for user root
Dec 06 09:45:25 compute-0 sudo[112527]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 09:45:25 compute-0 sudo[112527]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:45:25 compute-0 sudo[112527]: pam_unix(sudo:session): session closed for user root
Dec 06 09:45:25 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:45:25 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:45:25 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:45:25.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:45:25 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:45:25 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9218004610 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:45:25 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:45:25 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:45:25 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:45:25.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:45:25 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:45:25 compute-0 sudo[112702]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-feigsiszulsmxbvrjactqwwzwzoqsdwa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014325.4810205-104-219118566061136/AnsiballZ_setup.py'
Dec 06 09:45:25 compute-0 sudo[112702]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:45:25 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v118: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec 06 09:45:26 compute-0 python3.9[112704]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 06 09:45:26 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:45:26 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9210001090 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:45:26 compute-0 sudo[112702]: pam_unix(sudo:session): session closed for user root
Dec 06 09:45:26 compute-0 ceph-mon[74327]: pgmap v118: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec 06 09:45:26 compute-0 sudo[112786]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wsjoqniqcdivjpbkkbeimefceugivaat ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014325.4810205-104-219118566061136/AnsiballZ_dnf.py'
Dec 06 09:45:26 compute-0 sudo[112786]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:45:26 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:45:26 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f921c004010 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:45:26 compute-0 python3.9[112788]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Dec 06 09:45:26 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:45:26.954Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec 06 09:45:26 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:45:26.955Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
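
Both alertmanager messages above are delivery failures of the ceph-dashboard webhook receiver: the posts to compute-1/compute-2 on port 8443 time out at the TCP level ("dial tcp ... i/o timeout") and the retries are then cancelled. A minimal reachability probe for those endpoints (hostnames and port are verbatim from the log; the 5 s timeout is an arbitrary choice for the probe):

    import socket

    for host in ("compute-1.ctlplane.example.com",
                 "compute-2.ctlplane.example.com"):
        try:
            socket.create_connection((host, 8443), timeout=5).close()
            print(host, "port 8443 reachable")
        except OSError as exc:
            print(host, "port 8443 unreachable:", exc)
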
Dec 06 09:45:27 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:45:27 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:45:27 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:45:27.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:45:27 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:45:27 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f921c004010 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:45:27 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:45:27 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 09:45:27 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:45:27.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 09:45:27 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:45:27 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 06 09:45:27 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v119: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 597 B/s wr, 1 op/s
Dec 06 09:45:28 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:45:28 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9218004610 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:45:28 compute-0 sudo[112786]: pam_unix(sudo:session): session closed for user root
Dec 06 09:45:28 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:45:28 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9210004370 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:45:28 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 09:45:28 compute-0 ceph-mon[74327]: pgmap v119: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 597 B/s wr, 1 op/s
Dec 06 09:45:29 compute-0 sudo[112942]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jnzwicdgtdoqaliuwqfcyqyohfiwipin ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014328.7908556-146-44638445248831/AnsiballZ_dnf.py'
Dec 06 09:45:29 compute-0 sudo[112942]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:45:29 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:45:29 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:45:29 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:45:29.250 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:45:29 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:45:29 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f921c004010 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:45:29 compute-0 python3.9[112944]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 06 09:45:29 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:45:29 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 09:45:29 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:45:29.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 09:45:29 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v120: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Dec 06 09:45:30 compute-0 kernel: ganesha.nfsd[107241]: segfault at 50 ip 00007f92fb6f332e sp 00007f92b3ffe210 error 4 in libntirpc.so.5.8[7f92fb6d8000+2c000] likely on CPU 2 (core 0, socket 2)
Dec 06 09:45:30 compute-0 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Dec 06 09:45:30 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[95697]: 06/12/2025 09:45:30 : epoch 6933fa72 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f921c004010 fd 48 proxy ignored for local
Dec 06 09:45:30 compute-0 systemd[1]: Created slice Slice /system/systemd-coredump.
Dec 06 09:45:30 compute-0 systemd[1]: Started Process Core Dump (PID 112947/UID 0).
Dec 06 09:45:30 compute-0 sudo[112942]: pam_unix(sudo:session): session closed for user root
Dec 06 09:45:30 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:09:45:30] "GET /metrics HTTP/1.1" 200 48252 "" "Prometheus/2.51.0"
Dec 06 09:45:30 compute-0 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:09:45:30] "GET /metrics HTTP/1.1" 200 48252 "" "Prometheus/2.51.0"
Dec 06 09:45:30 compute-0 ceph-mon[74327]: pgmap v120: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Dec 06 09:45:31 compute-0 systemd-coredump[112948]: Process 95701 (ganesha.nfsd) of user 0 dumped core.
                                                    
                                                    Stack trace of thread 66:
                                                    #0  0x00007f92fb6f332e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)
                                                    ELF object binary architecture: AMD x86-64
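
The kernel "Code:" dump at 09:45:30 marks the faulting instruction with <..>; decoding the marked bytes shows a 4-byte load through r13 at offset 0x50, so "segfault at 50" means r13 was NULL: a NULL-pointer dereference at libntirpc.so.5.8+0x2232e is what killed ganesha.nfsd. A decode of the logged bytes (requires the capstone bindings, pip install capstone; the truncated trailing "e8 29" call is omitted):

    from capstone import Cs, CS_ARCH_X86, CS_MODE_64

    # Bytes from the "Code:" line, starting at the <45> marker (= the RIP
    # logged in the segfault line, 0x7f92fb6f332e).
    code = bytes.fromhex("458b6550498b7568418bbe28020000b940000000")

    md = Cs(CS_ARCH_X86, CS_MODE_64)
    for insn in md.disasm(code, 0x7F92FB6F332E):
        print(f"{insn.address:#x}  {insn.mnemonic} {insn.op_str}")
    # first line: 0x7f92fb6f332e  mov r12d, dword ptr [r13 + 0x50]

Since systemd-coredump captured the dump (PID 95701 above), `coredumpctl info 95701`, or `coredumpctl debug 95701` with debuginfo installed, would give the symbolized backtrace that the "n/a" frame above lacks.
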
Dec 06 09:45:31 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:45:31 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:45:31 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:45:31.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:45:31 compute-0 systemd[1]: systemd-coredump@0-112947-0.service: Deactivated successfully.
Dec 06 09:45:31 compute-0 systemd[1]: systemd-coredump@0-112947-0.service: Consumed 1.070s CPU time.
Dec 06 09:45:31 compute-0 podman[113031]: 2025-12-06 09:45:31.356515381 +0000 UTC m=+0.031199780 container died f137658eeed93d56ee9d8ac7b6445e7acce26a24ed156c5e4e3e69a13e4abbd2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 06 09:45:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-d86bdd0ce43374acaba3604e849843759161821197ac361242f7e120fad089e4-merged.mount: Deactivated successfully.
Dec 06 09:45:31 compute-0 podman[113031]: 2025-12-06 09:45:31.398349702 +0000 UTC m=+0.073034091 container remove f137658eeed93d56ee9d8ac7b6445e7acce26a24ed156c5e4e3e69a13e4abbd2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Dec 06 09:45:31 compute-0 systemd[1]: ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258@nfs.cephfs.2.0.compute-0.dfwxck.service: Main process exited, code=exited, status=139/n/a
Dec 06 09:45:31 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:45:31 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:45:31 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:45:31.415 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:45:31 compute-0 systemd[1]: ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258@nfs.cephfs.2.0.compute-0.dfwxck.service: Failed with result 'exit-code'.
Dec 06 09:45:31 compute-0 systemd[1]: ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258@nfs.cephfs.2.0.compute-0.dfwxck.service: Consumed 2.105s CPU time.
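
The unit failure above follows directly from the segfault: ganesha died on SIGSEGV inside the container, and the container runtime surfaced that to systemd as the shell-style exit code 128 + signo:

    import signal

    # "status=139" in the systemd message above is 128 + SIGSEGV(11).
    assert 128 + signal.SIGSEGV == 139
    print("status=139 == 128 + SIGSEGV")
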
Dec 06 09:45:31 compute-0 sudo[113148]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hxeduolxyelnwxowwygdiuhjecjlgaae ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014331.0043192-170-272265111092769/AnsiballZ_systemd.py'
Dec 06 09:45:31 compute-0 sudo[113148]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:45:31 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v121: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Dec 06 09:45:31 compute-0 python3.9[113151]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec 06 09:45:31 compute-0 sudo[113148]: pam_unix(sudo:session): session closed for user root
Dec 06 09:45:32 compute-0 python3.9[113304]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 06 09:45:32 compute-0 ceph-mon[74327]: pgmap v121: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Dec 06 09:45:33 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:45:33 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:45:33 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:45:33.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:45:33 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:45:33 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 09:45:33 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:45:33.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 09:45:33 compute-0 sudo[113456]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rysoixehfvxswbvzargziipgdohuacjb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014333.1391466-224-246996513176256/AnsiballZ_sefcontext.py'
Dec 06 09:45:33 compute-0 sudo[113456]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:45:33 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v122: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 852 B/s wr, 2 op/s
Dec 06 09:45:33 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 09:45:33 compute-0 python3.9[113458]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Dec 06 09:45:34 compute-0 sudo[113456]: pam_unix(sudo:session): session closed for user root
Dec 06 09:45:34 compute-0 python3.9[113608]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 06 09:45:34 compute-0 ceph-mon[74327]: pgmap v122: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 852 B/s wr, 2 op/s
Dec 06 09:45:35 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:45:35 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:45:35 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:45:35.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:45:35 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:45:35 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:45:35 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:45:35.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:45:35 compute-0 sudo[113766]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tdmqfyywphzfvgfhkkjxmxtqrcjplrcs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014335.354308-278-177269525409532/AnsiballZ_dnf.py'
Dec 06 09:45:35 compute-0 sudo[113766]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:45:35 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v123: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 852 B/s wr, 2 op/s
Dec 06 09:45:35 compute-0 python3.9[113768]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 06 09:45:36 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-haproxy-nfs-cephfs-compute-0-fzuvue[96127]: [WARNING] 339/094536 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Dec 06 09:45:36 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:45:36.955Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec 06 09:45:36 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:45:36.956Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 06 09:45:36 compute-0 ceph-mon[74327]: pgmap v123: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 852 B/s wr, 2 op/s
Dec 06 09:45:37 compute-0 sudo[113766]: pam_unix(sudo:session): session closed for user root
Dec 06 09:45:37 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:45:37 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:45:37 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:45:37.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:45:37 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:45:37 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 09:45:37 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:45:37.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 09:45:37 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v124: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Dec 06 09:45:37 compute-0 sudo[113921]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iswdwqurosdumhkycsfahoiqnsrjsbap ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014337.4731019-302-154021649944789/AnsiballZ_command.py'
Dec 06 09:45:37 compute-0 sudo[113921]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:45:38 compute-0 python3.9[113923]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 09:45:38 compute-0 sudo[113921]: pam_unix(sudo:session): session closed for user root
Dec 06 09:45:38 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 09:45:38 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 06 09:45:38 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 09:45:39 compute-0 ceph-mon[74327]: pgmap v124: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Dec 06 09:45:39 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 09:45:39 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:45:39 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:45:39 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:45:39.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:45:39 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:45:39 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:45:39 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:45:39.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:45:39 compute-0 sudo[114210]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bwkwoliszqzncyjaaklgjuwbjmyqikvr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014339.0512314-326-120023760353639/AnsiballZ_file.py'
Dec 06 09:45:39 compute-0 sudo[114210]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:45:39 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-haproxy-nfs-cephfs-compute-0-fzuvue[96127]: [WARNING] 339/094539 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Dec 06 09:45:39 compute-0 python3.9[114212]: ansible-ansible.builtin.file Invoked with mode=0750 path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Dec 06 09:45:39 compute-0 sudo[114210]: pam_unix(sudo:session): session closed for user root
Dec 06 09:45:39 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v125: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 341 B/s wr, 1 op/s
Dec 06 09:45:40 compute-0 python3.9[114362]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 06 09:45:40 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:09:45:40] "GET /metrics HTTP/1.1" 200 48265 "" "Prometheus/2.51.0"
Dec 06 09:45:40 compute-0 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:09:45:40] "GET /metrics HTTP/1.1" 200 48265 "" "Prometheus/2.51.0"
Dec 06 09:45:41 compute-0 sudo[114515]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rlhvfhqeqqlwnpeaxdeceksyahelmlxs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014340.7459762-374-79466501668540/AnsiballZ_dnf.py'
Dec 06 09:45:41 compute-0 sudo[114515]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:45:41 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:45:41 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:45:41 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:45:41.267 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:45:41 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:45:41 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:45:41 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:45:41.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:45:41 compute-0 ceph-mon[74327]: pgmap v125: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 341 B/s wr, 1 op/s
Dec 06 09:45:41 compute-0 python3.9[114517]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 06 09:45:41 compute-0 systemd[1]: ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258@nfs.cephfs.2.0.compute-0.dfwxck.service: Scheduled restart job, restart counter is at 1.
Dec 06 09:45:41 compute-0 systemd[1]: Stopped Ceph nfs.cephfs.2.0.compute-0.dfwxck for 5ecd3f74-dade-5fc4-92ce-8950ae424258.
Dec 06 09:45:41 compute-0 systemd[1]: ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258@nfs.cephfs.2.0.compute-0.dfwxck.service: Consumed 2.105s CPU time.
Dec 06 09:45:41 compute-0 systemd[1]: Starting Ceph nfs.cephfs.2.0.compute-0.dfwxck for 5ecd3f74-dade-5fc4-92ce-8950ae424258...
Dec 06 09:45:41 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v126: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 341 B/s wr, 1 op/s
Dec 06 09:45:41 compute-0 podman[114568]: 2025-12-06 09:45:41.811791382 +0000 UTC m=+0.026448382 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 06 09:45:42 compute-0 podman[114568]: 2025-12-06 09:45:42.715069851 +0000 UTC m=+0.929726861 container create 71b960c2881dae640010500027e5d1d1d92a7645608040694f265079ad808565 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 09:45:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aabb0cde1bea459da73db8d32b6287e894193b4e586a1a5b19dc4de874364fbb/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Dec 06 09:45:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aabb0cde1bea459da73db8d32b6287e894193b4e586a1a5b19dc4de874364fbb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 09:45:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aabb0cde1bea459da73db8d32b6287e894193b4e586a1a5b19dc4de874364fbb/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Dec 06 09:45:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aabb0cde1bea459da73db8d32b6287e894193b4e586a1a5b19dc4de874364fbb/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.2.0.compute-0.dfwxck-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Dec 06 09:45:42 compute-0 podman[114568]: 2025-12-06 09:45:42.801075254 +0000 UTC m=+1.015732244 container init 71b960c2881dae640010500027e5d1d1d92a7645608040694f265079ad808565 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 06 09:45:42 compute-0 podman[114568]: 2025-12-06 09:45:42.80768185 +0000 UTC m=+1.022338820 container start 71b960c2881dae640010500027e5d1d1d92a7645608040694f265079ad808565 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 09:45:42 compute-0 bash[114568]: 71b960c2881dae640010500027e5d1d1d92a7645608040694f265079ad808565
Dec 06 09:45:42 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[114583]: 06/12/2025 09:45:42 : epoch 6933fb46 : compute-0 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Dec 06 09:45:42 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[114583]: 06/12/2025 09:45:42 : epoch 6933fb46 : compute-0 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Dec 06 09:45:42 compute-0 systemd[1]: Started Ceph nfs.cephfs.2.0.compute-0.dfwxck for 5ecd3f74-dade-5fc4-92ce-8950ae424258.
Dec 06 09:45:42 compute-0 ceph-mon[74327]: pgmap v126: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 341 B/s wr, 1 op/s
Dec 06 09:45:42 compute-0 sudo[114515]: pam_unix(sudo:session): session closed for user root
Dec 06 09:45:42 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[114583]: 06/12/2025 09:45:42 : epoch 6933fb46 : compute-0 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Dec 06 09:45:42 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[114583]: 06/12/2025 09:45:42 : epoch 6933fb46 : compute-0 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Dec 06 09:45:42 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[114583]: 06/12/2025 09:45:42 : epoch 6933fb46 : compute-0 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Dec 06 09:45:42 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[114583]: 06/12/2025 09:45:42 : epoch 6933fb46 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Dec 06 09:45:42 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[114583]: 06/12/2025 09:45:42 : epoch 6933fb46 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Dec 06 09:45:42 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[114583]: 06/12/2025 09:45:42 : epoch 6933fb46 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 06 09:45:43 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:45:43 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 09:45:43 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:45:43.269 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 09:45:43 compute-0 sudo[114776]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bpjdookzyayqmfdwzvphmzqosfxfdvit ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014343.1199336-401-126330867971022/AnsiballZ_dnf.py'
Dec 06 09:45:43 compute-0 sudo[114776]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:45:43 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:45:43 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:45:43 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:45:43.427 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:45:43 compute-0 python3.9[114778]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 06 09:45:43 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v127: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 341 B/s wr, 1 op/s
Dec 06 09:45:43 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 09:45:44 compute-0 ceph-mon[74327]: pgmap v127: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 341 B/s wr, 1 op/s
Dec 06 09:45:45 compute-0 sudo[114776]: pam_unix(sudo:session): session closed for user root
Dec 06 09:45:45 compute-0 sudo[114781]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 09:45:45 compute-0 sudo[114781]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:45:45 compute-0 sudo[114781]: pam_unix(sudo:session): session closed for user root
Dec 06 09:45:45 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:45:45 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 09:45:45 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:45:45.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 09:45:45 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:45:45 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 09:45:45 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:45:45.429 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 09:45:45 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v128: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Dec 06 09:45:45 compute-0 sudo[114956]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eubfddvbzfltrjlqyechppddupjcqyta ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014345.609196-437-147808713212497/AnsiballZ_stat.py'
Dec 06 09:45:45 compute-0 sudo[114956]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:45:46 compute-0 python3.9[114958]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 06 09:45:46 compute-0 sudo[114956]: pam_unix(sudo:session): session closed for user root
Dec 06 09:45:46 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:45:46.957Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 06 09:45:47 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:45:47 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:45:47 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:45:47.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:45:47 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:45:47 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:45:47 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:45:47.431 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:45:47 compute-0 sudo[115112]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hnjhztsudblaktylbmxudjgpejagnlmc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014346.6874876-461-10146264935196/AnsiballZ_slurp.py'
Dec 06 09:45:47 compute-0 sudo[115112]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:45:47 compute-0 ceph-mon[74327]: pgmap v128: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Dec 06 09:45:47 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v129: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 682 B/s wr, 2 op/s
Dec 06 09:45:47 compute-0 python3.9[115114]: ansible-ansible.builtin.slurp Invoked with path=/var/lib/edpm-config/os-net-config.returncode src=/var/lib/edpm-config/os-net-config.returncode
Dec 06 09:45:47 compute-0 sudo[115112]: pam_unix(sudo:session): session closed for user root
Dec 06 09:45:48 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 09:45:48 compute-0 ceph-mon[74327]: pgmap v129: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 682 B/s wr, 2 op/s
Dec 06 09:45:49 compute-0 sshd-session[112002]: Connection closed by 192.168.122.30 port 36922
Dec 06 09:45:49 compute-0 sshd-session[111998]: pam_unix(sshd:session): session closed for user zuul
Dec 06 09:45:49 compute-0 systemd[1]: session-40.scope: Deactivated successfully.
Dec 06 09:45:49 compute-0 systemd[1]: session-40.scope: Consumed 18.394s CPU time.
Dec 06 09:45:49 compute-0 systemd-logind[795]: Session 40 logged out. Waiting for processes to exit.
Dec 06 09:45:49 compute-0 systemd-logind[795]: Removed session 40.
Dec 06 09:45:49 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:45:49 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:45:49 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:45:49.279 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:45:49 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:45:49 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 09:45:49 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:45:49.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 09:45:49 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[114583]: 06/12/2025 09:45:49 : epoch 6933fb46 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 06 09:45:49 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[114583]: 06/12/2025 09:45:49 : epoch 6933fb46 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 06 09:45:49 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v130: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 597 B/s wr, 1 op/s
Dec 06 09:45:50 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:09:45:50] "GET /metrics HTTP/1.1" 200 48265 "" "Prometheus/2.51.0"
Dec 06 09:45:50 compute-0 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:09:45:50] "GET /metrics HTTP/1.1" 200 48265 "" "Prometheus/2.51.0"
Dec 06 09:45:51 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:45:51 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:45:51 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:45:51.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:45:51 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:45:51 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:45:51 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:45:51.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:45:51 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v131: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 597 B/s wr, 1 op/s
Dec 06 09:45:52 compute-0 ceph-mon[74327]: pgmap v130: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 597 B/s wr, 1 op/s
Dec 06 09:45:53 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:45:53 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:45:53 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:45:53.284 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:45:53 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:45:53 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:45:53 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:45:53.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:45:53 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v132: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Dec 06 09:45:53 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 09:45:53 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 06 09:45:53 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 09:45:53 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 09:45:53 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 09:45:53 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 09:45:53 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 09:45:53 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 09:45:53 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 09:45:55 compute-0 ceph-mon[74327]: pgmap v131: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 597 B/s wr, 1 op/s
Dec 06 09:45:55 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:45:55 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:45:55 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:45:55.288 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:45:55 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:45:55 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 09:45:55 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:45:55.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 09:45:55 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v133: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Dec 06 09:45:56 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[114583]: 06/12/2025 09:45:56 : epoch 6933fb46 : compute-0 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Dec 06 09:45:56 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[114583]: 06/12/2025 09:45:56 : epoch 6933fb46 : compute-0 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Dec 06 09:45:56 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[114583]: 06/12/2025 09:45:56 : epoch 6933fb46 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Dec 06 09:45:56 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[114583]: 06/12/2025 09:45:56 : epoch 6933fb46 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Dec 06 09:45:56 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[114583]: 06/12/2025 09:45:56 : epoch 6933fb46 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Dec 06 09:45:56 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[114583]: 06/12/2025 09:45:56 : epoch 6933fb46 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Dec 06 09:45:56 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[114583]: 06/12/2025 09:45:56 : epoch 6933fb46 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Dec 06 09:45:56 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[114583]: 06/12/2025 09:45:56 : epoch 6933fb46 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec 06 09:45:56 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[114583]: 06/12/2025 09:45:56 : epoch 6933fb46 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec 06 09:45:56 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[114583]: 06/12/2025 09:45:56 : epoch 6933fb46 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec 06 09:45:56 compute-0 ceph-mon[74327]: pgmap v132: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Dec 06 09:45:56 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 09:45:56 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[114583]: 06/12/2025 09:45:56 : epoch 6933fb46 : compute-0 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Dec 06 09:45:56 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[114583]: 06/12/2025 09:45:56 : epoch 6933fb46 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec 06 09:45:56 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[114583]: 06/12/2025 09:45:56 : epoch 6933fb46 : compute-0 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Dec 06 09:45:56 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[114583]: 06/12/2025 09:45:56 : epoch 6933fb46 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Dec 06 09:45:56 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[114583]: 06/12/2025 09:45:56 : epoch 6933fb46 : compute-0 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Dec 06 09:45:56 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[114583]: 06/12/2025 09:45:56 : epoch 6933fb46 : compute-0 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Dec 06 09:45:56 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[114583]: 06/12/2025 09:45:56 : epoch 6933fb46 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Dec 06 09:45:56 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[114583]: 06/12/2025 09:45:56 : epoch 6933fb46 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Dec 06 09:45:56 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[114583]: 06/12/2025 09:45:56 : epoch 6933fb46 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Dec 06 09:45:56 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[114583]: 06/12/2025 09:45:56 : epoch 6933fb46 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Dec 06 09:45:56 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[114583]: 06/12/2025 09:45:56 : epoch 6933fb46 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Dec 06 09:45:56 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[114583]: 06/12/2025 09:45:56 : epoch 6933fb46 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Dec 06 09:45:56 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[114583]: 06/12/2025 09:45:56 : epoch 6933fb46 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Dec 06 09:45:56 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[114583]: 06/12/2025 09:45:56 : epoch 6933fb46 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Dec 06 09:45:56 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[114583]: 06/12/2025 09:45:56 : epoch 6933fb46 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Dec 06 09:45:56 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[114583]: 06/12/2025 09:45:56 : epoch 6933fb46 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Dec 06 09:45:56 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[114583]: 06/12/2025 09:45:56 : epoch 6933fb46 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Dec 06 09:45:56 compute-0 sshd-session[115147]: Received disconnect from 193.46.255.33 port 24296:11:  [preauth]
Dec 06 09:45:56 compute-0 sshd-session[115147]: Disconnected from authenticating user root 193.46.255.33 port 24296 [preauth]
Dec 06 09:45:56 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[114583]: 06/12/2025 09:45:56 : epoch 6933fb46 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe0e4000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:45:56 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:45:56.958Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 06 09:45:57 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:45:57 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:45:57 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:45:57.291 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:45:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[114583]: 06/12/2025 09:45:57 : epoch 6933fb46 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe0d0000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:45:57 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:45:57 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 09:45:57 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:45:57.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 09:45:57 compute-0 sshd-session[115166]: Accepted publickey for zuul from 192.168.122.30 port 34410 ssh2: ECDSA SHA256:r1j7aLsKAM+XxDNbzEU5vWGpGNCOaIBwc7FZdATPttA
Dec 06 09:45:57 compute-0 systemd-logind[795]: New session 41 of user zuul.
Dec 06 09:45:57 compute-0 systemd[1]: Started Session 41 of User zuul.
Dec 06 09:45:57 compute-0 sshd-session[115166]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 06 09:45:57 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v134: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 1023 B/s wr, 3 op/s
Dec 06 09:45:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[114583]: 06/12/2025 09:45:58 : epoch 6933fb46 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe0c0000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:45:58 compute-0 ceph-mon[74327]: pgmap v133: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Dec 06 09:45:58 compute-0 python3.9[115319]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 06 09:45:58 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 09:45:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[114583]: 06/12/2025 09:45:58 : epoch 6933fb46 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe0b8000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:45:59 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:45:59 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:45:59 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:45:59.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:45:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[114583]: 06/12/2025 09:45:59 : epoch 6933fb46 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe0c4000fa0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:45:59 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:45:59 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 09:45:59 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:45:59.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 09:45:59 compute-0 python3.9[115475]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 06 09:45:59 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v135: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Dec 06 09:46:00 compute-0 ceph-mon[74327]: pgmap v134: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 1023 B/s wr, 3 op/s
Dec 06 09:46:00 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-haproxy-nfs-cephfs-compute-0-fzuvue[96127]: [WARNING] 339/094600 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Dec 06 09:46:00 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[114583]: 06/12/2025 09:46:00 : epoch 6933fb46 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe0d0001680 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:46:00 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[114583]: 06/12/2025 09:46:00 : epoch 6933fb46 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe0c00016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:46:00 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:09:46:00] "GET /metrics HTTP/1.1" 200 48275 "" "Prometheus/2.51.0"
Dec 06 09:46:00 compute-0 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:09:46:00] "GET /metrics HTTP/1.1" 200 48275 "" "Prometheus/2.51.0"
Dec 06 09:46:01 compute-0 python3.9[115668]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 09:46:01 compute-0 ceph-mon[74327]: pgmap v135: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Dec 06 09:46:01 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:46:01 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 09:46:01 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:46:01.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 09:46:01 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[114583]: 06/12/2025 09:46:01 : epoch 6933fb46 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe0b80016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:46:01 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:46:01 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:46:01 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:46:01.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:46:01 compute-0 sshd-session[115169]: Connection closed by 192.168.122.30 port 34410
Dec 06 09:46:01 compute-0 sshd-session[115166]: pam_unix(sshd:session): session closed for user zuul
Dec 06 09:46:01 compute-0 systemd[1]: session-41.scope: Deactivated successfully.
Dec 06 09:46:01 compute-0 systemd[1]: session-41.scope: Consumed 2.851s CPU time.
Dec 06 09:46:01 compute-0 systemd-logind[795]: Session 41 logged out. Waiting for processes to exit.
Dec 06 09:46:01 compute-0 systemd-logind[795]: Removed session 41.
Dec 06 09:46:01 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v136: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Dec 06 09:46:02 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[114583]: 06/12/2025 09:46:02 : epoch 6933fb46 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe0c4001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:46:02 compute-0 ceph-mon[74327]: pgmap v136: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Dec 06 09:46:02 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[114583]: 06/12/2025 09:46:02 : epoch 6933fb46 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe0d0001680 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:46:03 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:46:03 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 09:46:03 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:46:03.299 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 09:46:03 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[114583]: 06/12/2025 09:46:03 : epoch 6933fb46 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe0c00016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:46:03 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:46:03 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 09:46:03 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:46:03.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 09:46:03 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v137: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Dec 06 09:46:03 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 09:46:04 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[114583]: 06/12/2025 09:46:04 : epoch 6933fb46 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe0b80016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:46:04 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[114583]: 06/12/2025 09:46:04 : epoch 6933fb46 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe0c4001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:46:05 compute-0 ceph-mon[74327]: pgmap v137: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Dec 06 09:46:05 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:46:05 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:46:05 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:46:05.301 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:46:05 compute-0 sudo[115700]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 09:46:05 compute-0 sudo[115700]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:46:05 compute-0 sudo[115700]: pam_unix(sudo:session): session closed for user root
Dec 06 09:46:05 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[114583]: 06/12/2025 09:46:05 : epoch 6933fb46 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe0d0001680 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:46:05 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:46:05 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:46:05 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:46:05.454 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:46:05 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v138: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Dec 06 09:46:06 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[114583]: 06/12/2025 09:46:06 : epoch 6933fb46 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe0c00016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:46:06 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[114583]: 06/12/2025 09:46:06 : epoch 6933fb46 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe0b80016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:46:06 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:46:06.959Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 06 09:46:07 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:46:07 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:46:07 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:46:07.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:46:07 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[114583]: 06/12/2025 09:46:07 : epoch 6933fb46 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe0c4001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:46:07 compute-0 ceph-mon[74327]: pgmap v138: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Dec 06 09:46:07 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:46:07 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 09:46:07 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:46:07.455 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 09:46:07 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v139: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Dec 06 09:46:08 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[114583]: 06/12/2025 09:46:08 : epoch 6933fb46 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe0d0001680 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:46:08 compute-0 sshd-session[115727]: Accepted publickey for zuul from 192.168.122.30 port 34474 ssh2: ECDSA SHA256:r1j7aLsKAM+XxDNbzEU5vWGpGNCOaIBwc7FZdATPttA
Dec 06 09:46:08 compute-0 systemd-logind[795]: New session 42 of user zuul.
Dec 06 09:46:08 compute-0 systemd[1]: Started Session 42 of User zuul.
Dec 06 09:46:08 compute-0 sshd-session[115727]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 06 09:46:08 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 09:46:08 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[114583]: 06/12/2025 09:46:08 : epoch 6933fb46 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe0c0002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:46:08 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 06 09:46:08 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 09:46:09 compute-0 ceph-mon[74327]: pgmap v139: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Dec 06 09:46:09 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:46:09 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:46:09 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:46:09.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:46:09 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[114583]: 06/12/2025 09:46:09 : epoch 6933fb46 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe0b8002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:46:09 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:46:09 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 09:46:09 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:46:09.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 09:46:09 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v140: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:46:09 compute-0 python3.9[115882]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 06 09:46:10 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 09:46:10 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[114583]: 06/12/2025 09:46:10 : epoch 6933fb46 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe0c4002f50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:46:10 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[114583]: 06/12/2025 09:46:10 : epoch 6933fb46 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe0d0001680 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:46:10 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:09:46:10] "GET /metrics HTTP/1.1" 200 48263 "" "Prometheus/2.51.0"
Dec 06 09:46:10 compute-0 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:09:46:10] "GET /metrics HTTP/1.1" 200 48263 "" "Prometheus/2.51.0"
Dec 06 09:46:11 compute-0 python3.9[116036]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 06 09:46:11 compute-0 ceph-mon[74327]: pgmap v140: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:46:11 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:46:11 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 09:46:11 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:46:11.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 09:46:11 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[114583]: 06/12/2025 09:46:11 : epoch 6933fb46 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe0c0002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:46:11 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:46:11 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:46:11 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:46:11.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:46:11 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v141: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:46:11 compute-0 sudo[116192]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aswyuqiuilqmvcxyeljmdpufvthgxbsi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014371.5612638-80-155705051429617/AnsiballZ_setup.py'
Dec 06 09:46:11 compute-0 sudo[116192]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
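
The sudo COMMAND lines here follow Ansible's become pattern: the shell first echoes a random BECOME-SUCCESS marker, and only when the controller sees that marker on stdout does it treat privilege escalation as successful before the AnsiballZ module payload runs. A rough reconstruction of that wrapper (marker format and paths are illustrative, not copied from Ansible's source):

    import secrets, shlex

    # Build the same sudo wrapper shape visible in the journal above.
    marker = "BECOME-SUCCESS-" + secrets.token_hex(16)  # random per task
    payload = "/usr/bin/python3.9 /path/to/AnsiballZ_setup.py"  # placeholder
    cmd = ["sudo", "/bin/sh", "-c", f"echo {marker} ; {payload}"]
    print(shlex.join(cmd))  # run with subprocess.run(cmd) on a real host
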
Dec 06 09:46:12 compute-0 python3.9[116194]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 06 09:46:12 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[114583]: 06/12/2025 09:46:12 : epoch 6933fb46 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe0c0002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:46:12 compute-0 sudo[116192]: pam_unix(sudo:session): session closed for user root
Dec 06 09:46:12 compute-0 sudo[116276]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pqrxisuwmprldsesxeinwxcuyvhpwjyg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014371.5612638-80-155705051429617/AnsiballZ_dnf.py'
Dec 06 09:46:12 compute-0 sudo[116276]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:46:12 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[114583]: 06/12/2025 09:46:12 : epoch 6933fb46 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe0c4002f50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:46:13 compute-0 python3.9[116278]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
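
On the host, the ansible.legacy.dnf task above reduces to a package install; the module adds idempotence checks and structured results on top. A minimal equivalent (sketch):

    import subprocess

    # state=present for name=['podman'], as recorded in the invocation.
    subprocess.run(["dnf", "install", "-y", "podman"], check=True)
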
Dec 06 09:46:13 compute-0 ceph-mon[74327]: pgmap v141: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:46:13 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:46:13 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:46:13 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:46:13.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:46:13 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[114583]: 06/12/2025 09:46:13 : epoch 6933fb46 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe0d0001680 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:46:13 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:46:13 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 09:46:13 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:46:13.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 09:46:13 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v142: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:46:13 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 09:46:14 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[114583]: 06/12/2025 09:46:14 : epoch 6933fb46 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe0c0002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:46:14 compute-0 sudo[116276]: pam_unix(sudo:session): session closed for user root
Dec 06 09:46:14 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[114583]: 06/12/2025 09:46:14 : epoch 6933fb46 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe0c0002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:46:15 compute-0 sudo[116432]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yypcxhooywbhegawlkwvhnpegmcrzgdq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014374.6775854-116-84376116672868/AnsiballZ_setup.py'
Dec 06 09:46:15 compute-0 sudo[116432]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:46:15 compute-0 ceph-mon[74327]: pgmap v142: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:46:15 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:46:15 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:46:15 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:46:15.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:46:15 compute-0 python3.9[116434]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 06 09:46:15 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[114583]: 06/12/2025 09:46:15 : epoch 6933fb46 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe0c4002f50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:46:15 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:46:15 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 09:46:15 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:46:15.463 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 09:46:15 compute-0 sudo[116432]: pam_unix(sudo:session): session closed for user root
Dec 06 09:46:15 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v143: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:46:16 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[114583]: 06/12/2025 09:46:16 : epoch 6933fb46 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe0d0001680 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:46:16 compute-0 sudo[116628]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-snkzrotetcbkeqcyfgwrbsiyeonruypw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014376.0094702-149-280289764286527/AnsiballZ_file.py'
Dec 06 09:46:16 compute-0 sudo[116628]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:46:16 compute-0 python3.9[116630]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:46:16 compute-0 sudo[116628]: pam_unix(sudo:session): session closed for user root
Dec 06 09:46:16 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[114583]: 06/12/2025 09:46:16 : epoch 6933fb46 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe0c0002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:46:16 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:46:16.960Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec 06 09:46:16 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:46:16.961Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 06 09:46:17 compute-0 ceph-mon[74327]: pgmap v143: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:46:17 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:46:17 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 09:46:17 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:46:17.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 09:46:17 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[114583]: 06/12/2025 09:46:17 : epoch 6933fb46 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe0c0002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:46:17 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:46:17 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:46:17 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:46:17.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:46:17 compute-0 sudo[116782]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vnclfvucxfiylfhakyepjqglrfyethtu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014377.044392-173-149733065582523/AnsiballZ_command.py'
Dec 06 09:46:17 compute-0 sudo[116782]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:46:17 compute-0 python3.9[116784]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
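
The ad-hoc command being run is "podman network inspect podman", whose output is a JSON array describing the default network. A sketch of consuming it programmatically (the key names follow current podman/netavark output and are an assumption for other versions):

    import json, subprocess

    out = subprocess.run(
        ["podman", "network", "inspect", "podman"],
        check=True, capture_output=True, text=True,
    ).stdout
    net = json.loads(out)[0]          # inspect returns a JSON array
    print(net.get("name"), net.get("subnets"))
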
Dec 06 09:46:17 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v144: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 09:46:17 compute-0 sudo[116782]: pam_unix(sudo:session): session closed for user root
Dec 06 09:46:18 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[114583]: 06/12/2025 09:46:18 : epoch 6933fb46 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe0c0002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:46:18 compute-0 sudo[116946]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zrqjteplsuzpvvinlmkxdoelaepcietc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014378.025037-197-135141410503881/AnsiballZ_stat.py'
Dec 06 09:46:18 compute-0 sudo[116946]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:46:18 compute-0 python3.9[116948]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:46:18 compute-0 sudo[116946]: pam_unix(sudo:session): session closed for user root
Dec 06 09:46:18 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 09:46:18 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[114583]: 06/12/2025 09:46:18 : epoch 6933fb46 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe0c4004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:46:18 compute-0 sudo[117024]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wrkfzwggifgpdsgoeqqvyvipxcnuwygw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014378.025037-197-135141410503881/AnsiballZ_file.py'
Dec 06 09:46:18 compute-0 sudo[117024]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:46:19 compute-0 python3.9[117026]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/containers/networks/podman.json _original_basename=podman_network_config.j2 recurse=False state=file path=/etc/containers/networks/podman.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:46:19 compute-0 sudo[117024]: pam_unix(sudo:session): session closed for user root
Dec 06 09:46:19 compute-0 ceph-mon[74327]: pgmap v144: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 09:46:19 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:46:19 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:46:19 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:46:19.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:46:19 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[114583]: 06/12/2025 09:46:19 : epoch 6933fb46 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe0c0002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:46:19 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:46:19 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:46:19 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:46:19.467 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:46:19 compute-0 sudo[117178]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-frzkazingdoonljgrbghhyqvcbqtdiih ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014379.352444-233-246458134853238/AnsiballZ_stat.py'
Dec 06 09:46:19 compute-0 sudo[117178]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:46:19 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v145: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:46:19 compute-0 python3.9[117180]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:46:19 compute-0 sudo[117178]: pam_unix(sudo:session): session closed for user root
Dec 06 09:46:20 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[114583]: 06/12/2025 09:46:20 : epoch 6933fb46 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe0b8003430 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:46:20 compute-0 sudo[117256]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gpqcglanakkaedupylvjatazxhlohtef ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014379.352444-233-246458134853238/AnsiballZ_file.py'
Dec 06 09:46:20 compute-0 sudo[117256]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:46:20 compute-0 python3.9[117258]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf _original_basename=registries.conf.j2 recurse=False state=file path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 06 09:46:20 compute-0 sudo[117256]: pam_unix(sudo:session): session closed for user root
Dec 06 09:46:20 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[114583]: 06/12/2025 09:46:20 : epoch 6933fb46 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe0d0001680 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:46:20 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:09:46:20] "GET /metrics HTTP/1.1" 200 48263 "" "Prometheus/2.51.0"
Dec 06 09:46:20 compute-0 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:09:46:20] "GET /metrics HTTP/1.1" 200 48263 "" "Prometheus/2.51.0"
Dec 06 09:46:21 compute-0 ceph-mon[74327]: pgmap v145: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:46:21 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:46:21 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:46:21 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:46:21.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:46:21 compute-0 sudo[117410]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zzkqwbnevwflltzvixcxsovityyicmpt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014380.893166-272-200572133212208/AnsiballZ_ini_file.py'
Dec 06 09:46:21 compute-0 sudo[117410]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:46:21 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[114583]: 06/12/2025 09:46:21 : epoch 6933fb46 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe0c4004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:46:21 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:46:21 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 09:46:21 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:46:21.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 09:46:21 compute-0 python3.9[117412]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Dec 06 09:46:21 compute-0 sudo[117410]: pam_unix(sudo:session): session closed for user root
Dec 06 09:46:21 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v146: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:46:22 compute-0 sudo[117562]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pqkcvongxaxfkuurynwtvizzncjiyxra ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014381.726166-272-110746511732350/AnsiballZ_ini_file.py'
Dec 06 09:46:22 compute-0 sudo[117562]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:46:22 compute-0 python3.9[117564]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Dec 06 09:46:22 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[114583]: 06/12/2025 09:46:22 : epoch 6933fb46 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe0b8003430 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:46:22 compute-0 sudo[117562]: pam_unix(sudo:session): session closed for user root
Dec 06 09:46:22 compute-0 sudo[117714]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xgmopvdozguncvhqttsinjwrtavrecxy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014382.447752-272-194513469263317/AnsiballZ_ini_file.py'
Dec 06 09:46:22 compute-0 sudo[117714]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:46:22 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[114583]: 06/12/2025 09:46:22 : epoch 6933fb46 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe0c0002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:46:22 compute-0 python3.9[117716]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Dec 06 09:46:22 compute-0 sudo[117714]: pam_unix(sudo:session): session closed for user root
Dec 06 09:46:23 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:46:23 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:46:23 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:46:23.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:46:23 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[114583]: 06/12/2025 09:46:23 : epoch 6933fb46 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe0c0002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:46:23 compute-0 sudo[117868]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-swvmzoadzluwnzswsgwtepnvygmfhmdi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014383.107636-272-65211494405520/AnsiballZ_ini_file.py'
Dec 06 09:46:23 compute-0 sudo[117868]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:46:23 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:46:23 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:46:23 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:46:23.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:46:23 compute-0 ceph-mon[74327]: pgmap v146: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:46:23 compute-0 python3.9[117870]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Dec 06 09:46:23 compute-0 sudo[117868]: pam_unix(sudo:session): session closed for user root
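
Taken together, the four community.general.ini_file invocations above (pids_limit, events_logger, runtime, network_backend) leave /etc/containers/containers.conf with one option in each of three sections. A reconstruction, assuming the file started empty and no other task touches it:

    import configparser

    # Rebuild the file as the four ini_file tasks would write it;
    # ini_file keeps the quotes that were part of each value.
    conf = configparser.ConfigParser()
    conf["containers"] = {"pids_limit": "4096"}
    conf["engine"] = {"events_logger": '"journald"', "runtime": '"crun"'}
    conf["network"] = {"network_backend": '"netavark"'}
    with open("containers.conf", "w") as f:
        conf.write(f)
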
Dec 06 09:46:23 compute-0 ceph-mgr[74618]: [balancer INFO root] Optimize plan auto_2025-12-06_09:46:23
Dec 06 09:46:23 compute-0 ceph-mgr[74618]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 06 09:46:23 compute-0 ceph-mgr[74618]: [balancer INFO root] do_upmap
Dec 06 09:46:23 compute-0 ceph-mgr[74618]: [balancer INFO root] pools ['.mgr', 'cephfs.cephfs.meta', 'default.rgw.log', 'images', 'default.rgw.control', '.nfs', 'vms', '.rgw.root', 'cephfs.cephfs.data', 'volumes', 'default.rgw.meta', 'backups']
Dec 06 09:46:23 compute-0 ceph-mgr[74618]: [balancer INFO root] prepared 0/10 upmap changes
Dec 06 09:46:23 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v147: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:46:23 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 09:46:23 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] _maybe_adjust
Dec 06 09:46:23 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 09:46:23 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 06 09:46:23 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 09:46:23 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 09:46:23 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 09:46:23 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 09:46:23 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 09:46:23 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 09:46:23 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 09:46:23 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 09:46:23 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 09:46:23 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Dec 06 09:46:23 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 09:46:23 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 09:46:23 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 09:46:23 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec 06 09:46:23 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 09:46:23 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 06 09:46:23 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 09:46:23 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec 06 09:46:23 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 09:46:23 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 09:46:23 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 09:46:23 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
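
The pg_autoscaler numbers above are internally consistent with pg target ≈ space ratio × bias × a PG budget of 300, which here plausibly corresponds to 3 OSDs × mon_target_pg_per_osd=100 (an inference from the arithmetic, not from Ceph source). A quick check against the non-zero targets printed above:

    # Verify the non-zero pg targets from the autoscaler lines.
    PG_BUDGET = 300  # assumed: 3 OSDs x mon_target_pg_per_osd (100)
    pools = {
        ".mgr":               (7.185749983720779e-06, 1.0),
        "cephfs.cephfs.meta": (5.087256625643029e-07, 4.0),
        ".nfs":               (6.359070782053786e-08, 1.0),
        ".rgw.root":          (3.8154424692322717e-07, 1.0),
        "default.rgw.log":    (2.1620840658982875e-06, 1.0),
        "default.rgw.meta":   (1.2718141564107572e-07, 4.0),
    }
    for name, (ratio, bias) in pools.items():
        print(f"{name}: pg target = {ratio * bias * PG_BUDGET:.10g}")
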
Dec 06 09:46:23 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 06 09:46:23 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 09:46:23 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 09:46:23 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 09:46:23 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 09:46:23 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 09:46:23 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 09:46:23 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 09:46:23 compute-0 ceph-mgr[74618]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 06 09:46:23 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 09:46:23 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 09:46:23 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 09:46:23 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 09:46:24 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[114583]: 06/12/2025 09:46:24 : epoch 6933fb46 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe0c4004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:46:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 06 09:46:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 09:46:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 09:46:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 09:46:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 09:46:24 compute-0 sudo[118020]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ywqljqlqnsxmxfzirtptsdnhtjkljtir ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014383.9617424-365-174025869789275/AnsiballZ_dnf.py'
Dec 06 09:46:24 compute-0 sudo[118020]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:46:24 compute-0 ceph-mon[74327]: pgmap v147: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:46:24 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 09:46:24 compute-0 python3.9[118022]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 06 09:46:24 compute-0 sudo[118024]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 09:46:24 compute-0 sudo[118024]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:46:24 compute-0 sudo[118024]: pam_unix(sudo:session): session closed for user root
Dec 06 09:46:24 compute-0 sudo[118049]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Dec 06 09:46:24 compute-0 sudo[118049]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:46:24 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[114583]: 06/12/2025 09:46:24 : epoch 6933fb46 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe0b8004140 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:46:25 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:46:25 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:46:25 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:46:25.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:46:25 compute-0 sudo[118049]: pam_unix(sudo:session): session closed for user root
Dec 06 09:46:25 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[114583]: 06/12/2025 09:46:25 : epoch 6933fb46 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe0b8004140 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:46:25 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 06 09:46:25 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 09:46:25 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 06 09:46:25 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 09:46:25 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 06 09:46:25 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:46:25 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 09:46:25 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:46:25.473 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 09:46:25 compute-0 sudo[118108]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 09:46:25 compute-0 sudo[118108]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:46:25 compute-0 sudo[118108]: pam_unix(sudo:session): session closed for user root
Dec 06 09:46:25 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v148: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:46:25 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:46:26 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[114583]: 06/12/2025 09:46:26 : epoch 6933fb46 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe0b8004140 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:46:26 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Dec 06 09:46:26 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 09:46:26 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 09:46:26 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:46:26 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 06 09:46:26 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 09:46:26 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 06 09:46:26 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 09:46:26 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 06 09:46:26 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 09:46:26 compute-0 sudo[118020]: pam_unix(sudo:session): session closed for user root
Dec 06 09:46:26 compute-0 sudo[118133]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 09:46:26 compute-0 sudo[118133]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:46:26 compute-0 sudo[118133]: pam_unix(sudo:session): session closed for user root
Dec 06 09:46:26 compute-0 sudo[118158]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 5ecd3f74-dade-5fc4-92ce-8950ae424258 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Dec 06 09:46:26 compute-0 sudo[118158]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:46:26 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[114583]: 06/12/2025 09:46:26 : epoch 6933fb46 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe0c4004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:46:26 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:46:26.962Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec 06 09:46:26 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:46:26.962Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec 06 09:46:27 compute-0 podman[118254]: 2025-12-06 09:46:27.219008304 +0000 UTC m=+0.069373371 container create 34f836033009bb6705412e0539eeff9fa245d0fd379c29519187969e5c2fa0e2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_leavitt, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid)
Dec 06 09:46:27 compute-0 systemd[1]: Started libpod-conmon-34f836033009bb6705412e0539eeff9fa245d0fd379c29519187969e5c2fa0e2.scope.
Dec 06 09:46:27 compute-0 podman[118254]: 2025-12-06 09:46:27.193935472 +0000 UTC m=+0.044300629 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 06 09:46:27 compute-0 systemd[1]: Started libcrun container.
Dec 06 09:46:27 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:46:27 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:46:27 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:46:27.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:46:27 compute-0 podman[118254]: 2025-12-06 09:46:27.341170159 +0000 UTC m=+0.191535256 container init 34f836033009bb6705412e0539eeff9fa245d0fd379c29519187969e5c2fa0e2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_leavitt, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True)
Dec 06 09:46:27 compute-0 podman[118254]: 2025-12-06 09:46:27.354798835 +0000 UTC m=+0.205163922 container start 34f836033009bb6705412e0539eeff9fa245d0fd379c29519187969e5c2fa0e2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_leavitt, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 06 09:46:27 compute-0 podman[118254]: 2025-12-06 09:46:27.3598725 +0000 UTC m=+0.210237567 container attach 34f836033009bb6705412e0539eeff9fa245d0fd379c29519187969e5c2fa0e2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_leavitt, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Dec 06 09:46:27 compute-0 peaceful_leavitt[118318]: 167 167
Dec 06 09:46:27 compute-0 systemd[1]: libpod-34f836033009bb6705412e0539eeff9fa245d0fd379c29519187969e5c2fa0e2.scope: Deactivated successfully.
Dec 06 09:46:27 compute-0 conmon[118318]: conmon 34f836033009bb670541 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-34f836033009bb6705412e0539eeff9fa245d0fd379c29519187969e5c2fa0e2.scope/container/memory.events
Dec 06 09:46:27 compute-0 podman[118254]: 2025-12-06 09:46:27.364926756 +0000 UTC m=+0.215291853 container died 34f836033009bb6705412e0539eeff9fa245d0fd379c29519187969e5c2fa0e2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_leavitt, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 09:46:27 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[114583]: 06/12/2025 09:46:27 : epoch 6933fb46 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe0b8004140 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:46:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-31e48c2653377118af497828cca76dabdcdf1dcea2c17e27d9488df8bb277a42-merged.mount: Deactivated successfully.
Dec 06 09:46:27 compute-0 podman[118254]: 2025-12-06 09:46:27.418194144 +0000 UTC m=+0.268559201 container remove 34f836033009bb6705412e0539eeff9fa245d0fd379c29519187969e5c2fa0e2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_leavitt, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True)
Dec 06 09:46:27 compute-0 systemd[1]: libpod-conmon-34f836033009bb6705412e0539eeff9fa245d0fd379c29519187969e5c2fa0e2.scope: Deactivated successfully.
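
The peaceful_leavitt container above lives for well under a second: create, init, start, attach, died, remove. Its only output, "167 167", matches the fixed uid/gid of the ceph user inside the official images, consistent with cephadm probing file ownership before the real ceph-volume run that follows. Roughly (image digest from the log; the probed command and path are assumptions):

    import subprocess

    IMG = ("quay.io/ceph/ceph@sha256:"
           "7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec")
    # Short-lived probe container in the same style; prints the uid/gid
    # of the ceph-owned tree baked into the image (167 167).
    subprocess.run(
        ["podman", "run", "--rm", "--entrypoint", "stat", IMG,
         "-c", "%u %g", "/var/lib/ceph"],
        check=False,
    )
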
Dec 06 09:46:27 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:46:27 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 09:46:27 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:46:27.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 09:46:27 compute-0 sudo[118410]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ctpqcniaskztiqefilmdltzspgtagvta ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014387.1777208-398-116434335404729/AnsiballZ_setup.py'
Dec 06 09:46:27 compute-0 sudo[118410]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:46:27 compute-0 podman[118418]: 2025-12-06 09:46:27.599538245 +0000 UTC m=+0.051089330 container create 918ac1ab0d07190a253038011133e237994ec7174fd9f9a8dbe74dac602a1fc6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_nightingale, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 09:46:27 compute-0 ceph-mon[74327]: pgmap v148: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:46:27 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:46:27 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:46:27 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 09:46:27 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 09:46:27 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 09:46:27 compute-0 systemd[1]: Started libpod-conmon-918ac1ab0d07190a253038011133e237994ec7174fd9f9a8dbe74dac602a1fc6.scope.
Dec 06 09:46:27 compute-0 podman[118418]: 2025-12-06 09:46:27.578765329 +0000 UTC m=+0.030316414 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 06 09:46:27 compute-0 systemd[1]: Started libcrun container.
Dec 06 09:46:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cecda963e8889bee1ad20b1e1ee6479c8e74f8263eec144f8693d6a5d6f3aefe/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 09:46:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cecda963e8889bee1ad20b1e1ee6479c8e74f8263eec144f8693d6a5d6f3aefe/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 09:46:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cecda963e8889bee1ad20b1e1ee6479c8e74f8263eec144f8693d6a5d6f3aefe/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 09:46:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cecda963e8889bee1ad20b1e1ee6479c8e74f8263eec144f8693d6a5d6f3aefe/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 09:46:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cecda963e8889bee1ad20b1e1ee6479c8e74f8263eec144f8693d6a5d6f3aefe/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 06 09:46:27 compute-0 podman[118418]: 2025-12-06 09:46:27.729321444 +0000 UTC m=+0.180872549 container init 918ac1ab0d07190a253038011133e237994ec7174fd9f9a8dbe74dac602a1fc6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_nightingale, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 06 09:46:27 compute-0 podman[118418]: 2025-12-06 09:46:27.736644031 +0000 UTC m=+0.188195096 container start 918ac1ab0d07190a253038011133e237994ec7174fd9f9a8dbe74dac602a1fc6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_nightingale, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 09:46:27 compute-0 podman[118418]: 2025-12-06 09:46:27.73994634 +0000 UTC m=+0.191497455 container attach 918ac1ab0d07190a253038011133e237994ec7174fd9f9a8dbe74dac602a1fc6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_nightingale, CEPH_REF=squid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 09:46:27 compute-0 python3.9[118412]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 06 09:46:27 compute-0 sudo[118410]: pam_unix(sudo:session): session closed for user root
Dec 06 09:46:27 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v149: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 09:46:28 compute-0 inspiring_nightingale[118435]: --> passed data devices: 0 physical, 1 LVM
Dec 06 09:46:28 compute-0 inspiring_nightingale[118435]: --> All data devices are unavailable
Dec 06 09:46:28 compute-0 systemd[1]: libpod-918ac1ab0d07190a253038011133e237994ec7174fd9f9a8dbe74dac602a1fc6.scope: Deactivated successfully.
Dec 06 09:46:28 compute-0 podman[118418]: 2025-12-06 09:46:28.131767974 +0000 UTC m=+0.583319049 container died 918ac1ab0d07190a253038011133e237994ec7174fd9f9a8dbe74dac602a1fc6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_nightingale, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 09:46:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-cecda963e8889bee1ad20b1e1ee6479c8e74f8263eec144f8693d6a5d6f3aefe-merged.mount: Deactivated successfully.
Dec 06 09:46:28 compute-0 podman[118418]: 2025-12-06 09:46:28.175765714 +0000 UTC m=+0.627316779 container remove 918ac1ab0d07190a253038011133e237994ec7174fd9f9a8dbe74dac602a1fc6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_nightingale, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 09:46:28 compute-0 systemd[1]: libpod-conmon-918ac1ab0d07190a253038011133e237994ec7174fd9f9a8dbe74dac602a1fc6.scope: Deactivated successfully.
Dec 06 09:46:28 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[114583]: 06/12/2025 09:46:28 : epoch 6933fb46 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe0c0002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:46:28 compute-0 sudo[118158]: pam_unix(sudo:session): session closed for user root
Dec 06 09:46:28 compute-0 sudo[118637]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-modtszfokprfrpoiyaccmqaofvfsbngq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014388.0374398-422-252986303450888/AnsiballZ_stat.py'
Dec 06 09:46:28 compute-0 sudo[118591]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 09:46:28 compute-0 sudo[118637]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:46:28 compute-0 sudo[118591]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:46:28 compute-0 sudo[118591]: pam_unix(sudo:session): session closed for user root
Dec 06 09:46:28 compute-0 sudo[118642]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 5ecd3f74-dade-5fc4-92ce-8950ae424258 -- lvm list --format json
Dec 06 09:46:28 compute-0 sudo[118642]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:46:28 compute-0 python3.9[118640]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 06 09:46:28 compute-0 sudo[118637]: pam_unix(sudo:session): session closed for user root
Dec 06 09:46:28 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 09:46:28 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[114583]: 06/12/2025 09:46:28 : epoch 6933fb46 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe0d0001680 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:46:28 compute-0 podman[118733]: 2025-12-06 09:46:28.865657379 +0000 UTC m=+0.034711012 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 06 09:46:29 compute-0 sudo[118874]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rjildpjawajhooqognuegdbwdwsjgkwz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014389.0038924-449-24985608463211/AnsiballZ_stat.py'
Dec 06 09:46:29 compute-0 sudo[118874]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:46:29 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:46:29 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 09:46:29 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:46:29.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 09:46:29 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[114583]: 06/12/2025 09:46:29 : epoch 6933fb46 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe0b4000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:46:29 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:46:29 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 09:46:29 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:46:29.478 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 09:46:29 compute-0 python3.9[118876]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 06 09:46:29 compute-0 sudo[118874]: pam_unix(sudo:session): session closed for user root
Dec 06 09:46:29 compute-0 podman[118733]: 2025-12-06 09:46:29.778641944 +0000 UTC m=+0.947695477 container create 255b4d575bb2817de43112b20e5f807cdce580038a8485c740567f27922e3442 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_burnell, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 09:46:29 compute-0 systemd[90433]: Created slice User Background Tasks Slice.
Dec 06 09:46:29 compute-0 systemd[90433]: Starting Cleanup of User's Temporary Files and Directories...
Dec 06 09:46:29 compute-0 systemd[90433]: Finished Cleanup of User's Temporary Files and Directories.
Dec 06 09:46:29 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v150: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:46:30 compute-0 sudo[119027]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hvrexervuepoaqfzfeghpwjcjzbhtuye ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014389.8382475-479-239999783017802/AnsiballZ_command.py'
Dec 06 09:46:30 compute-0 sudo[119027]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:46:30 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[114583]: 06/12/2025 09:46:30 : epoch 6933fb46 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe0b8004140 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:46:30 compute-0 systemd[1]: Started libpod-conmon-255b4d575bb2817de43112b20e5f807cdce580038a8485c740567f27922e3442.scope.
Dec 06 09:46:30 compute-0 systemd[1]: Started libcrun container.
Dec 06 09:46:30 compute-0 python3.9[119029]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-system-running _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 09:46:30 compute-0 sudo[119027]: pam_unix(sudo:session): session closed for user root
Dec 06 09:46:30 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:09:46:30] "GET /metrics HTTP/1.1" 200 48262 "" "Prometheus/2.51.0"
Dec 06 09:46:30 compute-0 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:09:46:30] "GET /metrics HTTP/1.1" 200 48262 "" "Prometheus/2.51.0"
Dec 06 09:46:30 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[114583]: 06/12/2025 09:46:30 : epoch 6933fb46 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe0c0002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:46:31 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:46:31 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 09:46:31 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:46:31.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 09:46:31 compute-0 sudo[119188]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pgevncuenfmllucscvsviphdxjotzrpi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014390.8461607-509-251390852706948/AnsiballZ_service_facts.py'
Dec 06 09:46:31 compute-0 sudo[119188]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:46:31 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[114583]: 06/12/2025 09:46:31 : epoch 6933fb46 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe0d0001680 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:46:31 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:46:31 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 09:46:31 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:46:31.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 09:46:31 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v151: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:46:32 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[114583]: 06/12/2025 09:46:32 : epoch 6933fb46 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe0b40016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:46:32 compute-0 python3.9[119190]: ansible-service_facts Invoked
Dec 06 09:46:32 compute-0 network[119208]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec 06 09:46:32 compute-0 network[119209]: 'network-scripts' will be removed from distribution in near future.
Dec 06 09:46:32 compute-0 network[119210]: It is advised to switch to 'NetworkManager' instead for network management.
Dec 06 09:46:32 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[114583]: 06/12/2025 09:46:32 : epoch 6933fb46 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe0b40016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:46:33 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:46:33 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 09:46:33 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:46:33.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 09:46:33 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[114583]: 06/12/2025 09:46:33 : epoch 6933fb46 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe0c0003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:46:33 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:46:33 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 09:46:33 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:46:33.482 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 09:46:33 compute-0 podman[118733]: 2025-12-06 09:46:33.644846962 +0000 UTC m=+4.813900615 container init 255b4d575bb2817de43112b20e5f807cdce580038a8485c740567f27922e3442 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_burnell, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid)
Dec 06 09:46:33 compute-0 podman[118733]: 2025-12-06 09:46:33.660916963 +0000 UTC m=+4.829970506 container start 255b4d575bb2817de43112b20e5f807cdce580038a8485c740567f27922e3442 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_burnell, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 09:46:33 compute-0 festive_burnell[119033]: 167 167
Dec 06 09:46:33 compute-0 systemd[1]: libpod-255b4d575bb2817de43112b20e5f807cdce580038a8485c740567f27922e3442.scope: Deactivated successfully.
Dec 06 09:46:33 compute-0 podman[118733]: 2025-12-06 09:46:33.816645108 +0000 UTC m=+4.985698671 container attach 255b4d575bb2817de43112b20e5f807cdce580038a8485c740567f27922e3442 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_burnell, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 09:46:33 compute-0 podman[118733]: 2025-12-06 09:46:33.817624164 +0000 UTC m=+4.986677728 container died 255b4d575bb2817de43112b20e5f807cdce580038a8485c740567f27922e3442 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_burnell, org.label-schema.build-date=20250325, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Dec 06 09:46:33 compute-0 ceph-mon[74327]: pgmap v149: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 09:46:33 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v152: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:46:33 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 09:46:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-e056fa3eac6f1bbe8c35fcf8ed15bad3cd3fbd6b6da6d8b4ceabef6c7ee386b9-merged.mount: Deactivated successfully.
Dec 06 09:46:33 compute-0 podman[118733]: 2025-12-06 09:46:33.89171881 +0000 UTC m=+5.060772353 container remove 255b4d575bb2817de43112b20e5f807cdce580038a8485c740567f27922e3442 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_burnell, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True)
Dec 06 09:46:33 compute-0 systemd[1]: libpod-conmon-255b4d575bb2817de43112b20e5f807cdce580038a8485c740567f27922e3442.scope: Deactivated successfully.
Dec 06 09:46:34 compute-0 podman[119271]: 2025-12-06 09:46:34.062245482 +0000 UTC m=+0.051239794 container create 4b68f33615f531f7879b223b8e9f9db85adca3dd05b2dd25d09f6ec34335413b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_elion, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1)
Dec 06 09:46:34 compute-0 systemd[1]: Started libpod-conmon-4b68f33615f531f7879b223b8e9f9db85adca3dd05b2dd25d09f6ec34335413b.scope.
Dec 06 09:46:34 compute-0 podman[119271]: 2025-12-06 09:46:34.039130883 +0000 UTC m=+0.028125225 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 06 09:46:34 compute-0 systemd[1]: Started libcrun container.
Dec 06 09:46:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d86fdaa3ea7485a092cbb949ea56515bb858b58be0cc720c8588d21bc31b5472/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 09:46:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d86fdaa3ea7485a092cbb949ea56515bb858b58be0cc720c8588d21bc31b5472/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 09:46:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d86fdaa3ea7485a092cbb949ea56515bb858b58be0cc720c8588d21bc31b5472/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 09:46:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d86fdaa3ea7485a092cbb949ea56515bb858b58be0cc720c8588d21bc31b5472/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 09:46:34 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[114583]: 06/12/2025 09:46:34 : epoch 6933fb46 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe0d0001680 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:46:34 compute-0 podman[119271]: 2025-12-06 09:46:34.323843165 +0000 UTC m=+0.312837497 container init 4b68f33615f531f7879b223b8e9f9db85adca3dd05b2dd25d09f6ec34335413b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_elion, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 09:46:34 compute-0 podman[119271]: 2025-12-06 09:46:34.329553218 +0000 UTC m=+0.318547530 container start 4b68f33615f531f7879b223b8e9f9db85adca3dd05b2dd25d09f6ec34335413b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_elion, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 06 09:46:34 compute-0 podman[119271]: 2025-12-06 09:46:34.335409546 +0000 UTC m=+0.324403868 container attach 4b68f33615f531f7879b223b8e9f9db85adca3dd05b2dd25d09f6ec34335413b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_elion, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 09:46:34 compute-0 serene_elion[119292]: {
Dec 06 09:46:34 compute-0 serene_elion[119292]:     "1": [
Dec 06 09:46:34 compute-0 serene_elion[119292]:         {
Dec 06 09:46:34 compute-0 serene_elion[119292]:             "devices": [
Dec 06 09:46:34 compute-0 serene_elion[119292]:                 "/dev/loop3"
Dec 06 09:46:34 compute-0 serene_elion[119292]:             ],
Dec 06 09:46:34 compute-0 serene_elion[119292]:             "lv_name": "ceph_lv0",
Dec 06 09:46:34 compute-0 serene_elion[119292]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 09:46:34 compute-0 serene_elion[119292]:             "lv_size": "21470642176",
Dec 06 09:46:34 compute-0 serene_elion[119292]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=a55pcS-v3Vt-CJ4W-fqCV-Baze-9VlY-jG9prS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=5ecd3f74-dade-5fc4-92ce-8950ae424258,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=7899c4d8-edb4-4836-b838-c4aa702ad7af,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 06 09:46:34 compute-0 serene_elion[119292]:             "lv_uuid": "a55pcS-v3Vt-CJ4W-fqCV-Baze-9VlY-jG9prS",
Dec 06 09:46:34 compute-0 serene_elion[119292]:             "name": "ceph_lv0",
Dec 06 09:46:34 compute-0 serene_elion[119292]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 09:46:34 compute-0 serene_elion[119292]:             "tags": {
Dec 06 09:46:34 compute-0 serene_elion[119292]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 06 09:46:34 compute-0 serene_elion[119292]:                 "ceph.block_uuid": "a55pcS-v3Vt-CJ4W-fqCV-Baze-9VlY-jG9prS",
Dec 06 09:46:34 compute-0 serene_elion[119292]:                 "ceph.cephx_lockbox_secret": "",
Dec 06 09:46:34 compute-0 serene_elion[119292]:                 "ceph.cluster_fsid": "5ecd3f74-dade-5fc4-92ce-8950ae424258",
Dec 06 09:46:34 compute-0 serene_elion[119292]:                 "ceph.cluster_name": "ceph",
Dec 06 09:46:34 compute-0 serene_elion[119292]:                 "ceph.crush_device_class": "",
Dec 06 09:46:34 compute-0 serene_elion[119292]:                 "ceph.encrypted": "0",
Dec 06 09:46:34 compute-0 serene_elion[119292]:                 "ceph.osd_fsid": "7899c4d8-edb4-4836-b838-c4aa702ad7af",
Dec 06 09:46:34 compute-0 serene_elion[119292]:                 "ceph.osd_id": "1",
Dec 06 09:46:34 compute-0 serene_elion[119292]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 06 09:46:34 compute-0 serene_elion[119292]:                 "ceph.type": "block",
Dec 06 09:46:34 compute-0 serene_elion[119292]:                 "ceph.vdo": "0",
Dec 06 09:46:34 compute-0 serene_elion[119292]:                 "ceph.with_tpm": "0"
Dec 06 09:46:34 compute-0 serene_elion[119292]:             },
Dec 06 09:46:34 compute-0 serene_elion[119292]:             "type": "block",
Dec 06 09:46:34 compute-0 serene_elion[119292]:             "vg_name": "ceph_vg0"
Dec 06 09:46:34 compute-0 serene_elion[119292]:         }
Dec 06 09:46:34 compute-0 serene_elion[119292]:     ]
Dec 06 09:46:34 compute-0 serene_elion[119292]: }
Dec 06 09:46:34 compute-0 systemd[1]: libpod-4b68f33615f531f7879b223b8e9f9db85adca3dd05b2dd25d09f6ec34335413b.scope: Deactivated successfully.
Dec 06 09:46:34 compute-0 podman[119271]: 2025-12-06 09:46:34.61606884 +0000 UTC m=+0.605063152 container died 4b68f33615f531f7879b223b8e9f9db85adca3dd05b2dd25d09f6ec34335413b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_elion, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2)
Dec 06 09:46:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-d86fdaa3ea7485a092cbb949ea56515bb858b58be0cc720c8588d21bc31b5472-merged.mount: Deactivated successfully.
Dec 06 09:46:34 compute-0 podman[119271]: 2025-12-06 09:46:34.710322746 +0000 UTC m=+0.699317058 container remove 4b68f33615f531f7879b223b8e9f9db85adca3dd05b2dd25d09f6ec34335413b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_elion, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec 06 09:46:34 compute-0 systemd[1]: libpod-conmon-4b68f33615f531f7879b223b8e9f9db85adca3dd05b2dd25d09f6ec34335413b.scope: Deactivated successfully.
Dec 06 09:46:34 compute-0 sudo[118642]: pam_unix(sudo:session): session closed for user root
Dec 06 09:46:34 compute-0 sudo[119319]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 09:46:34 compute-0 sudo[119319]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:46:34 compute-0 sudo[119319]: pam_unix(sudo:session): session closed for user root
Dec 06 09:46:34 compute-0 ceph-mon[74327]: pgmap v150: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:46:34 compute-0 ceph-mon[74327]: pgmap v151: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:46:34 compute-0 ceph-mon[74327]: pgmap v152: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:46:34 compute-0 sudo[119344]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 5ecd3f74-dade-5fc4-92ce-8950ae424258 -- raw list --format json
Dec 06 09:46:34 compute-0 sudo[119344]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:46:34 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[114583]: 06/12/2025 09:46:34 : epoch 6933fb46 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe0b40016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:46:35 compute-0 podman[119411]: 2025-12-06 09:46:35.220525244 +0000 UTC m=+0.021746863 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 06 09:46:35 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:46:35 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:46:35 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:46:35.341 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:46:35 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[114583]: 06/12/2025 09:46:35 : epoch 6933fb46 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe0b8004140 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:46:35 compute-0 podman[119411]: 2025-12-06 09:46:35.407166288 +0000 UTC m=+0.208387887 container create 601ba157cc55cc25a746ee084d06b640eba0d281fd810490850ce8f90825cd6d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_kepler, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, ceph=True, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1)
Dec 06 09:46:35 compute-0 systemd[1]: Started libpod-conmon-601ba157cc55cc25a746ee084d06b640eba0d281fd810490850ce8f90825cd6d.scope.
Dec 06 09:46:35 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:46:35 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:46:35 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:46:35.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:46:35 compute-0 systemd[1]: Started libcrun container.
Dec 06 09:46:35 compute-0 podman[119411]: 2025-12-06 09:46:35.525173321 +0000 UTC m=+0.326394960 container init 601ba157cc55cc25a746ee084d06b640eba0d281fd810490850ce8f90825cd6d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_kepler, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 09:46:35 compute-0 podman[119411]: 2025-12-06 09:46:35.53294342 +0000 UTC m=+0.334165019 container start 601ba157cc55cc25a746ee084d06b640eba0d281fd810490850ce8f90825cd6d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_kepler, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 09:46:35 compute-0 podman[119411]: 2025-12-06 09:46:35.536982758 +0000 UTC m=+0.338204357 container attach 601ba157cc55cc25a746ee084d06b640eba0d281fd810490850ce8f90825cd6d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_kepler, io.buildah.version=1.40.1, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec 06 09:46:35 compute-0 magical_kepler[119442]: 167 167
Dec 06 09:46:35 compute-0 systemd[1]: libpod-601ba157cc55cc25a746ee084d06b640eba0d281fd810490850ce8f90825cd6d.scope: Deactivated successfully.
Dec 06 09:46:35 compute-0 conmon[119442]: conmon 601ba157cc55cc25a746 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-601ba157cc55cc25a746ee084d06b640eba0d281fd810490850ce8f90825cd6d.scope/container/memory.events
Dec 06 09:46:35 compute-0 podman[119411]: 2025-12-06 09:46:35.539954667 +0000 UTC m=+0.341176266 container died 601ba157cc55cc25a746ee084d06b640eba0d281fd810490850ce8f90825cd6d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_kepler, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 09:46:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-611793e3e6e0182aead80eaef4dee6fee38d78a3d6e81192beed2f617d4eb8d8-merged.mount: Deactivated successfully.
Dec 06 09:46:35 compute-0 podman[119411]: 2025-12-06 09:46:35.845949351 +0000 UTC m=+0.647170980 container remove 601ba157cc55cc25a746ee084d06b640eba0d281fd810490850ce8f90825cd6d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_kepler, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 09:46:35 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v153: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:46:35 compute-0 systemd[1]: libpod-conmon-601ba157cc55cc25a746ee084d06b640eba0d281fd810490850ce8f90825cd6d.scope: Deactivated successfully.
Dec 06 09:46:36 compute-0 podman[119491]: 2025-12-06 09:46:36.016444902 +0000 UTC m=+0.025056392 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 06 09:46:36 compute-0 kernel: ganesha.nfsd[115161]: segfault at 50 ip 00007fe19309632e sp 00007fe154ff8210 error 4 in libntirpc.so.5.8[7fe19307b000+2c000] likely on CPU 7 (core 0, socket 7)
Dec 06 09:46:36 compute-0 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Dec 06 09:46:36 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[114583]: 06/12/2025 09:46:36 : epoch 6933fb46 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe0c0003c10 fd 38 proxy ignored for local
Dec 06 09:46:36 compute-0 podman[119491]: 2025-12-06 09:46:36.219852985 +0000 UTC m=+0.228464485 container create 86214f6a965487650ad5e5dcd9b53ed3ba06dc613070f8ab564086a432b1c000 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_chaplygin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325)
Dec 06 09:46:36 compute-0 systemd[1]: Started Process Core Dump (PID 119517/UID 0).
Dec 06 09:46:36 compute-0 systemd[1]: Started libpod-conmon-86214f6a965487650ad5e5dcd9b53ed3ba06dc613070f8ab564086a432b1c000.scope.
Dec 06 09:46:36 compute-0 systemd[1]: Started libcrun container.
Dec 06 09:46:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c101c4c41b1fc102709c26565a9b3ae16e2800a79717e83ca3131c41a761ecf/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 09:46:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c101c4c41b1fc102709c26565a9b3ae16e2800a79717e83ca3131c41a761ecf/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 09:46:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c101c4c41b1fc102709c26565a9b3ae16e2800a79717e83ca3131c41a761ecf/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 09:46:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c101c4c41b1fc102709c26565a9b3ae16e2800a79717e83ca3131c41a761ecf/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 09:46:36 compute-0 podman[119491]: 2025-12-06 09:46:36.455590205 +0000 UTC m=+0.464201745 container init 86214f6a965487650ad5e5dcd9b53ed3ba06dc613070f8ab564086a432b1c000 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_chaplygin, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec 06 09:46:36 compute-0 podman[119491]: 2025-12-06 09:46:36.465045299 +0000 UTC m=+0.473656759 container start 86214f6a965487650ad5e5dcd9b53ed3ba06dc613070f8ab564086a432b1c000 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_chaplygin, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec 06 09:46:36 compute-0 podman[119491]: 2025-12-06 09:46:36.60724274 +0000 UTC m=+0.615854390 container attach 86214f6a965487650ad5e5dcd9b53ed3ba06dc613070f8ab564086a432b1c000 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_chaplygin, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1)
Dec 06 09:46:36 compute-0 sudo[119188]: pam_unix(sudo:session): session closed for user root
Dec 06 09:46:36 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:46:36.962Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec 06 09:46:36 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:46:36.966Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec 06 09:46:36 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:46:36.966Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec 06 09:46:37 compute-0 lvm[119638]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 06 09:46:37 compute-0 lvm[119638]: VG ceph_vg0 finished
Dec 06 09:46:37 compute-0 hopeful_chaplygin[119523]: {}
Dec 06 09:46:37 compute-0 lvm[119643]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 06 09:46:37 compute-0 lvm[119643]: VG ceph_vg0 finished
Dec 06 09:46:37 compute-0 systemd[1]: libpod-86214f6a965487650ad5e5dcd9b53ed3ba06dc613070f8ab564086a432b1c000.scope: Deactivated successfully.
Dec 06 09:46:37 compute-0 podman[119491]: 2025-12-06 09:46:37.26408806 +0000 UTC m=+1.272699520 container died 86214f6a965487650ad5e5dcd9b53ed3ba06dc613070f8ab564086a432b1c000 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_chaplygin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 09:46:37 compute-0 systemd[1]: libpod-86214f6a965487650ad5e5dcd9b53ed3ba06dc613070f8ab564086a432b1c000.scope: Consumed 1.208s CPU time.
Dec 06 09:46:37 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:46:37 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:46:37 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:46:37.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:46:37 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:46:37 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:46:37 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:46:37.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:46:37 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v154: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Dec 06 09:46:37 compute-0 ceph-mon[74327]: pgmap v153: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:46:37 compute-0 systemd-coredump[119519]: Process 114587 (ganesha.nfsd) of user 0 dumped core.
                                                    
                                                    Stack trace of thread 52:
                                                    #0  0x00007fe19309632e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)
                                                    ELF object binary architecture: AMD x86-64
Dec 06 09:46:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-4c101c4c41b1fc102709c26565a9b3ae16e2800a79717e83ca3131c41a761ecf-merged.mount: Deactivated successfully.
Dec 06 09:46:37 compute-0 systemd[1]: systemd-coredump@1-119517-0.service: Deactivated successfully.
Dec 06 09:46:37 compute-0 systemd[1]: systemd-coredump@1-119517-0.service: Consumed 1.162s CPU time.
Dec 06 09:46:38 compute-0 podman[119491]: 2025-12-06 09:46:38.131905155 +0000 UTC m=+2.140516615 container remove 86214f6a965487650ad5e5dcd9b53ed3ba06dc613070f8ab564086a432b1c000 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_chaplygin, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325)
Dec 06 09:46:38 compute-0 podman[119779]: 2025-12-06 09:46:38.146728812 +0000 UTC m=+0.081630359 container died 71b960c2881dae640010500027e5d1d1d92a7645608040694f265079ad808565 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 09:46:38 compute-0 sudo[119819]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xpeufxukhoaxlsqgjlepfhigirglelui ; /bin/bash /home/zuul/.ansible/tmp/ansible-tmp-1765014397.5353692-554-120080192788384/AnsiballZ_timesync_provider.sh /home/zuul/.ansible/tmp/ansible-tmp-1765014397.5353692-554-120080192788384/args'
Dec 06 09:46:38 compute-0 sudo[119819]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:46:38 compute-0 sudo[119344]: pam_unix(sudo:session): session closed for user root
Dec 06 09:46:38 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 06 09:46:38 compute-0 sudo[119819]: pam_unix(sudo:session): session closed for user root
Dec 06 09:46:38 compute-0 sudo[119988]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rycmvnroopgwmqdofylmiegdysdpysfn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014398.5859659-587-113091879310199/AnsiballZ_dnf.py'
Dec 06 09:46:38 compute-0 sudo[119988]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:46:38 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 06 09:46:38 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
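The two mon lines above are the audit trail of cephadm's periodic "osd blocklist ls" query from the mgr. A minimal sketch of issuing the same query by hand, assuming a node with the ceph CLI and an admin keyring (a hypothetical direct call, not part of this deployment's automation):

    import json
    import subprocess

    # Same command the mgr dispatches above; --format json makes the
    # (usually empty) blocklist machine-readable.
    out = subprocess.run(
        ["ceph", "osd", "blocklist", "ls", "--format", "json"],
        capture_output=True, text=True, check=True,
    ).stdout
    print(json.loads(out or "[]"))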
Dec 06 09:46:38 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 09:46:39 compute-0 python3.9[119990]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
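The dnf task above only ensures state=present for chrony. A minimal sketch of the same idempotent effect without Ansible, assuming rpm and dnf are on PATH (a hypothetical helper, not how the playbook actually runs):

    import subprocess

    def ensure_installed(pkg: str) -> None:
        # `rpm -q` exits non-zero when the package is absent; only then install.
        if subprocess.run(["rpm", "-q", pkg],
                          stdout=subprocess.DEVNULL).returncode != 0:
            subprocess.run(["dnf", "-y", "install", pkg], check=True)

    ensure_installed("chrony")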
Dec 06 09:46:39 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:46:39 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:46:39 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:46:39.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:46:39 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:46:39 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:46:39 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:46:39.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:46:39 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v155: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec 06 09:46:39 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:46:39 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 06 09:46:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-aabb0cde1bea459da73db8d32b6287e894193b4e586a1a5b19dc4de874364fbb-merged.mount: Deactivated successfully.
Dec 06 09:46:39 compute-0 ceph-mon[74327]: pgmap v154: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Dec 06 09:46:39 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 09:46:40 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:46:40 compute-0 podman[119779]: 2025-12-06 09:46:40.095678141 +0000 UTC m=+2.030579688 container remove 71b960c2881dae640010500027e5d1d1d92a7645608040694f265079ad808565 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Dec 06 09:46:40 compute-0 systemd[1]: ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258@nfs.cephfs.2.0.compute-0.dfwxck.service: Main process exited, code=exited, status=139/n/a
Dec 06 09:46:40 compute-0 sudo[119996]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 09:46:40 compute-0 sudo[119996]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:46:40 compute-0 sudo[119996]: pam_unix(sudo:session): session closed for user root
Dec 06 09:46:40 compute-0 systemd[1]: libpod-conmon-86214f6a965487650ad5e5dcd9b53ed3ba06dc613070f8ab564086a432b1c000.scope: Deactivated successfully.
Dec 06 09:46:40 compute-0 systemd[1]: ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258@nfs.cephfs.2.0.compute-0.dfwxck.service: Failed with result 'exit-code'.
Dec 06 09:46:40 compute-0 systemd[1]: ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258@nfs.cephfs.2.0.compute-0.dfwxck.service: Consumed 1.507s CPU time.
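systemd reports the ganesha container wrapper as code=exited, status=139: by the usual 128+N convention that is 128+11, i.e. the payload died from SIGSEGV, which also fits the systemd-coredump@ activity just above. A minimal sketch of decoding that convention (a hypothetical helper):

    import signal

    def describe_exit(status: int) -> str:
        # 128+N means "terminated by signal N" in the shell convention that
        # conmon/podman propagate up to systemd.
        if status > 128:
            sig = status - 128
            try:
                name = signal.Signals(sig).name   # 11 -> SIGSEGV
            except ValueError:
                name = f"signal {sig}"
            return f"terminated by {name} (128+{sig})"
        return f"exited with code {status}"

    print(describe_exit(139))  # terminated by SIGSEGV (128+11)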
Dec 06 09:46:40 compute-0 sudo[119988]: pam_unix(sudo:session): session closed for user root
Dec 06 09:46:40 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:09:46:40] "GET /metrics HTTP/1.1" 200 48187 "" "Prometheus/2.51.0"
Dec 06 09:46:40 compute-0 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:09:46:40] "GET /metrics HTTP/1.1" 200 48187 "" "Prometheus/2.51.0"
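The pair of lines above is one event logged twice, once by the container and once by the mgr's cherrypy access log: Prometheus scraping the mgr exporter. A minimal sketch of such a scrape, assuming the exporter's default port 9283 (an assumption; the log only shows the client address):

    import urllib.request

    # Fetch the same /metrics endpoint and count metric families.
    with urllib.request.urlopen("http://192.168.122.100:9283/metrics",
                                timeout=5) as resp:
        types = [line for line in resp.read().decode().splitlines()
                 if line.startswith("# TYPE")]
    print(f"{len(types)} metric families")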
Dec 06 09:46:41 compute-0 ceph-mon[74327]: pgmap v155: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec 06 09:46:41 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:46:41 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:46:41 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:46:41 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:46:41 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:46:41.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:46:41 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:46:41 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:46:41 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:46:41.491 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:46:41 compute-0 sudo[120200]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kstgrumoiwyrkxcpycqujsletekdbcke ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014401.1621928-626-130422429197810/AnsiballZ_package_facts.py'
Dec 06 09:46:41 compute-0 sudo[120200]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:46:41 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v156: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec 06 09:46:42 compute-0 python3.9[120202]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Dec 06 09:46:42 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-haproxy-nfs-cephfs-compute-0-fzuvue[96127]: [WARNING] 339/094642 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
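haproxy marks nfs.cephfs.2 DOWN from a pure layer-4 probe: the TCP connect was refused because the ganesha container above had just died. A minimal sketch of the same kind of check, with a hypothetical address for that backend (the log does not show the server's IP:port):

    import socket

    def l4_check(host: str, port: int, timeout: float = 1.0) -> bool:
        # Layer-4 health check: healthy == TCP handshake completes.
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:  # ECONNREFUSED, timeout, host unreachable, ...
            return False

    print(l4_check("192.168.122.100", 2049))  # hypothetical backend address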
Dec 06 09:46:42 compute-0 sudo[120200]: pam_unix(sudo:session): session closed for user root
Dec 06 09:46:43 compute-0 ceph-mon[74327]: pgmap v156: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec 06 09:46:43 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:46:43 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 09:46:43 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:46:43.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 09:46:43 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:46:43 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:46:43 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:46:43.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:46:43 compute-0 sudo[120354]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rthyjsuavntcokmocosmzjaoxcckqgsg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014403.2377367-656-171810104986534/AnsiballZ_stat.py'
Dec 06 09:46:43 compute-0 sudo[120354]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:46:43 compute-0 python3.9[120356]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:46:43 compute-0 sudo[120354]: pam_unix(sudo:session): session closed for user root
Dec 06 09:46:43 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v157: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec 06 09:46:43 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 09:46:43 compute-0 sudo[120432]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nsmxgwxfeuuitppgwsqnjfvhzykslicj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014403.2377367-656-171810104986534/AnsiballZ_file.py'
Dec 06 09:46:44 compute-0 sudo[120432]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:46:44 compute-0 python3.9[120434]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/chrony.conf _original_basename=chrony.conf.j2 recurse=False state=file path=/etc/chrony.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:46:44 compute-0 sudo[120432]: pam_unix(sudo:session): session closed for user root
Dec 06 09:46:44 compute-0 sudo[120584]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kklfazadvozsmpdvewvzecopvgpuyxdt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014404.5394254-692-113911953520547/AnsiballZ_stat.py'
Dec 06 09:46:44 compute-0 sudo[120584]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:46:44 compute-0 python3.9[120586]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:46:45 compute-0 sudo[120584]: pam_unix(sudo:session): session closed for user root
Dec 06 09:46:45 compute-0 sudo[120664]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qfilmsuqiyfbttnjmnwzcijbaqfnuskr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014404.5394254-692-113911953520547/AnsiballZ_file.py'
Dec 06 09:46:45 compute-0 sudo[120664]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:46:45 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:46:45 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 09:46:45 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:46:45.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 09:46:45 compute-0 python3.9[120666]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/sysconfig/chronyd _original_basename=chronyd.sysconfig.j2 recurse=False state=file path=/etc/sysconfig/chronyd force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:46:45 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:46:45 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:46:45 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:46:45.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:46:45 compute-0 sudo[120664]: pam_unix(sudo:session): session closed for user root
Dec 06 09:46:45 compute-0 ceph-mon[74327]: pgmap v157: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec 06 09:46:45 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v158: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec 06 09:46:45 compute-0 sudo[120667]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 09:46:45 compute-0 sudo[120667]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:46:45 compute-0 sudo[120667]: pam_unix(sudo:session): session closed for user root
Dec 06 09:46:46 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:46:46.967Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
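Alertmanager's webhook receivers POST the alert batch as JSON and retry until the notification context expires, which is what "notify retry canceled ... context deadline exceeded" records for both dashboard receivers. A minimal sketch of that pattern, reusing the receiver URL from the log (the payload shape here is simplified, an assumption):

    import json
    import time
    import urllib.request

    URL = "http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver"

    def notify(alerts: list, attempts: int = 2, timeout: float = 5.0) -> int:
        req = urllib.request.Request(
            URL, data=json.dumps({"alerts": alerts}).encode(),
            headers={"Content-Type": "application/json"})
        for attempt in range(1, attempts + 1):
            try:
                with urllib.request.urlopen(req, timeout=timeout) as resp:
                    return resp.status
            except OSError as exc:  # refused, timed out, unreachable, ...
                if attempt == attempts:
                    raise RuntimeError(
                        f"notify retry canceled after {attempt} attempts") from exc
                time.sleep(1.0)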
Dec 06 09:46:47 compute-0 ceph-mon[74327]: pgmap v158: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec 06 09:46:47 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:46:47 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 09:46:47 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:46:47.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 09:46:47 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:46:47 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:46:47 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:46:47.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:46:47 compute-0 sudo[120843]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bgaprnsukklnepcguzajtysrfvjymetv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014407.029748-746-159499891834810/AnsiballZ_lineinfile.py'
Dec 06 09:46:47 compute-0 sudo[120843]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:46:47 compute-0 python3.9[120845]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network encoding=utf-8 backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:46:47 compute-0 sudo[120843]: pam_unix(sudo:session): session closed for user root
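The ansible-lineinfile invocation above is an idempotent edit: replace the first line matching ^PEERNTP= or append PEERNTP=no if nothing matches, so rerunning the play changes nothing. A minimal sketch of that semantic (a hypothetical helper that ignores lineinfile's backup and permission handling):

    import re
    from pathlib import Path

    def line_in_file(path: str, regexp: str, line: str) -> bool:
        p = Path(path)
        lines = p.read_text(encoding="utf-8").splitlines() if p.exists() else []
        pat = re.compile(regexp)
        for i, old in enumerate(lines):
            if pat.search(old):
                if old == line:
                    return False          # already as desired: no change
                lines[i] = line
                break
        else:
            lines.append(line)            # no match anywhere: append
        p.write_text("\n".join(lines) + "\n", encoding="utf-8")
        return True

    # line_in_file("/etc/sysconfig/network", r"^PEERNTP=", "PEERNTP=no")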
Dec 06 09:46:47 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v159: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Dec 06 09:46:48 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 09:46:49 compute-0 sudo[120996]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-twrqbnnznnfqhtmvugfpdmuakpfbhjui ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014408.726714-791-49400091476856/AnsiballZ_setup.py'
Dec 06 09:46:49 compute-0 sudo[120996]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:46:49 compute-0 ceph-mon[74327]: pgmap v159: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Dec 06 09:46:49 compute-0 python3.9[120998]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 06 09:46:49 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:46:49 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:46:49 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:46:49.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:46:49 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:46:49 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:46:49 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:46:49.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:46:49 compute-0 sudo[120996]: pam_unix(sudo:session): session closed for user root
Dec 06 09:46:49 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v160: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec 06 09:46:50 compute-0 sudo[121081]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rrnghscrdzfksjhpbwuljvjcitmymbfu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014408.726714-791-49400091476856/AnsiballZ_systemd.py'
Dec 06 09:46:50 compute-0 sudo[121081]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:46:50 compute-0 systemd[1]: ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258@nfs.cephfs.2.0.compute-0.dfwxck.service: Scheduled restart job, restart counter is at 2.
Dec 06 09:46:50 compute-0 systemd[1]: Stopped Ceph nfs.cephfs.2.0.compute-0.dfwxck for 5ecd3f74-dade-5fc4-92ce-8950ae424258.
Dec 06 09:46:50 compute-0 systemd[1]: ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258@nfs.cephfs.2.0.compute-0.dfwxck.service: Consumed 1.507s CPU time.
Dec 06 09:46:50 compute-0 python3.9[121083]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 06 09:46:50 compute-0 systemd[1]: Starting Ceph nfs.cephfs.2.0.compute-0.dfwxck for 5ecd3f74-dade-5fc4-92ce-8950ae424258...
Dec 06 09:46:50 compute-0 sudo[121081]: pam_unix(sudo:session): session closed for user root
Dec 06 09:46:50 compute-0 podman[121156]: 2025-12-06 09:46:50.615540684 +0000 UTC m=+0.048280315 container create 110de08b0faf0070bf966f79c685b3e90821d04d13d1192b43b0dcfdec88a2e0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 09:46:50 compute-0 ceph-mon[74327]: pgmap v160: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec 06 09:46:50 compute-0 podman[121156]: 2025-12-06 09:46:50.589669921 +0000 UTC m=+0.022409582 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 06 09:46:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b803fd6ab17911ab7240c5d02c6dbaeae752811fd5893ad82ed4c49a9721f1c/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Dec 06 09:46:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b803fd6ab17911ab7240c5d02c6dbaeae752811fd5893ad82ed4c49a9721f1c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 09:46:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b803fd6ab17911ab7240c5d02c6dbaeae752811fd5893ad82ed4c49a9721f1c/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Dec 06 09:46:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b803fd6ab17911ab7240c5d02c6dbaeae752811fd5893ad82ed4c49a9721f1c/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.2.0.compute-0.dfwxck-rgw/keyring supports timestamps until 2038 (0x7fffffff)
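The four kernel lines above are the XFS year-2038 notice: without the bigtime feature, inode timestamps on these bind-mounted paths saturate at 0x7fffffff seconds after the epoch. The cutoff is easy to verify:

    from datetime import datetime, timezone

    # 0x7fffffff is the largest signed 32-bit time_t, as warned above.
    print(datetime.fromtimestamp(0x7FFFFFFF, tz=timezone.utc))
    # -> 2038-01-19 03:14:07+00:00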
Dec 06 09:46:50 compute-0 podman[121156]: 2025-12-06 09:46:50.724577467 +0000 UTC m=+0.157317118 container init 110de08b0faf0070bf966f79c685b3e90821d04d13d1192b43b0dcfdec88a2e0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 06 09:46:50 compute-0 podman[121156]: 2025-12-06 09:46:50.730383583 +0000 UTC m=+0.163123214 container start 110de08b0faf0070bf966f79c685b3e90821d04d13d1192b43b0dcfdec88a2e0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, ceph=True)
Dec 06 09:46:50 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[121171]: 06/12/2025 09:46:50 : epoch 6933fb8a : compute-0 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Dec 06 09:46:50 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[121171]: 06/12/2025 09:46:50 : epoch 6933fb8a : compute-0 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Dec 06 09:46:50 compute-0 bash[121156]: 110de08b0faf0070bf966f79c685b3e90821d04d13d1192b43b0dcfdec88a2e0
Dec 06 09:46:50 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[121171]: 06/12/2025 09:46:50 : epoch 6933fb8a : compute-0 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Dec 06 09:46:50 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[121171]: 06/12/2025 09:46:50 : epoch 6933fb8a : compute-0 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Dec 06 09:46:50 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[121171]: 06/12/2025 09:46:50 : epoch 6933fb8a : compute-0 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Dec 06 09:46:50 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[121171]: 06/12/2025 09:46:50 : epoch 6933fb8a : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Dec 06 09:46:50 compute-0 systemd[1]: Started Ceph nfs.cephfs.2.0.compute-0.dfwxck for 5ecd3f74-dade-5fc4-92ce-8950ae424258.
Dec 06 09:46:50 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[121171]: 06/12/2025 09:46:50 : epoch 6933fb8a : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Dec 06 09:46:50 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[121171]: 06/12/2025 09:46:50 : epoch 6933fb8a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 06 09:46:50 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:09:46:50] "GET /metrics HTTP/1.1" 200 48187 "" "Prometheus/2.51.0"
Dec 06 09:46:50 compute-0 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:09:46:50] "GET /metrics HTTP/1.1" 200 48187 "" "Prometheus/2.51.0"
Dec 06 09:46:51 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:46:51 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:46:51 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:46:51.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:46:51 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:46:51 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:46:51 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:46:51.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:46:51 compute-0 sshd-session[115730]: Connection closed by 192.168.122.30 port 34474
Dec 06 09:46:51 compute-0 sshd-session[115727]: pam_unix(sshd:session): session closed for user zuul
Dec 06 09:46:51 compute-0 systemd[1]: session-42.scope: Deactivated successfully.
Dec 06 09:46:51 compute-0 systemd[1]: session-42.scope: Consumed 25.642s CPU time.
Dec 06 09:46:51 compute-0 systemd-logind[795]: Session 42 logged out. Waiting for processes to exit.
Dec 06 09:46:51 compute-0 systemd-logind[795]: Removed session 42.
Dec 06 09:46:51 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v161: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec 06 09:46:53 compute-0 ceph-mon[74327]: pgmap v161: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec 06 09:46:53 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:46:53 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:46:53 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:46:53.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:46:53 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:46:53 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 09:46:53 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:46:53.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 09:46:53 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v162: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Dec 06 09:46:53 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 06 09:46:53 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 09:46:53 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 09:46:53 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 09:46:53 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 09:46:53 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 09:46:53 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 09:46:53 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 09:46:53 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 09:46:54 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 09:46:55 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:46:55 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:46:55 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:46:55.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:46:55 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:46:55 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:46:55 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:46:55.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:46:55 compute-0 ceph-mon[74327]: pgmap v162: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Dec 06 09:46:55 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v163: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Dec 06 09:46:56 compute-0 ceph-mon[74327]: pgmap v163: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Dec 06 09:46:56 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[121171]: 06/12/2025 09:46:56 : epoch 6933fb8a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 06 09:46:56 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[121171]: 06/12/2025 09:46:56 : epoch 6933fb8a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 06 09:46:56 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[121171]: 06/12/2025 09:46:56 : epoch 6933fb8a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
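ganesha's rados_cluster recovery backend reports failures as negated errno values, so ret=-45 reads as errno 45; which error that names is platform-dependent, and the grace period simply stays unenforced. A lookup sketch, assuming the negated-errno convention holds here:

    import errno

    ret = -45                       # value reported above
    print(-ret, errno.errorcode.get(-ret, "unknown on this platform"))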
Dec 06 09:46:56 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:46:56.968Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec 06 09:46:56 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:46:56.968Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec 06 09:46:56 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:46:56.969Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec 06 09:46:57 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:46:57 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:46:57 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:46:57.369 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:46:57 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:46:57 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:46:57 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:46:57.508 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:46:57 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v164: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 426 B/s wr, 1 op/s
Dec 06 09:46:58 compute-0 sshd-session[121221]: Accepted publickey for zuul from 192.168.122.30 port 34492 ssh2: ECDSA SHA256:r1j7aLsKAM+XxDNbzEU5vWGpGNCOaIBwc7FZdATPttA
Dec 06 09:46:58 compute-0 systemd-logind[795]: New session 43 of user zuul.
Dec 06 09:46:58 compute-0 systemd[1]: Started Session 43 of User zuul.
Dec 06 09:46:58 compute-0 sshd-session[121221]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
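sshd logs the accepted key by its SHA256 fingerprint: base64(SHA256(raw key blob)) with the trailing "=" padding stripped. A minimal sketch that reproduces the format from a public-key line (the key string itself is hypothetical; the log only shows the digest):

    import base64
    import hashlib

    def ssh_fingerprint(pubkey_line: str) -> str:
        # Field 2 of an OpenSSH public-key line is the base64 key blob.
        blob = base64.b64decode(pubkey_line.split()[1])
        digest = hashlib.sha256(blob).digest()
        return "SHA256:" + base64.b64encode(digest).decode().rstrip("=")

    # ssh_fingerprint("ecdsa-sha2-nistp256 AAAA... zuul@host")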
Dec 06 09:46:58 compute-0 sudo[121374]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dikjgpyvsqxichjhdulkgantivoqlydm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014418.2301013-26-191068213955564/AnsiballZ_file.py'
Dec 06 09:46:58 compute-0 sudo[121374]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:46:58 compute-0 python3.9[121376]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:46:58 compute-0 sudo[121374]: pam_unix(sudo:session): session closed for user root
Dec 06 09:46:58 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 09:46:59 compute-0 ceph-mon[74327]: pgmap v164: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 426 B/s wr, 1 op/s
Dec 06 09:46:59 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:46:59 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:46:59 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:46:59.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:46:59 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:46:59 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 09:46:59 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:46:59.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 09:46:59 compute-0 sudo[121528]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sqkybcjgwvmkzdwstoyxauprdysbztrq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014419.2170994-62-268872040853931/AnsiballZ_stat.py'
Dec 06 09:46:59 compute-0 sudo[121528]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:46:59 compute-0 python3.9[121530]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/ceph-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:46:59 compute-0 sudo[121528]: pam_unix(sudo:session): session closed for user root
Dec 06 09:46:59 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v165: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Dec 06 09:47:00 compute-0 sudo[121606]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vtgwdjrsbzujfxnhagjbcmrhizbwkbgh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014419.2170994-62-268872040853931/AnsiballZ_file.py'
Dec 06 09:47:00 compute-0 sudo[121606]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:47:00 compute-0 python3.9[121608]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/ceph-networks.yaml _original_basename=firewall.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/ceph-networks.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:47:00 compute-0 sudo[121606]: pam_unix(sudo:session): session closed for user root
Dec 06 09:47:00 compute-0 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:09:47:00] "GET /metrics HTTP/1.1" 200 48264 "" "Prometheus/2.51.0"
Dec 06 09:47:00 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:09:47:00] "GET /metrics HTTP/1.1" 200 48264 "" "Prometheus/2.51.0"
Dec 06 09:47:00 compute-0 sshd-session[121224]: Connection closed by 192.168.122.30 port 34492
Dec 06 09:47:00 compute-0 sshd-session[121221]: pam_unix(sshd:session): session closed for user zuul
Dec 06 09:47:00 compute-0 systemd[1]: session-43.scope: Deactivated successfully.
Dec 06 09:47:00 compute-0 systemd[1]: session-43.scope: Consumed 1.517s CPU time.
Dec 06 09:47:00 compute-0 systemd-logind[795]: Session 43 logged out. Waiting for processes to exit.
Dec 06 09:47:00 compute-0 systemd-logind[795]: Removed session 43.
Dec 06 09:47:00 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-haproxy-nfs-cephfs-compute-0-fzuvue[96127]: [WARNING] 339/094700 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Dec 06 09:47:01 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[121171]: 06/12/2025 09:47:00 : epoch 6933fb8a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 06 09:47:01 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[121171]: 06/12/2025 09:47:00 : epoch 6933fb8a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 06 09:47:01 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[121171]: 06/12/2025 09:47:00 : epoch 6933fb8a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 06 09:47:01 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[121171]: 06/12/2025 09:47:01 : epoch 6933fb8a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 06 09:47:01 compute-0 ceph-mon[74327]: pgmap v165: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Dec 06 09:47:01 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:47:01 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.002000053s ======
Dec 06 09:47:01 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:47:01.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000053s
Dec 06 09:47:01 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:47:01 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 09:47:01 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:47:01.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 09:47:01 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v166: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Dec 06 09:47:02 compute-0 ceph-mon[74327]: pgmap v166: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Dec 06 09:47:03 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:47:03 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 09:47:03 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:47:03.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 09:47:03 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:47:03 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:47:03 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:47:03.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:47:03 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v167: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.6 KiB/s rd, 426 B/s wr, 2 op/s
Dec 06 09:47:03 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 09:47:05 compute-0 ceph-mon[74327]: pgmap v167: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.6 KiB/s rd, 426 B/s wr, 2 op/s
Dec 06 09:47:05 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:47:05 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:47:05 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:47:05.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:47:05 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:47:05 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 09:47:05 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:47:05.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 09:47:05 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v168: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 341 B/s wr, 1 op/s
Dec 06 09:47:06 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[121171]: 06/12/2025 09:47:05 : epoch 6933fb8a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 06 09:47:06 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[121171]: 06/12/2025 09:47:05 : epoch 6933fb8a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 06 09:47:06 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[121171]: 06/12/2025 09:47:05 : epoch 6933fb8a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 06 09:47:06 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[121171]: 06/12/2025 09:47:06 : epoch 6933fb8a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 06 09:47:06 compute-0 sudo[121639]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 09:47:06 compute-0 sudo[121639]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:47:06 compute-0 sudo[121639]: pam_unix(sudo:session): session closed for user root
Dec 06 09:47:06 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[121171]: 06/12/2025 09:47:06 : epoch 6933fb8a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 06 09:47:06 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[121171]: 06/12/2025 09:47:06 : epoch 6933fb8a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 06 09:47:06 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[121171]: 06/12/2025 09:47:06 : epoch 6933fb8a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 06 09:47:06 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:47:06.969Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 06 09:47:07 compute-0 sshd-session[121664]: Accepted publickey for zuul from 192.168.122.30 port 52306 ssh2: ECDSA SHA256:r1j7aLsKAM+XxDNbzEU5vWGpGNCOaIBwc7FZdATPttA
Dec 06 09:47:07 compute-0 systemd-logind[795]: New session 44 of user zuul.
Dec 06 09:47:07 compute-0 systemd[1]: Started Session 44 of User zuul.
Dec 06 09:47:07 compute-0 sshd-session[121664]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 06 09:47:07 compute-0 ceph-mon[74327]: pgmap v168: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 341 B/s wr, 1 op/s
Dec 06 09:47:07 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:47:07 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:47:07 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:47:07.387 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:47:07 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:47:07 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:47:07 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:47:07.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:47:07 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v169: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 682 B/s wr, 3 op/s
Dec 06 09:47:08 compute-0 python3.9[121819]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 06 09:47:08 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 06 09:47:08 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 09:47:08 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 09:47:09 compute-0 sudo[121975]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ozihmsykhlqykthxbpayrhdbtbdsaula ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014428.7414107-59-186969624408355/AnsiballZ_file.py'
Dec 06 09:47:09 compute-0 sudo[121975]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:47:09 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:47:09 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 09:47:09 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:47:09.390 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 09:47:09 compute-0 python3.9[121977]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:47:09 compute-0 sudo[121975]: pam_unix(sudo:session): session closed for user root
Dec 06 09:47:09 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:47:09 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:47:09 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:47:09.525 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:47:09 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v170: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 341 B/s wr, 1 op/s
Dec 06 09:47:09 compute-0 ceph-mon[74327]: pgmap v169: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 682 B/s wr, 3 op/s
Dec 06 09:47:09 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 09:47:10 compute-0 sudo[122150]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sfcjipwknqhepihuzryhlsxarwcaqxfc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014429.6553757-83-245819924814759/AnsiballZ_stat.py'
Dec 06 09:47:10 compute-0 sudo[122150]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:47:10 compute-0 python3.9[122152]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:47:10 compute-0 sudo[122150]: pam_unix(sudo:session): session closed for user root
Dec 06 09:47:10 compute-0 sudo[122228]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jrvnhzvhpeycodjtwbtstlmpqtlfmbsh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014429.6553757-83-245819924814759/AnsiballZ_file.py'
Dec 06 09:47:10 compute-0 sudo[122228]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:47:10 compute-0 python3.9[122230]: ansible-ansible.legacy.file Invoked with group=zuul mode=0660 owner=zuul dest=/root/.config/containers/auth.json _original_basename=.fej78wwi recurse=False state=file path=/root/.config/containers/auth.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:47:10 compute-0 sudo[122228]: pam_unix(sudo:session): session closed for user root
Dec 06 09:47:10 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:09:47:10] "GET /metrics HTTP/1.1" 200 48265 "" "Prometheus/2.51.0"
Dec 06 09:47:10 compute-0 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:09:47:10] "GET /metrics HTTP/1.1" 200 48265 "" "Prometheus/2.51.0"
Dec 06 09:47:11 compute-0 ceph-mon[74327]: pgmap v170: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 341 B/s wr, 1 op/s
Dec 06 09:47:11 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:47:11 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:47:11 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:47:11.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:47:11 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:47:11 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:47:11 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:47:11.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:47:11 compute-0 sudo[122382]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zvjqbygokrldokgvkgugjgiapkblkezj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014431.3171537-143-266240305590961/AnsiballZ_stat.py'
Dec 06 09:47:11 compute-0 sudo[122382]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:47:11 compute-0 python3.9[122384]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:47:11 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v171: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 341 B/s wr, 1 op/s
Dec 06 09:47:11 compute-0 sudo[122382]: pam_unix(sudo:session): session closed for user root
Dec 06 09:47:12 compute-0 sudo[122460]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-flgicwhkrgjnvyfwgtmlbfjjvzinavuo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014431.3171537-143-266240305590961/AnsiballZ_file.py'
Dec 06 09:47:12 compute-0 sudo[122460]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:47:12 compute-0 python3.9[122462]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/sysconfig/podman_drop_in _original_basename=.9w1ifd66 recurse=False state=file path=/etc/sysconfig/podman_drop_in force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:47:12 compute-0 sudo[122460]: pam_unix(sudo:session): session closed for user root
Dec 06 09:47:12 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[121171]: 06/12/2025 09:47:12 : epoch 6933fb8a : compute-0 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Dec 06 09:47:12 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[121171]: 06/12/2025 09:47:12 : epoch 6933fb8a : compute-0 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Dec 06 09:47:12 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[121171]: 06/12/2025 09:47:12 : epoch 6933fb8a : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Dec 06 09:47:12 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[121171]: 06/12/2025 09:47:12 : epoch 6933fb8a : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Dec 06 09:47:12 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[121171]: 06/12/2025 09:47:12 : epoch 6933fb8a : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Dec 06 09:47:12 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[121171]: 06/12/2025 09:47:12 : epoch 6933fb8a : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Dec 06 09:47:12 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[121171]: 06/12/2025 09:47:12 : epoch 6933fb8a : compute-0 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Dec 06 09:47:12 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[121171]: 06/12/2025 09:47:12 : epoch 6933fb8a : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec 06 09:47:12 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[121171]: 06/12/2025 09:47:12 : epoch 6933fb8a : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec 06 09:47:12 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[121171]: 06/12/2025 09:47:12 : epoch 6933fb8a : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec 06 09:47:12 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[121171]: 06/12/2025 09:47:12 : epoch 6933fb8a : compute-0 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Dec 06 09:47:12 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[121171]: 06/12/2025 09:47:12 : epoch 6933fb8a : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec 06 09:47:12 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[121171]: 06/12/2025 09:47:12 : epoch 6933fb8a : compute-0 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Dec 06 09:47:12 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[121171]: 06/12/2025 09:47:12 : epoch 6933fb8a : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Dec 06 09:47:12 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[121171]: 06/12/2025 09:47:12 : epoch 6933fb8a : compute-0 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Dec 06 09:47:12 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[121171]: 06/12/2025 09:47:12 : epoch 6933fb8a : compute-0 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Dec 06 09:47:12 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[121171]: 06/12/2025 09:47:12 : epoch 6933fb8a : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Dec 06 09:47:12 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[121171]: 06/12/2025 09:47:12 : epoch 6933fb8a : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Dec 06 09:47:12 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[121171]: 06/12/2025 09:47:12 : epoch 6933fb8a : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Dec 06 09:47:12 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[121171]: 06/12/2025 09:47:12 : epoch 6933fb8a : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Dec 06 09:47:12 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[121171]: 06/12/2025 09:47:12 : epoch 6933fb8a : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Dec 06 09:47:12 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[121171]: 06/12/2025 09:47:12 : epoch 6933fb8a : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Dec 06 09:47:12 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[121171]: 06/12/2025 09:47:12 : epoch 6933fb8a : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Dec 06 09:47:12 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[121171]: 06/12/2025 09:47:12 : epoch 6933fb8a : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Dec 06 09:47:12 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[121171]: 06/12/2025 09:47:12 : epoch 6933fb8a : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Dec 06 09:47:12 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[121171]: 06/12/2025 09:47:12 : epoch 6933fb8a : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Dec 06 09:47:12 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[121171]: 06/12/2025 09:47:12 : epoch 6933fb8a : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Dec 06 09:47:12 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[121171]: 06/12/2025 09:47:12 : epoch 6933fb8a : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 06 09:47:12 compute-0 sudo[122624]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zutitzijhndortkktwhbcephfaufpscs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014432.5726142-182-76574849285939/AnsiballZ_file.py'
Dec 06 09:47:12 compute-0 sudo[122624]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:47:12 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[121171]: 06/12/2025 09:47:12 : epoch 6933fb8a : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f82e4000df0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:47:13 compute-0 python3.9[122626]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 06 09:47:13 compute-0 sudo[122624]: pam_unix(sudo:session): session closed for user root
Dec 06 09:47:13 compute-0 ceph-mon[74327]: pgmap v171: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 341 B/s wr, 1 op/s
Dec 06 09:47:13 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:47:13 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:47:13 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:47:13.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:47:13 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[121171]: 06/12/2025 09:47:13 : epoch 6933fb8a : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f82d00016c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:47:13 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:47:13 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:47:13 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:47:13.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:47:13 compute-0 sudo[122782]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fwhiwysedefayhgwtdecjnrvapytrnlp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014433.2963276-206-111223313509062/AnsiballZ_stat.py'
Dec 06 09:47:13 compute-0 sudo[122782]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:47:13 compute-0 python3.9[122784]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:47:13 compute-0 sudo[122782]: pam_unix(sudo:session): session closed for user root
Dec 06 09:47:13 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v172: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.9 KiB/s rd, 1.1 KiB/s wr, 4 op/s
Dec 06 09:47:14 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 09:47:14 compute-0 sudo[122860]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qkmvkpqaghqkxoeyfklrrisyreeterpg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014433.2963276-206-111223313509062/AnsiballZ_file.py'
Dec 06 09:47:14 compute-0 sudo[122860]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:47:14 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[121171]: 06/12/2025 09:47:14 : epoch 6933fb8a : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f82c0000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:47:14 compute-0 python3.9[122862]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 06 09:47:14 compute-0 sudo[122860]: pam_unix(sudo:session): session closed for user root
Dec 06 09:47:14 compute-0 sudo[123012]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-airhrlaccjoiekobxqwuqhgxvahriwgx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014434.493686-206-277907162174421/AnsiballZ_stat.py'
Dec 06 09:47:14 compute-0 sudo[123012]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:47:14 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[121171]: 06/12/2025 09:47:14 : epoch 6933fb8a : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f82b8000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:47:14 compute-0 python3.9[123014]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:47:15 compute-0 sudo[123012]: pam_unix(sudo:session): session closed for user root
Dec 06 09:47:15 compute-0 sudo[123092]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bskjjjliregvrjeuyriziccslhupbalm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014434.493686-206-277907162174421/AnsiballZ_file.py'
Dec 06 09:47:15 compute-0 sudo[123092]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:47:15 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:47:15 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:47:15 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:47:15.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:47:15 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[121171]: 06/12/2025 09:47:15 : epoch 6933fb8a : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f82d8001c00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:47:15 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[121171]: 06/12/2025 09:47:15 : epoch 6933fb8a : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 06 09:47:15 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[121171]: 06/12/2025 09:47:15 : epoch 6933fb8a : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 06 09:47:15 compute-0 python3.9[123094]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 06 09:47:15 compute-0 sudo[123092]: pam_unix(sudo:session): session closed for user root
Dec 06 09:47:15 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:47:15 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:47:15 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:47:15.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:47:15 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v173: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s rd, 1.1 KiB/s wr, 3 op/s
Dec 06 09:47:16 compute-0 sudo[123244]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pkczbzupdpemqxxqixaujksvsbemxdad ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014435.7742493-275-254799123666186/AnsiballZ_file.py'
Dec 06 09:47:16 compute-0 sudo[123244]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:47:16 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-haproxy-nfs-cephfs-compute-0-fzuvue[96127]: [WARNING] 339/094716 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Dec 06 09:47:16 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[121171]: 06/12/2025 09:47:16 : epoch 6933fb8a : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f82d00016c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:47:16 compute-0 python3.9[123246]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:47:16 compute-0 ceph-mon[74327]: pgmap v172: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.9 KiB/s rd, 1.1 KiB/s wr, 4 op/s
Dec 06 09:47:16 compute-0 sudo[123244]: pam_unix(sudo:session): session closed for user root
Dec 06 09:47:16 compute-0 sudo[123396]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uxqaluybcfbwukbljqqttvmqkonsttgz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014436.5623183-299-252886286136796/AnsiballZ_stat.py'
Dec 06 09:47:16 compute-0 sudo[123396]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:47:16 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[121171]: 06/12/2025 09:47:16 : epoch 6933fb8a : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f82c00016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:47:16 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:47:16.971Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec 06 09:47:16 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:47:16.972Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 06 09:47:17 compute-0 python3.9[123398]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:47:17 compute-0 sudo[123396]: pam_unix(sudo:session): session closed for user root
Dec 06 09:47:17 compute-0 ceph-mon[74327]: pgmap v173: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s rd, 1.1 KiB/s wr, 3 op/s
Dec 06 09:47:17 compute-0 sudo[123476]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rbufmsajhyljbcjnjclprmtfmuxqssiy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014436.5623183-299-252886286136796/AnsiballZ_file.py'
Dec 06 09:47:17 compute-0 sudo[123476]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:47:17 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:47:17 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 09:47:17 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:47:17.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 09:47:17 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[121171]: 06/12/2025 09:47:17 : epoch 6933fb8a : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f82b80016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:47:17 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:47:17 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 09:47:17 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:47:17.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 09:47:17 compute-0 python3.9[123478]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:47:17 compute-0 sudo[123476]: pam_unix(sudo:session): session closed for user root
Dec 06 09:47:17 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v174: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 3.5 KiB/s rd, 1.3 KiB/s wr, 5 op/s
Dec 06 09:47:18 compute-0 sudo[123628]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wmlejascxakotiulfrdexfyjamrsodfw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014437.845832-335-217924703665269/AnsiballZ_stat.py'
Dec 06 09:47:18 compute-0 sudo[123628]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:47:18 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[121171]: 06/12/2025 09:47:18 : epoch 6933fb8a : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f82d8001c00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:47:18 compute-0 python3.9[123630]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:47:18 compute-0 sudo[123628]: pam_unix(sudo:session): session closed for user root
Dec 06 09:47:18 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[121171]: 06/12/2025 09:47:18 : epoch 6933fb8a : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Dec 06 09:47:18 compute-0 ceph-mon[74327]: pgmap v174: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 3.5 KiB/s rd, 1.3 KiB/s wr, 5 op/s
Dec 06 09:47:18 compute-0 sudo[123706]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vibioyusxumcufswmdavwnpgxjouiuxo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014437.845832-335-217924703665269/AnsiballZ_file.py'
Dec 06 09:47:18 compute-0 sudo[123706]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:47:18 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[121171]: 06/12/2025 09:47:18 : epoch 6933fb8a : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f82d00016c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:47:19 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 09:47:19 compute-0 python3.9[123708]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:47:19 compute-0 sudo[123706]: pam_unix(sudo:session): session closed for user root
Dec 06 09:47:19 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:47:19 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:47:19 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:47:19.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:47:19 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[121171]: 06/12/2025 09:47:19 : epoch 6933fb8a : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f82c00016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:47:19 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:47:19 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 09:47:19 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:47:19.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 09:47:19 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v175: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 1023 B/s wr, 3 op/s
Dec 06 09:47:19 compute-0 sudo[123860]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dzreetozorlauuyavjiskqfdpyuhpspq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014439.2747464-371-109526540391136/AnsiballZ_systemd.py'
Dec 06 09:47:19 compute-0 sudo[123860]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:47:20 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[121171]: 06/12/2025 09:47:20 : epoch 6933fb8a : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f82b80016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:47:20 compute-0 python3.9[123862]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 06 09:47:20 compute-0 systemd[1]: Reloading.
Dec 06 09:47:20 compute-0 systemd-sysv-generator[123894]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 09:47:20 compute-0 systemd-rc-local-generator[123891]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 09:47:20 compute-0 sudo[123860]: pam_unix(sudo:session): session closed for user root
Dec 06 09:47:20 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:09:47:20] "GET /metrics HTTP/1.1" 200 48265 "" "Prometheus/2.51.0"
Dec 06 09:47:20 compute-0 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:09:47:20] "GET /metrics HTTP/1.1" 200 48265 "" "Prometheus/2.51.0"
Dec 06 09:47:20 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[121171]: 06/12/2025 09:47:20 : epoch 6933fb8a : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f82d8002910 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:47:21 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-haproxy-nfs-cephfs-compute-0-fzuvue[96127]: [WARNING] 339/094721 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Dec 06 09:47:21 compute-0 sudo[124052]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zqecjqduwnjhdaipwihqgbvjjgjoogtn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014440.9268985-395-214649302198340/AnsiballZ_stat.py'
Dec 06 09:47:21 compute-0 sudo[124052]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:47:21 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:47:21 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 09:47:21 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:47:21.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 09:47:21 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[121171]: 06/12/2025 09:47:21 : epoch 6933fb8a : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f82d00016c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:47:21 compute-0 ceph-mon[74327]: pgmap v175: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 1023 B/s wr, 3 op/s
Dec 06 09:47:21 compute-0 python3.9[124054]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:47:21 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:47:21 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 09:47:21 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:47:21.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 09:47:21 compute-0 sudo[124052]: pam_unix(sudo:session): session closed for user root
Dec 06 09:47:21 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v176: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 1023 B/s wr, 3 op/s
Dec 06 09:47:21 compute-0 sudo[124130]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eutscsyvswladlwstftmqzgwrcdnztvh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014440.9268985-395-214649302198340/AnsiballZ_file.py'
Dec 06 09:47:21 compute-0 sudo[124130]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:47:22 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[121171]: 06/12/2025 09:47:22 : epoch 6933fb8a : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f82c00016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:47:22 compute-0 python3.9[124132]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:47:22 compute-0 sudo[124130]: pam_unix(sudo:session): session closed for user root
Dec 06 09:47:22 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[121171]: 06/12/2025 09:47:22 : epoch 6933fb8a : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f82b80016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:47:23 compute-0 sudo[124283]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zwpjldkcuwyocvmqhuqnibuoswhdcuax ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014442.7661965-431-68857288424886/AnsiballZ_stat.py'
Dec 06 09:47:23 compute-0 sudo[124283]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:47:23 compute-0 python3.9[124285]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:47:23 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:47:23 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:47:23 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:47:23.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:47:23 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[121171]: 06/12/2025 09:47:23 : epoch 6933fb8a : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f82d8002910 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:47:23 compute-0 sudo[124283]: pam_unix(sudo:session): session closed for user root
Dec 06 09:47:23 compute-0 ceph-mon[74327]: pgmap v176: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 1023 B/s wr, 3 op/s
Dec 06 09:47:23 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:47:23 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 09:47:23 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:47:23.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 09:47:23 compute-0 sudo[124362]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fucdigrnynsorzcfhbmtylcwwyeijrgi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014442.7661965-431-68857288424886/AnsiballZ_file.py'
Dec 06 09:47:23 compute-0 sudo[124362]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:47:23 compute-0 ceph-mgr[74618]: [balancer INFO root] Optimize plan auto_2025-12-06_09:47:23
Dec 06 09:47:23 compute-0 ceph-mgr[74618]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 06 09:47:23 compute-0 ceph-mgr[74618]: [balancer INFO root] do_upmap
Dec 06 09:47:23 compute-0 ceph-mgr[74618]: [balancer INFO root] pools ['default.rgw.meta', 'backups', 'default.rgw.control', '.nfs', 'volumes', 'vms', 'cephfs.cephfs.data', '.rgw.root', 'default.rgw.log', 'images', 'cephfs.cephfs.meta', '.mgr']
Dec 06 09:47:23 compute-0 ceph-mgr[74618]: [balancer INFO root] prepared 0/10 upmap changes
Dec 06 09:47:23 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v177: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 1023 B/s wr, 3 op/s
Dec 06 09:47:23 compute-0 python3.9[124364]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:47:23 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 06 09:47:23 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 09:47:23 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] _maybe_adjust
Dec 06 09:47:23 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 09:47:23 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 06 09:47:23 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 09:47:23 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 09:47:23 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 09:47:23 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 09:47:23 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 09:47:23 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 09:47:23 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 09:47:23 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 09:47:23 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 09:47:23 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Dec 06 09:47:23 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 09:47:23 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 09:47:23 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 09:47:23 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec 06 09:47:23 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 09:47:23 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 06 09:47:23 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 09:47:23 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec 06 09:47:23 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 09:47:23 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 09:47:23 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 09:47:23 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 06 09:47:23 compute-0 sudo[124362]: pam_unix(sudo:session): session closed for user root
Dec 06 09:47:23 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 09:47:23 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 09:47:23 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 09:47:23 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 09:47:23 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 09:47:23 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 09:47:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 06 09:47:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 09:47:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 09:47:24 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 09:47:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 09:47:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 09:47:24 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[121171]: 06/12/2025 09:47:24 : epoch 6933fb8a : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f82d00016c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:47:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 06 09:47:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 09:47:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 09:47:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 09:47:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 09:47:24 compute-0 sudo[124514]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ztnqtkonrgrgvzvsvgzdwsfmjflyyexh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014444.5129795-467-784259605346/AnsiballZ_systemd.py'
Dec 06 09:47:24 compute-0 sudo[124514]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:47:24 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[121171]: 06/12/2025 09:47:24 : epoch 6933fb8a : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f82c0002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:47:25 compute-0 python3.9[124516]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 06 09:47:25 compute-0 systemd[1]: Reloading.
Dec 06 09:47:25 compute-0 systemd-rc-local-generator[124546]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 09:47:25 compute-0 systemd-sysv-generator[124549]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 09:47:25 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:47:25 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[121171]: 06/12/2025 09:47:25 : epoch 6933fb8a : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f82b8002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:47:25 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 09:47:25 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:47:25.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 09:47:25 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:47:25 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 09:47:25 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:47:25.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 09:47:25 compute-0 systemd[1]: Starting Create netns directory...
Dec 06 09:47:25 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Dec 06 09:47:25 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Dec 06 09:47:25 compute-0 systemd[1]: Finished Create netns directory.
Dec 06 09:47:25 compute-0 sudo[124514]: pam_unix(sudo:session): session closed for user root
Dec 06 09:47:25 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v178: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 255 B/s wr, 1 op/s
Dec 06 09:47:25 compute-0 ceph-mon[74327]: pgmap v177: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 1023 B/s wr, 3 op/s
Dec 06 09:47:25 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 09:47:26 compute-0 sudo[124636]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 09:47:26 compute-0 sudo[124636]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:47:26 compute-0 sudo[124636]: pam_unix(sudo:session): session closed for user root
Dec 06 09:47:26 compute-0 kernel: ganesha.nfsd[122490]: segfault at 50 ip 00007f839032632e sp 00007f835a7fb210 error 4 in libntirpc.so.5.8[7f839030b000+2c000] likely on CPU 1 (core 0, socket 1)
Dec 06 09:47:26 compute-0 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Dec 06 09:47:26 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[121171]: 06/12/2025 09:47:26 : epoch 6933fb8a : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f82d8002910 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:47:26 compute-0 systemd[1]: Started Process Core Dump (PID 124684/UID 0).
Dec 06 09:47:26 compute-0 python3.9[124736]: ansible-ansible.builtin.service_facts Invoked
Dec 06 09:47:26 compute-0 network[124753]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec 06 09:47:26 compute-0 network[124754]: 'network-scripts' will be removed from distribution in near future.
Dec 06 09:47:26 compute-0 network[124755]: It is advised to switch to 'NetworkManager' instead for network management.
Dec 06 09:47:26 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:47:26.973Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec 06 09:47:26 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:47:26.974Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec 06 09:47:27 compute-0 ceph-mon[74327]: pgmap v178: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 255 B/s wr, 1 op/s
Dec 06 09:47:27 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:47:27 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 09:47:27 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:47:27.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 09:47:27 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:47:27 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 09:47:27 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:47:27.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 09:47:27 compute-0 systemd-coredump[124688]: Process 121175 (ganesha.nfsd) of user 0 dumped core.
                                                    
                                                    Stack trace of thread 43:
                                                    #0  0x00007f839032632e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)
                                                    ELF object binary architecture: AMD x86-64
Dec 06 09:47:27 compute-0 systemd[1]: systemd-coredump@2-124684-0.service: Deactivated successfully.
Dec 06 09:47:27 compute-0 systemd[1]: systemd-coredump@2-124684-0.service: Consumed 1.512s CPU time.
Dec 06 09:47:27 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v179: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 255 B/s wr, 1 op/s
Dec 06 09:47:27 compute-0 podman[124798]: 2025-12-06 09:47:27.917091758 +0000 UTC m=+0.025247208 container died 110de08b0faf0070bf966f79c685b3e90821d04d13d1192b43b0dcfdec88a2e0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 09:47:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-7b803fd6ab17911ab7240c5d02c6dbaeae752811fd5893ad82ed4c49a9721f1c-merged.mount: Deactivated successfully.
Dec 06 09:47:27 compute-0 podman[124798]: 2025-12-06 09:47:27.985998265 +0000 UTC m=+0.094153705 container remove 110de08b0faf0070bf966f79c685b3e90821d04d13d1192b43b0dcfdec88a2e0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default)
Dec 06 09:47:27 compute-0 systemd[1]: ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258@nfs.cephfs.2.0.compute-0.dfwxck.service: Main process exited, code=exited, status=139/n/a
Dec 06 09:47:28 compute-0 systemd[1]: ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258@nfs.cephfs.2.0.compute-0.dfwxck.service: Failed with result 'exit-code'.
Dec 06 09:47:28 compute-0 systemd[1]: ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258@nfs.cephfs.2.0.compute-0.dfwxck.service: Consumed 1.781s CPU time.
Dec 06 09:47:29 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 09:47:29 compute-0 ceph-mon[74327]: pgmap v179: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 255 B/s wr, 1 op/s
Dec 06 09:47:29 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:47:29 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:47:29 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:47:29.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:47:29 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:47:29 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:47:29 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:47:29.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:47:29 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v180: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 B/s wr, 0 op/s
Dec 06 09:47:30 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:09:47:30] "GET /metrics HTTP/1.1" 200 48268 "" "Prometheus/2.51.0"
Dec 06 09:47:30 compute-0 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:09:47:30] "GET /metrics HTTP/1.1" 200 48268 "" "Prometheus/2.51.0"
Dec 06 09:47:31 compute-0 sudo[125069]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-haubraxpkbxsdusixixusjyimfznotat ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014450.9793084-545-137607785608003/AnsiballZ_stat.py'
Dec 06 09:47:31 compute-0 sudo[125069]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:47:31 compute-0 ceph-mon[74327]: pgmap v180: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 B/s wr, 0 op/s
Dec 06 09:47:31 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:47:31 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:47:31 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:47:31.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:47:31 compute-0 python3.9[125071]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:47:31 compute-0 sudo[125069]: pam_unix(sudo:session): session closed for user root
Dec 06 09:47:31 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:47:31 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 09:47:31 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:47:31.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 09:47:31 compute-0 sudo[125147]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xczdusstzswsqeavhruatoyggbpfsoxa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014450.9793084-545-137607785608003/AnsiballZ_file.py'
Dec 06 09:47:31 compute-0 sudo[125147]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:47:31 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v181: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 B/s wr, 0 op/s
Dec 06 09:47:31 compute-0 python3.9[125149]: ansible-ansible.legacy.file Invoked with mode=0600 dest=/etc/ssh/sshd_config _original_basename=sshd_config_block.j2 recurse=False state=file path=/etc/ssh/sshd_config force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:47:31 compute-0 sudo[125147]: pam_unix(sudo:session): session closed for user root
Dec 06 09:47:32 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-haproxy-nfs-cephfs-compute-0-fzuvue[96127]: [WARNING] 339/094732 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
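haproxy marking backend/nfs.cephfs.2 DOWN with a Layer4 "Connection refused" is consistent with the ganesha segfault above: nothing is listening on the NFS port while systemd waits out the restart delay. Backend state can be read from haproxy's admin socket; a sketch, with the socket path assumed since the log does not show it:

    echo 'show servers state' | socat stdio /var/run/haproxy/admin.sock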
Dec 06 09:47:32 compute-0 sudo[125299]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lykecpepoiocjmgvpstlmhzgwexahvan ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014452.2066658-584-89377624123661/AnsiballZ_file.py'
Dec 06 09:47:32 compute-0 sudo[125299]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:47:32 compute-0 python3.9[125301]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:47:32 compute-0 sudo[125299]: pam_unix(sudo:session): session closed for user root
Dec 06 09:47:32 compute-0 ceph-mon[74327]: pgmap v181: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 B/s wr, 0 op/s
Dec 06 09:47:33 compute-0 sudo[125453]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-korfpfhprawzvvyftceoovdrnrmexpmw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014452.9668086-608-108151115871304/AnsiballZ_stat.py'
Dec 06 09:47:33 compute-0 sudo[125453]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:47:33 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:47:33 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:47:33 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:47:33.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:47:33 compute-0 python3.9[125455]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:47:33 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:47:33 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 09:47:33 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:47:33.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 09:47:33 compute-0 sudo[125453]: pam_unix(sudo:session): session closed for user root
Dec 06 09:47:33 compute-0 sudo[125531]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-egczviwcsqaexvcvtziqddoqefpehvkr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014452.9668086-608-108151115871304/AnsiballZ_file.py'
Dec 06 09:47:33 compute-0 sudo[125531]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:47:33 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v182: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Dec 06 09:47:33 compute-0 python3.9[125533]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/var/lib/edpm-config/firewall/sshd-networks.yaml _original_basename=firewall.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/sshd-networks.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:47:34 compute-0 sudo[125531]: pam_unix(sudo:session): session closed for user root
Dec 06 09:47:34 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 09:47:34 compute-0 ceph-mon[74327]: pgmap v182: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Dec 06 09:47:35 compute-0 sudo[125684]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zckmnqbviafyrniivouourcimvyfwuup ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014454.6549957-653-133680147353728/AnsiballZ_timezone.py'
Dec 06 09:47:35 compute-0 sudo[125684]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:47:35 compute-0 python3.9[125686]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Dec 06 09:47:35 compute-0 systemd[1]: Starting Time & Date Service...
Dec 06 09:47:35 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:47:35 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:47:35 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:47:35.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:47:35 compute-0 systemd[1]: Started Time & Date Service.
Dec 06 09:47:35 compute-0 sudo[125684]: pam_unix(sudo:session): session closed for user root
Dec 06 09:47:35 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:47:35 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:47:35 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:47:35.573 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:47:35 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v183: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec 06 09:47:36 compute-0 sudo[125841]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nizrzxzergnuprabbnjrnrzirsweozds ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014455.7953508-680-276778771192582/AnsiballZ_file.py'
Dec 06 09:47:36 compute-0 sudo[125841]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:47:36 compute-0 python3.9[125843]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:47:36 compute-0 sudo[125841]: pam_unix(sudo:session): session closed for user root
Dec 06 09:47:36 compute-0 sudo[125993]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ypunemcdccagvfoevrlivhjhtabtncda ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014456.5005617-704-3351438094211/AnsiballZ_stat.py'
Dec 06 09:47:36 compute-0 sudo[125993]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:47:36 compute-0 ceph-mon[74327]: pgmap v183: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec 06 09:47:36 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:47:36.974Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
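The alertmanager dispatcher on compute-0 is timing out while posting to the ceph-dashboard webhook receivers on compute-1 and compute-2 ("context deadline exceeded" is a client-side timeout, not an HTTP error). A quick reachability check against the URL copied from the log; note it is plain http to port 8443, which the mgr dashboard typically serves over TLS, so the scheme itself may be worth questioning:

    curl -sv --max-time 5 -X POST http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver -d '{}'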
Dec 06 09:47:36 compute-0 python3.9[125995]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:47:37 compute-0 sudo[125993]: pam_unix(sudo:session): session closed for user root
Dec 06 09:47:37 compute-0 sudo[126073]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vufwuihmwjdinhnuhjprrtevabjewmwo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014456.5005617-704-3351438094211/AnsiballZ_file.py'
Dec 06 09:47:37 compute-0 sudo[126073]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:47:37 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:47:37 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 09:47:37 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:47:37.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 09:47:37 compute-0 python3.9[126075]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:47:37 compute-0 sudo[126073]: pam_unix(sudo:session): session closed for user root
Dec 06 09:47:37 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:47:37 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.002000053s ======
Dec 06 09:47:37 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:47:37.575 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000053s
Dec 06 09:47:37 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v184: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec 06 09:47:38 compute-0 systemd[1]: ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258@nfs.cephfs.2.0.compute-0.dfwxck.service: Scheduled restart job, restart counter is at 3.
Dec 06 09:47:38 compute-0 systemd[1]: Stopped Ceph nfs.cephfs.2.0.compute-0.dfwxck for 5ecd3f74-dade-5fc4-92ce-8950ae424258.
Dec 06 09:47:38 compute-0 systemd[1]: ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258@nfs.cephfs.2.0.compute-0.dfwxck.service: Consumed 1.781s CPU time.
Dec 06 09:47:38 compute-0 systemd[1]: Starting Ceph nfs.cephfs.2.0.compute-0.dfwxck for 5ecd3f74-dade-5fc4-92ce-8950ae424258...
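Third automatic restart of the nfs unit within this run. The effective restart policy does not appear in the journal, but it can be read back from the unit; these are standard systemd properties:

    systemctl show -p Restart,RestartUSec,NRestarts,StartLimitIntervalUSec,StartLimitBurst \
        'ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258@nfs.cephfs.2.0.compute-0.dfwxck.service'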
Dec 06 09:47:38 compute-0 sudo[126237]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fqbtjrizdhcjcgrmenobnmmqxdndoeph ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014457.7232351-740-245883642532956/AnsiballZ_stat.py'
Dec 06 09:47:38 compute-0 sudo[126237]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:47:38 compute-0 python3.9[126245]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:47:38 compute-0 podman[126276]: 2025-12-06 09:47:38.4645514 +0000 UTC m=+0.107152256 container create 0680872db78f4539de9816e63fe0e26e1ab0f0389d421d932e29ec3f87531d86 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec 06 09:47:38 compute-0 podman[126276]: 2025-12-06 09:47:38.381809263 +0000 UTC m=+0.024410179 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 06 09:47:38 compute-0 sudo[126237]: pam_unix(sudo:session): session closed for user root
Dec 06 09:47:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9dffa55875199467cd3d27a66b7cd46e7988a0483df9beb3d1dd985935856704/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Dec 06 09:47:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9dffa55875199467cd3d27a66b7cd46e7988a0483df9beb3d1dd985935856704/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 09:47:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9dffa55875199467cd3d27a66b7cd46e7988a0483df9beb3d1dd985935856704/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Dec 06 09:47:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9dffa55875199467cd3d27a66b7cd46e7988a0483df9beb3d1dd985935856704/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.2.0.compute-0.dfwxck-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Dec 06 09:47:38 compute-0 sudo[126369]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ngnfxwbvxmczbfouocdzrcwyurvennnf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014457.7232351-740-245883642532956/AnsiballZ_file.py'
Dec 06 09:47:38 compute-0 sudo[126369]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:47:38 compute-0 podman[126276]: 2025-12-06 09:47:38.889010815 +0000 UTC m=+0.531611761 container init 0680872db78f4539de9816e63fe0e26e1ab0f0389d421d932e29ec3f87531d86 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec 06 09:47:38 compute-0 podman[126276]: 2025-12-06 09:47:38.901351582 +0000 UTC m=+0.543952458 container start 0680872db78f4539de9816e63fe0e26e1ab0f0389d421d932e29ec3f87531d86 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec 06 09:47:38 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 06 09:47:38 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 09:47:38 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:47:38 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Dec 06 09:47:38 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:47:38 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Dec 06 09:47:38 compute-0 bash[126276]: 0680872db78f4539de9816e63fe0e26e1ab0f0389d421d932e29ec3f87531d86
Dec 06 09:47:38 compute-0 systemd[1]: Started Ceph nfs.cephfs.2.0.compute-0.dfwxck for 5ecd3f74-dade-5fc4-92ce-8950ae424258.
Dec 06 09:47:39 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 09:47:39 compute-0 python3.9[126371]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.vnmm999w recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:47:39 compute-0 sudo[126369]: pam_unix(sudo:session): session closed for user root
Dec 06 09:47:39 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:47:39 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Dec 06 09:47:39 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:47:39 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Dec 06 09:47:39 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:47:39 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Dec 06 09:47:39 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:47:39 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Dec 06 09:47:39 compute-0 ceph-mon[74327]: pgmap v184: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec 06 09:47:39 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 09:47:39 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:47:39 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Dec 06 09:47:39 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:47:39 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
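On startup ganesha enters a 90-second grace window so NFSv4 clients can reclaim locks before new state is granted; with no clients holding state it is lifted early (see the clid count(0) check at 09:47:46 below). Service metadata for this NFS cluster can be queried through the mgr; a sketch, with the cluster id taken from the service name:

    ceph nfs cluster info cephfs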
Dec 06 09:47:39 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:47:39 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:47:39 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:47:39.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:47:39 compute-0 sudo[126562]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ycavmmpfuydndeoycorbkvgpoynymqxq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014459.255938-776-192942471539721/AnsiballZ_stat.py'
Dec 06 09:47:39 compute-0 sudo[126562]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:47:39 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:47:39 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 09:47:39 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:47:39.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 09:47:39 compute-0 python3.9[126564]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:47:39 compute-0 sudo[126562]: pam_unix(sudo:session): session closed for user root
Dec 06 09:47:39 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v185: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec 06 09:47:40 compute-0 sudo[126640]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lpctgzatiwrqpotsnlncxvrdwnvabyem ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014459.255938-776-192942471539721/AnsiballZ_file.py'
Dec 06 09:47:40 compute-0 sudo[126640]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:47:40 compute-0 python3.9[126642]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:47:40 compute-0 sudo[126640]: pam_unix(sudo:session): session closed for user root
Dec 06 09:47:40 compute-0 sudo[126667]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 09:47:40 compute-0 sudo[126667]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:47:40 compute-0 sudo[126667]: pam_unix(sudo:session): session closed for user root
Dec 06 09:47:40 compute-0 sudo[126692]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Dec 06 09:47:40 compute-0 sudo[126692]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:47:40 compute-0 ceph-mon[74327]: pgmap v185: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec 06 09:47:40 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:09:47:40] "GET /metrics HTTP/1.1" 200 48255 "" "Prometheus/2.51.0"
Dec 06 09:47:40 compute-0 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:09:47:40] "GET /metrics HTTP/1.1" 200 48255 "" "Prometheus/2.51.0"
Dec 06 09:47:40 compute-0 sudo[126864]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-stfecinawxkljmgekrxjqotjsovhnnfw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014460.4846559-815-78662681496362/AnsiballZ_command.py'
Dec 06 09:47:40 compute-0 sudo[126864]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:47:40 compute-0 sudo[126692]: pam_unix(sudo:session): session closed for user root
Dec 06 09:47:41 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 06 09:47:41 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 09:47:41 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 06 09:47:41 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 09:47:41 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 06 09:47:41 compute-0 python3.9[126874]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
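The edpm firewall role snapshots the live ruleset as JSON before laying down its own chains. The same data is easy to inspect by hand; nft's -j output pipes cleanly into a pretty-printer:

    nft -j list ruleset | python3 -m json.tool | head -n 40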
Dec 06 09:47:41 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:47:41 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Dec 06 09:47:41 compute-0 sudo[126864]: pam_unix(sudo:session): session closed for user root
Dec 06 09:47:41 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:47:41 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 06 09:47:41 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 09:47:41 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 06 09:47:41 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 09:47:41 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 06 09:47:41 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 09:47:41 compute-0 sudo[126883]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 09:47:41 compute-0 sudo[126883]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:47:41 compute-0 sudo[126883]: pam_unix(sudo:session): session closed for user root
Dec 06 09:47:41 compute-0 sudo[126929]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 5ecd3f74-dade-5fc4-92ce-8950ae424258 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Dec 06 09:47:41 compute-0 sudo[126929]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
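This is cephadm deploying an OSD for drive group default_drive_group: inside a ceph container it runs ceph-volume's lvm batch against the pre-created LV /dev/ceph_vg0/ceph_lv0. The same invocation supports a dry run that reports what would be created without touching the device:

    ceph-volume lvm batch --report --no-auto /dev/ceph_vg0/ceph_lv0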
Dec 06 09:47:41 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:47:41 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 09:47:41 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:47:41.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 09:47:41 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:47:41 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:47:41 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:47:41.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:47:41 compute-0 sudo[127133]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hrxosaidxvovuqfotcntjjcwxinzwehj ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1765014461.3038597-839-96715978999895/AnsiballZ_edpm_nftables_from_files.py'
Dec 06 09:47:41 compute-0 sudo[127133]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:47:41 compute-0 podman[127087]: 2025-12-06 09:47:41.697354143 +0000 UTC m=+0.031696572 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 06 09:47:41 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v186: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec 06 09:47:41 compute-0 podman[127087]: 2025-12-06 09:47:41.991707272 +0000 UTC m=+0.326049671 container create 6b5773183db5d737730fb3d9d0bda46a907b3b3ec1930a948f0d0c9316b35d13 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_volhard, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 06 09:47:42 compute-0 python3[127135]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Dec 06 09:47:42 compute-0 sudo[127133]: pam_unix(sudo:session): session closed for user root
Dec 06 09:47:42 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 09:47:42 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 09:47:42 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:47:42 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:47:42 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 09:47:42 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 09:47:42 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 09:47:42 compute-0 systemd[1]: Started libpod-conmon-6b5773183db5d737730fb3d9d0bda46a907b3b3ec1930a948f0d0c9316b35d13.scope.
Dec 06 09:47:42 compute-0 systemd[1]: Started libcrun container.
Dec 06 09:47:42 compute-0 sudo[127290]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gbravnndhieihrisbrzjkmhxfhgzeeqh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014462.2704978-863-82552929553635/AnsiballZ_stat.py'
Dec 06 09:47:42 compute-0 sudo[127290]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:47:42 compute-0 python3.9[127292]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:47:42 compute-0 sudo[127290]: pam_unix(sudo:session): session closed for user root
Dec 06 09:47:42 compute-0 podman[127087]: 2025-12-06 09:47:42.843933677 +0000 UTC m=+1.178276096 container init 6b5773183db5d737730fb3d9d0bda46a907b3b3ec1930a948f0d0c9316b35d13 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_volhard, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 09:47:42 compute-0 podman[127087]: 2025-12-06 09:47:42.852902755 +0000 UTC m=+1.187245154 container start 6b5773183db5d737730fb3d9d0bda46a907b3b3ec1930a948f0d0c9316b35d13 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_volhard, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 09:47:42 compute-0 tender_volhard[127162]: 167 167
Dec 06 09:47:42 compute-0 systemd[1]: libpod-6b5773183db5d737730fb3d9d0bda46a907b3b3ec1930a948f0d0c9316b35d13.scope: Deactivated successfully.
Dec 06 09:47:42 compute-0 podman[127087]: 2025-12-06 09:47:42.962415743 +0000 UTC m=+1.296758142 container attach 6b5773183db5d737730fb3d9d0bda46a907b3b3ec1930a948f0d0c9316b35d13 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_volhard, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Dec 06 09:47:42 compute-0 podman[127087]: 2025-12-06 09:47:42.962880666 +0000 UTC m=+1.297223065 container died 6b5773183db5d737730fb3d9d0bda46a907b3b3ec1930a948f0d0c9316b35d13 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_volhard, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Dec 06 09:47:43 compute-0 sudo[127381]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zldiajzwbthbplippyxqupqfwhfveotp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014462.2704978-863-82552929553635/AnsiballZ_file.py'
Dec 06 09:47:43 compute-0 sudo[127381]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:47:43 compute-0 python3.9[127383]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:47:43 compute-0 sudo[127381]: pam_unix(sudo:session): session closed for user root
Dec 06 09:47:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-7463cabeea252a08593052d1e73817ccc6d0aa370dbaee0066e4e051a28fb62e-merged.mount: Deactivated successfully.
Dec 06 09:47:43 compute-0 ceph-mon[74327]: pgmap v186: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec 06 09:47:43 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:47:43 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:47:43 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:47:43.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:47:43 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:47:43 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 09:47:43 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:47:43.583 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 09:47:43 compute-0 podman[127087]: 2025-12-06 09:47:43.598834786 +0000 UTC m=+1.933177206 container remove 6b5773183db5d737730fb3d9d0bda46a907b3b3ec1930a948f0d0c9316b35d13 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_volhard, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, OSD_FLAVOR=default)
Dec 06 09:47:43 compute-0 systemd[1]: libpod-conmon-6b5773183db5d737730fb3d9d0bda46a907b3b3ec1930a948f0d0c9316b35d13.scope: Deactivated successfully.
Dec 06 09:47:43 compute-0 ceph-mon[74327]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 06 09:47:43 compute-0 ceph-mon[74327]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Cumulative writes: 2525 writes, 11K keys, 2524 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.04 MB/s
                                           Cumulative WAL: 2525 writes, 2524 syncs, 1.00 writes per sync, written: 0.02 GB, 0.04 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 2525 writes, 11K keys, 2524 commit groups, 1.0 writes per commit group, ingest: 23.56 MB, 0.04 MB/s
                                           Interval WAL: 2525 writes, 2524 syncs, 1.00 writes per sync, written: 0.02 GB, 0.04 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     78.5      0.24              0.04         4    0.061       0      0       0.0       0.0
                                             L6      1/0   14.11 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   2.0     88.0     78.1      0.50              0.12         3    0.165     12K   1351       0.0       0.0
                                            Sum      1/0   14.11 MB   0.0      0.0     0.0      0.0       0.1      0.0       0.0   3.0     59.1     78.2      0.74              0.15         7    0.105     12K   1351       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.0       0.0   3.0     59.7     78.9      0.73              0.15         6    0.122     12K   1351       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0     88.0     78.1      0.50              0.12         3    0.165     12K   1351       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     80.6      0.24              0.04         3    0.078       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      7.9      0.01              0.00         1    0.007       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.019, interval 0.019
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.06 GB write, 0.10 MB/s write, 0.04 GB read, 0.07 MB/s read, 0.7 seconds
                                           Interval compaction: 0.06 GB write, 0.10 MB/s write, 0.04 GB read, 0.07 MB/s read, 0.7 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55fd9a571350#2 capacity: 304.00 MB usage: 1.12 MB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 0 last_secs: 6.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(72,1006.20 KB,0.32323%) FilterBlock(8,43.55 KB,0.0139889%) IndexBlock(8,94.20 KB,0.0302616%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
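The monitor's embedded RocksDB dumps these stats every 600 s (the interval in the Uptime lines). Roughly 2.5k WAL writes and zero stalls over ten minutes is a quiet, healthy store. The same counters are exposed live through the mon admin socket; a sketch, using the daemon name from this host:

    ceph daemon mon.compute-0 perf dump rocksdb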
Dec 06 09:47:43 compute-0 podman[127496]: 2025-12-06 09:47:43.76349623 +0000 UTC m=+0.032453453 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 06 09:47:43 compute-0 sudo[127560]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bvjvqlpkljgbfzuziljhyoeicqvhfazx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014463.4972394-899-235247786141819/AnsiballZ_stat.py'
Dec 06 09:47:43 compute-0 sudo[127560]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:47:43 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v187: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 597 B/s wr, 2 op/s
Dec 06 09:47:43 compute-0 podman[127496]: 2025-12-06 09:47:43.892125507 +0000 UTC m=+0.161082690 container create b8f93afd75933ff0f4f39edf625a638acdd1ab8a1df15f664ecba529b480ea9f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_villani, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 09:47:44 compute-0 python3.9[127562]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:47:44 compute-0 sudo[127560]: pam_unix(sudo:session): session closed for user root
Dec 06 09:47:44 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 09:47:44 compute-0 systemd[1]: Started libpod-conmon-b8f93afd75933ff0f4f39edf625a638acdd1ab8a1df15f664ecba529b480ea9f.scope.
Dec 06 09:47:44 compute-0 sudo[127638]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dkpvyudwmyofupsecvczewiiripndpnt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014463.4972394-899-235247786141819/AnsiballZ_file.py'
Dec 06 09:47:44 compute-0 sudo[127638]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:47:44 compute-0 systemd[1]: Started libcrun container.
Dec 06 09:47:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/628d9da1577c2dfeee664934590a86b9e248a84d9d43466c135ddf9905109fb1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 09:47:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/628d9da1577c2dfeee664934590a86b9e248a84d9d43466c135ddf9905109fb1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 09:47:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/628d9da1577c2dfeee664934590a86b9e248a84d9d43466c135ddf9905109fb1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 09:47:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/628d9da1577c2dfeee664934590a86b9e248a84d9d43466c135ddf9905109fb1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 09:47:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/628d9da1577c2dfeee664934590a86b9e248a84d9d43466c135ddf9905109fb1/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 06 09:47:44 compute-0 python3.9[127644]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:47:44 compute-0 sudo[127638]: pam_unix(sudo:session): session closed for user root
Dec 06 09:47:45 compute-0 sudo[127796]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vfljmbzrdlbkltvgkqcxlsphiigsnlkk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014464.8508654-935-214698122359203/AnsiballZ_stat.py'
Dec 06 09:47:45 compute-0 sudo[127796]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:47:45 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:47:45 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:47:45 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:47:45.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:47:45 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:47:45 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:47:45 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:47:45.585 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:47:45 compute-0 podman[127496]: 2025-12-06 09:47:45.762873853 +0000 UTC m=+2.031831086 container init b8f93afd75933ff0f4f39edf625a638acdd1ab8a1df15f664ecba529b480ea9f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_villani, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 09:47:45 compute-0 podman[127496]: 2025-12-06 09:47:45.775117388 +0000 UTC m=+2.044074571 container start b8f93afd75933ff0f4f39edf625a638acdd1ab8a1df15f664ecba529b480ea9f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_villani, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 09:47:45 compute-0 python3.9[127798]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:47:45 compute-0 sudo[127796]: pam_unix(sudo:session): session closed for user root
Dec 06 09:47:45 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v188: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Dec 06 09:47:46 compute-0 ceph-mon[74327]: pgmap v187: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 597 B/s wr, 2 op/s
Dec 06 09:47:46 compute-0 nifty_villani[127642]: --> passed data devices: 0 physical, 1 LVM
Dec 06 09:47:46 compute-0 nifty_villani[127642]: --> All data devices are unavailable
Dec 06 09:47:46 compute-0 podman[127496]: 2025-12-06 09:47:46.146533153 +0000 UTC m=+2.415490336 container attach b8f93afd75933ff0f4f39edf625a638acdd1ab8a1df15f664ecba529b480ea9f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_villani, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec 06 09:47:46 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:47:46 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 06 09:47:46 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:47:46 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 06 09:47:46 compute-0 podman[127496]: 2025-12-06 09:47:46.180652229 +0000 UTC m=+2.449609422 container died b8f93afd75933ff0f4f39edf625a638acdd1ab8a1df15f664ecba529b480ea9f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_villani, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS)
Dec 06 09:47:46 compute-0 systemd[1]: libpod-b8f93afd75933ff0f4f39edf625a638acdd1ab8a1df15f664ecba529b480ea9f.scope: Deactivated successfully.
Dec 06 09:47:46 compute-0 sudo[127902]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-owxlcwypfuefpocnjrimnjqvyrvdqxys ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014464.8508654-935-214698122359203/AnsiballZ_file.py'
Dec 06 09:47:46 compute-0 sudo[127902]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:47:46 compute-0 sudo[127895]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 09:47:46 compute-0 sudo[127895]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:47:46 compute-0 sudo[127895]: pam_unix(sudo:session): session closed for user root
Dec 06 09:47:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-628d9da1577c2dfeee664934590a86b9e248a84d9d43466c135ddf9905109fb1-merged.mount: Deactivated successfully.
Dec 06 09:47:46 compute-0 podman[127496]: 2025-12-06 09:47:46.437723797 +0000 UTC m=+2.706681000 container remove b8f93afd75933ff0f4f39edf625a638acdd1ab8a1df15f664ecba529b480ea9f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_villani, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 06 09:47:46 compute-0 python3.9[127923]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:47:46 compute-0 systemd[1]: libpod-conmon-b8f93afd75933ff0f4f39edf625a638acdd1ab8a1df15f664ecba529b480ea9f.scope: Deactivated successfully.
Dec 06 09:47:46 compute-0 sudo[127902]: pam_unix(sudo:session): session closed for user root
Dec 06 09:47:46 compute-0 sudo[126929]: pam_unix(sudo:session): session closed for user root
Dec 06 09:47:46 compute-0 sudo[127933]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 09:47:46 compute-0 sudo[127933]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:47:46 compute-0 sudo[127933]: pam_unix(sudo:session): session closed for user root
Dec 06 09:47:46 compute-0 sudo[127976]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 5ecd3f74-dade-5fc4-92ce-8950ae424258 -- lvm list --format json
Dec 06 09:47:46 compute-0 sudo[127976]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:47:46 compute-0 sudo[128165]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-syohurplujxkwlkryypoijooeikijkno ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014466.6445897-971-31962373359714/AnsiballZ_stat.py'
Dec 06 09:47:46 compute-0 sudo[128165]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:47:46 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:47:46.977Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 06 09:47:47 compute-0 podman[128168]: 2025-12-06 09:47:47.015802511 +0000 UTC m=+0.048467518 container create 466d1754445ce036bc9c300f65139af9b39dd0d66df66768e736c9d3f7cbfcdf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_bartik, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325)
Dec 06 09:47:47 compute-0 podman[128168]: 2025-12-06 09:47:46.991364841 +0000 UTC m=+0.024029848 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 06 09:47:47 compute-0 python3.9[128167]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:47:47 compute-0 sudo[128165]: pam_unix(sudo:session): session closed for user root
Dec 06 09:47:47 compute-0 ceph-mon[74327]: pgmap v188: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Dec 06 09:47:47 compute-0 systemd[1]: Started libpod-conmon-466d1754445ce036bc9c300f65139af9b39dd0d66df66768e736c9d3f7cbfcdf.scope.
Dec 06 09:47:47 compute-0 systemd[1]: Started libcrun container.
Dec 06 09:47:47 compute-0 sudo[128264]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yhpwkubknbiwriambxzdwjksyvgrztwr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014466.6445897-971-31962373359714/AnsiballZ_file.py'
Dec 06 09:47:47 compute-0 sudo[128264]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:47:47 compute-0 sshd-session[71233]: Received disconnect from 38.102.83.98 port 56388:11: disconnected by user
Dec 06 09:47:47 compute-0 sshd-session[71233]: Disconnected from user zuul 38.102.83.98 port 56388
Dec 06 09:47:47 compute-0 sshd-session[71230]: pam_unix(sshd:session): session closed for user zuul
Dec 06 09:47:47 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:47:47 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:47:47 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:47:47.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:47:47 compute-0 systemd[1]: session-18.scope: Deactivated successfully.
Dec 06 09:47:47 compute-0 systemd[1]: session-18.scope: Consumed 1min 46.097s CPU time.
Dec 06 09:47:47 compute-0 systemd-logind[795]: Session 18 logged out. Waiting for processes to exit.
Dec 06 09:47:47 compute-0 systemd-logind[795]: Removed session 18.
Dec 06 09:47:47 compute-0 podman[128168]: 2025-12-06 09:47:47.461761395 +0000 UTC m=+0.494426442 container init 466d1754445ce036bc9c300f65139af9b39dd0d66df66768e736c9d3f7cbfcdf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_bartik, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 09:47:47 compute-0 podman[128168]: 2025-12-06 09:47:47.472232604 +0000 UTC m=+0.504897591 container start 466d1754445ce036bc9c300f65139af9b39dd0d66df66768e736c9d3f7cbfcdf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_bartik, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, ceph=True, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec 06 09:47:47 compute-0 podman[128168]: 2025-12-06 09:47:47.475765727 +0000 UTC m=+0.508430784 container attach 466d1754445ce036bc9c300f65139af9b39dd0d66df66768e736c9d3f7cbfcdf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_bartik, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 09:47:47 compute-0 relaxed_bartik[128196]: 167 167
Dec 06 09:47:47 compute-0 systemd[1]: libpod-466d1754445ce036bc9c300f65139af9b39dd0d66df66768e736c9d3f7cbfcdf.scope: Deactivated successfully.
Dec 06 09:47:47 compute-0 conmon[128196]: conmon 466d1754445ce036bc9c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-466d1754445ce036bc9c300f65139af9b39dd0d66df66768e736c9d3f7cbfcdf.scope/container/memory.events
Dec 06 09:47:47 compute-0 podman[128168]: 2025-12-06 09:47:47.484593591 +0000 UTC m=+0.517258578 container died 466d1754445ce036bc9c300f65139af9b39dd0d66df66768e736c9d3f7cbfcdf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_bartik, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_REF=squid)
Dec 06 09:47:47 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:47:47 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:47:47 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:47:47.588 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:47:47 compute-0 python3.9[128266]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:47:47 compute-0 sudo[128264]: pam_unix(sudo:session): session closed for user root
Dec 06 09:47:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-a801b94a3e271abf75f68e437b5bc99cc416f1baeabcc0a5cdc1d36bd508898b-merged.mount: Deactivated successfully.
Dec 06 09:47:47 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v189: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s rd, 938 B/s wr, 3 op/s
Dec 06 09:47:48 compute-0 podman[128168]: 2025-12-06 09:47:48.035219956 +0000 UTC m=+1.067884943 container remove 466d1754445ce036bc9c300f65139af9b39dd0d66df66768e736c9d3f7cbfcdf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_bartik, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Dec 06 09:47:48 compute-0 systemd[1]: libpod-conmon-466d1754445ce036bc9c300f65139af9b39dd0d66df66768e736c9d3f7cbfcdf.scope: Deactivated successfully.
Dec 06 09:47:48 compute-0 sudo[128451]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mtcrrjqwbltwakvkqxjkmctpkinvqgmg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014467.9237099-1007-154876715389620/AnsiballZ_stat.py'
Dec 06 09:47:48 compute-0 sudo[128451]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:47:48 compute-0 podman[128387]: 2025-12-06 09:47:48.20482113 +0000 UTC m=+0.043426763 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 06 09:47:48 compute-0 python3.9[128453]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:47:48 compute-0 sudo[128451]: pam_unix(sudo:session): session closed for user root
Dec 06 09:47:48 compute-0 sudo[128529]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vvdzkqbcaitqthpdctjeyiljcpndwxdz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014467.9237099-1007-154876715389620/AnsiballZ_file.py'
Dec 06 09:47:48 compute-0 sudo[128529]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:47:48 compute-0 podman[128387]: 2025-12-06 09:47:48.824657424 +0000 UTC m=+0.663263067 container create 7eb74bb044fd6546c1c6df2b3e6fd6997ff486b10359bdc918bc4cc6831dd3bb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_torvalds, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1)
Dec 06 09:47:48 compute-0 ceph-mon[74327]: pgmap v189: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s rd, 938 B/s wr, 3 op/s
Dec 06 09:47:48 compute-0 systemd[1]: Started libpod-conmon-7eb74bb044fd6546c1c6df2b3e6fd6997ff486b10359bdc918bc4cc6831dd3bb.scope.
Dec 06 09:47:48 compute-0 systemd[1]: Started libcrun container.
Dec 06 09:47:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68c58fcc2e4b9bc9808daacedb4ad67c0c4189a6d59019690927c7c85f27b93d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 09:47:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68c58fcc2e4b9bc9808daacedb4ad67c0c4189a6d59019690927c7c85f27b93d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 09:47:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68c58fcc2e4b9bc9808daacedb4ad67c0c4189a6d59019690927c7c85f27b93d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 09:47:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68c58fcc2e4b9bc9808daacedb4ad67c0c4189a6d59019690927c7c85f27b93d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 09:47:48 compute-0 podman[128387]: 2025-12-06 09:47:48.950213568 +0000 UTC m=+0.788819211 container init 7eb74bb044fd6546c1c6df2b3e6fd6997ff486b10359bdc918bc4cc6831dd3bb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_torvalds, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 09:47:48 compute-0 podman[128387]: 2025-12-06 09:47:48.969011167 +0000 UTC m=+0.807616780 container start 7eb74bb044fd6546c1c6df2b3e6fd6997ff486b10359bdc918bc4cc6831dd3bb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_torvalds, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 09:47:48 compute-0 podman[128387]: 2025-12-06 09:47:48.973635751 +0000 UTC m=+0.812241384 container attach 7eb74bb044fd6546c1c6df2b3e6fd6997ff486b10359bdc918bc4cc6831dd3bb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_torvalds, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 09:47:49 compute-0 python3.9[128531]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-rules.nft _original_basename=ruleset.j2 recurse=False state=file path=/etc/nftables/edpm-rules.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:47:49 compute-0 sudo[128529]: pam_unix(sudo:session): session closed for user root
Dec 06 09:47:49 compute-0 infallible_torvalds[128534]: {
Dec 06 09:47:49 compute-0 infallible_torvalds[128534]:     "1": [
Dec 06 09:47:49 compute-0 infallible_torvalds[128534]:         {
Dec 06 09:47:49 compute-0 infallible_torvalds[128534]:             "devices": [
Dec 06 09:47:49 compute-0 infallible_torvalds[128534]:                 "/dev/loop3"
Dec 06 09:47:49 compute-0 infallible_torvalds[128534]:             ],
Dec 06 09:47:49 compute-0 infallible_torvalds[128534]:             "lv_name": "ceph_lv0",
Dec 06 09:47:49 compute-0 infallible_torvalds[128534]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 09:47:49 compute-0 infallible_torvalds[128534]:             "lv_size": "21470642176",
Dec 06 09:47:49 compute-0 infallible_torvalds[128534]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=a55pcS-v3Vt-CJ4W-fqCV-Baze-9VlY-jG9prS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=5ecd3f74-dade-5fc4-92ce-8950ae424258,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=7899c4d8-edb4-4836-b838-c4aa702ad7af,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 06 09:47:49 compute-0 infallible_torvalds[128534]:             "lv_uuid": "a55pcS-v3Vt-CJ4W-fqCV-Baze-9VlY-jG9prS",
Dec 06 09:47:49 compute-0 infallible_torvalds[128534]:             "name": "ceph_lv0",
Dec 06 09:47:49 compute-0 infallible_torvalds[128534]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 09:47:49 compute-0 infallible_torvalds[128534]:             "tags": {
Dec 06 09:47:49 compute-0 infallible_torvalds[128534]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 06 09:47:49 compute-0 infallible_torvalds[128534]:                 "ceph.block_uuid": "a55pcS-v3Vt-CJ4W-fqCV-Baze-9VlY-jG9prS",
Dec 06 09:47:49 compute-0 infallible_torvalds[128534]:                 "ceph.cephx_lockbox_secret": "",
Dec 06 09:47:49 compute-0 infallible_torvalds[128534]:                 "ceph.cluster_fsid": "5ecd3f74-dade-5fc4-92ce-8950ae424258",
Dec 06 09:47:49 compute-0 infallible_torvalds[128534]:                 "ceph.cluster_name": "ceph",
Dec 06 09:47:49 compute-0 infallible_torvalds[128534]:                 "ceph.crush_device_class": "",
Dec 06 09:47:49 compute-0 infallible_torvalds[128534]:                 "ceph.encrypted": "0",
Dec 06 09:47:49 compute-0 infallible_torvalds[128534]:                 "ceph.osd_fsid": "7899c4d8-edb4-4836-b838-c4aa702ad7af",
Dec 06 09:47:49 compute-0 infallible_torvalds[128534]:                 "ceph.osd_id": "1",
Dec 06 09:47:49 compute-0 infallible_torvalds[128534]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 06 09:47:49 compute-0 infallible_torvalds[128534]:                 "ceph.type": "block",
Dec 06 09:47:49 compute-0 infallible_torvalds[128534]:                 "ceph.vdo": "0",
Dec 06 09:47:49 compute-0 infallible_torvalds[128534]:                 "ceph.with_tpm": "0"
Dec 06 09:47:49 compute-0 infallible_torvalds[128534]:             },
Dec 06 09:47:49 compute-0 infallible_torvalds[128534]:             "type": "block",
Dec 06 09:47:49 compute-0 infallible_torvalds[128534]:             "vg_name": "ceph_vg0"
Dec 06 09:47:49 compute-0 infallible_torvalds[128534]:         }
Dec 06 09:47:49 compute-0 infallible_torvalds[128534]:     ]
Dec 06 09:47:49 compute-0 infallible_torvalds[128534]: }
Dec 06 09:47:49 compute-0 systemd[1]: libpod-7eb74bb044fd6546c1c6df2b3e6fd6997ff486b10359bdc918bc4cc6831dd3bb.scope: Deactivated successfully.
Dec 06 09:47:49 compute-0 podman[128387]: 2025-12-06 09:47:49.306406709 +0000 UTC m=+1.145012312 container died 7eb74bb044fd6546c1c6df2b3e6fd6997ff486b10359bdc918bc4cc6831dd3bb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_torvalds, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Dec 06 09:47:49 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 09:47:49 compute-0 ceph-mon[74327]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #27. Immutable memtables: 0.
Dec 06 09:47:49 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-09:47:49.374080) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 06 09:47:49 compute-0 ceph-mon[74327]: rocksdb: [db/flush_job.cc:856] [default] [JOB 9] Flushing memtable with next log file: 27
Dec 06 09:47:49 compute-0 ceph-mon[74327]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765014469374318, "job": 9, "event": "flush_started", "num_memtables": 1, "num_entries": 1369, "num_deletes": 250, "total_data_size": 2553782, "memory_usage": 2594920, "flush_reason": "Manual Compaction"}
Dec 06 09:47:49 compute-0 ceph-mon[74327]: rocksdb: [db/flush_job.cc:885] [default] [JOB 9] Level-0 flush table #28: started
Dec 06 09:47:49 compute-0 ceph-mon[74327]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765014469391926, "cf_name": "default", "job": 9, "event": "table_file_creation", "file_number": 28, "file_size": 1471945, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 10944, "largest_seqno": 12312, "table_properties": {"data_size": 1467153, "index_size": 2188, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1541, "raw_key_size": 12146, "raw_average_key_size": 20, "raw_value_size": 1456834, "raw_average_value_size": 2407, "num_data_blocks": 97, "num_entries": 605, "num_filter_entries": 605, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765014324, "oldest_key_time": 1765014324, "file_creation_time": 1765014469, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "423e8366-3852-4d2b-aa53-87abab31aff3", "db_session_id": "4WBX5WA2U4DRQ0QUUFCR", "orig_file_number": 28, "seqno_to_time_mapping": "N/A"}}
Dec 06 09:47:49 compute-0 ceph-mon[74327]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 9] Flush lasted 17899 microseconds, and 5249 cpu microseconds.
Dec 06 09:47:49 compute-0 ceph-mon[74327]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 09:47:49 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:47:49 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 09:47:49 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:47:49.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 09:47:49 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-09:47:49.391979) [db/flush_job.cc:967] [default] [JOB 9] Level-0 flush table #28: 1471945 bytes OK
Dec 06 09:47:49 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-09:47:49.392000) [db/memtable_list.cc:519] [default] Level-0 commit table #28 started
Dec 06 09:47:49 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-09:47:49.476564) [db/memtable_list.cc:722] [default] Level-0 commit table #28: memtable #1 done
Dec 06 09:47:49 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-09:47:49.476614) EVENT_LOG_v1 {"time_micros": 1765014469476604, "job": 9, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 06 09:47:49 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-09:47:49.476636) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 06 09:47:49 compute-0 ceph-mon[74327]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 9] Try to delete WAL files size 2547897, prev total WAL file size 2547897, number of live WAL files 2.
Dec 06 09:47:49 compute-0 ceph-mon[74327]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000024.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 09:47:49 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-09:47:49.477675) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740030' seq:72057594037927935, type:22 .. '6D67727374617400323531' seq:0, type:0; will stop at (end)
Dec 06 09:47:49 compute-0 ceph-mon[74327]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 10] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 06 09:47:49 compute-0 ceph-mon[74327]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 9 Base level 0, inputs: [28(1437KB)], [26(14MB)]
Dec 06 09:47:49 compute-0 ceph-mon[74327]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765014469477749, "job": 10, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [28], "files_L6": [26], "score": -1, "input_data_size": 16265065, "oldest_snapshot_seqno": -1}
Dec 06 09:47:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-68c58fcc2e4b9bc9808daacedb4ad67c0c4189a6d59019690927c7c85f27b93d-merged.mount: Deactivated successfully.
Dec 06 09:47:49 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:47:49 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:47:49 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:47:49.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:47:49 compute-0 ceph-mon[74327]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 10] Generated table #29: 4278 keys, 14215627 bytes, temperature: kUnknown
Dec 06 09:47:49 compute-0 ceph-mon[74327]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765014469686828, "cf_name": "default", "job": 10, "event": "table_file_creation", "file_number": 29, "file_size": 14215627, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 14182730, "index_size": 21075, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10757, "raw_key_size": 108743, "raw_average_key_size": 25, "raw_value_size": 14100334, "raw_average_value_size": 3296, "num_data_blocks": 902, "num_entries": 4278, "num_filter_entries": 4278, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765013861, "oldest_key_time": 0, "file_creation_time": 1765014469, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "423e8366-3852-4d2b-aa53-87abab31aff3", "db_session_id": "4WBX5WA2U4DRQ0QUUFCR", "orig_file_number": 29, "seqno_to_time_mapping": "N/A"}}
Dec 06 09:47:49 compute-0 ceph-mon[74327]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 09:47:49 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-09:47:49.687860) [db/compaction/compaction_job.cc:1663] [default] [JOB 10] Compacted 1@0 + 1@6 files to L6 => 14215627 bytes
Dec 06 09:47:49 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-09:47:49.689938) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 77.8 rd, 68.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.4, 14.1 +0.0 blob) out(13.6 +0.0 blob), read-write-amplify(20.7) write-amplify(9.7) OK, records in: 4727, records dropped: 449 output_compression: NoCompression
Dec 06 09:47:49 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-09:47:49.690376) EVENT_LOG_v1 {"time_micros": 1765014469690165, "job": 10, "event": "compaction_finished", "compaction_time_micros": 209146, "compaction_time_cpu_micros": 28449, "output_level": 6, "num_output_files": 1, "total_output_size": 14215627, "num_input_records": 4727, "num_output_records": 4278, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 06 09:47:49 compute-0 ceph-mon[74327]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000028.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 09:47:49 compute-0 ceph-mon[74327]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765014469692463, "job": 10, "event": "table_file_deletion", "file_number": 28}
Dec 06 09:47:49 compute-0 ceph-mon[74327]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000026.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 09:47:49 compute-0 ceph-mon[74327]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765014469696756, "job": 10, "event": "table_file_deletion", "file_number": 26}
Dec 06 09:47:49 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-09:47:49.477579) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 09:47:49 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-09:47:49.696946) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 09:47:49 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-09:47:49.696953) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 09:47:49 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-09:47:49.696955) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 09:47:49 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-09:47:49.696957) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 09:47:49 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-09:47:49.696959) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 09:47:49 compute-0 sudo[128705]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vvxdnvgppvadwpvzhvbjnqqxfvmchesf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014469.456179-1046-83473325430783/AnsiballZ_command.py'
Dec 06 09:47:49 compute-0 sudo[128705]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:47:49 compute-0 podman[128387]: 2025-12-06 09:47:49.815944902 +0000 UTC m=+1.654550535 container remove 7eb74bb044fd6546c1c6df2b3e6fd6997ff486b10359bdc918bc4cc6831dd3bb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_torvalds, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, io.buildah.version=1.40.1)
Dec 06 09:47:49 compute-0 systemd[1]: libpod-conmon-7eb74bb044fd6546c1c6df2b3e6fd6997ff486b10359bdc918bc4cc6831dd3bb.scope: Deactivated successfully.
Dec 06 09:47:49 compute-0 sudo[127976]: pam_unix(sudo:session): session closed for user root
Dec 06 09:47:49 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v190: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s rd, 938 B/s wr, 3 op/s
Dec 06 09:47:49 compute-0 sudo[128708]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 09:47:49 compute-0 sudo[128708]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:47:49 compute-0 sudo[128708]: pam_unix(sudo:session): session closed for user root
Dec 06 09:47:49 compute-0 sudo[128733]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 5ecd3f74-dade-5fc4-92ce-8950ae424258 -- raw list --format json
Dec 06 09:47:49 compute-0 sudo[128733]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:47:50 compute-0 python3.9[128707]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 09:47:50 compute-0 sudo[128705]: pam_unix(sudo:session): session closed for user root
Dec 06 09:47:50 compute-0 podman[128877]: 2025-12-06 09:47:50.332615005 +0000 UTC m=+0.049064465 container create 8387ca743f1ee6ef07897fc7d66ecc08c6fa9375c1f9152c73f54b95d55816b6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_matsumoto, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 09:47:50 compute-0 systemd[1]: Started libpod-conmon-8387ca743f1ee6ef07897fc7d66ecc08c6fa9375c1f9152c73f54b95d55816b6.scope.
Dec 06 09:47:50 compute-0 podman[128877]: 2025-12-06 09:47:50.304925819 +0000 UTC m=+0.021375289 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 06 09:47:50 compute-0 systemd[1]: Started libcrun container.
Dec 06 09:47:50 compute-0 podman[128877]: 2025-12-06 09:47:50.424635409 +0000 UTC m=+0.141084879 container init 8387ca743f1ee6ef07897fc7d66ecc08c6fa9375c1f9152c73f54b95d55816b6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_matsumoto, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 09:47:50 compute-0 podman[128877]: 2025-12-06 09:47:50.433561126 +0000 UTC m=+0.150010586 container start 8387ca743f1ee6ef07897fc7d66ecc08c6fa9375c1f9152c73f54b95d55816b6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_matsumoto, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec 06 09:47:50 compute-0 podman[128877]: 2025-12-06 09:47:50.437217442 +0000 UTC m=+0.153667293 container attach 8387ca743f1ee6ef07897fc7d66ecc08c6fa9375c1f9152c73f54b95d55816b6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_matsumoto, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 09:47:50 compute-0 quizzical_matsumoto[128894]: 167 167
Dec 06 09:47:50 compute-0 systemd[1]: libpod-8387ca743f1ee6ef07897fc7d66ecc08c6fa9375c1f9152c73f54b95d55816b6.scope: Deactivated successfully.
Dec 06 09:47:50 compute-0 conmon[128894]: conmon 8387ca743f1ee6ef0789 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-8387ca743f1ee6ef07897fc7d66ecc08c6fa9375c1f9152c73f54b95d55816b6.scope/container/memory.events
Dec 06 09:47:50 compute-0 podman[128877]: 2025-12-06 09:47:50.440923082 +0000 UTC m=+0.157372532 container died 8387ca743f1ee6ef07897fc7d66ecc08c6fa9375c1f9152c73f54b95d55816b6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_matsumoto, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 09:47:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-c4fb1eef263ac1a726038f5573905885962a4783c6a5f65d6d1908503c2c51dd-merged.mount: Deactivated successfully.
Dec 06 09:47:50 compute-0 podman[128877]: 2025-12-06 09:47:50.485612649 +0000 UTC m=+0.202062109 container remove 8387ca743f1ee6ef07897fc7d66ecc08c6fa9375c1f9152c73f54b95d55816b6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_matsumoto, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 06 09:47:50 compute-0 systemd[1]: libpod-conmon-8387ca743f1ee6ef07897fc7d66ecc08c6fa9375c1f9152c73f54b95d55816b6.scope: Deactivated successfully.
Dec 06 09:47:50 compute-0 sudo[129003]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bgyepnkmxtmmcguilduxzkwbbqexxpeo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014470.2014246-1070-40406391232510/AnsiballZ_blockinfile.py'
Dec 06 09:47:50 compute-0 podman[128971]: 2025-12-06 09:47:50.678544792 +0000 UTC m=+0.038102723 container create 5db534dbb78b3cef93a6e34d74730b1fb327fbdc1ecb736d1895d3be46343578 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_mestorf, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 06 09:47:50 compute-0 sudo[129003]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:47:50 compute-0 systemd[1]: Started libpod-conmon-5db534dbb78b3cef93a6e34d74730b1fb327fbdc1ecb736d1895d3be46343578.scope.
Dec 06 09:47:50 compute-0 systemd[1]: Started libcrun container.
Dec 06 09:47:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b7947471b611962366e894159bd819bdc6a97f17fc343d785926d6b0ef2a943d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 09:47:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b7947471b611962366e894159bd819bdc6a97f17fc343d785926d6b0ef2a943d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 09:47:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b7947471b611962366e894159bd819bdc6a97f17fc343d785926d6b0ef2a943d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 09:47:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b7947471b611962366e894159bd819bdc6a97f17fc343d785926d6b0ef2a943d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 09:47:50 compute-0 podman[128971]: 2025-12-06 09:47:50.660823422 +0000 UTC m=+0.020381383 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 06 09:47:50 compute-0 podman[128971]: 2025-12-06 09:47:50.764460854 +0000 UTC m=+0.124018795 container init 5db534dbb78b3cef93a6e34d74730b1fb327fbdc1ecb736d1895d3be46343578 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_mestorf, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 09:47:50 compute-0 podman[128971]: 2025-12-06 09:47:50.774926902 +0000 UTC m=+0.134484833 container start 5db534dbb78b3cef93a6e34d74730b1fb327fbdc1ecb736d1895d3be46343578 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_mestorf, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, ceph=True, io.buildah.version=1.40.1)
Dec 06 09:47:50 compute-0 podman[128971]: 2025-12-06 09:47:50.777803249 +0000 UTC m=+0.137361180 container attach 5db534dbb78b3cef93a6e34d74730b1fb327fbdc1ecb736d1895d3be46343578 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_mestorf, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325)
Dec 06 09:47:50 compute-0 python3.9[129007]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                             include "/etc/nftables/edpm-chains.nft"
                                             include "/etc/nftables/edpm-rules.nft"
                                             include "/etc/nftables/edpm-jumps.nft"
                                              path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:47:50 compute-0 sudo[129003]: pam_unix(sudo:session): session closed for user root
Dec 06 09:47:50 compute-0 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 06 09:47:50 compute-0 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:09:47:50] "GET /metrics HTTP/1.1" 200 48255 "" "Prometheus/2.51.0"
Dec 06 09:47:50 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:09:47:50] "GET /metrics HTTP/1.1" 200 48255 "" "Prometheus/2.51.0"
Dec 06 09:47:51 compute-0 ceph-mon[74327]: pgmap v190: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s rd, 938 B/s wr, 3 op/s
Dec 06 09:47:51 compute-0 lvm[129210]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 06 09:47:51 compute-0 lvm[129210]: VG ceph_vg0 finished
Dec 06 09:47:51 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:47:51 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:47:51 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:47:51.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:47:51 compute-0 sudo[129240]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fteyfwbgpmqvczrywclhhqzopiefmalq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014471.2215865-1097-279889235590498/AnsiballZ_file.py'
Dec 06 09:47:51 compute-0 sudo[129240]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:47:51 compute-0 bold_mestorf[129010]: {}
Dec 06 09:47:51 compute-0 systemd[1]: libpod-5db534dbb78b3cef93a6e34d74730b1fb327fbdc1ecb736d1895d3be46343578.scope: Deactivated successfully.
Dec 06 09:47:51 compute-0 podman[128971]: 2025-12-06 09:47:51.515974424 +0000 UTC m=+0.875532355 container died 5db534dbb78b3cef93a6e34d74730b1fb327fbdc1ecb736d1895d3be46343578 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_mestorf, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 06 09:47:51 compute-0 systemd[1]: libpod-5db534dbb78b3cef93a6e34d74730b1fb327fbdc1ecb736d1895d3be46343578.scope: Consumed 1.107s CPU time.
Dec 06 09:47:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-b7947471b611962366e894159bd819bdc6a97f17fc343d785926d6b0ef2a943d-merged.mount: Deactivated successfully.
Dec 06 09:47:51 compute-0 podman[128971]: 2025-12-06 09:47:51.562595543 +0000 UTC m=+0.922153474 container remove 5db534dbb78b3cef93a6e34d74730b1fb327fbdc1ecb736d1895d3be46343578 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_mestorf, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325)
Dec 06 09:47:51 compute-0 systemd[1]: libpod-conmon-5db534dbb78b3cef93a6e34d74730b1fb327fbdc1ecb736d1895d3be46343578.scope: Deactivated successfully.
Dec 06 09:47:51 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:47:51 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 09:47:51 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:47:51.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 09:47:51 compute-0 sudo[128733]: pam_unix(sudo:session): session closed for user root
Dec 06 09:47:51 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 06 09:47:51 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:47:51 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 06 09:47:51 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:47:51 compute-0 sudo[129257]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 09:47:51 compute-0 sudo[129257]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:47:51 compute-0 sudo[129257]: pam_unix(sudo:session): session closed for user root
Dec 06 09:47:51 compute-0 python3.9[129242]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:47:51 compute-0 sudo[129240]: pam_unix(sudo:session): session closed for user root
Dec 06 09:47:51 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v191: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s rd, 938 B/s wr, 3 op/s
Dec 06 09:47:52 compute-0 sudo[129431]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yqcoxqsybkegxiwgtsrbbnyfegssoggk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014471.8608623-1097-110053555636319/AnsiballZ_file.py'
Dec 06 09:47:52 compute-0 sudo[129431]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:47:52 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:47:52 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Dec 06 09:47:52 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:47:52 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Dec 06 09:47:52 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:47:52 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Dec 06 09:47:52 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:47:52 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Dec 06 09:47:52 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:47:52 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Dec 06 09:47:52 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:47:52 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Dec 06 09:47:52 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:47:52 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Dec 06 09:47:52 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:47:52 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec 06 09:47:52 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:47:52 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec 06 09:47:52 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:47:52 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec 06 09:47:52 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:47:52 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Dec 06 09:47:52 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:47:52 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec 06 09:47:52 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:47:52 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Dec 06 09:47:52 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:47:52 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Dec 06 09:47:52 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:47:52 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Dec 06 09:47:52 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:47:52 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Dec 06 09:47:52 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:47:52 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Dec 06 09:47:52 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:47:52 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Dec 06 09:47:52 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:47:52 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Dec 06 09:47:52 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:47:52 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Dec 06 09:47:52 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:47:52 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Dec 06 09:47:52 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:47:52 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Dec 06 09:47:52 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:47:52 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Dec 06 09:47:52 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:47:52 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Dec 06 09:47:52 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:47:52 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Dec 06 09:47:52 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:47:52 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Dec 06 09:47:52 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:47:52 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Dec 06 09:47:52 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:47:52 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd034000df0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:47:52 compute-0 python3.9[129433]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:47:52 compute-0 sudo[129431]: pam_unix(sudo:session): session closed for user root
Dec 06 09:47:52 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:47:52 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:47:52 compute-0 ceph-mon[74327]: pgmap v191: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s rd, 938 B/s wr, 3 op/s
Dec 06 09:47:53 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:47:53 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd028001c00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:47:53 compute-0 sudo[129599]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hlaoctfvvqzdzernrvodczogmhbmacgl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014472.662875-1142-3589613507737/AnsiballZ_mount.py'
Dec 06 09:47:53 compute-0 sudo[129599]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:47:53 compute-0 python3.9[129601]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Dec 06 09:47:53 compute-0 sudo[129599]: pam_unix(sudo:session): session closed for user root
Dec 06 09:47:53 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:47:53 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd010000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:47:53 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:47:53 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:47:53 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:47:53.454 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:47:53 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:47:53 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:47:53 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:47:53.595 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:47:53 compute-0 sudo[129752]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iibzyvoyywclvnyzkczmmpcpkmehmtyx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014473.4780777-1142-261841270293049/AnsiballZ_mount.py'
Dec 06 09:47:53 compute-0 sudo[129752]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:47:53 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v192: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.9 KiB/s rd, 1023 B/s wr, 4 op/s
Dec 06 09:47:53 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 06 09:47:53 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 09:47:53 compute-0 python3.9[129754]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Dec 06 09:47:53 compute-0 sudo[129752]: pam_unix(sudo:session): session closed for user root
Dec 06 09:47:53 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 09:47:53 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 09:47:53 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 09:47:53 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 09:47:53 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 09:47:53 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 09:47:54 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 09:47:54 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-haproxy-nfs-cephfs-compute-0-fzuvue[96127]: [WARNING] 339/094754 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Dec 06 09:47:54 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:47:54 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd00c000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:47:54 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 09:47:54 compute-0 sshd-session[121668]: Connection closed by 192.168.122.30 port 52306
Dec 06 09:47:54 compute-0 sshd-session[121664]: pam_unix(sshd:session): session closed for user zuul
Dec 06 09:47:54 compute-0 systemd[1]: session-44.scope: Deactivated successfully.
Dec 06 09:47:54 compute-0 systemd[1]: session-44.scope: Consumed 32.293s CPU time.
Dec 06 09:47:54 compute-0 systemd-logind[795]: Session 44 logged out. Waiting for processes to exit.
Dec 06 09:47:54 compute-0 systemd-logind[795]: Removed session 44.
Dec 06 09:47:55 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:47:55 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd018000fa0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:47:55 compute-0 ceph-mon[74327]: pgmap v192: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.9 KiB/s rd, 1023 B/s wr, 4 op/s
Dec 06 09:47:55 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:47:55 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd018000fa0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:47:55 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:47:55 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 09:47:55 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:47:55.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 09:47:55 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:47:55 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 09:47:55 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:47:55.597 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 09:47:55 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v193: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 426 B/s wr, 2 op/s
Dec 06 09:47:56 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:47:56 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0100016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:47:56 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:47:56.980Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec 06 09:47:56 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:47:56.981Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 06 09:47:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:47:57 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd00c0016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:47:57 compute-0 ceph-mon[74327]: pgmap v193: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 426 B/s wr, 2 op/s
Dec 06 09:47:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:47:57 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd028001c00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:47:57 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:47:57 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:47:57 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:47:57.458 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:47:57 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:47:57 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:47:57 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:47:57.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:47:57 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v194: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 426 B/s wr, 2 op/s
Dec 06 09:47:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:47:58 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd018001f60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:47:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:47:59 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0100016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:47:59 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 09:47:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:47:59 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd00c0016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:47:59 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:47:59 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:47:59 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:47:59.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:47:59 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:47:59 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 09:47:59 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:47:59.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 09:47:59 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v195: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Dec 06 09:48:00 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:48:00 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd028001c00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:48:00 compute-0 sshd-session[129785]: Accepted publickey for zuul from 192.168.122.30 port 54840 ssh2: ECDSA SHA256:r1j7aLsKAM+XxDNbzEU5vWGpGNCOaIBwc7FZdATPttA
Dec 06 09:48:00 compute-0 systemd-logind[795]: New session 45 of user zuul.
Dec 06 09:48:00 compute-0 systemd[1]: Started Session 45 of User zuul.
Dec 06 09:48:00 compute-0 sshd-session[129785]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 06 09:48:00 compute-0 ceph-mon[74327]: pgmap v194: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 426 B/s wr, 2 op/s
Dec 06 09:48:00 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:09:48:00] "GET /metrics HTTP/1.1" 200 48262 "" "Prometheus/2.51.0"
Dec 06 09:48:00 compute-0 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:09:48:00] "GET /metrics HTTP/1.1" 200 48262 "" "Prometheus/2.51.0"
Dec 06 09:48:01 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:48:01 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd018001f60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:48:01 compute-0 sudo[129940]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-trvbdnoxpjdkeklcinalqvvxsltiqtwv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014480.7719038-18-68335392881731/AnsiballZ_tempfile.py'
Dec 06 09:48:01 compute-0 sudo[129940]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:48:01 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:48:01 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd018001f60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:48:01 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:48:01 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:48:01 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:48:01.463 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:48:01 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:48:01 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 09:48:01 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:48:01.604 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 09:48:01 compute-0 python3.9[129942]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Dec 06 09:48:01 compute-0 sudo[129940]: pam_unix(sudo:session): session closed for user root
Dec 06 09:48:01 compute-0 ceph-mon[74327]: pgmap v195: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Dec 06 09:48:01 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v196: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Dec 06 09:48:02 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:48:02 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd018001f60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:48:02 compute-0 sudo[130092]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vbezyxrukpqioqmnqvpbroknaaszdkko ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014481.7950284-54-163052446802132/AnsiballZ_stat.py'
Dec 06 09:48:02 compute-0 sudo[130092]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:48:02 compute-0 python3.9[130094]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 06 09:48:02 compute-0 sudo[130092]: pam_unix(sudo:session): session closed for user root
Dec 06 09:48:02 compute-0 ceph-mon[74327]: pgmap v196: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Dec 06 09:48:03 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:48:03 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd028001c00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:48:03 compute-0 sudo[130247]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qeljfskyeuahrnksfunqrskhfwzuapyo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014482.7041426-78-196233600443990/AnsiballZ_slurp.py'
Dec 06 09:48:03 compute-0 sudo[130247]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:48:03 compute-0 python3.9[130249]: ansible-ansible.builtin.slurp Invoked with src=/etc/ssh/ssh_known_hosts
Dec 06 09:48:03 compute-0 sudo[130247]: pam_unix(sudo:session): session closed for user root
Dec 06 09:48:03 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:48:03 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd00c0016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:48:03 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:48:03 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 09:48:03 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:48:03.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 09:48:03 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:48:03 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:48:03 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:48:03.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:48:03 compute-0 sudo[130400]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gmxmcvgopczsueyitvnthdzkiwbforee ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014483.4988-102-145352204778336/AnsiballZ_stat.py'
Dec 06 09:48:03 compute-0 sudo[130400]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:48:03 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v197: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Dec 06 09:48:03 compute-0 python3.9[130402]: ansible-ansible.legacy.stat Invoked with path=/tmp/ansible.hbyixhmr follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:48:03 compute-0 sudo[130400]: pam_unix(sudo:session): session closed for user root
Dec 06 09:48:04 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:48:04 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd010002720 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:48:04 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 09:48:04 compute-0 sudo[130525]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zkborryadhqudqfqaupifzjfubwgvnga ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014483.4988-102-145352204778336/AnsiballZ_copy.py'
Dec 06 09:48:04 compute-0 sudo[130525]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:48:04 compute-0 python3.9[130527]: ansible-ansible.legacy.copy Invoked with dest=/tmp/ansible.hbyixhmr mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765014483.4988-102-145352204778336/.source.hbyixhmr _original_basename=.8ex5zv4n follow=False checksum=741dc69011fb61b699872c865e152b9968457717 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:48:04 compute-0 sudo[130525]: pam_unix(sudo:session): session closed for user root
Dec 06 09:48:04 compute-0 ceph-mon[74327]: pgmap v197: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Dec 06 09:48:05 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:48:05 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd018001f60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:48:05 compute-0 sudo[130679]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dgvrnbwbinbcimblhvhezojukftudqmc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014484.7896683-147-160896669694665/AnsiballZ_setup.py'
Dec 06 09:48:05 compute-0 sudo[130679]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:48:05 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:48:05 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd028001c00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:48:05 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:48:05 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:48:05 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:48:05.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:48:05 compute-0 systemd[1]: systemd-timedated.service: Deactivated successfully.
Dec 06 09:48:05 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:48:05 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:48:05 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:48:05.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:48:05 compute-0 python3.9[130681]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 06 09:48:05 compute-0 sudo[130679]: pam_unix(sudo:session): session closed for user root
Dec 06 09:48:05 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v198: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:48:06 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:48:06 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd028001c00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:48:06 compute-0 sudo[130781]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 09:48:06 compute-0 sudo[130781]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:48:06 compute-0 sudo[130781]: pam_unix(sudo:session): session closed for user root
Dec 06 09:48:06 compute-0 sudo[130858]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sqkdatexwvbosraodwtibhvpcutlgvgs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014486.1200032-172-77060414229835/AnsiballZ_blockinfile.py'
Dec 06 09:48:06 compute-0 sudo[130858]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:48:06 compute-0 python3.9[130860]: ansible-ansible.builtin.blockinfile Invoked with block=compute-1.ctlplane.example.com,192.168.122.101,compute-1* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDneZurSARwLaZA1xEymzXlvVAPvP8u0PCrqXuMYD5ewImDDChRITnk4XHKT/DUfrSJf9/7oJsddEbLRjhCtedqrMZsCkWz1BxtCmPBuvz2LfFhEn27TjqYLctOVGigQGsj6ILvPOzzLiapd93yApWDmH6P0un/ltmdM0iZLygNpzG3HLF8STBXzlo/8slci69Em7XppcrOpl1TS7DaVlpNcRQvo9pFuIrbMD9g0DOdMwk5YCH6g7OzGWqq0gt0YUOztmsqxWHKav3E0SXAD/vkgRc/1ZCNGFNSvf0dIgimCF3xlNWrppnvNgQ1BRqiQ7RArlOp1bVg0Ugdce6f4TIrq36Ois2U5+/myF5WQ7l9hRMRvoP64hSSsRAIDobTI/zMStUP3iZPFngxDxwQtpydHfFGywBL9811c42U7JsGxE8890uOIDk/oOkyhSH6KHQCPFjmKBJ98nT01lgnXyFSNOqds6QOYBasUWNFWd2wS7YpTheGlVVM8bk/gB4K2L0=
                                             compute-1.ctlplane.example.com,192.168.122.101,compute-1* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOMkn8zp09tRuEaH/bUoP0rYj+dziM1KcqMKxOgM9K1U
                                             compute-1.ctlplane.example.com,192.168.122.101,compute-1* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBCrMdvJJYP0cflC7RDFsxwr66nSp9R7QU726CAfJcKLw6vHh8Z9Lw5wLH0kiaSpsb6SAPffloplHEDiwTOkghOc=
                                             compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDAiB67qk/R3IfGpcAH1Ojopc8KX94De+Kxs31cKQLD04X+4QRXPRdMxU85LOhN58eKoHaBi8cgqk7+dvRypGD5vbtbRN9r0VN7tGwiSQTlVFbEuhn0AEbnRwNAMWEEMHO9kEjufP4N2zEEhtQBXy9oO2tMX3+BX4Z3YZZMQyZUgohdBHp2VCul9VdRuo0oHSr8HHm0nN61dMjalnThmgkGAu5hG8qhkWT4i9hroSKBsR5kVBUFTqdXekYkVy4YIYfM2lBXiMOFHtvr1a+KOyIfgWMb7GBPW7oKqtzCfVgSbGaUhSvGzs1OWt3U/PjjapIlmDnwD5ukzVxWV5ldh0vA48tXh5R1wqAoN5/Y/RiAKaY2kd/fvtkhvVDGZluXOz5jJ02IFHm+v4dP3Ig8YOuS5BEkWFuJHkblW0t/+4siTHWwmGEuvUI6y8Gb2pGcBKsWCJtLePYzT09IAmrjwO0jAgbWy0nvCZ+SKlbBBrXP6OgNgMkA+GH9iGOl6FOuRok=
                                             compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGYNj3LmNvR0emoQHuuy9NKXPivs/dznunVy8GExnJl8
                                             compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJhKmGSvg8FMw16qKPzk6Pyj+OHkN3bmk20mts1PdCRcNRnn9sT1DgI6U8Aze1tjGPujT4eDL+Y9r/hsrfM4qDc=
                                             compute-2.ctlplane.example.com,192.168.122.102,compute-2* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCtvqYC0W0zPSX/plyJvm0q1VGDScYTNlcCdllukOe81JRfU3GhVusPZOX0xRSaLP/lmXtfqWcbBRCkLsmFrAo2EHn1CMqMr5WkhY4+rgApF+MGLDOUo57tlKZLPIwdL0SSY/Qv8lBfrqr7LUDZ7fTTTbqTzim/bncxg/u0KxSWBdvjfmYi13SwO65wDkFqSVYa3h8DNij6cRRjQ0fJuJ9Da860hmMnqo9GJMU6dq3zMXXn3YfuF4E4M0UQdlWmVW4EwBTzsfA1XYbSpW7VdRJw6esB4vZ9/Succj+XZiANoDqL9gXSEjNXVVWVbL/7aGJJF9LLQ3VVxmHdbYs1NcTI6Yy9d61zDJHnK/nlYHMhmAHxiDsZEpv0xF72LLzaI86xxvnbx4eUpnyW6LnKiUCYUAUrWIMpLiIbWUxeIoYmj9rqLhwlo5kCy7WdCYYEMTtGI53oIyU0EbXf/r4WAuzmqpVRPyc2Sd5tYD4aXh1JZLUcZy+NLR0Y4SA8RflKFcs=
                                             compute-2.ctlplane.example.com,192.168.122.102,compute-2* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFDJYF6pUvFgGUbY2QEOHAq7ZEhRQJUqPTVPOuTyb476
                                             compute-2.ctlplane.example.com,192.168.122.102,compute-2* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPJ19afQPeSMtr3O9L1fe5+bNzTAsOOCA5fLihUdryDYc29KKD+0XABHKIvqeefcCsIBjZRA//9OzCUftfvXK9A=
                                              create=True mode=0644 path=/tmp/ansible.hbyixhmr state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:48:06 compute-0 sudo[130858]: pam_unix(sudo:session): session closed for user root
Dec 06 09:48:06 compute-0 ceph-mon[74327]: pgmap v198: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:48:06 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:48:06.981Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 06 09:48:07 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:48:07 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd010002720 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:48:07 compute-0 sudo[131012]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-luxfmuqhnwkuzdegbkasvxgwlaiszawa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014486.9073696-196-98281787212943/AnsiballZ_command.py'
Dec 06 09:48:07 compute-0 sudo[131012]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:48:07 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:48:07 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd018003b20 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:48:07 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:48:07 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 09:48:07 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:48:07.472 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 09:48:07 compute-0 python3.9[131014]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.hbyixhmr' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 09:48:07 compute-0 sudo[131012]: pam_unix(sudo:session): session closed for user root
Dec 06 09:48:07 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:48:07 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 09:48:07 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:48:07.611 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 09:48:07 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v199: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:48:08 compute-0 sudo[131166]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-edabubjuofvcnvulpbsxgipybjjfvsot ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014487.7129693-220-257497922937537/AnsiballZ_file.py'
Dec 06 09:48:08 compute-0 sudo[131166]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:48:08 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:48:08 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd00c002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:48:08 compute-0 python3.9[131168]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.hbyixhmr state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:48:08 compute-0 sudo[131166]: pam_unix(sudo:session): session closed for user root
Dec 06 09:48:08 compute-0 sshd-session[129788]: Connection closed by 192.168.122.30 port 54840
Dec 06 09:48:08 compute-0 sshd-session[129785]: pam_unix(sshd:session): session closed for user zuul
Dec 06 09:48:08 compute-0 systemd-logind[795]: Session 45 logged out. Waiting for processes to exit.
Dec 06 09:48:08 compute-0 systemd[1]: session-45.scope: Deactivated successfully.
Dec 06 09:48:08 compute-0 systemd[1]: session-45.scope: Consumed 5.630s CPU time.
Dec 06 09:48:08 compute-0 systemd-logind[795]: Removed session 45.
Dec 06 09:48:08 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 06 09:48:08 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 09:48:09 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:48:09 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd028001c00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:48:09 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 09:48:09 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:48:09 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd010003430 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:48:09 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:48:09 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 09:48:09 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:48:09.475 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 09:48:09 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:48:09 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:48:09 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:48:09.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:48:09 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v200: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:48:10 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:48:10 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd018003b20 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:48:10 compute-0 ceph-mon[74327]: pgmap v199: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:48:10 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:09:48:10] "GET /metrics HTTP/1.1" 200 48260 "" "Prometheus/2.51.0"
Dec 06 09:48:10 compute-0 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:09:48:10] "GET /metrics HTTP/1.1" 200 48260 "" "Prometheus/2.51.0"
Dec 06 09:48:11 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:48:11 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd00c002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:48:11 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:48:11 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd028001c00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:48:11 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:48:11 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:48:11 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:48:11.478 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:48:11 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:48:11 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:48:11 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:48:11.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:48:11 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v201: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:48:12 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:48:12 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd028001c00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:48:12 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 09:48:12 compute-0 ceph-mon[74327]: pgmap v200: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:48:13 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:48:13 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd018003b20 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:48:13 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:48:13 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd00c002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:48:13 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:48:13 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:48:13 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:48:13.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:48:13 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:48:13 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:48:13 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:48:13.620 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:48:13 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v202: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 09:48:13 compute-0 ceph-mon[74327]: pgmap v201: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:48:14 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:48:14 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd010003430 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:48:14 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 09:48:15 compute-0 ceph-mon[74327]: pgmap v202: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 09:48:15 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:48:15 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd028001c00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:48:15 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:48:15 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd018003cc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:48:15 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:48:15 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:48:15 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:48:15.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:48:15 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:48:15 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:48:15 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:48:15.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:48:15 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v203: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:48:16 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:48:16 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd00c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:48:16 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:48:16.983Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 06 09:48:17 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:48:17 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd010004140 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:48:17 compute-0 ceph-mon[74327]: pgmap v203: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:48:17 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:48:17 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd028001c00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:48:17 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:48:17 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:48:17 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:48:17.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:48:17 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:48:17 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 09:48:17 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:48:17.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 09:48:17 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v204: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:48:18 compute-0 sshd-session[131203]: Accepted publickey for zuul from 192.168.122.30 port 49006 ssh2: ECDSA SHA256:r1j7aLsKAM+XxDNbzEU5vWGpGNCOaIBwc7FZdATPttA
Dec 06 09:48:18 compute-0 systemd-logind[795]: New session 46 of user zuul.
Dec 06 09:48:18 compute-0 systemd[1]: Started Session 46 of User zuul.
Dec 06 09:48:18 compute-0 sshd-session[131203]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 06 09:48:18 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:48:18 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd028001c00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:48:19 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:48:19 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd00c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:48:19 compute-0 ceph-mon[74327]: pgmap v204: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:48:19 compute-0 python3.9[131356]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 06 09:48:19 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 09:48:19 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:48:19 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd010004140 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:48:19 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:48:19 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 09:48:19 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:48:19.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 09:48:19 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:48:19 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 09:48:19 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:48:19.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 09:48:19 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v205: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:48:20 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:48:20 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd018003cc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:48:20 compute-0 sudo[131512]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pndfrgaujuifvwnmicjlpueijlcpgbqq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014499.7255783-56-15487371380126/AnsiballZ_systemd.py'
Dec 06 09:48:20 compute-0 sudo[131512]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:48:20 compute-0 python3.9[131514]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Dec 06 09:48:20 compute-0 sudo[131512]: pam_unix(sudo:session): session closed for user root
Dec 06 09:48:20 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:09:48:20] "GET /metrics HTTP/1.1" 200 48260 "" "Prometheus/2.51.0"
Dec 06 09:48:20 compute-0 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:09:48:20] "GET /metrics HTTP/1.1" 200 48260 "" "Prometheus/2.51.0"
Dec 06 09:48:21 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:48:21 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd028003cc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:48:21 compute-0 ceph-mon[74327]: pgmap v205: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:48:21 compute-0 sudo[131667]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-teumvqqfklrnpnrzvhxpvghizmiqaicx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014500.8463693-80-20075105867659/AnsiballZ_systemd.py'
Dec 06 09:48:21 compute-0 sudo[131667]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:48:21 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:48:21 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd00c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:48:21 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:48:21 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:48:21 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:48:21.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:48:21 compute-0 python3.9[131670]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 06 09:48:21 compute-0 sudo[131667]: pam_unix(sudo:session): session closed for user root
Dec 06 09:48:21 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:48:21 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:48:21 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:48:21.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:48:21 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v206: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:48:22 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:48:22 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd010004140 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:48:22 compute-0 sudo[131821]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-owdslqdwkmsvtqvdwhupapqyrhtygluf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014501.8603432-107-129732419670059/AnsiballZ_command.py'
Dec 06 09:48:22 compute-0 sudo[131821]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:48:22 compute-0 python3.9[131823]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 09:48:22 compute-0 sudo[131821]: pam_unix(sudo:session): session closed for user root
Dec 06 09:48:23 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:48:23 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd010004140 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:48:23 compute-0 sudo[131976]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lzbfcsyblfgmqdpeyocpjzkorrzultbs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014502.9769387-131-30007804773744/AnsiballZ_stat.py'
Dec 06 09:48:23 compute-0 sudo[131976]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:48:23 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:48:23 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd028003cc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:48:23 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:48:23 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 09:48:23 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:48:23.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 09:48:23 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:48:23 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:48:23 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:48:23.635 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:48:23 compute-0 python3.9[131978]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 06 09:48:23 compute-0 sudo[131976]: pam_unix(sudo:session): session closed for user root
Dec 06 09:48:23 compute-0 ceph-mgr[74618]: [balancer INFO root] Optimize plan auto_2025-12-06_09:48:23
Dec 06 09:48:23 compute-0 ceph-mgr[74618]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 06 09:48:23 compute-0 ceph-mgr[74618]: [balancer INFO root] do_upmap
Dec 06 09:48:23 compute-0 ceph-mgr[74618]: [balancer INFO root] pools ['default.rgw.control', 'cephfs.cephfs.meta', 'volumes', '.mgr', '.nfs', '.rgw.root', 'backups', 'default.rgw.log', 'default.rgw.meta', 'vms', 'images', 'cephfs.cephfs.data']
Dec 06 09:48:23 compute-0 ceph-mgr[74618]: [balancer INFO root] prepared 0/10 upmap changes
Dec 06 09:48:23 compute-0 ceph-mon[74327]: pgmap v206: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:48:23 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v207: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 09:48:23 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 06 09:48:23 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 09:48:23 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] _maybe_adjust
Dec 06 09:48:23 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 09:48:23 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 09:48:23 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 09:48:23 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 06 09:48:23 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 09:48:23 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 09:48:23 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 09:48:23 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 09:48:23 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 09:48:23 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 09:48:23 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 09:48:23 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 09:48:23 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 09:48:23 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Dec 06 09:48:23 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 09:48:23 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 09:48:23 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 09:48:23 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec 06 09:48:23 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 09:48:23 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 06 09:48:23 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 09:48:23 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec 06 09:48:23 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 09:48:23 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 09:48:23 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 09:48:23 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 06 09:48:23 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 09:48:23 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 09:48:23 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 09:48:23 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 09:48:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 06 09:48:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 09:48:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 09:48:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 09:48:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 09:48:24 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:48:24 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd00c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:48:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 06 09:48:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 09:48:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 09:48:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 09:48:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 09:48:24 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 09:48:24 compute-0 sudo[132130]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ngdohjpthikrdtylasbvyuiumsfbbjvx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014503.9443583-158-262115911214066/AnsiballZ_file.py'
Dec 06 09:48:24 compute-0 sudo[132130]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:48:24 compute-0 python3.9[132132]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:48:24 compute-0 sudo[132130]: pam_unix(sudo:session): session closed for user root
Dec 06 09:48:24 compute-0 ceph-mon[74327]: pgmap v207: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 09:48:24 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 09:48:24 compute-0 sshd-session[131206]: Connection closed by 192.168.122.30 port 49006
Dec 06 09:48:24 compute-0 sshd-session[131203]: pam_unix(sshd:session): session closed for user zuul
Dec 06 09:48:24 compute-0 systemd-logind[795]: Session 46 logged out. Waiting for processes to exit.
Dec 06 09:48:24 compute-0 systemd[1]: session-46.scope: Deactivated successfully.
Dec 06 09:48:24 compute-0 systemd[1]: session-46.scope: Consumed 4.387s CPU time.
Dec 06 09:48:24 compute-0 systemd-logind[795]: Removed session 46.
Dec 06 09:48:25 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:48:25 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcffc000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:48:25 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:48:25 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd018003d70 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:48:25 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:48:25 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:48:25 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:48:25.501 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:48:25 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:48:25 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:48:25 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:48:25.637 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:48:25 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v208: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:48:26 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:48:26 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd028003cc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:48:26 compute-0 sudo[132159]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 09:48:26 compute-0 sudo[132159]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:48:26 compute-0 sudo[132159]: pam_unix(sudo:session): session closed for user root
Dec 06 09:48:26 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-crash-compute-0[79850]: ERROR:ceph-crash:Error scraping /var/lib/ceph/crash: [Errno 13] Permission denied: '/var/lib/ceph/crash'
Dec 06 09:48:26 compute-0 ceph-mon[74327]: pgmap v208: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:48:26 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:48:26.985Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec 06 09:48:26 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:48:26.985Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec 06 09:48:26 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:48:26.985Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 06 09:48:27 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:48:27 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd00c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:48:27 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:48:27 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcffc0016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:48:27 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:48:27 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:48:27 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:48:27.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:48:27 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:48:27 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 09:48:27 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:48:27.639 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 09:48:27 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v209: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:48:28 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:48:28 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd018003d70 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:48:28 compute-0 ceph-mon[74327]: pgmap v209: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:48:29 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:48:29 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd028003cc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:48:29 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 09:48:29 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:48:29 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd00c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:48:29 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:48:29 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 09:48:29 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:48:29.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 09:48:29 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:48:29 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 09:48:29 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:48:29.641 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 09:48:29 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v210: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:48:30 compute-0 sshd-session[132188]: Accepted publickey for zuul from 192.168.122.30 port 52080 ssh2: ECDSA SHA256:r1j7aLsKAM+XxDNbzEU5vWGpGNCOaIBwc7FZdATPttA
Dec 06 09:48:30 compute-0 systemd-logind[795]: New session 47 of user zuul.
Dec 06 09:48:30 compute-0 systemd[1]: Started Session 47 of User zuul.
Dec 06 09:48:30 compute-0 sshd-session[132188]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 06 09:48:30 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:48:30 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcffc0016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:48:30 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:09:48:30] "GET /metrics HTTP/1.1" 200 48256 "" "Prometheus/2.51.0"
Dec 06 09:48:30 compute-0 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:09:48:30] "GET /metrics HTTP/1.1" 200 48256 "" "Prometheus/2.51.0"
Dec 06 09:48:31 compute-0 ceph-mon[74327]: pgmap v210: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:48:31 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:48:31 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd018003d70 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:48:31 compute-0 python3.9[132341]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 06 09:48:31 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:48:31 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd028003cc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:48:31 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:48:31 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:48:31 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:48:31.511 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:48:31 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:48:31 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:48:31 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:48:31.643 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:48:31 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v211: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:48:32 compute-0 sudo[132497]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fhrmhdxyhjfdyjedxfgyumkshrfvnsoi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014511.762865-62-131880734424741/AnsiballZ_setup.py'
Dec 06 09:48:32 compute-0 sudo[132497]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:48:32 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:48:32 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd00c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:48:32 compute-0 python3.9[132499]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 06 09:48:32 compute-0 sudo[132497]: pam_unix(sudo:session): session closed for user root
Dec 06 09:48:33 compute-0 sudo[132582]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fvpdjljgkladrqghsdqaexkunieddhlf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014511.762865-62-131880734424741/AnsiballZ_dnf.py'
Dec 06 09:48:33 compute-0 sudo[132582]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:48:33 compute-0 ceph-mon[74327]: pgmap v211: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:48:33 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:48:33 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcffc0016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:48:33 compute-0 python3.9[132584]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Dec 06 09:48:33 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:48:33 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd018003d70 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:48:33 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:48:33 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:48:33 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:48:33.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:48:33 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:48:33 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 09:48:33 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:48:33.645 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 09:48:33 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v212: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 09:48:34 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:48:34 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd028003cc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:48:34 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 09:48:34 compute-0 sudo[132582]: pam_unix(sudo:session): session closed for user root
Dec 06 09:48:35 compute-0 ceph-mon[74327]: pgmap v212: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 09:48:35 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:48:35 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd00c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:48:35 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:48:35 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcffc002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:48:35 compute-0 python3.9[132738]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 09:48:35 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:48:35 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:48:35 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:48:35.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:48:35 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:48:35 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 09:48:35 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:48:35.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 09:48:35 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v213: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:48:36 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:48:36 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd018003d90 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:48:36 compute-0 python3.9[132889]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Dec 06 09:48:36 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:48:36.986Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 06 09:48:37 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:48:37 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd028003cc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:48:37 compute-0 ceph-mon[74327]: pgmap v213: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:48:37 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:48:37 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd00c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:48:37 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:48:37 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:48:37 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:48:37.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:48:37 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:48:37 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 09:48:37 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:48:37.650 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 09:48:37 compute-0 python3.9[133041]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 06 09:48:37 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v214: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:48:38 compute-0 python3.9[133191]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/config follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 06 09:48:38 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:48:38 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcffc002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:48:38 compute-0 sshd-session[132191]: Connection closed by 192.168.122.30 port 52080
Dec 06 09:48:38 compute-0 sshd-session[132188]: pam_unix(sshd:session): session closed for user zuul
Dec 06 09:48:38 compute-0 systemd[1]: session-47.scope: Deactivated successfully.
Dec 06 09:48:38 compute-0 systemd[1]: session-47.scope: Consumed 6.060s CPU time.
Dec 06 09:48:38 compute-0 systemd-logind[795]: Session 47 logged out. Waiting for processes to exit.
Dec 06 09:48:38 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 06 09:48:38 compute-0 systemd-logind[795]: Removed session 47.
Dec 06 09:48:38 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 09:48:39 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:48:39 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd018003db0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:48:39 compute-0 ceph-mon[74327]: pgmap v214: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:48:39 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 09:48:39 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 09:48:39 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:48:39 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd028003cc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:48:39 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:48:39 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:48:39 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:48:39.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:48:39 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:48:39 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:48:39 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:48:39.651 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:48:39 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v215: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:48:40 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:48:40 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd00c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:48:40 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:09:48:40] "GET /metrics HTTP/1.1" 200 48259 "" "Prometheus/2.51.0"
Dec 06 09:48:40 compute-0 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:09:48:40] "GET /metrics HTTP/1.1" 200 48259 "" "Prometheus/2.51.0"
Dec 06 09:48:41 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:48:41 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcffc002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:48:41 compute-0 ceph-mon[74327]: pgmap v215: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:48:41 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:48:41 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd018003dd0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:48:41 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:48:41 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:48:41 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:48:41.525 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:48:41 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:48:41 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:48:41 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:48:41.654 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:48:41 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v216: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:48:42 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:48:42 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd028003cc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:48:43 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:48:43 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd00c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:48:43 compute-0 ceph-mon[74327]: pgmap v216: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:48:43 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:48:43 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcffc003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:48:43 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:48:43 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:48:43 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:48:43.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:48:43 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:48:43 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 09:48:43 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:48:43.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 09:48:43 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v217: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 09:48:44 compute-0 sshd-session[133222]: Accepted publickey for zuul from 192.168.122.30 port 42666 ssh2: ECDSA SHA256:r1j7aLsKAM+XxDNbzEU5vWGpGNCOaIBwc7FZdATPttA
Dec 06 09:48:44 compute-0 systemd-logind[795]: New session 48 of user zuul.
Dec 06 09:48:44 compute-0 systemd[1]: Started Session 48 of User zuul.
Dec 06 09:48:44 compute-0 sshd-session[133222]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 06 09:48:44 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:48:44 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcffc003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:48:44 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 09:48:44 compute-0 ceph-mon[74327]: pgmap v217: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 09:48:45 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:48:45 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd028003cc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:48:45 compute-0 python3.9[133375]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 06 09:48:45 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:48:45 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd028003cc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:48:45 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:48:45 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 09:48:45 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:48:45.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 09:48:45 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:48:45 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 09:48:45 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:48:45.659 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 09:48:45 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v218: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:48:46 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:48:46 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd004000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:48:46 compute-0 ceph-mon[74327]: pgmap v218: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:48:46 compute-0 sudo[133504]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 09:48:46 compute-0 sudo[133504]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:48:46 compute-0 sudo[133504]: pam_unix(sudo:session): session closed for user root
Dec 06 09:48:46 compute-0 sudo[133558]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pswqieohcvzzpdojgbhhmmqtnsfoxybl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014526.2329404-111-33479110669307/AnsiballZ_file.py'
Dec 06 09:48:46 compute-0 sudo[133558]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:48:46 compute-0 python3.9[133560]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 06 09:48:46 compute-0 sudo[133558]: pam_unix(sudo:session): session closed for user root
Dec 06 09:48:46 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:48:46.987Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec 06 09:48:46 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:48:46.987Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec 06 09:48:46 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:48:46.987Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 06 09:48:47 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:48:47 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd030001ac0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:48:47 compute-0 sudo[133712]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-whduengpzfilypjoxoqxztnicnaalzmi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014527.1111138-111-3257714414899/AnsiballZ_file.py'
Dec 06 09:48:47 compute-0 sudo[133712]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:48:47 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:48:47 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd018003ee0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:48:47 compute-0 python3.9[133714]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 06 09:48:47 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:48:47 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 09:48:47 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:48:47.533 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 09:48:47 compute-0 sudo[133712]: pam_unix(sudo:session): session closed for user root
Dec 06 09:48:47 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:48:47 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:48:47 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:48:47.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:48:47 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v219: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:48:48 compute-0 sudo[133864]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nkoqlqtcrcckgbzgjhwirchxgidxktvj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014527.7212272-155-115046506001513/AnsiballZ_stat.py'
Dec 06 09:48:48 compute-0 sudo[133864]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:48:48 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:48:48 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd028003cc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:48:48 compute-0 python3.9[133866]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:48:48 compute-0 sudo[133864]: pam_unix(sudo:session): session closed for user root
Dec 06 09:48:48 compute-0 sudo[133987]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-japhqfnbznqmuwjqranbvegytoiacanj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014527.7212272-155-115046506001513/AnsiballZ_copy.py'
Dec 06 09:48:48 compute-0 sudo[133987]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:48:49 compute-0 ceph-mon[74327]: pgmap v219: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:48:49 compute-0 python3.9[133989]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765014527.7212272-155-115046506001513/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=4972a5b4767763bd2b83e4da30fd5d4465a5d407 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:48:49 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:48:49 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0040016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:48:49 compute-0 sudo[133987]: pam_unix(sudo:session): session closed for user root
Dec 06 09:48:49 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 09:48:49 compute-0 sudo[134141]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-enoochbjfaznwktzgmgfjlwsudernnmr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014529.2117805-155-122908161432421/AnsiballZ_stat.py'
Dec 06 09:48:49 compute-0 sudo[134141]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:48:49 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:48:49 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd030001ac0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:48:49 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:48:49 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:48:49 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:48:49.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:48:49 compute-0 python3.9[134143]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:48:49 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:48:49 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 09:48:49 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:48:49.663 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 09:48:49 compute-0 sudo[134141]: pam_unix(sudo:session): session closed for user root
Dec 06 09:48:49 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v220: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:48:49 compute-0 sudo[134264]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jjlqvjtzpwnpnaocxdogeerkqxexkfwu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014529.2117805-155-122908161432421/AnsiballZ_copy.py'
Dec 06 09:48:49 compute-0 sudo[134264]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:48:50 compute-0 python3.9[134266]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765014529.2117805-155-122908161432421/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=f805cc6455e59702aa77bd6ffe81bb9b155b0be7 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:48:50 compute-0 sudo[134264]: pam_unix(sudo:session): session closed for user root
Dec 06 09:48:50 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:48:50 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd018003f00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:48:50 compute-0 sudo[134416]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qtelqooumxznazytjbnpghmvxguhmjyg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014530.3228693-155-204436662716346/AnsiballZ_stat.py'
Dec 06 09:48:50 compute-0 sudo[134416]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:48:50 compute-0 python3.9[134418]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:48:50 compute-0 sudo[134416]: pam_unix(sudo:session): session closed for user root
Dec 06 09:48:50 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:09:48:50] "GET /metrics HTTP/1.1" 200 48259 "" "Prometheus/2.51.0"
Dec 06 09:48:50 compute-0 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:09:48:50] "GET /metrics HTTP/1.1" 200 48259 "" "Prometheus/2.51.0"
Dec 06 09:48:51 compute-0 ceph-mon[74327]: pgmap v220: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:48:51 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:48:51 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd028003cc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:48:51 compute-0 sudo[134540]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qcbvvebdtifwdyhwwpdvffllnhzgldue ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014530.3228693-155-204436662716346/AnsiballZ_copy.py'
Dec 06 09:48:51 compute-0 sudo[134540]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:48:51 compute-0 python3.9[134542]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765014530.3228693-155-204436662716346/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=fc784aa4b08f164441f6f4f35eca9daa081a5501 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:48:51 compute-0 sudo[134540]: pam_unix(sudo:session): session closed for user root
Dec 06 09:48:51 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:48:51 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0040016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:48:51 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:48:51 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 09:48:51 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:48:51.540 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 09:48:51 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:48:51 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 09:48:51 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:48:51.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 09:48:51 compute-0 sudo[134693]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cklulgucdkahmmasysmqgucnxginrxko ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014531.6004028-284-83252865298764/AnsiballZ_file.py'
Dec 06 09:48:51 compute-0 sudo[134693]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:48:51 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v221: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:48:51 compute-0 sudo[134696]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 09:48:51 compute-0 sudo[134696]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:48:51 compute-0 sudo[134696]: pam_unix(sudo:session): session closed for user root
Dec 06 09:48:52 compute-0 sudo[134721]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 check-host
Dec 06 09:48:52 compute-0 sudo[134721]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:48:52 compute-0 python3.9[134695]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 06 09:48:52 compute-0 sudo[134693]: pam_unix(sudo:session): session closed for user root
Dec 06 09:48:52 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:48:52 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0300027d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:48:52 compute-0 sudo[134721]: pam_unix(sudo:session): session closed for user root
Dec 06 09:48:52 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 06 09:48:52 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:48:52 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 06 09:48:52 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec 06 09:48:52 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:48:52 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:48:52 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec 06 09:48:52 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:48:52 compute-0 sudo[134886]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 09:48:52 compute-0 sudo[134886]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:48:52 compute-0 sudo[134886]: pam_unix(sudo:session): session closed for user root
Dec 06 09:48:52 compute-0 sudo[134940]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nlqqirxytgmpyyzvkiztcxprklanxhtt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014532.221007-284-251067955225692/AnsiballZ_file.py'
Dec 06 09:48:52 compute-0 sudo[134940]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:48:52 compute-0 sudo[134943]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Dec 06 09:48:52 compute-0 sudo[134943]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:48:52 compute-0 python3.9[134942]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 06 09:48:52 compute-0 sudo[134940]: pam_unix(sudo:session): session closed for user root
Dec 06 09:48:53 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:48:53 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd018003f20 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:48:53 compute-0 sudo[134943]: pam_unix(sudo:session): session closed for user root
Dec 06 09:48:53 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 06 09:48:53 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 09:48:53 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 06 09:48:53 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 09:48:53 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 06 09:48:53 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:48:53 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Dec 06 09:48:53 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:48:53 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 06 09:48:53 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 09:48:53 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 06 09:48:53 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 09:48:53 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 06 09:48:53 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 09:48:53 compute-0 sudo[135171]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zesezvlwmbkqpstockgtbosvgfjacebb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014532.938963-331-270173198636715/AnsiballZ_stat.py'
Dec 06 09:48:53 compute-0 sudo[135171]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:48:53 compute-0 sudo[135127]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 09:48:53 compute-0 sudo[135127]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:48:53 compute-0 sudo[135127]: pam_unix(sudo:session): session closed for user root
Dec 06 09:48:53 compute-0 sudo[135177]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 5ecd3f74-dade-5fc4-92ce-8950ae424258 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Dec 06 09:48:53 compute-0 sudo[135177]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:48:53 compute-0 ceph-mon[74327]: pgmap v221: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:48:53 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:48:53 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:48:53 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:48:53 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:48:53 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 09:48:53 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 09:48:53 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:48:53 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:48:53 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 09:48:53 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 09:48:53 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 09:48:53 compute-0 python3.9[135174]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:48:53 compute-0 sudo[135171]: pam_unix(sudo:session): session closed for user root
Dec 06 09:48:53 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:48:53 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd028003cc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:48:53 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:48:53 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:48:53 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:48:53.544 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:48:53 compute-0 podman[135312]: 2025-12-06 09:48:53.663363893 +0000 UTC m=+0.056433235 container create 7476fd7a3795ddeab497a89da41751382ba04e1cbebc6623f50adedcd94f2df9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_yalow, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1)
Dec 06 09:48:53 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:48:53 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:48:53 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:48:53.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:48:53 compute-0 systemd[1]: Started libpod-conmon-7476fd7a3795ddeab497a89da41751382ba04e1cbebc6623f50adedcd94f2df9.scope.
Dec 06 09:48:53 compute-0 sudo[135379]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vukhbaralxoyardlzvzjotplnxgmjucs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014532.938963-331-270173198636715/AnsiballZ_copy.py'
Dec 06 09:48:53 compute-0 sudo[135379]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:48:53 compute-0 podman[135312]: 2025-12-06 09:48:53.636760564 +0000 UTC m=+0.029829986 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 06 09:48:53 compute-0 systemd[1]: Started libcrun container.
Dec 06 09:48:53 compute-0 podman[135312]: 2025-12-06 09:48:53.764420043 +0000 UTC m=+0.157489405 container init 7476fd7a3795ddeab497a89da41751382ba04e1cbebc6623f50adedcd94f2df9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_yalow, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 09:48:53 compute-0 podman[135312]: 2025-12-06 09:48:53.77098247 +0000 UTC m=+0.164051812 container start 7476fd7a3795ddeab497a89da41751382ba04e1cbebc6623f50adedcd94f2df9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_yalow, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 09:48:53 compute-0 podman[135312]: 2025-12-06 09:48:53.773812826 +0000 UTC m=+0.166882168 container attach 7476fd7a3795ddeab497a89da41751382ba04e1cbebc6623f50adedcd94f2df9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_yalow, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 09:48:53 compute-0 brave_yalow[135381]: 167 167
Dec 06 09:48:53 compute-0 systemd[1]: libpod-7476fd7a3795ddeab497a89da41751382ba04e1cbebc6623f50adedcd94f2df9.scope: Deactivated successfully.
Dec 06 09:48:53 compute-0 podman[135312]: 2025-12-06 09:48:53.777367153 +0000 UTC m=+0.170436495 container died 7476fd7a3795ddeab497a89da41751382ba04e1cbebc6623f50adedcd94f2df9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_yalow, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, OSD_FLAVOR=default)
Dec 06 09:48:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-afc5d14b32605f51a550e429c1d33fdaa9db3409f4b6cedb2293dfcc9fc80028-merged.mount: Deactivated successfully.
Dec 06 09:48:53 compute-0 podman[135312]: 2025-12-06 09:48:53.815303977 +0000 UTC m=+0.208373319 container remove 7476fd7a3795ddeab497a89da41751382ba04e1cbebc6623f50adedcd94f2df9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_yalow, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 06 09:48:53 compute-0 systemd[1]: libpod-conmon-7476fd7a3795ddeab497a89da41751382ba04e1cbebc6623f50adedcd94f2df9.scope: Deactivated successfully.
Dec 06 09:48:53 compute-0 python3.9[135383]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765014532.938963-331-270173198636715/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=37e9f8032863405665c1a6629c82ece5be598bf6 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:48:53 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 06 09:48:53 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 09:48:53 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v222: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 09:48:53 compute-0 sudo[135379]: pam_unix(sudo:session): session closed for user root
Dec 06 09:48:53 compute-0 podman[135407]: 2025-12-06 09:48:53.958586548 +0000 UTC m=+0.044332179 container create a7ddca4aac259c1ff56a107c5eb813ee9e7d32a52505839037c03821580026aa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_jang, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 09:48:53 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 09:48:53 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 09:48:53 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 09:48:53 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 09:48:53 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 09:48:53 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 09:48:53 compute-0 systemd[1]: Started libpod-conmon-a7ddca4aac259c1ff56a107c5eb813ee9e7d32a52505839037c03821580026aa.scope.
Dec 06 09:48:54 compute-0 systemd[1]: Started libcrun container.
Dec 06 09:48:54 compute-0 podman[135407]: 2025-12-06 09:48:53.939077751 +0000 UTC m=+0.024823432 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 06 09:48:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/052c39c1e5a36cf7eaecc5fe785e2ccdb96f99911f602693531084648588171e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 09:48:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/052c39c1e5a36cf7eaecc5fe785e2ccdb96f99911f602693531084648588171e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 09:48:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/052c39c1e5a36cf7eaecc5fe785e2ccdb96f99911f602693531084648588171e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 09:48:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/052c39c1e5a36cf7eaecc5fe785e2ccdb96f99911f602693531084648588171e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 09:48:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/052c39c1e5a36cf7eaecc5fe785e2ccdb96f99911f602693531084648588171e/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 06 09:48:54 compute-0 podman[135407]: 2025-12-06 09:48:54.047712355 +0000 UTC m=+0.133458006 container init a7ddca4aac259c1ff56a107c5eb813ee9e7d32a52505839037c03821580026aa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_jang, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default)
Dec 06 09:48:54 compute-0 podman[135407]: 2025-12-06 09:48:54.059617897 +0000 UTC m=+0.145363528 container start a7ddca4aac259c1ff56a107c5eb813ee9e7d32a52505839037c03821580026aa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_jang, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 09:48:54 compute-0 podman[135407]: 2025-12-06 09:48:54.070706226 +0000 UTC m=+0.156451857 container attach a7ddca4aac259c1ff56a107c5eb813ee9e7d32a52505839037c03821580026aa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_jang, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Dec 06 09:48:54 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:48:54 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0040016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:48:54 compute-0 sudo[135584]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qyutabdlknckwaddfyfibxunmbpjnyjl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014534.0829535-331-11209439750338/AnsiballZ_stat.py'
Dec 06 09:48:54 compute-0 sudo[135584]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:48:54 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 09:48:54 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 09:48:54 compute-0 blissful_jang[135447]: --> passed data devices: 0 physical, 1 LVM
Dec 06 09:48:54 compute-0 blissful_jang[135447]: --> All data devices are unavailable
Dec 06 09:48:54 compute-0 podman[135407]: 2025-12-06 09:48:54.411692297 +0000 UTC m=+0.497437938 container died a7ddca4aac259c1ff56a107c5eb813ee9e7d32a52505839037c03821580026aa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_jang, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 06 09:48:54 compute-0 systemd[1]: libpod-a7ddca4aac259c1ff56a107c5eb813ee9e7d32a52505839037c03821580026aa.scope: Deactivated successfully.
Dec 06 09:48:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-052c39c1e5a36cf7eaecc5fe785e2ccdb96f99911f602693531084648588171e-merged.mount: Deactivated successfully.
Dec 06 09:48:54 compute-0 podman[135407]: 2025-12-06 09:48:54.459442897 +0000 UTC m=+0.545188528 container remove a7ddca4aac259c1ff56a107c5eb813ee9e7d32a52505839037c03821580026aa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_jang, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 09:48:54 compute-0 systemd[1]: libpod-conmon-a7ddca4aac259c1ff56a107c5eb813ee9e7d32a52505839037c03821580026aa.scope: Deactivated successfully.
Dec 06 09:48:54 compute-0 sudo[135177]: pam_unix(sudo:session): session closed for user root
Dec 06 09:48:54 compute-0 python3.9[135587]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:48:54 compute-0 sudo[135604]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 09:48:54 compute-0 sudo[135604]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:48:54 compute-0 sudo[135604]: pam_unix(sudo:session): session closed for user root
Dec 06 09:48:54 compute-0 sudo[135584]: pam_unix(sudo:session): session closed for user root
Dec 06 09:48:54 compute-0 sudo[135629]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 5ecd3f74-dade-5fc4-92ce-8950ae424258 -- lvm list --format json
Dec 06 09:48:54 compute-0 sudo[135629]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:48:54 compute-0 sudo[135801]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kwsizutvghjzfivjdgzrwxejvvdvxayd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014534.0829535-331-11209439750338/AnsiballZ_copy.py'
Dec 06 09:48:54 compute-0 sudo[135801]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:48:55 compute-0 podman[135818]: 2025-12-06 09:48:55.002985269 +0000 UTC m=+0.044745170 container create ec467887004f8d0d407fee863dd798cf0c13303163d35a2f32a8fe58e578c555 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_ritchie, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 09:48:55 compute-0 systemd[1]: Started libpod-conmon-ec467887004f8d0d407fee863dd798cf0c13303163d35a2f32a8fe58e578c555.scope.
Dec 06 09:48:55 compute-0 python3.9[135805]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765014534.0829535-331-11209439750338/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=72139a22070e52361b83b34c98df3f4b6e2a8fd5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:48:55 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:48:55 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0300027d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:48:55 compute-0 podman[135818]: 2025-12-06 09:48:54.982004323 +0000 UTC m=+0.023764254 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 06 09:48:55 compute-0 systemd[1]: Started libcrun container.
Dec 06 09:48:55 compute-0 sudo[135801]: pam_unix(sudo:session): session closed for user root
Dec 06 09:48:55 compute-0 podman[135818]: 2025-12-06 09:48:55.105410026 +0000 UTC m=+0.147169957 container init ec467887004f8d0d407fee863dd798cf0c13303163d35a2f32a8fe58e578c555 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_ritchie, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec 06 09:48:55 compute-0 podman[135818]: 2025-12-06 09:48:55.111708236 +0000 UTC m=+0.153468177 container start ec467887004f8d0d407fee863dd798cf0c13303163d35a2f32a8fe58e578c555 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_ritchie, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 06 09:48:55 compute-0 podman[135818]: 2025-12-06 09:48:55.115435626 +0000 UTC m=+0.157195557 container attach ec467887004f8d0d407fee863dd798cf0c13303163d35a2f32a8fe58e578c555 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_ritchie, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, io.buildah.version=1.40.1, OSD_FLAVOR=default)
Dec 06 09:48:55 compute-0 sweet_ritchie[135835]: 167 167
Dec 06 09:48:55 compute-0 systemd[1]: libpod-ec467887004f8d0d407fee863dd798cf0c13303163d35a2f32a8fe58e578c555.scope: Deactivated successfully.
Dec 06 09:48:55 compute-0 podman[135818]: 2025-12-06 09:48:55.116816954 +0000 UTC m=+0.158576865 container died ec467887004f8d0d407fee863dd798cf0c13303163d35a2f32a8fe58e578c555 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_ritchie, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 09:48:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-6e18f6b41bc202354ddee8a8299af53146a349f7275517390b1e2d227808f4a4-merged.mount: Deactivated successfully.
Dec 06 09:48:55 compute-0 podman[135818]: 2025-12-06 09:48:55.155274282 +0000 UTC m=+0.197034193 container remove ec467887004f8d0d407fee863dd798cf0c13303163d35a2f32a8fe58e578c555 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_ritchie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 06 09:48:55 compute-0 systemd[1]: libpod-conmon-ec467887004f8d0d407fee863dd798cf0c13303163d35a2f32a8fe58e578c555.scope: Deactivated successfully.
Dec 06 09:48:55 compute-0 podman[135917]: 2025-12-06 09:48:55.299356735 +0000 UTC m=+0.045109050 container create 4e01f45af4077df79c465703aebdab4edb8b817c924829f8f2903f52979f657d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_beaver, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 09:48:55 compute-0 systemd[1]: Started libpod-conmon-4e01f45af4077df79c465703aebdab4edb8b817c924829f8f2903f52979f657d.scope.
Dec 06 09:48:55 compute-0 systemd[1]: Started libcrun container.
Dec 06 09:48:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3c14c41c9117632a778fc33806d1c232821398b0f48520b4949428e6246bb207/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 09:48:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3c14c41c9117632a778fc33806d1c232821398b0f48520b4949428e6246bb207/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 09:48:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3c14c41c9117632a778fc33806d1c232821398b0f48520b4949428e6246bb207/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 09:48:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3c14c41c9117632a778fc33806d1c232821398b0f48520b4949428e6246bb207/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 09:48:55 compute-0 podman[135917]: 2025-12-06 09:48:55.282764936 +0000 UTC m=+0.028517281 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 06 09:48:55 compute-0 podman[135917]: 2025-12-06 09:48:55.374862274 +0000 UTC m=+0.120614609 container init 4e01f45af4077df79c465703aebdab4edb8b817c924829f8f2903f52979f657d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_beaver, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 09:48:55 compute-0 podman[135917]: 2025-12-06 09:48:55.38062291 +0000 UTC m=+0.126375225 container start 4e01f45af4077df79c465703aebdab4edb8b817c924829f8f2903f52979f657d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_beaver, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 09:48:55 compute-0 podman[135917]: 2025-12-06 09:48:55.384401171 +0000 UTC m=+0.130153516 container attach 4e01f45af4077df79c465703aebdab4edb8b817c924829f8f2903f52979f657d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_beaver, io.buildah.version=1.40.1, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 09:48:55 compute-0 ceph-mon[74327]: pgmap v222: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 09:48:55 compute-0 sudo[136032]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eolipvrdbqagpxhowgcfkibokcnqbyeg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014535.226689-331-14089768234595/AnsiballZ_stat.py'
Dec 06 09:48:55 compute-0 sudo[136032]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:48:55 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:48:55 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd018003f40 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:48:55 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:48:55 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 09:48:55 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:48:55.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 09:48:55 compute-0 python3.9[136034]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:48:55 compute-0 sudo[136032]: pam_unix(sudo:session): session closed for user root
Dec 06 09:48:55 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:48:55 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:48:55 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:48:55.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:48:55 compute-0 eager_beaver[135977]: {
Dec 06 09:48:55 compute-0 eager_beaver[135977]:     "1": [
Dec 06 09:48:55 compute-0 eager_beaver[135977]:         {
Dec 06 09:48:55 compute-0 eager_beaver[135977]:             "devices": [
Dec 06 09:48:55 compute-0 eager_beaver[135977]:                 "/dev/loop3"
Dec 06 09:48:55 compute-0 eager_beaver[135977]:             ],
Dec 06 09:48:55 compute-0 eager_beaver[135977]:             "lv_name": "ceph_lv0",
Dec 06 09:48:55 compute-0 eager_beaver[135977]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 09:48:55 compute-0 eager_beaver[135977]:             "lv_size": "21470642176",
Dec 06 09:48:55 compute-0 eager_beaver[135977]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=a55pcS-v3Vt-CJ4W-fqCV-Baze-9VlY-jG9prS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=5ecd3f74-dade-5fc4-92ce-8950ae424258,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=7899c4d8-edb4-4836-b838-c4aa702ad7af,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 06 09:48:55 compute-0 eager_beaver[135977]:             "lv_uuid": "a55pcS-v3Vt-CJ4W-fqCV-Baze-9VlY-jG9prS",
Dec 06 09:48:55 compute-0 eager_beaver[135977]:             "name": "ceph_lv0",
Dec 06 09:48:55 compute-0 eager_beaver[135977]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 09:48:55 compute-0 eager_beaver[135977]:             "tags": {
Dec 06 09:48:55 compute-0 eager_beaver[135977]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 06 09:48:55 compute-0 eager_beaver[135977]:                 "ceph.block_uuid": "a55pcS-v3Vt-CJ4W-fqCV-Baze-9VlY-jG9prS",
Dec 06 09:48:55 compute-0 eager_beaver[135977]:                 "ceph.cephx_lockbox_secret": "",
Dec 06 09:48:55 compute-0 eager_beaver[135977]:                 "ceph.cluster_fsid": "5ecd3f74-dade-5fc4-92ce-8950ae424258",
Dec 06 09:48:55 compute-0 eager_beaver[135977]:                 "ceph.cluster_name": "ceph",
Dec 06 09:48:55 compute-0 eager_beaver[135977]:                 "ceph.crush_device_class": "",
Dec 06 09:48:55 compute-0 eager_beaver[135977]:                 "ceph.encrypted": "0",
Dec 06 09:48:55 compute-0 eager_beaver[135977]:                 "ceph.osd_fsid": "7899c4d8-edb4-4836-b838-c4aa702ad7af",
Dec 06 09:48:55 compute-0 eager_beaver[135977]:                 "ceph.osd_id": "1",
Dec 06 09:48:55 compute-0 eager_beaver[135977]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 06 09:48:55 compute-0 eager_beaver[135977]:                 "ceph.type": "block",
Dec 06 09:48:55 compute-0 eager_beaver[135977]:                 "ceph.vdo": "0",
Dec 06 09:48:55 compute-0 eager_beaver[135977]:                 "ceph.with_tpm": "0"
Dec 06 09:48:55 compute-0 eager_beaver[135977]:             },
Dec 06 09:48:55 compute-0 eager_beaver[135977]:             "type": "block",
Dec 06 09:48:55 compute-0 eager_beaver[135977]:             "vg_name": "ceph_vg0"
Dec 06 09:48:55 compute-0 eager_beaver[135977]:         }
Dec 06 09:48:55 compute-0 eager_beaver[135977]:     ]
Dec 06 09:48:55 compute-0 eager_beaver[135977]: }
Dec 06 09:48:55 compute-0 systemd[1]: libpod-4e01f45af4077df79c465703aebdab4edb8b817c924829f8f2903f52979f657d.scope: Deactivated successfully.
Dec 06 09:48:55 compute-0 podman[135917]: 2025-12-06 09:48:55.700030458 +0000 UTC m=+0.445782773 container died 4e01f45af4077df79c465703aebdab4edb8b817c924829f8f2903f52979f657d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_beaver, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec 06 09:48:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-3c14c41c9117632a778fc33806d1c232821398b0f48520b4949428e6246bb207-merged.mount: Deactivated successfully.
Dec 06 09:48:55 compute-0 podman[135917]: 2025-12-06 09:48:55.744643833 +0000 UTC m=+0.490396148 container remove 4e01f45af4077df79c465703aebdab4edb8b817c924829f8f2903f52979f657d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_beaver, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0)
Dec 06 09:48:55 compute-0 systemd[1]: libpod-conmon-4e01f45af4077df79c465703aebdab4edb8b817c924829f8f2903f52979f657d.scope: Deactivated successfully.
Dec 06 09:48:55 compute-0 sudo[135629]: pam_unix(sudo:session): session closed for user root
Dec 06 09:48:55 compute-0 sudo[136077]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 09:48:55 compute-0 sudo[136077]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:48:55 compute-0 sudo[136077]: pam_unix(sudo:session): session closed for user root
Dec 06 09:48:55 compute-0 sudo[136124]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 5ecd3f74-dade-5fc4-92ce-8950ae424258 -- raw list --format json
Dec 06 09:48:55 compute-0 sudo[136124]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:48:55 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v223: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:48:55 compute-0 sudo[136221]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-deihzfrngstmwvcltlyjgpgfydehxjlb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014535.226689-331-14089768234595/AnsiballZ_copy.py'
Dec 06 09:48:55 compute-0 sudo[136221]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:48:56 compute-0 python3.9[136223]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765014535.226689-331-14089768234595/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=66483f330c598e63a8652032707c5bbf72ed3439 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:48:56 compute-0 sudo[136221]: pam_unix(sudo:session): session closed for user root
Dec 06 09:48:56 compute-0 podman[136264]: 2025-12-06 09:48:56.256365806 +0000 UTC m=+0.040462085 container create 1d52d602eaa4a6408d4cbdd621d1aea0c9161f72f9ab0b0909fcbe379d81e3fd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_moser, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325)
Dec 06 09:48:56 compute-0 systemd[1]: Started libpod-conmon-1d52d602eaa4a6408d4cbdd621d1aea0c9161f72f9ab0b0909fcbe379d81e3fd.scope.
Dec 06 09:48:56 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:48:56 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd028003cc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:48:56 compute-0 systemd[1]: Started libcrun container.
Dec 06 09:48:56 compute-0 podman[136264]: 2025-12-06 09:48:56.237731162 +0000 UTC m=+0.021827401 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 06 09:48:56 compute-0 podman[136264]: 2025-12-06 09:48:56.339183212 +0000 UTC m=+0.123279471 container init 1d52d602eaa4a6408d4cbdd621d1aea0c9161f72f9ab0b0909fcbe379d81e3fd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_moser, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec 06 09:48:56 compute-0 podman[136264]: 2025-12-06 09:48:56.347368073 +0000 UTC m=+0.131464352 container start 1d52d602eaa4a6408d4cbdd621d1aea0c9161f72f9ab0b0909fcbe379d81e3fd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_moser, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 09:48:56 compute-0 podman[136264]: 2025-12-06 09:48:56.352069881 +0000 UTC m=+0.136166120 container attach 1d52d602eaa4a6408d4cbdd621d1aea0c9161f72f9ab0b0909fcbe379d81e3fd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_moser, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, ceph=True, org.label-schema.vendor=CentOS)
Dec 06 09:48:56 compute-0 ecstatic_moser[136305]: 167 167
Dec 06 09:48:56 compute-0 systemd[1]: libpod-1d52d602eaa4a6408d4cbdd621d1aea0c9161f72f9ab0b0909fcbe379d81e3fd.scope: Deactivated successfully.
Dec 06 09:48:56 compute-0 conmon[136305]: conmon 1d52d602eaa4a6408d4c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-1d52d602eaa4a6408d4cbdd621d1aea0c9161f72f9ab0b0909fcbe379d81e3fd.scope/container/memory.events
Dec 06 09:48:56 compute-0 podman[136264]: 2025-12-06 09:48:56.354275939 +0000 UTC m=+0.138372168 container died 1d52d602eaa4a6408d4cbdd621d1aea0c9161f72f9ab0b0909fcbe379d81e3fd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_moser, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Dec 06 09:48:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-23615edcbcd145c5cace3d57e3100661321ce32f634ac25b1f6e59db6cfee4c8-merged.mount: Deactivated successfully.
Dec 06 09:48:56 compute-0 podman[136264]: 2025-12-06 09:48:56.394506537 +0000 UTC m=+0.178602776 container remove 1d52d602eaa4a6408d4cbdd621d1aea0c9161f72f9ab0b0909fcbe379d81e3fd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_moser, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Dec 06 09:48:56 compute-0 systemd[1]: libpod-conmon-1d52d602eaa4a6408d4cbdd621d1aea0c9161f72f9ab0b0909fcbe379d81e3fd.scope: Deactivated successfully.
Dec 06 09:48:56 compute-0 ceph-mon[74327]: pgmap v223: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:48:56 compute-0 podman[136386]: 2025-12-06 09:48:56.544085757 +0000 UTC m=+0.047484314 container create a429dc2aaa09a2dc91faa5f1703fa73db7477f2ccb6c4db2c992c7ffea21606a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_buck, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 09:48:56 compute-0 systemd[1]: Started libpod-conmon-a429dc2aaa09a2dc91faa5f1703fa73db7477f2ccb6c4db2c992c7ffea21606a.scope.
Dec 06 09:48:56 compute-0 systemd[1]: Started libcrun container.
Dec 06 09:48:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59b84789a9b10a89fffb12b1a1d05d938fe6c970a90ab42d2749b8b6cba40fbb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 09:48:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59b84789a9b10a89fffb12b1a1d05d938fe6c970a90ab42d2749b8b6cba40fbb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 09:48:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59b84789a9b10a89fffb12b1a1d05d938fe6c970a90ab42d2749b8b6cba40fbb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 09:48:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59b84789a9b10a89fffb12b1a1d05d938fe6c970a90ab42d2749b8b6cba40fbb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 09:48:56 compute-0 podman[136386]: 2025-12-06 09:48:56.526098241 +0000 UTC m=+0.029496818 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 06 09:48:56 compute-0 podman[136386]: 2025-12-06 09:48:56.621664003 +0000 UTC m=+0.125062590 container init a429dc2aaa09a2dc91faa5f1703fa73db7477f2ccb6c4db2c992c7ffea21606a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_buck, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2)
Dec 06 09:48:56 compute-0 podman[136386]: 2025-12-06 09:48:56.63120336 +0000 UTC m=+0.134601917 container start a429dc2aaa09a2dc91faa5f1703fa73db7477f2ccb6c4db2c992c7ffea21606a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_buck, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Dec 06 09:48:56 compute-0 podman[136386]: 2025-12-06 09:48:56.634632163 +0000 UTC m=+0.138030740 container attach a429dc2aaa09a2dc91faa5f1703fa73db7477f2ccb6c4db2c992c7ffea21606a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_buck, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, ceph=True, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 09:48:56 compute-0 sudo[136475]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xvispymwlgvyizbroyrjhojgjgqxzlrc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014536.396946-459-162279043295279/AnsiballZ_file.py'
Dec 06 09:48:56 compute-0 sudo[136475]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:48:56 compute-0 python3.9[136477]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 06 09:48:56 compute-0 sudo[136475]: pam_unix(sudo:session): session closed for user root
Dec 06 09:48:56 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:48:56.988Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec 06 09:48:56 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:48:56.991Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec 06 09:48:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:48:57 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd004002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:48:57 compute-0 sudo[136697]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tqyprbzouuquzeltzizwriqfkuzffwmm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014537.0057886-459-160515743120325/AnsiballZ_file.py'
Dec 06 09:48:57 compute-0 lvm[136699]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 06 09:48:57 compute-0 lvm[136699]: VG ceph_vg0 finished
Dec 06 09:48:57 compute-0 sudo[136697]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:48:57 compute-0 festive_buck[136433]: {}
Dec 06 09:48:57 compute-0 systemd[1]: libpod-a429dc2aaa09a2dc91faa5f1703fa73db7477f2ccb6c4db2c992c7ffea21606a.scope: Deactivated successfully.
Dec 06 09:48:57 compute-0 systemd[1]: libpod-a429dc2aaa09a2dc91faa5f1703fa73db7477f2ccb6c4db2c992c7ffea21606a.scope: Consumed 1.139s CPU time.
Dec 06 09:48:57 compute-0 podman[136386]: 2025-12-06 09:48:57.348383042 +0000 UTC m=+0.851781589 container died a429dc2aaa09a2dc91faa5f1703fa73db7477f2ccb6c4db2c992c7ffea21606a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_buck, CEPH_REF=squid, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec 06 09:48:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-59b84789a9b10a89fffb12b1a1d05d938fe6c970a90ab42d2749b8b6cba40fbb-merged.mount: Deactivated successfully.
Dec 06 09:48:57 compute-0 podman[136386]: 2025-12-06 09:48:57.407960022 +0000 UTC m=+0.911358579 container remove a429dc2aaa09a2dc91faa5f1703fa73db7477f2ccb6c4db2c992c7ffea21606a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_buck, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default)
Dec 06 09:48:57 compute-0 systemd[1]: libpod-conmon-a429dc2aaa09a2dc91faa5f1703fa73db7477f2ccb6c4db2c992c7ffea21606a.scope: Deactivated successfully.
Dec 06 09:48:57 compute-0 python3.9[136702]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 06 09:48:57 compute-0 sudo[136124]: pam_unix(sudo:session): session closed for user root
Dec 06 09:48:57 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 06 09:48:57 compute-0 sudo[136697]: pam_unix(sudo:session): session closed for user root
Dec 06 09:48:57 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:48:57 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 06 09:48:57 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:48:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:48:57 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0300027d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:48:57 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:48:57 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 09:48:57 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:48:57.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 09:48:57 compute-0 sudo[136728]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 09:48:57 compute-0 sudo[136728]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:48:57 compute-0 sudo[136728]: pam_unix(sudo:session): session closed for user root
Dec 06 09:48:57 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:48:57 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:48:57 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:48:57.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:48:57 compute-0 sudo[136890]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-setuzmsjkkdzbkbmxqhxeapbofnpabai ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014537.6452935-503-218300622449219/AnsiballZ_stat.py'
Dec 06 09:48:57 compute-0 sudo[136890]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:48:57 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v224: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:48:58 compute-0 python3.9[136892]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:48:58 compute-0 sudo[136890]: pam_unix(sudo:session): session closed for user root
Dec 06 09:48:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:48:58 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd018003f40 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:48:58 compute-0 sudo[137013]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jeysyxvmlqrafdlqgpkzrmrgjimdwnqr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014537.6452935-503-218300622449219/AnsiballZ_copy.py'
Dec 06 09:48:58 compute-0 sudo[137013]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:48:58 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:48:58 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:48:58 compute-0 ceph-mon[74327]: pgmap v224: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:48:58 compute-0 python3.9[137015]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765014537.6452935-503-218300622449219/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=f56768d8301ea8c395a30ac1f665faa430ee5af5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:48:58 compute-0 sudo[137013]: pam_unix(sudo:session): session closed for user root
Dec 06 09:48:58 compute-0 sudo[137165]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bzhcnbkuibsuvnilfxuudoposvuurryu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014538.718619-503-27978408503817/AnsiballZ_stat.py'
Dec 06 09:48:58 compute-0 sudo[137165]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:48:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:48:59 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd028003cc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:48:59 compute-0 python3.9[137167]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:48:59 compute-0 sudo[137165]: pam_unix(sudo:session): session closed for user root
Dec 06 09:48:59 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 09:48:59 compute-0 sudo[137291]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-osyecupyobyyurfytpegkqiwoqgamawe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014538.718619-503-27978408503817/AnsiballZ_copy.py'
Dec 06 09:48:59 compute-0 sudo[137291]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:48:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:48:59 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd004002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:48:59 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:48:59 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 09:48:59 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:48:59.552 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 09:48:59 compute-0 python3.9[137293]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765014538.718619-503-27978408503817/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=72139a22070e52361b83b34c98df3f4b6e2a8fd5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:48:59 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:48:59 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:48:59 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:48:59.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:48:59 compute-0 sudo[137291]: pam_unix(sudo:session): session closed for user root
Dec 06 09:48:59 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v225: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:49:00 compute-0 sudo[137443]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ggkllevbvxcutkxntpojafkpdvzkehyo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014539.7986743-503-18492507804128/AnsiballZ_stat.py'
Dec 06 09:49:00 compute-0 sudo[137443]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:49:00 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:49:00 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0300027d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:49:00 compute-0 python3.9[137445]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:49:00 compute-0 sudo[137443]: pam_unix(sudo:session): session closed for user root
Dec 06 09:49:00 compute-0 sudo[137566]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kqcbuwvdkwftvejyolqwoyaagixynrgw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014539.7986743-503-18492507804128/AnsiballZ_copy.py'
Dec 06 09:49:00 compute-0 sudo[137566]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:49:00 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:09:49:00] "GET /metrics HTTP/1.1" 200 48257 "" "Prometheus/2.51.0"
Dec 06 09:49:00 compute-0 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:09:49:00] "GET /metrics HTTP/1.1" 200 48257 "" "Prometheus/2.51.0"
Dec 06 09:49:00 compute-0 python3.9[137568]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765014539.7986743-503-18492507804128/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=eabf096ef39cf63ff907ddd7ef692acd9da19772 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:49:00 compute-0 sudo[137566]: pam_unix(sudo:session): session closed for user root
Dec 06 09:49:00 compute-0 ceph-mon[74327]: pgmap v225: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:49:01 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:49:01 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd018003f60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:49:01 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:49:01 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd028003cc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:49:01 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:49:01 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 09:49:01 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:49:01.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 09:49:01 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:49:01 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:49:01 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:49:01.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:49:01 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v226: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:49:02 compute-0 sudo[137720]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-omxaspglyjefnmoycenbdvcfbfrgulyo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014541.7425263-667-46742990515081/AnsiballZ_file.py'
Dec 06 09:49:02 compute-0 sudo[137720]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:49:02 compute-0 python3.9[137722]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 06 09:49:02 compute-0 sudo[137720]: pam_unix(sudo:session): session closed for user root
Dec 06 09:49:02 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:49:02 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd004002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:49:02 compute-0 sudo[137872]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rlyvkbbezvdclgbekljextblyqqzwnje ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014542.4199197-704-238500884477933/AnsiballZ_stat.py'
Dec 06 09:49:02 compute-0 sudo[137872]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:49:02 compute-0 anacron[4456]: Job `cron.weekly' started
Dec 06 09:49:02 compute-0 anacron[4456]: Job `cron.weekly' terminated
Dec 06 09:49:02 compute-0 python3.9[137874]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:49:02 compute-0 sudo[137872]: pam_unix(sudo:session): session closed for user root
Dec 06 09:49:03 compute-0 ceph-mon[74327]: pgmap v226: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:49:03 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:49:03 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd004002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:49:03 compute-0 sudo[137999]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hdhnsxldrbuoaxinhffhdwuzzkxwvuqn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014542.4199197-704-238500884477933/AnsiballZ_copy.py'
Dec 06 09:49:03 compute-0 sudo[137999]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:49:03 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:49:03 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd018003f80 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:49:03 compute-0 python3.9[138001]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765014542.4199197-704-238500884477933/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=22c202a539af259b977a1afda61dbc1fe0d1039c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:49:03 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:49:03 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:49:03 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:49:03.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:49:03 compute-0 sudo[137999]: pam_unix(sudo:session): session closed for user root
Dec 06 09:49:03 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:49:03 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:49:03 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:49:03.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:49:03 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v227: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 09:49:03 compute-0 sudo[138151]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hmnwkjhedhrdqrtvmoctfriugicjgzes ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014543.730279-753-201182971281868/AnsiballZ_file.py'
Dec 06 09:49:03 compute-0 sudo[138151]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:49:04 compute-0 python3.9[138153]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 06 09:49:04 compute-0 sudo[138151]: pam_unix(sudo:session): session closed for user root
Dec 06 09:49:04 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:49:04 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd028003cc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:49:04 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 09:49:04 compute-0 sudo[138303]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jancpapqtdqyoqzomuobzlqrjxnlnypb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014544.3093698-777-77760674241667/AnsiballZ_stat.py'
Dec 06 09:49:04 compute-0 sudo[138303]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:49:04 compute-0 python3.9[138305]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:49:04 compute-0 sudo[138303]: pam_unix(sudo:session): session closed for user root
Dec 06 09:49:05 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:49:05 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd028003cc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:49:05 compute-0 ceph-mon[74327]: pgmap v227: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 09:49:05 compute-0 sudo[138427]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oxrdvsaityjmelizmwocyaajyfhomqmo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014544.3093698-777-77760674241667/AnsiballZ_copy.py'
Dec 06 09:49:05 compute-0 sudo[138427]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:49:05 compute-0 python3.9[138429]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765014544.3093698-777-77760674241667/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=22c202a539af259b977a1afda61dbc1fe0d1039c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:49:05 compute-0 sudo[138427]: pam_unix(sudo:session): session closed for user root
Dec 06 09:49:05 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:49:05 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd028003cc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:49:05 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:49:05 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:49:05 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:49:05.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:49:05 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:49:05 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 09:49:05 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:49:05.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 09:49:05 compute-0 sudo[138580]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uvkevmraploenjaibxlpeliiipldewpd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014545.560942-824-280717244662270/AnsiballZ_file.py'
Dec 06 09:49:05 compute-0 sudo[138580]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:49:05 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v228: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:49:06 compute-0 python3.9[138582]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/neutron-metadata setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 06 09:49:06 compute-0 sudo[138580]: pam_unix(sudo:session): session closed for user root
Dec 06 09:49:06 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:49:06 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd030004050 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:49:06 compute-0 sudo[138732]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oujoiatygodkkhilgqwhielzyhjfipgf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014546.2245512-847-108239935421504/AnsiballZ_stat.py'
Dec 06 09:49:06 compute-0 sudo[138732]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:49:06 compute-0 python3.9[138734]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:49:06 compute-0 sudo[138732]: pam_unix(sudo:session): session closed for user root
Dec 06 09:49:06 compute-0 sudo[138735]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 09:49:06 compute-0 sudo[138735]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:49:06 compute-0 sudo[138735]: pam_unix(sudo:session): session closed for user root
Dec 06 09:49:06 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:49:06.992Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec 06 09:49:06 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:49:06.993Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec 06 09:49:06 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:49:06.994Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 06 09:49:07 compute-0 sudo[138881]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qlfnihcazohmgvzzihxlryqtkcnjjrgv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014546.2245512-847-108239935421504/AnsiballZ_copy.py'
Dec 06 09:49:07 compute-0 sudo[138881]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:49:07 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:49:07 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd004002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:49:07 compute-0 ceph-mon[74327]: pgmap v228: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:49:07 compute-0 python3.9[138883]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765014546.2245512-847-108239935421504/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=22c202a539af259b977a1afda61dbc1fe0d1039c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:49:07 compute-0 sudo[138881]: pam_unix(sudo:session): session closed for user root
Dec 06 09:49:07 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:49:07 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd018003fc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:49:07 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:49:07 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 09:49:07 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:49:07.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 09:49:07 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:49:07 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:49:07 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:49:07.683 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:49:07 compute-0 sudo[139034]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bqhawqpenytjbycswciotqukmsxspebs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014547.4464586-892-141245375797066/AnsiballZ_file.py'
Dec 06 09:49:07 compute-0 sudo[139034]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:49:07 compute-0 python3.9[139036]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/bootstrap setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 06 09:49:07 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v229: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:49:07 compute-0 sudo[139034]: pam_unix(sudo:session): session closed for user root
Dec 06 09:49:08 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:49:08 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd028003cc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:49:08 compute-0 sudo[139186]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mrvmkgullgrssporbyjvrtaycrfqlhhi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014548.1095388-916-21188724777257/AnsiballZ_stat.py'
Dec 06 09:49:08 compute-0 sudo[139186]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:49:08 compute-0 python3.9[139188]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:49:08 compute-0 sudo[139186]: pam_unix(sudo:session): session closed for user root
Dec 06 09:49:08 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 06 09:49:08 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 09:49:08 compute-0 sudo[139310]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-blyefhdqfozrfvqoykzfylgxabstjhox ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014548.1095388-916-21188724777257/AnsiballZ_copy.py'
Dec 06 09:49:08 compute-0 sudo[139310]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:49:09 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:49:09 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd030004970 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:49:09 compute-0 ceph-mon[74327]: pgmap v229: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:49:09 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 09:49:09 compute-0 python3.9[139312]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765014548.1095388-916-21188724777257/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=22c202a539af259b977a1afda61dbc1fe0d1039c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:49:09 compute-0 sudo[139310]: pam_unix(sudo:session): session closed for user root
Dec 06 09:49:09 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 09:49:09 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:49:09 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd004003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:49:09 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:49:09 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:49:09 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:49:09.565 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:49:09 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:49:09 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:49:09 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:49:09.685 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:49:09 compute-0 sudo[139463]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fwlgimtdxawlfltcezhzmqydluwixyyq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014549.4362032-964-176024680551381/AnsiballZ_file.py'
Dec 06 09:49:09 compute-0 sudo[139463]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:49:09 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v230: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:49:09 compute-0 python3.9[139465]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/repo-setup setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 06 09:49:09 compute-0 sudo[139463]: pam_unix(sudo:session): session closed for user root
Dec 06 09:49:10 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:49:10 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd018003fe0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:49:10 compute-0 sudo[139615]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pzxzloaubtihlrwynfwzohxhslonftib ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014550.1217773-988-80986800049945/AnsiballZ_stat.py'
Dec 06 09:49:10 compute-0 sudo[139615]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:49:10 compute-0 python3.9[139617]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:49:10 compute-0 sudo[139615]: pam_unix(sudo:session): session closed for user root
Dec 06 09:49:10 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:09:49:10] "GET /metrics HTTP/1.1" 200 48258 "" "Prometheus/2.51.0"
Dec 06 09:49:10 compute-0 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:09:49:10] "GET /metrics HTTP/1.1" 200 48258 "" "Prometheus/2.51.0"
Dec 06 09:49:10 compute-0 sudo[139738]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tfmtplivtvctpcpgbvtjdhuhhsqdrcmp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014550.1217773-988-80986800049945/AnsiballZ_copy.py'
Dec 06 09:49:10 compute-0 sudo[139738]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:49:11 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:49:11 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd028003cc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:49:11 compute-0 ceph-mon[74327]: pgmap v230: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:49:11 compute-0 python3.9[139741]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765014550.1217773-988-80986800049945/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=22c202a539af259b977a1afda61dbc1fe0d1039c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:49:11 compute-0 sudo[139738]: pam_unix(sudo:session): session closed for user root
Dec 06 09:49:11 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:49:11 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd030004970 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:49:11 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:49:11 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 09:49:11 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:49:11.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 09:49:11 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:49:11 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:49:11 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:49:11.687 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:49:11 compute-0 sudo[139892]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hpnpqsriebzcwemdzazptatxehlnbnhx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014551.49087-1039-40729781034730/AnsiballZ_file.py'
Dec 06 09:49:11 compute-0 sudo[139892]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:49:11 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v231: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:49:12 compute-0 python3.9[139894]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 06 09:49:12 compute-0 sudo[139892]: pam_unix(sudo:session): session closed for user root
Dec 06 09:49:12 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:49:12 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd004003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:49:12 compute-0 sudo[140044]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wvtemogyuccthpdjazgqlmgenatdhkxr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014552.2562168-1067-18270365404215/AnsiballZ_stat.py'
Dec 06 09:49:12 compute-0 sudo[140044]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:49:12 compute-0 python3.9[140046]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:49:12 compute-0 sudo[140044]: pam_unix(sudo:session): session closed for user root
Dec 06 09:49:13 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:49:13 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd030004970 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:49:13 compute-0 ceph-mon[74327]: pgmap v231: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:49:13 compute-0 sudo[140168]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xnkppdyugzwqoeeaajjhdhlqkqhzvapo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014552.2562168-1067-18270365404215/AnsiballZ_copy.py'
Dec 06 09:49:13 compute-0 sudo[140168]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:49:13 compute-0 python3.9[140171]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765014552.2562168-1067-18270365404215/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=22c202a539af259b977a1afda61dbc1fe0d1039c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:49:13 compute-0 sudo[140168]: pam_unix(sudo:session): session closed for user root
Dec 06 09:49:13 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:49:13 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd028003cc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:49:13 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:49:13 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:49:13 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:49:13.569 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:49:13 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:49:13 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 09:49:13 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:49:13.689 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 09:49:13 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v232: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 09:49:14 compute-0 sshd-session[133225]: Connection closed by 192.168.122.30 port 42666
Dec 06 09:49:14 compute-0 sshd-session[133222]: pam_unix(sshd:session): session closed for user zuul
Dec 06 09:49:14 compute-0 systemd[1]: session-48.scope: Deactivated successfully.
Dec 06 09:49:14 compute-0 systemd[1]: session-48.scope: Consumed 22.935s CPU time.
Dec 06 09:49:14 compute-0 systemd-logind[795]: Session 48 logged out. Waiting for processes to exit.
Dec 06 09:49:14 compute-0 systemd-logind[795]: Removed session 48.
Dec 06 09:49:14 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:49:14 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd018004020 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:49:14 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 09:49:15 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:49:15 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd004003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:49:15 compute-0 ceph-mon[74327]: pgmap v232: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 09:49:15 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:49:15 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd030004970 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:49:15 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:49:15 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:49:15 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:49:15.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:49:15 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:49:15 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:49:15 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:49:15.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:49:15 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v233: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:49:16 compute-0 ceph-mon[74327]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #30. Immutable memtables: 0.
Dec 06 09:49:16 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-09:49:16.207291) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 06 09:49:16 compute-0 ceph-mon[74327]: rocksdb: [db/flush_job.cc:856] [default] [JOB 11] Flushing memtable with next log file: 30
Dec 06 09:49:16 compute-0 ceph-mon[74327]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765014556207749, "job": 11, "event": "flush_started", "num_memtables": 1, "num_entries": 988, "num_deletes": 251, "total_data_size": 1821410, "memory_usage": 1841248, "flush_reason": "Manual Compaction"}
Dec 06 09:49:16 compute-0 ceph-mon[74327]: rocksdb: [db/flush_job.cc:885] [default] [JOB 11] Level-0 flush table #31: started
Dec 06 09:49:16 compute-0 ceph-mon[74327]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765014556230599, "cf_name": "default", "job": 11, "event": "table_file_creation", "file_number": 31, "file_size": 1766352, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 12314, "largest_seqno": 13300, "table_properties": {"data_size": 1761486, "index_size": 2454, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1349, "raw_key_size": 10182, "raw_average_key_size": 19, "raw_value_size": 1751812, "raw_average_value_size": 3299, "num_data_blocks": 109, "num_entries": 531, "num_filter_entries": 531, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765014469, "oldest_key_time": 1765014469, "file_creation_time": 1765014556, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "423e8366-3852-4d2b-aa53-87abab31aff3", "db_session_id": "4WBX5WA2U4DRQ0QUUFCR", "orig_file_number": 31, "seqno_to_time_mapping": "N/A"}}
Dec 06 09:49:16 compute-0 ceph-mon[74327]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 11] Flush lasted 23032 microseconds, and 7046 cpu microseconds.
Dec 06 09:49:16 compute-0 ceph-mon[74327]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 09:49:16 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-09:49:16.230653) [db/flush_job.cc:967] [default] [JOB 11] Level-0 flush table #31: 1766352 bytes OK
Dec 06 09:49:16 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-09:49:16.230674) [db/memtable_list.cc:519] [default] Level-0 commit table #31 started
Dec 06 09:49:16 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-09:49:16.232953) [db/memtable_list.cc:722] [default] Level-0 commit table #31: memtable #1 done
Dec 06 09:49:16 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-09:49:16.232969) EVENT_LOG_v1 {"time_micros": 1765014556232964, "job": 11, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 06 09:49:16 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-09:49:16.232988) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 06 09:49:16 compute-0 ceph-mon[74327]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 11] Try to delete WAL files size 1816859, prev total WAL file size 1816859, number of live WAL files 2.
Dec 06 09:49:16 compute-0 ceph-mon[74327]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000027.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 09:49:16 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-09:49:16.233773) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F7300353032' seq:72057594037927935, type:22 .. '7061786F7300373534' seq:0, type:0; will stop at (end)
Dec 06 09:49:16 compute-0 ceph-mon[74327]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 12] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 06 09:49:16 compute-0 ceph-mon[74327]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 11 Base level 0, inputs: [31(1724KB)], [29(13MB)]
Dec 06 09:49:16 compute-0 ceph-mon[74327]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765014556233832, "job": 12, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [31], "files_L6": [29], "score": -1, "input_data_size": 15981979, "oldest_snapshot_seqno": -1}
Dec 06 09:49:16 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:49:16 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd028003cc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:49:16 compute-0 ceph-mon[74327]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 12] Generated table #32: 4291 keys, 14024825 bytes, temperature: kUnknown
Dec 06 09:49:16 compute-0 ceph-mon[74327]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765014556385821, "cf_name": "default", "job": 12, "event": "table_file_creation", "file_number": 32, "file_size": 14024825, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13992830, "index_size": 20173, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10757, "raw_key_size": 109807, "raw_average_key_size": 25, "raw_value_size": 13911154, "raw_average_value_size": 3241, "num_data_blocks": 852, "num_entries": 4291, "num_filter_entries": 4291, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765013861, "oldest_key_time": 0, "file_creation_time": 1765014556, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "423e8366-3852-4d2b-aa53-87abab31aff3", "db_session_id": "4WBX5WA2U4DRQ0QUUFCR", "orig_file_number": 32, "seqno_to_time_mapping": "N/A"}}
Dec 06 09:49:16 compute-0 ceph-mon[74327]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 09:49:16 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-09:49:16.386043) [db/compaction/compaction_job.cc:1663] [default] [JOB 12] Compacted 1@0 + 1@6 files to L6 => 14024825 bytes
Dec 06 09:49:16 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-09:49:16.387582) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 105.1 rd, 92.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.7, 13.6 +0.0 blob) out(13.4 +0.0 blob), read-write-amplify(17.0) write-amplify(7.9) OK, records in: 4809, records dropped: 518 output_compression: NoCompression
Dec 06 09:49:16 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-09:49:16.387602) EVENT_LOG_v1 {"time_micros": 1765014556387591, "job": 12, "event": "compaction_finished", "compaction_time_micros": 152069, "compaction_time_cpu_micros": 27850, "output_level": 6, "num_output_files": 1, "total_output_size": 14024825, "num_input_records": 4809, "num_output_records": 4291, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 06 09:49:16 compute-0 ceph-mon[74327]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000031.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 09:49:16 compute-0 ceph-mon[74327]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765014556388125, "job": 12, "event": "table_file_deletion", "file_number": 31}
Dec 06 09:49:16 compute-0 ceph-mon[74327]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000029.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 09:49:16 compute-0 ceph-mon[74327]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765014556391073, "job": 12, "event": "table_file_deletion", "file_number": 29}
Dec 06 09:49:16 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-09:49:16.233671) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 09:49:16 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-09:49:16.391266) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 09:49:16 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-09:49:16.391276) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 09:49:16 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-09:49:16.391280) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 09:49:16 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-09:49:16.391282) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 09:49:16 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-09:49:16.391284) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 09:49:16 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:49:16.994Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 06 09:49:17 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:49:17 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd028003cc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:49:17 compute-0 ceph-mon[74327]: pgmap v233: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:49:17 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:49:17 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd00c001090 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:49:17 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:49:17 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 09:49:17 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:49:17.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 09:49:17 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:49:17 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:49:17 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:49:17.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:49:17 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v234: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:49:18 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:49:18 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd030004970 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:49:19 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:49:19 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd018004040 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:49:19 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-haproxy-nfs-cephfs-compute-0-fzuvue[96127]: [WARNING] 339/094919 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 1ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Dec 06 09:49:19 compute-0 ceph-mon[74327]: pgmap v234: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:49:19 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 09:49:19 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:49:19 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd018004040 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:49:19 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:49:19 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:49:19 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:49:19.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:49:19 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:49:19 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 09:49:19 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:49:19.695 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 09:49:19 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v235: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:49:20 compute-0 sshd-session[140204]: Accepted publickey for zuul from 192.168.122.30 port 36662 ssh2: ECDSA SHA256:r1j7aLsKAM+XxDNbzEU5vWGpGNCOaIBwc7FZdATPttA
Dec 06 09:49:20 compute-0 systemd-logind[795]: New session 49 of user zuul.
Dec 06 09:49:20 compute-0 systemd[1]: Started Session 49 of User zuul.
Dec 06 09:49:20 compute-0 sshd-session[140204]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 06 09:49:20 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:49:20 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd018004040 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:49:20 compute-0 sudo[140357]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-enwrljdhvncvjchgkmmfiettmjmgwvss ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014560.2984848-26-263164353180427/AnsiballZ_file.py'
Dec 06 09:49:20 compute-0 sudo[140357]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:49:20 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:09:49:20] "GET /metrics HTTP/1.1" 200 48258 "" "Prometheus/2.51.0"
Dec 06 09:49:20 compute-0 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:09:49:20] "GET /metrics HTTP/1.1" 200 48258 "" "Prometheus/2.51.0"
Dec 06 09:49:20 compute-0 python3.9[140359]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:49:20 compute-0 sudo[140357]: pam_unix(sudo:session): session closed for user root
Dec 06 09:49:21 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:49:21 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd030004970 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:49:21 compute-0 ceph-mon[74327]: pgmap v235: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:49:21 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:49:21 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd028003cc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:49:21 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:49:21 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 09:49:21 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:49:21.579 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 09:49:21 compute-0 sudo[140511]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nbrfamoydojinqmoxtbhamobkpiahspw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014561.1432085-62-111105369052718/AnsiballZ_stat.py'
Dec 06 09:49:21 compute-0 sudo[140511]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:49:21 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:49:21 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:49:21 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:49:21.698 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:49:21 compute-0 python3.9[140513]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:49:21 compute-0 sudo[140511]: pam_unix(sudo:session): session closed for user root
Dec 06 09:49:21 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v236: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:49:22 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:49:22 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd00c001230 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:49:22 compute-0 sudo[140634]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kzfykmkmctxopjxlbfezcdglujalqbjf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014561.1432085-62-111105369052718/AnsiballZ_copy.py'
Dec 06 09:49:22 compute-0 sudo[140634]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:49:22 compute-0 python3.9[140636]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1765014561.1432085-62-111105369052718/.source.keyring _original_basename=ceph.client.openstack.keyring follow=False checksum=944de880f37676f80f6e04a4864888bf3f7decbf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:49:22 compute-0 sudo[140634]: pam_unix(sudo:session): session closed for user root
Dec 06 09:49:23 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:49:23 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd018004040 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:49:23 compute-0 sudo[140787]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hfmplrdyhqvcdnlzmsorrmobqcegidhc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014562.848916-62-177836066296549/AnsiballZ_stat.py'
Dec 06 09:49:23 compute-0 sudo[140787]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:49:23 compute-0 ceph-mon[74327]: pgmap v236: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:49:23 compute-0 python3.9[140789]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:49:23 compute-0 sudo[140787]: pam_unix(sudo:session): session closed for user root
Dec 06 09:49:23 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:49:23 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd030004970 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:49:23 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:49:23 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 09:49:23 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:49:23.582 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 09:49:23 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:49:23 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:49:23 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:49:23.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:49:23 compute-0 sudo[140911]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pzytdqiplhoazxhxliyuczayrjjvgpzl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014562.848916-62-177836066296549/AnsiballZ_copy.py'
Dec 06 09:49:23 compute-0 sudo[140911]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:49:23 compute-0 ceph-mgr[74618]: [balancer INFO root] Optimize plan auto_2025-12-06_09:49:23
Dec 06 09:49:23 compute-0 ceph-mgr[74618]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 06 09:49:23 compute-0 ceph-mgr[74618]: [balancer INFO root] do_upmap
Dec 06 09:49:23 compute-0 ceph-mgr[74618]: [balancer INFO root] pools ['vms', 'default.rgw.log', 'volumes', '.rgw.root', 'default.rgw.meta', 'default.rgw.control', '.nfs', '.mgr', 'images', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'backups']
Dec 06 09:49:23 compute-0 ceph-mgr[74618]: [balancer INFO root] prepared 0/10 upmap changes
Dec 06 09:49:23 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 06 09:49:23 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 09:49:23 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v237: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 425 B/s rd, 0 op/s
Dec 06 09:49:23 compute-0 python3.9[140913]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765014562.848916-62-177836066296549/.source.conf _original_basename=ceph.conf follow=False checksum=531c84d7e2c99e4f6cf7d56dd7b16abeaf31bfa1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:49:23 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 09:49:23 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 09:49:23 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 09:49:23 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 09:49:23 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] _maybe_adjust
Dec 06 09:49:23 compute-0 sudo[140911]: pam_unix(sudo:session): session closed for user root
Dec 06 09:49:23 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 09:49:23 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 06 09:49:23 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 09:49:23 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 09:49:23 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 09:49:23 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 09:49:23 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 09:49:23 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 09:49:23 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 09:49:23 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 09:49:23 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 09:49:23 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Dec 06 09:49:23 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 09:49:23 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 09:49:23 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 09:49:23 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec 06 09:49:23 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 09:49:23 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 06 09:49:23 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 09:49:23 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec 06 09:49:23 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 09:49:23 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 09:49:23 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 09:49:23 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 06 09:49:23 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 09:49:23 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 09:49:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 06 09:49:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 09:49:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 09:49:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 09:49:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 09:49:24 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:49:24 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd028003cc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:49:24 compute-0 sshd-session[140207]: Connection closed by 192.168.122.30 port 36662
Dec 06 09:49:24 compute-0 sshd-session[140204]: pam_unix(sshd:session): session closed for user zuul
Dec 06 09:49:24 compute-0 systemd[1]: session-49.scope: Deactivated successfully.
Dec 06 09:49:24 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 09:49:24 compute-0 systemd[1]: session-49.scope: Consumed 3.144s CPU time.
Dec 06 09:49:24 compute-0 systemd-logind[795]: Session 49 logged out. Waiting for processes to exit.
Dec 06 09:49:24 compute-0 systemd-logind[795]: Removed session 49.
Dec 06 09:49:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 06 09:49:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 09:49:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 09:49:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 09:49:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 09:49:24 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 09:49:25 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:49:25 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd00c001fc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:49:25 compute-0 ceph-mon[74327]: pgmap v237: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 425 B/s rd, 0 op/s
Dec 06 09:49:25 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:49:25 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd00c001fc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:49:25 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:49:25 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 09:49:25 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:49:25.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 09:49:25 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:49:25 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 09:49:25 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:49:25.702 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 09:49:25 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v238: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec 06 09:49:26 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:49:26 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd030004970 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:49:26 compute-0 sudo[140940]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 09:49:26 compute-0 sudo[140940]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:49:26 compute-0 sudo[140940]: pam_unix(sudo:session): session closed for user root
Dec 06 09:49:26 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:49:26.996Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec 06 09:49:26 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:49:26.996Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec 06 09:49:27 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:49:27 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd028003cc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:49:27 compute-0 ceph-mon[74327]: pgmap v238: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec 06 09:49:27 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:49:27 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd00c001fc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:49:27 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:49:27 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:49:27 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:49:27.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:49:27 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:49:27 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 06 09:49:27 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:49:27 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:49:27 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:49:27.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:49:27 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v239: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 596 B/s wr, 1 op/s
Dec 06 09:49:28 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:49:28 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd00c001fc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:49:29 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:49:29 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd030004970 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:49:29 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 09:49:29 compute-0 ceph-mon[74327]: pgmap v239: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 596 B/s wr, 1 op/s
Dec 06 09:49:29 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:49:29 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd030004970 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:49:29 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:49:29 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:49:29 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:49:29.593 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:49:29 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:49:29 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 09:49:29 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:49:29.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 09:49:29 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v240: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 596 B/s wr, 1 op/s
Dec 06 09:49:30 compute-0 sshd-session[140969]: Accepted publickey for zuul from 192.168.122.30 port 59154 ssh2: ECDSA SHA256:r1j7aLsKAM+XxDNbzEU5vWGpGNCOaIBwc7FZdATPttA
Dec 06 09:49:30 compute-0 systemd-logind[795]: New session 50 of user zuul.
Dec 06 09:49:30 compute-0 systemd[1]: Started Session 50 of User zuul.
Dec 06 09:49:30 compute-0 sshd-session[140969]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 06 09:49:30 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:49:30 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0180041e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:49:30 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:49:30 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 06 09:49:30 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:49:30 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 06 09:49:30 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:09:49:30] "GET /metrics HTTP/1.1" 200 48255 "" "Prometheus/2.51.0"
Dec 06 09:49:30 compute-0 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:09:49:30] "GET /metrics HTTP/1.1" 200 48255 "" "Prometheus/2.51.0"
Dec 06 09:49:31 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:49:31 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd00c001fc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:49:31 compute-0 python3.9[141122]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 06 09:49:31 compute-0 ceph-mon[74327]: pgmap v240: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 596 B/s wr, 1 op/s
Dec 06 09:49:31 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:49:31 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd028003cc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:49:31 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:49:31 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:49:31 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:49:31.596 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:49:31 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:49:31 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 09:49:31 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:49:31.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 09:49:31 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v241: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 596 B/s wr, 1 op/s
Dec 06 09:49:32 compute-0 sudo[141278]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pzidrtaqulbfuonklswexdwpcidjridf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014571.6829548-62-96712390796264/AnsiballZ_file.py'
Dec 06 09:49:32 compute-0 sudo[141278]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:49:32 compute-0 python3.9[141280]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 06 09:49:32 compute-0 sudo[141278]: pam_unix(sudo:session): session closed for user root
Dec 06 09:49:32 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:49:32 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd030004970 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:49:32 compute-0 ceph-mon[74327]: pgmap v241: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 596 B/s wr, 1 op/s
Dec 06 09:49:32 compute-0 sudo[141430]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xskbhfszrtisytjqmjjvfukgwejqzjll ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014572.4552038-62-142212325006564/AnsiballZ_file.py'
Dec 06 09:49:32 compute-0 sudo[141430]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:49:32 compute-0 python3.9[141432]: ansible-ansible.builtin.file Invoked with group=openvswitch owner=openvswitch path=/var/lib/openvswitch/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec 06 09:49:32 compute-0 sudo[141430]: pam_unix(sudo:session): session closed for user root
Dec 06 09:49:33 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:49:33 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd018004200 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:49:33 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:49:33 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd00c001fc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:49:33 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:49:33 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 09:49:33 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:49:33.598 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 09:49:33 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:49:33 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Dec 06 09:49:33 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:49:33 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:49:33 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:49:33.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:49:33 compute-0 python3.9[141584]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 06 09:49:33 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v242: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 937 B/s wr, 3 op/s
Dec 06 09:49:34 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:49:34 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd028003cc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:49:34 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 09:49:34 compute-0 sudo[141734]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hbblwajxdayihkljsrqcokiveypfcltd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014574.0796216-131-49322260055960/AnsiballZ_seboolean.py'
Dec 06 09:49:34 compute-0 sudo[141734]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:49:34 compute-0 python3.9[141736]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Dec 06 09:49:35 compute-0 ceph-mon[74327]: pgmap v242: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 937 B/s wr, 3 op/s
Dec 06 09:49:35 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:49:35 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd030004970 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:49:35 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:49:35 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd018004bc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:49:35 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:49:35 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:49:35 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:49:35.603 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:49:35 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:49:35 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:49:35 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:49:35.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:49:35 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v243: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Dec 06 09:49:36 compute-0 sudo[141734]: pam_unix(sudo:session): session closed for user root
Dec 06 09:49:36 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:49:36 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd00c001fc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:49:36 compute-0 sudo[141892]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fzmofrrmptheoxzyqhjolsjwtmgdnweh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014576.4895186-161-215309427891530/AnsiballZ_setup.py'
Dec 06 09:49:36 compute-0 dbus-broker-launch[771]: avc:  op=load_policy lsm=selinux seqno=11 res=1
Dec 06 09:49:36 compute-0 sudo[141892]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:49:36 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:49:36.997Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec 06 09:49:36 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:49:36.997Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 06 09:49:37 compute-0 python3.9[141894]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 06 09:49:37 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:49:37 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd028003cc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:49:37 compute-0 ceph-mon[74327]: pgmap v243: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Dec 06 09:49:37 compute-0 sudo[141892]: pam_unix(sudo:session): session closed for user root
Dec 06 09:49:37 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:49:37 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd030004970 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:49:37 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:49:37 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:49:37 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:49:37.605 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:49:37 compute-0 ceph-osd[82803]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 06 09:49:37 compute-0 ceph-osd[82803]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Cumulative writes: 8411 writes, 34K keys, 8411 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.04 MB/s
                                           Cumulative WAL: 8411 writes, 1732 syncs, 4.86 writes per sync, written: 0.02 GB, 0.04 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 8411 writes, 34K keys, 8411 commit groups, 1.0 writes per commit group, ingest: 21.58 MB, 0.04 MB/s
                                           Interval WAL: 8411 writes, 1732 syncs, 4.86 writes per sync, written: 0.02 GB, 0.04 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55fcdd7db350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55fcdd7db350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55fcdd7db350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55fcdd7db350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55fcdd7db350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55fcdd7db350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55fcdd7db350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55fcdd7da9b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 9e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55fcdd7da9b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 9e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.021       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.021       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.02              0.00         1    0.021       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55fcdd7da9b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 9e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55fcdd7db350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55fcdd7db350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Dec 06 09:49:37 compute-0 sudo[141978]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iigufxkgbmozenoefsbuwfabevbcdkgd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014576.4895186-161-215309427891530/AnsiballZ_dnf.py'
Dec 06 09:49:37 compute-0 sudo[141978]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:49:37 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:49:37 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 09:49:37 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:49:37.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 09:49:37 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v244: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 1023 B/s wr, 3 op/s
Dec 06 09:49:37 compute-0 python3.9[141980]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 06 09:49:38 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:49:38 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd018004bc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:49:38 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 06 09:49:38 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 09:49:39 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:49:39 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd00c001fc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:49:39 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-haproxy-nfs-cephfs-compute-0-fzuvue[96127]: [WARNING] 339/094939 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Dec 06 09:49:39 compute-0 ceph-mon[74327]: pgmap v244: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 1023 B/s wr, 3 op/s
Dec 06 09:49:39 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 09:49:39 compute-0 sudo[141978]: pam_unix(sudo:session): session closed for user root
Dec 06 09:49:39 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 09:49:39 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:49:39 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd028003cc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:49:39 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:49:39 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 09:49:39 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:49:39.608 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 09:49:39 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:49:39 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:49:39 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:49:39.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:49:39 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v245: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Dec 06 09:49:40 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:49:40 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd030004970 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:49:40 compute-0 sudo[142133]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oufwewjhwibqcrxgydiroxqrxcavvkjz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014579.6853526-197-194610872304090/AnsiballZ_systemd.py'
Dec 06 09:49:40 compute-0 sudo[142133]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:49:40 compute-0 python3.9[142135]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec 06 09:49:40 compute-0 sudo[142133]: pam_unix(sudo:session): session closed for user root
Dec 06 09:49:40 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:09:49:40] "GET /metrics HTTP/1.1" 200 48262 "" "Prometheus/2.51.0"
Dec 06 09:49:40 compute-0 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:09:49:40] "GET /metrics HTTP/1.1" 200 48262 "" "Prometheus/2.51.0"
Dec 06 09:49:41 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:49:41 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd018004bc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:49:41 compute-0 ceph-mon[74327]: pgmap v245: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Dec 06 09:49:41 compute-0 sudo[142290]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-evsvivypcddedsrzbnhgtopaczcnqepp ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1765014581.0037935-221-79721647891712/AnsiballZ_edpm_nftables_snippet.py'
Dec 06 09:49:41 compute-0 sudo[142290]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:49:41 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:49:41 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd00c001fc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:49:41 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:49:41 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:49:41 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:49:41.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:49:41 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:49:41 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:49:41 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:49:41.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:49:41 compute-0 python3[142292]: ansible-osp.edpm.edpm_nftables_snippet Invoked with content=- rule_name: 118 neutron vxlan networks
                                             rule:
                                               proto: udp
                                               dport: 4789
                                           - rule_name: 119 neutron geneve networks
                                             rule:
                                               proto: udp
                                               dport: 6081
                                               state: ["UNTRACKED"]
                                           - rule_name: 120 neutron geneve networks no conntrack
                                             rule:
                                               proto: udp
                                               dport: 6081
                                               table: raw
                                               chain: OUTPUT
                                               jump: NOTRACK
                                               action: append
                                               state: []
                                           - rule_name: 121 neutron geneve networks no conntrack
                                             rule:
                                               proto: udp
                                               dport: 6081
                                               table: raw
                                               chain: PREROUTING
                                               jump: NOTRACK
                                               action: append
                                               state: []
                                            dest=/var/lib/edpm-config/firewall/ovn.yaml state=present
Dec 06 09:49:41 compute-0 sudo[142290]: pam_unix(sudo:session): session closed for user root
Dec 06 09:49:41 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v246: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Dec 06 09:49:42 compute-0 sudo[142442]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lyzrpuqjcwfxinzxusowsvtrfzfcdpva ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014582.0506852-248-91121290865945/AnsiballZ_file.py'
Dec 06 09:49:42 compute-0 sudo[142442]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:49:42 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:49:42 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd028003cc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:49:42 compute-0 python3.9[142444]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:49:42 compute-0 sudo[142442]: pam_unix(sudo:session): session closed for user root
Dec 06 09:49:43 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:49:43 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd030004970 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:49:43 compute-0 sudo[142595]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dxhwysdcwldmqumkasdlbpdaxeitljuw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014582.7313802-272-9381951776459/AnsiballZ_stat.py'
Dec 06 09:49:43 compute-0 sudo[142595]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:49:43 compute-0 ceph-mon[74327]: pgmap v246: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Dec 06 09:49:43 compute-0 python3.9[142597]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:49:43 compute-0 sudo[142595]: pam_unix(sudo:session): session closed for user root
Dec 06 09:49:43 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:49:43 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd018004bc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:49:43 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:49:43 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:49:43 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:49:43.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:49:43 compute-0 sudo[142674]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pgtscjxctiedfzkyemyphacemudypgol ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014582.7313802-272-9381951776459/AnsiballZ_file.py'
Dec 06 09:49:43 compute-0 sudo[142674]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:49:43 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:49:43 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:49:43 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:49:43.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:49:43 compute-0 python3.9[142676]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:49:43 compute-0 sudo[142674]: pam_unix(sudo:session): session closed for user root
Dec 06 09:49:43 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v247: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Dec 06 09:49:44 compute-0 sudo[142826]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qntgvxfgmmqctgnppobsboauweizufnx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014584.0180051-308-188124659839033/AnsiballZ_stat.py'
Dec 06 09:49:44 compute-0 sudo[142826]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:49:44 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:49:44 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd00c001fc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:49:44 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 09:49:44 compute-0 python3.9[142828]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:49:44 compute-0 sudo[142826]: pam_unix(sudo:session): session closed for user root
Dec 06 09:49:44 compute-0 sudo[142904]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-okjfvdlgqfeenskpdgatyuhegwzfcdxn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014584.0180051-308-188124659839033/AnsiballZ_file.py'
Dec 06 09:49:44 compute-0 sudo[142904]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:49:44 compute-0 python3.9[142906]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.oxc8qy1t recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:49:44 compute-0 sudo[142904]: pam_unix(sudo:session): session closed for user root
Dec 06 09:49:45 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:49:45 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd028003cc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:49:45 compute-0 ceph-mon[74327]: pgmap v247: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Dec 06 09:49:45 compute-0 sudo[143058]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fmvsvlwdyzbvziicbblbwhjipdontlwr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014585.20525-344-118112442539396/AnsiballZ_stat.py'
Dec 06 09:49:45 compute-0 sudo[143058]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:49:45 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:49:45 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd030004970 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:49:45 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:49:45 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:49:45 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:49:45.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:49:45 compute-0 python3.9[143060]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:49:45 compute-0 sudo[143058]: pam_unix(sudo:session): session closed for user root
Dec 06 09:49:45 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:49:45 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:49:45 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:49:45.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:49:45 compute-0 sudo[143136]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kprgwjbzkwjxonfbjpditznsynvlbyfo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014585.20525-344-118112442539396/AnsiballZ_file.py'
Dec 06 09:49:45 compute-0 sudo[143136]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:49:45 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v248: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Dec 06 09:49:46 compute-0 python3.9[143138]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:49:46 compute-0 sudo[143136]: pam_unix(sudo:session): session closed for user root
Dec 06 09:49:46 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:49:46 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd018004bc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:49:46 compute-0 sudo[143251]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 09:49:46 compute-0 sudo[143251]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:49:46 compute-0 sudo[143251]: pam_unix(sudo:session): session closed for user root
Dec 06 09:49:46 compute-0 sudo[143313]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ypzcmtxeqdqhfhcsiuqgozbduvzdphsa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014586.5433276-383-132016889551279/AnsiballZ_command.py'
Dec 06 09:49:46 compute-0 sudo[143313]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:49:46 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:49:46.998Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec 06 09:49:46 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:49:46.998Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec 06 09:49:46 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:49:46.999Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 06 09:49:47 compute-0 python3.9[143315]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 09:49:47 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:49:47 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd00c001fc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:49:47 compute-0 sudo[143313]: pam_unix(sudo:session): session closed for user root
Dec 06 09:49:47 compute-0 ceph-mon[74327]: pgmap v248: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Dec 06 09:49:47 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:49:47 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd028003cc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:49:47 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:49:47 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:49:47 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:49:47.621 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:49:47 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:49:47 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:49:47 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:49:47.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:49:47 compute-0 sudo[143470]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eqvcakggbfegzyuaxcrbakhisqcvjdkz ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1765014587.345701-407-219310975208280/AnsiballZ_edpm_nftables_from_files.py'
Dec 06 09:49:47 compute-0 sudo[143470]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:49:47 compute-0 python3[143472]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Dec 06 09:49:47 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v249: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Dec 06 09:49:47 compute-0 sudo[143470]: pam_unix(sudo:session): session closed for user root
Dec 06 09:49:48 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:49:48 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd004001090 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:49:48 compute-0 sudo[143622]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zgjmxpcmjitxsvjbtafrhalvdjuocahc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014588.2157028-431-20748854031508/AnsiballZ_stat.py'
Dec 06 09:49:48 compute-0 sudo[143622]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:49:48 compute-0 python3.9[143624]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:49:48 compute-0 sudo[143622]: pam_unix(sudo:session): session closed for user root
Dec 06 09:49:49 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:49:49 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd004001090 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:49:49 compute-0 sudo[143748]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jtycsdlseczfcnzockydnrqsvowuulob ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014588.2157028-431-20748854031508/AnsiballZ_copy.py'
Dec 06 09:49:49 compute-0 sudo[143748]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:49:49 compute-0 ceph-mon[74327]: pgmap v249: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Dec 06 09:49:49 compute-0 python3.9[143750]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765014588.2157028-431-20748854031508/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:49:49 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 09:49:49 compute-0 sudo[143748]: pam_unix(sudo:session): session closed for user root
Dec 06 09:49:49 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:49:49 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd00c001fc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:49:49 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:49:49 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:49:49 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:49:49.623 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:49:49 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:49:49 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:49:49 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:49:49.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:49:49 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v250: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:49:49 compute-0 sudo[143901]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lvgcpqyrveaieqjbtbjxreiorcecmrpn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014589.6290662-476-171515448248764/AnsiballZ_stat.py'
Dec 06 09:49:49 compute-0 sudo[143901]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:49:50 compute-0 python3.9[143903]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:49:50 compute-0 sudo[143901]: pam_unix(sudo:session): session closed for user root
Dec 06 09:49:50 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:49:50 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd028003cc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:49:50 compute-0 sudo[144026]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mzxjktineqviggbilpqlqytmnsesehdx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014589.6290662-476-171515448248764/AnsiballZ_copy.py'
Dec 06 09:49:50 compute-0 sudo[144026]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:49:50 compute-0 python3.9[144028]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765014589.6290662-476-171515448248764/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:49:50 compute-0 sudo[144026]: pam_unix(sudo:session): session closed for user root
Dec 06 09:49:50 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:09:49:50] "GET /metrics HTTP/1.1" 200 48262 "" "Prometheus/2.51.0"
Dec 06 09:49:50 compute-0 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:09:49:50] "GET /metrics HTTP/1.1" 200 48262 "" "Prometheus/2.51.0"
Dec 06 09:49:51 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:49:51 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcffc002690 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:49:51 compute-0 ceph-mon[74327]: pgmap v250: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:49:51 compute-0 sudo[144180]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oohsfqmrpxkelbxfaelbbqqsqyqjbdfo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014591.0313091-521-1512425184116/AnsiballZ_stat.py'
Dec 06 09:49:51 compute-0 sudo[144180]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:49:51 compute-0 python3.9[144182]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:49:51 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:49:51 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd004001090 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:49:51 compute-0 sudo[144180]: pam_unix(sudo:session): session closed for user root
Dec 06 09:49:51 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:49:51 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 09:49:51 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:49:51.625 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 09:49:51 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:49:51 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:49:51 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:49:51.732 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:49:51 compute-0 sudo[144305]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ueabkcddpicoveuftdmviaspdacahjrh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014591.0313091-521-1512425184116/AnsiballZ_copy.py'
Dec 06 09:49:51 compute-0 sudo[144305]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:49:51 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v251: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:49:52 compute-0 python3.9[144307]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765014591.0313091-521-1512425184116/.source.nft follow=False _original_basename=flush-chain.j2 checksum=4d3ffec49c8eb1a9b80d2f1e8cd64070063a87b4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:49:52 compute-0 sudo[144305]: pam_unix(sudo:session): session closed for user root
Dec 06 09:49:52 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:49:52 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd00c001fc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:49:52 compute-0 sudo[144457]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lgtkozhvqshumqeafhgfghlbtdwbqsix ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014592.301395-566-199565958401142/AnsiballZ_stat.py'
Dec 06 09:49:52 compute-0 sudo[144457]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:49:52 compute-0 python3.9[144459]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:49:52 compute-0 sudo[144457]: pam_unix(sudo:session): session closed for user root
Dec 06 09:49:53 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:49:53 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd028003cc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:49:53 compute-0 sudo[144583]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nvludpsqbawthmiwehkpxkzmydkznnsl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014592.301395-566-199565958401142/AnsiballZ_copy.py'
Dec 06 09:49:53 compute-0 sudo[144583]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:49:53 compute-0 python3.9[144585]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765014592.301395-566-199565958401142/.source.nft follow=False _original_basename=chains.j2 checksum=298ada419730ec15df17ded0cc50c97a4014a591 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:49:53 compute-0 sudo[144583]: pam_unix(sudo:session): session closed for user root
Dec 06 09:49:53 compute-0 ceph-mon[74327]: pgmap v251: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:49:53 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:49:53 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcffc002690 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:49:53 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:49:53 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 09:49:53 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:49:53.627 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 09:49:53 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:49:53 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 09:49:53 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:49:53.734 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 09:49:53 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 06 09:49:53 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 09:49:53 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v252: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:49:53 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 09:49:53 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 09:49:53 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 09:49:53 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 09:49:53 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 09:49:53 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 09:49:54 compute-0 sudo[144736]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oqbnnpabptqbpuoexzzjnnvbjxwfivjh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014593.6583588-611-58609445119118/AnsiballZ_stat.py'
Dec 06 09:49:54 compute-0 sudo[144736]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:49:54 compute-0 python3.9[144738]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:49:54 compute-0 sudo[144736]: pam_unix(sudo:session): session closed for user root
Dec 06 09:49:54 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:49:54 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd004001090 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:49:54 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 09:49:54 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 09:49:54 compute-0 sudo[144863]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-runwjoeoiuefnybplagecmcxwyrxejms ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014593.6583588-611-58609445119118/AnsiballZ_copy.py'
Dec 06 09:49:54 compute-0 sudo[144863]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:49:54 compute-0 python3.9[144865]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765014593.6583588-611-58609445119118/.source.nft follow=False _original_basename=ruleset.j2 checksum=bdba38546f86123f1927359d89789bd211aba99d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:49:54 compute-0 sudo[144863]: pam_unix(sudo:session): session closed for user root
Dec 06 09:49:55 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:49:55 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd00c001fc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:49:55 compute-0 sudo[145017]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ejoyoyzgyxupjfgfmiqkftunywmvthzd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014595.1267292-656-102493777718501/AnsiballZ_file.py'
Dec 06 09:49:55 compute-0 sudo[145017]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:49:55 compute-0 ceph-mon[74327]: pgmap v252: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:49:55 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:49:55 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd028003cc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:49:55 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:49:55 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:49:55 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:49:55.630 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:49:55 compute-0 python3.9[145019]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:49:55 compute-0 sudo[145017]: pam_unix(sudo:session): session closed for user root
Dec 06 09:49:55 compute-0 sshd-session[144741]: Invalid user user from 78.128.112.74 port 47130
Dec 06 09:49:55 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:49:55 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:49:55 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:49:55.735 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:49:55 compute-0 sshd-session[144741]: Connection closed by invalid user user 78.128.112.74 port 47130 [preauth]
Dec 06 09:49:55 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v253: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:49:56 compute-0 sudo[145169]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kayhoiteehwppecyzbzgfamhbbfbynrh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014595.8793569-680-66873549764828/AnsiballZ_command.py'
Dec 06 09:49:56 compute-0 sudo[145169]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:49:56 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:49:56 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcffc002690 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:49:56 compute-0 python3.9[145171]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 09:49:56 compute-0 sudo[145169]: pam_unix(sudo:session): session closed for user root
Dec 06 09:49:56 compute-0 ceph-mon[74327]: pgmap v253: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:49:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:49:57.000Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 06 09:49:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:49:57 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd004001090 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:49:57 compute-0 sudo[145325]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ssdhellvwygzabkbplhadjwgoocdydai ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014596.69292-704-256433257717091/AnsiballZ_blockinfile.py'
Dec 06 09:49:57 compute-0 sudo[145325]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:49:57 compute-0 python3.9[145327]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                             include "/etc/nftables/edpm-chains.nft"
                                             include "/etc/nftables/edpm-rules.nft"
                                             include "/etc/nftables/edpm-jumps.nft"
                                              path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:49:57 compute-0 sudo[145325]: pam_unix(sudo:session): session closed for user root
Dec 06 09:49:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:49:57 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd028003cc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:49:57 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:49:57 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 09:49:57 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:49:57.632 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 09:49:57 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:49:57 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 09:49:57 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:49:57.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 09:49:57 compute-0 sudo[145404]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 09:49:57 compute-0 sudo[145404]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:49:57 compute-0 sudo[145404]: pam_unix(sudo:session): session closed for user root
Dec 06 09:49:57 compute-0 sudo[145450]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Dec 06 09:49:57 compute-0 sudo[145450]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:49:57 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v254: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 09:49:57 compute-0 sudo[145528]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ietawonvsmekwstndmvxjvocyusorglp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014597.6867766-731-251972327697983/AnsiballZ_command.py'
Dec 06 09:49:57 compute-0 sudo[145528]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:49:58 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec 06 09:49:58 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:49:58 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec 06 09:49:58 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:49:58 compute-0 python3.9[145530]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 09:49:58 compute-0 sudo[145528]: pam_unix(sudo:session): session closed for user root
Dec 06 09:49:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:49:58 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd00c001fc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:49:58 compute-0 sudo[145450]: pam_unix(sudo:session): session closed for user root
Dec 06 09:49:58 compute-0 sudo[145714]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-omcfejaotsvqvjveaptsfbhkxixynudf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014598.4421222-755-108230436928567/AnsiballZ_stat.py'
Dec 06 09:49:58 compute-0 sudo[145714]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:49:58 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 06 09:49:58 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 09:49:58 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 06 09:49:58 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 09:49:58 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 06 09:49:58 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:49:58 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Dec 06 09:49:58 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:49:58 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 06 09:49:58 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 09:49:58 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 06 09:49:58 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 09:49:58 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 06 09:49:58 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 09:49:58 compute-0 python3.9[145716]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 06 09:49:58 compute-0 sudo[145714]: pam_unix(sudo:session): session closed for user root
Dec 06 09:49:58 compute-0 sudo[145717]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 09:49:58 compute-0 sudo[145717]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:49:58 compute-0 sudo[145717]: pam_unix(sudo:session): session closed for user root
Dec 06 09:49:59 compute-0 sudo[145744]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 5ecd3f74-dade-5fc4-92ce-8950ae424258 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Dec 06 09:49:59 compute-0 sudo[145744]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:49:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:49:59 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcffc002690 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:49:59 compute-0 ceph-mon[74327]: pgmap v254: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 09:49:59 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:49:59 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:49:59 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 09:49:59 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 09:49:59 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:49:59 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:49:59 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 09:49:59 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 09:49:59 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 09:49:59 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 09:49:59 compute-0 podman[145922]: 2025-12-06 09:49:59.463688147 +0000 UTC m=+0.046695685 container create 53d750f6f2cbe1c4c92fba77dc59e0a06fc8551d47148db2cb01a34d183e8c87 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_torvalds, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 09:49:59 compute-0 systemd[1]: Started libpod-conmon-53d750f6f2cbe1c4c92fba77dc59e0a06fc8551d47148db2cb01a34d183e8c87.scope.
Dec 06 09:49:59 compute-0 sudo[145975]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-awqrtbragnfeaqqnaeybglyhthdwrcgd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014599.1881607-779-220426787317353/AnsiballZ_command.py'
Dec 06 09:49:59 compute-0 sudo[145975]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:49:59 compute-0 podman[145922]: 2025-12-06 09:49:59.443405645 +0000 UTC m=+0.026413233 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 06 09:49:59 compute-0 systemd[1]: Started libcrun container.
Dec 06 09:49:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:49:59 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd004003490 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:49:59 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:49:59 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:49:59 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:49:59.635 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:49:59 compute-0 podman[145922]: 2025-12-06 09:49:59.651215304 +0000 UTC m=+0.234222902 container init 53d750f6f2cbe1c4c92fba77dc59e0a06fc8551d47148db2cb01a34d183e8c87 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_torvalds, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec 06 09:49:59 compute-0 podman[145922]: 2025-12-06 09:49:59.664702938 +0000 UTC m=+0.247710486 container start 53d750f6f2cbe1c4c92fba77dc59e0a06fc8551d47148db2cb01a34d183e8c87 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_torvalds, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 09:49:59 compute-0 youthful_torvalds[145977]: 167 167
Dec 06 09:49:59 compute-0 systemd[1]: libpod-53d750f6f2cbe1c4c92fba77dc59e0a06fc8551d47148db2cb01a34d183e8c87.scope: Deactivated successfully.
Dec 06 09:49:59 compute-0 podman[145922]: 2025-12-06 09:49:59.676885798 +0000 UTC m=+0.259893366 container attach 53d750f6f2cbe1c4c92fba77dc59e0a06fc8551d47148db2cb01a34d183e8c87 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_torvalds, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec 06 09:49:59 compute-0 podman[145922]: 2025-12-06 09:49:59.678069208 +0000 UTC m=+0.261076746 container died 53d750f6f2cbe1c4c92fba77dc59e0a06fc8551d47148db2cb01a34d183e8c87 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_torvalds, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_REF=squid)
Dec 06 09:49:59 compute-0 python3.9[145978]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 09:49:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-6937766aa5949b2b3803fb16e76de0a3cb41aa137e4e303bd57838871dd8e1db-merged.mount: Deactivated successfully.
Dec 06 09:49:59 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:49:59 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 09:49:59 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:49:59.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 09:49:59 compute-0 sudo[145975]: pam_unix(sudo:session): session closed for user root
Dec 06 09:49:59 compute-0 podman[145922]: 2025-12-06 09:49:59.809837093 +0000 UTC m=+0.392844641 container remove 53d750f6f2cbe1c4c92fba77dc59e0a06fc8551d47148db2cb01a34d183e8c87 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_torvalds, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Dec 06 09:49:59 compute-0 systemd[1]: libpod-conmon-53d750f6f2cbe1c4c92fba77dc59e0a06fc8551d47148db2cb01a34d183e8c87.scope: Deactivated successfully.
Dec 06 09:49:59 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v255: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:50:00 compute-0 ceph-mon[74327]: log_channel(cluster) log [INF] : overall HEALTH_OK
Dec 06 09:50:00 compute-0 podman[146056]: 2025-12-06 09:50:00.005294868 +0000 UTC m=+0.044827726 container create 5d4c96d6e7cea7fdcc2ca55f73bcddae0af2c6a5ae041a5bd06bf0783593b713 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_ishizaka, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 09:50:00 compute-0 systemd[1]: Started libpod-conmon-5d4c96d6e7cea7fdcc2ca55f73bcddae0af2c6a5ae041a5bd06bf0783593b713.scope.
Dec 06 09:50:00 compute-0 systemd[1]: Started libcrun container.
Dec 06 09:50:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dda975b5420da00c8448e47d324f03a6f5ae14aede0d2dba8f4f5a91713f9454/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 09:50:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dda975b5420da00c8448e47d324f03a6f5ae14aede0d2dba8f4f5a91713f9454/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 09:50:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dda975b5420da00c8448e47d324f03a6f5ae14aede0d2dba8f4f5a91713f9454/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 09:50:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dda975b5420da00c8448e47d324f03a6f5ae14aede0d2dba8f4f5a91713f9454/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 09:50:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dda975b5420da00c8448e47d324f03a6f5ae14aede0d2dba8f4f5a91713f9454/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 06 09:50:00 compute-0 podman[146056]: 2025-12-06 09:49:59.985016157 +0000 UTC m=+0.024549035 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 06 09:50:00 compute-0 podman[146056]: 2025-12-06 09:50:00.094789015 +0000 UTC m=+0.134321963 container init 5d4c96d6e7cea7fdcc2ca55f73bcddae0af2c6a5ae041a5bd06bf0783593b713 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_ishizaka, ceph=True, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 06 09:50:00 compute-0 podman[146056]: 2025-12-06 09:50:00.104195641 +0000 UTC m=+0.143728499 container start 5d4c96d6e7cea7fdcc2ca55f73bcddae0af2c6a5ae041a5bd06bf0783593b713 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_ishizaka, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 06 09:50:00 compute-0 podman[146056]: 2025-12-06 09:50:00.10797734 +0000 UTC m=+0.147510198 container attach 5d4c96d6e7cea7fdcc2ca55f73bcddae0af2c6a5ae041a5bd06bf0783593b713 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_ishizaka, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Dec 06 09:50:00 compute-0 ceph-mon[74327]: overall HEALTH_OK
Dec 06 09:50:00 compute-0 sudo[146177]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-getpjsgplarzavhexrnzfcsgtktvtzfw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014599.9390666-803-221398952411635/AnsiballZ_file.py'
Dec 06 09:50:00 compute-0 sudo[146177]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:50:00 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:50:00 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd004003490 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:50:00 compute-0 python3.9[146179]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:50:00 compute-0 sudo[146177]: pam_unix(sudo:session): session closed for user root
Dec 06 09:50:00 compute-0 amazing_ishizaka[146120]: --> passed data devices: 0 physical, 1 LVM
Dec 06 09:50:00 compute-0 amazing_ishizaka[146120]: --> All data devices are unavailable
Dec 06 09:50:00 compute-0 systemd[1]: libpod-5d4c96d6e7cea7fdcc2ca55f73bcddae0af2c6a5ae041a5bd06bf0783593b713.scope: Deactivated successfully.
Dec 06 09:50:00 compute-0 podman[146056]: 2025-12-06 09:50:00.522441667 +0000 UTC m=+0.561974525 container died 5d4c96d6e7cea7fdcc2ca55f73bcddae0af2c6a5ae041a5bd06bf0783593b713 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_ishizaka, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 09:50:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-dda975b5420da00c8448e47d324f03a6f5ae14aede0d2dba8f4f5a91713f9454-merged.mount: Deactivated successfully.
Dec 06 09:50:00 compute-0 podman[146056]: 2025-12-06 09:50:00.62205034 +0000 UTC m=+0.661583228 container remove 5d4c96d6e7cea7fdcc2ca55f73bcddae0af2c6a5ae041a5bd06bf0783593b713 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_ishizaka, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 09:50:00 compute-0 systemd[1]: libpod-conmon-5d4c96d6e7cea7fdcc2ca55f73bcddae0af2c6a5ae041a5bd06bf0783593b713.scope: Deactivated successfully.
Dec 06 09:50:00 compute-0 sudo[145744]: pam_unix(sudo:session): session closed for user root
Dec 06 09:50:00 compute-0 sudo[146229]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 09:50:00 compute-0 sudo[146229]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:50:00 compute-0 sudo[146229]: pam_unix(sudo:session): session closed for user root
Dec 06 09:50:00 compute-0 sudo[146254]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 5ecd3f74-dade-5fc4-92ce-8950ae424258 -- lvm list --format json
Dec 06 09:50:00 compute-0 sudo[146254]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:50:00 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:09:50:00] "GET /metrics HTTP/1.1" 200 48252 "" "Prometheus/2.51.0"
Dec 06 09:50:00 compute-0 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:09:50:00] "GET /metrics HTTP/1.1" 200 48252 "" "Prometheus/2.51.0"
Dec 06 09:50:01 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:50:01 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd00c001fc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:50:01 compute-0 podman[146373]: 2025-12-06 09:50:01.207736876 +0000 UTC m=+0.023436015 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 06 09:50:01 compute-0 ceph-mon[74327]: pgmap v255: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:50:01 compute-0 podman[146373]: 2025-12-06 09:50:01.458380538 +0000 UTC m=+0.274079717 container create 64214105146ad613da457d53b142ffb0c11bed536f910af7cd8c92389e06feec (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_chatelet, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 06 09:50:01 compute-0 systemd[1]: Started libpod-conmon-64214105146ad613da457d53b142ffb0c11bed536f910af7cd8c92389e06feec.scope.
Dec 06 09:50:01 compute-0 systemd[1]: Started libcrun container.
Dec 06 09:50:01 compute-0 podman[146373]: 2025-12-06 09:50:01.55720842 +0000 UTC m=+0.372907559 container init 64214105146ad613da457d53b142ffb0c11bed536f910af7cd8c92389e06feec (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_chatelet, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 09:50:01 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:50:01 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcffc002690 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:50:01 compute-0 podman[146373]: 2025-12-06 09:50:01.564791068 +0000 UTC m=+0.380490187 container start 64214105146ad613da457d53b142ffb0c11bed536f910af7cd8c92389e06feec (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_chatelet, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 09:50:01 compute-0 podman[146373]: 2025-12-06 09:50:01.568304031 +0000 UTC m=+0.384003160 container attach 64214105146ad613da457d53b142ffb0c11bed536f910af7cd8c92389e06feec (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_chatelet, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 06 09:50:01 compute-0 gallant_chatelet[146464]: 167 167
Dec 06 09:50:01 compute-0 systemd[1]: libpod-64214105146ad613da457d53b142ffb0c11bed536f910af7cd8c92389e06feec.scope: Deactivated successfully.
Dec 06 09:50:01 compute-0 podman[146373]: 2025-12-06 09:50:01.57095022 +0000 UTC m=+0.386649349 container died 64214105146ad613da457d53b142ffb0c11bed536f910af7cd8c92389e06feec (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_chatelet, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325)
Dec 06 09:50:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-6990fa6707aa1de6810e53c68f35a85734b6733509beec674fc5969e21738847-merged.mount: Deactivated successfully.
Dec 06 09:50:01 compute-0 podman[146373]: 2025-12-06 09:50:01.615089987 +0000 UTC m=+0.430789126 container remove 64214105146ad613da457d53b142ffb0c11bed536f910af7cd8c92389e06feec (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_chatelet, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 09:50:01 compute-0 systemd[1]: libpod-conmon-64214105146ad613da457d53b142ffb0c11bed536f910af7cd8c92389e06feec.scope: Deactivated successfully.
Dec 06 09:50:01 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:50:01 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 09:50:01 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:50:01.637 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 09:50:01 compute-0 python3.9[146461]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'machine'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 06 09:50:01 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:50:01 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:50:01 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:50:01.741 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:50:01 compute-0 podman[146490]: 2025-12-06 09:50:01.815614175 +0000 UTC m=+0.089160559 container create fc02f1e3c05ac4daa21d52d33e3bd988bd3ab634f3acdc07825b8cfd384f4ce9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_jennings, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec 06 09:50:01 compute-0 podman[146490]: 2025-12-06 09:50:01.751831622 +0000 UTC m=+0.025378016 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 06 09:50:01 compute-0 systemd[1]: Started libpod-conmon-fc02f1e3c05ac4daa21d52d33e3bd988bd3ab634f3acdc07825b8cfd384f4ce9.scope.
Dec 06 09:50:01 compute-0 systemd[1]: Started libcrun container.
Dec 06 09:50:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6906b50ca970bb4417f9b6f521662f12e7762a3747266f2c87ed36a1d55abac2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 09:50:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6906b50ca970bb4417f9b6f521662f12e7762a3747266f2c87ed36a1d55abac2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 09:50:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6906b50ca970bb4417f9b6f521662f12e7762a3747266f2c87ed36a1d55abac2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 09:50:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6906b50ca970bb4417f9b6f521662f12e7762a3747266f2c87ed36a1d55abac2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 09:50:01 compute-0 podman[146490]: 2025-12-06 09:50:01.920517715 +0000 UTC m=+0.194064139 container init fc02f1e3c05ac4daa21d52d33e3bd988bd3ab634f3acdc07825b8cfd384f4ce9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_jennings, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1)
Dec 06 09:50:01 compute-0 podman[146490]: 2025-12-06 09:50:01.929877451 +0000 UTC m=+0.203423865 container start fc02f1e3c05ac4daa21d52d33e3bd988bd3ab634f3acdc07825b8cfd384f4ce9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_jennings, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 09:50:01 compute-0 podman[146490]: 2025-12-06 09:50:01.941612089 +0000 UTC m=+0.215158493 container attach fc02f1e3c05ac4daa21d52d33e3bd988bd3ab634f3acdc07825b8cfd384f4ce9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_jennings, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 09:50:01 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v256: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:50:02 compute-0 loving_jennings[146530]: {
Dec 06 09:50:02 compute-0 loving_jennings[146530]:     "1": [
Dec 06 09:50:02 compute-0 loving_jennings[146530]:         {
Dec 06 09:50:02 compute-0 loving_jennings[146530]:             "devices": [
Dec 06 09:50:02 compute-0 loving_jennings[146530]:                 "/dev/loop3"
Dec 06 09:50:02 compute-0 loving_jennings[146530]:             ],
Dec 06 09:50:02 compute-0 loving_jennings[146530]:             "lv_name": "ceph_lv0",
Dec 06 09:50:02 compute-0 loving_jennings[146530]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 09:50:02 compute-0 loving_jennings[146530]:             "lv_size": "21470642176",
Dec 06 09:50:02 compute-0 loving_jennings[146530]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=a55pcS-v3Vt-CJ4W-fqCV-Baze-9VlY-jG9prS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=5ecd3f74-dade-5fc4-92ce-8950ae424258,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=7899c4d8-edb4-4836-b838-c4aa702ad7af,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 06 09:50:02 compute-0 loving_jennings[146530]:             "lv_uuid": "a55pcS-v3Vt-CJ4W-fqCV-Baze-9VlY-jG9prS",
Dec 06 09:50:02 compute-0 loving_jennings[146530]:             "name": "ceph_lv0",
Dec 06 09:50:02 compute-0 loving_jennings[146530]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 09:50:02 compute-0 loving_jennings[146530]:             "tags": {
Dec 06 09:50:02 compute-0 loving_jennings[146530]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 06 09:50:02 compute-0 loving_jennings[146530]:                 "ceph.block_uuid": "a55pcS-v3Vt-CJ4W-fqCV-Baze-9VlY-jG9prS",
Dec 06 09:50:02 compute-0 loving_jennings[146530]:                 "ceph.cephx_lockbox_secret": "",
Dec 06 09:50:02 compute-0 loving_jennings[146530]:                 "ceph.cluster_fsid": "5ecd3f74-dade-5fc4-92ce-8950ae424258",
Dec 06 09:50:02 compute-0 loving_jennings[146530]:                 "ceph.cluster_name": "ceph",
Dec 06 09:50:02 compute-0 loving_jennings[146530]:                 "ceph.crush_device_class": "",
Dec 06 09:50:02 compute-0 loving_jennings[146530]:                 "ceph.encrypted": "0",
Dec 06 09:50:02 compute-0 loving_jennings[146530]:                 "ceph.osd_fsid": "7899c4d8-edb4-4836-b838-c4aa702ad7af",
Dec 06 09:50:02 compute-0 loving_jennings[146530]:                 "ceph.osd_id": "1",
Dec 06 09:50:02 compute-0 loving_jennings[146530]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 06 09:50:02 compute-0 loving_jennings[146530]:                 "ceph.type": "block",
Dec 06 09:50:02 compute-0 loving_jennings[146530]:                 "ceph.vdo": "0",
Dec 06 09:50:02 compute-0 loving_jennings[146530]:                 "ceph.with_tpm": "0"
Dec 06 09:50:02 compute-0 loving_jennings[146530]:             },
Dec 06 09:50:02 compute-0 loving_jennings[146530]:             "type": "block",
Dec 06 09:50:02 compute-0 loving_jennings[146530]:             "vg_name": "ceph_vg0"
Dec 06 09:50:02 compute-0 loving_jennings[146530]:         }
Dec 06 09:50:02 compute-0 loving_jennings[146530]:     ]
Dec 06 09:50:02 compute-0 loving_jennings[146530]: }
Dec 06 09:50:02 compute-0 systemd[1]: libpod-fc02f1e3c05ac4daa21d52d33e3bd988bd3ab634f3acdc07825b8cfd384f4ce9.scope: Deactivated successfully.
Dec 06 09:50:02 compute-0 podman[146490]: 2025-12-06 09:50:02.247514099 +0000 UTC m=+0.521060483 container died fc02f1e3c05ac4daa21d52d33e3bd988bd3ab634f3acdc07825b8cfd384f4ce9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_jennings, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 09:50:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-6906b50ca970bb4417f9b6f521662f12e7762a3747266f2c87ed36a1d55abac2-merged.mount: Deactivated successfully.
Dec 06 09:50:02 compute-0 podman[146490]: 2025-12-06 09:50:02.293172096 +0000 UTC m=+0.566718460 container remove fc02f1e3c05ac4daa21d52d33e3bd988bd3ab634f3acdc07825b8cfd384f4ce9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_jennings, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Dec 06 09:50:02 compute-0 systemd[1]: libpod-conmon-fc02f1e3c05ac4daa21d52d33e3bd988bd3ab634f3acdc07825b8cfd384f4ce9.scope: Deactivated successfully.
Dec 06 09:50:02 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:50:02 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd028003cc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:50:02 compute-0 sudo[146254]: pam_unix(sudo:session): session closed for user root
Dec 06 09:50:02 compute-0 sudo[146550]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 09:50:02 compute-0 sudo[146550]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:50:02 compute-0 sudo[146550]: pam_unix(sudo:session): session closed for user root
Dec 06 09:50:02 compute-0 sudo[146575]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 5ecd3f74-dade-5fc4-92ce-8950ae424258 -- raw list --format json
Dec 06 09:50:02 compute-0 sudo[146575]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:50:02 compute-0 ceph-mon[74327]: pgmap v256: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:50:02 compute-0 sudo[146755]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-akpsstxzbxvbsgzqrzkjhovgypfbrrup ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014602.4806569-923-232417918412306/AnsiballZ_command.py'
Dec 06 09:50:02 compute-0 sudo[146755]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:50:02 compute-0 podman[146769]: 2025-12-06 09:50:02.901569049 +0000 UTC m=+0.049540040 container create 136ed10f45d5af8ece2658ec3b070f46ad01e9438d57bae999c48e61c739e854 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_hugle, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec 06 09:50:02 compute-0 systemd[1]: Started libpod-conmon-136ed10f45d5af8ece2658ec3b070f46ad01e9438d57bae999c48e61c739e854.scope.
Dec 06 09:50:02 compute-0 systemd[1]: Started libcrun container.
Dec 06 09:50:02 compute-0 python3.9[146762]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl set open . external_ids:hostname=compute-0.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings="datacentre:3e:0a:f2:93:49:d5" external_ids:ovn-encap-ip=172.19.0.102 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch 
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 09:50:02 compute-0 podman[146769]: 2025-12-06 09:50:02.882644093 +0000 UTC m=+0.030615084 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 06 09:50:02 compute-0 podman[146769]: 2025-12-06 09:50:02.986439884 +0000 UTC m=+0.134410875 container init 136ed10f45d5af8ece2658ec3b070f46ad01e9438d57bae999c48e61c739e854 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_hugle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Dec 06 09:50:02 compute-0 ovs-vsctl[146788]: ovs|00001|vsctl|INFO|Called as ovs-vsctl set open . external_ids:hostname=compute-0.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings=datacentre:3e:0a:f2:93:49:d5 external_ids:ovn-encap-ip=172.19.0.102 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch
Dec 06 09:50:02 compute-0 podman[146769]: 2025-12-06 09:50:02.995248255 +0000 UTC m=+0.143219226 container start 136ed10f45d5af8ece2658ec3b070f46ad01e9438d57bae999c48e61c739e854 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_hugle, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec 06 09:50:03 compute-0 funny_hugle[146785]: 167 167
Dec 06 09:50:03 compute-0 systemd[1]: libpod-136ed10f45d5af8ece2658ec3b070f46ad01e9438d57bae999c48e61c739e854.scope: Deactivated successfully.
Dec 06 09:50:03 compute-0 podman[146769]: 2025-12-06 09:50:03.00421039 +0000 UTC m=+0.152181381 container attach 136ed10f45d5af8ece2658ec3b070f46ad01e9438d57bae999c48e61c739e854 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_hugle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Dec 06 09:50:03 compute-0 podman[146769]: 2025-12-06 09:50:03.00460679 +0000 UTC m=+0.152577761 container died 136ed10f45d5af8ece2658ec3b070f46ad01e9438d57bae999c48e61c739e854 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_hugle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Dec 06 09:50:03 compute-0 sudo[146755]: pam_unix(sudo:session): session closed for user root
Dec 06 09:50:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-c35d22a53b864f392bc466fd8827324cca5e67b348e334072e578f0332ecc931-merged.mount: Deactivated successfully.
Dec 06 09:50:03 compute-0 podman[146769]: 2025-12-06 09:50:03.047042784 +0000 UTC m=+0.195013755 container remove 136ed10f45d5af8ece2658ec3b070f46ad01e9438d57bae999c48e61c739e854 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_hugle, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Dec 06 09:50:03 compute-0 systemd[1]: libpod-conmon-136ed10f45d5af8ece2658ec3b070f46ad01e9438d57bae999c48e61c739e854.scope: Deactivated successfully.
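The create/init/start/attach/died/remove sequence above (podman name funny_hugle) is cephadm probing the host with a throw-away ceph container; the single output line "167 167" is the uid/gid of the ceph user inside the image. A hedged reconstruction of that probe, reusing the image digest from the log (the exact stat invocation is an assumption based on cephadm's usual uid/gid check, not something the log records):

    # One-shot container: print the ceph user's uid/gid, then self-remove
    podman run --rm \
      quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec \
      stat -c '%u %g' /var/lib/ceph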
Dec 06 09:50:03 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:50:03 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd004003490 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:50:03 compute-0 podman[146834]: 2025-12-06 09:50:03.202087529 +0000 UTC m=+0.050009073 container create 8840c73f6d39f37c88713689da6dc525c3700a5bc443242e58f1769bf8e87f07 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_chebyshev, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 09:50:03 compute-0 systemd[1]: Started libpod-conmon-8840c73f6d39f37c88713689da6dc525c3700a5bc443242e58f1769bf8e87f07.scope.
Dec 06 09:50:03 compute-0 podman[146834]: 2025-12-06 09:50:03.179009164 +0000 UTC m=+0.026930698 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 06 09:50:03 compute-0 systemd[1]: Started libcrun container.
Dec 06 09:50:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab2955ddc6b6797ff1af59075fc9e8789b9d23a05616e65f3707c836419f1750/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 09:50:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab2955ddc6b6797ff1af59075fc9e8789b9d23a05616e65f3707c836419f1750/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 09:50:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab2955ddc6b6797ff1af59075fc9e8789b9d23a05616e65f3707c836419f1750/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 09:50:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab2955ddc6b6797ff1af59075fc9e8789b9d23a05616e65f3707c836419f1750/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
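The four kernel lines above are informational: the xfs filesystem backing the overlay was created without the bigtime feature, so its inode timestamps top out at 0x7fffffff (2038-01-19). One way to confirm on the host, assuming a reasonably recent xfsprogs (older versions want the mount point rather than an arbitrary path):

    # bigtime=0 means 2038-limited timestamps; bigtime=1 extends the range
    xfs_info /var/lib/containers | grep -o 'bigtime=[01]'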
Dec 06 09:50:03 compute-0 podman[146834]: 2025-12-06 09:50:03.307115512 +0000 UTC m=+0.155037076 container init 8840c73f6d39f37c88713689da6dc525c3700a5bc443242e58f1769bf8e87f07 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_chebyshev, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec 06 09:50:03 compute-0 podman[146834]: 2025-12-06 09:50:03.314526397 +0000 UTC m=+0.162447951 container start 8840c73f6d39f37c88713689da6dc525c3700a5bc443242e58f1769bf8e87f07 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_chebyshev, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 06 09:50:03 compute-0 podman[146834]: 2025-12-06 09:50:03.320941245 +0000 UTC m=+0.168862809 container attach 8840c73f6d39f37c88713689da6dc525c3700a5bc443242e58f1769bf8e87f07 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_chebyshev, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2)
Dec 06 09:50:03 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:50:03 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd00c001fc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:50:03 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:50:03 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 09:50:03 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:50:03.639 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
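These anonymous "HEAD / HTTP/1.0" requests recur every two seconds from 192.168.122.100 and .102 throughout the section; the fixed cadence and the HTTP/1.0 HEAD pattern are characteristic of load-balancer health checks against radosgw rather than real client traffic. Reproducing one probe by hand (the frontend port is an assumption; the log records only the client IPs):

    # Mimic a health-check probe against the local RGW frontend
    curl -s -o /dev/null -w '%{http_code}\n' --head http://127.0.0.1:8080/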
Dec 06 09:50:03 compute-0 sudo[147014]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mfemkuzhgnheggyufcholfvnjfheqosu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014603.3980088-950-45274832107248/AnsiballZ_command.py'
Dec 06 09:50:03 compute-0 sudo[147014]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:50:03 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:50:03 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 09:50:03 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:50:03.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 09:50:03 compute-0 python3.9[147023]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail
                                             ovs-vsctl show | grep -q "Manager"
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
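This check is the idempotency guard for the Manager row created a moment later: the play only configures an OVSDB manager when `ovs-vsctl show` does not already list one. The same test, spelled out with its exit-code semantics:

    set -o pipefail
    # Exit 0 if a Manager is already configured; pipefail keeps an
    # ovs-vsctl failure from being masked by grep's exit status
    ovs-vsctl show | grep -q "Manager"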
Dec 06 09:50:03 compute-0 sudo[147014]: pam_unix(sudo:session): session closed for user root
Dec 06 09:50:03 compute-0 lvm[147057]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 06 09:50:03 compute-0 lvm[147057]: VG ceph_vg0 finished
Dec 06 09:50:03 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v257: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:50:03 compute-0 lvm[147083]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 06 09:50:03 compute-0 lvm[147083]: VG ceph_vg0 finished
Dec 06 09:50:04 compute-0 agitated_chebyshev[146852]: {}
Dec 06 09:50:04 compute-0 systemd[1]: libpod-8840c73f6d39f37c88713689da6dc525c3700a5bc443242e58f1769bf8e87f07.scope: Deactivated successfully.
Dec 06 09:50:04 compute-0 podman[146834]: 2025-12-06 09:50:04.034358251 +0000 UTC m=+0.882279785 container died 8840c73f6d39f37c88713689da6dc525c3700a5bc443242e58f1769bf8e87f07 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_chebyshev, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 09:50:04 compute-0 systemd[1]: libpod-8840c73f6d39f37c88713689da6dc525c3700a5bc443242e58f1769bf8e87f07.scope: Consumed 1.171s CPU time.
Dec 06 09:50:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-ab2955ddc6b6797ff1af59075fc9e8789b9d23a05616e65f3707c836419f1750-merged.mount: Deactivated successfully.
Dec 06 09:50:04 compute-0 podman[146834]: 2025-12-06 09:50:04.074551014 +0000 UTC m=+0.922472548 container remove 8840c73f6d39f37c88713689da6dc525c3700a5bc443242e58f1769bf8e87f07 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_chebyshev, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Dec 06 09:50:04 compute-0 systemd[1]: libpod-conmon-8840c73f6d39f37c88713689da6dc525c3700a5bc443242e58f1769bf8e87f07.scope: Deactivated successfully.
Dec 06 09:50:04 compute-0 sudo[146575]: pam_unix(sudo:session): session closed for user root
Dec 06 09:50:04 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 06 09:50:04 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:50:04 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 06 09:50:04 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:50:04 compute-0 sudo[147148]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 09:50:04 compute-0 sudo[147148]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:50:04 compute-0 sudo[147148]: pam_unix(sudo:session): session closed for user root
Dec 06 09:50:04 compute-0 sudo[147246]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xwzphdfzryihxbedkrbtysrrijyawrtb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014604.0928123-974-96141734482090/AnsiballZ_command.py'
Dec 06 09:50:04 compute-0 sudo[147246]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:50:04 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:50:04 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcffc003490 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:50:04 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 09:50:04 compute-0 python3.9[147248]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl --timeout=5 --id=@manager -- create Manager target=\"ptcp:********@manager
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 09:50:04 compute-0 ovs-vsctl[147249]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --timeout=5 --id=@manager -- create Manager "target=\"ptcp:6640:127.0.0.1\"" -- add Open_vSwitch . manager_options @manager
Dec 06 09:50:04 compute-0 sudo[147246]: pam_unix(sudo:session): session closed for user root
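Note the `********` in the AnsiballZ entry at 09:50:04: Ansible's log sanitizer appears to have matched `ptcp:6640:...@manager` against its user:password@host heuristic and censored everything between the colon and the `@`. The ovs-vsctl audit line that follows preserves the real command, which creates a Manager row listening on local TCP port 6640 and links it into the Open_vSwitch table in one transaction:

    ovs-vsctl --timeout=5 --id=@manager \
      -- create Manager 'target="ptcp:6640:127.0.0.1"' \
      -- add Open_vSwitch . manager_options @manager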
Dec 06 09:50:05 compute-0 ceph-mon[74327]: pgmap v257: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:50:05 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:50:05 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:50:05 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:50:05 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd028003cc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:50:05 compute-0 python3.9[147400]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 06 09:50:05 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:50:05 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd00c001fc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:50:05 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:50:05 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 09:50:05 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:50:05.641 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 09:50:05 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:50:05 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:50:05 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:50:05.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:50:05 compute-0 sudo[147553]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ceglwipqomspbvhsqvrdbxekwcsslqiq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014605.5875516-1025-218360410604520/AnsiballZ_file.py'
Dec 06 09:50:05 compute-0 sudo[147553]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:50:05 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v258: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:50:06 compute-0 python3.9[147555]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 06 09:50:06 compute-0 sudo[147553]: pam_unix(sudo:session): session closed for user root
Dec 06 09:50:06 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:50:06 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd004003490 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:50:06 compute-0 sudo[147705]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hdmzewcibbhjydkrissgakrjrachzjzr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014606.3037934-1049-154820775447028/AnsiballZ_stat.py'
Dec 06 09:50:06 compute-0 sudo[147705]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:50:06 compute-0 python3.9[147707]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:50:06 compute-0 sudo[147705]: pam_unix(sudo:session): session closed for user root
Dec 06 09:50:07 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:50:07.000Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec 06 09:50:07 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:50:07.002Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
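The two alertmanager entries above mean the ceph-dashboard webhook receivers on compute-1 and compute-2 are unreachable (a connect timeout on one, the overall context deadline on the other), so this dispatch cycle's notification is dropped after two attempts. A quick manual probe of one receiver, bounded the same way alertmanager's deadline is (URL taken verbatim from the log):

    # A timeout here reproduces the dial failure logged above
    curl -m 5 -s -o /dev/null -w '%{http_code}\n' \
      http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver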
Dec 06 09:50:07 compute-0 sudo[147733]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 09:50:07 compute-0 sudo[147733]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:50:07 compute-0 sudo[147733]: pam_unix(sudo:session): session closed for user root
Dec 06 09:50:07 compute-0 sudo[147809]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ntxdanmoethrwszxiodyhnxanlxwpjoe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014606.3037934-1049-154820775447028/AnsiballZ_file.py'
Dec 06 09:50:07 compute-0 sudo[147809]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:50:07 compute-0 ceph-mon[74327]: pgmap v258: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:50:07 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:50:07 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcffc003490 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:50:07 compute-0 python3.9[147811]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 06 09:50:07 compute-0 sudo[147809]: pam_unix(sudo:session): session closed for user root
Dec 06 09:50:07 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:50:07 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd028003ce0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:50:07 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:50:07 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:50:07 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:50:07.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:50:07 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:50:07 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:50:07 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:50:07.747 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:50:07 compute-0 sudo[147962]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-thbgqwdlvhvgmgjrgbhhxhqajsjkuhnv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014607.4666536-1049-134298272243294/AnsiballZ_stat.py'
Dec 06 09:50:07 compute-0 sudo[147962]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:50:07 compute-0 python3.9[147964]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:50:07 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v259: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 09:50:08 compute-0 sudo[147962]: pam_unix(sudo:session): session closed for user root
Dec 06 09:50:08 compute-0 sudo[148040]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kfxkrinscjfqwtcndiaxtqxykbctuipn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014607.4666536-1049-134298272243294/AnsiballZ_file.py'
Dec 06 09:50:08 compute-0 sudo[148040]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:50:08 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:50:08 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd00c001fc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:50:08 compute-0 python3.9[148042]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 06 09:50:08 compute-0 sudo[148040]: pam_unix(sudo:session): session closed for user root
Dec 06 09:50:08 compute-0 sudo[148192]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ktieftyjkwckdpprmlrqwbmkvqqoxpvd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014608.574425-1118-123051050796444/AnsiballZ_file.py'
Dec 06 09:50:08 compute-0 sudo[148192]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:50:08 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 06 09:50:08 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 09:50:09 compute-0 python3.9[148194]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:50:09 compute-0 sudo[148192]: pam_unix(sudo:session): session closed for user root
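One quirk worth flagging in the file task above: `mode=420` is not a typo. The playbook passed the mode as an unquoted integer, so Ansible records it in decimal, and 420 decimal is 0644 octal; quoting modes ("0644") in playbooks avoids the ambiguity. The conversion, for the skeptical:

    # 420 decimal rendered in octal
    printf '%o\n' 420    # prints 644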
Dec 06 09:50:09 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:50:09 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd004003490 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:50:09 compute-0 ceph-mon[74327]: pgmap v259: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 09:50:09 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 09:50:09 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 09:50:09 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:50:09 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcffc003490 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:50:09 compute-0 sudo[148346]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-asgtrjxyuvuqrxqtyflxuhqdumtusnvz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014609.3743637-1142-154303639269307/AnsiballZ_stat.py'
Dec 06 09:50:09 compute-0 sudo[148346]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:50:09 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:50:09 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 09:50:09 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:50:09.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 09:50:09 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:50:09 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:50:09 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:50:09.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:50:09 compute-0 python3.9[148348]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:50:09 compute-0 sudo[148346]: pam_unix(sudo:session): session closed for user root
Dec 06 09:50:09 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v260: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:50:10 compute-0 sudo[148424]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yuxzgvadsimzxpnrwgetbrrtevetnyff ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014609.3743637-1142-154303639269307/AnsiballZ_file.py'
Dec 06 09:50:10 compute-0 sudo[148424]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:50:10 compute-0 python3.9[148426]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:50:10 compute-0 sudo[148424]: pam_unix(sudo:session): session closed for user root
Dec 06 09:50:10 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:50:10 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd028003d00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:50:10 compute-0 sudo[148576]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nyryqlqiwfoxnvfewxtptfjvdovqrncw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014610.504701-1178-204858622316313/AnsiballZ_stat.py'
Dec 06 09:50:10 compute-0 sudo[148576]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:50:10 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:09:50:10] "GET /metrics HTTP/1.1" 200 48256 "" "Prometheus/2.51.0"
Dec 06 09:50:10 compute-0 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:09:50:10] "GET /metrics HTTP/1.1" 200 48256 "" "Prometheus/2.51.0"
Dec 06 09:50:10 compute-0 python3.9[148578]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:50:11 compute-0 sudo[148576]: pam_unix(sudo:session): session closed for user root
Dec 06 09:50:11 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:50:11 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd00c001fc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:50:11 compute-0 ceph-mon[74327]: pgmap v260: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:50:11 compute-0 sudo[148656]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dfmdfypygraxqpmbtrplzfwsslkdvrnr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014610.504701-1178-204858622316313/AnsiballZ_file.py'
Dec 06 09:50:11 compute-0 sudo[148656]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:50:11 compute-0 python3.9[148658]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:50:11 compute-0 sudo[148656]: pam_unix(sudo:session): session closed for user root
Dec 06 09:50:11 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:50:11 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0040041a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:50:11 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:50:11 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000025s ======
Dec 06 09:50:11 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:50:11.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec 06 09:50:11 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:50:11 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:50:11 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:50:11.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:50:11 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v261: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:50:12 compute-0 sudo[148808]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lhkcthoyqcevmgejffdvewghbwmflqkz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014611.7895231-1214-131971283683602/AnsiballZ_systemd.py'
Dec 06 09:50:12 compute-0 sudo[148808]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:50:12 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:50:12 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcffc003490 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:50:12 compute-0 python3.9[148810]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 06 09:50:12 compute-0 systemd[1]: Reloading.
Dec 06 09:50:12 compute-0 systemd-rc-local-generator[148838]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 09:50:12 compute-0 systemd-sysv-generator[148841]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 09:50:12 compute-0 sudo[148808]: pam_unix(sudo:session): session closed for user root
Dec 06 09:50:13 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:50:13 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd028003d20 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:50:13 compute-0 ceph-mon[74327]: pgmap v261: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:50:13 compute-0 sudo[148999]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jdgoyuybpatwvrumjwmuopaorifdpjoj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014613.1665223-1238-16972382986817/AnsiballZ_stat.py'
Dec 06 09:50:13 compute-0 sudo[148999]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:50:13 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:50:13 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd00c001fc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:50:13 compute-0 python3.9[149001]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:50:13 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:50:13 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 09:50:13 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:50:13.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 09:50:13 compute-0 sudo[148999]: pam_unix(sudo:session): session closed for user root
Dec 06 09:50:13 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:50:13 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 09:50:13 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:50:13.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 09:50:13 compute-0 sudo[149077]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uhcvaufizfuxtarikovyrbfumlyqkcju ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014613.1665223-1238-16972382986817/AnsiballZ_file.py'
Dec 06 09:50:13 compute-0 sudo[149077]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:50:13 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v262: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:50:14 compute-0 python3.9[149079]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:50:14 compute-0 sudo[149077]: pam_unix(sudo:session): session closed for user root
Dec 06 09:50:14 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:50:14 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0040041a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:50:14 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 09:50:14 compute-0 sudo[149229]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-brkbbfnbbhvtdobokdvekjpnybonwfhj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014614.4689026-1274-218710198983288/AnsiballZ_stat.py'
Dec 06 09:50:14 compute-0 sudo[149229]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:50:14 compute-0 python3.9[149231]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:50:14 compute-0 sudo[149229]: pam_unix(sudo:session): session closed for user root
Dec 06 09:50:15 compute-0 sudo[149308]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qolgheduuvwqvwrikiluawclbkhvbair ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014614.4689026-1274-218710198983288/AnsiballZ_file.py'
Dec 06 09:50:15 compute-0 sudo[149308]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:50:15 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:50:15 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcffc003490 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:50:15 compute-0 python3.9[149310]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:50:15 compute-0 sudo[149308]: pam_unix(sudo:session): session closed for user root
Dec 06 09:50:15 compute-0 ceph-mon[74327]: pgmap v262: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:50:15 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:50:15 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd028003d40 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:50:15 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:50:15 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:50:15 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:50:15.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:50:15 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:50:15 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:50:15 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:50:15.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:50:15 compute-0 sudo[149461]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qdaggmfppahwavdogbqzryrgzywbumcu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014615.6044962-1310-119355699177552/AnsiballZ_systemd.py'
Dec 06 09:50:15 compute-0 sudo[149461]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:50:15 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v263: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:50:16 compute-0 python3.9[149463]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 06 09:50:16 compute-0 systemd[1]: Reloading.
Dec 06 09:50:16 compute-0 systemd-rc-local-generator[149486]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 09:50:16 compute-0 systemd-sysv-generator[149489]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 09:50:16 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:50:16 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd00c004600 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:50:16 compute-0 systemd[1]: Starting Create netns directory...
Dec 06 09:50:16 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Dec 06 09:50:16 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Dec 06 09:50:16 compute-0 systemd[1]: Finished Create netns directory.
Dec 06 09:50:16 compute-0 sudo[149461]: pam_unix(sudo:session): session closed for user root
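The "Create netns directory" unit (netns-placeholder.service) is a one-shot EDPM helper: judging by the adjacent run-netns-placeholder.mount line, it creates and discards a namespace literally named "placeholder" so that /run/netns exists as a mount before any container tries to bind it. A hypothetical equivalent by hand; the service's actual ExecStart is not shown in the log:

    # Creating any named namespace materializes /run/netns as a mount;
    # the namespace itself can then be dropped
    ip netns add placeholder
    ip netns delete placeholder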
Dec 06 09:50:17 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:50:17.002Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 06 09:50:17 compute-0 sudo[149656]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jtidscshuamspgyldufstbkzndqkvdar ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014616.8721564-1340-150866693317187/AnsiballZ_file.py'
Dec 06 09:50:17 compute-0 sudo[149656]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:50:17 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:50:17 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0040041a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:50:17 compute-0 python3.9[149658]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 06 09:50:17 compute-0 ceph-mon[74327]: pgmap v263: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:50:17 compute-0 sudo[149656]: pam_unix(sudo:session): session closed for user root
Dec 06 09:50:17 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:50:17 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcffc003490 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:50:17 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:50:17 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:50:17 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:50:17.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:50:17 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:50:17 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:50:17 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:50:17.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:50:17 compute-0 sudo[149809]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ssgonkfylbunedogdmyyblvawkgxwyzl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014617.5529356-1364-276889325439279/AnsiballZ_stat.py'
Dec 06 09:50:17 compute-0 sudo[149809]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:50:17 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v264: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Dec 06 09:50:18 compute-0 python3.9[149811]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_controller/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:50:18 compute-0 sudo[149809]: pam_unix(sudo:session): session closed for user root
Dec 06 09:50:18 compute-0 sudo[149932]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tfxpqnojdiduwtgdukxkygerzblvqxyi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014617.5529356-1364-276889325439279/AnsiballZ_copy.py'
Dec 06 09:50:18 compute-0 sudo[149932]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:50:18 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:50:18 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd028003d60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:50:18 compute-0 python3.9[149934]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_controller/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765014617.5529356-1364-276889325439279/.source _original_basename=healthcheck follow=False checksum=4098dd010265fabdf5c26b97d169fc4e575ff457 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec 06 09:50:18 compute-0 sudo[149932]: pam_unix(sudo:session): session closed for user root
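These two tasks stage the ovn_controller healthcheck script under /var/lib/openstack/healthchecks/ovn_controller/ with mode 0700 and the container_file_t label, which lets podman bind-mount it into the service container. A sketch of how such a script is typically exercised once the container is up; the in-container path here is an assumption, not something the log records:

    # Hypothetical: run the staged script inside the ovn_controller container
    podman exec ovn_controller /openstack/healthcheck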
Dec 06 09:50:19 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:50:19 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd028003d60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:50:19 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-haproxy-nfs-cephfs-compute-0-fzuvue[96127]: [WARNING] 339/095019 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
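This haproxy line also explains the ganesha.nfsd `svc_vc_recv ... (will set dead)` EVENTs recurring every couple of seconds throughout the section: the co-located haproxy-nfs-cephfs instance runs Layer4 checks against the NFS backends, and a bare TCP connect-and-close does not parse as an RPC record header, so ganesha marks the connection dead (the stray `%` is a formatting bug in ganesha's own log statement). The same EVENT can be provoked by hand, assuming ganesha listens on the default NFS port:

    # Open and immediately close a TCP connection to ganesha (port 2049
    # assumed); no RPC header is sent, matching an L4 health check
    timeout 1 bash -c 'exec 3<>/dev/tcp/127.0.0.1/2049; exec 3<&-'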
Dec 06 09:50:19 compute-0 sudo[150089]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sbhktksrdotjdyfmuexarodmofwpslgy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014619.0283005-1415-137937346757050/AnsiballZ_file.py'
Dec 06 09:50:19 compute-0 sudo[150089]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:50:19 compute-0 ceph-mon[74327]: pgmap v264: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Dec 06 09:50:19 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 09:50:19 compute-0 python3.9[150091]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 06 09:50:19 compute-0 sudo[150089]: pam_unix(sudo:session): session closed for user root
Dec 06 09:50:19 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:50:19 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd018002490 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:50:19 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:50:19 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:50:19 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:50:19.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:50:19 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:50:19 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:50:19 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:50:19.761 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:50:19 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v265: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec 06 09:50:19 compute-0 sudo[150241]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ccoyjszkrxorkchvwxywdsnfyjlhdiui ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014619.7056198-1439-207679309671299/AnsiballZ_stat.py'
Dec 06 09:50:19 compute-0 sudo[150241]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:50:20 compute-0 python3.9[150243]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_controller.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:50:20 compute-0 sudo[150241]: pam_unix(sudo:session): session closed for user root
Dec 06 09:50:20 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:50:20 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0300037e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:50:20 compute-0 sudo[150364]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eyqwfqltrtqthkcqzarzrtoqpxmmrzrm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014619.7056198-1439-207679309671299/AnsiballZ_copy.py'
Dec 06 09:50:20 compute-0 sudo[150364]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:50:20 compute-0 python3.9[150366]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_controller.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1765014619.7056198-1439-207679309671299/.source.json _original_basename=.mdcixkao follow=False checksum=2328fc98619beeb08ee32b01f15bb43094c10b61 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:50:20 compute-0 sudo[150364]: pam_unix(sudo:session): session closed for user root
Dec 06 09:50:20 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:09:50:20] "GET /metrics HTTP/1.1" 200 48256 "" "Prometheus/2.51.0"
Dec 06 09:50:20 compute-0 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:09:50:20] "GET /metrics HTTP/1.1" 200 48256 "" "Prometheus/2.51.0"
Dec 06 09:50:21 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:50:21 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcffc003490 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:50:21 compute-0 sudo[150517]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lmakxxorcutfprcbpvrboafvhcsmrjvv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014620.9290466-1484-200817983038205/AnsiballZ_file.py'
Dec 06 09:50:21 compute-0 sudo[150517]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:50:21 compute-0 ceph-mon[74327]: pgmap v265: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec 06 09:50:21 compute-0 python3.9[150519]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_controller state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:50:21 compute-0 sudo[150517]: pam_unix(sudo:session): session closed for user root
Dec 06 09:50:21 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:50:21 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd00c004600 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:50:21 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:50:21 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 09:50:21 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:50:21.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 09:50:21 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:50:21 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:50:21 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:50:21.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:50:21 compute-0 sudo[150670]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rcruajmjwjmbswtwcuayzvuazqiptdhw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014621.648771-1508-157449398070879/AnsiballZ_stat.py'
Dec 06 09:50:21 compute-0 sudo[150670]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:50:21 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v266: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec 06 09:50:22 compute-0 sudo[150670]: pam_unix(sudo:session): session closed for user root
Dec 06 09:50:22 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:50:22 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd018002490 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:50:22 compute-0 sudo[150793]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vvtmejlgqxhkyemzhmmxwbzjisgkuxth ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014621.648771-1508-157449398070879/AnsiballZ_copy.py'
Dec 06 09:50:22 compute-0 sudo[150793]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:50:22 compute-0 sudo[150793]: pam_unix(sudo:session): session closed for user root
Dec 06 09:50:23 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:50:23 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0300030f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:50:23 compute-0 ceph-mon[74327]: pgmap v266: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec 06 09:50:23 compute-0 sudo[150947]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nzhawysgockcmkzlrzxqqbhdlrniewkk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014623.1091282-1559-16195537417729/AnsiballZ_container_config_data.py'
Dec 06 09:50:23 compute-0 sudo[150947]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:50:23 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:50:23 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcffc003490 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:50:23 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:50:23 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000025s ======
Dec 06 09:50:23 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:50:23.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec 06 09:50:23 compute-0 python3.9[150949]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_controller config_pattern=*.json debug=False
Dec 06 09:50:23 compute-0 sudo[150947]: pam_unix(sudo:session): session closed for user root
Dec 06 09:50:23 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:50:23 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 09:50:23 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:50:23.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 09:50:23 compute-0 ceph-mgr[74618]: [balancer INFO root] Optimize plan auto_2025-12-06_09:50:23
Dec 06 09:50:23 compute-0 ceph-mgr[74618]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 06 09:50:23 compute-0 ceph-mgr[74618]: [balancer INFO root] do_upmap
Dec 06 09:50:23 compute-0 ceph-mgr[74618]: [balancer INFO root] pools ['.mgr', '.rgw.root', 'images', 'cephfs.cephfs.meta', 'default.rgw.log', 'cephfs.cephfs.data', 'backups', 'default.rgw.meta', 'volumes', '.nfs', 'vms', 'default.rgw.control']
Dec 06 09:50:23 compute-0 ceph-mgr[74618]: [balancer INFO root] prepared 0/10 upmap changes
Dec 06 09:50:23 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 06 09:50:23 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 09:50:23 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 09:50:23 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 09:50:23 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v267: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec 06 09:50:23 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 09:50:23 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 09:50:23 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 09:50:23 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 09:50:23 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] _maybe_adjust
Dec 06 09:50:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 09:50:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 06 09:50:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 09:50:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 09:50:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 09:50:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 09:50:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 09:50:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 09:50:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 09:50:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 09:50:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 09:50:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Dec 06 09:50:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 09:50:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 09:50:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 09:50:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec 06 09:50:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 09:50:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 06 09:50:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 09:50:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec 06 09:50:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 09:50:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 09:50:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 09:50:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 06 09:50:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 06 09:50:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 09:50:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 09:50:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 09:50:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 09:50:24 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:50:24 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcffc003490 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:50:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 06 09:50:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 09:50:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 09:50:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 09:50:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 09:50:24 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 09:50:24 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 09:50:24 compute-0 sudo[151099]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ggbkwsogopnpusyaquviwhmepkbtwlfb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014624.1651402-1586-47603911726077/AnsiballZ_container_config_hash.py'
Dec 06 09:50:24 compute-0 sudo[151099]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:50:24 compute-0 python3.9[151101]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec 06 09:50:24 compute-0 sudo[151099]: pam_unix(sudo:session): session closed for user root
Dec 06 09:50:25 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:50:25 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd018002490 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:50:25 compute-0 ceph-mon[74327]: pgmap v267: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec 06 09:50:25 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:50:25 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0300030f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:50:25 compute-0 sudo[151253]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gdfxjqydojmmocxxbkanxfyvkmeixbpl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014625.104868-1613-144952075707931/AnsiballZ_podman_container_info.py'
Dec 06 09:50:25 compute-0 sudo[151253]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:50:25 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:50:25 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:50:25 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:50:25.670 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:50:25 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:50:25 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 09:50:25 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:50:25.767 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 09:50:25 compute-0 python3.9[151255]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Dec 06 09:50:25 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v268: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec 06 09:50:26 compute-0 sudo[151253]: pam_unix(sudo:session): session closed for user root
Dec 06 09:50:26 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:50:26 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd00c004600 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:50:26 compute-0 ceph-mon[74327]: pgmap v268: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec 06 09:50:26 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:50:26 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 06 09:50:27 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:50:27.004Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 06 09:50:27 compute-0 sudo[151360]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 09:50:27 compute-0 sudo[151360]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:50:27 compute-0 sudo[151360]: pam_unix(sudo:session): session closed for user root
Dec 06 09:50:27 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:50:27 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcffc003490 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:50:27 compute-0 sudo[151459]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uyevxcnvsatxeymnkuxwbaywsqmvjmnd ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1765014626.9332407-1652-194218311452725/AnsiballZ_edpm_container_manage.py'
Dec 06 09:50:27 compute-0 sudo[151459]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:50:27 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:50:27 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd018002490 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:50:27 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:50:27 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 09:50:27 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:50:27.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 09:50:27 compute-0 python3[151461]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_controller config_id=ovn_controller config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Dec 06 09:50:27 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:50:27 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:50:27 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:50:27.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:50:27 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v269: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 596 B/s wr, 2 op/s
Dec 06 09:50:28 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:50:28 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0300030f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:50:29 compute-0 ceph-mon[74327]: pgmap v269: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 596 B/s wr, 2 op/s
Dec 06 09:50:29 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:50:29 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd00c004600 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:50:29 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 09:50:29 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:50:29 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcffc003490 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:50:29 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:50:29 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:50:29 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:50:29.675 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:50:29 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:50:29 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 09:50:29 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:50:29.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 09:50:29 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v270: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 596 B/s wr, 1 op/s
Dec 06 09:50:30 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:50:30 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 06 09:50:30 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:50:30 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 06 09:50:30 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:50:30 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd018003320 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:50:30 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:50:30 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 06 09:50:30 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:09:50:30] "GET /metrics HTTP/1.1" 200 48256 "" "Prometheus/2.51.0"
Dec 06 09:50:30 compute-0 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:09:50:30] "GET /metrics HTTP/1.1" 200 48256 "" "Prometheus/2.51.0"
Dec 06 09:50:31 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:50:31 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd030002260 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:50:31 compute-0 ceph-mon[74327]: pgmap v270: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 596 B/s wr, 1 op/s
Dec 06 09:50:31 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:50:31 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd00c004600 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:50:31 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:50:31 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000025s ======
Dec 06 09:50:31 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:50:31.677 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec 06 09:50:31 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:50:31 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:50:31 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:50:31.773 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:50:31 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v271: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 596 B/s wr, 1 op/s
Dec 06 09:50:32 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:50:32 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcffc0041a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:50:32 compute-0 podman[151474]: 2025-12-06 09:50:32.838973163 +0000 UTC m=+5.095633200 image pull 3a37a52861b2e44ebd2a63ca2589a7c9d8e4119e5feace9d19c6312ed9b8421c quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c
Dec 06 09:50:32 compute-0 podman[151596]: 2025-12-06 09:50:32.964792592 +0000 UTC m=+0.046010517 container create ed08b04f63d3695f0aa2f04fcc706897af4aa24f286fa3cd10a4323ee2c868ab (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, org.label-schema.build-date=20251125, config_id=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec 06 09:50:32 compute-0 podman[151596]: 2025-12-06 09:50:32.939115609 +0000 UTC m=+0.020333544 image pull 3a37a52861b2e44ebd2a63ca2589a7c9d8e4119e5feace9d19c6312ed9b8421c quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c
Dec 06 09:50:32 compute-0 python3[151461]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_controller --conmon-pidfile /run/ovn_controller.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck --label config_id=ovn_controller --label container_name=ovn_controller --label managed_by=edpm_ansible --label config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --user root --volume /lib/modules:/lib/modules:ro --volume /run:/run --volume /var/lib/openvswitch/ovn:/run/ovn:shared,z --volume /var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c
Dec 06 09:50:33 compute-0 sudo[151459]: pam_unix(sudo:session): session closed for user root
Dec 06 09:50:33 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:50:33 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd018003320 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:50:33 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:50:33 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Dec 06 09:50:33 compute-0 ceph-mon[74327]: pgmap v271: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 596 B/s wr, 1 op/s
Dec 06 09:50:33 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:50:33 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd030002260 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:50:33 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:50:33 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 09:50:33 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:50:33.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 09:50:33 compute-0 sudo[151786]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qysutjgznqqworxiicnqvyrlksgxcnyg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014633.4667594-1676-94892728094257/AnsiballZ_stat.py'
Dec 06 09:50:33 compute-0 sudo[151786]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:50:33 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:50:33 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:50:33 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:50:33.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:50:33 compute-0 python3.9[151788]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 06 09:50:33 compute-0 sudo[151786]: pam_unix(sudo:session): session closed for user root
Dec 06 09:50:33 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v272: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s rd, 937 B/s wr, 3 op/s
Dec 06 09:50:34 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:50:34 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd00c004600 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:50:34 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 09:50:34 compute-0 sudo[151940]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hbbfzhdrqphleyyqcmiftmbcdmrqcego ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014634.4554446-1703-179669648536197/AnsiballZ_file.py'
Dec 06 09:50:34 compute-0 sudo[151940]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:50:34 compute-0 python3.9[151942]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_controller.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:50:34 compute-0 sudo[151940]: pam_unix(sudo:session): session closed for user root
Dec 06 09:50:35 compute-0 sudo[152017]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pyrjekyieuvavvqewdtpsmfkydsxmskj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014634.4554446-1703-179669648536197/AnsiballZ_stat.py'
Dec 06 09:50:35 compute-0 sudo[152017]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:50:35 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:50:35 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd00c004600 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:50:35 compute-0 python3.9[152019]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_controller_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 06 09:50:35 compute-0 sudo[152017]: pam_unix(sudo:session): session closed for user root
Dec 06 09:50:35 compute-0 ceph-mon[74327]: pgmap v272: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s rd, 937 B/s wr, 3 op/s
Dec 06 09:50:35 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:50:35 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd018003320 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:50:35 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:50:35 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:50:35 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:50:35.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:50:35 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:50:35 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:50:35 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:50:35.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:50:35 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v273: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s rd, 938 B/s wr, 3 op/s
Dec 06 09:50:36 compute-0 sudo[152169]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qnnhagcdfjizeanywzsmgivlqofvcilt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014635.453923-1703-231319631757280/AnsiballZ_copy.py'
Dec 06 09:50:36 compute-0 sudo[152169]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:50:36 compute-0 python3.9[152171]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765014635.453923-1703-231319631757280/source dest=/etc/systemd/system/edpm_ovn_controller.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:50:36 compute-0 sudo[152169]: pam_unix(sudo:session): session closed for user root
Dec 06 09:50:36 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:50:36 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd030002260 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:50:36 compute-0 sudo[152245]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qvompbupvttnmpacrgpayaijxnajnbja ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014635.453923-1703-231319631757280/AnsiballZ_systemd.py'
Dec 06 09:50:36 compute-0 sudo[152245]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:50:36 compute-0 ceph-mon[74327]: pgmap v273: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s rd, 938 B/s wr, 3 op/s
Dec 06 09:50:36 compute-0 python3.9[152247]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 06 09:50:36 compute-0 systemd[1]: Reloading.
Dec 06 09:50:36 compute-0 systemd-rc-local-generator[152274]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 09:50:36 compute-0 systemd-sysv-generator[152278]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 09:50:37 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:50:37.006Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec 06 09:50:37 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:50:37.008Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 06 09:50:37 compute-0 sudo[152245]: pam_unix(sudo:session): session closed for user root
Dec 06 09:50:37 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:50:37 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd00c004600 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:50:37 compute-0 sudo[152358]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fomujxiveireoylruibijaxhofxkibch ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014635.453923-1703-231319631757280/AnsiballZ_systemd.py'
Dec 06 09:50:37 compute-0 sudo[152358]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:50:37 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:50:37 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcffc0041a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:50:37 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:50:37 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 09:50:37 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:50:37.683 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 09:50:37 compute-0 python3.9[152360]: ansible-systemd Invoked with state=restarted name=edpm_ovn_controller.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 06 09:50:37 compute-0 systemd[1]: Reloading.
Dec 06 09:50:37 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:50:37 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 09:50:37 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:50:37.779 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 09:50:37 compute-0 systemd-rc-local-generator[152389]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 09:50:37 compute-0 systemd-sysv-generator[152393]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 09:50:37 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v274: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.9 KiB/s rd, 1023 B/s wr, 4 op/s
Dec 06 09:50:38 compute-0 systemd[1]: Starting ovn_controller container...
Dec 06 09:50:38 compute-0 systemd[1]: Started libcrun container.
Dec 06 09:50:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/613f756a1f73dc4a39e91ac477d7099b198677971a4e550307612100884cde52/merged/run/ovn supports timestamps until 2038 (0x7fffffff)
Dec 06 09:50:38 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run ed08b04f63d3695f0aa2f04fcc706897af4aa24f286fa3cd10a4323ee2c868ab.
Dec 06 09:50:38 compute-0 podman[152401]: 2025-12-06 09:50:38.180955131 +0000 UTC m=+0.132340461 container init ed08b04f63d3695f0aa2f04fcc706897af4aa24f286fa3cd10a4323ee2c868ab (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Dec 06 09:50:38 compute-0 ovn_controller[152417]: + sudo -E kolla_set_configs
Dec 06 09:50:38 compute-0 podman[152401]: 2025-12-06 09:50:38.211376318 +0000 UTC m=+0.162761658 container start ed08b04f63d3695f0aa2f04fcc706897af4aa24f286fa3cd10a4323ee2c868ab (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, io.buildah.version=1.41.3)
Dec 06 09:50:38 compute-0 edpm-start-podman-container[152401]: ovn_controller
Dec 06 09:50:38 compute-0 systemd[1]: Created slice User Slice of UID 0.
Dec 06 09:50:38 compute-0 systemd[1]: Starting User Runtime Directory /run/user/0...
Dec 06 09:50:38 compute-0 edpm-start-podman-container[152400]: Creating additional drop-in dependency for "ovn_controller" (ed08b04f63d3695f0aa2f04fcc706897af4aa24f286fa3cd10a4323ee2c868ab)
Dec 06 09:50:38 compute-0 systemd[1]: Finished User Runtime Directory /run/user/0.
Dec 06 09:50:38 compute-0 podman[152424]: 2025-12-06 09:50:38.288571653 +0000 UTC m=+0.066761532 container health_status ed08b04f63d3695f0aa2f04fcc706897af4aa24f286fa3cd10a4323ee2c868ab (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=starting, health_failing_streak=1, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_id=ovn_controller, io.buildah.version=1.41.3)
Dec 06 09:50:38 compute-0 systemd[1]: Starting User Manager for UID 0...
Dec 06 09:50:38 compute-0 systemd[1]: ed08b04f63d3695f0aa2f04fcc706897af4aa24f286fa3cd10a4323ee2c868ab-2e149f55b40c15c1.service: Main process exited, code=exited, status=1/FAILURE
Dec 06 09:50:38 compute-0 systemd[1]: ed08b04f63d3695f0aa2f04fcc706897af4aa24f286fa3cd10a4323ee2c868ab-2e149f55b40c15c1.service: Failed with result 'exit-code'.
Dec 06 09:50:38 compute-0 systemd[1]: Reloading.
Dec 06 09:50:38 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:50:38 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd018003320 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:50:38 compute-0 systemd-rc-local-generator[152493]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 09:50:38 compute-0 systemd-sysv-generator[152496]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 09:50:38 compute-0 systemd[1]: Started ovn_controller container.
Dec 06 09:50:38 compute-0 systemd[152463]: pam_unix(systemd-user:session): session opened for user root(uid=0) by root(uid=0)
Dec 06 09:50:38 compute-0 sudo[152358]: pam_unix(sudo:session): session closed for user root
Dec 06 09:50:38 compute-0 systemd[152463]: Queued start job for default target Main User Target.
Dec 06 09:50:38 compute-0 systemd[152463]: Created slice User Application Slice.
Dec 06 09:50:38 compute-0 systemd[152463]: Mark boot as successful after the user session has run 2 minutes was skipped because of an unmet condition check (ConditionUser=!@system).
Dec 06 09:50:38 compute-0 systemd[152463]: Started Daily Cleanup of User's Temporary Directories.
Dec 06 09:50:38 compute-0 systemd[152463]: Reached target Paths.
Dec 06 09:50:38 compute-0 systemd[152463]: Reached target Timers.
Dec 06 09:50:38 compute-0 systemd[152463]: Starting D-Bus User Message Bus Socket...
Dec 06 09:50:38 compute-0 systemd[152463]: Starting Create User's Volatile Files and Directories...
Dec 06 09:50:38 compute-0 systemd[152463]: Listening on D-Bus User Message Bus Socket.
Dec 06 09:50:38 compute-0 systemd[152463]: Reached target Sockets.
Dec 06 09:50:38 compute-0 systemd[152463]: Finished Create User's Volatile Files and Directories.
Dec 06 09:50:38 compute-0 systemd[152463]: Reached target Basic System.
Dec 06 09:50:38 compute-0 systemd[152463]: Reached target Main User Target.
Dec 06 09:50:38 compute-0 systemd[152463]: Startup finished in 151ms.
Dec 06 09:50:38 compute-0 systemd[1]: Started User Manager for UID 0.
Dec 06 09:50:38 compute-0 systemd[1]: Started Session c1 of User root.
Dec 06 09:50:38 compute-0 ovn_controller[152417]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Dec 06 09:50:38 compute-0 ovn_controller[152417]: INFO:__main__:Validating config file
Dec 06 09:50:38 compute-0 ovn_controller[152417]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Dec 06 09:50:38 compute-0 ovn_controller[152417]: INFO:__main__:Writing out command to execute
Dec 06 09:50:38 compute-0 systemd[1]: session-c1.scope: Deactivated successfully.
Dec 06 09:50:38 compute-0 ovn_controller[152417]: ++ cat /run_command
Dec 06 09:50:38 compute-0 ovn_controller[152417]: + CMD='/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Dec 06 09:50:38 compute-0 ovn_controller[152417]: + ARGS=
Dec 06 09:50:38 compute-0 ovn_controller[152417]: + sudo kolla_copy_cacerts
Dec 06 09:50:38 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 06 09:50:38 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 09:50:38 compute-0 systemd[1]: Started Session c2 of User root.
Dec 06 09:50:38 compute-0 systemd[1]: session-c2.scope: Deactivated successfully.
Dec 06 09:50:38 compute-0 ovn_controller[152417]: Running command: '/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Dec 06 09:50:38 compute-0 ovn_controller[152417]: + [[ ! -n '' ]]
Dec 06 09:50:38 compute-0 ovn_controller[152417]: + . kolla_extend_start
Dec 06 09:50:38 compute-0 ovn_controller[152417]: + echo 'Running command: '\''/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '\'''
Dec 06 09:50:38 compute-0 ovn_controller[152417]: + umask 0022
Dec 06 09:50:38 compute-0 ovn_controller[152417]: + exec /usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt
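
The shell trace above is the generic kolla_start flow: the entrypoint validates /var/lib/kolla/config_files/config.json, copies configuration into place, reads the service command from /run_command, and execs it so the container's main process becomes ovn-controller itself. A minimal Python sketch of that final hand-off, assuming the same /run_command convention:

    import os
    import shlex

    # Read the command the config step wrote out (path taken from the
    # trace above), then replace this process with it, as 'exec' does.
    with open("/run_command") as f:
        cmd = shlex.split(f.read())

    os.umask(0o022)          # mirrors the 'umask 0022' step in the trace
    os.execvp(cmd[0], cmd)   # the container's PID 1 becomes ovn-controller
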
Dec 06 09:50:38 compute-0 ovn_controller[152417]: 2025-12-06T09:50:38Z|00001|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Dec 06 09:50:38 compute-0 ovn_controller[152417]: 2025-12-06T09:50:38Z|00002|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Dec 06 09:50:38 compute-0 ovn_controller[152417]: 2025-12-06T09:50:38Z|00003|main|INFO|OVN internal version is : [24.03.8-20.33.0-76.8]
Dec 06 09:50:38 compute-0 ovn_controller[152417]: 2025-12-06T09:50:38Z|00004|main|INFO|OVS IDL reconnected, force recompute.
Dec 06 09:50:38 compute-0 ovn_controller[152417]: 2025-12-06T09:50:38Z|00005|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Dec 06 09:50:38 compute-0 ovn_controller[152417]: 2025-12-06T09:50:38Z|00006|main|INFO|OVNSB IDL reconnected, force recompute.
Dec 06 09:50:38 compute-0 NetworkManager[48882]: <info>  [1765014638.9654] manager: (br-int): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/16)
Dec 06 09:50:38 compute-0 NetworkManager[48882]: <info>  [1765014638.9665] device (br-int)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 06 09:50:38 compute-0 NetworkManager[48882]: <info>  [1765014638.9679] manager: (br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/17)
Dec 06 09:50:38 compute-0 NetworkManager[48882]: <info>  [1765014638.9687] manager: (br-int): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/18)
Dec 06 09:50:38 compute-0 NetworkManager[48882]: <info>  [1765014638.9691] device (br-int)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Dec 06 09:50:38 compute-0 ovn_controller[152417]: 2025-12-06T09:50:38Z|00007|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connected
Dec 06 09:50:38 compute-0 kernel: br-int: entered promiscuous mode
Dec 06 09:50:38 compute-0 ovn_controller[152417]: 2025-12-06T09:50:38Z|00008|features|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Dec 06 09:50:38 compute-0 ovn_controller[152417]: 2025-12-06T09:50:38Z|00009|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Dec 06 09:50:38 compute-0 ovn_controller[152417]: 2025-12-06T09:50:38Z|00010|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Dec 06 09:50:38 compute-0 ovn_controller[152417]: 2025-12-06T09:50:38Z|00011|ofctrl|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Dec 06 09:50:38 compute-0 ovn_controller[152417]: 2025-12-06T09:50:38Z|00012|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Dec 06 09:50:38 compute-0 ovn_controller[152417]: 2025-12-06T09:50:38Z|00013|features|INFO|OVS Feature: ct_zero_snat, state: supported
Dec 06 09:50:38 compute-0 ovn_controller[152417]: 2025-12-06T09:50:38Z|00014|features|INFO|OVS Feature: ct_flush, state: supported
Dec 06 09:50:38 compute-0 ovn_controller[152417]: 2025-12-06T09:50:38Z|00015|features|INFO|OVS Feature: dp_hash_l4_sym_support, state: supported
Dec 06 09:50:38 compute-0 ovn_controller[152417]: 2025-12-06T09:50:38Z|00016|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Dec 06 09:50:38 compute-0 ovn_controller[152417]: 2025-12-06T09:50:38Z|00017|main|INFO|OVS feature set changed, force recompute.
Dec 06 09:50:38 compute-0 ovn_controller[152417]: 2025-12-06T09:50:38Z|00018|features|INFO|OVS DB schema supports 4 flow table prefixes, our IDL supports: 4
Dec 06 09:50:38 compute-0 ovn_controller[152417]: 2025-12-06T09:50:38Z|00019|main|INFO|Setting flow table prefixes: ip_src, ip_dst, ipv6_src, ipv6_dst.
Dec 06 09:50:38 compute-0 ovn_controller[152417]: 2025-12-06T09:50:38Z|00020|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Dec 06 09:50:38 compute-0 ovn_controller[152417]: 2025-12-06T09:50:38Z|00021|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Dec 06 09:50:38 compute-0 ovn_controller[152417]: 2025-12-06T09:50:38Z|00022|ofctrl|INFO|ofctrl-wait-before-clear is now 8000 ms (was 0 ms)
Dec 06 09:50:38 compute-0 ovn_controller[152417]: 2025-12-06T09:50:38Z|00023|main|INFO|OVS OpenFlow connection reconnected, force recompute.
Dec 06 09:50:38 compute-0 ovn_controller[152417]: 2025-12-06T09:50:38Z|00024|main|INFO|OVS feature set changed, force recompute.
Dec 06 09:50:38 compute-0 ovn_controller[152417]: 2025-12-06T09:50:38Z|00001|pinctrl(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Dec 06 09:50:38 compute-0 ovn_controller[152417]: 2025-12-06T09:50:38Z|00002|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Dec 06 09:50:38 compute-0 ovn_controller[152417]: 2025-12-06T09:50:38Z|00001|statctrl(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Dec 06 09:50:38 compute-0 ovn_controller[152417]: 2025-12-06T09:50:38Z|00002|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Dec 06 09:50:38 compute-0 ovn_controller[152417]: 2025-12-06T09:50:38Z|00003|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Dec 06 09:50:38 compute-0 ovn_controller[152417]: 2025-12-06T09:50:38Z|00003|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Dec 06 09:50:38 compute-0 NetworkManager[48882]: <info>  [1765014638.9838] manager: (ovn-127282-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/19)
Dec 06 09:50:38 compute-0 NetworkManager[48882]: <info>  [1765014638.9846] manager: (ovn-1b31b2-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/20)
Dec 06 09:50:38 compute-0 NetworkManager[48882]: <info>  [1765014638.9918] manager: (ovn-61eba4-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/21)
Dec 06 09:50:38 compute-0 kernel: genev_sys_6081: entered promiscuous mode
Dec 06 09:50:38 compute-0 NetworkManager[48882]: <info>  [1765014638.9995] device (genev_sys_6081): carrier: link connected
Dec 06 09:50:38 compute-0 NetworkManager[48882]: <info>  [1765014638.9998] manager: (genev_sys_6081): new Generic device (/org/freedesktop/NetworkManager/Devices/22)
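
ovn-controller discovers its southbound database (the ssl:ovsdbserver-sb.openstack.svc:6642 remote it connects to above) and its tunnel settings from the local Open_vSwitch table's external_ids; the genev_sys_6081 device appearing here is the Geneve tunnel interface that encapsulation setting produces. A hedged sketch of how those keys are typically seeded, where ovn-remote matches the log and the encap IP is an assumption for this host:

    import subprocess

    # external_ids keys that ovn-controller reads at startup; ovn-remote
    # is copied from the log above, ovn-encap-ip is an assumed local value.
    settings = {
        "ovn-remote": "ssl:ovsdbserver-sb.openstack.svc:6642",
        "ovn-encap-type": "geneve",
        "ovn-encap-ip": "192.168.122.100",
    }
    for key, value in settings.items():
        subprocess.run(
            ["ovs-vsctl", "set", "open", ".", f"external_ids:{key}={value}"],
            check=True,
        )
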
Dec 06 09:50:39 compute-0 systemd-udevd[152565]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 09:50:39 compute-0 systemd-udevd[152566]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 09:50:39 compute-0 ceph-mon[74327]: pgmap v274: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.9 KiB/s rd, 1023 B/s wr, 4 op/s
Dec 06 09:50:39 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 09:50:39 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:50:39 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd018003320 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:50:39 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-haproxy-nfs-cephfs-compute-0-fzuvue[96127]: [WARNING] 339/095039 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Dec 06 09:50:39 compute-0 sudo[152684]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qvgoyyvopndmbptyyynseiryawdizixj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014639.038385-1787-130446043134850/AnsiballZ_command.py'
Dec 06 09:50:39 compute-0 sudo[152684]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:50:39 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 09:50:39 compute-0 python3.9[152686]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove open . other_config hw-offload _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 09:50:39 compute-0 ovs-vsctl[152687]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove open . other_config hw-offload
Dec 06 09:50:39 compute-0 sudo[152684]: pam_unix(sudo:session): session closed for user root
Dec 06 09:50:39 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:50:39 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd00c004600 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:50:39 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:50:39 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:50:39 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:50:39.686 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:50:39 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:50:39 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:50:39 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:50:39.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:50:39 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v275: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 426 B/s wr, 2 op/s
Dec 06 09:50:40 compute-0 sudo[152837]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xlddbosgupxjrttbywzylwbvniocpctt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014639.8238375-1811-205890406532206/AnsiballZ_command.py'
Dec 06 09:50:40 compute-0 sudo[152837]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:50:40 compute-0 python3.9[152839]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl get Open_vSwitch . external_ids:ovn-cms-options | sed 's/\"//g' _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 09:50:40 compute-0 ovs-vsctl[152841]: ovs|00001|db_ctl_base|ERR|no key "ovn-cms-options" in Open_vSwitch record "." column external_ids
Dec 06 09:50:40 compute-0 sudo[152837]: pam_unix(sudo:session): session closed for user root
Dec 06 09:50:40 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:50:40 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcffc0041a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:50:40 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:09:50:40] "GET /metrics HTTP/1.1" 200 48258 "" "Prometheus/2.51.0"
Dec 06 09:50:40 compute-0 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:09:50:40] "GET /metrics HTTP/1.1" 200 48258 "" "Prometheus/2.51.0"
Dec 06 09:50:41 compute-0 ceph-mon[74327]: pgmap v275: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 426 B/s wr, 2 op/s
Dec 06 09:50:41 compute-0 sudo[152993]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kjtnacnlmjiqlaovfyafhpcslwetqagu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014640.9222684-1853-68573664149560/AnsiballZ_command.py'
Dec 06 09:50:41 compute-0 sudo[152993]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:50:41 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:50:41 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd018003320 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:50:41 compute-0 python3.9[152995]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 09:50:41 compute-0 ovs-vsctl[152997]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options
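
The two tasks above show both ovs-vsctl patterns: 'remove' on a missing key succeeds silently (which is why clearing hw-offload and ovn-cms-options is safe to rerun), while a bare 'get' on a missing key fails, as the db_ctl_base error above shows. A sketch of the tolerant form, using --if-exists so the lookup returns empty output instead of erroring:

    import subprocess

    # Idempotent removal: a no-op when the key is already absent.
    subprocess.run(
        ["ovs-vsctl", "remove", "Open_vSwitch", ".", "external_ids",
         "ovn-cms-options"],
        check=True,
    )

    # Tolerant lookup: --if-exists yields empty output for a missing key;
    # present values come back quoted, hence the playbook's sed 's/\"//g'.
    result = subprocess.run(
        ["ovs-vsctl", "--if-exists", "get", "Open_vSwitch", ".",
         "external_ids:ovn-cms-options"],
        capture_output=True, text=True, check=True,
    )
    print(result.stdout.strip().strip('"'))
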
Dec 06 09:50:41 compute-0 sudo[152993]: pam_unix(sudo:session): session closed for user root
Dec 06 09:50:41 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:50:41 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd030002260 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:50:41 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:50:41 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 09:50:41 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:50:41.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 09:50:41 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:50:41 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:50:41 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:50:41.783 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:50:41 compute-0 sshd-session[140972]: Connection closed by 192.168.122.30 port 59154
Dec 06 09:50:41 compute-0 sshd-session[140969]: pam_unix(sshd:session): session closed for user zuul
Dec 06 09:50:41 compute-0 systemd[1]: session-50.scope: Deactivated successfully.
Dec 06 09:50:41 compute-0 systemd[1]: session-50.scope: Consumed 58.239s CPU time.
Dec 06 09:50:41 compute-0 systemd-logind[795]: Session 50 logged out. Waiting for processes to exit.
Dec 06 09:50:41 compute-0 systemd-logind[795]: Removed session 50.
Dec 06 09:50:41 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v276: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 426 B/s wr, 2 op/s
Dec 06 09:50:42 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:50:42 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd00c004600 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:50:43 compute-0 ceph-mon[74327]: pgmap v276: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 426 B/s wr, 2 op/s
Dec 06 09:50:43 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:50:43 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcffc0041a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:50:43 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:50:43 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd018003320 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:50:43 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:50:43 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 09:50:43 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:50:43.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 09:50:43 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:50:43 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:50:43 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:50:43.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:50:43 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v277: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 426 B/s wr, 2 op/s
Dec 06 09:50:44 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:50:44 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd030002260 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:50:44 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 09:50:45 compute-0 ceph-mon[74327]: pgmap v277: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 426 B/s wr, 2 op/s
Dec 06 09:50:45 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:50:45 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd00c004600 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:50:45 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:50:45 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcffc0041a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:50:45 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:50:45 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:50:45 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:50:45.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:50:45 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:50:45 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:50:45 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:50:45.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:50:45 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v278: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Dec 06 09:50:46 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:50:46 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcffc0041a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:50:47 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:50:47.009Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec 06 09:50:47 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:50:47.009Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec 06 09:50:47 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:50:47.010Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
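
The dispatcher entries above show both failure modes against the dashboard receivers: a dial timeout to port 8443 on compute-1/compute-2, then retries cancelled on the context deadline. A small probe for reproducing the reachability side, with the URL copied from the log and the timeout an assumed value:

    import urllib.request

    # Probe one of the failing webhook receivers; a connect timeout here
    # matches the 'dial tcp ... i/o timeout' the dispatcher reports.
    url = "http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver"
    try:
        urllib.request.urlopen(url, timeout=5)
    except OSError as exc:
        print(f"receiver unreachable: {exc}")
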
Dec 06 09:50:47 compute-0 ceph-mon[74327]: pgmap v278: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Dec 06 09:50:47 compute-0 sudo[153027]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 09:50:47 compute-0 sudo[153027]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:50:47 compute-0 sudo[153027]: pam_unix(sudo:session): session closed for user root
Dec 06 09:50:47 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:50:47 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd018003320 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:50:47 compute-0 sshd-session[153053]: Accepted publickey for zuul from 192.168.122.30 port 36022 ssh2: ECDSA SHA256:r1j7aLsKAM+XxDNbzEU5vWGpGNCOaIBwc7FZdATPttA
Dec 06 09:50:47 compute-0 systemd-logind[795]: New session 52 of user zuul.
Dec 06 09:50:47 compute-0 systemd[1]: Started Session 52 of User zuul.
Dec 06 09:50:47 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:50:47 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd00c004600 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:50:47 compute-0 sshd-session[153053]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 06 09:50:47 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:50:47 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 09:50:47 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:50:47.695 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 09:50:47 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:50:47 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:50:47 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:50:47.789 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:50:47 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v279: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Dec 06 09:50:48 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:50:48 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcffc0041a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:50:48 compute-0 python3.9[153206]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 06 09:50:49 compute-0 systemd[1]: Stopping User Manager for UID 0...
Dec 06 09:50:49 compute-0 systemd[152463]: Activating special unit Exit the Session...
Dec 06 09:50:49 compute-0 systemd[152463]: Stopped target Main User Target.
Dec 06 09:50:49 compute-0 systemd[152463]: Stopped target Basic System.
Dec 06 09:50:49 compute-0 systemd[152463]: Stopped target Paths.
Dec 06 09:50:49 compute-0 systemd[152463]: Stopped target Sockets.
Dec 06 09:50:49 compute-0 systemd[152463]: Stopped target Timers.
Dec 06 09:50:49 compute-0 systemd[152463]: Stopped Daily Cleanup of User's Temporary Directories.
Dec 06 09:50:49 compute-0 systemd[152463]: Closed D-Bus User Message Bus Socket.
Dec 06 09:50:49 compute-0 systemd[152463]: Stopped Create User's Volatile Files and Directories.
Dec 06 09:50:49 compute-0 systemd[152463]: Removed slice User Application Slice.
Dec 06 09:50:49 compute-0 systemd[152463]: Reached target Shutdown.
Dec 06 09:50:49 compute-0 systemd[152463]: Finished Exit the Session.
Dec 06 09:50:49 compute-0 systemd[152463]: Reached target Exit the Session.
Dec 06 09:50:49 compute-0 systemd[1]: user@0.service: Deactivated successfully.
Dec 06 09:50:49 compute-0 systemd[1]: Stopped User Manager for UID 0.
Dec 06 09:50:49 compute-0 systemd[1]: Stopping User Runtime Directory /run/user/0...
Dec 06 09:50:49 compute-0 systemd[1]: run-user-0.mount: Deactivated successfully.
Dec 06 09:50:49 compute-0 systemd[1]: user-runtime-dir@0.service: Deactivated successfully.
Dec 06 09:50:49 compute-0 systemd[1]: Stopped User Runtime Directory /run/user/0.
Dec 06 09:50:49 compute-0 systemd[1]: Removed slice User Slice of UID 0.
Dec 06 09:50:49 compute-0 ceph-mon[74327]: pgmap v279: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Dec 06 09:50:49 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:50:49 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd030002260 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:50:49 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 09:50:49 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:50:49 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd018003320 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:50:49 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:50:49 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 09:50:49 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:50:49.698 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 09:50:49 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:50:49 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 09:50:49 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:50:49.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 09:50:49 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v280: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:50:50 compute-0 sudo[153364]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fegmwjoqtnzkcziyzlbweywhjeziuhpq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014649.6232278-62-156912027218424/AnsiballZ_file.py'
Dec 06 09:50:50 compute-0 sudo[153364]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:50:50 compute-0 python3.9[153366]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec 06 09:50:50 compute-0 sudo[153364]: pam_unix(sudo:session): session closed for user root
Dec 06 09:50:50 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:50:50 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd018003320 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:50:50 compute-0 ceph-mon[74327]: pgmap v280: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:50:50 compute-0 sudo[153519]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nulzhrzslalcpmcaybpjoseulgcogafr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014650.4037411-62-228386465593544/AnsiballZ_file.py'
Dec 06 09:50:50 compute-0 sudo[153519]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:50:50 compute-0 python3.9[153521]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 06 09:50:50 compute-0 sudo[153519]: pam_unix(sudo:session): session closed for user root
Dec 06 09:50:50 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:09:50:50] "GET /metrics HTTP/1.1" 200 48258 "" "Prometheus/2.51.0"
Dec 06 09:50:50 compute-0 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:09:50:50] "GET /metrics HTTP/1.1" 200 48258 "" "Prometheus/2.51.0"
Dec 06 09:50:51 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:50:51 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd018003320 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:50:51 compute-0 sudo[153673]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rkrbmqdjovzsxgtcuzfjgddltchifntw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014651.0356872-62-159629694993362/AnsiballZ_file.py'
Dec 06 09:50:51 compute-0 sudo[153673]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:50:51 compute-0 python3.9[153675]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/kill_scripts setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 06 09:50:51 compute-0 sudo[153673]: pam_unix(sudo:session): session closed for user root
Dec 06 09:50:51 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:50:51 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd030002260 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:50:51 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:50:51 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 09:50:51 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:50:51.701 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 09:50:51 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:50:51 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:50:51 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:50:51.794 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:50:51 compute-0 sudo[153825]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-knfealbkklxunfyargkepbzziifaasys ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014651.6538103-62-255138338119247/AnsiballZ_file.py'
Dec 06 09:50:51 compute-0 sudo[153825]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:50:51 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v281: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:50:52 compute-0 python3.9[153827]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/ovn-metadata-proxy setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 06 09:50:52 compute-0 sudo[153825]: pam_unix(sudo:session): session closed for user root
Dec 06 09:50:52 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:50:52 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd004002690 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:50:52 compute-0 sudo[153977]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qmlycocxrqkttieyoyyentknhgsebmwj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014652.2492785-62-241592640178878/AnsiballZ_file.py'
Dec 06 09:50:52 compute-0 sudo[153977]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:50:52 compute-0 python3.9[153979]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/external/pids setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 06 09:50:52 compute-0 sudo[153977]: pam_unix(sudo:session): session closed for user root
Dec 06 09:50:53 compute-0 ceph-mon[74327]: pgmap v281: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:50:53 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:50:53 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd028001080 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:50:53 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:50:53 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd018003320 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:50:53 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:50:53 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 09:50:53 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:50:53.705 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 09:50:53 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:50:53 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:50:53 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:50:53.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:50:53 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 06 09:50:53 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 09:50:53 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 09:50:53 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 09:50:53 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 09:50:53 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 09:50:53 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 09:50:53 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 09:50:53 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v282: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:50:54 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 09:50:54 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:50:54 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd030002260 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:50:54 compute-0 python3.9[154132]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 06 09:50:54 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 09:50:55 compute-0 sudo[154283]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-frwuzcpwodwrmkovcpgmekqzsovndtrj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014654.7084892-194-48164339735342/AnsiballZ_seboolean.py'
Dec 06 09:50:55 compute-0 sudo[154283]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:50:55 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:50:55 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd004002690 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:50:55 compute-0 python3.9[154285]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
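
The seboolean task above (persistent=True, state=True) is the module form of flipping an SELinux boolean; the equivalent one-liner is setsebool -P, where -P persists the change across reboots. A minimal sketch:

    import subprocess

    # Same effect as the Ansible task: enable the boolean and persist it.
    subprocess.run(
        ["setsebool", "-P", "virt_sandbox_use_netlink", "on"],
        check=True,
    )
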
Dec 06 09:50:55 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:50:55 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd028002660 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:50:55 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:50:55 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:50:55 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:50:55.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:50:55 compute-0 ceph-mon[74327]: pgmap v282: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:50:55 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:50:55 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 09:50:55 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:50:55.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 09:50:55 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v283: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:50:56 compute-0 sudo[154283]: pam_unix(sudo:session): session closed for user root
Dec 06 09:50:56 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:50:56 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd018003320 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:50:56 compute-0 ceph-mon[74327]: pgmap v283: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:50:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:50:57.011Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec 06 09:50:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:50:57 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd030002260 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:50:57 compute-0 python3.9[154440]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/ovn_metadata_haproxy_wrapper follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:50:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:50:57 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd004002690 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:50:57 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:50:57 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 09:50:57 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:50:57.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 09:50:57 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:50:57 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 09:50:57 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:50:57.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 09:50:57 compute-0 python3.9[154562]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/ovn_metadata_haproxy_wrapper mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765014656.5362813-218-108000303438830/.source follow=False _original_basename=haproxy.j2 checksum=cc5e97ea900947bff0c19d73b88d99840e041f49 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 06 09:50:57 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v284: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 09:50:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:50:58 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd004002690 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:50:58 compute-0 ceph-mon[74327]: pgmap v284: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 09:50:58 compute-0 python3.9[154712]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/kill_scripts/haproxy-kill follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:50:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:50:59 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd028002660 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:50:59 compute-0 python3.9[154834]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/kill_scripts/haproxy-kill mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765014658.2848246-263-4967866254114/.source follow=False _original_basename=kill-script.j2 checksum=2dfb5489f491f61b95691c3bf95fa1fe48ff3700 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 06 09:50:59 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 09:50:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:50:59 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd030002260 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:50:59 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:50:59 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 09:50:59 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:50:59.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 09:50:59 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:50:59 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:50:59 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:50:59.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:50:59 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v285: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:51:00 compute-0 sudo[154985]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-upkusjfbyzyebbdfffdsmbhfdnispeir ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014659.779452-314-182465331628124/AnsiballZ_setup.py'
Dec 06 09:51:00 compute-0 sudo[154985]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:51:00 compute-0 python3.9[154987]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 06 09:51:00 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:51:00 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd004002690 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:51:00 compute-0 sudo[154985]: pam_unix(sudo:session): session closed for user root
Dec 06 09:51:00 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:09:51:00] "GET /metrics HTTP/1.1" 200 48258 "" "Prometheus/2.51.0"
Dec 06 09:51:00 compute-0 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:09:51:00] "GET /metrics HTTP/1.1" 200 48258 "" "Prometheus/2.51.0"
Dec 06 09:51:01 compute-0 sudo[155070]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pnvofnazkvticwzteinrqrnymebmdtpk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014659.779452-314-182465331628124/AnsiballZ_dnf.py'
Dec 06 09:51:01 compute-0 sudo[155070]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:51:01 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:51:01 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0180034c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:51:01 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:51:01 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd028003370 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:51:01 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:51:01 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:51:01 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:51:01.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:51:01 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:51:01 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.002000053s ======
Dec 06 09:51:01 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:51:01.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000053s
Dec 06 09:51:02 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v286: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:51:02 compute-0 ceph-mon[74327]: pgmap v285: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:51:02 compute-0 python3.9[155072]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
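
The dnf task above (name=['openvswitch'], state=present) reduces to a plain package install; -y mirrors the module's non-interactive behaviour. Sketch:

    import subprocess

    # Equivalent of ansible.legacy.dnf with state=present.
    subprocess.run(["dnf", "install", "-y", "openvswitch"], check=True)
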
Dec 06 09:51:02 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:51:02 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd030002260 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:51:03 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:51:03 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd030002260 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:51:03 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:51:03 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0180034c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:51:03 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:51:03 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:51:03 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:51:03.721 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:51:03 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:51:03 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:51:03 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:51:03.813 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:51:03 compute-0 ceph-mon[74327]: pgmap v286: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:51:03 compute-0 sudo[155070]: pam_unix(sudo:session): session closed for user root
Dec 06 09:51:04 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v287: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:51:04 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:51:04 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0180034c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:51:04 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 09:51:04 compute-0 sudo[155153]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 09:51:04 compute-0 sudo[155153]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:51:04 compute-0 sudo[155153]: pam_unix(sudo:session): session closed for user root
Dec 06 09:51:04 compute-0 sudo[155178]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Dec 06 09:51:04 compute-0 sudo[155178]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:51:04 compute-0 ceph-mon[74327]: pgmap v287: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:51:04 compute-0 sudo[155291]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zjqqpyhqbhyvzqboqzlsgblhshaczfza ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014664.1104944-350-156349474281185/AnsiballZ_systemd.py'
Dec 06 09:51:04 compute-0 sudo[155291]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:51:05 compute-0 sudo[155178]: pam_unix(sudo:session): session closed for user root
Dec 06 09:51:05 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 06 09:51:05 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 09:51:05 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 06 09:51:05 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 09:51:05 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 06 09:51:05 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:51:05 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Dec 06 09:51:05 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:51:05 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 06 09:51:05 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 09:51:05 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 06 09:51:05 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 09:51:05 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 06 09:51:05 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 09:51:05 compute-0 python3.9[155295]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec 06 09:51:05 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:51:05 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd030002260 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:51:05 compute-0 sudo[155311]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 09:51:05 compute-0 sudo[155311]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:51:05 compute-0 sudo[155311]: pam_unix(sudo:session): session closed for user root
Dec 06 09:51:05 compute-0 sudo[155339]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 5ecd3f74-dade-5fc4-92ce-8950ae424258 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Dec 06 09:51:05 compute-0 sudo[155339]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:51:05 compute-0 sudo[155291]: pam_unix(sudo:session): session closed for user root
Dec 06 09:51:05 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:51:05 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd004002690 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:51:05 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:51:05 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 09:51:05 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:51:05.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 09:51:05 compute-0 podman[155503]: 2025-12-06 09:51:05.719432937 +0000 UTC m=+0.023454384 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 06 09:51:05 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:51:05 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 09:51:05 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:51:05.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 09:51:05 compute-0 podman[155503]: 2025-12-06 09:51:05.898221333 +0000 UTC m=+0.202242750 container create 8f7ae9c1e61ad9b63d0c8676f4680dc64463989aebbc31bf1fc3c48d45ad3e36 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_kalam, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 09:51:05 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 09:51:05 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 09:51:05 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:51:05 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:51:05 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 09:51:05 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 09:51:05 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 09:51:05 compute-0 systemd[1]: Started libpod-conmon-8f7ae9c1e61ad9b63d0c8676f4680dc64463989aebbc31bf1fc3c48d45ad3e36.scope.
Dec 06 09:51:05 compute-0 python3.9[155567]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-rootwrap.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:51:06 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v288: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:51:06 compute-0 systemd[1]: Started libcrun container.
Dec 06 09:51:06 compute-0 podman[155503]: 2025-12-06 09:51:06.029702202 +0000 UTC m=+0.333723639 container init 8f7ae9c1e61ad9b63d0c8676f4680dc64463989aebbc31bf1fc3c48d45ad3e36 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_kalam, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Dec 06 09:51:06 compute-0 podman[155503]: 2025-12-06 09:51:06.040562825 +0000 UTC m=+0.344584252 container start 8f7ae9c1e61ad9b63d0c8676f4680dc64463989aebbc31bf1fc3c48d45ad3e36 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_kalam, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Dec 06 09:51:06 compute-0 podman[155503]: 2025-12-06 09:51:06.044687447 +0000 UTC m=+0.348708924 container attach 8f7ae9c1e61ad9b63d0c8676f4680dc64463989aebbc31bf1fc3c48d45ad3e36 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_kalam, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 09:51:06 compute-0 kind_kalam[155570]: 167 167
Dec 06 09:51:06 compute-0 systemd[1]: libpod-8f7ae9c1e61ad9b63d0c8676f4680dc64463989aebbc31bf1fc3c48d45ad3e36.scope: Deactivated successfully.
Dec 06 09:51:06 compute-0 podman[155503]: 2025-12-06 09:51:06.049611129 +0000 UTC m=+0.353633646 container died 8f7ae9c1e61ad9b63d0c8676f4680dc64463989aebbc31bf1fc3c48d45ad3e36 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_kalam, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 09:51:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-a3c7b13b9ca1b9c128a320206c8f9b9e605251561786b03998010b8ec4b1e900-merged.mount: Deactivated successfully.
Dec 06 09:51:06 compute-0 podman[155503]: 2025-12-06 09:51:06.101392697 +0000 UTC m=+0.405414124 container remove 8f7ae9c1e61ad9b63d0c8676f4680dc64463989aebbc31bf1fc3c48d45ad3e36 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_kalam, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec 06 09:51:06 compute-0 systemd[1]: libpod-conmon-8f7ae9c1e61ad9b63d0c8676f4680dc64463989aebbc31bf1fc3c48d45ad3e36.scope: Deactivated successfully.
Dec 06 09:51:06 compute-0 podman[155660]: 2025-12-06 09:51:06.308774955 +0000 UTC m=+0.063381022 container create 3e54e5c6708e882b6d3debf24186d5c8db9d636f66f15e731b6075b128aef502 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_lamport, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Dec 06 09:51:06 compute-0 systemd[1]: Started libpod-conmon-3e54e5c6708e882b6d3debf24186d5c8db9d636f66f15e731b6075b128aef502.scope.
Dec 06 09:51:06 compute-0 podman[155660]: 2025-12-06 09:51:06.28005199 +0000 UTC m=+0.034658107 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 06 09:51:06 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:51:06 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0180034c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:51:06 compute-0 systemd[1]: Started libcrun container.
Dec 06 09:51:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e8c308df543cbbdce58190139094515a3cddc2d24222f0b44ff1609b2b547c1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 09:51:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e8c308df543cbbdce58190139094515a3cddc2d24222f0b44ff1609b2b547c1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 09:51:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e8c308df543cbbdce58190139094515a3cddc2d24222f0b44ff1609b2b547c1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 09:51:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e8c308df543cbbdce58190139094515a3cddc2d24222f0b44ff1609b2b547c1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 09:51:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e8c308df543cbbdce58190139094515a3cddc2d24222f0b44ff1609b2b547c1/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 06 09:51:06 compute-0 podman[155660]: 2025-12-06 09:51:06.412128835 +0000 UTC m=+0.166734922 container init 3e54e5c6708e882b6d3debf24186d5c8db9d636f66f15e731b6075b128aef502 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_lamport, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Dec 06 09:51:06 compute-0 podman[155660]: 2025-12-06 09:51:06.420753947 +0000 UTC m=+0.175360034 container start 3e54e5c6708e882b6d3debf24186d5c8db9d636f66f15e731b6075b128aef502 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_lamport, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Dec 06 09:51:06 compute-0 podman[155660]: 2025-12-06 09:51:06.426434511 +0000 UTC m=+0.181040578 container attach 3e54e5c6708e882b6d3debf24186d5c8db9d636f66f15e731b6075b128aef502 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_lamport, CEPH_REF=squid, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Dec 06 09:51:06 compute-0 python3.9[155735]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-rootwrap.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765014665.5305347-374-258996016499359/.source.conf follow=False _original_basename=rootwrap.conf.j2 checksum=11f2cfb4b7d97b2cef3c2c2d88089e6999cffe22 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 06 09:51:06 compute-0 jovial_lamport[155726]: --> passed data devices: 0 physical, 1 LVM
Dec 06 09:51:06 compute-0 jovial_lamport[155726]: --> All data devices are unavailable
Dec 06 09:51:06 compute-0 systemd[1]: libpod-3e54e5c6708e882b6d3debf24186d5c8db9d636f66f15e731b6075b128aef502.scope: Deactivated successfully.
Dec 06 09:51:06 compute-0 podman[155660]: 2025-12-06 09:51:06.800126567 +0000 UTC m=+0.554732644 container died 3e54e5c6708e882b6d3debf24186d5c8db9d636f66f15e731b6075b128aef502 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_lamport, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec 06 09:51:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-5e8c308df543cbbdce58190139094515a3cddc2d24222f0b44ff1609b2b547c1-merged.mount: Deactivated successfully.
Dec 06 09:51:06 compute-0 podman[155660]: 2025-12-06 09:51:06.853748774 +0000 UTC m=+0.608354841 container remove 3e54e5c6708e882b6d3debf24186d5c8db9d636f66f15e731b6075b128aef502 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_lamport, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Dec 06 09:51:06 compute-0 systemd[1]: libpod-conmon-3e54e5c6708e882b6d3debf24186d5c8db9d636f66f15e731b6075b128aef502.scope: Deactivated successfully.
Dec 06 09:51:06 compute-0 sudo[155339]: pam_unix(sudo:session): session closed for user root
Dec 06 09:51:06 compute-0 sudo[155856]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 09:51:06 compute-0 sudo[155856]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:51:06 compute-0 ceph-mon[74327]: pgmap v288: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:51:06 compute-0 sudo[155856]: pam_unix(sudo:session): session closed for user root
Dec 06 09:51:07 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:51:07.012Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 06 09:51:07 compute-0 sudo[155906]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 5ecd3f74-dade-5fc4-92ce-8950ae424258 -- lvm list --format json
Dec 06 09:51:07 compute-0 sudo[155906]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:51:07 compute-0 python3.9[155955]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:51:07 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:51:07 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd028003370 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:51:07 compute-0 sudo[155958]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 09:51:07 compute-0 sudo[155958]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:51:07 compute-0 sudo[155958]: pam_unix(sudo:session): session closed for user root
Dec 06 09:51:07 compute-0 podman[156091]: 2025-12-06 09:51:07.480111601 +0000 UTC m=+0.062712043 container create 26381ce42a8fcc9ab3bf7051a67d750eb1f7f4c835abb53c03d59b7634e90774 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_pascal, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 09:51:07 compute-0 systemd[1]: Started libpod-conmon-26381ce42a8fcc9ab3bf7051a67d750eb1f7f4c835abb53c03d59b7634e90774.scope.
Dec 06 09:51:07 compute-0 podman[156091]: 2025-12-06 09:51:07.447154042 +0000 UTC m=+0.029754524 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 06 09:51:07 compute-0 systemd[1]: Started libcrun container.
Dec 06 09:51:07 compute-0 podman[156091]: 2025-12-06 09:51:07.571414956 +0000 UTC m=+0.154015408 container init 26381ce42a8fcc9ab3bf7051a67d750eb1f7f4c835abb53c03d59b7634e90774 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_pascal, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Dec 06 09:51:07 compute-0 podman[156091]: 2025-12-06 09:51:07.581431056 +0000 UTC m=+0.164031508 container start 26381ce42a8fcc9ab3bf7051a67d750eb1f7f4c835abb53c03d59b7634e90774 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_pascal, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 09:51:07 compute-0 podman[156091]: 2025-12-06 09:51:07.586943195 +0000 UTC m=+0.169543657 container attach 26381ce42a8fcc9ab3bf7051a67d750eb1f7f4c835abb53c03d59b7634e90774 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_pascal, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 09:51:07 compute-0 keen_pascal[156134]: 167 167
Dec 06 09:51:07 compute-0 systemd[1]: libpod-26381ce42a8fcc9ab3bf7051a67d750eb1f7f4c835abb53c03d59b7634e90774.scope: Deactivated successfully.
Dec 06 09:51:07 compute-0 conmon[156134]: conmon 26381ce42a8fcc9ab3bf <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-26381ce42a8fcc9ab3bf7051a67d750eb1f7f4c835abb53c03d59b7634e90774.scope/container/memory.events
Dec 06 09:51:07 compute-0 podman[156091]: 2025-12-06 09:51:07.591140089 +0000 UTC m=+0.173740551 container died 26381ce42a8fcc9ab3bf7051a67d750eb1f7f4c835abb53c03d59b7634e90774 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_pascal, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Dec 06 09:51:07 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:51:07 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd030002260 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:51:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-566a58bf5e9430b627fabe786dce181fa97acdb7e394d43b025ea49de953a262-merged.mount: Deactivated successfully.
Dec 06 09:51:07 compute-0 podman[156091]: 2025-12-06 09:51:07.639498284 +0000 UTC m=+0.222098706 container remove 26381ce42a8fcc9ab3bf7051a67d750eb1f7f4c835abb53c03d59b7634e90774 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_pascal, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 09:51:07 compute-0 systemd[1]: libpod-conmon-26381ce42a8fcc9ab3bf7051a67d750eb1f7f4c835abb53c03d59b7634e90774.scope: Deactivated successfully.
Dec 06 09:51:07 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:51:07 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:51:07 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:51:07.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:51:07 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:51:07 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:51:07 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:51:07.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:51:07 compute-0 podman[156184]: 2025-12-06 09:51:07.840278493 +0000 UTC m=+0.050857224 container create 2f70401f60e59f7b178c8a09f43e07327ba0312970095a342f5e1dcfe65f25a0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_cerf, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 09:51:07 compute-0 python3.9[156172]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765014666.726728-374-19971812704897/.source.conf follow=False _original_basename=neutron-ovn-metadata-agent.conf.j2 checksum=8bc979abbe81c2cf3993a225517a7e2483e20443 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 06 09:51:07 compute-0 systemd[1]: Started libpod-conmon-2f70401f60e59f7b178c8a09f43e07327ba0312970095a342f5e1dcfe65f25a0.scope.
Dec 06 09:51:07 compute-0 podman[156184]: 2025-12-06 09:51:07.813407278 +0000 UTC m=+0.023986029 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 06 09:51:07 compute-0 systemd[1]: Started libcrun container.
Dec 06 09:51:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c94461bfb085ae3d9001349034dd322f135b980b10b28389dd771a72c7bc4f6e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 09:51:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c94461bfb085ae3d9001349034dd322f135b980b10b28389dd771a72c7bc4f6e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 09:51:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c94461bfb085ae3d9001349034dd322f135b980b10b28389dd771a72c7bc4f6e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 09:51:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c94461bfb085ae3d9001349034dd322f135b980b10b28389dd771a72c7bc4f6e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 09:51:07 compute-0 podman[156184]: 2025-12-06 09:51:07.933975112 +0000 UTC m=+0.144553863 container init 2f70401f60e59f7b178c8a09f43e07327ba0312970095a342f5e1dcfe65f25a0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_cerf, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Dec 06 09:51:07 compute-0 podman[156184]: 2025-12-06 09:51:07.942250605 +0000 UTC m=+0.152829366 container start 2f70401f60e59f7b178c8a09f43e07327ba0312970095a342f5e1dcfe65f25a0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_cerf, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 09:51:07 compute-0 podman[156184]: 2025-12-06 09:51:07.946313505 +0000 UTC m=+0.156892256 container attach 2f70401f60e59f7b178c8a09f43e07327ba0312970095a342f5e1dcfe65f25a0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_cerf, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 09:51:08 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v289: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 09:51:08 compute-0 objective_cerf[156200]: {
Dec 06 09:51:08 compute-0 objective_cerf[156200]:     "1": [
Dec 06 09:51:08 compute-0 objective_cerf[156200]:         {
Dec 06 09:51:08 compute-0 objective_cerf[156200]:             "devices": [
Dec 06 09:51:08 compute-0 objective_cerf[156200]:                 "/dev/loop3"
Dec 06 09:51:08 compute-0 objective_cerf[156200]:             ],
Dec 06 09:51:08 compute-0 objective_cerf[156200]:             "lv_name": "ceph_lv0",
Dec 06 09:51:08 compute-0 objective_cerf[156200]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 09:51:08 compute-0 objective_cerf[156200]:             "lv_size": "21470642176",
Dec 06 09:51:08 compute-0 objective_cerf[156200]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=a55pcS-v3Vt-CJ4W-fqCV-Baze-9VlY-jG9prS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=5ecd3f74-dade-5fc4-92ce-8950ae424258,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=7899c4d8-edb4-4836-b838-c4aa702ad7af,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 06 09:51:08 compute-0 objective_cerf[156200]:             "lv_uuid": "a55pcS-v3Vt-CJ4W-fqCV-Baze-9VlY-jG9prS",
Dec 06 09:51:08 compute-0 objective_cerf[156200]:             "name": "ceph_lv0",
Dec 06 09:51:08 compute-0 objective_cerf[156200]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 09:51:08 compute-0 objective_cerf[156200]:             "tags": {
Dec 06 09:51:08 compute-0 objective_cerf[156200]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 06 09:51:08 compute-0 objective_cerf[156200]:                 "ceph.block_uuid": "a55pcS-v3Vt-CJ4W-fqCV-Baze-9VlY-jG9prS",
Dec 06 09:51:08 compute-0 objective_cerf[156200]:                 "ceph.cephx_lockbox_secret": "",
Dec 06 09:51:08 compute-0 objective_cerf[156200]:                 "ceph.cluster_fsid": "5ecd3f74-dade-5fc4-92ce-8950ae424258",
Dec 06 09:51:08 compute-0 objective_cerf[156200]:                 "ceph.cluster_name": "ceph",
Dec 06 09:51:08 compute-0 objective_cerf[156200]:                 "ceph.crush_device_class": "",
Dec 06 09:51:08 compute-0 objective_cerf[156200]:                 "ceph.encrypted": "0",
Dec 06 09:51:08 compute-0 objective_cerf[156200]:                 "ceph.osd_fsid": "7899c4d8-edb4-4836-b838-c4aa702ad7af",
Dec 06 09:51:08 compute-0 objective_cerf[156200]:                 "ceph.osd_id": "1",
Dec 06 09:51:08 compute-0 objective_cerf[156200]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 06 09:51:08 compute-0 objective_cerf[156200]:                 "ceph.type": "block",
Dec 06 09:51:08 compute-0 objective_cerf[156200]:                 "ceph.vdo": "0",
Dec 06 09:51:08 compute-0 objective_cerf[156200]:                 "ceph.with_tpm": "0"
Dec 06 09:51:08 compute-0 objective_cerf[156200]:             },
Dec 06 09:51:08 compute-0 objective_cerf[156200]:             "type": "block",
Dec 06 09:51:08 compute-0 objective_cerf[156200]:             "vg_name": "ceph_vg0"
Dec 06 09:51:08 compute-0 objective_cerf[156200]:         }
Dec 06 09:51:08 compute-0 objective_cerf[156200]:     ]
Dec 06 09:51:08 compute-0 objective_cerf[156200]: }
Dec 06 09:51:08 compute-0 systemd[1]: libpod-2f70401f60e59f7b178c8a09f43e07327ba0312970095a342f5e1dcfe65f25a0.scope: Deactivated successfully.
Dec 06 09:51:08 compute-0 podman[156184]: 2025-12-06 09:51:08.288712958 +0000 UTC m=+0.499291719 container died 2f70401f60e59f7b178c8a09f43e07327ba0312970095a342f5e1dcfe65f25a0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_cerf, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 09:51:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-c94461bfb085ae3d9001349034dd322f135b980b10b28389dd771a72c7bc4f6e-merged.mount: Deactivated successfully.
Dec 06 09:51:08 compute-0 podman[156184]: 2025-12-06 09:51:08.337144145 +0000 UTC m=+0.547722886 container remove 2f70401f60e59f7b178c8a09f43e07327ba0312970095a342f5e1dcfe65f25a0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_cerf, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 09:51:08 compute-0 systemd[1]: libpod-conmon-2f70401f60e59f7b178c8a09f43e07327ba0312970095a342f5e1dcfe65f25a0.scope: Deactivated successfully.
Dec 06 09:51:08 compute-0 sudo[155906]: pam_unix(sudo:session): session closed for user root
Dec 06 09:51:08 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:51:08 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd004002690 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:51:08 compute-0 ovn_controller[152417]: 2025-12-06T09:51:08Z|00025|memory|INFO|16384 kB peak resident set size after 29.5 seconds
Dec 06 09:51:08 compute-0 ovn_controller[152417]: 2025-12-06T09:51:08Z|00026|memory|INFO|idl-cells-OVN_Southbound:273 idl-cells-Open_vSwitch:642 ofctrl_desired_flow_usage-KB:7 ofctrl_installed_flow_usage-KB:5 ofctrl_sb_flow_ref_usage-KB:2
Dec 06 09:51:08 compute-0 sudo[156254]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 09:51:08 compute-0 sudo[156254]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:51:08 compute-0 sudo[156254]: pam_unix(sudo:session): session closed for user root
Dec 06 09:51:08 compute-0 podman[156240]: 2025-12-06 09:51:08.471929813 +0000 UTC m=+0.111682056 container health_status ed08b04f63d3695f0aa2f04fcc706897af4aa24f286fa3cd10a4323ee2c868ab (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_controller)
Dec 06 09:51:08 compute-0 sudo[156292]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 5ecd3f74-dade-5fc4-92ce-8950ae424258 -- raw list --format json
Dec 06 09:51:08 compute-0 sudo[156292]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:51:08 compute-0 podman[156382]: 2025-12-06 09:51:08.907854339 +0000 UTC m=+0.043586507 container create ab7851e5247f6a17d5e4223f9538d0047ab3b590a961f67a89c7ec260f66ef0b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_moore, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 09:51:08 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 06 09:51:08 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 09:51:08 compute-0 systemd[1]: Started libpod-conmon-ab7851e5247f6a17d5e4223f9538d0047ab3b590a961f67a89c7ec260f66ef0b.scope.
Dec 06 09:51:08 compute-0 systemd[1]: Started libcrun container.
Dec 06 09:51:08 compute-0 podman[156382]: 2025-12-06 09:51:08.886219345 +0000 UTC m=+0.021951503 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 06 09:51:08 compute-0 podman[156382]: 2025-12-06 09:51:08.994785526 +0000 UTC m=+0.130517704 container init ab7851e5247f6a17d5e4223f9538d0047ab3b590a961f67a89c7ec260f66ef0b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_moore, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 09:51:09 compute-0 podman[156382]: 2025-12-06 09:51:09.006311877 +0000 UTC m=+0.142044005 container start ab7851e5247f6a17d5e4223f9538d0047ab3b590a961f67a89c7ec260f66ef0b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_moore, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 09:51:09 compute-0 podman[156382]: 2025-12-06 09:51:09.009619706 +0000 UTC m=+0.145351834 container attach ab7851e5247f6a17d5e4223f9538d0047ab3b590a961f67a89c7ec260f66ef0b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_moore, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1)
Dec 06 09:51:09 compute-0 frosty_moore[156427]: 167 167
Dec 06 09:51:09 compute-0 systemd[1]: libpod-ab7851e5247f6a17d5e4223f9538d0047ab3b590a961f67a89c7ec260f66ef0b.scope: Deactivated successfully.
Dec 06 09:51:09 compute-0 podman[156382]: 2025-12-06 09:51:09.017684304 +0000 UTC m=+0.153416432 container died ab7851e5247f6a17d5e4223f9538d0047ab3b590a961f67a89c7ec260f66ef0b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_moore, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 09:51:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-702f57b851827087fff9dd6485833481e9ebbb235f4e8071dbc63ec578b2dae2-merged.mount: Deactivated successfully.
Dec 06 09:51:09 compute-0 podman[156382]: 2025-12-06 09:51:09.056618245 +0000 UTC m=+0.192350373 container remove ab7851e5247f6a17d5e4223f9538d0047ab3b590a961f67a89c7ec260f66ef0b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_moore, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0)
Dec 06 09:51:09 compute-0 systemd[1]: libpod-conmon-ab7851e5247f6a17d5e4223f9538d0047ab3b590a961f67a89c7ec260f66ef0b.scope: Deactivated successfully.
Dec 06 09:51:09 compute-0 ceph-mon[74327]: pgmap v289: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 09:51:09 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 09:51:09 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:51:09 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0180034c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:51:09 compute-0 podman[156523]: 2025-12-06 09:51:09.238571786 +0000 UTC m=+0.053270659 container create ed4c382937db5f937e2ab87e268e49afca896c085c7ad00c7503bbe3e4fd8096 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_jones, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 09:51:09 compute-0 systemd[1]: Started libpod-conmon-ed4c382937db5f937e2ab87e268e49afca896c085c7ad00c7503bbe3e4fd8096.scope.
Dec 06 09:51:09 compute-0 podman[156523]: 2025-12-06 09:51:09.214303341 +0000 UTC m=+0.029002234 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 06 09:51:09 compute-0 systemd[1]: Started libcrun container.
Dec 06 09:51:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a0527515ccc4f4f99f33918906bcb9088064fd791da3e6c1e2e52a8bc3c2376/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 09:51:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a0527515ccc4f4f99f33918906bcb9088064fd791da3e6c1e2e52a8bc3c2376/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 09:51:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a0527515ccc4f4f99f33918906bcb9088064fd791da3e6c1e2e52a8bc3c2376/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 09:51:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a0527515ccc4f4f99f33918906bcb9088064fd791da3e6c1e2e52a8bc3c2376/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 09:51:09 compute-0 podman[156523]: 2025-12-06 09:51:09.341884875 +0000 UTC m=+0.156583768 container init ed4c382937db5f937e2ab87e268e49afca896c085c7ad00c7503bbe3e4fd8096 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_jones, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Dec 06 09:51:09 compute-0 podman[156523]: 2025-12-06 09:51:09.348791611 +0000 UTC m=+0.163490484 container start ed4c382937db5f937e2ab87e268e49afca896c085c7ad00c7503bbe3e4fd8096 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_jones, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_REF=squid, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Dec 06 09:51:09 compute-0 podman[156523]: 2025-12-06 09:51:09.351952286 +0000 UTC m=+0.166651179 container attach ed4c382937db5f937e2ab87e268e49afca896c085c7ad00c7503bbe3e4fd8096 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_jones, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec 06 09:51:09 compute-0 python3.9[156529]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/10-neutron-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:51:09 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 09:51:09 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:51:09 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd028004150 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:51:09 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:51:09 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.002000053s ======
Dec 06 09:51:09 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:51:09.729 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000053s
Dec 06 09:51:09 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:51:09 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 09:51:09 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:51:09.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 09:51:09 compute-0 python3.9[156706]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/10-neutron-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765014668.8785079-506-100980744621150/.source.conf _original_basename=10-neutron-metadata.conf follow=False checksum=ca7d4d155f5b812fab1a3b70e34adb495d291b8d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 06 09:51:10 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v290: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:51:10 compute-0 lvm[156768]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 06 09:51:10 compute-0 lvm[156768]: VG ceph_vg0 finished
Dec 06 09:51:10 compute-0 hungry_jones[156546]: {}
Dec 06 09:51:10 compute-0 systemd[1]: libpod-ed4c382937db5f937e2ab87e268e49afca896c085c7ad00c7503bbe3e4fd8096.scope: Deactivated successfully.
Dec 06 09:51:10 compute-0 systemd[1]: libpod-ed4c382937db5f937e2ab87e268e49afca896c085c7ad00c7503bbe3e4fd8096.scope: Consumed 1.328s CPU time.
Dec 06 09:51:10 compute-0 podman[156523]: 2025-12-06 09:51:10.198114986 +0000 UTC m=+1.012813879 container died ed4c382937db5f937e2ab87e268e49afca896c085c7ad00c7503bbe3e4fd8096 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_jones, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec 06 09:51:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-7a0527515ccc4f4f99f33918906bcb9088064fd791da3e6c1e2e52a8bc3c2376-merged.mount: Deactivated successfully.
Dec 06 09:51:10 compute-0 podman[156523]: 2025-12-06 09:51:10.259468462 +0000 UTC m=+1.074167365 container remove ed4c382937db5f937e2ab87e268e49afca896c085c7ad00c7503bbe3e4fd8096 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_jones, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, ceph=True)
Dec 06 09:51:10 compute-0 systemd[1]: libpod-conmon-ed4c382937db5f937e2ab87e268e49afca896c085c7ad00c7503bbe3e4fd8096.scope: Deactivated successfully.
Dec 06 09:51:10 compute-0 sudo[156292]: pam_unix(sudo:session): session closed for user root
Dec 06 09:51:10 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 06 09:51:10 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:51:10 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 06 09:51:10 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:51:10 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:51:10 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd030002260 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:51:10 compute-0 sudo[156888]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 09:51:10 compute-0 sudo[156888]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:51:10 compute-0 sudo[156888]: pam_unix(sudo:session): session closed for user root
Dec 06 09:51:10 compute-0 python3.9[156924]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/05-nova-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:51:10 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:09:51:10] "GET /metrics HTTP/1.1" 200 48258 "" "Prometheus/2.51.0"
Dec 06 09:51:10 compute-0 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:09:51:10] "GET /metrics HTTP/1.1" 200 48258 "" "Prometheus/2.51.0"
Dec 06 09:51:11 compute-0 python3.9[157052]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/05-nova-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765014670.1168706-506-162717034982160/.source.conf _original_basename=05-nova-metadata.conf follow=False checksum=a14d6b38898a379cd37fc0bf365d17f10859446f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 06 09:51:11 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:51:11 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0040040e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:51:11 compute-0 ceph-mon[74327]: pgmap v290: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:51:11 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:51:11 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:51:11 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:51:11 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0180034c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:51:11 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:51:11 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:51:11 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:51:11.732 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:51:11 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:51:11 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 09:51:11 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:51:11.822 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 09:51:11 compute-0 python3.9[157203]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 06 09:51:12 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v291: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:51:12 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:51:12 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd028004150 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:51:12 compute-0 sudo[157355]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-muwexolgbvupsvgsjfipteamlbyhxohe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014672.286394-620-221412746822121/AnsiballZ_file.py'
Dec 06 09:51:12 compute-0 sudo[157355]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:51:12 compute-0 python3.9[157357]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 06 09:51:12 compute-0 sudo[157355]: pam_unix(sudo:session): session closed for user root
Dec 06 09:51:13 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:51:13 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd030002260 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:51:13 compute-0 ceph-mon[74327]: pgmap v291: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:51:13 compute-0 sudo[157509]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rcbzzcoewderdvlwbafdwuulozzfkquc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014673.201327-644-219155545053778/AnsiballZ_stat.py'
Dec 06 09:51:13 compute-0 sudo[157509]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:51:13 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:51:13 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0040040e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:51:13 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:51:13 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:51:13 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:51:13.734 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:51:13 compute-0 python3.9[157511]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:51:13 compute-0 sudo[157509]: pam_unix(sudo:session): session closed for user root
Dec 06 09:51:13 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:51:13 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 09:51:13 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:51:13.825 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 09:51:14 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v292: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:51:14 compute-0 sudo[157587]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-moczepzeawgkmmdyhsmhnuqswrmaemsb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014673.201327-644-219155545053778/AnsiballZ_file.py'
Dec 06 09:51:14 compute-0 sudo[157587]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:51:14 compute-0 python3.9[157589]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 06 09:51:14 compute-0 sudo[157587]: pam_unix(sudo:session): session closed for user root
Dec 06 09:51:14 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:51:14 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0180034e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:51:14 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 09:51:14 compute-0 sudo[157739]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ydugquvvupbytlvnapxdaemzsqumltbt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014674.430677-644-246598295627023/AnsiballZ_stat.py'
Dec 06 09:51:14 compute-0 sudo[157739]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:51:14 compute-0 python3.9[157741]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:51:15 compute-0 sudo[157739]: pam_unix(sudo:session): session closed for user root
Dec 06 09:51:15 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:51:15 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd028004150 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:51:15 compute-0 sudo[157819]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wjwwopntsmzopxoksvrsgxaavoukqawb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014674.430677-644-246598295627023/AnsiballZ_file.py'
Dec 06 09:51:15 compute-0 sudo[157819]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:51:15 compute-0 ceph-mon[74327]: pgmap v292: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:51:15 compute-0 python3.9[157821]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 06 09:51:15 compute-0 sudo[157819]: pam_unix(sudo:session): session closed for user root
Dec 06 09:51:15 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:51:15 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd030002260 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:51:15 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:51:15 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 09:51:15 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:51:15.738 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 09:51:15 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:51:15 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 09:51:15 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:51:15.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 09:51:16 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v293: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:51:16 compute-0 sudo[157971]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kdlpxgnwepgnzpxgknitlfdixnmtiqhu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014675.7329295-713-127240448948476/AnsiballZ_file.py'
Dec 06 09:51:16 compute-0 sudo[157971]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:51:16 compute-0 python3.9[157973]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:51:16 compute-0 sudo[157971]: pam_unix(sudo:session): session closed for user root
Dec 06 09:51:16 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:51:16 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0040040e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:51:16 compute-0 sudo[158123]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sjtnqvrvceycolqumfhapdyyvulrlkoo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014676.4963064-737-37627231908855/AnsiballZ_stat.py'
Dec 06 09:51:16 compute-0 sudo[158123]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:51:17 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:51:17.014Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec 06 09:51:17 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:51:17.015Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec 06 09:51:17 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:51:17.015Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 06 09:51:17 compute-0 python3.9[158126]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:51:17 compute-0 sudo[158123]: pam_unix(sudo:session): session closed for user root
Dec 06 09:51:17 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:51:17 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd018003500 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:51:17 compute-0 ceph-mon[74327]: pgmap v293: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:51:17 compute-0 sudo[158203]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jntkmtaqzraqxafbigkmayhsivntiuul ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014676.4963064-737-37627231908855/AnsiballZ_file.py'
Dec 06 09:51:17 compute-0 sudo[158203]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:51:17 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:51:17 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd028004150 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:51:17 compute-0 python3.9[158205]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:51:17 compute-0 sudo[158203]: pam_unix(sudo:session): session closed for user root
Dec 06 09:51:17 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:51:17 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 09:51:17 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:51:17.741 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 09:51:17 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:51:17 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 09:51:17 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:51:17.831 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 09:51:18 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v294: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 09:51:18 compute-0 sudo[158355]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ckswbdsuykxzcsagygjablycahmzlqok ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014678.1056612-773-249190696047620/AnsiballZ_stat.py'
Dec 06 09:51:18 compute-0 sudo[158355]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:51:18 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:51:18 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd030002260 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:51:18 compute-0 python3.9[158357]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:51:18 compute-0 sudo[158355]: pam_unix(sudo:session): session closed for user root
Dec 06 09:51:18 compute-0 sudo[158433]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qhhrptpmndkworivwmzunzmoswrbqbcn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014678.1056612-773-249190696047620/AnsiballZ_file.py'
Dec 06 09:51:18 compute-0 sudo[158433]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:51:19 compute-0 python3.9[158435]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:51:19 compute-0 sudo[158433]: pam_unix(sudo:session): session closed for user root
Dec 06 09:51:19 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:51:19 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0040040e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:51:19 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 09:51:19 compute-0 ceph-mon[74327]: pgmap v294: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 09:51:19 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:51:19 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd018003520 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:51:19 compute-0 sudo[158587]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pcjcjdhapgtbaitqrbcuiwoiymxegzot ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014679.3650055-809-117114677746285/AnsiballZ_systemd.py'
Dec 06 09:51:19 compute-0 sudo[158587]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:51:19 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:51:19 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:51:19 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:51:19.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:51:19 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:51:19 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 09:51:19 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:51:19.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 09:51:19 compute-0 python3.9[158589]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 06 09:51:19 compute-0 systemd[1]: Reloading.
Dec 06 09:51:20 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v295: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:51:20 compute-0 systemd-rc-local-generator[158618]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 09:51:20 compute-0 systemd-sysv-generator[158621]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 09:51:20 compute-0 sudo[158587]: pam_unix(sudo:session): session closed for user root
Dec 06 09:51:20 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:51:20 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd018003520 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:51:20 compute-0 sudo[158776]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pptmzbssckoqwzmefvauozjhfrrtfkmz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014680.586787-833-51608797400627/AnsiballZ_stat.py'
Dec 06 09:51:20 compute-0 sudo[158776]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:51:20 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:09:51:20] "GET /metrics HTTP/1.1" 200 48258 "" "Prometheus/2.51.0"
Dec 06 09:51:20 compute-0 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:09:51:20] "GET /metrics HTTP/1.1" 200 48258 "" "Prometheus/2.51.0"
Dec 06 09:51:21 compute-0 python3.9[158778]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:51:21 compute-0 sudo[158776]: pam_unix(sudo:session): session closed for user root
Dec 06 09:51:21 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:51:21 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd030002260 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:51:21 compute-0 sudo[158856]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xtiilexdofmjeppovlcpefjlicwthhgj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014680.586787-833-51608797400627/AnsiballZ_file.py'
Dec 06 09:51:21 compute-0 sudo[158856]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:51:21 compute-0 ceph-mon[74327]: pgmap v295: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:51:21 compute-0 python3.9[158858]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:51:21 compute-0 sudo[158856]: pam_unix(sudo:session): session closed for user root
Dec 06 09:51:21 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:51:21 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0040040e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:51:21 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:51:21 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:51:21 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:51:21.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:51:21 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:51:21 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:51:21 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:51:21.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:51:22 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v296: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:51:22 compute-0 sudo[159010]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hbluwzcmoyutfizejammfzjmzixobqdm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014681.8210895-869-255201647585983/AnsiballZ_stat.py'
Dec 06 09:51:22 compute-0 sudo[159010]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:51:22 compute-0 python3.9[159012]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:51:22 compute-0 sudo[159010]: pam_unix(sudo:session): session closed for user root
Dec 06 09:51:22 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:51:22 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0040040e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:51:22 compute-0 ceph-mon[74327]: pgmap v296: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:51:22 compute-0 sudo[159088]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tzczgexjympkvqkffswcmnhowfdushcd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014681.8210895-869-255201647585983/AnsiballZ_file.py'
Dec 06 09:51:22 compute-0 sudo[159088]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:51:22 compute-0 python3.9[159090]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:51:22 compute-0 sudo[159088]: pam_unix(sudo:session): session closed for user root
Dec 06 09:51:23 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:51:23 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcffc000f30 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:51:23 compute-0 sudo[159242]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bxlolpwlylrntmtypcrsgdsvozyuqaib ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014683.0241578-905-237814775888563/AnsiballZ_systemd.py'
Dec 06 09:51:23 compute-0 sudo[159242]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:51:23 compute-0 python3.9[159244]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 06 09:51:23 compute-0 systemd[1]: Reloading.
Dec 06 09:51:23 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:51:23 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd030002260 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:51:23 compute-0 systemd-rc-local-generator[159270]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 09:51:23 compute-0 systemd-sysv-generator[159273]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 09:51:23 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:51:23 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 09:51:23 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:51:23.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 09:51:23 compute-0 ceph-mgr[74618]: [balancer INFO root] Optimize plan auto_2025-12-06_09:51:23
Dec 06 09:51:23 compute-0 ceph-mgr[74618]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 06 09:51:23 compute-0 ceph-mgr[74618]: [balancer INFO root] do_upmap
Dec 06 09:51:23 compute-0 ceph-mgr[74618]: [balancer INFO root] pools ['images', 'cephfs.cephfs.meta', 'volumes', 'cephfs.cephfs.data', '.rgw.root', 'default.rgw.log', 'default.rgw.control', '.mgr', '.nfs', 'default.rgw.meta', 'vms', 'backups']
Dec 06 09:51:23 compute-0 ceph-mgr[74618]: [balancer INFO root] prepared 0/10 upmap changes
Dec 06 09:51:23 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:51:23 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 09:51:23 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:51:23.837 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 09:51:23 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 06 09:51:23 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 09:51:23 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 09:51:23 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 09:51:23 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 09:51:23 compute-0 systemd[1]: Starting Create netns directory...
Dec 06 09:51:23 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 09:51:23 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 09:51:23 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 09:51:23 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 09:51:24 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Dec 06 09:51:24 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Dec 06 09:51:24 compute-0 systemd[1]: Finished Create netns directory.
Dec 06 09:51:24 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v297: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:51:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] _maybe_adjust
Dec 06 09:51:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 09:51:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 06 09:51:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 09:51:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 09:51:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 09:51:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 09:51:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 09:51:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 09:51:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 09:51:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 09:51:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 09:51:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Dec 06 09:51:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 09:51:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 09:51:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 09:51:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec 06 09:51:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 09:51:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 06 09:51:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 09:51:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec 06 09:51:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 09:51:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 09:51:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 09:51:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 06 09:51:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 06 09:51:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 09:51:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 09:51:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 09:51:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 09:51:24 compute-0 sudo[159242]: pam_unix(sudo:session): session closed for user root
Dec 06 09:51:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 06 09:51:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 09:51:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 09:51:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 09:51:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 09:51:24 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:51:24 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd030002260 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:51:24 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 09:51:24 compute-0 sudo[159435]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-staqtbbdpqajytsnsycbhlguokkpheht ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014684.3920264-935-229205706081344/AnsiballZ_file.py'
Dec 06 09:51:24 compute-0 sudo[159435]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:51:24 compute-0 python3.9[159437]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 06 09:51:24 compute-0 sudo[159435]: pam_unix(sudo:session): session closed for user root
Dec 06 09:51:24 compute-0 ceph-mon[74327]: pgmap v297: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:51:25 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:51:25 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0040040e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:51:25 compute-0 sudo[159589]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-baedlbynvlrtckfqamgnyqydkhwbtzkz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014685.1067019-959-90830669304176/AnsiballZ_stat.py'
Dec 06 09:51:25 compute-0 sudo[159589]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:51:25 compute-0 python3.9[159591]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_metadata_agent/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:51:25 compute-0 sudo[159589]: pam_unix(sudo:session): session closed for user root
Dec 06 09:51:25 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:51:25 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcffc000f30 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:51:25 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:51:25 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:51:25 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:51:25.754 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:51:25 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:51:25 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:51:25 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:51:25.840 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:51:25 compute-0 sudo[159712]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sgblcukchpqgnqoduulqdvzzyuzjteei ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014685.1067019-959-90830669304176/AnsiballZ_copy.py'
Dec 06 09:51:25 compute-0 sudo[159712]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:51:26 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v298: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:51:26 compute-0 python3.9[159714]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_metadata_agent/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765014685.1067019-959-90830669304176/.source _original_basename=healthcheck follow=False checksum=898a5a1fcd473cf731177fc866e3bd7ebf20a131 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec 06 09:51:26 compute-0 sudo[159712]: pam_unix(sudo:session): session closed for user root
Dec 06 09:51:26 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:51:26 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcffc000f30 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:51:27 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:51:27.016Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec 06 09:51:27 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:51:27.017Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec 06 09:51:27 compute-0 sudo[159865]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vqebtpobztoprpvkxurpvfktzrmbpzdr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014686.7209287-1010-106350643659895/AnsiballZ_file.py'
Dec 06 09:51:27 compute-0 sudo[159865]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:51:27 compute-0 ceph-mon[74327]: pgmap v298: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:51:27 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:51:27 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd030002260 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:51:27 compute-0 python3.9[159867]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 06 09:51:27 compute-0 sudo[159865]: pam_unix(sudo:session): session closed for user root
Dec 06 09:51:27 compute-0 sudo[159869]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 09:51:27 compute-0 sudo[159869]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:51:27 compute-0 sudo[159869]: pam_unix(sudo:session): session closed for user root
Dec 06 09:51:27 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:51:27 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd030002260 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:51:27 compute-0 sudo[160043]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ziugvfslqbuulccyxrcdgfmbhhqmacqv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014687.4603326-1034-250211857621147/AnsiballZ_stat.py'
Dec 06 09:51:27 compute-0 sudo[160043]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:51:27 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:51:27 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 09:51:27 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:51:27.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 09:51:27 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:51:27 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:51:27 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:51:27.843 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:51:27 compute-0 python3.9[160045]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_metadata_agent.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:51:27 compute-0 sudo[160043]: pam_unix(sudo:session): session closed for user root
Dec 06 09:51:28 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v299: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 09:51:28 compute-0 sudo[160166]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-odobudztqmvxwghizxwktgdjwrvszqxu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014687.4603326-1034-250211857621147/AnsiballZ_copy.py'
Dec 06 09:51:28 compute-0 sudo[160166]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:51:28 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:51:28 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcffc000f30 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:51:28 compute-0 python3.9[160168]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_metadata_agent.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1765014687.4603326-1034-250211857621147/.source.json _original_basename=.b_nkeeh9 follow=False checksum=a908ef151ded3a33ae6c9ac8be72a35e5e33b9dc backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:51:28 compute-0 sudo[160166]: pam_unix(sudo:session): session closed for user root
Dec 06 09:51:28 compute-0 sudo[160318]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wqpsdfcjstqbvjntnlcdcjvxrvbdqrih ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014688.6558812-1079-119524077940903/AnsiballZ_file.py'
Dec 06 09:51:28 compute-0 sudo[160318]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:51:29 compute-0 python3.9[160320]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:51:29 compute-0 ceph-mon[74327]: pgmap v299: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 09:51:29 compute-0 sudo[160318]: pam_unix(sudo:session): session closed for user root
Dec 06 09:51:29 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:51:29 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcffc000f30 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:51:29 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 09:51:29 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:51:29 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0040040e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:51:29 compute-0 sudo[160472]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ihjcofbjgnlqvwshqjafhavaxlcgsipc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014689.468372-1103-104254606514261/AnsiballZ_stat.py'
Dec 06 09:51:29 compute-0 sudo[160472]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:51:29 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:51:29 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 09:51:29 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:51:29.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 09:51:29 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:51:29 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:51:29 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:51:29.846 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:51:29 compute-0 sudo[160472]: pam_unix(sudo:session): session closed for user root
Dec 06 09:51:30 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v300: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:51:30 compute-0 sudo[160595]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mywzihltoclishyclpiltvvfoinxrfec ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014689.468372-1103-104254606514261/AnsiballZ_copy.py'
Dec 06 09:51:30 compute-0 sudo[160595]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:51:30 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:51:30 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd030002280 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:51:30 compute-0 sudo[160595]: pam_unix(sudo:session): session closed for user root
Dec 06 09:51:30 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:09:51:30] "GET /metrics HTTP/1.1" 200 48257 "" "Prometheus/2.51.0"
Dec 06 09:51:30 compute-0 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:09:51:30] "GET /metrics HTTP/1.1" 200 48257 "" "Prometheus/2.51.0"
Dec 06 09:51:31 compute-0 ceph-mon[74327]: pgmap v300: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:51:31 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:51:31 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcffc000f30 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:51:31 compute-0 sudo[160749]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ctwubpsgsehfgnvbmnqisqvkthxryfok ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014690.993254-1154-61960368734916/AnsiballZ_container_config_data.py'
Dec 06 09:51:31 compute-0 sudo[160749]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:51:31 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:51:31 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd018004e70 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:51:31 compute-0 python3.9[160751]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_pattern=*.json debug=False
Dec 06 09:51:31 compute-0 sudo[160749]: pam_unix(sudo:session): session closed for user root
Dec 06 09:51:31 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:51:31 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:51:31 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:51:31.764 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:51:31 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:51:31 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:51:31 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:51:31.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:51:32 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v301: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:51:32 compute-0 sudo[160901]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hjrlpdnyrbrysqigcgfintzlusnofpml ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014691.903129-1181-202724969370141/AnsiballZ_container_config_hash.py'
Dec 06 09:51:32 compute-0 sudo[160901]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:51:32 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:51:32 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcffc000f30 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:51:32 compute-0 python3.9[160903]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec 06 09:51:32 compute-0 sudo[160901]: pam_unix(sudo:session): session closed for user root
Dec 06 09:51:33 compute-0 ceph-mon[74327]: pgmap v301: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:51:33 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:51:33 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0040040e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:51:33 compute-0 sudo[161055]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dinoswyuevvabxjhdyvrdtwvmpdrzzyq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014692.8107193-1208-248674579567667/AnsiballZ_podman_container_info.py'
Dec 06 09:51:33 compute-0 sudo[161055]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:51:33 compute-0 python3.9[161057]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Dec 06 09:51:33 compute-0 sudo[161055]: pam_unix(sudo:session): session closed for user root
Dec 06 09:51:33 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:51:33 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0300022c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:51:33 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:51:33 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:51:33 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:51:33.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:51:33 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:51:33 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 09:51:33 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:51:33.850 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 09:51:34 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v302: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:51:34 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:51:34 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd018004e70 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:51:34 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 09:51:35 compute-0 ceph-mon[74327]: pgmap v302: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:51:35 compute-0 sudo[161235]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yfxzdfmpmuaylazvppeztqutphamanak ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1765014694.6675632-1247-2437688271653/AnsiballZ_edpm_container_manage.py'
Dec 06 09:51:35 compute-0 sudo[161235]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:51:35 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:51:35 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcffc00b810 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:51:35 compute-0 python3[161237]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_id=ovn_metadata_agent config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Dec 06 09:51:35 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:51:35 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0040040e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:51:35 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:51:35 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 09:51:35 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:51:35.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 09:51:35 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:51:35 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:51:35 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:51:35.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:51:36 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v303: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:51:36 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:51:36 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0300022e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:51:37 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:51:37.018Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec 06 09:51:37 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:51:37.019Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec 06 09:51:37 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:51:37.019Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 06 09:51:37 compute-0 ceph-mon[74327]: pgmap v303: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:51:37 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:51:37 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd018004e70 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:51:37 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:51:37 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcffc00b810 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:51:37 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:51:37 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:51:37 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:51:37.774 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:51:37 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:51:37 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:51:37 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:51:37.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:51:38 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v304: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 09:51:38 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:51:38 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0040040e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:51:38 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 06 09:51:38 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 09:51:39 compute-0 ceph-mon[74327]: pgmap v304: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 09:51:39 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 09:51:39 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:51:39 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd030002300 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:51:39 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 09:51:39 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:51:39 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd018004e70 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:51:39 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:51:39 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:51:39 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:51:39.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:51:39 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:51:39 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 09:51:39 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:51:39.857 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 09:51:40 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v305: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:51:40 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:51:40 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcffc00b810 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:51:40 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:09:51:40] "GET /metrics HTTP/1.1" 200 48258 "" "Prometheus/2.51.0"
Dec 06 09:51:40 compute-0 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:09:51:40] "GET /metrics HTTP/1.1" 200 48258 "" "Prometheus/2.51.0"
Dec 06 09:51:41 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:51:41 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0040040e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:51:41 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:51:41 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd030002320 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:51:41 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:51:41 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 09:51:41 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:51:41.779 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 09:51:41 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:51:41 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 09:51:41 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:51:41.859 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 09:51:41 compute-0 ceph-mon[74327]: pgmap v305: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:51:42 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v306: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:51:42 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:51:42 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd018004e70 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:51:42 compute-0 podman[161321]: 2025-12-06 09:51:42.540196612 +0000 UTC m=+3.358865903 container health_status ed08b04f63d3695f0aa2f04fcc706897af4aa24f286fa3cd10a4323ee2c868ab (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true)
Dec 06 09:51:43 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:51:43 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcffc00b810 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:51:43 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:51:43 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcffc00b810 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:51:43 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:51:43 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:51:43 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:51:43.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:51:43 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:51:43 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:51:43 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:51:43.862 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:51:44 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v307: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:51:44 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:51:44 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd030002340 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:51:44 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 09:51:45 compute-0 ceph-mon[74327]: pgmap v306: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:51:45 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:51:45 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd018004e70 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:51:45 compute-0 podman[161253]: 2025-12-06 09:51:45.617557307 +0000 UTC m=+10.031679506 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3
Dec 06 09:51:45 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:51:45 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcffc00b810 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:51:45 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:51:45 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:51:45 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:51:45.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:51:45 compute-0 podman[161411]: 2025-12-06 09:51:45.815020327 +0000 UTC m=+0.089804285 container create ec1e66d2175087ad7a28a9a6b5a784e30128df3f5e268959d009e364cd5120f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Dec 06 09:51:45 compute-0 podman[161411]: 2025-12-06 09:51:45.745347846 +0000 UTC m=+0.020131814 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3
Dec 06 09:51:45 compute-0 python3[161237]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_metadata_agent --cgroupns=host --conmon-pidfile /run/ovn_metadata_agent.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env EDPM_CONFIG_HASH=0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d --healthcheck-command /openstack/healthcheck --label config_id=ovn_metadata_agent --label container_name=ovn_metadata_agent --label managed_by=edpm_ansible --label config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']} --log-driver journald --log-level info --network host --pid host --privileged=True --user root --volume /run/openvswitch:/run/openvswitch:z --volume /var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z --volume /run/netns:/run/netns:shared --volume /var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/neutron:/var/lib/neutron:shared,z --volume /var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro --volume /var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro --volume /var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3
Dec 06 09:51:45 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:51:45 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:51:45 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:51:45.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:51:45 compute-0 sudo[161235]: pam_unix(sudo:session): session closed for user root
Dec 06 09:51:46 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v308: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:51:46 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:51:46 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcffc00b810 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:51:46 compute-0 ceph-mon[74327]: pgmap v307: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:51:47 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:51:47.020Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 06 09:51:47 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:51:47 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd030002360 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:51:47 compute-0 sudo[161476]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 09:51:47 compute-0 sudo[161476]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:51:47 compute-0 sudo[161476]: pam_unix(sudo:session): session closed for user root
Dec 06 09:51:47 compute-0 ceph-mon[74327]: pgmap v308: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:51:47 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:51:47 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd018004e70 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:51:47 compute-0 sudo[161626]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wmfvbncelxacwdotnteccbztcekikmrm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014707.4109752-1271-210644363834944/AnsiballZ_stat.py'
Dec 06 09:51:47 compute-0 sudo[161626]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:51:47 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:51:47 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 09:51:47 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:51:47.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 09:51:47 compute-0 python3.9[161628]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 06 09:51:47 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:51:47 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:51:47 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:51:47.865 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:51:47 compute-0 sudo[161626]: pam_unix(sudo:session): session closed for user root
Dec 06 09:51:48 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v309: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 09:51:48 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:51:48 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcffc00b810 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:51:48 compute-0 sudo[161780]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xukemgjugfusjhxdolgqvuillyymdcyq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014708.202222-1298-20313109603513/AnsiballZ_file.py'
Dec 06 09:51:48 compute-0 sudo[161780]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:51:48 compute-0 python3.9[161782]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:51:48 compute-0 sudo[161780]: pam_unix(sudo:session): session closed for user root
Dec 06 09:51:48 compute-0 sudo[161856]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aagaqyqdfgdebofojmmtnlyjsqwufcxs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014708.202222-1298-20313109603513/AnsiballZ_stat.py'
Dec 06 09:51:48 compute-0 sudo[161856]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:51:49 compute-0 python3.9[161858]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 06 09:51:49 compute-0 sudo[161856]: pam_unix(sudo:session): session closed for user root
Dec 06 09:51:49 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:51:49 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0040040e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:51:49 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 09:51:49 compute-0 ceph-mon[74327]: pgmap v309: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 09:51:49 compute-0 sudo[162009]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yuepvvmppeaijuhnyahnstzmpwpwxajc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014709.150902-1298-182060556066640/AnsiballZ_copy.py'
Dec 06 09:51:49 compute-0 sudo[162009]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:51:49 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:51:49 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd030002380 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:51:49 compute-0 python3.9[162011]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765014709.150902-1298-182060556066640/source dest=/etc/systemd/system/edpm_ovn_metadata_agent.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:51:49 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:51:49 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:51:49 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:51:49.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:51:49 compute-0 sudo[162009]: pam_unix(sudo:session): session closed for user root
Dec 06 09:51:49 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:51:49 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:51:49 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:51:49.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:51:50 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v310: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:51:50 compute-0 sudo[162085]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rtueojdzspxtmrjshvdpvfsvfkijxjok ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014709.150902-1298-182060556066640/AnsiballZ_systemd.py'
Dec 06 09:51:50 compute-0 sudo[162085]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:51:50 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:51:50 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd018004e70 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:51:50 compute-0 python3.9[162087]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 06 09:51:50 compute-0 systemd[1]: Reloading.
Dec 06 09:51:50 compute-0 systemd-rc-local-generator[162109]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 09:51:50 compute-0 systemd-sysv-generator[162114]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 09:51:50 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:09:51:50] "GET /metrics HTTP/1.1" 200 48258 "" "Prometheus/2.51.0"
Dec 06 09:51:50 compute-0 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:09:51:50] "GET /metrics HTTP/1.1" 200 48258 "" "Prometheus/2.51.0"
Dec 06 09:51:50 compute-0 sudo[162085]: pam_unix(sudo:session): session closed for user root
Dec 06 09:51:51 compute-0 ceph-mon[74327]: pgmap v310: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:51:51 compute-0 sudo[162202]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-buytvizzuandkydcqaledngsjxknpifx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014709.150902-1298-182060556066640/AnsiballZ_systemd.py'
Dec 06 09:51:51 compute-0 sudo[162202]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:51:51 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:51:51 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcffc00b810 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:51:51 compute-0 python3.9[162205]: ansible-systemd Invoked with state=restarted name=edpm_ovn_metadata_agent.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 06 09:51:51 compute-0 systemd[1]: Reloading.
Dec 06 09:51:51 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:51:51 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0040040e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:51:51 compute-0 systemd-sysv-generator[162238]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 09:51:51 compute-0 systemd-rc-local-generator[162235]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 09:51:51 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:51:51 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 09:51:51 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:51:51.790 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 09:51:51 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:51:51 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:51:51 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:51:51.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:51:51 compute-0 systemd[1]: Starting ovn_metadata_agent container...
Dec 06 09:51:52 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v311: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:51:52 compute-0 systemd[1]: Started libcrun container.
Dec 06 09:51:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f50d8f04ba2ccac2f396f3c4fed03e6d5841af9ca74d7e8187f560f79e9437d8/merged/etc/neutron.conf.d supports timestamps until 2038 (0x7fffffff)
Dec 06 09:51:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f50d8f04ba2ccac2f396f3c4fed03e6d5841af9ca74d7e8187f560f79e9437d8/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 06 09:51:52 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run ec1e66d2175087ad7a28a9a6b5a784e30128df3f5e268959d009e364cd5120f2.
Dec 06 09:51:52 compute-0 podman[162246]: 2025-12-06 09:51:52.089883229 +0000 UTC m=+0.141371580 container init ec1e66d2175087ad7a28a9a6b5a784e30128df3f5e268959d009e364cd5120f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_metadata_agent)
Dec 06 09:51:52 compute-0 ovn_metadata_agent[162262]: + sudo -E kolla_set_configs
Dec 06 09:51:52 compute-0 podman[162246]: 2025-12-06 09:51:52.117248436 +0000 UTC m=+0.168736767 container start ec1e66d2175087ad7a28a9a6b5a784e30128df3f5e268959d009e364cd5120f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible)
Dec 06 09:51:52 compute-0 edpm-start-podman-container[162246]: ovn_metadata_agent
Dec 06 09:51:52 compute-0 podman[162268]: 2025-12-06 09:51:52.196167903 +0000 UTC m=+0.064095928 container health_status ec1e66d2175087ad7a28a9a6b5a784e30128df3f5e268959d009e364cd5120f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Dec 06 09:51:52 compute-0 edpm-start-podman-container[162245]: Creating additional drop-in dependency for "ovn_metadata_agent" (ec1e66d2175087ad7a28a9a6b5a784e30128df3f5e268959d009e364cd5120f2)
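The podman init/start/health_status events above carry the full config_data dict that edpm_ansible attached to the ovn_metadata_agent container (host network/PID namespaces, privileged, restart=always, the bind mounts, and the /openstack/healthcheck test). As a rough illustration of how such a dict maps onto a podman command line, here is a hypothetical Python helper; it is not the edpm_ansible implementation, and only a subset of the logged keys is shown.

    # Hypothetical sketch: turn an edpm-style config_data dict (as logged
    # above, abbreviated here) into a "podman run" argument vector.
    import shlex

    config_data = {
        'cgroupns': 'host',
        'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'},
        'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3',
        'net': 'host',
        'pid': 'host',
        'privileged': True,
        'restart': 'always',
        'user': 'root',
        'volumes': ['/run/openvswitch:/run/openvswitch:z'],
    }

    def podman_run_args(name, cfg):
        args = ['podman', 'run', '--detach', '--name', name,
                '--cgroupns', cfg['cgroupns'],
                '--net', cfg['net'], '--pid', cfg['pid'],
                '--restart', cfg['restart'], '--user', cfg['user']]
        if cfg.get('privileged'):
            args.append('--privileged')
        for key, value in cfg.get('environment', {}).items():
            args += ['--env', f'{key}={value}']
        for volume in cfg.get('volumes', []):
            args += ['--volume', volume]
        args.append(cfg['image'])
        return args

    print(shlex.join(podman_run_args('ovn_metadata_agent', config_data)))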
Dec 06 09:51:52 compute-0 ovn_metadata_agent[162262]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Dec 06 09:51:52 compute-0 ovn_metadata_agent[162262]: INFO:__main__:Validating config file
Dec 06 09:51:52 compute-0 ovn_metadata_agent[162262]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Dec 06 09:51:52 compute-0 ovn_metadata_agent[162262]: INFO:__main__:Copying service configuration files
Dec 06 09:51:52 compute-0 ovn_metadata_agent[162262]: INFO:__main__:Deleting /etc/neutron/rootwrap.conf
Dec 06 09:51:52 compute-0 ovn_metadata_agent[162262]: INFO:__main__:Copying /etc/neutron.conf.d/01-rootwrap.conf to /etc/neutron/rootwrap.conf
Dec 06 09:51:52 compute-0 ovn_metadata_agent[162262]: INFO:__main__:Setting permission for /etc/neutron/rootwrap.conf
Dec 06 09:51:52 compute-0 ovn_metadata_agent[162262]: INFO:__main__:Writing out command to execute
Dec 06 09:51:52 compute-0 ovn_metadata_agent[162262]: INFO:__main__:Setting permission for /var/lib/neutron
Dec 06 09:51:52 compute-0 ovn_metadata_agent[162262]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts
Dec 06 09:51:52 compute-0 ovn_metadata_agent[162262]: INFO:__main__:Setting permission for /var/lib/neutron/ovn-metadata-proxy
Dec 06 09:51:52 compute-0 ovn_metadata_agent[162262]: INFO:__main__:Setting permission for /var/lib/neutron/external
Dec 06 09:51:52 compute-0 ovn_metadata_agent[162262]: INFO:__main__:Setting permission for /var/lib/neutron/ovn_metadata_haproxy_wrapper
Dec 06 09:51:52 compute-0 ovn_metadata_agent[162262]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts/haproxy-kill
Dec 06 09:51:52 compute-0 ovn_metadata_agent[162262]: INFO:__main__:Setting permission for /var/lib/neutron/external/pids
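The INFO:__main__ lines above are kolla_set_configs applying the COPY_ALWAYS strategy from /var/lib/kolla/config_files/config.json: delete any stale destination, copy the source file in, then fix permissions. A minimal Python sketch of that loop, assuming a kolla-like config.json with "config_files" entries carrying source/dest/perm keys (this mirrors the logged messages; it is not the real kolla_set_configs code):

    # Minimal COPY_ALWAYS-style config step, assuming kolla-like
    # config.json entries: {"source": ..., "dest": ..., "perm": "0600"}.
    import json
    import os
    import shutil

    def copy_always(config_path='/var/lib/kolla/config_files/config.json'):
        with open(config_path) as f:
            cfg = json.load(f)
        for entry in cfg.get('config_files', []):
            src, dest = entry['source'], entry['dest']
            if os.path.exists(dest):
                print(f'Deleting {dest}')
                os.remove(dest)
            print(f'Copying {src} to {dest}')
            shutil.copy(src, dest)
            print(f'Setting permission for {dest}')
            os.chmod(dest, int(entry.get('perm', '0600'), 8))

    # copy_always()  # would reproduce the Deleting/Copying/Setting lines above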
Dec 06 09:51:52 compute-0 ovn_metadata_agent[162262]: ++ cat /run_command
Dec 06 09:51:52 compute-0 systemd[1]: Reloading.
Dec 06 09:51:52 compute-0 ovn_metadata_agent[162262]: + CMD=neutron-ovn-metadata-agent
Dec 06 09:51:52 compute-0 ovn_metadata_agent[162262]: + ARGS=
Dec 06 09:51:52 compute-0 ovn_metadata_agent[162262]: + sudo kolla_copy_cacerts
Dec 06 09:51:52 compute-0 ovn_metadata_agent[162262]: + [[ ! -n '' ]]
Dec 06 09:51:52 compute-0 ovn_metadata_agent[162262]: + . kolla_extend_start
Dec 06 09:51:52 compute-0 ovn_metadata_agent[162262]: Running command: 'neutron-ovn-metadata-agent'
Dec 06 09:51:52 compute-0 ovn_metadata_agent[162262]: + echo 'Running command: '\''neutron-ovn-metadata-agent'\'''
Dec 06 09:51:52 compute-0 ovn_metadata_agent[162262]: + umask 0022
Dec 06 09:51:52 compute-0 ovn_metadata_agent[162262]: + exec neutron-ovn-metadata-agent
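The '+'-prefixed lines are the container entrypoint's shell trace: it cats the /run_command file written out by kolla_set_configs earlier, sets umask 0022, and execs the result, so the container's main process becomes neutron-ovn-metadata-agent itself. The final step rendered in Python, assuming the same /run_command convention:

    # Sketch of the traced exec step, assuming /run_command holds the
    # service command line written by kolla_set_configs.
    import os
    import shlex

    with open('/run_command') as f:
        cmd = shlex.split(f.read().strip())   # e.g. ['neutron-ovn-metadata-agent']

    print(f"Running command: {' '.join(cmd)}")
    os.umask(0o022)
    os.execvp(cmd[0], cmd)                    # replace this process, like shell 'exec'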
Dec 06 09:51:52 compute-0 systemd-rc-local-generator[162335]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 09:51:52 compute-0 systemd-sysv-generator[162342]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 09:51:52 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:51:52 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd030004f80 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:51:52 compute-0 systemd[1]: Started ovn_metadata_agent container.
Dec 06 09:51:52 compute-0 sudo[162202]: pam_unix(sudo:session): session closed for user root
Dec 06 09:51:52 compute-0 sshd-session[153056]: Connection closed by 192.168.122.30 port 36022
Dec 06 09:51:52 compute-0 sshd-session[153053]: pam_unix(sshd:session): session closed for user zuul
Dec 06 09:51:52 compute-0 systemd[1]: session-52.scope: Deactivated successfully.
Dec 06 09:51:52 compute-0 systemd[1]: session-52.scope: Consumed 57.782s CPU time.
Dec 06 09:51:52 compute-0 systemd-logind[795]: Session 52 logged out. Waiting for processes to exit.
Dec 06 09:51:52 compute-0 systemd-logind[795]: Removed session 52.
Dec 06 09:51:53 compute-0 ceph-mon[74327]: pgmap v311: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:51:53 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:51:53 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd018004e70 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:51:53 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:51:53 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcffc00b810 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:51:53 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:51:53 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:51:53 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:51:53.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:51:53 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:51:53 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:51:53 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:51:53.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
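The beast lines are radosgw access logs for anonymous "HEAD / HTTP/1.0" probes from 192.168.122.100 and .102, the 200-with-empty-body pattern typical of a load-balancer health check. A sketch of such a probe (host and port are assumptions for illustration; the logged probes use HTTP/1.0, while http.client sends HTTP/1.1, which is close enough here):

    # Anonymous HEAD probe of the kind that produces the beast lines above.
    import http.client

    conn = http.client.HTTPConnection('192.168.122.100', 8080, timeout=5)
    conn.request('HEAD', '/')
    resp = conn.getresponse()
    print(resp.status)   # 200 expected from a healthy radosgw
    conn.close()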
Dec 06 09:51:53 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 06 09:51:53 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 09:51:53 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 09:51:53 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 09:51:53 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 09:51:53 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 09:51:53 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 09:51:53 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
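The mon audit line above records the mgr dispatching an "osd blocklist ls" mon command. The equivalent query from this host, assuming a reachable cluster and a client keyring, would be the same command via the ceph CLI:

    # Issue the mon command the audit log records, via the ceph CLI.
    import json
    import subprocess

    out = subprocess.check_output(
        ['ceph', 'osd', 'blocklist', 'ls', '--format', 'json'])
    print(json.loads(out))   # list of currently blocklisted client addresses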
Dec 06 09:51:54 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v312: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.189 162267 INFO neutron.common.config [-] Logging enabled!
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.189 162267 INFO neutron.common.config [-] /usr/bin/neutron-ovn-metadata-agent version 22.2.2.dev43
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.189 162267 DEBUG neutron.common.config [-] command line: /usr/bin/neutron-ovn-metadata-agent setup_logging /usr/lib/python3.9/site-packages/neutron/common/config.py:123
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.190 162267 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.190 162267 DEBUG neutron.agent.ovn.metadata_agent [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.190 162267 DEBUG neutron.agent.ovn.metadata_agent [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.190 162267 DEBUG neutron.agent.ovn.metadata_agent [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.190 162267 DEBUG neutron.agent.ovn.metadata_agent [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.190 162267 DEBUG neutron.agent.ovn.metadata_agent [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.190 162267 DEBUG neutron.agent.ovn.metadata_agent [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.190 162267 DEBUG neutron.agent.ovn.metadata_agent [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.191 162267 DEBUG neutron.agent.ovn.metadata_agent [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.191 162267 DEBUG neutron.agent.ovn.metadata_agent [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.191 162267 DEBUG neutron.agent.ovn.metadata_agent [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.191 162267 DEBUG neutron.agent.ovn.metadata_agent [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.191 162267 DEBUG neutron.agent.ovn.metadata_agent [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.191 162267 DEBUG neutron.agent.ovn.metadata_agent [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.191 162267 DEBUG neutron.agent.ovn.metadata_agent [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.191 162267 DEBUG neutron.agent.ovn.metadata_agent [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.191 162267 DEBUG neutron.agent.ovn.metadata_agent [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.191 162267 DEBUG neutron.agent.ovn.metadata_agent [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.192 162267 DEBUG neutron.agent.ovn.metadata_agent [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.192 162267 DEBUG neutron.agent.ovn.metadata_agent [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.192 162267 DEBUG neutron.agent.ovn.metadata_agent [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.192 162267 DEBUG neutron.agent.ovn.metadata_agent [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.192 162267 DEBUG neutron.agent.ovn.metadata_agent [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.192 162267 DEBUG neutron.agent.ovn.metadata_agent [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.192 162267 DEBUG neutron.agent.ovn.metadata_agent [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.192 162267 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.192 162267 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.193 162267 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.193 162267 DEBUG neutron.agent.ovn.metadata_agent [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.193 162267 DEBUG neutron.agent.ovn.metadata_agent [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.193 162267 DEBUG neutron.agent.ovn.metadata_agent [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.193 162267 DEBUG neutron.agent.ovn.metadata_agent [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.193 162267 DEBUG neutron.agent.ovn.metadata_agent [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.193 162267 DEBUG neutron.agent.ovn.metadata_agent [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.193 162267 DEBUG neutron.agent.ovn.metadata_agent [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.193 162267 DEBUG neutron.agent.ovn.metadata_agent [-] host                           = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.194 162267 DEBUG neutron.agent.ovn.metadata_agent [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.194 162267 DEBUG neutron.agent.ovn.metadata_agent [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.194 162267 DEBUG neutron.agent.ovn.metadata_agent [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.194 162267 DEBUG neutron.agent.ovn.metadata_agent [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.194 162267 DEBUG neutron.agent.ovn.metadata_agent [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.194 162267 DEBUG neutron.agent.ovn.metadata_agent [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.194 162267 DEBUG neutron.agent.ovn.metadata_agent [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.194 162267 DEBUG neutron.agent.ovn.metadata_agent [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.194 162267 DEBUG neutron.agent.ovn.metadata_agent [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.194 162267 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.195 162267 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.195 162267 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.195 162267 DEBUG neutron.agent.ovn.metadata_agent [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.195 162267 DEBUG neutron.agent.ovn.metadata_agent [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.195 162267 DEBUG neutron.agent.ovn.metadata_agent [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.195 162267 DEBUG neutron.agent.ovn.metadata_agent [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.195 162267 DEBUG neutron.agent.ovn.metadata_agent [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.195 162267 DEBUG neutron.agent.ovn.metadata_agent [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.195 162267 DEBUG neutron.agent.ovn.metadata_agent [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.195 162267 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.196 162267 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.196 162267 DEBUG neutron.agent.ovn.metadata_agent [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.196 162267 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.196 162267 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.196 162267 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.196 162267 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.196 162267 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.196 162267 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.196 162267 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.197 162267 DEBUG neutron.agent.ovn.metadata_agent [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.197 162267 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.197 162267 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.197 162267 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.197 162267 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.197 162267 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.197 162267 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.197 162267 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.197 162267 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.197 162267 DEBUG neutron.agent.ovn.metadata_agent [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.198 162267 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.198 162267 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.198 162267 DEBUG neutron.agent.ovn.metadata_agent [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.198 162267 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.198 162267 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.198 162267 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.198 162267 DEBUG neutron.agent.ovn.metadata_agent [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.198 162267 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.198 162267 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.198 162267 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.199 162267 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.199 162267 DEBUG neutron.agent.ovn.metadata_agent [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.199 162267 DEBUG neutron.agent.ovn.metadata_agent [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.199 162267 DEBUG neutron.agent.ovn.metadata_agent [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.199 162267 DEBUG neutron.agent.ovn.metadata_agent [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.199 162267 DEBUG neutron.agent.ovn.metadata_agent [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.199 162267 DEBUG neutron.agent.ovn.metadata_agent [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.199 162267 DEBUG neutron.agent.ovn.metadata_agent [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.199 162267 DEBUG neutron.agent.ovn.metadata_agent [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.199 162267 DEBUG neutron.agent.ovn.metadata_agent [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.199 162267 DEBUG neutron.agent.ovn.metadata_agent [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.200 162267 DEBUG neutron.agent.ovn.metadata_agent [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.200 162267 DEBUG neutron.agent.ovn.metadata_agent [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.200 162267 DEBUG neutron.agent.ovn.metadata_agent [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.200 162267 DEBUG neutron.agent.ovn.metadata_agent [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.200 162267 DEBUG neutron.agent.ovn.metadata_agent [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.200 162267 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.200 162267 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.200 162267 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.200 162267 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.201 162267 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.201 162267 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.201 162267 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.201 162267 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.201 162267 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.201 162267 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.201 162267 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.201 162267 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.201 162267 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.202 162267 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.202 162267 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.202 162267 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.202 162267 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.202 162267 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.202 162267 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.202 162267 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.202 162267 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.202 162267 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.203 162267 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.203 162267 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.203 162267 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.203 162267 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.203 162267 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.203 162267 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.203 162267 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.203 162267 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.203 162267 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.204 162267 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.204 162267 DEBUG neutron.agent.ovn.metadata_agent [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.204 162267 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.204 162267 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.204 162267 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.204 162267 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.204 162267 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.204 162267 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.204 162267 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.205 162267 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.205 162267 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.205 162267 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.205 162267 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.205 162267 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.205 162267 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.205 162267 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.205 162267 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.205 162267 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.206 162267 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.206 162267 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.206 162267 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.206 162267 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.206 162267 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.206 162267 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.206 162267 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.206 162267 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.206 162267 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.207 162267 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.207 162267 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.207 162267 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.207 162267 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.207 162267 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.207 162267 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.207 162267 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.207 162267 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.207 162267 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.207 162267 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.208 162267 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.208 162267 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.208 162267 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.208 162267 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.208 162267 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.208 162267 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.208 162267 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.208 162267 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.208 162267 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.209 162267 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.209 162267 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.209 162267 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.209 162267 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.209 162267 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.209 162267 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.209 162267 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.209 162267 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.209 162267 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.210 162267 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.210 162267 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.210 162267 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.210 162267 DEBUG neutron.agent.ovn.metadata_agent [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.210 162267 DEBUG neutron.agent.ovn.metadata_agent [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.210 162267 DEBUG neutron.agent.ovn.metadata_agent [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.210 162267 DEBUG neutron.agent.ovn.metadata_agent [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.210 162267 DEBUG neutron.agent.ovn.metadata_agent [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.210 162267 DEBUG neutron.agent.ovn.metadata_agent [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.211 162267 DEBUG neutron.agent.ovn.metadata_agent [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.211 162267 DEBUG neutron.agent.ovn.metadata_agent [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.211 162267 DEBUG neutron.agent.ovn.metadata_agent [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.211 162267 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.211 162267 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.211 162267 DEBUG neutron.agent.ovn.metadata_agent [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.211 162267 DEBUG neutron.agent.ovn.metadata_agent [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.211 162267 DEBUG neutron.agent.ovn.metadata_agent [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.211 162267 DEBUG neutron.agent.ovn.metadata_agent [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.211 162267 DEBUG neutron.agent.ovn.metadata_agent [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.212 162267 DEBUG neutron.agent.ovn.metadata_agent [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.212 162267 DEBUG neutron.agent.ovn.metadata_agent [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.212 162267 DEBUG neutron.agent.ovn.metadata_agent [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.212 162267 DEBUG neutron.agent.ovn.metadata_agent [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.212 162267 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.212 162267 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.212 162267 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.212 162267 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.212 162267 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.213 162267 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.213 162267 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.213 162267 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.213 162267 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.213 162267 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.213 162267 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.213 162267 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.213 162267 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.213 162267 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.214 162267 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.214 162267 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.214 162267 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.214 162267 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.214 162267 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.214 162267 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.214 162267 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.214 162267 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.214 162267 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.215 162267 DEBUG neutron.agent.ovn.metadata_agent [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.215 162267 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.215 162267 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.215 162267 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.215 162267 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.215 162267 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.215 162267 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.215 162267 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.215 162267 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.215 162267 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.216 162267 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.216 162267 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.216 162267 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.216 162267 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.216 162267 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.216 162267 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.216 162267 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.216 162267 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.217 162267 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.217 162267 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.217 162267 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.217 162267 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.217 162267 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.217 162267 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.217 162267 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.217 162267 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.217 162267 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.218 162267 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.218 162267 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.218 162267 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.218 162267 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.218 162267 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.218 162267 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.218 162267 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.218 162267 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.218 162267 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.219 162267 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.219 162267 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.219 162267 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.219 162267 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.219 162267 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.219 162267 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.219 162267 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.219 162267 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.219 162267 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.219 162267 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.220 162267 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.220 162267 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.220 162267 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.220 162267 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.220 162267 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.220 162267 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.220 162267 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.220 162267 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.220 162267 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.221 162267 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.221 162267 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.221 162267 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.221 162267 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.221 162267 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.221 162267 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.221 162267 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.221 162267 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.221 162267 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.221 162267 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.222 162267 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.222 162267 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.222 162267 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.230 162267 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.230 162267 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.230 162267 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.230 162267 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connecting...
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.231 162267 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connected
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.244 162267 DEBUG neutron.agent.ovn.metadata.agent [-] Loaded chassis name d39b5be8-d4cf-41c7-9a64-1ee03801f4e1 (UUID: d39b5be8-d4cf-41c7-9a64-1ee03801f4e1) and ovn bridge br-int. _load_config /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:309
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.265 162267 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.266 162267 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.266 162267 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.266 162267 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Chassis_Private.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.269 162267 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.275 162267 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.281 162267 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched CREATE: ChassisPrivateCreateEvent(events=('create',), table='Chassis_Private', conditions=(('name', '=', 'd39b5be8-d4cf-41c7-9a64-1ee03801f4e1'),), old_conditions=None), priority=20 to row=Chassis_Private(chassis=[<ovs.db.idl.Row object at 0x7f70c28558b0>], external_ids={}, name=d39b5be8-d4cf-41c7-9a64-1ee03801f4e1, nb_cfg_timestamp=1765014646989, nb_cfg=1) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.282 162267 DEBUG neutron_lib.callbacks.manager [-] Subscribe: <bound method MetadataProxyHandler.post_fork_initialize of <neutron.agent.ovn.metadata.server.MetadataProxyHandler object at 0x7f70c2851f70>> process after_init 55550000, False subscribe /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:52
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.282 162267 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.283 162267 DEBUG oslo_concurrency.lockutils [-] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.283 162267 DEBUG oslo_concurrency.lockutils [-] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.283 162267 INFO oslo_service.service [-] Starting 1 workers
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.287 162267 DEBUG oslo_service.service [-] Started child 162380 _start_child /usr/lib/python3.9/site-packages/oslo_service/service.py:575
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.290 162267 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.namespace_cmd', '--privsep_sock_path', '/tmp/tmp551o2lw7/privsep.sock']
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.291 162380 DEBUG neutron_lib.callbacks.manager [-] Publish callbacks ['neutron.agent.ovn.metadata.server.MetadataProxyHandler.post_fork_initialize-168927'] for process (None), after_init _notify_loop /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:184
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.314 162380 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.315 162380 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.315 162380 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.318 162380 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.326 162380 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.332 162380 INFO eventlet.wsgi.server [-] (162380) wsgi starting up on http:/var/lib/neutron/metadata_proxy
Dec 06 09:51:54 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 09:51:54 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:51:54 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd028002090 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:51:54 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 09:51:54 compute-0 kernel: capability: warning: `privsep-helper' uses deprecated v2 capabilities in a way that may be insecure
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.986 162267 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.987 162267 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmp551o2lw7/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.855 162385 INFO oslo.privsep.daemon [-] privsep daemon starting
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.859 162385 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.861 162385 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_SYS_ADMIN/CAP_SYS_ADMIN/none
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.861 162385 INFO oslo.privsep.daemon [-] privsep daemon running as pid 162385
Dec 06 09:51:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:54.991 162385 DEBUG oslo.privsep.daemon [-] privsep: reply[faedbc8a-40e9-4699-81b3-e9ab199645c2]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 09:51:55 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:51:55 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd030004f80 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:51:55 compute-0 ceph-mon[74327]: pgmap v312: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:51:55 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:55.522 162385 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 09:51:55 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:55.522 162385 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 09:51:55 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:55.522 162385 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 09:51:55 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:51:55 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd018004e70 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:51:55 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:51:55 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 09:51:55 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:51:55.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 09:51:55 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:51:55 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 09:51:55 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:51:55.874 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 09:51:56 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v313: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.049 162385 DEBUG oslo.privsep.daemon [-] privsep: reply[84c5dbab-1250-4578-a479-abb41fe6ac9e]: (4, []) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.052 162267 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbAddCommand(_result=None, table=Chassis_Private, record=d39b5be8-d4cf-41c7-9a64-1ee03801f4e1, column=external_ids, values=({'neutron:ovn-metadata-id': '765394bf-011d-5efb-b5d8-c10778dc40f3'},)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.062 162267 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=d39b5be8-d4cf-41c7-9a64-1ee03801f4e1, col_values=(('external_ids', {'neutron:ovn-bridge': 'br-int'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.072 162267 DEBUG oslo_service.service [-] Full set of CONF: wait /usr/lib/python3.9/site-packages/oslo_service/service.py:649
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.072 162267 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.072 162267 DEBUG oslo_service.service [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.072 162267 DEBUG oslo_service.service [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.072 162267 DEBUG oslo_service.service [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.072 162267 DEBUG oslo_service.service [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.073 162267 DEBUG oslo_service.service [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.073 162267 DEBUG oslo_service.service [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.073 162267 DEBUG oslo_service.service [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.073 162267 DEBUG oslo_service.service [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.073 162267 DEBUG oslo_service.service [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.073 162267 DEBUG oslo_service.service [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.073 162267 DEBUG oslo_service.service [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.073 162267 DEBUG oslo_service.service [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.074 162267 DEBUG oslo_service.service [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.074 162267 DEBUG oslo_service.service [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.074 162267 DEBUG oslo_service.service [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.074 162267 DEBUG oslo_service.service [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.074 162267 DEBUG oslo_service.service [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.074 162267 DEBUG oslo_service.service [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.074 162267 DEBUG oslo_service.service [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.074 162267 DEBUG oslo_service.service [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.075 162267 DEBUG oslo_service.service [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.076 162267 DEBUG oslo_service.service [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.076 162267 DEBUG oslo_service.service [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.076 162267 DEBUG oslo_service.service [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.076 162267 DEBUG oslo_service.service [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.076 162267 DEBUG oslo_service.service [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.076 162267 DEBUG oslo_service.service [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.077 162267 DEBUG oslo_service.service [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.077 162267 DEBUG oslo_service.service [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.077 162267 DEBUG oslo_service.service [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.077 162267 DEBUG oslo_service.service [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.077 162267 DEBUG oslo_service.service [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.077 162267 DEBUG oslo_service.service [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.077 162267 DEBUG oslo_service.service [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.077 162267 DEBUG oslo_service.service [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.078 162267 DEBUG oslo_service.service [-] host                           = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.078 162267 DEBUG oslo_service.service [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.078 162267 DEBUG oslo_service.service [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.078 162267 DEBUG oslo_service.service [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.078 162267 DEBUG oslo_service.service [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.078 162267 DEBUG oslo_service.service [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.078 162267 DEBUG oslo_service.service [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.078 162267 DEBUG oslo_service.service [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.078 162267 DEBUG oslo_service.service [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.079 162267 DEBUG oslo_service.service [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.079 162267 DEBUG oslo_service.service [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.079 162267 DEBUG oslo_service.service [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.079 162267 DEBUG oslo_service.service [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.079 162267 DEBUG oslo_service.service [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.079 162267 DEBUG oslo_service.service [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.079 162267 DEBUG oslo_service.service [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.079 162267 DEBUG oslo_service.service [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.079 162267 DEBUG oslo_service.service [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.079 162267 DEBUG oslo_service.service [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.080 162267 DEBUG oslo_service.service [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.080 162267 DEBUG oslo_service.service [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.080 162267 DEBUG oslo_service.service [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.080 162267 DEBUG oslo_service.service [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.080 162267 DEBUG oslo_service.service [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.080 162267 DEBUG oslo_service.service [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.080 162267 DEBUG oslo_service.service [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.080 162267 DEBUG oslo_service.service [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.081 162267 DEBUG oslo_service.service [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.081 162267 DEBUG oslo_service.service [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.081 162267 DEBUG oslo_service.service [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.081 162267 DEBUG oslo_service.service [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.081 162267 DEBUG oslo_service.service [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.081 162267 DEBUG oslo_service.service [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.081 162267 DEBUG oslo_service.service [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.081 162267 DEBUG oslo_service.service [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.081 162267 DEBUG oslo_service.service [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.082 162267 DEBUG oslo_service.service [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.082 162267 DEBUG oslo_service.service [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.082 162267 DEBUG oslo_service.service [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.082 162267 DEBUG oslo_service.service [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.082 162267 DEBUG oslo_service.service [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.082 162267 DEBUG oslo_service.service [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.082 162267 DEBUG oslo_service.service [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.083 162267 DEBUG oslo_service.service [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.083 162267 DEBUG oslo_service.service [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.083 162267 DEBUG oslo_service.service [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.083 162267 DEBUG oslo_service.service [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.083 162267 DEBUG oslo_service.service [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.083 162267 DEBUG oslo_service.service [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.083 162267 DEBUG oslo_service.service [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.083 162267 DEBUG oslo_service.service [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.083 162267 DEBUG oslo_service.service [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.083 162267 DEBUG oslo_service.service [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.083 162267 DEBUG oslo_service.service [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.084 162267 DEBUG oslo_service.service [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.084 162267 DEBUG oslo_service.service [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.084 162267 DEBUG oslo_service.service [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.084 162267 DEBUG oslo_service.service [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.084 162267 DEBUG oslo_service.service [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.084 162267 DEBUG oslo_service.service [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.084 162267 DEBUG oslo_service.service [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.084 162267 DEBUG oslo_service.service [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.084 162267 DEBUG oslo_service.service [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.085 162267 DEBUG oslo_service.service [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.085 162267 DEBUG oslo_service.service [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.085 162267 DEBUG oslo_service.service [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.085 162267 DEBUG oslo_service.service [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.085 162267 DEBUG oslo_service.service [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.085 162267 DEBUG oslo_service.service [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.085 162267 DEBUG oslo_service.service [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.085 162267 DEBUG oslo_service.service [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.085 162267 DEBUG oslo_service.service [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.086 162267 DEBUG oslo_service.service [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.086 162267 DEBUG oslo_service.service [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.086 162267 DEBUG oslo_service.service [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.086 162267 DEBUG oslo_service.service [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.086 162267 DEBUG oslo_service.service [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.086 162267 DEBUG oslo_service.service [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.086 162267 DEBUG oslo_service.service [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.087 162267 DEBUG oslo_service.service [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.087 162267 DEBUG oslo_service.service [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.087 162267 DEBUG oslo_service.service [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.087 162267 DEBUG oslo_service.service [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.087 162267 DEBUG oslo_service.service [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.087 162267 DEBUG oslo_service.service [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.087 162267 DEBUG oslo_service.service [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.087 162267 DEBUG oslo_service.service [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.088 162267 DEBUG oslo_service.service [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.088 162267 DEBUG oslo_service.service [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.088 162267 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.088 162267 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.088 162267 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.088 162267 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.088 162267 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.088 162267 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.088 162267 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.089 162267 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.089 162267 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.089 162267 DEBUG oslo_service.service [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.089 162267 DEBUG oslo_service.service [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.089 162267 DEBUG oslo_service.service [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.089 162267 DEBUG oslo_service.service [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.090 162267 DEBUG oslo_service.service [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.090 162267 DEBUG oslo_service.service [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.090 162267 DEBUG oslo_service.service [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.090 162267 DEBUG oslo_service.service [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.090 162267 DEBUG oslo_service.service [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.090 162267 DEBUG oslo_service.service [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.090 162267 DEBUG oslo_service.service [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.090 162267 DEBUG oslo_service.service [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.090 162267 DEBUG oslo_service.service [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.091 162267 DEBUG oslo_service.service [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.091 162267 DEBUG oslo_service.service [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.091 162267 DEBUG oslo_service.service [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.091 162267 DEBUG oslo_service.service [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.091 162267 DEBUG oslo_service.service [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.091 162267 DEBUG oslo_service.service [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.091 162267 DEBUG oslo_service.service [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.091 162267 DEBUG oslo_service.service [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.091 162267 DEBUG oslo_service.service [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.091 162267 DEBUG oslo_service.service [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.092 162267 DEBUG oslo_service.service [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.092 162267 DEBUG oslo_service.service [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.092 162267 DEBUG oslo_service.service [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.092 162267 DEBUG oslo_service.service [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.092 162267 DEBUG oslo_service.service [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.092 162267 DEBUG oslo_service.service [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.092 162267 DEBUG oslo_service.service [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.092 162267 DEBUG oslo_service.service [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.092 162267 DEBUG oslo_service.service [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.093 162267 DEBUG oslo_service.service [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.093 162267 DEBUG oslo_service.service [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.093 162267 DEBUG oslo_service.service [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.093 162267 DEBUG oslo_service.service [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.093 162267 DEBUG oslo_service.service [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.093 162267 DEBUG oslo_service.service [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.093 162267 DEBUG oslo_service.service [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.094 162267 DEBUG oslo_service.service [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.094 162267 DEBUG oslo_service.service [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.094 162267 DEBUG oslo_service.service [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.094 162267 DEBUG oslo_service.service [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.094 162267 DEBUG oslo_service.service [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.094 162267 DEBUG oslo_service.service [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.094 162267 DEBUG oslo_service.service [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.095 162267 DEBUG oslo_service.service [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.095 162267 DEBUG oslo_service.service [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.095 162267 DEBUG oslo_service.service [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.095 162267 DEBUG oslo_service.service [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.095 162267 DEBUG oslo_service.service [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.095 162267 DEBUG oslo_service.service [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.095 162267 DEBUG oslo_service.service [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.095 162267 DEBUG oslo_service.service [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.096 162267 DEBUG oslo_service.service [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.096 162267 DEBUG oslo_service.service [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.096 162267 DEBUG oslo_service.service [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.096 162267 DEBUG oslo_service.service [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.096 162267 DEBUG oslo_service.service [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.096 162267 DEBUG oslo_service.service [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.096 162267 DEBUG oslo_service.service [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.096 162267 DEBUG oslo_service.service [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.096 162267 DEBUG oslo_service.service [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.097 162267 DEBUG oslo_service.service [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.097 162267 DEBUG oslo_service.service [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.097 162267 DEBUG oslo_service.service [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.097 162267 DEBUG oslo_service.service [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.097 162267 DEBUG oslo_service.service [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.097 162267 DEBUG oslo_service.service [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.097 162267 DEBUG oslo_service.service [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.097 162267 DEBUG oslo_service.service [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.098 162267 DEBUG oslo_service.service [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.098 162267 DEBUG oslo_service.service [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.098 162267 DEBUG oslo_service.service [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.098 162267 DEBUG oslo_service.service [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.098 162267 DEBUG oslo_service.service [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.098 162267 DEBUG oslo_service.service [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.098 162267 DEBUG oslo_service.service [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.098 162267 DEBUG oslo_service.service [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.098 162267 DEBUG oslo_service.service [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.098 162267 DEBUG oslo_service.service [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.099 162267 DEBUG oslo_service.service [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.099 162267 DEBUG oslo_service.service [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.099 162267 DEBUG oslo_service.service [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.099 162267 DEBUG oslo_service.service [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.099 162267 DEBUG oslo_service.service [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.099 162267 DEBUG oslo_service.service [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.099 162267 DEBUG oslo_service.service [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.099 162267 DEBUG oslo_service.service [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.099 162267 DEBUG oslo_service.service [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.099 162267 DEBUG oslo_service.service [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.100 162267 DEBUG oslo_service.service [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.100 162267 DEBUG oslo_service.service [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.100 162267 DEBUG oslo_service.service [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.100 162267 DEBUG oslo_service.service [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.100 162267 DEBUG oslo_service.service [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.100 162267 DEBUG oslo_service.service [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.100 162267 DEBUG oslo_service.service [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.100 162267 DEBUG oslo_service.service [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.100 162267 DEBUG oslo_service.service [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.101 162267 DEBUG oslo_service.service [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.101 162267 DEBUG oslo_service.service [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.101 162267 DEBUG oslo_service.service [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.101 162267 DEBUG oslo_service.service [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.101 162267 DEBUG oslo_service.service [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.101 162267 DEBUG oslo_service.service [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.101 162267 DEBUG oslo_service.service [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.101 162267 DEBUG oslo_service.service [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.102 162267 DEBUG oslo_service.service [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.102 162267 DEBUG oslo_service.service [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.102 162267 DEBUG oslo_service.service [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.102 162267 DEBUG oslo_service.service [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.102 162267 DEBUG oslo_service.service [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.102 162267 DEBUG oslo_service.service [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.102 162267 DEBUG oslo_service.service [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.102 162267 DEBUG oslo_service.service [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.102 162267 DEBUG oslo_service.service [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.102 162267 DEBUG oslo_service.service [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.103 162267 DEBUG oslo_service.service [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.103 162267 DEBUG oslo_service.service [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.103 162267 DEBUG oslo_service.service [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.103 162267 DEBUG oslo_service.service [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.103 162267 DEBUG oslo_service.service [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.103 162267 DEBUG oslo_service.service [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.103 162267 DEBUG oslo_service.service [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.103 162267 DEBUG oslo_service.service [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.103 162267 DEBUG oslo_service.service [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.104 162267 DEBUG oslo_service.service [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.104 162267 DEBUG oslo_service.service [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.104 162267 DEBUG oslo_service.service [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.104 162267 DEBUG oslo_service.service [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.104 162267 DEBUG oslo_service.service [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.104 162267 DEBUG oslo_service.service [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.104 162267 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.104 162267 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.104 162267 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.105 162267 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.105 162267 DEBUG oslo_service.service [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.105 162267 DEBUG oslo_service.service [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.105 162267 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.105 162267 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.105 162267 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.105 162267 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.105 162267 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.105 162267 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.106 162267 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.106 162267 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.106 162267 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.106 162267 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.106 162267 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.106 162267 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.106 162267 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.106 162267 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.106 162267 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.106 162267 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.107 162267 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.107 162267 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.107 162267 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.107 162267 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.107 162267 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.107 162267 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.107 162267 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.107 162267 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.107 162267 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.108 162267 DEBUG oslo_service.service [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.108 162267 DEBUG oslo_service.service [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.108 162267 DEBUG oslo_service.service [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.108 162267 DEBUG oslo_service.service [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:51:56 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:51:56.108 162267 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Dec 06 09:51:56 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:51:56 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcffc00b810 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:51:56 compute-0 ceph-mon[74327]: pgmap v313: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:51:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:51:57.021Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 06 09:51:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:51:57 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd028002090 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:51:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:51:57 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd030004f80 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:51:57 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:51:57 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:51:57 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:51:57.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:51:57 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:51:57 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:51:57 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:51:57.876 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:51:58 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v314: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 09:51:58 compute-0 sshd-session[162394]: Accepted publickey for zuul from 192.168.122.30 port 36678 ssh2: ECDSA SHA256:r1j7aLsKAM+XxDNbzEU5vWGpGNCOaIBwc7FZdATPttA
Dec 06 09:51:58 compute-0 systemd-logind[795]: New session 53 of user zuul.
Dec 06 09:51:58 compute-0 systemd[1]: Started Session 53 of User zuul.
Dec 06 09:51:58 compute-0 sshd-session[162394]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 06 09:51:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:51:58 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd018004e70 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:51:59 compute-0 ceph-mon[74327]: pgmap v314: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 09:51:59 compute-0 python3.9[162547]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 06 09:51:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:51:59 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcffc00b810 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:51:59 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 09:51:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:51:59 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd028002f50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:51:59 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:51:59 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 09:51:59 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:51:59.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 09:51:59 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:51:59 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 09:51:59 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:51:59.878 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 09:52:00 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v315: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:52:00 compute-0 sudo[162703]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sgswwypjbfwmkbxyjggmjwctepzqrybh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014719.968349-62-103432440192438/AnsiballZ_command.py'
Dec 06 09:52:00 compute-0 kernel: ganesha.nfsd[149935]: segfault at 50 ip 00007fd0e208032e sp 00007fd095ffa210 error 4 in libntirpc.so.5.8[7fd0e2065000+2c000] likely on CPU 1 (core 0, socket 1)
Dec 06 09:52:00 compute-0 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Dec 06 09:52:00 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[126299]: 06/12/2025 09:52:00 : epoch 6933fbba : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd028002f50 fd 39 proxy ignored for local
Dec 06 09:52:00 compute-0 sudo[162703]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:52:00 compute-0 radosgw[94308]: INFO: RGWReshardLock::lock found lock on reshard.0000000010 to be held by another RGW process; skipping for now
Dec 06 09:52:00 compute-0 radosgw[94308]: INFO: RGWReshardLock::lock found lock on reshard.0000000012 to be held by another RGW process; skipping for now
Dec 06 09:52:00 compute-0 systemd[1]: Started Process Core Dump (PID 162705/UID 0).
Dec 06 09:52:00 compute-0 radosgw[94308]: INFO: RGWReshardLock::lock found lock on reshard.0000000015 to be held by another RGW process; skipping for now
Dec 06 09:52:00 compute-0 python3.9[162706]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --filter name=^nova_virtlogd$ --format \{\{.Names\}\} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 09:52:00 compute-0 sudo[162703]: pam_unix(sudo:session): session closed for user root
Dec 06 09:52:00 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:09:52:00] "GET /metrics HTTP/1.1" 200 48257 "" "Prometheus/2.51.0"
Dec 06 09:52:00 compute-0 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:09:52:00] "GET /metrics HTTP/1.1" 200 48257 "" "Prometheus/2.51.0"
Dec 06 09:52:01 compute-0 ceph-mon[74327]: pgmap v315: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:52:01 compute-0 sudo[162873]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kmfxiuinylixphdnxatwfncauhhztlrt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014721.1330016-95-241311308561520/AnsiballZ_systemd_service.py'
Dec 06 09:52:01 compute-0 sudo[162873]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:52:01 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:52:01 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:52:01 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:52:01.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:52:01 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:52:01 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:52:01 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:52:01.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:52:02 compute-0 python3.9[162875]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 06 09:52:02 compute-0 systemd[1]: Reloading.
Dec 06 09:52:02 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v316: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:52:02 compute-0 systemd-rc-local-generator[162902]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 09:52:02 compute-0 systemd-sysv-generator[162905]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 09:52:02 compute-0 sudo[162873]: pam_unix(sudo:session): session closed for user root
Dec 06 09:52:03 compute-0 systemd-coredump[162707]: Process 126373 (ganesha.nfsd) of user 0 dumped core.
                                                    
                                                    Stack trace of thread 63:
                                                    #0  0x00007fd0e208032e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)
                                                    ELF object binary architecture: AMD x86-64
Dec 06 09:52:03 compute-0 ceph-mon[74327]: pgmap v316: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:52:03 compute-0 systemd[1]: systemd-coredump@3-162705-0.service: Deactivated successfully.
Dec 06 09:52:03 compute-0 systemd[1]: systemd-coredump@3-162705-0.service: Consumed 1.254s CPU time.
Dec 06 09:52:03 compute-0 podman[162994]: 2025-12-06 09:52:03.344113086 +0000 UTC m=+0.031574012 container died 0680872db78f4539de9816e63fe0e26e1ab0f0389d421d932e29ec3f87531d86 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Dec 06 09:52:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-9dffa55875199467cd3d27a66b7cd46e7988a0483df9beb3d1dd985935856704-merged.mount: Deactivated successfully.
Dec 06 09:52:03 compute-0 podman[162994]: 2025-12-06 09:52:03.399751394 +0000 UTC m=+0.087212300 container remove 0680872db78f4539de9816e63fe0e26e1ab0f0389d421d932e29ec3f87531d86 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Dec 06 09:52:03 compute-0 systemd[1]: ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258@nfs.cephfs.2.0.compute-0.dfwxck.service: Main process exited, code=exited, status=139/n/a
Dec 06 09:52:03 compute-0 systemd[1]: ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258@nfs.cephfs.2.0.compute-0.dfwxck.service: Failed with result 'exit-code'.
Dec 06 09:52:03 compute-0 systemd[1]: ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258@nfs.cephfs.2.0.compute-0.dfwxck.service: Consumed 2.067s CPU time.
Dec 06 09:52:03 compute-0 python3.9[163110]: ansible-ansible.builtin.service_facts Invoked
Dec 06 09:52:03 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:52:03 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:52:03 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:52:03.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:52:03 compute-0 network[163127]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec 06 09:52:03 compute-0 network[163128]: 'network-scripts' will be removed from distribution in near future.
Dec 06 09:52:03 compute-0 network[163129]: It is advised to switch to 'NetworkManager' instead for network management.
Dec 06 09:52:03 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:52:03 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 09:52:03 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:52:03.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 09:52:04 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v317: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 54 KiB/s rd, 0 B/s wr, 89 op/s
Dec 06 09:52:04 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 09:52:05 compute-0 ceph-mon[74327]: pgmap v317: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 54 KiB/s rd, 0 B/s wr, 89 op/s
Dec 06 09:52:05 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:52:05 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 09:52:05 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:52:05.809 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 09:52:05 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:52:05 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 09:52:05 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:52:05.884 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 09:52:06 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v318: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 54 KiB/s rd, 0 B/s wr, 89 op/s
Dec 06 09:52:07 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:52:07.021Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec 06 09:52:07 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:52:07.022Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec 06 09:52:07 compute-0 sudo[163268]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 09:52:07 compute-0 sudo[163268]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:52:07 compute-0 sudo[163268]: pam_unix(sudo:session): session closed for user root
Dec 06 09:52:07 compute-0 ceph-mon[74327]: pgmap v318: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 54 KiB/s rd, 0 B/s wr, 89 op/s
Dec 06 09:52:07 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:52:07 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:52:07 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:52:07.813 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:52:07 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:52:07 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:52:07 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:52:07.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:52:08 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v319: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 96 KiB/s rd, 0 B/s wr, 159 op/s
Dec 06 09:52:08 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-haproxy-nfs-cephfs-compute-0-fzuvue[96127]: [WARNING] 339/095208 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Dec 06 09:52:08 compute-0 ceph-mon[74327]: pgmap v319: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 96 KiB/s rd, 0 B/s wr, 159 op/s
Dec 06 09:52:08 compute-0 sudo[163418]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dzmqlboziaucsmukacwynoggtuhertzi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014728.478935-152-47467970958870/AnsiballZ_systemd_service.py'
Dec 06 09:52:08 compute-0 sudo[163418]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:52:08 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 06 09:52:08 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 09:52:09 compute-0 python3.9[163420]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_libvirt.target state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 06 09:52:09 compute-0 sudo[163418]: pam_unix(sudo:session): session closed for user root
Dec 06 09:52:09 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 09:52:09 compute-0 sudo[163573]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hzatevhacolglfvinpcynuwklrffcmqq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014729.238692-152-232127724618739/AnsiballZ_systemd_service.py'
Dec 06 09:52:09 compute-0 sudo[163573]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:52:09 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 09:52:09 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:52:09 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 09:52:09 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:52:09.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 09:52:09 compute-0 python3.9[163575]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtlogd_wrapper.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 06 09:52:09 compute-0 sudo[163573]: pam_unix(sudo:session): session closed for user root
Dec 06 09:52:09 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:52:09 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 09:52:09 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:52:09.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 09:52:10 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v320: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 96 KiB/s rd, 0 B/s wr, 159 op/s
Dec 06 09:52:10 compute-0 sudo[163726]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xnotzscnscoqkyksxtigqgflxoqgnzpe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014729.9947813-152-16926356886729/AnsiballZ_systemd_service.py'
Dec 06 09:52:10 compute-0 sudo[163726]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:52:10 compute-0 python3.9[163728]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtnodedevd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 06 09:52:10 compute-0 sudo[163726]: pam_unix(sudo:session): session closed for user root
Dec 06 09:52:10 compute-0 ceph-mon[74327]: pgmap v320: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 96 KiB/s rd, 0 B/s wr, 159 op/s
Dec 06 09:52:10 compute-0 sudo[163730]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 09:52:10 compute-0 sudo[163730]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:52:10 compute-0 sudo[163730]: pam_unix(sudo:session): session closed for user root
Dec 06 09:52:10 compute-0 sudo[163779]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Dec 06 09:52:10 compute-0 sudo[163779]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:52:10 compute-0 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:09:52:10] "GET /metrics HTTP/1.1" 200 48267 "" "Prometheus/2.51.0"
Dec 06 09:52:10 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:09:52:10] "GET /metrics HTTP/1.1" 200 48267 "" "Prometheus/2.51.0"
Dec 06 09:52:11 compute-0 sudo[163946]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zjnefygefcvojbxcvkrfnhjbitpkwexk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014730.7982752-152-94937584478634/AnsiballZ_systemd_service.py'
Dec 06 09:52:11 compute-0 sudo[163946]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:52:11 compute-0 sudo[163779]: pam_unix(sudo:session): session closed for user root
Dec 06 09:52:11 compute-0 python3.9[163949]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtproxyd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 06 09:52:11 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 06 09:52:11 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 09:52:11 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 06 09:52:11 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 09:52:11 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 06 09:52:11 compute-0 sudo[163946]: pam_unix(sudo:session): session closed for user root
Dec 06 09:52:11 compute-0 sudo[164117]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cmdsyxclvmlkakaoiscxhicwpnucshcc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014731.5338776-152-111953563685769/AnsiballZ_systemd_service.py'
Dec 06 09:52:11 compute-0 sudo[164117]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:52:11 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:52:11 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:52:11 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:52:11.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:52:11 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:52:11 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 09:52:11 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:52:11.891 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 09:52:12 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:52:12 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v321: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 96 KiB/s rd, 0 B/s wr, 159 op/s
Dec 06 09:52:12 compute-0 python3.9[164119]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtqemud.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 06 09:52:12 compute-0 sudo[164117]: pam_unix(sudo:session): session closed for user root
Dec 06 09:52:12 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Dec 06 09:52:12 compute-0 sudo[164270]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lnnprpdexvfcahdnfjgunlgkkvexuqqh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014732.5040596-152-199181720135004/AnsiballZ_systemd_service.py'
Dec 06 09:52:12 compute-0 sudo[164270]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:52:13 compute-0 python3.9[164272]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtsecretd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 06 09:52:13 compute-0 sudo[164270]: pam_unix(sudo:session): session closed for user root
Dec 06 09:52:13 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 09:52:13 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 09:52:13 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:52:13 compute-0 systemd[1]: ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258@nfs.cephfs.2.0.compute-0.dfwxck.service: Scheduled restart job, restart counter is at 4.
Dec 06 09:52:13 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 06 09:52:13 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 09:52:13 compute-0 systemd[1]: Stopped Ceph nfs.cephfs.2.0.compute-0.dfwxck for 5ecd3f74-dade-5fc4-92ce-8950ae424258.
Dec 06 09:52:13 compute-0 systemd[1]: ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258@nfs.cephfs.2.0.compute-0.dfwxck.service: Consumed 2.067s CPU time.
Dec 06 09:52:13 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 06 09:52:13 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 09:52:13 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 06 09:52:13 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 09:52:13 compute-0 systemd[1]: Starting Ceph nfs.cephfs.2.0.compute-0.dfwxck for 5ecd3f74-dade-5fc4-92ce-8950ae424258...
Dec 06 09:52:13 compute-0 sudo[164425]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rfewkdkngjjweyrwwxveftconxoqwnwm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014733.3276498-152-30917339827942/AnsiballZ_systemd_service.py'
Dec 06 09:52:13 compute-0 sudo[164425]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:52:13 compute-0 sudo[164427]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 09:52:13 compute-0 sudo[164427]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:52:13 compute-0 sudo[164427]: pam_unix(sudo:session): session closed for user root
Dec 06 09:52:13 compute-0 sudo[164462]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 5ecd3f74-dade-5fc4-92ce-8950ae424258 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Dec 06 09:52:13 compute-0 sudo[164462]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:52:13 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:52:13 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:52:13 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:52:13.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:52:13 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:52:13 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:52:13 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:52:13.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:52:13 compute-0 podman[164524]: 2025-12-06 09:52:13.85077911 +0000 UTC m=+0.026349540 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 06 09:52:13 compute-0 python3.9[164441]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtstoraged.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 06 09:52:13 compute-0 sudo[164425]: pam_unix(sudo:session): session closed for user root
Dec 06 09:52:14 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v322: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 96 KiB/s rd, 0 B/s wr, 159 op/s
Dec 06 09:52:14 compute-0 podman[164524]: 2025-12-06 09:52:14.210867593 +0000 UTC m=+0.386437973 container create 93232cf7a3aa14b498eb360a2c2c9b048fb224223433b0172f5d74ecc111a449 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1)
Dec 06 09:52:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7626a1b0cf860c2c5b35de77ed4f479b9d5cb19d90798b1515dcfcd9ae27d8ae/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Dec 06 09:52:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7626a1b0cf860c2c5b35de77ed4f479b9d5cb19d90798b1515dcfcd9ae27d8ae/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 09:52:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7626a1b0cf860c2c5b35de77ed4f479b9d5cb19d90798b1515dcfcd9ae27d8ae/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Dec 06 09:52:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7626a1b0cf860c2c5b35de77ed4f479b9d5cb19d90798b1515dcfcd9ae27d8ae/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.2.0.compute-0.dfwxck-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Dec 06 09:52:14 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 09:52:14 compute-0 podman[164524]: 2025-12-06 09:52:14.47021956 +0000 UTC m=+0.645790030 container init 93232cf7a3aa14b498eb360a2c2c9b048fb224223433b0172f5d74ecc111a449 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 06 09:52:14 compute-0 podman[164524]: 2025-12-06 09:52:14.4813444 +0000 UTC m=+0.656914820 container start 93232cf7a3aa14b498eb360a2c2c9b048fb224223433b0172f5d74ecc111a449 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, OSD_FLAVOR=default)
Dec 06 09:52:14 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[164577]: 06/12/2025 09:52:14 : epoch 6933fcce : compute-0 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Dec 06 09:52:14 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[164577]: 06/12/2025 09:52:14 : epoch 6933fcce : compute-0 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Dec 06 09:52:14 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:52:14 compute-0 ceph-mon[74327]: pgmap v321: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 96 KiB/s rd, 0 B/s wr, 159 op/s
Dec 06 09:52:14 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:52:14 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 09:52:14 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 09:52:14 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 09:52:14 compute-0 bash[164524]: 93232cf7a3aa14b498eb360a2c2c9b048fb224223433b0172f5d74ecc111a449
Dec 06 09:52:14 compute-0 systemd[1]: Started Ceph nfs.cephfs.2.0.compute-0.dfwxck for 5ecd3f74-dade-5fc4-92ce-8950ae424258.
Dec 06 09:52:14 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[164577]: 06/12/2025 09:52:14 : epoch 6933fcce : compute-0 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Dec 06 09:52:14 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[164577]: 06/12/2025 09:52:14 : epoch 6933fcce : compute-0 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Dec 06 09:52:14 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[164577]: 06/12/2025 09:52:14 : epoch 6933fcce : compute-0 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Dec 06 09:52:14 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[164577]: 06/12/2025 09:52:14 : epoch 6933fcce : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Dec 06 09:52:14 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[164577]: 06/12/2025 09:52:14 : epoch 6933fcce : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Dec 06 09:52:14 compute-0 podman[164580]: 2025-12-06 09:52:14.588815916 +0000 UTC m=+0.220997056 container health_status ed08b04f63d3695f0aa2f04fcc706897af4aa24f286fa3cd10a4323ee2c868ab (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, managed_by=edpm_ansible)
Dec 06 09:52:14 compute-0 podman[164673]: 2025-12-06 09:52:14.824027842 +0000 UTC m=+0.071149837 container create b58337223544e32e7c7becaa73eb2b3eba657e36bbe025ed0d9be5cfe26b935e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_ptolemy, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Dec 06 09:52:14 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[164577]: 06/12/2025 09:52:14 : epoch 6933fcce : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 06 09:52:14 compute-0 podman[164673]: 2025-12-06 09:52:14.775096995 +0000 UTC m=+0.022219000 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 06 09:52:14 compute-0 systemd[1]: Started libpod-conmon-b58337223544e32e7c7becaa73eb2b3eba657e36bbe025ed0d9be5cfe26b935e.scope.
Dec 06 09:52:14 compute-0 systemd[1]: Started libcrun container.
Dec 06 09:52:15 compute-0 podman[164673]: 2025-12-06 09:52:15.0362272 +0000 UTC m=+0.283349185 container init b58337223544e32e7c7becaa73eb2b3eba657e36bbe025ed0d9be5cfe26b935e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_ptolemy, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 09:52:15 compute-0 podman[164673]: 2025-12-06 09:52:15.045257323 +0000 UTC m=+0.292379288 container start b58337223544e32e7c7becaa73eb2b3eba657e36bbe025ed0d9be5cfe26b935e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_ptolemy, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 09:52:15 compute-0 infallible_ptolemy[164691]: 167 167
Dec 06 09:52:15 compute-0 systemd[1]: libpod-b58337223544e32e7c7becaa73eb2b3eba657e36bbe025ed0d9be5cfe26b935e.scope: Deactivated successfully.
Dec 06 09:52:15 compute-0 podman[164673]: 2025-12-06 09:52:15.054447941 +0000 UTC m=+0.301569916 container attach b58337223544e32e7c7becaa73eb2b3eba657e36bbe025ed0d9be5cfe26b935e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_ptolemy, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 09:52:15 compute-0 podman[164673]: 2025-12-06 09:52:15.055031347 +0000 UTC m=+0.302153322 container died b58337223544e32e7c7becaa73eb2b3eba657e36bbe025ed0d9be5cfe26b935e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_ptolemy, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec 06 09:52:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-ea82101dbf0191c4d291058006dbbe72803bfe89c7263f42a7abf8394f88b308-merged.mount: Deactivated successfully.
Dec 06 09:52:15 compute-0 podman[164673]: 2025-12-06 09:52:15.138140906 +0000 UTC m=+0.385262861 container remove b58337223544e32e7c7becaa73eb2b3eba657e36bbe025ed0d9be5cfe26b935e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_ptolemy, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 09:52:15 compute-0 systemd[1]: libpod-conmon-b58337223544e32e7c7becaa73eb2b3eba657e36bbe025ed0d9be5cfe26b935e.scope: Deactivated successfully.
Dec 06 09:52:15 compute-0 podman[164715]: 2025-12-06 09:52:15.278085587 +0000 UTC m=+0.023439043 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 06 09:52:15 compute-0 podman[164715]: 2025-12-06 09:52:15.380561218 +0000 UTC m=+0.125914654 container create 0780b813b1c400d4680915e292459a53f515aee6980118859f6f98b4ee0572bd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_dhawan, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 09:52:15 compute-0 systemd[1]: Started libpod-conmon-0780b813b1c400d4680915e292459a53f515aee6980118859f6f98b4ee0572bd.scope.
Dec 06 09:52:15 compute-0 systemd[1]: Started libcrun container.
Dec 06 09:52:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/10171bf79f20345fc2b7f80915a2f18319e0c7619122fa17180caa65b3dc4c50/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 09:52:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/10171bf79f20345fc2b7f80915a2f18319e0c7619122fa17180caa65b3dc4c50/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 09:52:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/10171bf79f20345fc2b7f80915a2f18319e0c7619122fa17180caa65b3dc4c50/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 09:52:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/10171bf79f20345fc2b7f80915a2f18319e0c7619122fa17180caa65b3dc4c50/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 09:52:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/10171bf79f20345fc2b7f80915a2f18319e0c7619122fa17180caa65b3dc4c50/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 06 09:52:15 compute-0 ceph-mon[74327]: pgmap v322: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 96 KiB/s rd, 0 B/s wr, 159 op/s
Dec 06 09:52:15 compute-0 podman[164715]: 2025-12-06 09:52:15.69571385 +0000 UTC m=+0.441067336 container init 0780b813b1c400d4680915e292459a53f515aee6980118859f6f98b4ee0572bd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_dhawan, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 09:52:15 compute-0 podman[164715]: 2025-12-06 09:52:15.703937602 +0000 UTC m=+0.449291038 container start 0780b813b1c400d4680915e292459a53f515aee6980118859f6f98b4ee0572bd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_dhawan, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Dec 06 09:52:15 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:52:15 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:52:15 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:52:15.825 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:52:15 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:52:15 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 09:52:15 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:52:15.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 09:52:16 compute-0 podman[164715]: 2025-12-06 09:52:16.009227287 +0000 UTC m=+0.754580773 container attach 0780b813b1c400d4680915e292459a53f515aee6980118859f6f98b4ee0572bd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_dhawan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Dec 06 09:52:16 compute-0 cool_dhawan[164732]: --> passed data devices: 0 physical, 1 LVM
Dec 06 09:52:16 compute-0 cool_dhawan[164732]: --> All data devices are unavailable
Dec 06 09:52:16 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v323: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 42 KiB/s rd, 0 B/s wr, 69 op/s
Dec 06 09:52:16 compute-0 systemd[1]: libpod-0780b813b1c400d4680915e292459a53f515aee6980118859f6f98b4ee0572bd.scope: Deactivated successfully.
Dec 06 09:52:16 compute-0 podman[164715]: 2025-12-06 09:52:16.057697482 +0000 UTC m=+0.803050918 container died 0780b813b1c400d4680915e292459a53f515aee6980118859f6f98b4ee0572bd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_dhawan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 09:52:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-10171bf79f20345fc2b7f80915a2f18319e0c7619122fa17180caa65b3dc4c50-merged.mount: Deactivated successfully.
Dec 06 09:52:16 compute-0 sudo[164885]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oviybflqgwgybewnnqxubfcyssuveosm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014736.0902443-308-237019699643239/AnsiballZ_file.py'
Dec 06 09:52:16 compute-0 sudo[164885]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:52:16 compute-0 python3.9[164887]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:52:16 compute-0 sudo[164885]: pam_unix(sudo:session): session closed for user root
Dec 06 09:52:17 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:52:17.023Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec 06 09:52:17 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:52:17.024Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec 06 09:52:17 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:52:17.025Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec 06 09:52:17 compute-0 sudo[165038]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uiaovwtxhmyahoshmnubryuebahtmvcp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014736.8787422-308-55110785377991/AnsiballZ_file.py'
Dec 06 09:52:17 compute-0 sudo[165038]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:52:17 compute-0 python3.9[165040]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:52:17 compute-0 sudo[165038]: pam_unix(sudo:session): session closed for user root
Dec 06 09:52:17 compute-0 ceph-mon[74327]: pgmap v323: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 42 KiB/s rd, 0 B/s wr, 69 op/s
Dec 06 09:52:17 compute-0 sudo[165191]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wkflsuuqzmhvlgvtxkrnpvgwgzfxoxqo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014737.5046465-308-36193839083957/AnsiballZ_file.py'
Dec 06 09:52:17 compute-0 sudo[165191]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:52:17 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:52:17 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 09:52:17 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:52:17.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 09:52:17 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:52:17 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:52:17 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:52:17.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:52:17 compute-0 python3.9[165193]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:52:17 compute-0 sudo[165191]: pam_unix(sudo:session): session closed for user root
Dec 06 09:52:18 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v324: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 42 KiB/s rd, 85 B/s wr, 70 op/s
Dec 06 09:52:18 compute-0 podman[164715]: 2025-12-06 09:52:18.130138451 +0000 UTC m=+2.875491897 container remove 0780b813b1c400d4680915e292459a53f515aee6980118859f6f98b4ee0572bd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_dhawan, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.license=GPLv2)
Dec 06 09:52:18 compute-0 systemd[1]: libpod-conmon-0780b813b1c400d4680915e292459a53f515aee6980118859f6f98b4ee0572bd.scope: Deactivated successfully.
Dec 06 09:52:18 compute-0 sudo[164462]: pam_unix(sudo:session): session closed for user root
Dec 06 09:52:18 compute-0 sudo[165270]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 09:52:18 compute-0 sudo[165270]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:52:18 compute-0 sudo[165270]: pam_unix(sudo:session): session closed for user root
Dec 06 09:52:18 compute-0 sudo[165295]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 5ecd3f74-dade-5fc4-92ce-8950ae424258 -- lvm list --format json
Dec 06 09:52:18 compute-0 sudo[165295]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:52:18 compute-0 sudo[165393]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lrszpzbahqcnsrwrabhczzzerjyjyyxq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014738.1111283-308-83377250328833/AnsiballZ_file.py'
Dec 06 09:52:18 compute-0 sudo[165393]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:52:18 compute-0 python3.9[165395]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:52:18 compute-0 sudo[165393]: pam_unix(sudo:session): session closed for user root
Dec 06 09:52:18 compute-0 podman[165439]: 2025-12-06 09:52:18.630227835 +0000 UTC m=+0.022048516 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 06 09:52:19 compute-0 sudo[165603]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-enmffhglvxvvrghxeybrcxbjridvqtwa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014738.7653856-308-82995027372887/AnsiballZ_file.py'
Dec 06 09:52:19 compute-0 sudo[165603]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:52:19 compute-0 python3.9[165605]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:52:19 compute-0 sudo[165603]: pam_unix(sudo:session): session closed for user root
Dec 06 09:52:19 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:52:19 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.002000054s ======
Dec 06 09:52:19 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:52:19.831 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000054s
Dec 06 09:52:19 compute-0 sudo[165756]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vcrqnarqfutbayfesihirobcoipjyudw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014739.5105731-308-45753382865561/AnsiballZ_file.py'
Dec 06 09:52:19 compute-0 sudo[165756]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:52:19 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 09:52:19 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:52:19 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:52:19 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:52:19.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:52:19 compute-0 podman[165439]: 2025-12-06 09:52:19.91161898 +0000 UTC m=+1.303439631 container create 4f2445c460b9871b26ac7edcbfcc621b37a9d0ba77dd79d8c784c190debfe846 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_leakey, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid)
Dec 06 09:52:19 compute-0 ceph-mon[74327]: pgmap v324: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 42 KiB/s rd, 85 B/s wr, 70 op/s
Dec 06 09:52:19 compute-0 systemd[1]: Started libpod-conmon-4f2445c460b9871b26ac7edcbfcc621b37a9d0ba77dd79d8c784c190debfe846.scope.
Dec 06 09:52:19 compute-0 systemd[1]: Started libcrun container.
Dec 06 09:52:20 compute-0 python3.9[165758]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:52:20 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v325: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Dec 06 09:52:20 compute-0 sudo[165756]: pam_unix(sudo:session): session closed for user root
Dec 06 09:52:20 compute-0 podman[165439]: 2025-12-06 09:52:20.380180954 +0000 UTC m=+1.772001625 container init 4f2445c460b9871b26ac7edcbfcc621b37a9d0ba77dd79d8c784c190debfe846 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_leakey, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 06 09:52:20 compute-0 podman[165439]: 2025-12-06 09:52:20.386779092 +0000 UTC m=+1.778599743 container start 4f2445c460b9871b26ac7edcbfcc621b37a9d0ba77dd79d8c784c190debfe846 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_leakey, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Dec 06 09:52:20 compute-0 dreamy_leakey[165761]: 167 167
Dec 06 09:52:20 compute-0 systemd[1]: libpod-4f2445c460b9871b26ac7edcbfcc621b37a9d0ba77dd79d8c784c190debfe846.scope: Deactivated successfully.
Dec 06 09:52:20 compute-0 conmon[165761]: conmon 4f2445c460b9871b26ac <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-4f2445c460b9871b26ac7edcbfcc621b37a9d0ba77dd79d8c784c190debfe846.scope/container/memory.events
Dec 06 09:52:20 compute-0 sudo[165916]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-undfhnrdumyzxszmtiyrlvklrtxfehnm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014740.1630175-308-9689175166325/AnsiballZ_file.py'
Dec 06 09:52:20 compute-0 sudo[165916]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:52:20 compute-0 python3.9[165923]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:52:20 compute-0 podman[165439]: 2025-12-06 09:52:20.604243391 +0000 UTC m=+1.996064062 container attach 4f2445c460b9871b26ac7edcbfcc621b37a9d0ba77dd79d8c784c190debfe846 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_leakey, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 09:52:20 compute-0 podman[165439]: 2025-12-06 09:52:20.606200384 +0000 UTC m=+1.998021055 container died 4f2445c460b9871b26ac7edcbfcc621b37a9d0ba77dd79d8c784c190debfe846 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_leakey, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325)
Dec 06 09:52:20 compute-0 sudo[165916]: pam_unix(sudo:session): session closed for user root
Dec 06 09:52:20 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[164577]: 06/12/2025 09:52:20 : epoch 6933fcce : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 06 09:52:20 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[164577]: 06/12/2025 09:52:20 : epoch 6933fcce : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 06 09:52:20 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:09:52:20] "GET /metrics HTTP/1.1" 200 48267 "" "Prometheus/2.51.0"
Dec 06 09:52:20 compute-0 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:09:52:20] "GET /metrics HTTP/1.1" 200 48267 "" "Prometheus/2.51.0"
Dec 06 09:52:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-8915e4e32f1597136a3b017761d1d8df0d6593be96dc5bc5eab2861f78a7a45a-merged.mount: Deactivated successfully.
Dec 06 09:52:21 compute-0 ceph-mon[74327]: pgmap v325: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Dec 06 09:52:21 compute-0 podman[165439]: 2025-12-06 09:52:21.617167152 +0000 UTC m=+3.008987813 container remove 4f2445c460b9871b26ac7edcbfcc621b37a9d0ba77dd79d8c784c190debfe846 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_leakey, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0)
Dec 06 09:52:21 compute-0 systemd[1]: libpod-conmon-4f2445c460b9871b26ac7edcbfcc621b37a9d0ba77dd79d8c784c190debfe846.scope: Deactivated successfully.
Dec 06 09:52:21 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:52:21 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:52:21 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:52:21.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:52:21 compute-0 podman[165964]: 2025-12-06 09:52:21.854916538 +0000 UTC m=+0.107179978 container create 8bea156b1cb22a1190b44dd87c5d027952b52b545a2f283639ff6baa292b581d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_chaum, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, ceph=True)
Dec 06 09:52:21 compute-0 podman[165964]: 2025-12-06 09:52:21.771995435 +0000 UTC m=+0.024258905 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 06 09:52:21 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:52:21 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:52:21 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:52:21.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:52:21 compute-0 systemd[1]: Started libpod-conmon-8bea156b1cb22a1190b44dd87c5d027952b52b545a2f283639ff6baa292b581d.scope.
Dec 06 09:52:21 compute-0 systemd[1]: Started libcrun container.
Dec 06 09:52:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f7908a89d7cc6ea2a3a3a83374f860c7f8781621f4392b6079061d788db3c041/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 09:52:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f7908a89d7cc6ea2a3a3a83374f860c7f8781621f4392b6079061d788db3c041/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 09:52:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f7908a89d7cc6ea2a3a3a83374f860c7f8781621f4392b6079061d788db3c041/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 09:52:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f7908a89d7cc6ea2a3a3a83374f860c7f8781621f4392b6079061d788db3c041/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 09:52:22 compute-0 podman[165964]: 2025-12-06 09:52:22.014422006 +0000 UTC m=+0.266685486 container init 8bea156b1cb22a1190b44dd87c5d027952b52b545a2f283639ff6baa292b581d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_chaum, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 09:52:22 compute-0 podman[165964]: 2025-12-06 09:52:22.022250497 +0000 UTC m=+0.274513937 container start 8bea156b1cb22a1190b44dd87c5d027952b52b545a2f283639ff6baa292b581d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_chaum, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1)
Dec 06 09:52:22 compute-0 podman[165964]: 2025-12-06 09:52:22.025752251 +0000 UTC m=+0.278015741 container attach 8bea156b1cb22a1190b44dd87c5d027952b52b545a2f283639ff6baa292b581d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_chaum, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec 06 09:52:22 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v326: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Dec 06 09:52:22 compute-0 sudo[166113]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nihtdjuauiykinenujcnlxygdlszhrqa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014741.9049926-458-175803095675634/AnsiballZ_file.py'
Dec 06 09:52:22 compute-0 sudo[166113]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:52:22 compute-0 jolly_chaum[166008]: {
Dec 06 09:52:22 compute-0 jolly_chaum[166008]:     "1": [
Dec 06 09:52:22 compute-0 jolly_chaum[166008]:         {
Dec 06 09:52:22 compute-0 jolly_chaum[166008]:             "devices": [
Dec 06 09:52:22 compute-0 jolly_chaum[166008]:                 "/dev/loop3"
Dec 06 09:52:22 compute-0 jolly_chaum[166008]:             ],
Dec 06 09:52:22 compute-0 jolly_chaum[166008]:             "lv_name": "ceph_lv0",
Dec 06 09:52:22 compute-0 jolly_chaum[166008]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 09:52:22 compute-0 jolly_chaum[166008]:             "lv_size": "21470642176",
Dec 06 09:52:22 compute-0 jolly_chaum[166008]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=a55pcS-v3Vt-CJ4W-fqCV-Baze-9VlY-jG9prS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=5ecd3f74-dade-5fc4-92ce-8950ae424258,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=7899c4d8-edb4-4836-b838-c4aa702ad7af,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 06 09:52:22 compute-0 jolly_chaum[166008]:             "lv_uuid": "a55pcS-v3Vt-CJ4W-fqCV-Baze-9VlY-jG9prS",
Dec 06 09:52:22 compute-0 jolly_chaum[166008]:             "name": "ceph_lv0",
Dec 06 09:52:22 compute-0 jolly_chaum[166008]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 09:52:22 compute-0 jolly_chaum[166008]:             "tags": {
Dec 06 09:52:22 compute-0 jolly_chaum[166008]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 06 09:52:22 compute-0 jolly_chaum[166008]:                 "ceph.block_uuid": "a55pcS-v3Vt-CJ4W-fqCV-Baze-9VlY-jG9prS",
Dec 06 09:52:22 compute-0 jolly_chaum[166008]:                 "ceph.cephx_lockbox_secret": "",
Dec 06 09:52:22 compute-0 jolly_chaum[166008]:                 "ceph.cluster_fsid": "5ecd3f74-dade-5fc4-92ce-8950ae424258",
Dec 06 09:52:22 compute-0 jolly_chaum[166008]:                 "ceph.cluster_name": "ceph",
Dec 06 09:52:22 compute-0 jolly_chaum[166008]:                 "ceph.crush_device_class": "",
Dec 06 09:52:22 compute-0 jolly_chaum[166008]:                 "ceph.encrypted": "0",
Dec 06 09:52:22 compute-0 jolly_chaum[166008]:                 "ceph.osd_fsid": "7899c4d8-edb4-4836-b838-c4aa702ad7af",
Dec 06 09:52:22 compute-0 jolly_chaum[166008]:                 "ceph.osd_id": "1",
Dec 06 09:52:22 compute-0 jolly_chaum[166008]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 06 09:52:22 compute-0 jolly_chaum[166008]:                 "ceph.type": "block",
Dec 06 09:52:22 compute-0 jolly_chaum[166008]:                 "ceph.vdo": "0",
Dec 06 09:52:22 compute-0 jolly_chaum[166008]:                 "ceph.with_tpm": "0"
Dec 06 09:52:22 compute-0 jolly_chaum[166008]:             },
Dec 06 09:52:22 compute-0 jolly_chaum[166008]:             "type": "block",
Dec 06 09:52:22 compute-0 jolly_chaum[166008]:             "vg_name": "ceph_vg0"
Dec 06 09:52:22 compute-0 jolly_chaum[166008]:         }
Dec 06 09:52:22 compute-0 jolly_chaum[166008]:     ]
Dec 06 09:52:22 compute-0 jolly_chaum[166008]: }
Dec 06 09:52:22 compute-0 systemd[1]: libpod-8bea156b1cb22a1190b44dd87c5d027952b52b545a2f283639ff6baa292b581d.scope: Deactivated successfully.
Dec 06 09:52:22 compute-0 podman[165964]: 2025-12-06 09:52:22.33561144 +0000 UTC m=+0.587874900 container died 8bea156b1cb22a1190b44dd87c5d027952b52b545a2f283639ff6baa292b581d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_chaum, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 09:52:22 compute-0 podman[166115]: 2025-12-06 09:52:22.356591495 +0000 UTC m=+0.100464357 container health_status ec1e66d2175087ad7a28a9a6b5a784e30128df3f5e268959d009e364cd5120f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Dec 06 09:52:22 compute-0 python3.9[166117]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:52:22 compute-0 sudo[166113]: pam_unix(sudo:session): session closed for user root
Dec 06 09:52:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-f7908a89d7cc6ea2a3a3a83374f860c7f8781621f4392b6079061d788db3c041-merged.mount: Deactivated successfully.
Dec 06 09:52:22 compute-0 ceph-mon[74327]: pgmap v326: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Dec 06 09:52:22 compute-0 podman[165964]: 2025-12-06 09:52:22.626341913 +0000 UTC m=+0.878605353 container remove 8bea156b1cb22a1190b44dd87c5d027952b52b545a2f283639ff6baa292b581d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_chaum, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Dec 06 09:52:22 compute-0 sudo[165295]: pam_unix(sudo:session): session closed for user root
Dec 06 09:52:22 compute-0 systemd[1]: libpod-conmon-8bea156b1cb22a1190b44dd87c5d027952b52b545a2f283639ff6baa292b581d.scope: Deactivated successfully.
Dec 06 09:52:22 compute-0 sudo[166211]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 09:52:22 compute-0 sudo[166211]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:52:22 compute-0 sudo[166211]: pam_unix(sudo:session): session closed for user root
Dec 06 09:52:22 compute-0 sudo[166252]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 5ecd3f74-dade-5fc4-92ce-8950ae424258 -- raw list --format json
Dec 06 09:52:22 compute-0 sudo[166252]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:52:23 compute-0 sudo[166385]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ovoemhbbydoknqibgumpskeqvwdamhrt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014742.6538339-458-222872404891573/AnsiballZ_file.py'
Dec 06 09:52:23 compute-0 sudo[166385]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:52:23 compute-0 podman[166395]: 2025-12-06 09:52:23.196513545 +0000 UTC m=+0.022479836 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 06 09:52:23 compute-0 python3.9[166394]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:52:23 compute-0 sudo[166385]: pam_unix(sudo:session): session closed for user root
Dec 06 09:52:23 compute-0 podman[166395]: 2025-12-06 09:52:23.367079471 +0000 UTC m=+0.193045762 container create 1e6dac3e3534ca521bf8d87cdad22a4d4c7afeaaa1f5453d037a413ebd272f81 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_turing, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Dec 06 09:52:23 compute-0 systemd[1]: Started libpod-conmon-1e6dac3e3534ca521bf8d87cdad22a4d4c7afeaaa1f5453d037a413ebd272f81.scope.
Dec 06 09:52:23 compute-0 systemd[1]: Started libcrun container.
Dec 06 09:52:23 compute-0 podman[166395]: 2025-12-06 09:52:23.450201571 +0000 UTC m=+0.276167862 container init 1e6dac3e3534ca521bf8d87cdad22a4d4c7afeaaa1f5453d037a413ebd272f81 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_turing, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Dec 06 09:52:23 compute-0 podman[166395]: 2025-12-06 09:52:23.459727677 +0000 UTC m=+0.285693968 container start 1e6dac3e3534ca521bf8d87cdad22a4d4c7afeaaa1f5453d037a413ebd272f81 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_turing, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Dec 06 09:52:23 compute-0 podman[166395]: 2025-12-06 09:52:23.46427045 +0000 UTC m=+0.290236721 container attach 1e6dac3e3534ca521bf8d87cdad22a4d4c7afeaaa1f5453d037a413ebd272f81 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_turing, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Dec 06 09:52:23 compute-0 youthful_turing[166437]: 167 167
Dec 06 09:52:23 compute-0 systemd[1]: libpod-1e6dac3e3534ca521bf8d87cdad22a4d4c7afeaaa1f5453d037a413ebd272f81.scope: Deactivated successfully.
Dec 06 09:52:23 compute-0 podman[166395]: 2025-12-06 09:52:23.469033328 +0000 UTC m=+0.294999619 container died 1e6dac3e3534ca521bf8d87cdad22a4d4c7afeaaa1f5453d037a413ebd272f81 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_turing, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec 06 09:52:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-5a85d94bfad40f018ee5ad491b22faca1ee45a05965a4fe5e955544494945894-merged.mount: Deactivated successfully.
Dec 06 09:52:23 compute-0 podman[166395]: 2025-12-06 09:52:23.512069758 +0000 UTC m=+0.338036029 container remove 1e6dac3e3534ca521bf8d87cdad22a4d4c7afeaaa1f5453d037a413ebd272f81 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_turing, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid)
Dec 06 09:52:23 compute-0 systemd[1]: libpod-conmon-1e6dac3e3534ca521bf8d87cdad22a4d4c7afeaaa1f5453d037a413ebd272f81.scope: Deactivated successfully.
Dec 06 09:52:23 compute-0 podman[166539]: 2025-12-06 09:52:23.696933379 +0000 UTC m=+0.056462133 container create 8e049273a3c345fc909b90563301c2eed7d20a4cba545d3d002a8de7cf78b2cc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_feynman, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 09:52:23 compute-0 systemd[1]: Started libpod-conmon-8e049273a3c345fc909b90563301c2eed7d20a4cba545d3d002a8de7cf78b2cc.scope.
Dec 06 09:52:23 compute-0 sudo[166602]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rggbeouxceqtiazixpsmodmngsndcthr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014743.4724388-458-220126275371504/AnsiballZ_file.py'
Dec 06 09:52:23 compute-0 sudo[166602]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:52:23 compute-0 podman[166539]: 2025-12-06 09:52:23.675615884 +0000 UTC m=+0.035144638 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 06 09:52:23 compute-0 systemd[1]: Started libcrun container.
Dec 06 09:52:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ea94e619fde317ba34fb59ec9485824daeeb4cafa256324773377d2c8366fa82/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 09:52:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ea94e619fde317ba34fb59ec9485824daeeb4cafa256324773377d2c8366fa82/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 09:52:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ea94e619fde317ba34fb59ec9485824daeeb4cafa256324773377d2c8366fa82/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 09:52:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ea94e619fde317ba34fb59ec9485824daeeb4cafa256324773377d2c8366fa82/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 09:52:23 compute-0 podman[166539]: 2025-12-06 09:52:23.796290115 +0000 UTC m=+0.155818869 container init 8e049273a3c345fc909b90563301c2eed7d20a4cba545d3d002a8de7cf78b2cc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_feynman, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec 06 09:52:23 compute-0 podman[166539]: 2025-12-06 09:52:23.804735683 +0000 UTC m=+0.164264437 container start 8e049273a3c345fc909b90563301c2eed7d20a4cba545d3d002a8de7cf78b2cc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_feynman, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 09:52:23 compute-0 ceph-mgr[74618]: [balancer INFO root] Optimize plan auto_2025-12-06_09:52:23
Dec 06 09:52:23 compute-0 ceph-mgr[74618]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 06 09:52:23 compute-0 ceph-mgr[74618]: [balancer INFO root] do_upmap
Dec 06 09:52:23 compute-0 ceph-mgr[74618]: [balancer INFO root] pools ['.mgr', 'default.rgw.control', '.nfs', 'default.rgw.log', 'vms', 'cephfs.cephfs.data', 'backups', '.rgw.root', 'volumes', 'default.rgw.meta', 'cephfs.cephfs.meta', 'images']
Dec 06 09:52:23 compute-0 ceph-mgr[74618]: [balancer INFO root] prepared 0/10 upmap changes
Dec 06 09:52:23 compute-0 podman[166539]: 2025-12-06 09:52:23.809012018 +0000 UTC m=+0.168540772 container attach 8e049273a3c345fc909b90563301c2eed7d20a4cba545d3d002a8de7cf78b2cc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_feynman, io.buildah.version=1.40.1, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Dec 06 09:52:23 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:52:23 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:52:23 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:52:23.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:52:23 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 06 09:52:23 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 09:52:23 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:52:23 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:52:23 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:52:23.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:52:23 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 09:52:23 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 09:52:23 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 09:52:23 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 09:52:23 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 09:52:23 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 09:52:23 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 09:52:24 compute-0 python3.9[166607]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:52:24 compute-0 sudo[166602]: pam_unix(sudo:session): session closed for user root
Dec 06 09:52:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] _maybe_adjust
Dec 06 09:52:24 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v327: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Dec 06 09:52:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 09:52:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 06 09:52:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 09:52:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 09:52:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 09:52:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 09:52:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 09:52:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 09:52:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 09:52:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 09:52:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 09:52:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Dec 06 09:52:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 09:52:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 09:52:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 09:52:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec 06 09:52:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 09:52:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 06 09:52:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 09:52:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec 06 09:52:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 09:52:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 09:52:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 09:52:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 06 09:52:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 06 09:52:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 09:52:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 09:52:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 09:52:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 09:52:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 06 09:52:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 09:52:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 09:52:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 09:52:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 09:52:24 compute-0 sudo[166827]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vjipspksgkuxsobhblfozczlamqermvt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014744.140234-458-17192427351889/AnsiballZ_file.py'
Dec 06 09:52:24 compute-0 sudo[166827]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:52:24 compute-0 lvm[166830]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 06 09:52:24 compute-0 lvm[166830]: VG ceph_vg0 finished
Dec 06 09:52:24 compute-0 pensive_feynman[166604]: {}
Dec 06 09:52:24 compute-0 systemd[1]: libpod-8e049273a3c345fc909b90563301c2eed7d20a4cba545d3d002a8de7cf78b2cc.scope: Deactivated successfully.
Dec 06 09:52:24 compute-0 systemd[1]: libpod-8e049273a3c345fc909b90563301c2eed7d20a4cba545d3d002a8de7cf78b2cc.scope: Consumed 1.103s CPU time.
Dec 06 09:52:24 compute-0 podman[166539]: 2025-12-06 09:52:24.56956768 +0000 UTC m=+0.929096414 container died 8e049273a3c345fc909b90563301c2eed7d20a4cba545d3d002a8de7cf78b2cc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_feynman, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 09:52:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-ea94e619fde317ba34fb59ec9485824daeeb4cafa256324773377d2c8366fa82-merged.mount: Deactivated successfully.
Dec 06 09:52:24 compute-0 podman[166539]: 2025-12-06 09:52:24.61557486 +0000 UTC m=+0.975103604 container remove 8e049273a3c345fc909b90563301c2eed7d20a4cba545d3d002a8de7cf78b2cc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_feynman, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 09:52:24 compute-0 python3.9[166831]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
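This task, like the virtqemud/virtsecretd/virtstoraged ones that follow, just deletes a leftover tripleo unit file; systemd keeps serving the stale unit until the daemon-reload at 09:52:29. A rough Python equivalent of one such removal (the Ansible module does more: recursion, attributes, check mode):

    from pathlib import Path
    import subprocess

    unit = Path("/etc/systemd/system/tripleo_nova_virtproxyd.service")
    if unit.is_symlink() or unit.exists():
        unit.unlink()
        # without this, systemd keeps the deleted unit in memory
        subprocess.run(["systemctl", "daemon-reload"], check=True)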
Dec 06 09:52:24 compute-0 systemd[1]: libpod-conmon-8e049273a3c345fc909b90563301c2eed7d20a4cba545d3d002a8de7cf78b2cc.scope: Deactivated successfully.
Dec 06 09:52:24 compute-0 sudo[166827]: pam_unix(sudo:session): session closed for user root
Dec 06 09:52:24 compute-0 sudo[166252]: pam_unix(sudo:session): session closed for user root
Dec 06 09:52:24 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 06 09:52:24 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:52:24 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 06 09:52:24 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:52:24 compute-0 sudo[166869]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 09:52:24 compute-0 sudo[166869]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:52:24 compute-0 sudo[166869]: pam_unix(sudo:session): session closed for user root
Dec 06 09:52:24 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 09:52:24 compute-0 ceph-mon[74327]: pgmap v327: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Dec 06 09:52:24 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:52:24 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:52:25 compute-0 sudo[167020]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yfpolrfmxacsjyxzljekrcyglaoxoaok ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014744.7803533-458-159826338453952/AnsiballZ_file.py'
Dec 06 09:52:25 compute-0 sudo[167020]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:52:25 compute-0 python3.9[167022]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:52:25 compute-0 sudo[167020]: pam_unix(sudo:session): session closed for user root
Dec 06 09:52:25 compute-0 sudo[167173]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-akkbvcllimpgzwdfaounoxssbvdmoeox ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014745.3695664-458-246415163698549/AnsiballZ_file.py'
Dec 06 09:52:25 compute-0 sudo[167173]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:52:25 compute-0 python3.9[167175]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:52:25 compute-0 sudo[167173]: pam_unix(sudo:session): session closed for user root
Dec 06 09:52:25 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:52:25 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:52:25 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:52:25.841 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:52:25 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:52:25 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:52:25 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:52:25.912 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
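The anonymous "HEAD / HTTP/1.0" pairs from 192.168.122.100 and .102 recur every two seconds through the rest of this section; they are load-balancer health probes against RGW, not client traffic. A hypothetical stdlib reproduction of one probe; the beast access log does not show the listening host or port, so both are assumptions here:

    import http.client

    conn = http.client.HTTPConnection("compute-0.ctlplane.example.com", 8080, timeout=2)
    conn.request("HEAD", "/")           # anonymous, no auth headers, like the probes
    print(conn.getresponse().status)    # the log shows these returning 200
    conn.close()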
Dec 06 09:52:26 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v328: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Dec 06 09:52:26 compute-0 sudo[167325]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qqgzaxnxuyzwelbicatruklkxuihrhka ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014745.9383337-458-197348961537101/AnsiballZ_file.py'
Dec 06 09:52:26 compute-0 sudo[167325]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:52:26 compute-0 python3.9[167327]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:52:26 compute-0 sudo[167325]: pam_unix(sudo:session): session closed for user root
Dec 06 09:52:26 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[164577]: 06/12/2025 09:52:26 : epoch 6933fcce : compute-0 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Dec 06 09:52:26 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[164577]: 06/12/2025 09:52:26 : epoch 6933fcce : compute-0 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Dec 06 09:52:26 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[164577]: 06/12/2025 09:52:26 : epoch 6933fcce : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Dec 06 09:52:26 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[164577]: 06/12/2025 09:52:26 : epoch 6933fcce : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Dec 06 09:52:26 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[164577]: 06/12/2025 09:52:26 : epoch 6933fcce : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Dec 06 09:52:26 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[164577]: 06/12/2025 09:52:26 : epoch 6933fcce : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Dec 06 09:52:26 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[164577]: 06/12/2025 09:52:26 : epoch 6933fcce : compute-0 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Dec 06 09:52:26 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[164577]: 06/12/2025 09:52:26 : epoch 6933fcce : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec 06 09:52:26 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[164577]: 06/12/2025 09:52:26 : epoch 6933fcce : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec 06 09:52:26 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[164577]: 06/12/2025 09:52:26 : epoch 6933fcce : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec 06 09:52:26 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[164577]: 06/12/2025 09:52:26 : epoch 6933fcce : compute-0 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Dec 06 09:52:26 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[164577]: 06/12/2025 09:52:26 : epoch 6933fcce : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec 06 09:52:26 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[164577]: 06/12/2025 09:52:26 : epoch 6933fcce : compute-0 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Dec 06 09:52:26 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[164577]: 06/12/2025 09:52:26 : epoch 6933fcce : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Dec 06 09:52:26 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[164577]: 06/12/2025 09:52:26 : epoch 6933fcce : compute-0 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Dec 06 09:52:26 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[164577]: 06/12/2025 09:52:26 : epoch 6933fcce : compute-0 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Dec 06 09:52:26 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[164577]: 06/12/2025 09:52:26 : epoch 6933fcce : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Dec 06 09:52:26 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[164577]: 06/12/2025 09:52:26 : epoch 6933fcce : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Dec 06 09:52:26 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[164577]: 06/12/2025 09:52:26 : epoch 6933fcce : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Dec 06 09:52:26 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[164577]: 06/12/2025 09:52:26 : epoch 6933fcce : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Dec 06 09:52:26 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[164577]: 06/12/2025 09:52:26 : epoch 6933fcce : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Dec 06 09:52:26 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[164577]: 06/12/2025 09:52:26 : epoch 6933fcce : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Dec 06 09:52:26 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[164577]: 06/12/2025 09:52:26 : epoch 6933fcce : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Dec 06 09:52:26 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[164577]: 06/12/2025 09:52:26 : epoch 6933fcce : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Dec 06 09:52:26 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[164577]: 06/12/2025 09:52:26 : epoch 6933fcce : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Dec 06 09:52:26 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[164577]: 06/12/2025 09:52:26 : epoch 6933fcce : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Dec 06 09:52:26 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[164577]: 06/12/2025 09:52:26 : epoch 6933fcce : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
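Most of the CRIT noise in this startup traces back to one missing dependency: /run/dbus/system_bus_socket does not exist inside the ganesha container, so dbus_bus_get fails and every gsh_dbus_* registration and the dbus service thread fail with it; the Kerberos warnings look like the usual no-realm, no-nfs-keytab case, and startup still reaches "NFS SERVER INITIALIZED". A quick check for the socket:

    # Quick check for the root cause of the DBUS CRIT lines: the system bus
    # socket is simply absent inside the container.
    import os, socket

    path = "/run/dbus/system_bus_socket"
    if not os.path.exists(path):
        print("no system bus socket -> dbus_bus_get must fail")
    else:
        s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        s.connect(path)  # raises if the socket exists but nothing listens
        s.close()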
Dec 06 09:52:27 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:52:27.026Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec 06 09:52:27 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:52:27.027Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
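Both dashboard receivers (compute-1 and compute-2, port 8443, /api/prometheus_receiver) fail at TCP connect ("dial tcp ... i/o timeout"), so this is a reachability problem (host down, port filtered, or no listener) rather than an HTTP-level rejection. A minimal probe that separates timeout, refusal, and success:

    # Minimal reachability probe for the two failing webhook targets named
    # in the alertmanager errors above.
    import socket

    for host in ("compute-1.ctlplane.example.com", "compute-2.ctlplane.example.com"):
        try:
            socket.create_connection((host, 8443), timeout=3).close()
            print(host, "tcp ok")
        except OSError as exc:
            print(host, "tcp fail:", exc)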
Dec 06 09:52:27 compute-0 ceph-mon[74327]: pgmap v328: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Dec 06 09:52:27 compute-0 sudo[167493]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-etahasnfnaxrrsudkscgavrmojjifath ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014746.8562388-611-238251999719758/AnsiballZ_command.py'
Dec 06 09:52:27 compute-0 sudo[167493]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:52:27 compute-0 python3.9[167495]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then
                                               systemctl disable --now certmonger.service
                                               test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service
                                             fi
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
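The shell above disables certmonger only when it is active, and masks it only when no local unit file exists under /etc/systemd/system (so only the vendor unit would get masked). The same flow in Python, as a sketch:

    import subprocess
    from pathlib import Path

    def sh(*args):
        return subprocess.run(args, capture_output=True, text=True)

    if sh("systemctl", "is-active", "certmonger.service").returncode == 0:
        sh("systemctl", "disable", "--now", "certmonger.service")
        # mask only when no local override unit exists, mirroring `test -f || mask`
        if not Path("/etc/systemd/system/certmonger.service").is_file():
            sh("systemctl", "mask", "certmonger.service")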
Dec 06 09:52:27 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[164577]: 06/12/2025 09:52:27 : epoch 6933fcce : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd604000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:52:27 compute-0 sudo[167493]: pam_unix(sudo:session): session closed for user root
Dec 06 09:52:27 compute-0 sudo[167523]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 09:52:27 compute-0 sudo[167523]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:52:27 compute-0 sudo[167523]: pam_unix(sudo:session): session closed for user root
Dec 06 09:52:27 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[164577]: 06/12/2025 09:52:27 : epoch 6933fcce : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd5fc001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:52:27 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:52:27 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:52:27 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:52:27.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:52:27 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:52:27 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:52:27 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:52:27.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:52:28 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v329: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 1023 B/s wr, 3 op/s
Dec 06 09:52:28 compute-0 python3.9[167673]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Dec 06 09:52:28 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[164577]: 06/12/2025 09:52:28 : epoch 6933fcce : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd5f8001550 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:52:28 compute-0 sudo[167823]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-asexfhrpenoydpcscykcxqvuxopiymxl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014748.694143-665-73597346453976/AnsiballZ_systemd_service.py'
Dec 06 09:52:28 compute-0 sudo[167823]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:52:29 compute-0 ceph-mon[74327]: pgmap v329: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 1023 B/s wr, 3 op/s
Dec 06 09:52:29 compute-0 python3.9[167825]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 06 09:52:29 compute-0 systemd[1]: Reloading.
Dec 06 09:52:29 compute-0 systemd-sysv-generator[167858]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 09:52:29 compute-0 systemd-rc-local-generator[167852]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 09:52:29 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[164577]: 06/12/2025 09:52:29 : epoch 6933fcce : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd5ec0011d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:52:29 compute-0 sudo[167823]: pam_unix(sudo:session): session closed for user root
Dec 06 09:52:29 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[164577]: 06/12/2025 09:52:29 : epoch 6933fcce : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd5e4000f90 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:52:29 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:52:29 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 09:52:29 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:52:29.847 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 09:52:29 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 09:52:29 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:52:29 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:52:29 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:52:29.917 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:52:30 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v330: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 938 B/s wr, 3 op/s
Dec 06 09:52:30 compute-0 sudo[168013]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nwpusiyiugmeypbadyujdfpkxrshhell ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014749.9071398-689-75435309190804/AnsiballZ_command.py'
Dec 06 09:52:30 compute-0 sudo[168013]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:52:30 compute-0 python3.9[168015]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_libvirt.target _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 09:52:30 compute-0 sudo[168013]: pam_unix(sudo:session): session closed for user root
Dec 06 09:52:30 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-haproxy-nfs-cephfs-compute-0-fzuvue[96127]: [WARNING] 339/095230 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Dec 06 09:52:30 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[164577]: 06/12/2025 09:52:30 : epoch 6933fcce : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd5fc0023e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
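Putting the two previous lines together: each haproxy Layer4 check opens a TCP connection to the ganesha backend and closes it without a complete PROXY-protocol/RPC record, and each such probe appears to account for one of the recurring svc_vc_recv "... (will set dead)" EVENT lines. A hypothetical bare probe that provokes the same message; host and port are assumptions (2049 is the conventional NFS port):

    import socket

    with socket.create_connection(("compute-0.ctlplane.example.com", 2049), timeout=2):
        pass  # no PROXY header, no RPC record -> ganesha logs one svc_vc_recv EVENT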
Dec 06 09:52:30 compute-0 sudo[168166]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bilxvkjmwpuctsmceoyvtjzgffzajlsh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014750.4969969-689-91150644890433/AnsiballZ_command.py'
Dec 06 09:52:30 compute-0 sudo[168166]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:52:30 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:09:52:30] "GET /metrics HTTP/1.1" 200 48271 "" "Prometheus/2.51.0"
Dec 06 09:52:30 compute-0 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:09:52:30] "GET /metrics HTTP/1.1" 200 48271 "" "Prometheus/2.51.0"
Dec 06 09:52:30 compute-0 python3.9[168168]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtlogd_wrapper.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 09:52:30 compute-0 sudo[168166]: pam_unix(sudo:session): session closed for user root
Dec 06 09:52:31 compute-0 ceph-mon[74327]: pgmap v330: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 938 B/s wr, 3 op/s
Dec 06 09:52:31 compute-0 sudo[168320]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lbxcyxndsyvnrhbslggttzjjfjxvzhfr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014751.097292-689-262375289260061/AnsiballZ_command.py'
Dec 06 09:52:31 compute-0 sudo[168320]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:52:31 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[164577]: 06/12/2025 09:52:31 : epoch 6933fcce : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd5f80021d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:52:31 compute-0 python3.9[168322]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtnodedevd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 09:52:31 compute-0 sudo[168320]: pam_unix(sudo:session): session closed for user root
Dec 06 09:52:31 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[164577]: 06/12/2025 09:52:31 : epoch 6933fcce : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd5ec001d60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:52:31 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:52:31 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:52:31 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:52:31.851 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:52:31 compute-0 sudo[168474]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zjjajhbdzwlkjjzbgaivdafpwrylcvfe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014751.6701436-689-214918297327000/AnsiballZ_command.py'
Dec 06 09:52:31 compute-0 sudo[168474]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:52:31 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:52:31 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:52:31 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:52:31.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:52:32 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v331: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 938 B/s wr, 3 op/s
Dec 06 09:52:32 compute-0 python3.9[168476]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtproxyd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 09:52:32 compute-0 sudo[168474]: pam_unix(sudo:session): session closed for user root
Dec 06 09:52:32 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[164577]: 06/12/2025 09:52:32 : epoch 6933fcce : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd5e4001ab0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:52:32 compute-0 sudo[168627]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-txzwpjyptleatkhtmzqlfrvniproftgm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014752.2683926-689-269187144618838/AnsiballZ_command.py'
Dec 06 09:52:32 compute-0 sudo[168627]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:52:32 compute-0 python3.9[168629]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtqemud.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 09:52:32 compute-0 sudo[168627]: pam_unix(sudo:session): session closed for user root
Dec 06 09:52:33 compute-0 ceph-mon[74327]: pgmap v331: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 938 B/s wr, 3 op/s
Dec 06 09:52:33 compute-0 sudo[168781]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yfrwwqnrtqtkcftmkbeuwspijwhqaprv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014752.910314-689-262916387180217/AnsiballZ_command.py'
Dec 06 09:52:33 compute-0 sudo[168781]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:52:33 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[164577]: 06/12/2025 09:52:33 : epoch 6933fcce : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd5fc0023e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:52:33 compute-0 python3.9[168783]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtsecretd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 09:52:33 compute-0 sudo[168781]: pam_unix(sudo:session): session closed for user root
Dec 06 09:52:33 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[164577]: 06/12/2025 09:52:33 : epoch 6933fcce : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd5f80021d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:52:33 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:52:33 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:52:33 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:52:33.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:52:33 compute-0 sudo[168935]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pmqbltgjszscikebwlmfiiyumesnpmph ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014753.6001656-689-269630729497144/AnsiballZ_command.py'
Dec 06 09:52:33 compute-0 sudo[168935]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:52:33 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:52:33 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:52:33 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:52:33.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:52:34 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v332: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 938 B/s wr, 3 op/s
Dec 06 09:52:34 compute-0 python3.9[168937]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtstoraged.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
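The reset-failed calls from 09:52:30 through 09:52:34 clear failed state for the six legacy units one task at a time; the equivalent loop, sketched:

    import subprocess

    units = [
        "tripleo_nova_libvirt.target",
        "tripleo_nova_virtlogd_wrapper.service",
        "tripleo_nova_virtnodedevd.service",
        "tripleo_nova_virtproxyd.service",
        "tripleo_nova_virtqemud.service",
        "tripleo_nova_virtstoraged.service",
    ]
    for unit in units:
        # check=False: reset-failed on a unit with no failed state is harmless
        subprocess.run(["systemctl", "reset-failed", unit], check=False)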
Dec 06 09:52:34 compute-0 sudo[168935]: pam_unix(sudo:session): session closed for user root
Dec 06 09:52:34 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[164577]: 06/12/2025 09:52:34 : epoch 6933fcce : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd5ec001d60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:52:34 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 09:52:35 compute-0 sudo[169089]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hfnnjbhivbbcbxnkuzcebpqtuyvkmcwa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014754.7276754-851-27138350748507/AnsiballZ_getent.py'
Dec 06 09:52:35 compute-0 sudo[169089]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:52:35 compute-0 ceph-mon[74327]: pgmap v332: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 938 B/s wr, 3 op/s
Dec 06 09:52:35 compute-0 python3.9[169091]: ansible-ansible.builtin.getent Invoked with database=passwd key=libvirt fail_key=True service=None split=None
Dec 06 09:52:35 compute-0 sudo[169089]: pam_unix(sudo:session): session closed for user root
Dec 06 09:52:35 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[164577]: 06/12/2025 09:52:35 : epoch 6933fcce : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd5e4001ab0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:52:35 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[164577]: 06/12/2025 09:52:35 : epoch 6933fcce : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd5fc0023e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:52:35 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:52:35 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:52:35 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:52:35.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:52:35 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:52:35 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:52:35 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:52:35.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:52:36 compute-0 sudo[169243]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fglbtgjarvqgtvouetvyxyeqqixxhecu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014755.5673316-875-167229246452832/AnsiballZ_group.py'
Dec 06 09:52:36 compute-0 sudo[169243]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:52:36 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v333: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Dec 06 09:52:36 compute-0 python3.9[169245]: ansible-ansible.builtin.group Invoked with gid=42473 name=libvirt state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Dec 06 09:52:36 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[164577]: 06/12/2025 09:52:36 : epoch 6933fcce : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd5f8002ee0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:52:36 compute-0 groupadd[169246]: group added to /etc/group: name=libvirt, GID=42473
Dec 06 09:52:36 compute-0 groupadd[169246]: group added to /etc/gshadow: name=libvirt
Dec 06 09:52:36 compute-0 ceph-mon[74327]: pgmap v333: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Dec 06 09:52:36 compute-0 groupadd[169246]: new group: name=libvirt, GID=42473
Dec 06 09:52:36 compute-0 sudo[169243]: pam_unix(sudo:session): session closed for user root
Dec 06 09:52:37 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:52:37.028Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec 06 09:52:37 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:52:37.028Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec 06 09:52:37 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[164577]: 06/12/2025 09:52:37 : epoch 6933fcce : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd5ec001d60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:52:37 compute-0 sudo[169403]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ijufkdwcskjeldxaahmayjredbfmntza ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014756.9977672-899-182254053941569/AnsiballZ_user.py'
Dec 06 09:52:37 compute-0 sudo[169403]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:52:37 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[164577]: 06/12/2025 09:52:37 : epoch 6933fcce : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd5e4001ab0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:52:37 compute-0 python3.9[169405]: ansible-ansible.builtin.user Invoked with comment=libvirt user group=libvirt groups=[''] name=libvirt shell=/sbin/nologin state=present uid=42473 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Dec 06 09:52:37 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:52:37 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:52:37 compute-0 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 06 09:52:37 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:52:37.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:52:37 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:52:37 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 09:52:37 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:52:37.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 09:52:38 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v334: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Dec 06 09:52:38 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[164577]: 06/12/2025 09:52:38 : epoch 6933fcce : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd5fc0023e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:52:38 compute-0 useradd[169407]: new user: name=libvirt, UID=42473, GID=42473, home=/home/libvirt, shell=/sbin/nologin, from=/dev/pts/0
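Group and user now agree on 42473 with a nologin shell; a stdlib spot-check:

    # Verify the account the groupadd/useradd lines above created.
    import pwd, grp

    u, g = pwd.getpwnam("libvirt"), grp.getgrnam("libvirt")
    assert (u.pw_uid, u.pw_gid, g.gr_gid) == (42473, 42473, 42473)
    assert u.pw_shell == "/sbin/nologin"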
Dec 06 09:52:38 compute-0 ceph-mon[74327]: pgmap v334: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Dec 06 09:52:38 compute-0 sudo[169403]: pam_unix(sudo:session): session closed for user root
Dec 06 09:52:38 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 06 09:52:38 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 09:52:39 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[164577]: 06/12/2025 09:52:39 : epoch 6933fcce : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd5f8002ee0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:52:39 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[164577]: 06/12/2025 09:52:39 : epoch 6933fcce : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd5ec0031f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:52:39 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:52:39 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 09:52:39 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:52:39.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 09:52:39 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 09:52:39 compute-0 sudo[169566]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tvahahsrwggknczaeclhvenuphkjtvmh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014759.536339-932-58962050880669/AnsiballZ_setup.py'
Dec 06 09:52:39 compute-0 sudo[169566]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:52:39 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:52:39 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:52:39 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:52:39.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:52:39 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 09:52:40 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v335: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:52:40 compute-0 python3.9[169568]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 06 09:52:40 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[164577]: 06/12/2025 09:52:40 : epoch 6933fcce : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd5e4002f40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:52:40 compute-0 sudo[169566]: pam_unix(sudo:session): session closed for user root
Dec 06 09:52:40 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:09:52:40] "GET /metrics HTTP/1.1" 200 48270 "" "Prometheus/2.51.0"
Dec 06 09:52:40 compute-0 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:09:52:40] "GET /metrics HTTP/1.1" 200 48270 "" "Prometheus/2.51.0"
Dec 06 09:52:40 compute-0 sudo[169650]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tjdlvcsdovtansqifdqzcnboufzachpv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014759.536339-932-58962050880669/AnsiballZ_dnf.py'
Dec 06 09:52:40 compute-0 sudo[169650]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:52:41 compute-0 ceph-mon[74327]: pgmap v335: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:52:41 compute-0 python3.9[169652]: ansible-ansible.legacy.dnf Invoked with name=['libvirt ', 'libvirt-admin ', 'libvirt-client ', 'libvirt-daemon ', 'qemu-kvm', 'qemu-img', 'libguestfs', 'libseccomp', 'swtpm', 'swtpm-tools', 'edk2-ovmf', 'ceph-common', 'cyrus-sasl-scram'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
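Note the trailing whitespace inside four of the requested names ('libvirt ', 'libvirt-admin ', 'libvirt-client ', 'libvirt-daemon '), carried straight from the playbook variable. Whether the dnf layer strips or tolerates it, normalizing upstream is cheap, e.g.:

    # Defensive normalization of the package list shown in the invocation.
    pkgs = ['libvirt ', 'libvirt-admin ', 'libvirt-client ', 'libvirt-daemon ',
            'qemu-kvm', 'qemu-img', 'libguestfs', 'libseccomp', 'swtpm',
            'swtpm-tools', 'edk2-ovmf', 'ceph-common', 'cyrus-sasl-scram']
    pkgs = sorted({p.strip() for p in pkgs if p.strip()})  # drop stray whitespace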
Dec 06 09:52:41 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[164577]: 06/12/2025 09:52:41 : epoch 6933fcce : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd5fc0023e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:52:41 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[164577]: 06/12/2025 09:52:41 : epoch 6933fcce : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd5f8002ee0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:52:41 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:52:41 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:52:41 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:52:41.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:52:41 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:52:41 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:52:41 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:52:41.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:52:42 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v336: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:52:42 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[164577]: 06/12/2025 09:52:42 : epoch 6933fcce : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd5ec0031f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:52:43 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[164577]: 06/12/2025 09:52:43 : epoch 6933fcce : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd5e4002f40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:52:43 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[164577]: 06/12/2025 09:52:43 : epoch 6933fcce : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd5fc0023e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:52:43 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:52:43 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:52:43 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:52:43.876 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:52:43 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:52:43 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:52:43 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:52:43.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:52:44 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v337: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:52:44 compute-0 ceph-mon[74327]: pgmap v336: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:52:44 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[164577]: 06/12/2025 09:52:44 : epoch 6933fcce : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd5f8002ee0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:52:44 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 09:52:45 compute-0 ceph-mon[74327]: pgmap v337: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:52:45 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[164577]: 06/12/2025 09:52:45 : epoch 6933fcce : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd5ec0031f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:52:45 compute-0 podman[169666]: 2025-12-06 09:52:45.488097836 +0000 UTC m=+0.109867862 container health_status ed08b04f63d3695f0aa2f04fcc706897af4aa24f286fa3cd10a4323ee2c868ab (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller)
Dec 06 09:52:45 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[164577]: 06/12/2025 09:52:45 : epoch 6933fcce : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd5ec0031f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:52:45 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:52:45 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 09:52:45 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:52:45.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 09:52:45 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:52:45 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:52:45 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:52:45.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:52:46 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v338: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:52:46 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[164577]: 06/12/2025 09:52:46 : epoch 6933fcce : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd5fc0023e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:52:47 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:52:47.029Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 06 09:52:47 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[164577]: 06/12/2025 09:52:47 : epoch 6933fcce : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd5f8002ee0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:52:47 compute-0 ceph-mon[74327]: pgmap v338: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:52:47 compute-0 sudo[169695]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 09:52:47 compute-0 sudo[169695]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:52:47 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[164577]: 06/12/2025 09:52:47 : epoch 6933fcce : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd5e4003c50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:52:47 compute-0 sudo[169695]: pam_unix(sudo:session): session closed for user root
Dec 06 09:52:47 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:52:47 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:52:47 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:52:47.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:52:47 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:52:47 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:52:47 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:52:47.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:52:48 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v339: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 09:52:48 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[164577]: 06/12/2025 09:52:48 : epoch 6933fcce : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd5ec0031f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:52:48 compute-0 ceph-mon[74327]: pgmap v339: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 09:52:49 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[164577]: 06/12/2025 09:52:49 : epoch 6933fcce : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd5fc0023e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:52:49 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[164577]: 06/12/2025 09:52:49 : epoch 6933fcce : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd5f8002ee0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:52:49 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:52:49 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 09:52:49 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:52:49.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 09:52:49 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 09:52:49 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:52:49 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:52:49 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:52:49.941 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:52:50 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v340: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:52:50 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[164577]: 06/12/2025 09:52:50 : epoch 6933fcce : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd5f8002ee0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:52:50 compute-0 ceph-mon[74327]: pgmap v340: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:52:50 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:09:52:50] "GET /metrics HTTP/1.1" 200 48270 "" "Prometheus/2.51.0"
Dec 06 09:52:50 compute-0 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:09:52:50] "GET /metrics HTTP/1.1" 200 48270 "" "Prometheus/2.51.0"
Dec 06 09:52:51 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[164577]: 06/12/2025 09:52:51 : epoch 6933fcce : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd5ec0031f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:52:51 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[164577]: 06/12/2025 09:52:51 : epoch 6933fcce : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd5ec0031f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:52:51 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:52:51 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:52:51 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:52:51.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:52:51 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:52:51 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:52:51 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:52:51.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:52:52 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v341: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:52:52 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[164577]: 06/12/2025 09:52:52 : epoch 6933fcce : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd5ec0031f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:52:53 compute-0 ceph-mon[74327]: pgmap v341: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:52:53 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[164577]: 06/12/2025 09:52:53 : epoch 6933fcce : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd5d4000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:52:53 compute-0 podman[169729]: 2025-12-06 09:52:53.424884983 +0000 UTC m=+0.061313795 container health_status ec1e66d2175087ad7a28a9a6b5a784e30128df3f5e268959d009e364cd5120f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true)
Dec 06 09:52:53 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[164577]: 06/12/2025 09:52:53 : epoch 6933fcce : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd5f8002ee0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:52:53 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:52:53 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:52:53 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:52:53.893 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:52:53 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 06 09:52:53 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 09:52:53 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:52:53 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:52:53 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:52:53.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:52:53 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 09:52:53 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 09:52:53 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 09:52:53 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 09:52:53 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 09:52:53 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 09:52:54 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v342: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:52:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:52:54.223 162267 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 09:52:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:52:54.224 162267 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 09:52:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:52:54.225 162267 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 09:52:54 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 09:52:54 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[164577]: 06/12/2025 09:52:54 : epoch 6933fcce : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd5fc0023e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:52:54 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 09:52:55 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[164577]: 06/12/2025 09:52:55 : epoch 6933fcce : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd5ec0031f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:52:55 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[164577]: 06/12/2025 09:52:55 : epoch 6933fcce : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd5d40016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:52:55 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:52:55 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:52:55 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:52:55.897 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:52:55 compute-0 ceph-mon[74327]: pgmap v342: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:52:55 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:52:55 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:52:55 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:52:55.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:52:56 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v343: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:52:56 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[164577]: 06/12/2025 09:52:56 : epoch 6933fcce : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd5f8002ee0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:52:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:52:57.029Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec 06 09:52:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:52:57.031Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 06 09:52:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[164577]: 06/12/2025 09:52:57 : epoch 6933fcce : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd5fc0023e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:52:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[164577]: 06/12/2025 09:52:57 : epoch 6933fcce : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd5ec0031f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:52:57 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:52:57 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000028s ======
Dec 06 09:52:57 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:52:57.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 06 09:52:57 compute-0 ceph-mon[74327]: pgmap v343: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:52:57 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:52:57 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 09:52:57 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:52:57.949 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 09:52:58 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v344: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 09:52:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[164577]: 06/12/2025 09:52:58 : epoch 6933fcce : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd5d40016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:52:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[164577]: 06/12/2025 09:52:59 : epoch 6933fcce : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd5f8002ee0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:52:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[164577]: 06/12/2025 09:52:59 : epoch 6933fcce : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd5cc000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:52:59 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:52:59 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:52:59 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:52:59.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:52:59 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 09:52:59 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:52:59 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 09:52:59 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:52:59.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 09:52:59 compute-0 ceph-mon[74327]: pgmap v344: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 09:53:00 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v345: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:53:00 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[164577]: 06/12/2025 09:53:00 : epoch 6933fcce : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd5ec0031f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:53:00 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:09:53:00] "GET /metrics HTTP/1.1" 200 48271 "" "Prometheus/2.51.0"
Dec 06 09:53:00 compute-0 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:09:53:00] "GET /metrics HTTP/1.1" 200 48271 "" "Prometheus/2.51.0"
Dec 06 09:53:01 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[164577]: 06/12/2025 09:53:01 : epoch 6933fcce : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd5d40016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:53:01 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[164577]: 06/12/2025 09:53:01 : epoch 6933fcce : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd5f8002ee0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:53:01 compute-0 ceph-mon[74327]: pgmap v345: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:53:01 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:53:01 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000028s ======
Dec 06 09:53:01 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:53:01.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 06 09:53:01 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:53:01 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 09:53:01 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:53:01.954 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 09:53:02 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v346: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:53:02 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[164577]: 06/12/2025 09:53:02 : epoch 6933fcce : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd5cc0016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:53:03 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[164577]: 06/12/2025 09:53:03 : epoch 6933fcce : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd5ec0031f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:53:03 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[164577]: 06/12/2025 09:53:03 : epoch 6933fcce : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd5d4002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:53:03 compute-0 ceph-mon[74327]: pgmap v346: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:53:03 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:53:03 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:53:03 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:53:03.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:53:03 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:53:03 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 09:53:03 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:53:03.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 09:53:04 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v347: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:53:04 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[164577]: 06/12/2025 09:53:04 : epoch 6933fcce : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd5f8002ee0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:53:04 compute-0 ceph-mon[74327]: pgmap v347: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:53:04 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 09:53:05 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[164577]: 06/12/2025 09:53:05 : epoch 6933fcce : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd5cc0016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:53:05 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[164577]: 06/12/2025 09:53:05 : epoch 6933fcce : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd5ec0031f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:53:05 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:53:05 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000028s ======
Dec 06 09:53:05 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:53:05.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 06 09:53:05 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:53:05 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:53:05 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:53:05.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:53:06 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v348: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:53:06 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[164577]: 06/12/2025 09:53:06 : epoch 6933fcce : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd5d4002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:53:07 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:53:07.032Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec 06 09:53:07 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:53:07.032Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 06 09:53:07 compute-0 ceph-mon[74327]: pgmap v348: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:53:07 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[164577]: 06/12/2025 09:53:07 : epoch 6933fcce : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd5f8002ee0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:53:07 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[164577]: 06/12/2025 09:53:07 : epoch 6933fcce : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd5cc0016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:53:07 compute-0 sudo[169826]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 09:53:07 compute-0 sudo[169826]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:53:07 compute-0 sudo[169826]: pam_unix(sudo:session): session closed for user root
Dec 06 09:53:07 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:53:07 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:53:07 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:53:07.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:53:07 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:53:07 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:53:07 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:53:07.961 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:53:08 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v349: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 09:53:08 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[164577]: 06/12/2025 09:53:08 : epoch 6933fcce : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd5ec0031f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:53:08 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 06 09:53:08 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 09:53:09 compute-0 ceph-mon[74327]: pgmap v349: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 09:53:09 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 09:53:09 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[164577]: 06/12/2025 09:53:09 : epoch 6933fcce : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd5d4002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:53:09 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[164577]: 06/12/2025 09:53:09 : epoch 6933fcce : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd5f8002ee0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:53:09 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:53:09 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 09:53:09 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:53:09.916 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 09:53:09 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 09:53:09 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:53:09 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 09:53:09 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:53:09.964 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 09:53:10 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v350: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:53:10 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[164577]: 06/12/2025 09:53:10 : epoch 6933fcce : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd5cc002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:53:10 compute-0 ceph-mon[74327]: pgmap v350: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:53:10 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:09:53:10] "GET /metrics HTTP/1.1" 200 48270 "" "Prometheus/2.51.0"
Dec 06 09:53:10 compute-0 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:09:53:10] "GET /metrics HTTP/1.1" 200 48270 "" "Prometheus/2.51.0"
Dec 06 09:53:11 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[164577]: 06/12/2025 09:53:11 : epoch 6933fcce : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd5ec0031f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:53:11 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[164577]: 06/12/2025 09:53:11 : epoch 6933fcce : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd5d4003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:53:11 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:53:11 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 09:53:11 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:53:11.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 09:53:11 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:53:11 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 09:53:11 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:53:11.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 09:53:12 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v351: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:53:12 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[164577]: 06/12/2025 09:53:12 : epoch 6933fcce : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd5f80043d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:53:13 compute-0 ceph-mon[74327]: pgmap v351: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:53:13 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[164577]: 06/12/2025 09:53:13 : epoch 6933fcce : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd5cc002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:53:13 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[164577]: 06/12/2025 09:53:13 : epoch 6933fcce : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd5ec0031f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:53:13 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:53:13 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:53:13 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:53:13.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:53:13 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:53:13 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:53:13 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:53:13.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:53:14 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v352: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:53:14 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[164577]: 06/12/2025 09:53:14 : epoch 6933fcce : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd5d4003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:53:14 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 09:53:15 compute-0 ceph-mon[74327]: pgmap v352: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:53:15 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[164577]: 06/12/2025 09:53:15 : epoch 6933fcce : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd5f80043d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:53:15 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[164577]: 06/12/2025 09:53:15 : epoch 6933fcce : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd5cc002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:53:15 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:53:15 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:53:15 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:53:15.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:53:15 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:53:15 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:53:15 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:53:15.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:53:16 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v353: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:53:16 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[164577]: 06/12/2025 09:53:16 : epoch 6933fcce : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd5f80043d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:53:16 compute-0 podman[169975]: 2025-12-06 09:53:16.472868697 +0000 UTC m=+0.100655562 container health_status ed08b04f63d3695f0aa2f04fcc706897af4aa24f286fa3cd10a4323ee2c868ab (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3)
Dec 06 09:53:17 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:53:17.035Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec 06 09:53:17 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:53:17.035Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec 06 09:53:17 compute-0 ceph-mon[74327]: pgmap v353: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:53:17 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[164577]: 06/12/2025 09:53:17 : epoch 6933fcce : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd5ec0031f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:53:17 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[164577]: 06/12/2025 09:53:17 : epoch 6933fcce : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd5ec0031f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:53:17 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:53:17 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:53:17 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:53:17.928 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:53:17 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:53:17 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:53:17 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:53:17.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:53:18 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v354: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 09:53:18 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[164577]: 06/12/2025 09:53:18 : epoch 6933fcce : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd5cc003c10 fd 38 proxy ignored for local
Dec 06 09:53:18 compute-0 kernel: ganesha.nfsd[169753]: segfault at 50 ip 00007fd6acef232e sp 00007fd66dffa210 error 4 in libntirpc.so.5.8[7fd6aced7000+2c000] likely on CPU 2 (core 0, socket 2)
Dec 06 09:53:18 compute-0 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Dec 06 09:53:18 compute-0 systemd[1]: Started Process Core Dump (PID 170004/UID 0).
Dec 06 09:53:19 compute-0 ceph-mon[74327]: pgmap v354: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 09:53:19 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 09:53:19 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:53:19 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000028s ======
Dec 06 09:53:19 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:53:19.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 06 09:53:19 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:53:19 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:53:19 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:53:19.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:53:20 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v355: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:53:20 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:09:53:20] "GET /metrics HTTP/1.1" 200 48270 "" "Prometheus/2.51.0"
Dec 06 09:53:20 compute-0 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:09:53:20] "GET /metrics HTTP/1.1" 200 48270 "" "Prometheus/2.51.0"
Dec 06 09:53:21 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:53:21 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:53:21 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:53:21.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:53:21 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:53:21 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 09:53:21 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:53:21.978 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 09:53:22 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v356: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:53:22 compute-0 systemd-coredump[170005]: Process 164594 (ganesha.nfsd) of user 0 dumped core.
                                                    
                                                    Stack trace of thread 55:
                                                    #0  0x00007fd6acef232e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)
                                                    ELF object binary architecture: AMD x86-64
Dec 06 09:53:22 compute-0 ceph-mon[74327]: pgmap v355: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:53:22 compute-0 systemd[1]: systemd-coredump@4-170004-0.service: Deactivated successfully.
Dec 06 09:53:22 compute-0 systemd[1]: systemd-coredump@4-170004-0.service: Consumed 1.193s CPU time.
Dec 06 09:53:22 compute-0 podman[170014]: 2025-12-06 09:53:22.539088182 +0000 UTC m=+0.031405049 container died 93232cf7a3aa14b498eb360a2c2c9b048fb224223433b0172f5d74ecc111a449 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec 06 09:53:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-7626a1b0cf860c2c5b35de77ed4f479b9d5cb19d90798b1515dcfcd9ae27d8ae-merged.mount: Deactivated successfully.
Dec 06 09:53:22 compute-0 podman[170014]: 2025-12-06 09:53:22.585940856 +0000 UTC m=+0.078257703 container remove 93232cf7a3aa14b498eb360a2c2c9b048fb224223433b0172f5d74ecc111a449 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Dec 06 09:53:22 compute-0 systemd[1]: ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258@nfs.cephfs.2.0.compute-0.dfwxck.service: Main process exited, code=exited, status=139/n/a
Dec 06 09:53:22 compute-0 systemd[1]: ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258@nfs.cephfs.2.0.compute-0.dfwxck.service: Failed with result 'exit-code'.
Dec 06 09:53:22 compute-0 systemd[1]: ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258@nfs.cephfs.2.0.compute-0.dfwxck.service: Consumed 1.624s CPU time.
Dec 06 09:53:23 compute-0 ceph-mon[74327]: pgmap v356: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:53:23 compute-0 ceph-mgr[74618]: [balancer INFO root] Optimize plan auto_2025-12-06_09:53:23
Dec 06 09:53:23 compute-0 ceph-mgr[74618]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 06 09:53:23 compute-0 ceph-mgr[74618]: [balancer INFO root] do_upmap
Dec 06 09:53:23 compute-0 ceph-mgr[74618]: [balancer INFO root] pools ['volumes', '.rgw.root', '.nfs', '.mgr', 'default.rgw.control', 'vms', 'default.rgw.meta', 'cephfs.cephfs.meta', 'images', 'default.rgw.log', 'cephfs.cephfs.data', 'backups']
Dec 06 09:53:23 compute-0 ceph-mgr[74618]: [balancer INFO root] prepared 0/10 upmap changes
Dec 06 09:53:23 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 06 09:53:23 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 09:53:23 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:53:23 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:53:23 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:53:23.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:53:23 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 09:53:23 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 09:53:23 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:53:23 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 09:53:23 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:53:23.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 09:53:23 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 09:53:23 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 09:53:23 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 09:53:23 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 09:53:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] _maybe_adjust
Dec 06 09:53:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 06 09:53:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 09:53:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 09:53:24 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v357: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:53:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 09:53:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 06 09:53:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 09:53:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 09:53:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 09:53:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 09:53:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 09:53:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 09:53:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 09:53:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 09:53:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 09:53:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Dec 06 09:53:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 09:53:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 09:53:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 09:53:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec 06 09:53:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 09:53:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 06 09:53:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 09:53:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec 06 09:53:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 09:53:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 09:53:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 09:53:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
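The per-pool lines above record the autoscaler's arithmetic directly: each pool's capacity ratio times its bias times the root's PG budget gives the raw pg target, which is then quantized to a power of two and compared against the current pg_num. A minimal sketch of that calculation, assuming the default mon_target_pg_per_osd of 100 and three OSDs (both assumptions; the real module also applies thresholds and clamps this sketch omits):

    import math

    # Sketch of the pg_autoscaler arithmetic in the log lines above.
    # Assumed, not read from the log: mon_target_pg_per_osd = 100 and
    # num_osds = 3, i.e. a root budget of 300 PGs.
    def pg_target(usage_ratio, bias, num_osds=3, target_pg_per_osd=100):
        # Pool '.mgr': 7.185749983720779e-06 * 1.0 * 300 ~= 0.0021557
        return usage_ratio * bias * num_osds * target_pg_per_osd

    def quantize(raw, current):
        # Round the raw target up to a power of two; this simplified model
        # never proposes shrinking below the current pg_num.
        desired = 1 if raw <= 1 else 2 ** math.ceil(math.log2(raw))
        return max(desired, current)

    print(quantize(pg_target(7.185749983720779e-06, 1.0), current=1))   # 1
    print(quantize(pg_target(5.087256625643029e-07, 4.0), current=16))  # 16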
Dec 06 09:53:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 09:53:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 09:53:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 06 09:53:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 09:53:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 09:53:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 09:53:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 09:53:24 compute-0 podman[170060]: 2025-12-06 09:53:24.436684495 +0000 UTC m=+0.066131208 container health_status ec1e66d2175087ad7a28a9a6b5a784e30128df3f5e268959d009e364cd5120f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
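The health_status event above embeds the container's config_data as a Python-style literal (single-quoted keys), so json.loads rejects it while ast.literal_eval parses it cleanly. A trimmed example using a fragment of that blob:

    import ast

    # Trimmed fragment of the config_data blob from the event above;
    # the full blob parses the same way.
    config_data = "{'healthcheck': {'test': '/openstack/healthcheck'}}"
    cfg = ast.literal_eval(config_data)
    print(cfg["healthcheck"]["test"])  # -> /openstack/healthcheck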
Dec 06 09:53:24 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 09:53:24 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 09:53:25 compute-0 sudo[170079]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 09:53:25 compute-0 sudo[170079]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:53:25 compute-0 sudo[170079]: pam_unix(sudo:session): session closed for user root
Dec 06 09:53:25 compute-0 sudo[170105]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Dec 06 09:53:25 compute-0 sudo[170105]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:53:25 compute-0 sudo[170105]: pam_unix(sudo:session): session closed for user root
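The sudo pair above is cephadm's standard remote round-trip: probe for python3 with `which`, then run the deployed cephadm script as root, here with the gather-facts subcommand that returns a JSON host inventory. Reproducing the call locally, with the script path copied verbatim from the log:

    import subprocess

    # Same invocation the cephadm mgr module issues via ssh/sudo above.
    facts = subprocess.check_output(
        ["sudo", "/bin/python3",
         "/var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/"
         "cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36",
         "--timeout", "895", "gather-facts"],
        text=True)
    print(facts[:200])  # beginning of the JSON host inventory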
Dec 06 09:53:25 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 06 09:53:25 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 09:53:25 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 06 09:53:25 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 09:53:25 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 06 09:53:25 compute-0 ceph-mon[74327]: pgmap v357: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:53:25 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:53:25 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Dec 06 09:53:25 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:53:25 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 06 09:53:25 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 09:53:25 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 06 09:53:25 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 09:53:25 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 06 09:53:25 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
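The handle_command/audit pairs above are the mgr driving the monitor over its JSON command interface; the same calls can be issued from the python-rados binding. A minimal sketch, assuming a local ceph.conf and an admin keyring (both assumptions):

    import json
    import rados

    # Issue the same "config generate-minimal-conf" the audit log records.
    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")  # assumed path
    cluster.connect()
    try:
        ret, outbuf, outs = cluster.mon_command(
            json.dumps({"prefix": "config generate-minimal-conf"}), b"")
        if ret == 0:
            print(outbuf.decode())  # the minimal conf cephadm distributes
    finally:
        cluster.shutdown()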
Dec 06 09:53:25 compute-0 sudo[170167]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 09:53:25 compute-0 sudo[170167]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:53:25 compute-0 sudo[170167]: pam_unix(sudo:session): session closed for user root
Dec 06 09:53:25 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:53:25 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000028s ======
Dec 06 09:53:25 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:53:25.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 06 09:53:25 compute-0 sudo[170192]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 5ecd3f74-dade-5fc4-92ce-8950ae424258 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Dec 06 09:53:25 compute-0 sudo[170192]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:53:25 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:53:25 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:53:25 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:53:25.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
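The anonymous "HEAD / HTTP/1.0" requests landing every two seconds from 192.168.122.100 and .102 are load-balancer health probes against the beast frontend; radosgw answers 200 with an empty body in about a millisecond. An equivalent probe (the port is an assumption; the log never records which one beast listens on):

    import http.client

    # Mimic the anonymous HEAD probe seen in the beast access lines above.
    conn = http.client.HTTPConnection("192.168.122.100", 8080, timeout=2)
    conn.request("HEAD", "/")
    print(conn.getresponse().status)  # expect 200 from a healthy radosgw
    conn.close()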
Dec 06 09:53:26 compute-0 kernel: SELinux:  Converting 2776 SID table entries...
Dec 06 09:53:26 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Dec 06 09:53:26 compute-0 kernel: SELinux:  policy capability open_perms=1
Dec 06 09:53:26 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Dec 06 09:53:26 compute-0 kernel: SELinux:  policy capability always_check_network=0
Dec 06 09:53:26 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec 06 09:53:26 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec 06 09:53:26 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec 06 09:53:26 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v358: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:53:26 compute-0 podman[170260]: 2025-12-06 09:53:26.440090841 +0000 UTC m=+0.064993766 container create c3b19d3bcac140511b3ee1d049a79c12722e79e3853590c0861e0db5b82d4782 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_chebyshev, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 09:53:26 compute-0 dbus-broker-launch[771]: avc:  op=load_policy lsm=selinux seqno=12 res=1
Dec 06 09:53:26 compute-0 podman[170260]: 2025-12-06 09:53:26.399099699 +0000 UTC m=+0.024002614 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 06 09:53:26 compute-0 systemd[1]: Started libpod-conmon-c3b19d3bcac140511b3ee1d049a79c12722e79e3853590c0861e0db5b82d4782.scope.
Dec 06 09:53:26 compute-0 systemd[1]: Started libcrun container.
Dec 06 09:53:26 compute-0 podman[170260]: 2025-12-06 09:53:26.570107683 +0000 UTC m=+0.195010658 container init c3b19d3bcac140511b3ee1d049a79c12722e79e3853590c0861e0db5b82d4782 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_chebyshev, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 09:53:26 compute-0 podman[170260]: 2025-12-06 09:53:26.579417311 +0000 UTC m=+0.204320236 container start c3b19d3bcac140511b3ee1d049a79c12722e79e3853590c0861e0db5b82d4782 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_chebyshev, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 09:53:26 compute-0 nice_chebyshev[170276]: 167 167
Dec 06 09:53:26 compute-0 systemd[1]: libpod-c3b19d3bcac140511b3ee1d049a79c12722e79e3853590c0861e0db5b82d4782.scope: Deactivated successfully.
Dec 06 09:53:26 compute-0 conmon[170276]: conmon c3b19d3bcac140511b3e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c3b19d3bcac140511b3ee1d049a79c12722e79e3853590c0861e0db5b82d4782.scope/container/memory.events
Dec 06 09:53:26 compute-0 podman[170260]: 2025-12-06 09:53:26.585351515 +0000 UTC m=+0.210254480 container attach c3b19d3bcac140511b3ee1d049a79c12722e79e3853590c0861e0db5b82d4782 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_chebyshev, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1)
Dec 06 09:53:26 compute-0 podman[170260]: 2025-12-06 09:53:26.588167032 +0000 UTC m=+0.213069957 container died c3b19d3bcac140511b3ee1d049a79c12722e79e3853590c0861e0db5b82d4782 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_chebyshev, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 06 09:53:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-da2d3339044fe5a5d0fe5bf36f3553f698d939baa721771d79bb41633846890d-merged.mount: Deactivated successfully.
Dec 06 09:53:26 compute-0 podman[170260]: 2025-12-06 09:53:26.648793898 +0000 UTC m=+0.273696823 container remove c3b19d3bcac140511b3ee1d049a79c12722e79e3853590c0861e0db5b82d4782 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_chebyshev, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Dec 06 09:53:26 compute-0 systemd[1]: libpod-conmon-c3b19d3bcac140511b3ee1d049a79c12722e79e3853590c0861e0db5b82d4782.scope: Deactivated successfully.
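The nice_chebyshev sequence above is one complete short-lived cephadm helper container: image pull, create, init, start, attach, a "167 167" uid/gid check on stdout, then died and remove within a quarter second; the conmon warning is benign, since the cgroup vanishes with the scope. The same lifecycle can be tailed as it happens (a thin wrapper around the standard `podman events` CLI; JSON field names can vary between podman versions):

    import json
    import subprocess

    # Stream container lifecycle events like the create/start/died/remove
    # sequence journald captured above.
    proc = subprocess.Popen(
        ["podman", "events", "--format", "json", "--filter", "type=container"],
        stdout=subprocess.PIPE, text=True)
    for line in proc.stdout:
        ev = json.loads(line)
        print(ev.get("Status"), ev.get("Name"), ev.get("Image"))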
Dec 06 09:53:26 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 09:53:26 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 09:53:26 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:53:26 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:53:26 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 09:53:26 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 09:53:26 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 09:53:26 compute-0 ceph-mon[74327]: pgmap v358: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:53:26 compute-0 podman[170302]: 2025-12-06 09:53:26.842512569 +0000 UTC m=+0.055630788 container create c76d9f4ad3ff2f85fd8bb5a338aad2c104ee57aba1f91d3100064b0a287886f3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_germain, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec 06 09:53:26 compute-0 systemd[1]: Started libpod-conmon-c76d9f4ad3ff2f85fd8bb5a338aad2c104ee57aba1f91d3100064b0a287886f3.scope.
Dec 06 09:53:26 compute-0 podman[170302]: 2025-12-06 09:53:26.812787838 +0000 UTC m=+0.025906067 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 06 09:53:26 compute-0 systemd[1]: Started libcrun container.
Dec 06 09:53:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a80d8106c8937bf5436b8ba2ea25253c34399237692f4156d8712910048ee5ce/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 09:53:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a80d8106c8937bf5436b8ba2ea25253c34399237692f4156d8712910048ee5ce/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 09:53:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a80d8106c8937bf5436b8ba2ea25253c34399237692f4156d8712910048ee5ce/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 09:53:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a80d8106c8937bf5436b8ba2ea25253c34399237692f4156d8712910048ee5ce/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 09:53:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a80d8106c8937bf5436b8ba2ea25253c34399237692f4156d8712910048ee5ce/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 06 09:53:26 compute-0 podman[170302]: 2025-12-06 09:53:26.945654118 +0000 UTC m=+0.158772317 container init c76d9f4ad3ff2f85fd8bb5a338aad2c104ee57aba1f91d3100064b0a287886f3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_germain, CEPH_REF=squid, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 09:53:26 compute-0 podman[170302]: 2025-12-06 09:53:26.954127933 +0000 UTC m=+0.167246162 container start c76d9f4ad3ff2f85fd8bb5a338aad2c104ee57aba1f91d3100064b0a287886f3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_germain, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Dec 06 09:53:26 compute-0 podman[170302]: 2025-12-06 09:53:26.958994096 +0000 UTC m=+0.172112275 container attach c76d9f4ad3ff2f85fd8bb5a338aad2c104ee57aba1f91d3100064b0a287886f3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_germain, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 09:53:27 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:53:27.036Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec 06 09:53:27 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:53:27.037Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
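Both dashboard webhook receivers are unreachable here, one by connect timeout and one by context deadline, so alertmanager drops the notification after two attempts. A quick reachability probe of the failing receiver, with the URL copied from the log (the empty JSON body is a placeholder, not a real alert payload):

    import urllib.request

    # Probe the receiver endpoint alertmanager could not reach above.
    req = urllib.request.Request(
        "http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver",
        data=b"{}", headers={"Content-Type": "application/json"})
    try:
        print(urllib.request.urlopen(req, timeout=5).status)
    except OSError as exc:  # the same i/o-timeout family the log shows
        print("receiver unreachable:", exc)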
Dec 06 09:53:27 compute-0 youthful_germain[170319]: --> passed data devices: 0 physical, 1 LVM
Dec 06 09:53:27 compute-0 youthful_germain[170319]: --> All data devices are unavailable
Dec 06 09:53:27 compute-0 systemd[1]: libpod-c76d9f4ad3ff2f85fd8bb5a338aad2c104ee57aba1f91d3100064b0a287886f3.scope: Deactivated successfully.
Dec 06 09:53:27 compute-0 podman[170302]: 2025-12-06 09:53:27.293396015 +0000 UTC m=+0.506514234 container died c76d9f4ad3ff2f85fd8bb5a338aad2c104ee57aba1f91d3100064b0a287886f3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_germain, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True)
Dec 06 09:53:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-a80d8106c8937bf5436b8ba2ea25253c34399237692f4156d8712910048ee5ce-merged.mount: Deactivated successfully.
Dec 06 09:53:27 compute-0 podman[170302]: 2025-12-06 09:53:27.366032312 +0000 UTC m=+0.579150531 container remove c76d9f4ad3ff2f85fd8bb5a338aad2c104ee57aba1f91d3100064b0a287886f3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_germain, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 09:53:27 compute-0 systemd[1]: libpod-conmon-c76d9f4ad3ff2f85fd8bb5a338aad2c104ee57aba1f91d3100064b0a287886f3.scope: Deactivated successfully.
Dec 06 09:53:27 compute-0 sudo[170192]: pam_unix(sudo:session): session closed for user root
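The youthful_germain run above is the `lvm batch` launched at 09:53:25 ending without work: its one candidate, /dev/ceph_vg0/ceph_lv0, passes as "1 LVM" but is reported unavailable because the LV already carries ceph.* tags as OSD 1. The tags can be confirmed directly with standard lvs options (VG name copied from the log):

    import json
    import subprocess

    # Show why `lvm batch` skipped the device: it is already a prepared OSD.
    out = subprocess.check_output(
        ["lvs", "--reportformat", "json", "-o", "lv_name,lv_tags", "ceph_vg0"],
        text=True)
    for lv in json.loads(out)["report"][0]["lv"]:
        if "ceph.osd_id" in lv["lv_tags"]:
            print(lv["lv_name"], "is already an OSD:", lv["lv_tags"])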
Dec 06 09:53:27 compute-0 sudo[170350]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 09:53:27 compute-0 sudo[170350]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:53:27 compute-0 sudo[170350]: pam_unix(sudo:session): session closed for user root
Dec 06 09:53:27 compute-0 sudo[170375]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 5ecd3f74-dade-5fc4-92ce-8950ae424258 -- lvm list --format json
Dec 06 09:53:27 compute-0 sudo[170375]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:53:27 compute-0 ceph-mgr[74618]: [devicehealth INFO root] Check health
Dec 06 09:53:27 compute-0 sudo[170421]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 09:53:27 compute-0 sudo[170421]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:53:27 compute-0 sudo[170421]: pam_unix(sudo:session): session closed for user root
Dec 06 09:53:27 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:53:27 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:53:27 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:53:27.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:53:27 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:53:27 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:53:27 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:53:27.984 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:53:28 compute-0 podman[170467]: 2025-12-06 09:53:28.061508285 +0000 UTC m=+0.057842169 container create e48b432e6e6b0f02b17108ae5b7b9bfe29d21eb0026e0e90b8b8947c34cdbb44 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_driscoll, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True)
Dec 06 09:53:28 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v359: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Dec 06 09:53:28 compute-0 systemd[1]: Started libpod-conmon-e48b432e6e6b0f02b17108ae5b7b9bfe29d21eb0026e0e90b8b8947c34cdbb44.scope.
Dec 06 09:53:28 compute-0 podman[170467]: 2025-12-06 09:53:28.035218188 +0000 UTC m=+0.031552102 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 06 09:53:28 compute-0 systemd[1]: Started libcrun container.
Dec 06 09:53:28 compute-0 podman[170467]: 2025-12-06 09:53:28.163122252 +0000 UTC m=+0.159456176 container init e48b432e6e6b0f02b17108ae5b7b9bfe29d21eb0026e0e90b8b8947c34cdbb44 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_driscoll, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 09:53:28 compute-0 podman[170467]: 2025-12-06 09:53:28.170703452 +0000 UTC m=+0.167037326 container start e48b432e6e6b0f02b17108ae5b7b9bfe29d21eb0026e0e90b8b8947c34cdbb44 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_driscoll, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Dec 06 09:53:28 compute-0 podman[170467]: 2025-12-06 09:53:28.173860269 +0000 UTC m=+0.170194143 container attach e48b432e6e6b0f02b17108ae5b7b9bfe29d21eb0026e0e90b8b8947c34cdbb44 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_driscoll, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 09:53:28 compute-0 xenodochial_driscoll[170483]: 167 167
Dec 06 09:53:28 compute-0 systemd[1]: libpod-e48b432e6e6b0f02b17108ae5b7b9bfe29d21eb0026e0e90b8b8947c34cdbb44.scope: Deactivated successfully.
Dec 06 09:53:28 compute-0 conmon[170483]: conmon e48b432e6e6b0f02b171 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e48b432e6e6b0f02b17108ae5b7b9bfe29d21eb0026e0e90b8b8947c34cdbb44.scope/container/memory.events
Dec 06 09:53:28 compute-0 podman[170467]: 2025-12-06 09:53:28.178668882 +0000 UTC m=+0.175002806 container died e48b432e6e6b0f02b17108ae5b7b9bfe29d21eb0026e0e90b8b8947c34cdbb44 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_driscoll, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1)
Dec 06 09:53:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-0c6d659bf0583dac05bfe647bfd55d28bcc1b0f9a3252f86cfb8ebfa22e4b1fe-merged.mount: Deactivated successfully.
Dec 06 09:53:28 compute-0 podman[170467]: 2025-12-06 09:53:28.219278923 +0000 UTC m=+0.215612797 container remove e48b432e6e6b0f02b17108ae5b7b9bfe29d21eb0026e0e90b8b8947c34cdbb44 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_driscoll, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Dec 06 09:53:28 compute-0 systemd[1]: libpod-conmon-e48b432e6e6b0f02b17108ae5b7b9bfe29d21eb0026e0e90b8b8947c34cdbb44.scope: Deactivated successfully.
Dec 06 09:53:28 compute-0 podman[170506]: 2025-12-06 09:53:28.382613145 +0000 UTC m=+0.042224877 container create 7ec1ea26b3ad0c97b3cea9bbfb44945b20513958f7dc819f1f8d99168fef70e7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_shirley, io.buildah.version=1.40.1, CEPH_REF=squid, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Dec 06 09:53:28 compute-0 systemd[1]: Started libpod-conmon-7ec1ea26b3ad0c97b3cea9bbfb44945b20513958f7dc819f1f8d99168fef70e7.scope.
Dec 06 09:53:28 compute-0 systemd[1]: Started libcrun container.
Dec 06 09:53:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8f24cfc21400835085a7f61dd479ec667dd968c22d8cd1acf5cecd661b7f736/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 09:53:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8f24cfc21400835085a7f61dd479ec667dd968c22d8cd1acf5cecd661b7f736/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 09:53:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8f24cfc21400835085a7f61dd479ec667dd968c22d8cd1acf5cecd661b7f736/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 09:53:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8f24cfc21400835085a7f61dd479ec667dd968c22d8cd1acf5cecd661b7f736/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 09:53:28 compute-0 podman[170506]: 2025-12-06 09:53:28.365894573 +0000 UTC m=+0.025506325 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 06 09:53:28 compute-0 podman[170506]: 2025-12-06 09:53:28.47217645 +0000 UTC m=+0.131788202 container init 7ec1ea26b3ad0c97b3cea9bbfb44945b20513958f7dc819f1f8d99168fef70e7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_shirley, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Dec 06 09:53:28 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-haproxy-nfs-cephfs-compute-0-fzuvue[96127]: [WARNING] 339/095328 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
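The cephadm-managed haproxy for the nfs.cephfs service marks backend server nfs.cephfs.2 DOWN on a refused TCP connection, leaving two active servers; this is a pure layer-4 check, no NFS protocol involved. An equivalent probe (host and port are assumptions; the log names the backend server but not its address):

    import socket

    # Layer-4 connect check equivalent to the haproxy health check above.
    try:
        socket.create_connection(("compute-2.ctlplane.example.com", 2049),
                                 timeout=1).close()
        print("nfs backend up")
    except OSError as exc:
        print("nfs backend down:", exc)  # e.g. ConnectionRefusedError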
Dec 06 09:53:28 compute-0 podman[170506]: 2025-12-06 09:53:28.48015379 +0000 UTC m=+0.139765522 container start 7ec1ea26b3ad0c97b3cea9bbfb44945b20513958f7dc819f1f8d99168fef70e7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_shirley, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec 06 09:53:28 compute-0 podman[170506]: 2025-12-06 09:53:28.483927945 +0000 UTC m=+0.143539697 container attach 7ec1ea26b3ad0c97b3cea9bbfb44945b20513958f7dc819f1f8d99168fef70e7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_shirley, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 06 09:53:28 compute-0 sharp_shirley[170522]: {
Dec 06 09:53:28 compute-0 sharp_shirley[170522]:     "1": [
Dec 06 09:53:28 compute-0 sharp_shirley[170522]:         {
Dec 06 09:53:28 compute-0 sharp_shirley[170522]:             "devices": [
Dec 06 09:53:28 compute-0 sharp_shirley[170522]:                 "/dev/loop3"
Dec 06 09:53:28 compute-0 sharp_shirley[170522]:             ],
Dec 06 09:53:28 compute-0 sharp_shirley[170522]:             "lv_name": "ceph_lv0",
Dec 06 09:53:28 compute-0 sharp_shirley[170522]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 09:53:28 compute-0 sharp_shirley[170522]:             "lv_size": "21470642176",
Dec 06 09:53:28 compute-0 sharp_shirley[170522]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=a55pcS-v3Vt-CJ4W-fqCV-Baze-9VlY-jG9prS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=5ecd3f74-dade-5fc4-92ce-8950ae424258,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=7899c4d8-edb4-4836-b838-c4aa702ad7af,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 06 09:53:28 compute-0 sharp_shirley[170522]:             "lv_uuid": "a55pcS-v3Vt-CJ4W-fqCV-Baze-9VlY-jG9prS",
Dec 06 09:53:28 compute-0 sharp_shirley[170522]:             "name": "ceph_lv0",
Dec 06 09:53:28 compute-0 sharp_shirley[170522]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 09:53:28 compute-0 sharp_shirley[170522]:             "tags": {
Dec 06 09:53:28 compute-0 sharp_shirley[170522]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 06 09:53:28 compute-0 sharp_shirley[170522]:                 "ceph.block_uuid": "a55pcS-v3Vt-CJ4W-fqCV-Baze-9VlY-jG9prS",
Dec 06 09:53:28 compute-0 sharp_shirley[170522]:                 "ceph.cephx_lockbox_secret": "",
Dec 06 09:53:28 compute-0 sharp_shirley[170522]:                 "ceph.cluster_fsid": "5ecd3f74-dade-5fc4-92ce-8950ae424258",
Dec 06 09:53:28 compute-0 sharp_shirley[170522]:                 "ceph.cluster_name": "ceph",
Dec 06 09:53:28 compute-0 sharp_shirley[170522]:                 "ceph.crush_device_class": "",
Dec 06 09:53:28 compute-0 sharp_shirley[170522]:                 "ceph.encrypted": "0",
Dec 06 09:53:28 compute-0 sharp_shirley[170522]:                 "ceph.osd_fsid": "7899c4d8-edb4-4836-b838-c4aa702ad7af",
Dec 06 09:53:28 compute-0 sharp_shirley[170522]:                 "ceph.osd_id": "1",
Dec 06 09:53:28 compute-0 sharp_shirley[170522]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 06 09:53:28 compute-0 sharp_shirley[170522]:                 "ceph.type": "block",
Dec 06 09:53:28 compute-0 sharp_shirley[170522]:                 "ceph.vdo": "0",
Dec 06 09:53:28 compute-0 sharp_shirley[170522]:                 "ceph.with_tpm": "0"
Dec 06 09:53:28 compute-0 sharp_shirley[170522]:             },
Dec 06 09:53:28 compute-0 sharp_shirley[170522]:             "type": "block",
Dec 06 09:53:28 compute-0 sharp_shirley[170522]:             "vg_name": "ceph_vg0"
Dec 06 09:53:28 compute-0 sharp_shirley[170522]:         }
Dec 06 09:53:28 compute-0 sharp_shirley[170522]:     ]
Dec 06 09:53:28 compute-0 sharp_shirley[170522]: }
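The sharp_shirley stdout above is the payload of the `ceph-volume lvm list --format json` call launched at 09:53:27, keyed by OSD id. Pulling out the fields cephadm consumes, against a trimmed copy of that payload:

    import json

    # Trimmed copy of the container stdout above; only the fields read below.
    payload = """{"1": [{"devices": ["/dev/loop3"],
                         "lv_path": "/dev/ceph_vg0/ceph_lv0",
                         "tags": {"ceph.osd_fsid":
                                  "7899c4d8-edb4-4836-b838-c4aa702ad7af"}}]}"""
    for osd_id, lvs in json.loads(payload).items():
        for lv in lvs:
            print(osd_id, lv["tags"]["ceph.osd_fsid"],
                  lv["lv_path"], lv["devices"])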
Dec 06 09:53:28 compute-0 systemd[1]: libpod-7ec1ea26b3ad0c97b3cea9bbfb44945b20513958f7dc819f1f8d99168fef70e7.scope: Deactivated successfully.
Dec 06 09:53:28 compute-0 podman[170506]: 2025-12-06 09:53:28.819426253 +0000 UTC m=+0.479037995 container died 7ec1ea26b3ad0c97b3cea9bbfb44945b20513958f7dc819f1f8d99168fef70e7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_shirley, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Dec 06 09:53:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-c8f24cfc21400835085a7f61dd479ec667dd968c22d8cd1acf5cecd661b7f736-merged.mount: Deactivated successfully.
Dec 06 09:53:28 compute-0 podman[170506]: 2025-12-06 09:53:28.855604122 +0000 UTC m=+0.515215854 container remove 7ec1ea26b3ad0c97b3cea9bbfb44945b20513958f7dc819f1f8d99168fef70e7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_shirley, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1)
Dec 06 09:53:28 compute-0 systemd[1]: libpod-conmon-7ec1ea26b3ad0c97b3cea9bbfb44945b20513958f7dc819f1f8d99168fef70e7.scope: Deactivated successfully.
Dec 06 09:53:28 compute-0 sudo[170375]: pam_unix(sudo:session): session closed for user root
Dec 06 09:53:28 compute-0 sudo[170542]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 09:53:28 compute-0 sudo[170542]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:53:28 compute-0 sudo[170542]: pam_unix(sudo:session): session closed for user root
Dec 06 09:53:29 compute-0 sudo[170567]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 5ecd3f74-dade-5fc4-92ce-8950ae424258 -- raw list --format json
Dec 06 09:53:29 compute-0 sudo[170567]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:53:29 compute-0 ceph-mon[74327]: pgmap v359: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Dec 06 09:53:29 compute-0 podman[170633]: 2025-12-06 09:53:29.40245997 +0000 UTC m=+0.046717441 container create 044173b0b49583a38c46d743ddad2f2756f3a674abf6154f22f967754c7fd7fb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_davinci, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=squid)
Dec 06 09:53:29 compute-0 systemd[1]: Started libpod-conmon-044173b0b49583a38c46d743ddad2f2756f3a674abf6154f22f967754c7fd7fb.scope.
Dec 06 09:53:29 compute-0 systemd[1]: Started libcrun container.
Dec 06 09:53:29 compute-0 podman[170633]: 2025-12-06 09:53:29.382638122 +0000 UTC m=+0.026895643 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 06 09:53:29 compute-0 podman[170633]: 2025-12-06 09:53:29.490270156 +0000 UTC m=+0.134527677 container init 044173b0b49583a38c46d743ddad2f2756f3a674abf6154f22f967754c7fd7fb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_davinci, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Dec 06 09:53:29 compute-0 podman[170633]: 2025-12-06 09:53:29.496180109 +0000 UTC m=+0.140437590 container start 044173b0b49583a38c46d743ddad2f2756f3a674abf6154f22f967754c7fd7fb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_davinci, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 09:53:29 compute-0 podman[170633]: 2025-12-06 09:53:29.499634624 +0000 UTC m=+0.143892105 container attach 044173b0b49583a38c46d743ddad2f2756f3a674abf6154f22f967754c7fd7fb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_davinci, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325)
Dec 06 09:53:29 compute-0 upbeat_davinci[170650]: 167 167
Dec 06 09:53:29 compute-0 systemd[1]: libpod-044173b0b49583a38c46d743ddad2f2756f3a674abf6154f22f967754c7fd7fb.scope: Deactivated successfully.
Dec 06 09:53:29 compute-0 podman[170633]: 2025-12-06 09:53:29.502124203 +0000 UTC m=+0.146381694 container died 044173b0b49583a38c46d743ddad2f2756f3a674abf6154f22f967754c7fd7fb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_davinci, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Dec 06 09:53:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-3235420f1c98a6f78b3461437c474a6421ce700d46ca95913e631bd56c769450-merged.mount: Deactivated successfully.
Dec 06 09:53:29 compute-0 podman[170633]: 2025-12-06 09:53:29.54108168 +0000 UTC m=+0.185339161 container remove 044173b0b49583a38c46d743ddad2f2756f3a674abf6154f22f967754c7fd7fb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_davinci, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 06 09:53:29 compute-0 systemd[1]: libpod-conmon-044173b0b49583a38c46d743ddad2f2756f3a674abf6154f22f967754c7fd7fb.scope: Deactivated successfully.
Dec 06 09:53:29 compute-0 podman[170676]: 2025-12-06 09:53:29.714633924 +0000 UTC m=+0.044868690 container create 2d3fc80aca987b78f682a4626c2a4955833644e1b53e913ee018534870d8178a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_ritchie, CEPH_REF=squid, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Dec 06 09:53:29 compute-0 systemd[1]: Started libpod-conmon-2d3fc80aca987b78f682a4626c2a4955833644e1b53e913ee018534870d8178a.scope.
Dec 06 09:53:29 compute-0 systemd[1]: Started libcrun container.
Dec 06 09:53:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b58048860b62f05eabcd553933826b49a717f3a6347aee5d25e31a6ce13c858/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 09:53:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b58048860b62f05eabcd553933826b49a717f3a6347aee5d25e31a6ce13c858/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 09:53:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b58048860b62f05eabcd553933826b49a717f3a6347aee5d25e31a6ce13c858/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 09:53:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b58048860b62f05eabcd553933826b49a717f3a6347aee5d25e31a6ce13c858/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 09:53:29 compute-0 podman[170676]: 2025-12-06 09:53:29.78869939 +0000 UTC m=+0.118934176 container init 2d3fc80aca987b78f682a4626c2a4955833644e1b53e913ee018534870d8178a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_ritchie, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 09:53:29 compute-0 podman[170676]: 2025-12-06 09:53:29.694778636 +0000 UTC m=+0.025013452 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 06 09:53:29 compute-0 podman[170676]: 2025-12-06 09:53:29.796847705 +0000 UTC m=+0.127082471 container start 2d3fc80aca987b78f682a4626c2a4955833644e1b53e913ee018534870d8178a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_ritchie, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec 06 09:53:29 compute-0 podman[170676]: 2025-12-06 09:53:29.800593409 +0000 UTC m=+0.130828175 container attach 2d3fc80aca987b78f682a4626c2a4955833644e1b53e913ee018534870d8178a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_ritchie, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 06 09:53:29 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 09:53:29 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:53:29 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:53:29 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:53:29.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:53:29 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:53:29 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:53:29 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:53:29.986 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:53:30 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v360: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec 06 09:53:30 compute-0 lvm[170767]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 06 09:53:30 compute-0 lvm[170767]: VG ceph_vg0 finished
Dec 06 09:53:30 compute-0 priceless_ritchie[170693]: {}
Dec 06 09:53:30 compute-0 systemd[1]: libpod-2d3fc80aca987b78f682a4626c2a4955833644e1b53e913ee018534870d8178a.scope: Deactivated successfully.
Dec 06 09:53:30 compute-0 systemd[1]: libpod-2d3fc80aca987b78f682a4626c2a4955833644e1b53e913ee018534870d8178a.scope: Consumed 1.010s CPU time.
Dec 06 09:53:30 compute-0 podman[170676]: 2025-12-06 09:53:30.42357727 +0000 UTC m=+0.753812046 container died 2d3fc80aca987b78f682a4626c2a4955833644e1b53e913ee018534870d8178a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_ritchie, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Dec 06 09:53:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-5b58048860b62f05eabcd553933826b49a717f3a6347aee5d25e31a6ce13c858-merged.mount: Deactivated successfully.
Dec 06 09:53:30 compute-0 podman[170676]: 2025-12-06 09:53:30.473586811 +0000 UTC m=+0.803821577 container remove 2d3fc80aca987b78f682a4626c2a4955833644e1b53e913ee018534870d8178a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_ritchie, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 09:53:30 compute-0 systemd[1]: libpod-conmon-2d3fc80aca987b78f682a4626c2a4955833644e1b53e913ee018534870d8178a.scope: Deactivated successfully.
Dec 06 09:53:30 compute-0 sudo[170567]: pam_unix(sudo:session): session closed for user root
Dec 06 09:53:30 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 06 09:53:30 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:53:30 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 06 09:53:30 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:53:30 compute-0 sudo[170782]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 09:53:30 compute-0 sudo[170782]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:53:30 compute-0 sudo[170782]: pam_unix(sudo:session): session closed for user root
Dec 06 09:53:30 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:09:53:30] "GET /metrics HTTP/1.1" 200 48265 "" "Prometheus/2.51.0"
Dec 06 09:53:30 compute-0 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:09:53:30] "GET /metrics HTTP/1.1" 200 48265 "" "Prometheus/2.51.0"
Dec 06 09:53:31 compute-0 ceph-mon[74327]: pgmap v360: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec 06 09:53:31 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:53:31 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:53:31 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:53:31 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:53:31 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:53:31.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:53:31 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:53:31 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000028s ======
Dec 06 09:53:31 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:53:31.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 06 09:53:32 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v361: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec 06 09:53:32 compute-0 systemd[1]: ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258@nfs.cephfs.2.0.compute-0.dfwxck.service: Scheduled restart job, restart counter is at 5.
Dec 06 09:53:32 compute-0 systemd[1]: Stopped Ceph nfs.cephfs.2.0.compute-0.dfwxck for 5ecd3f74-dade-5fc4-92ce-8950ae424258.
Dec 06 09:53:32 compute-0 systemd[1]: ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258@nfs.cephfs.2.0.compute-0.dfwxck.service: Consumed 1.624s CPU time.
Dec 06 09:53:32 compute-0 systemd[1]: Starting Ceph nfs.cephfs.2.0.compute-0.dfwxck for 5ecd3f74-dade-5fc4-92ce-8950ae424258...
Dec 06 09:53:33 compute-0 podman[170862]: 2025-12-06 09:53:33.14311928 +0000 UTC m=+0.070818738 container create c3b0a1339520eec10382627c7e3dcec6ee5222c80f6eb2808f2db40456331732 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 09:53:33 compute-0 podman[170862]: 2025-12-06 09:53:33.110219781 +0000 UTC m=+0.037919309 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 06 09:53:33 compute-0 ceph-mon[74327]: pgmap v361: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec 06 09:53:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f7f99abc96062c26417ae3d5e3044f6541c1d626500d6a12b4f0ec41d1199e93/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Dec 06 09:53:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f7f99abc96062c26417ae3d5e3044f6541c1d626500d6a12b4f0ec41d1199e93/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 09:53:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f7f99abc96062c26417ae3d5e3044f6541c1d626500d6a12b4f0ec41d1199e93/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Dec 06 09:53:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f7f99abc96062c26417ae3d5e3044f6541c1d626500d6a12b4f0ec41d1199e93/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.2.0.compute-0.dfwxck-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Dec 06 09:53:33 compute-0 podman[170862]: 2025-12-06 09:53:33.232628223 +0000 UTC m=+0.160327741 container init c3b0a1339520eec10382627c7e3dcec6ee5222c80f6eb2808f2db40456331732 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 09:53:33 compute-0 podman[170862]: 2025-12-06 09:53:33.241138658 +0000 UTC m=+0.168838126 container start c3b0a1339520eec10382627c7e3dcec6ee5222c80f6eb2808f2db40456331732 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 09:53:33 compute-0 bash[170862]: c3b0a1339520eec10382627c7e3dcec6ee5222c80f6eb2808f2db40456331732
Dec 06 09:53:33 compute-0 systemd[1]: Started Ceph nfs.cephfs.2.0.compute-0.dfwxck for 5ecd3f74-dade-5fc4-92ce-8950ae424258.
Dec 06 09:53:33 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:53:33 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Dec 06 09:53:33 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:53:33 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Dec 06 09:53:33 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:53:33 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Dec 06 09:53:33 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:53:33 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Dec 06 09:53:33 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:53:33 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Dec 06 09:53:33 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:53:33 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Dec 06 09:53:33 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:53:33 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Dec 06 09:53:33 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:53:33 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 06 09:53:33 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:53:33 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:53:33 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:53:33.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:53:33 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:53:33 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 09:53:33 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:53:33.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 09:53:34 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v362: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec 06 09:53:34 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 09:53:35 compute-0 ceph-mon[74327]: pgmap v362: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec 06 09:53:35 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:53:35 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:53:35 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:53:35.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:53:35 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:53:35 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:53:35 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:53:35.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:53:36 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v363: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec 06 09:53:36 compute-0 kernel: SELinux:  Converting 2776 SID table entries...
Dec 06 09:53:36 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Dec 06 09:53:36 compute-0 kernel: SELinux:  policy capability open_perms=1
Dec 06 09:53:36 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Dec 06 09:53:36 compute-0 kernel: SELinux:  policy capability always_check_network=0
Dec 06 09:53:36 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec 06 09:53:36 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec 06 09:53:36 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec 06 09:53:37 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:53:37.037Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 06 09:53:37 compute-0 ceph-mon[74327]: pgmap v363: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec 06 09:53:37 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:53:37 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:53:37 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:53:37.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:53:37 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:53:37 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:53:37 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:53:37.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:53:38 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v364: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Dec 06 09:53:38 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 06 09:53:38 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 09:53:39 compute-0 ceph-mon[74327]: pgmap v364: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Dec 06 09:53:39 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 09:53:39 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:53:39 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 06 09:53:39 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:53:39 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 06 09:53:39 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 09:53:39 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:53:39 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 09:53:39 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:53:39.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 09:53:40 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:53:40 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:53:40 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:53:39.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:53:40 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v365: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 597 B/s wr, 1 op/s
Dec 06 09:53:40 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:09:53:40] "GET /metrics HTTP/1.1" 200 48270 "" "Prometheus/2.51.0"
Dec 06 09:53:40 compute-0 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:09:53:40] "GET /metrics HTTP/1.1" 200 48270 "" "Prometheus/2.51.0"
Dec 06 09:53:41 compute-0 ceph-mon[74327]: pgmap v365: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 597 B/s wr, 1 op/s
Dec 06 09:53:41 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:53:41 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:53:41 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:53:41.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:53:42 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:53:42 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 09:53:42 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:53:42.000 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 09:53:42 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v366: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 597 B/s wr, 1 op/s
Dec 06 09:53:43 compute-0 ceph-mon[74327]: pgmap v366: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 597 B/s wr, 1 op/s
Dec 06 09:53:43 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:53:43 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 09:53:43 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:53:43.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 09:53:44 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:53:44 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:53:44 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:53:44.002 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:53:44 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v367: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 3 op/s
Dec 06 09:53:44 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 09:53:45 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:53:45 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Dec 06 09:53:45 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:53:45 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Dec 06 09:53:45 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:53:45 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Dec 06 09:53:45 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:53:45 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Dec 06 09:53:45 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:53:45 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Dec 06 09:53:45 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:53:45 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Dec 06 09:53:45 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:53:45 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Dec 06 09:53:45 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:53:45 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec 06 09:53:45 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:53:45 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec 06 09:53:45 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:53:45 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec 06 09:53:45 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:53:45 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Dec 06 09:53:45 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:53:45 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec 06 09:53:45 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:53:45 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Dec 06 09:53:45 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:53:45 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Dec 06 09:53:45 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:53:45 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Dec 06 09:53:45 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:53:45 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Dec 06 09:53:45 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:53:45 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Dec 06 09:53:45 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:53:45 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Dec 06 09:53:45 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:53:45 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Dec 06 09:53:45 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:53:45 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Dec 06 09:53:45 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:53:45 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Dec 06 09:53:45 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:53:45 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Dec 06 09:53:45 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:53:45 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Dec 06 09:53:45 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:53:45 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Dec 06 09:53:45 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:53:45 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Dec 06 09:53:45 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:53:45 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Dec 06 09:53:45 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:53:45 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Dec 06 09:53:45 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:53:45 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c58000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:53:45 compute-0 ceph-mon[74327]: pgmap v367: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 3 op/s
Dec 06 09:53:45 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:53:45 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c2c000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:53:45 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:53:45 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:53:45 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:53:45.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:53:46 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:53:46 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:53:46 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:53:46.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:53:46 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v368: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 3 op/s
Dec 06 09:53:46 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:53:46 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c28000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:53:46 compute-0 ceph-mon[74327]: pgmap v368: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 3 op/s
Dec 06 09:53:47 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:53:47.040Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec 06 09:53:47 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:53:47.041Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec 06 09:53:47 compute-0 dbus-broker-launch[771]: avc:  op=load_policy lsm=selinux seqno=13 res=1
Dec 06 09:53:47 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:53:47 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c2c000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:53:47 compute-0 podman[170956]: 2025-12-06 09:53:47.522283999 +0000 UTC m=+0.120726106 container health_status ed08b04f63d3695f0aa2f04fcc706897af4aa24f286fa3cd10a4323ee2c868ab (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.build-date=20251125)
Dec 06 09:53:47 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:53:47 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c28000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:53:47 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:53:47 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 09:53:47 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:53:47.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 09:53:47 compute-0 sudo[170984]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 09:53:48 compute-0 sudo[170984]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:53:48 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:53:48 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000028s ======
Dec 06 09:53:48 compute-0 sudo[170984]: pam_unix(sudo:session): session closed for user root
Dec 06 09:53:48 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:53:48.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 06 09:53:48 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v369: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 3 op/s
Dec 06 09:53:48 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-haproxy-nfs-cephfs-compute-0-fzuvue[96127]: [WARNING] 339/095348 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Dec 06 09:53:48 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:53:48 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c34001140 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:53:49 compute-0 ceph-mon[74327]: pgmap v369: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 3 op/s
Dec 06 09:53:49 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:53:49 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c58001d70 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:53:49 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:53:49 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c2c001b40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:53:49 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 09:53:49 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:53:49 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000028s ======
Dec 06 09:53:49 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:53:49.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 06 09:53:50 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:53:50 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:53:50 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:53:50.009 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:53:50 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v370: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Dec 06 09:53:50 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:53:50 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c28001b40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:53:50 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:09:53:50] "GET /metrics HTTP/1.1" 200 48270 "" "Prometheus/2.51.0"
Dec 06 09:53:50 compute-0 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:09:53:50] "GET /metrics HTTP/1.1" 200 48270 "" "Prometheus/2.51.0"
Dec 06 09:53:51 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:53:51 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c34001c60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:53:51 compute-0 ceph-mon[74327]: pgmap v370: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Dec 06 09:53:51 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:53:51 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c58001d70 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:53:51 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:53:51 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 09:53:51 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:53:51.974 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 09:53:52 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:53:52 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:53:52 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:53:52.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:53:52 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v371: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Dec 06 09:53:52 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:53:52 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c2c001b40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:53:53 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:53:53 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c28001b40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:53:53 compute-0 ceph-mon[74327]: pgmap v371: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Dec 06 09:53:53 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:53:53 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c34001c60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:53:53 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 06 09:53:53 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 09:53:53 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 09:53:53 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 09:53:53 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:53:53 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:53:53 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:53:53.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:53:53 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 09:53:53 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 09:53:53 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 09:53:53 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 09:53:54 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:53:54 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:53:54 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:53:54.013 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:53:54 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v372: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Dec 06 09:53:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:53:54.224 162267 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 09:53:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:53:54.225 162267 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 09:53:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:53:54.225 162267 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 09:53:54 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:53:54 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c58001d70 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:53:54 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 09:53:54 compute-0 ceph-mon[74327]: pgmap v372: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Dec 06 09:53:54 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 09:53:55 compute-0 podman[175306]: 2025-12-06 09:53:55.455337327 +0000 UTC m=+0.081370809 container health_status ec1e66d2175087ad7a28a9a6b5a784e30128df3f5e268959d009e364cd5120f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2)
Dec 06 09:53:55 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:53:55 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c2c001b40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:53:55 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:53:55 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c28001b40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:53:55 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:53:55 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:53:55 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:53:55.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:53:56 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:53:56 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:53:56 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:53:56.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:53:56 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v373: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Dec 06 09:53:56 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:53:56 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c34001c60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:53:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:53:57.042Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 06 09:53:57 compute-0 ceph-mon[74327]: pgmap v373: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Dec 06 09:53:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:53:57 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c34001c60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:53:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:53:57 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c2c002f00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:53:57 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:53:57 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 09:53:57 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:53:57.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 09:53:58 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:53:58 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 09:53:58 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:53:58.016 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 09:53:58 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v374: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Dec 06 09:53:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:53:58 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c28002f00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:53:59 compute-0 ceph-mon[74327]: pgmap v374: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Dec 06 09:53:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:53:59 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c34001c60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:53:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[105048]: logger=sqlstore.transactions t=2025-12-06T09:53:59.577024905Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=0 code="database is locked"
Dec 06 09:53:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[105048]: logger=cleanup t=2025-12-06T09:53:59.615954538Z level=info msg="Completed cleanup jobs" duration=52.741907ms
Dec 06 09:53:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[105048]: logger=grafana.update.checker t=2025-12-06T09:53:59.71028265Z level=info msg="Update check succeeded" duration=45.174332ms
Dec 06 09:53:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[105048]: logger=plugins.update.checker t=2025-12-06T09:53:59.714168366Z level=info msg="Update check succeeded" duration=87.979992ms
Dec 06 09:53:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:53:59 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c580091b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:53:59 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 09:53:59 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:53:59 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 09:53:59 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:53:59.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 09:54:00 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:54:00 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 09:54:00 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:54:00.019 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 09:54:00 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v375: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:54:00 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:54:00 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c2c002f00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:54:00 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:09:54:00] "GET /metrics HTTP/1.1" 200 48268 "" "Prometheus/2.51.0"
Dec 06 09:54:00 compute-0 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:09:54:00] "GET /metrics HTTP/1.1" 200 48268 "" "Prometheus/2.51.0"
Dec 06 09:54:01 compute-0 ceph-mon[74327]: pgmap v375: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:54:01 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:54:01 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c28002f00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:54:01 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:54:01 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c340038d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:54:01 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:54:01 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 09:54:01 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:54:01.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 09:54:02 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:54:02 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 09:54:02 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:54:02.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 09:54:02 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v376: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:54:02 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:54:02 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c58009ad0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:54:03 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:54:03 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c58009ad0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:54:03 compute-0 ceph-mon[74327]: pgmap v376: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:54:03 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:54:03 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c28003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:54:03 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:54:03 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.002000054s ======
Dec 06 09:54:03 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:54:03.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000054s
Dec 06 09:54:04 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:54:04 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:54:04 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:54:04.023 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:54:04 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v377: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Dec 06 09:54:04 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:54:04 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c340038d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:54:04 compute-0 ceph-mon[74327]: pgmap v377: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Dec 06 09:54:04 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 09:54:05 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:54:05 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c2c003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:54:05 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:54:05 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c58009ad0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:54:05 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:54:05 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 09:54:05 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:54:05.995 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 09:54:06 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:54:06 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:54:06 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:54:06.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:54:06 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v378: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:54:06 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:54:06 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c28003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:54:06 compute-0 ceph-mon[74327]: pgmap v378: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:54:07 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:54:07.044Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec 06 09:54:07 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:54:07.044Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec 06 09:54:07 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:54:07 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c340038d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:54:07 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:54:07 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c2c003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:54:07 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:54:07 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:54:07 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:54:07.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:54:08 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:54:08 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 09:54:08 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:54:08.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 09:54:08 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v379: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Dec 06 09:54:08 compute-0 sudo[181868]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 09:54:08 compute-0 sudo[181868]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:54:08 compute-0 sudo[181868]: pam_unix(sudo:session): session closed for user root
Dec 06 09:54:08 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:54:08 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c58009ad0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:54:08 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 06 09:54:08 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 09:54:09 compute-0 ceph-mon[74327]: pgmap v379: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Dec 06 09:54:09 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 09:54:09 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:54:09 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c28003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:54:09 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:54:09 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c340038d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:54:09 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 09:54:10 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:54:10 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 09:54:10 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:54:10.000 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 09:54:10 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:54:10 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:54:10 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:54:10.029 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:54:10 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v380: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:54:10 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:54:10 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c340038d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:54:10 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:09:54:10] "GET /metrics HTTP/1.1" 200 48271 "" "Prometheus/2.51.0"
Dec 06 09:54:10 compute-0 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:09:54:10] "GET /metrics HTTP/1.1" 200 48271 "" "Prometheus/2.51.0"
Dec 06 09:54:11 compute-0 ceph-mon[74327]: pgmap v380: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:54:11 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:54:11 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c340038d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:54:11 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:54:11 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c58009ad0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:54:12 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:54:12 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 09:54:12 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:54:12.004 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 09:54:12 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:54:12 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 09:54:12 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:54:12.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 09:54:12 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v381: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:54:12 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:54:12 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c2c003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:54:12 compute-0 ceph-mon[74327]: pgmap v381: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:54:13 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:54:13 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c28003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:54:13 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:54:13 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c340041f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:54:14 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:54:14 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:54:14 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:54:14.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:54:14 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:54:14 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 09:54:14 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:54:14.033 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 09:54:14 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v382: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Dec 06 09:54:14 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:54:14 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c28003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:54:14 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 09:54:15 compute-0 ceph-mon[74327]: pgmap v382: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Dec 06 09:54:15 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:54:15 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c2c003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:54:15 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:54:15 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c58009ad0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:54:16 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:54:16 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:54:16 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:54:16.010 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:54:16 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:54:16 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:54:16 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:54:16.035 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:54:16 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v383: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:54:16 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:54:16 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c340041f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:54:17 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:54:17.045Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec 06 09:54:17 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:54:17.045Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec 06 09:54:17 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:54:17.046Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 06 09:54:17 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:54:17 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c28003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:54:17 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:54:17 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c20000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:54:18 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:54:18 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:54:18 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:54:18.013 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:54:18 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:54:18 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 09:54:18 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:54:18.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 09:54:18 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v384: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Dec 06 09:54:18 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:54:18 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c44000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:54:18 compute-0 podman[186895]: 2025-12-06 09:54:18.549597905 +0000 UTC m=+0.143866444 container health_status ed08b04f63d3695f0aa2f04fcc706897af4aa24f286fa3cd10a4323ee2c868ab (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 09:54:18 compute-0 ceph-mon[74327]: pgmap v383: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:54:19 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:54:19 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c340041f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:54:19 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:54:19 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c28003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:54:19 compute-0 ceph-mon[74327]: pgmap v384: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Dec 06 09:54:19 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 09:54:20 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:54:20 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 09:54:20 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:54:20.016 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 09:54:20 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:54:20 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:54:20 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:54:20.038 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:54:20 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v385: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:54:20 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:54:20 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c200016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:54:20 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:09:54:20] "GET /metrics HTTP/1.1" 200 48271 "" "Prometheus/2.51.0"
Dec 06 09:54:20 compute-0 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:09:54:20] "GET /metrics HTTP/1.1" 200 48271 "" "Prometheus/2.51.0"
Dec 06 09:54:20 compute-0 ceph-mon[74327]: pgmap v385: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:54:21 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:54:21 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c44001930 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:54:21 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:54:21 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c340041f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:54:22 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:54:22 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:54:22 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:54:22.019 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:54:22 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:54:22 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:54:22 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:54:22.040 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:54:22 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v386: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:54:22 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:54:22 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c340041f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:54:23 compute-0 ceph-mon[74327]: pgmap v386: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:54:23 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:54:23 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c340041f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:54:23 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:54:23 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c58009ad0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:54:23 compute-0 ceph-mgr[74618]: [balancer INFO root] Optimize plan auto_2025-12-06_09:54:23
Dec 06 09:54:23 compute-0 ceph-mgr[74618]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 06 09:54:23 compute-0 ceph-mgr[74618]: [balancer INFO root] do_upmap
Dec 06 09:54:23 compute-0 ceph-mgr[74618]: [balancer INFO root] pools ['cephfs.cephfs.data', '.rgw.root', 'backups', '.nfs', 'default.rgw.meta', 'default.rgw.log', 'default.rgw.control', 'vms', 'images', 'volumes', 'cephfs.cephfs.meta', '.mgr']
Dec 06 09:54:23 compute-0 ceph-mgr[74618]: [balancer INFO root] prepared 0/10 upmap changes
Dec 06 09:54:23 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 06 09:54:23 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 09:54:23 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 09:54:23 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 09:54:23 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 09:54:23 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 09:54:23 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 09:54:23 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 09:54:24 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:54:24 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 09:54:24 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:54:24.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 09:54:24 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:54:24 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:54:24 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:54:24.042 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:54:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] _maybe_adjust
Dec 06 09:54:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 09:54:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 06 09:54:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 09:54:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 09:54:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 09:54:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 09:54:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 09:54:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 09:54:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 09:54:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 09:54:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 09:54:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Dec 06 09:54:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 09:54:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 09:54:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 09:54:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec 06 09:54:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 09:54:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 06 09:54:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 09:54:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec 06 09:54:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 09:54:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 09:54:24 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v387: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Dec 06 09:54:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 09:54:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 06 09:54:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 06 09:54:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 09:54:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 09:54:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 09:54:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 09:54:24 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 09:54:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 06 09:54:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 09:54:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 09:54:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 09:54:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 09:54:24 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:54:24 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c200016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:54:24 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 09:54:25 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-haproxy-nfs-cephfs-compute-0-fzuvue[96127]: [WARNING] 339/095425 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 1ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Dec 06 09:54:25 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:54:25 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c200016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:54:25 compute-0 ceph-mon[74327]: pgmap v387: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Dec 06 09:54:25 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:54:25 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c340041f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:54:26 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:54:26 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:54:26 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:54:26.026 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:54:26 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:54:26 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 09:54:26 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:54:26.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 09:54:26 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v388: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec 06 09:54:26 compute-0 podman[187926]: 2025-12-06 09:54:26.463217607 +0000 UTC m=+0.066166321 container health_status ec1e66d2175087ad7a28a9a6b5a784e30128df3f5e268959d009e364cd5120f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0)
Dec 06 09:54:26 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:54:26 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c58009ad0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:54:26 compute-0 ceph-mon[74327]: pgmap v388: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec 06 09:54:27 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:54:27.047Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec 06 09:54:27 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:54:27.047Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec 06 09:54:27 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:54:27.048Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 06 09:54:27 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:54:27 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c200016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:54:27 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:54:27 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c200016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:54:28 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:54:28 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:54:28 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:54:28.029 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:54:28 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:54:28 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 09:54:28 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:54:28.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 09:54:28 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v389: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:54:28 compute-0 sudo[187947]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 09:54:28 compute-0 sudo[187947]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:54:28 compute-0 sudo[187947]: pam_unix(sudo:session): session closed for user root
Dec 06 09:54:28 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:54:28 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c340041f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:54:29 compute-0 ceph-mon[74327]: pgmap v389: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:54:29 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:54:29 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c58009ad0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:54:29 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:54:29 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c200016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:54:29 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 09:54:30 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:54:30 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 09:54:30 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:54:30.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 09:54:30 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:54:30 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 09:54:30 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:54:30.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 09:54:30 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v390: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec 06 09:54:30 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:54:30 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c200016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:54:30 compute-0 sudo[187974]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 09:54:30 compute-0 sudo[187974]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:54:30 compute-0 sudo[187974]: pam_unix(sudo:session): session closed for user root
Dec 06 09:54:30 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:09:54:30] "GET /metrics HTTP/1.1" 200 48265 "" "Prometheus/2.51.0"
Dec 06 09:54:30 compute-0 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:09:54:30] "GET /metrics HTTP/1.1" 200 48265 "" "Prometheus/2.51.0"
Dec 06 09:54:30 compute-0 sudo[187999]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Dec 06 09:54:30 compute-0 sudo[187999]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:54:31 compute-0 ceph-mon[74327]: pgmap v390: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec 06 09:54:31 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:54:31 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c340041f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:54:31 compute-0 sudo[187999]: pam_unix(sudo:session): session closed for user root
Dec 06 09:54:31 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Dec 06 09:54:31 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
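The mon_command payloads logged here are plain JSON dispatched by the mgr; the same command can be issued from the CLI as `ceph config rm osd/host:compute-0 osd_memory_target`, or programmatically through the librados Python binding. A minimal sketch, assuming /etc/ceph/ceph.conf and an admin keyring in their default locations:

    import json
    import rados

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
    cluster.connect()
    # Same JSON shape as the mon_command(...) entries in this log.
    cmd = {"prefix": "config rm",
           "who": "osd/host:compute-0",
           "name": "osd_memory_target"}
    ret, outbuf, outs = cluster.mon_command(json.dumps(cmd), b"")
    print(ret, outs)
    cluster.shutdown()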
Dec 06 09:54:31 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:54:31 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c58009ad0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:54:32 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:54:32 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:54:32 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:54:32.035 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:54:32 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:54:32 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 09:54:32 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:54:32.051 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 09:54:32 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v391: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec 06 09:54:32 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:54:32 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c200016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:54:33 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec 06 09:54:33 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Dec 06 09:54:33 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:54:33 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec 06 09:54:33 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:54:33 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:54:33 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c440029b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:54:33 compute-0 kernel: SELinux:  Converting 2777 SID table entries...
Dec 06 09:54:33 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Dec 06 09:54:33 compute-0 kernel: SELinux:  policy capability open_perms=1
Dec 06 09:54:33 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Dec 06 09:54:33 compute-0 kernel: SELinux:  policy capability always_check_network=0
Dec 06 09:54:33 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec 06 09:54:33 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec 06 09:54:33 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec 06 09:54:33 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:54:33 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c340041f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:54:33 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:54:33 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 06 09:54:34 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:54:34 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 09:54:34 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:54:34.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 09:54:34 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:54:34 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 09:54:34 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:54:34.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 09:54:34 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v392: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Dec 06 09:54:34 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0)
Dec 06 09:54:34 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Dec 06 09:54:34 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:54:34 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c58009ad0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:54:34 compute-0 groupadd[188072]: group added to /etc/group: name=dnsmasq, GID=991
Dec 06 09:54:34 compute-0 groupadd[188072]: group added to /etc/gshadow: name=dnsmasq
Dec 06 09:54:34 compute-0 groupadd[188072]: new group: name=dnsmasq, GID=991
Dec 06 09:54:34 compute-0 useradd[188079]: new user: name=dnsmasq, UID=991, GID=991, home=/var/lib/dnsmasq, shell=/usr/sbin/nologin, from=none
Dec 06 09:54:34 compute-0 dbus-broker-launch[767]: Noticed file-system modification, trigger reload.
Dec 06 09:54:34 compute-0 dbus-broker-launch[771]: avc:  op=load_policy lsm=selinux seqno=14 res=1
Dec 06 09:54:34 compute-0 dbus-broker-launch[767]: Noticed file-system modification, trigger reload.
Dec 06 09:54:34 compute-0 ceph-mon[74327]: pgmap v391: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec 06 09:54:34 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:54:34 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:54:34 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Dec 06 09:54:34 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 09:54:35 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec 06 09:54:35 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:54:35 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec 06 09:54:35 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:54:35 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:54:35 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c200036e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:54:35 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:54:35 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c440032d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:54:35 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0)
Dec 06 09:54:35 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Dec 06 09:54:35 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 06 09:54:35 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 09:54:35 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 06 09:54:35 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 09:54:35 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 06 09:54:35 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:54:35 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Dec 06 09:54:35 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:54:35 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 06 09:54:35 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 09:54:35 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 06 09:54:35 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 09:54:35 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 06 09:54:35 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 09:54:35 compute-0 ceph-mon[74327]: pgmap v392: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Dec 06 09:54:35 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:54:35 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:54:35 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Dec 06 09:54:35 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 09:54:35 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 09:54:35 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:54:35 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:54:35 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 09:54:35 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 09:54:35 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 09:54:35 compute-0 sudo[188092]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 09:54:35 compute-0 sudo[188092]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:54:35 compute-0 sudo[188092]: pam_unix(sudo:session): session closed for user root
Dec 06 09:54:36 compute-0 sudo[188117]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 5ecd3f74-dade-5fc4-92ce-8950ae424258 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Dec 06 09:54:36 compute-0 sudo[188117]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:54:36 compute-0 groupadd[188142]: group added to /etc/group: name=clevis, GID=990
Dec 06 09:54:36 compute-0 groupadd[188142]: group added to /etc/gshadow: name=clevis
Dec 06 09:54:36 compute-0 groupadd[188142]: new group: name=clevis, GID=990
Dec 06 09:54:36 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:54:36 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:54:36 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:54:36.043 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:54:36 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:54:36 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:54:36 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:54:36.057 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:54:36 compute-0 useradd[188151]: new user: name=clevis, UID=990, GID=990, home=/var/cache/clevis, shell=/usr/sbin/nologin, from=none
Dec 06 09:54:36 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v393: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Dec 06 09:54:36 compute-0 usermod[188161]: add 'clevis' to group 'tss'
Dec 06 09:54:36 compute-0 usermod[188161]: add 'clevis' to shadow group 'tss'
Dec 06 09:54:36 compute-0 podman[188211]: 2025-12-06 09:54:36.509027268 +0000 UTC m=+0.071291690 container create f02d39e755e21b62540088e39c967c6a9cf48566f41545ff675ff46e13bc3b11 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_kare, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 09:54:36 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:54:36 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c340041f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:54:36 compute-0 podman[188211]: 2025-12-06 09:54:36.469990881 +0000 UTC m=+0.032255343 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 06 09:54:36 compute-0 systemd[1]: Started libpod-conmon-f02d39e755e21b62540088e39c967c6a9cf48566f41545ff675ff46e13bc3b11.scope.
Dec 06 09:54:36 compute-0 systemd[1]: Started libcrun container.
Dec 06 09:54:36 compute-0 podman[188211]: 2025-12-06 09:54:36.618455939 +0000 UTC m=+0.180720391 container init f02d39e755e21b62540088e39c967c6a9cf48566f41545ff675ff46e13bc3b11 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_kare, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 06 09:54:36 compute-0 podman[188211]: 2025-12-06 09:54:36.626501617 +0000 UTC m=+0.188766029 container start f02d39e755e21b62540088e39c967c6a9cf48566f41545ff675ff46e13bc3b11 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_kare, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Dec 06 09:54:36 compute-0 podman[188211]: 2025-12-06 09:54:36.630209358 +0000 UTC m=+0.192473830 container attach f02d39e755e21b62540088e39c967c6a9cf48566f41545ff675ff46e13bc3b11 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_kare, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Dec 06 09:54:36 compute-0 agitated_kare[188226]: 167 167
Dec 06 09:54:36 compute-0 systemd[1]: libpod-f02d39e755e21b62540088e39c967c6a9cf48566f41545ff675ff46e13bc3b11.scope: Deactivated successfully.
Dec 06 09:54:36 compute-0 podman[188211]: 2025-12-06 09:54:36.635142981 +0000 UTC m=+0.197407363 container died f02d39e755e21b62540088e39c967c6a9cf48566f41545ff675ff46e13bc3b11 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_kare, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 09:54:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-b4a73f4c674f06cc2a45b36d7983b25964f762b956cd014dba73e49a7079d52f-merged.mount: Deactivated successfully.
Dec 06 09:54:36 compute-0 podman[188211]: 2025-12-06 09:54:36.709723119 +0000 UTC m=+0.271987511 container remove f02d39e755e21b62540088e39c967c6a9cf48566f41545ff675ff46e13bc3b11 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_kare, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec 06 09:54:36 compute-0 systemd[1]: libpod-conmon-f02d39e755e21b62540088e39c967c6a9cf48566f41545ff675ff46e13bc3b11.scope: Deactivated successfully.
Dec 06 09:54:36 compute-0 podman[188256]: 2025-12-06 09:54:36.921809579 +0000 UTC m=+0.056351017 container create 3f426d73f9ab99949ea842f3f1355b28f5697bbe27d7e17e69fc49c0490e1550 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_diffie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 06 09:54:36 compute-0 ceph-mon[74327]: pgmap v393: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Dec 06 09:54:36 compute-0 systemd[1]: Started libpod-conmon-3f426d73f9ab99949ea842f3f1355b28f5697bbe27d7e17e69fc49c0490e1550.scope.
Dec 06 09:54:36 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:54:36 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 06 09:54:36 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:54:36 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 06 09:54:36 compute-0 podman[188256]: 2025-12-06 09:54:36.900895152 +0000 UTC m=+0.035436610 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 06 09:54:37 compute-0 systemd[1]: Started libcrun container.
Dec 06 09:54:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a2d0055afa0071b483d36b7722aa775a7c3bb47d6521257f96309c63f95e960/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 09:54:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a2d0055afa0071b483d36b7722aa775a7c3bb47d6521257f96309c63f95e960/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 09:54:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a2d0055afa0071b483d36b7722aa775a7c3bb47d6521257f96309c63f95e960/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 09:54:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a2d0055afa0071b483d36b7722aa775a7c3bb47d6521257f96309c63f95e960/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 09:54:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a2d0055afa0071b483d36b7722aa775a7c3bb47d6521257f96309c63f95e960/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 06 09:54:37 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:54:37.049Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec 06 09:54:37 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:54:37.049Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
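Both dashboard webhook receivers are timing out (compute-1 and compute-2 on port 8443), so Alertmanager cancels the notification after its retries. A quick reachability sketch against the receiver URL taken from the log; the JSON body below is a placeholder, not Alertmanager's full webhook schema:

    import json
    import urllib.request

    URL = "http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver"

    def probe_receiver(url: str, timeout: float = 5.0) -> int:
        # POST a minimal JSON body the way the dispatcher does; a connect
        # timeout here reproduces the "i/o timeout" seen above.
        req = urllib.request.Request(
            url,
            data=json.dumps({"alerts": []}).encode(),
            headers={"Content-Type": "application/json"},
            method="POST",
        )
        with urllib.request.urlopen(req, timeout=timeout) as resp:
            return resp.status

    print(probe_receiver(URL))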
Dec 06 09:54:37 compute-0 podman[188256]: 2025-12-06 09:54:37.062096005 +0000 UTC m=+0.196637473 container init 3f426d73f9ab99949ea842f3f1355b28f5697bbe27d7e17e69fc49c0490e1550 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_diffie, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 09:54:37 compute-0 podman[188256]: 2025-12-06 09:54:37.073212586 +0000 UTC m=+0.207754024 container start 3f426d73f9ab99949ea842f3f1355b28f5697bbe27d7e17e69fc49c0490e1550 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_diffie, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Dec 06 09:54:37 compute-0 podman[188256]: 2025-12-06 09:54:37.134792472 +0000 UTC m=+0.269333940 container attach 3f426d73f9ab99949ea842f3f1355b28f5697bbe27d7e17e69fc49c0490e1550 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_diffie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 09:54:37 compute-0 sweet_diffie[188277]: --> passed data devices: 0 physical, 1 LVM
Dec 06 09:54:37 compute-0 sweet_diffie[188277]: --> All data devices are unavailable
Dec 06 09:54:37 compute-0 systemd[1]: libpod-3f426d73f9ab99949ea842f3f1355b28f5697bbe27d7e17e69fc49c0490e1550.scope: Deactivated successfully.
Dec 06 09:54:37 compute-0 podman[188256]: 2025-12-06 09:54:37.498021871 +0000 UTC m=+0.632563329 container died 3f426d73f9ab99949ea842f3f1355b28f5697bbe27d7e17e69fc49c0490e1550 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_diffie, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 09:54:37 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:54:37 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c340041f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:54:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-8a2d0055afa0071b483d36b7722aa775a7c3bb47d6521257f96309c63f95e960-merged.mount: Deactivated successfully.
Dec 06 09:54:37 compute-0 podman[188256]: 2025-12-06 09:54:37.571082098 +0000 UTC m=+0.705623576 container remove 3f426d73f9ab99949ea842f3f1355b28f5697bbe27d7e17e69fc49c0490e1550 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_diffie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Dec 06 09:54:37 compute-0 systemd[1]: libpod-conmon-3f426d73f9ab99949ea842f3f1355b28f5697bbe27d7e17e69fc49c0490e1550.scope: Deactivated successfully.
Dec 06 09:54:37 compute-0 sudo[188117]: pam_unix(sudo:session): session closed for user root
Dec 06 09:54:37 compute-0 sudo[188311]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 09:54:37 compute-0 sudo[188311]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:54:37 compute-0 sudo[188311]: pam_unix(sudo:session): session closed for user root
Dec 06 09:54:37 compute-0 sudo[188336]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 5ecd3f74-dade-5fc4-92ce-8950ae424258 -- lvm list --format json
Dec 06 09:54:37 compute-0 sudo[188336]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:54:37 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:54:37 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c340041f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:54:38 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:54:38 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:54:38 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:54:38.045 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:54:38 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:54:38 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:54:38 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:54:38.059 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:54:38 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v394: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 938 B/s wr, 3 op/s
Dec 06 09:54:38 compute-0 podman[188403]: 2025-12-06 09:54:38.205541928 +0000 UTC m=+0.040370684 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 06 09:54:38 compute-0 podman[188403]: 2025-12-06 09:54:38.372668791 +0000 UTC m=+0.207497467 container create c8bef967bd9926c172a487676b3f8e8d5296917c180f4ea92342788bdd696e4a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_northcutt, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Dec 06 09:54:38 compute-0 systemd[1]: Started libpod-conmon-c8bef967bd9926c172a487676b3f8e8d5296917c180f4ea92342788bdd696e4a.scope.
Dec 06 09:54:38 compute-0 systemd[1]: Started libcrun container.
Dec 06 09:54:38 compute-0 podman[188403]: 2025-12-06 09:54:38.470707944 +0000 UTC m=+0.305536670 container init c8bef967bd9926c172a487676b3f8e8d5296917c180f4ea92342788bdd696e4a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_northcutt, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Dec 06 09:54:38 compute-0 podman[188403]: 2025-12-06 09:54:38.482196695 +0000 UTC m=+0.317025351 container start c8bef967bd9926c172a487676b3f8e8d5296917c180f4ea92342788bdd696e4a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_northcutt, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 09:54:38 compute-0 suspicious_northcutt[188419]: 167 167
Dec 06 09:54:38 compute-0 systemd[1]: libpod-c8bef967bd9926c172a487676b3f8e8d5296917c180f4ea92342788bdd696e4a.scope: Deactivated successfully.
Dec 06 09:54:38 compute-0 podman[188403]: 2025-12-06 09:54:38.495334809 +0000 UTC m=+0.330163505 container attach c8bef967bd9926c172a487676b3f8e8d5296917c180f4ea92342788bdd696e4a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_northcutt, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Dec 06 09:54:38 compute-0 podman[188403]: 2025-12-06 09:54:38.496403539 +0000 UTC m=+0.331232205 container died c8bef967bd9926c172a487676b3f8e8d5296917c180f4ea92342788bdd696e4a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_northcutt, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec 06 09:54:38 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:54:38 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c28003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:54:38 compute-0 sshd-session[188248]: Received disconnect from 43.163.93.82 port 38180:11:  [preauth]
Dec 06 09:54:38 compute-0 sshd-session[188248]: Disconnected from authenticating user root 43.163.93.82 port 38180 [preauth]
Dec 06 09:54:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-f93b2de88826d08d80749233cbe88df2319425a9868aa74bd4647e6b0dc5454d-merged.mount: Deactivated successfully.
Dec 06 09:54:38 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 06 09:54:38 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 09:54:38 compute-0 ceph-mon[74327]: pgmap v394: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 938 B/s wr, 3 op/s
Dec 06 09:54:38 compute-0 podman[188403]: 2025-12-06 09:54:38.948553784 +0000 UTC m=+0.783382480 container remove c8bef967bd9926c172a487676b3f8e8d5296917c180f4ea92342788bdd696e4a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_northcutt, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec 06 09:54:38 compute-0 systemd[1]: libpod-conmon-c8bef967bd9926c172a487676b3f8e8d5296917c180f4ea92342788bdd696e4a.scope: Deactivated successfully.
Dec 06 09:54:39 compute-0 podman[188445]: 2025-12-06 09:54:39.206734821 +0000 UTC m=+0.052786080 container create c9df3691b515e5f12d89adaad52f7f75101a8e3273f33425252d2fd9997ff48a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_bohr, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325)
Dec 06 09:54:39 compute-0 systemd[1]: Started libpod-conmon-c9df3691b515e5f12d89adaad52f7f75101a8e3273f33425252d2fd9997ff48a.scope.
Dec 06 09:54:39 compute-0 systemd[1]: Started libcrun container.
Dec 06 09:54:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1f3fbefd1f207008146e23d8913278437c1fd7124c048c76b6b154a4838fe9b7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 09:54:39 compute-0 podman[188445]: 2025-12-06 09:54:39.186631507 +0000 UTC m=+0.032682776 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 06 09:54:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1f3fbefd1f207008146e23d8913278437c1fd7124c048c76b6b154a4838fe9b7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 09:54:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1f3fbefd1f207008146e23d8913278437c1fd7124c048c76b6b154a4838fe9b7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 09:54:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1f3fbefd1f207008146e23d8913278437c1fd7124c048c76b6b154a4838fe9b7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 09:54:39 compute-0 podman[188445]: 2025-12-06 09:54:39.296200612 +0000 UTC m=+0.142251901 container init c9df3691b515e5f12d89adaad52f7f75101a8e3273f33425252d2fd9997ff48a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_bohr, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec 06 09:54:39 compute-0 podman[188445]: 2025-12-06 09:54:39.307931029 +0000 UTC m=+0.153982288 container start c9df3691b515e5f12d89adaad52f7f75101a8e3273f33425252d2fd9997ff48a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_bohr, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Dec 06 09:54:39 compute-0 podman[188445]: 2025-12-06 09:54:39.312518114 +0000 UTC m=+0.158569383 container attach c9df3691b515e5f12d89adaad52f7f75101a8e3273f33425252d2fd9997ff48a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_bohr, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 09:54:39 compute-0 polkitd[43373]: Reloading rules
Dec 06 09:54:39 compute-0 polkitd[43373]: Collecting garbage unconditionally...
Dec 06 09:54:39 compute-0 polkitd[43373]: Loading rules from directory /etc/polkit-1/rules.d
Dec 06 09:54:39 compute-0 polkitd[43373]: Loading rules from directory /usr/share/polkit-1/rules.d
Dec 06 09:54:39 compute-0 polkitd[43373]: Finished loading, compiling and executing 3 rules
Dec 06 09:54:39 compute-0 polkitd[43373]: Reloading rules
Dec 06 09:54:39 compute-0 polkitd[43373]: Collecting garbage unconditionally...
Dec 06 09:54:39 compute-0 polkitd[43373]: Loading rules from directory /etc/polkit-1/rules.d
Dec 06 09:54:39 compute-0 polkitd[43373]: Loading rules from directory /usr/share/polkit-1/rules.d
Dec 06 09:54:39 compute-0 polkitd[43373]: Finished loading, compiling and executing 3 rules
Dec 06 09:54:39 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:54:39 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c200036e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:54:39 compute-0 crazy_bohr[188462]: {
Dec 06 09:54:39 compute-0 crazy_bohr[188462]:     "1": [
Dec 06 09:54:39 compute-0 crazy_bohr[188462]:         {
Dec 06 09:54:39 compute-0 crazy_bohr[188462]:             "devices": [
Dec 06 09:54:39 compute-0 crazy_bohr[188462]:                 "/dev/loop3"
Dec 06 09:54:39 compute-0 crazy_bohr[188462]:             ],
Dec 06 09:54:39 compute-0 crazy_bohr[188462]:             "lv_name": "ceph_lv0",
Dec 06 09:54:39 compute-0 crazy_bohr[188462]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 09:54:39 compute-0 crazy_bohr[188462]:             "lv_size": "21470642176",
Dec 06 09:54:39 compute-0 crazy_bohr[188462]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=a55pcS-v3Vt-CJ4W-fqCV-Baze-9VlY-jG9prS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=5ecd3f74-dade-5fc4-92ce-8950ae424258,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=7899c4d8-edb4-4836-b838-c4aa702ad7af,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 06 09:54:39 compute-0 crazy_bohr[188462]:             "lv_uuid": "a55pcS-v3Vt-CJ4W-fqCV-Baze-9VlY-jG9prS",
Dec 06 09:54:39 compute-0 crazy_bohr[188462]:             "name": "ceph_lv0",
Dec 06 09:54:39 compute-0 crazy_bohr[188462]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 09:54:39 compute-0 crazy_bohr[188462]:             "tags": {
Dec 06 09:54:39 compute-0 crazy_bohr[188462]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 06 09:54:39 compute-0 crazy_bohr[188462]:                 "ceph.block_uuid": "a55pcS-v3Vt-CJ4W-fqCV-Baze-9VlY-jG9prS",
Dec 06 09:54:39 compute-0 crazy_bohr[188462]:                 "ceph.cephx_lockbox_secret": "",
Dec 06 09:54:39 compute-0 crazy_bohr[188462]:                 "ceph.cluster_fsid": "5ecd3f74-dade-5fc4-92ce-8950ae424258",
Dec 06 09:54:39 compute-0 crazy_bohr[188462]:                 "ceph.cluster_name": "ceph",
Dec 06 09:54:39 compute-0 crazy_bohr[188462]:                 "ceph.crush_device_class": "",
Dec 06 09:54:39 compute-0 crazy_bohr[188462]:                 "ceph.encrypted": "0",
Dec 06 09:54:39 compute-0 crazy_bohr[188462]:                 "ceph.osd_fsid": "7899c4d8-edb4-4836-b838-c4aa702ad7af",
Dec 06 09:54:39 compute-0 crazy_bohr[188462]:                 "ceph.osd_id": "1",
Dec 06 09:54:39 compute-0 crazy_bohr[188462]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 06 09:54:39 compute-0 crazy_bohr[188462]:                 "ceph.type": "block",
Dec 06 09:54:39 compute-0 crazy_bohr[188462]:                 "ceph.vdo": "0",
Dec 06 09:54:39 compute-0 crazy_bohr[188462]:                 "ceph.with_tpm": "0"
Dec 06 09:54:39 compute-0 crazy_bohr[188462]:             },
Dec 06 09:54:39 compute-0 crazy_bohr[188462]:             "type": "block",
Dec 06 09:54:39 compute-0 crazy_bohr[188462]:             "vg_name": "ceph_vg0"
Dec 06 09:54:39 compute-0 crazy_bohr[188462]:         }
Dec 06 09:54:39 compute-0 crazy_bohr[188462]:     ]
Dec 06 09:54:39 compute-0 crazy_bohr[188462]: }
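The JSON above is the `ceph-volume lvm list --format json` report: /dev/ceph_vg0/ceph_lv0 (backed by /dev/loop3) already carries the block tags for osd.1, which would also explain why the earlier `lvm batch` run declared all data devices unavailable: the only candidate LV is already prepared. A short sketch that re-runs the same listing through cephadm and extracts that mapping, assuming the JSON arrives on stdout as in the container output above:

    import json
    import subprocess

    # fsid copied from this log; adjust for another cluster.
    FSID = "5ecd3f74-dade-5fc4-92ce-8950ae424258"
    out = subprocess.run(
        ["cephadm", "ceph-volume", "--fsid", FSID,
         "--", "lvm", "list", "--format", "json"],
        capture_output=True, text=True, check=True,
    ).stdout
    for osd_id, lvs in json.loads(out).items():
        for lv in lvs:
            print(f"osd.{osd_id}: {lv['lv_path']} "
                  f"on {', '.join(lv['devices'])} "
                  f"(osd_fsid={lv['tags']['ceph.osd_fsid']})")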
Dec 06 09:54:39 compute-0 systemd[1]: libpod-c9df3691b515e5f12d89adaad52f7f75101a8e3273f33425252d2fd9997ff48a.scope: Deactivated successfully.
Dec 06 09:54:39 compute-0 podman[188445]: 2025-12-06 09:54:39.601392111 +0000 UTC m=+0.447443360 container died c9df3691b515e5f12d89adaad52f7f75101a8e3273f33425252d2fd9997ff48a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_bohr, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Dec 06 09:54:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-1f3fbefd1f207008146e23d8913278437c1fd7124c048c76b6b154a4838fe9b7-merged.mount: Deactivated successfully.
Dec 06 09:54:39 compute-0 podman[188445]: 2025-12-06 09:54:39.718810548 +0000 UTC m=+0.564861797 container remove c9df3691b515e5f12d89adaad52f7f75101a8e3273f33425252d2fd9997ff48a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_bohr, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1)
Dec 06 09:54:39 compute-0 systemd[1]: libpod-conmon-c9df3691b515e5f12d89adaad52f7f75101a8e3273f33425252d2fd9997ff48a.scope: Deactivated successfully.
Dec 06 09:54:39 compute-0 sudo[188336]: pam_unix(sudo:session): session closed for user root
Dec 06 09:54:39 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:54:39 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c440032d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:54:39 compute-0 sudo[188548]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 09:54:39 compute-0 sudo[188548]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:54:39 compute-0 sudo[188548]: pam_unix(sudo:session): session closed for user root
Dec 06 09:54:39 compute-0 sudo[188584]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 5ecd3f74-dade-5fc4-92ce-8950ae424258 -- raw list --format json
Dec 06 09:54:39 compute-0 sudo[188584]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:54:39 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 09:54:39 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 09:54:40 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:54:40 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Dec 06 09:54:40 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:54:40 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 09:54:40 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:54:40.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 09:54:40 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:54:40 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:54:40 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:54:40.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:54:40 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v395: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 938 B/s wr, 3 op/s
Dec 06 09:54:40 compute-0 podman[188712]: 2025-12-06 09:54:40.386173458 +0000 UTC m=+0.048198735 container create 1e2129d2aefff488b6d7d5e68f61af8dc6459b6357ab9021e2f373e873f8d392 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_brown, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 06 09:54:40 compute-0 systemd[1]: Started libpod-conmon-1e2129d2aefff488b6d7d5e68f61af8dc6459b6357ab9021e2f373e873f8d392.scope.
Dec 06 09:54:40 compute-0 systemd[1]: Started libcrun container.
Dec 06 09:54:40 compute-0 podman[188712]: 2025-12-06 09:54:40.364912163 +0000 UTC m=+0.026937500 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 06 09:54:40 compute-0 podman[188712]: 2025-12-06 09:54:40.47273749 +0000 UTC m=+0.134762797 container init 1e2129d2aefff488b6d7d5e68f61af8dc6459b6357ab9021e2f373e873f8d392 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_brown, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 09:54:40 compute-0 podman[188712]: 2025-12-06 09:54:40.479045871 +0000 UTC m=+0.141071148 container start 1e2129d2aefff488b6d7d5e68f61af8dc6459b6357ab9021e2f373e873f8d392 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_brown, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1)
Dec 06 09:54:40 compute-0 podman[188712]: 2025-12-06 09:54:40.482551056 +0000 UTC m=+0.144576353 container attach 1e2129d2aefff488b6d7d5e68f61af8dc6459b6357ab9021e2f373e873f8d392 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_brown, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Dec 06 09:54:40 compute-0 recursing_brown[188741]: 167 167
Dec 06 09:54:40 compute-0 systemd[1]: libpod-1e2129d2aefff488b6d7d5e68f61af8dc6459b6357ab9021e2f373e873f8d392.scope: Deactivated successfully.
Dec 06 09:54:40 compute-0 podman[188712]: 2025-12-06 09:54:40.486791811 +0000 UTC m=+0.148817088 container died 1e2129d2aefff488b6d7d5e68f61af8dc6459b6357ab9021e2f373e873f8d392 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_brown, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 09:54:40 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:54:40 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c28003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:54:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-57eac60fa28e7fed0f347fd475667cf0d57e693d9a158b12536d56950ea01b1b-merged.mount: Deactivated successfully.
Dec 06 09:54:40 compute-0 podman[188712]: 2025-12-06 09:54:40.544657007 +0000 UTC m=+0.206682284 container remove 1e2129d2aefff488b6d7d5e68f61af8dc6459b6357ab9021e2f373e873f8d392 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_brown, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Dec 06 09:54:40 compute-0 systemd[1]: libpod-conmon-1e2129d2aefff488b6d7d5e68f61af8dc6459b6357ab9021e2f373e873f8d392.scope: Deactivated successfully.
Dec 06 09:54:40 compute-0 groupadd[188785]: group added to /etc/group: name=ceph, GID=167
Dec 06 09:54:40 compute-0 groupadd[188785]: group added to /etc/gshadow: name=ceph
Dec 06 09:54:40 compute-0 groupadd[188785]: new group: name=ceph, GID=167
Dec 06 09:54:40 compute-0 useradd[188802]: new user: name=ceph, UID=167, GID=167, home=/var/lib/ceph, shell=/sbin/nologin, from=none
Dec 06 09:54:40 compute-0 podman[188784]: 2025-12-06 09:54:40.743279852 +0000 UTC m=+0.068676540 container create f10cc639dac922ad085c580d7825cefac844a1fe60aa7d7f7168e459057984d5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_cartwright, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325)
Dec 06 09:54:40 compute-0 systemd[1]: Started libpod-conmon-f10cc639dac922ad085c580d7825cefac844a1fe60aa7d7f7168e459057984d5.scope.
Dec 06 09:54:40 compute-0 podman[188784]: 2025-12-06 09:54:40.717174575 +0000 UTC m=+0.042571333 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 06 09:54:40 compute-0 systemd[1]: Started libcrun container.
Dec 06 09:54:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2df05a198671661c5d7deb5fc105c786e169169872ad92407f3d76e6b9009583/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 09:54:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2df05a198671661c5d7deb5fc105c786e169169872ad92407f3d76e6b9009583/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 09:54:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2df05a198671661c5d7deb5fc105c786e169169872ad92407f3d76e6b9009583/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 09:54:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2df05a198671661c5d7deb5fc105c786e169169872ad92407f3d76e6b9009583/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 09:54:40 compute-0 podman[188784]: 2025-12-06 09:54:40.85038709 +0000 UTC m=+0.175783808 container init f10cc639dac922ad085c580d7825cefac844a1fe60aa7d7f7168e459057984d5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_cartwright, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 09:54:40 compute-0 podman[188784]: 2025-12-06 09:54:40.859665641 +0000 UTC m=+0.185062329 container start f10cc639dac922ad085c580d7825cefac844a1fe60aa7d7f7168e459057984d5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_cartwright, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 06 09:54:40 compute-0 podman[188784]: 2025-12-06 09:54:40.863269499 +0000 UTC m=+0.188666187 container attach f10cc639dac922ad085c580d7825cefac844a1fe60aa7d7f7168e459057984d5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_cartwright, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 09:54:40 compute-0 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:09:54:40] "GET /metrics HTTP/1.1" 200 48267 "" "Prometheus/2.51.0"
Dec 06 09:54:40 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:09:54:40] "GET /metrics HTTP/1.1" 200 48267 "" "Prometheus/2.51.0"
Dec 06 09:54:40 compute-0 ceph-mon[74327]: pgmap v395: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 938 B/s wr, 3 op/s
Dec 06 09:54:41 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:54:41 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c340041f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:54:41 compute-0 lvm[188890]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 06 09:54:41 compute-0 lvm[188890]: VG ceph_vg0 finished
Dec 06 09:54:41 compute-0 pedantic_cartwright[188814]: {}
Dec 06 09:54:41 compute-0 systemd[1]: libpod-f10cc639dac922ad085c580d7825cefac844a1fe60aa7d7f7168e459057984d5.scope: Deactivated successfully.
Dec 06 09:54:41 compute-0 podman[188784]: 2025-12-06 09:54:41.767550579 +0000 UTC m=+1.092947297 container died f10cc639dac922ad085c580d7825cefac844a1fe60aa7d7f7168e459057984d5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_cartwright, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2)
Dec 06 09:54:41 compute-0 systemd[1]: libpod-f10cc639dac922ad085c580d7825cefac844a1fe60aa7d7f7168e459057984d5.scope: Consumed 1.322s CPU time.
Dec 06 09:54:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-2df05a198671661c5d7deb5fc105c786e169169872ad92407f3d76e6b9009583-merged.mount: Deactivated successfully.
Dec 06 09:54:41 compute-0 podman[188784]: 2025-12-06 09:54:41.812017613 +0000 UTC m=+1.137414291 container remove f10cc639dac922ad085c580d7825cefac844a1fe60aa7d7f7168e459057984d5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_cartwright, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Dec 06 09:54:41 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:54:41 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c200036e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:54:41 compute-0 systemd[1]: libpod-conmon-f10cc639dac922ad085c580d7825cefac844a1fe60aa7d7f7168e459057984d5.scope: Deactivated successfully.
Dec 06 09:54:41 compute-0 sudo[188584]: pam_unix(sudo:session): session closed for user root
Dec 06 09:54:41 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 06 09:54:41 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:54:41 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 06 09:54:41 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:54:42 compute-0 sudo[188903]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 09:54:42 compute-0 sudo[188903]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:54:42 compute-0 sudo[188903]: pam_unix(sudo:session): session closed for user root
Dec 06 09:54:42 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:54:42 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 09:54:42 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:54:42.050 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 09:54:42 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:54:42 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 09:54:42 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:54:42.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 09:54:42 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v396: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 938 B/s wr, 3 op/s
Dec 06 09:54:42 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:54:42 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c440032d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:54:42 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:54:42 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:54:42 compute-0 ceph-mon[74327]: pgmap v396: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 938 B/s wr, 3 op/s
Dec 06 09:54:43 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:54:43 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c28003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:54:43 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:54:43 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c340041f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:54:44 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:54:44 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 09:54:44 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:54:44.053 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 09:54:44 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:54:44 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 09:54:44 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:54:44.065 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 09:54:44 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v397: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 1023 B/s wr, 3 op/s
Dec 06 09:54:44 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:54:44 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c200036e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:54:44 compute-0 systemd[1]: Stopping OpenSSH server daemon...
Dec 06 09:54:44 compute-0 sshd[1005]: Received signal 15; terminating.
Dec 06 09:54:44 compute-0 systemd[1]: sshd.service: Deactivated successfully.
Dec 06 09:54:44 compute-0 systemd[1]: Stopped OpenSSH server daemon.
Dec 06 09:54:44 compute-0 systemd[1]: sshd.service: Consumed 2.954s CPU time, read 32.0K from disk, written 0B to disk.
Dec 06 09:54:44 compute-0 systemd[1]: Stopped target sshd-keygen.target.
Dec 06 09:54:44 compute-0 systemd[1]: Stopping sshd-keygen.target...
Dec 06 09:54:44 compute-0 systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Dec 06 09:54:44 compute-0 systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Dec 06 09:54:44 compute-0 systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Dec 06 09:54:44 compute-0 systemd[1]: Reached target sshd-keygen.target.
Dec 06 09:54:44 compute-0 systemd[1]: Starting OpenSSH server daemon...
Dec 06 09:54:44 compute-0 sshd[189618]: Server listening on 0.0.0.0 port 22.
Dec 06 09:54:44 compute-0 sshd[189618]: Server listening on :: port 22.
Dec 06 09:54:44 compute-0 systemd[1]: Started OpenSSH server daemon.
Dec 06 09:54:44 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 09:54:45 compute-0 ceph-mon[74327]: pgmap v397: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 1023 B/s wr, 3 op/s
Dec 06 09:54:45 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-haproxy-nfs-cephfs-compute-0-fzuvue[96127]: [WARNING] 339/095445 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Dec 06 09:54:45 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:54:45 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c440032d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:54:45 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:54:45 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c28003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:54:46 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:54:46 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 09:54:46 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:54:46.055 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 09:54:46 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:54:46 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:54:46 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:54:46.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:54:46 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v398: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 938 B/s wr, 3 op/s
Dec 06 09:54:46 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:54:46 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c340041f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:54:47 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec 06 09:54:47 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:54:47.051Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 06 09:54:47 compute-0 systemd[1]: Starting man-db-cache-update.service...
Dec 06 09:54:47 compute-0 systemd[1]: Reloading.
Dec 06 09:54:47 compute-0 systemd-rc-local-generator[189882]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 09:54:47 compute-0 systemd-sysv-generator[189885]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 09:54:47 compute-0 ceph-mon[74327]: pgmap v398: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 938 B/s wr, 3 op/s
Dec 06 09:54:47 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:54:47 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c200036e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:54:47 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Dec 06 09:54:47 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:54:47 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c440047c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:54:48 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:54:48 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 09:54:48 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:54:48.057 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 09:54:48 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:54:48 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 09:54:48 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:54:48.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 09:54:48 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v399: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Dec 06 09:54:48 compute-0 sudo[190869]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 09:54:48 compute-0 sudo[190869]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:54:48 compute-0 sudo[190869]: pam_unix(sudo:session): session closed for user root
Dec 06 09:54:48 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:54:48 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c28003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:54:49 compute-0 podman[191990]: 2025-12-06 09:54:49.474711785 +0000 UTC m=+0.099600297 container health_status ed08b04f63d3695f0aa2f04fcc706897af4aa24f286fa3cd10a4323ee2c868ab (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Dec 06 09:54:49 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:54:49 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c2c001090 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:54:49 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:54:49 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c200036e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:54:49 compute-0 ceph-mon[74327]: pgmap v399: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Dec 06 09:54:49 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 09:54:50 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:54:50 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 09:54:50 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:54:50.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 09:54:50 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:54:50 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:54:50 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:54:50.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:54:50 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v400: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Dec 06 09:54:50 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:54:50 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c440047c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:54:50 compute-0 sudo[169650]: pam_unix(sudo:session): session closed for user root
Dec 06 09:54:50 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:09:54:50] "GET /metrics HTTP/1.1" 200 48267 "" "Prometheus/2.51.0"
Dec 06 09:54:50 compute-0 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:09:54:50] "GET /metrics HTTP/1.1" 200 48267 "" "Prometheus/2.51.0"
Dec 06 09:54:50 compute-0 ceph-mon[74327]: pgmap v400: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Dec 06 09:54:51 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:54:51 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c28003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:54:51 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:54:51 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c2c001090 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:54:52 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:54:52 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 09:54:52 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:54:52.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 09:54:52 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:54:52 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:54:52 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:54:52.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:54:52 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v401: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Dec 06 09:54:52 compute-0 sudo[194768]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hngnmlhrdbznebwlaxumdgijyqnisnxi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014891.1171706-968-54475886048954/AnsiballZ_systemd.py'
Dec 06 09:54:52 compute-0 sudo[194768]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:54:52 compute-0 python3.9[194790]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec 06 09:54:52 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:54:52 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c2c001090 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:54:52 compute-0 systemd[1]: Reloading.
Dec 06 09:54:52 compute-0 systemd-rc-local-generator[195176]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 09:54:52 compute-0 systemd-sysv-generator[195179]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 09:54:52 compute-0 sudo[194768]: pam_unix(sudo:session): session closed for user root
Dec 06 09:54:53 compute-0 ceph-mon[74327]: pgmap v401: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Dec 06 09:54:53 compute-0 sudo[195912]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dcrqtzbkoexndvrmnikdkjlhtqhuwfng ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014893.0536666-968-281138537491613/AnsiballZ_systemd.py'
Dec 06 09:54:53 compute-0 sudo[195912]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:54:53 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:54:53 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c440047c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:54:53 compute-0 python3.9[195933]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec 06 09:54:53 compute-0 systemd[1]: Reloading.
Dec 06 09:54:53 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:54:53 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c28003c30 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:54:53 compute-0 systemd-rc-local-generator[196366]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 09:54:53 compute-0 systemd-sysv-generator[196373]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 09:54:53 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 06 09:54:53 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 09:54:53 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 09:54:53 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 09:54:53 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 09:54:53 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 09:54:53 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 09:54:53 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 09:54:54 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:54:54 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:54:54 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:54:54.065 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:54:54 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:54:54 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:54:54 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:54:54.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:54:54 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v402: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Dec 06 09:54:54 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 09:54:54 compute-0 sudo[195912]: pam_unix(sudo:session): session closed for user root
Dec 06 09:54:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:54:54.225 162267 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 09:54:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:54:54.225 162267 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 09:54:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:54:54.225 162267 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 09:54:54 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:54:54 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c28003c30 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:54:54 compute-0 sudo[197051]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yllxuvyluutvtnlrwwritqytieclucub ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014894.3517404-968-233129575152062/AnsiballZ_systemd.py'
Dec 06 09:54:54 compute-0 sudo[197051]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:54:54 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 09:54:54 compute-0 python3.9[197072]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tls.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec 06 09:54:55 compute-0 systemd[1]: Reloading.
Dec 06 09:54:55 compute-0 systemd-rc-local-generator[197525]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 09:54:55 compute-0 systemd-sysv-generator[197528]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 09:54:55 compute-0 ceph-mon[74327]: pgmap v402: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Dec 06 09:54:55 compute-0 sudo[197051]: pam_unix(sudo:session): session closed for user root
Dec 06 09:54:55 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:54:55 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c2c001090 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:54:55 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:54:55 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c440047c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:54:55 compute-0 sudo[198364]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-luoruvlnmbxwedjuliwzmbhuraddscdn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014895.6254985-968-263085251884442/AnsiballZ_systemd.py'
Dec 06 09:54:55 compute-0 sudo[198364]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:54:56 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:54:56 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 09:54:56 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:54:56.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 09:54:56 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:54:56 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:54:56 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:54:56.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:54:56 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v403: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Dec 06 09:54:56 compute-0 python3.9[198388]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=virtproxyd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec 06 09:54:56 compute-0 systemd[1]: Reloading.
Dec 06 09:54:56 compute-0 systemd-sysv-generator[198649]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 09:54:56 compute-0 systemd-rc-local-generator[198641]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 09:54:56 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:54:56 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c28003c30 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:54:56 compute-0 sudo[198364]: pam_unix(sudo:session): session closed for user root
Dec 06 09:54:56 compute-0 podman[198877]: 2025-12-06 09:54:56.700716849 +0000 UTC m=+0.064656501 container health_status ec1e66d2175087ad7a28a9a6b5a784e30128df3f5e268959d009e364cd5120f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 06 09:54:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:54:57.053Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec 06 09:54:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:54:57.054Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 06 09:54:57 compute-0 sudo[199151]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mdjynwzteehkhgkuljsvqcvnpzlcydqs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014896.868423-1055-247003337095591/AnsiballZ_systemd.py'
Dec 06 09:54:57 compute-0 sudo[199151]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:54:57 compute-0 ceph-mon[74327]: pgmap v403: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Dec 06 09:54:57 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec 06 09:54:57 compute-0 systemd[1]: Finished man-db-cache-update.service.
Dec 06 09:54:57 compute-0 systemd[1]: man-db-cache-update.service: Consumed 12.365s CPU time.
Dec 06 09:54:57 compute-0 systemd[1]: run-r1915e722052b45ebaafc73df41e557bb.service: Deactivated successfully.
Dec 06 09:54:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:54:57 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c28003c30 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:54:57 compute-0 python3.9[199153]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 06 09:54:57 compute-0 systemd[1]: Reloading.
Dec 06 09:54:57 compute-0 systemd-sysv-generator[199307]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 09:54:57 compute-0 systemd-rc-local-generator[199303]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 09:54:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:54:57 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c2c002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:54:58 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:54:58 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:54:58 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:54:58.070 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:54:58 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:54:58 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:54:58 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:54:58.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:54:58 compute-0 sudo[199151]: pam_unix(sudo:session): session closed for user root
Dec 06 09:54:58 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v404: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Dec 06 09:54:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:54:58 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c440047c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:54:58 compute-0 sudo[199460]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ohtjuoqwxozrphatqybdlabvmeqxcoqe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014898.2558873-1055-278018888656884/AnsiballZ_systemd.py'
Dec 06 09:54:58 compute-0 sudo[199460]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:54:58 compute-0 python3.9[199462]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 06 09:54:59 compute-0 ceph-mon[74327]: pgmap v404: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Dec 06 09:54:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:54:59 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c28003c30 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:54:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:54:59 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c28003c30 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:54:59 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 09:55:00 compute-0 systemd[1]: Reloading.
Dec 06 09:55:00 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:55:00 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:55:00 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:55:00.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:55:00 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:55:00 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:55:00 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:55:00.083 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:55:00 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v405: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:55:00 compute-0 systemd-rc-local-generator[199496]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 09:55:00 compute-0 systemd-sysv-generator[199500]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 09:55:00 compute-0 sudo[199460]: pam_unix(sudo:session): session closed for user root
Dec 06 09:55:00 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:55:00 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c2c002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:55:00 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:09:55:00] "GET /metrics HTTP/1.1" 200 48266 "" "Prometheus/2.51.0"
Dec 06 09:55:00 compute-0 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:09:55:00] "GET /metrics HTTP/1.1" 200 48266 "" "Prometheus/2.51.0"
Dec 06 09:55:01 compute-0 sudo[199654]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iksyttochdzqmdpjqaszfdwuqxioleyl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014900.599581-1055-136840679093327/AnsiballZ_systemd.py'
Dec 06 09:55:01 compute-0 sudo[199654]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:55:01 compute-0 python3.9[199656]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 06 09:55:01 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:55:01 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c440047c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:55:01 compute-0 ceph-mon[74327]: pgmap v405: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:55:01 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:55:01 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c440047c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:55:02 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:55:02 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:55:02 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:55:02.077 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:55:02 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:55:02 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 09:55:02 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:55:02.085 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 09:55:02 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v406: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:55:02 compute-0 systemd[1]: Reloading.
Dec 06 09:55:02 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:55:02 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c440047c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:55:02 compute-0 systemd-sysv-generator[199692]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 09:55:02 compute-0 systemd-rc-local-generator[199689]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 09:55:02 compute-0 ceph-mon[74327]: pgmap v406: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:55:02 compute-0 sudo[199654]: pam_unix(sudo:session): session closed for user root
Dec 06 09:55:03 compute-0 sudo[199847]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hwklnrocfjsrqjeotmrxhekktnozylff ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014903.028829-1055-142934179810880/AnsiballZ_systemd.py'
Dec 06 09:55:03 compute-0 sudo[199847]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:55:03 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:55:03 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c500013a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:55:03 compute-0 python3.9[199850]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 06 09:55:03 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:55:03 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c20003700 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:55:03 compute-0 sudo[199847]: pam_unix(sudo:session): session closed for user root
Dec 06 09:55:04 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:55:04 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:55:04 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:55:04.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:55:04 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:55:04 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 09:55:04 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:55:04.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 09:55:04 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v407: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Dec 06 09:55:04 compute-0 sudo[200003]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tamyfcoauadpkuewkbxtlfbcpsybpahr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014904.0028787-1055-118259584580204/AnsiballZ_systemd.py'
Dec 06 09:55:04 compute-0 sudo[200003]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:55:04 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:55:04 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c20003700 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:55:04 compute-0 python3.9[200005]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 06 09:55:04 compute-0 systemd[1]: Reloading.
Dec 06 09:55:04 compute-0 systemd-rc-local-generator[200039]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 09:55:04 compute-0 systemd-sysv-generator[200042]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 09:55:04 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 09:55:05 compute-0 sudo[200003]: pam_unix(sudo:session): session closed for user root
Dec 06 09:55:05 compute-0 ceph-mon[74327]: pgmap v407: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Dec 06 09:55:05 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:55:05 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c440047c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:55:05 compute-0 sudo[200195]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qaadvngrqsgjteladrjscunuvifvlovp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014905.3315256-1163-80022827362233/AnsiballZ_systemd.py'
Dec 06 09:55:05 compute-0 sudo[200195]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:55:05 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:55:05 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c50001eb0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:55:05 compute-0 python3.9[200197]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-tls.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec 06 09:55:06 compute-0 systemd[1]: Reloading.
Dec 06 09:55:06 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:55:06 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 09:55:06 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:55:06.083 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 09:55:06 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:55:06 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 09:55:06 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:55:06.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 09:55:06 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-haproxy-nfs-cephfs-compute-0-fzuvue[96127]: [WARNING] 339/095506 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
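Here the ingress haproxy for the NFS service marks backend nfs.cephfs.1 DOWN on a Layer4 "Connection refused", leaving two of three ganesha backends; together with the grace-period messages further below (09:55:14 through 09:55:20) this looks like one ganesha rank being restarted or redeployed rather than a data-path fault. Useful checks (a sketch; the cluster id "cephfs" is inferred from the service name nfs-cephfs):

  ceph orch ps --daemon-type nfs    # state of each ganesha daemon
  ceph nfs cluster info cephfs      # ingress/backend layout of the NFS cluster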
Dec 06 09:55:06 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v408: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec 06 09:55:06 compute-0 systemd-rc-local-generator[200228]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 09:55:06 compute-0 systemd-sysv-generator[200231]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 09:55:06 compute-0 systemd[1]: Listening on libvirt proxy daemon socket.
Dec 06 09:55:06 compute-0 systemd[1]: Listening on libvirt proxy daemon TLS IP socket.
Dec 06 09:55:06 compute-0 sudo[200195]: pam_unix(sudo:session): session closed for user root
Dec 06 09:55:06 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:55:06 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c20003700 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:55:07 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:55:07.056Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec 06 09:55:07 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:55:07.056Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec 06 09:55:07 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:55:07.057Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
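Alertmanager is POSTing the active alert to ceph-dashboard webhook receivers on compute-1 and compute-2 port 8443 and timing out; since only the active mgr serves the dashboard, receivers pointed at standby hosts will keep failing exactly like this. Reachability and the real dashboard endpoint can be checked with (a sketch):

  curl -m 5 -o /dev/null -w '%{http_code}\n' http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver
  ceph mgr services    # URLs actually served by the active mgr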
Dec 06 09:55:07 compute-0 sudo[200389]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gdemmetejnlalblqjjceldbqdzzgdexi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014906.7937026-1187-104984512552058/AnsiballZ_systemd.py'
Dec 06 09:55:07 compute-0 sudo[200389]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:55:07 compute-0 ceph-mon[74327]: pgmap v408: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec 06 09:55:07 compute-0 python3.9[200391]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 06 09:55:07 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:55:07 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c2c003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:55:07 compute-0 sudo[200389]: pam_unix(sudo:session): session closed for user root
Dec 06 09:55:07 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:55:07 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c440047c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:55:08 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:55:08 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:55:08 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:55:08.088 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:55:08 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:55:08 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:55:08 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:55:08.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:55:08 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v409: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:55:08 compute-0 sudo[200545]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ovaaqigifhevvjhwboeeesdxmoxxwmve ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014907.8252625-1187-257080080858869/AnsiballZ_systemd.py'
Dec 06 09:55:08 compute-0 sudo[200545]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:55:08 compute-0 sudo[200548]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 09:55:08 compute-0 sudo[200548]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:55:08 compute-0 sudo[200548]: pam_unix(sudo:session): session closed for user root
Dec 06 09:55:08 compute-0 python3.9[200547]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 06 09:55:08 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:55:08 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c20003700 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:55:08 compute-0 sudo[200545]: pam_unix(sudo:session): session closed for user root
Dec 06 09:55:08 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 06 09:55:08 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
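This handle_command/audit pair shows a mgr module (entity mgr.compute-0.qhdjwa) polling the OSD blocklist, something modules such as volumes/nfs do periodically to reconcile evicted clients; the same poll recurs at 09:55:23 below. The equivalent CLI query:

  ceph osd blocklist ls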
Dec 06 09:55:09 compute-0 ceph-mon[74327]: pgmap v409: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:55:09 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 09:55:09 compute-0 sudo[200726]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tgiftgsqwtelyvtsvpgjwvwgzvwiddvw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014908.682359-1187-79010719712576/AnsiballZ_systemd.py'
Dec 06 09:55:09 compute-0 sudo[200726]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:55:09 compute-0 python3.9[200728]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 06 09:55:09 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:55:09 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c20003700 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:55:09 compute-0 sudo[200726]: pam_unix(sudo:session): session closed for user root
Dec 06 09:55:09 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:55:09 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c20003700 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:55:09 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 09:55:10 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:55:10 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:55:10 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:55:10.092 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:55:10 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:55:10 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:55:10 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:55:10.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:55:10 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v410: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec 06 09:55:10 compute-0 sudo[200883]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-azxjzwhdvbwoclubukhxqsruwrzuljmq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014909.864876-1187-205473320869240/AnsiballZ_systemd.py'
Dec 06 09:55:10 compute-0 sudo[200883]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:55:10 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:55:10 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c580022a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:55:10 compute-0 python3.9[200885]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 06 09:55:10 compute-0 sudo[200883]: pam_unix(sudo:session): session closed for user root
Dec 06 09:55:10 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:09:55:10] "GET /metrics HTTP/1.1" 200 48262 "" "Prometheus/2.51.0"
Dec 06 09:55:10 compute-0 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:09:55:10] "GET /metrics HTTP/1.1" 200 48262 "" "Prometheus/2.51.0"
Dec 06 09:55:11 compute-0 sudo[201039]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-anlmzgwoognybgjlwgpwwgffpltvufpr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014910.843565-1187-218007314281365/AnsiballZ_systemd.py'
Dec 06 09:55:11 compute-0 sudo[201039]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:55:11 compute-0 ceph-mon[74327]: pgmap v410: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec 06 09:55:11 compute-0 python3.9[201041]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 06 09:55:11 compute-0 sudo[201039]: pam_unix(sudo:session): session closed for user root
Dec 06 09:55:11 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:55:11 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c28003c30 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:55:11 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:55:11 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c28003c30 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:55:12 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:55:12 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:55:12 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:55:12.094 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:55:12 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e65a15d0 =====
Dec 06 09:55:12 compute-0 radosgw[94308]: ====== req done req=0x7f53e65a15d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:55:12 compute-0 radosgw[94308]: beast: 0x7f53e65a15d0: 192.168.122.100 - anonymous [06/Dec/2025:09:55:12.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:55:12 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v411: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec 06 09:55:12 compute-0 sudo[201195]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xoapcffffsyiiadpyogkbjkzxulmxbuc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014911.7624257-1187-251526588646033/AnsiballZ_systemd.py'
Dec 06 09:55:12 compute-0 sudo[201195]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:55:12 compute-0 python3.9[201197]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 06 09:55:12 compute-0 sudo[201195]: pam_unix(sudo:session): session closed for user root
Dec 06 09:55:12 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:55:12 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c20003700 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:55:13 compute-0 sudo[201351]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zbfamvxgbgsdtvdcnewrnhfyzimlagyc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014912.7286663-1187-220165464795035/AnsiballZ_systemd.py'
Dec 06 09:55:13 compute-0 sudo[201351]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:55:13 compute-0 ceph-mon[74327]: pgmap v411: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec 06 09:55:13 compute-0 python3.9[201353]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 06 09:55:13 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:55:13 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c580022a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:55:13 compute-0 sudo[201351]: pam_unix(sudo:session): session closed for user root
Dec 06 09:55:13 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:55:13 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c580022a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:55:14 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:55:14 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e65a15d0 =====
Dec 06 09:55:14 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 09:55:14 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:55:14.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 09:55:14 compute-0 radosgw[94308]: ====== req done req=0x7f53e65a15d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 09:55:14 compute-0 radosgw[94308]: beast: 0x7f53e65a15d0: 192.168.122.100 - anonymous [06/Dec/2025:09:55:14.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 09:55:14 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v412: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 85 B/s wr, 0 op/s
Dec 06 09:55:14 compute-0 sudo[201507]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pwqioaswwgzftmkzpqasqgxrynkvlnas ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014913.8914478-1187-66213496642199/AnsiballZ_systemd.py'
Dec 06 09:55:14 compute-0 sudo[201507]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:55:14 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:55:14 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 06 09:55:14 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:55:14 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c580022a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:55:14 compute-0 python3.9[201509]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 06 09:55:14 compute-0 sudo[201507]: pam_unix(sudo:session): session closed for user root
Dec 06 09:55:14 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 09:55:15 compute-0 sudo[201663]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vbyuhoamftdjljxhffzgxnznshbrzwkq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014914.8990717-1187-91875978214850/AnsiballZ_systemd.py'
Dec 06 09:55:15 compute-0 sudo[201663]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:55:15 compute-0 ceph-mon[74327]: pgmap v412: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 85 B/s wr, 0 op/s
Dec 06 09:55:15 compute-0 python3.9[201665]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 06 09:55:15 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:55:15 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c580022a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:55:15 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:55:15 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c580022a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:55:16 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:55:16 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e65a15d0 =====
Dec 06 09:55:16 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 09:55:16 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:55:16.102 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 09:55:16 compute-0 radosgw[94308]: ====== req done req=0x7f53e65a15d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:55:16 compute-0 radosgw[94308]: beast: 0x7f53e65a15d0: 192.168.122.102 - anonymous [06/Dec/2025:09:55:16.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:55:16 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v413: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Dec 06 09:55:16 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:55:16 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c20003700 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:55:16 compute-0 sudo[201663]: pam_unix(sudo:session): session closed for user root
Dec 06 09:55:17 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:55:17.058Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec 06 09:55:17 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:55:17.059Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 06 09:55:17 compute-0 sudo[201821]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fvlrikwmwtnzoymmmhexmdwpbnvufddq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014916.872734-1187-147272841608783/AnsiballZ_systemd.py'
Dec 06 09:55:17 compute-0 sudo[201821]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:55:17 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:55:17 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 06 09:55:17 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:55:17 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 06 09:55:17 compute-0 ceph-mon[74327]: pgmap v413: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Dec 06 09:55:17 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:55:17 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c580022a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:55:17 compute-0 python3.9[201823]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 06 09:55:17 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:55:17 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c28003c30 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:55:18 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:55:18 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 09:55:18 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:55:18.104 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 09:55:18 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:55:18 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:55:18 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:55:18.106 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:55:18 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v414: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 938 B/s wr, 3 op/s
Dec 06 09:55:18 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:55:18 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c28003c30 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:55:18 compute-0 sudo[201821]: pam_unix(sudo:session): session closed for user root
Dec 06 09:55:19 compute-0 sudo[201977]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xbiueiistuaiqrygfoyqkgmryccqzupe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014919.035014-1187-123590905868879/AnsiballZ_systemd.py'
Dec 06 09:55:19 compute-0 sudo[201977]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:55:19 compute-0 ceph-mon[74327]: pgmap v414: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 938 B/s wr, 3 op/s
Dec 06 09:55:19 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:55:19 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c20003700 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:55:19 compute-0 python3.9[201979]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 06 09:55:19 compute-0 sudo[201977]: pam_unix(sudo:session): session closed for user root
Dec 06 09:55:19 compute-0 podman[201982]: 2025-12-06 09:55:19.828173625 +0000 UTC m=+0.132433886 container health_status ed08b04f63d3695f0aa2f04fcc706897af4aa24f286fa3cd10a4323ee2c868ab (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3)
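The single long podman line above is a container health_status event for ovn_controller: the embedded config_data is the full container definition, and the verdict is health_status=healthy with a failing streak of 0. The same check can be driven by hand (a sketch; the inspect field name varies by podman generation, .State.Healthcheck on older releases and .State.Health on newer ones):

  podman healthcheck run ovn_controller && echo healthy
  podman inspect ovn_controller --format '{{.State.Healthcheck.Status}}'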
Dec 06 09:55:19 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:55:19 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c440047c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:55:19 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 09:55:20 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:55:20 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:55:20 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:55:20.106 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:55:20 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:55:20 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.002000054s ======
Dec 06 09:55:20 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:55:20.108 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000054s
Dec 06 09:55:20 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v415: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Dec 06 09:55:20 compute-0 sudo[202161]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uvihsatydvoauihwevnxhtvcgjenwnfd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014919.926697-1187-189838304154624/AnsiballZ_systemd.py'
Dec 06 09:55:20 compute-0 sudo[202161]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:55:20 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:55:20 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
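This closes the NFS failover arc visible above: ganesha entered a 90-second grace period at 09:55:14, reloaded client recovery info from its backend at 09:55:17, found no clients needing reclaim (clid count(0)), and lifts grace early here at 09:55:20. Grace transitions can be pulled out of the journal directly:

  journalctl | grep -E 'IN GRACE|NOT IN GRACE'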
Dec 06 09:55:20 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:55:20 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c440047c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:55:20 compute-0 python3.9[202163]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 06 09:55:20 compute-0 sudo[202161]: pam_unix(sudo:session): session closed for user root
Dec 06 09:55:20 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:09:55:20] "GET /metrics HTTP/1.1" 200 48262 "" "Prometheus/2.51.0"
Dec 06 09:55:20 compute-0 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:09:55:20] "GET /metrics HTTP/1.1" 200 48262 "" "Prometheus/2.51.0"
Dec 06 09:55:21 compute-0 sudo[202317]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cghhndbdmfwatspszlcsbxtclbdifqpe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014920.8685637-1187-224431677167975/AnsiballZ_systemd.py'
Dec 06 09:55:21 compute-0 sudo[202317]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:55:21 compute-0 ceph-mon[74327]: pgmap v415: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Dec 06 09:55:21 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:55:21 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c440047c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:55:21 compute-0 python3.9[202319]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 06 09:55:21 compute-0 sudo[202317]: pam_unix(sudo:session): session closed for user root
Dec 06 09:55:21 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:55:21 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c20003700 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:55:22 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:55:22 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:55:22 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:55:22.108 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:55:22 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:55:22 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:55:22 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:55:22.112 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:55:22 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v416: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Dec 06 09:55:22 compute-0 sudo[202473]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yhigqawkhrwhjjwcwexlbqdebvhjsgjd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014921.9082973-1187-107532628678895/AnsiballZ_systemd.py'
Dec 06 09:55:22 compute-0 sudo[202473]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:55:22 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:55:22 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c28003c30 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:55:22 compute-0 python3.9[202475]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 06 09:55:22 compute-0 ceph-mon[74327]: pgmap v416: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Dec 06 09:55:22 compute-0 sudo[202473]: pam_unix(sudo:session): session closed for user root
Dec 06 09:55:23 compute-0 sudo[202630]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zzpuujzckfyeezjzuyrtadcfuhjhnndi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014923.1777062-1493-66051877275743/AnsiballZ_file.py'
Dec 06 09:55:23 compute-0 sudo[202630]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:55:23 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:55:23 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c28003c30 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:55:23 compute-0 python3.9[202632]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/etc/tmpfiles.d/ setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec 06 09:55:23 compute-0 sudo[202630]: pam_unix(sudo:session): session closed for user root
Dec 06 09:55:23 compute-0 ceph-mgr[74618]: [balancer INFO root] Optimize plan auto_2025-12-06_09:55:23
Dec 06 09:55:23 compute-0 ceph-mgr[74618]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 06 09:55:23 compute-0 ceph-mgr[74618]: [balancer INFO root] do_upmap
Dec 06 09:55:23 compute-0 ceph-mgr[74618]: [balancer INFO root] pools ['default.rgw.meta', 'default.rgw.control', 'volumes', 'backups', '.nfs', '.mgr', 'cephfs.cephfs.data', '.rgw.root', 'default.rgw.log', 'vms', 'cephfs.cephfs.meta', 'images']
Dec 06 09:55:23 compute-0 ceph-mgr[74618]: [balancer INFO root] prepared 0/10 upmap changes
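A balancer pass: mode upmap with max misplaced 5%, all twelve pools evaluated, and "prepared 0/10 upmap changes" meaning the PG distribution already satisfies the optimizer, so no remapping is queued. Current state on demand:

  ceph balancer status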
Dec 06 09:55:23 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:55:23 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c28003c30 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:55:23 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 06 09:55:23 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 09:55:23 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 09:55:23 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 09:55:23 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 09:55:23 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 09:55:23 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 09:55:23 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 09:55:24 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 09:55:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] _maybe_adjust
Dec 06 09:55:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 09:55:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 06 09:55:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 09:55:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 09:55:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 09:55:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 09:55:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 09:55:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 09:55:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 09:55:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 09:55:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 09:55:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Dec 06 09:55:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 09:55:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 09:55:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 09:55:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec 06 09:55:24 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:55:24 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 09:55:24 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:55:24.110 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 09:55:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 09:55:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 06 09:55:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 09:55:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec 06 09:55:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 09:55:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 09:55:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 09:55:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
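The pg_autoscaler figures are internally consistent with pg_target = usage_fraction x bias x 300, where 300 is plausibly 3 OSDs x mon_target_pg_per_osd (100): for '.mgr', 7.185749983720779e-06 x 1.0 x 300 = 0.0021557, and for 'cephfs.cephfs.meta', 5.087256625643029e-07 x 4.0 x 300 = 0.00061047, matching the logged targets before quantization to the current pg_num. With usage this close to zero, every pool keeps its pg count. The same view, tabulated:

  ceph osd pool autoscale-status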
Dec 06 09:55:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 06 09:55:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 09:55:24 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:55:24 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 09:55:24 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:55:24.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 09:55:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 09:55:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 09:55:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 09:55:24 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v417: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 1023 B/s wr, 3 op/s
Dec 06 09:55:24 compute-0 sudo[202782]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rfgveizmljouohhtxnyljhgiqftqrvde ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014923.9172468-1493-214403918295130/AnsiballZ_file.py'
Dec 06 09:55:24 compute-0 sudo[202782]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:55:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 06 09:55:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 09:55:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 09:55:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 09:55:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 09:55:24 compute-0 python3.9[202784]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec 06 09:55:24 compute-0 sudo[202782]: pam_unix(sudo:session): session closed for user root
Dec 06 09:55:24 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:55:24 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c20003700 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:55:24 compute-0 sudo[202934]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qeuosabfrcoexkaxoagsoirzmzrltcsc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014924.650038-1493-4152398540673/AnsiballZ_file.py'
Dec 06 09:55:24 compute-0 sudo[202934]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:55:24 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 09:55:25 compute-0 ceph-mon[74327]: pgmap v417: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 1023 B/s wr, 3 op/s
Dec 06 09:55:25 compute-0 python3.9[202936]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 06 09:55:25 compute-0 sudo[202934]: pam_unix(sudo:session): session closed for user root
Dec 06 09:55:25 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:55:25 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c20003700 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:55:25 compute-0 sudo[203088]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xaitnimqkahbizclegyrqocbqsxeovyo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014925.3134377-1493-192996946662688/AnsiballZ_file.py'
Dec 06 09:55:25 compute-0 sudo[203088]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:55:25 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:55:25 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c28003c30 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:55:25 compute-0 python3.9[203090]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt/private setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 06 09:55:26 compute-0 sudo[203088]: pam_unix(sudo:session): session closed for user root
Dec 06 09:55:26 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-haproxy-nfs-cephfs-compute-0-fzuvue[96127]: [WARNING] 339/095526 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Dec 06 09:55:26 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:55:26 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 09:55:26 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:55:26.112 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 09:55:26 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:55:26 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 09:55:26 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:55:26.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 09:55:26 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v418: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 938 B/s wr, 2 op/s
Dec 06 09:55:26 compute-0 sudo[203240]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jfmfrrzbxihwynmpmabmnixvhzhxpbbz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014926.1869977-1493-265294918994720/AnsiballZ_file.py'
Dec 06 09:55:26 compute-0 sudo[203240]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:55:26 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:55:26 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c28003c30 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:55:26 compute-0 python3.9[203242]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/CA setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 06 09:55:26 compute-0 sudo[203240]: pam_unix(sudo:session): session closed for user root
Dec 06 09:55:27 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:55:27.059Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 06 09:55:27 compute-0 ceph-mon[74327]: pgmap v418: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 938 B/s wr, 2 op/s
Dec 06 09:55:27 compute-0 sudo[203404]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lfkzbvlmeuunasagixiedtpiguyaaxxx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014926.8802469-1493-81846704970928/AnsiballZ_file.py'
Dec 06 09:55:27 compute-0 sudo[203404]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:55:27 compute-0 podman[203367]: 2025-12-06 09:55:27.227735095 +0000 UTC m=+0.074766054 container health_status ec1e66d2175087ad7a28a9a6b5a784e30128df3f5e268959d009e364cd5120f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec 06 09:55:27 compute-0 python3.9[203411]: ansible-ansible.builtin.file Invoked with group=qemu owner=root path=/etc/pki/qemu setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec 06 09:55:27 compute-0 sudo[203404]: pam_unix(sudo:session): session closed for user root
Dec 06 09:55:27 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:55:27 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c28003c30 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:55:27 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:55:27 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c20003700 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:55:28 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:55:28 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 09:55:28 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:55:28.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 09:55:28 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:55:28 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:55:28 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:55:28.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:55:28 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v419: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 938 B/s wr, 3 op/s
Dec 06 09:55:28 compute-0 sudo[203565]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zlbpomymjyvnrtmsvbdrvpebtifemrcw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014927.643351-1622-248184091554598/AnsiballZ_stat.py'
Dec 06 09:55:28 compute-0 sudo[203565]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:55:28 compute-0 python3.9[203567]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtlogd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:55:28 compute-0 sudo[203565]: pam_unix(sudo:session): session closed for user root
Dec 06 09:55:28 compute-0 sudo[203570]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 09:55:28 compute-0 sudo[203570]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:55:28 compute-0 sudo[203570]: pam_unix(sudo:session): session closed for user root
Dec 06 09:55:28 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:55:28 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c440047c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:55:28 compute-0 auditd[701]: Audit daemon rotating log files
Dec 06 09:55:28 compute-0 sudo[203715]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ltewgvospmmvhvcnfpatgjmbgbjpceiw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014927.643351-1622-248184091554598/AnsiballZ_copy.py'
Dec 06 09:55:28 compute-0 sudo[203715]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:55:29 compute-0 python3.9[203717]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtlogd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1765014927.643351-1622-248184091554598/.source.conf follow=False _original_basename=virtlogd.conf checksum=d7a72ae92c2c205983b029473e05a6aa4c58ec24 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:55:29 compute-0 sudo[203715]: pam_unix(sudo:session): session closed for user root
Dec 06 09:55:29 compute-0 ceph-mon[74327]: pgmap v419: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 938 B/s wr, 3 op/s
Dec 06 09:55:29 compute-0 sudo[203869]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kiclkoolypskzausuluiypnecwmvchji ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014929.2686815-1622-172598953764159/AnsiballZ_stat.py'
Dec 06 09:55:29 compute-0 sudo[203869]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:55:29 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:55:29 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c28003c30 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:55:29 compute-0 python3.9[203871]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtnodedevd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:55:29 compute-0 sudo[203869]: pam_unix(sudo:session): session closed for user root
Dec 06 09:55:29 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:55:29 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c5800a4d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:55:29 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 09:55:30 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:55:30 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 09:55:30 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:55:30.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 09:55:30 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:55:30 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:55:30 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:55:30.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:55:30 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v420: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Dec 06 09:55:30 compute-0 sudo[203994]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-elqxkkpkkkaebmgrpjkfokenltiyougq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014929.2686815-1622-172598953764159/AnsiballZ_copy.py'
Dec 06 09:55:30 compute-0 sudo[203994]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:55:30 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:55:30 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c20003700 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:55:30 compute-0 python3.9[203996]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtnodedevd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1765014929.2686815-1622-172598953764159/.source.conf follow=False _original_basename=virtnodedevd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:55:30 compute-0 sudo[203994]: pam_unix(sudo:session): session closed for user root
Dec 06 09:55:30 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:09:55:30] "GET /metrics HTTP/1.1" 200 48270 "" "Prometheus/2.51.0"
Dec 06 09:55:30 compute-0 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:09:55:30] "GET /metrics HTTP/1.1" 200 48270 "" "Prometheus/2.51.0"
Dec 06 09:55:31 compute-0 sudo[204147]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uypqctelwnqpwkndpqlukxhbhwzhiown ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014930.7922695-1622-137857862385184/AnsiballZ_stat.py'
Dec 06 09:55:31 compute-0 sudo[204147]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:55:31 compute-0 python3.9[204149]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtproxyd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:55:31 compute-0 sudo[204147]: pam_unix(sudo:session): session closed for user root
Dec 06 09:55:31 compute-0 ceph-mon[74327]: pgmap v420: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Dec 06 09:55:31 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:55:31 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c440047c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:55:31 compute-0 sudo[204273]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-svqpjrgmrsveglizfiqgddkyikbyetdv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014930.7922695-1622-137857862385184/AnsiballZ_copy.py'
Dec 06 09:55:31 compute-0 sudo[204273]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:55:31 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:55:31 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c28003dd0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:55:32 compute-0 python3.9[204275]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtproxyd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1765014930.7922695-1622-137857862385184/.source.conf follow=False _original_basename=virtproxyd.conf checksum=28bc484b7c9988e03de49d4fcc0a088ea975f716 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:55:32 compute-0 sudo[204273]: pam_unix(sudo:session): session closed for user root
Dec 06 09:55:32 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:55:32 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 09:55:32 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:55:32.119 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 09:55:32 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v421: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Dec 06 09:55:32 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:55:32 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:55:32 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:55:32.124 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:55:32 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:55:32 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c5800a4d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:55:32 compute-0 sudo[204425]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-igihzealxkffjiaypoxhzxvmonlkfgik ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014932.2459354-1622-180131275088592/AnsiballZ_stat.py'
Dec 06 09:55:32 compute-0 sudo[204425]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:55:32 compute-0 python3.9[204427]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtqemud.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:55:32 compute-0 sudo[204425]: pam_unix(sudo:session): session closed for user root
Dec 06 09:55:33 compute-0 sudo[204551]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-puzcsvfhilepdptwkhbzuprmmkzkrsfq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014932.2459354-1622-180131275088592/AnsiballZ_copy.py'
Dec 06 09:55:33 compute-0 sudo[204551]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:55:33 compute-0 ceph-mon[74327]: pgmap v421: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Dec 06 09:55:33 compute-0 python3.9[204553]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtqemud.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1765014932.2459354-1622-180131275088592/.source.conf follow=False _original_basename=virtqemud.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:55:33 compute-0 sudo[204551]: pam_unix(sudo:session): session closed for user root
Dec 06 09:55:33 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:55:33 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c5800a4d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:55:33 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:55:33 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c440047c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:55:34 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:55:34 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:55:34 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:55:34.122 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:55:34 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v422: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Dec 06 09:55:34 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:55:34 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 09:55:34 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:55:34.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 09:55:34 compute-0 sudo[204704]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wraglaretybvrrygzstrmdaqnjivoryv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014933.7830455-1622-4009720158039/AnsiballZ_stat.py'
Dec 06 09:55:34 compute-0 sudo[204704]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:55:34 compute-0 python3.9[204706]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/qemu.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:55:34 compute-0 sudo[204704]: pam_unix(sudo:session): session closed for user root
Dec 06 09:55:34 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:55:34 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c28003df0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:55:34 compute-0 sudo[204829]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-whfmyftylpkyjvpcfwrfpulruyulxtdt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014933.7830455-1622-4009720158039/AnsiballZ_copy.py'
Dec 06 09:55:34 compute-0 sudo[204829]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:55:34 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 09:55:35 compute-0 python3.9[204831]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/qemu.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1765014933.7830455-1622-4009720158039/.source.conf follow=False _original_basename=qemu.conf.j2 checksum=c44de21af13c90603565570f09ff60c6a41ed8df backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:55:35 compute-0 sudo[204829]: pam_unix(sudo:session): session closed for user root
Dec 06 09:55:35 compute-0 ceph-mon[74327]: pgmap v422: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Dec 06 09:55:35 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:55:35 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c5800a4d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:55:35 compute-0 sudo[204983]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ubqiigviouxifsjishsbjozsuvlgrpdw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014935.3510785-1622-62454991196010/AnsiballZ_stat.py'
Dec 06 09:55:35 compute-0 sudo[204983]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:55:35 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:55:35 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c200038e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:55:35 compute-0 python3.9[204985]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtsecretd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:55:35 compute-0 sudo[204983]: pam_unix(sudo:session): session closed for user root
Dec 06 09:55:36 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v423: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Dec 06 09:55:36 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:55:36 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 09:55:36 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:55:36.124 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 09:55:36 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:55:36 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 09:55:36 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:55:36.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 09:55:36 compute-0 sudo[205108]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kseybovucsbrchfdptbrcmckovqcopvt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014935.3510785-1622-62454991196010/AnsiballZ_copy.py'
Dec 06 09:55:36 compute-0 sudo[205108]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:55:36 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:55:36 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c440047c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:55:36 compute-0 python3.9[205110]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtsecretd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1765014935.3510785-1622-62454991196010/.source.conf follow=False _original_basename=virtsecretd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:55:36 compute-0 sudo[205108]: pam_unix(sudo:session): session closed for user root
Dec 06 09:55:37 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:55:37.060Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 06 09:55:37 compute-0 sudo[205261]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nxmibbdiiejqckqhvzzvmaanworgiosm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014936.841242-1622-279820373136681/AnsiballZ_stat.py'
Dec 06 09:55:37 compute-0 sudo[205261]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:55:37 compute-0 python3.9[205263]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/auth.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:55:37 compute-0 sudo[205261]: pam_unix(sudo:session): session closed for user root
Dec 06 09:55:37 compute-0 ceph-mon[74327]: pgmap v423: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Dec 06 09:55:37 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:55:37 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c28003e10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:55:37 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:55:37 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c5800a4d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:55:37 compute-0 sudo[205385]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uljeqwgvfcdvvunbuoopbxbvphugseqk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014936.841242-1622-279820373136681/AnsiballZ_copy.py'
Dec 06 09:55:37 compute-0 sudo[205385]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:55:38 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v424: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Dec 06 09:55:38 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:55:38 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 09:55:38 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:55:38.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 09:55:38 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:55:38 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 09:55:38 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:55:38.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 09:55:38 compute-0 python3.9[205387]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/auth.conf group=libvirt mode=0600 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1765014936.841242-1622-279820373136681/.source.conf follow=False _original_basename=auth.conf checksum=a94cd818c374cec2c8425b70d2e0e2f41b743ae4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:55:38 compute-0 sudo[205385]: pam_unix(sudo:session): session closed for user root
Dec 06 09:55:38 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:55:38 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c20003900 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:55:38 compute-0 sudo[205537]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cqssdghtitbahedelpgiykwnwvqhqdhq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014938.3662996-1622-245092066287171/AnsiballZ_stat.py'
Dec 06 09:55:38 compute-0 sudo[205537]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:55:38 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 06 09:55:38 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 09:55:39 compute-0 python3.9[205539]: ansible-ansible.legacy.stat Invoked with path=/etc/sasl2/libvirt.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:55:39 compute-0 sudo[205537]: pam_unix(sudo:session): session closed for user root
Dec 06 09:55:39 compute-0 sudo[205664]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hhgavvwngdybtlrufapryfppjieuymfz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014938.3662996-1622-245092066287171/AnsiballZ_copy.py'
Dec 06 09:55:39 compute-0 sudo[205664]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:55:39 compute-0 ceph-mon[74327]: pgmap v424: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Dec 06 09:55:39 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 09:55:39 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:55:39 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c440047c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:55:39 compute-0 python3.9[205666]: ansible-ansible.legacy.copy Invoked with dest=/etc/sasl2/libvirt.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1765014938.3662996-1622-245092066287171/.source.conf follow=False _original_basename=sasl_libvirt.conf checksum=652e4d404bf79253d06956b8e9847c9364979d4a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:55:39 compute-0 sudo[205664]: pam_unix(sudo:session): session closed for user root
Dec 06 09:55:39 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:55:39 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c28003e30 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:55:39 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 09:55:40 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v425: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:55:40 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:55:40 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:55:40 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:55:40.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:55:40 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:55:40 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:55:40 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:55:40.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:55:40 compute-0 sudo[205816]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xeknbopksemydyquyvyvrafavhrrcjgp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014940.0077758-1961-227734976573455/AnsiballZ_command.py'
Dec 06 09:55:40 compute-0 sudo[205816]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:55:40 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:55:40 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c5800a4f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:55:40 compute-0 python3.9[205818]: ansible-ansible.legacy.command Invoked with cmd=saslpasswd2 -f /etc/libvirt/passwd.db -p -a libvirt -u openstack migration stdin=12345678 _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None
Dec 06 09:55:40 compute-0 sudo[205816]: pam_unix(sudo:session): session closed for user root
Dec 06 09:55:40 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:09:55:40] "GET /metrics HTTP/1.1" 200 48264 "" "Prometheus/2.51.0"
Dec 06 09:55:40 compute-0 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:09:55:40] "GET /metrics HTTP/1.1" 200 48264 "" "Prometheus/2.51.0"
Dec 06 09:55:41 compute-0 sudo[205970]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ecjdyfsoqqsioxbfyqvzfibwlkyrgqjn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014940.8749583-1988-256842094184912/AnsiballZ_file.py'
Dec 06 09:55:41 compute-0 sudo[205970]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:55:41 compute-0 python3.9[205972]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:55:41 compute-0 ceph-mon[74327]: pgmap v425: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:55:41 compute-0 sudo[205970]: pam_unix(sudo:session): session closed for user root
Dec 06 09:55:41 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:55:41 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c20003920 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:55:41 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:55:41 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c440047c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:55:42 compute-0 sudo[206125]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tutycsuwgpwakydxkkdykgmabnhrefri ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014941.7320096-1988-166435826982267/AnsiballZ_file.py'
Dec 06 09:55:42 compute-0 sudo[206125]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:55:42 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v426: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:55:42 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:55:42 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:55:42 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:55:42.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:55:42 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:55:42 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:55:42 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:55:42.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:55:42 compute-0 sudo[206128]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 09:55:42 compute-0 sudo[206128]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:55:42 compute-0 sudo[206128]: pam_unix(sudo:session): session closed for user root
Dec 06 09:55:42 compute-0 python3.9[206127]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:55:42 compute-0 sudo[206125]: pam_unix(sudo:session): session closed for user root
Dec 06 09:55:42 compute-0 sudo[206153]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ls
Dec 06 09:55:42 compute-0 sudo[206153]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:55:42 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:55:42 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c50001ac0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:55:42 compute-0 sudo[206371]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wusupeltdsikfqwwwnupvlepfghdodzx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014942.5419586-1988-188110626508146/AnsiballZ_file.py'
Dec 06 09:55:42 compute-0 sudo[206371]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:55:43 compute-0 python3.9[206380]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:55:43 compute-0 sudo[206371]: pam_unix(sudo:session): session closed for user root
Dec 06 09:55:43 compute-0 podman[206402]: 2025-12-06 09:55:43.093397749 +0000 UTC m=+0.077591031 container exec 484d6ed1039c50317cf4b6067525b7ed0f8de7c568c9445500e62194ab25d04d (image=quay.io/ceph/ceph:v19, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mon-compute-0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 09:55:43 compute-0 podman[206402]: 2025-12-06 09:55:43.214663741 +0000 UTC m=+0.198857003 container exec_died 484d6ed1039c50317cf4b6067525b7ed0f8de7c568c9445500e62194ab25d04d (image=quay.io/ceph/ceph:v19, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mon-compute-0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 09:55:43 compute-0 sudo[206637]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-evxgwzxvtemejvixdxgmybnufumouhoe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014943.2342548-1988-276370863217671/AnsiballZ_file.py'
Dec 06 09:55:43 compute-0 sudo[206637]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:55:43 compute-0 ceph-mon[74327]: pgmap v426: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:55:43 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:55:43 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c5800a5a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:55:43 compute-0 podman[206675]: 2025-12-06 09:55:43.68283437 +0000 UTC m=+0.050672292 container exec 43e1f8986e07f4e6b99d6750812eff4d21013fd9f773d9f6d6eef82549df3333 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 06 09:55:43 compute-0 python3.9[206645]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:55:43 compute-0 podman[206675]: 2025-12-06 09:55:43.690816556 +0000 UTC m=+0.058654478 container exec_died 43e1f8986e07f4e6b99d6750812eff4d21013fd9f773d9f6d6eef82549df3333 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 06 09:55:43 compute-0 sudo[206637]: pam_unix(sudo:session): session closed for user root
Dec 06 09:55:43 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:55:43 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c20003920 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:55:44 compute-0 podman[206850]: 2025-12-06 09:55:44.015348098 +0000 UTC m=+0.055108282 container exec c3b0a1339520eec10382627c7e3dcec6ee5222c80f6eb2808f2db40456331732 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Dec 06 09:55:44 compute-0 podman[206850]: 2025-12-06 09:55:44.054933769 +0000 UTC m=+0.094693943 container exec_died c3b0a1339520eec10382627c7e3dcec6ee5222c80f6eb2808f2db40456331732 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True, org.label-schema.schema-version=1.0)
Dec 06 09:55:44 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v427: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Dec 06 09:55:44 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:55:44 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:55:44 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:55:44.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:55:44 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:55:44 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:55:44 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:55:44.138 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:55:44 compute-0 sudo[206964]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gnoumiznqvpzrtgrrowfvvdxezdsrutc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014943.8603601-1988-108608371155944/AnsiballZ_file.py'
Dec 06 09:55:44 compute-0 sudo[206964]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:55:44 compute-0 podman[206981]: 2025-12-06 09:55:44.254078718 +0000 UTC m=+0.052295026 container exec 0300cb0bc272de309f3d242ba0627369d0948f1b63b3476dccdba4375a8e539d (image=quay.io/ceph/haproxy:2.3, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-haproxy-nfs-cephfs-compute-0-fzuvue)
Dec 06 09:55:44 compute-0 podman[206981]: 2025-12-06 09:55:44.264778908 +0000 UTC m=+0.062995206 container exec_died 0300cb0bc272de309f3d242ba0627369d0948f1b63b3476dccdba4375a8e539d (image=quay.io/ceph/haproxy:2.3, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-haproxy-nfs-cephfs-compute-0-fzuvue)
Dec 06 09:55:44 compute-0 python3.9[206966]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:55:44 compute-0 sudo[206964]: pam_unix(sudo:session): session closed for user root
Dec 06 09:55:44 compute-0 podman[207052]: 2025-12-06 09:55:44.471867482 +0000 UTC m=+0.056543871 container exec d7d5239f75d84aa9a07cad1cdfa31e3b4f3983263aaaa27687e6c7454ab8fe3f (image=quay.io/ceph/keepalived:2.2.4, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-keepalived-nfs-cephfs-compute-0-ylrrzf, distribution-scope=public, build-date=2023-02-22T09:23:20, summary=Provides keepalived on RHEL 9 for Ceph., vcs-type=git, name=keepalived, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.display-name=Keepalived on RHEL 9, release=1793, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, com.redhat.component=keepalived-container, version=2.2.4, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=keepalived for Ceph, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.28.2, io.openshift.expose-services=, io.openshift.tags=Ceph keepalived, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vendor=Red Hat, Inc.)
Dec 06 09:55:44 compute-0 podman[207052]: 2025-12-06 09:55:44.480867616 +0000 UTC m=+0.065543935 container exec_died d7d5239f75d84aa9a07cad1cdfa31e3b4f3983263aaaa27687e6c7454ab8fe3f (image=quay.io/ceph/keepalived:2.2.4, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-keepalived-nfs-cephfs-compute-0-ylrrzf, io.openshift.tags=Ceph keepalived, vcs-type=git, name=keepalived, io.k8s.display-name=Keepalived on RHEL 9, version=2.2.4, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, com.redhat.component=keepalived-container, description=keepalived for Ceph, release=1793, vendor=Red Hat, Inc., distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, build-date=2023-02-22T09:23:20, io.openshift.expose-services=, summary=Provides keepalived on RHEL 9 for Ceph., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.28.2)
Dec 06 09:55:44 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:55:44 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c440047c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
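This TIRPC event recurs throughout the section (svc_13, svc_14, svc_17, svc_19, always fd 48); the bare "%" in "rlen = %" appears to be an unexpanded placeholder in ganesha's own log format string and is reproduced verbatim. The pattern is consistent with the colocated haproxy/keepalived health checks opening TCP connections that ganesha expects to begin with a PROXY protocol preamble and closing them before a complete header arrives, though the log alone cannot confirm that. For orientation only, a hedged sketch of a well-formed PROXY v1 preamble (addresses and ports are placeholders):

    import socket

    # Sketch: PROXY protocol v1 sends one ASCII line before the real payload:
    #   PROXY TCP4 <src_ip> <dst_ip> <src_port> <dst_port>\r\n
    def connect_with_proxy_v1(dst_ip, dst_port, src_ip, src_port):
        s = socket.create_connection((dst_ip, dst_port))
        preamble = f"PROXY TCP4 {src_ip} {dst_ip} {src_port} {dst_port}\r\n"
        s.sendall(preamble.encode("ascii"))
        return s  # caller then speaks the actual protocol (NFS over TCP here)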
Dec 06 09:55:44 compute-0 ceph-mon[74327]: pgmap v427: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Dec 06 09:55:44 compute-0 podman[207194]: 2025-12-06 09:55:44.686942612 +0000 UTC m=+0.052993535 container exec b0127b2874845862d1ff8231029cda7f8d9811cefe028a677c06060e923a3641 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 06 09:55:44 compute-0 podman[207194]: 2025-12-06 09:55:44.745784854 +0000 UTC m=+0.111835787 container exec_died b0127b2874845862d1ff8231029cda7f8d9811cefe028a677c06060e923a3641 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 06 09:55:44 compute-0 sudo[207301]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-irvfcejmafvmbayrqudhrazjckctsxkw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014944.52473-1988-264159308931689/AnsiballZ_file.py'
Dec 06 09:55:44 compute-0 sudo[207301]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:55:44 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 09:55:45 compute-0 podman[207328]: 2025-12-06 09:55:45.009567993 +0000 UTC m=+0.079548504 container exec fc223e2a5fd06c66f839f6f48305e72a1403c44b345b53752763fbbf064c41b3 (image=quay.io/ceph/grafana:10.4.0, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Dec 06 09:55:45 compute-0 python3.9[207314]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:55:45 compute-0 sudo[207301]: pam_unix(sudo:session): session closed for user root
Dec 06 09:55:45 compute-0 podman[207328]: 2025-12-06 09:55:45.205424603 +0000 UTC m=+0.275405154 container exec_died fc223e2a5fd06c66f839f6f48305e72a1403c44b345b53752763fbbf064c41b3 (image=quay.io/ceph/grafana:10.4.0, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Dec 06 09:55:45 compute-0 sudo[207594]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tobunavweksvclrqblqbgjtpowtemigp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014945.2634885-1988-13733999887075/AnsiballZ_file.py'
Dec 06 09:55:45 compute-0 sudo[207594]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:55:45 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:55:45 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c50001ac0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:55:45 compute-0 podman[207570]: 2025-12-06 09:55:45.625926002 +0000 UTC m=+0.056474220 container exec cfe4d69091434e5154fa760292bba767b8875965fa71cf21268b9ec1632f0d9e (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 06 09:55:45 compute-0 podman[207570]: 2025-12-06 09:55:45.662838581 +0000 UTC m=+0.093386799 container exec_died cfe4d69091434e5154fa760292bba767b8875965fa71cf21268b9ec1632f0d9e (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 06 09:55:45 compute-0 sudo[206153]: pam_unix(sudo:session): session closed for user root
Dec 06 09:55:45 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 06 09:55:45 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:55:45 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 06 09:55:45 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:55:45 compute-0 python3.9[207602]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:55:45 compute-0 sudo[207631]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 09:55:45 compute-0 sudo[207631]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:55:45 compute-0 sudo[207631]: pam_unix(sudo:session): session closed for user root
Dec 06 09:55:45 compute-0 sudo[207594]: pam_unix(sudo:session): session closed for user root
Dec 06 09:55:45 compute-0 sudo[207656]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Dec 06 09:55:45 compute-0 sudo[207656]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:55:45 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:55:45 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c5800a5c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:55:46 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v428: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:55:46 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:55:46 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:55:46 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:55:46.135 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:55:46 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:55:46 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:55:46 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:55:46.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:55:46 compute-0 sudo[207848]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qktdinyfdgdtqkjoittcicghoyozbzhd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014945.9617198-1988-224832219584557/AnsiballZ_file.py'
Dec 06 09:55:46 compute-0 sudo[207848]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:55:46 compute-0 sudo[207656]: pam_unix(sudo:session): session closed for user root
Dec 06 09:55:46 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 06 09:55:46 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 09:55:46 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 06 09:55:46 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 09:55:46 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 06 09:55:46 compute-0 python3.9[207852]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:55:46 compute-0 sudo[207848]: pam_unix(sudo:session): session closed for user root
Dec 06 09:55:46 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:55:46 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c20003920 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:55:46 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:55:46 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Dec 06 09:55:46 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:55:46 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:55:46 compute-0 ceph-mon[74327]: pgmap v428: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:55:46 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 09:55:46 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 09:55:46 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:55:46 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 06 09:55:46 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 09:55:46 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 06 09:55:46 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 09:55:46 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 06 09:55:46 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 09:55:46 compute-0 sudo[207965]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 09:55:46 compute-0 sudo[207965]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:55:46 compute-0 sudo[207965]: pam_unix(sudo:session): session closed for user root
Dec 06 09:55:46 compute-0 sudo[207990]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 5ecd3f74-dade-5fc4-92ce-8950ae424258 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Dec 06 09:55:46 compute-0 sudo[207990]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:55:47 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:55:47.061Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
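Alertmanager's dispatcher gave up notifying the two Ceph dashboard webhook receivers after two attempts each; "context deadline exceeded" means the POSTs timed out rather than being refused. A hedged way to check reachability is to stand in for a receiver with a trivial HTTP listener (URL path and port copied from the log line above; the alert payload is treated as opaque bytes):

    from http.server import BaseHTTPRequestHandler, HTTPServer

    # Stand-in for http://compute-N.ctlplane.example.com:8443/api/prometheus_receiver:
    # accept any POST body and acknowledge with 200 so the notify can succeed.
    class Receiver(BaseHTTPRequestHandler):
        def do_POST(self):
            body = self.rfile.read(int(self.headers.get("Content-Length", 0)))
            print(f"{self.path}: {len(body)} bytes")
            self.send_response(200)
            self.end_headers()

    if __name__ == "__main__":
        HTTPServer(("", 8443), Receiver).serve_forever()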
Dec 06 09:55:47 compute-0 sudo[208101]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ftygaxpzgrmuggakfvpojhmnmoahajrx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014946.5771399-1988-250580035632892/AnsiballZ_file.py'
Dec 06 09:55:47 compute-0 sudo[208101]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:55:47 compute-0 podman[208108]: 2025-12-06 09:55:47.333688296 +0000 UTC m=+0.046549710 container create e785b8b48e74ef7b237b66aa3b6022a65228a5913c12e6f8573d382d23938600 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_lalande, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, ceph=True)
Dec 06 09:55:47 compute-0 systemd[1]: Started libpod-conmon-e785b8b48e74ef7b237b66aa3b6022a65228a5913c12e6f8573d382d23938600.scope.
Dec 06 09:55:47 compute-0 podman[208108]: 2025-12-06 09:55:47.313587142 +0000 UTC m=+0.026448586 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 06 09:55:47 compute-0 systemd[1]: Started libcrun container.
Dec 06 09:55:47 compute-0 podman[208108]: 2025-12-06 09:55:47.427464034 +0000 UTC m=+0.140325478 container init e785b8b48e74ef7b237b66aa3b6022a65228a5913c12e6f8573d382d23938600 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_lalande, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec 06 09:55:47 compute-0 podman[208108]: 2025-12-06 09:55:47.436766125 +0000 UTC m=+0.149627539 container start e785b8b48e74ef7b237b66aa3b6022a65228a5913c12e6f8573d382d23938600 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_lalande, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True)
Dec 06 09:55:47 compute-0 python3.9[208107]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:55:47 compute-0 podman[208108]: 2025-12-06 09:55:47.440031004 +0000 UTC m=+0.152892408 container attach e785b8b48e74ef7b237b66aa3b6022a65228a5913c12e6f8573d382d23938600 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_lalande, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 09:55:47 compute-0 nifty_lalande[208125]: 167 167
Dec 06 09:55:47 compute-0 systemd[1]: libpod-e785b8b48e74ef7b237b66aa3b6022a65228a5913c12e6f8573d382d23938600.scope: Deactivated successfully.
Dec 06 09:55:47 compute-0 conmon[208125]: conmon e785b8b48e74ef7b237b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e785b8b48e74ef7b237b66aa3b6022a65228a5913c12e6f8573d382d23938600.scope/container/memory.events
Dec 06 09:55:47 compute-0 podman[208108]: 2025-12-06 09:55:47.444565127 +0000 UTC m=+0.157426551 container died e785b8b48e74ef7b237b66aa3b6022a65228a5913c12e6f8573d382d23938600 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_lalande, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True)
Dec 06 09:55:47 compute-0 sudo[208101]: pam_unix(sudo:session): session closed for user root
Dec 06 09:55:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-f28ba16870bf5525c4a33c90562aa23dd9198028b5eb04beb59eeec75b0bbd4d-merged.mount: Deactivated successfully.
Dec 06 09:55:47 compute-0 podman[208108]: 2025-12-06 09:55:47.487409646 +0000 UTC m=+0.200271060 container remove e785b8b48e74ef7b237b66aa3b6022a65228a5913c12e6f8573d382d23938600 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_lalande, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 06 09:55:47 compute-0 systemd[1]: libpod-conmon-e785b8b48e74ef7b237b66aa3b6022a65228a5913c12e6f8573d382d23938600.scope: Deactivated successfully.
Dec 06 09:55:47 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:55:47 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c20003920 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:55:47 compute-0 podman[208198]: 2025-12-06 09:55:47.648413963 +0000 UTC m=+0.045134122 container create 50d922ec3509b229c37f12132453ef88c5883ccdbbb0b01ae28f16331c8e8236 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_wilbur, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 09:55:47 compute-0 systemd[1]: Started libpod-conmon-50d922ec3509b229c37f12132453ef88c5883ccdbbb0b01ae28f16331c8e8236.scope.
Dec 06 09:55:47 compute-0 podman[208198]: 2025-12-06 09:55:47.629890182 +0000 UTC m=+0.026610361 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 06 09:55:47 compute-0 systemd[1]: Started libcrun container.
Dec 06 09:55:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8b89be88a57b722b00b1288f3dca62438dcc5ff595485c2be00232ad257a86d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 09:55:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8b89be88a57b722b00b1288f3dca62438dcc5ff595485c2be00232ad257a86d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 09:55:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8b89be88a57b722b00b1288f3dca62438dcc5ff595485c2be00232ad257a86d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 09:55:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8b89be88a57b722b00b1288f3dca62438dcc5ff595485c2be00232ad257a86d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 09:55:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8b89be88a57b722b00b1288f3dca62438dcc5ff595485c2be00232ad257a86d/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 06 09:55:47 compute-0 podman[208198]: 2025-12-06 09:55:47.753078245 +0000 UTC m=+0.149798414 container init 50d922ec3509b229c37f12132453ef88c5883ccdbbb0b01ae28f16331c8e8236 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_wilbur, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True)
Dec 06 09:55:47 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:55:47 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:55:47 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 09:55:47 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 09:55:47 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 09:55:47 compute-0 podman[208198]: 2025-12-06 09:55:47.760803345 +0000 UTC m=+0.157523504 container start 50d922ec3509b229c37f12132453ef88c5883ccdbbb0b01ae28f16331c8e8236 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_wilbur, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default)
Dec 06 09:55:47 compute-0 podman[208198]: 2025-12-06 09:55:47.764633088 +0000 UTC m=+0.161353347 container attach 50d922ec3509b229c37f12132453ef88c5883ccdbbb0b01ae28f16331c8e8236 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_wilbur, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Dec 06 09:55:47 compute-0 sudo[208321]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aqksnfuvdzvpglgawrwzpvzesptzurlx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014947.5949292-1988-192455280617183/AnsiballZ_file.py'
Dec 06 09:55:47 compute-0 sudo[208321]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:55:47 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:55:47 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c440047c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:55:48 compute-0 python3.9[208323]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:55:48 compute-0 busy_wilbur[208266]: --> passed data devices: 0 physical, 1 LVM
Dec 06 09:55:48 compute-0 busy_wilbur[208266]: --> All data devices are unavailable
Dec 06 09:55:48 compute-0 sudo[208321]: pam_unix(sudo:session): session closed for user root
Dec 06 09:55:48 compute-0 systemd[1]: libpod-50d922ec3509b229c37f12132453ef88c5883ccdbbb0b01ae28f16331c8e8236.scope: Deactivated successfully.
Dec 06 09:55:48 compute-0 podman[208198]: 2025-12-06 09:55:48.108031131 +0000 UTC m=+0.504751310 container died 50d922ec3509b229c37f12132453ef88c5883ccdbbb0b01ae28f16331c8e8236 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_wilbur, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Dec 06 09:55:48 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v429: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Dec 06 09:55:48 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:55:48 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:55:48 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:55:48.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:55:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-f8b89be88a57b722b00b1288f3dca62438dcc5ff595485c2be00232ad257a86d-merged.mount: Deactivated successfully.
Dec 06 09:55:48 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:55:48 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 09:55:48 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:55:48.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 09:55:48 compute-0 podman[208198]: 2025-12-06 09:55:48.154748515 +0000 UTC m=+0.551468674 container remove 50d922ec3509b229c37f12132453ef88c5883ccdbbb0b01ae28f16331c8e8236 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_wilbur, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 09:55:48 compute-0 systemd[1]: libpod-conmon-50d922ec3509b229c37f12132453ef88c5883ccdbbb0b01ae28f16331c8e8236.scope: Deactivated successfully.
Dec 06 09:55:48 compute-0 sudo[207990]: pam_unix(sudo:session): session closed for user root
Dec 06 09:55:48 compute-0 sudo[208366]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 09:55:48 compute-0 sudo[208366]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:55:48 compute-0 sudo[208366]: pam_unix(sudo:session): session closed for user root
Dec 06 09:55:48 compute-0 sudo[208417]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 5ecd3f74-dade-5fc4-92ce-8950ae424258 -- lvm list --format json
Dec 06 09:55:48 compute-0 sudo[208417]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:55:48 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:55:48 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c2c002690 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:55:48 compute-0 sudo[208524]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 09:55:48 compute-0 sudo[208580]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kzebzbnezlymztcngqkgooicaypyavae ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014948.301292-1988-22862330299952/AnsiballZ_file.py'
Dec 06 09:55:48 compute-0 sudo[208524]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:55:48 compute-0 sudo[208580]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:55:48 compute-0 sudo[208524]: pam_unix(sudo:session): session closed for user root
Dec 06 09:55:48 compute-0 ceph-mon[74327]: pgmap v429: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Dec 06 09:55:48 compute-0 python3.9[208591]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:55:48 compute-0 podman[208613]: 2025-12-06 09:55:48.804838777 +0000 UTC m=+0.064442095 container create b5c887abc200c829b4f318893d8530dd55acd23983a6bbb5bccbb22e47603ce9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_wilbur, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 06 09:55:48 compute-0 sudo[208580]: pam_unix(sudo:session): session closed for user root
Dec 06 09:55:48 compute-0 systemd[1]: Started libpod-conmon-b5c887abc200c829b4f318893d8530dd55acd23983a6bbb5bccbb22e47603ce9.scope.
Dec 06 09:55:48 compute-0 podman[208613]: 2025-12-06 09:55:48.770748314 +0000 UTC m=+0.030351712 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 06 09:55:48 compute-0 systemd[1]: Started libcrun container.
Dec 06 09:55:48 compute-0 podman[208613]: 2025-12-06 09:55:48.899205141 +0000 UTC m=+0.158808449 container init b5c887abc200c829b4f318893d8530dd55acd23983a6bbb5bccbb22e47603ce9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_wilbur, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 06 09:55:48 compute-0 podman[208613]: 2025-12-06 09:55:48.905632984 +0000 UTC m=+0.165236282 container start b5c887abc200c829b4f318893d8530dd55acd23983a6bbb5bccbb22e47603ce9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_wilbur, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 09:55:48 compute-0 podman[208613]: 2025-12-06 09:55:48.909068068 +0000 UTC m=+0.168671356 container attach b5c887abc200c829b4f318893d8530dd55acd23983a6bbb5bccbb22e47603ce9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_wilbur, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 09:55:48 compute-0 frosty_wilbur[208630]: 167 167
Dec 06 09:55:48 compute-0 systemd[1]: libpod-b5c887abc200c829b4f318893d8530dd55acd23983a6bbb5bccbb22e47603ce9.scope: Deactivated successfully.
Dec 06 09:55:48 compute-0 conmon[208630]: conmon b5c887abc200c829b4f3 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b5c887abc200c829b4f318893d8530dd55acd23983a6bbb5bccbb22e47603ce9.scope/container/memory.events
Dec 06 09:55:48 compute-0 podman[208613]: 2025-12-06 09:55:48.912223553 +0000 UTC m=+0.171826841 container died b5c887abc200c829b4f318893d8530dd55acd23983a6bbb5bccbb22e47603ce9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_wilbur, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True)
Dec 06 09:55:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-1e7fe66a378f44485885686395de6d436271d6b0bcb4b7e48340f8e371191500-merged.mount: Deactivated successfully.
Dec 06 09:55:48 compute-0 podman[208613]: 2025-12-06 09:55:48.948410613 +0000 UTC m=+0.208013901 container remove b5c887abc200c829b4f318893d8530dd55acd23983a6bbb5bccbb22e47603ce9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_wilbur, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 09:55:48 compute-0 systemd[1]: libpod-conmon-b5c887abc200c829b4f318893d8530dd55acd23983a6bbb5bccbb22e47603ce9.scope: Deactivated successfully.
Dec 06 09:55:49 compute-0 podman[208730]: 2025-12-06 09:55:49.117302933 +0000 UTC m=+0.039695515 container create 19ea4b45fbaaf68aef28c26fe3faf81b2b91040f70f2ef055335d907272a93ab (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_driscoll, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Dec 06 09:55:49 compute-0 systemd[1]: Started libpod-conmon-19ea4b45fbaaf68aef28c26fe3faf81b2b91040f70f2ef055335d907272a93ab.scope.
Dec 06 09:55:49 compute-0 podman[208730]: 2025-12-06 09:55:49.099553753 +0000 UTC m=+0.021946355 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 06 09:55:49 compute-0 systemd[1]: Started libcrun container.
Dec 06 09:55:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5c860b10170991bcc73349f6db4860167a6fb43e510126a8c23e1a959416e64/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 09:55:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5c860b10170991bcc73349f6db4860167a6fb43e510126a8c23e1a959416e64/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 09:55:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5c860b10170991bcc73349f6db4860167a6fb43e510126a8c23e1a959416e64/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 09:55:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5c860b10170991bcc73349f6db4860167a6fb43e510126a8c23e1a959416e64/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 09:55:49 compute-0 podman[208730]: 2025-12-06 09:55:49.218529572 +0000 UTC m=+0.140922224 container init 19ea4b45fbaaf68aef28c26fe3faf81b2b91040f70f2ef055335d907272a93ab (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_driscoll, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1)
Dec 06 09:55:49 compute-0 podman[208730]: 2025-12-06 09:55:49.235922853 +0000 UTC m=+0.158315425 container start 19ea4b45fbaaf68aef28c26fe3faf81b2b91040f70f2ef055335d907272a93ab (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_driscoll, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 09:55:49 compute-0 podman[208730]: 2025-12-06 09:55:49.239441199 +0000 UTC m=+0.161833891 container attach 19ea4b45fbaaf68aef28c26fe3faf81b2b91040f70f2ef055335d907272a93ab (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_driscoll, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Dec 06 09:55:49 compute-0 sudo[208824]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xannwsfjqkdctwqyvpwmiezgsapwgzsj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014948.969446-1988-163785851285760/AnsiballZ_file.py'
Dec 06 09:55:49 compute-0 sudo[208824]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:55:49 compute-0 python3.9[208826]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:55:49 compute-0 kind_driscoll[208770]: {
Dec 06 09:55:49 compute-0 kind_driscoll[208770]:     "1": [
Dec 06 09:55:49 compute-0 kind_driscoll[208770]:         {
Dec 06 09:55:49 compute-0 kind_driscoll[208770]:             "devices": [
Dec 06 09:55:49 compute-0 kind_driscoll[208770]:                 "/dev/loop3"
Dec 06 09:55:49 compute-0 kind_driscoll[208770]:             ],
Dec 06 09:55:49 compute-0 kind_driscoll[208770]:             "lv_name": "ceph_lv0",
Dec 06 09:55:49 compute-0 kind_driscoll[208770]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 09:55:49 compute-0 kind_driscoll[208770]:             "lv_size": "21470642176",
Dec 06 09:55:49 compute-0 kind_driscoll[208770]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=a55pcS-v3Vt-CJ4W-fqCV-Baze-9VlY-jG9prS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=5ecd3f74-dade-5fc4-92ce-8950ae424258,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=7899c4d8-edb4-4836-b838-c4aa702ad7af,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 06 09:55:49 compute-0 kind_driscoll[208770]:             "lv_uuid": "a55pcS-v3Vt-CJ4W-fqCV-Baze-9VlY-jG9prS",
Dec 06 09:55:49 compute-0 kind_driscoll[208770]:             "name": "ceph_lv0",
Dec 06 09:55:49 compute-0 kind_driscoll[208770]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 09:55:49 compute-0 kind_driscoll[208770]:             "tags": {
Dec 06 09:55:49 compute-0 kind_driscoll[208770]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 06 09:55:49 compute-0 kind_driscoll[208770]:                 "ceph.block_uuid": "a55pcS-v3Vt-CJ4W-fqCV-Baze-9VlY-jG9prS",
Dec 06 09:55:49 compute-0 kind_driscoll[208770]:                 "ceph.cephx_lockbox_secret": "",
Dec 06 09:55:49 compute-0 kind_driscoll[208770]:                 "ceph.cluster_fsid": "5ecd3f74-dade-5fc4-92ce-8950ae424258",
Dec 06 09:55:49 compute-0 kind_driscoll[208770]:                 "ceph.cluster_name": "ceph",
Dec 06 09:55:49 compute-0 kind_driscoll[208770]:                 "ceph.crush_device_class": "",
Dec 06 09:55:49 compute-0 kind_driscoll[208770]:                 "ceph.encrypted": "0",
Dec 06 09:55:49 compute-0 kind_driscoll[208770]:                 "ceph.osd_fsid": "7899c4d8-edb4-4836-b838-c4aa702ad7af",
Dec 06 09:55:49 compute-0 kind_driscoll[208770]:                 "ceph.osd_id": "1",
Dec 06 09:55:49 compute-0 kind_driscoll[208770]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 06 09:55:49 compute-0 kind_driscoll[208770]:                 "ceph.type": "block",
Dec 06 09:55:49 compute-0 kind_driscoll[208770]:                 "ceph.vdo": "0",
Dec 06 09:55:49 compute-0 kind_driscoll[208770]:                 "ceph.with_tpm": "0"
Dec 06 09:55:49 compute-0 kind_driscoll[208770]:             },
Dec 06 09:55:49 compute-0 kind_driscoll[208770]:             "type": "block",
Dec 06 09:55:49 compute-0 kind_driscoll[208770]:             "vg_name": "ceph_vg0"
Dec 06 09:55:49 compute-0 kind_driscoll[208770]:         }
Dec 06 09:55:49 compute-0 kind_driscoll[208770]:     ]
Dec 06 09:55:49 compute-0 kind_driscoll[208770]: }
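The JSON block above is `ceph-volume lvm list --format json` output relayed through the short-lived kind_driscoll container: a mapping of OSD id to the logical volumes backing it. It also explains the earlier busy_wilbur run ("All data devices are unavailable"): /dev/ceph_vg0/ceph_lv0 already carries OSD 1, so `lvm batch` had nothing to create. A minimal sketch that walks this structure, with field names taken from the output above:

    import json

    # Parse `ceph-volume lvm list --format json`: a dict of OSD id -> LV records.
    def summarize_lvm_list(raw):
        for osd_id, lvs in json.loads(raw).items():
            for lv in lvs:
                yield {
                    "osd_id": osd_id,
                    "osd_fsid": lv["tags"].get("ceph.osd_fsid"),
                    "lv_path": lv["lv_path"],
                    "devices": lv.get("devices", []),
                    "type": lv.get("type"),
                }

    # e.g. -> {'osd_id': '1', 'osd_fsid': '7899c4d8-edb4-4836-b838-c4aa702ad7af',
    #          'lv_path': '/dev/ceph_vg0/ceph_lv0', 'devices': ['/dev/loop3'],
    #          'type': 'block'}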
Dec 06 09:55:49 compute-0 sudo[208824]: pam_unix(sudo:session): session closed for user root
Dec 06 09:55:49 compute-0 systemd[1]: libpod-19ea4b45fbaaf68aef28c26fe3faf81b2b91040f70f2ef055335d907272a93ab.scope: Deactivated successfully.
Dec 06 09:55:49 compute-0 podman[208730]: 2025-12-06 09:55:49.584187628 +0000 UTC m=+0.506580230 container died 19ea4b45fbaaf68aef28c26fe3faf81b2b91040f70f2ef055335d907272a93ab (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_driscoll, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, OSD_FLAVOR=default)
Dec 06 09:55:49 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:55:49 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c500027d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:55:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-b5c860b10170991bcc73349f6db4860167a6fb43e510126a8c23e1a959416e64-merged.mount: Deactivated successfully.
Dec 06 09:55:49 compute-0 podman[208730]: 2025-12-06 09:55:49.638350453 +0000 UTC m=+0.560743075 container remove 19ea4b45fbaaf68aef28c26fe3faf81b2b91040f70f2ef055335d907272a93ab (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_driscoll, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 09:55:49 compute-0 systemd[1]: libpod-conmon-19ea4b45fbaaf68aef28c26fe3faf81b2b91040f70f2ef055335d907272a93ab.scope: Deactivated successfully.
Dec 06 09:55:49 compute-0 sudo[208417]: pam_unix(sudo:session): session closed for user root
Dec 06 09:55:49 compute-0 sudo[208877]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 09:55:49 compute-0 sudo[208877]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:55:49 compute-0 sudo[208877]: pam_unix(sudo:session): session closed for user root
Dec 06 09:55:49 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:55:49 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c2c002690 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:55:49 compute-0 sudo[208927]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 5ecd3f74-dade-5fc4-92ce-8950ae424258 -- raw list --format json
Dec 06 09:55:49 compute-0 sudo[208927]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
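[annotation] The cephadm orchestrator drives these device scans by shipping a copy of the cephadm script into /var/lib/ceph/<fsid>/ and invoking it through sudo with a timeout, here asking ceph-volume for a raw-device inventory (the bare `{}` printed by condescending_lumiere further below is its result: no raw OSDs). A sketch of the same call pattern, reusing the fsid, image digest and script path from the command line above:

```python
import json
import subprocess

FSID = "5ecd3f74-dade-5fc4-92ce-8950ae424258"
IMAGE = ("quay.io/ceph/ceph@sha256:"
         "7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec")
CEPHADM = (f"/var/lib/ceph/{FSID}/cephadm."
           "1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36")

# Same invocation the mgr issues via sudo in the line above.
out = subprocess.run(
    ["sudo", "/bin/python3", CEPHADM, "--image", IMAGE, "--timeout", "895",
     "ceph-volume", "--fsid", FSID, "--", "raw", "list", "--format", "json"],
    check=True, capture_output=True, text=True,
).stdout
print(json.loads(out) or "no raw OSD devices found")
```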
Dec 06 09:55:49 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 09:55:50 compute-0 podman[208976]: 2025-12-06 09:55:50.078411142 +0000 UTC m=+0.151851371 container health_status ed08b04f63d3695f0aa2f04fcc706897af4aa24f286fa3cd10a4323ee2c868ab (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251125)
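[annotation] Each kolla-managed container on this node emits a periodic health_status event like the one above; per the embedded config_data, the check itself is just the `/openstack/healthcheck` script bind-mounted into the container. A quick way to read the same state out of band, assuming podman's standard health fields:

```python
import subprocess

def container_health(name: str) -> str:
    # Returns "healthy"/"unhealthy"/"starting", matching the
    # health_status field podman logs for ovn_controller above.
    res = subprocess.run(
        ["podman", "inspect", "--format", "{{.State.Health.Status}}", name],
        check=True, capture_output=True, text=True,
    )
    return res.stdout.strip()

print(container_health("ovn_controller"))
```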
Dec 06 09:55:50 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v430: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:55:50 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:55:50 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:55:50 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:55:50.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:55:50 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:55:50 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:55:50 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:55:50.147 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
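[annotation] The paired "starting new request / req done / beast" lines recurring every two seconds are anonymous `HEAD /` liveness probes against the rgw beast frontend from 192.168.122.100 and .102, always answered 200 with sub-millisecond latency. A minimal equivalent probe; the port is an assumption, since the log does not show where beast listens:

```python
import http.client

def rgw_alive(host: str, port: int = 8080, timeout: float = 2.0) -> bool:
    # Anonymous HEAD / against the beast frontend, as in the log above.
    conn = http.client.HTTPConnection(host, port, timeout=timeout)
    try:
        conn.request("HEAD", "/")
        return conn.getresponse().status == 200
    finally:
        conn.close()

print(rgw_alive("compute-0"))
```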
Dec 06 09:55:50 compute-0 sudo[209084]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mikbiaxcmscsxhduecgabltqnchpptqw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014949.7822754-1988-13144950546946/AnsiballZ_file.py'
Dec 06 09:55:50 compute-0 sudo[209084]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:55:50 compute-0 python3.9[209088]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:55:50 compute-0 podman[209114]: 2025-12-06 09:55:50.42458784 +0000 UTC m=+0.061339811 container create f8c5341bee29cc6f7805d94078466f6a720b62177cc9ddbcf43c7008c3594845 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_hawking, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Dec 06 09:55:50 compute-0 sudo[209084]: pam_unix(sudo:session): session closed for user root
Dec 06 09:55:50 compute-0 systemd[1]: Started libpod-conmon-f8c5341bee29cc6f7805d94078466f6a720b62177cc9ddbcf43c7008c3594845.scope.
Dec 06 09:55:50 compute-0 podman[209114]: 2025-12-06 09:55:50.394183597 +0000 UTC m=+0.030935618 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 06 09:55:50 compute-0 systemd[1]: Started libcrun container.
Dec 06 09:55:50 compute-0 podman[209114]: 2025-12-06 09:55:50.532009637 +0000 UTC m=+0.168761648 container init f8c5341bee29cc6f7805d94078466f6a720b62177cc9ddbcf43c7008c3594845 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_hawking, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Dec 06 09:55:50 compute-0 podman[209114]: 2025-12-06 09:55:50.543004244 +0000 UTC m=+0.179756215 container start f8c5341bee29cc6f7805d94078466f6a720b62177cc9ddbcf43c7008c3594845 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_hawking, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 06 09:55:50 compute-0 podman[209114]: 2025-12-06 09:55:50.54653907 +0000 UTC m=+0.183291071 container attach f8c5341bee29cc6f7805d94078466f6a720b62177cc9ddbcf43c7008c3594845 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_hawking, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec 06 09:55:50 compute-0 bold_hawking[209133]: 167 167
Dec 06 09:55:50 compute-0 systemd[1]: libpod-f8c5341bee29cc6f7805d94078466f6a720b62177cc9ddbcf43c7008c3594845.scope: Deactivated successfully.
Dec 06 09:55:50 compute-0 conmon[209133]: conmon f8c5341bee29cc6f7805 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-f8c5341bee29cc6f7805d94078466f6a720b62177cc9ddbcf43c7008c3594845.scope/container/memory.events
Dec 06 09:55:50 compute-0 podman[209114]: 2025-12-06 09:55:50.550113936 +0000 UTC m=+0.186865957 container died f8c5341bee29cc6f7805d94078466f6a720b62177cc9ddbcf43c7008c3594845 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_hawking, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Dec 06 09:55:50 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:55:50 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c20003920 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:55:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-dad6aae2a2412f931b92774cc3ecad3b5f1295c15bf41f7bebeb7f838e5f4e8f-merged.mount: Deactivated successfully.
Dec 06 09:55:50 compute-0 podman[209114]: 2025-12-06 09:55:50.597109978 +0000 UTC m=+0.233861949 container remove f8c5341bee29cc6f7805d94078466f6a720b62177cc9ddbcf43c7008c3594845 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_hawking, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 09:55:50 compute-0 systemd[1]: libpod-conmon-f8c5341bee29cc6f7805d94078466f6a720b62177cc9ddbcf43c7008c3594845.scope: Deactivated successfully.
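[annotation] The one-shot bold_hawking container above lives for a few milliseconds and its only output is "167 167", which matches the ceph uid/gid baked into the image; cephadm appears to probe this before deploying daemons so host paths can be chowned correctly. A hedged sketch of such a probe (the stat-based entrypoint is an assumption):

```python
import subprocess

IMAGE = ("quay.io/ceph/ceph@sha256:"
         "7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec")

# One-shot container reporting the in-image owner of /var/lib/ceph.
out = subprocess.run(
    ["podman", "run", "--rm", IMAGE, "stat", "-c", "%u %g", "/var/lib/ceph"],
    check=True, capture_output=True, text=True,
).stdout.split()
uid, gid = int(out[0]), int(out[1])
print(uid, gid)  # expected: 167 167
```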
Dec 06 09:55:50 compute-0 podman[209253]: 2025-12-06 09:55:50.799725582 +0000 UTC m=+0.053384267 container create 5ce3f445408b5b23438716131e534b218c3952bb5d61eb795f7e4c769f6d2871 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_lumiere, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec 06 09:55:50 compute-0 systemd[1]: Started libpod-conmon-5ce3f445408b5b23438716131e534b218c3952bb5d61eb795f7e4c769f6d2871.scope.
Dec 06 09:55:50 compute-0 podman[209253]: 2025-12-06 09:55:50.77602073 +0000 UTC m=+0.029679495 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 06 09:55:50 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:09:55:50] "GET /metrics HTTP/1.1" 200 48264 "" "Prometheus/2.51.0"
Dec 06 09:55:50 compute-0 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:09:55:50] "GET /metrics HTTP/1.1" 200 48264 "" "Prometheus/2.51.0"
Dec 06 09:55:50 compute-0 systemd[1]: Started libcrun container.
Dec 06 09:55:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ff0393fc3e5d3a9a10599976385b6d48b9766a0dff86894ded665167ccccf35/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 09:55:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ff0393fc3e5d3a9a10599976385b6d48b9766a0dff86894ded665167ccccf35/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 09:55:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ff0393fc3e5d3a9a10599976385b6d48b9766a0dff86894ded665167ccccf35/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 09:55:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ff0393fc3e5d3a9a10599976385b6d48b9766a0dff86894ded665167ccccf35/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 09:55:50 compute-0 podman[209253]: 2025-12-06 09:55:50.922893124 +0000 UTC m=+0.176551829 container init 5ce3f445408b5b23438716131e534b218c3952bb5d61eb795f7e4c769f6d2871 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_lumiere, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 09:55:50 compute-0 podman[209253]: 2025-12-06 09:55:50.931010313 +0000 UTC m=+0.184668998 container start 5ce3f445408b5b23438716131e534b218c3952bb5d61eb795f7e4c769f6d2871 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_lumiere, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 09:55:50 compute-0 podman[209253]: 2025-12-06 09:55:50.935433254 +0000 UTC m=+0.189091949 container attach 5ce3f445408b5b23438716131e534b218c3952bb5d61eb795f7e4c769f6d2871 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_lumiere, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 09:55:50 compute-0 sudo[209324]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wpgitjmhwwzuppzgnixhaigkoccrvroi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014950.5959485-1988-308407086549/AnsiballZ_file.py'
Dec 06 09:55:50 compute-0 sudo[209324]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:55:51 compute-0 python3.9[209327]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:55:51 compute-0 ceph-mon[74327]: pgmap v430: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:55:51 compute-0 sudo[209324]: pam_unix(sudo:session): session closed for user root
Dec 06 09:55:51 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:55:51 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c440047c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:55:51 compute-0 lvm[209499]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 06 09:55:51 compute-0 lvm[209499]: VG ceph_vg0 finished
Dec 06 09:55:51 compute-0 condescending_lumiere[209294]: {}
Dec 06 09:55:51 compute-0 systemd[1]: libpod-5ce3f445408b5b23438716131e534b218c3952bb5d61eb795f7e4c769f6d2871.scope: Deactivated successfully.
Dec 06 09:55:51 compute-0 systemd[1]: libpod-5ce3f445408b5b23438716131e534b218c3952bb5d61eb795f7e4c769f6d2871.scope: Consumed 1.337s CPU time.
Dec 06 09:55:51 compute-0 podman[209253]: 2025-12-06 09:55:51.742591536 +0000 UTC m=+0.996250281 container died 5ce3f445408b5b23438716131e534b218c3952bb5d61eb795f7e4c769f6d2871 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_lumiere, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Dec 06 09:55:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-2ff0393fc3e5d3a9a10599976385b6d48b9766a0dff86894ded665167ccccf35-merged.mount: Deactivated successfully.
Dec 06 09:55:51 compute-0 sudo[209565]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bdcmqxzkxxdejygerqdeljpgzezuqddi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014951.448082-2285-90882704705739/AnsiballZ_stat.py'
Dec 06 09:55:51 compute-0 sudo[209565]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:55:51 compute-0 podman[209253]: 2025-12-06 09:55:51.807655797 +0000 UTC m=+1.061314472 container remove 5ce3f445408b5b23438716131e534b218c3952bb5d61eb795f7e4c769f6d2871 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_lumiere, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 09:55:51 compute-0 systemd[1]: libpod-conmon-5ce3f445408b5b23438716131e534b218c3952bb5d61eb795f7e4c769f6d2871.scope: Deactivated successfully.
Dec 06 09:55:51 compute-0 sudo[208927]: pam_unix(sudo:session): session closed for user root
Dec 06 09:55:51 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:55:51 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c500027d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:55:51 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 06 09:55:51 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:55:51 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 06 09:55:51 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:55:51 compute-0 sudo[209568]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 09:55:51 compute-0 sudo[209568]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:55:51 compute-0 sudo[209568]: pam_unix(sudo:session): session closed for user root
Dec 06 09:55:52 compute-0 python3.9[209567]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:55:52 compute-0 sudo[209565]: pam_unix(sudo:session): session closed for user root
Dec 06 09:55:52 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v431: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:55:52 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:55:52 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 09:55:52 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:55:52.142 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 09:55:52 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:55:52 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 09:55:52 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:55:52.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 09:55:52 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:55:52 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c2c002690 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:55:52 compute-0 sudo[209713]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-csmcidobuigpjzwmwbzadgftbstqwneh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014951.448082-2285-90882704705739/AnsiballZ_copy.py'
Dec 06 09:55:52 compute-0 sudo[209713]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:55:52 compute-0 python3.9[209715]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765014951.448082-2285-90882704705739/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:55:52 compute-0 sudo[209713]: pam_unix(sudo:session): session closed for user root
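[annotation] The zuul/ansible run is iterating one libvirt socket unit at a time: create the `.socket.d` drop-in directory, stat `override.conf`, then copy it from the libvirt-socket.unit.j2 template (every copy reports the same checksum 0bad41f409b4..., so all units receive identical content). A sketch of the equivalent idempotent write; the override body is hypothetical, since the log never shows the rendered template:

```python
from pathlib import Path

OVERRIDE = "[Socket]\nSocketMode=0660\n"  # hypothetical rendered content

def install_override(unit: str) -> None:
    d = Path("/etc/systemd/system") / f"{unit}.d"
    d.mkdir(mode=0o755, parents=True, exist_ok=True)
    conf = d / "override.conf"
    if not conf.exists() or conf.read_text() != OVERRIDE:
        conf.write_text(OVERRIDE)
    conf.chmod(0o644)

for unit in ("virtlogd.socket", "virtlogd-admin.socket",
             "virtnodedevd.socket", "virtnodedevd-ro.socket",
             "virtproxyd.socket", "virtqemud.socket"):
    install_override(unit)
```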
Dec 06 09:55:52 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:55:52 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:55:52 compute-0 ceph-mon[74327]: pgmap v431: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:55:53 compute-0 sudo[209867]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-notiplswxilrvolhenshkufdkavebbma ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014953.0735557-2285-248124535653868/AnsiballZ_stat.py'
Dec 06 09:55:53 compute-0 sudo[209867]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:55:53 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:55:53 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c20003920 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:55:53 compute-0 python3.9[209869]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:55:53 compute-0 sudo[209867]: pam_unix(sudo:session): session closed for user root
Dec 06 09:55:53 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:55:53 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c440047c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:55:53 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 06 09:55:53 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 09:55:53 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
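[annotation] The mgr's periodic serve loop also asks the monitor for the OSD blocklist; the monitor logs the dispatch in both the cluster and audit channels above. The same query issued from the CLI and parsed as JSON:

```python
import json
import subprocess

out = subprocess.run(
    ["ceph", "osd", "blocklist", "ls", "--format", "json"],
    check=True, capture_output=True, text=True,
).stdout
for entry in json.loads(out):
    # Entries are typically {"addr": ..., "until": ...}; the list is
    # empty when nothing is blocklisted.
    print(entry)
```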
Dec 06 09:55:53 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 09:55:53 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 09:55:53 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 09:55:53 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 09:55:53 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 09:55:53 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 09:55:54 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v432: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Dec 06 09:55:54 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:55:54 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 09:55:54 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:55:54.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 09:55:54 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:55:54 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:55:54 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:55:54.155 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:55:54 compute-0 sudo[209990]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fahcqnovfsazrgwceqfhhtfqjwzkaskx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014953.0735557-2285-248124535653868/AnsiballZ_copy.py'
Dec 06 09:55:54 compute-0 sudo[209990]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:55:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:55:54.226 162267 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 09:55:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:55:54.226 162267 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 09:55:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:55:54.227 162267 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
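[annotation] The ovn_metadata_agent debug trio above is oslo.concurrency's standard acquiring/acquired/released logging around ProcessMonitor._check_child_processes; the lock is held for effectively zero time per sweep. The pattern reduces to a named synchronized decorator, sketched here:

```python
from oslo_concurrency import lockutils

@lockutils.synchronized("_check_child_processes")
def _check_child_processes():
    # Body elided; whatever runs here is serialized under the same
    # named lock seen in the log lines above.
    pass
```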
Dec 06 09:55:54 compute-0 python3.9[209992]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765014953.0735557-2285-248124535653868/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:55:54 compute-0 sudo[209990]: pam_unix(sudo:session): session closed for user root
Dec 06 09:55:54 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:55:54 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c500034e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:55:54 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 09:55:54 compute-0 ceph-mon[74327]: pgmap v432: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Dec 06 09:55:55 compute-0 sudo[210143]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yvufojojurnwcvlztojnyyxppimbllsi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014954.6756606-2285-262223880239663/AnsiballZ_stat.py'
Dec 06 09:55:55 compute-0 sudo[210143]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:55:55 compute-0 python3.9[210145]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:55:55 compute-0 sudo[210143]: pam_unix(sudo:session): session closed for user root
Dec 06 09:55:55 compute-0 sudo[210267]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uqbdkzjmbbjnjmuhjzyrctwommhysfam ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014954.6756606-2285-262223880239663/AnsiballZ_copy.py'
Dec 06 09:55:55 compute-0 sudo[210267]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:55:55 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:55:55 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c2c002690 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:55:55 compute-0 python3.9[210269]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765014954.6756606-2285-262223880239663/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:55:55 compute-0 sudo[210267]: pam_unix(sudo:session): session closed for user root
Dec 06 09:55:55 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:55:55 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c2c002690 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:55:56 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v433: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:55:56 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:55:56 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 09:55:56 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:55:56.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 09:55:56 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:55:56 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 09:55:56 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:55:56.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 09:55:56 compute-0 sudo[210419]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gjmtgbqdzqmdypzrekjxthvwdwxzpwyt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014955.9772198-2285-215934208566610/AnsiballZ_stat.py'
Dec 06 09:55:56 compute-0 sudo[210419]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:55:56 compute-0 python3.9[210421]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:55:56 compute-0 sudo[210419]: pam_unix(sudo:session): session closed for user root
Dec 06 09:55:56 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:55:56 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c20003920 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:55:56 compute-0 sudo[210542]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eyrecmkgfqrwzswzpawmiogzoqmskzxn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014955.9772198-2285-215934208566610/AnsiballZ_copy.py'
Dec 06 09:55:56 compute-0 sudo[210542]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:55:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:55:57.062Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
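[annotation] Alertmanager gave up notifying the ceph-dashboard webhook receivers on compute-1 and compute-2 after two attempts each; "context deadline exceeded" means the POST to :8443/api/prometheus_receiver never completed in time. A quick reachability check against one receiver, reproducing the symptom with a short timeout (the empty JSON payload is an assumption):

```python
import urllib.request

URL = "http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver"

try:
    req = urllib.request.Request(URL, data=b"{}", method="POST")
    with urllib.request.urlopen(req, timeout=5) as resp:
        print("receiver answered:", resp.status)
except OSError as exc:  # timeouts surface as URLError/socket.timeout
    print("receiver unreachable:", exc)
```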
Dec 06 09:55:57 compute-0 python3.9[210544]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765014955.9772198-2285-215934208566610/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:55:57 compute-0 sudo[210542]: pam_unix(sudo:session): session closed for user root
Dec 06 09:55:57 compute-0 ceph-mon[74327]: pgmap v433: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:55:57 compute-0 podman[210584]: 2025-12-06 09:55:57.44684255 +0000 UTC m=+0.067532559 container health_status ec1e66d2175087ad7a28a9a6b5a784e30128df3f5e268959d009e364cd5120f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent)
Dec 06 09:55:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:55:57 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c500034e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:55:57 compute-0 sudo[210715]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-atnfpfnuzhidjyqpbtixbhsrrhnlwitg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014957.375574-2285-211683571312907/AnsiballZ_stat.py'
Dec 06 09:55:57 compute-0 sudo[210715]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:55:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:55:57 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c440047c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:55:57 compute-0 python3.9[210717]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:55:57 compute-0 sudo[210715]: pam_unix(sudo:session): session closed for user root
Dec 06 09:55:58 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v434: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Dec 06 09:55:58 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:55:58 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:55:58 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:55:58.149 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:55:58 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:55:58 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:55:58 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:55:58.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:55:58 compute-0 sudo[210838]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xmhvfdtjtirhbzkkujrraginxnmmiwat ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014957.375574-2285-211683571312907/AnsiballZ_copy.py'
Dec 06 09:55:58 compute-0 sudo[210838]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:55:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:55:58 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c440047c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:55:58 compute-0 python3.9[210840]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765014957.375574-2285-211683571312907/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:55:58 compute-0 sudo[210838]: pam_unix(sudo:session): session closed for user root
Dec 06 09:55:59 compute-0 sudo[210991]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yigtwnweefxapejhinvpzgchxzgocice ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014958.7491493-2285-202535243719202/AnsiballZ_stat.py'
Dec 06 09:55:59 compute-0 sudo[210991]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:55:59 compute-0 ceph-mon[74327]: pgmap v434: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Dec 06 09:55:59 compute-0 python3.9[210993]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:55:59 compute-0 sudo[210991]: pam_unix(sudo:session): session closed for user root
Dec 06 09:55:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:55:59 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c20003940 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:55:59 compute-0 sudo[211115]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-meerjbpysgbypmgqyplepvduubgysliq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014958.7491493-2285-202535243719202/AnsiballZ_copy.py'
Dec 06 09:55:59 compute-0 sudo[211115]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:55:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:55:59 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c500041f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:55:59 compute-0 python3.9[211117]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765014958.7491493-2285-202535243719202/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:55:59 compute-0 sudo[211115]: pam_unix(sudo:session): session closed for user root
Dec 06 09:55:59 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 09:56:00 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v435: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:56:00 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:56:00 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:56:00 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:56:00.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:56:00 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:56:00 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 09:56:00 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:56:00.164 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 09:56:00 compute-0 sudo[211267]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kfspsmxaeqjdkapbkiiqfsxsevifrtpk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014960.1505096-2285-96024825075782/AnsiballZ_stat.py'
Dec 06 09:56:00 compute-0 sudo[211267]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:56:00 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[170877]: 06/12/2025 09:56:00 : epoch 6933fd1d : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c440047c0 fd 48 proxy ignored for local
Dec 06 09:56:00 compute-0 kernel: ganesha.nfsd[207909]: segfault at 50 ip 00007f1d03ebf32e sp 00007f1ccd7f9210 error 4 in libntirpc.so.5.8[7f1d03ea4000+2c000] likely on CPU 1 (core 0, socket 1)
Dec 06 09:56:00 compute-0 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Dec 06 09:56:00 compute-0 systemd[1]: Started Process Core Dump (PID 211270/UID 0).
Dec 06 09:56:00 compute-0 python3.9[211269]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:56:00 compute-0 sudo[211267]: pam_unix(sudo:session): session closed for user root
Dec 06 09:56:00 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:09:56:00] "GET /metrics HTTP/1.1" 200 48267 "" "Prometheus/2.51.0"
Dec 06 09:56:00 compute-0 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:09:56:00] "GET /metrics HTTP/1.1" 200 48267 "" "Prometheus/2.51.0"
Dec 06 09:56:01 compute-0 sudo[211393]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-utdmizeacginfnvgtkpselbksarwsxet ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014960.1505096-2285-96024825075782/AnsiballZ_copy.py'
Dec 06 09:56:01 compute-0 sudo[211393]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:56:01 compute-0 ceph-mon[74327]: pgmap v435: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:56:01 compute-0 python3.9[211395]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765014960.1505096-2285-96024825075782/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:56:01 compute-0 sudo[211393]: pam_unix(sudo:session): session closed for user root
Dec 06 09:56:01 compute-0 sudo[211546]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bbihyxyvxasyouwrpyridnuzbsxvwfjr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014961.4693465-2285-23129420167657/AnsiballZ_stat.py'
Dec 06 09:56:01 compute-0 sudo[211546]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:56:01 compute-0 python3.9[211548]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:56:01 compute-0 sudo[211546]: pam_unix(sudo:session): session closed for user root
Dec 06 09:56:02 compute-0 systemd-coredump[211271]: Process 170881 (ganesha.nfsd) of user 0 dumped core.
                                                    
                                                    Stack trace of thread 65:
                                                    #0  0x00007f1d03ebf32e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)
                                                    ELF object binary architecture: AMD x86-64
Dec 06 09:56:02 compute-0 systemd[1]: systemd-coredump@5-211270-0.service: Deactivated successfully.
Dec 06 09:56:02 compute-0 systemd[1]: systemd-coredump@5-211270-0.service: Consumed 1.433s CPU time.
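[annotation] systemd-coredump captured the ganesha.nfsd crash (the segfault in libntirpc.so.5.8 at offset 0x2232e logged at 09:56:00); the single unresolved frame in the stack trace suggests debuginfo for libntirpc is not installed. Retrieving the dump for offline analysis, using the PID from the coredump line above:

```python
import subprocess

PID = "170881"  # dumped ganesha.nfsd process, per systemd-coredump above

subprocess.run(["coredumpctl", "info", PID], check=True)
subprocess.run(
    ["coredumpctl", "dump", PID, "--output", "/tmp/ganesha.core"],
    check=True,
)
```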
Dec 06 09:56:02 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v436: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:56:02 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:56:02 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:56:02 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:56:02.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:56:02 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:56:02 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:56:02 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:56:02.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:56:02 compute-0 podman[211618]: 2025-12-06 09:56:02.203309055 +0000 UTC m=+0.044013081 container died c3b0a1339520eec10382627c7e3dcec6ee5222c80f6eb2808f2db40456331732 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Dec 06 09:56:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-f7f99abc96062c26417ae3d5e3044f6541c1d626500d6a12b4f0ec41d1199e93-merged.mount: Deactivated successfully.
Dec 06 09:56:02 compute-0 podman[211618]: 2025-12-06 09:56:02.248690393 +0000 UTC m=+0.089394349 container remove c3b0a1339520eec10382627c7e3dcec6ee5222c80f6eb2808f2db40456331732 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 09:56:02 compute-0 systemd[1]: ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258@nfs.cephfs.2.0.compute-0.dfwxck.service: Main process exited, code=exited, status=139/n/a
Dec 06 09:56:02 compute-0 sudo[211703]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eqrgglssbrddxmudnzhofrqpqierwyvu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014961.4693465-2285-23129420167657/AnsiballZ_copy.py'
Dec 06 09:56:02 compute-0 sudo[211703]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:56:02 compute-0 systemd[1]: ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258@nfs.cephfs.2.0.compute-0.dfwxck.service: Failed with result 'exit-code'.
Dec 06 09:56:02 compute-0 systemd[1]: ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258@nfs.cephfs.2.0.compute-0.dfwxck.service: Consumed 2.212s CPU time.
Dec 06 09:56:02 compute-0 python3.9[211717]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765014961.4693465-2285-23129420167657/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:56:02 compute-0 sudo[211703]: pam_unix(sudo:session): session closed for user root
Dec 06 09:56:03 compute-0 sudo[211869]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-flcviypiqbjzullpiaspwdfwntpugadq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014962.7917209-2285-119965784766259/AnsiballZ_stat.py'
Dec 06 09:56:03 compute-0 sudo[211869]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:56:03 compute-0 ceph-mon[74327]: pgmap v436: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:56:03 compute-0 python3.9[211871]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:56:03 compute-0 sudo[211869]: pam_unix(sudo:session): session closed for user root
Dec 06 09:56:03 compute-0 sudo[211993]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yhmelwuuurymmzayyyhzdhwyjqoflose ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014962.7917209-2285-119965784766259/AnsiballZ_copy.py'
Dec 06 09:56:03 compute-0 sudo[211993]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:56:03 compute-0 python3.9[211995]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765014962.7917209-2285-119965784766259/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:56:03 compute-0 sudo[211993]: pam_unix(sudo:session): session closed for user root
Dec 06 09:56:04 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v437: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Dec 06 09:56:04 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:56:04 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:56:04 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:56:04.155 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:56:04 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:56:04 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 09:56:04 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:56:04.170 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 09:56:04 compute-0 sudo[212145]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cnxyagppdvwwbozujecyaccnlfffsuci ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014964.1399703-2285-3098508630762/AnsiballZ_stat.py'
Dec 06 09:56:04 compute-0 sudo[212145]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:56:04 compute-0 python3.9[212147]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:56:04 compute-0 sudo[212145]: pam_unix(sudo:session): session closed for user root
Dec 06 09:56:04 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 09:56:04 compute-0 sudo[212268]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xikfutnthidbxtetbnbkitgxthewftqk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014964.1399703-2285-3098508630762/AnsiballZ_copy.py'
Dec 06 09:56:04 compute-0 sudo[212268]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:56:05 compute-0 python3.9[212270]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765014964.1399703-2285-3098508630762/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:56:05 compute-0 sudo[212268]: pam_unix(sudo:session): session closed for user root
Dec 06 09:56:05 compute-0 ceph-mon[74327]: pgmap v437: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Dec 06 09:56:05 compute-0 sudo[212422]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gygegfoodbmuajgdvedykbvxnnlcodkq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014965.346088-2285-244284179857864/AnsiballZ_stat.py'
Dec 06 09:56:05 compute-0 sudo[212422]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:56:05 compute-0 python3.9[212424]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:56:05 compute-0 sudo[212422]: pam_unix(sudo:session): session closed for user root
Dec 06 09:56:06 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v438: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:56:06 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:56:06 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:56:06 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:56:06.157 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:56:06 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:56:06 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:56:06 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:56:06.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:56:06 compute-0 sudo[212545]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dtmqlojcosahtznqaigrxszcjiouxuuv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014965.346088-2285-244284179857864/AnsiballZ_copy.py'
Dec 06 09:56:06 compute-0 sudo[212545]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:56:06 compute-0 python3.9[212547]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765014965.346088-2285-244284179857864/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:56:06 compute-0 sudo[212545]: pam_unix(sudo:session): session closed for user root
Dec 06 09:56:06 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-haproxy-nfs-cephfs-compute-0-fzuvue[96127]: [WARNING] 339/095606 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Dec 06 09:56:07 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:56:07.064Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 06 09:56:07 compute-0 sudo[212698]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xhipepxmnmlknqccbwbkymocdcaeqpbj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014966.7370355-2285-272146555173941/AnsiballZ_stat.py'
Dec 06 09:56:07 compute-0 sudo[212698]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:56:07 compute-0 ceph-mon[74327]: pgmap v438: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:56:07 compute-0 python3.9[212700]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:56:07 compute-0 sudo[212698]: pam_unix(sudo:session): session closed for user root
Dec 06 09:56:07 compute-0 sudo[212822]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-huygohicyihxfptrocihmiumschdciav ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014966.7370355-2285-272146555173941/AnsiballZ_copy.py'
Dec 06 09:56:07 compute-0 sudo[212822]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:56:07 compute-0 python3.9[212824]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765014966.7370355-2285-272146555173941/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:56:07 compute-0 sudo[212822]: pam_unix(sudo:session): session closed for user root
Dec 06 09:56:08 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v439: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Dec 06 09:56:08 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:56:08 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 09:56:08 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:56:08.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 09:56:08 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:56:08 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 09:56:08 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:56:08.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 09:56:08 compute-0 sudo[212974]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lytocdmyybznluxctjtinffihxvfuxdh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014968.1834977-2285-42969441720613/AnsiballZ_stat.py'
Dec 06 09:56:08 compute-0 sudo[212974]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:56:08 compute-0 sudo[212977]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 09:56:08 compute-0 sudo[212977]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:56:08 compute-0 sudo[212977]: pam_unix(sudo:session): session closed for user root
Dec 06 09:56:08 compute-0 python3.9[212976]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:56:08 compute-0 sudo[212974]: pam_unix(sudo:session): session closed for user root
Dec 06 09:56:08 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 06 09:56:08 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 09:56:09 compute-0 sudo[213123]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nntglbwtmaizfiochclmehxrettnqhwp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014968.1834977-2285-42969441720613/AnsiballZ_copy.py'
Dec 06 09:56:09 compute-0 sudo[213123]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:56:09 compute-0 ceph-mon[74327]: pgmap v439: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Dec 06 09:56:09 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 09:56:09 compute-0 python3.9[213125]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765014968.1834977-2285-42969441720613/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:56:09 compute-0 sudo[213123]: pam_unix(sudo:session): session closed for user root
Dec 06 09:56:09 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 09:56:09 compute-0 ceph-mon[74327]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #33. Immutable memtables: 0.
Dec 06 09:56:09 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-09:56:09.997821) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 06 09:56:09 compute-0 ceph-mon[74327]: rocksdb: [db/flush_job.cc:856] [default] [JOB 13] Flushing memtable with next log file: 33
Dec 06 09:56:09 compute-0 ceph-mon[74327]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765014969997903, "job": 13, "event": "flush_started", "num_memtables": 1, "num_entries": 3929, "num_deletes": 501, "total_data_size": 7898903, "memory_usage": 8015064, "flush_reason": "Manual Compaction"}
Dec 06 09:56:09 compute-0 ceph-mon[74327]: rocksdb: [db/flush_job.cc:885] [default] [JOB 13] Level-0 flush table #34: started
Dec 06 09:56:10 compute-0 ceph-mon[74327]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765014970053250, "cf_name": "default", "job": 13, "event": "table_file_creation", "file_number": 34, "file_size": 4439644, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13301, "largest_seqno": 17229, "table_properties": {"data_size": 4428209, "index_size": 6457, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3909, "raw_key_size": 30991, "raw_average_key_size": 19, "raw_value_size": 4401079, "raw_average_value_size": 2824, "num_data_blocks": 282, "num_entries": 1558, "num_filter_entries": 1558, "num_deletions": 501, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765014557, "oldest_key_time": 1765014557, "file_creation_time": 1765014969, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "423e8366-3852-4d2b-aa53-87abab31aff3", "db_session_id": "4WBX5WA2U4DRQ0QUUFCR", "orig_file_number": 34, "seqno_to_time_mapping": "N/A"}}
Dec 06 09:56:10 compute-0 ceph-mon[74327]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 13] Flush lasted 55514 microseconds, and 18570 cpu microseconds.
Dec 06 09:56:10 compute-0 ceph-mon[74327]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 09:56:10 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-09:56:10.053338) [db/flush_job.cc:967] [default] [JOB 13] Level-0 flush table #34: 4439644 bytes OK
Dec 06 09:56:10 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-09:56:10.053375) [db/memtable_list.cc:519] [default] Level-0 commit table #34 started
Dec 06 09:56:10 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-09:56:10.055306) [db/memtable_list.cc:722] [default] Level-0 commit table #34: memtable #1 done
Dec 06 09:56:10 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-09:56:10.055328) EVENT_LOG_v1 {"time_micros": 1765014970055321, "job": 13, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 06 09:56:10 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-09:56:10.055356) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 06 09:56:10 compute-0 ceph-mon[74327]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 13] Try to delete WAL files size 7883055, prev total WAL file size 7883055, number of live WAL files 2.
Dec 06 09:56:10 compute-0 ceph-mon[74327]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000030.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 09:56:10 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-09:56:10.058801) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D67727374617400323530' seq:72057594037927935, type:22 .. '6D67727374617400353031' seq:0, type:0; will stop at (end)
Dec 06 09:56:10 compute-0 ceph-mon[74327]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 14] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 06 09:56:10 compute-0 ceph-mon[74327]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 13 Base level 0, inputs: [34(4335KB)], [32(13MB)]
Dec 06 09:56:10 compute-0 ceph-mon[74327]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765014970058920, "job": 14, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [34], "files_L6": [32], "score": -1, "input_data_size": 18464469, "oldest_snapshot_seqno": -1}
Dec 06 09:56:10 compute-0 sudo[213276]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iaqywfyltbwrrlwtxhowmuokrkpmgajy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014969.7462847-2285-116831586194896/AnsiballZ_stat.py'
Dec 06 09:56:10 compute-0 sudo[213276]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:56:10 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v440: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:56:10 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:56:10 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:56:10 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:56:10.161 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:56:10 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:56:10 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 09:56:10 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:56:10.178 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 09:56:10 compute-0 ceph-mon[74327]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 14] Generated table #35: 5028 keys, 13936017 bytes, temperature: kUnknown
Dec 06 09:56:10 compute-0 ceph-mon[74327]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765014970220570, "cf_name": "default", "job": 14, "event": "table_file_creation", "file_number": 35, "file_size": 13936017, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13900388, "index_size": 21951, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12613, "raw_key_size": 125831, "raw_average_key_size": 25, "raw_value_size": 13807284, "raw_average_value_size": 2746, "num_data_blocks": 917, "num_entries": 5028, "num_filter_entries": 5028, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765013861, "oldest_key_time": 0, "file_creation_time": 1765014970, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "423e8366-3852-4d2b-aa53-87abab31aff3", "db_session_id": "4WBX5WA2U4DRQ0QUUFCR", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Dec 06 09:56:10 compute-0 ceph-mon[74327]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 09:56:10 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-09:56:10.220887) [db/compaction/compaction_job.cc:1663] [default] [JOB 14] Compacted 1@0 + 1@6 files to L6 => 13936017 bytes
Dec 06 09:56:10 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-09:56:10.222938) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 114.1 rd, 86.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(4.2, 13.4 +0.0 blob) out(13.3 +0.0 blob), read-write-amplify(7.3) write-amplify(3.1) OK, records in: 5849, records dropped: 821 output_compression: NoCompression
Dec 06 09:56:10 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-09:56:10.222956) EVENT_LOG_v1 {"time_micros": 1765014970222947, "job": 14, "event": "compaction_finished", "compaction_time_micros": 161757, "compaction_time_cpu_micros": 44409, "output_level": 6, "num_output_files": 1, "total_output_size": 13936017, "num_input_records": 5849, "num_output_records": 5028, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 06 09:56:10 compute-0 ceph-mon[74327]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000034.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 09:56:10 compute-0 ceph-mon[74327]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765014970223704, "job": 14, "event": "table_file_deletion", "file_number": 34}
Dec 06 09:56:10 compute-0 ceph-mon[74327]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000032.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 09:56:10 compute-0 ceph-mon[74327]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765014970226063, "job": 14, "event": "table_file_deletion", "file_number": 32}
Dec 06 09:56:10 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-09:56:10.058610) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 09:56:10 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-09:56:10.226524) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 09:56:10 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-09:56:10.226531) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 09:56:10 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-09:56:10.226533) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 09:56:10 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-09:56:10.226535) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 09:56:10 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-09:56:10.226537) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 09:56:10 compute-0 python3.9[213278]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:56:10 compute-0 sudo[213276]: pam_unix(sudo:session): session closed for user root
Dec 06 09:56:10 compute-0 sudo[213399]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-knxskembncjdrpgrttioqypovrnnubqe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014969.7462847-2285-116831586194896/AnsiballZ_copy.py'
Dec 06 09:56:10 compute-0 sudo[213399]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:56:10 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:09:56:10] "GET /metrics HTTP/1.1" 200 48266 "" "Prometheus/2.51.0"
Dec 06 09:56:10 compute-0 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:09:56:10] "GET /metrics HTTP/1.1" 200 48266 "" "Prometheus/2.51.0"
Dec 06 09:56:10 compute-0 python3.9[213401]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765014969.7462847-2285-116831586194896/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:56:10 compute-0 sudo[213399]: pam_unix(sudo:session): session closed for user root
Dec 06 09:56:11 compute-0 ceph-mon[74327]: pgmap v440: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:56:11 compute-0 python3.9[213553]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail
                                             ls -lRZ /run/libvirt | grep -E ':container_\S+_t'
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 09:56:12 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v441: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:56:12 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:56:12 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:56:12 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:56:12.163 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:56:12 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:56:12 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:56:12 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:56:12.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:56:12 compute-0 systemd[1]: ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258@nfs.cephfs.2.0.compute-0.dfwxck.service: Scheduled restart job, restart counter is at 6.
Dec 06 09:56:12 compute-0 systemd[1]: Stopped Ceph nfs.cephfs.2.0.compute-0.dfwxck for 5ecd3f74-dade-5fc4-92ce-8950ae424258.
Dec 06 09:56:12 compute-0 systemd[1]: ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258@nfs.cephfs.2.0.compute-0.dfwxck.service: Consumed 2.212s CPU time.
Dec 06 09:56:12 compute-0 systemd[1]: Starting Ceph nfs.cephfs.2.0.compute-0.dfwxck for 5ecd3f74-dade-5fc4-92ce-8950ae424258...
Dec 06 09:56:12 compute-0 sudo[213720]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-opgggeakhclisioatkskuigjroeybfbp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014972.0129638-2903-227221872514532/AnsiballZ_seboolean.py'
Dec 06 09:56:12 compute-0 sudo[213720]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:56:12 compute-0 podman[213758]: 2025-12-06 09:56:12.697014947 +0000 UTC m=+0.050555596 container create 5d860964edcc2ae02d2071e13089b9e2f2642e3853757c3cef05b9c593c1e765 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 06 09:56:12 compute-0 python3.9[213723]: ansible-ansible.posix.seboolean Invoked with name=os_enable_vtpm persistent=True state=True ignore_selinux_state=False
Dec 06 09:56:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/38bb679519899423a10fd5aec53519d66c5cf90e4dcb5edc1f193a3cb3ab5273/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Dec 06 09:56:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/38bb679519899423a10fd5aec53519d66c5cf90e4dcb5edc1f193a3cb3ab5273/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 09:56:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/38bb679519899423a10fd5aec53519d66c5cf90e4dcb5edc1f193a3cb3ab5273/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Dec 06 09:56:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/38bb679519899423a10fd5aec53519d66c5cf90e4dcb5edc1f193a3cb3ab5273/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.2.0.compute-0.dfwxck-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Dec 06 09:56:12 compute-0 podman[213758]: 2025-12-06 09:56:12.675263539 +0000 UTC m=+0.028804238 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 06 09:56:12 compute-0 podman[213758]: 2025-12-06 09:56:12.787810537 +0000 UTC m=+0.141351216 container init 5d860964edcc2ae02d2071e13089b9e2f2642e3853757c3cef05b9c593c1e765 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Dec 06 09:56:12 compute-0 podman[213758]: 2025-12-06 09:56:12.79312361 +0000 UTC m=+0.146664269 container start 5d860964edcc2ae02d2071e13089b9e2f2642e3853757c3cef05b9c593c1e765 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 09:56:12 compute-0 bash[213758]: 5d860964edcc2ae02d2071e13089b9e2f2642e3853757c3cef05b9c593c1e765
Dec 06 09:56:12 compute-0 systemd[1]: Started Ceph nfs.cephfs.2.0.compute-0.dfwxck for 5ecd3f74-dade-5fc4-92ce-8950ae424258.
Dec 06 09:56:12 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:56:12 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Dec 06 09:56:12 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:56:12 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Dec 06 09:56:12 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:56:12 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Dec 06 09:56:12 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:56:12 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Dec 06 09:56:12 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:56:12 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Dec 06 09:56:12 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:56:12 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Dec 06 09:56:12 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:56:12 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Dec 06 09:56:12 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:56:12 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 06 09:56:13 compute-0 ceph-mon[74327]: pgmap v441: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:56:13 compute-0 sudo[213720]: pam_unix(sudo:session): session closed for user root
Dec 06 09:56:14 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v442: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Dec 06 09:56:14 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:56:14 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 09:56:14 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:56:14.166 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 09:56:14 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:56:14 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 09:56:14 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:56:14.184 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 09:56:14 compute-0 sudo[213972]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ohadkyaetqpqdwhncghkrjkdzseclqnu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014974.2608898-2927-50026710614822/AnsiballZ_copy.py'
Dec 06 09:56:14 compute-0 dbus-broker-launch[771]: avc:  op=load_policy lsm=selinux seqno=15 res=1
Dec 06 09:56:14 compute-0 sudo[213972]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:56:14 compute-0 python3.9[213974]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/servercert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:56:14 compute-0 sudo[213972]: pam_unix(sudo:session): session closed for user root
Dec 06 09:56:14 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 09:56:15 compute-0 ceph-mon[74327]: pgmap v442: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Dec 06 09:56:15 compute-0 sudo[214125]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hjadclfmsjyezecqehuethxqibdryivl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014974.9824104-2927-160022564051944/AnsiballZ_copy.py'
Dec 06 09:56:15 compute-0 sudo[214125]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:56:15 compute-0 python3.9[214127]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/serverkey.pem group=root mode=0600 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:56:15 compute-0 sudo[214125]: pam_unix(sudo:session): session closed for user root
Dec 06 09:56:16 compute-0 sudo[214278]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zrutjfzwgdwvksndvzhknuyhoegkqqoz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014975.7721715-2927-264483253138307/AnsiballZ_copy.py'
Dec 06 09:56:16 compute-0 sudo[214278]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:56:16 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v443: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec 06 09:56:16 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:56:16 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 09:56:16 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:56:16.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 09:56:16 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:56:16 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 09:56:16 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:56:16.187 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 09:56:16 compute-0 python3.9[214280]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/clientcert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:56:16 compute-0 sudo[214278]: pam_unix(sudo:session): session closed for user root
Dec 06 09:56:16 compute-0 sudo[214430]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zmgqqbrjgvctvdfupmksrizmziypmios ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014976.5180576-2927-189505034549196/AnsiballZ_copy.py'
Dec 06 09:56:16 compute-0 sudo[214430]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:56:17 compute-0 python3.9[214432]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/clientkey.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:56:17 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:56:17.065Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 06 09:56:17 compute-0 sudo[214430]: pam_unix(sudo:session): session closed for user root
Dec 06 09:56:17 compute-0 ceph-mon[74327]: pgmap v443: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec 06 09:56:17 compute-0 sudo[214584]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kjeylkeuafwwtxxagnaxjcegussrdtlk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014977.269747-2927-133540030608167/AnsiballZ_copy.py'
Dec 06 09:56:17 compute-0 sudo[214584]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:56:17 compute-0 python3.9[214586]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/CA/cacert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:56:17 compute-0 sudo[214584]: pam_unix(sudo:session): session closed for user root
Dec 06 09:56:18 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v444: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 426 B/s wr, 1 op/s
Dec 06 09:56:18 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:56:18 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.002000054s ======
Dec 06 09:56:18 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:56:18.170 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000054s
Dec 06 09:56:18 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:56:18 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:56:18 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:56:18.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:56:18 compute-0 sudo[214736]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fphxxceyfmuncalqgtcjjzcxezbzvhko ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014978.0558023-3035-76650934292195/AnsiballZ_copy.py'
Dec 06 09:56:18 compute-0 sudo[214736]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:56:18 compute-0 python3.9[214738]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:56:18 compute-0 sudo[214736]: pam_unix(sudo:session): session closed for user root
Dec 06 09:56:18 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:56:18 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 06 09:56:18 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:56:18 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 06 09:56:18 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:56:18 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 06 09:56:19 compute-0 sudo[214889]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-suczdqakvzmddhyklbmhutrjffcyklvs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014978.8782332-3035-65604013854213/AnsiballZ_copy.py'
Dec 06 09:56:19 compute-0 sudo[214889]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:56:19 compute-0 ceph-mon[74327]: pgmap v444: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 426 B/s wr, 1 op/s
Dec 06 09:56:19 compute-0 python3.9[214891]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:56:19 compute-0 sudo[214889]: pam_unix(sudo:session): session closed for user root
Dec 06 09:56:19 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 09:56:20 compute-0 sudo[215042]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ncykvmpflljlrtzvikdgppandehdcmbg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014979.6652923-3035-203220455238917/AnsiballZ_copy.py'
Dec 06 09:56:20 compute-0 sudo[215042]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:56:20 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v445: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 426 B/s wr, 1 op/s
Dec 06 09:56:20 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:56:20 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 09:56:20 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:56:20.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 09:56:20 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:56:20 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:56:20 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:56:20.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:56:20 compute-0 python3.9[215044]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:56:20 compute-0 sudo[215042]: pam_unix(sudo:session): session closed for user root
Dec 06 09:56:20 compute-0 podman[215045]: 2025-12-06 09:56:20.478654642 +0000 UTC m=+0.152569308 container health_status ed08b04f63d3695f0aa2f04fcc706897af4aa24f286fa3cd10a4323ee2c868ab (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Dec 06 09:56:20 compute-0 sudo[215220]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zckyeetonxzmhocaluhyzdmcesmcbxqa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014980.5060065-3035-253513697240197/AnsiballZ_copy.py'
Dec 06 09:56:20 compute-0 sudo[215220]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:56:20 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:09:56:20] "GET /metrics HTTP/1.1" 200 48266 "" "Prometheus/2.51.0"
Dec 06 09:56:20 compute-0 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:09:56:20] "GET /metrics HTTP/1.1" 200 48266 "" "Prometheus/2.51.0"
Dec 06 09:56:21 compute-0 python3.9[215222]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:56:21 compute-0 sudo[215220]: pam_unix(sudo:session): session closed for user root
Dec 06 09:56:21 compute-0 ceph-mon[74327]: pgmap v445: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 426 B/s wr, 1 op/s
Dec 06 09:56:21 compute-0 sudo[215374]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cbzetijsvtagrzmhzkgwawwortpzmsak ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014981.2005386-3035-194982217452039/AnsiballZ_copy.py'
Dec 06 09:56:21 compute-0 sudo[215374]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:56:21 compute-0 python3.9[215376]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/ca-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:56:21 compute-0 sudo[215374]: pam_unix(sudo:session): session closed for user root
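Taken together, the three ansible-ansible.legacy.copy tasks above stage the libvirt client TLS material under /etc/pki/qemu. A minimal shell sketch of the equivalent operations; owner, group, mode, and paths are taken from the logged task parameters, while the use of install(1) itself is illustrative:

    # Equivalent of the three remote_src copy tasks (paths/modes from the log)
    install -o root -g qemu -m 0640 /var/lib/openstack/certs/libvirt/default/tls.crt /etc/pki/qemu/client-cert.pem
    install -o root -g qemu -m 0640 /var/lib/openstack/certs/libvirt/default/tls.key /etc/pki/qemu/client-key.pem
    install -o root -g qemu -m 0640 /var/lib/openstack/certs/libvirt/default/ca.crt  /etc/pki/qemu/ca-cert.pem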
Dec 06 09:56:22 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v446: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 426 B/s wr, 1 op/s
Dec 06 09:56:22 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-haproxy-nfs-cephfs-compute-0-fzuvue[96127]: [WARNING] 339/095622 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Dec 06 09:56:22 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:56:22 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.002000053s ======
Dec 06 09:56:22 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:56:22.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000053s
Dec 06 09:56:22 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:56:22 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:56:22 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:56:22.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:56:22 compute-0 sudo[215526]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uewctklybntfwslhxyudsjjyheqbqrcl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014981.9134393-3143-242749104465794/AnsiballZ_systemd.py'
Dec 06 09:56:22 compute-0 sudo[215526]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:56:22 compute-0 python3.9[215528]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtlogd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 06 09:56:22 compute-0 systemd[1]: Reloading.
Dec 06 09:56:22 compute-0 systemd-sysv-generator[215558]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 09:56:22 compute-0 systemd-rc-local-generator[215555]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 09:56:22 compute-0 systemd[1]: Starting libvirt logging daemon socket...
Dec 06 09:56:22 compute-0 systemd[1]: Listening on libvirt logging daemon socket.
Dec 06 09:56:22 compute-0 systemd[1]: Starting libvirt logging daemon admin socket...
Dec 06 09:56:22 compute-0 systemd[1]: Listening on libvirt logging daemon admin socket.
Dec 06 09:56:22 compute-0 systemd[1]: Starting libvirt logging daemon...
Dec 06 09:56:23 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:56:22 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 06 09:56:23 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:56:22 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 06 09:56:23 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:56:22 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 06 09:56:23 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:56:23 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 06 09:56:23 compute-0 systemd[1]: Started libvirt logging daemon.
Dec 06 09:56:23 compute-0 sudo[215526]: pam_unix(sudo:session): session closed for user root
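The ansible-ansible.builtin.systemd task above (daemon_reload=True, state=restarted) corresponds to the following systemctl sequence, which lines up with the "Reloading." and "Starting libvirt logging daemon..." messages that follow it:

    # What the module performs for virtlogd; the same pattern repeats below
    # for virtnodedevd, virtproxyd, virtqemud and virtsecretd
    systemctl daemon-reload
    systemctl restart virtlogd.service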
Dec 06 09:56:23 compute-0 ceph-mon[74327]: pgmap v446: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 426 B/s wr, 1 op/s
Dec 06 09:56:23 compute-0 sudo[215721]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tbfdrqvsgykiffnqzadojyteeorplphn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014983.3038008-3143-137450373016057/AnsiballZ_systemd.py'
Dec 06 09:56:23 compute-0 sudo[215721]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:56:23 compute-0 ceph-mgr[74618]: [balancer INFO root] Optimize plan auto_2025-12-06_09:56:23
Dec 06 09:56:23 compute-0 ceph-mgr[74618]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 06 09:56:23 compute-0 ceph-mgr[74618]: [balancer INFO root] do_upmap
Dec 06 09:56:23 compute-0 ceph-mgr[74618]: [balancer INFO root] pools ['volumes', '.mgr', 'cephfs.cephfs.data', 'vms', 'images', 'default.rgw.control', 'default.rgw.log', '.rgw.root', 'backups', 'cephfs.cephfs.meta', '.nfs', 'default.rgw.meta']
Dec 06 09:56:23 compute-0 ceph-mgr[74618]: [balancer INFO root] prepared 0/10 upmap changes
Dec 06 09:56:23 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 06 09:56:23 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 09:56:23 compute-0 python3.9[215723]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtnodedevd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 06 09:56:23 compute-0 systemd[1]: Reloading.
Dec 06 09:56:23 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 09:56:23 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 09:56:23 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 09:56:23 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 09:56:23 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 09:56:23 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 09:56:24 compute-0 systemd-rc-local-generator[215752]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 09:56:24 compute-0 systemd-sysv-generator[215756]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 09:56:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] _maybe_adjust
Dec 06 09:56:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 09:56:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 06 09:56:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 09:56:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 09:56:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 09:56:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 09:56:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 09:56:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 09:56:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 09:56:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 09:56:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 09:56:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Dec 06 09:56:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 09:56:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 09:56:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 09:56:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec 06 09:56:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 09:56:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 06 09:56:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 09:56:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec 06 09:56:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 09:56:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 09:56:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 09:56:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
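The balancer and pg_autoscaler lines above are routine ceph-mgr module passes: the balancer prepared 0/10 upmap changes, and every pool's pg target is already quantized to its current value. To inspect the same state interactively, the standard ceph CLI offers:

    # Operator-side views of the two mgr modules logged above (standard ceph CLI)
    ceph balancer status            # mode upmap, last optimize plan
    ceph osd pool autoscale-status  # per-pool pg targets, bias, target ratios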
Dec 06 09:56:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 06 09:56:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 09:56:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 09:56:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 09:56:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 09:56:24 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v447: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 426 B/s wr, 1 op/s
Dec 06 09:56:24 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:56:24 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 09:56:24 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:56:24.178 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 09:56:24 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:56:24 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 09:56:24 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:56:24.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 09:56:24 compute-0 systemd[1]: Starting SETroubleshoot daemon for processing new SELinux denial logs...
Dec 06 09:56:24 compute-0 systemd[1]: Starting libvirt nodedev daemon socket...
Dec 06 09:56:24 compute-0 systemd[1]: Listening on libvirt nodedev daemon socket.
Dec 06 09:56:24 compute-0 systemd[1]: Starting libvirt nodedev daemon admin socket...
Dec 06 09:56:24 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 09:56:24 compute-0 systemd[1]: Starting libvirt nodedev daemon read-only socket...
Dec 06 09:56:24 compute-0 systemd[1]: Listening on libvirt nodedev daemon read-only socket.
Dec 06 09:56:24 compute-0 systemd[1]: Listening on libvirt nodedev daemon admin socket.
Dec 06 09:56:24 compute-0 systemd[1]: Starting libvirt nodedev daemon...
Dec 06 09:56:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 06 09:56:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 09:56:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 09:56:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 09:56:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 09:56:24 compute-0 systemd[1]: Started libvirt nodedev daemon.
Dec 06 09:56:24 compute-0 sudo[215721]: pam_unix(sudo:session): session closed for user root
Dec 06 09:56:24 compute-0 systemd[1]: Started SETroubleshoot daemon for processing new SELinux denial logs.
Dec 06 09:56:24 compute-0 systemd[1]: Created slice Slice /system/dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged.
Dec 06 09:56:24 compute-0 systemd[1]: Started dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service.
Dec 06 09:56:24 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 09:56:25 compute-0 sudo[215945]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wootpnctmrznwuetwjqgiczjlfxhtpsr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014984.6573856-3143-1881056364918/AnsiballZ_systemd.py'
Dec 06 09:56:25 compute-0 sudo[215945]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:56:25 compute-0 python3.9[215948]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtproxyd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 06 09:56:25 compute-0 systemd[1]: Reloading.
Dec 06 09:56:25 compute-0 systemd-rc-local-generator[215977]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 09:56:25 compute-0 systemd-sysv-generator[215980]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 09:56:25 compute-0 ceph-mon[74327]: pgmap v447: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 426 B/s wr, 1 op/s
Dec 06 09:56:25 compute-0 systemd[1]: Starting libvirt proxy daemon admin socket...
Dec 06 09:56:25 compute-0 systemd[1]: Starting libvirt proxy daemon read-only socket...
Dec 06 09:56:25 compute-0 systemd[1]: Listening on libvirt proxy daemon admin socket.
Dec 06 09:56:25 compute-0 systemd[1]: Listening on libvirt proxy daemon read-only socket.
Dec 06 09:56:25 compute-0 systemd[1]: Starting libvirt proxy daemon...
Dec 06 09:56:25 compute-0 systemd[1]: Started libvirt proxy daemon.
Dec 06 09:56:25 compute-0 setroubleshoot[215760]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability. For complete SELinux messages run: sealert -l 74c981a9-28dd-4b72-bb89-6fef8458e1c1
Dec 06 09:56:25 compute-0 setroubleshoot[215760]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability.
                                                  
                                                  *****  Plugin dac_override (91.4 confidence) suggests   **********************
                                                  
                                                  If you want to help identify if domain needs this access or you have a file with the wrong permissions on your system
                                                  Then turn on full auditing to get path information about the offending file and generate the error again.
                                                  Do
                                                  
                                                  Turn on full auditing
                                                  # auditctl -w /etc/shadow -p w
                                                  Try to recreate AVC. Then execute
                                                  # ausearch -m avc -ts recent
                                                  If you see PATH record check ownership/permissions on file, and fix it,
                                                  otherwise report as a bugzilla.
                                                  
                                                  *****  Plugin catchall (9.59 confidence) suggests   **************************
                                                  
                                                  If you believe that virtlogd should have the dac_read_search capability by default.
                                                  Then you should report this as a bug.
                                                  You can generate a local policy module to allow this access.
                                                  Do
                                                  allow this access for now by executing:
                                                  # ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd
                                                  # semodule -X 300 -i my-virtlogd.pp
                                                  
Dec 06 09:56:25 compute-0 sudo[215945]: pam_unix(sudo:session): session closed for user root
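Collected from the setroubleshoot plugin suggestions above, the two remediation paths as runnable sequences. The commands are the plugin's own (including its example module name my-virtlogd); that restarting virtlogd re-triggers the AVC is an assumption:

    # Path 1: identify the offending file via full auditing, then re-trigger the AVC
    auditctl -w /etc/shadow -p w            # the plugin's hint to turn on full auditing
    systemctl restart virtlogd.service      # assumption: the restart reproduces the denial
    ausearch -m avc -ts recent              # look for a PATH record; fix ownership/permissions
    # Path 2: generate and install a local policy module allowing the access
    ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd
    semodule -X 300 -i my-virtlogd.pp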
Dec 06 09:56:26 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v448: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Dec 06 09:56:26 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:56:26 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:56:26 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:56:26.184 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:56:26 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:56:26 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:56:26 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:56:26.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:56:26 compute-0 sudo[216161]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xdfuwonkrpmbrycijnukpimdronvmzbg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014986.018433-3143-115749798840659/AnsiballZ_systemd.py'
Dec 06 09:56:26 compute-0 sudo[216161]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:56:26 compute-0 python3.9[216163]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtqemud.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 06 09:56:26 compute-0 systemd[1]: Reloading.
Dec 06 09:56:26 compute-0 systemd-rc-local-generator[216188]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 09:56:26 compute-0 systemd-sysv-generator[216191]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 09:56:26 compute-0 systemd[1]: Listening on libvirt locking daemon socket.
Dec 06 09:56:26 compute-0 systemd[1]: Starting libvirt QEMU daemon socket...
Dec 06 09:56:26 compute-0 systemd[1]: Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Dec 06 09:56:26 compute-0 systemd[1]: Starting Virtual Machine and Container Registration Service...
Dec 06 09:56:26 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:56:26 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 06 09:56:26 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:56:26 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 06 09:56:26 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:56:26 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 06 09:56:26 compute-0 systemd[1]: Listening on libvirt QEMU daemon socket.
Dec 06 09:56:27 compute-0 systemd[1]: Starting libvirt QEMU daemon admin socket...
Dec 06 09:56:27 compute-0 systemd[1]: Starting libvirt QEMU daemon read-only socket...
Dec 06 09:56:27 compute-0 systemd[1]: Listening on libvirt QEMU daemon admin socket.
Dec 06 09:56:27 compute-0 systemd[1]: Listening on libvirt QEMU daemon read-only socket.
Dec 06 09:56:27 compute-0 systemd[1]: Started Virtual Machine and Container Registration Service.
Dec 06 09:56:27 compute-0 systemd[1]: Starting libvirt QEMU daemon...
Dec 06 09:56:27 compute-0 systemd[1]: Started libvirt QEMU daemon.
Dec 06 09:56:27 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:56:27.068Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 06 09:56:27 compute-0 sudo[216161]: pam_unix(sudo:session): session closed for user root
Dec 06 09:56:27 compute-0 sudo[216386]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-imhftwegnzdvhzrkbipyvusifygukthm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014987.2402925-3143-144320943252995/AnsiballZ_systemd.py'
Dec 06 09:56:27 compute-0 sudo[216386]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:56:27 compute-0 podman[216351]: 2025-12-06 09:56:27.628751186 +0000 UTC m=+0.078667924 container health_status ec1e66d2175087ad7a28a9a6b5a784e30128df3f5e268959d009e364cd5120f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Dec 06 09:56:27 compute-0 ceph-mon[74327]: pgmap v448: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Dec 06 09:56:27 compute-0 python3.9[216390]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtsecretd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 06 09:56:27 compute-0 systemd[1]: Reloading.
Dec 06 09:56:28 compute-0 systemd-sysv-generator[216423]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 09:56:28 compute-0 systemd-rc-local-generator[216417]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 09:56:28 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v449: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 767 B/s wr, 3 op/s
Dec 06 09:56:28 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:56:28 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 09:56:28 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:56:28.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 09:56:28 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:56:28 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:56:28 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:56:28.205 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:56:28 compute-0 systemd[1]: Starting libvirt secret daemon socket...
Dec 06 09:56:28 compute-0 systemd[1]: Listening on libvirt secret daemon socket.
Dec 06 09:56:28 compute-0 systemd[1]: Starting libvirt secret daemon admin socket...
Dec 06 09:56:28 compute-0 systemd[1]: Starting libvirt secret daemon read-only socket...
Dec 06 09:56:28 compute-0 systemd[1]: Listening on libvirt secret daemon admin socket.
Dec 06 09:56:28 compute-0 systemd[1]: Listening on libvirt secret daemon read-only socket.
Dec 06 09:56:28 compute-0 systemd[1]: Starting libvirt secret daemon...
Dec 06 09:56:28 compute-0 systemd[1]: Started libvirt secret daemon.
Dec 06 09:56:28 compute-0 sudo[216386]: pam_unix(sudo:session): session closed for user root
Dec 06 09:56:28 compute-0 sudo[216479]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 09:56:28 compute-0 sudo[216479]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:56:28 compute-0 sudo[216479]: pam_unix(sudo:session): session closed for user root
Dec 06 09:56:29 compute-0 sudo[216630]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qziwljfstgtkkeudhiqfmkljrpkmjdkd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014988.7794847-3254-198170043134278/AnsiballZ_file.py'
Dec 06 09:56:29 compute-0 sudo[216630]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:56:29 compute-0 python3.9[216632]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:56:29 compute-0 sudo[216630]: pam_unix(sudo:session): session closed for user root
Dec 06 09:56:29 compute-0 ceph-mon[74327]: pgmap v449: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 767 B/s wr, 3 op/s
Dec 06 09:56:29 compute-0 sudo[216783]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ehrfxpccgwydbbxtuhepmxxirnerqbhw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014989.5885906-3278-33471793213441/AnsiballZ_find.py'
Dec 06 09:56:29 compute-0 sudo[216783]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:56:29 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 09:56:30 compute-0 python3.9[216785]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.conf'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Dec 06 09:56:30 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v450: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 341 B/s wr, 2 op/s
Dec 06 09:56:30 compute-0 sudo[216783]: pam_unix(sudo:session): session closed for user root
Dec 06 09:56:30 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:56:30 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:56:30 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:56:30.189 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:56:30 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:56:30 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 09:56:30 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:56:30.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 09:56:30 compute-0 ceph-mon[74327]: pgmap v450: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 341 B/s wr, 2 op/s
Dec 06 09:56:30 compute-0 sudo[216935]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ophwnjijslqnojvwdoepislbutmwaimx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014990.5354073-3302-1322089206922/AnsiballZ_command.py'
Dec 06 09:56:30 compute-0 sudo[216935]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:56:30 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:09:56:30] "GET /metrics HTTP/1.1" 200 48264 "" "Prometheus/2.51.0"
Dec 06 09:56:30 compute-0 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:09:56:30] "GET /metrics HTTP/1.1" 200 48264 "" "Prometheus/2.51.0"
Dec 06 09:56:31 compute-0 python3.9[216937]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail;
                                             echo ceph
                                             awk -F '=' '/fsid/ {print $2}' /var/lib/openstack/config/ceph/ceph.conf | xargs
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 09:56:31 compute-0 sudo[216935]: pam_unix(sudo:session): session closed for user root
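The pipeline in the command task above pulls the cluster fsid out of ceph.conf; the trailing xargs only trims whitespace. A self-contained demonstration against a minimal sample file: the [global] layout is a typical ceph.conf shape, not the file's logged contents, while the fsid value matches the cluster id seen throughout this log:

    # Build a minimal sample ceph.conf and extract its fsid the same way
    cat > /tmp/ceph.conf.sample <<'EOF'
    [global]
    fsid = 5ecd3f74-dade-5fc4-92ce-8950ae424258
    EOF
    awk -F '=' '/fsid/ {print $2}' /tmp/ceph.conf.sample | xargs
    # prints: 5ecd3f74-dade-5fc4-92ce-8950ae424258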
Dec 06 09:56:31 compute-0 python3.9[217093]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.keyring'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Dec 06 09:56:32 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v451: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 341 B/s wr, 2 op/s
Dec 06 09:56:32 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:56:32 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 09:56:32 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:56:32.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 09:56:32 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:56:32 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:56:32 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:56:32.211 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:56:32 compute-0 python3.9[217243]: ansible-ansible.legacy.stat Invoked with path=/tmp/secret.xml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:56:33 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:56:33 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Dec 06 09:56:33 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:56:33 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Dec 06 09:56:33 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:56:33 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Dec 06 09:56:33 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:56:33 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Dec 06 09:56:33 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:56:33 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Dec 06 09:56:33 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:56:33 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Dec 06 09:56:33 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:56:33 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Dec 06 09:56:33 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:56:33 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec 06 09:56:33 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:56:33 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec 06 09:56:33 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:56:33 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec 06 09:56:33 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:56:33 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Dec 06 09:56:33 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:56:33 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec 06 09:56:33 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:56:33 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Dec 06 09:56:33 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:56:33 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Dec 06 09:56:33 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:56:33 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Dec 06 09:56:33 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:56:33 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Dec 06 09:56:33 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:56:33 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Dec 06 09:56:33 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:56:33 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Dec 06 09:56:33 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:56:33 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Dec 06 09:56:33 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:56:33 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Dec 06 09:56:33 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:56:33 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Dec 06 09:56:33 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:56:33 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Dec 06 09:56:33 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:56:33 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Dec 06 09:56:33 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:56:33 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Dec 06 09:56:33 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:56:33 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Dec 06 09:56:33 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:56:33 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Dec 06 09:56:33 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:56:33 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Dec 06 09:56:33 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:56:33 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 06 09:56:33 compute-0 ceph-mon[74327]: pgmap v451: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 341 B/s wr, 2 op/s
Dec 06 09:56:33 compute-0 python3.9[217378]: ansible-ansible.legacy.copy Invoked with dest=/tmp/secret.xml mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1765014992.3738215-3359-269067924263915/.source.xml follow=False _original_basename=secret.xml.j2 checksum=f7c948a7651e1e704e9fb6c67bea136c2b7876ec backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:56:33 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:56:33 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4754000df0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:56:33 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:56:33 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4748001c00 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:56:34 compute-0 sudo[217532]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qflswwhskvplgjrnjjinuniecvmrxldb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014993.7051656-3404-173353251503792/AnsiballZ_command.py'
Dec 06 09:56:34 compute-0 sudo[217532]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:56:34 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v452: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 597 B/s wr, 2 op/s
Dec 06 09:56:34 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:56:34 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 09:56:34 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:56:34.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 09:56:34 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:56:34 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 09:56:34 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:56:34.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 09:56:34 compute-0 python3.9[217534]: ansible-ansible.legacy.command Invoked with _raw_params=virsh secret-undefine 5ecd3f74-dade-5fc4-92ce-8950ae424258
                                             virsh secret-define --file /tmp/secret.xml
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 09:56:34 compute-0 polkitd[43373]: Registered Authentication Agent for unix-process:217536:336478 (system bus name :1.2783 [pkttyagent --process 217536 --notify-fd 4 --fallback], object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8)
Dec 06 09:56:34 compute-0 polkitd[43373]: Unregistered Authentication Agent for unix-process:217536:336478 (system bus name :1.2783, object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8) (disconnected from bus)
Dec 06 09:56:34 compute-0 polkitd[43373]: Registered Authentication Agent for unix-process:217535:336478 (system bus name :1.2784 [pkttyagent --process 217535 --notify-fd 4 --fallback], object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8)
Dec 06 09:56:34 compute-0 polkitd[43373]: Unregistered Authentication Agent for unix-process:217535:336478 (system bus name :1.2784, object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8) (disconnected from bus)
Dec 06 09:56:34 compute-0 sudo[217532]: pam_unix(sudo:session): session closed for user root
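The command task above rotates the libvirt secret for the ceph cluster: undefine the existing UUID, then re-define it from /tmp/secret.xml (whose contents were not logged; the earlier copy task recorded only a checksum). A sketch assuming the conventional libvirt ceph-secret XML layout; only the UUID comes from the log, and the usage name is a placeholder:

    # Hypothetical /tmp/secret.xml in the standard libvirt ceph-secret form
    cat > /tmp/secret.xml <<'EOF'
    <secret ephemeral='no' private='no'>
      <uuid>5ecd3f74-dade-5fc4-92ce-8950ae424258</uuid>
      <usage type='ceph'>
        <name>client.openstack secret</name>
      </usage>
    </secret>
    EOF
    virsh secret-undefine 5ecd3f74-dade-5fc4-92ce-8950ae424258   # drop the old definition
    virsh secret-define --file /tmp/secret.xml                   # re-create it from the XML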
Dec 06 09:56:34 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:56:34 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4748001c00 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:56:35 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 09:56:35 compute-0 python3.9[217696]: ansible-ansible.builtin.file Invoked with path=/tmp/secret.xml state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:56:35 compute-0 ceph-mon[74327]: pgmap v452: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 597 B/s wr, 2 op/s
Dec 06 09:56:35 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:56:35 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4724000b60 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:56:35 compute-0 sudo[217848]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hxkoygpmyahpjprdyppbgnrpzewfaqns ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014995.36054-3452-41857302231400/AnsiballZ_command.py'
Dec 06 09:56:35 compute-0 sudo[217848]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:56:35 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:56:35 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4730000fa0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:56:35 compute-0 systemd[1]: dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service: Deactivated successfully.
Dec 06 09:56:35 compute-0 systemd[1]: dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service: Consumed 1.041s CPU time.
Dec 06 09:56:35 compute-0 systemd[1]: setroubleshootd.service: Deactivated successfully.
Dec 06 09:56:36 compute-0 sudo[217848]: pam_unix(sudo:session): session closed for user root
Dec 06 09:56:36 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:56:36 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 06 09:56:36 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:56:36 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 06 09:56:36 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v453: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 597 B/s wr, 2 op/s
Dec 06 09:56:36 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:56:36 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 09:56:36 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:56:36.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 09:56:36 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:56:36 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:56:36 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:56:36.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:56:36 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-haproxy-nfs-cephfs-compute-0-fzuvue[96127]: [WARNING] 339/095636 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Dec 06 09:56:36 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:56:36 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f472c000d00 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:56:36 compute-0 sudo[218001]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rjvmuselkwizkzipyqckgzgbmvkzbulr ; FSID=5ecd3f74-dade-5fc4-92ce-8950ae424258 KEY=AQA7+TNpAAAAABAABZDZy1tS5Qay3mTps8dAWg== /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014996.2201443-3476-4165014044579/AnsiballZ_command.py'
Dec 06 09:56:36 compute-0 sudo[218001]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:56:36 compute-0 polkitd[43373]: Registered Authentication Agent for unix-process:218004:336736 (system bus name :1.2787 [pkttyagent --process 218004 --notify-fd 4 --fallback], object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8)
Dec 06 09:56:36 compute-0 polkitd[43373]: Unregistered Authentication Agent for unix-process:218004:336736 (system bus name :1.2787, object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8) (disconnected from bus)
Dec 06 09:56:36 compute-0 sudo[218001]: pam_unix(sudo:session): session closed for user root
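The task above is handed FSID and KEY through its environment; the AnsiballZ payload itself is not logged. The step that conventionally follows a secret-define is attaching the cephx key to the secret, which would look like this (an assumption, not a logged command):

    # Hypothetical: set the base64 cephx key on the secret defined earlier,
    # using the FSID/KEY variables visible in the sudo line above
    virsh secret-set-value --secret "$FSID" --base64 "$KEY"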
Dec 06 09:56:37 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:56:37.068Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 06 09:56:37 compute-0 ceph-mon[74327]: pgmap v453: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 597 B/s wr, 2 op/s
Dec 06 09:56:37 compute-0 sudo[218160]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vpqsudqanwvcfsliosahhbxjulyhbsyi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014997.0679216-3500-38050598283298/AnsiballZ_copy.py'
Dec 06 09:56:37 compute-0 sudo[218160]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:56:37 compute-0 python3.9[218162]: ansible-ansible.legacy.copy Invoked with dest=/etc/ceph/ceph.conf group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/config/ceph/ceph.conf backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:56:37 compute-0 sudo[218160]: pam_unix(sudo:session): session closed for user root
Dec 06 09:56:37 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:56:37 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4724000b60 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:56:37 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:56:37 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4748002910 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:56:38 compute-0 sudo[218313]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-snigcdznegdkgxwgimfuhkhbhwdjpfeo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014997.829946-3524-90202119480329/AnsiballZ_stat.py'
Dec 06 09:56:38 compute-0 sudo[218313]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:56:38 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v454: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 3.5 KiB/s rd, 1.3 KiB/s wr, 5 op/s
Dec 06 09:56:38 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:56:38 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:56:38 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:56:38.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:56:38 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:56:38 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:56:38 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:56:38.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:56:38 compute-0 python3.9[218315]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/libvirt.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:56:38 compute-0 sudo[218313]: pam_unix(sudo:session): session closed for user root
Dec 06 09:56:38 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:56:38 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4730001ac0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:56:38 compute-0 sudo[218436]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ajgmvljlowvbhaabfioyvxdamoltwiio ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014997.829946-3524-90202119480329/AnsiballZ_copy.py'
Dec 06 09:56:38 compute-0 sudo[218436]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:56:38 compute-0 python3.9[218438]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/libvirt.yaml mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1765014997.829946-3524-90202119480329/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=5ca83b1310a74c5e48c4c3d4640e1cb8fdac1061 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:56:38 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 06 09:56:38 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 09:56:38 compute-0 sudo[218436]: pam_unix(sudo:session): session closed for user root
Dec 06 09:56:39 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:56:39 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Dec 06 09:56:39 compute-0 ceph-mon[74327]: pgmap v454: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 3.5 KiB/s rd, 1.3 KiB/s wr, 5 op/s
Dec 06 09:56:39 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 09:56:39 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:56:39 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f472c001820 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:56:39 compute-0 sudo[218590]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gtewkndkjzsacorlrqftgmtxeovrlnvx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014999.4628932-3572-115029388633597/AnsiballZ_file.py'
Dec 06 09:56:39 compute-0 sudo[218590]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:56:39 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:56:39 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4724001b20 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:56:39 compute-0 python3.9[218592]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:56:39 compute-0 sudo[218590]: pam_unix(sudo:session): session closed for user root
Dec 06 09:56:40 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 09:56:40 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v455: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 1023 B/s wr, 3 op/s
Dec 06 09:56:40 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:56:40 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 09:56:40 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:56:40.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 09:56:40 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:56:40 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:56:40 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:56:40.224 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:56:40 compute-0 sudo[218742]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tsxulwubnpqqhrkhkjxqsjujhumlrylq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765015000.1951857-3596-25956360843894/AnsiballZ_stat.py'
Dec 06 09:56:40 compute-0 sudo[218742]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:56:40 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:56:40 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4748002910 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:56:40 compute-0 python3.9[218744]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:56:40 compute-0 sudo[218742]: pam_unix(sudo:session): session closed for user root
Dec 06 09:56:40 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:09:56:40] "GET /metrics HTTP/1.1" 200 48267 "" "Prometheus/2.51.0"
Dec 06 09:56:40 compute-0 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:09:56:40] "GET /metrics HTTP/1.1" 200 48267 "" "Prometheus/2.51.0"
Dec 06 09:56:40 compute-0 sudo[218820]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gifcrmfddyekhbebhdrgxwjvdqjqxvhm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765015000.1951857-3596-25956360843894/AnsiballZ_file.py'
Dec 06 09:56:40 compute-0 sudo[218820]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:56:41 compute-0 python3.9[218822]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:56:41 compute-0 sudo[218820]: pam_unix(sudo:session): session closed for user root
Dec 06 09:56:41 compute-0 ceph-mon[74327]: pgmap v455: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 1023 B/s wr, 3 op/s
Dec 06 09:56:41 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:56:41 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4730001ac0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:56:41 compute-0 sudo[218974]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ftmlkpigvoipqdxpkmjuktvtelimrgce ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765015001.4437864-3632-143144681229747/AnsiballZ_stat.py'
Dec 06 09:56:41 compute-0 sudo[218974]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:56:41 compute-0 python3.9[218976]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:56:41 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:56:41 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f472c001820 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:56:41 compute-0 sudo[218974]: pam_unix(sudo:session): session closed for user root
Dec 06 09:56:42 compute-0 sudo[219052]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kmxjdpyzftrsdvwiqbjovrsvhwoiifap ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765015001.4437864-3632-143144681229747/AnsiballZ_file.py'
Dec 06 09:56:42 compute-0 sudo[219052]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:56:42 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v456: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 1023 B/s wr, 3 op/s
Dec 06 09:56:42 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-haproxy-nfs-cephfs-compute-0-fzuvue[96127]: [WARNING] 339/095642 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Dec 06 09:56:42 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:56:42 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 09:56:42 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:56:42.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 09:56:42 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:56:42 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:56:42 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:56:42.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:56:42 compute-0 python3.9[219054]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.eah_39xm recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:56:42 compute-0 sudo[219052]: pam_unix(sudo:session): session closed for user root
Dec 06 09:56:42 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:56:42 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4724001b20 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:56:43 compute-0 sudo[219204]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zxxuqglrxkiqhvtlpsbvazjmtdmlatfr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765015002.7480788-3668-163999944889425/AnsiballZ_stat.py'
Dec 06 09:56:43 compute-0 sudo[219204]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:56:43 compute-0 python3.9[219207]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:56:43 compute-0 sudo[219204]: pam_unix(sudo:session): session closed for user root
Dec 06 09:56:43 compute-0 ceph-mon[74327]: pgmap v456: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 1023 B/s wr, 3 op/s
Dec 06 09:56:43 compute-0 ceph-mon[74327]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #36. Immutable memtables: 0.
Dec 06 09:56:43 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-09:56:43.369899) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 06 09:56:43 compute-0 ceph-mon[74327]: rocksdb: [db/flush_job.cc:856] [default] [JOB 15] Flushing memtable with next log file: 36
Dec 06 09:56:43 compute-0 ceph-mon[74327]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765015003369985, "job": 15, "event": "flush_started", "num_memtables": 1, "num_entries": 509, "num_deletes": 251, "total_data_size": 595705, "memory_usage": 604872, "flush_reason": "Manual Compaction"}
Dec 06 09:56:43 compute-0 ceph-mon[74327]: rocksdb: [db/flush_job.cc:885] [default] [JOB 15] Level-0 flush table #37: started
Dec 06 09:56:43 compute-0 ceph-mon[74327]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765015003375433, "cf_name": "default", "job": 15, "event": "table_file_creation", "file_number": 37, "file_size": 589687, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 17231, "largest_seqno": 17738, "table_properties": {"data_size": 586862, "index_size": 861, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 901, "raw_key_size": 6542, "raw_average_key_size": 18, "raw_value_size": 581332, "raw_average_value_size": 1665, "num_data_blocks": 39, "num_entries": 349, "num_filter_entries": 349, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765014970, "oldest_key_time": 1765014970, "file_creation_time": 1765015003, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "423e8366-3852-4d2b-aa53-87abab31aff3", "db_session_id": "4WBX5WA2U4DRQ0QUUFCR", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Dec 06 09:56:43 compute-0 ceph-mon[74327]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 15] Flush lasted 5611 microseconds, and 2650 cpu microseconds.
Dec 06 09:56:43 compute-0 ceph-mon[74327]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 09:56:43 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-09:56:43.375522) [db/flush_job.cc:967] [default] [JOB 15] Level-0 flush table #37: 589687 bytes OK
Dec 06 09:56:43 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-09:56:43.375546) [db/memtable_list.cc:519] [default] Level-0 commit table #37 started
Dec 06 09:56:43 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-09:56:43.379065) [db/memtable_list.cc:722] [default] Level-0 commit table #37: memtable #1 done
Dec 06 09:56:43 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-09:56:43.379080) EVENT_LOG_v1 {"time_micros": 1765015003379074, "job": 15, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 06 09:56:43 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-09:56:43.379100) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 06 09:56:43 compute-0 ceph-mon[74327]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 15] Try to delete WAL files size 592839, prev total WAL file size 592839, number of live WAL files 2.
Dec 06 09:56:43 compute-0 ceph-mon[74327]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000033.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 09:56:43 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-09:56:43.379584) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031303034' seq:72057594037927935, type:22 .. '7061786F730031323536' seq:0, type:0; will stop at (end)
Dec 06 09:56:43 compute-0 ceph-mon[74327]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 16] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 06 09:56:43 compute-0 ceph-mon[74327]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 15 Base level 0, inputs: [37(575KB)], [35(13MB)]
Dec 06 09:56:43 compute-0 ceph-mon[74327]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765015003379687, "job": 16, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [37], "files_L6": [35], "score": -1, "input_data_size": 14525704, "oldest_snapshot_seqno": -1}
Dec 06 09:56:43 compute-0 ceph-mon[74327]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 16] Generated table #38: 4867 keys, 12333955 bytes, temperature: kUnknown
Dec 06 09:56:43 compute-0 ceph-mon[74327]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765015003510469, "cf_name": "default", "job": 16, "event": "table_file_creation", "file_number": 38, "file_size": 12333955, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12300683, "index_size": 19978, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12229, "raw_key_size": 123119, "raw_average_key_size": 25, "raw_value_size": 12211574, "raw_average_value_size": 2509, "num_data_blocks": 830, "num_entries": 4867, "num_filter_entries": 4867, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765013861, "oldest_key_time": 0, "file_creation_time": 1765015003, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "423e8366-3852-4d2b-aa53-87abab31aff3", "db_session_id": "4WBX5WA2U4DRQ0QUUFCR", "orig_file_number": 38, "seqno_to_time_mapping": "N/A"}}
Dec 06 09:56:43 compute-0 ceph-mon[74327]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 09:56:43 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-09:56:43.510780) [db/compaction/compaction_job.cc:1663] [default] [JOB 16] Compacted 1@0 + 1@6 files to L6 => 12333955 bytes
Dec 06 09:56:43 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-09:56:43.512516) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 111.0 rd, 94.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.6, 13.3 +0.0 blob) out(11.8 +0.0 blob), read-write-amplify(45.5) write-amplify(20.9) OK, records in: 5377, records dropped: 510 output_compression: NoCompression
Dec 06 09:56:43 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-09:56:43.512541) EVENT_LOG_v1 {"time_micros": 1765015003512528, "job": 16, "event": "compaction_finished", "compaction_time_micros": 130865, "compaction_time_cpu_micros": 47161, "output_level": 6, "num_output_files": 1, "total_output_size": 12333955, "num_input_records": 5377, "num_output_records": 4867, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 06 09:56:43 compute-0 ceph-mon[74327]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000037.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 09:56:43 compute-0 ceph-mon[74327]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765015003512761, "job": 16, "event": "table_file_deletion", "file_number": 37}
Dec 06 09:56:43 compute-0 ceph-mon[74327]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000035.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 09:56:43 compute-0 ceph-mon[74327]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765015003515404, "job": 16, "event": "table_file_deletion", "file_number": 35}
Dec 06 09:56:43 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-09:56:43.379429) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 09:56:43 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-09:56:43.515547) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 09:56:43 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-09:56:43.515556) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 09:56:43 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-09:56:43.515560) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 09:56:43 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-09:56:43.515562) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 09:56:43 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-09:56:43.515565) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 09:56:43 compute-0 sudo[219284]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-loxkxqlbscklzzllftkcrvffseiamkki ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765015002.7480788-3668-163999944889425/AnsiballZ_file.py'
Dec 06 09:56:43 compute-0 sudo[219284]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:56:43 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:56:43 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4748002910 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:56:43 compute-0 python3.9[219286]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:56:43 compute-0 sudo[219284]: pam_unix(sudo:session): session closed for user root
Dec 06 09:56:43 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:56:43 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4730001ac0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:56:44 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v457: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 1023 B/s wr, 3 op/s
Dec 06 09:56:44 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:56:44 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 09:56:44 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:56:44.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 09:56:44 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:56:44 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 09:56:44 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:56:44.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 09:56:44 compute-0 sudo[219436]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dbrjwxajmdrztoyvsowvcqcyrlxcfhcy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765015004.121901-3707-76257900339770/AnsiballZ_command.py'
Dec 06 09:56:44 compute-0 sudo[219436]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:56:44 compute-0 python3.9[219438]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 09:56:44 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:56:44 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f472c001820 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:56:44 compute-0 sudo[219436]: pam_unix(sudo:session): session closed for user root
Dec 06 09:56:45 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 09:56:45 compute-0 sudo[219590]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rwjfrjhqpoafsfdlodhbqwmpnjjcbcgw ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1765015004.9254575-3731-99321966255784/AnsiballZ_edpm_nftables_from_files.py'
Dec 06 09:56:45 compute-0 sudo[219590]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:56:45 compute-0 ceph-mon[74327]: pgmap v457: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 1023 B/s wr, 3 op/s
Dec 06 09:56:45 compute-0 python3[219592]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Dec 06 09:56:45 compute-0 sudo[219590]: pam_unix(sudo:session): session closed for user root
Dec 06 09:56:45 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:56:45 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f472c001820 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:56:45 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:56:45 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4748002910 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:56:46 compute-0 sudo[219743]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uebfuycaadcbhjgvkacehlfbjczggefj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765015005.8130574-3755-214371809502263/AnsiballZ_stat.py'
Dec 06 09:56:46 compute-0 sudo[219743]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:56:46 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v458: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 767 B/s wr, 3 op/s
Dec 06 09:56:46 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:56:46 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 09:56:46 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:56:46.205 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 09:56:46 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:56:46 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 09:56:46 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:56:46.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 09:56:46 compute-0 python3.9[219745]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:56:46 compute-0 sudo[219743]: pam_unix(sudo:session): session closed for user root
Dec 06 09:56:46 compute-0 sudo[219821]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fghawwqobirfkhrhhqlrubvimyntxifq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765015005.8130574-3755-214371809502263/AnsiballZ_file.py'
Dec 06 09:56:46 compute-0 sudo[219821]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:56:46 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:56:46 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4730002f50 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:56:46 compute-0 python3.9[219823]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:56:46 compute-0 sudo[219821]: pam_unix(sudo:session): session closed for user root
Dec 06 09:56:47 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:56:47.069Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec 06 09:56:47 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:56:47.069Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec 06 09:56:47 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:56:47.070Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec 06 09:56:47 compute-0 ceph-mon[74327]: pgmap v458: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 767 B/s wr, 3 op/s
Dec 06 09:56:47 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:56:47 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4724002b10 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:56:47 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:56:47 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f472c0030a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:56:48 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v459: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 767 B/s wr, 3 op/s
Dec 06 09:56:48 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:56:48 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:56:48 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:56:48.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:56:48 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:56:48 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 09:56:48 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:56:48.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 09:56:48 compute-0 sudo[219975]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wpxorheciskkazxevwletjtgdpdylwsd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765015008.1489303-3791-271911048942703/AnsiballZ_stat.py'
Dec 06 09:56:48 compute-0 sudo[219975]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:56:48 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:56:48 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4748002910 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:56:48 compute-0 python3.9[219977]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:56:48 compute-0 sudo[219975]: pam_unix(sudo:session): session closed for user root
Dec 06 09:56:48 compute-0 sudo[220053]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cqyyzwcdcixnlkfyucrmgqbmhxbgzlqz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765015008.1489303-3791-271911048942703/AnsiballZ_file.py'
Dec 06 09:56:48 compute-0 sudo[220053]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:56:48 compute-0 sudo[220054]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 09:56:48 compute-0 sudo[220054]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:56:48 compute-0 sudo[220054]: pam_unix(sudo:session): session closed for user root
Dec 06 09:56:49 compute-0 python3.9[220058]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:56:49 compute-0 sudo[220053]: pam_unix(sudo:session): session closed for user root
Dec 06 09:56:49 compute-0 ceph-mon[74327]: pgmap v459: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 767 B/s wr, 3 op/s
Dec 06 09:56:49 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:56:49 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4730002f50 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:56:49 compute-0 sudo[220232]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-reythkqrdzqafnrzozfynvhjwayrmaon ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765015009.4021032-3827-120876137293232/AnsiballZ_stat.py'
Dec 06 09:56:49 compute-0 sudo[220232]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:56:49 compute-0 python3.9[220234]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:56:49 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:56:49 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4724002c90 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:56:49 compute-0 sudo[220232]: pam_unix(sudo:session): session closed for user root
Dec 06 09:56:50 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 09:56:50 compute-0 sudo[220310]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kvmhiimeuklooefpiwkeevrwxxdgaamx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765015009.4021032-3827-120876137293232/AnsiballZ_file.py'
Dec 06 09:56:50 compute-0 sudo[220310]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:56:50 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v460: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 B/s wr, 0 op/s
Dec 06 09:56:50 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:56:50 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 09:56:50 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:56:50.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 09:56:50 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:56:50 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:56:50 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:56:50.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:56:50 compute-0 python3.9[220312]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:56:50 compute-0 sudo[220310]: pam_unix(sudo:session): session closed for user root
Dec 06 09:56:50 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:56:50 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f472c0030a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:56:50 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:09:56:50] "GET /metrics HTTP/1.1" 200 48267 "" "Prometheus/2.51.0"
Dec 06 09:56:50 compute-0 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:09:56:50] "GET /metrics HTTP/1.1" 200 48267 "" "Prometheus/2.51.0"
Dec 06 09:56:50 compute-0 sudo[220474]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kkcokuyipwsiwupuugmldjouawlhywdm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765015010.642439-3863-187281182801856/AnsiballZ_stat.py'
Dec 06 09:56:50 compute-0 sudo[220474]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:56:51 compute-0 podman[220436]: 2025-12-06 09:56:51.028302077 +0000 UTC m=+0.092422685 container health_status ed08b04f63d3695f0aa2f04fcc706897af4aa24f286fa3cd10a4323ee2c868ab (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=ovn_controller, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Dec 06 09:56:51 compute-0 python3.9[220483]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:56:51 compute-0 sudo[220474]: pam_unix(sudo:session): session closed for user root
Dec 06 09:56:51 compute-0 ceph-mon[74327]: pgmap v460: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 B/s wr, 0 op/s
Dec 06 09:56:51 compute-0 sudo[220567]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-icakyefuxxyovzyojrlynpilbtbathtq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765015010.642439-3863-187281182801856/AnsiballZ_file.py'
Dec 06 09:56:51 compute-0 sudo[220567]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:56:51 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:56:51 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4748002910 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:56:51 compute-0 python3.9[220569]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:56:51 compute-0 sudo[220567]: pam_unix(sudo:session): session closed for user root
Dec 06 09:56:51 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:56:51 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4748002910 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:56:52 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v461: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 B/s wr, 0 op/s
Dec 06 09:56:52 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:56:52 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 09:56:52 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:56:52.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 09:56:52 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:56:52 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:56:52 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:56:52.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:56:52 compute-0 sudo[220669]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 09:56:52 compute-0 sudo[220669]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:56:52 compute-0 sudo[220669]: pam_unix(sudo:session): session closed for user root
Dec 06 09:56:52 compute-0 sudo[220718]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Dec 06 09:56:52 compute-0 sudo[220718]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:56:52 compute-0 sudo[220769]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wtooubxrgjereuylhmnxvltuiqayoxro ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765015011.925411-3899-281101420578131/AnsiballZ_stat.py'
Dec 06 09:56:52 compute-0 sudo[220769]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:56:52 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:56:52 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f47240035b0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:56:52 compute-0 python3.9[220771]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:56:52 compute-0 sudo[220769]: pam_unix(sudo:session): session closed for user root
Dec 06 09:56:52 compute-0 sudo[220718]: pam_unix(sudo:session): session closed for user root
Dec 06 09:56:52 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 06 09:56:52 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 09:56:53 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 06 09:56:53 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 09:56:53 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 06 09:56:53 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:56:53 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Dec 06 09:56:53 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:56:53 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 06 09:56:53 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 09:56:53 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 06 09:56:53 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 09:56:53 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 06 09:56:53 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 09:56:53 compute-0 sudo[220930]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ahcwohgqevvikuxyfalwnwrlcznojgsu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765015011.925411-3899-281101420578131/AnsiballZ_copy.py'
Dec 06 09:56:53 compute-0 sudo[220930]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:56:53 compute-0 sudo[220922]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 09:56:53 compute-0 sudo[220922]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:56:53 compute-0 sudo[220922]: pam_unix(sudo:session): session closed for user root
Dec 06 09:56:53 compute-0 sudo[220953]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 5ecd3f74-dade-5fc4-92ce-8950ae424258 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Dec 06 09:56:53 compute-0 sudo[220953]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:56:53 compute-0 python3.9[220947]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765015011.925411-3899-281101420578131/.source.nft follow=False _original_basename=ruleset.j2 checksum=ac3ce8ce2d33fa5fe0a79b0c811c97734ce43fa5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:56:53 compute-0 sudo[220930]: pam_unix(sudo:session): session closed for user root
Dec 06 09:56:53 compute-0 ceph-mon[74327]: pgmap v461: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 B/s wr, 0 op/s
Dec 06 09:56:53 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 09:56:53 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 09:56:53 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:56:53 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:56:53 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 09:56:53 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 09:56:53 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 09:56:53 compute-0 podman[221067]: 2025-12-06 09:56:53.574816344 +0000 UTC m=+0.044864362 container create 1238ee6a0bb29ff84f171b2df0df21ddb0f98dd311fa2c8cf7645059211771e6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigorous_bartik, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 09:56:53 compute-0 systemd[1]: Started libpod-conmon-1238ee6a0bb29ff84f171b2df0df21ddb0f98dd311fa2c8cf7645059211771e6.scope.
Dec 06 09:56:53 compute-0 podman[221067]: 2025-12-06 09:56:53.555108472 +0000 UTC m=+0.025156510 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 06 09:56:53 compute-0 systemd[1]: Started libcrun container.
Dec 06 09:56:53 compute-0 podman[221067]: 2025-12-06 09:56:53.672177041 +0000 UTC m=+0.142225089 container init 1238ee6a0bb29ff84f171b2df0df21ddb0f98dd311fa2c8cf7645059211771e6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigorous_bartik, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Dec 06 09:56:53 compute-0 podman[221067]: 2025-12-06 09:56:53.685215974 +0000 UTC m=+0.155263982 container start 1238ee6a0bb29ff84f171b2df0df21ddb0f98dd311fa2c8cf7645059211771e6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigorous_bartik, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Dec 06 09:56:53 compute-0 podman[221067]: 2025-12-06 09:56:53.688136512 +0000 UTC m=+0.158184530 container attach 1238ee6a0bb29ff84f171b2df0df21ddb0f98dd311fa2c8cf7645059211771e6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigorous_bartik, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True)
Dec 06 09:56:53 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:56:53 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f472c0030a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:56:53 compute-0 vigorous_bartik[221126]: 167 167
Dec 06 09:56:53 compute-0 systemd[1]: libpod-1238ee6a0bb29ff84f171b2df0df21ddb0f98dd311fa2c8cf7645059211771e6.scope: Deactivated successfully.
Dec 06 09:56:53 compute-0 podman[221067]: 2025-12-06 09:56:53.693732723 +0000 UTC m=+0.163780761 container died 1238ee6a0bb29ff84f171b2df0df21ddb0f98dd311fa2c8cf7645059211771e6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigorous_bartik, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec 06 09:56:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-ec0d31c56aa7793ceda9f02cdcc236267312e0a81eb8fae695a4f5d400360a5e-merged.mount: Deactivated successfully.
Dec 06 09:56:53 compute-0 podman[221067]: 2025-12-06 09:56:53.739431106 +0000 UTC m=+0.209479124 container remove 1238ee6a0bb29ff84f171b2df0df21ddb0f98dd311fa2c8cf7645059211771e6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigorous_bartik, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1)
Dec 06 09:56:53 compute-0 systemd[1]: libpod-conmon-1238ee6a0bb29ff84f171b2df0df21ddb0f98dd311fa2c8cf7645059211771e6.scope: Deactivated successfully.
Dec 06 09:56:53 compute-0 sudo[221204]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ddhonujvywdcxfzambgamhggzsvpcocr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765015013.5200367-3944-258388971344987/AnsiballZ_file.py'
Dec 06 09:56:53 compute-0 sudo[221204]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:56:53 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:56:53 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4748002910 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:56:53 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 06 09:56:53 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 09:56:53 compute-0 podman[221212]: 2025-12-06 09:56:53.924967693 +0000 UTC m=+0.060268457 container create a4258d3429dc782406b99a85100047a4b9bad36a77f89f2ff88aaef3bfd909e8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_khayyam, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 09:56:53 compute-0 systemd[1]: Started libpod-conmon-a4258d3429dc782406b99a85100047a4b9bad36a77f89f2ff88aaef3bfd909e8.scope.
Dec 06 09:56:53 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 09:56:53 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 09:56:53 compute-0 podman[221212]: 2025-12-06 09:56:53.896201517 +0000 UTC m=+0.031502371 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 06 09:56:53 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 09:56:53 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 09:56:53 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 09:56:53 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 09:56:54 compute-0 systemd[1]: Started libcrun container.
Dec 06 09:56:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9197d0ade69b700cd333670e982e55db8544d24bc0507ff1954b7c0ebe15d7b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 09:56:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9197d0ade69b700cd333670e982e55db8544d24bc0507ff1954b7c0ebe15d7b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 09:56:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9197d0ade69b700cd333670e982e55db8544d24bc0507ff1954b7c0ebe15d7b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 09:56:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9197d0ade69b700cd333670e982e55db8544d24bc0507ff1954b7c0ebe15d7b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 09:56:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9197d0ade69b700cd333670e982e55db8544d24bc0507ff1954b7c0ebe15d7b/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 06 09:56:54 compute-0 podman[221212]: 2025-12-06 09:56:54.022148245 +0000 UTC m=+0.157449059 container init a4258d3429dc782406b99a85100047a4b9bad36a77f89f2ff88aaef3bfd909e8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_khayyam, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Dec 06 09:56:54 compute-0 python3.9[221206]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:56:54 compute-0 podman[221212]: 2025-12-06 09:56:54.030436479 +0000 UTC m=+0.165737243 container start a4258d3429dc782406b99a85100047a4b9bad36a77f89f2ff88aaef3bfd909e8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_khayyam, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0)
Dec 06 09:56:54 compute-0 podman[221212]: 2025-12-06 09:56:54.034875088 +0000 UTC m=+0.170175852 container attach a4258d3429dc782406b99a85100047a4b9bad36a77f89f2ff88aaef3bfd909e8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_khayyam, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Dec 06 09:56:54 compute-0 sudo[221204]: pam_unix(sudo:session): session closed for user root
Dec 06 09:56:54 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v462: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Dec 06 09:56:54 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:56:54 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:56:54 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:56:54.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:56:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:56:54.227 162267 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 09:56:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:56:54.229 162267 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 09:56:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:56:54.229 162267 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 09:56:54 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:56:54 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 09:56:54 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:56:54.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 09:56:54 compute-0 interesting_khayyam[221228]: --> passed data devices: 0 physical, 1 LVM
Dec 06 09:56:54 compute-0 interesting_khayyam[221228]: --> All data devices are unavailable
Dec 06 09:56:54 compute-0 systemd[1]: libpod-a4258d3429dc782406b99a85100047a4b9bad36a77f89f2ff88aaef3bfd909e8.scope: Deactivated successfully.
Dec 06 09:56:54 compute-0 podman[221212]: 2025-12-06 09:56:54.399838487 +0000 UTC m=+0.535139251 container died a4258d3429dc782406b99a85100047a4b9bad36a77f89f2ff88aaef3bfd909e8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_khayyam, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 09:56:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-c9197d0ade69b700cd333670e982e55db8544d24bc0507ff1954b7c0ebe15d7b-merged.mount: Deactivated successfully.
Dec 06 09:56:54 compute-0 podman[221212]: 2025-12-06 09:56:54.448368087 +0000 UTC m=+0.583668841 container remove a4258d3429dc782406b99a85100047a4b9bad36a77f89f2ff88aaef3bfd909e8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_khayyam, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Dec 06 09:56:54 compute-0 systemd[1]: libpod-conmon-a4258d3429dc782406b99a85100047a4b9bad36a77f89f2ff88aaef3bfd909e8.scope: Deactivated successfully.
Dec 06 09:56:54 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 09:56:54 compute-0 sudo[220953]: pam_unix(sudo:session): session closed for user root
Dec 06 09:56:54 compute-0 sudo[221402]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tcjhdosepnptrcuxytocrmhccpcpbmkz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765015014.2387857-3968-7745508200201/AnsiballZ_command.py'
Dec 06 09:56:54 compute-0 sudo[221402]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:56:54 compute-0 sudo[221403]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 09:56:54 compute-0 sudo[221403]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:56:54 compute-0 sudo[221403]: pam_unix(sudo:session): session closed for user root
Dec 06 09:56:54 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:56:54 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4748002910 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:56:54 compute-0 sudo[221430]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 5ecd3f74-dade-5fc4-92ce-8950ae424258 -- lvm list --format json
Dec 06 09:56:54 compute-0 sudo[221430]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:56:54 compute-0 python3.9[221410]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 09:56:54 compute-0 sudo[221402]: pam_unix(sudo:session): session closed for user root
Dec 06 09:56:55 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 09:56:55 compute-0 podman[221576]: 2025-12-06 09:56:55.071722827 +0000 UTC m=+0.054462830 container create 62f8063e0b7f94a41f95efe3b2d273748d64711c135e1f7c18204f7b6ccade68 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_rubin, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 09:56:55 compute-0 systemd[1]: Started libpod-conmon-62f8063e0b7f94a41f95efe3b2d273748d64711c135e1f7c18204f7b6ccade68.scope.
Dec 06 09:56:55 compute-0 podman[221576]: 2025-12-06 09:56:55.044232336 +0000 UTC m=+0.026972439 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 06 09:56:55 compute-0 systemd[1]: Started libcrun container.
Dec 06 09:56:55 compute-0 podman[221576]: 2025-12-06 09:56:55.167040619 +0000 UTC m=+0.149780642 container init 62f8063e0b7f94a41f95efe3b2d273748d64711c135e1f7c18204f7b6ccade68 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_rubin, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Dec 06 09:56:55 compute-0 podman[221576]: 2025-12-06 09:56:55.174910662 +0000 UTC m=+0.157650665 container start 62f8063e0b7f94a41f95efe3b2d273748d64711c135e1f7c18204f7b6ccade68 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_rubin, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 09:56:55 compute-0 podman[221576]: 2025-12-06 09:56:55.178229121 +0000 UTC m=+0.160969144 container attach 62f8063e0b7f94a41f95efe3b2d273748d64711c135e1f7c18204f7b6ccade68 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_rubin, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec 06 09:56:55 compute-0 jovial_rubin[221593]: 167 167
Dec 06 09:56:55 compute-0 systemd[1]: libpod-62f8063e0b7f94a41f95efe3b2d273748d64711c135e1f7c18204f7b6ccade68.scope: Deactivated successfully.
Dec 06 09:56:55 compute-0 conmon[221593]: conmon 62f8063e0b7f94a41f95 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-62f8063e0b7f94a41f95efe3b2d273748d64711c135e1f7c18204f7b6ccade68.scope/container/memory.events
Dec 06 09:56:55 compute-0 podman[221576]: 2025-12-06 09:56:55.182069955 +0000 UTC m=+0.164809988 container died 62f8063e0b7f94a41f95efe3b2d273748d64711c135e1f7c18204f7b6ccade68 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_rubin, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 09:56:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-3abcd1be01ed1c672d71d63466d8b9f16afe5f879308bb32461651d27a51bd96-merged.mount: Deactivated successfully.
Dec 06 09:56:55 compute-0 podman[221576]: 2025-12-06 09:56:55.221011816 +0000 UTC m=+0.203751819 container remove 62f8063e0b7f94a41f95efe3b2d273748d64711c135e1f7c18204f7b6ccade68 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_rubin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 09:56:55 compute-0 systemd[1]: libpod-conmon-62f8063e0b7f94a41f95efe3b2d273748d64711c135e1f7c18204f7b6ccade68.scope: Deactivated successfully.
Dec 06 09:56:55 compute-0 sudo[221686]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oqgovdkgxyjswoaciretcucpbpihbqcg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765015014.93047-3992-188992066210276/AnsiballZ_blockinfile.py'
Dec 06 09:56:55 compute-0 sudo[221686]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:56:55 compute-0 podman[221694]: 2025-12-06 09:56:55.441540537 +0000 UTC m=+0.055233821 container create 32c32ca00e54fa828200ef51181a1d3cec823fa811651bb30ed54a91a0f495f8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_borg, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, ceph=True)
Dec 06 09:56:55 compute-0 systemd[1]: Started libpod-conmon-32c32ca00e54fa828200ef51181a1d3cec823fa811651bb30ed54a91a0f495f8.scope.
Dec 06 09:56:55 compute-0 ceph-mon[74327]: pgmap v462: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Dec 06 09:56:55 compute-0 podman[221694]: 2025-12-06 09:56:55.414886168 +0000 UTC m=+0.028579432 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 06 09:56:55 compute-0 systemd[1]: Started libcrun container.
Dec 06 09:56:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/76ede41ad918be1b1816f818664d0959174b51785fa887f6950e83dc82fad925/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 09:56:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/76ede41ad918be1b1816f818664d0959174b51785fa887f6950e83dc82fad925/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 09:56:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/76ede41ad918be1b1816f818664d0959174b51785fa887f6950e83dc82fad925/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 09:56:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/76ede41ad918be1b1816f818664d0959174b51785fa887f6950e83dc82fad925/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 09:56:55 compute-0 python3.9[221691]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                             include "/etc/nftables/edpm-chains.nft"
                                             include "/etc/nftables/edpm-rules.nft"
                                             include "/etc/nftables/edpm-jumps.nft"
                                              path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:56:55 compute-0 podman[221694]: 2025-12-06 09:56:55.541145745 +0000 UTC m=+0.154839069 container init 32c32ca00e54fa828200ef51181a1d3cec823fa811651bb30ed54a91a0f495f8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_borg, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 09:56:55 compute-0 podman[221694]: 2025-12-06 09:56:55.551240748 +0000 UTC m=+0.164934032 container start 32c32ca00e54fa828200ef51181a1d3cec823fa811651bb30ed54a91a0f495f8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_borg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 09:56:55 compute-0 podman[221694]: 2025-12-06 09:56:55.555857222 +0000 UTC m=+0.169550566 container attach 32c32ca00e54fa828200ef51181a1d3cec823fa811651bb30ed54a91a0f495f8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_borg, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Dec 06 09:56:55 compute-0 sudo[221686]: pam_unix(sudo:session): session closed for user root
Dec 06 09:56:55 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:56:55 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f47240035b0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:56:55 compute-0 clever_borg[221710]: {
Dec 06 09:56:55 compute-0 clever_borg[221710]:     "1": [
Dec 06 09:56:55 compute-0 clever_borg[221710]:         {
Dec 06 09:56:55 compute-0 clever_borg[221710]:             "devices": [
Dec 06 09:56:55 compute-0 clever_borg[221710]:                 "/dev/loop3"
Dec 06 09:56:55 compute-0 clever_borg[221710]:             ],
Dec 06 09:56:55 compute-0 clever_borg[221710]:             "lv_name": "ceph_lv0",
Dec 06 09:56:55 compute-0 clever_borg[221710]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 09:56:55 compute-0 clever_borg[221710]:             "lv_size": "21470642176",
Dec 06 09:56:55 compute-0 clever_borg[221710]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=a55pcS-v3Vt-CJ4W-fqCV-Baze-9VlY-jG9prS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=5ecd3f74-dade-5fc4-92ce-8950ae424258,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=7899c4d8-edb4-4836-b838-c4aa702ad7af,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 06 09:56:55 compute-0 clever_borg[221710]:             "lv_uuid": "a55pcS-v3Vt-CJ4W-fqCV-Baze-9VlY-jG9prS",
Dec 06 09:56:55 compute-0 clever_borg[221710]:             "name": "ceph_lv0",
Dec 06 09:56:55 compute-0 clever_borg[221710]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 09:56:55 compute-0 clever_borg[221710]:             "tags": {
Dec 06 09:56:55 compute-0 clever_borg[221710]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 06 09:56:55 compute-0 clever_borg[221710]:                 "ceph.block_uuid": "a55pcS-v3Vt-CJ4W-fqCV-Baze-9VlY-jG9prS",
Dec 06 09:56:55 compute-0 clever_borg[221710]:                 "ceph.cephx_lockbox_secret": "",
Dec 06 09:56:55 compute-0 clever_borg[221710]:                 "ceph.cluster_fsid": "5ecd3f74-dade-5fc4-92ce-8950ae424258",
Dec 06 09:56:55 compute-0 clever_borg[221710]:                 "ceph.cluster_name": "ceph",
Dec 06 09:56:55 compute-0 clever_borg[221710]:                 "ceph.crush_device_class": "",
Dec 06 09:56:55 compute-0 clever_borg[221710]:                 "ceph.encrypted": "0",
Dec 06 09:56:55 compute-0 clever_borg[221710]:                 "ceph.osd_fsid": "7899c4d8-edb4-4836-b838-c4aa702ad7af",
Dec 06 09:56:55 compute-0 clever_borg[221710]:                 "ceph.osd_id": "1",
Dec 06 09:56:55 compute-0 clever_borg[221710]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 06 09:56:55 compute-0 clever_borg[221710]:                 "ceph.type": "block",
Dec 06 09:56:55 compute-0 clever_borg[221710]:                 "ceph.vdo": "0",
Dec 06 09:56:55 compute-0 clever_borg[221710]:                 "ceph.with_tpm": "0"
Dec 06 09:56:55 compute-0 clever_borg[221710]:             },
Dec 06 09:56:55 compute-0 clever_borg[221710]:             "type": "block",
Dec 06 09:56:55 compute-0 clever_borg[221710]:             "vg_name": "ceph_vg0"
Dec 06 09:56:55 compute-0 clever_borg[221710]:         }
Dec 06 09:56:55 compute-0 clever_borg[221710]:     ]
Dec 06 09:56:55 compute-0 clever_borg[221710]: }
Dec 06 09:56:55 compute-0 systemd[1]: libpod-32c32ca00e54fa828200ef51181a1d3cec823fa811651bb30ed54a91a0f495f8.scope: Deactivated successfully.
Dec 06 09:56:55 compute-0 podman[221694]: 2025-12-06 09:56:55.889732891 +0000 UTC m=+0.503426145 container died 32c32ca00e54fa828200ef51181a1d3cec823fa811651bb30ed54a91a0f495f8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_borg, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 09:56:55 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:56:55 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f472c0030a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:56:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-76ede41ad918be1b1816f818664d0959174b51785fa887f6950e83dc82fad925-merged.mount: Deactivated successfully.
Dec 06 09:56:55 compute-0 podman[221694]: 2025-12-06 09:56:55.938944009 +0000 UTC m=+0.552637273 container remove 32c32ca00e54fa828200ef51181a1d3cec823fa811651bb30ed54a91a0f495f8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_borg, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 09:56:55 compute-0 systemd[1]: libpod-conmon-32c32ca00e54fa828200ef51181a1d3cec823fa811651bb30ed54a91a0f495f8.scope: Deactivated successfully.
Dec 06 09:56:55 compute-0 sudo[221430]: pam_unix(sudo:session): session closed for user root
Dec 06 09:56:56 compute-0 sudo[221808]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 09:56:56 compute-0 sudo[221808]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:56:56 compute-0 sudo[221808]: pam_unix(sudo:session): session closed for user root
Dec 06 09:56:56 compute-0 sudo[221856]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 5ecd3f74-dade-5fc4-92ce-8950ae424258 -- raw list --format json
Dec 06 09:56:56 compute-0 sudo[221856]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:56:56 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v463: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:56:56 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:56:56 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:56:56 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:56:56.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:56:56 compute-0 sudo[221931]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pqskeortuoesytognqempnuqyupgmqvu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765015015.9002757-4019-81358742933157/AnsiballZ_command.py'
Dec 06 09:56:56 compute-0 sudo[221931]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:56:56 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:56:56 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:56:56 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:56:56.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:56:56 compute-0 python3.9[221933]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 09:56:56 compute-0 sudo[221931]: pam_unix(sudo:session): session closed for user root
Dec 06 09:56:56 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:56:56 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4748002910 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:56:56 compute-0 podman[222001]: 2025-12-06 09:56:56.633904643 +0000 UTC m=+0.053339661 container create 01845209bf092faadda937c559d0c20d5860fbc66ef0f3780bf2cb1262f3961e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_snyder, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec 06 09:56:56 compute-0 systemd[1]: Started libpod-conmon-01845209bf092faadda937c559d0c20d5860fbc66ef0f3780bf2cb1262f3961e.scope.
Dec 06 09:56:56 compute-0 podman[222001]: 2025-12-06 09:56:56.612459734 +0000 UTC m=+0.031894792 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 06 09:56:56 compute-0 systemd[1]: Started libcrun container.
Dec 06 09:56:56 compute-0 podman[222001]: 2025-12-06 09:56:56.728892506 +0000 UTC m=+0.148327534 container init 01845209bf092faadda937c559d0c20d5860fbc66ef0f3780bf2cb1262f3961e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_snyder, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 09:56:56 compute-0 podman[222001]: 2025-12-06 09:56:56.740602182 +0000 UTC m=+0.160037190 container start 01845209bf092faadda937c559d0c20d5860fbc66ef0f3780bf2cb1262f3961e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_snyder, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 09:56:56 compute-0 podman[222001]: 2025-12-06 09:56:56.744721623 +0000 UTC m=+0.164156661 container attach 01845209bf092faadda937c559d0c20d5860fbc66ef0f3780bf2cb1262f3961e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_snyder, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 09:56:56 compute-0 awesome_snyder[222022]: 167 167
Dec 06 09:56:56 compute-0 systemd[1]: libpod-01845209bf092faadda937c559d0c20d5860fbc66ef0f3780bf2cb1262f3961e.scope: Deactivated successfully.
Dec 06 09:56:56 compute-0 conmon[222022]: conmon 01845209bf092faadda9 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-01845209bf092faadda937c559d0c20d5860fbc66ef0f3780bf2cb1262f3961e.scope/container/memory.events
Dec 06 09:56:56 compute-0 podman[222001]: 2025-12-06 09:56:56.748210437 +0000 UTC m=+0.167645445 container died 01845209bf092faadda937c559d0c20d5860fbc66ef0f3780bf2cb1262f3961e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_snyder, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325)
Dec 06 09:56:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-5b8cba4244b40e648a46ba1124b3317889e24b213f87a33dd6e826666e582140-merged.mount: Deactivated successfully.
Dec 06 09:56:56 compute-0 podman[222001]: 2025-12-06 09:56:56.793013216 +0000 UTC m=+0.212448244 container remove 01845209bf092faadda937c559d0c20d5860fbc66ef0f3780bf2cb1262f3961e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_snyder, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 06 09:56:56 compute-0 systemd[1]: libpod-conmon-01845209bf092faadda937c559d0c20d5860fbc66ef0f3780bf2cb1262f3961e.scope: Deactivated successfully.
Dec 06 09:56:57 compute-0 podman[222123]: 2025-12-06 09:56:57.025565202 +0000 UTC m=+0.074675627 container create da7c415bacdf8bf4d578cc3f6cfa645370e985c838bab8fdc32ed0c9e24a1c52 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_noether, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1)
Dec 06 09:56:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:56:57.071Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 06 09:56:57 compute-0 systemd[1]: Started libpod-conmon-da7c415bacdf8bf4d578cc3f6cfa645370e985c838bab8fdc32ed0c9e24a1c52.scope.
Dec 06 09:56:57 compute-0 podman[222123]: 2025-12-06 09:56:56.99250865 +0000 UTC m=+0.041619165 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 06 09:56:57 compute-0 systemd[1]: Started libcrun container.
Dec 06 09:56:57 compute-0 sudo[222186]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uaqjitdbfihyilxrqvsbhymzsinmpklz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765015016.7265618-4043-227973261231123/AnsiballZ_stat.py'
Dec 06 09:56:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c0ddfbcd1a51cd96c5ae5583fe322e9e916b5737437761038b7d14753c27cbab/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 09:56:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c0ddfbcd1a51cd96c5ae5583fe322e9e916b5737437761038b7d14753c27cbab/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 09:56:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c0ddfbcd1a51cd96c5ae5583fe322e9e916b5737437761038b7d14753c27cbab/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 09:56:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c0ddfbcd1a51cd96c5ae5583fe322e9e916b5737437761038b7d14753c27cbab/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 09:56:57 compute-0 sudo[222186]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:56:57 compute-0 podman[222123]: 2025-12-06 09:56:57.139534938 +0000 UTC m=+0.188645403 container init da7c415bacdf8bf4d578cc3f6cfa645370e985c838bab8fdc32ed0c9e24a1c52 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_noether, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=squid)
Dec 06 09:56:57 compute-0 podman[222123]: 2025-12-06 09:56:57.15630664 +0000 UTC m=+0.205417045 container start da7c415bacdf8bf4d578cc3f6cfa645370e985c838bab8fdc32ed0c9e24a1c52 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_noether, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 09:56:57 compute-0 podman[222123]: 2025-12-06 09:56:57.160328808 +0000 UTC m=+0.209439233 container attach da7c415bacdf8bf4d578cc3f6cfa645370e985c838bab8fdc32ed0c9e24a1c52 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_noether, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Dec 06 09:56:57 compute-0 python3.9[222189]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 06 09:56:57 compute-0 sudo[222186]: pam_unix(sudo:session): session closed for user root
Dec 06 09:56:57 compute-0 ceph-mon[74327]: pgmap v463: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:56:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:56:57 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4730003c60 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:56:57 compute-0 sudo[222423]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ihdihzhamrzvscfodrojcecopmktwclk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765015017.5429132-4067-12983432288067/AnsiballZ_command.py'
Dec 06 09:56:57 compute-0 sudo[222423]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:56:57 compute-0 podman[222378]: 2025-12-06 09:56:57.830055091 +0000 UTC m=+0.059544398 container health_status ec1e66d2175087ad7a28a9a6b5a784e30128df3f5e268959d009e364cd5120f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, managed_by=edpm_ansible)
Dec 06 09:56:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:56:57 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f47240035b0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:56:57 compute-0 lvm[222439]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 06 09:56:57 compute-0 lvm[222439]: VG ceph_vg0 finished
Dec 06 09:56:57 compute-0 upbeat_noether[222184]: {}
Dec 06 09:56:58 compute-0 systemd[1]: libpod-da7c415bacdf8bf4d578cc3f6cfa645370e985c838bab8fdc32ed0c9e24a1c52.scope: Deactivated successfully.
Dec 06 09:56:58 compute-0 systemd[1]: libpod-da7c415bacdf8bf4d578cc3f6cfa645370e985c838bab8fdc32ed0c9e24a1c52.scope: Consumed 1.337s CPU time.
Dec 06 09:56:58 compute-0 podman[222123]: 2025-12-06 09:56:58.014377315 +0000 UTC m=+1.063487720 container died da7c415bacdf8bf4d578cc3f6cfa645370e985c838bab8fdc32ed0c9e24a1c52 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_noether, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 06 09:56:58 compute-0 python3.9[222432]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 09:56:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-c0ddfbcd1a51cd96c5ae5583fe322e9e916b5737437761038b7d14753c27cbab-merged.mount: Deactivated successfully.
Dec 06 09:56:58 compute-0 sudo[222423]: pam_unix(sudo:session): session closed for user root
Dec 06 09:56:58 compute-0 podman[222123]: 2025-12-06 09:56:58.097518199 +0000 UTC m=+1.146628614 container remove da7c415bacdf8bf4d578cc3f6cfa645370e985c838bab8fdc32ed0c9e24a1c52 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_noether, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0)
Dec 06 09:56:58 compute-0 systemd[1]: libpod-conmon-da7c415bacdf8bf4d578cc3f6cfa645370e985c838bab8fdc32ed0c9e24a1c52.scope: Deactivated successfully.
Dec 06 09:56:58 compute-0 sudo[221856]: pam_unix(sudo:session): session closed for user root
Dec 06 09:56:58 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 06 09:56:58 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:56:58 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 06 09:56:58 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v464: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:56:58 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:56:58 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:56:58 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:56:58 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:56:58.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:56:58 compute-0 sudo[222480]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 09:56:58 compute-0 sudo[222480]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:56:58 compute-0 sudo[222480]: pam_unix(sudo:session): session closed for user root
Dec 06 09:56:58 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:56:58 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:56:58 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:56:58.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:56:58 compute-0 sudo[222630]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lhfdhlactukovkruwnrvihtqhekhhevu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765015018.2845044-4091-225698412267031/AnsiballZ_file.py'
Dec 06 09:56:58 compute-0 sudo[222630]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:56:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:56:58 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f47240035b0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:56:58 compute-0 python3.9[222632]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:56:58 compute-0 sudo[222630]: pam_unix(sudo:session): session closed for user root
Dec 06 09:56:59 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:56:59 compute-0 ceph-mon[74327]: pgmap v464: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:56:59 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:56:59 compute-0 sudo[222783]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yzigeqhyrihwdkblqmfniuxdgyaurvqz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765015019.05327-4115-272003002836899/AnsiballZ_stat.py'
Dec 06 09:56:59 compute-0 sudo[222783]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:56:59 compute-0 python3.9[222785]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:56:59 compute-0 sudo[222783]: pam_unix(sudo:session): session closed for user root
Dec 06 09:56:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:56:59 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4748002910 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:56:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:56:59 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4730003c60 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:56:59 compute-0 sudo[222907]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-slfslnlsbghjgexvtfkaorvradnhbkit ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765015019.05327-4115-272003002836899/AnsiballZ_copy.py'
Dec 06 09:56:59 compute-0 sudo[222907]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:57:00 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 09:57:00 compute-0 python3.9[222909]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765015019.05327-4115-272003002836899/.source.target follow=False _original_basename=edpm_libvirt.target checksum=13035a1aa0f414c677b14be9a5a363b6623d393c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:57:00 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v465: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:57:00 compute-0 sudo[222907]: pam_unix(sudo:session): session closed for user root
Dec 06 09:57:00 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:57:00 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:57:00 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:57:00.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:57:00 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:57:00 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 09:57:00 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:57:00.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 09:57:00 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:57:00 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f472c003db0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:57:00 compute-0 sudo[223059]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-urljstszmrkenijszyvfnsjzfhaxevzd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765015020.3820698-4160-167767220194968/AnsiballZ_stat.py'
Dec 06 09:57:00 compute-0 sudo[223059]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:57:00 compute-0 python3.9[223061]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt_guests.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:57:00 compute-0 sudo[223059]: pam_unix(sudo:session): session closed for user root
Dec 06 09:57:00 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:09:57:00] "GET /metrics HTTP/1.1" 200 48268 "" "Prometheus/2.51.0"
Dec 06 09:57:00 compute-0 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:09:57:00] "GET /metrics HTTP/1.1" 200 48268 "" "Prometheus/2.51.0"
Dec 06 09:57:01 compute-0 ceph-mon[74327]: pgmap v465: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:57:01 compute-0 sudo[223183]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-plyxhionanrnliamniqdnkbfjrryxodp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765015020.3820698-4160-167767220194968/AnsiballZ_copy.py'
Dec 06 09:57:01 compute-0 sudo[223183]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:57:01 compute-0 python3.9[223185]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt_guests.service mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765015020.3820698-4160-167767220194968/.source.service follow=False _original_basename=edpm_libvirt_guests.service checksum=db83430a42fc2ccfd6ed8b56ebf04f3dff9cd0cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:57:01 compute-0 sudo[223183]: pam_unix(sudo:session): session closed for user root
Dec 06 09:57:01 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:57:01 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f47240035b0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:57:01 compute-0 sudo[223336]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xldhuaypdtlkgeuhbzhvxvndaqmxgnve ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765015021.6593738-4205-222152015128410/AnsiballZ_stat.py'
Dec 06 09:57:01 compute-0 sudo[223336]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:57:01 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:57:01 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4748002910 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:57:02 compute-0 python3.9[223338]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virt-guest-shutdown.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:57:02 compute-0 sudo[223336]: pam_unix(sudo:session): session closed for user root
Dec 06 09:57:02 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v466: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:57:02 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:57:02 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:57:02 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:57:02.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:57:02 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:57:02 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:57:02 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:57:02.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:57:02 compute-0 sudo[223459]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xaqwfesuugshuywrkmvjrrrttzlqxigg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765015021.6593738-4205-222152015128410/AnsiballZ_copy.py'
Dec 06 09:57:02 compute-0 sudo[223459]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:57:02 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:57:02 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4730003c60 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:57:02 compute-0 python3.9[223461]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virt-guest-shutdown.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765015021.6593738-4205-222152015128410/.source.target follow=False _original_basename=virt-guest-shutdown.target checksum=49ca149619c596cbba877418629d2cf8f7b0f5cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:57:02 compute-0 sudo[223459]: pam_unix(sudo:session): session closed for user root
Dec 06 09:57:03 compute-0 ceph-mon[74327]: pgmap v466: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:57:03 compute-0 sudo[223612]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fzevkogveioqjridydjqfwrgwzkwpmow ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765015022.9775906-4250-140513773775343/AnsiballZ_systemd.py'
Dec 06 09:57:03 compute-0 sudo[223612]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:57:03 compute-0 python3.9[223614]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt.target state=restarted daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 06 09:57:03 compute-0 systemd[1]: Reloading.
Dec 06 09:57:03 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:57:03 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f472c003db0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:57:03 compute-0 systemd-rc-local-generator[223639]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 09:57:03 compute-0 systemd-sysv-generator[223644]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 09:57:03 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:57:03 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f47240035b0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:57:04 compute-0 systemd[1]: Reached target edpm_libvirt.target.
Dec 06 09:57:04 compute-0 sudo[223612]: pam_unix(sudo:session): session closed for user root
Dec 06 09:57:04 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v467: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 09:57:04 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:57:04 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:57:04 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:57:04.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:57:04 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:57:04 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 09:57:04 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:57:04.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 09:57:04 compute-0 sudo[223803]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ndeuemeauvfekmcmcgxnhicdvpouaqeo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765015024.256647-4274-21218022368026/AnsiballZ_systemd.py'
Dec 06 09:57:04 compute-0 sudo[223803]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:57:04 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:57:04 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4748002910 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:57:04 compute-0 python3.9[223805]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt_guests daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Dec 06 09:57:04 compute-0 systemd[1]: Reloading.
Dec 06 09:57:04 compute-0 systemd-sysv-generator[223836]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 09:57:04 compute-0 systemd-rc-local-generator[223833]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 09:57:05 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 09:57:05 compute-0 ceph-mon[74327]: pgmap v467: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 09:57:05 compute-0 systemd[1]: Reloading.
Dec 06 09:57:05 compute-0 systemd-sysv-generator[223873]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 09:57:05 compute-0 systemd-rc-local-generator[223870]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 09:57:05 compute-0 sudo[223803]: pam_unix(sudo:session): session closed for user root
Dec 06 09:57:05 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:57:05 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4730003c60 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:57:05 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:57:05 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4730003c60 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:57:06 compute-0 sshd-session[162397]: Connection closed by 192.168.122.30 port 36678
Dec 06 09:57:06 compute-0 sshd-session[162394]: pam_unix(sshd:session): session closed for user zuul
Dec 06 09:57:06 compute-0 systemd[1]: session-53.scope: Deactivated successfully.
Dec 06 09:57:06 compute-0 systemd[1]: session-53.scope: Consumed 3min 48.573s CPU time.
Dec 06 09:57:06 compute-0 systemd-logind[795]: Session 53 logged out. Waiting for processes to exit.
Dec 06 09:57:06 compute-0 systemd-logind[795]: Removed session 53.
Dec 06 09:57:06 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v468: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:57:06 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:57:06 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 09:57:06 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:57:06.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 09:57:06 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:57:06 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 09:57:06 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:57:06.260 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 09:57:06 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:57:06 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4754001d70 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:57:07 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:57:07.073Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 06 09:57:07 compute-0 ceph-mon[74327]: pgmap v468: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:57:07 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:57:07 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4748002910 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:57:07 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:57:07 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f47500013a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:57:08 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v469: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:57:08 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:57:08 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:57:08 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:57:08.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:57:08 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:57:08 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 09:57:08 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:57:08.263 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 09:57:08 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:57:08 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4730003c60 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:57:08 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 06 09:57:08 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 09:57:08 compute-0 sudo[223907]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 09:57:08 compute-0 sudo[223907]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:57:08 compute-0 sudo[223907]: pam_unix(sudo:session): session closed for user root
Dec 06 09:57:09 compute-0 ceph-mon[74327]: pgmap v469: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:57:09 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 09:57:09 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:57:09 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4754000df0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:57:09 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:57:09 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4748002910 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:57:10 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 09:57:10 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v470: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:57:10 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:57:10 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 09:57:10 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:57:10.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 09:57:10 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:57:10 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 09:57:10 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:57:10.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 09:57:10 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:57:10 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4750002090 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:57:10 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:09:57:10] "GET /metrics HTTP/1.1" 200 48267 "" "Prometheus/2.51.0"
Dec 06 09:57:10 compute-0 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:09:57:10] "GET /metrics HTTP/1.1" 200 48267 "" "Prometheus/2.51.0"
Dec 06 09:57:11 compute-0 ceph-mon[74327]: pgmap v470: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:57:11 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:57:11 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4730003c60 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:57:11 compute-0 sshd-session[223936]: Accepted publickey for zuul from 192.168.122.30 port 50514 ssh2: ECDSA SHA256:r1j7aLsKAM+XxDNbzEU5vWGpGNCOaIBwc7FZdATPttA
Dec 06 09:57:11 compute-0 systemd-logind[795]: New session 54 of user zuul.
Dec 06 09:57:11 compute-0 systemd[1]: Started Session 54 of User zuul.
Dec 06 09:57:11 compute-0 sshd-session[223936]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 06 09:57:11 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:57:11 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4754000df0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:57:12 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v471: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:57:12 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:57:12 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:57:12 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:57:12.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:57:12 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:57:12 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 09:57:12 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:57:12.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 09:57:12 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:57:12 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4748002910 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:57:12 compute-0 python3.9[224089]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 06 09:57:13 compute-0 ceph-mon[74327]: pgmap v471: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:57:13 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:57:13 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4750002090 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:57:13 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:57:13 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4730003c60 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:57:14 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v472: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 09:57:14 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:57:14 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 09:57:14 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:57:14.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 09:57:14 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:57:14 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:57:14 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:57:14.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:57:14 compute-0 python3.9[224245]: ansible-ansible.builtin.service_facts Invoked
Dec 06 09:57:14 compute-0 network[224262]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec 06 09:57:14 compute-0 network[224263]: 'network-scripts' will be removed from distribution in near future.
Dec 06 09:57:14 compute-0 network[224264]: It is advised to switch to 'NetworkManager' instead for network management.
Dec 06 09:57:14 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:57:14 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4754000df0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:57:15 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 09:57:15 compute-0 ceph-mon[74327]: pgmap v472: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 09:57:15 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:57:15 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4748002910 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:57:15 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:57:15 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4750002da0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:57:16 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v473: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:57:16 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:57:16 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:57:16 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:57:16.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:57:16 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:57:16 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 09:57:16 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:57:16.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 09:57:16 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:57:16 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4730003c60 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:57:16 compute-0 ceph-mon[74327]: pgmap v473: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:57:17 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:57:17.074Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 06 09:57:17 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:57:17 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4754000df0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:57:17 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:57:17 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4748002910 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:57:18 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v474: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:57:18 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:57:18 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:57:18 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:57:18.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:57:18 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:57:18 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 09:57:18 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:57:18.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 09:57:18 compute-0 sudo[224538]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ggwiflcjaoaywqvicbfnpakogmcvkglk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765015038.0599382-101-200302326420274/AnsiballZ_setup.py'
Dec 06 09:57:18 compute-0 sudo[224538]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:57:18 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:57:18 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4750002da0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:57:18 compute-0 python3.9[224540]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 06 09:57:18 compute-0 sudo[224538]: pam_unix(sudo:session): session closed for user root
Dec 06 09:57:19 compute-0 sudo[224623]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eejmadgbntfioktplgzgkotoaorqqwcf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765015038.0599382-101-200302326420274/AnsiballZ_dnf.py'
Dec 06 09:57:19 compute-0 sudo[224623]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:57:19 compute-0 ceph-mon[74327]: pgmap v474: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:57:19 compute-0 python3.9[224625]: ansible-ansible.legacy.dnf Invoked with name=['iscsi-initiator-utils'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 06 09:57:19 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:57:19 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4730003c60 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:57:19 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:57:19 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f47540095a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:57:20 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 09:57:20 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v475: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:57:20 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:57:20 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:57:20 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:57:20.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:57:20 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:57:20 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 09:57:20 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:57:20.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 09:57:20 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:57:20 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4748002910 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:57:20 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:09:57:20] "GET /metrics HTTP/1.1" 200 48267 "" "Prometheus/2.51.0"
Dec 06 09:57:20 compute-0 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:09:57:20] "GET /metrics HTTP/1.1" 200 48267 "" "Prometheus/2.51.0"
Dec 06 09:57:21 compute-0 podman[224629]: 2025-12-06 09:57:21.500431249 +0000 UTC m=+0.125731544 container health_status ed08b04f63d3695f0aa2f04fcc706897af4aa24f286fa3cd10a4323ee2c868ab (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Dec 06 09:57:21 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:57:21 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4750003ab0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:57:21 compute-0 ceph-mon[74327]: pgmap v475: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:57:21 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:57:21 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4730003c60 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:57:22 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v476: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:57:22 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:57:22 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:57:22 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:57:22.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:57:22 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:57:22 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:57:22 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:57:22.284 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:57:22 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:57:22 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f47540095a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:57:22 compute-0 ceph-mon[74327]: pgmap v476: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:57:23 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:57:23 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4748002910 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:57:23 compute-0 ceph-mgr[74618]: [balancer INFO root] Optimize plan auto_2025-12-06_09:57:23
Dec 06 09:57:23 compute-0 ceph-mgr[74618]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 06 09:57:23 compute-0 ceph-mgr[74618]: [balancer INFO root] do_upmap
Dec 06 09:57:23 compute-0 ceph-mgr[74618]: [balancer INFO root] pools ['cephfs.cephfs.data', 'backups', 'cephfs.cephfs.meta', '.mgr', 'volumes', 'default.rgw.control', 'default.rgw.log', 'images', 'vms', '.nfs', 'default.rgw.meta', '.rgw.root']
Dec 06 09:57:23 compute-0 ceph-mgr[74618]: [balancer INFO root] prepared 0/10 upmap changes
Dec 06 09:57:23 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 06 09:57:23 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 09:57:23 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:57:23 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4750003ab0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:57:23 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 09:57:23 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 09:57:23 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 09:57:23 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 09:57:23 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 09:57:23 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 09:57:23 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 09:57:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] _maybe_adjust
Dec 06 09:57:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 09:57:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 06 09:57:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 09:57:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 09:57:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 09:57:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 09:57:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 09:57:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 09:57:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 09:57:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 09:57:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 09:57:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Dec 06 09:57:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 09:57:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 09:57:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 09:57:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec 06 09:57:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 09:57:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 06 09:57:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 09:57:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec 06 09:57:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 09:57:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 09:57:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 09:57:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
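
The pg_autoscaler lines above are internally consistent arithmetic: each logged pg target equals the pool's usage ratio times its bias times 300, before quantization. The factor 300 plausibly comes from mon_target_pg_per_osd (default 100) times this cluster's 3 OSDs; that reading is an assumption, but the multiplication checks out for every pool with nonzero usage:

    # Check of the pg_autoscaler arithmetic logged above, using only
    # numbers taken from the log: pg target == usage_ratio * bias * 300.
    cases = [
        (".mgr",               7.185749983720779e-06,  1.0, 0.0021557249951162337),
        ("cephfs.cephfs.meta", 5.087256625643029e-07,  4.0, 0.0006104707950771635),
        (".nfs",               6.359070782053786e-08,  1.0, 1.907721234616136e-05),
        (".rgw.root",          3.8154424692322717e-07, 1.0, 0.00011446327407696816),
        ("default.rgw.log",    2.1620840658982875e-06, 1.0, 0.0006486252197694863),
        ("default.rgw.meta",   1.2718141564107572e-07, 4.0, 0.00015261769876929088),
    ]
    for pool, usage, bias, target in cases:
        assert abs(usage * bias * 300 - target) < 1e-12, pool
    print("all logged pg targets match usage * bias * 300")
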
Dec 06 09:57:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 06 09:57:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 09:57:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 09:57:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 09:57:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 09:57:24 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v477: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 09:57:24 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:57:24 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 09:57:24 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:57:24.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 09:57:24 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:57:24 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:57:24 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:57:24.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:57:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 06 09:57:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 09:57:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 09:57:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 09:57:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 09:57:24 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:57:24 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4730003c60 fd 42 proxy header rest len failed header rlen = % (will set dead)
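
The recurring svc_vc_recv EVENT on fd 42, once every second or two, tracks the cadence of the load-balancer health probes visible elsewhere in this log (ganesha runs behind the haproxy-nfs-cephfs service). One plausible reading: ntirpc expects a PROXY-protocol header on each new connection, and a bare Layer4 connect-and-close probe sends none, so header parsing fails and the transport is marked dead. A sketch of such a probe; host and port are hypothetical placeholders, not values from this log:

    import socket

    # Hedged sketch: a bare Layer4 "health check" against a ganesha
    # listener that expects the HAProxy PROXY protocol. Connecting and
    # closing without sending a PROXY header is the kind of probe that
    # plausibly produces the svc_vc_recv EVENT lines above.
    GANESHA_HOST, GANESHA_PORT = "192.168.122.100", 2049  # placeholders
    with socket.create_connection((GANESHA_HOST, GANESHA_PORT), timeout=2):
        pass  # connect, send nothing, close
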
Dec 06 09:57:25 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 09:57:25 compute-0 ceph-mon[74327]: pgmap v477: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 09:57:25 compute-0 sudo[224623]: pam_unix(sudo:session): session closed for user root
Dec 06 09:57:25 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:57:25 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f475400a2b0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:57:25 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:57:25 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f47480041f0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:57:26 compute-0 sudo[224808]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dvtzvzfzgxnhstsxsrtpvjwjxtqcjjut ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765015045.6407976-137-15410914812231/AnsiballZ_stat.py'
Dec 06 09:57:26 compute-0 sudo[224808]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:57:26 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v478: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:57:26 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:57:26 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 09:57:26 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:57:26.250 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 09:57:26 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:57:26 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:57:26 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:57:26.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:57:26 compute-0 python3.9[224810]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated/iscsid/etc/iscsi follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 06 09:57:26 compute-0 sudo[224808]: pam_unix(sudo:session): session closed for user root
Dec 06 09:57:26 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:57:26 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4750003ab0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:57:27 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:57:27.075Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 06 09:57:27 compute-0 sudo[224961]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qcoyvvtayydhmkncsjgqmckcvmmhkouj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765015046.6602619-167-274227637752041/AnsiballZ_command.py'
Dec 06 09:57:27 compute-0 sudo[224961]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:57:27 compute-0 python3.9[224963]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/restorecon -nvr /etc/iscsi /var/lib/iscsi _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 09:57:27 compute-0 sudo[224961]: pam_unix(sudo:session): session closed for user root
Dec 06 09:57:27 compute-0 ceph-mon[74327]: pgmap v478: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:57:27 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:57:27 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4730003c60 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:57:27 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:57:27 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f475400a2b0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:57:28 compute-0 sudo[225127]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zjywjcfqjvyiqsgvbemlxokdkmnhshxt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765015047.8078063-197-27017077258404/AnsiballZ_stat.py'
Dec 06 09:57:28 compute-0 sudo[225127]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:57:28 compute-0 podman[225089]: 2025-12-06 09:57:28.129436321 +0000 UTC m=+0.055473238 container health_status ec1e66d2175087ad7a28a9a6b5a784e30128df3f5e268959d009e364cd5120f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Dec 06 09:57:28 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v479: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:57:28 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:57:28 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 09:57:28 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:57:28.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 09:57:28 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:57:28 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 09:57:28 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:57:28.292 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 09:57:28 compute-0 python3.9[225133]: ansible-ansible.builtin.stat Invoked with path=/etc/iscsi/.initiator_reset follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 06 09:57:28 compute-0 sudo[225127]: pam_unix(sudo:session): session closed for user root
Dec 06 09:57:28 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:57:28 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f47480041f0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:57:28 compute-0 sudo[225286]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pwlimeffwzfjxpjmlixvchdlkdswytoe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765015048.521436-221-164205281753944/AnsiballZ_command.py'
Dec 06 09:57:28 compute-0 sudo[225286]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:57:29 compute-0 python3.9[225288]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/iscsi-iname _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 09:57:29 compute-0 sudo[225286]: pam_unix(sudo:session): session closed for user root
Dec 06 09:57:29 compute-0 ceph-mon[74327]: pgmap v479: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:57:29 compute-0 sudo[225291]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 09:57:29 compute-0 sudo[225291]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:57:29 compute-0 sudo[225291]: pam_unix(sudo:session): session closed for user root
Dec 06 09:57:29 compute-0 sudo[225466]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qqiitkdyhbzenvnpzzceiouewlwacrvh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765015049.2918446-245-182972996494157/AnsiballZ_stat.py'
Dec 06 09:57:29 compute-0 sudo[225466]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:57:29 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:57:29 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f47500047c0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:57:29 compute-0 python3.9[225468]: ansible-ansible.legacy.stat Invoked with path=/etc/iscsi/initiatorname.iscsi follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:57:29 compute-0 sudo[225466]: pam_unix(sudo:session): session closed for user root
Dec 06 09:57:29 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:57:29 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4730003c60 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:57:30 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 09:57:30 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v480: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:57:30 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:57:30 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 09:57:30 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:57:30.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 09:57:30 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:57:30 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:57:30 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:57:30.294 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:57:30 compute-0 sudo[225589]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qsukatxwxaotwrvecnkqovalvvpkiquk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765015049.2918446-245-182972996494157/AnsiballZ_copy.py'
Dec 06 09:57:30 compute-0 sudo[225589]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:57:30 compute-0 python3.9[225591]: ansible-ansible.legacy.copy Invoked with dest=/etc/iscsi/initiatorname.iscsi mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765015049.2918446-245-182972996494157/.source.iscsi _original_basename=.hf4jdjk9 follow=False checksum=99526e0d7ff5604cf6666b9c8f5aa83fcb820e36 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:57:30 compute-0 sudo[225589]: pam_unix(sudo:session): session closed for user root
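
The tasks above generate a fresh initiator IQN with /usr/sbin/iscsi-iname and install it as /etc/iscsi/initiatorname.iscsi with mode 0644. A minimal sketch of the same sequence, run as root and assuming the conventional InitiatorName=<iqn> file format:

    import pathlib
    import subprocess

    # Hedged sketch of the logged tasks: generate an IQN with iscsi-iname
    # and write /etc/iscsi/initiatorname.iscsi (mode 0644).
    iqn = subprocess.run(["/usr/sbin/iscsi-iname"], check=True,
                         capture_output=True, text=True).stdout.strip()
    path = pathlib.Path("/etc/iscsi/initiatorname.iscsi")
    path.write_text(f"InitiatorName={iqn}\n")
    path.chmod(0o644)
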
Dec 06 09:57:30 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:57:30 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f475400a2b0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:57:30 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:09:57:30] "GET /metrics HTTP/1.1" 200 48264 "" "Prometheus/2.51.0"
Dec 06 09:57:30 compute-0 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:09:57:30] "GET /metrics HTTP/1.1" 200 48264 "" "Prometheus/2.51.0"
Dec 06 09:57:31 compute-0 ceph-mon[74327]: pgmap v480: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:57:31 compute-0 sudo[225742]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hgyipivzlagzvxodnwiuotzxowmlsrgf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765015050.8237169-290-11277847739380/AnsiballZ_file.py'
Dec 06 09:57:31 compute-0 sudo[225742]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:57:31 compute-0 python3.9[225744]: ansible-ansible.builtin.file Invoked with mode=0600 path=/etc/iscsi/.initiator_reset state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:57:31 compute-0 sudo[225742]: pam_unix(sudo:session): session closed for user root
Dec 06 09:57:31 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:57:31 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f47480041f0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:57:31 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:57:31 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f47480041f0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:57:32 compute-0 sudo[225895]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uklconhylgggrixbpcuapxjixibufycr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765015051.6632423-314-57117177115140/AnsiballZ_lineinfile.py'
Dec 06 09:57:32 compute-0 sudo[225895]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:57:32 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v481: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:57:32 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:57:32 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 09:57:32 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:57:32.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 09:57:32 compute-0 python3.9[225897]: ansible-ansible.builtin.lineinfile Invoked with insertafter=^#node.session.auth.chap.algs line=node.session.auth.chap_algs = SHA3-256,SHA256,SHA1,MD5 path=/etc/iscsi/iscsid.conf regexp=^node.session.auth.chap_algs state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:57:32 compute-0 sudo[225895]: pam_unix(sudo:session): session closed for user root
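
The lineinfile invocation above pins the CHAP digest preference in /etc/iscsi/iscsid.conf. A sketch of the same idempotent edit, with the path, patterns, and line taken from the logged parameters: replace an existing node.session.auth.chap_algs line in place, otherwise insert after the commented default:

    import re

    # Hedged re-implementation of the lineinfile task logged above.
    PATH = "/etc/iscsi/iscsid.conf"
    LINE = "node.session.auth.chap_algs = SHA3-256,SHA256,SHA1,MD5"
    match = re.compile(r"^node\.session\.auth\.chap_algs")
    anchor = re.compile(r"^#node\.session\.auth\.chap\.algs")

    with open(PATH) as f:
        lines = f.read().splitlines()

    for i, text in enumerate(lines):
        if match.match(text):
            lines[i] = LINE          # replace the existing setting in place
            break
    else:
        at = max((i for i, t in enumerate(lines) if anchor.match(t)),
                 default=len(lines) - 1)
        lines.insert(at + 1, LINE)   # insert after the commented default

    with open(PATH, "w") as f:
        f.write("\n".join(lines) + "\n")
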
Dec 06 09:57:32 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:57:32 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 09:57:32 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:57:32.297 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 09:57:32 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:57:32 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4730003c60 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:57:33 compute-0 sudo[226048]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xlrcjffdtlmchjjzitownvrivakcmecq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765015052.5921054-341-231847665703805/AnsiballZ_systemd_service.py'
Dec 06 09:57:33 compute-0 sudo[226048]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:57:33 compute-0 ceph-mon[74327]: pgmap v481: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:57:33 compute-0 python3.9[226050]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 06 09:57:33 compute-0 systemd[1]: Listening on Open-iSCSI iscsid Socket.
Dec 06 09:57:33 compute-0 sudo[226048]: pam_unix(sudo:session): session closed for user root
Dec 06 09:57:33 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:57:33 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f475400a2b0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:57:33 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:57:33 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f47500047c0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:57:34 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v482: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 09:57:34 compute-0 sudo[226205]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uthrbfpoynyzfrfjbzefywzuhgnrkxcp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765015053.9135063-365-118686470504853/AnsiballZ_systemd_service.py'
Dec 06 09:57:34 compute-0 sudo[226205]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:57:34 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:57:34 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 09:57:34 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:57:34.259 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 09:57:34 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:57:34 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 09:57:34 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:57:34.299 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 09:57:34 compute-0 python3.9[226207]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 06 09:57:34 compute-0 systemd[1]: Reloading.
Dec 06 09:57:34 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:57:34 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f47480041f0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:57:34 compute-0 systemd-rc-local-generator[226235]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 09:57:34 compute-0 systemd-sysv-generator[226240]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 09:57:34 compute-0 systemd[1]: One time configuration for iscsi.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/iscsi/initiatorname.iscsi).
Dec 06 09:57:34 compute-0 systemd[1]: Starting Open-iSCSI...
Dec 06 09:57:34 compute-0 kernel: Loading iSCSI transport class v2.0-870.
Dec 06 09:57:34 compute-0 systemd[1]: Started Open-iSCSI.
Dec 06 09:57:34 compute-0 systemd[1]: Starting Logout off all iSCSI sessions on shutdown...
Dec 06 09:57:35 compute-0 systemd[1]: Finished Logout off all iSCSI sessions on shutdown.
Dec 06 09:57:35 compute-0 sudo[226205]: pam_unix(sudo:session): session closed for user root
Dec 06 09:57:35 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 09:57:35 compute-0 ceph-mon[74327]: pgmap v482: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 09:57:35 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:57:35 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4730003c60 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:57:35 compute-0 sudo[226407]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mkmufwuayebjsgvbhqrutesjazwnpdli ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765015055.491377-398-58146143569790/AnsiballZ_service_facts.py'
Dec 06 09:57:35 compute-0 sudo[226407]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:57:35 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:57:35 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f475400a2b0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:57:35 compute-0 python3.9[226409]: ansible-ansible.builtin.service_facts Invoked
Dec 06 09:57:36 compute-0 network[226426]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec 06 09:57:36 compute-0 network[226427]: 'network-scripts' will be removed from distribution in near future.
Dec 06 09:57:36 compute-0 network[226428]: It is advised to switch to 'NetworkManager' instead for network management.
Dec 06 09:57:36 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v483: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:57:36 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:57:36 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 09:57:36 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:57:36.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 09:57:36 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:57:36 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 09:57:36 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:57:36.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 09:57:36 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:57:36 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4724002050 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:57:37 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:57:37.075Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 06 09:57:37 compute-0 ceph-mon[74327]: pgmap v483: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:57:37 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:57:37 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f47480041f0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:57:37 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:57:37 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4730003c60 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:57:38 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v484: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:57:38 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:57:38 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:57:38 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:57:38.263 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:57:38 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:57:38 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:57:38 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:57:38.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:57:38 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:57:38 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f475400a2b0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:57:38 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 06 09:57:38 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 09:57:39 compute-0 sudo[226407]: pam_unix(sudo:session): session closed for user root
Dec 06 09:57:39 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:57:39 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f472c001230 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:57:39 compute-0 ceph-mon[74327]: pgmap v484: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:57:39 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 09:57:39 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:57:39 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f47480041f0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:57:40 compute-0 sudo[226703]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wpnqksnfdrqptgeshoumpnqsrzgfmreg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765015059.8214512-428-121921521975262/AnsiballZ_file.py'
Dec 06 09:57:40 compute-0 sudo[226703]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:57:40 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 09:57:40 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v485: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:57:40 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-haproxy-nfs-cephfs-compute-0-fzuvue[96127]: [WARNING] 339/095740 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Dec 06 09:57:40 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:57:40 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 09:57:40 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:57:40.265 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 09:57:40 compute-0 python3.9[226705]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Dec 06 09:57:40 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:57:40 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:57:40 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:57:40.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:57:40 compute-0 sudo[226703]: pam_unix(sudo:session): session closed for user root
Dec 06 09:57:40 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:57:40 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4730003c60 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:57:40 compute-0 ceph-mon[74327]: pgmap v485: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:57:40 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:09:57:40] "GET /metrics HTTP/1.1" 200 48268 "" "Prometheus/2.51.0"
Dec 06 09:57:40 compute-0 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:09:57:40] "GET /metrics HTTP/1.1" 200 48268 "" "Prometheus/2.51.0"
Dec 06 09:57:41 compute-0 sudo[226856]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sxhbzsinoiemxeunwoyyrvimxzmnlqcg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765015060.7149084-452-64957719114853/AnsiballZ_modprobe.py'
Dec 06 09:57:41 compute-0 sudo[226856]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:57:41 compute-0 python3.9[226858]: ansible-community.general.modprobe Invoked with name=dm-multipath state=present params= persistent=disabled
Dec 06 09:57:41 compute-0 sudo[226856]: pam_unix(sudo:session): session closed for user root
Dec 06 09:57:41 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:57:41 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f475400a2b0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:57:41 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:57:41 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f475400a2b0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:57:41 compute-0 sudo[227014]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uckwdjdmwtqnctoeaktyzxbcbchpoxfy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765015061.6399589-476-240160424103381/AnsiballZ_stat.py'
Dec 06 09:57:41 compute-0 sudo[227014]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:57:42 compute-0 python3.9[227016]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/dm-multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:57:42 compute-0 sudo[227014]: pam_unix(sudo:session): session closed for user root
Dec 06 09:57:42 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v486: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:57:42 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:57:42 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:57:42 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:57:42.267 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:57:42 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:57:42 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 09:57:42 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:57:42.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 09:57:42 compute-0 sudo[227137]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dgrvdysxjhbxctqazoqmtqunreevvuyv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765015061.6399589-476-240160424103381/AnsiballZ_copy.py'
Dec 06 09:57:42 compute-0 sudo[227137]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:57:42 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:57:42 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4720000b60 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:57:42 compute-0 python3.9[227139]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/dm-multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765015061.6399589-476-240160424103381/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=065061c60917e4f67cecc70d12ce55e42f9d0b3f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:57:42 compute-0 sudo[227137]: pam_unix(sudo:session): session closed for user root
Dec 06 09:57:43 compute-0 sudo[227290]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ehgmxzpvbwkignpwrtxzakgykmdnryll ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765015063.0692124-524-145596432714019/AnsiballZ_lineinfile.py'
Dec 06 09:57:43 compute-0 sudo[227290]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:57:43 compute-0 ceph-mon[74327]: pgmap v486: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:57:43 compute-0 python3.9[227292]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=dm-multipath  mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:57:43 compute-0 sudo[227290]: pam_unix(sudo:session): session closed for user root
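
The modprobe, copy, and lineinfile tasks above load dm-multipath immediately and persist it across reboots via /etc/modules-load.d/dm-multipath.conf and /etc/modules. A condensed sketch of the equivalent, run as root; the one-line body of the modules-load.d file is an assumption about the copied template:

    import pathlib
    import subprocess

    # Hedged condensation of the three tasks logged above: load
    # dm-multipath now and persist it for the next boot.
    subprocess.run(["modprobe", "dm-multipath"], check=True)

    conf = pathlib.Path("/etc/modules-load.d/dm-multipath.conf")
    conf.write_text("dm-multipath\n")
    conf.chmod(0o644)

    modules = pathlib.Path("/etc/modules")
    existing = modules.read_text().splitlines() if modules.exists() else []
    if "dm-multipath" not in existing:
        modules.write_text("\n".join(existing + ["dm-multipath"]) + "\n")
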
Dec 06 09:57:43 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:57:43 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4730003c60 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:57:43 compute-0 ceph-mon[74327]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 06 09:57:43 compute-0 ceph-mon[74327]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Cumulative writes: 3994 writes, 18K keys, 3993 commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.03 MB/s
                                           Cumulative WAL: 3994 writes, 3993 syncs, 1.00 writes per sync, written: 0.03 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1469 writes, 6211 keys, 1469 commit groups, 1.0 writes per commit group, ingest: 10.99 MB, 0.02 MB/s
                                           Interval WAL: 1469 writes, 1469 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     78.2      0.34              0.07         8    0.043       0      0       0.0       0.0
                                             L6      1/0   11.76 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   3.4     92.1     78.9      1.15              0.26         7    0.164     32K   3649       0.0       0.0
                                            Sum      1/0   11.76 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   4.4     70.8     78.7      1.49              0.33        15    0.100     32K   3649       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   7.6     82.3     79.2      0.76              0.18         8    0.094     20K   2298       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   0.0     92.1     78.9      1.15              0.26         7    0.164     32K   3649       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     79.6      0.34              0.07         7    0.048       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      7.9      0.01              0.00         1    0.007       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.026, interval 0.008
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.11 GB write, 0.10 MB/s write, 0.10 GB read, 0.09 MB/s read, 1.5 seconds
                                           Interval compaction: 0.06 GB write, 0.10 MB/s write, 0.06 GB read, 0.10 MB/s read, 0.8 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55fd9a571350#2 capacity: 304.00 MB usage: 4.76 MB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 0 last_secs: 0.000137 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(267,4.47 MB,1.47055%) FilterBlock(16,100.92 KB,0.0324199%) IndexBlock(16,194.95 KB,0.0626263%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Dec 06 09:57:43 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:57:43 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f472c001230 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:57:44 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v487: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Dec 06 09:57:44 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:57:44 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 09:57:44 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:57:44.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 09:57:44 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:57:44 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 09:57:44 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:57:44.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 09:57:44 compute-0 sudo[227443]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xclxzdtwhdmzinvmcylfekeamlrkktfr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765015063.7599943-548-105358914400282/AnsiballZ_systemd.py'
Dec 06 09:57:44 compute-0 sudo[227443]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:57:44 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:57:44 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f472c001230 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:57:44 compute-0 python3.9[227445]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 06 09:57:44 compute-0 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Dec 06 09:57:44 compute-0 systemd[1]: Stopped Load Kernel Modules.
Dec 06 09:57:44 compute-0 systemd[1]: Stopping Load Kernel Modules...
Dec 06 09:57:44 compute-0 systemd[1]: Starting Load Kernel Modules...
Dec 06 09:57:44 compute-0 systemd[1]: Finished Load Kernel Modules.
Dec 06 09:57:44 compute-0 sudo[227443]: pam_unix(sudo:session): session closed for user root
Dec 06 09:57:45 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 09:57:45 compute-0 sudo[227601]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jzusjdeokoiwphwogqhjzalbfuvxruoi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765015065.1158855-572-66759582341358/AnsiballZ_file.py'
Dec 06 09:57:45 compute-0 sudo[227601]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:57:45 compute-0 ceph-mon[74327]: pgmap v487: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Dec 06 09:57:45 compute-0 python3.9[227603]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 06 09:57:45 compute-0 sudo[227601]: pam_unix(sudo:session): session closed for user root
Dec 06 09:57:45 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:57:45 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f47200016a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:57:45 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:57:45 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4730003c60 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:57:46 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v488: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec 06 09:57:46 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:57:46 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:57:46 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:57:46.272 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:57:46 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:57:46 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 09:57:46 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:57:46.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 09:57:46 compute-0 sudo[227753]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mvnhwbmxfzfiopygtwhmuzfgqrtawtam ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765015066.1406739-599-116320276103761/AnsiballZ_stat.py'
Dec 06 09:57:46 compute-0 sudo[227753]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:57:46 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:57:46 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f472c001230 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:57:46 compute-0 python3.9[227755]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 06 09:57:46 compute-0 sudo[227753]: pam_unix(sudo:session): session closed for user root
Dec 06 09:57:47 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:57:47.077Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 06 09:57:47 compute-0 sudo[227906]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dcyyecdglxzckyiqqmokwhqczhawhudt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765015066.9204054-626-231145873366918/AnsiballZ_stat.py'
Dec 06 09:57:47 compute-0 sudo[227906]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:57:47 compute-0 python3.9[227908]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 06 09:57:47 compute-0 sudo[227906]: pam_unix(sudo:session): session closed for user root
Dec 06 09:57:47 compute-0 ceph-mon[74327]: pgmap v488: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec 06 09:57:47 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:57:47 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f472c001230 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:57:47 compute-0 sudo[228059]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iruvymhghofczsxhpireafxcvzqxnpmp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765015067.6142457-650-215611357130144/AnsiballZ_stat.py'
Dec 06 09:57:47 compute-0 sudo[228059]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:57:47 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:57:47 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f47200016a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:57:48 compute-0 python3.9[228061]: ansible-ansible.legacy.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:57:48 compute-0 sudo[228059]: pam_unix(sudo:session): session closed for user root
Dec 06 09:57:48 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v489: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Dec 06 09:57:48 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:57:48 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 09:57:48 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:57:48.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 09:57:48 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:57:48 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 09:57:48 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:57:48.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 09:57:48 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:57:48 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 06 09:57:48 compute-0 sudo[228182]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rppthkpwpzxgxknwftcysrhrpnrgwhwz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765015067.6142457-650-215611357130144/AnsiballZ_copy.py'
Dec 06 09:57:48 compute-0 sudo[228182]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:57:48 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:57:48 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4730003c60 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:57:48 compute-0 python3.9[228184]: ansible-ansible.legacy.copy Invoked with dest=/etc/multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765015067.6142457-650-215611357130144/.source.conf _original_basename=multipath.conf follow=False checksum=bf02ab264d3d648048a81f3bacec8bc58db93162 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:57:48 compute-0 sudo[228182]: pam_unix(sudo:session): session closed for user root
Dec 06 09:57:49 compute-0 sudo[228308]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 09:57:49 compute-0 sudo[228308]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:57:49 compute-0 sudo[228308]: pam_unix(sudo:session): session closed for user root
Dec 06 09:57:49 compute-0 sudo[228361]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-drlumbnmhqnabtbbtaekgpnqcrdvhsur ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765015068.9297621-695-163281974026157/AnsiballZ_command.py'
Dec 06 09:57:49 compute-0 sudo[228361]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:57:49 compute-0 ceph-mon[74327]: pgmap v489: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Dec 06 09:57:49 compute-0 python3.9[228363]: ansible-ansible.legacy.command Invoked with _raw_params=grep -q '^blacklist\s*{' /etc/multipath.conf _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 09:57:49 compute-0 sudo[228361]: pam_unix(sudo:session): session closed for user root
Dec 06 09:57:49 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:57:49 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4730003c60 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:57:49 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:57:49 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4730003c60 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:57:50 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 09:57:50 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v490: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Dec 06 09:57:50 compute-0 sudo[228514]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bgcxxjaostmbzaxspavtitzzhppgamfq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765015069.9493246-719-268493681914560/AnsiballZ_lineinfile.py'
Dec 06 09:57:50 compute-0 sudo[228514]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:57:50 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:57:50 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 09:57:50 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:57:50.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 09:57:50 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:57:50 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:57:50 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:57:50.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:57:50 compute-0 python3.9[228516]: ansible-ansible.builtin.lineinfile Invoked with line=blacklist { path=/etc/multipath.conf state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:57:50 compute-0 sudo[228514]: pam_unix(sudo:session): session closed for user root
Dec 06 09:57:50 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:57:50 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4730003c60 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:57:50 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:09:57:50] "GET /metrics HTTP/1.1" 200 48268 "" "Prometheus/2.51.0"
Dec 06 09:57:50 compute-0 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:09:57:50] "GET /metrics HTTP/1.1" 200 48268 "" "Prometheus/2.51.0"
Dec 06 09:57:51 compute-0 sudo[228667]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-evmmpxypeknyrfegyaxtgpdkuumomolm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765015070.6665616-743-15842793808204/AnsiballZ_replace.py'
Dec 06 09:57:51 compute-0 sudo[228667]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:57:51 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:57:51 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 06 09:57:51 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:57:51 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 06 09:57:51 compute-0 python3.9[228669]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^(blacklist {) replace=\1\n} backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:57:51 compute-0 sudo[228667]: pam_unix(sudo:session): session closed for user root
Dec 06 09:57:51 compute-0 ceph-mon[74327]: pgmap v490: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Dec 06 09:57:51 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:57:51 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f47200016a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:57:51 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:57:51 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f475400a2b0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:57:51 compute-0 sudo[228840]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nexizhkkwimzcrdpsachwjvyqpyjgqjs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765015071.611723-767-199407656361277/AnsiballZ_replace.py'
Dec 06 09:57:51 compute-0 sudo[228840]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:57:52 compute-0 podman[228794]: 2025-12-06 09:57:52.009723936 +0000 UTC m=+0.133067361 container health_status ed08b04f63d3695f0aa2f04fcc706897af4aa24f286fa3cd10a4323ee2c868ab (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 06 09:57:52 compute-0 python3.9[228846]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^blacklist\s*{\n[\s]+devnode \"\.\*\" replace=blacklist { backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:57:52 compute-0 sudo[228840]: pam_unix(sudo:session): session closed for user root
Dec 06 09:57:52 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v491: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Dec 06 09:57:52 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:57:52 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 09:57:52 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:57:52.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 09:57:52 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:57:52 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 09:57:52 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:57:52.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 09:57:52 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:57:52 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f472c001230 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:57:52 compute-0 sudo[229002]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nvskjiijzjmmdqlvsibjxvoqselsjuhu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765015072.4384303-794-221975433464079/AnsiballZ_lineinfile.py'
Dec 06 09:57:52 compute-0 sudo[229002]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:57:52 compute-0 python3.9[229004]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        find_multipaths yes path=/etc/multipath.conf regexp=^\s+find_multipaths state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:57:52 compute-0 sudo[229002]: pam_unix(sudo:session): session closed for user root
Dec 06 09:57:53 compute-0 sudo[229156]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hjijieqwbemasbvdsiydvmjsipbmdpzb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765015073.1568563-794-201954876249915/AnsiballZ_lineinfile.py'
Dec 06 09:57:53 compute-0 sudo[229156]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:57:53 compute-0 ceph-mon[74327]: pgmap v491: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Dec 06 09:57:53 compute-0 python3.9[229158]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        recheck_wwid yes path=/etc/multipath.conf regexp=^\s+recheck_wwid state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:57:53 compute-0 sudo[229156]: pam_unix(sudo:session): session closed for user root
Dec 06 09:57:53 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:57:53 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4730004580 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:57:53 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 06 09:57:53 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 09:57:53 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:57:53 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4720002b10 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:57:53 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 09:57:53 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 09:57:53 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 09:57:53 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 09:57:54 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 09:57:54 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 09:57:54 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v492: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 938 B/s wr, 3 op/s
Dec 06 09:57:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:57:54.229 162267 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 09:57:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:57:54.230 162267 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 09:57:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:57:54.230 162267 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 09:57:54 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:57:54 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:57:54 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:57:54.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:57:54 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:57:54 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 09:57:54 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:57:54.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 09:57:54 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:57:54 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Dec 06 09:57:54 compute-0 sudo[229308]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vjfvbvxothwqmealrmtckulgqdtwvvrc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765015074.185189-794-76032042733058/AnsiballZ_lineinfile.py'
Dec 06 09:57:54 compute-0 sudo[229308]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:57:54 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:57:54 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f475400a2b0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:57:54 compute-0 python3.9[229310]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        skip_kpartx yes path=/etc/multipath.conf regexp=^\s+skip_kpartx state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:57:54 compute-0 sudo[229308]: pam_unix(sudo:session): session closed for user root
Dec 06 09:57:54 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 09:57:55 compute-0 sudo[229461]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-duphfvbvwlfuzwutwbufdlhtoibycbct ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765015074.8645263-794-41259054369038/AnsiballZ_lineinfile.py'
Dec 06 09:57:55 compute-0 sudo[229461]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:57:55 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 09:57:55 compute-0 python3.9[229463]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        user_friendly_names no path=/etc/multipath.conf regexp=^\s+user_friendly_names state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:57:55 compute-0 sudo[229461]: pam_unix(sudo:session): session closed for user root
Dec 06 09:57:55 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:57:55 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f472c003690 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:57:55 compute-0 ceph-mon[74327]: pgmap v492: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 938 B/s wr, 3 op/s
Dec 06 09:57:55 compute-0 sudo[229614]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nldxwkrvmcrykywwbnqmzoltmpstmdgy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765015075.614023-881-125821322099014/AnsiballZ_stat.py'
Dec 06 09:57:55 compute-0 sudo[229614]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:57:55 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:57:55 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4730004580 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:57:56 compute-0 python3.9[229616]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 06 09:57:56 compute-0 sudo[229614]: pam_unix(sudo:session): session closed for user root
Dec 06 09:57:56 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v493: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Dec 06 09:57:56 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:57:56 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 09:57:56 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:57:56.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 09:57:56 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:57:56 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:57:56 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:57:56.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:57:56 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:57:56 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4720002b10 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:57:56 compute-0 sudo[229768]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qlxanelmvspozsdasdeebnvoiavciqgw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765015076.3911905-905-87099140188159/AnsiballZ_file.py'
Dec 06 09:57:56 compute-0 sudo[229768]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:57:56 compute-0 ceph-mon[74327]: pgmap v493: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Dec 06 09:57:56 compute-0 python3.9[229770]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/multipath/.multipath_restart_required state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:57:57 compute-0 sudo[229768]: pam_unix(sudo:session): session closed for user root
Dec 06 09:57:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:57:57.077Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec 06 09:57:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:57:57.078Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 06 09:57:57 compute-0 sudo[229922]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wxoqksedrmnwercwlpvizdljthsukygo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765015077.35069-932-93251461867108/AnsiballZ_file.py'
Dec 06 09:57:57 compute-0 sudo[229922]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:57:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:57:57 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f475400a2b0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:57:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:57:57 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f472c003690 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:57:57 compute-0 python3.9[229924]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 06 09:57:58 compute-0 sudo[229922]: pam_unix(sudo:session): session closed for user root
Dec 06 09:57:58 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v494: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 1023 B/s wr, 3 op/s
Dec 06 09:57:58 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:57:58 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 09:57:58 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:57:58.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 09:57:58 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:57:58 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:57:58 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:57:58.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:57:58 compute-0 podman[229949]: 2025-12-06 09:57:58.484407104 +0000 UTC m=+0.105034385 container health_status ec1e66d2175087ad7a28a9a6b5a784e30128df3f5e268959d009e364cd5120f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Dec 06 09:57:58 compute-0 sudo[229978]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 09:57:58 compute-0 sudo[229978]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:57:58 compute-0 sudo[229978]: pam_unix(sudo:session): session closed for user root
Dec 06 09:57:58 compute-0 sudo[230028]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Dec 06 09:57:58 compute-0 sudo[230028]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:57:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:57:58 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4730004580 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:57:58 compute-0 sudo[230150]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-prfzhumrqcxxiombgpolljmkfxjwkunt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765015078.503025-956-172565387074532/AnsiballZ_stat.py'
Dec 06 09:57:58 compute-0 sudo[230150]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:57:58 compute-0 python3.9[230156]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:57:59 compute-0 sudo[230150]: pam_unix(sudo:session): session closed for user root
Dec 06 09:57:59 compute-0 sudo[230028]: pam_unix(sudo:session): session closed for user root
Dec 06 09:57:59 compute-0 ceph-mon[74327]: pgmap v494: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 1023 B/s wr, 3 op/s
Dec 06 09:57:59 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 06 09:57:59 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 09:57:59 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 06 09:57:59 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 09:57:59 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 06 09:57:59 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:57:59 compute-0 sudo[230252]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-imowplxltobvuvkxrcdhspcgothydkop ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765015078.503025-956-172565387074532/AnsiballZ_file.py'
Dec 06 09:57:59 compute-0 sudo[230252]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:57:59 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Dec 06 09:57:59 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:57:59 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 06 09:57:59 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 09:57:59 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 06 09:57:59 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 09:57:59 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 06 09:57:59 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 09:57:59 compute-0 sudo[230255]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 09:57:59 compute-0 sudo[230255]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:57:59 compute-0 sudo[230255]: pam_unix(sudo:session): session closed for user root
Dec 06 09:57:59 compute-0 sudo[230280]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 5ecd3f74-dade-5fc4-92ce-8950ae424258 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Dec 06 09:57:59 compute-0 sudo[230280]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:57:59 compute-0 python3.9[230254]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 06 09:57:59 compute-0 sudo[230252]: pam_unix(sudo:session): session closed for user root
Dec 06 09:57:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:57:59 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4730004580 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:57:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:57:59 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4720002b10 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:57:59 compute-0 podman[230471]: 2025-12-06 09:57:59.966717534 +0000 UTC m=+0.050069572 container create e09fe20b244d7f95e13dcbcae92827560a23f0b8d7d1754387e47ef9e63b383e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_villani, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 09:58:00 compute-0 systemd[1]: Started libpod-conmon-e09fe20b244d7f95e13dcbcae92827560a23f0b8d7d1754387e47ef9e63b383e.scope.
Dec 06 09:58:00 compute-0 sudo[230514]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xthwtlnepbobnzahngznundtaynmmvoz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765015079.6609337-956-156774259771058/AnsiballZ_stat.py'
Dec 06 09:58:00 compute-0 sudo[230514]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:58:00 compute-0 podman[230471]: 2025-12-06 09:57:59.943323462 +0000 UTC m=+0.026675550 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 06 09:58:00 compute-0 systemd[1]: Started libcrun container.
Dec 06 09:58:00 compute-0 podman[230471]: 2025-12-06 09:58:00.064627546 +0000 UTC m=+0.147979624 container init e09fe20b244d7f95e13dcbcae92827560a23f0b8d7d1754387e47ef9e63b383e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_villani, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1)
Dec 06 09:58:00 compute-0 podman[230471]: 2025-12-06 09:58:00.081640965 +0000 UTC m=+0.164993003 container start e09fe20b244d7f95e13dcbcae92827560a23f0b8d7d1754387e47ef9e63b383e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_villani, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 09:58:00 compute-0 podman[230471]: 2025-12-06 09:58:00.086179187 +0000 UTC m=+0.169531275 container attach e09fe20b244d7f95e13dcbcae92827560a23f0b8d7d1754387e47ef9e63b383e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_villani, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=squid)
Dec 06 09:58:00 compute-0 systemd[1]: libpod-e09fe20b244d7f95e13dcbcae92827560a23f0b8d7d1754387e47ef9e63b383e.scope: Deactivated successfully.
Dec 06 09:58:00 compute-0 exciting_villani[230516]: 167 167
Dec 06 09:58:00 compute-0 conmon[230516]: conmon e09fe20b244d7f95e13d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e09fe20b244d7f95e13dcbcae92827560a23f0b8d7d1754387e47ef9e63b383e.scope/container/memory.events
Dec 06 09:58:00 compute-0 podman[230471]: 2025-12-06 09:58:00.094923104 +0000 UTC m=+0.178275182 container died e09fe20b244d7f95e13dcbcae92827560a23f0b8d7d1754387e47ef9e63b383e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_villani, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 06 09:58:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-7b9757e85599f436fc206a07db0eac5a53cc25fa8c53bda0f852433e8d9b5684-merged.mount: Deactivated successfully.
Dec 06 09:58:00 compute-0 podman[230471]: 2025-12-06 09:58:00.145427977 +0000 UTC m=+0.228780025 container remove e09fe20b244d7f95e13dcbcae92827560a23f0b8d7d1754387e47ef9e63b383e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_villani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Dec 06 09:58:00 compute-0 systemd[1]: libpod-conmon-e09fe20b244d7f95e13dcbcae92827560a23f0b8d7d1754387e47ef9e63b383e.scope: Deactivated successfully.
Dec 06 09:58:00 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 09:58:00 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v495: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 938 B/s wr, 3 op/s
Dec 06 09:58:00 compute-0 python3.9[230518]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:58:00 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-haproxy-nfs-cephfs-compute-0-fzuvue[96127]: [WARNING] 339/095800 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Dec 06 09:58:00 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 09:58:00 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 09:58:00 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:58:00 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:58:00 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 09:58:00 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 09:58:00 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 09:58:00 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:58:00 compute-0 sudo[230514]: pam_unix(sudo:session): session closed for user root
Dec 06 09:58:00 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.003000080s ======
Dec 06 09:58:00 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:58:00.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.003000080s
Dec 06 09:58:00 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:58:00 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 09:58:00 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:58:00.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 09:58:00 compute-0 podman[230543]: 2025-12-06 09:58:00.386458881 +0000 UTC m=+0.069042505 container create 17fb08baa149dad5cde045ba934aa2d59fa4a74e49c56e343d4d6bfe2b175f1d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_maxwell, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, ceph=True)
Dec 06 09:58:00 compute-0 systemd[1]: Started libpod-conmon-17fb08baa149dad5cde045ba934aa2d59fa4a74e49c56e343d4d6bfe2b175f1d.scope.
Dec 06 09:58:00 compute-0 podman[230543]: 2025-12-06 09:58:00.356996385 +0000 UTC m=+0.039580029 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 06 09:58:00 compute-0 systemd[1]: Started libcrun container.
Dec 06 09:58:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a4b77580a8fa4a77d6aee359b8d7fbde02a56a89920cc2538fd57d60d575ac28/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 09:58:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a4b77580a8fa4a77d6aee359b8d7fbde02a56a89920cc2538fd57d60d575ac28/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 09:58:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a4b77580a8fa4a77d6aee359b8d7fbde02a56a89920cc2538fd57d60d575ac28/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 09:58:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a4b77580a8fa4a77d6aee359b8d7fbde02a56a89920cc2538fd57d60d575ac28/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 09:58:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a4b77580a8fa4a77d6aee359b8d7fbde02a56a89920cc2538fd57d60d575ac28/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 06 09:58:00 compute-0 podman[230543]: 2025-12-06 09:58:00.513972621 +0000 UTC m=+0.196556295 container init 17fb08baa149dad5cde045ba934aa2d59fa4a74e49c56e343d4d6bfe2b175f1d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_maxwell, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 09:58:00 compute-0 podman[230543]: 2025-12-06 09:58:00.530258831 +0000 UTC m=+0.212842455 container start 17fb08baa149dad5cde045ba934aa2d59fa4a74e49c56e343d4d6bfe2b175f1d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_maxwell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec 06 09:58:00 compute-0 podman[230543]: 2025-12-06 09:58:00.536043097 +0000 UTC m=+0.218626691 container attach 17fb08baa149dad5cde045ba934aa2d59fa4a74e49c56e343d4d6bfe2b175f1d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_maxwell, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec 06 09:58:00 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:58:00 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4720002b10 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:58:00 compute-0 sudo[230637]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eyagugjgqubvuclijyfvbvhkziirbopb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765015079.6609337-956-156774259771058/AnsiballZ_file.py'
Dec 06 09:58:00 compute-0 sudo[230637]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:58:00 compute-0 python3.9[230639]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 06 09:58:00 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:09:58:00] "GET /metrics HTTP/1.1" 200 48264 "" "Prometheus/2.51.0"
Dec 06 09:58:00 compute-0 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:09:58:00] "GET /metrics HTTP/1.1" 200 48264 "" "Prometheus/2.51.0"
Dec 06 09:58:00 compute-0 sudo[230637]: pam_unix(sudo:session): session closed for user root
Dec 06 09:58:00 compute-0 stoic_maxwell[230583]: --> passed data devices: 0 physical, 1 LVM
Dec 06 09:58:00 compute-0 stoic_maxwell[230583]: --> All data devices are unavailable
Dec 06 09:58:00 compute-0 systemd[1]: libpod-17fb08baa149dad5cde045ba934aa2d59fa4a74e49c56e343d4d6bfe2b175f1d.scope: Deactivated successfully.
Dec 06 09:58:00 compute-0 podman[230543]: 2025-12-06 09:58:00.979080412 +0000 UTC m=+0.661664036 container died 17fb08baa149dad5cde045ba934aa2d59fa4a74e49c56e343d4d6bfe2b175f1d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_maxwell, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Dec 06 09:58:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-a4b77580a8fa4a77d6aee359b8d7fbde02a56a89920cc2538fd57d60d575ac28-merged.mount: Deactivated successfully.
Dec 06 09:58:01 compute-0 podman[230543]: 2025-12-06 09:58:01.054011364 +0000 UTC m=+0.736594988 container remove 17fb08baa149dad5cde045ba934aa2d59fa4a74e49c56e343d4d6bfe2b175f1d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_maxwell, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 06 09:58:01 compute-0 systemd[1]: libpod-conmon-17fb08baa149dad5cde045ba934aa2d59fa4a74e49c56e343d4d6bfe2b175f1d.scope: Deactivated successfully.
Dec 06 09:58:01 compute-0 sudo[230280]: pam_unix(sudo:session): session closed for user root
Dec 06 09:58:01 compute-0 sudo[230712]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 09:58:01 compute-0 sudo[230712]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:58:01 compute-0 sudo[230712]: pam_unix(sudo:session): session closed for user root
Dec 06 09:58:01 compute-0 sudo[230770]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 5ecd3f74-dade-5fc4-92ce-8950ae424258 -- lvm list --format json
Dec 06 09:58:01 compute-0 sudo[230770]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:58:01 compute-0 ceph-mon[74327]: pgmap v495: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 938 B/s wr, 3 op/s
Dec 06 09:58:01 compute-0 sudo[230865]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zoqccpsmvkvucfxtwvuypfxzjfhbahhu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765015081.1296282-1025-50001785195200/AnsiballZ_file.py'
Dec 06 09:58:01 compute-0 sudo[230865]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:58:01 compute-0 python3.9[230869]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:58:01 compute-0 sudo[230865]: pam_unix(sudo:session): session closed for user root
Dec 06 09:58:01 compute-0 podman[230909]: 2025-12-06 09:58:01.747356503 +0000 UTC m=+0.052176778 container create 9a83bb914a1299a186221a3226c0c2216c2af8cfb7fe4f63d416a84953c85532 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_stonebraker, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 09:58:01 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:58:01 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4720002b10 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:58:01 compute-0 systemd[1]: Started libpod-conmon-9a83bb914a1299a186221a3226c0c2216c2af8cfb7fe4f63d416a84953c85532.scope.
Dec 06 09:58:01 compute-0 podman[230909]: 2025-12-06 09:58:01.730605221 +0000 UTC m=+0.035425516 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 06 09:58:01 compute-0 systemd[1]: Started libcrun container.
Dec 06 09:58:01 compute-0 podman[230909]: 2025-12-06 09:58:01.84730344 +0000 UTC m=+0.152123735 container init 9a83bb914a1299a186221a3226c0c2216c2af8cfb7fe4f63d416a84953c85532 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_stonebraker, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 09:58:01 compute-0 podman[230909]: 2025-12-06 09:58:01.860705132 +0000 UTC m=+0.165525447 container start 9a83bb914a1299a186221a3226c0c2216c2af8cfb7fe4f63d416a84953c85532 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_stonebraker, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec 06 09:58:01 compute-0 podman[230909]: 2025-12-06 09:58:01.864958016 +0000 UTC m=+0.169778311 container attach 9a83bb914a1299a186221a3226c0c2216c2af8cfb7fe4f63d416a84953c85532 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_stonebraker, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 06 09:58:01 compute-0 gallant_stonebraker[230950]: 167 167
Dec 06 09:58:01 compute-0 systemd[1]: libpod-9a83bb914a1299a186221a3226c0c2216c2af8cfb7fe4f63d416a84953c85532.scope: Deactivated successfully.
Dec 06 09:58:01 compute-0 podman[230909]: 2025-12-06 09:58:01.8673216 +0000 UTC m=+0.172141885 container died 9a83bb914a1299a186221a3226c0c2216c2af8cfb7fe4f63d416a84953c85532 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_stonebraker, io.buildah.version=1.40.1, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec 06 09:58:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-24783a5569f7ae41d9bc886eb9762347e9983a9d8f3db4787c570a9a8c107c60-merged.mount: Deactivated successfully.
Dec 06 09:58:01 compute-0 podman[230909]: 2025-12-06 09:58:01.906056236 +0000 UTC m=+0.210876531 container remove 9a83bb914a1299a186221a3226c0c2216c2af8cfb7fe4f63d416a84953c85532 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_stonebraker, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 09:58:01 compute-0 systemd[1]: libpod-conmon-9a83bb914a1299a186221a3226c0c2216c2af8cfb7fe4f63d416a84953c85532.scope: Deactivated successfully.
Dec 06 09:58:01 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:58:01 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f475400a2b0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:58:02 compute-0 podman[231040]: 2025-12-06 09:58:02.138600531 +0000 UTC m=+0.079555147 container create e99ae4417a7f385c38fc06b9883dc42b0f0780bbe8b4d2d3c8ccaff40f6d03f3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_proskuriakova, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 09:58:02 compute-0 systemd[1]: Started libpod-conmon-e99ae4417a7f385c38fc06b9883dc42b0f0780bbe8b4d2d3c8ccaff40f6d03f3.scope.
Dec 06 09:58:02 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v496: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 938 B/s wr, 3 op/s
Dec 06 09:58:02 compute-0 podman[231040]: 2025-12-06 09:58:02.10856179 +0000 UTC m=+0.049516446 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 06 09:58:02 compute-0 systemd[1]: Started libcrun container.
Dec 06 09:58:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b8d765019fabf349068cfad4637ca779823692af5ef576ac75a3d49207d1bf9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 09:58:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b8d765019fabf349068cfad4637ca779823692af5ef576ac75a3d49207d1bf9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 09:58:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b8d765019fabf349068cfad4637ca779823692af5ef576ac75a3d49207d1bf9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 09:58:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b8d765019fabf349068cfad4637ca779823692af5ef576ac75a3d49207d1bf9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 09:58:02 compute-0 podman[231040]: 2025-12-06 09:58:02.241123708 +0000 UTC m=+0.182078304 container init e99ae4417a7f385c38fc06b9883dc42b0f0780bbe8b4d2d3c8ccaff40f6d03f3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_proskuriakova, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 09:58:02 compute-0 podman[231040]: 2025-12-06 09:58:02.256704118 +0000 UTC m=+0.197658694 container start e99ae4417a7f385c38fc06b9883dc42b0f0780bbe8b4d2d3c8ccaff40f6d03f3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_proskuriakova, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 09:58:02 compute-0 podman[231040]: 2025-12-06 09:58:02.260318896 +0000 UTC m=+0.201273482 container attach e99ae4417a7f385c38fc06b9883dc42b0f0780bbe8b4d2d3c8ccaff40f6d03f3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_proskuriakova, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325)
Dec 06 09:58:02 compute-0 sudo[231122]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nwmmdrudjhrmmghjqzfwxiehqiytqkjy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765015081.904799-1049-136424522818786/AnsiballZ_stat.py'
Dec 06 09:58:02 compute-0 sudo[231122]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:58:02 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:58:02 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 09:58:02 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:58:02.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 09:58:02 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:58:02 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:58:02 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:58:02.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:58:02 compute-0 python3.9[231124]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:58:02 compute-0 sudo[231122]: pam_unix(sudo:session): session closed for user root
Dec 06 09:58:02 compute-0 happy_proskuriakova[231091]: {
Dec 06 09:58:02 compute-0 happy_proskuriakova[231091]:     "1": [
Dec 06 09:58:02 compute-0 happy_proskuriakova[231091]:         {
Dec 06 09:58:02 compute-0 happy_proskuriakova[231091]:             "devices": [
Dec 06 09:58:02 compute-0 happy_proskuriakova[231091]:                 "/dev/loop3"
Dec 06 09:58:02 compute-0 happy_proskuriakova[231091]:             ],
Dec 06 09:58:02 compute-0 happy_proskuriakova[231091]:             "lv_name": "ceph_lv0",
Dec 06 09:58:02 compute-0 happy_proskuriakova[231091]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 09:58:02 compute-0 happy_proskuriakova[231091]:             "lv_size": "21470642176",
Dec 06 09:58:02 compute-0 happy_proskuriakova[231091]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=a55pcS-v3Vt-CJ4W-fqCV-Baze-9VlY-jG9prS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=5ecd3f74-dade-5fc4-92ce-8950ae424258,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=7899c4d8-edb4-4836-b838-c4aa702ad7af,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 06 09:58:02 compute-0 happy_proskuriakova[231091]:             "lv_uuid": "a55pcS-v3Vt-CJ4W-fqCV-Baze-9VlY-jG9prS",
Dec 06 09:58:02 compute-0 happy_proskuriakova[231091]:             "name": "ceph_lv0",
Dec 06 09:58:02 compute-0 happy_proskuriakova[231091]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 09:58:02 compute-0 happy_proskuriakova[231091]:             "tags": {
Dec 06 09:58:02 compute-0 happy_proskuriakova[231091]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 06 09:58:02 compute-0 happy_proskuriakova[231091]:                 "ceph.block_uuid": "a55pcS-v3Vt-CJ4W-fqCV-Baze-9VlY-jG9prS",
Dec 06 09:58:02 compute-0 happy_proskuriakova[231091]:                 "ceph.cephx_lockbox_secret": "",
Dec 06 09:58:02 compute-0 happy_proskuriakova[231091]:                 "ceph.cluster_fsid": "5ecd3f74-dade-5fc4-92ce-8950ae424258",
Dec 06 09:58:02 compute-0 happy_proskuriakova[231091]:                 "ceph.cluster_name": "ceph",
Dec 06 09:58:02 compute-0 happy_proskuriakova[231091]:                 "ceph.crush_device_class": "",
Dec 06 09:58:02 compute-0 happy_proskuriakova[231091]:                 "ceph.encrypted": "0",
Dec 06 09:58:02 compute-0 happy_proskuriakova[231091]:                 "ceph.osd_fsid": "7899c4d8-edb4-4836-b838-c4aa702ad7af",
Dec 06 09:58:02 compute-0 happy_proskuriakova[231091]:                 "ceph.osd_id": "1",
Dec 06 09:58:02 compute-0 happy_proskuriakova[231091]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 06 09:58:02 compute-0 happy_proskuriakova[231091]:                 "ceph.type": "block",
Dec 06 09:58:02 compute-0 happy_proskuriakova[231091]:                 "ceph.vdo": "0",
Dec 06 09:58:02 compute-0 happy_proskuriakova[231091]:                 "ceph.with_tpm": "0"
Dec 06 09:58:02 compute-0 happy_proskuriakova[231091]:             },
Dec 06 09:58:02 compute-0 happy_proskuriakova[231091]:             "type": "block",
Dec 06 09:58:02 compute-0 happy_proskuriakova[231091]:             "vg_name": "ceph_vg0"
Dec 06 09:58:02 compute-0 happy_proskuriakova[231091]:         }
Dec 06 09:58:02 compute-0 happy_proskuriakova[231091]:     ]
Dec 06 09:58:02 compute-0 happy_proskuriakova[231091]: }
Dec 06 09:58:02 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:58:02 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f472c003690 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:58:02 compute-0 systemd[1]: libpod-e99ae4417a7f385c38fc06b9883dc42b0f0780bbe8b4d2d3c8ccaff40f6d03f3.scope: Deactivated successfully.
Dec 06 09:58:02 compute-0 podman[231040]: 2025-12-06 09:58:02.629920899 +0000 UTC m=+0.570875475 container died e99ae4417a7f385c38fc06b9883dc42b0f0780bbe8b4d2d3c8ccaff40f6d03f3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_proskuriakova, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 09:58:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-6b8d765019fabf349068cfad4637ca779823692af5ef576ac75a3d49207d1bf9-merged.mount: Deactivated successfully.
Dec 06 09:58:02 compute-0 podman[231040]: 2025-12-06 09:58:02.688188801 +0000 UTC m=+0.629143417 container remove e99ae4417a7f385c38fc06b9883dc42b0f0780bbe8b4d2d3c8ccaff40f6d03f3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_proskuriakova, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 09:58:02 compute-0 systemd[1]: libpod-conmon-e99ae4417a7f385c38fc06b9883dc42b0f0780bbe8b4d2d3c8ccaff40f6d03f3.scope: Deactivated successfully.
Dec 06 09:58:02 compute-0 sudo[230770]: pam_unix(sudo:session): session closed for user root
Dec 06 09:58:02 compute-0 sudo[231157]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 09:58:02 compute-0 sudo[231157]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:58:02 compute-0 sudo[231157]: pam_unix(sudo:session): session closed for user root
Dec 06 09:58:02 compute-0 sudo[231217]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 5ecd3f74-dade-5fc4-92ce-8950ae424258 -- raw list --format json
Dec 06 09:58:02 compute-0 sudo[231217]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:58:02 compute-0 sudo[231267]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dnghpkzfhqfcaugwftucrxsysojgnrpu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765015081.904799-1049-136424522818786/AnsiballZ_file.py'
Dec 06 09:58:02 compute-0 sudo[231267]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:58:03 compute-0 python3.9[231270]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:58:03 compute-0 sudo[231267]: pam_unix(sudo:session): session closed for user root
Dec 06 09:58:03 compute-0 ceph-mon[74327]: pgmap v496: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 938 B/s wr, 3 op/s
Dec 06 09:58:03 compute-0 podman[231362]: 2025-12-06 09:58:03.478692923 +0000 UTC m=+0.068177541 container create a9515294f59e158ff7bf4b025b4a5ec65460ecc580ccc259ca619135b52cdcf8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_visvesvaraya, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325)
Dec 06 09:58:03 compute-0 systemd[1]: Started libpod-conmon-a9515294f59e158ff7bf4b025b4a5ec65460ecc580ccc259ca619135b52cdcf8.scope.
Dec 06 09:58:03 compute-0 podman[231362]: 2025-12-06 09:58:03.45007108 +0000 UTC m=+0.039555538 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 06 09:58:03 compute-0 systemd[1]: Started libcrun container.
Dec 06 09:58:03 compute-0 podman[231362]: 2025-12-06 09:58:03.603196793 +0000 UTC m=+0.192681251 container init a9515294f59e158ff7bf4b025b4a5ec65460ecc580ccc259ca619135b52cdcf8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_visvesvaraya, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Dec 06 09:58:03 compute-0 podman[231362]: 2025-12-06 09:58:03.616300857 +0000 UTC m=+0.205785265 container start a9515294f59e158ff7bf4b025b4a5ec65460ecc580ccc259ca619135b52cdcf8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_visvesvaraya, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.license=GPLv2)
Dec 06 09:58:03 compute-0 podman[231362]: 2025-12-06 09:58:03.620761376 +0000 UTC m=+0.210245784 container attach a9515294f59e158ff7bf4b025b4a5ec65460ecc580ccc259ca619135b52cdcf8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_visvesvaraya, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 09:58:03 compute-0 romantic_visvesvaraya[231413]: 167 167
Dec 06 09:58:03 compute-0 systemd[1]: libpod-a9515294f59e158ff7bf4b025b4a5ec65460ecc580ccc259ca619135b52cdcf8.scope: Deactivated successfully.
Dec 06 09:58:03 compute-0 podman[231362]: 2025-12-06 09:58:03.625129025 +0000 UTC m=+0.214613433 container died a9515294f59e158ff7bf4b025b4a5ec65460ecc580ccc259ca619135b52cdcf8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_visvesvaraya, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325)
Dec 06 09:58:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-84d00063dfd1bec23dd70f230e633d20743d0f9fd0b63edb6a18704ee973c78a-merged.mount: Deactivated successfully.
Dec 06 09:58:03 compute-0 podman[231362]: 2025-12-06 09:58:03.675471843 +0000 UTC m=+0.264956201 container remove a9515294f59e158ff7bf4b025b4a5ec65460ecc580ccc259ca619135b52cdcf8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_visvesvaraya, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 09:58:03 compute-0 systemd[1]: libpod-conmon-a9515294f59e158ff7bf4b025b4a5ec65460ecc580ccc259ca619135b52cdcf8.scope: Deactivated successfully.
Dec 06 09:58:03 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:58:03 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4730004580 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:58:03 compute-0 sudo[231499]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kfosmviuxykzudtcplwhcxrmicmtbcbh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765015083.4120169-1085-241053986780205/AnsiballZ_stat.py'
Dec 06 09:58:03 compute-0 sudo[231499]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:58:03 compute-0 podman[231507]: 2025-12-06 09:58:03.906464156 +0000 UTC m=+0.058828728 container create 92305db47f02ade62e11c6ecd86a7b8152de33c5d80f9a7a0297add4c3f6cd69 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_kare, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 09:58:03 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:58:03 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4720003c10 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:58:03 compute-0 systemd[1]: Started libpod-conmon-92305db47f02ade62e11c6ecd86a7b8152de33c5d80f9a7a0297add4c3f6cd69.scope.
Dec 06 09:58:03 compute-0 python3.9[231501]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:58:03 compute-0 podman[231507]: 2025-12-06 09:58:03.882684345 +0000 UTC m=+0.035048947 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 06 09:58:04 compute-0 systemd[1]: Started libcrun container.
Dec 06 09:58:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a4d5d2869b1c10c0be652d310010fcd8f1d40cd1f7c2dd4cc86500e51ac91111/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 09:58:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a4d5d2869b1c10c0be652d310010fcd8f1d40cd1f7c2dd4cc86500e51ac91111/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 09:58:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a4d5d2869b1c10c0be652d310010fcd8f1d40cd1f7c2dd4cc86500e51ac91111/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 09:58:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a4d5d2869b1c10c0be652d310010fcd8f1d40cd1f7c2dd4cc86500e51ac91111/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 09:58:04 compute-0 sudo[231499]: pam_unix(sudo:session): session closed for user root
Dec 06 09:58:04 compute-0 podman[231507]: 2025-12-06 09:58:04.034028979 +0000 UTC m=+0.186393551 container init 92305db47f02ade62e11c6ecd86a7b8152de33c5d80f9a7a0297add4c3f6cd69 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_kare, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Dec 06 09:58:04 compute-0 podman[231507]: 2025-12-06 09:58:04.04296953 +0000 UTC m=+0.195334102 container start 92305db47f02ade62e11c6ecd86a7b8152de33c5d80f9a7a0297add4c3f6cd69 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_kare, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 09:58:04 compute-0 podman[231507]: 2025-12-06 09:58:04.046657329 +0000 UTC m=+0.199021911 container attach 92305db47f02ade62e11c6ecd86a7b8152de33c5d80f9a7a0297add4c3f6cd69 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_kare, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Dec 06 09:58:04 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v497: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 938 B/s wr, 3 op/s
Dec 06 09:58:04 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:58:04 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 09:58:04 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:58:04.292 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 09:58:04 compute-0 sudo[231612]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ekrvlkwnmkwujuvxqkgjlojdyutnhmtc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765015083.4120169-1085-241053986780205/AnsiballZ_file.py'
Dec 06 09:58:04 compute-0 sudo[231612]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:58:04 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:58:04 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 09:58:04 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:58:04.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 09:58:04 compute-0 python3.9[231616]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:58:04 compute-0 sudo[231612]: pam_unix(sudo:session): session closed for user root
Dec 06 09:58:04 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:58:04 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f475400a2b0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:58:04 compute-0 lvm[231745]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 06 09:58:04 compute-0 lvm[231745]: VG ceph_vg0 finished
Dec 06 09:58:04 compute-0 serene_kare[231523]: {}
Dec 06 09:58:04 compute-0 lvm[231761]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 06 09:58:04 compute-0 lvm[231761]: VG ceph_vg0 finished
Dec 06 09:58:04 compute-0 systemd[1]: libpod-92305db47f02ade62e11c6ecd86a7b8152de33c5d80f9a7a0297add4c3f6cd69.scope: Deactivated successfully.
Dec 06 09:58:04 compute-0 systemd[1]: libpod-92305db47f02ade62e11c6ecd86a7b8152de33c5d80f9a7a0297add4c3f6cd69.scope: Consumed 1.402s CPU time.
Dec 06 09:58:04 compute-0 podman[231507]: 2025-12-06 09:58:04.932717509 +0000 UTC m=+1.085082081 container died 92305db47f02ade62e11c6ecd86a7b8152de33c5d80f9a7a0297add4c3f6cd69 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_kare, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 09:58:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-a4d5d2869b1c10c0be652d310010fcd8f1d40cd1f7c2dd4cc86500e51ac91111-merged.mount: Deactivated successfully.
Dec 06 09:58:04 compute-0 podman[231507]: 2025-12-06 09:58:04.994844386 +0000 UTC m=+1.147208958 container remove 92305db47f02ade62e11c6ecd86a7b8152de33c5d80f9a7a0297add4c3f6cd69 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_kare, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 09:58:05 compute-0 systemd[1]: libpod-conmon-92305db47f02ade62e11c6ecd86a7b8152de33c5d80f9a7a0297add4c3f6cd69.scope: Deactivated successfully.
Dec 06 09:58:05 compute-0 sudo[231217]: pam_unix(sudo:session): session closed for user root
Dec 06 09:58:05 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 06 09:58:05 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:58:05 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 06 09:58:05 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:58:05 compute-0 sudo[231846]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kmivdsdevqxcywqzahbzchfffdgenjdi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765015084.7559743-1121-72924066625080/AnsiballZ_systemd.py'
Dec 06 09:58:05 compute-0 sudo[231846]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:58:05 compute-0 sudo[231844]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 09:58:05 compute-0 sudo[231844]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:58:05 compute-0 sudo[231844]: pam_unix(sudo:session): session closed for user root
Dec 06 09:58:05 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 09:58:05 compute-0 ceph-mon[74327]: pgmap v497: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 938 B/s wr, 3 op/s
Dec 06 09:58:05 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:58:05 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:58:05 compute-0 python3.9[231860]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 06 09:58:05 compute-0 systemd[1]: Reloading.
Dec 06 09:58:05 compute-0 systemd-rc-local-generator[231896]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 09:58:05 compute-0 systemd-sysv-generator[231901]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 09:58:05 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:58:05 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f472c003690 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:58:05 compute-0 sudo[231846]: pam_unix(sudo:session): session closed for user root
Dec 06 09:58:05 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:58:05 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4730004580 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:58:06 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v498: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Dec 06 09:58:06 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:58:06 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 09:58:06 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:58:06.294 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 09:58:06 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:58:06 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:58:06 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:58:06.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:58:06 compute-0 sudo[232060]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ksincsmswtvsrziqktfjqtahnckrxegg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765015086.186746-1145-102956173340001/AnsiballZ_stat.py'
Dec 06 09:58:06 compute-0 sudo[232060]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:58:06 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:58:06 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4720003c10 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:58:06 compute-0 python3.9[232062]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:58:06 compute-0 sudo[232060]: pam_unix(sudo:session): session closed for user root
Dec 06 09:58:07 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:58:07.079Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 06 09:58:07 compute-0 sudo[232139]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hpxszmyptjisymxjawarnyfqemageoei ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765015086.186746-1145-102956173340001/AnsiballZ_file.py'
Dec 06 09:58:07 compute-0 sudo[232139]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:58:07 compute-0 python3.9[232141]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:58:07 compute-0 ceph-mon[74327]: pgmap v498: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Dec 06 09:58:07 compute-0 sudo[232139]: pam_unix(sudo:session): session closed for user root
Dec 06 09:58:07 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:58:07 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f475400a2b0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:58:07 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:58:07 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f472c003690 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:58:08 compute-0 sudo[232292]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rbuahrfdsuwyaxgygvfgkqzgvujaqqkg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765015087.796488-1181-88351844438174/AnsiballZ_stat.py'
Dec 06 09:58:08 compute-0 sudo[232292]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:58:08 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v499: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Dec 06 09:58:08 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:58:08 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 09:58:08 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:58:08.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 09:58:08 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:58:08 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 09:58:08 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:58:08.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 09:58:08 compute-0 python3.9[232294]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:58:08 compute-0 sudo[232292]: pam_unix(sudo:session): session closed for user root
Dec 06 09:58:08 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:58:08 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4730004580 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:58:08 compute-0 sudo[232370]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pbwuatyoauxdrwkbbysendsbmametmma ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765015087.796488-1181-88351844438174/AnsiballZ_file.py'
Dec 06 09:58:08 compute-0 sudo[232370]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:58:08 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 06 09:58:08 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 09:58:08 compute-0 python3.9[232372]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:58:09 compute-0 sudo[232370]: pam_unix(sudo:session): session closed for user root
Dec 06 09:58:09 compute-0 sudo[232425]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 09:58:09 compute-0 sudo[232425]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:58:09 compute-0 sudo[232425]: pam_unix(sudo:session): session closed for user root
Dec 06 09:58:09 compute-0 sudo[232549]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ukbjxurcqeailjfcgootskaaavtnllsa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765015089.219847-1217-86775372597959/AnsiballZ_systemd.py'
Dec 06 09:58:09 compute-0 sudo[232549]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:58:09 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:58:09 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4720003c10 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:58:09 compute-0 ceph-mon[74327]: pgmap v499: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Dec 06 09:58:09 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 09:58:09 compute-0 python3.9[232551]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 06 09:58:09 compute-0 systemd[1]: Reloading.
Dec 06 09:58:09 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:58:09 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4730004580 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:58:10 compute-0 systemd-rc-local-generator[232580]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 09:58:10 compute-0 systemd-sysv-generator[232584]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 09:58:10 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 09:58:10 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v500: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:58:10 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:58:10 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:58:10 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:58:10.299 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:58:10 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:58:10 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:58:10 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:58:10.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:58:10 compute-0 systemd[1]: Starting Create netns directory...
Dec 06 09:58:10 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Dec 06 09:58:10 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Dec 06 09:58:10 compute-0 systemd[1]: Finished Create netns directory.
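The reload/start/"Deactivated successfully" sequence is the normal life cycle of a oneshot unit without RemainAfterExit: it runs, succeeds ("Finished Create netns directory"), and immediately leaves the active state. The ansible systemd call at 09:58:09 (daemon_reload=True, enabled=True, state=started) expands to roughly the following systemctl sequence; a sketch of the equivalent commands, not the module's actual implementation.

    # Approximate systemctl equivalent of the ansible-ansible.builtin.systemd
    # invocation above for the netns-placeholder unit.
    import subprocess

    UNIT = "netns-placeholder.service"
    for cmd in (
        ["systemctl", "daemon-reload"],   # daemon_reload=True
        ["systemctl", "enable", UNIT],    # enabled=True
        ["systemctl", "start", UNIT],     # state=started
    ):
        subprocess.run(cmd, check=True)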
Dec 06 09:58:10 compute-0 sudo[232549]: pam_unix(sudo:session): session closed for user root
Dec 06 09:58:10 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:58:10 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f475400a2b0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:58:10 compute-0 ceph-mon[74327]: pgmap v500: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:58:10 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:09:58:10] "GET /metrics HTTP/1.1" 200 48257 "" "Prometheus/2.51.0"
Dec 06 09:58:10 compute-0 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:09:58:10] "GET /metrics HTTP/1.1" 200 48257 "" "Prometheus/2.51.0"
Dec 06 09:58:11 compute-0 sudo[232742]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wtjhtrwdfccuynanrunkwgaoaubcymln ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765015090.9316525-1247-159460709028153/AnsiballZ_file.py'
Dec 06 09:58:11 compute-0 sudo[232742]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:58:11 compute-0 python3.9[232744]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 06 09:58:11 compute-0 sudo[232742]: pam_unix(sudo:session): session closed for user root
Dec 06 09:58:11 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:58:11 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f472c003690 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:58:11 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:58:11 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4720003c10 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:58:12 compute-0 sudo[232895]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-srahoamdtfwxlvseseuueazmhsrfxxsk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765015091.7682977-1271-159879340235732/AnsiballZ_stat.py'
Dec 06 09:58:12 compute-0 sudo[232895]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:58:12 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v501: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:58:12 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:58:12 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 09:58:12 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:58:12.301 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 09:58:12 compute-0 python3.9[232897]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/multipathd/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:58:12 compute-0 sudo[232895]: pam_unix(sudo:session): session closed for user root
Dec 06 09:58:12 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:58:12 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:58:12 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:58:12.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:58:12 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:58:12 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4730004580 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:58:12 compute-0 sudo[233018]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gppozzfgyfoahydsyalskechrvcejmvc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765015091.7682977-1271-159879340235732/AnsiballZ_copy.py'
Dec 06 09:58:12 compute-0 sudo[233018]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:58:12 compute-0 python3.9[233020]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/multipathd/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765015091.7682977-1271-159879340235732/.source _original_basename=healthcheck follow=False checksum=af9d0c1c8f3cb0e30ce9609be9d5b01924d0d23f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec 06 09:58:12 compute-0 sudo[233018]: pam_unix(sudo:session): session closed for user root
Dec 06 09:58:13 compute-0 ceph-mon[74327]: pgmap v501: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:58:13 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:58:13 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f475400a2b0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:58:13 compute-0 sudo[233174]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ovodhyruxredpjjesyipvynorosoomed ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765015093.5579634-1322-240427478623609/AnsiballZ_file.py'
Dec 06 09:58:13 compute-0 sudo[233174]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:58:13 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:58:13 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f475400a2b0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:58:14 compute-0 python3.9[233176]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 06 09:58:14 compute-0 sudo[233174]: pam_unix(sudo:session): session closed for user root
Dec 06 09:58:14 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v502: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:58:14 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:58:14 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 09:58:14 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:58:14.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 09:58:14 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:58:14 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:58:14 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:58:14.339 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:58:14 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:58:14 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4724002050 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:58:14 compute-0 sudo[233326]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qlefwrcrvzqhbkiqopdahzhbimmtihbv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765015094.3925292-1346-19356272462720/AnsiballZ_stat.py'
Dec 06 09:58:14 compute-0 sudo[233326]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:58:14 compute-0 python3.9[233328]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/multipathd.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:58:14 compute-0 sudo[233326]: pam_unix(sudo:session): session closed for user root
Dec 06 09:58:15 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 09:58:15 compute-0 ceph-mon[74327]: pgmap v502: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:58:15 compute-0 sudo[233450]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-whciklyaozagmxaamzemkhfjmgagfedk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765015094.3925292-1346-19356272462720/AnsiballZ_copy.py'
Dec 06 09:58:15 compute-0 sudo[233450]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:58:15 compute-0 python3.9[233452]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/multipathd.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1765015094.3925292-1346-19356272462720/.source.json _original_basename=.wn7hfpfl follow=False checksum=3f7959ee8ac9757398adcc451c3b416c957d7c14 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:58:15 compute-0 sudo[233450]: pam_unix(sudo:session): session closed for user root
Dec 06 09:58:15 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:58:15 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4730004580 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:58:15 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:58:15 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f472c003690 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:58:16 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v503: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:58:16 compute-0 sudo[233603]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eevubneedzqxfmsnptksppubdoummcrf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765015095.8953326-1391-199173890310150/AnsiballZ_file.py'
Dec 06 09:58:16 compute-0 sudo[233603]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:58:16 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:58:16 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:58:16 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:58:16.305 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:58:16 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:58:16 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:58:16 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:58:16.341 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:58:16 compute-0 python3.9[233605]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/multipathd state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:58:16 compute-0 sudo[233603]: pam_unix(sudo:session): session closed for user root
Dec 06 09:58:16 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:58:16 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f475400a2b0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:58:17 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:58:17.080Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 06 09:58:17 compute-0 sudo[233756]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yzpmaejpbcjrmuljiiqckiwwozkwmcsc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765015096.734404-1415-95988624300868/AnsiballZ_stat.py'
Dec 06 09:58:17 compute-0 sudo[233756]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:58:17 compute-0 ceph-mon[74327]: pgmap v503: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:58:17 compute-0 sudo[233756]: pam_unix(sudo:session): session closed for user root
Dec 06 09:58:17 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:58:17 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4724002690 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:58:17 compute-0 sudo[233880]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yqyyijmrxdictzpqpzsuefhqrnqkozzx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765015096.734404-1415-95988624300868/AnsiballZ_copy.py'
Dec 06 09:58:17 compute-0 sudo[233880]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:58:17 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:58:17 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4730004580 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:58:18 compute-0 sudo[233880]: pam_unix(sudo:session): session closed for user root
Dec 06 09:58:18 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v504: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 09:58:18 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:58:18 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 09:58:18 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:58:18.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 09:58:18 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:58:18 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:58:18 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:58:18.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:58:18 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:58:18 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f472c003690 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:58:19 compute-0 sudo[234032]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wtjjshatqqdtoneenrmfbvbqdvhcbtgg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765015098.5365512-1466-175240350408924/AnsiballZ_container_config_data.py'
Dec 06 09:58:19 compute-0 sudo[234032]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:58:19 compute-0 python3.9[234035]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/multipathd config_pattern=*.json debug=False
Dec 06 09:58:19 compute-0 sudo[234032]: pam_unix(sudo:session): session closed for user root
Dec 06 09:58:19 compute-0 ceph-mon[74327]: pgmap v504: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 09:58:19 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:58:19 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f475400a870 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:58:19 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:58:19 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4724002690 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:58:19 compute-0 sudo[234186]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-heuwmkfedflfdwnranyuunmnuuotqfcl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765015099.553593-1493-229754365248785/AnsiballZ_container_config_hash.py'
Dec 06 09:58:20 compute-0 sudo[234186]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:58:20 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 09:58:20 compute-0 python3.9[234188]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
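Taken together, container_config_data gathers the *.json startup configs under the given config_path and container_config_hash derives per-container hash values from them (the result later surfaces as EDPM_CONFIG_HASH in container environments, as in the ovn_metadata_agent event at 09:58:28 below). A sketch of that collect-then-hash pattern under stated assumptions: the real modules live in edpm-ansible and their exact hash algorithm is not visible in this journal, so sha256 here is illustrative.

    # Collect *.json configs and derive a single change-detection digest.
    import glob
    import hashlib
    import json

    config_dir = "/var/lib/edpm-config/container-startup-config/multipathd"
    configs = {}
    for path in sorted(glob.glob(f"{config_dir}/*.json")):
        with open(path) as fh:
            configs[path] = json.load(fh)

    digest = hashlib.sha256(
        json.dumps(configs, sort_keys=True).encode()
    ).hexdigest()
    print(digest)  # a changed digest signals the container needs a restart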
Dec 06 09:58:20 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v505: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:58:20 compute-0 sudo[234186]: pam_unix(sudo:session): session closed for user root
Dec 06 09:58:20 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:58:20 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 09:58:20 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:58:20.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 09:58:20 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:58:20 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:58:20 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:58:20.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:58:20 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:58:20 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4730004580 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:58:20 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:09:58:20] "GET /metrics HTTP/1.1" 200 48257 "" "Prometheus/2.51.0"
Dec 06 09:58:20 compute-0 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:09:58:20] "GET /metrics HTTP/1.1" 200 48257 "" "Prometheus/2.51.0"
Dec 06 09:58:21 compute-0 sudo[234339]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kwggkibbcagyvxgpoqscubpvcgifcsmd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765015100.5659418-1520-211109738864323/AnsiballZ_podman_container_info.py'
Dec 06 09:58:21 compute-0 sudo[234339]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:58:21 compute-0 python3.9[234341]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Dec 06 09:58:21 compute-0 ceph-mon[74327]: pgmap v505: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:58:21 compute-0 sudo[234339]: pam_unix(sudo:session): session closed for user root
Dec 06 09:58:21 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:58:21 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f472c003690 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:58:21 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:58:21 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f475400a870 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:58:22 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v506: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:58:22 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:58:22 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:58:22 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:58:22.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:58:22 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:58:22 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:58:22 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:58:22.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:58:22 compute-0 podman[234394]: 2025-12-06 09:58:22.466435407 +0000 UTC m=+0.088410025 container health_status ed08b04f63d3695f0aa2f04fcc706897af4aa24f286fa3cd10a4323ee2c868ab (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
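This podman event is a periodic health check: the configured test command /openstack/healthcheck (bind-mounted from /var/lib/openstack/healthchecks/ovn_controller) exited zero, so health_status=healthy and the failing streak stays at 0. The same check can be run on demand; a minimal sketch using the standard podman CLI, where exit code 0 means healthy.

    # Trigger the container's own health check; exit code 0 means healthy.
    import subprocess

    rc = subprocess.run(["podman", "healthcheck", "run", "ovn_controller"]).returncode
    print("healthy" if rc == 0 else f"unhealthy (rc={rc})")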
Dec 06 09:58:22 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:58:22 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f47240014e0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:58:23 compute-0 sudo[234546]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xvgcmbxdrcomwjwmnwhjjrlgysizchwt ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1765015102.6229393-1559-58651225921940/AnsiballZ_edpm_container_manage.py'
Dec 06 09:58:23 compute-0 sudo[234546]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:58:23 compute-0 ceph-mon[74327]: pgmap v506: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:58:23 compute-0 python3[234548]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/multipathd config_id=multipathd config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Dec 06 09:58:23 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:58:23 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4730004580 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:58:23 compute-0 ceph-mgr[74618]: [balancer INFO root] Optimize plan auto_2025-12-06_09:58:23
Dec 06 09:58:23 compute-0 ceph-mgr[74618]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 06 09:58:23 compute-0 ceph-mgr[74618]: [balancer INFO root] do_upmap
Dec 06 09:58:23 compute-0 ceph-mgr[74618]: [balancer INFO root] pools ['volumes', 'images', 'backups', '.nfs', '.rgw.root', 'cephfs.cephfs.data', 'default.rgw.meta', '.mgr', 'default.rgw.control', 'default.rgw.log', 'cephfs.cephfs.meta', 'vms']
Dec 06 09:58:23 compute-0 ceph-mgr[74618]: [balancer INFO root] prepared 0/10 upmap changes
Dec 06 09:58:23 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 06 09:58:23 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 09:58:23 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:58:23 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f472c003690 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:58:23 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 09:58:23 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 09:58:24 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 09:58:24 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 09:58:24 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 09:58:24 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 09:58:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] _maybe_adjust
Dec 06 09:58:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 09:58:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 06 09:58:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 09:58:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 09:58:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 09:58:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 09:58:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 09:58:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 09:58:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 09:58:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 09:58:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 09:58:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Dec 06 09:58:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 09:58:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 09:58:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 09:58:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec 06 09:58:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 09:58:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 06 09:58:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 09:58:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec 06 09:58:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 09:58:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 09:58:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 09:58:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
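Each pg_autoscaler pair above evaluates one pool: the pg target is the pool's used-space ratio times its bias times the root-wide PG target, then quantized to a power of two and left unchanged when it already matches the current count. The printed numbers are self-consistent with a root-wide target of 300 PGs (for example 3 OSDs at the default mon_target_pg_per_osd=100; neither value appears in these lines, so 300 is an inferred assumption). A quick arithmetic check against three of the logged pools:

    # Verify the autoscaler arithmetic printed above against an assumed
    # root-wide target of 300 PGs (3 OSDs x mon_target_pg_per_osd=100).
    ROOT_TARGET_PGS = 300

    pools = [
        (".mgr",               7.185749983720779e-06, 1.0, 0.0021557249951162337),
        ("cephfs.cephfs.meta", 5.087256625643029e-07, 4.0, 0.0006104707950771635),
        ("default.rgw.meta",   1.2718141564107572e-07, 4.0, 0.00015261769876929088),
    ]
    for name, ratio, bias, logged in pools:
        print(name, ratio * bias * ROOT_TARGET_PGS, "logged:", logged)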
Dec 06 09:58:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 06 09:58:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 09:58:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 09:58:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 09:58:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 09:58:24 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v507: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:58:24 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:58:24 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 09:58:24 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:58:24.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 09:58:24 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:58:24 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 09:58:24 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:58:24.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 09:58:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 06 09:58:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 09:58:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 09:58:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 09:58:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 09:58:24 compute-0 systemd[1]: virtnodedevd.service: Deactivated successfully.
Dec 06 09:58:24 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 09:58:24 compute-0 podman[234562]: 2025-12-06 09:58:24.578895435 +0000 UTC m=+1.099786814 image pull 9af6aa52ee187025bc25565b66d3eefb486acac26f9281e33f4cce76a40d21f7 quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842
Dec 06 09:58:24 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:58:24 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f472c003690 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:58:24 compute-0 podman[234619]: 2025-12-06 09:58:24.686382137 +0000 UTC m=+0.022200114 image pull 9af6aa52ee187025bc25565b66d3eefb486acac26f9281e33f4cce76a40d21f7 quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842
Dec 06 09:58:25 compute-0 podman[234619]: 2025-12-06 09:58:25.137251217 +0000 UTC m=+0.473069214 container create a9cf33c1bf7891b3fdd9db4717ad5f3587fe9e5acf9171920c6d6334b70a502a (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, org.label-schema.build-date=20251125, managed_by=edpm_ansible, config_id=multipathd, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, container_name=multipathd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec 06 09:58:25 compute-0 python3[234548]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name multipathd --conmon-pidfile /run/multipathd.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck --label config_id=multipathd --label container_name=multipathd --label managed_by=edpm_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro --volume /dev:/dev --volume /run/udev:/run/udev --volume /sys:/sys --volume /lib/modules:/lib/modules:ro --volume /etc/iscsi:/etc/iscsi:ro --volume /var/lib/iscsi:/var/lib/iscsi --volume /etc/multipath:/etc/multipath:z --volume /etc/multipath.conf:/etc/multipath.conf:ro --volume /var/lib/openstack/healthchecks/multipathd:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842
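The PODMAN-CONTAINER-DEBUG line records the exact podman create command the module ran; note that the whole JSON config is round-tripped into the config_data label, so a later run can diff desired against actual state without extra files on disk. Reading a label back with standard podman Go-template syntax, assuming the container created above still exists:

    # Read the config_id label back off the freshly created container.
    import subprocess

    out = subprocess.run(
        ["podman", "inspect", "--format",
         '{{ index .Config.Labels "config_id" }}', "multipathd"],
        capture_output=True, text=True, check=True,
    )
    print(out.stdout.strip())  # -> multipathd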
Dec 06 09:58:25 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 09:58:25 compute-0 sudo[234546]: pam_unix(sudo:session): session closed for user root
Dec 06 09:58:25 compute-0 ceph-mon[74327]: pgmap v507: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:58:25 compute-0 sudo[234810]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yvqawqzlnlazkcjokgcjgvaupvorxwew ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765015105.4826748-1583-115237349297634/AnsiballZ_stat.py'
Dec 06 09:58:25 compute-0 sudo[234810]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:58:25 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:58:25 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f475400a870 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:58:25 compute-0 systemd[1]: virtproxyd.service: Deactivated successfully.
Dec 06 09:58:25 compute-0 python3.9[234812]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 06 09:58:25 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:58:25 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4730004580 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:58:25 compute-0 sudo[234810]: pam_unix(sudo:session): session closed for user root
Dec 06 09:58:26 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v508: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:58:26 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:58:26 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:58:26 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:58:26.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:58:26 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:58:26 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:58:26 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:58:26.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:58:26 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:58:26 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4730004580 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:58:26 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-crash-compute-0[79850]: ERROR:ceph-crash:Error scraping /var/lib/ceph/crash: [Errno 13] Permission denied: '/var/lib/ceph/crash'
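The ceph-crash agent runs unprivileged and is denied read access to /var/lib/ceph/crash, so crash reports on this host cannot be scraped until the directory's ownership or mode is corrected. A read-only diagnostic, with one labeled assumption: uid 167 is the conventional "ceph" user in containerized Ceph and may differ on this host.

    # Inspect the directory the crash agent is denied access to.
    import os
    import stat

    p = "/var/lib/ceph/crash"
    st = os.stat(p)
    print(f"{p}: uid={st.st_uid} gid={st.st_gid} mode={stat.filemode(st.st_mode)}")
    if st.st_uid != 167:  # 167 = conventional ceph uid, an assumption
        print("ownership does not match the expected ceph uid")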
Dec 06 09:58:26 compute-0 sudo[234965]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rxrqhpyjexfehonpzaywiwlngajnrnfy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765015106.4194062-1610-243679966611697/AnsiballZ_file.py'
Dec 06 09:58:26 compute-0 sudo[234965]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:58:26 compute-0 python3.9[234967]: ansible-file Invoked with path=/etc/systemd/system/edpm_multipathd.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:58:26 compute-0 sudo[234965]: pam_unix(sudo:session): session closed for user root
Dec 06 09:58:27 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:58:27.081Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec 06 09:58:27 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:58:27.081Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec 06 09:58:27 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:58:27.081Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
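Note the failure mode has shifted: the 09:58:07 and 09:58:17 notify errors were "context deadline exceeded" (the HTTP POST ran out its overall deadline), whereas here the TCP dial itself times out against 192.168.122.101 and .102 on port 8443. That suggests the receivers are down or filtered rather than merely slow; the probe sketch after the 09:58:07 entry would report this case as a TCP connect failure.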
Dec 06 09:58:27 compute-0 sudo[235042]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vmfreovwiohhjibcljwejoepgrqzbzng ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765015106.4194062-1610-243679966611697/AnsiballZ_stat.py'
Dec 06 09:58:27 compute-0 sudo[235042]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:58:27 compute-0 python3.9[235044]: ansible-stat Invoked with path=/etc/systemd/system/edpm_multipathd_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 06 09:58:27 compute-0 sudo[235042]: pam_unix(sudo:session): session closed for user root
Dec 06 09:58:27 compute-0 ceph-mon[74327]: pgmap v508: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:58:27 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:58:27 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f47240014e0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:58:27 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:58:27 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f475400a870 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:58:28 compute-0 sudo[235194]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dbqkpyvglnbgmtkttxeomydlnaqdpper ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765015107.5158706-1610-162484489108603/AnsiballZ_copy.py'
Dec 06 09:58:28 compute-0 sudo[235194]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:58:28 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v509: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 09:58:28 compute-0 python3.9[235196]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765015107.5158706-1610-162484489108603/source dest=/etc/systemd/system/edpm_multipathd.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:58:28 compute-0 sudo[235194]: pam_unix(sudo:session): session closed for user root
Dec 06 09:58:28 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:58:28 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:58:28 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:58:28.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:58:28 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:58:28 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:58:28 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:58:28.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:58:28 compute-0 sudo[235270]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zuudiazsnwuzhxbmmvhgqxvuaolofeml ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765015107.5158706-1610-162484489108603/AnsiballZ_systemd.py'
Dec 06 09:58:28 compute-0 sudo[235270]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:58:28 compute-0 podman[235272]: 2025-12-06 09:58:28.610288418 +0000 UTC m=+0.071160497 container health_status ec1e66d2175087ad7a28a9a6b5a784e30128df3f5e268959d009e364cd5120f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 06 09:58:28 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:58:28 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4730004580 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:58:28 compute-0 python3.9[235273]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 06 09:58:28 compute-0 systemd[1]: Reloading.
Dec 06 09:58:28 compute-0 systemd-rc-local-generator[235319]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 09:58:28 compute-0 systemd-sysv-generator[235322]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 09:58:29 compute-0 sudo[235270]: pam_unix(sudo:session): session closed for user root
Dec 06 09:58:29 compute-0 sudo[235336]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 09:58:29 compute-0 sudo[235336]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:58:29 compute-0 sudo[235336]: pam_unix(sudo:session): session closed for user root
Dec 06 09:58:29 compute-0 sudo[235427]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nyshgmbqpfqdvhlncngddcndyjqafqap ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765015107.5158706-1610-162484489108603/AnsiballZ_systemd.py'
Dec 06 09:58:29 compute-0 sudo[235427]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:58:29 compute-0 ceph-mon[74327]: pgmap v509: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 09:58:29 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:58:29 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4730004580 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:58:29 compute-0 python3.9[235429]: ansible-systemd Invoked with state=restarted name=edpm_multipathd.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 06 09:58:29 compute-0 systemd[1]: Reloading.
Dec 06 09:58:29 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:58:29 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4724003470 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:58:30 compute-0 systemd-rc-local-generator[235454]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 09:58:30 compute-0 systemd-sysv-generator[235460]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 09:58:30 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 09:58:30 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v510: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:58:30 compute-0 systemd[1]: Starting multipathd container...
Dec 06 09:58:30 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:58:30 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:58:30 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:58:30.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:58:30 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:58:30 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:58:30 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:58:30.356 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:58:30 compute-0 systemd[1]: Started libcrun container.
Dec 06 09:58:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cdc58225c9f9d6d08a80954113af3d99ae5f8dc2b767f22a0f0f89726a1ec6a4/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Dec 06 09:58:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cdc58225c9f9d6d08a80954113af3d99ae5f8dc2b767f22a0f0f89726a1ec6a4/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Dec 06 09:58:30 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run a9cf33c1bf7891b3fdd9db4717ad5f3587fe9e5acf9171920c6d6334b70a502a.
Dec 06 09:58:30 compute-0 podman[235468]: 2025-12-06 09:58:30.47321045 +0000 UTC m=+0.136394820 container init a9cf33c1bf7891b3fdd9db4717ad5f3587fe9e5acf9171920c6d6334b70a502a (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 06 09:58:30 compute-0 multipathd[235484]: + sudo -E kolla_set_configs
Dec 06 09:58:30 compute-0 podman[235468]: 2025-12-06 09:58:30.49932835 +0000 UTC m=+0.162512700 container start a9cf33c1bf7891b3fdd9db4717ad5f3587fe9e5acf9171920c6d6334b70a502a (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, container_name=multipathd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 09:58:30 compute-0 podman[235468]: multipathd
Dec 06 09:58:30 compute-0 sudo[235490]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Dec 06 09:58:30 compute-0 sudo[235490]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Dec 06 09:58:30 compute-0 sudo[235490]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Dec 06 09:58:30 compute-0 systemd[1]: Started multipathd container.
Dec 06 09:58:30 compute-0 sudo[235427]: pam_unix(sudo:session): session closed for user root
Dec 06 09:58:30 compute-0 multipathd[235484]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Dec 06 09:58:30 compute-0 sudo[235490]: pam_unix(sudo:session): session closed for user root
Dec 06 09:58:30 compute-0 multipathd[235484]: INFO:__main__:Validating config file
Dec 06 09:58:30 compute-0 multipathd[235484]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Dec 06 09:58:30 compute-0 multipathd[235484]: INFO:__main__:Writing out command to execute
Dec 06 09:58:30 compute-0 multipathd[235484]: ++ cat /run_command
Dec 06 09:58:30 compute-0 multipathd[235484]: + CMD='/usr/sbin/multipathd -d'
Dec 06 09:58:30 compute-0 multipathd[235484]: + ARGS=
Dec 06 09:58:30 compute-0 multipathd[235484]: + sudo kolla_copy_cacerts
Dec 06 09:58:30 compute-0 sudo[235514]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_copy_cacerts
Dec 06 09:58:30 compute-0 podman[235491]: 2025-12-06 09:58:30.613912875 +0000 UTC m=+0.103290179 container health_status a9cf33c1bf7891b3fdd9db4717ad5f3587fe9e5acf9171920c6d6334b70a502a (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, health_status=starting, health_failing_streak=1, health_log=, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 06 09:58:30 compute-0 sudo[235514]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Dec 06 09:58:30 compute-0 sudo[235514]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Dec 06 09:58:30 compute-0 sudo[235514]: pam_unix(sudo:session): session closed for user root
Dec 06 09:58:30 compute-0 systemd[1]: a9cf33c1bf7891b3fdd9db4717ad5f3587fe9e5acf9171920c6d6334b70a502a-5497f377768e607b.service: Main process exited, code=exited, status=1/FAILURE
Dec 06 09:58:30 compute-0 systemd[1]: a9cf33c1bf7891b3fdd9db4717ad5f3587fe9e5acf9171920c6d6334b70a502a-5497f377768e607b.service: Failed with result 'exit-code'.
Dec 06 09:58:30 compute-0 multipathd[235484]: + [[ ! -n '' ]]
Dec 06 09:58:30 compute-0 multipathd[235484]: + . kolla_extend_start
Dec 06 09:58:30 compute-0 multipathd[235484]: Running command: '/usr/sbin/multipathd -d'
Dec 06 09:58:30 compute-0 multipathd[235484]: + echo 'Running command: '\''/usr/sbin/multipathd -d'\'''
Dec 06 09:58:30 compute-0 multipathd[235484]: + umask 0022
Dec 06 09:58:30 compute-0 multipathd[235484]: + exec /usr/sbin/multipathd -d
Dec 06 09:58:30 compute-0 multipathd[235484]: 3481.178802 | --------start up--------
Dec 06 09:58:30 compute-0 multipathd[235484]: 3481.178822 | read /etc/multipath.conf
Dec 06 09:58:30 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:58:30 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f475400a870 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:58:30 compute-0 multipathd[235484]: 3481.185908 | path checkers start up
Dec 06 09:58:30 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:09:58:30] "GET /metrics HTTP/1.1" 200 48267 "" "Prometheus/2.51.0"
Dec 06 09:58:30 compute-0 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:09:58:30] "GET /metrics HTTP/1.1" 200 48267 "" "Prometheus/2.51.0"
Dec 06 09:58:31 compute-0 python3.9[235674]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath/.multipath_restart_required follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 06 09:58:31 compute-0 ceph-mon[74327]: pgmap v510: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:58:31 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:58:31 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4730004580 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:58:31 compute-0 sudo[235827]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kqloredvqqbrzngvsmnpwtshhozrwzqu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765015111.6059175-1718-226539824601323/AnsiballZ_command.py'
Dec 06 09:58:31 compute-0 sudo[235827]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:58:31 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:58:31 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4730004580 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:58:32 compute-0 python3.9[235829]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps --filter volume=/etc/multipath.conf --format {{.Names}} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 09:58:32 compute-0 sudo[235827]: pam_unix(sudo:session): session closed for user root
Dec 06 09:58:32 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v511: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:58:32 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:58:32 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 09:58:32 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:58:32.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 09:58:32 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:58:32 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:58:32 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:58:32.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:58:32 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:58:32 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4724003470 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:58:32 compute-0 sudo[235992]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ayyindspcsqcmpoiihownjgisawqftzu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765015112.4650128-1742-63708041774184/AnsiballZ_systemd.py'
Dec 06 09:58:32 compute-0 sudo[235992]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:58:33 compute-0 python3.9[235994]: ansible-ansible.builtin.systemd Invoked with name=edpm_multipathd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 06 09:58:33 compute-0 systemd[1]: Stopping multipathd container...
Dec 06 09:58:33 compute-0 multipathd[235484]: 3483.699862 | exit (signal)
Dec 06 09:58:33 compute-0 multipathd[235484]: 3483.699910 | --------shut down-------
Dec 06 09:58:33 compute-0 systemd[1]: libpod-a9cf33c1bf7891b3fdd9db4717ad5f3587fe9e5acf9171920c6d6334b70a502a.scope: Deactivated successfully.
Dec 06 09:58:33 compute-0 podman[235999]: 2025-12-06 09:58:33.189046502 +0000 UTC m=+0.088409455 container died a9cf33c1bf7891b3fdd9db4717ad5f3587fe9e5acf9171920c6d6334b70a502a (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 06 09:58:33 compute-0 systemd[1]: a9cf33c1bf7891b3fdd9db4717ad5f3587fe9e5acf9171920c6d6334b70a502a-5497f377768e607b.timer: Deactivated successfully.
Dec 06 09:58:33 compute-0 systemd[1]: Stopped /usr/bin/podman healthcheck run a9cf33c1bf7891b3fdd9db4717ad5f3587fe9e5acf9171920c6d6334b70a502a.
Dec 06 09:58:33 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a9cf33c1bf7891b3fdd9db4717ad5f3587fe9e5acf9171920c6d6334b70a502a-userdata-shm.mount: Deactivated successfully.
Dec 06 09:58:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-cdc58225c9f9d6d08a80954113af3d99ae5f8dc2b767f22a0f0f89726a1ec6a4-merged.mount: Deactivated successfully.
Dec 06 09:58:33 compute-0 podman[235999]: 2025-12-06 09:58:33.462457716 +0000 UTC m=+0.361820639 container cleanup a9cf33c1bf7891b3fdd9db4717ad5f3587fe9e5acf9171920c6d6334b70a502a (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 06 09:58:33 compute-0 podman[235999]: multipathd
Dec 06 09:58:33 compute-0 podman[236029]: multipathd
Dec 06 09:58:33 compute-0 systemd[1]: edpm_multipathd.service: Deactivated successfully.
Dec 06 09:58:33 compute-0 systemd[1]: Stopped multipathd container.
Dec 06 09:58:33 compute-0 systemd[1]: Starting multipathd container...
Dec 06 09:58:33 compute-0 ceph-mon[74327]: pgmap v511: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:58:33 compute-0 systemd[1]: Started libcrun container.
Dec 06 09:58:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cdc58225c9f9d6d08a80954113af3d99ae5f8dc2b767f22a0f0f89726a1ec6a4/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Dec 06 09:58:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cdc58225c9f9d6d08a80954113af3d99ae5f8dc2b767f22a0f0f89726a1ec6a4/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Dec 06 09:58:33 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:58:33 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4724003470 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:58:33 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run a9cf33c1bf7891b3fdd9db4717ad5f3587fe9e5acf9171920c6d6334b70a502a.
Dec 06 09:58:33 compute-0 podman[236042]: 2025-12-06 09:58:33.840354971 +0000 UTC m=+0.276047037 container init a9cf33c1bf7891b3fdd9db4717ad5f3587fe9e5acf9171920c6d6334b70a502a (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec 06 09:58:33 compute-0 multipathd[236057]: + sudo -E kolla_set_configs
Dec 06 09:58:33 compute-0 sudo[236063]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Dec 06 09:58:33 compute-0 podman[236042]: 2025-12-06 09:58:33.874964382 +0000 UTC m=+0.310656418 container start a9cf33c1bf7891b3fdd9db4717ad5f3587fe9e5acf9171920c6d6334b70a502a (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Dec 06 09:58:33 compute-0 sudo[236063]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Dec 06 09:58:33 compute-0 sudo[236063]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Dec 06 09:58:33 compute-0 podman[236042]: multipathd
Dec 06 09:58:33 compute-0 systemd[1]: Started multipathd container.
Dec 06 09:58:33 compute-0 sudo[235992]: pam_unix(sudo:session): session closed for user root
Dec 06 09:58:33 compute-0 multipathd[236057]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Dec 06 09:58:33 compute-0 multipathd[236057]: INFO:__main__:Validating config file
Dec 06 09:58:33 compute-0 multipathd[236057]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Dec 06 09:58:33 compute-0 multipathd[236057]: INFO:__main__:Writing out command to execute
Dec 06 09:58:33 compute-0 sudo[236063]: pam_unix(sudo:session): session closed for user root
Dec 06 09:58:33 compute-0 multipathd[236057]: ++ cat /run_command
Dec 06 09:58:33 compute-0 multipathd[236057]: + CMD='/usr/sbin/multipathd -d'
Dec 06 09:58:33 compute-0 multipathd[236057]: + ARGS=
Dec 06 09:58:33 compute-0 multipathd[236057]: + sudo kolla_copy_cacerts
Dec 06 09:58:33 compute-0 sudo[236077]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_copy_cacerts
Dec 06 09:58:33 compute-0 sudo[236077]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Dec 06 09:58:33 compute-0 sudo[236077]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Dec 06 09:58:33 compute-0 sudo[236077]: pam_unix(sudo:session): session closed for user root
Dec 06 09:58:33 compute-0 multipathd[236057]: + [[ ! -n '' ]]
Dec 06 09:58:33 compute-0 multipathd[236057]: + . kolla_extend_start
Dec 06 09:58:33 compute-0 multipathd[236057]: Running command: '/usr/sbin/multipathd -d'
Dec 06 09:58:33 compute-0 multipathd[236057]: + echo 'Running command: '\''/usr/sbin/multipathd -d'\'''
Dec 06 09:58:33 compute-0 multipathd[236057]: + umask 0022
Dec 06 09:58:33 compute-0 multipathd[236057]: + exec /usr/sbin/multipathd -d
Dec 06 09:58:33 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:58:33 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4730004580 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:58:33 compute-0 podman[236064]: 2025-12-06 09:58:33.986606678 +0000 UTC m=+0.096155857 container health_status a9cf33c1bf7891b3fdd9db4717ad5f3587fe9e5acf9171920c6d6334b70a502a (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, health_status=starting, health_failing_streak=1, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=multipathd)
Dec 06 09:58:33 compute-0 multipathd[236057]: 3484.529911 | --------start up--------
Dec 06 09:58:33 compute-0 multipathd[236057]: 3484.529933 | read /etc/multipath.conf
Dec 06 09:58:33 compute-0 multipathd[236057]: 3484.537746 | path checkers start up
Dec 06 09:58:33 compute-0 systemd[1]: a9cf33c1bf7891b3fdd9db4717ad5f3587fe9e5acf9171920c6d6334b70a502a-1841828b9f18ad43.service: Main process exited, code=exited, status=1/FAILURE
Dec 06 09:58:33 compute-0 systemd[1]: a9cf33c1bf7891b3fdd9db4717ad5f3587fe9e5acf9171920c6d6334b70a502a-1841828b9f18ad43.service: Failed with result 'exit-code'.
Dec 06 09:58:34 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v512: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:58:34 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:58:34 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:58:34 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:58:34.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:58:34 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:58:34 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 09:58:34 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:58:34.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 09:58:34 compute-0 sudo[236247]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-opmwrmcvnkmqjnowsnsofszvjussctnc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765015114.28125-1766-47870909947823/AnsiballZ_file.py'
Dec 06 09:58:34 compute-0 sudo[236247]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:58:34 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:58:34 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f472c003fb0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:58:34 compute-0 python3.9[236249]: ansible-ansible.builtin.file Invoked with path=/etc/multipath/.multipath_restart_required state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:58:34 compute-0 sudo[236247]: pam_unix(sudo:session): session closed for user root
Dec 06 09:58:35 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 09:58:35 compute-0 sudo[236401]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mozwyjnggkioxsjwofsbavisklhwmypg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765015115.4786253-1802-168201554835627/AnsiballZ_file.py'
Dec 06 09:58:35 compute-0 sudo[236401]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:58:35 compute-0 ceph-mon[74327]: pgmap v512: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:58:35 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:58:35 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4724003470 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:58:35 compute-0 python3.9[236403]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Dec 06 09:58:35 compute-0 sudo[236401]: pam_unix(sudo:session): session closed for user root
Dec 06 09:58:35 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:58:35 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4724003470 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:58:36 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v513: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:58:36 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:58:36 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:58:36 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:58:36.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:58:36 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:58:36 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:58:36 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:58:36.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:58:36 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:58:36 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4730004580 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:58:36 compute-0 sudo[236553]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xewrzfoclrmvqndhuwotndkizlehpqpt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765015116.3512435-1826-226685553070872/AnsiballZ_modprobe.py'
Dec 06 09:58:36 compute-0 sudo[236553]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:58:36 compute-0 ceph-mon[74327]: pgmap v513: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:58:36 compute-0 systemd[1]: virtqemud.service: Deactivated successfully.
Dec 06 09:58:36 compute-0 python3.9[236555]: ansible-community.general.modprobe Invoked with name=nvme-fabrics state=present params= persistent=disabled
Dec 06 09:58:36 compute-0 kernel: Key type psk registered
Dec 06 09:58:36 compute-0 systemd[1]: virtsecretd.service: Deactivated successfully.
Dec 06 09:58:36 compute-0 sudo[236553]: pam_unix(sudo:session): session closed for user root
Dec 06 09:58:37 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:58:37.082Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 06 09:58:37 compute-0 sudo[236718]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jxbamdegeoclskzyjjvqhalrexejmdje ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765015117.1805418-1850-23942385384611/AnsiballZ_stat.py'
Dec 06 09:58:37 compute-0 sudo[236718]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:58:37 compute-0 python3.9[236720]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/nvme-fabrics.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:58:37 compute-0 sudo[236718]: pam_unix(sudo:session): session closed for user root
Dec 06 09:58:37 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:58:37 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f472c003fb0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:58:37 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:58:37 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f472c003fb0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:58:38 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v514: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 09:58:38 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:58:38 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 09:58:38 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:58:38.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 09:58:38 compute-0 sudo[236841]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kvyjlxqocziynprcdpgcnnwgjodtlptq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765015117.1805418-1850-23942385384611/AnsiballZ_copy.py'
Dec 06 09:58:38 compute-0 sudo[236841]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:58:38 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:58:38 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:58:38 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:58:38.367 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:58:38 compute-0 python3.9[236843]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/nvme-fabrics.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765015117.1805418-1850-23942385384611/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=783c778f0c68cc414f35486f234cbb1cf3f9bbff backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:58:38 compute-0 sudo[236841]: pam_unix(sudo:session): session closed for user root
Dec 06 09:58:38 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:58:38 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f475400a870 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:58:38 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 06 09:58:38 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 09:58:39 compute-0 ceph-mon[74327]: pgmap v514: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 09:58:39 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 09:58:39 compute-0 sudo[236994]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ekmpeigmgiuyhfbcerxvktspxocpdrob ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765015118.9295537-1898-65396024969018/AnsiballZ_lineinfile.py'
Dec 06 09:58:39 compute-0 sudo[236994]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:58:39 compute-0 python3.9[236996]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=nvme-fabrics  mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:58:39 compute-0 sudo[236994]: pam_unix(sudo:session): session closed for user root
Dec 06 09:58:39 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:58:39 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4730004580 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:58:39 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:58:39 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4730004580 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:58:40 compute-0 sudo[237147]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-svmzfdlbytgqpugniyfnpjzycbgesnwr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765015119.8485456-1922-142198144206585/AnsiballZ_systemd.py'
Dec 06 09:58:40 compute-0 sudo[237147]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:58:40 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 09:58:40 compute-0 ceph-mon[74327]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #39. Immutable memtables: 0.
Dec 06 09:58:40 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-09:58:40.195617) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 06 09:58:40 compute-0 ceph-mon[74327]: rocksdb: [db/flush_job.cc:856] [default] [JOB 17] Flushing memtable with next log file: 39
Dec 06 09:58:40 compute-0 ceph-mon[74327]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765015120196284, "job": 17, "event": "flush_started", "num_memtables": 1, "num_entries": 1210, "num_deletes": 256, "total_data_size": 2180583, "memory_usage": 2220464, "flush_reason": "Manual Compaction"}
Dec 06 09:58:40 compute-0 ceph-mon[74327]: rocksdb: [db/flush_job.cc:885] [default] [JOB 17] Level-0 flush table #40: started
Dec 06 09:58:40 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v515: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:58:40 compute-0 ceph-mon[74327]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765015120229339, "cf_name": "default", "job": 17, "event": "table_file_creation", "file_number": 40, "file_size": 2138927, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 17739, "largest_seqno": 18948, "table_properties": {"data_size": 2133279, "index_size": 3039, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1541, "raw_key_size": 11449, "raw_average_key_size": 18, "raw_value_size": 2121946, "raw_average_value_size": 3455, "num_data_blocks": 137, "num_entries": 614, "num_filter_entries": 614, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765015004, "oldest_key_time": 1765015004, "file_creation_time": 1765015120, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "423e8366-3852-4d2b-aa53-87abab31aff3", "db_session_id": "4WBX5WA2U4DRQ0QUUFCR", "orig_file_number": 40, "seqno_to_time_mapping": "N/A"}}
Dec 06 09:58:40 compute-0 ceph-mon[74327]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 17] Flush lasted 33161 microseconds, and 7105 cpu microseconds.
Dec 06 09:58:40 compute-0 ceph-mon[74327]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 09:58:40 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-09:58:40.229399) [db/flush_job.cc:967] [default] [JOB 17] Level-0 flush table #40: 2138927 bytes OK
Dec 06 09:58:40 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-09:58:40.229430) [db/memtable_list.cc:519] [default] Level-0 commit table #40 started
Dec 06 09:58:40 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-09:58:40.233020) [db/memtable_list.cc:722] [default] Level-0 commit table #40: memtable #1 done
Dec 06 09:58:40 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-09:58:40.233042) EVENT_LOG_v1 {"time_micros": 1765015120233036, "job": 17, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 06 09:58:40 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-09:58:40.233065) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 06 09:58:40 compute-0 ceph-mon[74327]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 17] Try to delete WAL files size 2175253, prev total WAL file size 2175253, number of live WAL files 2.
Dec 06 09:58:40 compute-0 ceph-mon[74327]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000036.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 09:58:40 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-09:58:40.233956) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0030' seq:72057594037927935, type:22 .. '6C6F676D00323533' seq:0, type:0; will stop at (end)
Dec 06 09:58:40 compute-0 ceph-mon[74327]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 18] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 06 09:58:40 compute-0 ceph-mon[74327]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 17 Base level 0, inputs: [40(2088KB)], [38(11MB)]
Dec 06 09:58:40 compute-0 ceph-mon[74327]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765015120234020, "job": 18, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [40], "files_L6": [38], "score": -1, "input_data_size": 14472882, "oldest_snapshot_seqno": -1}
Dec 06 09:58:40 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:58:40 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 09:58:40 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:58:40.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 09:58:40 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:58:40 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:58:40 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:58:40.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:58:40 compute-0 ceph-mon[74327]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 18] Generated table #41: 4955 keys, 13987153 bytes, temperature: kUnknown
Dec 06 09:58:40 compute-0 ceph-mon[74327]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765015120377901, "cf_name": "default", "job": 18, "event": "table_file_creation", "file_number": 41, "file_size": 13987153, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13952304, "index_size": 21363, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12421, "raw_key_size": 126079, "raw_average_key_size": 25, "raw_value_size": 13860665, "raw_average_value_size": 2797, "num_data_blocks": 876, "num_entries": 4955, "num_filter_entries": 4955, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765013861, "oldest_key_time": 0, "file_creation_time": 1765015120, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "423e8366-3852-4d2b-aa53-87abab31aff3", "db_session_id": "4WBX5WA2U4DRQ0QUUFCR", "orig_file_number": 41, "seqno_to_time_mapping": "N/A"}}
Dec 06 09:58:40 compute-0 ceph-mon[74327]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 09:58:40 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-09:58:40.378283) [db/compaction/compaction_job.cc:1663] [default] [JOB 18] Compacted 1@0 + 1@6 files to L6 => 13987153 bytes
Dec 06 09:58:40 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-09:58:40.405238) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 100.5 rd, 97.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.0, 11.8 +0.0 blob) out(13.3 +0.0 blob), read-write-amplify(13.3) write-amplify(6.5) OK, records in: 5481, records dropped: 526 output_compression: NoCompression
Dec 06 09:58:40 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-09:58:40.405292) EVENT_LOG_v1 {"time_micros": 1765015120405269, "job": 18, "event": "compaction_finished", "compaction_time_micros": 143981, "compaction_time_cpu_micros": 37844, "output_level": 6, "num_output_files": 1, "total_output_size": 13987153, "num_input_records": 5481, "num_output_records": 4955, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 06 09:58:40 compute-0 ceph-mon[74327]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000040.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 09:58:40 compute-0 ceph-mon[74327]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765015120405987, "job": 18, "event": "table_file_deletion", "file_number": 40}
Dec 06 09:58:40 compute-0 ceph-mon[74327]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000038.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 09:58:40 compute-0 ceph-mon[74327]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765015120409225, "job": 18, "event": "table_file_deletion", "file_number": 38}
Dec 06 09:58:40 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-09:58:40.233837) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 09:58:40 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-09:58:40.409333) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 09:58:40 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-09:58:40.409341) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 09:58:40 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-09:58:40.409343) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 09:58:40 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-09:58:40.409345) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 09:58:40 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-09:58:40.409347) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
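Editor's note: the compaction summary for JOB 18 above reports write-amplify(6.5) and read-write-amplify(13.3), and both follow directly from the byte counts printed on the same line. A minimal sketch of the arithmetic, using the rounded "MB in(2.0, 11.8) out(13.3)" figures from the log (the daemon computes from exact byte counts, so its results differ slightly from what this prints):

    # Reproduce RocksDB's amplification figures for JOB 18 from the
    # "MB in(2.0, 11.8 +0.0 blob) out(13.3 +0.0 blob)" summary above.
    in_l0_mb = 2.0    # input read from the start level (L0)
    in_l6_mb = 11.8   # input read from the output level (L6)
    out_mb = 13.3     # output written to L6

    # write-amplify: bytes written per byte ingested at the start level.
    write_amplify = out_mb / in_l0_mb
    # read-write-amplify: all bytes read and written per ingested byte.
    read_write_amplify = (in_l0_mb + in_l6_mb + out_mb) / in_l0_mb

    # Log reports 6.5 and 13.3; the small gap here is rounding in the
    # printed MB figures, not a different formula.
    print(f"write-amplify ~= {write_amplify:.2f}")
    print(f"read-write-amplify ~= {read_write_amplify:.2f}")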
Dec 06 09:58:40 compute-0 python3.9[237149]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 06 09:58:40 compute-0 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Dec 06 09:58:40 compute-0 systemd[1]: Stopped Load Kernel Modules.
Dec 06 09:58:40 compute-0 systemd[1]: Stopping Load Kernel Modules...
Dec 06 09:58:40 compute-0 systemd[1]: Starting Load Kernel Modules...
Dec 06 09:58:40 compute-0 systemd[1]: Finished Load Kernel Modules.
Dec 06 09:58:40 compute-0 sudo[237147]: pam_unix(sudo:session): session closed for user root
Dec 06 09:58:40 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:58:40 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4724003470 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:58:40 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:09:58:40] "GET /metrics HTTP/1.1" 200 48259 "" "Prometheus/2.51.0"
Dec 06 09:58:40 compute-0 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:09:58:40] "GET /metrics HTTP/1.1" 200 48259 "" "Prometheus/2.51.0"
Dec 06 09:58:41 compute-0 ceph-mon[74327]: pgmap v515: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
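Editor's note: the recurring "pgmap vNNN" lines are the cluster-health heartbeat of this journal (PG count, PG states, data/used/available capacity, client throughput). A small stdlib parsing sketch; the field layout is inferred from the lines in this log, not an official Ceph format guarantee:

    import re

    # Layout inferred from the pgmap lines in this journal.
    PGMAP_RE = re.compile(
        r"pgmap v(?P<version>\d+): (?P<pgs>\d+) pgs: (?P<states>[^;]+); "
        r"(?P<data>\S+ \S+) data, (?P<used>\S+ \S+) used, "
        r"(?P<avail>\S+ \S+) / (?P<total>\S+ \S+) avail"
    )

    line = ("pgmap v515: 337 pgs: 337 active+clean; 458 KiB data, "
            "153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s")
    m = PGMAP_RE.search(line)
    assert m is not None
    print(m.group("version"), m.group("pgs"), m.group("states"))
    # -> 515 337 337 active+clean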
Dec 06 09:58:41 compute-0 sudo[237304]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zsluhvbhtyschfrhkfelomahpgosqljk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765015120.9925637-1946-40907270911782/AnsiballZ_dnf.py'
Dec 06 09:58:41 compute-0 sudo[237304]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:58:41 compute-0 python3.9[237306]: ansible-ansible.legacy.dnf Invoked with name=['nvme-cli'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
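Editor's note: the ansible dnf task above ensures nvme-cli is present; state=present is idempotent, so nothing is reinstalled if the package already exists. A hedged sketch of the equivalent check-then-install flow on an RPM-based host like this one (the ansible module itself uses the dnf Python API, not these CLI calls):

    import subprocess

    def ensure_installed(pkg: str) -> None:
        # rpm -q exits non-zero when the package is absent.
        probe = subprocess.run(["rpm", "-q", pkg],
                               capture_output=True, text=True)
        if probe.returncode == 0:
            print(f"{pkg} already installed: {probe.stdout.strip()}")
            return
        # Mirrors the effect of the ansible dnf task with state=present.
        subprocess.run(["dnf", "install", "-y", pkg], check=True)

    ensure_installed("nvme-cli")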
Dec 06 09:58:41 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:58:41 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f475400a870 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:58:41 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:58:41 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f472c003fb0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:58:42 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v516: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:58:42 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:58:42 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:58:42 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:58:42.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:58:42 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:58:42 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:58:42 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:58:42.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
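Editor's note: radosgw records each of these health probes three times (a "starting new request" marker, a "req done" marker, and a beast access line). The beast line alone carries every field of interest; a parsing sketch, with the regex written against the samples in this journal rather than any canonical format:

    import re

    BEAST_RE = re.compile(
        r'beast: \S+: (?P<client>\S+) - (?P<user>\S+) '
        r'\[(?P<when>[^\]]+)\] "(?P<request>[^"]+)" '
        r'(?P<status>\d+) (?P<bytes>\d+).*latency=(?P<latency>[\d.]+)s'
    )

    line = ('beast: 0x7f53e66225d0: 192.168.122.100 - anonymous '
            '[06/Dec/2025:09:58:42.338 +0000] "HEAD / HTTP/1.0" 200 0 '
            '- - - latency=0.000000000s')
    m = BEAST_RE.search(line)
    assert m is not None
    print(m.group("client"), m.group("request"),
          m.group("status"), m.group("latency"))
    # -> 192.168.122.100 HEAD / HTTP/1.0 200 0.000000000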
Dec 06 09:58:42 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:58:42 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4730004580 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:58:43 compute-0 ceph-mon[74327]: pgmap v516: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:58:43 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:58:43 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4724003470 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:58:43 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:58:43 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f475400a870 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:58:44 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v517: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:58:44 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:58:44 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 09:58:44 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:58:44.341 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 09:58:44 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:58:44 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 09:58:44 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:58:44.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 09:58:44 compute-0 systemd[1]: Reloading.
Dec 06 09:58:44 compute-0 systemd-rc-local-generator[237341]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 09:58:44 compute-0 systemd-sysv-generator[237344]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 09:58:44 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:58:44 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f472c003fb0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:58:44 compute-0 systemd[1]: Reloading.
Dec 06 09:58:44 compute-0 systemd-rc-local-generator[237375]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 09:58:44 compute-0 systemd-sysv-generator[237381]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 09:58:45 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 09:58:45 compute-0 systemd-logind[795]: Watching system buttons on /dev/input/event0 (Power Button)
Dec 06 09:58:45 compute-0 ceph-mon[74327]: pgmap v517: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:58:45 compute-0 systemd-logind[795]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Dec 06 09:58:45 compute-0 lvm[237426]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 06 09:58:45 compute-0 lvm[237426]: VG ceph_vg0 finished
Dec 06 09:58:45 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec 06 09:58:45 compute-0 systemd[1]: Starting man-db-cache-update.service...
Dec 06 09:58:45 compute-0 systemd[1]: Reloading.
Dec 06 09:58:45 compute-0 systemd-sysv-generator[237479]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 09:58:45 compute-0 systemd-rc-local-generator[237476]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 09:58:45 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:58:45 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4730004580 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:58:45 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Dec 06 09:58:45 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:58:45 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4724003470 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:58:46 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v518: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:58:46 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:58:46 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 09:58:46 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:58:46.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 09:58:46 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:58:46 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:58:46 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:58:46.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:58:46 compute-0 sudo[237304]: pam_unix(sudo:session): session closed for user root
Dec 06 09:58:46 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:58:46 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4720001090 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:58:47 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:58:47.084Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
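Editor's note: Alertmanager's dispatcher keeps failing to POST to the ceph-dashboard webhook receivers on compute-1 and compute-2; "context deadline exceeded" is Go's way of saying the HTTP request timed out. A minimal stdlib probe against the same endpoint, useful for telling a timeout apart from an HTTP-level rejection; the URL is copied from the error above, while the 5-second budget is an assumption (Alertmanager's own deadline is configured elsewhere):

    import urllib.error
    import urllib.request

    URL = "http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver"

    req = urllib.request.Request(
        URL, data=b"{}",
        headers={"Content-Type": "application/json"},
        method="POST")
    try:
        with urllib.request.urlopen(req, timeout=5) as resp:
            print("webhook reachable:", resp.status)
    except urllib.error.HTTPError as exc:
        # Any HTTP response means the endpoint is at least reachable.
        print("webhook reachable but rejected the POST:", exc.code)
    except OSError as exc:
        # Timeouts and connect failures land here, matching the
        # "context deadline exceeded" failures in the log.
        print("webhook unreachable:", exc)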
Dec 06 09:58:47 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec 06 09:58:47 compute-0 systemd[1]: Finished man-db-cache-update.service.
Dec 06 09:58:47 compute-0 systemd[1]: man-db-cache-update.service: Consumed 1.942s CPU time.
Dec 06 09:58:47 compute-0 systemd[1]: run-r60797eb8a75a421ca9fd1bcdbde47ba3.service: Deactivated successfully.
Dec 06 09:58:47 compute-0 sudo[238768]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-arxjezwonnmyyqhiedlhgtyauimdzukh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765015126.8169959-1970-123661275030446/AnsiballZ_systemd_service.py'
Dec 06 09:58:47 compute-0 sudo[238768]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:58:47 compute-0 ceph-mon[74327]: pgmap v518: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:58:47 compute-0 python3.9[238770]: ansible-ansible.builtin.systemd_service Invoked with name=iscsid state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 06 09:58:47 compute-0 systemd[1]: Stopping Open-iSCSI...
Dec 06 09:58:47 compute-0 iscsid[226247]: iscsid shutting down.
Dec 06 09:58:47 compute-0 systemd[1]: iscsid.service: Deactivated successfully.
Dec 06 09:58:47 compute-0 systemd[1]: Stopped Open-iSCSI.
Dec 06 09:58:47 compute-0 systemd[1]: One time configuration for iscsi.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/iscsi/initiatorname.iscsi).
Dec 06 09:58:47 compute-0 systemd[1]: Starting Open-iSCSI...
Dec 06 09:58:47 compute-0 systemd[1]: Started Open-iSCSI.
Dec 06 09:58:47 compute-0 sudo[238768]: pam_unix(sudo:session): session closed for user root
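Editor's note: the iscsid restart above also logs that the one-time iscsi.service configuration was skipped, because that unit carries ConditionPathExists=!/etc/iscsi/initiatorname.iscsi, i.e. it only runs when the initiator-name file is absent. A sketch reproducing the condition check and the restart; the path and unit names come from the log, the rest is illustrative:

    import os
    import subprocess

    # systemd's ConditionPathExists=!PATH means "run only if PATH is absent".
    initiator = "/etc/iscsi/initiatorname.iscsi"
    if os.path.exists(initiator):
        print(f"{initiator} exists -> iscsi.service one-time setup skipped")
    else:
        print(f"{initiator} missing -> systemd would run the one-time setup")

    # The restart itself, as the ansible systemd_service task performed it.
    subprocess.run(["systemctl", "restart", "iscsid.service"], check=True)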
Dec 06 09:58:47 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:58:47 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4748002130 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:58:47 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:58:47 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4730004580 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:58:48 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v519: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 09:58:48 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:58:48 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 09:58:48 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:58:48.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 09:58:48 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:58:48 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 09:58:48 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:58:48.378 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 09:58:48 compute-0 python3.9[238926]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 06 09:58:48 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:58:48 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4724004180 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:58:49 compute-0 sudo[239050]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 09:58:49 compute-0 ceph-mon[74327]: pgmap v519: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 09:58:49 compute-0 sudo[239050]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:58:49 compute-0 sudo[239050]: pam_unix(sudo:session): session closed for user root
Dec 06 09:58:49 compute-0 sudo[239107]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ehovhglzysrphvvwlrfsxfwogptkyqla ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765015129.2262268-2022-179167504097801/AnsiballZ_file.py'
Dec 06 09:58:49 compute-0 sudo[239107]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:58:49 compute-0 python3.9[239109]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/ssh/ssh_known_hosts state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:58:49 compute-0 sudo[239107]: pam_unix(sudo:session): session closed for user root
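Editor's note: the file task above is a plain "touch with mode 0644" of /etc/ssh/ssh_known_hosts. The same effect in a few lines of stdlib Python, with the path and mode taken from the ansible invocation (run as root, as the task was):

    import os
    from pathlib import Path

    target = Path("/etc/ssh/ssh_known_hosts")
    target.touch(exist_ok=True)   # create if missing, update mtime if present
    os.chmod(target, 0o644)       # mode=0644 from the ansible invocation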
Dec 06 09:58:49 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:58:49 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4720001090 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:58:49 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:58:49 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4748002130 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:58:50 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 09:58:50 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v520: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:58:50 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:58:50 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:58:50 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:58:50.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:58:50 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:58:50 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 09:58:50 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:58:50.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 09:58:50 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:58:50 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4730004580 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:58:50 compute-0 sudo[239259]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vxecghmmlcpudhtaidxkobgokgwmeceq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765015130.3689497-2055-262091951415878/AnsiballZ_systemd_service.py'
Dec 06 09:58:50 compute-0 sudo[239259]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:58:50 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:09:58:50] "GET /metrics HTTP/1.1" 200 48259 "" "Prometheus/2.51.0"
Dec 06 09:58:50 compute-0 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:09:58:50] "GET /metrics HTTP/1.1" 200 48259 "" "Prometheus/2.51.0"
Dec 06 09:58:50 compute-0 python3.9[239261]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 06 09:58:50 compute-0 systemd[1]: Reloading.
Dec 06 09:58:51 compute-0 systemd-rc-local-generator[239290]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 09:58:51 compute-0 systemd-sysv-generator[239294]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 09:58:51 compute-0 sudo[239259]: pam_unix(sudo:session): session closed for user root
Dec 06 09:58:51 compute-0 ceph-mon[74327]: pgmap v520: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:58:51 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:58:51 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4724004180 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:58:52 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:58:51 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4720001090 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:58:52 compute-0 python3.9[239448]: ansible-ansible.builtin.service_facts Invoked
Dec 06 09:58:52 compute-0 network[239465]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec 06 09:58:52 compute-0 network[239466]: 'network-scripts' will be removed from distribution in near future.
Dec 06 09:58:52 compute-0 network[239467]: It is advised to switch to 'NetworkManager' instead for network management.
Dec 06 09:58:52 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v521: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:58:52 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:58:52 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 09:58:52 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:58:52.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 09:58:52 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:58:52 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:58:52 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:58:52.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:58:52 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:58:52 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4730004580 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:58:53 compute-0 podman[239476]: 2025-12-06 09:58:53.202627915 +0000 UTC m=+0.115105901 container health_status ed08b04f63d3695f0aa2f04fcc706897af4aa24f286fa3cd10a4323ee2c868ab (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
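Editor's note: the podman health_status events embed the container's config_data as a Python-literal dict (single-quoted keys, True/False booleans), so they are machine-readable without a bespoke parser once the balanced {...} span is isolated. A sketch; it assumes the braces are balanced and that no brace appears inside a string value, which holds for the events in this journal but is not guaranteed in general:

    import ast

    def extract_config_data(event_line: str) -> dict:
        # Isolate the balanced {...} payload after "config_data=" and
        # parse it as a Python literal.
        start = event_line.index("config_data=") + len("config_data=")
        depth = 0
        for i, ch in enumerate(event_line[start:], start):
            if ch == "{":
                depth += 1
            elif ch == "}":
                depth -= 1
                if depth == 0:
                    return ast.literal_eval(event_line[start:i + 1])
        raise ValueError("unbalanced config_data payload")

    # e.g. cfg = extract_config_data(line); print(cfg["image"], cfg["net"])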
Dec 06 09:58:53 compute-0 ceph-mon[74327]: pgmap v521: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:58:53 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:58:53 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4730004580 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:58:53 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 06 09:58:53 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 09:58:53 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 09:58:53 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 09:58:54 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 09:58:54 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 09:58:54 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 09:58:54 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 09:58:54 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:58:54 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4730004580 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:58:54 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v522: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:58:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:58:54.230 162267 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 09:58:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:58:54.231 162267 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 09:58:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:58:54.231 162267 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
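Editor's note: the three ovn_metadata_agent lines above are oslo.concurrency's standard acquiring/acquired/released trace for an in-process lock, with the waited/held durations appended. A stripped-down sketch of the same pattern using only threading; the real lockutils decorator additionally handles lock naming, semaphore reuse, and external file locks:

    import threading
    import time

    _lock = threading.Lock()

    def check_child_processes():
        t0 = time.monotonic()
        with _lock:
            waited = time.monotonic() - t0
            print(f'Lock "_check_child_processes" acquired :: '
                  f'waited {waited:.3f}s')
            t1 = time.monotonic()
            # ... scan child processes here ...
            held = time.monotonic() - t1
        print(f'Lock "_check_child_processes" released :: held {held:.3f}s')

    check_child_processes()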
Dec 06 09:58:54 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:58:54 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 09:58:54 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:58:54.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 09:58:54 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:58:54 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 09:58:54 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:58:54.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 09:58:54 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 09:58:54 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:58:54 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4724004180 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:58:55 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 09:58:55 compute-0 ceph-mon[74327]: pgmap v522: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:58:55 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:58:55 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4748002130 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:58:56 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:58:56 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4748002130 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:58:56 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v523: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:58:56 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:58:56 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 09:58:56 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:58:56.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 09:58:56 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:58:56 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:58:56 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:58:56.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:58:56 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:58:56 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4720002e50 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:58:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:58:57.086Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 06 09:58:57 compute-0 ceph-mon[74327]: pgmap v523: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:58:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:58:57 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4724004180 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:58:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:58:58 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4748002130 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:58:58 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v524: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 09:58:58 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:58:58 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 09:58:58 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:58:58.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 09:58:58 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:58:58 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:58:58 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:58:58.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:58:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:58:58 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4730004580 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:58:59 compute-0 podman[239747]: 2025-12-06 09:58:59.128819197 +0000 UTC m=+0.063104048 container health_status ec1e66d2175087ad7a28a9a6b5a784e30128df3f5e268959d009e364cd5120f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Dec 06 09:58:59 compute-0 sudo[239785]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-deogrzvrorcahnnwgiyconmchuksvpgf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765015138.7741168-2112-137210936232355/AnsiballZ_systemd_service.py'
Dec 06 09:58:59 compute-0 sudo[239785]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:58:59 compute-0 python3.9[239791]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 06 09:58:59 compute-0 sudo[239785]: pam_unix(sudo:session): session closed for user root
Dec 06 09:58:59 compute-0 ceph-mon[74327]: pgmap v524: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 09:58:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:58:59 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4720002e50 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:59:00 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:59:00 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4724004180 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:59:00 compute-0 sudo[239943]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fsznclodnmlexdkwzbgasyrfgjdvaode ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765015139.6641676-2112-208216225027189/AnsiballZ_systemd_service.py'
Dec 06 09:59:00 compute-0 sudo[239943]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:59:00 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 09:59:00 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v525: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:59:00 compute-0 python3.9[239945]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_migration_target.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 06 09:59:00 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:59:00 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 09:59:00 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:59:00.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 09:59:00 compute-0 sudo[239943]: pam_unix(sudo:session): session closed for user root
Dec 06 09:59:00 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:59:00 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:59:00 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:59:00.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:59:00 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:59:00 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f47480036e0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:59:00 compute-0 sudo[240096]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-brhgdgktdybbsijoslcmzmdvjdraswve ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765015140.5390542-2112-184852669764407/AnsiballZ_systemd_service.py'
Dec 06 09:59:00 compute-0 sudo[240096]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:59:00 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:09:59:00] "GET /metrics HTTP/1.1" 200 48263 "" "Prometheus/2.51.0"
Dec 06 09:59:00 compute-0 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:09:59:00] "GET /metrics HTTP/1.1" 200 48263 "" "Prometheus/2.51.0"
Dec 06 09:59:01 compute-0 python3.9[240098]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api_cron.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 06 09:59:01 compute-0 sudo[240096]: pam_unix(sudo:session): session closed for user root
Dec 06 09:59:01 compute-0 ceph-mon[74327]: pgmap v525: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:59:01 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:59:01 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4730004720 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:59:01 compute-0 sudo[240251]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ibodyhidkchzwpumpehavpxqcyvaovpo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765015141.4021115-2112-264762110959496/AnsiballZ_systemd_service.py'
Dec 06 09:59:01 compute-0 sudo[240251]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:59:02 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:59:02 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4720002e50 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:59:02 compute-0 python3.9[240253]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 06 09:59:02 compute-0 sudo[240251]: pam_unix(sudo:session): session closed for user root
Dec 06 09:59:02 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v526: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:59:02 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:59:02 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:59:02 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:59:02.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:59:02 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:59:02 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:59:02 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:59:02.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:59:02 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:59:02 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4724004180 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:59:02 compute-0 sudo[240404]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tzyqqmckmswvgzvdynbylcmzejbogwjh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765015142.3649933-2112-207185656934896/AnsiballZ_systemd_service.py'
Dec 06 09:59:02 compute-0 sudo[240404]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:59:02 compute-0 python3.9[240406]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_conductor.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 06 09:59:02 compute-0 ceph-mon[74327]: pgmap v526: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:59:02 compute-0 sudo[240404]: pam_unix(sudo:session): session closed for user root
Dec 06 09:59:03 compute-0 sudo[240559]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xbkoydyduvkyjhckynnsittqchomlfpn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765015143.1676276-2112-58281991452397/AnsiballZ_systemd_service.py'
Dec 06 09:59:03 compute-0 sudo[240559]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:59:03 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:59:03 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f47480036e0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:59:03 compute-0 python3.9[240561]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_metadata.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 06 09:59:03 compute-0 sudo[240559]: pam_unix(sudo:session): session closed for user root
Dec 06 09:59:04 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:59:04 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4730004740 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:59:04 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v527: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:59:04 compute-0 podman[240662]: 2025-12-06 09:59:04.351229622 +0000 UTC m=+0.087894500 container health_status a9cf33c1bf7891b3fdd9db4717ad5f3587fe9e5acf9171920c6d6334b70a502a (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec 06 09:59:04 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:59:04 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 09:59:04 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:59:04.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 09:59:04 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:59:04 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:59:04 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:59:04.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:59:04 compute-0 sudo[240731]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jgnfeywocbksrtyroccvaiepgaaogmpu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765015144.0565817-2112-103162388747375/AnsiballZ_systemd_service.py'
Dec 06 09:59:04 compute-0 sudo[240731]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:59:04 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:59:04 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4720002e50 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:59:04 compute-0 python3.9[240733]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_scheduler.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 06 09:59:04 compute-0 sudo[240731]: pam_unix(sudo:session): session closed for user root
Dec 06 09:59:05 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 09:59:05 compute-0 ceph-mon[74327]: pgmap v527: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:59:05 compute-0 sudo[240885]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qajvuidzdduzqzmerbyvkktwnkulvfgz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765015144.9408033-2112-95254170078752/AnsiballZ_systemd_service.py'
Dec 06 09:59:05 compute-0 sudo[240885]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:59:05 compute-0 sudo[240888]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 09:59:05 compute-0 sudo[240888]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:59:05 compute-0 sudo[240888]: pam_unix(sudo:session): session closed for user root
Dec 06 09:59:05 compute-0 sudo[240914]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 check-host
Dec 06 09:59:05 compute-0 sudo[240914]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:59:05 compute-0 python3.9[240887]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_vnc_proxy.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 06 09:59:05 compute-0 sudo[240885]: pam_unix(sudo:session): session closed for user root
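Editor's note: the run of ansible systemd_service tasks above stops and disables the legacy tripleo_nova_* units one task at a time (compute, migration_target, api_cron, api, conductor, metadata, scheduler, vnc_proxy). An equivalent loop sketch; the unit list is copied from the log, and systemctl's "disable --now" collapses the stop+disable pair, with check=False since some units may not exist on a given node:

    import subprocess

    # Unit names as they appear in the ansible tasks above.
    UNITS = [
        "tripleo_nova_compute.service",
        "tripleo_nova_migration_target.service",
        "tripleo_nova_api_cron.service",
        "tripleo_nova_api.service",
        "tripleo_nova_conductor.service",
        "tripleo_nova_metadata.service",
        "tripleo_nova_scheduler.service",
        "tripleo_nova_vnc_proxy.service",
    ]

    for unit in UNITS:
        # state=stopped + enabled=False in one call; tolerate absent units.
        subprocess.run(["systemctl", "disable", "--now", unit], check=False)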
Dec 06 09:59:05 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:59:05 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4724004180 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:59:05 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec 06 09:59:05 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:59:05 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec 06 09:59:05 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:59:05 compute-0 sudo[240914]: pam_unix(sudo:session): session closed for user root
Dec 06 09:59:05 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 06 09:59:05 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:59:05 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 06 09:59:05 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:59:06 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:59:06 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f47480036e0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:59:06 compute-0 sudo[240986]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 09:59:06 compute-0 sudo[240986]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:59:06 compute-0 sudo[240986]: pam_unix(sudo:session): session closed for user root
Dec 06 09:59:06 compute-0 sudo[241011]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Dec 06 09:59:06 compute-0 sudo[241011]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:59:06 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v528: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:59:06 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:59:06 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 09:59:06 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:59:06.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 09:59:06 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:59:06 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:59:06 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:59:06.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:59:06 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:59:06 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4730004760 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:59:06 compute-0 sudo[241011]: pam_unix(sudo:session): session closed for user root
Dec 06 09:59:06 compute-0 sudo[241191]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uxitedidfpgkalsbzdetimpuxaungtbz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765015146.4802668-2289-261853331164237/AnsiballZ_file.py'
Dec 06 09:59:06 compute-0 sudo[241191]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:59:06 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 06 09:59:06 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 09:59:06 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 06 09:59:06 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 09:59:06 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 06 09:59:06 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}]: dispatch
Dec 06 09:59:06 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Dec 06 09:59:06 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}]: dispatch
Dec 06 09:59:06 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 06 09:59:06 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 09:59:06 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 06 09:59:06 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 09:59:06 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 06 09:59:06 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 09:59:06 compute-0 sudo[241194]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 09:59:06 compute-0 sudo[241194]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:59:06 compute-0 sudo[241194]: pam_unix(sudo:session): session closed for user root
Dec 06 09:59:06 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}]: dispatch
Dec 06 09:59:06 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{prefix=config-key set, key=mgr/cephadm/host.compute-1}]: dispatch
Dec 06 09:59:06 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}]: dispatch
Dec 06 09:59:06 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{prefix=config-key set, key=mgr/cephadm/host.compute-0}]: dispatch
Dec 06 09:59:06 compute-0 ceph-mon[74327]: pgmap v528: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:59:06 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 09:59:06 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 09:59:06 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}]: dispatch
Dec 06 09:59:06 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}]: dispatch
Dec 06 09:59:06 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 09:59:06 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 09:59:06 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 09:59:06 compute-0 sudo[241219]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 5ecd3f74-dade-5fc4-92ce-8950ae424258 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Dec 06 09:59:06 compute-0 sudo[241219]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:59:06 compute-0 python3.9[241193]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:59:06 compute-0 sudo[241191]: pam_unix(sudo:session): session closed for user root
Dec 06 09:59:07 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:59:07.088Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 06 09:59:07 compute-0 podman[241387]: 2025-12-06 09:59:07.425569182 +0000 UTC m=+0.051479030 container create 7a89432a86fb34da77232037d83a6227bb47e170fe4188f07b99545c1419081c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_hamilton, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 09:59:07 compute-0 systemd[1]: Started libpod-conmon-7a89432a86fb34da77232037d83a6227bb47e170fe4188f07b99545c1419081c.scope.
Dec 06 09:59:07 compute-0 podman[241387]: 2025-12-06 09:59:07.398363143 +0000 UTC m=+0.024272971 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 06 09:59:07 compute-0 systemd[1]: Started libcrun container.
Dec 06 09:59:07 compute-0 podman[241387]: 2025-12-06 09:59:07.521273624 +0000 UTC m=+0.147183492 container init 7a89432a86fb34da77232037d83a6227bb47e170fe4188f07b99545c1419081c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_hamilton, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 09:59:07 compute-0 podman[241387]: 2025-12-06 09:59:07.53360185 +0000 UTC m=+0.159511658 container start 7a89432a86fb34da77232037d83a6227bb47e170fe4188f07b99545c1419081c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_hamilton, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True)
Dec 06 09:59:07 compute-0 podman[241387]: 2025-12-06 09:59:07.538625047 +0000 UTC m=+0.164534935 container attach 7a89432a86fb34da77232037d83a6227bb47e170fe4188f07b99545c1419081c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_hamilton, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec 06 09:59:07 compute-0 quizzical_hamilton[241428]: 167 167
Dec 06 09:59:07 compute-0 systemd[1]: libpod-7a89432a86fb34da77232037d83a6227bb47e170fe4188f07b99545c1419081c.scope: Deactivated successfully.
Dec 06 09:59:07 compute-0 podman[241387]: 2025-12-06 09:59:07.542761589 +0000 UTC m=+0.168671437 container died 7a89432a86fb34da77232037d83a6227bb47e170fe4188f07b99545c1419081c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_hamilton, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2)
Dec 06 09:59:07 compute-0 sudo[241458]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dqcyzxikedzwnfaiowydgaxatkdclkas ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765015147.1577237-2289-280473208631225/AnsiballZ_file.py'
Dec 06 09:59:07 compute-0 sudo[241458]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:59:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-ef427d50911734f20af4cd75c39f056991c44506908268126766b141067456e4-merged.mount: Deactivated successfully.
Dec 06 09:59:07 compute-0 podman[241387]: 2025-12-06 09:59:07.608103765 +0000 UTC m=+0.234013613 container remove 7a89432a86fb34da77232037d83a6227bb47e170fe4188f07b99545c1419081c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_hamilton, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Dec 06 09:59:07 compute-0 systemd[1]: libpod-conmon-7a89432a86fb34da77232037d83a6227bb47e170fe4188f07b99545c1419081c.scope: Deactivated successfully.
Dec 06 09:59:07 compute-0 podman[241481]: 2025-12-06 09:59:07.792626512 +0000 UTC m=+0.054539343 container create e7e1ab7eac14b3932937b343bbf60e14948d877566de3cd6572aa87050f02bac (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_antonelli, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 09:59:07 compute-0 python3.9[241468]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:59:07 compute-0 sudo[241458]: pam_unix(sudo:session): session closed for user root
Dec 06 09:59:07 compute-0 systemd[1]: Started libpod-conmon-e7e1ab7eac14b3932937b343bbf60e14948d877566de3cd6572aa87050f02bac.scope.
Dec 06 09:59:07 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:59:07 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4720002e50 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:59:07 compute-0 podman[241481]: 2025-12-06 09:59:07.7645923 +0000 UTC m=+0.026505081 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 06 09:59:07 compute-0 systemd[1]: Started libcrun container.
Dec 06 09:59:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1807f8a4fa6a4bbf6ebf8fb087478ce5a9ba29aa9ddc0332189fc23279cb8f3e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 09:59:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1807f8a4fa6a4bbf6ebf8fb087478ce5a9ba29aa9ddc0332189fc23279cb8f3e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 09:59:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1807f8a4fa6a4bbf6ebf8fb087478ce5a9ba29aa9ddc0332189fc23279cb8f3e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 09:59:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1807f8a4fa6a4bbf6ebf8fb087478ce5a9ba29aa9ddc0332189fc23279cb8f3e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 09:59:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1807f8a4fa6a4bbf6ebf8fb087478ce5a9ba29aa9ddc0332189fc23279cb8f3e/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 06 09:59:07 compute-0 podman[241481]: 2025-12-06 09:59:07.900899147 +0000 UTC m=+0.162811978 container init e7e1ab7eac14b3932937b343bbf60e14948d877566de3cd6572aa87050f02bac (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_antonelli, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Dec 06 09:59:07 compute-0 podman[241481]: 2025-12-06 09:59:07.912151913 +0000 UTC m=+0.174064684 container start e7e1ab7eac14b3932937b343bbf60e14948d877566de3cd6572aa87050f02bac (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_antonelli, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Dec 06 09:59:07 compute-0 podman[241481]: 2025-12-06 09:59:07.918395842 +0000 UTC m=+0.180308653 container attach e7e1ab7eac14b3932937b343bbf60e14948d877566de3cd6572aa87050f02bac (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_antonelli, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Dec 06 09:59:08 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:59:08 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4724004180 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:59:08 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v529: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 09:59:08 compute-0 elegant_antonelli[241497]: --> passed data devices: 0 physical, 1 LVM
Dec 06 09:59:08 compute-0 elegant_antonelli[241497]: --> All data devices are unavailable
Dec 06 09:59:08 compute-0 sudo[241661]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lzsudjxdynakjtupqqvqcgbcgibxythd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765015147.9766946-2289-215602701060445/AnsiballZ_file.py'
Dec 06 09:59:08 compute-0 sudo[241661]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:59:08 compute-0 systemd[1]: libpod-e7e1ab7eac14b3932937b343bbf60e14948d877566de3cd6572aa87050f02bac.scope: Deactivated successfully.
Dec 06 09:59:08 compute-0 podman[241481]: 2025-12-06 09:59:08.286733158 +0000 UTC m=+0.548645979 container died e7e1ab7eac14b3932937b343bbf60e14948d877566de3cd6572aa87050f02bac (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_antonelli, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 09:59:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-1807f8a4fa6a4bbf6ebf8fb087478ce5a9ba29aa9ddc0332189fc23279cb8f3e-merged.mount: Deactivated successfully.
Dec 06 09:59:08 compute-0 podman[241481]: 2025-12-06 09:59:08.3387025 +0000 UTC m=+0.600615271 container remove e7e1ab7eac14b3932937b343bbf60e14948d877566de3cd6572aa87050f02bac (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_antonelli, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 09:59:08 compute-0 systemd[1]: libpod-conmon-e7e1ab7eac14b3932937b343bbf60e14948d877566de3cd6572aa87050f02bac.scope: Deactivated successfully.
Dec 06 09:59:08 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:59:08 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 09:59:08 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:59:08.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 09:59:08 compute-0 sudo[241219]: pam_unix(sudo:session): session closed for user root
Dec 06 09:59:08 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:59:08 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 09:59:08 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:59:08.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 09:59:08 compute-0 sudo[241676]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 09:59:08 compute-0 sudo[241676]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:59:08 compute-0 sudo[241676]: pam_unix(sudo:session): session closed for user root
Dec 06 09:59:08 compute-0 python3.9[241663]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:59:08 compute-0 sudo[241661]: pam_unix(sudo:session): session closed for user root
Dec 06 09:59:08 compute-0 sudo[241701]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 5ecd3f74-dade-5fc4-92ce-8950ae424258 -- lvm list --format json
Dec 06 09:59:08 compute-0 sudo[241701]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:59:08 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:59:08 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4748004780 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:59:08 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 06 09:59:08 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 09:59:08 compute-0 sudo[241916]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zqlaybohohhwhbsyuuuxpunwvwcqolnq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765015148.6318438-2289-102759174207898/AnsiballZ_file.py'
Dec 06 09:59:08 compute-0 sudo[241916]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:59:09 compute-0 podman[241919]: 2025-12-06 09:59:08.999833466 +0000 UTC m=+0.048000376 container create da2457ba4cac89e958ba4b18d6ebfc8d34c7514fd4e752ad7bdb63b59f871ce4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_visvesvaraya, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 06 09:59:09 compute-0 systemd[1]: Started libpod-conmon-da2457ba4cac89e958ba4b18d6ebfc8d34c7514fd4e752ad7bdb63b59f871ce4.scope.
Dec 06 09:59:09 compute-0 podman[241919]: 2025-12-06 09:59:08.978868066 +0000 UTC m=+0.027035006 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 06 09:59:09 compute-0 systemd[1]: Started libcrun container.
Dec 06 09:59:09 compute-0 podman[241919]: 2025-12-06 09:59:09.109724135 +0000 UTC m=+0.157891075 container init da2457ba4cac89e958ba4b18d6ebfc8d34c7514fd4e752ad7bdb63b59f871ce4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_visvesvaraya, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 09:59:09 compute-0 podman[241919]: 2025-12-06 09:59:09.119446649 +0000 UTC m=+0.167613549 container start da2457ba4cac89e958ba4b18d6ebfc8d34c7514fd4e752ad7bdb63b59f871ce4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_visvesvaraya, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1)
Dec 06 09:59:09 compute-0 podman[241919]: 2025-12-06 09:59:09.124008463 +0000 UTC m=+0.172175383 container attach da2457ba4cac89e958ba4b18d6ebfc8d34c7514fd4e752ad7bdb63b59f871ce4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_visvesvaraya, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, OSD_FLAVOR=default, ceph=True)
Dec 06 09:59:09 compute-0 peaceful_visvesvaraya[241937]: 167 167
Dec 06 09:59:09 compute-0 systemd[1]: libpod-da2457ba4cac89e958ba4b18d6ebfc8d34c7514fd4e752ad7bdb63b59f871ce4.scope: Deactivated successfully.
Dec 06 09:59:09 compute-0 podman[241919]: 2025-12-06 09:59:09.129602315 +0000 UTC m=+0.177769235 container died da2457ba4cac89e958ba4b18d6ebfc8d34c7514fd4e752ad7bdb63b59f871ce4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_visvesvaraya, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Dec 06 09:59:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-a097b1fe085ed5cd0054c0c5cdb83eec5720f7edda4dbbb59ad91fa0265a62da-merged.mount: Deactivated successfully.
Dec 06 09:59:09 compute-0 podman[241919]: 2025-12-06 09:59:09.177321823 +0000 UTC m=+0.225488763 container remove da2457ba4cac89e958ba4b18d6ebfc8d34c7514fd4e752ad7bdb63b59f871ce4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_visvesvaraya, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325)
Dec 06 09:59:09 compute-0 systemd[1]: libpod-conmon-da2457ba4cac89e958ba4b18d6ebfc8d34c7514fd4e752ad7bdb63b59f871ce4.scope: Deactivated successfully.
Dec 06 09:59:09 compute-0 python3.9[241921]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:59:09 compute-0 sudo[241916]: pam_unix(sudo:session): session closed for user root
Dec 06 09:59:09 compute-0 ceph-mon[74327]: pgmap v529: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 09:59:09 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 09:59:09 compute-0 podman[241986]: 2025-12-06 09:59:09.377107174 +0000 UTC m=+0.053767742 container create 771f756ca943b94f0c1d575f99f2e6ea29f90ea9925f740488c35536842bca57 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_yalow, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0)
Dec 06 09:59:09 compute-0 systemd[1]: Started libpod-conmon-771f756ca943b94f0c1d575f99f2e6ea29f90ea9925f740488c35536842bca57.scope.
Dec 06 09:59:09 compute-0 podman[241986]: 2025-12-06 09:59:09.357258335 +0000 UTC m=+0.033918933 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 06 09:59:09 compute-0 systemd[1]: Started libcrun container.
Dec 06 09:59:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7b7125950458ad9ed11567744bf910efee27eff353ec30ce191e9fd72e8e1fd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 09:59:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7b7125950458ad9ed11567744bf910efee27eff353ec30ce191e9fd72e8e1fd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 09:59:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7b7125950458ad9ed11567744bf910efee27eff353ec30ce191e9fd72e8e1fd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 09:59:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7b7125950458ad9ed11567744bf910efee27eff353ec30ce191e9fd72e8e1fd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 09:59:09 compute-0 podman[241986]: 2025-12-06 09:59:09.475927491 +0000 UTC m=+0.152588079 container init 771f756ca943b94f0c1d575f99f2e6ea29f90ea9925f740488c35536842bca57 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_yalow, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 09:59:09 compute-0 podman[241986]: 2025-12-06 09:59:09.48654982 +0000 UTC m=+0.163210388 container start 771f756ca943b94f0c1d575f99f2e6ea29f90ea9925f740488c35536842bca57 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_yalow, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Dec 06 09:59:09 compute-0 podman[241986]: 2025-12-06 09:59:09.495593656 +0000 UTC m=+0.172254224 container attach 771f756ca943b94f0c1d575f99f2e6ea29f90ea9925f740488c35536842bca57 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_yalow, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 06 09:59:09 compute-0 sudo[242091]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 09:59:09 compute-0 sudo[242091]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:59:09 compute-0 sudo[242091]: pam_unix(sudo:session): session closed for user root
Dec 06 09:59:09 compute-0 sudo[242159]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qkhiinzccurvkuzumkzzzrohunajdyvh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765015149.369445-2289-112661835680157/AnsiballZ_file.py'
Dec 06 09:59:09 compute-0 sudo[242159]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:59:09 compute-0 zen_yalow[242056]: {
Dec 06 09:59:09 compute-0 zen_yalow[242056]:     "1": [
Dec 06 09:59:09 compute-0 zen_yalow[242056]:         {
Dec 06 09:59:09 compute-0 zen_yalow[242056]:             "devices": [
Dec 06 09:59:09 compute-0 zen_yalow[242056]:                 "/dev/loop3"
Dec 06 09:59:09 compute-0 zen_yalow[242056]:             ],
Dec 06 09:59:09 compute-0 zen_yalow[242056]:             "lv_name": "ceph_lv0",
Dec 06 09:59:09 compute-0 zen_yalow[242056]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 09:59:09 compute-0 zen_yalow[242056]:             "lv_size": "21470642176",
Dec 06 09:59:09 compute-0 zen_yalow[242056]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=a55pcS-v3Vt-CJ4W-fqCV-Baze-9VlY-jG9prS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=5ecd3f74-dade-5fc4-92ce-8950ae424258,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=7899c4d8-edb4-4836-b838-c4aa702ad7af,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 06 09:59:09 compute-0 zen_yalow[242056]:             "lv_uuid": "a55pcS-v3Vt-CJ4W-fqCV-Baze-9VlY-jG9prS",
Dec 06 09:59:09 compute-0 zen_yalow[242056]:             "name": "ceph_lv0",
Dec 06 09:59:09 compute-0 zen_yalow[242056]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 09:59:09 compute-0 zen_yalow[242056]:             "tags": {
Dec 06 09:59:09 compute-0 zen_yalow[242056]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 06 09:59:09 compute-0 zen_yalow[242056]:                 "ceph.block_uuid": "a55pcS-v3Vt-CJ4W-fqCV-Baze-9VlY-jG9prS",
Dec 06 09:59:09 compute-0 zen_yalow[242056]:                 "ceph.cephx_lockbox_secret": "",
Dec 06 09:59:09 compute-0 zen_yalow[242056]:                 "ceph.cluster_fsid": "5ecd3f74-dade-5fc4-92ce-8950ae424258",
Dec 06 09:59:09 compute-0 zen_yalow[242056]:                 "ceph.cluster_name": "ceph",
Dec 06 09:59:09 compute-0 zen_yalow[242056]:                 "ceph.crush_device_class": "",
Dec 06 09:59:09 compute-0 zen_yalow[242056]:                 "ceph.encrypted": "0",
Dec 06 09:59:09 compute-0 zen_yalow[242056]:                 "ceph.osd_fsid": "7899c4d8-edb4-4836-b838-c4aa702ad7af",
Dec 06 09:59:09 compute-0 zen_yalow[242056]:                 "ceph.osd_id": "1",
Dec 06 09:59:09 compute-0 zen_yalow[242056]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 06 09:59:09 compute-0 zen_yalow[242056]:                 "ceph.type": "block",
Dec 06 09:59:09 compute-0 zen_yalow[242056]:                 "ceph.vdo": "0",
Dec 06 09:59:09 compute-0 zen_yalow[242056]:                 "ceph.with_tpm": "0"
Dec 06 09:59:09 compute-0 zen_yalow[242056]:             },
Dec 06 09:59:09 compute-0 zen_yalow[242056]:             "type": "block",
Dec 06 09:59:09 compute-0 zen_yalow[242056]:             "vg_name": "ceph_vg0"
Dec 06 09:59:09 compute-0 zen_yalow[242056]:         }
Dec 06 09:59:09 compute-0 zen_yalow[242056]:     ]
Dec 06 09:59:09 compute-0 zen_yalow[242056]: }
Dec 06 09:59:09 compute-0 systemd[1]: libpod-771f756ca943b94f0c1d575f99f2e6ea29f90ea9925f740488c35536842bca57.scope: Deactivated successfully.
Dec 06 09:59:09 compute-0 podman[241986]: 2025-12-06 09:59:09.800748043 +0000 UTC m=+0.477408611 container died 771f756ca943b94f0c1d575f99f2e6ea29f90ea9925f740488c35536842bca57 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_yalow, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, ceph=True)
Dec 06 09:59:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-c7b7125950458ad9ed11567744bf910efee27eff353ec30ce191e9fd72e8e1fd-merged.mount: Deactivated successfully.
Dec 06 09:59:09 compute-0 podman[241986]: 2025-12-06 09:59:09.848179082 +0000 UTC m=+0.524839650 container remove 771f756ca943b94f0c1d575f99f2e6ea29f90ea9925f740488c35536842bca57 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_yalow, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 09:59:09 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:59:09 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4730004780 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:59:09 compute-0 systemd[1]: libpod-conmon-771f756ca943b94f0c1d575f99f2e6ea29f90ea9925f740488c35536842bca57.scope: Deactivated successfully.
Dec 06 09:59:09 compute-0 python3.9[242161]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:59:09 compute-0 sudo[242159]: pam_unix(sudo:session): session closed for user root
Dec 06 09:59:09 compute-0 sudo[241701]: pam_unix(sudo:session): session closed for user root
Dec 06 09:59:09 compute-0 sudo[242178]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 09:59:09 compute-0 sudo[242178]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:59:09 compute-0 sudo[242178]: pam_unix(sudo:session): session closed for user root
Dec 06 09:59:10 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:59:10 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4720002e50 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:59:10 compute-0 sudo[242227]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 5ecd3f74-dade-5fc4-92ce-8950ae424258 -- raw list --format json
Dec 06 09:59:10 compute-0 sudo[242227]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:59:10 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 09:59:10 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v530: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:59:10 compute-0 sudo[242398]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-emkjnmgglfkweiyvkcbubntwpdupaalo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765015150.0301328-2289-52283573163993/AnsiballZ_file.py'
Dec 06 09:59:10 compute-0 sudo[242398]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:59:10 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:59:10 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:59:10 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:59:10.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:59:10 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:59:10 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:59:10 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:59:10.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:59:10 compute-0 podman[242423]: 2025-12-06 09:59:10.432603203 +0000 UTC m=+0.046569517 container create c7e019eb96aca191cb68d9cf0df0fa6cce4c151603ee47688b5bfa5645dae5ca (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_satoshi, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 09:59:10 compute-0 systemd[1]: Started libpod-conmon-c7e019eb96aca191cb68d9cf0df0fa6cce4c151603ee47688b5bfa5645dae5ca.scope.
Dec 06 09:59:10 compute-0 python3.9[242407]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:59:10 compute-0 sudo[242398]: pam_unix(sudo:session): session closed for user root
Dec 06 09:59:10 compute-0 systemd[1]: Started libcrun container.
Dec 06 09:59:10 compute-0 podman[242423]: 2025-12-06 09:59:10.409726381 +0000 UTC m=+0.023692685 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 06 09:59:10 compute-0 podman[242423]: 2025-12-06 09:59:10.521623404 +0000 UTC m=+0.135589708 container init c7e019eb96aca191cb68d9cf0df0fa6cce4c151603ee47688b5bfa5645dae5ca (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_satoshi, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec 06 09:59:10 compute-0 podman[242423]: 2025-12-06 09:59:10.531770669 +0000 UTC m=+0.145736953 container start c7e019eb96aca191cb68d9cf0df0fa6cce4c151603ee47688b5bfa5645dae5ca (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_satoshi, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 09:59:10 compute-0 podman[242423]: 2025-12-06 09:59:10.535213713 +0000 UTC m=+0.149179997 container attach c7e019eb96aca191cb68d9cf0df0fa6cce4c151603ee47688b5bfa5645dae5ca (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_satoshi, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 06 09:59:10 compute-0 boring_satoshi[242439]: 167 167
Dec 06 09:59:10 compute-0 systemd[1]: libpod-c7e019eb96aca191cb68d9cf0df0fa6cce4c151603ee47688b5bfa5645dae5ca.scope: Deactivated successfully.
Dec 06 09:59:10 compute-0 podman[242461]: 2025-12-06 09:59:10.585096829 +0000 UTC m=+0.032374141 container died c7e019eb96aca191cb68d9cf0df0fa6cce4c151603ee47688b5bfa5645dae5ca (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_satoshi, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 06 09:59:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-7a05893966a1e760099750fd8cc9a9f0b93a5af647543e0d710761ddfe03f1d9-merged.mount: Deactivated successfully.
Dec 06 09:59:10 compute-0 podman[242461]: 2025-12-06 09:59:10.620948344 +0000 UTC m=+0.068225626 container remove c7e019eb96aca191cb68d9cf0df0fa6cce4c151603ee47688b5bfa5645dae5ca (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_satoshi, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325)
Dec 06 09:59:10 compute-0 systemd[1]: libpod-conmon-c7e019eb96aca191cb68d9cf0df0fa6cce4c151603ee47688b5bfa5645dae5ca.scope: Deactivated successfully.
Dec 06 09:59:10 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:59:10 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4730004780 fd 42 proxy header rest len failed header rlen = % (will set dead)
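
Note: the ganesha.nfsd TIRPC EVENT above recurs every second or two throughout this section. Something keeps opening TCP connections to the NFS export and sending bytes that fail the proxy-protocol header length check, so each transport is marked dead ("will set dead"); the bare "%" is ganesha's own truncated format string, reproduced verbatim. A plausible source is a load-balancer liveness probe rather than a real NFS client. A minimal sketch, assuming journalctl built with grep (pcre2) support and the Python 3.9 seen on this host, to tally the events per worker thread:

import json
import subprocess
from collections import Counter

# Tally the recurring "svc_vc_recv ... (will set dead)" events per ganesha
# worker thread (the svc_NN tag in the message).
out = subprocess.run(
    ["journalctl", "-o", "json", "--no-pager", "-g", "svc_vc_recv"],
    capture_output=True, text=True, check=True,
).stdout

per_thread = Counter()
for line in out.splitlines():
    msg = json.loads(line).get("MESSAGE", "")
    if "(will set dead)" not in msg:
        continue
    # the thread tag appears as "ganesha.nfsd-2[svc_19]" in the message
    tag = msg.split("[", 1)[1].split("]", 1)[0] if "[" in msg else "?"
    per_thread[tag] += 1

for tag, n in per_thread.most_common():
    print(f"{tag}: {n}")
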
Dec 06 09:59:10 compute-0 podman[242514]: 2025-12-06 09:59:10.890164304 +0000 UTC m=+0.053536297 container create 4d7ee467a5da1c71eae61e2bde4a4745f1f5cdcea00e38efa49b03e8ca75b461 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_noyce, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 09:59:10 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:09:59:10] "GET /metrics HTTP/1.1" 200 48264 "" "Prometheus/2.51.0"
Dec 06 09:59:10 compute-0 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:09:59:10] "GET /metrics HTTP/1.1" 200 48264 "" "Prometheus/2.51.0"
Dec 06 09:59:10 compute-0 systemd[1]: Started libpod-conmon-4d7ee467a5da1c71eae61e2bde4a4745f1f5cdcea00e38efa49b03e8ca75b461.scope.
Dec 06 09:59:10 compute-0 podman[242514]: 2025-12-06 09:59:10.870734205 +0000 UTC m=+0.034106218 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 06 09:59:10 compute-0 systemd[1]: Started libcrun container.
Dec 06 09:59:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/731972f4b213a4afbb82074f2109006d27761757140cf9dd52cc5b3f97302def/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 09:59:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/731972f4b213a4afbb82074f2109006d27761757140cf9dd52cc5b3f97302def/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 09:59:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/731972f4b213a4afbb82074f2109006d27761757140cf9dd52cc5b3f97302def/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 09:59:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/731972f4b213a4afbb82074f2109006d27761757140cf9dd52cc5b3f97302def/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
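
Note: the four kernel lines above fire once per bind mount as podman sets up the helper container. They only record that the backing xfs filesystem was made without the bigtime feature, so its inodes hold 32-bit seconds and timestamps cap at 0x7fffffff. The cutoff is easy to check:

from datetime import datetime, timezone

# 0x7fffffff is the largest 32-bit signed time_t; xfs without the bigtime
# feature cannot represent inode timestamps past this instant.
print(datetime.fromtimestamp(0x7FFFFFFF, tz=timezone.utc).isoformat())
# -> 2038-01-19T03:14:07+00:00
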
Dec 06 09:59:11 compute-0 podman[242514]: 2025-12-06 09:59:11.000335219 +0000 UTC m=+0.163707242 container init 4d7ee467a5da1c71eae61e2bde4a4745f1f5cdcea00e38efa49b03e8ca75b461 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_noyce, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec 06 09:59:11 compute-0 podman[242514]: 2025-12-06 09:59:11.00991914 +0000 UTC m=+0.173291133 container start 4d7ee467a5da1c71eae61e2bde4a4745f1f5cdcea00e38efa49b03e8ca75b461 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_noyce, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 09:59:11 compute-0 podman[242514]: 2025-12-06 09:59:11.013947879 +0000 UTC m=+0.177319902 container attach 4d7ee467a5da1c71eae61e2bde4a4745f1f5cdcea00e38efa49b03e8ca75b461 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_noyce, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 09:59:11 compute-0 sudo[242639]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-skfydkxtjypxedvgctvvwmjyvpvtjnbu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765015150.845174-2289-256490597781338/AnsiballZ_file.py'
Dec 06 09:59:11 compute-0 sudo[242639]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:59:11 compute-0 python3.9[242647]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:59:11 compute-0 sudo[242639]: pam_unix(sudo:session): session closed for user root
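
Note: this is the first of a series of ansible.builtin.file tasks run as root by the zuul user, each deleting one old TripleO nova unit file (state=absent with recurse=False and force=False, i.e. a plain idempotent unlink). The full set removed in this section, collected from the invocations that follow, reduces to:

from pathlib import Path

# What each ansible.builtin.file state=absent task amounts to for a regular
# file: unlink if present, succeed quietly if already gone. Paths are taken
# verbatim from the module invocations in this log.
for unit in (
    "/usr/lib/systemd/system/tripleo_nova_scheduler.service",
    "/usr/lib/systemd/system/tripleo_nova_vnc_proxy.service",
    "/etc/systemd/system/tripleo_nova_compute.service",
    "/etc/systemd/system/tripleo_nova_migration_target.service",
    "/etc/systemd/system/tripleo_nova_api_cron.service",
    "/etc/systemd/system/tripleo_nova_api.service",
    "/etc/systemd/system/tripleo_nova_conductor.service",
    "/etc/systemd/system/tripleo_nova_metadata.service",
    "/etc/systemd/system/tripleo_nova_scheduler.service",
    "/etc/systemd/system/tripleo_nova_vnc_proxy.service",
):
    Path(unit).unlink(missing_ok=True)
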
Dec 06 09:59:11 compute-0 ceph-mon[74327]: pgmap v530: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:59:11 compute-0 lvm[242812]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 06 09:59:11 compute-0 lvm[242812]: VG ceph_vg0 finished
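
Note: these two lvm lines are udev event activation: the PV on /dev/loop3 came online, which made VG ceph_vg0 complete and let autoactivation finish. A loop device as the sole PV suggests a loopback-backed test OSD, which would fit a CI deployment. A quick check of the VG named in the messages (assumes the lvm2 CLI, as used by the logger above):

import subprocess

# Show the VG from the event-activation lines with its backing PV and size;
# vgs prints one row per PV when pv_name is requested.
print(subprocess.run(
    ["vgs", "--noheadings", "-o", "vg_name,pv_name,vg_size", "ceph_vg0"],
    capture_output=True, text=True, check=True,
).stdout.strip())
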
Dec 06 09:59:11 compute-0 vigilant_noyce[242560]: {}
Dec 06 09:59:11 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:59:11 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4724004180 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:59:11 compute-0 systemd[1]: libpod-4d7ee467a5da1c71eae61e2bde4a4745f1f5cdcea00e38efa49b03e8ca75b461.scope: Deactivated successfully.
Dec 06 09:59:11 compute-0 podman[242514]: 2025-12-06 09:59:11.88781701 +0000 UTC m=+1.051189033 container died 4d7ee467a5da1c71eae61e2bde4a4745f1f5cdcea00e38efa49b03e8ca75b461 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_noyce, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Dec 06 09:59:11 compute-0 systemd[1]: libpod-4d7ee467a5da1c71eae61e2bde4a4745f1f5cdcea00e38efa49b03e8ca75b461.scope: Consumed 1.416s CPU time.
Dec 06 09:59:11 compute-0 sudo[242864]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ztejzoiwlesphllzosbkbhkokfdcsxdt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765015151.5794568-2289-83586415859573/AnsiballZ_file.py'
Dec 06 09:59:11 compute-0 sudo[242864]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:59:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-731972f4b213a4afbb82074f2109006d27761757140cf9dd52cc5b3f97302def-merged.mount: Deactivated successfully.
Dec 06 09:59:11 compute-0 podman[242514]: 2025-12-06 09:59:11.950238897 +0000 UTC m=+1.113610930 container remove 4d7ee467a5da1c71eae61e2bde4a4745f1f5cdcea00e38efa49b03e8ca75b461 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_noyce, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec 06 09:59:11 compute-0 systemd[1]: libpod-conmon-4d7ee467a5da1c71eae61e2bde4a4745f1f5cdcea00e38efa49b03e8ca75b461.scope: Deactivated successfully.
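
Note: the two short-lived containers above (boring_satoshi, then vigilant_noyce: create, init, start, attach, died, remove, all within about a second) are cephadm one-shot helpers run from the ceph image. The "167 167" and "{}" lines are their stdout (167 is the ceph UID/GID in the image), and the config-key set for mgr/cephadm/host.compute-0.devices right after points at a device-inventory pass. A sketch, assuming podman's journald messages keep this wording, to reconstruct such lifecycles:

import json
import subprocess
from collections import defaultdict

# Group podman lifecycle events by container ID so one-shot helpers stand
# out as create->...->died->remove bursts.
VERBS = ("create", "init", "start", "attach", "died", "remove")

out = subprocess.run(
    ["journalctl", "-o", "json", "--no-pager", "_COMM=podman"],
    capture_output=True, text=True, check=True,
).stdout

events = defaultdict(list)
for line in out.splitlines():
    msg = json.loads(line).get("MESSAGE", "")
    for verb in VERBS:
        token = f"container {verb} "
        if token in msg:
            cid = msg.split(token, 1)[1].split()[0]
            events[cid].append(verb)

for cid, seq in events.items():
    print(cid[:12], "->".join(seq))
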
Dec 06 09:59:12 compute-0 sudo[242227]: pam_unix(sudo:session): session closed for user root
Dec 06 09:59:12 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:59:12 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4748004780 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:59:12 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 06 09:59:12 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:59:12 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 06 09:59:12 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:59:12 compute-0 python3.9[242873]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:59:12 compute-0 sudo[242864]: pam_unix(sudo:session): session closed for user root
Dec 06 09:59:12 compute-0 sudo[242880]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 09:59:12 compute-0 sudo[242880]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:59:12 compute-0 sudo[242880]: pam_unix(sudo:session): session closed for user root
Dec 06 09:59:12 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v531: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:59:12 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:59:12 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 09:59:12 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:59:12.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 09:59:12 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:59:12 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 09:59:12 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:59:12.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
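
Note: these radosgw "HEAD / HTTP/1.0" pairs from 192.168.122.100 and .102 repeat every two seconds with an anonymous user, status 200, and sub-millisecond latency: the signature of load-balancer health probes (haproxy-style), not client traffic. A parser for the beast access lines, with the regex reconstructed from this log's exact format:

import re
import statistics

BEAST = re.compile(
    r'beast: \S+: (?P<ip>\S+) - (?P<user>\S+) \[(?P<ts>[^\]]+)\] '
    r'"(?P<req>[^"]+)" (?P<status>\d+) .* latency=(?P<lat>[\d.]+)s'
)

def probe_latency_ms(lines):
    # Median latency of the matched access-log lines, in milliseconds.
    lats = [float(m["lat"]) for m in map(BEAST.search, lines) if m]
    return statistics.median(lats) * 1000 if lats else None

sample = ('beast: 0x7f53e66225d0: 192.168.122.100 - anonymous '
          '[06/Dec/2025:09:59:12.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - '
          'latency=0.001000027s')
print(probe_latency_ms([sample]))  # 1.000027
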
Dec 06 09:59:12 compute-0 sudo[243054]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wijeyvaeqyfuakgidwpdrciijmlbeiql ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765015152.3372145-2460-269318533622484/AnsiballZ_file.py'
Dec 06 09:59:12 compute-0 sudo[243054]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:59:12 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:59:12 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4720002e50 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:59:12 compute-0 python3.9[243056]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:59:12 compute-0 sudo[243054]: pam_unix(sudo:session): session closed for user root
Dec 06 09:59:13 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:59:13 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 09:59:13 compute-0 ceph-mon[74327]: pgmap v531: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:59:13 compute-0 sudo[243207]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wjgzydfjalqxzzqgpegvgsugrlikenrz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765015153.081-2460-5452673256960/AnsiballZ_file.py'
Dec 06 09:59:13 compute-0 sudo[243207]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:59:13 compute-0 python3.9[243209]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:59:13 compute-0 sudo[243207]: pam_unix(sudo:session): session closed for user root
Dec 06 09:59:13 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:59:13 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f47300047a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:59:14 compute-0 sudo[243360]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mkgqwoibpqmlrjyhdhnlnueknnedadix ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765015153.7300477-2460-49288428661129/AnsiballZ_file.py'
Dec 06 09:59:14 compute-0 sudo[243360]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:59:14 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:59:14 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4724004180 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:59:14 compute-0 python3.9[243362]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:59:14 compute-0 sudo[243360]: pam_unix(sudo:session): session closed for user root
Dec 06 09:59:14 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v532: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:59:14 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:59:14 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:59:14 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:59:14.374 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:59:14 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:59:14 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 09:59:14 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:59:14.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 09:59:14 compute-0 sudo[243512]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ojvyqsxedzegivlfipingcgvihahitbl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765015154.3454278-2460-242350085667874/AnsiballZ_file.py'
Dec 06 09:59:14 compute-0 sudo[243512]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:59:14 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:59:14 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4748004780 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:59:14 compute-0 python3.9[243514]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:59:14 compute-0 sudo[243512]: pam_unix(sudo:session): session closed for user root
Dec 06 09:59:15 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 09:59:15 compute-0 ceph-mon[74327]: pgmap v532: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:59:15 compute-0 sudo[243666]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-upqwohbvshefipysodzhyfkzxotuhiqo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765015154.9867911-2460-129392464760109/AnsiballZ_file.py'
Dec 06 09:59:15 compute-0 sudo[243666]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:59:15 compute-0 python3.9[243668]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:59:15 compute-0 sudo[243666]: pam_unix(sudo:session): session closed for user root
Dec 06 09:59:15 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:59:15 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4720002e50 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:59:16 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:59:16 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f47300047c0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:59:16 compute-0 sudo[243819]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ilejvqakjlpcginabuuxmdnruapzszis ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765015155.8212361-2460-243634391336449/AnsiballZ_file.py'
Dec 06 09:59:16 compute-0 sudo[243819]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:59:16 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v533: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:59:16 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:59:16 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 09:59:16 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:59:16.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 09:59:16 compute-0 python3.9[243821]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:59:16 compute-0 sudo[243819]: pam_unix(sudo:session): session closed for user root
Dec 06 09:59:16 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:59:16 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:59:16 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:59:16.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:59:16 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:59:16 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4754001320 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:59:16 compute-0 sudo[243971]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mqgzomgtmebldstzazalynltevwiirlj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765015156.5653021-2460-281241048135023/AnsiballZ_file.py'
Dec 06 09:59:16 compute-0 sudo[243971]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:59:17 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:59:17.089Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
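
Note: alertmanager could not deliver to the Ceph dashboard webhook receivers on compute-1 and compute-2; both POSTs to port 8443 timed out after two retries. That can simply mean no dashboard is bound on those hosts (the module serves from the active mgr, here compute-0), but a timeout rather than a refusal can also indicate a firewall drop. A reachability probe distinguishes the two:

import socket

# Probe the two receivers named in the dispatcher error above: an immediate
# "connection refused" means nothing is listening; a timeout suggests the
# packets are being dropped on the way.
for host in ("compute-1.ctlplane.example.com", "compute-2.ctlplane.example.com"):
    try:
        with socket.create_connection((host, 8443), timeout=3):
            print(host, "tcp/8443 reachable")
    except OSError as exc:
        print(host, "tcp/8443 unreachable:", exc)
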
Dec 06 09:59:17 compute-0 python3.9[243973]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:59:17 compute-0 sudo[243971]: pam_unix(sudo:session): session closed for user root
Dec 06 09:59:17 compute-0 ceph-mon[74327]: pgmap v533: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:59:17 compute-0 sudo[244125]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gyiudryxgvhaoekkfcqeikdyhlmsyyoa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765015157.3105376-2460-75112352423754/AnsiballZ_file.py'
Dec 06 09:59:17 compute-0 sudo[244125]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:59:17 compute-0 python3.9[244127]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:59:17 compute-0 sudo[244125]: pam_unix(sudo:session): session closed for user root
Dec 06 09:59:17 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:59:17 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4748004780 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:59:18 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:59:18 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4748004780 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:59:18 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v534: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 09:59:18 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:59:18 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:59:18 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:59:18.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:59:18 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:59:18 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 09:59:18 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:59:18.418 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 09:59:18 compute-0 sudo[244278]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cevaxgraikuynlagcufgvdaaynxxexok ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765015158.1929815-2634-231532608607879/AnsiballZ_command.py'
Dec 06 09:59:18 compute-0 sudo[244278]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:59:18 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:59:18 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f472c002830 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:59:18 compute-0 python3.9[244280]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then
                                               systemctl disable --now certmonger.service
                                               test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service
                                             fi
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 09:59:18 compute-0 sudo[244278]: pam_unix(sudo:session): session closed for user root
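
Note: the shell task above disables certmonger only when it is currently active, and masks it only if /etc/systemd/system/certmonger.service does not already exist; masking plants a /dev/null symlink at exactly that path, so the test avoids clobbering a local unit file. The same logic, step for step:

import subprocess
from pathlib import Path

# Stop/disable certmonger only if active; mask unless a local unit file
# already occupies the path the mask symlink would use.
active = subprocess.run(
    ["systemctl", "is-active", "certmonger.service"],
    capture_output=True,
).returncode == 0
if active:
    subprocess.run(["systemctl", "disable", "--now", "certmonger.service"],
                   check=True)
    if not Path("/etc/systemd/system/certmonger.service").is_file():
        subprocess.run(["systemctl", "mask", "certmonger.service"], check=True)
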
Dec 06 09:59:19 compute-0 ceph-mon[74327]: pgmap v534: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 09:59:19 compute-0 python3.9[244434]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Dec 06 09:59:19 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:59:19 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4754001320 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:59:20 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:59:20 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4720002e50 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:59:20 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 09:59:20 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v535: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:59:20 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:59:20 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 09:59:20 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:59:20.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 09:59:20 compute-0 sudo[244584]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qutwakbqzqjwxdtrnxjvnpqdbsscyrfu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765015160.0489085-2688-228607735653684/AnsiballZ_systemd_service.py'
Dec 06 09:59:20 compute-0 sudo[244584]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:59:20 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:59:20 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 09:59:20 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:59:20.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 09:59:20 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:59:20 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4748004780 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:59:20 compute-0 python3.9[244586]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 06 09:59:20 compute-0 systemd[1]: Reloading.
Dec 06 09:59:20 compute-0 systemd-sysv-generator[244612]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 09:59:20 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:09:59:20] "GET /metrics HTTP/1.1" 200 48264 "" "Prometheus/2.51.0"
Dec 06 09:59:20 compute-0 systemd-rc-local-generator[244608]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 09:59:20 compute-0 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:09:59:20] "GET /metrics HTTP/1.1" 200 48264 "" "Prometheus/2.51.0"
Dec 06 09:59:21 compute-0 sudo[244584]: pam_unix(sudo:session): session closed for user root
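
Note: ansible.builtin.systemd_service with daemon_reload=True and name/state unset maps to a bare daemon-reload, which re-runs every systemd generator. The sysv-generator and rc-local-generator lines above are routine EL9 reload noise (a legacy initscript for 'network', a non-executable rc.local), not failures, and they reappear on every reload:

import subprocess

# The whole effect of the module call above: ask PID 1 to reload its unit
# configuration, which re-runs the generators and re-emits their warnings.
subprocess.run(["systemctl", "daemon-reload"], check=True)
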
Dec 06 09:59:21 compute-0 ceph-mon[74327]: pgmap v535: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:59:21 compute-0 sudo[244773]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-daaatxhmeocpnutpbnbeechmqlmvrrzd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765015161.4185374-2712-99844719134479/AnsiballZ_command.py'
Dec 06 09:59:21 compute-0 sudo[244773]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:59:21 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:59:21 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f472c002830 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:59:21 compute-0 python3.9[244775]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 09:59:21 compute-0 sudo[244773]: pam_unix(sudo:session): session closed for user root
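
Note: with the unit files deleted and the daemon reloaded, each tripleo_nova_*.service now gets a reset-failed so no stale failed state lingers in the manager (a unit that had failed before removal would otherwise still show in systemctl --failed). The per-unit commands that follow collapse to:

import subprocess

# Unit names taken from the reset-failed commands in this section;
# check=False because reset-failed on an unknown unit is harmless here.
for unit in ("tripleo_nova_compute", "tripleo_nova_migration_target",
             "tripleo_nova_api_cron", "tripleo_nova_api",
             "tripleo_nova_conductor", "tripleo_nova_metadata",
             "tripleo_nova_scheduler", "tripleo_nova_vnc_proxy"):
    subprocess.run(["systemctl", "reset-failed", f"{unit}.service"],
                   check=False)
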
Dec 06 09:59:22 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:59:22 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4754001320 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:59:22 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v536: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:59:22 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:59:22 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 09:59:22 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:59:22.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 09:59:22 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:59:22 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:59:22 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:59:22.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:59:22 compute-0 sudo[244926]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bksszxnwgibchcnrvcjduoomivqcpsif ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765015162.1569517-2712-86268339801854/AnsiballZ_command.py'
Dec 06 09:59:22 compute-0 sudo[244926]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:59:22 compute-0 python3.9[244928]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_migration_target.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 09:59:22 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:59:22 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4720002e50 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:59:22 compute-0 sudo[244926]: pam_unix(sudo:session): session closed for user root
Dec 06 09:59:23 compute-0 sudo[245080]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rgsqafywtsatwhndcbujhcxtqdfwvsne ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765015162.8287346-2712-68078536113426/AnsiballZ_command.py'
Dec 06 09:59:23 compute-0 sudo[245080]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:59:23 compute-0 python3.9[245082]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api_cron.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 09:59:23 compute-0 sudo[245080]: pam_unix(sudo:session): session closed for user root
Dec 06 09:59:23 compute-0 podman[245084]: 2025-12-06 09:59:23.435342284 +0000 UTC m=+0.096262588 container health_status ed08b04f63d3695f0aa2f04fcc706897af4aa24f286fa3cd10a4323ee2c868ab (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
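
Note: the health_status=healthy line comes from podman running the container's configured healthcheck (the /openstack/healthcheck script mounted in) and reporting a failing streak of 0. The config_data label it prints is a Python dict literal rather than JSON, so it parses cleanly with ast.literal_eval; the inspect format string below is an assumption about how to pull a single label out:

import ast
import json
import subprocess

# Extract and pretty-print the healthcheck stanza of the config_data label
# on the long-running ovn_controller container named in the log.
raw = subprocess.run(
    ["podman", "inspect", "ovn_controller",
     "--format", '{{index .Config.Labels "config_data"}}'],
    capture_output=True, text=True, check=True,
).stdout
cfg = ast.literal_eval(raw)
print(json.dumps(cfg["healthcheck"], indent=2))
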
Dec 06 09:59:23 compute-0 ceph-mon[74327]: pgmap v536: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:59:23 compute-0 sudo[245261]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ztdejkgtmoslivvbwxiowbnaqhpibzhr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765015163.491268-2712-201304505847870/AnsiballZ_command.py'
Dec 06 09:59:23 compute-0 sudo[245261]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:59:23 compute-0 ceph-mgr[74618]: [balancer INFO root] Optimize plan auto_2025-12-06_09:59:23
Dec 06 09:59:23 compute-0 ceph-mgr[74618]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 06 09:59:23 compute-0 ceph-mgr[74618]: [balancer INFO root] do_upmap
Dec 06 09:59:23 compute-0 ceph-mgr[74618]: [balancer INFO root] pools ['backups', '.mgr', 'cephfs.cephfs.meta', 'default.rgw.control', 'default.rgw.log', 'volumes', 'images', 'vms', '.nfs', 'default.rgw.meta', '.rgw.root', 'cephfs.cephfs.data']
Dec 06 09:59:23 compute-0 ceph-mgr[74618]: [balancer INFO root] prepared 0/10 upmap changes
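
Note: one balancer pass: mode upmap, misplaced ceiling 5%, and 0 of at most 10 candidate upmap changes prepared, which is the expected no-op on a cluster where all 337 PGs are already active+clean and evenly placed. The same information is available on demand (assumes a local ceph CLI and admin keyring, as on this node):

import json
import subprocess

# Read back what the balancer logged above.
status = json.loads(subprocess.run(
    ["ceph", "balancer", "status", "--format", "json"],
    capture_output=True, text=True, check=True,
).stdout)
print(status["mode"], status["active"])
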
Dec 06 09:59:23 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:59:23 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4748004780 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:59:23 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 06 09:59:23 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 09:59:23 compute-0 python3.9[245263]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 09:59:23 compute-0 sudo[245261]: pam_unix(sudo:session): session closed for user root
Dec 06 09:59:23 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 09:59:23 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 09:59:24 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 09:59:24 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 09:59:24 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 09:59:24 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 09:59:24 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:59:24 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f472c002830 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:59:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] _maybe_adjust
Dec 06 09:59:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 09:59:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 06 09:59:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 09:59:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 09:59:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 09:59:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 09:59:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 09:59:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 09:59:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 09:59:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 09:59:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 09:59:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Dec 06 09:59:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 09:59:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 09:59:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 09:59:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec 06 09:59:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 09:59:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 06 09:59:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 09:59:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec 06 09:59:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 09:59:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 09:59:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 09:59:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
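
Note: each pg_autoscaler line multiplies the pool's share of raw capacity by its bias and by a cluster-wide PG budget before quantizing toward the current pg_num. The numbers above are consistent with budget = mon_target_pg_per_osd × OSD count = 100 × 3 = 300 (both factors are assumptions: 100 is the Ceph default, and 3 OSDs fits the 60 GiB raw total): 7.185749983720779e-06 × 1.0 × 300 ≈ 0.0021557 for '.mgr' and 5.087256625643029e-07 × 4.0 × 300 ≈ 0.00061047 for 'cephfs.cephfs.meta', exactly as logged. A sketch reproducing the printed targets:

TARGET_PG_PER_OSD = 100  # assumption: Ceph default mon_target_pg_per_osd
NUM_OSDS = 3             # assumption: matches this cluster's 60 GiB raw

def pg_target(capacity_ratio: float, bias: float) -> float:
    # capacity_ratio and bias exactly as printed by the pg_autoscaler lines
    return capacity_ratio * bias * TARGET_PG_PER_OSD * NUM_OSDS

print(pg_target(7.185749983720779e-06, 1.0))  # ~0.0021557 ('.mgr')
print(pg_target(5.087256625643029e-07, 4.0))  # ~0.00061047 ('cephfs.cephfs.meta')
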
Dec 06 09:59:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 06 09:59:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 09:59:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 09:59:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 09:59:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 09:59:24 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v537: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:59:24 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:59:24 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:59:24 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:59:24.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:59:24 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:59:24 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 09:59:24 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:59:24.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 09:59:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 06 09:59:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 09:59:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 09:59:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 09:59:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 09:59:24 compute-0 sudo[245414]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vgblwqxfvguazscczdlbjraclcpyqxth ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765015164.3615406-2712-85259981735592/AnsiballZ_command.py'
Dec 06 09:59:24 compute-0 sudo[245414]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:59:24 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 09:59:24 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:59:24 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4754001320 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:59:24 compute-0 python3.9[245416]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_conductor.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 09:59:24 compute-0 sudo[245414]: pam_unix(sudo:session): session closed for user root
Dec 06 09:59:25 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 09:59:25 compute-0 sudo[245568]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ofledxqswpjjltzvbimqafexebjctbzq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765015164.9633546-2712-153682223836641/AnsiballZ_command.py'
Dec 06 09:59:25 compute-0 sudo[245568]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:59:25 compute-0 python3.9[245570]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_metadata.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 09:59:25 compute-0 sudo[245568]: pam_unix(sudo:session): session closed for user root
Dec 06 09:59:25 compute-0 ceph-mon[74327]: pgmap v537: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:59:25 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:59:25 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4720002e50 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:59:25 compute-0 sudo[245722]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iaocccesyjvfbjmimxdxzyntqhuutebd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765015165.6000159-2712-86815001911912/AnsiballZ_command.py'
Dec 06 09:59:25 compute-0 sudo[245722]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:59:26 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:59:26 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4720002e50 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:59:26 compute-0 python3.9[245724]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_scheduler.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 09:59:26 compute-0 sudo[245722]: pam_unix(sudo:session): session closed for user root
Dec 06 09:59:26 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v538: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:59:26 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:59:26 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 09:59:26 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:59:26.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 09:59:26 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:59:26 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:59:26 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:59:26.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:59:26 compute-0 sudo[245875]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rihlbfmycnygqpagfixtzylzflcwieyt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765015166.2370982-2712-168515965753444/AnsiballZ_command.py'
Dec 06 09:59:26 compute-0 sudo[245875]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:59:26 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:59:26 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f472c002830 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:59:26 compute-0 python3.9[245877]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_vnc_proxy.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 09:59:26 compute-0 sudo[245875]: pam_unix(sudo:session): session closed for user root
Dec 06 09:59:27 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:59:27.090Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 06 09:59:27 compute-0 ceph-mon[74327]: pgmap v538: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:59:27 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:59:27 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f472c002830 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:59:28 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:59:28 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4720002e50 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:59:28 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v539: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 09:59:28 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:59:28 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 09:59:28 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:59:28.387 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 09:59:28 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:59:28 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:59:28 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:59:28.431 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:59:28 compute-0 sudo[246030]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qnvpdlthdbeqkfqmxelgfmcfrkkezitt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765015168.133444-2919-56992852623661/AnsiballZ_file.py'
Dec 06 09:59:28 compute-0 sudo[246030]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:59:28 compute-0 python3.9[246032]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 06 09:59:28 compute-0 sudo[246030]: pam_unix(sudo:session): session closed for user root
Dec 06 09:59:28 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:59:28 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4748004780 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:59:28 compute-0 ceph-mon[74327]: pgmap v539: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 09:59:29 compute-0 sudo[246183]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ggfcbhycdyvavjfvwumdahhqgrqjwagw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765015168.8907263-2919-256020905861877/AnsiballZ_file.py'
Dec 06 09:59:29 compute-0 sudo[246183]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:59:29 compute-0 podman[246185]: 2025-12-06 09:59:29.245159751 +0000 UTC m=+0.057073552 container health_status ec1e66d2175087ad7a28a9a6b5a784e30128df3f5e268959d009e364cd5120f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent)
Dec 06 09:59:29 compute-0 python3.9[246186]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/containers setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 06 09:59:29 compute-0 sudo[246183]: pam_unix(sudo:session): session closed for user root
Dec 06 09:59:29 compute-0 sudo[246274]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 09:59:29 compute-0 sudo[246274]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:59:29 compute-0 sudo[246274]: pam_unix(sudo:session): session closed for user root
Dec 06 09:59:29 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:59:29 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4748004780 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:59:29 compute-0 sudo[246380]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gdgmptfjrcqqvirdwtsuijnczbqoengt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765015169.5743828-2919-143854426835558/AnsiballZ_file.py'
Dec 06 09:59:29 compute-0 sudo[246380]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:59:30 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:59:30 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4754008dc0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:59:30 compute-0 python3.9[246382]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova_nvme_cleaner setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 06 09:59:30 compute-0 sudo[246380]: pam_unix(sudo:session): session closed for user root
Dec 06 09:59:30 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 09:59:30 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v540: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:59:30 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:59:30 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 09:59:30 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:59:30.390 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 09:59:30 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:59:30 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:59:30 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:59:30.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:59:30 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:59:30 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4720002e50 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:59:30 compute-0 sudo[246532]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ooomitwaxxzhpinozjqdzjffwzvcpubg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765015170.404671-2985-36611991518597/AnsiballZ_file.py'
Dec 06 09:59:30 compute-0 sudo[246532]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:59:30 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:09:59:30] "GET /metrics HTTP/1.1" 200 48260 "" "Prometheus/2.51.0"
Dec 06 09:59:30 compute-0 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:09:59:30] "GET /metrics HTTP/1.1" 200 48260 "" "Prometheus/2.51.0"
Dec 06 09:59:30 compute-0 python3.9[246534]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 06 09:59:31 compute-0 sudo[246532]: pam_unix(sudo:session): session closed for user root
Dec 06 09:59:31 compute-0 ceph-mon[74327]: pgmap v540: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:59:31 compute-0 sudo[246686]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hcpyyzqtzmisikvtjyyegezprqhxoyjb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765015171.170623-2985-33165209451333/AnsiballZ_file.py'
Dec 06 09:59:31 compute-0 sudo[246686]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:59:31 compute-0 python3.9[246688]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/_nova_secontext setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 06 09:59:31 compute-0 sudo[246686]: pam_unix(sudo:session): session closed for user root
Dec 06 09:59:31 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:59:31 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f472c003820 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:59:32 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:59:32 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4748004780 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:59:32 compute-0 sudo[246838]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aythbygsomkoovyntkkcfnzdvuiyoagy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765015171.8415635-2985-131078508707862/AnsiballZ_file.py'
Dec 06 09:59:32 compute-0 sudo[246838]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:59:32 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v541: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:59:32 compute-0 python3.9[246840]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova/instances setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 06 09:59:32 compute-0 sudo[246838]: pam_unix(sudo:session): session closed for user root
Dec 06 09:59:32 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:59:32 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:59:32 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:59:32.392 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:59:32 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:59:32 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 09:59:32 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:59:32.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 09:59:32 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:59:32 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4754008dc0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:59:32 compute-0 sudo[246990]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bcdoxpedviswappxyrwosqjzwnstuafg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765015172.467213-2985-257352133456490/AnsiballZ_file.py'
Dec 06 09:59:32 compute-0 sudo[246990]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:59:32 compute-0 python3.9[246992]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/etc/ceph setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 06 09:59:32 compute-0 sudo[246990]: pam_unix(sudo:session): session closed for user root
Dec 06 09:59:33 compute-0 ceph-mon[74327]: pgmap v541: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:59:33 compute-0 sudo[247144]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ibomjphikpfqdekdnrtakzxzxeafnmkw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765015173.1984837-2985-273582274257452/AnsiballZ_file.py'
Dec 06 09:59:33 compute-0 sudo[247144]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:59:33 compute-0 python3.9[247146]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec 06 09:59:33 compute-0 sudo[247144]: pam_unix(sudo:session): session closed for user root
Dec 06 09:59:33 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:59:33 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4720002e50 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:59:34 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:59:34 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f472c003820 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:59:34 compute-0 sudo[247296]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-umymsylhjwdewjdxxrmcmvfatnujvkeq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765015173.8322353-2985-48392861338134/AnsiballZ_file.py'
Dec 06 09:59:34 compute-0 sudo[247296]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:59:34 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v542: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:59:34 compute-0 python3.9[247298]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/nvme setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec 06 09:59:34 compute-0 sudo[247296]: pam_unix(sudo:session): session closed for user root
Dec 06 09:59:34 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:59:34 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:59:34 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:59:34.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:59:34 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:59:34 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:59:34 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:59:34.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:59:34 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:59:34 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4748004780 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:59:34 compute-0 sudo[247461]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gcbzirjevethilczfgzfikkvjnekqpbc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765015174.4563117-2985-240116308187617/AnsiballZ_file.py'
Dec 06 09:59:34 compute-0 sudo[247461]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:59:34 compute-0 podman[247422]: 2025-12-06 09:59:34.75749909 +0000 UTC m=+0.088776255 container health_status a9cf33c1bf7891b3fdd9db4717ad5f3587fe9e5acf9171920c6d6334b70a502a (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=multipathd, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 09:59:34 compute-0 python3.9[247468]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/run/openvswitch setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec 06 09:59:34 compute-0 sudo[247461]: pam_unix(sudo:session): session closed for user root
Dec 06 09:59:35 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 09:59:35 compute-0 ceph-mon[74327]: pgmap v542: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:59:35 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:59:35 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4748004780 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:59:36 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:59:36 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4748004780 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:59:36 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v543: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:59:36 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:59:36 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 09:59:36 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:59:36.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 09:59:36 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:59:36 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 09:59:36 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:59:36.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 09:59:36 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:59:36 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f472c003820 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:59:37 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:59:37.090Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 06 09:59:37 compute-0 ceph-mon[74327]: pgmap v543: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:59:37 compute-0 ceph-osd[82803]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 06 09:59:37 compute-0 ceph-osd[82803]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Cumulative writes: 9187 writes, 35K keys, 9187 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
                                           Cumulative WAL: 9187 writes, 2104 syncs, 4.37 writes per sync, written: 0.02 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 776 writes, 1212 keys, 776 commit groups, 1.0 writes per commit group, ingest: 0.40 MB, 0.00 MB/s
                                           Interval WAL: 776 writes, 372 syncs, 2.09 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55fcdd7db350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 7.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55fcdd7db350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 7.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55fcdd7db350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 7.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55fcdd7db350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 7.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55fcdd7db350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 7.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55fcdd7db350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 7.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55fcdd7db350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 7.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55fcdd7da9b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 9e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55fcdd7da9b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 9e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.021       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.021       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.02              0.00         1    0.021       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55fcdd7da9b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 9e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55fcdd7db350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 7.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55fcdd7db350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 7.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Dec 06 09:59:37 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:59:37 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4754008dc0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:59:38 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:59:38 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4748004780 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:59:38 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v544: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 09:59:38 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:59:38 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 09:59:38 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:59:38.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 09:59:38 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:59:38 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:59:38 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:59:38.446 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:59:38 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:59:38 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4720002e50 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:59:38 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 06 09:59:38 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 09:59:39 compute-0 sshd-session[247500]: banner exchange: Connection from 47.93.97.12 port 56394: invalid format
Dec 06 09:59:39 compute-0 ceph-mon[74327]: pgmap v544: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 09:59:39 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 09:59:39 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:59:39 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f472c003820 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:59:40 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:59:40 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4754008dc0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:59:40 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 09:59:40 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v545: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:59:40 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:59:40 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 09:59:40 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:59:40.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 09:59:40 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:59:40 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 09:59:40 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:59:40.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 09:59:40 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:59:40 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4748004780 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:59:40 compute-0 sudo[247627]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dyovakchaeampwdvwyzickijaqnnqxyu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765015180.37187-3310-152927359128948/AnsiballZ_getent.py'
Dec 06 09:59:40 compute-0 sudo[247627]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:59:40 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:09:59:40] "GET /metrics HTTP/1.1" 200 48263 "" "Prometheus/2.51.0"
Dec 06 09:59:40 compute-0 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:09:59:40] "GET /metrics HTTP/1.1" 200 48263 "" "Prometheus/2.51.0"
Dec 06 09:59:40 compute-0 python3.9[247629]: ansible-ansible.builtin.getent Invoked with database=passwd key=nova fail_key=True service=None split=None
Dec 06 09:59:40 compute-0 sudo[247627]: pam_unix(sudo:session): session closed for user root
Dec 06 09:59:41 compute-0 ceph-mon[74327]: pgmap v545: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:59:41 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:59:41 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4720002e50 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:59:42 compute-0 sudo[247782]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gskwlvcqreuzqercgwaupknilzzxnvgf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765015181.4390197-3334-151371688853364/AnsiballZ_group.py'
Dec 06 09:59:42 compute-0 sudo[247782]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:59:42 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:59:42 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f472c003820 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:59:42 compute-0 python3.9[247784]: ansible-ansible.builtin.group Invoked with gid=42436 name=nova state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Dec 06 09:59:42 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v546: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:59:42 compute-0 groupadd[247785]: group added to /etc/group: name=nova, GID=42436
Dec 06 09:59:42 compute-0 groupadd[247785]: group added to /etc/gshadow: name=nova
Dec 06 09:59:42 compute-0 groupadd[247785]: new group: name=nova, GID=42436
Dec 06 09:59:42 compute-0 sudo[247782]: pam_unix(sudo:session): session closed for user root
Dec 06 09:59:42 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:59:42 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:59:42 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:59:42.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:59:42 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:59:42 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:59:42 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:59:42.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:59:42 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:59:42 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f472c003820 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:59:43 compute-0 sudo[247940]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-naenyrmoalxsyxnyeywelcuftdvkdsua ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765015182.5070484-3358-148653759081448/AnsiballZ_user.py'
Dec 06 09:59:43 compute-0 sudo[247940]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:59:43 compute-0 python3.9[247942]: ansible-ansible.builtin.user Invoked with comment=nova user group=nova groups=['libvirt'] name=nova shell=/bin/sh state=present uid=42436 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Dec 06 09:59:43 compute-0 useradd[247945]: new user: name=nova, UID=42436, GID=42436, home=/home/nova, shell=/bin/sh, from=/dev/pts/0
Dec 06 09:59:43 compute-0 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 06 09:59:43 compute-0 useradd[247945]: add 'nova' to group 'libvirt'
Dec 06 09:59:43 compute-0 useradd[247945]: add 'nova' to shadow group 'libvirt'
Dec 06 09:59:43 compute-0 sudo[247940]: pam_unix(sudo:session): session closed for user root
Dec 06 09:59:43 compute-0 ceph-mon[74327]: pgmap v546: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:59:43 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:59:43 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4748004780 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:59:44 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:59:44 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4720004340 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:59:44 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v547: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:59:44 compute-0 sshd-session[247978]: Accepted publickey for zuul from 192.168.122.30 port 37076 ssh2: ECDSA SHA256:r1j7aLsKAM+XxDNbzEU5vWGpGNCOaIBwc7FZdATPttA
Dec 06 09:59:44 compute-0 systemd-logind[795]: New session 55 of user zuul.
Dec 06 09:59:44 compute-0 systemd[1]: Started Session 55 of User zuul.
Dec 06 09:59:44 compute-0 sshd-session[247978]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 06 09:59:44 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:59:44 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 09:59:44 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:59:44.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 09:59:44 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:59:44 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:59:44 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:59:44.455 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:59:44 compute-0 sshd-session[247981]: Received disconnect from 192.168.122.30 port 37076:11: disconnected by user
Dec 06 09:59:44 compute-0 sshd-session[247981]: Disconnected from user zuul 192.168.122.30 port 37076
Dec 06 09:59:44 compute-0 sshd-session[247978]: pam_unix(sshd:session): session closed for user zuul
Dec 06 09:59:44 compute-0 systemd[1]: session-55.scope: Deactivated successfully.
Dec 06 09:59:44 compute-0 systemd-logind[795]: Session 55 logged out. Waiting for processes to exit.
Dec 06 09:59:44 compute-0 systemd-logind[795]: Removed session 55.
Dec 06 09:59:44 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:59:44 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4720004340 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:59:45 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 09:59:45 compute-0 python3.9[248132]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/config.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:59:45 compute-0 ceph-mon[74327]: pgmap v547: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:59:45 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:59:45 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4754008dc0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:59:45 compute-0 python3.9[248254]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/config.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765015184.768256-3433-224706275421914/.source.json follow=False _original_basename=config.json.j2 checksum=b51012bfb0ca26296dcf3793a2f284446fb1395e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 06 09:59:46 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:59:46 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4748004780 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:59:46 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v548: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:59:46 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:59:46 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 09:59:46 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:59:46.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 09:59:46 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:59:46 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:59:46 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:59:46.458 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:59:46 compute-0 python3.9[248404]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova-blank.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:59:46 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:59:46 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f472c003820 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:59:46 compute-0 python3.9[248480]: ansible-ansible.legacy.file Invoked with mode=0644 setype=container_file_t dest=/var/lib/openstack/config/nova/nova-blank.conf _original_basename=nova-blank.conf recurse=False state=file path=/var/lib/openstack/config/nova/nova-blank.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 06 09:59:47 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:59:47.091Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 06 09:59:47 compute-0 ceph-mon[74327]: pgmap v548: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 09:59:47 compute-0 python3.9[248632]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/ssh-config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:59:47 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:59:47 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4720004340 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:59:48 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:59:48 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4754008dc0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:59:48 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v549: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Dec 06 09:59:48 compute-0 python3.9[248754]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/ssh-config mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765015187.1660318-3433-22739062391989/.source follow=False _original_basename=ssh-config checksum=4297f735c41bdc1ff52d72e6f623a02242f37958 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 06 09:59:48 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:59:48 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:59:48 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:59:48.409 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:59:48 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:59:48 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 09:59:48 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:59:48.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 09:59:48 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:59:48 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4724002690 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:59:48 compute-0 python3.9[248904]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/02-nova-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:59:49 compute-0 python3.9[249026]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/02-nova-host-specific.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765015188.4549277-3433-228415729516958/.source.conf follow=False _original_basename=02-nova-host-specific.conf.j2 checksum=1feba546d0beacad9258164ab79b8a747685ccc8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 06 09:59:49 compute-0 ceph-mon[74327]: pgmap v549: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Dec 06 09:59:49 compute-0 sudo[249104]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 09:59:49 compute-0 sudo[249104]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 09:59:49 compute-0 sudo[249104]: pam_unix(sudo:session): session closed for user root
Dec 06 09:59:49 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:59:49 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f472c003820 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:59:50 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:59:50 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4720004340 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:59:50 compute-0 python3.9[249202]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova_statedir_ownership.py follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:59:50 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 09:59:50 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v550: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec 06 09:59:50 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:59:50 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:59:50 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:59:50.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:59:50 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:59:50 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:59:50 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:59:50.463 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:59:50 compute-0 python3.9[249326]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/nova_statedir_ownership.py mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765015189.6680846-3433-222025138862778/.source.py follow=False _original_basename=nova_statedir_ownership.py checksum=c6c8a3cfefa5efd60ceb1408c4e977becedb71e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 06 09:59:50 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:59:50 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4720004340 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:59:50 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:09:59:50] "GET /metrics HTTP/1.1" 200 48263 "" "Prometheus/2.51.0"
Dec 06 09:59:50 compute-0 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:09:59:50] "GET /metrics HTTP/1.1" 200 48263 "" "Prometheus/2.51.0"
Dec 06 09:59:51 compute-0 python3.9[249476]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/run-on-host follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:59:51 compute-0 ceph-mon[74327]: pgmap v550: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec 06 09:59:51 compute-0 python3.9[249599]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/run-on-host mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765015190.7157543-3433-6976051288526/.source follow=False _original_basename=run-on-host checksum=93aba8edc83d5878604a66d37fea2f12b60bdea2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 06 09:59:51 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-haproxy-nfs-cephfs-compute-0-fzuvue[96127]: [WARNING] 339/095951 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Dec 06 09:59:51 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:59:51 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4730004000 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:59:52 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:59:52 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4750001ac0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:59:52 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v551: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec 06 09:59:52 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:59:52 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 09:59:52 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:59:52.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 09:59:52 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:59:52 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 09:59:52 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:59:52.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 09:59:52 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:59:52 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f472c004530 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:59:52 compute-0 sudo[249749]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lcywfsgprzmpovwrkxpfeccnblcpmzvm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765015192.5700903-3682-51967157630182/AnsiballZ_file.py'
Dec 06 09:59:52 compute-0 sudo[249749]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:59:53 compute-0 python3.9[249751]: ansible-ansible.builtin.file Invoked with group=nova mode=0700 owner=nova path=/home/nova/.ssh state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:59:53 compute-0 sudo[249749]: pam_unix(sudo:session): session closed for user root
Dec 06 09:59:53 compute-0 ceph-mon[74327]: pgmap v551: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec 06 09:59:53 compute-0 sudo[249914]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-egjfbnaolvourvkdyhaphwyuqloqxmxl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765015193.356391-3706-259415426909008/AnsiballZ_copy.py'
Dec 06 09:59:53 compute-0 sudo[249914]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:59:53 compute-0 podman[249877]: 2025-12-06 09:59:53.763753155 +0000 UTC m=+0.125435231 container health_status ed08b04f63d3695f0aa2f04fcc706897af4aa24f286fa3cd10a4323ee2c868ab (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Dec 06 09:59:53 compute-0 python3.9[249920]: ansible-ansible.legacy.copy Invoked with dest=/home/nova/.ssh/authorized_keys group=nova mode=0600 owner=nova remote_src=True src=/var/lib/openstack/config/nova/ssh-publickey backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:59:53 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:59:53 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4720004340 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:59:53 compute-0 sudo[249914]: pam_unix(sudo:session): session closed for user root
Dec 06 09:59:53 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 06 09:59:53 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 09:59:53 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 09:59:53 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 09:59:54 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 09:59:54 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 09:59:54 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 09:59:54 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 09:59:54 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:59:54 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4730004000 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:59:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:59:54.232 162267 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 09:59:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:59:54.233 162267 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 09:59:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 09:59:54.233 162267 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 09:59:54 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v552: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec 06 09:59:54 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:59:54 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 09:59:54 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:59:54.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 09:59:54 compute-0 sudo[250080]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ksgwzyahcfuqvwmqagyqfpzudzpjeies ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765015194.1610506-3730-9632512060940/AnsiballZ_stat.py'
Dec 06 09:59:54 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:59:54 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:59:54 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:59:54.467 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:59:54 compute-0 sudo[250080]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:59:54 compute-0 python3.9[250082]: ansible-ansible.builtin.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 06 09:59:54 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 09:59:54 compute-0 sudo[250080]: pam_unix(sudo:session): session closed for user root
Dec 06 09:59:54 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:59:54 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4750002990 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:59:55 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 09:59:55 compute-0 sudo[250233]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qdkbhvqewnkgjlpwqbhxhsfjbksxjlwq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765015194.9023266-3754-56958980149764/AnsiballZ_stat.py'
Dec 06 09:59:55 compute-0 sudo[250233]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:59:55 compute-0 python3.9[250235]: ansible-ansible.legacy.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:59:55 compute-0 sudo[250233]: pam_unix(sudo:session): session closed for user root
Dec 06 09:59:55 compute-0 ceph-mon[74327]: pgmap v552: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec 06 09:59:55 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:59:55 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f472c004530 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:59:55 compute-0 sudo[250357]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mxsoxbrmlgtgxlferkjlwxzpgyiedpjx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765015194.9023266-3754-56958980149764/AnsiballZ_copy.py'
Dec 06 09:59:55 compute-0 sudo[250357]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:59:56 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:59:56 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f472c004530 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:59:56 compute-0 python3.9[250359]: ansible-ansible.legacy.copy Invoked with attributes=+i dest=/var/lib/nova/compute_id group=nova mode=0400 owner=nova src=/home/zuul/.ansible/tmp/ansible-tmp-1765015194.9023266-3754-56958980149764/.source _original_basename=.v9d0ml7r follow=False checksum=f3bb099f0d435dfeb4ffdbf95408e4ce4967ba08 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None
Dec 06 09:59:56 compute-0 sudo[250357]: pam_unix(sudo:session): session closed for user root
Dec 06 09:59:56 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v553: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec 06 09:59:56 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:59:56 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 09:59:56 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:59:56.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 09:59:56 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:59:56 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:59:56 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:59:56.470 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:59:56 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:59:56 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4730004000 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:59:57 compute-0 python3.9[250511]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 06 09:59:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T09:59:57.092Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 06 09:59:57 compute-0 ceph-mon[74327]: pgmap v553: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec 06 09:59:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:59:57 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f472c004530 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:59:57 compute-0 python3.9[250665]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:59:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:59:58 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4720004340 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:59:58 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v554: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 85 B/s wr, 0 op/s
Dec 06 09:59:58 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:59:58 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 09:59:58 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:09:59:58.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 09:59:58 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 09:59:58 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 09:59:58 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:09:59:58.472 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 09:59:58 compute-0 python3.9[250786]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765015197.3150108-3832-233630314235761/.source.json follow=False _original_basename=nova_compute.json.j2 checksum=81f1f28d070b2613355f782b83a5777fdba9540e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 06 09:59:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:59:58 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4750002990 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 09:59:59 compute-0 python3.9[250937]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute_init.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:59:59 compute-0 podman[250981]: 2025-12-06 09:59:59.467035335 +0000 UTC m=+0.085791763 container health_status ec1e66d2175087ad7a28a9a6b5a784e30128df3f5e268959d009e364cd5120f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec 06 09:59:59 compute-0 ceph-mon[74327]: pgmap v554: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 85 B/s wr, 0 op/s
Dec 06 09:59:59 compute-0 python3.9[251078]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute_init.json mode=0700 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765015198.7267964-3877-124238013007772/.source.json follow=False _original_basename=nova_compute_init.json.j2 checksum=2efe6ae78bce1c26d2c384be079fa366810076ad backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 06 09:59:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:59:59 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 06 09:59:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 09:59:59 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4730004000 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:00:00 compute-0 ceph-mon[74327]: log_channel(cluster) log [INF] : overall HEALTH_OK
Dec 06 10:00:00 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:00:00 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f472c004530 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:00:00 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:00:00 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v555: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Dec 06 10:00:00 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:00:00 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:00:00 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:00:00.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:00:00 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:00:00 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:00:00 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:00:00.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:00:00 compute-0 sudo[251228]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bijmdiqcgvrwpplygjjumnfvgbdtjnbb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765015200.396475-3928-265655724471190/AnsiballZ_container_config_data.py'
Dec 06 10:00:00 compute-0 sudo[251228]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 10:00:00 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:00:00 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4720004340 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:00:00 compute-0 ceph-mon[74327]: overall HEALTH_OK
Dec 06 10:00:00 compute-0 python3.9[251230]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute_init.json debug=False
Dec 06 10:00:00 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:00:00] "GET /metrics HTTP/1.1" 200 48265 "" "Prometheus/2.51.0"
Dec 06 10:00:00 compute-0 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:00:00] "GET /metrics HTTP/1.1" 200 48265 "" "Prometheus/2.51.0"
Dec 06 10:00:00 compute-0 sudo[251228]: pam_unix(sudo:session): session closed for user root
Dec 06 10:00:01 compute-0 sudo[251382]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zzzmiojvjauxloabmauwykmfzaphyrvz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765015201.2707722-3955-113732555072247/AnsiballZ_container_config_hash.py'
Dec 06 10:00:01 compute-0 sudo[251382]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 10:00:01 compute-0 ceph-mon[74327]: pgmap v555: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Dec 06 10:00:01 compute-0 python3.9[251384]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec 06 10:00:01 compute-0 sudo[251382]: pam_unix(sudo:session): session closed for user root
Dec 06 10:00:01 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:00:01 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4750002990 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:00:02 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:00:02 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4750002990 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:00:02 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v556: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Dec 06 10:00:02 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:00:02 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:00:02 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:00:02.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:00:02 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:00:02 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:00:02 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:00:02.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:00:02 compute-0 sudo[251534]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yvfhjurdypywhzaooqnidxyrzdawvmle ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1765015202.2651806-3985-230199526851982/AnsiballZ_edpm_container_manage.py'
Dec 06 10:00:02 compute-0 sudo[251534]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 10:00:02 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:00:02 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f472c004530 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:00:02 compute-0 python3[251536]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute_init.json log_base_path=/var/log/containers/stdouts debug=False
Dec 06 10:00:02 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:00:02 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 06 10:00:02 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:00:02 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 06 10:00:03 compute-0 ceph-mon[74327]: pgmap v556: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Dec 06 10:00:03 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:00:03 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4720004340 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:00:04 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:00:04 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4720004340 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:00:04 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v557: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Dec 06 10:00:04 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:00:04 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 10:00:04 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:00:04.427 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 10:00:04 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:00:04 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:00:04 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:00:04.478 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:00:04 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:00:04 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4720004340 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:00:05 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:00:05 compute-0 ceph-mon[74327]: pgmap v557: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Dec 06 10:00:05 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:00:05 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Dec 06 10:00:05 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:00:05 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f472c004530 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:00:06 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:00:06 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4730001ff0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:00:06 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v558: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Dec 06 10:00:06 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:00:06 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:00:06 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:00:06.429 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:00:06 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:00:06 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:00:06 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:00:06.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:00:06 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:00:06 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4730001ff0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:00:07 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:00:07.093Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec 06 10:00:07 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:00:07.093Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec 06 10:00:07 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:00:07.094Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec 06 10:00:07 compute-0 ceph-mon[74327]: pgmap v558: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Dec 06 10:00:07 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:00:07 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4720004340 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:00:08 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:00:08 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f472c004530 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:00:08 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v559: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 1023 B/s wr, 3 op/s
Dec 06 10:00:08 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:00:08 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:00:08 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:00:08.431 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:00:08 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:00:08 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:00:08 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:00:08.482 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:00:08 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:00:08 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4730001ff0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:00:09 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 06 10:00:09 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 10:00:09 compute-0 podman[251592]: 2025-12-06 10:00:09.34551541 +0000 UTC m=+4.263087914 container health_status a9cf33c1bf7891b3fdd9db4717ad5f3587fe9e5acf9171920c6d6334b70a502a (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=multipathd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec 06 10:00:09 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:00:09 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4750002990 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:00:09 compute-0 sudo[251639]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 10:00:09 compute-0 sudo[251639]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:00:09 compute-0 sudo[251639]: pam_unix(sudo:session): session closed for user root
Dec 06 10:00:10 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:00:10 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4720004340 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:00:10 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v560: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 938 B/s wr, 2 op/s
Dec 06 10:00:10 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:00:10 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:00:10 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:00:10.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:00:10 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:00:10 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:00:10 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:00:10.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:00:10 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:00:10 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4720004340 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:00:10 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:00:10] "GET /metrics HTTP/1.1" 200 48260 "" "Prometheus/2.51.0"
Dec 06 10:00:10 compute-0 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:00:10] "GET /metrics HTTP/1.1" 200 48260 "" "Prometheus/2.51.0"
Dec 06 10:00:11 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-haproxy-nfs-cephfs-compute-0-fzuvue[96127]: [WARNING] 339/100011 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Dec 06 10:00:11 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:00:11 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4730001ff0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:00:12 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:00:12 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4730001ff0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:00:12 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v561: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 938 B/s wr, 2 op/s
Dec 06 10:00:12 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:00:12 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:00:12 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:00:12.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:00:12 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:00:12 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:00:12 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:00:12.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:00:12 compute-0 sudo[251669]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 10:00:12 compute-0 sudo[251669]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:00:12 compute-0 sudo[251669]: pam_unix(sudo:session): session closed for user root
Dec 06 10:00:12 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:00:12 compute-0 sudo[251694]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Dec 06 10:00:12 compute-0 sudo[251694]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:00:12 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:00:12 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f472c004530 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:00:13 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:00:13 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4720004340 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:00:14 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:00:14 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4730001ff0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:00:14 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v562: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 938 B/s wr, 3 op/s
Dec 06 10:00:14 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:00:14 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:00:14 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:00:14.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:00:14 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:00:14 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:00:14 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:00:14.491 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:00:14 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:00:14 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4750004430 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:00:15 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec 06 10:00:15 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:00:15 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f472c004530 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:00:16 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:00:16 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4720004340 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:00:16 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v563: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Dec 06 10:00:16 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:00:16 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:00:16 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:00:16.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:00:16 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:00:16 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:00:16 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:00:16.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:00:16 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:00:16 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4730001ff0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:00:17 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:00:17.095Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec 06 10:00:17 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:00:17.095Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec 06 10:00:17 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:00:17.095Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec 06 10:00:17 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:00:17 compute-0 ceph-mon[74327]: pgmap v559: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 1023 B/s wr, 3 op/s
Dec 06 10:00:17 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 10:00:17 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 10:00:17 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec 06 10:00:17 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 10:00:17 compute-0 podman[251550]: 2025-12-06 10:00:17.692974484 +0000 UTC m=+14.728794853 image pull 5571c1b2140c835f70406e4553b3b44135b9c9b4eb673345cbd571460c5d59a3 quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:d6189c79b326e4b09ccae1141528b03bc59b2533781a960e8f91f2a5dbb343d5
Dec 06 10:00:17 compute-0 podman[251768]: 2025-12-06 10:00:17.841350429 +0000 UTC m=+0.049041434 container create 60c8ec5cf17302d0f66429fac7cab04e2b9619653bb835479ed1ce484891ed93 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:d6189c79b326e4b09ccae1141528b03bc59b2533781a960e8f91f2a5dbb343d5, name=nova_compute_init, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:d6189c79b326e4b09ccae1141528b03bc59b2533781a960e8f91f2a5dbb343d5', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, org.label-schema.license=GPLv2, container_name=nova_compute_init, managed_by=edpm_ansible)
Dec 06 10:00:17 compute-0 podman[251768]: 2025-12-06 10:00:17.815572848 +0000 UTC m=+0.023263843 image pull 5571c1b2140c835f70406e4553b3b44135b9c9b4eb673345cbd571460c5d59a3 quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:d6189c79b326e4b09ccae1141528b03bc59b2533781a960e8f91f2a5dbb343d5
Dec 06 10:00:17 compute-0 python3[251536]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute_init --conmon-pidfile /run/nova_compute_init.pid --env NOVA_STATEDIR_OWNERSHIP_SKIP=/var/lib/nova/compute_id --env __OS_DEBUG=False --label config_id=edpm --label container_name=nova_compute_init --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:d6189c79b326e4b09ccae1141528b03bc59b2533781a960e8f91f2a5dbb343d5', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']} --log-driver journald --log-level info --network none --privileged=False --security-opt label=disable --user root --volume /dev/log:/dev/log --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z --volume /var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:d6189c79b326e4b09ccae1141528b03bc59b2533781a960e8f91f2a5dbb343d5 bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init
Dec 06 10:00:17 compute-0 sudo[251694]: pam_unix(sudo:session): session closed for user root
Dec 06 10:00:17 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:00:17 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4750004430 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:00:17 compute-0 sudo[251534]: pam_unix(sudo:session): session closed for user root
Dec 06 10:00:18 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:00:18 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4750004430 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:00:18 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v564: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Dec 06 10:00:18 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 06 10:00:18 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 10:00:18 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 06 10:00:18 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 10:00:18 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 06 10:00:18 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 10:00:18 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Dec 06 10:00:18 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 10:00:18 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 06 10:00:18 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 10:00:18 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 06 10:00:18 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 10:00:18 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 06 10:00:18 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 10:00:18 compute-0 sudo[251924]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 10:00:18 compute-0 sudo[251924]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:00:18 compute-0 sudo[251924]: pam_unix(sudo:session): session closed for user root
Dec 06 10:00:18 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:00:18 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:00:18 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:00:18.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:00:18 compute-0 sudo[251998]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dhkfswvkjozplkepqscbdgynoepdroma ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765015218.1734648-4009-80068366367121/AnsiballZ_stat.py'
Dec 06 10:00:18 compute-0 sudo[251998]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 10:00:18 compute-0 sudo[251985]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 5ecd3f74-dade-5fc4-92ce-8950ae424258 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Dec 06 10:00:18 compute-0 sudo[251985]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:00:18 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:00:18 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:00:18 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:00:18.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:00:18 compute-0 ceph-mon[74327]: pgmap v560: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 938 B/s wr, 2 op/s
Dec 06 10:00:18 compute-0 ceph-mon[74327]: pgmap v561: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 938 B/s wr, 2 op/s
Dec 06 10:00:18 compute-0 ceph-mon[74327]: pgmap v562: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 938 B/s wr, 3 op/s
Dec 06 10:00:18 compute-0 ceph-mon[74327]: pgmap v563: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Dec 06 10:00:18 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 10:00:18 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 10:00:18 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 10:00:18 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 10:00:18 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 10:00:18 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 10:00:18 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 10:00:18 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 10:00:18 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 10:00:18 compute-0 python3.9[252016]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 06 10:00:18 compute-0 sudo[251998]: pam_unix(sudo:session): session closed for user root
Dec 06 10:00:18 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:00:18 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4720004340 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:00:18 compute-0 podman[252088]: 2025-12-06 10:00:18.930915474 +0000 UTC m=+0.053851825 container create ee4c98b3d0f816bfb35a99c6def71cb2134c007963d19b22d247b24f9197c45c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_easley, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 10:00:18 compute-0 systemd[1]: Started libpod-conmon-ee4c98b3d0f816bfb35a99c6def71cb2134c007963d19b22d247b24f9197c45c.scope.
Dec 06 10:00:19 compute-0 podman[252088]: 2025-12-06 10:00:18.907805946 +0000 UTC m=+0.030742317 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 06 10:00:19 compute-0 systemd[1]: Started libcrun container.
Dec 06 10:00:19 compute-0 podman[252088]: 2025-12-06 10:00:19.032935418 +0000 UTC m=+0.155871779 container init ee4c98b3d0f816bfb35a99c6def71cb2134c007963d19b22d247b24f9197c45c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_easley, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 10:00:19 compute-0 podman[252088]: 2025-12-06 10:00:19.041290835 +0000 UTC m=+0.164227166 container start ee4c98b3d0f816bfb35a99c6def71cb2134c007963d19b22d247b24f9197c45c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_easley, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1)
Dec 06 10:00:19 compute-0 podman[252088]: 2025-12-06 10:00:19.045439568 +0000 UTC m=+0.168375929 container attach ee4c98b3d0f816bfb35a99c6def71cb2134c007963d19b22d247b24f9197c45c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_easley, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 06 10:00:19 compute-0 competent_easley[252104]: 167 167
Dec 06 10:00:19 compute-0 systemd[1]: libpod-ee4c98b3d0f816bfb35a99c6def71cb2134c007963d19b22d247b24f9197c45c.scope: Deactivated successfully.
Dec 06 10:00:19 compute-0 podman[252088]: 2025-12-06 10:00:19.052240363 +0000 UTC m=+0.175176694 container died ee4c98b3d0f816bfb35a99c6def71cb2134c007963d19b22d247b24f9197c45c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_easley, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Dec 06 10:00:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-d26432cbe5eaa7238276b3456e50ab7179bd7e01b4b23fc89c744f2dbfc67c56-merged.mount: Deactivated successfully.
Dec 06 10:00:19 compute-0 podman[252088]: 2025-12-06 10:00:19.094813081 +0000 UTC m=+0.217749412 container remove ee4c98b3d0f816bfb35a99c6def71cb2134c007963d19b22d247b24f9197c45c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_easley, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0)
Dec 06 10:00:19 compute-0 systemd[1]: libpod-conmon-ee4c98b3d0f816bfb35a99c6def71cb2134c007963d19b22d247b24f9197c45c.scope: Deactivated successfully.
Dec 06 10:00:19 compute-0 podman[252130]: 2025-12-06 10:00:19.268677748 +0000 UTC m=+0.047364499 container create 26ceef56cf6dbed78bb83a9a064f1e5647c78ca11f1e541a6c9b93ff23cb0a40 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_haibt, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 10:00:19 compute-0 systemd[1]: Started libpod-conmon-26ceef56cf6dbed78bb83a9a064f1e5647c78ca11f1e541a6c9b93ff23cb0a40.scope.
Dec 06 10:00:19 compute-0 podman[252130]: 2025-12-06 10:00:19.249007613 +0000 UTC m=+0.027694394 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 06 10:00:19 compute-0 systemd[1]: Started libcrun container.
Dec 06 10:00:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac61614630103872b935ea633ebaf6b20ed9c2fe9cf0da48f50e0bd6634779c3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 10:00:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac61614630103872b935ea633ebaf6b20ed9c2fe9cf0da48f50e0bd6634779c3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 10:00:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac61614630103872b935ea633ebaf6b20ed9c2fe9cf0da48f50e0bd6634779c3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 10:00:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac61614630103872b935ea633ebaf6b20ed9c2fe9cf0da48f50e0bd6634779c3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 10:00:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac61614630103872b935ea633ebaf6b20ed9c2fe9cf0da48f50e0bd6634779c3/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 06 10:00:19 compute-0 podman[252130]: 2025-12-06 10:00:19.363053924 +0000 UTC m=+0.141740695 container init 26ceef56cf6dbed78bb83a9a064f1e5647c78ca11f1e541a6c9b93ff23cb0a40 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_haibt, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 10:00:19 compute-0 podman[252130]: 2025-12-06 10:00:19.37026415 +0000 UTC m=+0.148950911 container start 26ceef56cf6dbed78bb83a9a064f1e5647c78ca11f1e541a6c9b93ff23cb0a40 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_haibt, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1)
Dec 06 10:00:19 compute-0 podman[252130]: 2025-12-06 10:00:19.374998779 +0000 UTC m=+0.153685540 container attach 26ceef56cf6dbed78bb83a9a064f1e5647c78ca11f1e541a6c9b93ff23cb0a40 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_haibt, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 06 10:00:19 compute-0 sudo[252275]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zjavrhiozsndkgzcrzdiuiwkzzzwssdj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765015219.291369-4045-240716102753964/AnsiballZ_container_config_data.py'
Dec 06 10:00:19 compute-0 sudo[252275]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 10:00:19 compute-0 ceph-mon[74327]: pgmap v564: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Dec 06 10:00:19 compute-0 exciting_haibt[252173]: --> passed data devices: 0 physical, 1 LVM
Dec 06 10:00:19 compute-0 exciting_haibt[252173]: --> All data devices are unavailable
Dec 06 10:00:19 compute-0 systemd[1]: libpod-26ceef56cf6dbed78bb83a9a064f1e5647c78ca11f1e541a6c9b93ff23cb0a40.scope: Deactivated successfully.
Dec 06 10:00:19 compute-0 podman[252130]: 2025-12-06 10:00:19.74505822 +0000 UTC m=+0.523744971 container died 26ceef56cf6dbed78bb83a9a064f1e5647c78ca11f1e541a6c9b93ff23cb0a40 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_haibt, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1)
Dec 06 10:00:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-ac61614630103872b935ea633ebaf6b20ed9c2fe9cf0da48f50e0bd6634779c3-merged.mount: Deactivated successfully.
Dec 06 10:00:19 compute-0 python3.9[252279]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute.json debug=False
Dec 06 10:00:19 compute-0 podman[252130]: 2025-12-06 10:00:19.792609633 +0000 UTC m=+0.571296384 container remove 26ceef56cf6dbed78bb83a9a064f1e5647c78ca11f1e541a6c9b93ff23cb0a40 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_haibt, org.label-schema.build-date=20250325, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec 06 10:00:19 compute-0 sudo[252275]: pam_unix(sudo:session): session closed for user root
Dec 06 10:00:19 compute-0 systemd[1]: libpod-conmon-26ceef56cf6dbed78bb83a9a064f1e5647c78ca11f1e541a6c9b93ff23cb0a40.scope: Deactivated successfully.
Dec 06 10:00:19 compute-0 sudo[251985]: pam_unix(sudo:session): session closed for user root
Dec 06 10:00:19 compute-0 sudo[252303]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 10:00:19 compute-0 sudo[252303]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:00:19 compute-0 sudo[252303]: pam_unix(sudo:session): session closed for user root
Dec 06 10:00:19 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:00:19 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4730001ff0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:00:19 compute-0 sudo[252348]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 5ecd3f74-dade-5fc4-92ce-8950ae424258 -- lvm list --format json
Dec 06 10:00:19 compute-0 sudo[252348]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:00:20 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:00:20 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4750004430 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:00:20 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v565: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Dec 06 10:00:20 compute-0 sudo[252546]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fambczrhgxgatuxshivnrcbwfkxnxykz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765015220.045931-4072-17121434629454/AnsiballZ_container_config_hash.py'
Dec 06 10:00:20 compute-0 sudo[252546]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 10:00:20 compute-0 podman[252529]: 2025-12-06 10:00:20.31181341 +0000 UTC m=+0.021908147 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 06 10:00:20 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:00:20 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 10:00:20 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:00:20.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 10:00:20 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:00:20 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 10:00:20 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:00:20.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 10:00:20 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:00:20 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f472c004530 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:00:20 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:00:20] "GET /metrics HTTP/1.1" 200 48260 "" "Prometheus/2.51.0"
Dec 06 10:00:20 compute-0 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:00:20] "GET /metrics HTTP/1.1" 200 48260 "" "Prometheus/2.51.0"
Dec 06 10:00:21 compute-0 podman[252529]: 2025-12-06 10:00:21.274993439 +0000 UTC m=+0.985088146 container create c1a6b6fdec8dca0aee36210a0c00abf2b0f2699143e0eb03fec3c15a364e4439 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_spence, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Dec 06 10:00:21 compute-0 python3.9[252555]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec 06 10:00:21 compute-0 sudo[252546]: pam_unix(sudo:session): session closed for user root
Dec 06 10:00:21 compute-0 systemd[1]: Started libpod-conmon-c1a6b6fdec8dca0aee36210a0c00abf2b0f2699143e0eb03fec3c15a364e4439.scope.
Dec 06 10:00:21 compute-0 ceph-mon[74327]: pgmap v565: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Dec 06 10:00:21 compute-0 systemd[1]: Started libcrun container.
Dec 06 10:00:21 compute-0 podman[252529]: 2025-12-06 10:00:21.367315699 +0000 UTC m=+1.077410426 container init c1a6b6fdec8dca0aee36210a0c00abf2b0f2699143e0eb03fec3c15a364e4439 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_spence, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 10:00:21 compute-0 podman[252529]: 2025-12-06 10:00:21.3739679 +0000 UTC m=+1.084062607 container start c1a6b6fdec8dca0aee36210a0c00abf2b0f2699143e0eb03fec3c15a364e4439 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_spence, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 10:00:21 compute-0 podman[252529]: 2025-12-06 10:00:21.377770684 +0000 UTC m=+1.087865391 container attach c1a6b6fdec8dca0aee36210a0c00abf2b0f2699143e0eb03fec3c15a364e4439 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_spence, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Dec 06 10:00:21 compute-0 quizzical_spence[252560]: 167 167
Dec 06 10:00:21 compute-0 systemd[1]: libpod-c1a6b6fdec8dca0aee36210a0c00abf2b0f2699143e0eb03fec3c15a364e4439.scope: Deactivated successfully.
Dec 06 10:00:21 compute-0 podman[252529]: 2025-12-06 10:00:21.380146778 +0000 UTC m=+1.090241485 container died c1a6b6fdec8dca0aee36210a0c00abf2b0f2699143e0eb03fec3c15a364e4439 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_spence, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Dec 06 10:00:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-0b25d56b2be0a50fe634c11d678d2c14a5b39f1eeba087030195356adb6d38ea-merged.mount: Deactivated successfully.
Dec 06 10:00:21 compute-0 podman[252529]: 2025-12-06 10:00:21.416308762 +0000 UTC m=+1.126403469 container remove c1a6b6fdec8dca0aee36210a0c00abf2b0f2699143e0eb03fec3c15a364e4439 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_spence, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec 06 10:00:21 compute-0 systemd[1]: libpod-conmon-c1a6b6fdec8dca0aee36210a0c00abf2b0f2699143e0eb03fec3c15a364e4439.scope: Deactivated successfully.
Dec 06 10:00:21 compute-0 podman[252609]: 2025-12-06 10:00:21.574328508 +0000 UTC m=+0.043237277 container create 3609e27453e2ffc8816bbe84c6bbedec8d21093d79ad109f09c6244aa4034d59 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_heisenberg, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS)
Dec 06 10:00:21 compute-0 systemd[1]: Started libpod-conmon-3609e27453e2ffc8816bbe84c6bbedec8d21093d79ad109f09c6244aa4034d59.scope.
Dec 06 10:00:21 compute-0 systemd[1]: Started libcrun container.
Dec 06 10:00:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c56cf2e8000dff85198a3043ef4f5bcbf536bbb1537b31c05ddbd2dfe292e1e4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 10:00:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c56cf2e8000dff85198a3043ef4f5bcbf536bbb1537b31c05ddbd2dfe292e1e4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 10:00:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c56cf2e8000dff85198a3043ef4f5bcbf536bbb1537b31c05ddbd2dfe292e1e4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 10:00:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c56cf2e8000dff85198a3043ef4f5bcbf536bbb1537b31c05ddbd2dfe292e1e4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 10:00:21 compute-0 podman[252609]: 2025-12-06 10:00:21.639257473 +0000 UTC m=+0.108166262 container init 3609e27453e2ffc8816bbe84c6bbedec8d21093d79ad109f09c6244aa4034d59 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_heisenberg, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_REF=squid)
Dec 06 10:00:21 compute-0 podman[252609]: 2025-12-06 10:00:21.648805243 +0000 UTC m=+0.117714012 container start 3609e27453e2ffc8816bbe84c6bbedec8d21093d79ad109f09c6244aa4034d59 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_heisenberg, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 10:00:21 compute-0 podman[252609]: 2025-12-06 10:00:21.555331461 +0000 UTC m=+0.024240250 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 06 10:00:21 compute-0 podman[252609]: 2025-12-06 10:00:21.651845026 +0000 UTC m=+0.120753795 container attach 3609e27453e2ffc8816bbe84c6bbedec8d21093d79ad109f09c6244aa4034d59 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_heisenberg, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Dec 06 10:00:21 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:00:21 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4720004340 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:00:21 compute-0 sudo[252759]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rywillfubeykvzbhahsandmxrlwkxjzf ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1765015221.6759634-4102-22152580475726/AnsiballZ_edpm_container_manage.py'
Dec 06 10:00:21 compute-0 sudo[252759]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 10:00:21 compute-0 friendly_heisenberg[252625]: {
Dec 06 10:00:21 compute-0 friendly_heisenberg[252625]:     "1": [
Dec 06 10:00:21 compute-0 friendly_heisenberg[252625]:         {
Dec 06 10:00:21 compute-0 friendly_heisenberg[252625]:             "devices": [
Dec 06 10:00:21 compute-0 friendly_heisenberg[252625]:                 "/dev/loop3"
Dec 06 10:00:21 compute-0 friendly_heisenberg[252625]:             ],
Dec 06 10:00:21 compute-0 friendly_heisenberg[252625]:             "lv_name": "ceph_lv0",
Dec 06 10:00:21 compute-0 friendly_heisenberg[252625]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 10:00:21 compute-0 friendly_heisenberg[252625]:             "lv_size": "21470642176",
Dec 06 10:00:21 compute-0 friendly_heisenberg[252625]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=a55pcS-v3Vt-CJ4W-fqCV-Baze-9VlY-jG9prS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=5ecd3f74-dade-5fc4-92ce-8950ae424258,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=7899c4d8-edb4-4836-b838-c4aa702ad7af,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 06 10:00:21 compute-0 friendly_heisenberg[252625]:             "lv_uuid": "a55pcS-v3Vt-CJ4W-fqCV-Baze-9VlY-jG9prS",
Dec 06 10:00:21 compute-0 friendly_heisenberg[252625]:             "name": "ceph_lv0",
Dec 06 10:00:21 compute-0 friendly_heisenberg[252625]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 10:00:21 compute-0 friendly_heisenberg[252625]:             "tags": {
Dec 06 10:00:21 compute-0 friendly_heisenberg[252625]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 06 10:00:21 compute-0 friendly_heisenberg[252625]:                 "ceph.block_uuid": "a55pcS-v3Vt-CJ4W-fqCV-Baze-9VlY-jG9prS",
Dec 06 10:00:21 compute-0 friendly_heisenberg[252625]:                 "ceph.cephx_lockbox_secret": "",
Dec 06 10:00:21 compute-0 friendly_heisenberg[252625]:                 "ceph.cluster_fsid": "5ecd3f74-dade-5fc4-92ce-8950ae424258",
Dec 06 10:00:21 compute-0 friendly_heisenberg[252625]:                 "ceph.cluster_name": "ceph",
Dec 06 10:00:21 compute-0 friendly_heisenberg[252625]:                 "ceph.crush_device_class": "",
Dec 06 10:00:21 compute-0 friendly_heisenberg[252625]:                 "ceph.encrypted": "0",
Dec 06 10:00:21 compute-0 friendly_heisenberg[252625]:                 "ceph.osd_fsid": "7899c4d8-edb4-4836-b838-c4aa702ad7af",
Dec 06 10:00:21 compute-0 friendly_heisenberg[252625]:                 "ceph.osd_id": "1",
Dec 06 10:00:21 compute-0 friendly_heisenberg[252625]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 06 10:00:21 compute-0 friendly_heisenberg[252625]:                 "ceph.type": "block",
Dec 06 10:00:21 compute-0 friendly_heisenberg[252625]:                 "ceph.vdo": "0",
Dec 06 10:00:21 compute-0 friendly_heisenberg[252625]:                 "ceph.with_tpm": "0"
Dec 06 10:00:21 compute-0 friendly_heisenberg[252625]:             },
Dec 06 10:00:21 compute-0 friendly_heisenberg[252625]:             "type": "block",
Dec 06 10:00:21 compute-0 friendly_heisenberg[252625]:             "vg_name": "ceph_vg0"
Dec 06 10:00:21 compute-0 friendly_heisenberg[252625]:         }
Dec 06 10:00:21 compute-0 friendly_heisenberg[252625]:     ]
Dec 06 10:00:21 compute-0 friendly_heisenberg[252625]: }
Dec 06 10:00:21 compute-0 systemd[1]: libpod-3609e27453e2ffc8816bbe84c6bbedec8d21093d79ad109f09c6244aa4034d59.scope: Deactivated successfully.
Dec 06 10:00:21 compute-0 podman[252609]: 2025-12-06 10:00:21.98157373 +0000 UTC m=+0.450482499 container died 3609e27453e2ffc8816bbe84c6bbedec8d21093d79ad109f09c6244aa4034d59 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_heisenberg, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True)
Dec 06 10:00:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-c56cf2e8000dff85198a3043ef4f5bcbf536bbb1537b31c05ddbd2dfe292e1e4-merged.mount: Deactivated successfully.
Dec 06 10:00:22 compute-0 podman[252609]: 2025-12-06 10:00:22.026336087 +0000 UTC m=+0.495244856 container remove 3609e27453e2ffc8816bbe84c6bbedec8d21093d79ad109f09c6244aa4034d59 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_heisenberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec 06 10:00:22 compute-0 systemd[1]: libpod-conmon-3609e27453e2ffc8816bbe84c6bbedec8d21093d79ad109f09c6244aa4034d59.scope: Deactivated successfully.
Dec 06 10:00:22 compute-0 sudo[252348]: pam_unix(sudo:session): session closed for user root
Dec 06 10:00:22 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:00:22 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4730001ff0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:00:22 compute-0 sudo[252774]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 10:00:22 compute-0 sudo[252774]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:00:22 compute-0 sudo[252774]: pam_unix(sudo:session): session closed for user root
Dec 06 10:00:22 compute-0 sudo[252799]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 5ecd3f74-dade-5fc4-92ce-8950ae424258 -- raw list --format json
Dec 06 10:00:22 compute-0 sudo[252799]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:00:22 compute-0 python3[252763]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute.json log_base_path=/var/log/containers/stdouts debug=False
Dec 06 10:00:22 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v566: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Dec 06 10:00:22 compute-0 podman[252860]: 2025-12-06 10:00:22.430548639 +0000 UTC m=+0.050436433 container create 61186ed8c634307cf0309e3bca9d5df1e0856e135e8553b861cf702ecb9431f4 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:d6189c79b326e4b09ccae1141528b03bc59b2533781a960e8f91f2a5dbb343d5, name=nova_compute, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:d6189c79b326e4b09ccae1141528b03bc59b2533781a960e8f91f2a5dbb343d5', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, container_name=nova_compute, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Dec 06 10:00:22 compute-0 podman[252860]: 2025-12-06 10:00:22.406610567 +0000 UTC m=+0.026498381 image pull 5571c1b2140c835f70406e4553b3b44135b9c9b4eb673345cbd571460c5d59a3 quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:d6189c79b326e4b09ccae1141528b03bc59b2533781a960e8f91f2a5dbb343d5
Dec 06 10:00:22 compute-0 python3[252763]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute --conmon-pidfile /run/nova_compute.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --label config_id=edpm --label container_name=nova_compute --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:d6189c79b326e4b09ccae1141528b03bc59b2533781a960e8f91f2a5dbb343d5', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']} --log-driver journald --log-level info --network host --pid host --privileged=True --user nova --volume /var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro --volume /var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /etc/localtime:/etc/localtime:ro --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /var/lib/libvirt:/var/lib/libvirt --volume /run/libvirt:/run/libvirt:shared --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/iscsi:/var/lib/iscsi --volume /etc/multipath:/etc/multipath:z --volume /etc/multipath.conf:/etc/multipath.conf:ro --volume /etc/iscsi:/etc/iscsi:ro --volume /etc/nvme:/etc/nvme --volume /var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:d6189c79b326e4b09ccae1141528b03bc59b2533781a960e8f91f2a5dbb343d5 kolla_start
Dec 06 10:00:22 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:00:22 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:00:22 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:00:22.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:00:22 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:00:22 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:00:22 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:00:22.501 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:00:22 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:00:22 compute-0 sudo[252759]: pam_unix(sudo:session): session closed for user root
Dec 06 10:00:22 compute-0 podman[252934]: 2025-12-06 10:00:22.595184394 +0000 UTC m=+0.044386748 container create 34cedd69c95787d6fa7afd780c6b614f9b6719db67d24286044d6e993521e62e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_sinoussi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 06 10:00:22 compute-0 systemd[1]: Started libpod-conmon-34cedd69c95787d6fa7afd780c6b614f9b6719db67d24286044d6e993521e62e.scope.
Dec 06 10:00:22 compute-0 systemd[1]: Started libcrun container.
Dec 06 10:00:22 compute-0 podman[252934]: 2025-12-06 10:00:22.575661883 +0000 UTC m=+0.024864257 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 06 10:00:22 compute-0 podman[252934]: 2025-12-06 10:00:22.682429736 +0000 UTC m=+0.131632120 container init 34cedd69c95787d6fa7afd780c6b614f9b6719db67d24286044d6e993521e62e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_sinoussi, CEPH_REF=squid, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 10:00:22 compute-0 podman[252934]: 2025-12-06 10:00:22.690748802 +0000 UTC m=+0.139951156 container start 34cedd69c95787d6fa7afd780c6b614f9b6719db67d24286044d6e993521e62e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_sinoussi, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1)
Dec 06 10:00:22 compute-0 podman[252934]: 2025-12-06 10:00:22.694113655 +0000 UTC m=+0.143316029 container attach 34cedd69c95787d6fa7afd780c6b614f9b6719db67d24286044d6e993521e62e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_sinoussi, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 10:00:22 compute-0 beautiful_sinoussi[252974]: 167 167
Dec 06 10:00:22 compute-0 systemd[1]: libpod-34cedd69c95787d6fa7afd780c6b614f9b6719db67d24286044d6e993521e62e.scope: Deactivated successfully.
Dec 06 10:00:22 compute-0 podman[252934]: 2025-12-06 10:00:22.699255564 +0000 UTC m=+0.148457918 container died 34cedd69c95787d6fa7afd780c6b614f9b6719db67d24286044d6e993521e62e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_sinoussi, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0)
Dec 06 10:00:22 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:00:22 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4754002270 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:00:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-2c61306d3468cea7fd5de5147df6f81c5d90abae074f9488fa33c629623dcfce-merged.mount: Deactivated successfully.
Dec 06 10:00:22 compute-0 podman[252934]: 2025-12-06 10:00:22.744819653 +0000 UTC m=+0.194022007 container remove 34cedd69c95787d6fa7afd780c6b614f9b6719db67d24286044d6e993521e62e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_sinoussi, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Dec 06 10:00:22 compute-0 systemd[1]: libpod-conmon-34cedd69c95787d6fa7afd780c6b614f9b6719db67d24286044d6e993521e62e.scope: Deactivated successfully.
Dec 06 10:00:22 compute-0 podman[253062]: 2025-12-06 10:00:22.9105736 +0000 UTC m=+0.041006256 container create ac25ebbdf5a5b330b692d72be83ba9a4559cd49ea86caf115927f5c363b4fb84 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_joliot, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec 06 10:00:22 compute-0 systemd[1]: Started libpod-conmon-ac25ebbdf5a5b330b692d72be83ba9a4559cd49ea86caf115927f5c363b4fb84.scope.
Dec 06 10:00:22 compute-0 systemd[1]: Started libcrun container.
Dec 06 10:00:22 compute-0 podman[253062]: 2025-12-06 10:00:22.893566688 +0000 UTC m=+0.023999364 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 06 10:00:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d484470aad6b99a819a4cc1f01062da7b0a591936e15820e836a22f185af547c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 10:00:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d484470aad6b99a819a4cc1f01062da7b0a591936e15820e836a22f185af547c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 10:00:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d484470aad6b99a819a4cc1f01062da7b0a591936e15820e836a22f185af547c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 10:00:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d484470aad6b99a819a4cc1f01062da7b0a591936e15820e836a22f185af547c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 10:00:23 compute-0 podman[253062]: 2025-12-06 10:00:23.007897656 +0000 UTC m=+0.138330322 container init ac25ebbdf5a5b330b692d72be83ba9a4559cd49ea86caf115927f5c363b4fb84 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_joliot, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 10:00:23 compute-0 podman[253062]: 2025-12-06 10:00:23.017168768 +0000 UTC m=+0.147601424 container start ac25ebbdf5a5b330b692d72be83ba9a4559cd49ea86caf115927f5c363b4fb84 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_joliot, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec 06 10:00:23 compute-0 podman[253062]: 2025-12-06 10:00:23.020606081 +0000 UTC m=+0.151038967 container attach ac25ebbdf5a5b330b692d72be83ba9a4559cd49ea86caf115927f5c363b4fb84 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_joliot, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 10:00:23 compute-0 sudo[253147]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oqgrwhtwrxpxvtonmynjvzgdvouxefax ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765015222.7752361-4126-78558571256508/AnsiballZ_stat.py'
Dec 06 10:00:23 compute-0 sudo[253147]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 10:00:23 compute-0 python3.9[253149]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 06 10:00:23 compute-0 sudo[253147]: pam_unix(sudo:session): session closed for user root
Dec 06 10:00:23 compute-0 ceph-mon[74327]: pgmap v566: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Dec 06 10:00:23 compute-0 lvm[253297]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 06 10:00:23 compute-0 lvm[253297]: VG ceph_vg0 finished
Dec 06 10:00:23 compute-0 hungry_joliot[253115]: {}
Dec 06 10:00:23 compute-0 systemd[1]: libpod-ac25ebbdf5a5b330b692d72be83ba9a4559cd49ea86caf115927f5c363b4fb84.scope: Deactivated successfully.
Dec 06 10:00:23 compute-0 systemd[1]: libpod-ac25ebbdf5a5b330b692d72be83ba9a4559cd49ea86caf115927f5c363b4fb84.scope: Consumed 1.265s CPU time.
Dec 06 10:00:23 compute-0 podman[253062]: 2025-12-06 10:00:23.805392189 +0000 UTC m=+0.935824845 container died ac25ebbdf5a5b330b692d72be83ba9a4559cd49ea86caf115927f5c363b4fb84 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_joliot, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS)
Dec 06 10:00:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-d484470aad6b99a819a4cc1f01062da7b0a591936e15820e836a22f185af547c-merged.mount: Deactivated successfully.
Dec 06 10:00:23 compute-0 ceph-mgr[74618]: [balancer INFO root] Optimize plan auto_2025-12-06_10:00:23
Dec 06 10:00:23 compute-0 ceph-mgr[74618]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 06 10:00:23 compute-0 ceph-mgr[74618]: [balancer INFO root] do_upmap
Dec 06 10:00:23 compute-0 ceph-mgr[74618]: [balancer INFO root] pools ['default.rgw.log', 'images', 'backups', 'volumes', 'cephfs.cephfs.data', '.rgw.root', 'default.rgw.meta', '.mgr', '.nfs', 'cephfs.cephfs.meta', 'vms', 'default.rgw.control']
Dec 06 10:00:23 compute-0 ceph-mgr[74618]: [balancer INFO root] prepared 0/10 upmap changes
Dec 06 10:00:23 compute-0 podman[253062]: 2025-12-06 10:00:23.85099686 +0000 UTC m=+0.981429516 container remove ac25ebbdf5a5b330b692d72be83ba9a4559cd49ea86caf115927f5c363b4fb84 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_joliot, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 10:00:23 compute-0 systemd[1]: libpod-conmon-ac25ebbdf5a5b330b692d72be83ba9a4559cd49ea86caf115927f5c363b4fb84.scope: Deactivated successfully.
Dec 06 10:00:23 compute-0 sudo[252799]: pam_unix(sudo:session): session closed for user root
Dec 06 10:00:23 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 06 10:00:23 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 10:00:23 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 06 10:00:23 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 06 10:00:23 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 10:00:23 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 10:00:23 compute-0 sudo[253406]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bdsvytlyhvpvgqtcxaxpceqhjldwwnuq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765015223.6273577-4153-109600545676632/AnsiballZ_file.py'
Dec 06 10:00:23 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:00:23 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f472c004530 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:00:23 compute-0 sudo[253406]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 10:00:23 compute-0 podman[253329]: 2025-12-06 10:00:23.975393182 +0000 UTC m=+0.130871690 container health_status ed08b04f63d3695f0aa2f04fcc706897af4aa24f286fa3cd10a4323ee2c868ab (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, config_id=ovn_controller)
Dec 06 10:00:23 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 10:00:23 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 10:00:24 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 10:00:24 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 10:00:24 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 10:00:24 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 10:00:24 compute-0 sudo[253415]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 10:00:24 compute-0 sudo[253415]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:00:24 compute-0 sudo[253415]: pam_unix(sudo:session): session closed for user root
Dec 06 10:00:24 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:00:24 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4720004340 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:00:24 compute-0 python3.9[253413]: ansible-file Invoked with path=/etc/systemd/system/edpm_nova_compute.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 10:00:24 compute-0 sudo[253406]: pam_unix(sudo:session): session closed for user root
Dec 06 10:00:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] _maybe_adjust
Dec 06 10:00:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:00:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 06 10:00:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:00:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 10:00:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:00:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 10:00:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:00:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 10:00:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:00:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 10:00:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:00:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Dec 06 10:00:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:00:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 10:00:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:00:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec 06 10:00:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:00:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 06 10:00:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:00:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec 06 10:00:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:00:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 10:00:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:00:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
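Every pg_autoscaler pair above applies the same arithmetic: the pool's share of raw capacity (the "using ... of space" figure) times its bias times a cluster-wide PG budget gives the raw pg target, which is then quantized to a power of two and clamped by the pool's minimum, which is why near-zero targets leave pg_num at the current 16 or 32. The logged values all imply a budget of 300 PGs; a quick check (the 300 is inferred from the ratios themselves, the real figure derives from mon_target_pg_per_osd and the OSD count):

    PG_BUDGET = 300  # inferred: pg_target / (usage * bias) == 300 for every pool

    def raw_pg_target(usage_ratio, bias):
        return usage_ratio * bias * PG_BUDGET

    # '.mgr':               7.185749983720779e-06 * 1.0 * 300 -> 0.0021557249951...
    # 'cephfs.cephfs.meta': 5.087256625643029e-07 * 4.0 * 300 -> 0.0006104707950...
    # '.nfs':               6.359070782053786e-08 * 1.0 * 300 -> 1.9077212346e-05
    for ratio, bias in [(7.185749983720779e-06, 1.0),
                        (5.087256625643029e-07, 4.0),
                        (6.359070782053786e-08, 1.0)]:
        print(raw_pg_target(ratio, bias))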
Dec 06 10:00:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 06 10:00:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 10:00:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 10:00:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 10:00:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 10:00:24 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v567: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Dec 06 10:00:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 06 10:00:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 10:00:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 10:00:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 10:00:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 10:00:24 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:00:24 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:00:24 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:00:24.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:00:24 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:00:24 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:00:24 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:00:24.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
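The beast lines are radosgw's access log: frontend pointer, client IP, user (anonymous here, these are load-balancer HEAD probes), timestamp, request line, HTTP status, body bytes, and latency. A small parser written against the two samples above (the field layout is read off these lines, not taken from any documented format guarantee):

    import re

    BEAST = re.compile(
        r'beast: \S+: (?P<ip>\S+) - (?P<user>\S+) '
        r'\[(?P<ts>[^\]]+)\] "(?P<req>[^"]+)" (?P<status>\d+) (?P<bytes>\d+) '
        r'.*latency=(?P<latency>[\d.]+)s')

    line = ('beast: 0x7f53e66225d0: 192.168.122.100 - anonymous '
            '[06/Dec/2025:10:00:24.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - '
            'latency=0.001000027s')
    m = BEAST.search(line)
    print(m.group('ip'), m.group('req'), m.group('status'), m.group('latency'))
    # 192.168.122.100 HEAD / HTTP/1.0 200 0.001000027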
Dec 06 10:00:24 compute-0 sudo[253588]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fhzuchnzygtdouuhiuliuyvyimcjacty ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765015224.2108967-4153-162476875988314/AnsiballZ_copy.py'
Dec 06 10:00:24 compute-0 sudo[253588]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 10:00:24 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:00:24 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4730001ff0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:00:24 compute-0 python3.9[253590]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765015224.2108967-4153-162476875988314/source dest=/etc/systemd/system/edpm_nova_compute.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 10:00:24 compute-0 sudo[253588]: pam_unix(sudo:session): session closed for user root
Dec 06 10:00:24 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 10:00:24 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 10:00:24 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 10:00:24 compute-0 ceph-mon[74327]: pgmap v567: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Dec 06 10:00:24 compute-0 sudo[253664]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iwszpynuwxgpsgrulegllznartyodaox ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765015224.2108967-4153-162476875988314/AnsiballZ_systemd.py'
Dec 06 10:00:24 compute-0 sudo[253664]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 10:00:25 compute-0 python3.9[253666]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 06 10:00:25 compute-0 systemd[1]: Reloading.
Dec 06 10:00:25 compute-0 systemd-rc-local-generator[253695]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 10:00:25 compute-0 systemd-sysv-generator[253699]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 10:00:25 compute-0 sudo[253664]: pam_unix(sudo:session): session closed for user root
Dec 06 10:00:25 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:00:25 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4730001ff0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:00:25 compute-0 sudo[253777]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vwyruhhnapuigumcxgwzijrmzcqtbxhe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765015224.2108967-4153-162476875988314/AnsiballZ_systemd.py'
Dec 06 10:00:25 compute-0 sudo[253777]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 10:00:26 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:00:26 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f472c004530 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:00:26 compute-0 python3.9[253779]: ansible-systemd Invoked with state=restarted name=edpm_nova_compute.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 06 10:00:26 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v568: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 10:00:26 compute-0 systemd[1]: Reloading.
Dec 06 10:00:26 compute-0 systemd-sysv-generator[253812]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 10:00:26 compute-0 systemd-rc-local-generator[253807]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 10:00:26 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:00:26 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:00:26 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:00:26.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:00:26 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:00:26 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:00:26 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:00:26.505 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:00:26 compute-0 systemd[1]: Starting nova_compute container...
Dec 06 10:00:26 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:00:26 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4720004340 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:00:26 compute-0 systemd[1]: Started libcrun container.
Dec 06 10:00:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d6cd297d4d5e03b6ecf69eff4e5568648c8e7cf0535bacb2e02453ba51d963b1/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Dec 06 10:00:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d6cd297d4d5e03b6ecf69eff4e5568648c8e7cf0535bacb2e02453ba51d963b1/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Dec 06 10:00:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d6cd297d4d5e03b6ecf69eff4e5568648c8e7cf0535bacb2e02453ba51d963b1/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Dec 06 10:00:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d6cd297d4d5e03b6ecf69eff4e5568648c8e7cf0535bacb2e02453ba51d963b1/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Dec 06 10:00:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d6cd297d4d5e03b6ecf69eff4e5568648c8e7cf0535bacb2e02453ba51d963b1/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Dec 06 10:00:26 compute-0 podman[253819]: 2025-12-06 10:00:26.782612279 +0000 UTC m=+0.096025251 container init 61186ed8c634307cf0309e3bca9d5df1e0856e135e8553b861cf702ecb9431f4 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:d6189c79b326e4b09ccae1141528b03bc59b2533781a960e8f91f2a5dbb343d5, name=nova_compute, container_name=nova_compute, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:d6189c79b326e4b09ccae1141528b03bc59b2533781a960e8f91f2a5dbb343d5', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec 06 10:00:26 compute-0 podman[253819]: 2025-12-06 10:00:26.789457276 +0000 UTC m=+0.102870238 container start 61186ed8c634307cf0309e3bca9d5df1e0856e135e8553b861cf702ecb9431f4 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:d6189c79b326e4b09ccae1141528b03bc59b2533781a960e8f91f2a5dbb343d5, name=nova_compute, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, container_name=nova_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:d6189c79b326e4b09ccae1141528b03bc59b2533781a960e8f91f2a5dbb343d5', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']})
Dec 06 10:00:26 compute-0 podman[253819]: nova_compute
Dec 06 10:00:26 compute-0 nova_compute[253834]: + sudo -E kolla_set_configs
Dec 06 10:00:26 compute-0 systemd[1]: Started nova_compute container.
Dec 06 10:00:26 compute-0 sudo[253777]: pam_unix(sudo:session): session closed for user root
Dec 06 10:00:26 compute-0 nova_compute[253834]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Dec 06 10:00:26 compute-0 nova_compute[253834]: INFO:__main__:Validating config file
Dec 06 10:00:26 compute-0 nova_compute[253834]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Dec 06 10:00:26 compute-0 nova_compute[253834]: INFO:__main__:Copying service configuration files
Dec 06 10:00:26 compute-0 nova_compute[253834]: INFO:__main__:Deleting /etc/nova/nova.conf
Dec 06 10:00:26 compute-0 nova_compute[253834]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Dec 06 10:00:26 compute-0 nova_compute[253834]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Dec 06 10:00:26 compute-0 nova_compute[253834]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Dec 06 10:00:26 compute-0 nova_compute[253834]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Dec 06 10:00:26 compute-0 nova_compute[253834]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Dec 06 10:00:26 compute-0 nova_compute[253834]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Dec 06 10:00:26 compute-0 nova_compute[253834]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Dec 06 10:00:26 compute-0 nova_compute[253834]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Dec 06 10:00:26 compute-0 nova_compute[253834]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Dec 06 10:00:26 compute-0 nova_compute[253834]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Dec 06 10:00:26 compute-0 nova_compute[253834]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Dec 06 10:00:26 compute-0 nova_compute[253834]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Dec 06 10:00:26 compute-0 nova_compute[253834]: INFO:__main__:Deleting /etc/ceph
Dec 06 10:00:26 compute-0 nova_compute[253834]: INFO:__main__:Creating directory /etc/ceph
Dec 06 10:00:26 compute-0 nova_compute[253834]: INFO:__main__:Setting permission for /etc/ceph
Dec 06 10:00:26 compute-0 nova_compute[253834]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Dec 06 10:00:26 compute-0 nova_compute[253834]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Dec 06 10:00:26 compute-0 nova_compute[253834]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf
Dec 06 10:00:26 compute-0 nova_compute[253834]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Dec 06 10:00:26 compute-0 nova_compute[253834]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Dec 06 10:00:26 compute-0 nova_compute[253834]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Dec 06 10:00:26 compute-0 nova_compute[253834]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Dec 06 10:00:26 compute-0 nova_compute[253834]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Dec 06 10:00:26 compute-0 nova_compute[253834]: INFO:__main__:Deleting /usr/sbin/iscsiadm
Dec 06 10:00:26 compute-0 nova_compute[253834]: INFO:__main__:Copying /var/lib/kolla/config_files/run-on-host to /usr/sbin/iscsiadm
Dec 06 10:00:26 compute-0 nova_compute[253834]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm
Dec 06 10:00:26 compute-0 nova_compute[253834]: INFO:__main__:Writing out command to execute
Dec 06 10:00:26 compute-0 nova_compute[253834]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Dec 06 10:00:26 compute-0 nova_compute[253834]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Dec 06 10:00:26 compute-0 nova_compute[253834]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Dec 06 10:00:26 compute-0 nova_compute[253834]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Dec 06 10:00:26 compute-0 nova_compute[253834]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Dec 06 10:00:26 compute-0 nova_compute[253834]: ++ cat /run_command
Dec 06 10:00:26 compute-0 nova_compute[253834]: + CMD=nova-compute
Dec 06 10:00:26 compute-0 nova_compute[253834]: + ARGS=
Dec 06 10:00:26 compute-0 nova_compute[253834]: + sudo kolla_copy_cacerts
Dec 06 10:00:26 compute-0 nova_compute[253834]: + [[ ! -n '' ]]
Dec 06 10:00:26 compute-0 nova_compute[253834]: + . kolla_extend_start
Dec 06 10:00:26 compute-0 nova_compute[253834]: + echo 'Running command: '\''nova-compute'\'''
Dec 06 10:00:26 compute-0 nova_compute[253834]: Running command: 'nova-compute'
Dec 06 10:00:26 compute-0 nova_compute[253834]: + umask 0022
Dec 06 10:00:26 compute-0 nova_compute[253834]: + exec nova-compute
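The nova_compute lines from '+ sudo -E kolla_set_configs' through '+ exec nova-compute' trace kolla_start's whole startup contract: load and validate /var/lib/kolla/config_files/config.json, copy every configured source to its destination (unconditionally, because KOLLA_CONFIG_STRATEGY=COPY_ALWAYS), set permissions, write the service command to /run_command, then exec it. A compressed sketch of that flow, assuming the simplified schema visible here (the real tool also handles owners, globs, optional sources and other strategies):

    import json, os, shutil

    def kolla_set_configs(path="/var/lib/kolla/config_files/config.json"):
        cfg = json.load(open(path))                 # "Loading config file at ..."
        for item in cfg.get("config_files", []):    # "Copying service configuration files"
            dest = item["dest"]
            if os.path.lexists(dest):
                os.remove(dest)                     # "Deleting <dest>"
            shutil.copy(item["source"], dest)       # "Copying <source> to <dest>"
            os.chmod(dest, int(item.get("perm", "0600"), 8))  # "Setting permission for <dest>"
        with open("/run_command", "w") as f:        # "Writing out command to execute"
            f.write(cfg["command"])

    def kolla_start():
        kolla_set_configs()
        cmd = open("/run_command").read().split()   # '+ CMD=nova-compute', '+ ARGS='
        os.execvp(cmd[0], cmd)                      # '+ exec nova-compute'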
Dec 06 10:00:27 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:00:27.096Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec 06 10:00:27 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:00:27.097Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 06 10:00:27 compute-0 ceph-mon[74327]: pgmap v568: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 10:00:27 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:00:27 compute-0 python3.9[253998]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner_healthcheck.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 06 10:00:27 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:00:27 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4720004340 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:00:28 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:00:28 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4720004340 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:00:28 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v569: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Dec 06 10:00:28 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:00:28 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:00:28 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:00:28.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:00:28 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:00:28 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:00:28 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:00:28.508 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:00:28 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:00:28 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4724002690 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:00:28 compute-0 python3.9[254149]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 06 10:00:29 compute-0 nova_compute[253834]: 2025-12-06 10:00:29.251 253838 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Dec 06 10:00:29 compute-0 nova_compute[253834]: 2025-12-06 10:00:29.252 253838 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Dec 06 10:00:29 compute-0 nova_compute[253834]: 2025-12-06 10:00:29.252 253838 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Dec 06 10:00:29 compute-0 nova_compute[253834]: 2025-12-06 10:00:29.252 253838 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs
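The os_vif lines show plugin discovery at nova-compute start: the linux_bridge, noop and ovs VIF plugins are loaded from Python entry points (stevedore under the hood) and then initialized. A minimal illustration of entry-point discovery using only the standard library; the namespace name follows the os_vif logs, and stevedore adds invocation and error handling this sketch omits:

    from importlib.metadata import entry_points

    def load_vif_plugins(namespace="os_vif"):
        plugins = {}
        for ep in entry_points().get(namespace, []):   # Python 3.9-style API
            cls = ep.load()                            # import the plugin class
            print(f"Loaded VIF plugin class {cls!r} with name {ep.name!r}")
            plugins[ep.name] = cls
        return plugins

    # On a compute host with os-vif installed this yields linux_bridge, noop
    # and ovs, matching the three "Loaded VIF plugin class" lines above.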
Dec 06 10:00:29 compute-0 ceph-osd[82803]: bluestore.MempoolThread fragmentation_score=0.000032 took=0.000044s
Dec 06 10:00:29 compute-0 nova_compute[253834]: 2025-12-06 10:00:29.632 253838 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 10:00:29 compute-0 ceph-mon[74327]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #42. Immutable memtables: 0.
Dec 06 10:00:29 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:00:29.642588) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 06 10:00:29 compute-0 ceph-mon[74327]: rocksdb: [db/flush_job.cc:856] [default] [JOB 19] Flushing memtable with next log file: 42
Dec 06 10:00:29 compute-0 ceph-mon[74327]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765015229643090, "job": 19, "event": "flush_started", "num_memtables": 1, "num_entries": 1153, "num_deletes": 251, "total_data_size": 2172867, "memory_usage": 2216872, "flush_reason": "Manual Compaction"}
Dec 06 10:00:29 compute-0 ceph-mon[74327]: rocksdb: [db/flush_job.cc:885] [default] [JOB 19] Level-0 flush table #43: started
Dec 06 10:00:29 compute-0 nova_compute[253834]: 2025-12-06 10:00:29.653 253838 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.021s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 10:00:29 compute-0 nova_compute[253834]: 2025-12-06 10:00:29.654 253838 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473
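The failed grep above is deliberate, not an error: before touching iSCSI, nova (via os-brick) probes whether the installed iscsiadm understands manual session scans by searching the binary for the literal string node.session.scan, treating exit status 0 as "supported". Here /usr/sbin/iscsiadm is the run-on-host shim installed by kolla_set_configs earlier, so the probe returns 1 and is not retried. The check itself, as a sketch:

    import subprocess

    def iscsiadm_supports_manual_scan(binary="/sbin/iscsiadm"):
        # grep -F exits 0 only when the literal string occurs in the file
        rc = subprocess.run(
            ["grep", "-F", "node.session.scan", binary],
            stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL).returncode
        return rc == 0  # the journal above shows rc=1 -> False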
Dec 06 10:00:29 compute-0 ceph-mon[74327]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765015229658554, "cf_name": "default", "job": 19, "event": "table_file_creation", "file_number": 43, "file_size": 2113460, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 18950, "largest_seqno": 20101, "table_properties": {"data_size": 2107863, "index_size": 2989, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1541, "raw_key_size": 11905, "raw_average_key_size": 19, "raw_value_size": 2096729, "raw_average_value_size": 3506, "num_data_blocks": 132, "num_entries": 598, "num_filter_entries": 598, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765015120, "oldest_key_time": 1765015120, "file_creation_time": 1765015229, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "423e8366-3852-4d2b-aa53-87abab31aff3", "db_session_id": "4WBX5WA2U4DRQ0QUUFCR", "orig_file_number": 43, "seqno_to_time_mapping": "N/A"}}
Dec 06 10:00:29 compute-0 ceph-mon[74327]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 19] Flush lasted 15604 microseconds, and 5251 cpu microseconds.
Dec 06 10:00:29 compute-0 ceph-mon[74327]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 10:00:29 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:00:29.658616) [db/flush_job.cc:967] [default] [JOB 19] Level-0 flush table #43: 2113460 bytes OK
Dec 06 10:00:29 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:00:29.658641) [db/memtable_list.cc:519] [default] Level-0 commit table #43 started
Dec 06 10:00:29 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:00:29.662378) [db/memtable_list.cc:722] [default] Level-0 commit table #43: memtable #1 done
Dec 06 10:00:29 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:00:29.662404) EVENT_LOG_v1 {"time_micros": 1765015229662396, "job": 19, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 06 10:00:29 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:00:29.662438) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 06 10:00:29 compute-0 ceph-mon[74327]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 19] Try to delete WAL files size 2167701, prev total WAL file size 2167701, number of live WAL files 2.
Dec 06 10:00:29 compute-0 ceph-mon[74327]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000039.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 10:00:29 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:00:29.663312) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031323535' seq:72057594037927935, type:22 .. '7061786F730031353037' seq:0, type:0; will stop at (end)
Dec 06 10:00:29 compute-0 ceph-mon[74327]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 20] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 06 10:00:29 compute-0 ceph-mon[74327]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 19 Base level 0, inputs: [43(2063KB)], [41(13MB)]
Dec 06 10:00:29 compute-0 ceph-mon[74327]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765015229663347, "job": 20, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [43], "files_L6": [41], "score": -1, "input_data_size": 16100613, "oldest_snapshot_seqno": -1}
Dec 06 10:00:29 compute-0 ceph-mon[74327]: pgmap v569: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Dec 06 10:00:29 compute-0 podman[254277]: 2025-12-06 10:00:29.722890632 +0000 UTC m=+0.057672965 container health_status ec1e66d2175087ad7a28a9a6b5a784e30128df3f5e268959d009e364cd5120f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2)
Dec 06 10:00:29 compute-0 ceph-mon[74327]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 20] Generated table #44: 5033 keys, 13954463 bytes, temperature: kUnknown
Dec 06 10:00:29 compute-0 ceph-mon[74327]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765015229810975, "cf_name": "default", "job": 20, "event": "table_file_creation", "file_number": 44, "file_size": 13954463, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13919210, "index_size": 21575, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12613, "raw_key_size": 128256, "raw_average_key_size": 25, "raw_value_size": 13826217, "raw_average_value_size": 2747, "num_data_blocks": 885, "num_entries": 5033, "num_filter_entries": 5033, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765013861, "oldest_key_time": 0, "file_creation_time": 1765015229, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "423e8366-3852-4d2b-aa53-87abab31aff3", "db_session_id": "4WBX5WA2U4DRQ0QUUFCR", "orig_file_number": 44, "seqno_to_time_mapping": "N/A"}}
Dec 06 10:00:29 compute-0 ceph-mon[74327]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 10:00:29 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:00:29.811200) [db/compaction/compaction_job.cc:1663] [default] [JOB 20] Compacted 1@0 + 1@6 files to L6 => 13954463 bytes
Dec 06 10:00:29 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:00:29.812838) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 109.0 rd, 94.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.0, 13.3 +0.0 blob) out(13.3 +0.0 blob), read-write-amplify(14.2) write-amplify(6.6) OK, records in: 5553, records dropped: 520 output_compression: NoCompression
Dec 06 10:00:29 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:00:29.812855) EVENT_LOG_v1 {"time_micros": 1765015229812848, "job": 20, "event": "compaction_finished", "compaction_time_micros": 147707, "compaction_time_cpu_micros": 27919, "output_level": 6, "num_output_files": 1, "total_output_size": 13954463, "num_input_records": 5553, "num_output_records": 5033, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 06 10:00:29 compute-0 ceph-mon[74327]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000043.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 10:00:29 compute-0 ceph-mon[74327]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765015229813399, "job": 20, "event": "table_file_deletion", "file_number": 43}
Dec 06 10:00:29 compute-0 ceph-mon[74327]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000041.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 10:00:29 compute-0 ceph-mon[74327]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765015229815802, "job": 20, "event": "table_file_deletion", "file_number": 41}
Dec 06 10:00:29 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:00:29.663209) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 10:00:29 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:00:29.815931) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 10:00:29 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:00:29.815936) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 10:00:29 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:00:29.815938) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 10:00:29 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:00:29.815940) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 10:00:29 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:00:29.815941) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
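The JOB 19/JOB 20 sequence above is a full RocksDB manual-compaction cycle on the mon store: flush the memtable to L0 table #43, merge it with L6 table #41 into the new #44, then delete both inputs and the old WAL. The throughput and amplification figures in the summary line follow directly from the logged byte counts and duration, taking the ratios against the L0 input size as RocksDB does:

    input_bytes = 16_100_613   # "input_data_size" (L0 #43 + L6 #41)
    output_bytes = 13_954_463  # table #44 "total_output_size"
    l0_bytes = 2_113_460       # L0 input #43 ("43(2063KB)")
    micros = 147_707           # "compaction_time_micros"

    print(input_bytes / micros)    # ~109.0 -> "MB/sec: 109.0 rd"
    print(output_bytes / micros)   # ~94.5  -> "94.5 wr"
    print(output_bytes / l0_bytes)                  # ~6.6  -> "write-amplify(6.6)"
    print((input_bytes + output_bytes) / l0_bytes)  # ~14.2 -> "read-write-amplify(14.2)"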
Dec 06 10:00:29 compute-0 python3.9[254316]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service.requires follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 06 10:00:29 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:00:29 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4754002270 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:00:30 compute-0 sudo[254331]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 10:00:30 compute-0 sudo[254331]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:00:30 compute-0 sudo[254331]: pam_unix(sudo:session): session closed for user root
Dec 06 10:00:30 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:00:30 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f472c004530 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.200 253838 INFO nova.virt.driver [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'
Dec 06 10:00:30 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v570: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.329 253838 INFO nova.compute.provider_config [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.374 253838 DEBUG oslo_concurrency.lockutils [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.375 253838 DEBUG oslo_concurrency.lockutils [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.375 253838 DEBUG oslo_concurrency.lockutils [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.375 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.376 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.376 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.376 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.376 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.376 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.376 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.377 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.377 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.377 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.377 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.377 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.377 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.378 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.378 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.378 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.378 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.378 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.378 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.378 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] console_host                   = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.379 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.379 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.379 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.379 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.379 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.379 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.380 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.380 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.380 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.380 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.380 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.380 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.381 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.381 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.381 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.381 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.381 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.381 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.382 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.382 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.382 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.382 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.382 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.382 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.383 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.383 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.383 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.383 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.383 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.383 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.384 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.384 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.384 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.384 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.384 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.384 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.384 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.385 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.385 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.385 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.385 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.385 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.385 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.386 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.386 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.386 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.386 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.386 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.386 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.386 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.387 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.387 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.387 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.387 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.387 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.387 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.388 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.388 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.388 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.388 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.388 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] my_block_storage_ip            = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.388 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] my_ip                          = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.388 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.389 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.389 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.389 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.389 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.389 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.389 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.390 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.390 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.390 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.390 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.390 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.390 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.390 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.391 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.391 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.391 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.391 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.391 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.391 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.392 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.392 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.392 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.392 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.392 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.393 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.393 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.393 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.393 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.393 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.394 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.394 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.394 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.394 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.394 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.395 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.395 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.395 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.395 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.395 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.396 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.396 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.396 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.396 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.396 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.396 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.396 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.397 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.397 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.397 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.397 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.397 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.397 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.397 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.398 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.398 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.398 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.398 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.398 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.398 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.398 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.399 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.399 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.399 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.399 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.399 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.400 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.400 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.400 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.400 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.400 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.400 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.401 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.401 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.401 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.401 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.401 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.401 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.401 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.402 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.402 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.402 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.402 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.402 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.402 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.402 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.403 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.403 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.403 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.403 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.403 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.403 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.404 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.404 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.404 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.404 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.404 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.404 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.404 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.405 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.405 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.405 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.405 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.405 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.405 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.405 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.406 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.406 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.406 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.406 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.406 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.406 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.406 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.407 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.407 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.407 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.407 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.407 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.407 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.407 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.408 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.408 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.408 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.408 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.408 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.408 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.409 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.409 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.409 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.409 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.409 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.409 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.409 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.410 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.410 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.410 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.410 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.410 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.410 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.410 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.411 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.411 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.411 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.411 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.411 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.411 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.412 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.412 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.412 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.412 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.412 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.412 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.412 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.413 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.413 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.413 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.413 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.413 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.413 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.413 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.414 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.414 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.414 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.414 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.414 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.414 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.414 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.415 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.415 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.415 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.415 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.415 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.415 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.416 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.416 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.416 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.416 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.416 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.416 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.417 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.417 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.417 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.417 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.417 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.417 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.417 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.417 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.418 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.418 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.418 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.418 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.418 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.418 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.419 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.419 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.419 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.419 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.419 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.419 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.420 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.420 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.420 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.420 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.420 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.420 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.420 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.421 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.421 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.421 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.421 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.421 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.421 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.421 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.422 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.422 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.422 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.422 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.422 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.422 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.423 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.423 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.423 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.423 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.423 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.423 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.423 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.424 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.424 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.424 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.424 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.424 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.424 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.424 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.425 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.425 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.425 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.425 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.425 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.425 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.425 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.426 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.426 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.426 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.426 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.426 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.426 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.426 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.427 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.427 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.427 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.427 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.427 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.427 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.427 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.428 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.428 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.428 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.428 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.428 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.428 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.429 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.429 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.429 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.429 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.429 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.429 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.429 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.429 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.430 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.430 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.430 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.430 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.430 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.431 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.431 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.431 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.431 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.431 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.431 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.432 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.432 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.432 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.432 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.432 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.432 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.432 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.433 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.433 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.433 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.433 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.433 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.433 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.434 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.434 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.434 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.434 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.434 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.434 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.435 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.435 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.435 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.435 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.435 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.436 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.436 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.436 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.436 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.436 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.436 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.437 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.437 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.437 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.437 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.437 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.438 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.438 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.438 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.438 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.438 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.439 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.439 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.439 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.439 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.439 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.440 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.440 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.440 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.440 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.440 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.440 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.441 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.441 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.441 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.441 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.441 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.442 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.442 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.442 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.442 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.443 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.443 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.443 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.443 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.443 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.444 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.444 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.444 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.444 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.444 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.445 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.445 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.445 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.445 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.445 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.446 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.446 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.446 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.446 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.447 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.447 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.447 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.447 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.447 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.448 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.448 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.448 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.448 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.448 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.449 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.449 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.449 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] libvirt.cpu_mode               = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.449 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.449 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] libvirt.cpu_models             = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.450 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.450 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.450 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.450 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.450 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.451 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.451 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.451 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.451 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.451 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.452 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.452 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.452 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.452 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] libvirt.images_rbd_ceph_conf   = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.453 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.453 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.453 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.453 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] libvirt.images_rbd_pool        = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.453 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] libvirt.images_type            = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.454 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.454 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.454 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.454 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.454 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.455 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.455 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.455 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.455 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.455 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.456 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.456 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:00:30 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:00:30.455 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.456 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.456 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.456 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.457 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.457 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.457 253838 WARNING oslo_config.cfg [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Dec 06 10:00:30 compute-0 nova_compute[253834]: live_migration_uri is deprecated for removal in favor of two other options that
Dec 06 10:00:30 compute-0 nova_compute[253834]: allow changing the live migration scheme and target URI: ``live_migration_scheme``
Dec 06 10:00:30 compute-0 nova_compute[253834]: and ``live_migration_inbound_addr`` respectively.
Dec 06 10:00:30 compute-0 nova_compute[253834]: ).  Its value may be silently ignored in the future.
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.457 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
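[Editor's note: per the deprecation warning above, the single ``live_migration_uri`` option (logged here as qemu+tls://%s/system) is superseded by the ``live_migration_scheme`` and ``live_migration_inbound_addr`` pair. A minimal nova.conf sketch of the equivalent replacement, assuming a TLS-based migration transport; the inbound address below is a placeholder, not a value taken from this log:

    [libvirt]
    # Deprecated single-option form, as still set on this node:
    # live_migration_uri = qemu+tls://%s/system
    # Replacement pair (transport scheme + migration target address):
    live_migration_scheme = tls
    live_migration_inbound_addr = <migration hostname or IP>
]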
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.458 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.458 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.458 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.458 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.459 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.459 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.459 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.459 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.459 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.460 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.460 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.460 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.460 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.461 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.461 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.461 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.461 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.461 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] libvirt.rbd_secret_uuid        = 5ecd3f74-dade-5fc4-92ce-8950ae424258 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.462 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] libvirt.rbd_user               = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.462 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.462 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.462 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.462 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.463 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.463 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.463 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.463 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.463 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.464 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.464 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.464 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.464 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.464 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.465 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.465 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.465 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.465 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.465 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.466 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.466 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.466 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.466 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.466 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.467 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.467 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.467 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.467 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.468 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.468 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.468 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.468 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.468 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.469 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.469 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.469 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.469 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.469 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.470 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.470 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.470 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.471 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.471 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.471 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.471 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.471 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.472 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.472 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.472 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.472 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.472 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.473 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.473 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.473 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.473 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.473 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.474 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.474 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.474 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.474 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.474 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.475 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.475 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.475 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.475 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.476 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.476 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.476 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.476 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.477 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.477 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.477 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.477 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.477 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.478 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.478 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.478 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.478 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.478 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.478 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.479 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.479 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.479 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.479 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.479 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.480 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.480 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.480 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.480 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.480 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.481 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.482 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.482 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.482 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.483 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.483 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.483 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.483 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.483 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.483 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.484 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.484 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.484 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.484 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.484 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.484 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.485 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.485 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.485 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.485 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.485 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.485 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.486 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.486 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.486 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.486 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.486 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.486 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.486 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.487 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.487 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.487 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.487 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.487 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.488 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.488 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.488 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.488 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.488 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.489 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.489 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.489 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.489 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.489 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.489 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.490 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.490 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.490 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.490 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.490 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.491 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.491 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.491 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.491 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.491 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.492 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.492 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.492 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.492 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.493 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.493 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.493 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.493 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.493 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.493 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.494 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.494 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.494 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.494 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.494 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.494 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.494 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.495 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.495 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.495 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.495 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.495 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.495 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.496 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.496 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.496 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.496 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.496 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.496 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.496 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.496 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.497 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.497 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.497 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.497 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.497 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.497 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.498 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.498 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.498 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.498 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.498 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.498 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.498 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.499 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.499 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.499 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.499 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.499 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.499 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.500 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.500 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.500 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.500 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.500 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.500 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.500 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.501 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.501 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.501 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.501 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.501 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.501 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.501 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.502 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.502 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.502 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.502 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.502 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.502 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.502 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.503 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.503 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.503 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.503 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.503 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.503 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.503 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.504 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.504 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.504 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.504 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.504 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.504 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.505 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.505 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.505 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.505 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] vnc.server_proxyclient_address = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.505 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.505 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.506 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.506 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.506 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.506 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.506 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.506 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.506 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.507 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.507 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.507 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.507 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.507 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.507 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.508 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.508 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.508 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.508 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.508 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.508 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.509 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.509 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.509 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.509 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.509 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.509 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.509 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.510 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.510 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.510 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.510 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.510 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.510 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.510 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.511 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.511 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.511 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.511 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.511 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.511 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:00:30 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:00:30.511 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.511 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.512 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.512 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.512 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.512 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.512 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.512 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.513 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.513 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.513 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.513 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.513 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.513 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.514 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.514 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.514 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.514 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.514 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.514 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.514 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.515 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.515 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.515 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.515 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.515 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.515 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.515 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.516 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.516 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.516 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.516 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.516 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.516 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.516 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.517 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.517 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.517 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.517 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.517 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.517 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.517 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.518 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.518 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.518 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.518 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.518 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.518 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.518 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.519 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.519 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.519 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.519 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.519 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.519 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.519 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.520 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.520 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.520 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.520 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.520 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.520 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.520 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.521 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.521 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.521 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.521 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.521 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.521 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.521 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.521 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.522 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.522 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.522 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.522 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.522 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.522 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.522 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.523 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.523 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.523 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.523 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.523 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.523 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.523 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.524 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.524 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.524 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.524 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.524 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.524 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.524 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.525 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.525 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.525 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.525 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.525 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.525 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.526 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.526 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.526 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.526 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.526 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.526 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.526 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.527 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.527 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.527 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.527 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.527 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.527 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.527 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.528 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.528 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.528 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.528 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.528 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.528 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.528 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.529 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.529 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.529 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.529 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.529 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.529 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.529 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.530 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.530 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.530 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.530 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.530 253838 DEBUG oslo_service.service [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
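
[editor's note] The dump that ends at the asterisk banner above is oslo.config's standard option dump: at service start, oslo.service calls ConfigOpts.log_opt_values(), which walks every registered option group and prints "group.option = value" lines at DEBUG, masking any option registered with secret=True (hence oslo_limit.password = ****). A minimal sketch of the same mechanism, assuming a standalone ConfigOpts and our own logging setup:

    import logging
    from oslo_config import cfg

    logging.basicConfig(level=logging.DEBUG)
    LOG = logging.getLogger(__name__)

    CONF = cfg.ConfigOpts()
    CONF.register_opts(
        [
            cfg.StrOpt('username', default='nova'),
            cfg.StrOpt('password', secret=True),   # printed as **** in the dump
            cfg.StrOpt('system_scope', default='all'),
        ],
        group='oslo_limit',
    )
    CONF([])                                   # parse an (empty) command line
    CONF.log_opt_values(LOG, logging.DEBUG)    # emits "oslo_limit.username = nova", ...

The privsep capability lists in the same dump are plain Linux capability numbers: nova_sys_admin's [0, 1, 2, 3, 12, 21] is CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_DAC_READ_SEARCH, CAP_FOWNER, CAP_NET_ADMIN and CAP_SYS_ADMIN.
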
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.531 253838 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.544 253838 DEBUG nova.virt.libvirt.host [None req-34c2fe1f-5334-41ea-b1ed-db8fbf6b6e5a - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.545 253838 DEBUG nova.virt.libvirt.host [None req-34c2fe1f-5334-41ea-b1ed-db8fbf6b6e5a - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.545 253838 DEBUG nova.virt.libvirt.host [None req-34c2fe1f-5334-41ea-b1ed-db8fbf6b6e5a - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.545 253838 DEBUG nova.virt.libvirt.host [None req-34c2fe1f-5334-41ea-b1ed-db8fbf6b6e5a - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503
Dec 06 10:00:30 compute-0 systemd[1]: Starting libvirt QEMU daemon...
Dec 06 10:00:30 compute-0 systemd[1]: Started libvirt QEMU daemon.
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.639 253838 DEBUG nova.virt.libvirt.host [None req-34c2fe1f-5334-41ea-b1ed-db8fbf6b6e5a - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7f98b5ada460> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.642 253838 DEBUG nova.virt.libvirt.host [None req-34c2fe1f-5334-41ea-b1ed-db8fbf6b6e5a - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7f98b5ada460> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.643 253838 INFO nova.virt.libvirt.driver [None req-34c2fe1f-5334-41ea-b1ed-db8fbf6b6e5a - - - - - -] Connection event '1' reason 'None'
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.662 253838 WARNING nova.virt.libvirt.driver [None req-34c2fe1f-5334-41ea-b1ed-db8fbf6b6e5a - - - - - -] Cannot update service status on host "compute-0.ctlplane.example.com" since it is not registered.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.
Dec 06 10:00:30 compute-0 nova_compute[253834]: 2025-12-06 10:00:30.662 253838 DEBUG nova.virt.libvirt.volume.mount [None req-34c2fe1f-5334-41ea-b1ed-db8fbf6b6e5a - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130
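
[editor's note] The ComputeHostNotFound warning above is typical of a first start: nova-compute only creates its service record after the driver comes up, so this early status update finds nothing to update. The surrounding debug lines are nova.virt.libvirt.host opening its connection and registering event handlers; a rough sketch of the same calls through the libvirt-python bindings (the callback body is illustrative only):

    import libvirt

    def lifecycle_cb(conn, dom, event, detail, opaque):
        # nova translates these into instance lifecycle events (started, stopped, ...)
        print(f"domain {dom.name()}: event={event} detail={detail}")

    libvirt.virEventRegisterDefaultImpl()      # must precede event registration
    conn = libvirt.open('qemu:///system')      # the URI from the log above
    conn.domainEventRegisterAny(
        None, libvirt.VIR_DOMAIN_EVENT_ID_LIFECYCLE, lifecycle_cb, None)
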
Dec 06 10:00:30 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:00:30 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4720004340 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:00:30 compute-0 sudo[254549]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pswivjdgzsohufkuezjbyfnsxfvugmkw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765015230.2730072-4333-190649747272024/AnsiballZ_podman_container.py'
Dec 06 10:00:30 compute-0 sudo[254549]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 10:00:30 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:00:30] "GET /metrics HTTP/1.1" 200 48271 "" "Prometheus/2.51.0"
Dec 06 10:00:30 compute-0 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:00:30] "GET /metrics HTTP/1.1" 200 48271 "" "Prometheus/2.51.0"
Dec 06 10:00:31 compute-0 python3.9[254551]: ansible-containers.podman.podman_container Invoked with name=nova_nvme_cleaner state=absent executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Dec 06 10:00:31 compute-0 sudo[254549]: pam_unix(sudo:session): session closed for user root
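
[editor's note] The AnsiballZ invocation above is containers.podman.podman_container with name=nova_nvme_cleaner, state=absent and force_delete=True, i.e. "make sure no container by that name exists". A hedged CLI-level equivalent using standard podman subcommands (not the module's actual implementation):

    import subprocess

    name = "nova_nvme_cleaner"
    # `podman container exists` exits 0 when a container by that name is present
    if subprocess.run(["podman", "container", "exists", name]).returncode == 0:
        subprocess.run(["podman", "rm", "--force", name], check=True)
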
Dec 06 10:00:31 compute-0 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 06 10:00:31 compute-0 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 06 10:00:31 compute-0 nova_compute[253834]: 2025-12-06 10:00:31.468 253838 INFO nova.virt.libvirt.host [None req-34c2fe1f-5334-41ea-b1ed-db8fbf6b6e5a - - - - - -] Libvirt host capabilities <capabilities>
Dec 06 10:00:31 compute-0 nova_compute[253834]: 
Dec 06 10:00:31 compute-0 nova_compute[253834]:   <host>
Dec 06 10:00:31 compute-0 nova_compute[253834]:     <uuid>cc5c2b35-ce1b-4acf-9906-7bdc7897f14e</uuid>
Dec 06 10:00:31 compute-0 nova_compute[253834]:     <cpu>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <arch>x86_64</arch>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model>EPYC-Rome-v4</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <vendor>AMD</vendor>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <microcode version='16777317'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <signature family='23' model='49' stepping='0'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <topology sockets='8' dies='1' clusters='1' cores='1' threads='1'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <maxphysaddr mode='emulate' bits='40'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <feature name='x2apic'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <feature name='tsc-deadline'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <feature name='osxsave'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <feature name='hypervisor'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <feature name='tsc_adjust'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <feature name='spec-ctrl'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <feature name='stibp'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <feature name='arch-capabilities'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <feature name='ssbd'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <feature name='cmp_legacy'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <feature name='topoext'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <feature name='virt-ssbd'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <feature name='lbrv'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <feature name='tsc-scale'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <feature name='vmcb-clean'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <feature name='pause-filter'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <feature name='pfthreshold'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <feature name='svme-addr-chk'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <feature name='rdctl-no'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <feature name='skip-l1dfl-vmentry'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <feature name='mds-no'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <feature name='pschange-mc-no'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <pages unit='KiB' size='4'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <pages unit='KiB' size='2048'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <pages unit='KiB' size='1048576'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:     </cpu>
Dec 06 10:00:31 compute-0 nova_compute[253834]:     <power_management>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <suspend_mem/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:     </power_management>
Dec 06 10:00:31 compute-0 nova_compute[253834]:     <iommu support='no'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:     <migration_features>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <live/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <uri_transports>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <uri_transport>tcp</uri_transport>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <uri_transport>rdma</uri_transport>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </uri_transports>
Dec 06 10:00:31 compute-0 nova_compute[253834]:     </migration_features>
Dec 06 10:00:31 compute-0 nova_compute[253834]:     <topology>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <cells num='1'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <cell id='0'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:           <memory unit='KiB'>7864320</memory>
Dec 06 10:00:31 compute-0 nova_compute[253834]:           <pages unit='KiB' size='4'>1966080</pages>
Dec 06 10:00:31 compute-0 nova_compute[253834]:           <pages unit='KiB' size='2048'>0</pages>
Dec 06 10:00:31 compute-0 nova_compute[253834]:           <pages unit='KiB' size='1048576'>0</pages>
Dec 06 10:00:31 compute-0 nova_compute[253834]:           <distances>
Dec 06 10:00:31 compute-0 nova_compute[253834]:             <sibling id='0' value='10'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:           </distances>
Dec 06 10:00:31 compute-0 nova_compute[253834]:           <cpus num='8'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:             <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:             <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:             <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:             <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:             <cpu id='4' socket_id='4' die_id='4' cluster_id='65535' core_id='0' siblings='4'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:             <cpu id='5' socket_id='5' die_id='5' cluster_id='65535' core_id='0' siblings='5'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:             <cpu id='6' socket_id='6' die_id='6' cluster_id='65535' core_id='0' siblings='6'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:             <cpu id='7' socket_id='7' die_id='7' cluster_id='65535' core_id='0' siblings='7'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:           </cpus>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         </cell>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </cells>
Dec 06 10:00:31 compute-0 nova_compute[253834]:     </topology>
Dec 06 10:00:31 compute-0 nova_compute[253834]:     <cache>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <bank id='4' level='2' type='both' size='512' unit='KiB' cpus='4'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <bank id='5' level='2' type='both' size='512' unit='KiB' cpus='5'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <bank id='6' level='2' type='both' size='512' unit='KiB' cpus='6'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <bank id='7' level='2' type='both' size='512' unit='KiB' cpus='7'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <bank id='4' level='3' type='both' size='16' unit='MiB' cpus='4'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <bank id='5' level='3' type='both' size='16' unit='MiB' cpus='5'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <bank id='6' level='3' type='both' size='16' unit='MiB' cpus='6'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <bank id='7' level='3' type='both' size='16' unit='MiB' cpus='7'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:     </cache>
Dec 06 10:00:31 compute-0 nova_compute[253834]:     <secmodel>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model>selinux</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <doi>0</doi>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Dec 06 10:00:31 compute-0 nova_compute[253834]:     </secmodel>
Dec 06 10:00:31 compute-0 nova_compute[253834]:     <secmodel>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model>dac</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <doi>0</doi>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <baselabel type='kvm'>+107:+107</baselabel>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <baselabel type='qemu'>+107:+107</baselabel>
Dec 06 10:00:31 compute-0 nova_compute[253834]:     </secmodel>
Dec 06 10:00:31 compute-0 nova_compute[253834]:   </host>
Dec 06 10:00:31 compute-0 nova_compute[253834]: 
Dec 06 10:00:31 compute-0 nova_compute[253834]:   <guest>
Dec 06 10:00:31 compute-0 nova_compute[253834]:     <os_type>hvm</os_type>
Dec 06 10:00:31 compute-0 nova_compute[253834]:     <arch name='i686'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <wordsize>32</wordsize>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <domain type='qemu'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <domain type='kvm'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:     </arch>
Dec 06 10:00:31 compute-0 nova_compute[253834]:     <features>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <pae/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <nonpae/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <acpi default='on' toggle='yes'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <apic default='on' toggle='no'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <cpuselection/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <deviceboot/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <disksnapshot default='on' toggle='no'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <externalSnapshot/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:     </features>
Dec 06 10:00:31 compute-0 nova_compute[253834]:   </guest>
Dec 06 10:00:31 compute-0 nova_compute[253834]: 
Dec 06 10:00:31 compute-0 nova_compute[253834]:   <guest>
Dec 06 10:00:31 compute-0 nova_compute[253834]:     <os_type>hvm</os_type>
Dec 06 10:00:31 compute-0 nova_compute[253834]:     <arch name='x86_64'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <wordsize>64</wordsize>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <domain type='qemu'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <domain type='kvm'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:     </arch>
Dec 06 10:00:31 compute-0 nova_compute[253834]:     <features>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <acpi default='on' toggle='yes'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <apic default='on' toggle='no'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <cpuselection/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <deviceboot/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <disksnapshot default='on' toggle='no'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <externalSnapshot/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:     </features>
Dec 06 10:00:31 compute-0 nova_compute[253834]:   </guest>
Dec 06 10:00:31 compute-0 nova_compute[253834]: 
Dec 06 10:00:31 compute-0 nova_compute[253834]: </capabilities>
Dec 06 10:00:31 compute-0 nova_compute[253834]: 
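
[editor's note] nova parses the capabilities document above to learn the host CPU model, topology and supported page sizes. A small sketch of fetching and reading the same fields with libvirt-python and the standard-library XML parser (values in the comments are those reported above):

    import xml.etree.ElementTree as ET
    import libvirt

    conn = libvirt.openReadOnly('qemu:///system')
    root = ET.fromstring(conn.getCapabilities())
    cpu = root.find('./host/cpu')
    print(cpu.findtext('arch'), cpu.findtext('model'))   # x86_64 EPYC-Rome-v4
    topo = cpu.find('topology')
    print(topo.get('sockets'), topo.get('cores'), topo.get('threads'))  # 8 1 1
    for pages in cpu.findall('pages'):                   # 4 KiB, 2 MiB, 1 GiB
        print(pages.get('size'), pages.get('unit'))
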
Dec 06 10:00:31 compute-0 nova_compute[253834]: 2025-12-06 10:00:31.479 253838 DEBUG nova.virt.libvirt.host [None req-34c2fe1f-5334-41ea-b1ed-db8fbf6b6e5a - - - - - -] Getting domain capabilities for i686 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
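
[editor's note] The per-machine-type dump that follows comes from libvirt's getDomainCapabilities() API, one query per (arch, machine type) pair; nova iterates {'pc', 'q35'} as logged above. A sketch of the equivalent direct call, with arguments taken from the log:

    import libvirt

    conn = libvirt.openReadOnly('qemu:///system')
    caps = conn.getDomainCapabilities(
        '/usr/libexec/qemu-kvm',   # emulator binary
        'i686',                    # guest architecture
        'pc',                      # machine type
        'kvm')                     # virt type
    print(caps.splitlines()[0])    # <domainCapabilities>
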
Dec 06 10:00:31 compute-0 nova_compute[253834]: 2025-12-06 10:00:31.505 253838 DEBUG nova.virt.libvirt.host [None req-34c2fe1f-5334-41ea-b1ed-db8fbf6b6e5a - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Dec 06 10:00:31 compute-0 nova_compute[253834]: <domainCapabilities>
Dec 06 10:00:31 compute-0 nova_compute[253834]:   <path>/usr/libexec/qemu-kvm</path>
Dec 06 10:00:31 compute-0 nova_compute[253834]:   <domain>kvm</domain>
Dec 06 10:00:31 compute-0 nova_compute[253834]:   <machine>pc-i440fx-rhel7.6.0</machine>
Dec 06 10:00:31 compute-0 nova_compute[253834]:   <arch>i686</arch>
Dec 06 10:00:31 compute-0 nova_compute[253834]:   <vcpu max='240'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:   <iothreads supported='yes'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:   <os supported='yes'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:     <enum name='firmware'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:     <loader supported='yes'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <enum name='type'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>rom</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>pflash</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </enum>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <enum name='readonly'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>yes</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>no</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </enum>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <enum name='secure'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>no</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </enum>
Dec 06 10:00:31 compute-0 nova_compute[253834]:     </loader>
Dec 06 10:00:31 compute-0 nova_compute[253834]:   </os>
Dec 06 10:00:31 compute-0 nova_compute[253834]:   <cpu>
Dec 06 10:00:31 compute-0 nova_compute[253834]:     <mode name='host-passthrough' supported='yes'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <enum name='hostPassthroughMigratable'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>on</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>off</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </enum>
Dec 06 10:00:31 compute-0 nova_compute[253834]:     </mode>
Dec 06 10:00:31 compute-0 nova_compute[253834]:     <mode name='maximum' supported='yes'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <enum name='maximumMigratable'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>on</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>off</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </enum>
Dec 06 10:00:31 compute-0 nova_compute[253834]:     </mode>
Dec 06 10:00:31 compute-0 nova_compute[253834]:     <mode name='host-model' supported='yes'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model fallback='forbid'>EPYC-Rome</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <vendor>AMD</vendor>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <maxphysaddr mode='passthrough' limit='40'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <feature policy='require' name='x2apic'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <feature policy='require' name='tsc-deadline'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <feature policy='require' name='hypervisor'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <feature policy='require' name='tsc_adjust'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <feature policy='require' name='spec-ctrl'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <feature policy='require' name='stibp'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <feature policy='require' name='ssbd'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <feature policy='require' name='cmp_legacy'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <feature policy='require' name='overflow-recov'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <feature policy='require' name='succor'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <feature policy='require' name='ibrs'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <feature policy='require' name='amd-ssbd'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <feature policy='require' name='virt-ssbd'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <feature policy='require' name='lbrv'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <feature policy='require' name='tsc-scale'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <feature policy='require' name='vmcb-clean'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <feature policy='require' name='flushbyasid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <feature policy='require' name='pause-filter'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <feature policy='require' name='pfthreshold'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <feature policy='require' name='svme-addr-chk'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <feature policy='require' name='lfence-always-serializing'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <feature policy='disable' name='xsaves'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:     </mode>
Dec 06 10:00:31 compute-0 nova_compute[253834]:     <mode name='custom' supported='yes'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='Broadwell'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='hle'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='invpcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='rtm'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='Broadwell-IBRS'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='hle'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='invpcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='rtm'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='Broadwell-noTSX'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='invpcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='Broadwell-noTSX-IBRS'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='invpcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='Broadwell-v1'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='hle'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='invpcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='rtm'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='Broadwell-v2'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='invpcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='Broadwell-v3'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='hle'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='invpcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='rtm'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='Broadwell-v4'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='invpcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='Cascadelake-Server'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512bw'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512cd'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512dq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512f'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vl'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vnni'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='hle'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='invpcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pku'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='rtm'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='Cascadelake-Server-noTSX'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512bw'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512cd'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512dq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512f'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vl'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vnni'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='ibrs-all'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='invpcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pku'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='Cascadelake-Server-v1'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512bw'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512cd'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512dq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512f'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vl'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vnni'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='hle'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='invpcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pku'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='rtm'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='Cascadelake-Server-v2'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512bw'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512cd'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512dq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512f'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vl'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vnni'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='hle'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='ibrs-all'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='invpcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pku'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='rtm'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='Cascadelake-Server-v3'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512bw'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512cd'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512dq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512f'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vl'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vnni'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='ibrs-all'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='invpcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pku'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='Cascadelake-Server-v4'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512bw'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512cd'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512dq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512f'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vl'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vnni'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='ibrs-all'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='invpcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pku'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='Cascadelake-Server-v5'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512bw'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512cd'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512dq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512f'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vl'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vnni'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='ibrs-all'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='invpcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pku'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='xsaves'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='Cooperlake'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512-bf16'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512bw'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512cd'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512dq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512f'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vl'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vnni'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='hle'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='ibrs-all'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='invpcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pku'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='rtm'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='taa-no'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='Cooperlake-v1'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512-bf16'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512bw'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512cd'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512dq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512f'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vl'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vnni'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='hle'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='ibrs-all'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='invpcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pku'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='rtm'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='taa-no'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='Cooperlake-v2'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512-bf16'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512bw'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512cd'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512dq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512f'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vl'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vnni'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='hle'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='ibrs-all'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='invpcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pku'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='rtm'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='taa-no'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='xsaves'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='Denverton'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='mpx'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='Denverton-v1'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='mpx'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='Denverton-v2'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='Denverton-v3'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='xsaves'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='Dhyana-v2'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='xsaves'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='EPYC-Genoa'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='amd-psfd'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='auto-ibrs'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512-bf16'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512-vpopcntdq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512bitalg'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512bw'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512cd'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512dq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512f'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512ifma'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vbmi'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vbmi2'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vl'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vnni'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='fsrm'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='gfni'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='invpcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='la57'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='no-nested-data-bp'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='null-sel-clr-base'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pku'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='stibp-always-on'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='vaes'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='vpclmulqdq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='xsaves'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='EPYC-Genoa-v1'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='amd-psfd'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='auto-ibrs'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512-bf16'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512-vpopcntdq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512bitalg'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512bw'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512cd'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512dq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512f'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512ifma'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vbmi'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vbmi2'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vl'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vnni'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='fsrm'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='gfni'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='invpcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='la57'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='no-nested-data-bp'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='null-sel-clr-base'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pku'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='stibp-always-on'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='vaes'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='vpclmulqdq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='xsaves'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='EPYC-Milan'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='fsrm'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='invpcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pku'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='xsaves'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='EPYC-Milan-v1'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='fsrm'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='invpcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pku'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='xsaves'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='EPYC-Milan-v2'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='amd-psfd'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='fsrm'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='invpcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='no-nested-data-bp'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='null-sel-clr-base'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pku'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='stibp-always-on'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='vaes'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='vpclmulqdq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='xsaves'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='EPYC-Rome'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='xsaves'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='EPYC-Rome-v1'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='xsaves'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='EPYC-Rome-v2'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='xsaves'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='EPYC-Rome-v3'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='xsaves'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='EPYC-v3'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='xsaves'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='EPYC-v4'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='xsaves'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='GraniteRapids'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='amx-bf16'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='amx-fp16'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='amx-int8'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='amx-tile'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx-vnni'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512-bf16'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512-fp16'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512-vpopcntdq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512bitalg'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512bw'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512cd'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512dq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512f'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512ifma'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vbmi'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vbmi2'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vl'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vnni'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='bus-lock-detect'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='fbsdp-no'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='fsrc'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='fsrm'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='fsrs'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='fzrm'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='gfni'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='hle'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='ibrs-all'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='invpcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='la57'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='mcdt-no'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pbrsb-no'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pku'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='prefetchiti'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='psdp-no'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='rtm'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='sbdr-ssdp-no'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='serialize'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='taa-no'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='tsx-ldtrk'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='vaes'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='vpclmulqdq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='xfd'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='xsaves'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='GraniteRapids-v1'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='amx-bf16'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='amx-fp16'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='amx-int8'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='amx-tile'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx-vnni'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512-bf16'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512-fp16'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512-vpopcntdq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512bitalg'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512bw'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512cd'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512dq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512f'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512ifma'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vbmi'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vbmi2'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vl'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vnni'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='bus-lock-detect'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='fbsdp-no'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='fsrc'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='fsrm'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='fsrs'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='fzrm'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='gfni'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='hle'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='ibrs-all'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='invpcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='la57'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='mcdt-no'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pbrsb-no'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pku'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='prefetchiti'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='psdp-no'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='rtm'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='sbdr-ssdp-no'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='serialize'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='taa-no'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='tsx-ldtrk'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='vaes'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='vpclmulqdq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='xfd'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='xsaves'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='GraniteRapids-v2'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='amx-bf16'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='amx-fp16'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='amx-int8'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='amx-tile'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx-vnni'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx10'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx10-128'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx10-256'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx10-512'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512-bf16'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512-fp16'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512-vpopcntdq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512bitalg'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512bw'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512cd'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512dq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512f'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512ifma'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vbmi'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vbmi2'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vl'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vnni'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='bus-lock-detect'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='cldemote'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='fbsdp-no'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='fsrc'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='fsrm'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='fsrs'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='fzrm'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='gfni'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='hle'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='ibrs-all'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='invpcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='la57'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='mcdt-no'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='movdir64b'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='movdiri'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pbrsb-no'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pku'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='prefetchiti'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='psdp-no'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='rtm'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='sbdr-ssdp-no'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='serialize'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='ss'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='taa-no'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='tsx-ldtrk'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='vaes'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='vpclmulqdq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='xfd'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='xsaves'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='Haswell'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='hle'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='invpcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='rtm'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='Haswell-IBRS'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='hle'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='invpcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='rtm'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='Haswell-noTSX'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='invpcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='Haswell-noTSX-IBRS'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='invpcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='Haswell-v1'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='hle'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='invpcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='rtm'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='Haswell-v2'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='invpcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='Haswell-v3'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='hle'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='invpcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='rtm'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='Haswell-v4'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='invpcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='Icelake-Server'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512-vpopcntdq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512bitalg'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512bw'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512cd'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512dq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512f'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vbmi'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vbmi2'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vl'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vnni'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='gfni'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='hle'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='invpcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='la57'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pku'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='rtm'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='vaes'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='vpclmulqdq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='Icelake-Server-noTSX'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512-vpopcntdq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512bitalg'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512bw'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512cd'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512dq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512f'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vbmi'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vbmi2'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vl'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vnni'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='gfni'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='invpcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='la57'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pku'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='vaes'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='vpclmulqdq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='Icelake-Server-v1'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512-vpopcntdq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512bitalg'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512bw'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512cd'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512dq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512f'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vbmi'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vbmi2'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vl'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vnni'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='gfni'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='hle'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='invpcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='la57'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pku'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='rtm'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='vaes'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='vpclmulqdq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='Icelake-Server-v2'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512-vpopcntdq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512bitalg'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512bw'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512cd'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512dq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512f'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vbmi'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vbmi2'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vl'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vnni'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='gfni'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='invpcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='la57'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pku'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='vaes'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='vpclmulqdq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='Icelake-Server-v3'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512-vpopcntdq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512bitalg'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512bw'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512cd'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512dq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512f'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vbmi'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vbmi2'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vl'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vnni'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='gfni'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='ibrs-all'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='invpcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='la57'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pku'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='taa-no'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='vaes'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='vpclmulqdq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='Icelake-Server-v4'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512-vpopcntdq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512bitalg'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512bw'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512cd'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512dq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512f'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512ifma'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vbmi'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vbmi2'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vl'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vnni'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='fsrm'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='gfni'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='ibrs-all'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='invpcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='la57'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pku'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='taa-no'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='vaes'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='vpclmulqdq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='Icelake-Server-v5'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512-vpopcntdq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512bitalg'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512bw'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512cd'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512dq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512f'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512ifma'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vbmi'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vbmi2'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vl'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vnni'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='fsrm'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='gfni'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='ibrs-all'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='invpcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='la57'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pku'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='taa-no'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='vaes'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='vpclmulqdq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='xsaves'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='Icelake-Server-v6'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512-vpopcntdq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512bitalg'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512bw'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512cd'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512dq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512f'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512ifma'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vbmi'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vbmi2'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vl'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vnni'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='fsrm'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='gfni'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='ibrs-all'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='invpcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='la57'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pku'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='taa-no'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='vaes'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='vpclmulqdq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='xsaves'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='Icelake-Server-v7'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512-vpopcntdq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512bitalg'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512bw'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512cd'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512dq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512f'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512ifma'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vbmi'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vbmi2'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vl'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vnni'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='fsrm'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='gfni'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='hle'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='ibrs-all'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='invpcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='la57'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pku'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='rtm'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='taa-no'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='vaes'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='vpclmulqdq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='xsaves'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='IvyBridge'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='IvyBridge-IBRS'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='IvyBridge-v1'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='IvyBridge-v2'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='KnightsMill'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512-4fmaps'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512-4vnniw'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512-vpopcntdq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512cd'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512er'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512f'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512pf'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='ss'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='KnightsMill-v1'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512-4fmaps'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512-4vnniw'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512-vpopcntdq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512cd'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512er'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512f'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512pf'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='ss'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='Opteron_G4'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='fma4'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='xop'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='Opteron_G4-v1'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='fma4'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='xop'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='Opteron_G5'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='fma4'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='tbm'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='xop'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='Opteron_G5-v1'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='fma4'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='tbm'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='xop'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='SapphireRapids'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='amx-bf16'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='amx-int8'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='amx-tile'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx-vnni'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512-bf16'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512-fp16'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512-vpopcntdq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512bitalg'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512bw'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512cd'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512dq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512f'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512ifma'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vbmi'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vbmi2'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vl'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vnni'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='bus-lock-detect'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='fsrc'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='fsrm'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='fsrs'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='fzrm'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='gfni'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='hle'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='ibrs-all'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='invpcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='la57'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pku'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='rtm'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='serialize'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='taa-no'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='tsx-ldtrk'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='vaes'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='vpclmulqdq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='xfd'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='xsaves'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='SapphireRapids-v1'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='amx-bf16'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='amx-int8'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='amx-tile'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx-vnni'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512-bf16'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512-fp16'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512-vpopcntdq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512bitalg'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512bw'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512cd'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512dq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512f'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512ifma'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vbmi'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vbmi2'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vl'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vnni'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='bus-lock-detect'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='fsrc'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='fsrm'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='fsrs'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='fzrm'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='gfni'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='hle'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='ibrs-all'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='invpcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='la57'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pku'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='rtm'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='serialize'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='taa-no'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='tsx-ldtrk'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='vaes'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='vpclmulqdq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='xfd'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='xsaves'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='SapphireRapids-v2'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='amx-bf16'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='amx-int8'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='amx-tile'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx-vnni'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512-bf16'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512-fp16'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512-vpopcntdq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512bitalg'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512bw'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512cd'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512dq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512f'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512ifma'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vbmi'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vbmi2'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vl'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vnni'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='bus-lock-detect'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='fbsdp-no'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='fsrc'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='fsrm'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='fsrs'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='fzrm'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='gfni'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='hle'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='ibrs-all'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='invpcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='la57'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pku'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='psdp-no'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='rtm'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='sbdr-ssdp-no'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='serialize'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='taa-no'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='tsx-ldtrk'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='vaes'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='vpclmulqdq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='xfd'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='xsaves'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='SapphireRapids-v3'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='amx-bf16'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='amx-int8'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='amx-tile'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx-vnni'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512-bf16'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512-fp16'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512-vpopcntdq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512bitalg'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512bw'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512cd'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512dq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512f'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512ifma'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vbmi'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vbmi2'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vl'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vnni'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='bus-lock-detect'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='cldemote'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='fbsdp-no'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='fsrc'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='fsrm'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='fsrs'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='fzrm'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='gfni'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='hle'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='ibrs-all'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='invpcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='la57'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='movdir64b'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='movdiri'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pku'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='psdp-no'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='rtm'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='sbdr-ssdp-no'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='serialize'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='ss'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='taa-no'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='tsx-ldtrk'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='vaes'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='vpclmulqdq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='xfd'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='xsaves'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='SierraForest'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx-ifma'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx-ne-convert'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx-vnni'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx-vnni-int8'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='bus-lock-detect'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='cmpccxadd'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='fbsdp-no'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='fsrm'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='fsrs'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='gfni'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='ibrs-all'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='invpcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='mcdt-no'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pbrsb-no'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pku'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='psdp-no'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='sbdr-ssdp-no'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='serialize'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='vaes'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='vpclmulqdq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='xsaves'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='SierraForest-v1'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx-ifma'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx-ne-convert'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx-vnni'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx-vnni-int8'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='bus-lock-detect'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='cmpccxadd'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='fbsdp-no'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='fsrm'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='fsrs'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='gfni'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='ibrs-all'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='invpcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='mcdt-no'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pbrsb-no'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pku'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='psdp-no'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='sbdr-ssdp-no'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='serialize'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='vaes'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='vpclmulqdq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='xsaves'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='Skylake-Client'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='hle'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='invpcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='rtm'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='Skylake-Client-IBRS'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='hle'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='invpcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='rtm'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='invpcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='Skylake-Client-v1'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='hle'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='invpcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='rtm'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='Skylake-Client-v2'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='hle'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='invpcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='rtm'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='Skylake-Client-v3'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='invpcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='Skylake-Client-v4'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='invpcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='xsaves'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='Skylake-Server'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512bw'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512cd'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512dq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512f'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vl'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='hle'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='invpcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pku'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='rtm'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='Skylake-Server-IBRS'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512bw'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512cd'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512dq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512f'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vl'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='hle'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='invpcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pku'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='rtm'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512bw'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512cd'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512dq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512f'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vl'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='invpcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pku'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='Skylake-Server-v1'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512bw'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512cd'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512dq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512f'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vl'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='hle'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='invpcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pku'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='rtm'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='Skylake-Server-v2'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512bw'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512cd'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512dq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512f'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vl'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='hle'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='invpcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pku'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='rtm'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='Skylake-Server-v3'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512bw'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512cd'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512dq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512f'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vl'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='invpcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pku'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='Skylake-Server-v4'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512bw'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512cd'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512dq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512f'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vl'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='invpcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pku'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='Skylake-Server-v5'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512bw'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512cd'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512dq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512f'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vl'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='invpcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pku'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='xsaves'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='Snowridge'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='cldemote'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='core-capability'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='gfni'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='movdir64b'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='movdiri'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='mpx'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='split-lock-detect'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='Snowridge-v1'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='cldemote'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='core-capability'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='gfni'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='movdir64b'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='movdiri'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='mpx'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='split-lock-detect'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='Snowridge-v2'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='cldemote'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='core-capability'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='gfni'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='movdir64b'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='movdiri'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='split-lock-detect'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='Snowridge-v3'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='cldemote'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='core-capability'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='gfni'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='movdir64b'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='movdiri'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='split-lock-detect'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='xsaves'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='Snowridge-v4'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='cldemote'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='gfni'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='movdir64b'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='movdiri'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='xsaves'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='athlon'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='3dnow'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='3dnowext'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='athlon-v1'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='3dnow'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='3dnowext'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='core2duo'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='ss'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='core2duo-v1'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='ss'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='coreduo'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='ss'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='coreduo-v1'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='ss'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='n270'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='ss'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='n270-v1'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='ss'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='phenom'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='3dnow'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='3dnowext'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='phenom-v1'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='3dnow'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='3dnowext'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:     </mode>
Dec 06 10:00:31 compute-0 nova_compute[253834]:   </cpu>
Dec 06 10:00:31 compute-0 nova_compute[253834]:   <memoryBacking supported='yes'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:     <enum name='sourceType'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <value>file</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <value>anonymous</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <value>memfd</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:     </enum>
Dec 06 10:00:31 compute-0 nova_compute[253834]:   </memoryBacking>
Dec 06 10:00:31 compute-0 nova_compute[253834]:   <devices>
Dec 06 10:00:31 compute-0 nova_compute[253834]:     <disk supported='yes'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <enum name='diskDevice'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>disk</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>cdrom</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>floppy</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>lun</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </enum>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <enum name='bus'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>ide</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>fdc</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>scsi</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>virtio</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>usb</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>sata</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </enum>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <enum name='model'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>virtio</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>virtio-transitional</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>virtio-non-transitional</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </enum>
Dec 06 10:00:31 compute-0 nova_compute[253834]:     </disk>
Dec 06 10:00:31 compute-0 nova_compute[253834]:     <graphics supported='yes'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <enum name='type'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>vnc</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>egl-headless</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>dbus</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </enum>
Dec 06 10:00:31 compute-0 nova_compute[253834]:     </graphics>
Dec 06 10:00:31 compute-0 nova_compute[253834]:     <video supported='yes'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <enum name='modelType'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>vga</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>cirrus</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>virtio</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>none</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>bochs</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>ramfb</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </enum>
Dec 06 10:00:31 compute-0 nova_compute[253834]:     </video>
Dec 06 10:00:31 compute-0 nova_compute[253834]:     <hostdev supported='yes'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <enum name='mode'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>subsystem</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </enum>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <enum name='startupPolicy'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>default</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>mandatory</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>requisite</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>optional</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </enum>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <enum name='subsysType'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>usb</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>pci</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>scsi</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </enum>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <enum name='capsType'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <enum name='pciBackend'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:     </hostdev>
Dec 06 10:00:31 compute-0 nova_compute[253834]:     <rng supported='yes'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <enum name='model'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>virtio</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>virtio-transitional</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>virtio-non-transitional</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </enum>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <enum name='backendModel'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>random</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>egd</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>builtin</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </enum>
Dec 06 10:00:31 compute-0 nova_compute[253834]:     </rng>
Dec 06 10:00:31 compute-0 nova_compute[253834]:     <filesystem supported='yes'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <enum name='driverType'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>path</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>handle</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>virtiofs</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </enum>
Dec 06 10:00:31 compute-0 nova_compute[253834]:     </filesystem>
Dec 06 10:00:31 compute-0 nova_compute[253834]:     <tpm supported='yes'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <enum name='model'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>tpm-tis</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>tpm-crb</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </enum>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <enum name='backendModel'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>emulator</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>external</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </enum>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <enum name='backendVersion'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>2.0</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </enum>
Dec 06 10:00:31 compute-0 nova_compute[253834]:     </tpm>
Dec 06 10:00:31 compute-0 nova_compute[253834]:     <redirdev supported='yes'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <enum name='bus'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>usb</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </enum>
Dec 06 10:00:31 compute-0 nova_compute[253834]:     </redirdev>
Dec 06 10:00:31 compute-0 nova_compute[253834]:     <channel supported='yes'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <enum name='type'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>pty</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>unix</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </enum>
Dec 06 10:00:31 compute-0 nova_compute[253834]:     </channel>
Dec 06 10:00:31 compute-0 nova_compute[253834]:     <crypto supported='yes'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <enum name='model'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <enum name='type'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>qemu</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </enum>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <enum name='backendModel'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>builtin</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </enum>
Dec 06 10:00:31 compute-0 nova_compute[253834]:     </crypto>
Dec 06 10:00:31 compute-0 nova_compute[253834]:     <interface supported='yes'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <enum name='backendType'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>default</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>passt</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </enum>
Dec 06 10:00:31 compute-0 nova_compute[253834]:     </interface>
Dec 06 10:00:31 compute-0 nova_compute[253834]:     <panic supported='yes'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <enum name='model'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>isa</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>hyperv</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </enum>
Dec 06 10:00:31 compute-0 nova_compute[253834]:     </panic>
Dec 06 10:00:31 compute-0 nova_compute[253834]:     <console supported='yes'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <enum name='type'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>null</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>vc</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>pty</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>dev</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>file</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>pipe</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>stdio</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>udp</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>tcp</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>unix</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>qemu-vdagent</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>dbus</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </enum>
Dec 06 10:00:31 compute-0 nova_compute[253834]:     </console>
Dec 06 10:00:31 compute-0 nova_compute[253834]:   </devices>
Dec 06 10:00:31 compute-0 nova_compute[253834]:   <features>
Dec 06 10:00:31 compute-0 nova_compute[253834]:     <gic supported='no'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:     <vmcoreinfo supported='yes'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:     <genid supported='yes'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:     <backingStoreInput supported='yes'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:     <backup supported='yes'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:     <async-teardown supported='yes'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:     <ps2 supported='yes'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:     <sev supported='no'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:     <sgx supported='no'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:     <hyperv supported='yes'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <enum name='features'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>relaxed</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>vapic</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>spinlocks</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>vpindex</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>runtime</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>synic</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>stimer</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>reset</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>vendor_id</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>frequencies</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>reenlightenment</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>tlbflush</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>ipi</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>avic</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>emsr_bitmap</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>xmm_input</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </enum>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <defaults>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <spinlocks>4095</spinlocks>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <stimer_direct>on</stimer_direct>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <tlbflush_direct>on</tlbflush_direct>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <tlbflush_extended>on</tlbflush_extended>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <vendor_id>Linux KVM Hv</vendor_id>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </defaults>
Dec 06 10:00:31 compute-0 nova_compute[253834]:     </hyperv>
Dec 06 10:00:31 compute-0 nova_compute[253834]:     <launchSecurity supported='yes'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <enum name='sectype'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>tdx</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </enum>
Dec 06 10:00:31 compute-0 nova_compute[253834]:     </launchSecurity>
Dec 06 10:00:31 compute-0 nova_compute[253834]:   </features>
Dec 06 10:00:31 compute-0 nova_compute[253834]: </domainCapabilities>
Dec 06 10:00:31 compute-0 nova_compute[253834]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
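The dump that closes above (and the one that begins below for arch=i686) is a libvirt domainCapabilities document, which nova's _get_domain_capabilities (host.py:1037 in the trailer) fetches once per (arch, machine_type) pair and logs at DEBUG. A minimal sketch, assuming the standard libvirt Python bindings and the qemu:///system URI rather than nova's own connection handling, of retrieving the same document and listing the custom-mode CPU models this host can actually run:

# A minimal sketch (not nova's code path): fetch the same domainCapabilities
# XML via the libvirt Python bindings and list the custom-mode CPU models
# reported as usable. The qemu:///system URI is an assumption; nova manages
# its own libvirt connection internally.
import xml.etree.ElementTree as ET

import libvirt

conn = libvirt.open('qemu:///system')
xml_str = conn.getDomainCapabilities(
    '/usr/libexec/qemu-kvm',   # emulator binary, as in <path> above
    'i686',                    # arch, matching the second dump below
    'q35',                     # machine type (resolves to pc-q35-rhel9.8.0)
    'kvm',                     # virt type, as in <domain>kvm</domain>
    0)                         # flags
caps = ET.fromstring(xml_str)
usable = [m.text for m in caps.findall(".//mode[@name='custom']/model")
          if m.get('usable') == 'yes']
print(sorted(usable))          # e.g. Conroe, Dhyana, EPYC, EPYC-Rome-v4, ...
conn.close()

Models reported with usable='no' carry a matching <blockers> element naming the host CPU features they require but this host lacks; that is why, in the dump below, EPYC-Rome is blocked only on xsaves while the Broadwell variants are blocked on erms/hle/invpcid/pcid/rtm. The same document can be viewed interactively with virsh domcapabilities --virttype kvm --arch i686 --machine q35 --emulatorbin /usr/libexec/qemu-kvm.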
Dec 06 10:00:31 compute-0 nova_compute[253834]: 2025-12-06 10:00:31.512 253838 DEBUG nova.virt.libvirt.host [None req-34c2fe1f-5334-41ea-b1ed-db8fbf6b6e5a - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Dec 06 10:00:31 compute-0 nova_compute[253834]: <domainCapabilities>
Dec 06 10:00:31 compute-0 nova_compute[253834]:   <path>/usr/libexec/qemu-kvm</path>
Dec 06 10:00:31 compute-0 nova_compute[253834]:   <domain>kvm</domain>
Dec 06 10:00:31 compute-0 nova_compute[253834]:   <machine>pc-q35-rhel9.8.0</machine>
Dec 06 10:00:31 compute-0 nova_compute[253834]:   <arch>i686</arch>
Dec 06 10:00:31 compute-0 nova_compute[253834]:   <vcpu max='4096'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:   <iothreads supported='yes'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:   <os supported='yes'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:     <enum name='firmware'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:     <loader supported='yes'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <enum name='type'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>rom</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>pflash</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </enum>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <enum name='readonly'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>yes</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>no</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </enum>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <enum name='secure'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>no</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </enum>
Dec 06 10:00:31 compute-0 nova_compute[253834]:     </loader>
Dec 06 10:00:31 compute-0 nova_compute[253834]:   </os>
Dec 06 10:00:31 compute-0 nova_compute[253834]:   <cpu>
Dec 06 10:00:31 compute-0 nova_compute[253834]:     <mode name='host-passthrough' supported='yes'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <enum name='hostPassthroughMigratable'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>on</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>off</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </enum>
Dec 06 10:00:31 compute-0 nova_compute[253834]:     </mode>
Dec 06 10:00:31 compute-0 nova_compute[253834]:     <mode name='maximum' supported='yes'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <enum name='maximumMigratable'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>on</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>off</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </enum>
Dec 06 10:00:31 compute-0 nova_compute[253834]:     </mode>
Dec 06 10:00:31 compute-0 nova_compute[253834]:     <mode name='host-model' supported='yes'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model fallback='forbid'>EPYC-Rome</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <vendor>AMD</vendor>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <maxphysaddr mode='passthrough' limit='40'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <feature policy='require' name='x2apic'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <feature policy='require' name='tsc-deadline'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <feature policy='require' name='hypervisor'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <feature policy='require' name='tsc_adjust'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <feature policy='require' name='spec-ctrl'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <feature policy='require' name='stibp'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <feature policy='require' name='ssbd'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <feature policy='require' name='cmp_legacy'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <feature policy='require' name='overflow-recov'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <feature policy='require' name='succor'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <feature policy='require' name='ibrs'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <feature policy='require' name='amd-ssbd'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <feature policy='require' name='virt-ssbd'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <feature policy='require' name='lbrv'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <feature policy='require' name='tsc-scale'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <feature policy='require' name='vmcb-clean'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <feature policy='require' name='flushbyasid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <feature policy='require' name='pause-filter'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <feature policy='require' name='pfthreshold'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <feature policy='require' name='svme-addr-chk'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <feature policy='require' name='lfence-always-serializing'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <feature policy='disable' name='xsaves'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:     </mode>
Dec 06 10:00:31 compute-0 nova_compute[253834]:     <mode name='custom' supported='yes'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='Broadwell'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='hle'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='invpcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='rtm'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='Broadwell-IBRS'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='hle'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='invpcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='rtm'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='Broadwell-noTSX'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='invpcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='Broadwell-noTSX-IBRS'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='invpcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='Broadwell-v1'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='hle'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='invpcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='rtm'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='Broadwell-v2'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='invpcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='Broadwell-v3'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='hle'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='invpcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='rtm'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='Broadwell-v4'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='invpcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='Cascadelake-Server'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512bw'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512cd'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512dq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512f'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vl'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vnni'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='hle'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='invpcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pku'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='rtm'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='Cascadelake-Server-noTSX'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512bw'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512cd'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512dq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512f'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vl'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vnni'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='ibrs-all'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='invpcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pku'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='Cascadelake-Server-v1'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512bw'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512cd'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512dq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512f'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vl'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vnni'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='hle'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='invpcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pku'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='rtm'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='Cascadelake-Server-v2'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512bw'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512cd'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512dq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512f'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vl'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vnni'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='hle'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='ibrs-all'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='invpcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pku'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='rtm'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='Cascadelake-Server-v3'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512bw'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512cd'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512dq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512f'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vl'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vnni'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='ibrs-all'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='invpcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pku'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='Cascadelake-Server-v4'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512bw'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512cd'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512dq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512f'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vl'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vnni'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='ibrs-all'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='invpcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pku'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='Cascadelake-Server-v5'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512bw'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512cd'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512dq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512f'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vl'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vnni'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='ibrs-all'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='invpcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pku'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='xsaves'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='Cooperlake'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512-bf16'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512bw'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512cd'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512dq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512f'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vl'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vnni'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='hle'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='ibrs-all'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='invpcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pku'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='rtm'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='taa-no'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='Cooperlake-v1'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512-bf16'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512bw'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512cd'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512dq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512f'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vl'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vnni'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='hle'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='ibrs-all'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='invpcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pku'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='rtm'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='taa-no'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='Cooperlake-v2'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512-bf16'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512bw'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512cd'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512dq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512f'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vl'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vnni'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='hle'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='ibrs-all'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='invpcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pku'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='rtm'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='taa-no'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='xsaves'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='Denverton'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='mpx'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='Denverton-v1'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='mpx'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='Denverton-v2'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='Denverton-v3'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='xsaves'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='Dhyana-v2'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='xsaves'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='EPYC-Genoa'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='amd-psfd'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='auto-ibrs'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512-bf16'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512-vpopcntdq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512bitalg'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512bw'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512cd'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512dq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512f'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512ifma'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vbmi'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vbmi2'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vl'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vnni'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='fsrm'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='gfni'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='invpcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='la57'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='no-nested-data-bp'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='null-sel-clr-base'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pku'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='stibp-always-on'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='vaes'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='vpclmulqdq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='xsaves'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='EPYC-Genoa-v1'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='amd-psfd'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='auto-ibrs'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512-bf16'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512-vpopcntdq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512bitalg'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512bw'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512cd'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512dq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512f'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512ifma'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vbmi'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vbmi2'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vl'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vnni'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='fsrm'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='gfni'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='invpcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='la57'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='no-nested-data-bp'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='null-sel-clr-base'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pku'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='stibp-always-on'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='vaes'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='vpclmulqdq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='xsaves'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='EPYC-Milan'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='fsrm'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='invpcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pku'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='xsaves'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='EPYC-Milan-v1'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='fsrm'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='invpcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pku'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='xsaves'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='EPYC-Milan-v2'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='amd-psfd'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='fsrm'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='invpcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='no-nested-data-bp'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='null-sel-clr-base'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pku'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='stibp-always-on'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='vaes'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='vpclmulqdq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='xsaves'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='EPYC-Rome'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='xsaves'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='EPYC-Rome-v1'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='xsaves'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='EPYC-Rome-v2'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='xsaves'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='EPYC-Rome-v3'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='xsaves'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='EPYC-v3'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='xsaves'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='EPYC-v4'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='xsaves'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='GraniteRapids'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='amx-bf16'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='amx-fp16'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='amx-int8'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='amx-tile'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx-vnni'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512-bf16'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512-fp16'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512-vpopcntdq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512bitalg'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512bw'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512cd'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512dq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512f'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512ifma'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vbmi'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vbmi2'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vl'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vnni'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='bus-lock-detect'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='fbsdp-no'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='fsrc'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='fsrm'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='fsrs'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='fzrm'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='gfni'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='hle'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='ibrs-all'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='invpcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='la57'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='mcdt-no'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pbrsb-no'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pku'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='prefetchiti'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='psdp-no'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='rtm'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='sbdr-ssdp-no'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='serialize'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='taa-no'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='tsx-ldtrk'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='vaes'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='vpclmulqdq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='xfd'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='xsaves'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='GraniteRapids-v1'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='amx-bf16'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='amx-fp16'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='amx-int8'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='amx-tile'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx-vnni'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512-bf16'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512-fp16'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512-vpopcntdq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512bitalg'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512bw'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512cd'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512dq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512f'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512ifma'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vbmi'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vbmi2'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vl'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vnni'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='bus-lock-detect'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='fbsdp-no'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='fsrc'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='fsrm'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='fsrs'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='fzrm'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='gfni'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='hle'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='ibrs-all'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='invpcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='la57'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='mcdt-no'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pbrsb-no'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pku'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='prefetchiti'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='psdp-no'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='rtm'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='sbdr-ssdp-no'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='serialize'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='taa-no'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='tsx-ldtrk'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='vaes'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='vpclmulqdq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='xfd'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='xsaves'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='GraniteRapids-v2'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='amx-bf16'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='amx-fp16'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='amx-int8'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='amx-tile'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx-vnni'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx10'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx10-128'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx10-256'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx10-512'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512-bf16'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512-fp16'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512-vpopcntdq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512bitalg'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512bw'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512cd'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512dq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512f'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512ifma'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vbmi'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vbmi2'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vl'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vnni'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='bus-lock-detect'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='cldemote'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='fbsdp-no'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='fsrc'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='fsrm'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='fsrs'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='fzrm'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='gfni'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='hle'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='ibrs-all'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='invpcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='la57'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='mcdt-no'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='movdir64b'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='movdiri'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pbrsb-no'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pku'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='prefetchiti'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='psdp-no'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='rtm'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='sbdr-ssdp-no'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='serialize'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='ss'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='taa-no'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='tsx-ldtrk'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='vaes'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='vpclmulqdq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='xfd'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='xsaves'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='Haswell'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='hle'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='invpcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='rtm'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='Haswell-IBRS'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='hle'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='invpcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='rtm'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='Haswell-noTSX'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='invpcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='Haswell-noTSX-IBRS'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='invpcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='Haswell-v1'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='hle'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='invpcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='rtm'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='Haswell-v2'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='invpcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='Haswell-v3'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='hle'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='invpcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='rtm'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='Haswell-v4'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='invpcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='Icelake-Server'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512-vpopcntdq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512bitalg'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512bw'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512cd'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512dq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512f'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vbmi'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vbmi2'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vl'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vnni'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='gfni'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='hle'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='invpcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='la57'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pku'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='rtm'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='vaes'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='vpclmulqdq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='Icelake-Server-noTSX'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512-vpopcntdq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512bitalg'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512bw'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512cd'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512dq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512f'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vbmi'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vbmi2'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vl'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vnni'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='gfni'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='invpcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='la57'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pku'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='vaes'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='vpclmulqdq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='Icelake-Server-v1'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512-vpopcntdq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512bitalg'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512bw'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512cd'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512dq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512f'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vbmi'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vbmi2'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vl'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vnni'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='gfni'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='hle'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='invpcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='la57'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pku'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='rtm'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='vaes'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='vpclmulqdq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='Icelake-Server-v2'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512-vpopcntdq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512bitalg'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512bw'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512cd'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512dq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512f'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vbmi'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vbmi2'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vl'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vnni'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='gfni'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='invpcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='la57'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pku'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='vaes'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='vpclmulqdq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='Icelake-Server-v3'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512-vpopcntdq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512bitalg'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512bw'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512cd'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512dq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512f'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vbmi'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vbmi2'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vl'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vnni'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='gfni'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='ibrs-all'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='invpcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='la57'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pku'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='taa-no'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='vaes'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='vpclmulqdq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='Icelake-Server-v4'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512-vpopcntdq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512bitalg'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512bw'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512cd'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512dq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512f'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512ifma'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vbmi'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vbmi2'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vl'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vnni'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='fsrm'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='gfni'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='ibrs-all'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='invpcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='la57'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pku'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='taa-no'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='vaes'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='vpclmulqdq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='Icelake-Server-v5'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512-vpopcntdq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512bitalg'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512bw'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512cd'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512dq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512f'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512ifma'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vbmi'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vbmi2'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vl'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vnni'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='fsrm'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='gfni'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='ibrs-all'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='invpcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='la57'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pku'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='taa-no'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='vaes'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='vpclmulqdq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='xsaves'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='Icelake-Server-v6'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512-vpopcntdq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512bitalg'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512bw'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512cd'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512dq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512f'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512ifma'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vbmi'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vbmi2'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vl'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vnni'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='fsrm'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='gfni'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='ibrs-all'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='invpcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='la57'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pku'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='taa-no'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='vaes'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='vpclmulqdq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='xsaves'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='Icelake-Server-v7'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512-vpopcntdq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512bitalg'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512bw'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512cd'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512dq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512f'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512ifma'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vbmi'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vbmi2'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vl'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vnni'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='fsrm'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='gfni'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='hle'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='ibrs-all'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='invpcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='la57'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pku'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='rtm'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='taa-no'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='vaes'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='vpclmulqdq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='xsaves'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='IvyBridge'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='IvyBridge-IBRS'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='IvyBridge-v1'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='IvyBridge-v2'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='KnightsMill'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512-4fmaps'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512-4vnniw'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512-vpopcntdq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512cd'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512er'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512f'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512pf'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='ss'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='KnightsMill-v1'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512-4fmaps'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512-4vnniw'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512-vpopcntdq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512cd'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512er'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512f'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512pf'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='ss'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='Opteron_G4'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='fma4'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='xop'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='Opteron_G4-v1'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='fma4'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='xop'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='Opteron_G5'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='fma4'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='tbm'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='xop'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='Opteron_G5-v1'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='fma4'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='tbm'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='xop'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='SapphireRapids'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='amx-bf16'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='amx-int8'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='amx-tile'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx-vnni'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512-bf16'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512-fp16'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512-vpopcntdq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512bitalg'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512bw'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512cd'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512dq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512f'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512ifma'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vbmi'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vbmi2'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vl'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vnni'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='bus-lock-detect'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='fsrc'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='fsrm'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='fsrs'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='fzrm'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='gfni'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='hle'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='ibrs-all'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='invpcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='la57'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pku'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='rtm'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='serialize'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='taa-no'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='tsx-ldtrk'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='vaes'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='vpclmulqdq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='xfd'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='xsaves'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='SapphireRapids-v1'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='amx-bf16'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='amx-int8'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='amx-tile'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx-vnni'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512-bf16'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512-fp16'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512-vpopcntdq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512bitalg'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512bw'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512cd'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512dq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512f'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512ifma'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vbmi'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vbmi2'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vl'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vnni'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='bus-lock-detect'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='fsrc'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='fsrm'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='fsrs'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='fzrm'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='gfni'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='hle'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='ibrs-all'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='invpcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='la57'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pku'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='rtm'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='serialize'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='taa-no'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='tsx-ldtrk'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='vaes'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='vpclmulqdq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='xfd'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='xsaves'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='SapphireRapids-v2'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='amx-bf16'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='amx-int8'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='amx-tile'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx-vnni'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512-bf16'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512-fp16'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512-vpopcntdq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512bitalg'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512bw'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512cd'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512dq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512f'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512ifma'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vbmi'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vbmi2'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vl'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vnni'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='bus-lock-detect'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='fbsdp-no'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='fsrc'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='fsrm'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='fsrs'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='fzrm'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='gfni'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='hle'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='ibrs-all'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='invpcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='la57'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pku'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='psdp-no'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='rtm'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='sbdr-ssdp-no'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='serialize'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='taa-no'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='tsx-ldtrk'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='vaes'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='vpclmulqdq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='xfd'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='xsaves'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='SapphireRapids-v3'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='amx-bf16'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='amx-int8'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='amx-tile'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx-vnni'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512-bf16'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512-fp16'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512-vpopcntdq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512bitalg'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512bw'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512cd'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512dq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512f'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512ifma'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vbmi'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vbmi2'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vl'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vnni'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='bus-lock-detect'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='cldemote'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='fbsdp-no'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='fsrc'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='fsrm'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='fsrs'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='fzrm'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='gfni'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='hle'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='ibrs-all'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='invpcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='la57'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='movdir64b'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='movdiri'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pku'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='psdp-no'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='rtm'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='sbdr-ssdp-no'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='serialize'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='ss'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='taa-no'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='tsx-ldtrk'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='vaes'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='vpclmulqdq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='xfd'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='xsaves'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='SierraForest'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx-ifma'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx-ne-convert'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx-vnni'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx-vnni-int8'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='bus-lock-detect'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='cmpccxadd'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='fbsdp-no'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='fsrm'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='fsrs'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='gfni'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='ibrs-all'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='invpcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='mcdt-no'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pbrsb-no'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pku'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='psdp-no'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='sbdr-ssdp-no'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='serialize'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='vaes'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='vpclmulqdq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='xsaves'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='SierraForest-v1'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx-ifma'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx-ne-convert'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx-vnni'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx-vnni-int8'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='bus-lock-detect'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='cmpccxadd'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='fbsdp-no'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='fsrm'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='fsrs'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='gfni'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='ibrs-all'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='invpcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='mcdt-no'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pbrsb-no'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pku'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='psdp-no'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='sbdr-ssdp-no'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='serialize'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='vaes'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='vpclmulqdq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='xsaves'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='Skylake-Client'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='hle'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='invpcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='rtm'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='Skylake-Client-IBRS'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='hle'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='invpcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='rtm'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='invpcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='Skylake-Client-v1'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='hle'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='invpcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='rtm'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='Skylake-Client-v2'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='hle'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='invpcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='rtm'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='Skylake-Client-v3'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='invpcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='Skylake-Client-v4'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='invpcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='xsaves'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='Skylake-Server'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512bw'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512cd'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512dq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512f'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vl'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='hle'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='invpcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pku'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='rtm'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='Skylake-Server-IBRS'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512bw'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512cd'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512dq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512f'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vl'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='hle'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='invpcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pku'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='rtm'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512bw'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512cd'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512dq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512f'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vl'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='invpcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pku'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='Skylake-Server-v1'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512bw'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512cd'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512dq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512f'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vl'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='hle'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='invpcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pku'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='rtm'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='Skylake-Server-v2'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512bw'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512cd'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512dq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512f'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vl'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='hle'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='invpcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pku'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='rtm'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='Skylake-Server-v3'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512bw'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512cd'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512dq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512f'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vl'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='invpcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pku'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='Skylake-Server-v4'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512bw'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512cd'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512dq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512f'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vl'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='invpcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pku'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='Skylake-Server-v5'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512bw'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512cd'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512dq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512f'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vl'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='invpcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pku'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='xsaves'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='Snowridge'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='cldemote'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='core-capability'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='gfni'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='movdir64b'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='movdiri'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='mpx'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='split-lock-detect'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='Snowridge-v1'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='cldemote'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='core-capability'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='gfni'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='movdir64b'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='movdiri'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='mpx'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='split-lock-detect'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='Snowridge-v2'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='cldemote'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='core-capability'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='gfni'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='movdir64b'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='movdiri'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='split-lock-detect'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='Snowridge-v3'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='cldemote'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='core-capability'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='gfni'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='movdir64b'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='movdiri'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='split-lock-detect'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='xsaves'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='Snowridge-v4'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='cldemote'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='gfni'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='movdir64b'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='movdiri'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='xsaves'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='athlon'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='3dnow'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='3dnowext'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='athlon-v1'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='3dnow'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='3dnowext'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='core2duo'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='ss'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='core2duo-v1'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='ss'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='coreduo'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='ss'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='coreduo-v1'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='ss'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='n270'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='ss'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='n270-v1'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='ss'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='phenom'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='3dnow'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='3dnowext'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='phenom-v1'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='3dnow'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='3dnowext'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:     </mode>
Dec 06 10:00:31 compute-0 nova_compute[253834]:   </cpu>
Dec 06 10:00:31 compute-0 nova_compute[253834]:   <memoryBacking supported='yes'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:     <enum name='sourceType'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <value>file</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <value>anonymous</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <value>memfd</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:     </enum>
Dec 06 10:00:31 compute-0 nova_compute[253834]:   </memoryBacking>
Dec 06 10:00:31 compute-0 nova_compute[253834]:   <devices>
Dec 06 10:00:31 compute-0 nova_compute[253834]:     <disk supported='yes'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <enum name='diskDevice'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>disk</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>cdrom</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>floppy</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>lun</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </enum>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <enum name='bus'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>fdc</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>scsi</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>virtio</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>usb</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>sata</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </enum>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <enum name='model'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>virtio</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>virtio-transitional</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>virtio-non-transitional</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </enum>
Dec 06 10:00:31 compute-0 nova_compute[253834]:     </disk>
Dec 06 10:00:31 compute-0 nova_compute[253834]:     <graphics supported='yes'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <enum name='type'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>vnc</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>egl-headless</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>dbus</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </enum>
Dec 06 10:00:31 compute-0 nova_compute[253834]:     </graphics>
Dec 06 10:00:31 compute-0 nova_compute[253834]:     <video supported='yes'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <enum name='modelType'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>vga</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>cirrus</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>virtio</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>none</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>bochs</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>ramfb</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </enum>
Dec 06 10:00:31 compute-0 nova_compute[253834]:     </video>
Dec 06 10:00:31 compute-0 nova_compute[253834]:     <hostdev supported='yes'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <enum name='mode'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>subsystem</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </enum>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <enum name='startupPolicy'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>default</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>mandatory</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>requisite</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>optional</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </enum>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <enum name='subsysType'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>usb</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>pci</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>scsi</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </enum>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <enum name='capsType'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <enum name='pciBackend'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:     </hostdev>
Dec 06 10:00:31 compute-0 nova_compute[253834]:     <rng supported='yes'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <enum name='model'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>virtio</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>virtio-transitional</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>virtio-non-transitional</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </enum>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <enum name='backendModel'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>random</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>egd</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>builtin</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </enum>
Dec 06 10:00:31 compute-0 nova_compute[253834]:     </rng>
Dec 06 10:00:31 compute-0 nova_compute[253834]:     <filesystem supported='yes'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <enum name='driverType'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>path</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>handle</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>virtiofs</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </enum>
Dec 06 10:00:31 compute-0 nova_compute[253834]:     </filesystem>
Dec 06 10:00:31 compute-0 nova_compute[253834]:     <tpm supported='yes'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <enum name='model'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>tpm-tis</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>tpm-crb</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </enum>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <enum name='backendModel'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>emulator</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>external</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </enum>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <enum name='backendVersion'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>2.0</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </enum>
Dec 06 10:00:31 compute-0 nova_compute[253834]:     </tpm>
Dec 06 10:00:31 compute-0 nova_compute[253834]:     <redirdev supported='yes'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <enum name='bus'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>usb</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </enum>
Dec 06 10:00:31 compute-0 nova_compute[253834]:     </redirdev>
Dec 06 10:00:31 compute-0 nova_compute[253834]:     <channel supported='yes'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <enum name='type'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>pty</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>unix</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </enum>
Dec 06 10:00:31 compute-0 nova_compute[253834]:     </channel>
Dec 06 10:00:31 compute-0 nova_compute[253834]:     <crypto supported='yes'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <enum name='model'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <enum name='type'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>qemu</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </enum>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <enum name='backendModel'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>builtin</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </enum>
Dec 06 10:00:31 compute-0 nova_compute[253834]:     </crypto>
Dec 06 10:00:31 compute-0 nova_compute[253834]:     <interface supported='yes'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <enum name='backendType'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>default</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>passt</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </enum>
Dec 06 10:00:31 compute-0 nova_compute[253834]:     </interface>
Dec 06 10:00:31 compute-0 nova_compute[253834]:     <panic supported='yes'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <enum name='model'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>isa</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>hyperv</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </enum>
Dec 06 10:00:31 compute-0 nova_compute[253834]:     </panic>
Dec 06 10:00:31 compute-0 nova_compute[253834]:     <console supported='yes'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <enum name='type'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>null</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>vc</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>pty</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>dev</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>file</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>pipe</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>stdio</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>udp</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>tcp</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>unix</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>qemu-vdagent</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>dbus</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </enum>
Dec 06 10:00:31 compute-0 nova_compute[253834]:     </console>
Dec 06 10:00:31 compute-0 nova_compute[253834]:   </devices>
Dec 06 10:00:31 compute-0 nova_compute[253834]:   <features>
Dec 06 10:00:31 compute-0 nova_compute[253834]:     <gic supported='no'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:     <vmcoreinfo supported='yes'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:     <genid supported='yes'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:     <backingStoreInput supported='yes'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:     <backup supported='yes'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:     <async-teardown supported='yes'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:     <ps2 supported='yes'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:     <sev supported='no'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:     <sgx supported='no'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:     <hyperv supported='yes'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <enum name='features'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>relaxed</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>vapic</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>spinlocks</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>vpindex</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>runtime</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>synic</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>stimer</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>reset</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>vendor_id</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>frequencies</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>reenlightenment</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>tlbflush</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>ipi</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>avic</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>emsr_bitmap</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>xmm_input</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </enum>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <defaults>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <spinlocks>4095</spinlocks>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <stimer_direct>on</stimer_direct>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <tlbflush_direct>on</tlbflush_direct>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <tlbflush_extended>on</tlbflush_extended>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <vendor_id>Linux KVM Hv</vendor_id>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </defaults>
Dec 06 10:00:31 compute-0 nova_compute[253834]:     </hyperv>
Dec 06 10:00:31 compute-0 nova_compute[253834]:     <launchSecurity supported='yes'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <enum name='sectype'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>tdx</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </enum>
Dec 06 10:00:31 compute-0 nova_compute[253834]:     </launchSecurity>
Dec 06 10:00:31 compute-0 nova_compute[253834]:   </features>
Dec 06 10:00:31 compute-0 nova_compute[253834]: </domainCapabilities>
Dec 06 10:00:31 compute-0 nova_compute[253834]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Dec 06 10:00:31 compute-0 nova_compute[253834]: 2025-12-06 10:00:31.545 253838 DEBUG nova.virt.libvirt.host [None req-34c2fe1f-5334-41ea-b1ed-db8fbf6b6e5a - - - - - -] Getting domain capabilities for x86_64 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Dec 06 10:00:31 compute-0 nova_compute[253834]: 2025-12-06 10:00:31.549 253838 DEBUG nova.virt.libvirt.host [None req-34c2fe1f-5334-41ea-b1ed-db8fbf6b6e5a - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Dec 06 10:00:31 compute-0 nova_compute[253834]: <domainCapabilities>
Dec 06 10:00:31 compute-0 nova_compute[253834]:   <path>/usr/libexec/qemu-kvm</path>
Dec 06 10:00:31 compute-0 nova_compute[253834]:   <domain>kvm</domain>
Dec 06 10:00:31 compute-0 nova_compute[253834]:   <machine>pc-i440fx-rhel7.6.0</machine>
Dec 06 10:00:31 compute-0 nova_compute[253834]:   <arch>x86_64</arch>
Dec 06 10:00:31 compute-0 nova_compute[253834]:   <vcpu max='240'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:   <iothreads supported='yes'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:   <os supported='yes'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:     <enum name='firmware'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:     <loader supported='yes'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <enum name='type'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>rom</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>pflash</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </enum>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <enum name='readonly'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>yes</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>no</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </enum>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <enum name='secure'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>no</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </enum>
Dec 06 10:00:31 compute-0 nova_compute[253834]:     </loader>
Dec 06 10:00:31 compute-0 nova_compute[253834]:   </os>
Dec 06 10:00:31 compute-0 nova_compute[253834]:   <cpu>
Dec 06 10:00:31 compute-0 nova_compute[253834]:     <mode name='host-passthrough' supported='yes'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <enum name='hostPassthroughMigratable'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>on</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>off</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </enum>
Dec 06 10:00:31 compute-0 nova_compute[253834]:     </mode>
Dec 06 10:00:31 compute-0 nova_compute[253834]:     <mode name='maximum' supported='yes'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <enum name='maximumMigratable'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>on</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>off</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </enum>
Dec 06 10:00:31 compute-0 nova_compute[253834]:     </mode>
Dec 06 10:00:31 compute-0 nova_compute[253834]:     <mode name='host-model' supported='yes'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model fallback='forbid'>EPYC-Rome</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <vendor>AMD</vendor>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <maxphysaddr mode='passthrough' limit='40'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <feature policy='require' name='x2apic'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <feature policy='require' name='tsc-deadline'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <feature policy='require' name='hypervisor'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <feature policy='require' name='tsc_adjust'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <feature policy='require' name='spec-ctrl'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <feature policy='require' name='stibp'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <feature policy='require' name='ssbd'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <feature policy='require' name='cmp_legacy'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <feature policy='require' name='overflow-recov'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <feature policy='require' name='succor'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <feature policy='require' name='ibrs'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <feature policy='require' name='amd-ssbd'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <feature policy='require' name='virt-ssbd'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <feature policy='require' name='lbrv'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <feature policy='require' name='tsc-scale'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <feature policy='require' name='vmcb-clean'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <feature policy='require' name='flushbyasid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <feature policy='require' name='pause-filter'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <feature policy='require' name='pfthreshold'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <feature policy='require' name='svme-addr-chk'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <feature policy='require' name='lfence-always-serializing'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <feature policy='disable' name='xsaves'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:     </mode>
Dec 06 10:00:31 compute-0 nova_compute[253834]:     <mode name='custom' supported='yes'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='Broadwell'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='hle'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='invpcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='rtm'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='Broadwell-IBRS'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='hle'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='invpcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='rtm'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='Broadwell-noTSX'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='invpcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='Broadwell-noTSX-IBRS'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='invpcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='Broadwell-v1'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='hle'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='invpcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='rtm'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='Broadwell-v2'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='invpcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='Broadwell-v3'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='hle'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='invpcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='rtm'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='Broadwell-v4'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='invpcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='Cascadelake-Server'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512bw'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512cd'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512dq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512f'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vl'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vnni'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='hle'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='invpcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pku'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='rtm'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='Cascadelake-Server-noTSX'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512bw'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512cd'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512dq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512f'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vl'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vnni'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='ibrs-all'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='invpcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pku'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='Cascadelake-Server-v1'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512bw'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512cd'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512dq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512f'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vl'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vnni'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='hle'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='invpcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pku'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='rtm'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='Cascadelake-Server-v2'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512bw'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512cd'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512dq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512f'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vl'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vnni'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='hle'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='ibrs-all'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='invpcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pku'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='rtm'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='Cascadelake-Server-v3'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512bw'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512cd'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512dq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512f'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vl'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vnni'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='ibrs-all'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='invpcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pku'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='Cascadelake-Server-v4'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512bw'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512cd'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512dq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512f'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vl'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vnni'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='ibrs-all'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='invpcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pku'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='Cascadelake-Server-v5'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512bw'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512cd'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512dq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512f'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vl'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vnni'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='ibrs-all'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='invpcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pku'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='xsaves'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='Cooperlake'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512-bf16'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512bw'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512cd'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512dq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512f'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vl'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vnni'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='hle'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='ibrs-all'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='invpcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pku'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='rtm'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='taa-no'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='Cooperlake-v1'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512-bf16'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512bw'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512cd'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512dq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512f'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vl'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vnni'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='hle'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='ibrs-all'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='invpcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pku'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='rtm'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='taa-no'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='Cooperlake-v2'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512-bf16'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512bw'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512cd'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512dq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512f'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vl'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vnni'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='hle'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='ibrs-all'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='invpcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pku'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='rtm'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='taa-no'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='xsaves'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='Denverton'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='mpx'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='Denverton-v1'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='mpx'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='Denverton-v2'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='Denverton-v3'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='xsaves'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='Dhyana-v2'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='xsaves'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='EPYC-Genoa'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='amd-psfd'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='auto-ibrs'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512-bf16'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512-vpopcntdq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512bitalg'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512bw'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512cd'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512dq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512f'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512ifma'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vbmi'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vbmi2'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vl'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vnni'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='fsrm'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='gfni'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='invpcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='la57'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='no-nested-data-bp'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='null-sel-clr-base'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pku'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='stibp-always-on'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='vaes'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='vpclmulqdq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='xsaves'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='EPYC-Genoa-v1'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='amd-psfd'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='auto-ibrs'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512-bf16'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512-vpopcntdq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512bitalg'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512bw'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512cd'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512dq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512f'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512ifma'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vbmi'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vbmi2'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vl'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vnni'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='fsrm'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='gfni'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='invpcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='la57'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='no-nested-data-bp'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='null-sel-clr-base'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pku'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='stibp-always-on'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='vaes'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='vpclmulqdq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='xsaves'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='EPYC-Milan'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='fsrm'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='invpcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pku'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='xsaves'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='EPYC-Milan-v1'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='fsrm'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='invpcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pku'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='xsaves'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='EPYC-Milan-v2'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='amd-psfd'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='fsrm'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='invpcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='no-nested-data-bp'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='null-sel-clr-base'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pku'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='stibp-always-on'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='vaes'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='vpclmulqdq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='xsaves'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='EPYC-Rome'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='xsaves'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='EPYC-Rome-v1'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='xsaves'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='EPYC-Rome-v2'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='xsaves'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='EPYC-Rome-v3'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='xsaves'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='EPYC-v3'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='xsaves'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='EPYC-v4'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='xsaves'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='GraniteRapids'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='amx-bf16'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='amx-fp16'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='amx-int8'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='amx-tile'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx-vnni'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512-bf16'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512-fp16'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512-vpopcntdq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512bitalg'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512bw'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512cd'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512dq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512f'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512ifma'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vbmi'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vbmi2'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vl'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vnni'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='bus-lock-detect'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='fbsdp-no'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='fsrc'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='fsrm'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='fsrs'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='fzrm'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='gfni'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='hle'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='ibrs-all'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='invpcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='la57'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='mcdt-no'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pbrsb-no'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pku'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='prefetchiti'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='psdp-no'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='rtm'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='sbdr-ssdp-no'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='serialize'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='taa-no'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='tsx-ldtrk'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='vaes'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='vpclmulqdq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='xfd'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='xsaves'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='GraniteRapids-v1'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='amx-bf16'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='amx-fp16'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='amx-int8'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='amx-tile'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx-vnni'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512-bf16'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512-fp16'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512-vpopcntdq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512bitalg'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512bw'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512cd'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512dq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512f'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512ifma'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vbmi'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vbmi2'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vl'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vnni'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='bus-lock-detect'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='fbsdp-no'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='fsrc'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='fsrm'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='fsrs'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='fzrm'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='gfni'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='hle'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='ibrs-all'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='invpcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='la57'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='mcdt-no'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pbrsb-no'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pku'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='prefetchiti'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='psdp-no'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='rtm'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='sbdr-ssdp-no'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='serialize'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='taa-no'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='tsx-ldtrk'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='vaes'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='vpclmulqdq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='xfd'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='xsaves'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='GraniteRapids-v2'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='amx-bf16'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='amx-fp16'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='amx-int8'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='amx-tile'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx-vnni'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx10'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx10-128'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx10-256'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx10-512'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512-bf16'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512-fp16'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512-vpopcntdq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512bitalg'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512bw'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512cd'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512dq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512f'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512ifma'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vbmi'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vbmi2'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vl'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vnni'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='bus-lock-detect'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='cldemote'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='fbsdp-no'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='fsrc'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='fsrm'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='fsrs'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='fzrm'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='gfni'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='hle'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='ibrs-all'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='invpcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='la57'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='mcdt-no'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='movdir64b'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='movdiri'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pbrsb-no'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pku'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='prefetchiti'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='psdp-no'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='rtm'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='sbdr-ssdp-no'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='serialize'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='ss'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='taa-no'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='tsx-ldtrk'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='vaes'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='vpclmulqdq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='xfd'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='xsaves'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='Haswell'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='hle'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='invpcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='rtm'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='Haswell-IBRS'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='hle'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='invpcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='rtm'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='Haswell-noTSX'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='invpcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='Haswell-noTSX-IBRS'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='invpcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='Haswell-v1'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='hle'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='invpcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='rtm'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='Haswell-v2'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='invpcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='Haswell-v3'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='hle'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='invpcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='rtm'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='Haswell-v4'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='invpcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='Icelake-Server'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512-vpopcntdq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512bitalg'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512bw'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512cd'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512dq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512f'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vbmi'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vbmi2'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vl'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vnni'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='gfni'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='hle'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='invpcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='la57'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pku'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='rtm'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='vaes'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='vpclmulqdq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='Icelake-Server-noTSX'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512-vpopcntdq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512bitalg'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512bw'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512cd'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512dq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512f'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vbmi'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vbmi2'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vl'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vnni'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='gfni'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='invpcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='la57'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pku'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='vaes'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='vpclmulqdq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='Icelake-Server-v1'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512-vpopcntdq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512bitalg'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512bw'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512cd'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512dq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512f'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vbmi'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vbmi2'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vl'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vnni'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='gfni'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='hle'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='invpcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='la57'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pku'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='rtm'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='vaes'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='vpclmulqdq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='Icelake-Server-v2'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512-vpopcntdq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512bitalg'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512bw'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512cd'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512dq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512f'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vbmi'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vbmi2'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vl'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vnni'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='gfni'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='invpcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='la57'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pku'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='vaes'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='vpclmulqdq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='Icelake-Server-v3'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512-vpopcntdq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512bitalg'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512bw'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512cd'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512dq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512f'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vbmi'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vbmi2'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vl'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vnni'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='gfni'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='ibrs-all'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='invpcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='la57'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pku'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='taa-no'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='vaes'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='vpclmulqdq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='Icelake-Server-v4'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512-vpopcntdq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512bitalg'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512bw'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512cd'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512dq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512f'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512ifma'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vbmi'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vbmi2'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vl'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vnni'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='fsrm'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='gfni'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='ibrs-all'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='invpcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='la57'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pku'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='taa-no'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='vaes'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='vpclmulqdq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='Icelake-Server-v5'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512-vpopcntdq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512bitalg'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512bw'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512cd'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512dq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512f'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512ifma'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vbmi'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vbmi2'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vl'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vnni'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='fsrm'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='gfni'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='ibrs-all'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='invpcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='la57'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pku'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='taa-no'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='vaes'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='vpclmulqdq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='xsaves'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='Icelake-Server-v6'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512-vpopcntdq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512bitalg'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512bw'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512cd'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512dq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512f'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512ifma'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vbmi'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vbmi2'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vl'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vnni'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='fsrm'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='gfni'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='ibrs-all'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='invpcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='la57'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pku'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='taa-no'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='vaes'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='vpclmulqdq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='xsaves'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='Icelake-Server-v7'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512-vpopcntdq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512bitalg'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512bw'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512cd'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512dq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512f'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512ifma'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vbmi'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vbmi2'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vl'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vnni'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='fsrm'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='gfni'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='hle'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='ibrs-all'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='invpcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='la57'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pku'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='rtm'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='taa-no'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='vaes'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='vpclmulqdq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='xsaves'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='IvyBridge'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='IvyBridge-IBRS'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='IvyBridge-v1'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='IvyBridge-v2'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='KnightsMill'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512-4fmaps'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512-4vnniw'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512-vpopcntdq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512cd'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512er'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512f'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512pf'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='ss'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='KnightsMill-v1'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512-4fmaps'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512-4vnniw'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512-vpopcntdq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512cd'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512er'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512f'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512pf'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='ss'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='Opteron_G4'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='fma4'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='xop'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='Opteron_G4-v1'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='fma4'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='xop'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='Opteron_G5'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='fma4'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='tbm'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='xop'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='Opteron_G5-v1'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='fma4'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='tbm'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='xop'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='SapphireRapids'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='amx-bf16'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='amx-int8'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='amx-tile'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx-vnni'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512-bf16'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512-fp16'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512-vpopcntdq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512bitalg'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512bw'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512cd'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512dq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512f'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512ifma'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vbmi'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vbmi2'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vl'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vnni'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='bus-lock-detect'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='fsrc'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='fsrm'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='fsrs'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='fzrm'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='gfni'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='hle'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='ibrs-all'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='invpcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='la57'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pku'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='rtm'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='serialize'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='taa-no'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='tsx-ldtrk'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='vaes'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='vpclmulqdq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='xfd'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='xsaves'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='SapphireRapids-v1'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='amx-bf16'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='amx-int8'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='amx-tile'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx-vnni'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512-bf16'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512-fp16'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512-vpopcntdq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512bitalg'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512bw'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512cd'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512dq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512f'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512ifma'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vbmi'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vbmi2'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vl'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vnni'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='bus-lock-detect'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='fsrc'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='fsrm'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='fsrs'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='fzrm'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='gfni'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='hle'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='ibrs-all'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='invpcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='la57'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pku'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='rtm'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='serialize'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='taa-no'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='tsx-ldtrk'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='vaes'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='vpclmulqdq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='xfd'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='xsaves'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='SapphireRapids-v2'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='amx-bf16'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='amx-int8'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='amx-tile'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx-vnni'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512-bf16'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512-fp16'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512-vpopcntdq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512bitalg'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512bw'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512cd'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512dq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512f'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512ifma'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vbmi'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vbmi2'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vl'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vnni'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='bus-lock-detect'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='fbsdp-no'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='fsrc'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='fsrm'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='fsrs'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='fzrm'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='gfni'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='hle'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='ibrs-all'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='invpcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='la57'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pku'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='psdp-no'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='rtm'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='sbdr-ssdp-no'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='serialize'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='taa-no'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='tsx-ldtrk'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='vaes'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='vpclmulqdq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='xfd'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='xsaves'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='SapphireRapids-v3'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='amx-bf16'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='amx-int8'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='amx-tile'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx-vnni'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512-bf16'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512-fp16'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512-vpopcntdq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512bitalg'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512bw'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512cd'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512dq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512f'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512ifma'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vbmi'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vbmi2'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vl'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vnni'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='bus-lock-detect'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='cldemote'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='fbsdp-no'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='fsrc'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='fsrm'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='fsrs'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='fzrm'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='gfni'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='hle'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='ibrs-all'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='invpcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='la57'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='movdir64b'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='movdiri'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pku'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='psdp-no'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='rtm'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='sbdr-ssdp-no'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='serialize'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='ss'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='taa-no'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='tsx-ldtrk'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='vaes'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='vpclmulqdq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='xfd'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='xsaves'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='SierraForest'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx-ifma'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx-ne-convert'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx-vnni'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx-vnni-int8'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='bus-lock-detect'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='cmpccxadd'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='fbsdp-no'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='fsrm'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='fsrs'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='gfni'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='ibrs-all'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='invpcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='mcdt-no'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pbrsb-no'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pku'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='psdp-no'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='sbdr-ssdp-no'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='serialize'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='vaes'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='vpclmulqdq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='xsaves'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='SierraForest-v1'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx-ifma'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx-ne-convert'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx-vnni'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx-vnni-int8'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='bus-lock-detect'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='cmpccxadd'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='fbsdp-no'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='fsrm'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='fsrs'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='gfni'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='ibrs-all'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='invpcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='mcdt-no'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pbrsb-no'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pku'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='psdp-no'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='sbdr-ssdp-no'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='serialize'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='vaes'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='vpclmulqdq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='xsaves'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='Skylake-Client'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='hle'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='invpcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='rtm'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='Skylake-Client-IBRS'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='hle'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='invpcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='rtm'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='invpcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='Skylake-Client-v1'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='hle'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='invpcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='rtm'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='Skylake-Client-v2'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='hle'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='invpcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='rtm'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='Skylake-Client-v3'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='invpcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='Skylake-Client-v4'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='invpcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='xsaves'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='Skylake-Server'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512bw'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512cd'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512dq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512f'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vl'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='hle'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='invpcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pku'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='rtm'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='Skylake-Server-IBRS'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512bw'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512cd'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512dq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512f'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vl'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='hle'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='invpcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pku'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='rtm'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512bw'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512cd'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512dq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512f'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vl'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='invpcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pku'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='Skylake-Server-v1'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512bw'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512cd'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512dq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512f'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vl'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='hle'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='invpcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pku'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='rtm'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='Skylake-Server-v2'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512bw'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512cd'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512dq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512f'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vl'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='hle'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='invpcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pku'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='rtm'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='Skylake-Server-v3'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512bw'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512cd'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512dq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512f'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vl'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='invpcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pku'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='Skylake-Server-v4'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512bw'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512cd'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512dq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512f'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vl'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='invpcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pku'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='Skylake-Server-v5'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512bw'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512cd'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512dq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512f'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vl'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='invpcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pku'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='xsaves'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='Snowridge'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='cldemote'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='core-capability'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='gfni'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='movdir64b'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='movdiri'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='mpx'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='split-lock-detect'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='Snowridge-v1'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='cldemote'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='core-capability'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='gfni'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='movdir64b'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='movdiri'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='mpx'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='split-lock-detect'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='Snowridge-v2'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='cldemote'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='core-capability'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='gfni'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='movdir64b'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='movdiri'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='split-lock-detect'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='Snowridge-v3'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='cldemote'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='core-capability'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='gfni'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='movdir64b'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='movdiri'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='split-lock-detect'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='xsaves'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='Snowridge-v4'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='cldemote'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='gfni'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='movdir64b'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='movdiri'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='xsaves'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='athlon'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='3dnow'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='3dnowext'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='athlon-v1'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='3dnow'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='3dnowext'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='core2duo'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='ss'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='core2duo-v1'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='ss'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='coreduo'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='ss'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='coreduo-v1'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='ss'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='n270'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='ss'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='n270-v1'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='ss'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='phenom'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='3dnow'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='3dnowext'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='phenom-v1'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='3dnow'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='3dnowext'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:     </mode>
Dec 06 10:00:31 compute-0 nova_compute[253834]:   </cpu>
Dec 06 10:00:31 compute-0 nova_compute[253834]:   <memoryBacking supported='yes'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:     <enum name='sourceType'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <value>file</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <value>anonymous</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <value>memfd</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:     </enum>
Dec 06 10:00:31 compute-0 nova_compute[253834]:   </memoryBacking>
Dec 06 10:00:31 compute-0 nova_compute[253834]:   <devices>
Dec 06 10:00:31 compute-0 nova_compute[253834]:     <disk supported='yes'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <enum name='diskDevice'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>disk</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>cdrom</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>floppy</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>lun</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </enum>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <enum name='bus'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>ide</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>fdc</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>scsi</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>virtio</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>usb</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>sata</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </enum>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <enum name='model'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>virtio</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>virtio-transitional</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>virtio-non-transitional</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </enum>
Dec 06 10:00:31 compute-0 nova_compute[253834]:     </disk>
Dec 06 10:00:31 compute-0 nova_compute[253834]:     <graphics supported='yes'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <enum name='type'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>vnc</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>egl-headless</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>dbus</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </enum>
Dec 06 10:00:31 compute-0 nova_compute[253834]:     </graphics>
Dec 06 10:00:31 compute-0 nova_compute[253834]:     <video supported='yes'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <enum name='modelType'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>vga</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>cirrus</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>virtio</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>none</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>bochs</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>ramfb</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </enum>
Dec 06 10:00:31 compute-0 nova_compute[253834]:     </video>
Dec 06 10:00:31 compute-0 nova_compute[253834]:     <hostdev supported='yes'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <enum name='mode'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>subsystem</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </enum>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <enum name='startupPolicy'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>default</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>mandatory</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>requisite</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>optional</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </enum>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <enum name='subsysType'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>usb</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>pci</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>scsi</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </enum>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <enum name='capsType'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <enum name='pciBackend'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:     </hostdev>
Dec 06 10:00:31 compute-0 nova_compute[253834]:     <rng supported='yes'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <enum name='model'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>virtio</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>virtio-transitional</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>virtio-non-transitional</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </enum>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <enum name='backendModel'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>random</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>egd</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>builtin</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </enum>
Dec 06 10:00:31 compute-0 nova_compute[253834]:     </rng>
Dec 06 10:00:31 compute-0 nova_compute[253834]:     <filesystem supported='yes'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <enum name='driverType'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>path</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>handle</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>virtiofs</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </enum>
Dec 06 10:00:31 compute-0 nova_compute[253834]:     </filesystem>
Dec 06 10:00:31 compute-0 nova_compute[253834]:     <tpm supported='yes'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <enum name='model'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>tpm-tis</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>tpm-crb</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </enum>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <enum name='backendModel'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>emulator</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>external</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </enum>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <enum name='backendVersion'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>2.0</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </enum>
Dec 06 10:00:31 compute-0 nova_compute[253834]:     </tpm>
Dec 06 10:00:31 compute-0 nova_compute[253834]:     <redirdev supported='yes'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <enum name='bus'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>usb</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </enum>
Dec 06 10:00:31 compute-0 nova_compute[253834]:     </redirdev>
Dec 06 10:00:31 compute-0 nova_compute[253834]:     <channel supported='yes'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <enum name='type'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>pty</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>unix</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </enum>
Dec 06 10:00:31 compute-0 nova_compute[253834]:     </channel>
Dec 06 10:00:31 compute-0 nova_compute[253834]:     <crypto supported='yes'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <enum name='model'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <enum name='type'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>qemu</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </enum>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <enum name='backendModel'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>builtin</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </enum>
Dec 06 10:00:31 compute-0 nova_compute[253834]:     </crypto>
Dec 06 10:00:31 compute-0 nova_compute[253834]:     <interface supported='yes'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <enum name='backendType'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>default</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>passt</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </enum>
Dec 06 10:00:31 compute-0 nova_compute[253834]:     </interface>
Dec 06 10:00:31 compute-0 nova_compute[253834]:     <panic supported='yes'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <enum name='model'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>isa</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>hyperv</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </enum>
Dec 06 10:00:31 compute-0 nova_compute[253834]:     </panic>
Dec 06 10:00:31 compute-0 nova_compute[253834]:     <console supported='yes'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <enum name='type'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>null</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>vc</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>pty</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>dev</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>file</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>pipe</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>stdio</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>udp</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>tcp</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>unix</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>qemu-vdagent</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>dbus</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </enum>
Dec 06 10:00:31 compute-0 nova_compute[253834]:     </console>
Dec 06 10:00:31 compute-0 nova_compute[253834]:   </devices>
Dec 06 10:00:31 compute-0 nova_compute[253834]:   <features>
Dec 06 10:00:31 compute-0 nova_compute[253834]:     <gic supported='no'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:     <vmcoreinfo supported='yes'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:     <genid supported='yes'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:     <backingStoreInput supported='yes'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:     <backup supported='yes'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:     <async-teardown supported='yes'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:     <ps2 supported='yes'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:     <sev supported='no'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:     <sgx supported='no'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:     <hyperv supported='yes'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <enum name='features'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>relaxed</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>vapic</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>spinlocks</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>vpindex</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>runtime</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>synic</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>stimer</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>reset</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>vendor_id</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>frequencies</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>reenlightenment</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>tlbflush</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>ipi</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>avic</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>emsr_bitmap</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>xmm_input</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </enum>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <defaults>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <spinlocks>4095</spinlocks>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <stimer_direct>on</stimer_direct>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <tlbflush_direct>on</tlbflush_direct>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <tlbflush_extended>on</tlbflush_extended>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <vendor_id>Linux KVM Hv</vendor_id>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </defaults>
Dec 06 10:00:31 compute-0 nova_compute[253834]:     </hyperv>
Dec 06 10:00:31 compute-0 nova_compute[253834]:     <launchSecurity supported='yes'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <enum name='sectype'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>tdx</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </enum>
Dec 06 10:00:31 compute-0 nova_compute[253834]:     </launchSecurity>
Dec 06 10:00:31 compute-0 nova_compute[253834]:   </features>
Dec 06 10:00:31 compute-0 nova_compute[253834]: </domainCapabilities>
Dec 06 10:00:31 compute-0 nova_compute[253834]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Dec 06 10:00:31 compute-0 nova_compute[253834]: 2025-12-06 10:00:31.611 253838 DEBUG nova.virt.libvirt.host [None req-34c2fe1f-5334-41ea-b1ed-db8fbf6b6e5a - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Dec 06 10:00:31 compute-0 nova_compute[253834]: <domainCapabilities>
Dec 06 10:00:31 compute-0 nova_compute[253834]:   <path>/usr/libexec/qemu-kvm</path>
Dec 06 10:00:31 compute-0 nova_compute[253834]:   <domain>kvm</domain>
Dec 06 10:00:31 compute-0 nova_compute[253834]:   <machine>pc-q35-rhel9.8.0</machine>
Dec 06 10:00:31 compute-0 nova_compute[253834]:   <arch>x86_64</arch>
Dec 06 10:00:31 compute-0 nova_compute[253834]:   <vcpu max='4096'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:   <iothreads supported='yes'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:   <os supported='yes'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:     <enum name='firmware'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <value>efi</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:     </enum>
Dec 06 10:00:31 compute-0 nova_compute[253834]:     <loader supported='yes'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.fd</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <value>/usr/share/edk2/ovmf/OVMF.amdsev.fd</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <value>/usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <enum name='type'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>rom</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>pflash</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </enum>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <enum name='readonly'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>yes</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>no</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </enum>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <enum name='secure'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>yes</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>no</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </enum>
Dec 06 10:00:31 compute-0 nova_compute[253834]:     </loader>
Dec 06 10:00:31 compute-0 nova_compute[253834]:   </os>
Dec 06 10:00:31 compute-0 nova_compute[253834]:   <cpu>
Dec 06 10:00:31 compute-0 nova_compute[253834]:     <mode name='host-passthrough' supported='yes'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <enum name='hostPassthroughMigratable'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>on</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>off</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </enum>
Dec 06 10:00:31 compute-0 nova_compute[253834]:     </mode>
Dec 06 10:00:31 compute-0 nova_compute[253834]:     <mode name='maximum' supported='yes'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <enum name='maximumMigratable'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>on</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>off</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </enum>
Dec 06 10:00:31 compute-0 nova_compute[253834]:     </mode>
Dec 06 10:00:31 compute-0 nova_compute[253834]:     <mode name='host-model' supported='yes'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model fallback='forbid'>EPYC-Rome</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <vendor>AMD</vendor>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <maxphysaddr mode='passthrough' limit='40'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <feature policy='require' name='x2apic'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <feature policy='require' name='tsc-deadline'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <feature policy='require' name='hypervisor'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <feature policy='require' name='tsc_adjust'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <feature policy='require' name='spec-ctrl'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <feature policy='require' name='stibp'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <feature policy='require' name='ssbd'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <feature policy='require' name='cmp_legacy'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <feature policy='require' name='overflow-recov'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <feature policy='require' name='succor'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <feature policy='require' name='ibrs'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <feature policy='require' name='amd-ssbd'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <feature policy='require' name='virt-ssbd'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <feature policy='require' name='lbrv'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <feature policy='require' name='tsc-scale'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <feature policy='require' name='vmcb-clean'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <feature policy='require' name='flushbyasid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <feature policy='require' name='pause-filter'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <feature policy='require' name='pfthreshold'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <feature policy='require' name='svme-addr-chk'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <feature policy='require' name='lfence-always-serializing'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <feature policy='disable' name='xsaves'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:     </mode>
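Everything from here to the end of the <cpu> element enumerates custom-mode CPU models, pairing each usable='no' model with a <blockers> list of the host-side features it lacks. A small sketch of reading that pairing back out of the XML (same assumptions as above; summarize_custom_models is an illustrative helper, not part of Nova or libvirt):

    import xml.etree.ElementTree as ET

    def summarize_custom_models(caps_xml: str) -> None:
        # Map each blocked model name to the features libvirt says are missing.
        custom = ET.fromstring(caps_xml).find("./cpu/mode[@name='custom']")
        blockers = {
            b.get("model"): [f.get("name") for f in b.findall("feature")]
            for b in custom.findall("blockers")
        }
        for model in custom.findall("model"):
            if model.get("usable") == "yes":
                print(f"{model.text}: usable")
            else:
                print(f"{model.text}: blocked by", ", ".join(blockers.get(model.text, [])))

On this AMD EPYC-Rome host, for example, the Intel models below are blocked largely by AVX-512/TSX-family features, while several EPYC variants are blocked only by xsaves (which the host-model mode above also disables).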
Dec 06 10:00:31 compute-0 nova_compute[253834]:     <mode name='custom' supported='yes'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='Broadwell'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='hle'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='invpcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='rtm'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='Broadwell-IBRS'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='hle'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='invpcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='rtm'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='Broadwell-noTSX'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='invpcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='Broadwell-noTSX-IBRS'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='invpcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='Broadwell-v1'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='hle'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='invpcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='rtm'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='Broadwell-v2'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='invpcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='Broadwell-v3'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='hle'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='invpcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='rtm'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='Broadwell-v4'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='invpcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='Cascadelake-Server'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512bw'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512cd'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512dq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512f'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vl'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vnni'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='hle'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='invpcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pku'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='rtm'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='Cascadelake-Server-noTSX'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512bw'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512cd'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512dq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512f'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vl'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vnni'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='ibrs-all'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='invpcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pku'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='Cascadelake-Server-v1'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512bw'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512cd'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512dq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512f'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vl'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vnni'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='hle'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='invpcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pku'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='rtm'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='Cascadelake-Server-v2'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512bw'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512cd'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512dq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512f'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vl'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vnni'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='hle'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='ibrs-all'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='invpcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pku'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='rtm'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='Cascadelake-Server-v3'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512bw'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512cd'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512dq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512f'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vl'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vnni'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='ibrs-all'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='invpcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pku'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='Cascadelake-Server-v4'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512bw'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512cd'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512dq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512f'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vl'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vnni'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='ibrs-all'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='invpcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pku'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='Cascadelake-Server-v5'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512bw'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512cd'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512dq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512f'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vl'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vnni'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='ibrs-all'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='invpcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pku'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='xsaves'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='Cooperlake'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512-bf16'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512bw'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512cd'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512dq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512f'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vl'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vnni'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='hle'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='ibrs-all'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='invpcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pku'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='rtm'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='taa-no'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='Cooperlake-v1'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512-bf16'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512bw'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512cd'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512dq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512f'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vl'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vnni'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='hle'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='ibrs-all'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='invpcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pku'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='rtm'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='taa-no'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='Cooperlake-v2'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512-bf16'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512bw'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512cd'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512dq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512f'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vl'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vnni'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='hle'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='ibrs-all'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='invpcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pku'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='rtm'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='taa-no'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='xsaves'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='Denverton'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='mpx'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='Denverton-v1'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='mpx'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='Denverton-v2'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='Denverton-v3'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='xsaves'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='Dhyana-v2'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='xsaves'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='EPYC-Genoa'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='amd-psfd'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='auto-ibrs'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512-bf16'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512-vpopcntdq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512bitalg'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512bw'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512cd'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512dq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512f'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512ifma'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vbmi'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vbmi2'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vl'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vnni'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='fsrm'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='gfni'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='invpcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='la57'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='no-nested-data-bp'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='null-sel-clr-base'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pku'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='stibp-always-on'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='vaes'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='vpclmulqdq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='xsaves'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='EPYC-Genoa-v1'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='amd-psfd'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='auto-ibrs'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512-bf16'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512-vpopcntdq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512bitalg'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512bw'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512cd'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512dq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512f'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512ifma'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vbmi'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vbmi2'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vl'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vnni'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='fsrm'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='gfni'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='invpcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='la57'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='no-nested-data-bp'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='null-sel-clr-base'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pku'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='stibp-always-on'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='vaes'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='vpclmulqdq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='xsaves'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='EPYC-Milan'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='fsrm'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='invpcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pku'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='xsaves'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='EPYC-Milan-v1'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='fsrm'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='invpcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pku'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='xsaves'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='EPYC-Milan-v2'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='amd-psfd'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='fsrm'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='invpcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='no-nested-data-bp'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='null-sel-clr-base'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pku'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='stibp-always-on'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='vaes'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='vpclmulqdq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='xsaves'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='EPYC-Rome'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='xsaves'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='EPYC-Rome-v1'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='xsaves'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='EPYC-Rome-v2'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='xsaves'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='EPYC-Rome-v3'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='xsaves'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='EPYC-v3'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='xsaves'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='EPYC-v4'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='xsaves'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='GraniteRapids'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='amx-bf16'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='amx-fp16'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='amx-int8'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='amx-tile'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx-vnni'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512-bf16'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512-fp16'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512-vpopcntdq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512bitalg'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512bw'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512cd'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512dq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512f'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512ifma'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vbmi'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vbmi2'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vl'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vnni'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='bus-lock-detect'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='fbsdp-no'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='fsrc'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='fsrm'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='fsrs'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='fzrm'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='gfni'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='hle'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='ibrs-all'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='invpcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='la57'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='mcdt-no'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pbrsb-no'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pku'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='prefetchiti'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='psdp-no'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='rtm'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='sbdr-ssdp-no'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='serialize'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='taa-no'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='tsx-ldtrk'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='vaes'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='vpclmulqdq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='xfd'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='xsaves'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='GraniteRapids-v1'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='amx-bf16'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='amx-fp16'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='amx-int8'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='amx-tile'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx-vnni'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512-bf16'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512-fp16'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512-vpopcntdq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512bitalg'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512bw'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512cd'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512dq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512f'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512ifma'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vbmi'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vbmi2'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vl'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vnni'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='bus-lock-detect'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='fbsdp-no'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='fsrc'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='fsrm'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='fsrs'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='fzrm'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='gfni'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='hle'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='ibrs-all'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='invpcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='la57'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='mcdt-no'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pbrsb-no'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pku'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='prefetchiti'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='psdp-no'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='rtm'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='sbdr-ssdp-no'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='serialize'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='taa-no'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='tsx-ldtrk'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='vaes'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='vpclmulqdq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='xfd'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='xsaves'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='GraniteRapids-v2'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='amx-bf16'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='amx-fp16'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='amx-int8'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='amx-tile'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx-vnni'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx10'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx10-128'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx10-256'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx10-512'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512-bf16'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512-fp16'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512-vpopcntdq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512bitalg'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512bw'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512cd'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512dq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512f'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512ifma'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vbmi'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vbmi2'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vl'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vnni'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='bus-lock-detect'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='cldemote'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='fbsdp-no'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='fsrc'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='fsrm'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='fsrs'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='fzrm'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='gfni'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='hle'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='ibrs-all'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='invpcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='la57'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='mcdt-no'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='movdir64b'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='movdiri'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pbrsb-no'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pku'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='prefetchiti'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='psdp-no'/>
Dec 06 10:00:31 compute-0 ceph-mon[74327]: pgmap v570: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='rtm'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='sbdr-ssdp-no'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='serialize'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='ss'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='taa-no'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='tsx-ldtrk'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='vaes'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='vpclmulqdq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='xfd'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='xsaves'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='Haswell'>
Dec 06 10:00:31 compute-0 sudo[254737]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yifhddibheimnatiizqnciaewbplkbcz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765015231.4643524-4357-256153379211862/AnsiballZ_systemd.py'
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='hle'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='invpcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='rtm'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='Haswell-IBRS'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='hle'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='invpcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='rtm'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='Haswell-noTSX'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='invpcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='Haswell-noTSX-IBRS'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='invpcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='Haswell-v1'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='hle'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='invpcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='rtm'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='Haswell-v2'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='invpcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='Haswell-v3'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='hle'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='invpcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='rtm'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='Haswell-v4'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='invpcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='Icelake-Server'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512-vpopcntdq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512bitalg'/>
Dec 06 10:00:31 compute-0 sudo[254737]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512bw'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512cd'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512dq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512f'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vbmi'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vbmi2'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vl'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vnni'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='gfni'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='hle'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='invpcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='la57'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pku'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='rtm'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='vaes'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='vpclmulqdq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='Icelake-Server-noTSX'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512-vpopcntdq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512bitalg'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512bw'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512cd'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512dq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512f'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vbmi'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vbmi2'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vl'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vnni'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='gfni'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='invpcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='la57'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pku'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='vaes'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='vpclmulqdq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='Icelake-Server-v1'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512-vpopcntdq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512bitalg'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512bw'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512cd'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512dq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512f'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vbmi'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vbmi2'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vl'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vnni'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='gfni'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='hle'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='invpcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='la57'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pku'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='rtm'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='vaes'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='vpclmulqdq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='Icelake-Server-v2'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512-vpopcntdq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512bitalg'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512bw'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512cd'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512dq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512f'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vbmi'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vbmi2'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vl'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vnni'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='gfni'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='invpcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='la57'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pku'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='vaes'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='vpclmulqdq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='Icelake-Server-v3'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512-vpopcntdq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512bitalg'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512bw'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512cd'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512dq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512f'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vbmi'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vbmi2'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vl'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vnni'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='gfni'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='ibrs-all'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='invpcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='la57'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pku'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='taa-no'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='vaes'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='vpclmulqdq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='Icelake-Server-v4'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512-vpopcntdq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512bitalg'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512bw'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512cd'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512dq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512f'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512ifma'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vbmi'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vbmi2'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vl'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vnni'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='fsrm'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='gfni'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='ibrs-all'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='invpcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='la57'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pku'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='taa-no'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='vaes'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='vpclmulqdq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='Icelake-Server-v5'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512-vpopcntdq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512bitalg'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512bw'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512cd'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512dq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512f'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512ifma'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vbmi'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vbmi2'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vl'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vnni'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='fsrm'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='gfni'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='ibrs-all'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='invpcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='la57'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pku'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='taa-no'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='vaes'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='vpclmulqdq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='xsaves'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='Icelake-Server-v6'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512-vpopcntdq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512bitalg'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512bw'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512cd'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512dq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512f'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512ifma'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vbmi'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vbmi2'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vl'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vnni'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='fsrm'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='gfni'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='ibrs-all'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='invpcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='la57'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pku'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='taa-no'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='vaes'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='vpclmulqdq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='xsaves'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='Icelake-Server-v7'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512-vpopcntdq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512bitalg'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512bw'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512cd'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512dq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512f'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512ifma'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vbmi'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vbmi2'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vl'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vnni'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='fsrm'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='gfni'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='hle'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='ibrs-all'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='invpcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='la57'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pku'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='rtm'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='taa-no'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='vaes'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='vpclmulqdq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='xsaves'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='IvyBridge'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='IvyBridge-IBRS'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='IvyBridge-v1'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='IvyBridge-v2'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='KnightsMill'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512-4fmaps'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512-4vnniw'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512-vpopcntdq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512cd'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512er'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512f'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512pf'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='ss'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='KnightsMill-v1'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512-4fmaps'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512-4vnniw'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512-vpopcntdq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512cd'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512er'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512f'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512pf'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='ss'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='Opteron_G4'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='fma4'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='xop'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='Opteron_G4-v1'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='fma4'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='xop'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='Opteron_G5'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='fma4'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='tbm'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='xop'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='Opteron_G5-v1'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='fma4'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='tbm'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='xop'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='SapphireRapids'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='amx-bf16'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='amx-int8'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='amx-tile'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx-vnni'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512-bf16'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512-fp16'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512-vpopcntdq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512bitalg'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512bw'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512cd'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512dq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512f'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512ifma'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vbmi'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vbmi2'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vl'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vnni'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='bus-lock-detect'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='fsrc'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='fsrm'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='fsrs'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='fzrm'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='gfni'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='hle'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='ibrs-all'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='invpcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='la57'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pku'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='rtm'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='serialize'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='taa-no'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='tsx-ldtrk'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='vaes'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='vpclmulqdq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='xfd'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='xsaves'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='SapphireRapids-v1'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='amx-bf16'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='amx-int8'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='amx-tile'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx-vnni'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512-bf16'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512-fp16'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512-vpopcntdq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512bitalg'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512bw'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512cd'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512dq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512f'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512ifma'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vbmi'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vbmi2'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vl'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vnni'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='bus-lock-detect'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='fsrc'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='fsrm'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='fsrs'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='fzrm'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='gfni'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='hle'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='ibrs-all'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='invpcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='la57'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pku'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='rtm'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='serialize'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='taa-no'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='tsx-ldtrk'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='vaes'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='vpclmulqdq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='xfd'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='xsaves'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='SapphireRapids-v2'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='amx-bf16'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='amx-int8'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='amx-tile'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx-vnni'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512-bf16'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512-fp16'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512-vpopcntdq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512bitalg'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512bw'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512cd'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512dq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512f'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512ifma'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vbmi'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vbmi2'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vl'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vnni'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='bus-lock-detect'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='fbsdp-no'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='fsrc'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='fsrm'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='fsrs'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='fzrm'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='gfni'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='hle'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='ibrs-all'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='invpcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='la57'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pku'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='psdp-no'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='rtm'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='sbdr-ssdp-no'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='serialize'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='taa-no'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='tsx-ldtrk'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='vaes'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='vpclmulqdq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='xfd'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='xsaves'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='SapphireRapids-v3'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='amx-bf16'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='amx-int8'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='amx-tile'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx-vnni'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512-bf16'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512-fp16'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512-vpopcntdq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512bitalg'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512bw'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512cd'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512dq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512f'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512ifma'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vbmi'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vbmi2'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vl'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vnni'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='bus-lock-detect'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='cldemote'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='fbsdp-no'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='fsrc'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='fsrm'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='fsrs'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='fzrm'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='gfni'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='hle'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='ibrs-all'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='invpcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='la57'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='movdir64b'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='movdiri'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pku'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='psdp-no'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='rtm'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='sbdr-ssdp-no'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='serialize'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='ss'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='taa-no'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='tsx-ldtrk'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='vaes'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='vpclmulqdq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='xfd'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='xsaves'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='SierraForest'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx-ifma'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx-ne-convert'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx-vnni'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx-vnni-int8'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='bus-lock-detect'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='cmpccxadd'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='fbsdp-no'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='fsrm'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='fsrs'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='gfni'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='ibrs-all'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='invpcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='mcdt-no'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pbrsb-no'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pku'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='psdp-no'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='sbdr-ssdp-no'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='serialize'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='vaes'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='vpclmulqdq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='xsaves'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='SierraForest-v1'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx-ifma'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx-ne-convert'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx-vnni'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx-vnni-int8'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='bus-lock-detect'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='cmpccxadd'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='fbsdp-no'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='fsrm'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='fsrs'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='gfni'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='ibrs-all'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='invpcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='mcdt-no'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pbrsb-no'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pku'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='psdp-no'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='sbdr-ssdp-no'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='serialize'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='vaes'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='vpclmulqdq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='xsaves'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='Skylake-Client'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='hle'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='invpcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='rtm'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='Skylake-Client-IBRS'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='hle'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='invpcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='rtm'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='invpcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='Skylake-Client-v1'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='hle'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='invpcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='rtm'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='Skylake-Client-v2'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='hle'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='invpcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='rtm'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='Skylake-Client-v3'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='invpcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='Skylake-Client-v4'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='invpcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='xsaves'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='Skylake-Server'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512bw'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512cd'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512dq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512f'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vl'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='hle'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='invpcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pku'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='rtm'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='Skylake-Server-IBRS'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512bw'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512cd'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512dq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512f'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vl'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='hle'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='invpcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pku'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='rtm'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512bw'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512cd'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512dq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512f'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vl'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='invpcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pku'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='Skylake-Server-v1'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512bw'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512cd'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512dq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512f'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vl'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='hle'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='invpcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pku'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='rtm'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='Skylake-Server-v2'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512bw'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512cd'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512dq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512f'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vl'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='hle'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='invpcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pku'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='rtm'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='Skylake-Server-v3'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512bw'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512cd'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512dq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512f'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vl'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='invpcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pku'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='Skylake-Server-v4'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512bw'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512cd'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512dq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512f'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vl'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='invpcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pku'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='Skylake-Server-v5'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512bw'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512cd'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512dq'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512f'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='avx512vl'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='invpcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pcid'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='pku'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='xsaves'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='Snowridge'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='cldemote'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='core-capability'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='gfni'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='movdir64b'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='movdiri'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='mpx'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='split-lock-detect'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='Snowridge-v1'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='cldemote'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='core-capability'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='gfni'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='movdir64b'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='movdiri'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='mpx'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='split-lock-detect'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='Snowridge-v2'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='cldemote'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='core-capability'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='gfni'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='movdir64b'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='movdiri'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='split-lock-detect'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='Snowridge-v3'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='cldemote'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='core-capability'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='gfni'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='movdir64b'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='movdiri'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='split-lock-detect'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='xsaves'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='Snowridge-v4'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='cldemote'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='erms'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='gfni'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='movdir64b'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='movdiri'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='xsaves'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='athlon'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='3dnow'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='3dnowext'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='athlon-v1'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='3dnow'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='3dnowext'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='core2duo'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='ss'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='core2duo-v1'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='ss'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='coreduo'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='ss'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='coreduo-v1'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='ss'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='n270'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='ss'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='n270-v1'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='ss'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='phenom'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='3dnow'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='3dnowext'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <blockers model='phenom-v1'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='3dnow'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <feature name='3dnowext'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </blockers>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Dec 06 10:00:31 compute-0 nova_compute[253834]:     </mode>
Dec 06 10:00:31 compute-0 nova_compute[253834]:   </cpu>
Dec 06 10:00:31 compute-0 nova_compute[253834]:   <memoryBacking supported='yes'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:     <enum name='sourceType'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <value>file</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <value>anonymous</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <value>memfd</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:     </enum>
Dec 06 10:00:31 compute-0 nova_compute[253834]:   </memoryBacking>
Dec 06 10:00:31 compute-0 nova_compute[253834]:   <devices>
Dec 06 10:00:31 compute-0 nova_compute[253834]:     <disk supported='yes'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <enum name='diskDevice'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>disk</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>cdrom</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>floppy</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>lun</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </enum>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <enum name='bus'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>fdc</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>scsi</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>virtio</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>usb</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>sata</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </enum>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <enum name='model'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>virtio</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>virtio-transitional</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>virtio-non-transitional</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </enum>
Dec 06 10:00:31 compute-0 nova_compute[253834]:     </disk>
Dec 06 10:00:31 compute-0 nova_compute[253834]:     <graphics supported='yes'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <enum name='type'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>vnc</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>egl-headless</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>dbus</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </enum>
Dec 06 10:00:31 compute-0 nova_compute[253834]:     </graphics>
Dec 06 10:00:31 compute-0 nova_compute[253834]:     <video supported='yes'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <enum name='modelType'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>vga</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>cirrus</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>virtio</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>none</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>bochs</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>ramfb</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </enum>
Dec 06 10:00:31 compute-0 nova_compute[253834]:     </video>
Dec 06 10:00:31 compute-0 nova_compute[253834]:     <hostdev supported='yes'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <enum name='mode'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>subsystem</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </enum>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <enum name='startupPolicy'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>default</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>mandatory</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>requisite</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>optional</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </enum>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <enum name='subsysType'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>usb</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>pci</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>scsi</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </enum>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <enum name='capsType'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <enum name='pciBackend'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:     </hostdev>
Dec 06 10:00:31 compute-0 nova_compute[253834]:     <rng supported='yes'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <enum name='model'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>virtio</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>virtio-transitional</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>virtio-non-transitional</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </enum>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <enum name='backendModel'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>random</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>egd</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>builtin</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </enum>
Dec 06 10:00:31 compute-0 nova_compute[253834]:     </rng>
Dec 06 10:00:31 compute-0 nova_compute[253834]:     <filesystem supported='yes'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <enum name='driverType'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>path</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>handle</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>virtiofs</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </enum>
Dec 06 10:00:31 compute-0 nova_compute[253834]:     </filesystem>
Dec 06 10:00:31 compute-0 nova_compute[253834]:     <tpm supported='yes'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <enum name='model'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>tpm-tis</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>tpm-crb</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </enum>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <enum name='backendModel'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>emulator</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>external</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </enum>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <enum name='backendVersion'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>2.0</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </enum>
Dec 06 10:00:31 compute-0 nova_compute[253834]:     </tpm>
Dec 06 10:00:31 compute-0 nova_compute[253834]:     <redirdev supported='yes'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <enum name='bus'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>usb</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </enum>
Dec 06 10:00:31 compute-0 nova_compute[253834]:     </redirdev>
Dec 06 10:00:31 compute-0 nova_compute[253834]:     <channel supported='yes'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <enum name='type'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>pty</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>unix</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </enum>
Dec 06 10:00:31 compute-0 nova_compute[253834]:     </channel>
Dec 06 10:00:31 compute-0 nova_compute[253834]:     <crypto supported='yes'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <enum name='model'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <enum name='type'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>qemu</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </enum>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <enum name='backendModel'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>builtin</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </enum>
Dec 06 10:00:31 compute-0 nova_compute[253834]:     </crypto>
Dec 06 10:00:31 compute-0 nova_compute[253834]:     <interface supported='yes'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <enum name='backendType'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>default</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>passt</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </enum>
Dec 06 10:00:31 compute-0 nova_compute[253834]:     </interface>
Dec 06 10:00:31 compute-0 nova_compute[253834]:     <panic supported='yes'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <enum name='model'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>isa</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>hyperv</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </enum>
Dec 06 10:00:31 compute-0 nova_compute[253834]:     </panic>
Dec 06 10:00:31 compute-0 nova_compute[253834]:     <console supported='yes'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <enum name='type'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>null</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>vc</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>pty</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>dev</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>file</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>pipe</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>stdio</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>udp</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>tcp</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>unix</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>qemu-vdagent</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>dbus</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </enum>
Dec 06 10:00:31 compute-0 nova_compute[253834]:     </console>
Dec 06 10:00:31 compute-0 nova_compute[253834]:   </devices>
Dec 06 10:00:31 compute-0 nova_compute[253834]:   <features>
Dec 06 10:00:31 compute-0 nova_compute[253834]:     <gic supported='no'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:     <vmcoreinfo supported='yes'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:     <genid supported='yes'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:     <backingStoreInput supported='yes'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:     <backup supported='yes'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:     <async-teardown supported='yes'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:     <ps2 supported='yes'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:     <sev supported='no'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:     <sgx supported='no'/>
Dec 06 10:00:31 compute-0 nova_compute[253834]:     <hyperv supported='yes'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <enum name='features'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>relaxed</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>vapic</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>spinlocks</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>vpindex</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>runtime</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>synic</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>stimer</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>reset</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>vendor_id</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>frequencies</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>reenlightenment</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>tlbflush</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>ipi</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>avic</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>emsr_bitmap</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>xmm_input</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </enum>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <defaults>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <spinlocks>4095</spinlocks>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <stimer_direct>on</stimer_direct>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <tlbflush_direct>on</tlbflush_direct>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <tlbflush_extended>on</tlbflush_extended>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <vendor_id>Linux KVM Hv</vendor_id>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </defaults>
Dec 06 10:00:31 compute-0 nova_compute[253834]:     </hyperv>
Dec 06 10:00:31 compute-0 nova_compute[253834]:     <launchSecurity supported='yes'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       <enum name='sectype'>
Dec 06 10:00:31 compute-0 nova_compute[253834]:         <value>tdx</value>
Dec 06 10:00:31 compute-0 nova_compute[253834]:       </enum>
Dec 06 10:00:31 compute-0 nova_compute[253834]:     </launchSecurity>
Dec 06 10:00:31 compute-0 nova_compute[253834]:   </features>
Dec 06 10:00:31 compute-0 nova_compute[253834]: </domainCapabilities>
Dec 06 10:00:31 compute-0 nova_compute[253834]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
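
The domainCapabilities document dumped above is the XML that libvirt's virConnectGetDomainCapabilities API returns. A minimal sketch of fetching and filtering it with libvirt-python, assuming a local qemu:///system connection and the x86_64/kvm parameters implied by this host; this is illustrative, not Nova's own code:

import libvirt
import xml.etree.ElementTree as ET

# Connection URI and lookup parameters are assumptions for illustration.
conn = libvirt.open("qemu:///system")
caps_xml = conn.getDomainCapabilities(None, "x86_64", None, "kvm", 0)
root = ET.fromstring(caps_xml)

# Mirror the usable='yes'/'no' attributes seen in the log, and report the
# <blockers> features that make a model unusable on this host.
for model in root.iter("model"):
    name, usable = model.text, model.get("usable")
    if usable == "no":
        blockers = root.find(f".//blockers[@model='{name}']")
        feats = [f.get("name") for f in blockers] if blockers is not None else []
        print(f"{name}: unusable (missing: {', '.join(feats)})")
    elif usable == "yes":
        print(f"{name}: usable")

conn.close()
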
Dec 06 10:00:31 compute-0 nova_compute[253834]: 2025-12-06 10:00:31.677 253838 DEBUG nova.virt.libvirt.host [None req-34c2fe1f-5334-41ea-b1ed-db8fbf6b6e5a - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782
Dec 06 10:00:31 compute-0 nova_compute[253834]: 2025-12-06 10:00:31.677 253838 DEBUG nova.virt.libvirt.host [None req-34c2fe1f-5334-41ea-b1ed-db8fbf6b6e5a - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782
Dec 06 10:00:31 compute-0 nova_compute[253834]: 2025-12-06 10:00:31.677 253838 DEBUG nova.virt.libvirt.host [None req-34c2fe1f-5334-41ea-b1ed-db8fbf6b6e5a - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782
Dec 06 10:00:31 compute-0 nova_compute[253834]: 2025-12-06 10:00:31.678 253838 INFO nova.virt.libvirt.host [None req-34c2fe1f-5334-41ea-b1ed-db8fbf6b6e5a - - - - - -] Secure Boot support detected
Dec 06 10:00:31 compute-0 nova_compute[253834]: 2025-12-06 10:00:31.679 253838 INFO nova.virt.libvirt.driver [None req-34c2fe1f-5334-41ea-b1ed-db8fbf6b6e5a - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.
Dec 06 10:00:31 compute-0 nova_compute[253834]: 2025-12-06 10:00:31.680 253838 INFO nova.virt.libvirt.driver [None req-34c2fe1f-5334-41ea-b1ed-db8fbf6b6e5a - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.
Dec 06 10:00:31 compute-0 nova_compute[253834]: 2025-12-06 10:00:31.688 253838 DEBUG nova.virt.libvirt.driver [None req-34c2fe1f-5334-41ea-b1ed-db8fbf6b6e5a - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097
Dec 06 10:00:31 compute-0 nova_compute[253834]: 2025-12-06 10:00:31.719 253838 INFO nova.virt.node [None req-34c2fe1f-5334-41ea-b1ed-db8fbf6b6e5a - - - - - -] Determined node identity 06a9c7d1-c74c-47ea-9e97-16acfab6aa88 from /var/lib/nova/compute_id
Dec 06 10:00:31 compute-0 nova_compute[253834]: 2025-12-06 10:00:31.745 253838 WARNING nova.compute.manager [None req-34c2fe1f-5334-41ea-b1ed-db8fbf6b6e5a - - - - - -] Compute nodes ['06a9c7d1-c74c-47ea-9e97-16acfab6aa88'] for host compute-0.ctlplane.example.com were not found in the database. If this is the first time this service is starting on this host, then you can ignore this warning.
Dec 06 10:00:31 compute-0 nova_compute[253834]: 2025-12-06 10:00:31.787 253838 INFO nova.compute.manager [None req-34c2fe1f-5334-41ea-b1ed-db8fbf6b6e5a - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host
Dec 06 10:00:31 compute-0 nova_compute[253834]: 2025-12-06 10:00:31.826 253838 WARNING nova.compute.manager [None req-34c2fe1f-5334-41ea-b1ed-db8fbf6b6e5a - - - - - -] No compute node record found for host compute-0.ctlplane.example.com. If this is the first time this service is starting on this host, then you can ignore this warning.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.
Dec 06 10:00:31 compute-0 nova_compute[253834]: 2025-12-06 10:00:31.827 253838 DEBUG oslo_concurrency.lockutils [None req-34c2fe1f-5334-41ea-b1ed-db8fbf6b6e5a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:00:31 compute-0 nova_compute[253834]: 2025-12-06 10:00:31.827 253838 DEBUG oslo_concurrency.lockutils [None req-34c2fe1f-5334-41ea-b1ed-db8fbf6b6e5a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:00:31 compute-0 nova_compute[253834]: 2025-12-06 10:00:31.828 253838 DEBUG oslo_concurrency.lockutils [None req-34c2fe1f-5334-41ea-b1ed-db8fbf6b6e5a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:00:31 compute-0 nova_compute[253834]: 2025-12-06 10:00:31.828 253838 DEBUG nova.compute.resource_tracker [None req-34c2fe1f-5334-41ea-b1ed-db8fbf6b6e5a - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 10:00:31 compute-0 nova_compute[253834]: 2025-12-06 10:00:31.829 253838 DEBUG oslo_concurrency.processutils [None req-34c2fe1f-5334-41ea-b1ed-db8fbf6b6e5a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
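
The ceph df --format=json subprocess above is how the resource tracker audits Ceph-backed disk capacity during this resource audit. A minimal standalone sketch of the same probe; the JSON field names follow the ceph df schema of recent Ceph releases and should be treated as assumptions:

import json
import subprocess

# Same CLI invocation as the log line above.
out = subprocess.check_output(
    ["ceph", "df", "--format=json", "--id", "openstack",
     "--conf", "/etc/ceph/ceph.conf"]
)
stats = json.loads(out)

# Cluster-wide totals first, then per-pool usage.
total = stats["stats"]["total_bytes"]
avail = stats["stats"]["total_avail_bytes"]
print(f"cluster: {avail} of {total} bytes available")
for pool in stats["pools"]:
    print(pool["name"], pool["stats"]["bytes_used"], "bytes used")
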
Dec 06 10:00:31 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:00:31 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4720004340 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:00:32 compute-0 python3.9[254739]: ansible-ansible.builtin.systemd Invoked with name=edpm_nova_compute.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 06 10:00:32 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:00:32 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4724002690 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:00:32 compute-0 systemd[1]: Stopping nova_compute container...
Dec 06 10:00:32 compute-0 nova_compute[253834]: 2025-12-06 10:00:32.159 253838 DEBUG oslo_concurrency.lockutils [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 10:00:32 compute-0 nova_compute[253834]: 2025-12-06 10:00:32.160 253838 DEBUG oslo_concurrency.lockutils [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 10:00:32 compute-0 nova_compute[253834]: 2025-12-06 10:00:32.160 253838 DEBUG oslo_concurrency.lockutils [None req-f83f0646-e374-4f4d-bf9e-bf59296084e5 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 10:00:32 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v571: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 10:00:32 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:00:32 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:00:32 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:00:32.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:00:32 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:00:32 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:00:32 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:00:32.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:00:32 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:00:32 compute-0 virtqemud[254445]: libvirt version: 11.9.0, package: 1.el9 (builder@centos.org, 2025-11-04-09:54:50, )
Dec 06 10:00:32 compute-0 virtqemud[254445]: hostname: compute-0
Dec 06 10:00:32 compute-0 virtqemud[254445]: End of file while reading data: Input/output error
Dec 06 10:00:32 compute-0 systemd[1]: libpod-61186ed8c634307cf0309e3bca9d5df1e0856e135e8553b861cf702ecb9431f4.scope: Deactivated successfully.
Dec 06 10:00:32 compute-0 systemd[1]: libpod-61186ed8c634307cf0309e3bca9d5df1e0856e135e8553b861cf702ecb9431f4.scope: Consumed 3.784s CPU time.
Dec 06 10:00:32 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:00:32 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4730001ff0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:00:32 compute-0 podman[254763]: 2025-12-06 10:00:32.705391524 +0000 UTC m=+0.601242430 container died 61186ed8c634307cf0309e3bca9d5df1e0856e135e8553b861cf702ecb9431f4 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:d6189c79b326e4b09ccae1141528b03bc59b2533781a960e8f91f2a5dbb343d5, name=nova_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:d6189c79b326e4b09ccae1141528b03bc59b2533781a960e8f91f2a5dbb343d5', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=edpm, container_name=nova_compute, io.buildah.version=1.41.3)
Dec 06 10:00:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-d6cd297d4d5e03b6ecf69eff4e5568648c8e7cf0535bacb2e02453ba51d963b1-merged.mount: Deactivated successfully.
Dec 06 10:00:32 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-61186ed8c634307cf0309e3bca9d5df1e0856e135e8553b861cf702ecb9431f4-userdata-shm.mount: Deactivated successfully.
Dec 06 10:00:33 compute-0 podman[254763]: 2025-12-06 10:00:33.197688868 +0000 UTC m=+1.093539774 container cleanup 61186ed8c634307cf0309e3bca9d5df1e0856e135e8553b861cf702ecb9431f4 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:d6189c79b326e4b09ccae1141528b03bc59b2533781a960e8f91f2a5dbb343d5, name=nova_compute, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:d6189c79b326e4b09ccae1141528b03bc59b2533781a960e8f91f2a5dbb343d5', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, container_name=nova_compute, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Dec 06 10:00:33 compute-0 podman[254763]: nova_compute
Dec 06 10:00:33 compute-0 podman[254791]: nova_compute
Dec 06 10:00:33 compute-0 systemd[1]: edpm_nova_compute.service: Deactivated successfully.
Dec 06 10:00:33 compute-0 systemd[1]: Stopped nova_compute container.
Dec 06 10:00:33 compute-0 systemd[1]: Starting nova_compute container...
Dec 06 10:00:33 compute-0 systemd[1]: Started libcrun container.
Dec 06 10:00:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d6cd297d4d5e03b6ecf69eff4e5568648c8e7cf0535bacb2e02453ba51d963b1/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Dec 06 10:00:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d6cd297d4d5e03b6ecf69eff4e5568648c8e7cf0535bacb2e02453ba51d963b1/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Dec 06 10:00:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d6cd297d4d5e03b6ecf69eff4e5568648c8e7cf0535bacb2e02453ba51d963b1/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Dec 06 10:00:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d6cd297d4d5e03b6ecf69eff4e5568648c8e7cf0535bacb2e02453ba51d963b1/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Dec 06 10:00:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d6cd297d4d5e03b6ecf69eff4e5568648c8e7cf0535bacb2e02453ba51d963b1/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Dec 06 10:00:33 compute-0 podman[254804]: 2025-12-06 10:00:33.408112329 +0000 UTC m=+0.107424395 container init 61186ed8c634307cf0309e3bca9d5df1e0856e135e8553b861cf702ecb9431f4 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:d6189c79b326e4b09ccae1141528b03bc59b2533781a960e8f91f2a5dbb343d5, name=nova_compute, maintainer=OpenStack Kubernetes Operator team, config_id=edpm, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:d6189c79b326e4b09ccae1141528b03bc59b2533781a960e8f91f2a5dbb343d5', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=nova_compute)
Dec 06 10:00:33 compute-0 podman[254804]: 2025-12-06 10:00:33.417491141 +0000 UTC m=+0.116803187 container start 61186ed8c634307cf0309e3bca9d5df1e0856e135e8553b861cf702ecb9431f4 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:d6189c79b326e4b09ccae1141528b03bc59b2533781a960e8f91f2a5dbb343d5, name=nova_compute, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:d6189c79b326e4b09ccae1141528b03bc59b2533781a960e8f91f2a5dbb343d5', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, container_name=nova_compute, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_id=edpm, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 10:00:33 compute-0 nova_compute[254819]: + sudo -E kolla_set_configs
Dec 06 10:00:33 compute-0 nova_compute[254819]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Dec 06 10:00:33 compute-0 nova_compute[254819]: INFO:__main__:Validating config file
Dec 06 10:00:33 compute-0 nova_compute[254819]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Dec 06 10:00:33 compute-0 nova_compute[254819]: INFO:__main__:Copying service configuration files
Dec 06 10:00:33 compute-0 nova_compute[254819]: INFO:__main__:Deleting /etc/nova/nova.conf
Dec 06 10:00:33 compute-0 nova_compute[254819]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Dec 06 10:00:33 compute-0 nova_compute[254819]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Dec 06 10:00:33 compute-0 nova_compute[254819]: INFO:__main__:Deleting /etc/nova/nova.conf.d/01-nova.conf
Dec 06 10:00:33 compute-0 nova_compute[254819]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Dec 06 10:00:33 compute-0 nova_compute[254819]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Dec 06 10:00:33 compute-0 nova_compute[254819]: INFO:__main__:Deleting /etc/nova/nova.conf.d/03-ceph-nova.conf
Dec 06 10:00:33 compute-0 nova_compute[254819]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Dec 06 10:00:33 compute-0 nova_compute[254819]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Dec 06 10:00:33 compute-0 nova_compute[254819]: INFO:__main__:Deleting /etc/nova/nova.conf.d/25-nova-extra.conf
Dec 06 10:00:33 compute-0 nova_compute[254819]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Dec 06 10:00:33 compute-0 nova_compute[254819]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Dec 06 10:00:33 compute-0 nova_compute[254819]: INFO:__main__:Deleting /etc/nova/nova.conf.d/nova-blank.conf
Dec 06 10:00:33 compute-0 nova_compute[254819]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Dec 06 10:00:33 compute-0 nova_compute[254819]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Dec 06 10:00:33 compute-0 nova_compute[254819]: INFO:__main__:Deleting /etc/nova/nova.conf.d/02-nova-host-specific.conf
Dec 06 10:00:33 compute-0 nova_compute[254819]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Dec 06 10:00:33 compute-0 nova_compute[254819]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Dec 06 10:00:33 compute-0 nova_compute[254819]: INFO:__main__:Deleting /etc/ceph
Dec 06 10:00:33 compute-0 nova_compute[254819]: INFO:__main__:Creating directory /etc/ceph
Dec 06 10:00:33 compute-0 nova_compute[254819]: INFO:__main__:Setting permission for /etc/ceph
Dec 06 10:00:33 compute-0 nova_compute[254819]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Dec 06 10:00:33 compute-0 nova_compute[254819]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Dec 06 10:00:33 compute-0 nova_compute[254819]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf
Dec 06 10:00:33 compute-0 nova_compute[254819]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Dec 06 10:00:33 compute-0 nova_compute[254819]: INFO:__main__:Deleting /var/lib/nova/.ssh/ssh-privatekey
Dec 06 10:00:33 compute-0 nova_compute[254819]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Dec 06 10:00:33 compute-0 nova_compute[254819]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Dec 06 10:00:33 compute-0 nova_compute[254819]: INFO:__main__:Deleting /var/lib/nova/.ssh/config
Dec 06 10:00:33 compute-0 nova_compute[254819]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Dec 06 10:00:33 compute-0 nova_compute[254819]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Dec 06 10:00:33 compute-0 nova_compute[254819]: INFO:__main__:Deleting /usr/sbin/iscsiadm
Dec 06 10:00:33 compute-0 nova_compute[254819]: INFO:__main__:Copying /var/lib/kolla/config_files/run-on-host to /usr/sbin/iscsiadm
Dec 06 10:00:33 compute-0 nova_compute[254819]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm
Dec 06 10:00:33 compute-0 nova_compute[254819]: INFO:__main__:Writing out command to execute
Dec 06 10:00:33 compute-0 nova_compute[254819]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Dec 06 10:00:33 compute-0 nova_compute[254819]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Dec 06 10:00:33 compute-0 nova_compute[254819]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Dec 06 10:00:33 compute-0 nova_compute[254819]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Dec 06 10:00:33 compute-0 nova_compute[254819]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Dec 06 10:00:33 compute-0 nova_compute[254819]: ++ cat /run_command
Dec 06 10:00:33 compute-0 nova_compute[254819]: + CMD=nova-compute
Dec 06 10:00:33 compute-0 nova_compute[254819]: + ARGS=
Dec 06 10:00:33 compute-0 nova_compute[254819]: + sudo kolla_copy_cacerts
Dec 06 10:00:33 compute-0 nova_compute[254819]: + [[ ! -n '' ]]
Dec 06 10:00:33 compute-0 nova_compute[254819]: + . kolla_extend_start
Dec 06 10:00:33 compute-0 nova_compute[254819]: + echo 'Running command: '\''nova-compute'\'''
Dec 06 10:00:33 compute-0 nova_compute[254819]: Running command: 'nova-compute'
Dec 06 10:00:33 compute-0 nova_compute[254819]: + umask 0022
Dec 06 10:00:33 compute-0 nova_compute[254819]: + exec nova-compute
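
The trace above is kolla's container entrypoint at work: kolla_set_configs loads /var/lib/kolla/config_files/config.json and, under the COPY_ALWAYS strategy, deletes each destination, re-copies the source, and sets permissions, after which the command stored in /run_command is exec'd. A minimal sketch of that copy loop (not kolla's implementation), with a simplified config.json schema assumed for illustration:

import json
import os
import shutil

with open("/var/lib/kolla/config_files/config.json") as f:
    config = json.load(f)

# Simplified: the real tool also handles directories, globs, and ownership.
for entry in config.get("config_files", []):
    src, dest = entry["source"], entry["dest"]
    if os.path.lexists(dest):
        os.remove(dest)                                 # "Deleting <dest>"
    shutil.copy(src, dest)                              # "Copying <src> to <dest>"
    os.chmod(dest, int(entry.get("perm", "0600"), 8))   # "Setting permission"

# The entrypoint then reads the service command from /run_command and
# exec()s it, which is the "+ exec nova-compute" step in the log.
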
Dec 06 10:00:33 compute-0 podman[254804]: nova_compute
Dec 06 10:00:33 compute-0 systemd[1]: Started nova_compute container.
Dec 06 10:00:33 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:00:33 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4754002270 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:00:33 compute-0 sudo[254737]: pam_unix(sudo:session): session closed for user root
Dec 06 10:00:34 compute-0 ceph-mon[74327]: pgmap v571: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 10:00:34 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:00:34 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4720004340 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:00:34 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v572: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Dec 06 10:00:34 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:00:34 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:00:34 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:00:34.458 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:00:34 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:00:34 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 10:00:34 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:00:34.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 10:00:34 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:00:34 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4724002690 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:00:34 compute-0 sudo[254982]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-medduztmodkaygflopuokbnwpsfytwvd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765015234.5029988-4384-28079212653252/AnsiballZ_podman_container.py'
Dec 06 10:00:34 compute-0 sudo[254982]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 10:00:35 compute-0 python3.9[254984]: ansible-containers.podman.podman_container Invoked with name=nova_compute_init state=started executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Dec 06 10:00:35 compute-0 ceph-mon[74327]: pgmap v572: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Dec 06 10:00:35 compute-0 systemd[1]: Started libpod-conmon-60c8ec5cf17302d0f66429fac7cab04e2b9619653bb835479ed1ce484891ed93.scope.
Dec 06 10:00:35 compute-0 systemd[1]: Started libcrun container.
Dec 06 10:00:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf38a67268bb5c778ee22b82a67e967500166ebf66af340febcfb15bfceb4b28/merged/usr/sbin/nova_statedir_ownership.py supports timestamps until 2038 (0x7fffffff)
Dec 06 10:00:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf38a67268bb5c778ee22b82a67e967500166ebf66af340febcfb15bfceb4b28/merged/var/lib/_nova_secontext supports timestamps until 2038 (0x7fffffff)
Dec 06 10:00:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf38a67268bb5c778ee22b82a67e967500166ebf66af340febcfb15bfceb4b28/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Dec 06 10:00:35 compute-0 podman[255010]: 2025-12-06 10:00:35.271307298 +0000 UTC m=+0.123952367 container init 60c8ec5cf17302d0f66429fac7cab04e2b9619653bb835479ed1ce484891ed93 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:d6189c79b326e4b09ccae1141528b03bc59b2533781a960e8f91f2a5dbb343d5, name=nova_compute_init, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:d6189c79b326e4b09ccae1141528b03bc59b2533781a960e8f91f2a5dbb343d5', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, maintainer=OpenStack Kubernetes Operator team, container_name=nova_compute_init, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 06 10:00:35 compute-0 podman[255010]: 2025-12-06 10:00:35.279774395 +0000 UTC m=+0.132419454 container start 60c8ec5cf17302d0f66429fac7cab04e2b9619653bb835479ed1ce484891ed93 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:d6189c79b326e4b09ccae1141528b03bc59b2533781a960e8f91f2a5dbb343d5, name=nova_compute_init, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:d6189c79b326e4b09ccae1141528b03bc59b2533781a960e8f91f2a5dbb343d5', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, container_name=nova_compute_init, io.buildah.version=1.41.3)
Dec 06 10:00:35 compute-0 python3.9[254984]: ansible-containers.podman.podman_container PODMAN-CONTAINER-DEBUG: podman start nova_compute_init
Dec 06 10:00:35 compute-0 nova_compute_init[255032]: INFO:nova_statedir:Applying nova statedir ownership
Dec 06 10:00:35 compute-0 nova_compute_init[255032]: INFO:nova_statedir:Target ownership for /var/lib/nova: 42436:42436
Dec 06 10:00:35 compute-0 nova_compute_init[255032]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/
Dec 06 10:00:35 compute-0 nova_compute_init[255032]: INFO:nova_statedir:Changing ownership of /var/lib/nova from 1000:1000 to 42436:42436
Dec 06 10:00:35 compute-0 nova_compute_init[255032]: INFO:nova_statedir:Setting selinux context of /var/lib/nova to system_u:object_r:container_file_t:s0
Dec 06 10:00:35 compute-0 nova_compute_init[255032]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/instances/
Dec 06 10:00:35 compute-0 nova_compute_init[255032]: INFO:nova_statedir:Changing ownership of /var/lib/nova/instances from 1000:1000 to 42436:42436
Dec 06 10:00:35 compute-0 nova_compute_init[255032]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/instances to system_u:object_r:container_file_t:s0
Dec 06 10:00:35 compute-0 nova_compute_init[255032]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/
Dec 06 10:00:35 compute-0 nova_compute_init[255032]: INFO:nova_statedir:Ownership of /var/lib/nova/.ssh already 42436:42436
Dec 06 10:00:35 compute-0 nova_compute_init[255032]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/.ssh to system_u:object_r:container_file_t:s0
Dec 06 10:00:35 compute-0 nova_compute_init[255032]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/ssh-privatekey
Dec 06 10:00:35 compute-0 nova_compute_init[255032]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/config
Dec 06 10:00:35 compute-0 nova_compute_init[255032]: INFO:nova_statedir:Nova statedir ownership complete
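
The nova_compute_init output above comes from nova_statedir_ownership.py, which walks /var/lib/nova, re-owns anything not already 42436:42436, and skips the path named in NOVA_STATEDIR_OWNERSHIP_SKIP (/var/lib/nova/compute_id here). A minimal sketch of that walk, not the shipped script; the real one also resets SELinux contexts (system_u:object_r:container_file_t:s0), which this sketch omits:

import os

TARGET_UID, TARGET_GID = 42436, 42436  # target ownership from the log
SKIP = os.environ.get("NOVA_STATEDIR_OWNERSHIP_SKIP",
                      "/var/lib/nova/compute_id")

def ensure_owner(path):
    """Chown path to the nova uid/gid unless it is the skipped compute_id."""
    if path == SKIP:
        return
    st = os.lstat(path)
    if (st.st_uid, st.st_gid) != (TARGET_UID, TARGET_GID):
        # Corresponds to "Changing ownership of <path> ... to 42436:42436".
        os.lchown(path, TARGET_UID, TARGET_GID)

for dirpath, dirnames, filenames in os.walk("/var/lib/nova"):
    ensure_owner(dirpath)
    for fname in filenames:
        ensure_owner(os.path.join(dirpath, fname))
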
Dec 06 10:00:35 compute-0 systemd[1]: libpod-60c8ec5cf17302d0f66429fac7cab04e2b9619653bb835479ed1ce484891ed93.scope: Deactivated successfully.
Dec 06 10:00:35 compute-0 podman[255046]: 2025-12-06 10:00:35.378545209 +0000 UTC m=+0.029824620 container died 60c8ec5cf17302d0f66429fac7cab04e2b9619653bb835479ed1ce484891ed93 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:d6189c79b326e4b09ccae1141528b03bc59b2533781a960e8f91f2a5dbb343d5, name=nova_compute_init, container_name=nova_compute_init, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=edpm, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:d6189c79b326e4b09ccae1141528b03bc59b2533781a960e8f91f2a5dbb343d5', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec 06 10:00:35 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-60c8ec5cf17302d0f66429fac7cab04e2b9619653bb835479ed1ce484891ed93-userdata-shm.mount: Deactivated successfully.
Dec 06 10:00:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-bf38a67268bb5c778ee22b82a67e967500166ebf66af340febcfb15bfceb4b28-merged.mount: Deactivated successfully.
Dec 06 10:00:35 compute-0 podman[255046]: 2025-12-06 10:00:35.41599977 +0000 UTC m=+0.067279171 container cleanup 60c8ec5cf17302d0f66429fac7cab04e2b9619653bb835479ed1ce484891ed93 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:d6189c79b326e4b09ccae1141528b03bc59b2533781a960e8f91f2a5dbb343d5, name=nova_compute_init, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=edpm, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=nova_compute_init, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:d6189c79b326e4b09ccae1141528b03bc59b2533781a960e8f91f2a5dbb343d5', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']})
Dec 06 10:00:35 compute-0 systemd[1]: libpod-conmon-60c8ec5cf17302d0f66429fac7cab04e2b9619653bb835479ed1ce484891ed93.scope: Deactivated successfully.
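The 'container died' and 'container cleanup' events above embed the full config_data for this one-shot init container: no network, SELinux labeling disabled, root user, no restart, four bind mounts. Rebuilt by hand from those fields, the launch is approximately the podman invocation below; the flag spelling and ordering are an illustration of what edpm_ansible drives, not its literal call.

    import subprocess

    image = ('quay.io/podified-antelope-centos9/openstack-nova-compute'
             '@sha256:d6189c79b326e4b09ccae1141528b03bc59b2533781a960e8f91f2a5dbb343d5')
    # Values copied from the config_data in the podman events above.
    subprocess.run([
        'podman', 'run', '--name', 'nova_compute_init',
        '--user', 'root', '--net', 'none',
        '--security-opt', 'label=disable',
        '--env', 'NOVA_STATEDIR_OWNERSHIP_SKIP=/var/lib/nova/compute_id',
        '-v', '/dev/log:/dev/log',
        '-v', '/var/lib/nova:/var/lib/nova:shared',
        '-v', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z',
        '-v', '/var/lib/openstack/config/nova/nova_statedir_ownership.py'
              ':/sbin/nova_statedir_ownership.py:z',
        image,
        'bash', '-c',
        'python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init',
    ], check=True)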
Dec 06 10:00:35 compute-0 sudo[254982]: pam_unix(sudo:session): session closed for user root
Dec 06 10:00:35 compute-0 nova_compute[254819]: 2025-12-06 10:00:35.542 254824 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Dec 06 10:00:35 compute-0 nova_compute[254819]: 2025-12-06 10:00:35.543 254824 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Dec 06 10:00:35 compute-0 nova_compute[254819]: 2025-12-06 10:00:35.544 254824 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Dec 06 10:00:35 compute-0 nova_compute[254819]: 2025-12-06 10:00:35.544 254824 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs
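os_vif discovers those three plugins through setuptools entry points; the DEBUG and INFO lines above are emitted while the plugin manager loads them. A sketch of that discovery with stevedore, using the 'os_vif' entry-point namespace that plugin packages such as vif_plug_ovs register under:

    from stevedore import extension

    # vif_plug_linux_bridge, vif_plug_noop and vif_plug_ovs each register a
    # plugin class under the 'os_vif' namespace; loading the manager is what
    # produces one 'Loaded VIF plugin class ...' line per plugin.
    mgr = extension.ExtensionManager(namespace='os_vif', invoke_on_load=False)
    for ext in mgr:
        print(f"Loaded VIF plugin class {ext.plugin!r} with name '{ext.name}'")
    print('Loaded VIF plugins:', ', '.join(sorted(mgr.names())))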
Dec 06 10:00:35 compute-0 nova_compute[254819]: 2025-12-06 10:00:35.719 254824 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 10:00:35 compute-0 nova_compute[254819]: 2025-12-06 10:00:35.735 254824 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.016s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 10:00:35 compute-0 nova_compute[254819]: 2025-12-06 10:00:35.736 254824 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473
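The grep that 'failed' above is a capability probe, not an error: if the iscsiadm binary contains the string node.session.scan it supports manual session scanning, and exit code 1 simply means this build does not. The same probe written directly against oslo.concurrency, with the helper name made up for illustration:

    from oslo_concurrency import processutils

    def iscsiadm_supports_manual_scan(binary='/sbin/iscsiadm'):
        # grep -F exits 0 on a match and 1 on no match; listing both codes
        # keeps processutils from raising ProcessExecutionError on 1.
        out, _err = processutils.execute(
            'grep', '-F', 'node.session.scan', binary, check_exit_code=[0, 1])
        return bool(out)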
Dec 06 10:00:35 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:00:35 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4730001ff0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:00:36 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:00:36 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4754002270 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:00:36 compute-0 sshd-session[223939]: Connection closed by 192.168.122.30 port 50514
Dec 06 10:00:36 compute-0 sshd-session[223936]: pam_unix(sshd:session): session closed for user zuul
Dec 06 10:00:36 compute-0 systemd[1]: session-54.scope: Deactivated successfully.
Dec 06 10:00:36 compute-0 systemd[1]: session-54.scope: Consumed 2min 34.573s CPU time.
Dec 06 10:00:36 compute-0 systemd-logind[795]: Session 54 logged out. Waiting for processes to exit.
Dec 06 10:00:36 compute-0 systemd-logind[795]: Removed session 54.
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.193 254824 INFO nova.virt.driver [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'
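compute_driver = libvirt.LibvirtDriver (visible in the CONF dump below) is shorthand for a class under the nova.virt package, and the INFO line above is nova resolving and importing it. A rough equivalent with oslo.utils; the expanded class path follows nova's naming convention, and instantiation is omitted because the real driver requires a virtapi argument:

    from oslo_utils import importutils

    # 'libvirt.LibvirtDriver' expands against the nova.virt namespace.
    cls = importutils.import_class('nova.virt.libvirt.driver.LibvirtDriver')
    print(f"Loading compute driver '{cls.__module__}.{cls.__name__}'")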
Dec 06 10:00:36 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v573: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.307 254824 INFO nova.compute.provider_config [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.316 254824 DEBUG oslo_concurrency.lockutils [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.316 254824 DEBUG oslo_concurrency.lockutils [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.316 254824 DEBUG oslo_concurrency.lockutils [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
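The Acquiring/Acquired/Releasing triple around "singleton_lock" is oslo.concurrency's standard logged lock pattern, used here by oslo.service around launcher setup. In application code it is just the lockutils context manager; by default this is an in-process lock, while external file locks would land under the oslo_concurrency.lock_path shown later in the dump:

    from oslo_concurrency import lockutils

    # Emits the same Acquiring/Acquired/Releasing DEBUG lines seen above.
    with lockutils.lock('singleton_lock'):
        pass  # start-once critical section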
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.317 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.317 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.317 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.317 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.317 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.317 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
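Everything from the asterisk rule above down through the long option listing is oslo.config's startup dump: oslo.service calls log_opt_values() because log_options = True, printing every registered option at DEBUG and masking any option registered with secret=True as '****' (transport_url and cache.backend_argument below are masked this way). A service can produce a dump in the same format itself:

    import logging
    from oslo_config import cfg

    LOG = logging.getLogger(__name__)

    # Prints the 'Configuration options gathered from:' header, one line per
    # option, then the group-prefixed sections, as in this journal.
    cfg.CONF.log_opt_values(LOG, logging.DEBUG)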
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.317 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.318 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.318 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.318 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.318 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.318 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.319 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.319 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.319 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.319 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.319 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.320 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.320 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.320 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] console_host                   = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.320 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.320 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.320 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.320 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.321 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.321 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.321 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.321 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.321 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.321 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.322 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.322 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.322 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.322 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.322 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.322 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.323 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.323 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.323 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.323 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.323 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.323 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.324 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.324 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.324 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.324 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.325 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.325 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.325 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.325 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.325 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.326 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.326 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.326 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.326 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.327 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.327 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.327 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.327 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.327 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.328 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.328 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.328 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.328 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.328 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.328 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.329 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.329 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.329 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.329 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.329 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.329 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.330 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.330 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.330 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.330 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.330 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.331 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.331 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.331 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.331 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] my_block_storage_ip            = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.331 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] my_ip                          = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.332 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.332 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.332 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.332 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.332 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.332 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.332 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.333 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.333 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.333 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.333 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.333 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.333 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.333 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.334 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.334 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.334 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.334 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.334 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.334 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.334 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.335 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.335 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.335 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.335 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.335 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.335 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.336 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.336 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.336 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.336 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.336 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.336 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.337 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.337 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.337 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.337 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.337 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.337 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.337 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.338 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.338 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.338 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.338 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.338 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.338 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.338 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.339 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.339 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.339 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.339 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.339 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.339 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.339 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.340 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.340 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.340 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.340 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.340 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.340 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.340 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.341 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.341 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.341 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.341 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.341 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.341 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.342 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.342 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.342 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.342 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.342 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.343 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.343 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.343 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.343 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.343 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.343 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.344 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.344 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.344 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.344 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.344 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.344 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.344 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.345 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.345 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.345 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.345 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.345 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.345 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.345 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.346 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.346 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.346 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.346 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.346 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.346 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.346 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.347 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.347 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.347 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.347 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.347 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.348 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.348 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.348 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.348 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.348 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.348 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.349 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.349 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.349 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.349 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.349 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.349 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.349 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.350 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.350 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.350 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.350 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.350 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.350 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.351 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.351 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.351 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.351 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.351 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.352 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.352 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.352 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.352 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.352 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.352 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.352 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.353 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.353 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.353 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.353 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.353 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.353 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.353 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.354 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.354 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.354 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.354 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.354 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.354 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.354 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.355 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.355 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.355 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.355 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.355 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.356 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.356 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.356 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.356 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.356 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.356 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.357 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.357 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.357 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.357 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.357 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.357 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.358 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.358 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.358 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.358 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.358 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.358 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.359 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.359 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.359 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.359 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.359 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.359 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.360 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.360 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.360 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.360 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.360 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.360 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.360 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.361 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.361 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.361 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.361 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.361 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.361 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.362 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.362 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.362 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.362 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.362 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.362 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.362 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.363 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.363 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.363 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.363 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.363 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.363 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.364 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.364 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.364 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.364 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.364 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.364 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.364 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.365 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.365 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.365 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.365 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.365 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.365 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.365 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.366 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.366 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.366 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.366 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.367 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.367 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.367 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.367 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.367 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.368 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.368 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.368 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.368 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.368 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.368 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.368 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.369 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.369 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.369 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.369 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.369 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.369 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.369 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.370 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.370 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.370 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.370 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.370 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.370 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.371 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.371 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.371 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.371 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.371 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.371 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.371 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.372 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.372 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.372 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.372 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.372 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.373 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.373 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.373 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.373 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.373 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.373 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.374 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.374 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.374 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.374 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.374 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.374 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.375 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.375 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.375 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.375 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.375 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.376 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.376 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.376 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.376 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.376 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.376 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.377 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.377 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.377 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.377 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.377 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.377 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.377 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.378 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.378 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.378 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.378 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.378 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.378 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.379 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.379 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.379 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.379 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.379 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.379 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.380 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.380 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.380 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.380 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.380 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.380 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.381 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.381 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.381 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.381 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.381 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.381 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.381 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.382 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.382 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.382 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.382 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.382 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.383 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.383 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.383 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.383 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.383 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.383 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.384 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.384 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.384 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.384 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.384 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.384 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.385 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.385 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.385 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.385 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.385 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.385 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.385 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.386 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.386 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.386 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.386 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.386 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.386 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.386 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.387 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.387 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.387 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.387 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.387 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.387 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.387 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.388 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.388 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.388 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.388 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.388 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.388 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.388 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.389 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.389 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.389 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.389 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.389 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.389 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.389 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.390 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.390 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.390 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.390 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.390 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] libvirt.cpu_mode               = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.390 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.391 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] libvirt.cpu_models             = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.391 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.391 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.391 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.391 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.392 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.392 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.392 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.392 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.392 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.392 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.392 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.393 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.393 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.393 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] libvirt.images_rbd_ceph_conf   = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.393 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.393 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.393 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.393 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] libvirt.images_rbd_pool        = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.394 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] libvirt.images_type            = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.394 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.394 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.394 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.394 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.394 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.394 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.395 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.395 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.395 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.395 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.395 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.395 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.395 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.396 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.396 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.396 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.396 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.396 254824 WARNING oslo_config.cfg [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Dec 06 10:00:36 compute-0 nova_compute[254819]: live_migration_uri is deprecated for removal in favor of two other options that
Dec 06 10:00:36 compute-0 nova_compute[254819]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Dec 06 10:00:36 compute-0 nova_compute[254819]: and ``live_migration_inbound_addr`` respectively.
Dec 06 10:00:36 compute-0 nova_compute[254819]: ).  Its value may be silently ignored in the future.
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.397 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
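The WARNING above is oslo.config's standard deprecation notice: this deployment still sets libvirt.live_migration_uri (logged as qemu+tls://%s/system; nova substitutes the migration target host for %s). As the message itself says, the forward-compatible form splits that URI into two options, one for the scheme and one for the target address. A minimal nova.conf sketch of that replacement, keeping the TLS transport from the logged URI (the inbound address shown is a hypothetical, deployment-specific value):

    [libvirt]
    # Transport for the migration URI; "tls" matches the qemu+tls:// scheme
    # logged above.
    live_migration_scheme = tls
    # Address or hostname that incoming migrations should target on this
    # host (hypothetical example value; set per deployment).
    live_migration_inbound_addr = compute-0.internalapi.localdomain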
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.397 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.397 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.397 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.397 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.398 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.398 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.398 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.398 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.398 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.399 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.399 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.399 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.399 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.399 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.400 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.400 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.400 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.400 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] libvirt.rbd_secret_uuid        = 5ecd3f74-dade-5fc4-92ce-8950ae424258 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.400 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] libvirt.rbd_user               = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
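Read together, the libvirt.images_* and libvirt.rbd_* values above describe a Ceph RBD backend for ephemeral disks: images_type = rbd, pool vms, cluster config at /etc/ceph/ceph.conf, cephx user openstack, and the libvirt secret UUID logged above. As a reading aid, the nova.conf fragment these lines correspond to would look like this (values copied verbatim from the log; a reconstruction, not the actual file on disk):

    [libvirt]
    # Store instance ephemeral disks as RBD images in the "vms" pool.
    images_type = rbd
    images_rbd_pool = vms
    images_rbd_ceph_conf = /etc/ceph/ceph.conf
    # cephx credentials; the secret UUID must match the libvirt secret
    # defined on this host.
    rbd_user = openstack
    rbd_secret_uuid = 5ecd3f74-dade-5fc4-92ce-8950ae424258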
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.400 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.400 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.401 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.401 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.401 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.401 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.401 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.401 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.401 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.402 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.402 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.402 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.402 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.402 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.402 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.403 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.403 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.403 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.403 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.403 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.403 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.403 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.404 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.404 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.404 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.404 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.404 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.404 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.405 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.405 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.405 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.405 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.405 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.405 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.406 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.406 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.406 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.406 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.406 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.406 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.406 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.407 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.407 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.407 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.407 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.407 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.408 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.408 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.408 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.408 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.408 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.408 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.409 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.409 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.409 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.409 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.409 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.409 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.410 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.410 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.410 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.410 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.410 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.411 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.411 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.411 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.411 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.411 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.412 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.412 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.412 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.412 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.412 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.412 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.413 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.413 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.413 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.413 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.413 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.414 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.414 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.414 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.414 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.414 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.414 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.414 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.415 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.415 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.415 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.415 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.415 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.416 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.416 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.416 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.416 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.416 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.416 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.417 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.417 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.417 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.417 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.417 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.418 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.418 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.418 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
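
Read together, the placement.* values above amount to a [placement] section of nova.conf roughly like the following sketch, built only from the logged values; options logged as None are left at their defaults, and the password is masked in the log, so a placeholder stands in for it:

  [placement]
  auth_type = password
  auth_url = https://keystone-internal.openstack.svc:5000
  username = nova
  password = <masked in log>
  project_name = service
  project_domain_name = Default
  user_domain_name = Default
  region_name = regionOne
  valid_interfaces = internal
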
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.418 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.418 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.419 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.419 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.419 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.419 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.419 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.420 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.420 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.420 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.420 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.420 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.421 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
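
The quota.* values above are per-project limits enforced through nova.quota.DbQuotaDriver, and they appear to match the stock upstream defaults (10 instances, 20 cores, 51200 MB of RAM). A minimal equivalent excerpt, assuming nothing beyond what is logged:

  [quota]
  driver = nova.quota.DbQuotaDriver
  instances = 10
  cores = 20
  ram = 51200
  count_usage_from_placement = false
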
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.421 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.421 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.421 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.422 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.422 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.422 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.422 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.422 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.423 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.423 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.423 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.423 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.423 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.424 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.424 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.424 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.424 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.424 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.425 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.425 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.425 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.425 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.425 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.426 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.426 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.426 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.426 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.426 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.427 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.427 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.427 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.427 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.427 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.428 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.428 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.428 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.428 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
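
The filter_scheduler.* block is scheduler-side configuration that nova-compute logs only because all nova services share one option registry. The notable part is the filter pipeline: five filters enabled, weigher multipliers left at 1.0 apart from the large build-failure and cross-cell penalties. A sketch of the stanza reproducing the enabled set as logged:

  [filter_scheduler]
  available_filters = nova.scheduler.filters.all_filters
  enabled_filters = ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,ServerGroupAntiAffinityFilter,ServerGroupAffinityFilter
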
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.428 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.429 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.429 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.429 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.429 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.429 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.430 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.430 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.430 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.430 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.430 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.430 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.431 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.431 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.431 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.431 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.431 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.431 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.432 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.432 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
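
service_user.send_service_user_token = True means this compute service attaches its own service token (X-Service-Token) alongside the user's token when calling other OpenStack APIs, so long-running operations survive user-token expiry. A minimal sketch of the matching section, using only what this excerpt shows; the credential options themselves are not visible in these lines:

  [service_user]
  send_service_user_token = true
  auth_type = password
  # auth_url/username/password not shown in this log excerpt
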
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.432 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.432 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.432 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.432 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.433 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.433 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.433 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.433 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.433 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.433 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.434 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.434 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.434 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.434 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.434 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.434 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.435 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
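
Of the upgrade_levels.* options, only compute is set: auto pins compute RPC messages to the lowest nova-compute service version found in the deployment, which keeps mixed-version hosts interoperable during rolling upgrades. Expressed as config:

  [upgrade_levels]
  compute = auto
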
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.435 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.435 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.435 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.435 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.435 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.436 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.436 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.436 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.436 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.436 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.436 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.436 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.437 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.437 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.437 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.437 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.437 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.437 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.437 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.437 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.438 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.438 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.438 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.438 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.438 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.438 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.438 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.439 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.439 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.439 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.439 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.439 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.439 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.439 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.440 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.440 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.440 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.440 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.440 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.440 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.441 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] vnc.server_proxyclient_address = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.441 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.441 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.441 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
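
The vnc.* block shows the console wiring for this host: the hypervisor listens on the wildcard address (server_listen = ::0), the noVNC proxy connects back via 192.168.122.100, and clients are handed the public vnc_lite.html URL. As a nova.conf sketch assembled only from the values logged above:

  [vnc]
  enabled = true
  server_listen = ::0
  server_proxyclient_address = 192.168.122.100
  novncproxy_base_url = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html
  auth_schemes = none
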
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.441 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.441 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.441 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.442 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.442 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.442 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.442 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.442 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.442 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.442 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.442 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.443 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.443 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.443 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.443 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.443 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.443 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.443 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.444 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.444 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.444 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
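
Two apparently non-default workarounds stand out above: enable_qemu_monitor_announce_self = True (re-announce the guest's MAC through the QEMU monitor after live migration, here 3 attempts 1 second apart, which match the usual defaults) and skip_cpu_compare_on_dest = True (skip the destination-side CPU model pre-check and trust the libvirt/QEMU negotiation instead). As an excerpt:

  [workarounds]
  enable_qemu_monitor_announce_self = true
  qemu_monitor_announce_self_count = 3
  qemu_monitor_announce_self_interval = 1
  skip_cpu_compare_on_dest = true
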
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.444 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.444 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.444 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.444 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.445 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.445 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.445 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.445 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.445 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.445 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.445 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.446 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.446 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.446 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.446 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.446 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.446 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.446 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.447 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.447 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.447 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.447 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.447 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.447 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.448 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.448 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.448 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.448 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.448 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.448 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.448 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.449 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.449 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.449 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.449 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.449 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.449 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.449 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.449 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.450 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.450 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.450 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.450 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.450 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.450 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.450 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.451 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.451 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.451 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.451 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.451 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.451 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.451 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.452 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.452 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.452 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.452 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.452 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.452 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.452 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.453 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.453 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.453 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.453 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.453 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.453 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.453 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.454 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.454 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.454 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.454 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.454 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.454 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.454 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.455 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.455 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.455 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.455 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.455 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.455 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.455 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.455 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.456 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.456 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.456 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.456 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.456 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.456 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.456 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.457 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.457 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.457 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.457 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.458 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.458 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.458 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.458 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.458 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.458 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.458 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.459 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.459 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.459 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.459 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.459 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.459 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.459 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.460 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.460 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.460 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.460 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.460 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.460 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.460 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.461 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.461 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.461 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.461 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.461 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.461 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.461 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.462 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.462 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.462 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:00:36 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:00:36.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.462 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.462 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.462 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.462 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.463 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.463 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.463 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.463 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.463 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.463 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.463 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.464 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.464 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.464 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.464 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.464 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.464 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.465 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.465 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.465 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.465 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.465 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.465 254824 DEBUG oslo_service.service [None req-25bda6dc-ce0c-4c56-9101-114bd2dec329 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.466 254824 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.486 254824 INFO nova.virt.node [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] Determined node identity 06a9c7d1-c74c-47ea-9e97-16acfab6aa88 from /var/lib/nova/compute_id
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.487 254824 DEBUG nova.virt.libvirt.host [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.488 254824 DEBUG nova.virt.libvirt.host [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.488 254824 DEBUG nova.virt.libvirt.host [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.488 254824 DEBUG nova.virt.libvirt.host [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.503 254824 DEBUG nova.virt.libvirt.host [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7f223c536760> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.505 254824 DEBUG nova.virt.libvirt.host [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7f223c536760> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.506 254824 INFO nova.virt.libvirt.driver [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] Connection event '1' reason 'None'
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.513 254824 INFO nova.virt.libvirt.host [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] Libvirt host capabilities <capabilities>
Dec 06 10:00:36 compute-0 nova_compute[254819]: 
Dec 06 10:00:36 compute-0 nova_compute[254819]:   <host>
Dec 06 10:00:36 compute-0 nova_compute[254819]:     <uuid>cc5c2b35-ce1b-4acf-9906-7bdc7897f14e</uuid>
Dec 06 10:00:36 compute-0 nova_compute[254819]:     <cpu>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <arch>x86_64</arch>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model>EPYC-Rome-v4</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <vendor>AMD</vendor>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <microcode version='16777317'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <signature family='23' model='49' stepping='0'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <topology sockets='8' dies='1' clusters='1' cores='1' threads='1'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <maxphysaddr mode='emulate' bits='40'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <feature name='x2apic'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <feature name='tsc-deadline'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <feature name='osxsave'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <feature name='hypervisor'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <feature name='tsc_adjust'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <feature name='spec-ctrl'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <feature name='stibp'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <feature name='arch-capabilities'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <feature name='ssbd'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <feature name='cmp_legacy'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <feature name='topoext'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <feature name='virt-ssbd'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <feature name='lbrv'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <feature name='tsc-scale'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <feature name='vmcb-clean'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <feature name='pause-filter'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <feature name='pfthreshold'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <feature name='svme-addr-chk'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <feature name='rdctl-no'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <feature name='skip-l1dfl-vmentry'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <feature name='mds-no'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <feature name='pschange-mc-no'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <pages unit='KiB' size='4'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <pages unit='KiB' size='2048'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <pages unit='KiB' size='1048576'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:     </cpu>
Dec 06 10:00:36 compute-0 nova_compute[254819]:     <power_management>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <suspend_mem/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:     </power_management>
Dec 06 10:00:36 compute-0 nova_compute[254819]:     <iommu support='no'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:     <migration_features>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <live/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <uri_transports>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <uri_transport>tcp</uri_transport>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <uri_transport>rdma</uri_transport>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </uri_transports>
Dec 06 10:00:36 compute-0 nova_compute[254819]:     </migration_features>
Dec 06 10:00:36 compute-0 nova_compute[254819]:     <topology>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <cells num='1'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <cell id='0'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:           <memory unit='KiB'>7864320</memory>
Dec 06 10:00:36 compute-0 nova_compute[254819]:           <pages unit='KiB' size='4'>1966080</pages>
Dec 06 10:00:36 compute-0 nova_compute[254819]:           <pages unit='KiB' size='2048'>0</pages>
Dec 06 10:00:36 compute-0 nova_compute[254819]:           <pages unit='KiB' size='1048576'>0</pages>
Dec 06 10:00:36 compute-0 nova_compute[254819]:           <distances>
Dec 06 10:00:36 compute-0 nova_compute[254819]:             <sibling id='0' value='10'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:           </distances>
Dec 06 10:00:36 compute-0 nova_compute[254819]:           <cpus num='8'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:             <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:             <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:             <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:             <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:             <cpu id='4' socket_id='4' die_id='4' cluster_id='65535' core_id='0' siblings='4'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:             <cpu id='5' socket_id='5' die_id='5' cluster_id='65535' core_id='0' siblings='5'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:             <cpu id='6' socket_id='6' die_id='6' cluster_id='65535' core_id='0' siblings='6'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:             <cpu id='7' socket_id='7' die_id='7' cluster_id='65535' core_id='0' siblings='7'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:           </cpus>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         </cell>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </cells>
Dec 06 10:00:36 compute-0 nova_compute[254819]:     </topology>
Dec 06 10:00:36 compute-0 nova_compute[254819]:     <cache>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <bank id='4' level='2' type='both' size='512' unit='KiB' cpus='4'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <bank id='5' level='2' type='both' size='512' unit='KiB' cpus='5'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <bank id='6' level='2' type='both' size='512' unit='KiB' cpus='6'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <bank id='7' level='2' type='both' size='512' unit='KiB' cpus='7'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <bank id='4' level='3' type='both' size='16' unit='MiB' cpus='4'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <bank id='5' level='3' type='both' size='16' unit='MiB' cpus='5'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <bank id='6' level='3' type='both' size='16' unit='MiB' cpus='6'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <bank id='7' level='3' type='both' size='16' unit='MiB' cpus='7'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:     </cache>
Dec 06 10:00:36 compute-0 nova_compute[254819]:     <secmodel>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model>selinux</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <doi>0</doi>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Dec 06 10:00:36 compute-0 nova_compute[254819]:     </secmodel>
Dec 06 10:00:36 compute-0 nova_compute[254819]:     <secmodel>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model>dac</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <doi>0</doi>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <baselabel type='kvm'>+107:+107</baselabel>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <baselabel type='qemu'>+107:+107</baselabel>
Dec 06 10:00:36 compute-0 nova_compute[254819]:     </secmodel>
Dec 06 10:00:36 compute-0 nova_compute[254819]:   </host>
Dec 06 10:00:36 compute-0 nova_compute[254819]: 
Dec 06 10:00:36 compute-0 nova_compute[254819]:   <guest>
Dec 06 10:00:36 compute-0 nova_compute[254819]:     <os_type>hvm</os_type>
Dec 06 10:00:36 compute-0 nova_compute[254819]:     <arch name='i686'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <wordsize>32</wordsize>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <domain type='qemu'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <domain type='kvm'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:     </arch>
Dec 06 10:00:36 compute-0 nova_compute[254819]:     <features>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <pae/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <nonpae/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <acpi default='on' toggle='yes'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <apic default='on' toggle='no'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <cpuselection/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <deviceboot/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <disksnapshot default='on' toggle='no'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <externalSnapshot/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:     </features>
Dec 06 10:00:36 compute-0 nova_compute[254819]:   </guest>
Dec 06 10:00:36 compute-0 nova_compute[254819]: 
Dec 06 10:00:36 compute-0 nova_compute[254819]:   <guest>
Dec 06 10:00:36 compute-0 nova_compute[254819]:     <os_type>hvm</os_type>
Dec 06 10:00:36 compute-0 nova_compute[254819]:     <arch name='x86_64'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <wordsize>64</wordsize>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <domain type='qemu'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <domain type='kvm'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:     </arch>
Dec 06 10:00:36 compute-0 nova_compute[254819]:     <features>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <acpi default='on' toggle='yes'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <apic default='on' toggle='no'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <cpuselection/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <deviceboot/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <disksnapshot default='on' toggle='no'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <externalSnapshot/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:     </features>
Dec 06 10:00:36 compute-0 nova_compute[254819]:   </guest>
Dec 06 10:00:36 compute-0 nova_compute[254819]: 
Dec 06 10:00:36 compute-0 nova_compute[254819]: </capabilities>
Dec 06 10:00:36 compute-0 nova_compute[254819]: 
Dec 06 10:00:36 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:00:36 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:00:36 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:00:36.515 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.518 254824 DEBUG nova.virt.libvirt.host [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] Getting domain capabilities for i686 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.522 254824 DEBUG nova.virt.libvirt.host [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Dec 06 10:00:36 compute-0 nova_compute[254819]: <domainCapabilities>
Dec 06 10:00:36 compute-0 nova_compute[254819]:   <path>/usr/libexec/qemu-kvm</path>
Dec 06 10:00:36 compute-0 nova_compute[254819]:   <domain>kvm</domain>
Dec 06 10:00:36 compute-0 nova_compute[254819]:   <machine>pc-i440fx-rhel7.6.0</machine>
Dec 06 10:00:36 compute-0 nova_compute[254819]:   <arch>i686</arch>
Dec 06 10:00:36 compute-0 nova_compute[254819]:   <vcpu max='240'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:   <iothreads supported='yes'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:   <os supported='yes'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:     <enum name='firmware'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:     <loader supported='yes'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <enum name='type'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>rom</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>pflash</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </enum>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <enum name='readonly'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>yes</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>no</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </enum>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <enum name='secure'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>no</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </enum>
Dec 06 10:00:36 compute-0 nova_compute[254819]:     </loader>
Dec 06 10:00:36 compute-0 nova_compute[254819]:   </os>
Dec 06 10:00:36 compute-0 nova_compute[254819]:   <cpu>
Dec 06 10:00:36 compute-0 nova_compute[254819]:     <mode name='host-passthrough' supported='yes'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <enum name='hostPassthroughMigratable'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>on</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>off</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </enum>
Dec 06 10:00:36 compute-0 nova_compute[254819]:     </mode>
Dec 06 10:00:36 compute-0 nova_compute[254819]:     <mode name='maximum' supported='yes'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <enum name='maximumMigratable'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>on</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>off</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </enum>
Dec 06 10:00:36 compute-0 nova_compute[254819]:     </mode>
Dec 06 10:00:36 compute-0 nova_compute[254819]:     <mode name='host-model' supported='yes'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model fallback='forbid'>EPYC-Rome</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <vendor>AMD</vendor>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <maxphysaddr mode='passthrough' limit='40'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <feature policy='require' name='x2apic'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <feature policy='require' name='tsc-deadline'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <feature policy='require' name='hypervisor'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <feature policy='require' name='tsc_adjust'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <feature policy='require' name='spec-ctrl'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <feature policy='require' name='stibp'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <feature policy='require' name='ssbd'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <feature policy='require' name='cmp_legacy'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <feature policy='require' name='overflow-recov'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <feature policy='require' name='succor'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <feature policy='require' name='ibrs'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <feature policy='require' name='amd-ssbd'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <feature policy='require' name='virt-ssbd'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <feature policy='require' name='lbrv'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <feature policy='require' name='tsc-scale'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <feature policy='require' name='vmcb-clean'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <feature policy='require' name='flushbyasid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <feature policy='require' name='pause-filter'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <feature policy='require' name='pfthreshold'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <feature policy='require' name='svme-addr-chk'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <feature policy='require' name='lfence-always-serializing'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <feature policy='disable' name='xsaves'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:     </mode>
Dec 06 10:00:36 compute-0 nova_compute[254819]:     <mode name='custom' supported='yes'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='Broadwell'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='hle'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='invpcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='rtm'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='Broadwell-IBRS'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='hle'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='invpcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='rtm'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='Broadwell-noTSX'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='invpcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='Broadwell-noTSX-IBRS'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='invpcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='Broadwell-v1'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='hle'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='invpcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='rtm'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='Broadwell-v2'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='invpcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='Broadwell-v3'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='hle'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='invpcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='rtm'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='Broadwell-v4'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='invpcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='Cascadelake-Server'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512bw'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512cd'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512dq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512f'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vl'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vnni'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='hle'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='invpcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pku'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='rtm'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='Cascadelake-Server-noTSX'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512bw'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512cd'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512dq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512f'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vl'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vnni'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='ibrs-all'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='invpcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pku'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='Cascadelake-Server-v1'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512bw'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512cd'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512dq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512f'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vl'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vnni'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='hle'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='invpcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pku'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='rtm'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='Cascadelake-Server-v2'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512bw'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512cd'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512dq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512f'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vl'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vnni'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='hle'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='ibrs-all'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='invpcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pku'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='rtm'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='Cascadelake-Server-v3'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512bw'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512cd'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512dq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512f'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vl'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vnni'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='ibrs-all'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='invpcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pku'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='Cascadelake-Server-v4'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512bw'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512cd'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512dq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512f'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vl'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vnni'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='ibrs-all'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='invpcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pku'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='Cascadelake-Server-v5'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512bw'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512cd'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512dq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512f'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vl'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vnni'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='ibrs-all'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='invpcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pku'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='xsaves'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='Cooperlake'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512-bf16'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512bw'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512cd'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512dq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512f'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vl'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vnni'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='hle'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='ibrs-all'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='invpcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pku'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='rtm'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='taa-no'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='Cooperlake-v1'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512-bf16'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512bw'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512cd'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512dq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512f'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vl'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vnni'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='hle'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='ibrs-all'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='invpcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pku'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='rtm'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='taa-no'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='Cooperlake-v2'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512-bf16'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512bw'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512cd'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512dq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512f'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vl'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vnni'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='hle'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='ibrs-all'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='invpcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pku'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='rtm'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='taa-no'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='xsaves'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='Denverton'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='mpx'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='Denverton-v1'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='mpx'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='Denverton-v2'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='Denverton-v3'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='xsaves'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='Dhyana-v2'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='xsaves'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='EPYC-Genoa'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='amd-psfd'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='auto-ibrs'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512-bf16'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512-vpopcntdq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512bitalg'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512bw'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512cd'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512dq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512f'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512ifma'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vbmi'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vbmi2'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vl'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vnni'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='fsrm'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='gfni'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='invpcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='la57'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='no-nested-data-bp'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='null-sel-clr-base'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pku'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='stibp-always-on'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='vaes'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='vpclmulqdq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='xsaves'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='EPYC-Genoa-v1'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='amd-psfd'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='auto-ibrs'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512-bf16'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512-vpopcntdq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512bitalg'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512bw'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512cd'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512dq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512f'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512ifma'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vbmi'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vbmi2'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vl'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vnni'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='fsrm'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='gfni'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='invpcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='la57'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='no-nested-data-bp'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='null-sel-clr-base'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pku'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='stibp-always-on'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='vaes'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='vpclmulqdq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='xsaves'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='EPYC-Milan'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='fsrm'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='invpcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pku'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='xsaves'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='EPYC-Milan-v1'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='fsrm'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='invpcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pku'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='xsaves'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='EPYC-Milan-v2'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='amd-psfd'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='fsrm'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='invpcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='no-nested-data-bp'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='null-sel-clr-base'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pku'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='stibp-always-on'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='vaes'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='vpclmulqdq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='xsaves'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='EPYC-Rome'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='xsaves'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='EPYC-Rome-v1'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='xsaves'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='EPYC-Rome-v2'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='xsaves'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='EPYC-Rome-v3'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='xsaves'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='EPYC-v3'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='xsaves'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='EPYC-v4'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='xsaves'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='GraniteRapids'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='amx-bf16'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='amx-fp16'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='amx-int8'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='amx-tile'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx-vnni'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512-bf16'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512-fp16'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512-vpopcntdq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512bitalg'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512bw'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512cd'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512dq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512f'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512ifma'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vbmi'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vbmi2'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vl'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vnni'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='bus-lock-detect'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='fbsdp-no'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='fsrc'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='fsrm'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='fsrs'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='fzrm'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='gfni'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='hle'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='ibrs-all'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='invpcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='la57'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='mcdt-no'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pbrsb-no'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pku'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='prefetchiti'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='psdp-no'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='rtm'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='sbdr-ssdp-no'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='serialize'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='taa-no'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='tsx-ldtrk'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='vaes'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='vpclmulqdq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='xfd'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='xsaves'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='GraniteRapids-v1'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='amx-bf16'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='amx-fp16'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='amx-int8'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='amx-tile'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx-vnni'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512-bf16'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512-fp16'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512-vpopcntdq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512bitalg'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512bw'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512cd'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512dq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512f'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512ifma'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vbmi'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vbmi2'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vl'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vnni'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='bus-lock-detect'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='fbsdp-no'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='fsrc'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='fsrm'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='fsrs'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='fzrm'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='gfni'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='hle'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='ibrs-all'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='invpcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='la57'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='mcdt-no'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pbrsb-no'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pku'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='prefetchiti'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='psdp-no'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='rtm'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='sbdr-ssdp-no'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='serialize'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='taa-no'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='tsx-ldtrk'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='vaes'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='vpclmulqdq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='xfd'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='xsaves'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='GraniteRapids-v2'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='amx-bf16'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='amx-fp16'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='amx-int8'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='amx-tile'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx-vnni'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx10'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx10-128'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx10-256'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx10-512'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512-bf16'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512-fp16'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512-vpopcntdq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512bitalg'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512bw'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512cd'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512dq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512f'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512ifma'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vbmi'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vbmi2'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vl'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vnni'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='bus-lock-detect'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='cldemote'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='fbsdp-no'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='fsrc'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='fsrm'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='fsrs'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='fzrm'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='gfni'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='hle'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='ibrs-all'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='invpcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='la57'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='mcdt-no'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='movdir64b'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='movdiri'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pbrsb-no'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pku'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='prefetchiti'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='psdp-no'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='rtm'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='sbdr-ssdp-no'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='serialize'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='ss'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='taa-no'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='tsx-ldtrk'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='vaes'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='vpclmulqdq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='xfd'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='xsaves'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='Haswell'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='hle'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='invpcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='rtm'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='Haswell-IBRS'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='hle'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='invpcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='rtm'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='Haswell-noTSX'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='invpcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='Haswell-noTSX-IBRS'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='invpcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='Haswell-v1'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='hle'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='invpcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='rtm'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='Haswell-v2'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='invpcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='Haswell-v3'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='hle'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='invpcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='rtm'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='Haswell-v4'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='invpcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='Icelake-Server'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512-vpopcntdq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512bitalg'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512bw'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512cd'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512dq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512f'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vbmi'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vbmi2'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vl'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vnni'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='gfni'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='hle'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='invpcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='la57'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pku'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='rtm'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='vaes'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='vpclmulqdq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='Icelake-Server-noTSX'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512-vpopcntdq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512bitalg'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512bw'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512cd'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512dq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512f'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vbmi'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vbmi2'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vl'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vnni'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='gfni'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='invpcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='la57'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pku'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='vaes'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='vpclmulqdq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='Icelake-Server-v1'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512-vpopcntdq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512bitalg'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512bw'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512cd'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512dq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512f'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vbmi'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vbmi2'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vl'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vnni'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='gfni'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='hle'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='invpcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='la57'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pku'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='rtm'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='vaes'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='vpclmulqdq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='Icelake-Server-v2'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512-vpopcntdq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512bitalg'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512bw'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512cd'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512dq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512f'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vbmi'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vbmi2'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vl'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vnni'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='gfni'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='invpcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='la57'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pku'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='vaes'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='vpclmulqdq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='Icelake-Server-v3'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512-vpopcntdq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512bitalg'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512bw'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512cd'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512dq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512f'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vbmi'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vbmi2'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vl'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vnni'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='gfni'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='ibrs-all'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='invpcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='la57'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pku'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='taa-no'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='vaes'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='vpclmulqdq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='Icelake-Server-v4'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512-vpopcntdq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512bitalg'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512bw'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512cd'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512dq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512f'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512ifma'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vbmi'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vbmi2'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vl'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vnni'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='fsrm'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='gfni'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='ibrs-all'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='invpcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='la57'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pku'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='taa-no'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='vaes'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='vpclmulqdq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='Icelake-Server-v5'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512-vpopcntdq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512bitalg'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512bw'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512cd'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512dq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512f'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512ifma'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vbmi'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vbmi2'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vl'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vnni'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='fsrm'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='gfni'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='ibrs-all'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='invpcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='la57'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pku'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='taa-no'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='vaes'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='vpclmulqdq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='xsaves'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='Icelake-Server-v6'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512-vpopcntdq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512bitalg'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512bw'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512cd'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512dq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512f'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512ifma'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vbmi'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vbmi2'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vl'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vnni'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='fsrm'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='gfni'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='ibrs-all'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='invpcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='la57'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pku'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='taa-no'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='vaes'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='vpclmulqdq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='xsaves'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='Icelake-Server-v7'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512-vpopcntdq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512bitalg'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512bw'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512cd'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512dq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512f'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512ifma'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vbmi'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vbmi2'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vl'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vnni'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='fsrm'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='gfni'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='hle'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='ibrs-all'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='invpcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='la57'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pku'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='rtm'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='taa-no'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='vaes'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='vpclmulqdq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='xsaves'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='IvyBridge'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='IvyBridge-IBRS'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='IvyBridge-v1'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='IvyBridge-v2'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='KnightsMill'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512-4fmaps'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512-4vnniw'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512-vpopcntdq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512cd'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512er'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512f'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512pf'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='ss'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='KnightsMill-v1'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512-4fmaps'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512-4vnniw'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512-vpopcntdq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512cd'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512er'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512f'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512pf'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='ss'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='Opteron_G4'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='fma4'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='xop'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='Opteron_G4-v1'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='fma4'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='xop'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='Opteron_G5'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='fma4'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='tbm'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='xop'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='Opteron_G5-v1'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='fma4'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='tbm'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='xop'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='SapphireRapids'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='amx-bf16'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='amx-int8'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='amx-tile'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx-vnni'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512-bf16'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512-fp16'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512-vpopcntdq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512bitalg'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512bw'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512cd'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512dq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512f'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512ifma'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vbmi'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vbmi2'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vl'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vnni'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='bus-lock-detect'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='fsrc'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='fsrm'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='fsrs'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='fzrm'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='gfni'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='hle'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='ibrs-all'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='invpcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='la57'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pku'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='rtm'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='serialize'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='taa-no'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='tsx-ldtrk'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='vaes'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='vpclmulqdq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='xfd'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='xsaves'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='SapphireRapids-v1'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='amx-bf16'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='amx-int8'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='amx-tile'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx-vnni'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512-bf16'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512-fp16'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512-vpopcntdq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512bitalg'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512bw'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512cd'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512dq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512f'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512ifma'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vbmi'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vbmi2'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vl'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vnni'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='bus-lock-detect'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='fsrc'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='fsrm'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='fsrs'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='fzrm'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='gfni'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='hle'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='ibrs-all'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='invpcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='la57'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pku'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='rtm'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='serialize'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='taa-no'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='tsx-ldtrk'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='vaes'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='vpclmulqdq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='xfd'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='xsaves'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='SapphireRapids-v2'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='amx-bf16'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='amx-int8'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='amx-tile'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx-vnni'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512-bf16'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512-fp16'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512-vpopcntdq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512bitalg'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512bw'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512cd'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512dq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512f'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512ifma'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vbmi'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vbmi2'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vl'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vnni'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='bus-lock-detect'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='fbsdp-no'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='fsrc'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='fsrm'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='fsrs'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='fzrm'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='gfni'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='hle'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='ibrs-all'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='invpcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='la57'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pku'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='psdp-no'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='rtm'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='sbdr-ssdp-no'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='serialize'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='taa-no'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='tsx-ldtrk'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='vaes'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='vpclmulqdq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='xfd'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='xsaves'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='SapphireRapids-v3'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='amx-bf16'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='amx-int8'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='amx-tile'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx-vnni'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512-bf16'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512-fp16'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512-vpopcntdq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512bitalg'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512bw'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512cd'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512dq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512f'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512ifma'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vbmi'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vbmi2'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vl'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vnni'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='bus-lock-detect'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='cldemote'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='fbsdp-no'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='fsrc'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='fsrm'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='fsrs'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='fzrm'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='gfni'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='hle'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='ibrs-all'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='invpcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='la57'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='movdir64b'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='movdiri'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pku'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='psdp-no'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='rtm'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='sbdr-ssdp-no'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='serialize'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='ss'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='taa-no'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='tsx-ldtrk'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='vaes'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='vpclmulqdq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='xfd'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='xsaves'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='SierraForest'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx-ifma'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx-ne-convert'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx-vnni'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx-vnni-int8'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='bus-lock-detect'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='cmpccxadd'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='fbsdp-no'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='fsrm'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='fsrs'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='gfni'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='ibrs-all'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='invpcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='mcdt-no'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pbrsb-no'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pku'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='psdp-no'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='sbdr-ssdp-no'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='serialize'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='vaes'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='vpclmulqdq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='xsaves'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='SierraForest-v1'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx-ifma'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx-ne-convert'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx-vnni'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx-vnni-int8'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='bus-lock-detect'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='cmpccxadd'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='fbsdp-no'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='fsrm'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='fsrs'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='gfni'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='ibrs-all'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='invpcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='mcdt-no'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pbrsb-no'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pku'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='psdp-no'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='sbdr-ssdp-no'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='serialize'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='vaes'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='vpclmulqdq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='xsaves'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='Skylake-Client'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='hle'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='invpcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='rtm'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='Skylake-Client-IBRS'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='hle'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='invpcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='rtm'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='invpcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='Skylake-Client-v1'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='hle'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='invpcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='rtm'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='Skylake-Client-v2'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='hle'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='invpcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='rtm'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='Skylake-Client-v3'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='invpcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='Skylake-Client-v4'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='invpcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='xsaves'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='Skylake-Server'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512bw'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512cd'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512dq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512f'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vl'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='hle'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='invpcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pku'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='rtm'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='Skylake-Server-IBRS'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512bw'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512cd'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512dq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512f'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vl'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='hle'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='invpcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pku'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='rtm'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512bw'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512cd'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512dq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512f'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vl'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='invpcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pku'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='Skylake-Server-v1'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512bw'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512cd'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512dq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512f'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vl'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='hle'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='invpcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pku'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='rtm'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='Skylake-Server-v2'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512bw'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512cd'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512dq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512f'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vl'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='hle'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='invpcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pku'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='rtm'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='Skylake-Server-v3'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512bw'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512cd'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512dq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512f'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vl'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='invpcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pku'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='Skylake-Server-v4'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512bw'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512cd'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512dq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512f'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vl'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='invpcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pku'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='Skylake-Server-v5'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512bw'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512cd'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512dq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512f'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vl'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='invpcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pku'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='xsaves'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='Snowridge'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='cldemote'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='core-capability'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='gfni'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='movdir64b'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='movdiri'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='mpx'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='split-lock-detect'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='Snowridge-v1'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='cldemote'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='core-capability'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='gfni'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='movdir64b'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='movdiri'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='mpx'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='split-lock-detect'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='Snowridge-v2'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='cldemote'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='core-capability'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='gfni'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='movdir64b'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='movdiri'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='split-lock-detect'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='Snowridge-v3'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='cldemote'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='core-capability'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='gfni'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='movdir64b'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='movdiri'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='split-lock-detect'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='xsaves'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='Snowridge-v4'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='cldemote'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='gfni'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='movdir64b'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='movdiri'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='xsaves'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='athlon'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='3dnow'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='3dnowext'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='athlon-v1'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='3dnow'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='3dnowext'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='core2duo'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='ss'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='core2duo-v1'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='ss'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='coreduo'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='ss'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='coreduo-v1'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='ss'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='n270'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='ss'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='n270-v1'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='ss'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='phenom'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='3dnow'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='3dnowext'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='phenom-v1'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='3dnow'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='3dnowext'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:     </mode>
Dec 06 10:00:36 compute-0 nova_compute[254819]:   </cpu>
Dec 06 10:00:36 compute-0 nova_compute[254819]:   <memoryBacking supported='yes'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:     <enum name='sourceType'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <value>file</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <value>anonymous</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <value>memfd</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:     </enum>
Dec 06 10:00:36 compute-0 nova_compute[254819]:   </memoryBacking>
Dec 06 10:00:36 compute-0 nova_compute[254819]:   <devices>
Dec 06 10:00:36 compute-0 nova_compute[254819]:     <disk supported='yes'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <enum name='diskDevice'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>disk</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>cdrom</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>floppy</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>lun</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </enum>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <enum name='bus'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>ide</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>fdc</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>scsi</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>virtio</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>usb</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>sata</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </enum>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <enum name='model'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>virtio</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>virtio-transitional</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>virtio-non-transitional</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </enum>
Dec 06 10:00:36 compute-0 nova_compute[254819]:     </disk>
Dec 06 10:00:36 compute-0 nova_compute[254819]:     <graphics supported='yes'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <enum name='type'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>vnc</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>egl-headless</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>dbus</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </enum>
Dec 06 10:00:36 compute-0 nova_compute[254819]:     </graphics>
Dec 06 10:00:36 compute-0 nova_compute[254819]:     <video supported='yes'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <enum name='modelType'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>vga</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>cirrus</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>virtio</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>none</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>bochs</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>ramfb</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </enum>
Dec 06 10:00:36 compute-0 nova_compute[254819]:     </video>
Dec 06 10:00:36 compute-0 nova_compute[254819]:     <hostdev supported='yes'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <enum name='mode'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>subsystem</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </enum>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <enum name='startupPolicy'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>default</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>mandatory</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>requisite</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>optional</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </enum>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <enum name='subsysType'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>usb</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>pci</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>scsi</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </enum>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <enum name='capsType'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <enum name='pciBackend'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:     </hostdev>
Dec 06 10:00:36 compute-0 nova_compute[254819]:     <rng supported='yes'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <enum name='model'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>virtio</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>virtio-transitional</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>virtio-non-transitional</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </enum>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <enum name='backendModel'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>random</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>egd</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>builtin</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </enum>
Dec 06 10:00:36 compute-0 nova_compute[254819]:     </rng>
Dec 06 10:00:36 compute-0 nova_compute[254819]:     <filesystem supported='yes'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <enum name='driverType'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>path</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>handle</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>virtiofs</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </enum>
Dec 06 10:00:36 compute-0 nova_compute[254819]:     </filesystem>
Dec 06 10:00:36 compute-0 nova_compute[254819]:     <tpm supported='yes'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <enum name='model'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>tpm-tis</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>tpm-crb</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </enum>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <enum name='backendModel'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>emulator</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>external</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </enum>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <enum name='backendVersion'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>2.0</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </enum>
Dec 06 10:00:36 compute-0 nova_compute[254819]:     </tpm>
Dec 06 10:00:36 compute-0 nova_compute[254819]:     <redirdev supported='yes'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <enum name='bus'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>usb</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </enum>
Dec 06 10:00:36 compute-0 nova_compute[254819]:     </redirdev>
Dec 06 10:00:36 compute-0 nova_compute[254819]:     <channel supported='yes'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <enum name='type'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>pty</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>unix</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </enum>
Dec 06 10:00:36 compute-0 nova_compute[254819]:     </channel>
Dec 06 10:00:36 compute-0 nova_compute[254819]:     <crypto supported='yes'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <enum name='model'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <enum name='type'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>qemu</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </enum>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <enum name='backendModel'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>builtin</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </enum>
Dec 06 10:00:36 compute-0 nova_compute[254819]:     </crypto>
Dec 06 10:00:36 compute-0 nova_compute[254819]:     <interface supported='yes'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <enum name='backendType'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>default</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>passt</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </enum>
Dec 06 10:00:36 compute-0 nova_compute[254819]:     </interface>
Dec 06 10:00:36 compute-0 nova_compute[254819]:     <panic supported='yes'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <enum name='model'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>isa</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>hyperv</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </enum>
Dec 06 10:00:36 compute-0 nova_compute[254819]:     </panic>
Dec 06 10:00:36 compute-0 nova_compute[254819]:     <console supported='yes'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <enum name='type'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>null</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>vc</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>pty</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>dev</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>file</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>pipe</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>stdio</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>udp</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>tcp</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>unix</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>qemu-vdagent</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>dbus</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </enum>
Dec 06 10:00:36 compute-0 nova_compute[254819]:     </console>
Dec 06 10:00:36 compute-0 nova_compute[254819]:   </devices>
Dec 06 10:00:36 compute-0 nova_compute[254819]:   <features>
Dec 06 10:00:36 compute-0 nova_compute[254819]:     <gic supported='no'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:     <vmcoreinfo supported='yes'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:     <genid supported='yes'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:     <backingStoreInput supported='yes'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:     <backup supported='yes'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:     <async-teardown supported='yes'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:     <ps2 supported='yes'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:     <sev supported='no'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:     <sgx supported='no'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:     <hyperv supported='yes'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <enum name='features'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>relaxed</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>vapic</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>spinlocks</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>vpindex</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>runtime</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>synic</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>stimer</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>reset</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>vendor_id</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>frequencies</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>reenlightenment</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>tlbflush</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>ipi</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>avic</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>emsr_bitmap</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>xmm_input</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </enum>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <defaults>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <spinlocks>4095</spinlocks>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <stimer_direct>on</stimer_direct>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <tlbflush_direct>on</tlbflush_direct>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <tlbflush_extended>on</tlbflush_extended>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <vendor_id>Linux KVM Hv</vendor_id>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </defaults>
Dec 06 10:00:36 compute-0 nova_compute[254819]:     </hyperv>
Dec 06 10:00:36 compute-0 nova_compute[254819]:     <launchSecurity supported='yes'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <enum name='sectype'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>tdx</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </enum>
Dec 06 10:00:36 compute-0 nova_compute[254819]:     </launchSecurity>
Dec 06 10:00:36 compute-0 nova_compute[254819]:   </features>
Dec 06 10:00:36 compute-0 nova_compute[254819]: </domainCapabilities>
Dec 06 10:00:36 compute-0 nova_compute[254819]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.526 254824 DEBUG nova.virt.libvirt.volume.mount [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.529 254824 DEBUG nova.virt.libvirt.host [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Dec 06 10:00:36 compute-0 nova_compute[254819]: <domainCapabilities>
Dec 06 10:00:36 compute-0 nova_compute[254819]:   <path>/usr/libexec/qemu-kvm</path>
Dec 06 10:00:36 compute-0 nova_compute[254819]:   <domain>kvm</domain>
Dec 06 10:00:36 compute-0 nova_compute[254819]:   <machine>pc-q35-rhel9.8.0</machine>
Dec 06 10:00:36 compute-0 nova_compute[254819]:   <arch>i686</arch>
Dec 06 10:00:36 compute-0 nova_compute[254819]:   <vcpu max='4096'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:   <iothreads supported='yes'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:   <os supported='yes'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:     <enum name='firmware'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:     <loader supported='yes'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <enum name='type'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>rom</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>pflash</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </enum>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <enum name='readonly'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>yes</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>no</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </enum>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <enum name='secure'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>no</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </enum>
Dec 06 10:00:36 compute-0 nova_compute[254819]:     </loader>
Dec 06 10:00:36 compute-0 nova_compute[254819]:   </os>
Dec 06 10:00:36 compute-0 nova_compute[254819]:   <cpu>
Dec 06 10:00:36 compute-0 nova_compute[254819]:     <mode name='host-passthrough' supported='yes'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <enum name='hostPassthroughMigratable'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>on</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>off</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </enum>
Dec 06 10:00:36 compute-0 nova_compute[254819]:     </mode>
Dec 06 10:00:36 compute-0 nova_compute[254819]:     <mode name='maximum' supported='yes'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <enum name='maximumMigratable'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>on</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>off</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </enum>
Dec 06 10:00:36 compute-0 nova_compute[254819]:     </mode>
Dec 06 10:00:36 compute-0 nova_compute[254819]:     <mode name='host-model' supported='yes'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model fallback='forbid'>EPYC-Rome</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <vendor>AMD</vendor>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <maxphysaddr mode='passthrough' limit='40'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <feature policy='require' name='x2apic'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <feature policy='require' name='tsc-deadline'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <feature policy='require' name='hypervisor'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <feature policy='require' name='tsc_adjust'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <feature policy='require' name='spec-ctrl'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <feature policy='require' name='stibp'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <feature policy='require' name='ssbd'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <feature policy='require' name='cmp_legacy'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <feature policy='require' name='overflow-recov'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <feature policy='require' name='succor'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <feature policy='require' name='ibrs'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <feature policy='require' name='amd-ssbd'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <feature policy='require' name='virt-ssbd'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <feature policy='require' name='lbrv'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <feature policy='require' name='tsc-scale'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <feature policy='require' name='vmcb-clean'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <feature policy='require' name='flushbyasid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <feature policy='require' name='pause-filter'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <feature policy='require' name='pfthreshold'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <feature policy='require' name='svme-addr-chk'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <feature policy='require' name='lfence-always-serializing'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <feature policy='disable' name='xsaves'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:     </mode>
Dec 06 10:00:36 compute-0 nova_compute[254819]:     <mode name='custom' supported='yes'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='Broadwell'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='hle'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='invpcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='rtm'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='Broadwell-IBRS'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='hle'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='invpcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='rtm'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='Broadwell-noTSX'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='invpcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='Broadwell-noTSX-IBRS'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='invpcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='Broadwell-v1'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='hle'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='invpcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='rtm'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='Broadwell-v2'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='invpcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='Broadwell-v3'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='hle'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='invpcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='rtm'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='Broadwell-v4'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='invpcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='Cascadelake-Server'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512bw'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512cd'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512dq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512f'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vl'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vnni'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='hle'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='invpcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pku'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='rtm'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='Cascadelake-Server-noTSX'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512bw'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512cd'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512dq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512f'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vl'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vnni'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='ibrs-all'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='invpcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pku'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='Cascadelake-Server-v1'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512bw'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512cd'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512dq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512f'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vl'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vnni'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='hle'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='invpcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pku'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='rtm'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='Cascadelake-Server-v2'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512bw'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512cd'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512dq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512f'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vl'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vnni'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='hle'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='ibrs-all'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='invpcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pku'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='rtm'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='Cascadelake-Server-v3'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512bw'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512cd'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512dq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512f'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vl'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vnni'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='ibrs-all'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='invpcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pku'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='Cascadelake-Server-v4'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512bw'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512cd'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512dq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512f'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vl'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vnni'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='ibrs-all'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='invpcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pku'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='Cascadelake-Server-v5'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512bw'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512cd'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512dq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512f'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vl'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vnni'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='ibrs-all'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='invpcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pku'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='xsaves'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='Cooperlake'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512-bf16'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512bw'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512cd'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512dq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512f'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vl'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vnni'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='hle'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='ibrs-all'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='invpcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pku'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='rtm'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='taa-no'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='Cooperlake-v1'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512-bf16'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512bw'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512cd'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512dq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512f'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vl'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vnni'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='hle'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='ibrs-all'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='invpcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pku'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='rtm'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='taa-no'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='Cooperlake-v2'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512-bf16'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512bw'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512cd'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512dq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512f'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vl'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vnni'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='hle'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='ibrs-all'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='invpcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pku'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='rtm'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='taa-no'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='xsaves'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='Denverton'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='mpx'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='Denverton-v1'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='mpx'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='Denverton-v2'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='Denverton-v3'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='xsaves'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='Dhyana-v2'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='xsaves'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='EPYC-Genoa'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='amd-psfd'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='auto-ibrs'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512-bf16'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512-vpopcntdq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512bitalg'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512bw'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512cd'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512dq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512f'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512ifma'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vbmi'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vbmi2'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vl'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vnni'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='fsrm'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='gfni'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='invpcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='la57'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='no-nested-data-bp'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='null-sel-clr-base'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pku'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='stibp-always-on'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='vaes'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='vpclmulqdq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='xsaves'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='EPYC-Genoa-v1'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='amd-psfd'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='auto-ibrs'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512-bf16'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512-vpopcntdq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512bitalg'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512bw'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512cd'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512dq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512f'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512ifma'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vbmi'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vbmi2'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vl'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vnni'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='fsrm'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='gfni'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='invpcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='la57'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='no-nested-data-bp'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='null-sel-clr-base'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pku'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='stibp-always-on'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='vaes'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='vpclmulqdq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='xsaves'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='EPYC-Milan'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='fsrm'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='invpcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pku'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='xsaves'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='EPYC-Milan-v1'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='fsrm'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='invpcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pku'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='xsaves'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='EPYC-Milan-v2'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='amd-psfd'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='fsrm'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='invpcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='no-nested-data-bp'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='null-sel-clr-base'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pku'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='stibp-always-on'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='vaes'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='vpclmulqdq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='xsaves'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='EPYC-Rome'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='xsaves'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='EPYC-Rome-v1'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='xsaves'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='EPYC-Rome-v2'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='xsaves'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='EPYC-Rome-v3'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='xsaves'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='EPYC-v3'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='xsaves'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='EPYC-v4'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='xsaves'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='GraniteRapids'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='amx-bf16'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='amx-fp16'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='amx-int8'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='amx-tile'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx-vnni'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512-bf16'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512-fp16'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512-vpopcntdq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512bitalg'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512bw'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512cd'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512dq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512f'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512ifma'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vbmi'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vbmi2'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vl'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vnni'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='bus-lock-detect'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='fbsdp-no'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='fsrc'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='fsrm'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='fsrs'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='fzrm'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='gfni'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='hle'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='ibrs-all'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='invpcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='la57'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='mcdt-no'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pbrsb-no'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pku'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='prefetchiti'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='psdp-no'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='rtm'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='sbdr-ssdp-no'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='serialize'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='taa-no'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='tsx-ldtrk'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='vaes'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='vpclmulqdq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='xfd'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='xsaves'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='GraniteRapids-v1'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='amx-bf16'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='amx-fp16'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='amx-int8'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='amx-tile'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx-vnni'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512-bf16'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512-fp16'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512-vpopcntdq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512bitalg'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512bw'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512cd'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512dq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512f'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512ifma'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vbmi'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vbmi2'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vl'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vnni'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='bus-lock-detect'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='fbsdp-no'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='fsrc'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='fsrm'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='fsrs'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='fzrm'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='gfni'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='hle'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='ibrs-all'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='invpcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='la57'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='mcdt-no'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pbrsb-no'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pku'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='prefetchiti'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='psdp-no'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='rtm'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='sbdr-ssdp-no'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='serialize'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='taa-no'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='tsx-ldtrk'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='vaes'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='vpclmulqdq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='xfd'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='xsaves'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='GraniteRapids-v2'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='amx-bf16'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='amx-fp16'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='amx-int8'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='amx-tile'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx-vnni'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx10'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx10-128'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx10-256'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx10-512'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512-bf16'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512-fp16'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512-vpopcntdq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512bitalg'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512bw'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512cd'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512dq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512f'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512ifma'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vbmi'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vbmi2'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vl'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vnni'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='bus-lock-detect'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='cldemote'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='fbsdp-no'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='fsrc'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='fsrm'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='fsrs'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='fzrm'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='gfni'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='hle'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='ibrs-all'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='invpcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='la57'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='mcdt-no'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='movdir64b'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='movdiri'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pbrsb-no'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pku'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='prefetchiti'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='psdp-no'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='rtm'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='sbdr-ssdp-no'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='serialize'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='ss'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='taa-no'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='tsx-ldtrk'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='vaes'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='vpclmulqdq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='xfd'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='xsaves'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='Haswell'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='hle'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='invpcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='rtm'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='Haswell-IBRS'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='hle'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='invpcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='rtm'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='Haswell-noTSX'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='invpcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='Haswell-noTSX-IBRS'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='invpcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='Haswell-v1'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='hle'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='invpcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='rtm'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='Haswell-v2'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='invpcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='Haswell-v3'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='hle'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='invpcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='rtm'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='Haswell-v4'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='invpcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='Icelake-Server'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512-vpopcntdq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512bitalg'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512bw'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512cd'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512dq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512f'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vbmi'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vbmi2'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vl'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vnni'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='gfni'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='hle'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='invpcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='la57'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pku'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='rtm'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='vaes'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='vpclmulqdq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='Icelake-Server-noTSX'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512-vpopcntdq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512bitalg'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512bw'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512cd'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512dq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512f'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vbmi'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vbmi2'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vl'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vnni'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='gfni'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='invpcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='la57'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pku'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='vaes'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='vpclmulqdq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='Icelake-Server-v1'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512-vpopcntdq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512bitalg'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512bw'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512cd'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512dq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512f'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vbmi'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vbmi2'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vl'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vnni'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='gfni'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='hle'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='invpcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='la57'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pku'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='rtm'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='vaes'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='vpclmulqdq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='Icelake-Server-v2'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512-vpopcntdq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512bitalg'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512bw'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512cd'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512dq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512f'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vbmi'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vbmi2'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vl'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vnni'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='gfni'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='invpcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='la57'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pku'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='vaes'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='vpclmulqdq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='Icelake-Server-v3'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512-vpopcntdq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512bitalg'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512bw'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512cd'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512dq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512f'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vbmi'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vbmi2'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vl'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vnni'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='gfni'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='ibrs-all'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='invpcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='la57'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pku'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='taa-no'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='vaes'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='vpclmulqdq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='Icelake-Server-v4'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512-vpopcntdq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512bitalg'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512bw'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512cd'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512dq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512f'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512ifma'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vbmi'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vbmi2'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vl'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vnni'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='fsrm'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='gfni'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='ibrs-all'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='invpcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='la57'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pku'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='taa-no'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='vaes'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='vpclmulqdq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='Icelake-Server-v5'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512-vpopcntdq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512bitalg'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512bw'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512cd'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512dq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512f'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512ifma'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vbmi'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vbmi2'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vl'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vnni'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='fsrm'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='gfni'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='ibrs-all'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='invpcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='la57'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pku'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='taa-no'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='vaes'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='vpclmulqdq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='xsaves'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='Icelake-Server-v6'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512-vpopcntdq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512bitalg'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512bw'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512cd'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512dq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512f'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512ifma'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vbmi'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vbmi2'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vl'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vnni'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='fsrm'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='gfni'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='ibrs-all'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='invpcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='la57'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pku'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='taa-no'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='vaes'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='vpclmulqdq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='xsaves'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='Icelake-Server-v7'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512-vpopcntdq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512bitalg'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512bw'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512cd'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512dq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512f'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512ifma'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vbmi'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vbmi2'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vl'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vnni'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='fsrm'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='gfni'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='hle'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='ibrs-all'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='invpcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='la57'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pku'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='rtm'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='taa-no'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='vaes'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='vpclmulqdq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='xsaves'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='IvyBridge'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='IvyBridge-IBRS'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='IvyBridge-v1'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='IvyBridge-v2'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='KnightsMill'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512-4fmaps'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512-4vnniw'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512-vpopcntdq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512cd'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512er'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512f'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512pf'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='ss'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='KnightsMill-v1'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512-4fmaps'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512-4vnniw'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512-vpopcntdq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512cd'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512er'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512f'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512pf'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='ss'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='Opteron_G4'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='fma4'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='xop'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='Opteron_G4-v1'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='fma4'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='xop'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='Opteron_G5'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='fma4'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='tbm'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='xop'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='Opteron_G5-v1'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='fma4'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='tbm'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='xop'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='SapphireRapids'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='amx-bf16'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='amx-int8'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='amx-tile'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx-vnni'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512-bf16'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512-fp16'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512-vpopcntdq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512bitalg'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512bw'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512cd'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512dq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512f'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512ifma'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vbmi'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vbmi2'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vl'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vnni'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='bus-lock-detect'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='fsrc'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='fsrm'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='fsrs'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='fzrm'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='gfni'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='hle'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='ibrs-all'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='invpcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='la57'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pku'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='rtm'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='serialize'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='taa-no'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='tsx-ldtrk'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='vaes'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='vpclmulqdq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='xfd'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='xsaves'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='SapphireRapids-v1'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='amx-bf16'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='amx-int8'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='amx-tile'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx-vnni'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512-bf16'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512-fp16'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512-vpopcntdq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512bitalg'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512bw'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512cd'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512dq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512f'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512ifma'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vbmi'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vbmi2'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vl'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vnni'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='bus-lock-detect'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='fsrc'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='fsrm'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='fsrs'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='fzrm'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='gfni'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='hle'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='ibrs-all'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='invpcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='la57'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pku'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='rtm'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='serialize'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='taa-no'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='tsx-ldtrk'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='vaes'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='vpclmulqdq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='xfd'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='xsaves'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='SapphireRapids-v2'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='amx-bf16'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='amx-int8'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='amx-tile'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx-vnni'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512-bf16'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512-fp16'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512-vpopcntdq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512bitalg'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512bw'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512cd'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512dq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512f'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512ifma'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vbmi'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vbmi2'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vl'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vnni'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='bus-lock-detect'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='fbsdp-no'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='fsrc'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='fsrm'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='fsrs'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='fzrm'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='gfni'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='hle'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='ibrs-all'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='invpcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='la57'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pku'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='psdp-no'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='rtm'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='sbdr-ssdp-no'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='serialize'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='taa-no'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='tsx-ldtrk'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='vaes'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='vpclmulqdq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='xfd'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='xsaves'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='SapphireRapids-v3'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='amx-bf16'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='amx-int8'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='amx-tile'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx-vnni'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512-bf16'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512-fp16'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512-vpopcntdq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512bitalg'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512bw'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512cd'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512dq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512f'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512ifma'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vbmi'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vbmi2'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vl'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vnni'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='bus-lock-detect'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='cldemote'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='fbsdp-no'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='fsrc'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='fsrm'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='fsrs'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='fzrm'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='gfni'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='hle'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='ibrs-all'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='invpcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='la57'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='movdir64b'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='movdiri'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pku'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='psdp-no'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='rtm'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='sbdr-ssdp-no'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='serialize'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='ss'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='taa-no'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='tsx-ldtrk'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='vaes'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='vpclmulqdq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='xfd'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='xsaves'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='SierraForest'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx-ifma'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx-ne-convert'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx-vnni'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx-vnni-int8'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='bus-lock-detect'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='cmpccxadd'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='fbsdp-no'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='fsrm'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='fsrs'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='gfni'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='ibrs-all'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='invpcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='mcdt-no'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pbrsb-no'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pku'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='psdp-no'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='sbdr-ssdp-no'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='serialize'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='vaes'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='vpclmulqdq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='xsaves'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='SierraForest-v1'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx-ifma'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx-ne-convert'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx-vnni'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx-vnni-int8'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='bus-lock-detect'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='cmpccxadd'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='fbsdp-no'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='fsrm'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='fsrs'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='gfni'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='ibrs-all'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='invpcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='mcdt-no'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pbrsb-no'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pku'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='psdp-no'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='sbdr-ssdp-no'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='serialize'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='vaes'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='vpclmulqdq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='xsaves'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='Skylake-Client'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='hle'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='invpcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='rtm'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='Skylake-Client-IBRS'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='hle'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='invpcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='rtm'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='invpcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='Skylake-Client-v1'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='hle'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='invpcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='rtm'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='Skylake-Client-v2'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='hle'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='invpcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='rtm'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='Skylake-Client-v3'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='invpcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='Skylake-Client-v4'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='invpcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='xsaves'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='Skylake-Server'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512bw'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512cd'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512dq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512f'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vl'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='hle'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='invpcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pku'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='rtm'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='Skylake-Server-IBRS'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512bw'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512cd'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512dq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512f'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vl'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='hle'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='invpcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pku'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='rtm'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512bw'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512cd'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512dq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512f'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vl'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='invpcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pku'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='Skylake-Server-v1'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512bw'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512cd'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512dq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512f'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vl'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='hle'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='invpcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pku'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='rtm'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='Skylake-Server-v2'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512bw'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512cd'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512dq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512f'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vl'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='hle'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='invpcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pku'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='rtm'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='Skylake-Server-v3'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512bw'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512cd'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512dq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512f'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vl'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='invpcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pku'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='Skylake-Server-v4'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512bw'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512cd'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512dq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512f'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vl'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='invpcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pku'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='Skylake-Server-v5'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512bw'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512cd'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512dq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512f'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vl'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='invpcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pku'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='xsaves'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='Snowridge'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='cldemote'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='core-capability'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='gfni'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='movdir64b'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='movdiri'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='mpx'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='split-lock-detect'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='Snowridge-v1'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='cldemote'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='core-capability'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='gfni'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='movdir64b'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='movdiri'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='mpx'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='split-lock-detect'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='Snowridge-v2'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='cldemote'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='core-capability'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='gfni'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='movdir64b'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='movdiri'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='split-lock-detect'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='Snowridge-v3'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='cldemote'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='core-capability'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='gfni'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='movdir64b'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='movdiri'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='split-lock-detect'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='xsaves'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='Snowridge-v4'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='cldemote'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='gfni'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='movdir64b'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='movdiri'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='xsaves'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='athlon'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='3dnow'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='3dnowext'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='athlon-v1'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='3dnow'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='3dnowext'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='core2duo'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='ss'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='core2duo-v1'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='ss'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='coreduo'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='ss'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='coreduo-v1'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='ss'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='n270'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='ss'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='n270-v1'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='ss'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='phenom'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='3dnow'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='3dnowext'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='phenom-v1'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='3dnow'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='3dnowext'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:     </mode>
Dec 06 10:00:36 compute-0 nova_compute[254819]:   </cpu>
Dec 06 10:00:36 compute-0 nova_compute[254819]:   <memoryBacking supported='yes'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:     <enum name='sourceType'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <value>file</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <value>anonymous</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <value>memfd</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:     </enum>
Dec 06 10:00:36 compute-0 nova_compute[254819]:   </memoryBacking>
Dec 06 10:00:36 compute-0 nova_compute[254819]:   <devices>
Dec 06 10:00:36 compute-0 nova_compute[254819]:     <disk supported='yes'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <enum name='diskDevice'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>disk</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>cdrom</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>floppy</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>lun</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </enum>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <enum name='bus'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>fdc</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>scsi</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>virtio</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>usb</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>sata</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </enum>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <enum name='model'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>virtio</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>virtio-transitional</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>virtio-non-transitional</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </enum>
Dec 06 10:00:36 compute-0 nova_compute[254819]:     </disk>
Dec 06 10:00:36 compute-0 nova_compute[254819]:     <graphics supported='yes'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <enum name='type'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>vnc</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>egl-headless</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>dbus</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </enum>
Dec 06 10:00:36 compute-0 nova_compute[254819]:     </graphics>
Dec 06 10:00:36 compute-0 nova_compute[254819]:     <video supported='yes'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <enum name='modelType'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>vga</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>cirrus</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>virtio</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>none</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>bochs</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>ramfb</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </enum>
Dec 06 10:00:36 compute-0 nova_compute[254819]:     </video>
Dec 06 10:00:36 compute-0 nova_compute[254819]:     <hostdev supported='yes'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <enum name='mode'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>subsystem</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </enum>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <enum name='startupPolicy'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>default</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>mandatory</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>requisite</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>optional</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </enum>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <enum name='subsysType'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>usb</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>pci</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>scsi</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </enum>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <enum name='capsType'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <enum name='pciBackend'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:     </hostdev>
Dec 06 10:00:36 compute-0 nova_compute[254819]:     <rng supported='yes'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <enum name='model'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>virtio</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>virtio-transitional</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>virtio-non-transitional</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </enum>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <enum name='backendModel'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>random</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>egd</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>builtin</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </enum>
Dec 06 10:00:36 compute-0 nova_compute[254819]:     </rng>
Dec 06 10:00:36 compute-0 nova_compute[254819]:     <filesystem supported='yes'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <enum name='driverType'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>path</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>handle</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>virtiofs</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </enum>
Dec 06 10:00:36 compute-0 nova_compute[254819]:     </filesystem>
Dec 06 10:00:36 compute-0 nova_compute[254819]:     <tpm supported='yes'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <enum name='model'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>tpm-tis</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>tpm-crb</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </enum>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <enum name='backendModel'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>emulator</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>external</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </enum>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <enum name='backendVersion'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>2.0</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </enum>
Dec 06 10:00:36 compute-0 nova_compute[254819]:     </tpm>
Dec 06 10:00:36 compute-0 nova_compute[254819]:     <redirdev supported='yes'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <enum name='bus'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>usb</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </enum>
Dec 06 10:00:36 compute-0 nova_compute[254819]:     </redirdev>
Dec 06 10:00:36 compute-0 nova_compute[254819]:     <channel supported='yes'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <enum name='type'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>pty</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>unix</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </enum>
Dec 06 10:00:36 compute-0 nova_compute[254819]:     </channel>
Dec 06 10:00:36 compute-0 nova_compute[254819]:     <crypto supported='yes'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <enum name='model'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <enum name='type'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>qemu</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </enum>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <enum name='backendModel'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>builtin</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </enum>
Dec 06 10:00:36 compute-0 nova_compute[254819]:     </crypto>
Dec 06 10:00:36 compute-0 nova_compute[254819]:     <interface supported='yes'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <enum name='backendType'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>default</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>passt</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </enum>
Dec 06 10:00:36 compute-0 nova_compute[254819]:     </interface>
Dec 06 10:00:36 compute-0 nova_compute[254819]:     <panic supported='yes'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <enum name='model'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>isa</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>hyperv</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </enum>
Dec 06 10:00:36 compute-0 nova_compute[254819]:     </panic>
Dec 06 10:00:36 compute-0 nova_compute[254819]:     <console supported='yes'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <enum name='type'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>null</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>vc</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>pty</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>dev</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>file</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>pipe</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>stdio</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>udp</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>tcp</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>unix</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>qemu-vdagent</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>dbus</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </enum>
Dec 06 10:00:36 compute-0 nova_compute[254819]:     </console>
Dec 06 10:00:36 compute-0 nova_compute[254819]:   </devices>
Dec 06 10:00:36 compute-0 nova_compute[254819]:   <features>
Dec 06 10:00:36 compute-0 nova_compute[254819]:     <gic supported='no'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:     <vmcoreinfo supported='yes'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:     <genid supported='yes'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:     <backingStoreInput supported='yes'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:     <backup supported='yes'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:     <async-teardown supported='yes'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:     <ps2 supported='yes'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:     <sev supported='no'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:     <sgx supported='no'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:     <hyperv supported='yes'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <enum name='features'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>relaxed</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>vapic</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>spinlocks</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>vpindex</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>runtime</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>synic</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>stimer</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>reset</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>vendor_id</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>frequencies</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>reenlightenment</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>tlbflush</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>ipi</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>avic</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>emsr_bitmap</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>xmm_input</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </enum>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <defaults>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <spinlocks>4095</spinlocks>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <stimer_direct>on</stimer_direct>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <tlbflush_direct>on</tlbflush_direct>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <tlbflush_extended>on</tlbflush_extended>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <vendor_id>Linux KVM Hv</vendor_id>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </defaults>
Dec 06 10:00:36 compute-0 nova_compute[254819]:     </hyperv>
Dec 06 10:00:36 compute-0 nova_compute[254819]:     <launchSecurity supported='yes'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <enum name='sectype'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>tdx</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </enum>
Dec 06 10:00:36 compute-0 nova_compute[254819]:     </launchSecurity>
Dec 06 10:00:36 compute-0 nova_compute[254819]:   </features>
Dec 06 10:00:36 compute-0 nova_compute[254819]: </domainCapabilities>
Dec 06 10:00:36 compute-0 nova_compute[254819]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.563 254824 DEBUG nova.virt.libvirt.host [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] Getting domain capabilities for x86_64 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.566 254824 DEBUG nova.virt.libvirt.host [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Dec 06 10:00:36 compute-0 nova_compute[254819]: <domainCapabilities>
Dec 06 10:00:36 compute-0 nova_compute[254819]:   <path>/usr/libexec/qemu-kvm</path>
Dec 06 10:00:36 compute-0 nova_compute[254819]:   <domain>kvm</domain>
Dec 06 10:00:36 compute-0 nova_compute[254819]:   <machine>pc-i440fx-rhel7.6.0</machine>
Dec 06 10:00:36 compute-0 nova_compute[254819]:   <arch>x86_64</arch>
Dec 06 10:00:36 compute-0 nova_compute[254819]:   <vcpu max='240'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:   <iothreads supported='yes'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:   <os supported='yes'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:     <enum name='firmware'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:     <loader supported='yes'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <enum name='type'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>rom</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>pflash</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </enum>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <enum name='readonly'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>yes</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>no</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </enum>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <enum name='secure'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>no</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </enum>
Dec 06 10:00:36 compute-0 nova_compute[254819]:     </loader>
Dec 06 10:00:36 compute-0 nova_compute[254819]:   </os>
Dec 06 10:00:36 compute-0 nova_compute[254819]:   <cpu>
Dec 06 10:00:36 compute-0 nova_compute[254819]:     <mode name='host-passthrough' supported='yes'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <enum name='hostPassthroughMigratable'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>on</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>off</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </enum>
Dec 06 10:00:36 compute-0 nova_compute[254819]:     </mode>
Dec 06 10:00:36 compute-0 nova_compute[254819]:     <mode name='maximum' supported='yes'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <enum name='maximumMigratable'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>on</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>off</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </enum>
Dec 06 10:00:36 compute-0 nova_compute[254819]:     </mode>
Dec 06 10:00:36 compute-0 nova_compute[254819]:     <mode name='host-model' supported='yes'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model fallback='forbid'>EPYC-Rome</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <vendor>AMD</vendor>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <maxphysaddr mode='passthrough' limit='40'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <feature policy='require' name='x2apic'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <feature policy='require' name='tsc-deadline'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <feature policy='require' name='hypervisor'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <feature policy='require' name='tsc_adjust'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <feature policy='require' name='spec-ctrl'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <feature policy='require' name='stibp'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <feature policy='require' name='ssbd'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <feature policy='require' name='cmp_legacy'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <feature policy='require' name='overflow-recov'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <feature policy='require' name='succor'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <feature policy='require' name='ibrs'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <feature policy='require' name='amd-ssbd'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <feature policy='require' name='virt-ssbd'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <feature policy='require' name='lbrv'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <feature policy='require' name='tsc-scale'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <feature policy='require' name='vmcb-clean'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <feature policy='require' name='flushbyasid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <feature policy='require' name='pause-filter'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <feature policy='require' name='pfthreshold'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <feature policy='require' name='svme-addr-chk'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <feature policy='require' name='lfence-always-serializing'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <feature policy='disable' name='xsaves'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:     </mode>
Dec 06 10:00:36 compute-0 nova_compute[254819]:     <mode name='custom' supported='yes'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='Broadwell'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='hle'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='invpcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='rtm'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='Broadwell-IBRS'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='hle'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='invpcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='rtm'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='Broadwell-noTSX'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='invpcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='Broadwell-noTSX-IBRS'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='invpcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='Broadwell-v1'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='hle'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='invpcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='rtm'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='Broadwell-v2'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='invpcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='Broadwell-v3'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='hle'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='invpcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='rtm'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='Broadwell-v4'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='invpcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='Cascadelake-Server'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512bw'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512cd'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512dq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512f'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vl'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vnni'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='hle'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='invpcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pku'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='rtm'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='Cascadelake-Server-noTSX'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512bw'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512cd'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512dq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512f'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vl'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vnni'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='ibrs-all'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='invpcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pku'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='Cascadelake-Server-v1'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512bw'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512cd'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512dq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512f'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vl'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vnni'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='hle'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='invpcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pku'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='rtm'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='Cascadelake-Server-v2'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512bw'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512cd'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512dq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512f'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vl'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vnni'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='hle'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='ibrs-all'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='invpcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pku'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='rtm'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='Cascadelake-Server-v3'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512bw'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512cd'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512dq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512f'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vl'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vnni'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='ibrs-all'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='invpcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pku'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='Cascadelake-Server-v4'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512bw'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512cd'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512dq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512f'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vl'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vnni'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='ibrs-all'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='invpcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pku'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='Cascadelake-Server-v5'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512bw'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512cd'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512dq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512f'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vl'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vnni'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='ibrs-all'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='invpcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pku'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='xsaves'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='Cooperlake'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512-bf16'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512bw'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512cd'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512dq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512f'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vl'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vnni'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='hle'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='ibrs-all'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='invpcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pku'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='rtm'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='taa-no'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='Cooperlake-v1'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512-bf16'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512bw'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512cd'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512dq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512f'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vl'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vnni'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='hle'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='ibrs-all'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='invpcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pku'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='rtm'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='taa-no'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='Cooperlake-v2'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512-bf16'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512bw'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512cd'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512dq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512f'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vl'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vnni'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='hle'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='ibrs-all'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='invpcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pku'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='rtm'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='taa-no'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='xsaves'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='Denverton'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='mpx'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='Denverton-v1'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='mpx'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='Denverton-v2'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='Denverton-v3'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='xsaves'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='Dhyana-v2'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='xsaves'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='EPYC-Genoa'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='amd-psfd'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='auto-ibrs'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512-bf16'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512-vpopcntdq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512bitalg'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512bw'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512cd'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512dq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512f'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512ifma'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vbmi'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vbmi2'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vl'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vnni'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='fsrm'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='gfni'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='invpcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='la57'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='no-nested-data-bp'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='null-sel-clr-base'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pku'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='stibp-always-on'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='vaes'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='vpclmulqdq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='xsaves'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='EPYC-Genoa-v1'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='amd-psfd'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='auto-ibrs'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512-bf16'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512-vpopcntdq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512bitalg'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512bw'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512cd'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512dq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512f'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512ifma'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vbmi'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vbmi2'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vl'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vnni'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='fsrm'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='gfni'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='invpcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='la57'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='no-nested-data-bp'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='null-sel-clr-base'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pku'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='stibp-always-on'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='vaes'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='vpclmulqdq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='xsaves'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='EPYC-Milan'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='fsrm'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='invpcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pku'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='xsaves'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='EPYC-Milan-v1'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='fsrm'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='invpcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pku'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='xsaves'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='EPYC-Milan-v2'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='amd-psfd'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='fsrm'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='invpcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='no-nested-data-bp'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='null-sel-clr-base'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pku'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='stibp-always-on'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='vaes'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='vpclmulqdq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='xsaves'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='EPYC-Rome'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='xsaves'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='EPYC-Rome-v1'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='xsaves'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='EPYC-Rome-v2'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='xsaves'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='EPYC-Rome-v3'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='xsaves'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='EPYC-v3'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='xsaves'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='EPYC-v4'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='xsaves'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='GraniteRapids'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='amx-bf16'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='amx-fp16'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='amx-int8'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='amx-tile'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx-vnni'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512-bf16'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512-fp16'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512-vpopcntdq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512bitalg'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512bw'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512cd'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512dq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512f'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512ifma'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vbmi'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vbmi2'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vl'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vnni'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='bus-lock-detect'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='fbsdp-no'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='fsrc'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='fsrm'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='fsrs'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='fzrm'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='gfni'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='hle'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='ibrs-all'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='invpcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='la57'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='mcdt-no'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pbrsb-no'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pku'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='prefetchiti'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='psdp-no'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='rtm'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='sbdr-ssdp-no'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='serialize'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='taa-no'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='tsx-ldtrk'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='vaes'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='vpclmulqdq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='xfd'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='xsaves'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='GraniteRapids-v1'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='amx-bf16'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='amx-fp16'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='amx-int8'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='amx-tile'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx-vnni'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512-bf16'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512-fp16'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512-vpopcntdq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512bitalg'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512bw'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512cd'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512dq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512f'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512ifma'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vbmi'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vbmi2'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vl'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vnni'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='bus-lock-detect'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='fbsdp-no'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='fsrc'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='fsrm'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='fsrs'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='fzrm'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='gfni'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='hle'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='ibrs-all'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='invpcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='la57'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='mcdt-no'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pbrsb-no'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pku'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='prefetchiti'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='psdp-no'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='rtm'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='sbdr-ssdp-no'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='serialize'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='taa-no'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='tsx-ldtrk'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='vaes'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='vpclmulqdq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='xfd'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='xsaves'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='GraniteRapids-v2'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='amx-bf16'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='amx-fp16'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='amx-int8'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='amx-tile'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx-vnni'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx10'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx10-128'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx10-256'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx10-512'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512-bf16'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512-fp16'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512-vpopcntdq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512bitalg'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512bw'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512cd'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512dq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512f'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512ifma'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vbmi'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vbmi2'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vl'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vnni'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='bus-lock-detect'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='cldemote'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='fbsdp-no'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='fsrc'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='fsrm'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='fsrs'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='fzrm'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='gfni'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='hle'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='ibrs-all'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='invpcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='la57'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='mcdt-no'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='movdir64b'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='movdiri'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pbrsb-no'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pku'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='prefetchiti'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='psdp-no'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='rtm'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='sbdr-ssdp-no'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='serialize'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='ss'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='taa-no'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='tsx-ldtrk'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='vaes'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='vpclmulqdq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='xfd'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='xsaves'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='Haswell'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='hle'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='invpcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='rtm'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='Haswell-IBRS'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='hle'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='invpcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='rtm'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='Haswell-noTSX'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='invpcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='Haswell-noTSX-IBRS'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='invpcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='Haswell-v1'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='hle'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='invpcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='rtm'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='Haswell-v2'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='invpcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='Haswell-v3'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='hle'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='invpcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='rtm'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='Haswell-v4'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='invpcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='Icelake-Server'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512-vpopcntdq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512bitalg'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512bw'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512cd'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512dq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512f'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vbmi'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vbmi2'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vl'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vnni'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='gfni'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='hle'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='invpcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='la57'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pku'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='rtm'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='vaes'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='vpclmulqdq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='Icelake-Server-noTSX'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512-vpopcntdq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512bitalg'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512bw'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512cd'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512dq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512f'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vbmi'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vbmi2'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vl'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vnni'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='gfni'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='invpcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='la57'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pku'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='vaes'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='vpclmulqdq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='Icelake-Server-v1'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512-vpopcntdq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512bitalg'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512bw'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512cd'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512dq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512f'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vbmi'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vbmi2'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vl'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vnni'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='gfni'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='hle'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='invpcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='la57'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pku'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='rtm'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='vaes'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='vpclmulqdq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='Icelake-Server-v2'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512-vpopcntdq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512bitalg'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512bw'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512cd'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512dq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512f'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vbmi'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vbmi2'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vl'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vnni'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='gfni'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='invpcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='la57'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pku'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='vaes'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='vpclmulqdq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='Icelake-Server-v3'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512-vpopcntdq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512bitalg'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512bw'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512cd'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512dq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512f'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vbmi'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vbmi2'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vl'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vnni'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='gfni'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='ibrs-all'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='invpcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='la57'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pku'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='taa-no'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='vaes'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='vpclmulqdq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='Icelake-Server-v4'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512-vpopcntdq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512bitalg'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512bw'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512cd'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512dq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512f'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512ifma'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vbmi'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vbmi2'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vl'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vnni'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='fsrm'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='gfni'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='ibrs-all'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='invpcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='la57'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pku'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='taa-no'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='vaes'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='vpclmulqdq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='Icelake-Server-v5'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512-vpopcntdq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512bitalg'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512bw'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512cd'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512dq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512f'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512ifma'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vbmi'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vbmi2'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vl'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vnni'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='fsrm'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='gfni'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='ibrs-all'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='invpcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='la57'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pku'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='taa-no'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='vaes'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='vpclmulqdq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='xsaves'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='Icelake-Server-v6'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512-vpopcntdq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512bitalg'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512bw'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512cd'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512dq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512f'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512ifma'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vbmi'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vbmi2'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vl'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vnni'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='fsrm'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='gfni'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='ibrs-all'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='invpcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='la57'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pku'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='taa-no'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='vaes'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='vpclmulqdq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='xsaves'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='Icelake-Server-v7'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512-vpopcntdq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512bitalg'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512bw'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512cd'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512dq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512f'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512ifma'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vbmi'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vbmi2'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vl'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vnni'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='fsrm'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='gfni'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='hle'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='ibrs-all'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='invpcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='la57'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pku'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='rtm'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='taa-no'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='vaes'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='vpclmulqdq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='xsaves'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='IvyBridge'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='IvyBridge-IBRS'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='IvyBridge-v1'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='IvyBridge-v2'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='KnightsMill'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512-4fmaps'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512-4vnniw'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512-vpopcntdq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512cd'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512er'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512f'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512pf'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='ss'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='KnightsMill-v1'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512-4fmaps'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512-4vnniw'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512-vpopcntdq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512cd'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512er'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512f'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512pf'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='ss'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='Opteron_G4'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='fma4'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='xop'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='Opteron_G4-v1'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='fma4'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='xop'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='Opteron_G5'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='fma4'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='tbm'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='xop'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='Opteron_G5-v1'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='fma4'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='tbm'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='xop'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='SapphireRapids'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='amx-bf16'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='amx-int8'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='amx-tile'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx-vnni'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512-bf16'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512-fp16'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512-vpopcntdq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512bitalg'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512bw'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512cd'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512dq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512f'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512ifma'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vbmi'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vbmi2'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vl'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vnni'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='bus-lock-detect'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='fsrc'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='fsrm'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='fsrs'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='fzrm'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='gfni'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='hle'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='ibrs-all'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='invpcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='la57'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pku'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='rtm'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='serialize'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='taa-no'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='tsx-ldtrk'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='vaes'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='vpclmulqdq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='xfd'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='xsaves'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='SapphireRapids-v1'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='amx-bf16'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='amx-int8'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='amx-tile'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx-vnni'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512-bf16'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512-fp16'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512-vpopcntdq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512bitalg'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512bw'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512cd'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512dq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512f'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512ifma'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vbmi'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vbmi2'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vl'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vnni'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='bus-lock-detect'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='fsrc'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='fsrm'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='fsrs'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='fzrm'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='gfni'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='hle'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='ibrs-all'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='invpcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='la57'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pku'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='rtm'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='serialize'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='taa-no'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='tsx-ldtrk'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='vaes'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='vpclmulqdq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='xfd'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='xsaves'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='SapphireRapids-v2'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='amx-bf16'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='amx-int8'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='amx-tile'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx-vnni'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512-bf16'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512-fp16'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512-vpopcntdq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512bitalg'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512bw'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512cd'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512dq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512f'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512ifma'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vbmi'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vbmi2'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vl'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vnni'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='bus-lock-detect'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='fbsdp-no'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='fsrc'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='fsrm'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='fsrs'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='fzrm'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='gfni'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='hle'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='ibrs-all'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='invpcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='la57'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pku'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='psdp-no'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='rtm'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='sbdr-ssdp-no'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='serialize'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='taa-no'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='tsx-ldtrk'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='vaes'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='vpclmulqdq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='xfd'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='xsaves'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='SapphireRapids-v3'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='amx-bf16'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='amx-int8'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='amx-tile'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx-vnni'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512-bf16'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512-fp16'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512-vpopcntdq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512bitalg'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512bw'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512cd'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512dq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512f'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512ifma'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vbmi'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vbmi2'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vl'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vnni'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='bus-lock-detect'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='cldemote'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='fbsdp-no'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='fsrc'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='fsrm'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='fsrs'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='fzrm'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='gfni'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='hle'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='ibrs-all'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='invpcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='la57'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='movdir64b'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='movdiri'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pku'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='psdp-no'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='rtm'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='sbdr-ssdp-no'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='serialize'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='ss'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='taa-no'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='tsx-ldtrk'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='vaes'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='vpclmulqdq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='xfd'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='xsaves'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='SierraForest'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx-ifma'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx-ne-convert'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx-vnni'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx-vnni-int8'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='bus-lock-detect'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='cmpccxadd'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='fbsdp-no'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='fsrm'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='fsrs'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='gfni'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='ibrs-all'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='invpcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='mcdt-no'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pbrsb-no'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pku'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='psdp-no'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='sbdr-ssdp-no'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='serialize'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='vaes'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='vpclmulqdq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='xsaves'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='SierraForest-v1'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx-ifma'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx-ne-convert'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx-vnni'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx-vnni-int8'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='bus-lock-detect'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='cmpccxadd'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='fbsdp-no'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='fsrm'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='fsrs'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='gfni'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='ibrs-all'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='invpcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='mcdt-no'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pbrsb-no'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pku'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='psdp-no'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='sbdr-ssdp-no'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='serialize'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='vaes'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='vpclmulqdq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='xsaves'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='Skylake-Client'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='hle'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='invpcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='rtm'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='Skylake-Client-IBRS'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='hle'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='invpcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='rtm'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='invpcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='Skylake-Client-v1'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='hle'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='invpcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='rtm'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='Skylake-Client-v2'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='hle'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='invpcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='rtm'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='Skylake-Client-v3'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='invpcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='Skylake-Client-v4'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='invpcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='xsaves'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='Skylake-Server'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512bw'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512cd'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512dq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512f'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vl'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='hle'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='invpcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pku'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='rtm'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='Skylake-Server-IBRS'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512bw'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512cd'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512dq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512f'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vl'/>
Dec 06 10:00:36 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:00:36 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4720004340 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='hle'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='invpcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pku'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='rtm'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512bw'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512cd'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512dq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512f'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vl'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='invpcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pku'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='Skylake-Server-v1'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512bw'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512cd'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512dq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512f'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vl'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='hle'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='invpcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pku'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='rtm'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='Skylake-Server-v2'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512bw'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512cd'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512dq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512f'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vl'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='hle'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='invpcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pku'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='rtm'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='Skylake-Server-v3'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512bw'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512cd'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512dq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512f'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vl'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='invpcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pku'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='Skylake-Server-v4'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512bw'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512cd'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512dq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512f'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vl'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='invpcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pku'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='Skylake-Server-v5'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512bw'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512cd'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512dq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512f'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vl'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='invpcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pku'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='xsaves'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='Snowridge'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='cldemote'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='core-capability'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='gfni'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='movdir64b'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='movdiri'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='mpx'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='split-lock-detect'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='Snowridge-v1'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='cldemote'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='core-capability'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='gfni'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='movdir64b'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='movdiri'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='mpx'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='split-lock-detect'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='Snowridge-v2'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='cldemote'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='core-capability'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='gfni'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='movdir64b'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='movdiri'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='split-lock-detect'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='Snowridge-v3'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='cldemote'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='core-capability'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='gfni'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='movdir64b'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='movdiri'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='split-lock-detect'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='xsaves'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='Snowridge-v4'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='cldemote'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='gfni'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='movdir64b'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='movdiri'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='xsaves'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='athlon'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='3dnow'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='3dnowext'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='athlon-v1'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='3dnow'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='3dnowext'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='core2duo'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='ss'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='core2duo-v1'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='ss'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='coreduo'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='ss'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='coreduo-v1'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='ss'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='n270'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='ss'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='n270-v1'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='ss'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='phenom'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='3dnow'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='3dnowext'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='phenom-v1'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='3dnow'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='3dnowext'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:     </mode>
Dec 06 10:00:36 compute-0 nova_compute[254819]:   </cpu>
Dec 06 10:00:36 compute-0 nova_compute[254819]:   <memoryBacking supported='yes'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:     <enum name='sourceType'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <value>file</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <value>anonymous</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <value>memfd</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:     </enum>
Dec 06 10:00:36 compute-0 nova_compute[254819]:   </memoryBacking>
Dec 06 10:00:36 compute-0 nova_compute[254819]:   <devices>
Dec 06 10:00:36 compute-0 nova_compute[254819]:     <disk supported='yes'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <enum name='diskDevice'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>disk</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>cdrom</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>floppy</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>lun</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </enum>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <enum name='bus'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>ide</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>fdc</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>scsi</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>virtio</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>usb</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>sata</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </enum>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <enum name='model'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>virtio</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>virtio-transitional</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>virtio-non-transitional</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </enum>
Dec 06 10:00:36 compute-0 nova_compute[254819]:     </disk>
Dec 06 10:00:36 compute-0 nova_compute[254819]:     <graphics supported='yes'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <enum name='type'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>vnc</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>egl-headless</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>dbus</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </enum>
Dec 06 10:00:36 compute-0 nova_compute[254819]:     </graphics>
Dec 06 10:00:36 compute-0 nova_compute[254819]:     <video supported='yes'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <enum name='modelType'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>vga</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>cirrus</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>virtio</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>none</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>bochs</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>ramfb</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </enum>
Dec 06 10:00:36 compute-0 nova_compute[254819]:     </video>
Dec 06 10:00:36 compute-0 nova_compute[254819]:     <hostdev supported='yes'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <enum name='mode'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>subsystem</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </enum>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <enum name='startupPolicy'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>default</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>mandatory</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>requisite</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>optional</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </enum>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <enum name='subsysType'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>usb</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>pci</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>scsi</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </enum>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <enum name='capsType'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <enum name='pciBackend'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:     </hostdev>
Dec 06 10:00:36 compute-0 nova_compute[254819]:     <rng supported='yes'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <enum name='model'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>virtio</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>virtio-transitional</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>virtio-non-transitional</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </enum>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <enum name='backendModel'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>random</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>egd</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>builtin</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </enum>
Dec 06 10:00:36 compute-0 nova_compute[254819]:     </rng>
Dec 06 10:00:36 compute-0 nova_compute[254819]:     <filesystem supported='yes'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <enum name='driverType'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>path</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>handle</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>virtiofs</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </enum>
Dec 06 10:00:36 compute-0 nova_compute[254819]:     </filesystem>
Dec 06 10:00:36 compute-0 nova_compute[254819]:     <tpm supported='yes'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <enum name='model'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>tpm-tis</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>tpm-crb</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </enum>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <enum name='backendModel'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>emulator</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>external</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </enum>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <enum name='backendVersion'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>2.0</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </enum>
Dec 06 10:00:36 compute-0 nova_compute[254819]:     </tpm>
Dec 06 10:00:36 compute-0 nova_compute[254819]:     <redirdev supported='yes'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <enum name='bus'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>usb</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </enum>
Dec 06 10:00:36 compute-0 nova_compute[254819]:     </redirdev>
Dec 06 10:00:36 compute-0 nova_compute[254819]:     <channel supported='yes'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <enum name='type'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>pty</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>unix</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </enum>
Dec 06 10:00:36 compute-0 nova_compute[254819]:     </channel>
Dec 06 10:00:36 compute-0 nova_compute[254819]:     <crypto supported='yes'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <enum name='model'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <enum name='type'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>qemu</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </enum>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <enum name='backendModel'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>builtin</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </enum>
Dec 06 10:00:36 compute-0 nova_compute[254819]:     </crypto>
Dec 06 10:00:36 compute-0 nova_compute[254819]:     <interface supported='yes'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <enum name='backendType'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>default</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>passt</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </enum>
Dec 06 10:00:36 compute-0 nova_compute[254819]:     </interface>
Dec 06 10:00:36 compute-0 nova_compute[254819]:     <panic supported='yes'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <enum name='model'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>isa</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>hyperv</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </enum>
Dec 06 10:00:36 compute-0 nova_compute[254819]:     </panic>
Dec 06 10:00:36 compute-0 nova_compute[254819]:     <console supported='yes'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <enum name='type'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>null</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>vc</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>pty</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>dev</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>file</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>pipe</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>stdio</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>udp</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>tcp</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>unix</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>qemu-vdagent</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>dbus</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </enum>
Dec 06 10:00:36 compute-0 nova_compute[254819]:     </console>
Dec 06 10:00:36 compute-0 nova_compute[254819]:   </devices>
Dec 06 10:00:36 compute-0 nova_compute[254819]:   <features>
Dec 06 10:00:36 compute-0 nova_compute[254819]:     <gic supported='no'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:     <vmcoreinfo supported='yes'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:     <genid supported='yes'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:     <backingStoreInput supported='yes'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:     <backup supported='yes'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:     <async-teardown supported='yes'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:     <ps2 supported='yes'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:     <sev supported='no'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:     <sgx supported='no'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:     <hyperv supported='yes'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <enum name='features'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>relaxed</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>vapic</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>spinlocks</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>vpindex</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>runtime</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>synic</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>stimer</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>reset</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>vendor_id</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>frequencies</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>reenlightenment</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>tlbflush</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>ipi</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>avic</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>emsr_bitmap</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>xmm_input</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </enum>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <defaults>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <spinlocks>4095</spinlocks>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <stimer_direct>on</stimer_direct>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <tlbflush_direct>on</tlbflush_direct>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <tlbflush_extended>on</tlbflush_extended>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <vendor_id>Linux KVM Hv</vendor_id>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </defaults>
Dec 06 10:00:36 compute-0 nova_compute[254819]:     </hyperv>
Dec 06 10:00:36 compute-0 nova_compute[254819]:     <launchSecurity supported='yes'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <enum name='sectype'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>tdx</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </enum>
Dec 06 10:00:36 compute-0 nova_compute[254819]:     </launchSecurity>
Dec 06 10:00:36 compute-0 nova_compute[254819]:   </features>
Dec 06 10:00:36 compute-0 nova_compute[254819]: </domainCapabilities>
Dec 06 10:00:36 compute-0 nova_compute[254819]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.637 254824 DEBUG nova.virt.libvirt.host [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Dec 06 10:00:36 compute-0 nova_compute[254819]: <domainCapabilities>
Dec 06 10:00:36 compute-0 nova_compute[254819]:   <path>/usr/libexec/qemu-kvm</path>
Dec 06 10:00:36 compute-0 nova_compute[254819]:   <domain>kvm</domain>
Dec 06 10:00:36 compute-0 nova_compute[254819]:   <machine>pc-q35-rhel9.8.0</machine>
Dec 06 10:00:36 compute-0 nova_compute[254819]:   <arch>x86_64</arch>
Dec 06 10:00:36 compute-0 nova_compute[254819]:   <vcpu max='4096'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:   <iothreads supported='yes'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:   <os supported='yes'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:     <enum name='firmware'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <value>efi</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:     </enum>
Dec 06 10:00:36 compute-0 nova_compute[254819]:     <loader supported='yes'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.fd</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <value>/usr/share/edk2/ovmf/OVMF.amdsev.fd</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <value>/usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <enum name='type'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>rom</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>pflash</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </enum>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <enum name='readonly'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>yes</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>no</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </enum>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <enum name='secure'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>yes</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>no</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </enum>
Dec 06 10:00:36 compute-0 nova_compute[254819]:     </loader>
Dec 06 10:00:36 compute-0 nova_compute[254819]:   </os>
Dec 06 10:00:36 compute-0 nova_compute[254819]:   <cpu>
Dec 06 10:00:36 compute-0 nova_compute[254819]:     <mode name='host-passthrough' supported='yes'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <enum name='hostPassthroughMigratable'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>on</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>off</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </enum>
Dec 06 10:00:36 compute-0 nova_compute[254819]:     </mode>
Dec 06 10:00:36 compute-0 nova_compute[254819]:     <mode name='maximum' supported='yes'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <enum name='maximumMigratable'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>on</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>off</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </enum>
Dec 06 10:00:36 compute-0 nova_compute[254819]:     </mode>
Dec 06 10:00:36 compute-0 nova_compute[254819]:     <mode name='host-model' supported='yes'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model fallback='forbid'>EPYC-Rome</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <vendor>AMD</vendor>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <maxphysaddr mode='passthrough' limit='40'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <feature policy='require' name='x2apic'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <feature policy='require' name='tsc-deadline'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <feature policy='require' name='hypervisor'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <feature policy='require' name='tsc_adjust'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <feature policy='require' name='spec-ctrl'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <feature policy='require' name='stibp'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <feature policy='require' name='ssbd'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <feature policy='require' name='cmp_legacy'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <feature policy='require' name='overflow-recov'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <feature policy='require' name='succor'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <feature policy='require' name='ibrs'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <feature policy='require' name='amd-ssbd'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <feature policy='require' name='virt-ssbd'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <feature policy='require' name='lbrv'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <feature policy='require' name='tsc-scale'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <feature policy='require' name='vmcb-clean'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <feature policy='require' name='flushbyasid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <feature policy='require' name='pause-filter'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <feature policy='require' name='pfthreshold'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <feature policy='require' name='svme-addr-chk'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <feature policy='require' name='lfence-always-serializing'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <feature policy='disable' name='xsaves'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:     </mode>
Dec 06 10:00:36 compute-0 nova_compute[254819]:     <mode name='custom' supported='yes'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='Broadwell'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='hle'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='invpcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='rtm'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='Broadwell-IBRS'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='hle'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='invpcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='rtm'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='Broadwell-noTSX'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='invpcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='Broadwell-noTSX-IBRS'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='invpcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='Broadwell-v1'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='hle'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='invpcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='rtm'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='Broadwell-v2'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='invpcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='Broadwell-v3'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='hle'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='invpcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='rtm'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='Broadwell-v4'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='invpcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='Cascadelake-Server'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512bw'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512cd'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512dq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512f'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vl'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vnni'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='hle'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='invpcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pku'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='rtm'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='Cascadelake-Server-noTSX'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512bw'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512cd'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512dq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512f'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vl'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vnni'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='ibrs-all'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='invpcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pku'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='Cascadelake-Server-v1'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512bw'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512cd'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512dq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512f'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vl'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vnni'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='hle'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='invpcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pku'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='rtm'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='Cascadelake-Server-v2'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512bw'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512cd'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512dq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512f'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vl'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vnni'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='hle'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='ibrs-all'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='invpcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pku'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='rtm'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='Cascadelake-Server-v3'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512bw'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512cd'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512dq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512f'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vl'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vnni'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='ibrs-all'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='invpcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pku'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='Cascadelake-Server-v4'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512bw'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512cd'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512dq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512f'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vl'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vnni'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='ibrs-all'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='invpcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pku'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='Cascadelake-Server-v5'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512bw'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512cd'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512dq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512f'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vl'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vnni'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='ibrs-all'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='invpcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pku'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='xsaves'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='Cooperlake'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512-bf16'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512bw'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512cd'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512dq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512f'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vl'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vnni'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='hle'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='ibrs-all'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='invpcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pku'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='rtm'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='taa-no'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='Cooperlake-v1'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512-bf16'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512bw'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512cd'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512dq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512f'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vl'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vnni'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='hle'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='ibrs-all'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='invpcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pku'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='rtm'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='taa-no'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='Cooperlake-v2'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512-bf16'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512bw'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512cd'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512dq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512f'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vl'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vnni'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='hle'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='ibrs-all'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='invpcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pku'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='rtm'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='taa-no'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='xsaves'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='Denverton'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='mpx'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='Denverton-v1'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='mpx'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='Denverton-v2'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='Denverton-v3'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='xsaves'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='Dhyana-v2'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='xsaves'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='EPYC-Genoa'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='amd-psfd'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='auto-ibrs'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512-bf16'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512-vpopcntdq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512bitalg'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512bw'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512cd'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512dq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512f'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512ifma'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vbmi'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vbmi2'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vl'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vnni'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='fsrm'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='gfni'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='invpcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='la57'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='no-nested-data-bp'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='null-sel-clr-base'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pku'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='stibp-always-on'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='vaes'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='vpclmulqdq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='xsaves'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='EPYC-Genoa-v1'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='amd-psfd'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='auto-ibrs'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512-bf16'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512-vpopcntdq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512bitalg'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512bw'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512cd'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512dq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512f'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512ifma'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vbmi'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vbmi2'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vl'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vnni'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='fsrm'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='gfni'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='invpcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='la57'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='no-nested-data-bp'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='null-sel-clr-base'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pku'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='stibp-always-on'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='vaes'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='vpclmulqdq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='xsaves'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='EPYC-Milan'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='fsrm'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='invpcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pku'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='xsaves'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='EPYC-Milan-v1'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='fsrm'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='invpcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pku'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='xsaves'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='EPYC-Milan-v2'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='amd-psfd'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='fsrm'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='invpcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='no-nested-data-bp'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='null-sel-clr-base'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pku'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='stibp-always-on'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='vaes'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='vpclmulqdq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='xsaves'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='EPYC-Rome'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='xsaves'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='EPYC-Rome-v1'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='xsaves'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='EPYC-Rome-v2'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='xsaves'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='EPYC-Rome-v3'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='xsaves'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='EPYC-v3'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='xsaves'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='EPYC-v4'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='xsaves'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='GraniteRapids'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='amx-bf16'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='amx-fp16'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='amx-int8'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='amx-tile'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx-vnni'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512-bf16'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512-fp16'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512-vpopcntdq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512bitalg'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512bw'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512cd'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512dq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512f'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512ifma'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vbmi'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vbmi2'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vl'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vnni'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='bus-lock-detect'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='fbsdp-no'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='fsrc'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='fsrm'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='fsrs'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='fzrm'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='gfni'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='hle'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='ibrs-all'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='invpcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='la57'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='mcdt-no'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pbrsb-no'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pku'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='prefetchiti'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='psdp-no'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='rtm'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='sbdr-ssdp-no'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='serialize'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='taa-no'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='tsx-ldtrk'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='vaes'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='vpclmulqdq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='xfd'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='xsaves'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='GraniteRapids-v1'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='amx-bf16'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='amx-fp16'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='amx-int8'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='amx-tile'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx-vnni'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512-bf16'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512-fp16'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512-vpopcntdq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512bitalg'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512bw'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512cd'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512dq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512f'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512ifma'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vbmi'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vbmi2'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vl'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vnni'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='bus-lock-detect'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='fbsdp-no'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='fsrc'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='fsrm'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='fsrs'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='fzrm'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='gfni'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='hle'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='ibrs-all'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='invpcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='la57'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='mcdt-no'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pbrsb-no'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pku'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='prefetchiti'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='psdp-no'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='rtm'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='sbdr-ssdp-no'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='serialize'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='taa-no'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='tsx-ldtrk'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='vaes'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='vpclmulqdq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='xfd'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='xsaves'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='GraniteRapids-v2'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='amx-bf16'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='amx-fp16'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='amx-int8'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='amx-tile'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx-vnni'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx10'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx10-128'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx10-256'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx10-512'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512-bf16'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512-fp16'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512-vpopcntdq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512bitalg'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512bw'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512cd'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512dq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512f'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512ifma'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vbmi'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vbmi2'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vl'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vnni'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='bus-lock-detect'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='cldemote'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='fbsdp-no'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='fsrc'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='fsrm'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='fsrs'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='fzrm'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='gfni'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='hle'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='ibrs-all'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='invpcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='la57'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='mcdt-no'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='movdir64b'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='movdiri'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pbrsb-no'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pku'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='prefetchiti'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='psdp-no'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='rtm'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='sbdr-ssdp-no'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='serialize'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='ss'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='taa-no'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='tsx-ldtrk'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='vaes'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='vpclmulqdq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='xfd'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='xsaves'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='Haswell'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='hle'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='invpcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='rtm'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='Haswell-IBRS'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='hle'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='invpcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='rtm'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='Haswell-noTSX'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='invpcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='Haswell-noTSX-IBRS'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='invpcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='Haswell-v1'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='hle'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='invpcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='rtm'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='Haswell-v2'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='invpcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='Haswell-v3'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='hle'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='invpcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='rtm'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='Haswell-v4'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='invpcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='Icelake-Server'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512-vpopcntdq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512bitalg'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512bw'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512cd'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512dq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512f'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vbmi'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vbmi2'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vl'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vnni'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='gfni'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='hle'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='invpcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='la57'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pku'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='rtm'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='vaes'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='vpclmulqdq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='Icelake-Server-noTSX'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512-vpopcntdq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512bitalg'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512bw'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512cd'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512dq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512f'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vbmi'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vbmi2'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vl'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vnni'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='gfni'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='invpcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='la57'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pku'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='vaes'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='vpclmulqdq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='Icelake-Server-v1'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512-vpopcntdq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512bitalg'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512bw'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512cd'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512dq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512f'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vbmi'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vbmi2'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vl'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vnni'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='gfni'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='hle'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='invpcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='la57'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pku'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='rtm'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='vaes'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='vpclmulqdq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='Icelake-Server-v2'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512-vpopcntdq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512bitalg'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512bw'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512cd'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512dq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512f'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vbmi'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vbmi2'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vl'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vnni'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='gfni'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='invpcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='la57'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pku'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='vaes'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='vpclmulqdq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='Icelake-Server-v3'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512-vpopcntdq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512bitalg'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512bw'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512cd'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512dq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512f'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vbmi'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vbmi2'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vl'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vnni'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='gfni'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='ibrs-all'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='invpcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='la57'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pku'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='taa-no'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='vaes'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='vpclmulqdq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='Icelake-Server-v4'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512-vpopcntdq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512bitalg'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512bw'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512cd'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512dq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512f'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512ifma'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vbmi'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vbmi2'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vl'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vnni'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='fsrm'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='gfni'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='ibrs-all'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='invpcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='la57'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pku'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='taa-no'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='vaes'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='vpclmulqdq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='Icelake-Server-v5'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512-vpopcntdq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512bitalg'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512bw'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512cd'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512dq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512f'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512ifma'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vbmi'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vbmi2'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vl'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vnni'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='fsrm'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='gfni'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='ibrs-all'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='invpcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='la57'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pku'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='taa-no'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='vaes'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='vpclmulqdq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='xsaves'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='Icelake-Server-v6'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512-vpopcntdq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512bitalg'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512bw'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512cd'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512dq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512f'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512ifma'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vbmi'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vbmi2'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vl'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vnni'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='fsrm'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='gfni'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='ibrs-all'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='invpcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='la57'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pku'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='taa-no'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='vaes'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='vpclmulqdq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='xsaves'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='Icelake-Server-v7'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512-vpopcntdq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512bitalg'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512bw'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512cd'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512dq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512f'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512ifma'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vbmi'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vbmi2'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vl'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vnni'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='fsrm'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='gfni'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='hle'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='ibrs-all'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='invpcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='la57'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pku'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='rtm'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='taa-no'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='vaes'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='vpclmulqdq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='xsaves'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='IvyBridge'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='IvyBridge-IBRS'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='IvyBridge-v1'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='IvyBridge-v2'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='KnightsMill'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512-4fmaps'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512-4vnniw'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512-vpopcntdq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512cd'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512er'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512f'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512pf'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='ss'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='KnightsMill-v1'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512-4fmaps'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512-4vnniw'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512-vpopcntdq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512cd'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512er'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512f'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512pf'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='ss'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='Opteron_G4'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='fma4'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='xop'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='Opteron_G4-v1'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='fma4'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='xop'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='Opteron_G5'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='fma4'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='tbm'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='xop'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='Opteron_G5-v1'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='fma4'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='tbm'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='xop'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='SapphireRapids'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='amx-bf16'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='amx-int8'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='amx-tile'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx-vnni'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512-bf16'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512-fp16'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512-vpopcntdq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512bitalg'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512bw'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512cd'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512dq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512f'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512ifma'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vbmi'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vbmi2'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vl'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vnni'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='bus-lock-detect'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='fsrc'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='fsrm'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='fsrs'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='fzrm'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='gfni'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='hle'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='ibrs-all'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='invpcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='la57'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pku'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='rtm'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='serialize'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='taa-no'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='tsx-ldtrk'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='vaes'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='vpclmulqdq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='xfd'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='xsaves'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='SapphireRapids-v1'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='amx-bf16'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='amx-int8'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='amx-tile'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx-vnni'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512-bf16'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512-fp16'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512-vpopcntdq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512bitalg'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512bw'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512cd'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512dq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512f'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512ifma'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vbmi'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vbmi2'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vl'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vnni'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='bus-lock-detect'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='fsrc'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='fsrm'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='fsrs'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='fzrm'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='gfni'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='hle'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='ibrs-all'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='invpcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='la57'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pku'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='rtm'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='serialize'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='taa-no'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='tsx-ldtrk'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='vaes'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='vpclmulqdq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='xfd'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='xsaves'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='SapphireRapids-v2'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='amx-bf16'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='amx-int8'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='amx-tile'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx-vnni'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512-bf16'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512-fp16'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512-vpopcntdq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512bitalg'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512bw'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512cd'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512dq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512f'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512ifma'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vbmi'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vbmi2'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vl'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vnni'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='bus-lock-detect'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='fbsdp-no'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='fsrc'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='fsrm'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='fsrs'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='fzrm'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='gfni'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='hle'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='ibrs-all'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='invpcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='la57'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pku'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='psdp-no'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='rtm'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='sbdr-ssdp-no'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='serialize'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='taa-no'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='tsx-ldtrk'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='vaes'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='vpclmulqdq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='xfd'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='xsaves'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='SapphireRapids-v3'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='amx-bf16'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='amx-int8'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='amx-tile'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx-vnni'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512-bf16'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512-fp16'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512-vpopcntdq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512bitalg'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512bw'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512cd'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512dq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512f'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512ifma'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vbmi'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vbmi2'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vl'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vnni'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='bus-lock-detect'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='cldemote'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='fbsdp-no'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='fsrc'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='fsrm'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='fsrs'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='fzrm'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='gfni'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='hle'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='ibrs-all'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='invpcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='la57'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='movdir64b'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='movdiri'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pku'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='psdp-no'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='rtm'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='sbdr-ssdp-no'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='serialize'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='ss'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='taa-no'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='tsx-ldtrk'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='vaes'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='vpclmulqdq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='xfd'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='xsaves'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='SierraForest'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx-ifma'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx-ne-convert'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx-vnni'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx-vnni-int8'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='bus-lock-detect'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='cmpccxadd'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='fbsdp-no'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='fsrm'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='fsrs'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='gfni'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='ibrs-all'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='invpcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='mcdt-no'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pbrsb-no'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pku'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='psdp-no'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='sbdr-ssdp-no'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='serialize'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='vaes'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='vpclmulqdq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='xsaves'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='SierraForest-v1'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx-ifma'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx-ne-convert'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx-vnni'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx-vnni-int8'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='bus-lock-detect'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='cmpccxadd'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='fbsdp-no'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='fsrm'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='fsrs'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='gfni'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='ibrs-all'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='invpcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='mcdt-no'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pbrsb-no'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pku'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='psdp-no'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='sbdr-ssdp-no'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='serialize'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='vaes'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='vpclmulqdq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='xsaves'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='Skylake-Client'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='hle'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='invpcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='rtm'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='Skylake-Client-IBRS'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='hle'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='invpcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='rtm'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='invpcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='Skylake-Client-v1'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='hle'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='invpcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='rtm'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='Skylake-Client-v2'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='hle'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='invpcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='rtm'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='Skylake-Client-v3'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='invpcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='Skylake-Client-v4'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='invpcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='xsaves'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='Skylake-Server'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512bw'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512cd'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512dq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512f'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vl'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='hle'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='invpcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pku'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='rtm'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='Skylake-Server-IBRS'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512bw'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512cd'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512dq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512f'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vl'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='hle'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='invpcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pku'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='rtm'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512bw'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512cd'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512dq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512f'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vl'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='invpcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pku'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='Skylake-Server-v1'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512bw'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512cd'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512dq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512f'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vl'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='hle'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='invpcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pku'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='rtm'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='Skylake-Server-v2'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512bw'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512cd'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512dq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512f'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vl'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='hle'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='invpcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pku'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='rtm'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='Skylake-Server-v3'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512bw'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512cd'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512dq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512f'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vl'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='invpcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pku'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='Skylake-Server-v4'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512bw'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512cd'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512dq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512f'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vl'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='invpcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pku'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='Skylake-Server-v5'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512bw'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512cd'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512dq'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512f'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='avx512vl'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='invpcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pcid'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='pku'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='xsaves'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='Snowridge'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='cldemote'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='core-capability'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='gfni'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='movdir64b'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='movdiri'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='mpx'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='split-lock-detect'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='Snowridge-v1'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='cldemote'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='core-capability'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='gfni'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='movdir64b'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='movdiri'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='mpx'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='split-lock-detect'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='Snowridge-v2'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='cldemote'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='core-capability'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='gfni'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='movdir64b'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='movdiri'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='split-lock-detect'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='Snowridge-v3'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='cldemote'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='core-capability'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='gfni'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='movdir64b'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='movdiri'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='split-lock-detect'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='xsaves'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='Snowridge-v4'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='cldemote'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='erms'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='gfni'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='movdir64b'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='movdiri'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='xsaves'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='athlon'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='3dnow'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='3dnowext'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='athlon-v1'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='3dnow'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='3dnowext'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='core2duo'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='ss'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='core2duo-v1'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='ss'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='coreduo'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='ss'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='coreduo-v1'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='ss'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='n270'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='ss'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='n270-v1'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='ss'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='phenom'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='3dnow'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='3dnowext'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <blockers model='phenom-v1'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='3dnow'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <feature name='3dnowext'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </blockers>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Dec 06 10:00:36 compute-0 nova_compute[254819]:     </mode>
Dec 06 10:00:36 compute-0 nova_compute[254819]:   </cpu>
Dec 06 10:00:36 compute-0 nova_compute[254819]:   <memoryBacking supported='yes'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:     <enum name='sourceType'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <value>file</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <value>anonymous</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <value>memfd</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:     </enum>
Dec 06 10:00:36 compute-0 nova_compute[254819]:   </memoryBacking>
Dec 06 10:00:36 compute-0 nova_compute[254819]:   <devices>
Dec 06 10:00:36 compute-0 nova_compute[254819]:     <disk supported='yes'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <enum name='diskDevice'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>disk</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>cdrom</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>floppy</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>lun</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </enum>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <enum name='bus'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>fdc</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>scsi</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>virtio</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>usb</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>sata</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </enum>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <enum name='model'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>virtio</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>virtio-transitional</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>virtio-non-transitional</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </enum>
Dec 06 10:00:36 compute-0 nova_compute[254819]:     </disk>
Dec 06 10:00:36 compute-0 nova_compute[254819]:     <graphics supported='yes'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <enum name='type'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>vnc</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>egl-headless</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>dbus</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </enum>
Dec 06 10:00:36 compute-0 nova_compute[254819]:     </graphics>
Dec 06 10:00:36 compute-0 nova_compute[254819]:     <video supported='yes'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <enum name='modelType'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>vga</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>cirrus</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>virtio</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>none</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>bochs</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>ramfb</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </enum>
Dec 06 10:00:36 compute-0 nova_compute[254819]:     </video>
Dec 06 10:00:36 compute-0 nova_compute[254819]:     <hostdev supported='yes'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <enum name='mode'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>subsystem</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </enum>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <enum name='startupPolicy'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>default</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>mandatory</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>requisite</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>optional</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </enum>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <enum name='subsysType'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>usb</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>pci</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>scsi</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </enum>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <enum name='capsType'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <enum name='pciBackend'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:     </hostdev>
Dec 06 10:00:36 compute-0 nova_compute[254819]:     <rng supported='yes'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <enum name='model'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>virtio</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>virtio-transitional</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>virtio-non-transitional</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </enum>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <enum name='backendModel'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>random</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>egd</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>builtin</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </enum>
Dec 06 10:00:36 compute-0 nova_compute[254819]:     </rng>
Dec 06 10:00:36 compute-0 nova_compute[254819]:     <filesystem supported='yes'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <enum name='driverType'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>path</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>handle</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>virtiofs</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </enum>
Dec 06 10:00:36 compute-0 nova_compute[254819]:     </filesystem>
Dec 06 10:00:36 compute-0 nova_compute[254819]:     <tpm supported='yes'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <enum name='model'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>tpm-tis</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>tpm-crb</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </enum>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <enum name='backendModel'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>emulator</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>external</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </enum>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <enum name='backendVersion'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>2.0</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </enum>
Dec 06 10:00:36 compute-0 nova_compute[254819]:     </tpm>
Dec 06 10:00:36 compute-0 nova_compute[254819]:     <redirdev supported='yes'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <enum name='bus'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>usb</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </enum>
Dec 06 10:00:36 compute-0 nova_compute[254819]:     </redirdev>
Dec 06 10:00:36 compute-0 nova_compute[254819]:     <channel supported='yes'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <enum name='type'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>pty</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>unix</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </enum>
Dec 06 10:00:36 compute-0 nova_compute[254819]:     </channel>
Dec 06 10:00:36 compute-0 nova_compute[254819]:     <crypto supported='yes'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <enum name='model'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <enum name='type'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>qemu</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </enum>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <enum name='backendModel'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>builtin</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </enum>
Dec 06 10:00:36 compute-0 nova_compute[254819]:     </crypto>
Dec 06 10:00:36 compute-0 nova_compute[254819]:     <interface supported='yes'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <enum name='backendType'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>default</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>passt</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </enum>
Dec 06 10:00:36 compute-0 nova_compute[254819]:     </interface>
Dec 06 10:00:36 compute-0 nova_compute[254819]:     <panic supported='yes'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <enum name='model'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>isa</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>hyperv</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </enum>
Dec 06 10:00:36 compute-0 nova_compute[254819]:     </panic>
Dec 06 10:00:36 compute-0 nova_compute[254819]:     <console supported='yes'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <enum name='type'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>null</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>vc</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>pty</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>dev</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>file</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>pipe</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>stdio</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>udp</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>tcp</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>unix</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>qemu-vdagent</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>dbus</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </enum>
Dec 06 10:00:36 compute-0 nova_compute[254819]:     </console>
Dec 06 10:00:36 compute-0 nova_compute[254819]:   </devices>
Dec 06 10:00:36 compute-0 nova_compute[254819]:   <features>
Dec 06 10:00:36 compute-0 nova_compute[254819]:     <gic supported='no'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:     <vmcoreinfo supported='yes'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:     <genid supported='yes'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:     <backingStoreInput supported='yes'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:     <backup supported='yes'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:     <async-teardown supported='yes'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:     <ps2 supported='yes'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:     <sev supported='no'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:     <sgx supported='no'/>
Dec 06 10:00:36 compute-0 nova_compute[254819]:     <hyperv supported='yes'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <enum name='features'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>relaxed</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>vapic</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>spinlocks</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>vpindex</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>runtime</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>synic</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>stimer</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>reset</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>vendor_id</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>frequencies</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>reenlightenment</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>tlbflush</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>ipi</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>avic</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>emsr_bitmap</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>xmm_input</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </enum>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <defaults>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <spinlocks>4095</spinlocks>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <stimer_direct>on</stimer_direct>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <tlbflush_direct>on</tlbflush_direct>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <tlbflush_extended>on</tlbflush_extended>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <vendor_id>Linux KVM Hv</vendor_id>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </defaults>
Dec 06 10:00:36 compute-0 nova_compute[254819]:     </hyperv>
Dec 06 10:00:36 compute-0 nova_compute[254819]:     <launchSecurity supported='yes'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       <enum name='sectype'>
Dec 06 10:00:36 compute-0 nova_compute[254819]:         <value>tdx</value>
Dec 06 10:00:36 compute-0 nova_compute[254819]:       </enum>
Dec 06 10:00:36 compute-0 nova_compute[254819]:     </launchSecurity>
Dec 06 10:00:36 compute-0 nova_compute[254819]:   </features>
Dec 06 10:00:36 compute-0 nova_compute[254819]: </domainCapabilities>
Dec 06 10:00:36 compute-0 nova_compute[254819]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.716 254824 DEBUG nova.virt.libvirt.host [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.717 254824 DEBUG nova.virt.libvirt.host [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.717 254824 INFO nova.virt.libvirt.host [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] Secure Boot support detected
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.719 254824 INFO nova.virt.libvirt.driver [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.729 254824 DEBUG nova.virt.libvirt.driver [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.749 254824 INFO nova.virt.node [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] Determined node identity 06a9c7d1-c74c-47ea-9e97-16acfab6aa88 from /var/lib/nova/compute_id
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.765 254824 WARNING nova.compute.manager [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] Compute nodes ['06a9c7d1-c74c-47ea-9e97-16acfab6aa88'] for host compute-0.ctlplane.example.com were not found in the database. If this is the first time this service is starting on this host, then you can ignore this warning.
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.790 254824 INFO nova.compute.manager [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.807 254824 WARNING nova.compute.manager [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] No compute node record found for host compute-0.ctlplane.example.com. If this is the first time this service is starting on this host, then you can ignore this warning.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.808 254824 DEBUG oslo_concurrency.lockutils [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.808 254824 DEBUG oslo_concurrency.lockutils [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.808 254824 DEBUG oslo_concurrency.lockutils [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.809 254824 DEBUG nova.compute.resource_tracker [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 10:00:36 compute-0 nova_compute[254819]: 2025-12-06 10:00:36.809 254824 DEBUG oslo_concurrency.processutils [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 10:00:36 compute-0 rsyslogd[1004]: imjournal from <np0005548915:nova_compute>: begin to drop messages due to rate-limiting
Dec 06 10:00:37 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:00:37.097Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec 06 10:00:37 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:00:37.097Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec 06 10:00:37 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:00:37.097Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 06 10:00:37 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 06 10:00:37 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1794266355' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 10:00:37 compute-0 nova_compute[254819]: 2025-12-06 10:00:37.238 254824 DEBUG oslo_concurrency.processutils [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.429s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
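
The 0.429 s `ceph df` call above is how the resource tracker sizes its RBD-backed storage. A minimal sketch of the same command and the cluster-wide numbers readable from its JSON output (the key names under `stats` vary slightly across Ceph releases; `total_bytes`/`total_avail_bytes` are an assumption to verify against your cluster):

    import json
    import subprocess

    # Same command and credentials as the log line above.
    out = subprocess.check_output(
        ['ceph', 'df', '--format=json',
         '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf'])
    stats = json.loads(out)['stats']
    gib = 1024 ** 3
    # Should agree with the mon's "60 GiB / 60 GiB avail" pgmap lines.
    print(f"avail: {stats['total_avail_bytes'] / gib:.1f} GiB "
          f"of {stats['total_bytes'] / gib:.1f} GiB")
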
Dec 06 10:00:37 compute-0 systemd[1]: Starting libvirt nodedev daemon...
Dec 06 10:00:37 compute-0 systemd[1]: Started libvirt nodedev daemon.
Dec 06 10:00:37 compute-0 ceph-mon[74327]: pgmap v573: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 10:00:37 compute-0 ceph-mon[74327]: from='client.? 192.168.122.100:0/1794266355' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 10:00:37 compute-0 nova_compute[254819]: 2025-12-06 10:00:37.515 254824 WARNING nova.virt.libvirt.driver [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 10:00:37 compute-0 nova_compute[254819]: 2025-12-06 10:00:37.516 254824 DEBUG nova.compute.resource_tracker [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4888MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 06 10:00:37 compute-0 nova_compute[254819]: 2025-12-06 10:00:37.516 254824 DEBUG oslo_concurrency.lockutils [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:00:37 compute-0 nova_compute[254819]: 2025-12-06 10:00:37.517 254824 DEBUG oslo_concurrency.lockutils [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:00:37 compute-0 nova_compute[254819]: 2025-12-06 10:00:37.535 254824 WARNING nova.compute.resource_tracker [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] No compute node record for compute-0.ctlplane.example.com:06a9c7d1-c74c-47ea-9e97-16acfab6aa88: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host 06a9c7d1-c74c-47ea-9e97-16acfab6aa88 could not be found.
Dec 06 10:00:37 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:00:37 compute-0 nova_compute[254819]: 2025-12-06 10:00:37.562 254824 INFO nova.compute.resource_tracker [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] Compute node record created for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com with uuid: 06a9c7d1-c74c-47ea-9e97-16acfab6aa88
Dec 06 10:00:37 compute-0 nova_compute[254819]: 2025-12-06 10:00:37.662 254824 DEBUG nova.compute.resource_tracker [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 10:00:37 compute-0 nova_compute[254819]: 2025-12-06 10:00:37.662 254824 DEBUG nova.compute.resource_tracker [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 06 10:00:37 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:00:37 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f47240010f0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:00:38 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:00:38 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4730001ff0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:00:38 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v574: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Dec 06 10:00:38 compute-0 ceph-mon[74327]: from='client.? 192.168.122.102:0/1675628201' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 10:00:38 compute-0 ceph-mon[74327]: from='client.? 192.168.122.101:0/656373637' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 10:00:38 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:00:38 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:00:38 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:00:38.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:00:38 compute-0 nova_compute[254819]: 2025-12-06 10:00:38.514 254824 INFO nova.scheduler.client.report [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] [req-0dc7ae3f-7e35-4d01-bc9e-78a4c4890972] Created resource provider record via placement API for resource provider with UUID 06a9c7d1-c74c-47ea-9e97-16acfab6aa88 and name compute-0.ctlplane.example.com.
Dec 06 10:00:38 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:00:38 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:00:38 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:00:38.518 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:00:38 compute-0 nova_compute[254819]: 2025-12-06 10:00:38.557 254824 DEBUG oslo_concurrency.processutils [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 10:00:38 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:00:38 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4754002270 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:00:38 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 06 10:00:38 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 10:00:38 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 06 10:00:38 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2444252692' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 10:00:38 compute-0 nova_compute[254819]: 2025-12-06 10:00:38.996 254824 DEBUG oslo_concurrency.processutils [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.439s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 10:00:39 compute-0 nova_compute[254819]: 2025-12-06 10:00:39.002 254824 DEBUG nova.virt.libvirt.host [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] /sys/module/kvm_amd/parameters/sev contains [N
Dec 06 10:00:39 compute-0 nova_compute[254819]: ] _kernel_supports_amd_sev /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1803
Dec 06 10:00:39 compute-0 nova_compute[254819]: 2025-12-06 10:00:39.002 254824 INFO nova.virt.libvirt.host [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] kernel doesn't support AMD SEV
Dec 06 10:00:39 compute-0 nova_compute[254819]: 2025-12-06 10:00:39.004 254824 DEBUG nova.compute.provider_tree [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] Updating inventory in ProviderTree for provider 06a9c7d1-c74c-47ea-9e97-16acfab6aa88 with inventory: {'MEMORY_MB': {'total': 7680, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Dec 06 10:00:39 compute-0 nova_compute[254819]: 2025-12-06 10:00:39.005 254824 DEBUG nova.virt.libvirt.driver [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 06 10:00:39 compute-0 nova_compute[254819]: 2025-12-06 10:00:39.093 254824 DEBUG nova.scheduler.client.report [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] Updated inventory for provider 06a9c7d1-c74c-47ea-9e97-16acfab6aa88 with generation 0 in Placement from set_inventory_for_provider using data: {'MEMORY_MB': {'total': 7680, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:957
Dec 06 10:00:39 compute-0 nova_compute[254819]: 2025-12-06 10:00:39.094 254824 DEBUG nova.compute.provider_tree [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] Updating resource provider 06a9c7d1-c74c-47ea-9e97-16acfab6aa88 generation from 0 to 1 during operation: update_inventory _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164
Dec 06 10:00:39 compute-0 nova_compute[254819]: 2025-12-06 10:00:39.094 254824 DEBUG nova.compute.provider_tree [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] Updating inventory in ProviderTree for provider 06a9c7d1-c74c-47ea-9e97-16acfab6aa88 with inventory: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Dec 06 10:00:39 compute-0 nova_compute[254819]: 2025-12-06 10:00:39.189 254824 DEBUG nova.compute.provider_tree [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] Updating resource provider 06a9c7d1-c74c-47ea-9e97-16acfab6aa88 generation from 1 to 2 during operation: update_traits _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164
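
The inventory pushed to Placement at 10:00:39 is what the scheduler actually enforces: usable capacity per resource class is (total - reserved) * allocation_ratio, not the raw totals from the "Final resource view" line. A worked example with the values logged above, purely illustrative:

    # Effective capacity Placement derives from the logged inventory:
    #   capacity = (total - reserved) * allocation_ratio
    inventory = {
        'MEMORY_MB': {'total': 7680, 'reserved': 512, 'allocation_ratio': 1.0},
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'DISK_GB':   {'total': 59,   'reserved': 0,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        cap = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
        print(f'{rc}: {cap:g}')
    # -> MEMORY_MB: 7168, VCPU: 32, DISK_GB: 53.1
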
Dec 06 10:00:39 compute-0 nova_compute[254819]: 2025-12-06 10:00:39.220 254824 DEBUG nova.compute.resource_tracker [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 10:00:39 compute-0 nova_compute[254819]: 2025-12-06 10:00:39.221 254824 DEBUG oslo_concurrency.lockutils [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.704s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:00:39 compute-0 nova_compute[254819]: 2025-12-06 10:00:39.221 254824 DEBUG nova.service [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] Creating RPC server for service compute start /usr/lib/python3.9/site-packages/nova/service.py:182
Dec 06 10:00:39 compute-0 nova_compute[254819]: 2025-12-06 10:00:39.315 254824 DEBUG nova.service [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] Join ServiceGroup membership for this service compute start /usr/lib/python3.9/site-packages/nova/service.py:199
Dec 06 10:00:39 compute-0 nova_compute[254819]: 2025-12-06 10:00:39.316 254824 DEBUG nova.servicegroup.drivers.db [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] DB_Driver: join new ServiceGroup member compute-0.ctlplane.example.com to the compute group, service = <Service: host=compute-0.ctlplane.example.com, binary=nova-compute, manager_class_name=nova.compute.manager.ComputeManager> join /usr/lib/python3.9/site-packages/nova/servicegroup/drivers/db.py:44
Dec 06 10:00:39 compute-0 ceph-mon[74327]: pgmap v574: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Dec 06 10:00:39 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 10:00:39 compute-0 ceph-mon[74327]: from='client.? 192.168.122.100:0/2444252692' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 10:00:39 compute-0 ceph-mon[74327]: from='client.? 192.168.122.101:0/3822391769' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 10:00:39 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:00:39 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4720004340 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:00:40 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:00:40 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f47240010f0 fd 42 proxy header rest len failed header rlen = % (will set dead)
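These ganesha.nfsd TIRPC EVENT lines recur every second or two on the same descriptor (fd 42) for the rest of the section, and the length never prints: the bare % after "rlen = " is an upstream logging quirk where the format directive loses its argument. The "proxy header rest len failed" text suggests ntirpc's PROXY-protocol header parsing is rejecting whatever the peer sends, and the steady cadence is more consistent with a periodic TCP health probe than with a real NFS client. A hedged sketch of such a probe, assuming ganesha listens on the standard NFS TCP port 2049 and expects a PROXY header it never receives:

    import socket
    import time

    HOST, PORT = "192.168.122.100", 2049  # assumed listener; not shown in the log

    # A connection that sends no valid PROXY/RPC header makes the server side's
    # svc_vc_recv fail header parsing and mark the transport dead, as logged above.
    for _ in range(3):
        with socket.create_connection((HOST, PORT), timeout=2) as s:
            s.sendall(b"PING\r\n")  # not a valid header
        time.sleep(2)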
Dec 06 10:00:40 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v575: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 10:00:40 compute-0 ceph-mon[74327]: from='client.? 192.168.122.102:0/4184093383' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 10:00:40 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:00:40 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:00:40 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:00:40.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:00:40 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:00:40 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 10:00:40 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:00:40.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
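The radosgw beast lines form a fixed pattern: an anonymous HEAD / from 192.168.122.100 and then 192.168.122.102, every two seconds, always 200 with near-zero latency. That periodicity plus the anonymous HEAD is the signature of load-balancer health checks rather than user traffic. A sketch that issues the same probe (HTTP/1.1 instead of the probe's HTTP/1.0, which does not change the check), assuming the beast frontend listens on port 8080 since the port is not shown in these lines:

    import http.client

    # Assumed host/port for the radosgw beast frontend.
    conn = http.client.HTTPConnection("compute-0.ctlplane.example.com", 8080, timeout=2)
    conn.request("HEAD", "/")      # same anonymous probe logged every ~2 s
    resp = conn.getresponse()
    print(resp.status)             # a healthy radosgw answers 200 with an empty body
    conn.close()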
Dec 06 10:00:40 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:00:40 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4730001ff0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:00:40 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:00:40] "GET /metrics HTTP/1.1" 200 48265 "" "Prometheus/2.51.0"
Dec 06 10:00:40 compute-0 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:00:40] "GET /metrics HTTP/1.1" 200 48265 "" "Prometheus/2.51.0"
Dec 06 10:00:41 compute-0 ceph-mon[74327]: pgmap v575: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 10:00:41 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:00:41 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4754002270 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:00:42 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:00:42 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4720004340 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:00:42 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v576: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 10:00:42 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:00:42 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:00:42 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:00:42.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:00:42 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:00:42 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:00:42 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:00:42.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:00:42 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:00:42 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:00:42 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f47240010f0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:00:43 compute-0 ceph-mon[74327]: pgmap v576: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 10:00:43 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:00:43 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4730001ff0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:00:44 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:00:44 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4754002270 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:00:44 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v577: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Dec 06 10:00:44 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:00:44 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:00:44 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:00:44.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:00:44 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:00:44 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 10:00:44 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:00:44.524 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 10:00:44 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:00:44 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4720004340 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:00:45 compute-0 ceph-mon[74327]: pgmap v577: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Dec 06 10:00:45 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:00:45 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4724001290 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:00:46 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:00:46 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4730001ff0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:00:46 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v578: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 10:00:46 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:00:46 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:00:46 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:00:46.470 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:00:46 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:00:46 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:00:46 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:00:46.528 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:00:46 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:00:46 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4754002270 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:00:47 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:00:47.098Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
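This alertmanager dispatch error repeats every ten seconds in this section: both ceph-dashboard webhook receivers (compute-1 and compute-2, port 8443) time out, and the warn line at 10:01:17 below pins it to a TCP dial timeout to 192.168.122.101:8443, so the peers are unreachable rather than answering with errors. Posting a minimal Alertmanager-style body at the same URL separates network reachability from receiver-side failures; the payload fields here are an assumption of what a webhook receiver typically inspects, not taken from the log:

    import requests

    url = "http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver"  # from the log
    payload = {
        "status": "firing",
        "alerts": [{
            "status": "firing",
            "labels": {"alertname": "ProbeTest", "severity": "info"},
            "annotations": {"summary": "manual connectivity probe"},
        }],
    }
    try:
        r = requests.post(url, json=payload, timeout=5)
        print(r.status_code, r.text[:200])
    except requests.exceptions.RequestException as exc:
        # Matches the log's "dial tcp ... i/o timeout" failure mode.
        print("receiver unreachable:", exc)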
Dec 06 10:00:47 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:00:47 compute-0 ceph-mon[74327]: pgmap v578: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 10:00:47 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:00:47 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4720004340 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:00:48 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:00:48 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f47240012b0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:00:48 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v579: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Dec 06 10:00:48 compute-0 podman[255200]: 2025-12-06 10:00:48.433460947 +0000 UTC m=+0.065638609 container health_status a9cf33c1bf7891b3fdd9db4717ad5f3587fe9e5acf9171920c6d6334b70a502a (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec 06 10:00:48 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:00:48 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:00:48 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:00:48.472 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:00:48 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:00:48 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:00:48 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:00:48.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:00:48 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:00:48 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4730001ff0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:00:49 compute-0 ceph-mon[74327]: pgmap v579: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Dec 06 10:00:49 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:00:49 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4754002270 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:00:50 compute-0 sudo[255222]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 10:00:50 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:00:50 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4720004340 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:00:50 compute-0 sudo[255222]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:00:50 compute-0 sudo[255222]: pam_unix(sudo:session): session closed for user root
Dec 06 10:00:50 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v580: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 10:00:50 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:00:50 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:00:50 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:00:50.475 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:00:50 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:00:50 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:00:50 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:00:50.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:00:50 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:00:50 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f47240045d0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:00:50 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:00:50] "GET /metrics HTTP/1.1" 200 48265 "" "Prometheus/2.51.0"
Dec 06 10:00:50 compute-0 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:00:50] "GET /metrics HTTP/1.1" 200 48265 "" "Prometheus/2.51.0"
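Prometheus scrapes the active mgr's exporter every ten seconds (10:00:40 above, 10:00:50 here, 10:01:00 below), pulling roughly 48 KB of metrics each time. The same endpoint can be read directly to spot-check cluster health without the dashboard; the port is an assumption (9283 is the ceph-mgr prometheus module default) since only the scraper's source address appears in the log:

    import urllib.request

    URL = "http://192.168.122.100:9283/metrics"  # assumed mgr exporter port

    with urllib.request.urlopen(URL, timeout=5) as resp:
        text = resp.read().decode()

    # ceph_health_status is one of the gauges this exporter emits:
    # 0 == HEALTH_OK, 1 == HEALTH_WARN, 2 == HEALTH_ERR.
    for line in text.splitlines():
        if line.startswith("ceph_health_status"):
            print(line)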
Dec 06 10:00:51 compute-0 ceph-mon[74327]: pgmap v580: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 10:00:51 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:00:51 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4730001ff0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:00:52 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:00:52 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4754002270 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:00:52 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v581: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 10:00:52 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:00:52 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:00:52 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:00:52.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:00:52 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:00:52 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:00:52 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:00:52.536 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:00:52 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:00:52 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:00:52 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4720004340 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:00:53 compute-0 ceph-mon[74327]: pgmap v581: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 10:00:53 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 06 10:00:53 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
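The handle_command / audit-dispatch pairs show the mgr and client.openstack driving the monitor command interface ("osd blocklist ls" here, "df" and "osd pool get-quota" further down at 10:01:05). Clients issue these through librados' mon_command, which takes the same JSON that appears in the audit lines. A sketch, assuming a readable /etc/ceph/ceph.conf and a keyring for the named client:

    import json
    import rados

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf", name="client.openstack")
    cluster.connect()

    for cmd in ({"prefix": "df", "format": "json"},
                {"prefix": "osd blocklist ls", "format": "json"}):
        ret, out, errs = cluster.mon_command(json.dumps(cmd), b"")
        print(cmd["prefix"], ret, out[:80])  # out is the JSON the mon returns

    cluster.shutdown()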
Dec 06 10:00:53 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:00:53 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f47240045d0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:00:53 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 10:00:53 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 10:00:54 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 10:00:54 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 10:00:54 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 10:00:54 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 10:00:54 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:00:54 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4730001ff0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:00:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:00:54.233 162267 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:00:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:00:54.234 162267 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:00:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:00:54.234 162267 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
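The acquire / acquired / released triplet is oslo.concurrency's lockutils at DEBUG level; it is the same machinery behind the compute_resources lock above, held for 1.704 s during _update_available_resource, while _check_child_processes holds its lock for effectively 0 s. The pattern in application code, as a minimal sketch:

    from oslo_concurrency import lockutils

    # Decorator form: the body runs under the named in-process lock and, with
    # DEBUG logging enabled, emits the acquire/held/released lines seen above.
    @lockutils.synchronized("compute_resources")
    def update_available_resource():
        ...  # critical section

    # Context-manager form for ad-hoc sections.
    with lockutils.lock("_check_child_processes"):
        pass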
Dec 06 10:00:54 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v582: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Dec 06 10:00:54 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:00:54 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:00:54 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:00:54.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:00:54 compute-0 podman[255251]: 2025-12-06 10:00:54.501337903 +0000 UTC m=+0.130814622 container health_status ed08b04f63d3695f0aa2f04fcc706897af4aa24f286fa3cd10a4323ee2c868ab (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Dec 06 10:00:54 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:00:54 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:00:54 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:00:54.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:00:54 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 10:00:54 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:00:54 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4754002270 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:00:55 compute-0 ceph-mon[74327]: pgmap v582: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Dec 06 10:00:55 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:00:55 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4720004340 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:00:56 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:00:56 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f47240045d0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:00:56 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v583: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 10:00:56 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:00:56 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:00:56 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:00:56.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:00:56 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:00:56 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:00:56 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:00:56.542 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:00:56 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:00:56 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4730001ff0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:00:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:00:57.099Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 06 10:00:57 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:00:57 compute-0 ceph-mon[74327]: pgmap v583: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 10:00:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:00:57 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4754002270 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:00:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:00:58 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4720004340 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:00:58 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v584: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Dec 06 10:00:58 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:00:58 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:00:58 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:00:58.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:00:58 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:00:58 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:00:58 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:00:58.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:00:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:00:58 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f47240045d0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:00:59 compute-0 ceph-mon[74327]: pgmap v584: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Dec 06 10:00:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:00:59 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4730001ff0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:01:00 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:01:00 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4754002270 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:01:00 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v585: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 10:01:00 compute-0 podman[255283]: 2025-12-06 10:01:00.448220442 +0000 UTC m=+0.065391121 container health_status ec1e66d2175087ad7a28a9a6b5a784e30128df3f5e268959d009e364cd5120f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
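Podman's periodic healthchecks report all three edpm-managed containers healthy in this window: multipathd at 10:00:48 and again at 10:01:19 below, ovn_controller at 10:00:54 above, and ovn_metadata_agent here, each running the mounted /openstack/healthcheck test from its config_data. The same check can be run on demand; podman healthcheck run executes the container's configured test and exits 0 when healthy:

    import subprocess

    # Container names taken from the health_status events in this section.
    for name in ("multipathd", "ovn_controller", "ovn_metadata_agent"):
        rc = subprocess.run(["podman", "healthcheck", "run", name],
                            capture_output=True).returncode
        print(name, "healthy" if rc == 0 else f"unhealthy (rc={rc})")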
Dec 06 10:01:00 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:01:00 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:01:00 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:01:00.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:01:00 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:01:00 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:01:00 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:01:00.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:01:00 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:01:00 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f47500025a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:01:00 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:01:00] "GET /metrics HTTP/1.1" 200 48267 "" "Prometheus/2.51.0"
Dec 06 10:01:00 compute-0 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:01:00] "GET /metrics HTTP/1.1" 200 48267 "" "Prometheus/2.51.0"
Dec 06 10:01:01 compute-0 CROND[255305]: (root) CMD (run-parts /etc/cron.hourly)
Dec 06 10:01:01 compute-0 run-parts[255308]: (/etc/cron.hourly) starting 0anacron
Dec 06 10:01:01 compute-0 run-parts[255314]: (/etc/cron.hourly) finished 0anacron
Dec 06 10:01:01 compute-0 CROND[255304]: (root) CMDEND (run-parts /etc/cron.hourly)
Dec 06 10:01:01 compute-0 ceph-mon[74327]: pgmap v585: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 10:01:01 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:01:01 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f47240045d0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:01:02 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:01:02 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4730001ff0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:01:02 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v586: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 10:01:02 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:01:02 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:01:02 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:01:02.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:01:02 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:01:02 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:01:02 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:01:02.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:01:02 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:01:02 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:01:02 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f472c0019c0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:01:02 compute-0 ceph-mon[74327]: pgmap v586: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 10:01:03 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:01:03 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f47500025a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:01:04 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:01:04 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f47240045d0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:01:04 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v587: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Dec 06 10:01:04 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:01:04 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:01:04 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:01:04.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:01:04 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:01:04 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:01:04 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:01:04.552 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:01:04 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:01:04 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4730001ff0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:01:05 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 06 10:01:05 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3133958809' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 10:01:05 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 06 10:01:05 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3133958809' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 10:01:05 compute-0 ceph-mon[74327]: pgmap v587: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Dec 06 10:01:05 compute-0 ceph-mon[74327]: from='client.? 192.168.122.10:0/3133958809' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 10:01:05 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:01:05 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f472c0019c0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:01:06 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:01:06 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f47500025a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:01:06 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v588: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 10:01:06 compute-0 ceph-mon[74327]: from='client.? 192.168.122.10:0/3133958809' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 10:01:06 compute-0 ceph-mon[74327]: from='client.? 192.168.122.10:0/4193963036' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 10:01:06 compute-0 ceph-mon[74327]: from='client.? 192.168.122.10:0/4193963036' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 10:01:06 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:01:06 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:01:06 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:01:06.491 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:01:06 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:01:06 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:01:06 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:01:06.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:01:06 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:01:06 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f47240045d0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:01:06 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 06 10:01:06 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/162190854' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 10:01:06 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 06 10:01:06 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/162190854' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 10:01:07 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:01:07.101Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 06 10:01:07 compute-0 ceph-mon[74327]: pgmap v588: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 10:01:07 compute-0 ceph-mon[74327]: from='client.? 192.168.122.10:0/162190854' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 10:01:07 compute-0 ceph-mon[74327]: from='client.? 192.168.122.10:0/162190854' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 10:01:07 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:01:08 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:01:08 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4730001ff0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:01:08 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:01:08 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f472c0019c0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:01:08 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v589: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Dec 06 10:01:08 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:01:08 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:01:08 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:01:08.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:01:08 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:01:08 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:01:08 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:01:08.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:01:08 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:01:08 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f47500025a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:01:08 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 06 10:01:08 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 10:01:09 compute-0 ceph-mon[74327]: pgmap v589: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Dec 06 10:01:09 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 10:01:10 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:01:10 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f47240045d0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:01:10 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:01:10 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4730001ff0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:01:10 compute-0 sudo[255324]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 10:01:10 compute-0 sudo[255324]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:01:10 compute-0 sudo[255324]: pam_unix(sudo:session): session closed for user root
Dec 06 10:01:10 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v590: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 10:01:10 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:01:10 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:01:10 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:01:10.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:01:10 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:01:10 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:01:10 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:01:10.560 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:01:10 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:01:10 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f472c0019c0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:01:10 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:01:10] "GET /metrics HTTP/1.1" 200 48264 "" "Prometheus/2.51.0"
Dec 06 10:01:10 compute-0 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:01:10] "GET /metrics HTTP/1.1" 200 48264 "" "Prometheus/2.51.0"
Dec 06 10:01:11 compute-0 ceph-mon[74327]: pgmap v590: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 10:01:12 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:01:12 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f47500025a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:01:12 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:01:12 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f47240045d0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:01:12 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v591: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 10:01:12 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:01:12 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:01:12 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:01:12.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:01:12 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:01:12 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:01:12 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:01:12.562 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:01:12 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:01:12 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:01:12 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f473000c650 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:01:13 compute-0 ceph-mon[74327]: pgmap v591: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 10:01:14 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:01:14 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f472c0019c0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:01:14 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:01:14 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f47500025a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:01:14 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v592: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Dec 06 10:01:14 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:01:14 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:01:14 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:01:14.501 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:01:14 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:01:14 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:01:14 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:01:14.565 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:01:14 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:01:14 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f47240045d0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:01:15 compute-0 ceph-mon[74327]: pgmap v592: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Dec 06 10:01:16 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:01:16 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f473000c650 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:01:16 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:01:16 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f472c0019c0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:01:16 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v593: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 10:01:16 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:01:16 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:01:16 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:01:16.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:01:16 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:01:16 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:01:16 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:01:16.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:01:16 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:01:16 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f47500025a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:01:16 compute-0 ceph-mon[74327]: pgmap v593: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 10:01:17 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:01:17.102Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec 06 10:01:17 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:01:17.102Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
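Alertmanager on compute-0 cannot deliver dashboard notifications: webhook[1] (compute-1) fails with a dial timeout, and both receivers end in "context deadline exceeded" after retries, so the alert is dropped. A hedged reachability check against the exact endpoints named in the error strings:

    import requests

    # Endpoints copied from the error strings above; the empty payload
    # and 5 s timeout are placeholders for a reachability test only.
    for host in ("compute-1", "compute-2"):
        url = f"http://{host}.ctlplane.example.com:8443/api/prometheus_receiver"
        try:
            r = requests.post(url, json={}, timeout=5)
            print(url, r.status_code)
        except requests.RequestException as exc:
            print(url, "unreachable:", exc)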
Dec 06 10:01:17 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
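_set_new_cache_sizes is the monitor's memory autotuner splitting its cache budget between the OSDMap incremental cache, the full-map cache, and the RocksDB cache; the three allocations in the line above sum to just under the reported cache_size. A worked check of the arithmetic:

    # Numbers copied from the mon log line above (bytes).
    cache_size = 1020054731
    inc_alloc  = 348127232
    full_alloc = 348127232
    kv_alloc   = 318767104

    total = inc_alloc + full_alloc + kv_alloc
    print(total, total <= cache_size,
          f"{total / 2**20:.0f} MiB of {cache_size / 2**20:.0f} MiB")
    # -> 1015021568 True 968 MiB of 973 MiB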
Dec 06 10:01:18 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:01:18 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f47240045d0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:01:18 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:01:18 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f473000c650 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:01:18 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v594: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Dec 06 10:01:18 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:01:18 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:01:18 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:01:18.505 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:01:18 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:01:18 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:01:18 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:01:18.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:01:18 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:01:18 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f472c0019c0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:01:19 compute-0 ceph-mon[74327]: pgmap v594: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Dec 06 10:01:19 compute-0 podman[255358]: 2025-12-06 10:01:19.46234884 +0000 UTC m=+0.082983732 container health_status a9cf33c1bf7891b3fdd9db4717ad5f3587fe9e5acf9171920c6d6334b70a502a (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=multipathd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
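The podman[255358] line is a health_status event for the multipathd container: the check defined under config_data ('test': '/openstack/healthcheck') passed, leaving health_failing_streak at 0. The same check can be run on demand; a sketch using the container name from the event:

    import subprocess

    # "podman healthcheck run" executes the container's configured test
    # and exits 0 when it passes; sudo is assumed for a root container.
    rc = subprocess.run(
        ["sudo", "podman", "healthcheck", "run", "multipathd"]
    ).returncode
    print("healthy" if rc == 0 else f"unhealthy (rc={rc})")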
Dec 06 10:01:20 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:01:20 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f47500025a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:01:20 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:01:20 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f47240045d0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:01:20 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v595: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 10:01:20 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:01:20 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:01:20 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:01:20.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:01:20 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:01:20 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:01:20 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:01:20.573 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:01:20 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:01:20 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f473000c650 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:01:20 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:01:20] "GET /metrics HTTP/1.1" 200 48264 "" "Prometheus/2.51.0"
Dec 06 10:01:20 compute-0 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:01:20] "GET /metrics HTTP/1.1" 200 48264 "" "Prometheus/2.51.0"
Dec 06 10:01:21 compute-0 ceph-mon[74327]: pgmap v595: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 10:01:22 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:01:22 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f472c0019c0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:01:22 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:01:22 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f47500025a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:01:22 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v596: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 10:01:22 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:01:22 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:01:22 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:01:22.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:01:22 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:01:22 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:01:22 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:01:22 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:01:22.575 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:01:22 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:01:22 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f47240045f0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:01:23 compute-0 ceph-mon[74327]: pgmap v596: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 10:01:23 compute-0 ceph-mgr[74618]: [balancer INFO root] Optimize plan auto_2025-12-06_10:01:23
Dec 06 10:01:23 compute-0 ceph-mgr[74618]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 06 10:01:23 compute-0 ceph-mgr[74618]: [balancer INFO root] do_upmap
Dec 06 10:01:23 compute-0 ceph-mgr[74618]: [balancer INFO root] pools ['cephfs.cephfs.data', 'default.rgw.meta', 'default.rgw.log', '.mgr', 'images', 'backups', 'cephfs.cephfs.meta', 'default.rgw.control', 'vms', '.rgw.root', 'volumes', '.nfs']
Dec 06 10:01:23 compute-0 ceph-mgr[74618]: [balancer INFO root] prepared 0/10 upmap changes
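This balancer pass evaluated all twelve pools in upmap mode and prepared 0 of a possible 10 changes, meaning PG placement is already even; max misplaced 0.050000 caps the fraction of data allowed to be in motion at once. A sketch for confirming the same state from the CLI, assuming the usual JSON output of "ceph balancer status":

    import json, subprocess

    # Key names (active, mode, optimize_result) follow the standard
    # "ceph balancer status" output; treat them as assumptions here.
    status = json.loads(subprocess.check_output(
        ["ceph", "balancer", "status", "-f", "json"]))
    print(status.get("active"), status.get("mode"),
          status.get("optimize_result"))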
Dec 06 10:01:23 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 06 10:01:23 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 10:01:23 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 10:01:23 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 10:01:24 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 10:01:24 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 10:01:24 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 10:01:24 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 10:01:24 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:01:24 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f473000c650 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:01:24 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:01:24 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f472c0019c0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:01:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] _maybe_adjust
Dec 06 10:01:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:01:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 06 10:01:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:01:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 10:01:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:01:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 10:01:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:01:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 10:01:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:01:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 10:01:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:01:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Dec 06 10:01:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:01:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 10:01:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:01:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec 06 10:01:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:01:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 06 10:01:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:01:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec 06 10:01:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:01:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 10:01:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:01:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
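Each pg_autoscaler "pg target" above is reproducible from the numbers in its own line: target = capacity_ratio x bias x 300, where the factor 300 is consistent with the default mon_target_pg_per_osd (100) times the three OSDs backing this 60 GiB cluster. The result is then quantized to a power of two, and pools whose ideal count sits far below their current value (32 or 16 here) are left alone, since the autoscaler only acts on large deviations. A worked check of two of the lines:

    # The factor 300 is an inference (mon_target_pg_per_osd=100 x 3
    # OSDs), not read from the log; ratios, biases and targets are.
    for pool, ratio, bias, logged in [
        (".mgr",               7.185749983720779e-06, 1.0,
         0.0021557249951162337),
        ("cephfs.cephfs.meta", 5.087256625643029e-07, 4.0,
         0.0006104707950771635),
    ]:
        target = ratio * bias * 300
        print(pool, target, abs(target - logged) < 1e-12)
    # -> both lines print True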
Dec 06 10:01:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 06 10:01:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 10:01:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 10:01:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 10:01:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 10:01:24 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v597: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Dec 06 10:01:24 compute-0 sudo[255384]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 10:01:24 compute-0 sudo[255384]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:01:24 compute-0 sudo[255384]: pam_unix(sudo:session): session closed for user root
Dec 06 10:01:24 compute-0 sudo[255409]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Dec 06 10:01:24 compute-0 sudo[255409]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:01:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 06 10:01:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 10:01:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 10:01:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 10:01:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: images, start_after=
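The rbd_support module's MirrorSnapshotScheduleHandler and TrashPurgeScheduleHandler rescan every RBD pool (vms, volumes, backups, images) for schedules; an empty start_after means a full reload. The matching CLIs list whatever those handlers would pick up, likely nothing on this cluster; a sketch:

    import subprocess

    # -R lists schedules recursively across pools; empty output means
    # no schedules are configured.
    for cmd in (["rbd", "mirror", "snapshot", "schedule", "ls", "-R"],
                ["rbd", "trash", "purge", "schedule", "ls", "-R"]):
        out = subprocess.run(cmd, capture_output=True, text=True)
        print(" ".join(cmd), "->", out.stdout.strip() or "(none)")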
Dec 06 10:01:24 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:01:24 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:01:24 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:01:24.511 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:01:24 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:01:24 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 10:01:24 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:01:24.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 10:01:24 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 10:01:24 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:01:24 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f47500025a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:01:24 compute-0 sudo[255409]: pam_unix(sudo:session): session closed for user root
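The sudo pair above (a "which python3" probe, then python3 running a content-addressed copy of the cephadm binary under /var/lib/ceph/<fsid>/) is cephadm's standard remote-execution pattern: the mgr connects as ceph-admin and shells out with an 895 s timeout. The same gather-facts call can be replayed by hand; a sketch using the binary path from the log, with output key names treated as assumptions:

    import json, subprocess

    binary = ("/var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/"
              "cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36")
    facts = json.loads(subprocess.check_output(
        ["sudo", "python3", binary, "--timeout", "895", "gather-facts"]))
    # gather-facts emits one JSON document of host facts; exact keys
    # vary by cephadm version, hence the defensive .get().
    print(facts.get("hostname"), facts.get("kernel"))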
Dec 06 10:01:25 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 06 10:01:25 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 10:01:25 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 06 10:01:25 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 10:01:25 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 06 10:01:25 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 10:01:25 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Dec 06 10:01:25 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 10:01:25 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 06 10:01:25 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 10:01:25 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 06 10:01:25 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 10:01:25 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 06 10:01:25 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 10:01:25 compute-0 sudo[255465]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 10:01:25 compute-0 sudo[255465]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:01:25 compute-0 sudo[255465]: pam_unix(sudo:session): session closed for user root
Dec 06 10:01:25 compute-0 sudo[255497]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 5ecd3f74-dade-5fc4-92ce-8950ae424258 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Dec 06 10:01:25 compute-0 sudo[255497]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:01:25 compute-0 podman[255490]: 2025-12-06 10:01:25.247387638 +0000 UTC m=+0.088297373 container health_status ed08b04f63d3695f0aa2f04fcc706897af4aa24f286fa3cd10a4323ee2c868ab (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec 06 10:01:25 compute-0 nova_compute[254819]: 2025-12-06 10:01:25.318 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:01:25 compute-0 nova_compute[254819]: 2025-12-06 10:01:25.343 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:01:25 compute-0 podman[255584]: 2025-12-06 10:01:25.662745363 +0000 UTC m=+0.072691406 container create 6ff48dbc82eae136bd255e621059abbfd05ef6f62ffd696fb80fc6825e77c18d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_shirley, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 10:01:25 compute-0 ceph-mon[74327]: pgmap v597: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Dec 06 10:01:25 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 10:01:25 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 10:01:25 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 10:01:25 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 10:01:25 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 10:01:25 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 10:01:25 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 10:01:25 compute-0 systemd[1]: Started libpod-conmon-6ff48dbc82eae136bd255e621059abbfd05ef6f62ffd696fb80fc6825e77c18d.scope.
Dec 06 10:01:25 compute-0 podman[255584]: 2025-12-06 10:01:25.619170728 +0000 UTC m=+0.029116791 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 06 10:01:25 compute-0 systemd[1]: Started libcrun container.
Dec 06 10:01:25 compute-0 podman[255584]: 2025-12-06 10:01:25.750632466 +0000 UTC m=+0.160578539 container init 6ff48dbc82eae136bd255e621059abbfd05ef6f62ffd696fb80fc6825e77c18d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_shirley, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 10:01:25 compute-0 podman[255584]: 2025-12-06 10:01:25.758532747 +0000 UTC m=+0.168478780 container start 6ff48dbc82eae136bd255e621059abbfd05ef6f62ffd696fb80fc6825e77c18d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_shirley, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 10:01:25 compute-0 podman[255584]: 2025-12-06 10:01:25.761786194 +0000 UTC m=+0.171732237 container attach 6ff48dbc82eae136bd255e621059abbfd05ef6f62ffd696fb80fc6825e77c18d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_shirley, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 10:01:25 compute-0 trusting_shirley[255600]: 167 167
Dec 06 10:01:25 compute-0 systemd[1]: libpod-6ff48dbc82eae136bd255e621059abbfd05ef6f62ffd696fb80fc6825e77c18d.scope: Deactivated successfully.
Dec 06 10:01:25 compute-0 conmon[255600]: conmon 6ff48dbc82eae136bd25 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-6ff48dbc82eae136bd255e621059abbfd05ef6f62ffd696fb80fc6825e77c18d.scope/container/memory.events
Dec 06 10:01:25 compute-0 podman[255584]: 2025-12-06 10:01:25.768468903 +0000 UTC m=+0.178414986 container died 6ff48dbc82eae136bd255e621059abbfd05ef6f62ffd696fb80fc6825e77c18d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_shirley, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 10:01:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-f7598a8e3dfe751256f024f5bbe8932ed52798f97660fa528ca86a39bd3d45dc-merged.mount: Deactivated successfully.
Dec 06 10:01:25 compute-0 podman[255584]: 2025-12-06 10:01:25.814982948 +0000 UTC m=+0.224928991 container remove 6ff48dbc82eae136bd255e621059abbfd05ef6f62ffd696fb80fc6825e77c18d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_shirley, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 10:01:25 compute-0 systemd[1]: libpod-conmon-6ff48dbc82eae136bd255e621059abbfd05ef6f62ffd696fb80fc6825e77c18d.scope: Deactivated successfully.
Dec 06 10:01:25 compute-0 podman[255624]: 2025-12-06 10:01:25.983541858 +0000 UTC m=+0.051464589 container create e5d0913c1ca135d4e2ebf89c48c100e8de1f6af36a6fe11687d6d6a67f08cc84 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_faraday, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.40.1, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 10:01:26 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:01:26 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4724004610 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:01:26 compute-0 systemd[1]: Started libpod-conmon-e5d0913c1ca135d4e2ebf89c48c100e8de1f6af36a6fe11687d6d6a67f08cc84.scope.
Dec 06 10:01:26 compute-0 podman[255624]: 2025-12-06 10:01:25.96079142 +0000 UTC m=+0.028714151 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 06 10:01:26 compute-0 systemd[1]: Started libcrun container.
Dec 06 10:01:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7118e06460b971de825fcfd8e96b1f5faac14ff146e0d217390de46c25fad25b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 10:01:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7118e06460b971de825fcfd8e96b1f5faac14ff146e0d217390de46c25fad25b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 10:01:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7118e06460b971de825fcfd8e96b1f5faac14ff146e0d217390de46c25fad25b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 10:01:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7118e06460b971de825fcfd8e96b1f5faac14ff146e0d217390de46c25fad25b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 10:01:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7118e06460b971de825fcfd8e96b1f5faac14ff146e0d217390de46c25fad25b/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
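The xfs "supports timestamps until 2038 (0x7fffffff)" notices are informational: the overlay paths podman remounts sit on an xfs filesystem formatted without bigtime, whose inode timestamps stop at the 32-bit epoch limit. The hex value decodes to the familiar cutoff:

    from datetime import datetime, timezone

    # 0x7fffffff from the kernel notices above is the signed 32-bit
    # epoch cap.
    print(datetime.fromtimestamp(0x7FFFFFFF, tz=timezone.utc))
    # -> 2038-01-19 03:14:07+00:00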
Dec 06 10:01:26 compute-0 podman[255624]: 2025-12-06 10:01:26.077809521 +0000 UTC m=+0.145732262 container init e5d0913c1ca135d4e2ebf89c48c100e8de1f6af36a6fe11687d6d6a67f08cc84 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_faraday, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 10:01:26 compute-0 podman[255624]: 2025-12-06 10:01:26.087400037 +0000 UTC m=+0.155322778 container start e5d0913c1ca135d4e2ebf89c48c100e8de1f6af36a6fe11687d6d6a67f08cc84 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_faraday, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 10:01:26 compute-0 podman[255624]: 2025-12-06 10:01:26.091705232 +0000 UTC m=+0.159627963 container attach e5d0913c1ca135d4e2ebf89c48c100e8de1f6af36a6fe11687d6d6a67f08cc84 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_faraday, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 10:01:26 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:01:26 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f473000c650 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:01:26 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v598: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 10:01:26 compute-0 modest_faraday[255641]: --> passed data devices: 0 physical, 1 LVM
Dec 06 10:01:26 compute-0 modest_faraday[255641]: --> All data devices are unavailable
Dec 06 10:01:26 compute-0 systemd[1]: libpod-e5d0913c1ca135d4e2ebf89c48c100e8de1f6af36a6fe11687d6d6a67f08cc84.scope: Deactivated successfully.
Dec 06 10:01:26 compute-0 podman[255624]: 2025-12-06 10:01:26.431311751 +0000 UTC m=+0.499234492 container died e5d0913c1ca135d4e2ebf89c48c100e8de1f6af36a6fe11687d6d6a67f08cc84 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_faraday, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325)
Dec 06 10:01:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-7118e06460b971de825fcfd8e96b1f5faac14ff146e0d217390de46c25fad25b-merged.mount: Deactivated successfully.
Dec 06 10:01:26 compute-0 podman[255624]: 2025-12-06 10:01:26.484639298 +0000 UTC m=+0.552561999 container remove e5d0913c1ca135d4e2ebf89c48c100e8de1f6af36a6fe11687d6d6a67f08cc84 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_faraday, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Dec 06 10:01:26 compute-0 systemd[1]: libpod-conmon-e5d0913c1ca135d4e2ebf89c48c100e8de1f6af36a6fe11687d6d6a67f08cc84.scope: Deactivated successfully.
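The modest_faraday run is the cephadm-driven "ceph-volume lvm batch --no-auto /dev/ceph_vg0/ceph_lv0" from the earlier sudo line: it reports "passed data devices: 0 physical, 1 LVM" and then "All data devices are unavailable" because the LV already carries an OSD (the lvm list output below shows ceph.osd_id=1 in its tags), so the batch is an idempotent no-op rather than a failure. A sketch for checking that availability signal directly via lvm2's JSON reporting:

    import json, subprocess

    # Standard lvm2 JSON report layout: report[0].lv[0]; the tag test
    # mirrors what ceph-volume treats as "already an OSD".
    out = subprocess.check_output(
        ["sudo", "lvs", "--reportformat", "json",
         "-o", "lv_name,lv_tags", "ceph_vg0/ceph_lv0"])
    lv = json.loads(out)["report"][0]["lv"][0]
    print(lv["lv_name"], "tagged as OSD:", "ceph.osd_id=" in lv["lv_tags"])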
Dec 06 10:01:26 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:01:26 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:01:26 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:01:26.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:01:26 compute-0 sudo[255497]: pam_unix(sudo:session): session closed for user root
Dec 06 10:01:26 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:01:26 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:01:26 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:01:26.579 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:01:26 compute-0 sudo[255668]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 10:01:26 compute-0 sudo[255668]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:01:26 compute-0 sudo[255668]: pam_unix(sudo:session): session closed for user root
Dec 06 10:01:26 compute-0 sudo[255693]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 5ecd3f74-dade-5fc4-92ce-8950ae424258 -- lvm list --format json
Dec 06 10:01:26 compute-0 sudo[255693]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:01:26 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:01:26 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f472c0019c0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:01:27 compute-0 podman[255760]: 2025-12-06 10:01:27.060311552 +0000 UTC m=+0.041491451 container create ca331978f3b959165794826180d97e8b8aadf4ee6d4e14dab245d7d874c34ef9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_mestorf, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 10:01:27 compute-0 systemd[1]: Started libpod-conmon-ca331978f3b959165794826180d97e8b8aadf4ee6d4e14dab245d7d874c34ef9.scope.
Dec 06 10:01:27 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:01:27.103Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 06 10:01:27 compute-0 systemd[1]: Started libcrun container.
Dec 06 10:01:27 compute-0 podman[255760]: 2025-12-06 10:01:27.043891223 +0000 UTC m=+0.025071142 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 06 10:01:27 compute-0 podman[255760]: 2025-12-06 10:01:27.1417073 +0000 UTC m=+0.122887239 container init ca331978f3b959165794826180d97e8b8aadf4ee6d4e14dab245d7d874c34ef9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_mestorf, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 06 10:01:27 compute-0 podman[255760]: 2025-12-06 10:01:27.149378626 +0000 UTC m=+0.130558535 container start ca331978f3b959165794826180d97e8b8aadf4ee6d4e14dab245d7d874c34ef9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_mestorf, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Dec 06 10:01:27 compute-0 podman[255760]: 2025-12-06 10:01:27.152424357 +0000 UTC m=+0.133604276 container attach ca331978f3b959165794826180d97e8b8aadf4ee6d4e14dab245d7d874c34ef9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_mestorf, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Dec 06 10:01:27 compute-0 awesome_mestorf[255777]: 167 167
Dec 06 10:01:27 compute-0 systemd[1]: libpod-ca331978f3b959165794826180d97e8b8aadf4ee6d4e14dab245d7d874c34ef9.scope: Deactivated successfully.
Dec 06 10:01:27 compute-0 podman[255760]: 2025-12-06 10:01:27.155194702 +0000 UTC m=+0.136374621 container died ca331978f3b959165794826180d97e8b8aadf4ee6d4e14dab245d7d874c34ef9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_mestorf, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 10:01:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-9c34ffe59b0f026ab1d7c05d50e62cd8217bde59d9f2a2eb0b64652b89d2d05e-merged.mount: Deactivated successfully.
Dec 06 10:01:27 compute-0 podman[255760]: 2025-12-06 10:01:27.197508954 +0000 UTC m=+0.178688873 container remove ca331978f3b959165794826180d97e8b8aadf4ee6d4e14dab245d7d874c34ef9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_mestorf, CEPH_REF=squid, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS)
Dec 06 10:01:27 compute-0 systemd[1]: libpod-conmon-ca331978f3b959165794826180d97e8b8aadf4ee6d4e14dab245d7d874c34ef9.scope: Deactivated successfully.
Dec 06 10:01:27 compute-0 podman[255799]: 2025-12-06 10:01:27.413546685 +0000 UTC m=+0.092629810 container create 41681b6b257c9bf359d50808ded2f61c889cd33424fa4c8dfb90a3a62de54167 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_blackburn, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 10:01:27 compute-0 podman[255799]: 2025-12-06 10:01:27.344665132 +0000 UTC m=+0.023748287 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 06 10:01:27 compute-0 systemd[1]: Started libpod-conmon-41681b6b257c9bf359d50808ded2f61c889cd33424fa4c8dfb90a3a62de54167.scope.
Dec 06 10:01:27 compute-0 systemd[1]: Started libcrun container.
Dec 06 10:01:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b18db7853b930b0d79be38f0eeffa445bf333148e3d5a812faefcda42b0a2517/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 10:01:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b18db7853b930b0d79be38f0eeffa445bf333148e3d5a812faefcda42b0a2517/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 10:01:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b18db7853b930b0d79be38f0eeffa445bf333148e3d5a812faefcda42b0a2517/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 10:01:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b18db7853b930b0d79be38f0eeffa445bf333148e3d5a812faefcda42b0a2517/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 10:01:27 compute-0 podman[255799]: 2025-12-06 10:01:27.536756062 +0000 UTC m=+0.215839227 container init 41681b6b257c9bf359d50808ded2f61c889cd33424fa4c8dfb90a3a62de54167 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_blackburn, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1)
Dec 06 10:01:27 compute-0 podman[255799]: 2025-12-06 10:01:27.55089316 +0000 UTC m=+0.229976315 container start 41681b6b257c9bf359d50808ded2f61c889cd33424fa4c8dfb90a3a62de54167 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_blackburn, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Dec 06 10:01:27 compute-0 podman[255799]: 2025-12-06 10:01:27.55687442 +0000 UTC m=+0.235957725 container attach 41681b6b257c9bf359d50808ded2f61c889cd33424fa4c8dfb90a3a62de54167 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_blackburn, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Dec 06 10:01:27 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:01:27 compute-0 optimistic_blackburn[255816]: {
Dec 06 10:01:27 compute-0 optimistic_blackburn[255816]:     "1": [
Dec 06 10:01:27 compute-0 optimistic_blackburn[255816]:         {
Dec 06 10:01:27 compute-0 optimistic_blackburn[255816]:             "devices": [
Dec 06 10:01:27 compute-0 optimistic_blackburn[255816]:                 "/dev/loop3"
Dec 06 10:01:27 compute-0 optimistic_blackburn[255816]:             ],
Dec 06 10:01:27 compute-0 optimistic_blackburn[255816]:             "lv_name": "ceph_lv0",
Dec 06 10:01:27 compute-0 optimistic_blackburn[255816]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 10:01:27 compute-0 optimistic_blackburn[255816]:             "lv_size": "21470642176",
Dec 06 10:01:27 compute-0 optimistic_blackburn[255816]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=a55pcS-v3Vt-CJ4W-fqCV-Baze-9VlY-jG9prS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=5ecd3f74-dade-5fc4-92ce-8950ae424258,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=7899c4d8-edb4-4836-b838-c4aa702ad7af,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 06 10:01:27 compute-0 optimistic_blackburn[255816]:             "lv_uuid": "a55pcS-v3Vt-CJ4W-fqCV-Baze-9VlY-jG9prS",
Dec 06 10:01:27 compute-0 optimistic_blackburn[255816]:             "name": "ceph_lv0",
Dec 06 10:01:27 compute-0 optimistic_blackburn[255816]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 10:01:27 compute-0 optimistic_blackburn[255816]:             "tags": {
Dec 06 10:01:27 compute-0 optimistic_blackburn[255816]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 06 10:01:27 compute-0 optimistic_blackburn[255816]:                 "ceph.block_uuid": "a55pcS-v3Vt-CJ4W-fqCV-Baze-9VlY-jG9prS",
Dec 06 10:01:27 compute-0 optimistic_blackburn[255816]:                 "ceph.cephx_lockbox_secret": "",
Dec 06 10:01:27 compute-0 optimistic_blackburn[255816]:                 "ceph.cluster_fsid": "5ecd3f74-dade-5fc4-92ce-8950ae424258",
Dec 06 10:01:27 compute-0 optimistic_blackburn[255816]:                 "ceph.cluster_name": "ceph",
Dec 06 10:01:27 compute-0 optimistic_blackburn[255816]:                 "ceph.crush_device_class": "",
Dec 06 10:01:27 compute-0 optimistic_blackburn[255816]:                 "ceph.encrypted": "0",
Dec 06 10:01:27 compute-0 optimistic_blackburn[255816]:                 "ceph.osd_fsid": "7899c4d8-edb4-4836-b838-c4aa702ad7af",
Dec 06 10:01:27 compute-0 optimistic_blackburn[255816]:                 "ceph.osd_id": "1",
Dec 06 10:01:27 compute-0 optimistic_blackburn[255816]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 06 10:01:27 compute-0 optimistic_blackburn[255816]:                 "ceph.type": "block",
Dec 06 10:01:27 compute-0 optimistic_blackburn[255816]:                 "ceph.vdo": "0",
Dec 06 10:01:27 compute-0 optimistic_blackburn[255816]:                 "ceph.with_tpm": "0"
Dec 06 10:01:27 compute-0 optimistic_blackburn[255816]:             },
Dec 06 10:01:27 compute-0 optimistic_blackburn[255816]:             "type": "block",
Dec 06 10:01:27 compute-0 optimistic_blackburn[255816]:             "vg_name": "ceph_vg0"
Dec 06 10:01:27 compute-0 optimistic_blackburn[255816]:         }
Dec 06 10:01:27 compute-0 optimistic_blackburn[255816]:     ]
Dec 06 10:01:27 compute-0 optimistic_blackburn[255816]: }
Dec 06 10:01:27 compute-0 systemd[1]: libpod-41681b6b257c9bf359d50808ded2f61c889cd33424fa4c8dfb90a3a62de54167.scope: Deactivated successfully.
Dec 06 10:01:27 compute-0 podman[255799]: 2025-12-06 10:01:27.889633065 +0000 UTC m=+0.568716200 container died 41681b6b257c9bf359d50808ded2f61c889cd33424fa4c8dfb90a3a62de54167 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_blackburn, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 10:01:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-b18db7853b930b0d79be38f0eeffa445bf333148e3d5a812faefcda42b0a2517-merged.mount: Deactivated successfully.
Dec 06 10:01:27 compute-0 podman[255799]: 2025-12-06 10:01:27.938146234 +0000 UTC m=+0.617229359 container remove 41681b6b257c9bf359d50808ded2f61c889cd33424fa4c8dfb90a3a62de54167 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_blackburn, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 10:01:27 compute-0 systemd[1]: libpod-conmon-41681b6b257c9bf359d50808ded2f61c889cd33424fa4c8dfb90a3a62de54167.scope: Deactivated successfully.
Dec 06 10:01:27 compute-0 sudo[255693]: pam_unix(sudo:session): session closed for user root
Dec 06 10:01:28 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:01:28 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f47500025a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:01:28 compute-0 ceph-mon[74327]: pgmap v598: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 10:01:28 compute-0 sudo[255837]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 10:01:28 compute-0 sudo[255837]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:01:28 compute-0 sudo[255837]: pam_unix(sudo:session): session closed for user root
Dec 06 10:01:28 compute-0 sudo[255862]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 5ecd3f74-dade-5fc4-92ce-8950ae424258 -- raw list --format json
Dec 06 10:01:28 compute-0 sudo[255862]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:01:28 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:01:28 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4724004630 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:01:28 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v599: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Dec 06 10:01:28 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:01:28 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 10:01:28 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:01:28.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 10:01:28 compute-0 podman[255926]: 2025-12-06 10:01:28.535424887 +0000 UTC m=+0.041278326 container create 38a6b37f220dedad90b163aede8b34581451b88bd19acecbb2a5b0697a2208ab (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_lederberg, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1)
Dec 06 10:01:28 compute-0 systemd[1]: Started libpod-conmon-38a6b37f220dedad90b163aede8b34581451b88bd19acecbb2a5b0697a2208ab.scope.
Dec 06 10:01:28 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:01:28 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:01:28 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:01:28.582 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:01:28 compute-0 systemd[1]: Started libcrun container.
Dec 06 10:01:28 compute-0 podman[255926]: 2025-12-06 10:01:28.520062325 +0000 UTC m=+0.025915794 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 06 10:01:28 compute-0 podman[255926]: 2025-12-06 10:01:28.623293818 +0000 UTC m=+0.129147287 container init 38a6b37f220dedad90b163aede8b34581451b88bd19acecbb2a5b0697a2208ab (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_lederberg, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2)
Dec 06 10:01:28 compute-0 podman[255926]: 2025-12-06 10:01:28.630663385 +0000 UTC m=+0.136516824 container start 38a6b37f220dedad90b163aede8b34581451b88bd19acecbb2a5b0697a2208ab (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_lederberg, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 10:01:28 compute-0 podman[255926]: 2025-12-06 10:01:28.634316313 +0000 UTC m=+0.140169752 container attach 38a6b37f220dedad90b163aede8b34581451b88bd19acecbb2a5b0697a2208ab (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_lederberg, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec 06 10:01:28 compute-0 cranky_lederberg[255942]: 167 167
Dec 06 10:01:28 compute-0 systemd[1]: libpod-38a6b37f220dedad90b163aede8b34581451b88bd19acecbb2a5b0697a2208ab.scope: Deactivated successfully.
Dec 06 10:01:28 compute-0 podman[255926]: 2025-12-06 10:01:28.639335797 +0000 UTC m=+0.145189236 container died 38a6b37f220dedad90b163aede8b34581451b88bd19acecbb2a5b0697a2208ab (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_lederberg, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 10:01:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-ea81817d7dac92f471353eb473264577862b634925cd179730f1d14ffe859043-merged.mount: Deactivated successfully.
Dec 06 10:01:28 compute-0 podman[255926]: 2025-12-06 10:01:28.677962981 +0000 UTC m=+0.183816420 container remove 38a6b37f220dedad90b163aede8b34581451b88bd19acecbb2a5b0697a2208ab (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_lederberg, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 10:01:28 compute-0 systemd[1]: libpod-conmon-38a6b37f220dedad90b163aede8b34581451b88bd19acecbb2a5b0697a2208ab.scope: Deactivated successfully.
Dec 06 10:01:28 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:01:28 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f473000c650 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:01:28 compute-0 podman[255966]: 2025-12-06 10:01:28.845252487 +0000 UTC m=+0.046276969 container create b5d79314b4849113b318a05e9ba37feb8e0afcb12dbfa47df57d301fec5c8361 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_rhodes, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 10:01:28 compute-0 systemd[1]: Started libpod-conmon-b5d79314b4849113b318a05e9ba37feb8e0afcb12dbfa47df57d301fec5c8361.scope.
Dec 06 10:01:28 compute-0 systemd[1]: Started libcrun container.
Dec 06 10:01:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/58037e974751e3a070740fd9654b46aa02fd214c91db36cebe752d215be1fcff/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 10:01:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/58037e974751e3a070740fd9654b46aa02fd214c91db36cebe752d215be1fcff/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 10:01:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/58037e974751e3a070740fd9654b46aa02fd214c91db36cebe752d215be1fcff/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 10:01:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/58037e974751e3a070740fd9654b46aa02fd214c91db36cebe752d215be1fcff/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 10:01:28 compute-0 podman[255966]: 2025-12-06 10:01:28.827273407 +0000 UTC m=+0.028297919 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 06 10:01:28 compute-0 podman[255966]: 2025-12-06 10:01:28.922856894 +0000 UTC m=+0.123881386 container init b5d79314b4849113b318a05e9ba37feb8e0afcb12dbfa47df57d301fec5c8361 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_rhodes, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 10:01:28 compute-0 podman[255966]: 2025-12-06 10:01:28.928698251 +0000 UTC m=+0.129722733 container start b5d79314b4849113b318a05e9ba37feb8e0afcb12dbfa47df57d301fec5c8361 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_rhodes, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Dec 06 10:01:28 compute-0 podman[255966]: 2025-12-06 10:01:28.931750882 +0000 UTC m=+0.132775364 container attach b5d79314b4849113b318a05e9ba37feb8e0afcb12dbfa47df57d301fec5c8361 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_rhodes, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Dec 06 10:01:29 compute-0 ceph-mon[74327]: pgmap v599: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Dec 06 10:01:29 compute-0 lvm[256059]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 06 10:01:29 compute-0 lvm[256059]: VG ceph_vg0 finished
Dec 06 10:01:29 compute-0 gallant_rhodes[255983]: {}
Dec 06 10:01:29 compute-0 systemd[1]: libpod-b5d79314b4849113b318a05e9ba37feb8e0afcb12dbfa47df57d301fec5c8361.scope: Deactivated successfully.
Dec 06 10:01:29 compute-0 systemd[1]: libpod-b5d79314b4849113b318a05e9ba37feb8e0afcb12dbfa47df57d301fec5c8361.scope: Consumed 1.179s CPU time.
Dec 06 10:01:29 compute-0 podman[255966]: 2025-12-06 10:01:29.651607996 +0000 UTC m=+0.852632478 container died b5d79314b4849113b318a05e9ba37feb8e0afcb12dbfa47df57d301fec5c8361 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_rhodes, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Dec 06 10:01:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-58037e974751e3a070740fd9654b46aa02fd214c91db36cebe752d215be1fcff-merged.mount: Deactivated successfully.
Dec 06 10:01:29 compute-0 podman[255966]: 2025-12-06 10:01:29.704661205 +0000 UTC m=+0.905685687 container remove b5d79314b4849113b318a05e9ba37feb8e0afcb12dbfa47df57d301fec5c8361 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_rhodes, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 10:01:29 compute-0 systemd[1]: libpod-conmon-b5d79314b4849113b318a05e9ba37feb8e0afcb12dbfa47df57d301fec5c8361.scope: Deactivated successfully.
Dec 06 10:01:29 compute-0 sudo[255862]: pam_unix(sudo:session): session closed for user root
Dec 06 10:01:29 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 06 10:01:29 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 10:01:29 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 06 10:01:29 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 10:01:29 compute-0 sudo[256077]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 10:01:29 compute-0 sudo[256077]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:01:29 compute-0 sudo[256077]: pam_unix(sudo:session): session closed for user root
Dec 06 10:01:30 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:01:30 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f472c0019c0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:01:30 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:01:30 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f47500025a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:01:30 compute-0 rsyslogd[1004]: imjournal: 4443 messages lost due to rate-limiting (20000 allowed within 600 seconds)
Dec 06 10:01:30 compute-0 sudo[256102]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 10:01:30 compute-0 sudo[256102]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:01:30 compute-0 sudo[256102]: pam_unix(sudo:session): session closed for user root
Dec 06 10:01:30 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v600: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 10:01:30 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:01:30 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.002000053s ======
Dec 06 10:01:30 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:01:30.518 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000053s
Dec 06 10:01:30 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:01:30 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:01:30 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:01:30.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:01:30 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:01:30 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4724004650 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:01:30 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 10:01:30 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 10:01:30 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:01:30] "GET /metrics HTTP/1.1" 200 48265 "" "Prometheus/2.51.0"
Dec 06 10:01:30 compute-0 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:01:30] "GET /metrics HTTP/1.1" 200 48265 "" "Prometheus/2.51.0"
Dec 06 10:01:31 compute-0 podman[256128]: 2025-12-06 10:01:31.457551152 +0000 UTC m=+0.083681930 container health_status ec1e66d2175087ad7a28a9a6b5a784e30128df3f5e268959d009e364cd5120f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec 06 10:01:31 compute-0 ceph-mon[74327]: pgmap v600: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 10:01:32 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:01:32 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f473000c650 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:01:32 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:01:32 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f473000c650 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:01:32 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v601: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 10:01:32 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:01:32 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:01:32 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:01:32.521 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:01:32 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:01:32 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:01:32 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:01:32 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:01:32.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:01:32 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:01:32 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4720002690 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:01:32 compute-0 ceph-mon[74327]: pgmap v601: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 10:01:34 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:01:34 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f47240046e0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:01:34 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:01:34 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f47240046e0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:01:34 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v602: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Dec 06 10:01:34 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:01:34 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:01:34 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:01:34.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:01:34 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:01:34 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:01:34 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:01:34.588 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:01:34 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:01:34 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4754009ad0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:01:35 compute-0 ceph-mon[74327]: pgmap v602: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Dec 06 10:01:35 compute-0 nova_compute[254819]: 2025-12-06 10:01:35.750 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:01:35 compute-0 nova_compute[254819]: 2025-12-06 10:01:35.751 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:01:35 compute-0 nova_compute[254819]: 2025-12-06 10:01:35.752 254824 DEBUG nova.compute.manager [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 06 10:01:35 compute-0 nova_compute[254819]: 2025-12-06 10:01:35.752 254824 DEBUG nova.compute.manager [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 06 10:01:35 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-haproxy-nfs-cephfs-compute-0-fzuvue[96127]: [WARNING] 339/100135 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 1ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Dec 06 10:01:35 compute-0 nova_compute[254819]: 2025-12-06 10:01:35.990 254824 DEBUG nova.compute.manager [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 06 10:01:35 compute-0 nova_compute[254819]: 2025-12-06 10:01:35.990 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:01:35 compute-0 nova_compute[254819]: 2025-12-06 10:01:35.991 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:01:35 compute-0 nova_compute[254819]: 2025-12-06 10:01:35.991 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:01:35 compute-0 nova_compute[254819]: 2025-12-06 10:01:35.992 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:01:35 compute-0 nova_compute[254819]: 2025-12-06 10:01:35.992 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:01:35 compute-0 nova_compute[254819]: 2025-12-06 10:01:35.992 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:01:35 compute-0 nova_compute[254819]: 2025-12-06 10:01:35.992 254824 DEBUG nova.compute.manager [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 06 10:01:35 compute-0 nova_compute[254819]: 2025-12-06 10:01:35.992 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:01:36 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:01:36 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_33] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4720002690 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:01:36 compute-0 nova_compute[254819]: 2025-12-06 10:01:36.131 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:01:36 compute-0 nova_compute[254819]: 2025-12-06 10:01:36.131 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:01:36 compute-0 nova_compute[254819]: 2025-12-06 10:01:36.132 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:01:36 compute-0 nova_compute[254819]: 2025-12-06 10:01:36.132 254824 DEBUG nova.compute.resource_tracker [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 10:01:36 compute-0 nova_compute[254819]: 2025-12-06 10:01:36.133 254824 DEBUG oslo_concurrency.processutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 10:01:36 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:01:36 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4720002690 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:01:36 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v603: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 10:01:36 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:01:36 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:01:36 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:01:36.525 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:01:36 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 06 10:01:36 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2682509999' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 10:01:36 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:01:36 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 10:01:36 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:01:36.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 10:01:36 compute-0 nova_compute[254819]: 2025-12-06 10:01:36.593 254824 DEBUG oslo_concurrency.processutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 10:01:36 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:01:36 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f473000c650 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:01:36 compute-0 nova_compute[254819]: 2025-12-06 10:01:36.755 254824 WARNING nova.virt.libvirt.driver [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 10:01:36 compute-0 nova_compute[254819]: 2025-12-06 10:01:36.756 254824 DEBUG nova.compute.resource_tracker [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4905MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 06 10:01:36 compute-0 nova_compute[254819]: 2025-12-06 10:01:36.757 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:01:36 compute-0 nova_compute[254819]: 2025-12-06 10:01:36.757 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:01:36 compute-0 nova_compute[254819]: 2025-12-06 10:01:36.880 254824 DEBUG nova.compute.resource_tracker [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 10:01:36 compute-0 nova_compute[254819]: 2025-12-06 10:01:36.881 254824 DEBUG nova.compute.resource_tracker [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 06 10:01:36 compute-0 nova_compute[254819]: 2025-12-06 10:01:36.958 254824 DEBUG oslo_concurrency.processutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 10:01:37 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:01:37.105Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 06 10:01:37 compute-0 ceph-mon[74327]: pgmap v603: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 10:01:37 compute-0 ceph-mon[74327]: from='client.? 192.168.122.100:0/2682509999' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 10:01:37 compute-0 ceph-mon[74327]: from='client.? 192.168.122.101:0/244775073' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 10:01:37 compute-0 ceph-mon[74327]: from='client.? 192.168.122.102:0/3744631434' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 10:01:37 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 06 10:01:37 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1251141983' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 10:01:37 compute-0 nova_compute[254819]: 2025-12-06 10:01:37.441 254824 DEBUG oslo_concurrency.processutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.482s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 10:01:37 compute-0 nova_compute[254819]: 2025-12-06 10:01:37.446 254824 DEBUG nova.compute.provider_tree [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Inventory has not changed in ProviderTree for provider: 06a9c7d1-c74c-47ea-9e97-16acfab6aa88 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 10:01:37 compute-0 nova_compute[254819]: 2025-12-06 10:01:37.476 254824 DEBUG nova.scheduler.client.report [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Inventory has not changed for provider 06a9c7d1-c74c-47ea-9e97-16acfab6aa88 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 10:01:37 compute-0 nova_compute[254819]: 2025-12-06 10:01:37.477 254824 DEBUG nova.compute.resource_tracker [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 10:01:37 compute-0 nova_compute[254819]: 2025-12-06 10:01:37.477 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.720s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:01:37 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:01:38 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:01:38 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4754009ad0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:01:38 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:01:38 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4754009ad0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:01:38 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v604: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Dec 06 10:01:38 compute-0 ceph-mon[74327]: from='client.? 192.168.122.100:0/1251141983' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 10:01:38 compute-0 ceph-mon[74327]: from='client.? 192.168.122.101:0/2859480221' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 10:01:38 compute-0 ceph-mon[74327]: from='client.? 192.168.122.102:0/1184462004' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 10:01:38 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:01:38 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:01:38 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:01:38.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:01:38 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:01:38 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:01:38 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:01:38.593 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:01:38 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:01:38 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f47240046e0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:01:38 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 06 10:01:38 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 10:01:39 compute-0 ceph-mon[74327]: pgmap v604: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Dec 06 10:01:39 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 10:01:40 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:01:40 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f473000c650 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:01:40 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:01:40 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4754009ad0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:01:40 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v605: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 10:01:40 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:01:40 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:01:40 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:01:40.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:01:40 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:01:40 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:01:40 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:01:40.595 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:01:40 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:01:40 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_33] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4720002690 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:01:40 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:01:40] "GET /metrics HTTP/1.1" 200 48265 "" "Prometheus/2.51.0"
Dec 06 10:01:40 compute-0 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:01:40] "GET /metrics HTTP/1.1" 200 48265 "" "Prometheus/2.51.0"
Dec 06 10:01:41 compute-0 ceph-mon[74327]: pgmap v605: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 10:01:42 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:01:42 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4720002690 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:01:42 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:01:42 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4720002690 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:01:42 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v606: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 10:01:42 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:01:42 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:01:42 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:01:42.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:01:42 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:01:42 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:01:42 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:01:42 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:01:42.597 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:01:42 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:01:42 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4754009ad0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:01:43 compute-0 ceph-mon[74327]: pgmap v606: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 10:01:44 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:01:44 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_33] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4724004880 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:01:44 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:01:44 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f473000c650 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:01:44 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v607: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 85 B/s wr, 0 op/s
Dec 06 10:01:44 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:01:44 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:01:44 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:01:44.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:01:44 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:01:44 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:01:44 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:01:44.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:01:44 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:01:44 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4720002690 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:01:45 compute-0 ceph-mon[74327]: pgmap v607: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 85 B/s wr, 0 op/s
Dec 06 10:01:45 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:01:45 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 06 10:01:46 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:01:46 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4754009ad0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:01:46 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:01:46 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_33] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f47240048a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:01:46 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v608: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Dec 06 10:01:46 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:01:46 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:01:46 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:01:46.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:01:46 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:01:46 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:01:46 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:01:46.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:01:46 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:01:46 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f473000c650 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:01:47 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:01:47.106Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 06 10:01:47 compute-0 ceph-mon[74327]: pgmap v608: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Dec 06 10:01:47 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:01:48 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:01:48 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4720002690 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:01:48 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:01:48 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4754009ad0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:01:48 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v609: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 597 B/s wr, 2 op/s
Dec 06 10:01:48 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:01:48 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:01:48 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:01:48.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:01:48 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:01:48 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:01:48 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:01:48.603 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:01:48 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:01:48 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 06 10:01:48 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:01:48 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 06 10:01:48 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:01:48 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_33] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f47240048c0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:01:49 compute-0 ceph-mon[74327]: pgmap v609: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 597 B/s wr, 2 op/s
Dec 06 10:01:50 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:01:50 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f473000c650 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:01:50 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:01:50 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4720002690 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:01:50 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v610: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Dec 06 10:01:50 compute-0 sudo[256212]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 10:01:50 compute-0 sudo[256212]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:01:50 compute-0 sudo[256212]: pam_unix(sudo:session): session closed for user root
Dec 06 10:01:50 compute-0 podman[256236]: 2025-12-06 10:01:50.437736281 +0000 UTC m=+0.062666827 container health_status a9cf33c1bf7891b3fdd9db4717ad5f3587fe9e5acf9171920c6d6334b70a502a (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=multipathd, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd)
Dec 06 10:01:50 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:01:50 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:01:50 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:01:50.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:01:50 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:01:50 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 10:01:50 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:01:50.604 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 10:01:50 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:01:50 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4754009ad0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:01:50 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:01:50] "GET /metrics HTTP/1.1" 200 48265 "" "Prometheus/2.51.0"
Dec 06 10:01:50 compute-0 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:01:50] "GET /metrics HTTP/1.1" 200 48265 "" "Prometheus/2.51.0"
Dec 06 10:01:51 compute-0 ceph-mon[74327]: pgmap v610: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Dec 06 10:01:51 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:01:51 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Dec 06 10:01:52 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:01:52 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_33] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f47240048e0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:01:52 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:01:52 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f473000c650 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:01:52 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v611: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Dec 06 10:01:52 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:01:52 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:01:52 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:01:52.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:01:52 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:01:52 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:01:52 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:01:52 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:01:52.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:01:52 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:01:52 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f47200026b0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:01:53 compute-0 ceph-mon[74327]: pgmap v611: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Dec 06 10:01:53 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 06 10:01:53 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 10:01:53 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 10:01:53 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 10:01:54 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 10:01:54 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 10:01:54 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 10:01:54 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 10:01:54 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:01:54 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4754009ad0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:01:54 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:01:54 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_33] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4724004900 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:01:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:01:54.234 162267 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:01:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:01:54.235 162267 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:01:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:01:54.235 162267 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:01:54 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v612: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 1023 B/s wr, 3 op/s
Dec 06 10:01:54 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:01:54 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:01:54 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:01:54.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:01:54 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 10:01:54 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:01:54 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:01:54 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:01:54.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:01:54 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:01:54 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f473000c650 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:01:55 compute-0 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.24542 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Dec 06 10:01:55 compute-0 ceph-mgr[74618]: [volumes INFO volumes.module] Starting _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Dec 06 10:01:55 compute-0 ceph-mgr[74618]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Dec 06 10:01:55 compute-0 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.24628 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Dec 06 10:01:55 compute-0 ceph-mgr[74618]: [volumes INFO volumes.module] Starting _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Dec 06 10:01:55 compute-0 ceph-mgr[74618]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Dec 06 10:01:55 compute-0 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.24628 -' entity='client.openstack' cmd=[{"prefix": "nfs cluster info", "cluster_id": "cephfs", "format": "json"}]: dispatch
Dec 06 10:01:55 compute-0 podman[256262]: 2025-12-06 10:01:55.448393977 +0000 UTC m=+0.075582414 container health_status ed08b04f63d3695f0aa2f04fcc706897af4aa24f286fa3cd10a4323ee2c868ab (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller)
Dec 06 10:01:55 compute-0 ceph-mon[74327]: pgmap v612: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 1023 B/s wr, 3 op/s
Dec 06 10:01:55 compute-0 ceph-mon[74327]: from='client.? 192.168.122.10:0/1363450763' entity='client.openstack' cmd=[{"prefix": "version", "format": "json"}]: dispatch
Dec 06 10:01:55 compute-0 ceph-mon[74327]: from='client.? 192.168.122.10:0/2413463250' entity='client.openstack' cmd=[{"prefix": "version", "format": "json"}]: dispatch
Dec 06 10:01:56 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:01:56 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f47200026d0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:01:56 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:01:56 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4754009ad0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:01:56 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v613: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 938 B/s wr, 2 op/s
Dec 06 10:01:56 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:01:56 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:01:56 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:01:56.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:01:56 compute-0 ceph-mon[74327]: from='client.24542 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Dec 06 10:01:56 compute-0 ceph-mon[74327]: from='client.24628 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Dec 06 10:01:56 compute-0 ceph-mon[74327]: from='client.24628 -' entity='client.openstack' cmd=[{"prefix": "nfs cluster info", "cluster_id": "cephfs", "format": "json"}]: dispatch
Dec 06 10:01:56 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:01:56 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:01:56 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:01:56.611 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:01:56 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:01:56 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_33] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4724004920 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:01:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:01:57.107Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 06 10:01:57 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:01:57 compute-0 ceph-mon[74327]: pgmap v613: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 938 B/s wr, 2 op/s
Dec 06 10:01:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-haproxy-nfs-cephfs-compute-0-fzuvue[96127]: [WARNING] 339/100157 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Dec 06 10:01:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:01:58 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f473000c650 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:01:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:01:58 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f47200046a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:01:58 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v614: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 938 B/s wr, 3 op/s
Dec 06 10:01:58 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:01:58 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:01:58 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:01:58.548 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:01:58 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:01:58 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:01:58 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:01:58.614 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:01:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:01:58 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_33] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f47200046a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:01:59 compute-0 ceph-mon[74327]: pgmap v614: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 938 B/s wr, 3 op/s
Dec 06 10:02:00 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:02:00 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_33] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4724004940 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:02:00 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:02:00 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f473000c650 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:02:00 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v615: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Dec 06 10:02:00 compute-0 radosgw[94308]: INFO: RGWReshardLock::lock found lock on reshard.0000000001 to be held by another RGW process; skipping for now
Dec 06 10:02:00 compute-0 radosgw[94308]: INFO: RGWReshardLock::lock found lock on reshard.0000000003 to be held by another RGW process; skipping for now
Dec 06 10:02:00 compute-0 radosgw[94308]: INFO: RGWReshardLock::lock found lock on reshard.0000000005 to be held by another RGW process; skipping for now
Dec 06 10:02:00 compute-0 radosgw[94308]: INFO: RGWReshardLock::lock found lock on reshard.0000000007 to be held by another RGW process; skipping for now
Dec 06 10:02:00 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:02:00 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:02:00 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:02:00.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:02:00 compute-0 radosgw[94308]: INFO: RGWReshardLock::lock found lock on reshard.0000000009 to be held by another RGW process; skipping for now
Dec 06 10:02:00 compute-0 radosgw[94308]: INFO: RGWReshardLock::lock found lock on reshard.0000000011 to be held by another RGW process; skipping for now
Dec 06 10:02:00 compute-0 radosgw[94308]: INFO: RGWReshardLock::lock found lock on reshard.0000000013 to be held by another RGW process; skipping for now
Dec 06 10:02:00 compute-0 radosgw[94308]: INFO: RGWReshardLock::lock found lock on reshard.0000000015 to be held by another RGW process; skipping for now
Dec 06 10:02:00 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:02:00 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:02:00 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:02:00.617 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:02:00 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:02:00 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f47200046a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:02:00 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:02:00] "GET /metrics HTTP/1.1" 200 48262 "" "Prometheus/2.51.0"
Dec 06 10:02:00 compute-0 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:02:00] "GET /metrics HTTP/1.1" 200 48262 "" "Prometheus/2.51.0"
Dec 06 10:02:01 compute-0 ceph-mon[74327]: pgmap v615: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Dec 06 10:02:02 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:02:02 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4754009ad0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:02:02 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:02:02 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_33] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4724004960 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:02:02 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v616: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Dec 06 10:02:02 compute-0 podman[256296]: 2025-12-06 10:02:02.437339602 +0000 UTC m=+0.063284735 container health_status ec1e66d2175087ad7a28a9a6b5a784e30128df3f5e268959d009e364cd5120f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, container_name=ovn_metadata_agent, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Dec 06 10:02:02 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:02:02 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:02:02 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:02:02.552 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:02:02 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:02:02 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:02:02 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 10:02:02 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:02:02.619 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 10:02:02 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:02:02 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f473000c650 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:02:03 compute-0 ceph-mon[74327]: pgmap v616: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Dec 06 10:02:04 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:02:04 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f47200046a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:02:04 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:02:04 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4754009ad0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:02:04 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v617: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 79 KiB/s rd, 426 B/s wr, 131 op/s
Dec 06 10:02:04 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:02:04 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:02:04 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:02:04.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:02:04 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:02:04 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:02:04 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:02:04.621 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:02:04 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:02:04 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_33] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4750001ac0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:02:05 compute-0 ceph-mon[74327]: pgmap v617: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 79 KiB/s rd, 426 B/s wr, 131 op/s
Dec 06 10:02:06 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:02:06 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_34] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f473000c650 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:02:06 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:02:06 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f47200046a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:02:06 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v618: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 78 KiB/s rd, 0 B/s wr, 129 op/s
Dec 06 10:02:06 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:02:06 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:02:06 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:02:06.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:02:06 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:02:06 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 10:02:06 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:02:06.625 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 10:02:06 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:02:06 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f472c0019c0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:02:07 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:02:07.108Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 06 10:02:07 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:02:07 compute-0 ceph-mon[74327]: pgmap v618: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 78 KiB/s rd, 0 B/s wr, 129 op/s
Dec 06 10:02:08 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:02:08 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4750001ac0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:02:08 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:02:08 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_34] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f473000c650 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:02:08 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v619: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 78 KiB/s rd, 0 B/s wr, 129 op/s
Dec 06 10:02:08 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:02:08 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:02:08 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:02:08.560 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:02:08 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:02:08 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:02:08 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:02:08.627 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:02:08 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:02:08 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f47200046a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:02:08 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 06 10:02:08 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 10:02:08 compute-0 ceph-mon[74327]: pgmap v619: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 78 KiB/s rd, 0 B/s wr, 129 op/s
Dec 06 10:02:09 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 10:02:10 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:02:10 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f472c0019c0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:02:10 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:02:10 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f47500025a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:02:10 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v620: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 78 KiB/s rd, 0 B/s wr, 129 op/s
Dec 06 10:02:10 compute-0 sudo[256325]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 10:02:10 compute-0 sudo[256325]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:02:10 compute-0 sudo[256325]: pam_unix(sudo:session): session closed for user root
Dec 06 10:02:10 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:02:10 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 10:02:10 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:02:10.562 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 10:02:10 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:02:10 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 10:02:10 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:02:10.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 10:02:10 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:02:10 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f47500025a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:02:10 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:02:10] "GET /metrics HTTP/1.1" 200 48266 "" "Prometheus/2.51.0"
Dec 06 10:02:10 compute-0 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:02:10] "GET /metrics HTTP/1.1" 200 48266 "" "Prometheus/2.51.0"
Dec 06 10:02:10 compute-0 ceph-mon[74327]: pgmap v620: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 78 KiB/s rd, 0 B/s wr, 129 op/s
Dec 06 10:02:12 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:02:12 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_34] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f47200046a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:02:12 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:02:12 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f472c0019c0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:02:12 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v621: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 78 KiB/s rd, 0 B/s wr, 129 op/s
Dec 06 10:02:12 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:02:12 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:02:12 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:02:12.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:02:12 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:02:12 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:02:12 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 10:02:12 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:02:12.631 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 10:02:12 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:02:12 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f473000c650 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:02:13 compute-0 ceph-mon[74327]: pgmap v621: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 78 KiB/s rd, 0 B/s wr, 129 op/s
Dec 06 10:02:13 compute-0 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.24637 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Dec 06 10:02:13 compute-0 ceph-mgr[74618]: [volumes INFO volumes.module] Starting _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Dec 06 10:02:13 compute-0 ceph-mgr[74618]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Dec 06 10:02:13 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "version", "format": "json"} v 0)
Dec 06 10:02:13 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3432749316' entity='client.openstack' cmd=[{"prefix": "version", "format": "json"}]: dispatch
Dec 06 10:02:13 compute-0 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.15012 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Dec 06 10:02:13 compute-0 ceph-mgr[74618]: [volumes INFO volumes.module] Starting _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Dec 06 10:02:13 compute-0 ceph-mgr[74618]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Dec 06 10:02:13 compute-0 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.15012 -' entity='client.openstack' cmd=[{"prefix": "nfs cluster info", "cluster_id": "cephfs", "format": "json"}]: dispatch
Dec 06 10:02:14 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:02:14 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f47500025a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:02:14 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:02:14 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_34] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f47200046a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:02:14 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v622: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 78 KiB/s rd, 0 B/s wr, 129 op/s
Dec 06 10:02:14 compute-0 ceph-mon[74327]: from='client.? 192.168.122.10:0/2729948875' entity='client.openstack' cmd=[{"prefix": "version", "format": "json"}]: dispatch
Dec 06 10:02:14 compute-0 ceph-mon[74327]: from='client.24637 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Dec 06 10:02:14 compute-0 ceph-mon[74327]: from='client.? 192.168.122.10:0/3432749316' entity='client.openstack' cmd=[{"prefix": "version", "format": "json"}]: dispatch
Dec 06 10:02:14 compute-0 ceph-mon[74327]: from='client.15012 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Dec 06 10:02:14 compute-0 ceph-mon[74327]: from='client.15012 -' entity='client.openstack' cmd=[{"prefix": "nfs cluster info", "cluster_id": "cephfs", "format": "json"}]: dispatch
Dec 06 10:02:14 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:02:14 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:02:14 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:02:14.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:02:14 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:02:14 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 10:02:14 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:02:14.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 10:02:14 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:02:14 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f472c0019c0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:02:15 compute-0 ceph-mon[74327]: pgmap v622: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 78 KiB/s rd, 0 B/s wr, 129 op/s
Dec 06 10:02:16 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:02:16 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f473000c650 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:02:16 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:02:16 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f47500025a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:02:16 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v623: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 10:02:16 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:02:16 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:02:16 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:02:16.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:02:16 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:02:16 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:02:16 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:02:16.635 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:02:16 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:02:16 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_34] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f47200046a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:02:17 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:02:17.110Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 06 10:02:17 compute-0 ceph-mon[74327]: pgmap v623: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 10:02:17 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:02:18 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:02:18 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f472c0019c0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:02:18 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:02:18 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f473000c650 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:02:18 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v624: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Dec 06 10:02:18 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:02:18 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:02:18 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:02:18.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:02:18 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:02:18 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 10:02:18 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:02:18.637 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 10:02:18 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:02:18 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f47500025a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:02:19 compute-0 ceph-mon[74327]: pgmap v624: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Dec 06 10:02:20 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:02:20 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_34] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f47200046a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:02:20 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:02:20 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f472c0019c0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:02:20 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v625: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 10:02:20 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:02:20 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:02:20 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:02:20.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:02:20 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:02:20 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:02:20 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:02:20.639 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:02:20 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:02:20 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f473000c650 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:02:20 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:02:20] "GET /metrics HTTP/1.1" 200 48266 "" "Prometheus/2.51.0"
Dec 06 10:02:20 compute-0 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:02:20] "GET /metrics HTTP/1.1" 200 48266 "" "Prometheus/2.51.0"
Dec 06 10:02:21 compute-0 podman[256361]: 2025-12-06 10:02:21.444157394 +0000 UTC m=+0.076236991 container health_status a9cf33c1bf7891b3fdd9db4717ad5f3587fe9e5acf9171920c6d6334b70a502a (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team)
Dec 06 10:02:21 compute-0 ceph-mon[74327]: pgmap v625: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 10:02:22 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:02:22 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f47500025a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:02:22 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:02:22 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_34] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f47200046a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:02:22 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v626: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 10:02:22 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:02:22 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:02:22 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:02:22.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:02:22 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:02:22 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:02:22 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:02:22 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:02:22.641 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:02:22 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:02:22 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f472c004b30 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:02:23 compute-0 ceph-mon[74327]: pgmap v626: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 10:02:23 compute-0 ceph-mgr[74618]: [balancer INFO root] Optimize plan auto_2025-12-06_10:02:23
Dec 06 10:02:23 compute-0 ceph-mgr[74618]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 06 10:02:23 compute-0 ceph-mgr[74618]: [balancer INFO root] do_upmap
Dec 06 10:02:23 compute-0 ceph-mgr[74618]: [balancer INFO root] pools ['cephfs.cephfs.meta', '.nfs', 'default.rgw.control', 'images', 'default.rgw.meta', 'default.rgw.log', 'volumes', '.rgw.root', 'backups', '.mgr', 'cephfs.cephfs.data', 'vms']
Dec 06 10:02:23 compute-0 ceph-mgr[74618]: [balancer INFO root] prepared 0/10 upmap changes
Dec 06 10:02:23 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 06 10:02:23 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 10:02:23 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 10:02:23 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 10:02:24 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 10:02:24 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 10:02:24 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 10:02:24 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 10:02:24 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:02:24 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f473000c650 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:02:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] _maybe_adjust
Dec 06 10:02:24 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:02:24 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f47500025a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:02:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:02:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 06 10:02:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:02:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 10:02:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:02:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 10:02:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:02:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 10:02:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:02:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 10:02:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:02:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Dec 06 10:02:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:02:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 10:02:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:02:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec 06 10:02:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:02:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 06 10:02:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:02:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec 06 10:02:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:02:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 10:02:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:02:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 06 10:02:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 06 10:02:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 10:02:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 10:02:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 10:02:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 10:02:24 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v627: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Dec 06 10:02:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 06 10:02:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 10:02:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 10:02:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 10:02:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 10:02:24 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:02:24 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:02:24 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:02:24.575 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:02:24 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:02:24 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:02:24 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:02:24.643 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:02:24 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:02:24 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_34] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f47200046a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:02:24 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 10:02:25 compute-0 ceph-mon[74327]: pgmap v627: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Dec 06 10:02:26 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:02:26 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f472c004b30 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:02:26 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:02:26 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f473000c650 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:02:26 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v628: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 10:02:26 compute-0 podman[256386]: 2025-12-06 10:02:26.469530593 +0000 UTC m=+0.099908134 container health_status ed08b04f63d3695f0aa2f04fcc706897af4aa24f286fa3cd10a4323ee2c868ab (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, config_id=ovn_controller, io.buildah.version=1.41.3, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Dec 06 10:02:26 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:02:26 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:02:26 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:02:26.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:02:26 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:02:26 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:02:26 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:02:26.645 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:02:26 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:02:26 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f47500025a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:02:26 compute-0 ceph-mon[74327]: pgmap v628: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 10:02:27 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:02:27.111Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 06 10:02:27 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:02:28 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:02:28 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_34] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f47200046a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:02:28 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:02:28 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f472c004b30 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:02:28 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v629: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Dec 06 10:02:28 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:02:28 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:02:28 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:02:28.579 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:02:28 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:02:28 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:02:28 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:02:28.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:02:28 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:02:28 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f472c004b30 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:02:29 compute-0 ceph-mon[74327]: pgmap v629: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Dec 06 10:02:30 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:02:30 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f47500025a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:02:30 compute-0 sudo[256416]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 10:02:30 compute-0 sudo[256416]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:02:30 compute-0 sudo[256416]: pam_unix(sudo:session): session closed for user root
Dec 06 10:02:30 compute-0 sudo[256441]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Dec 06 10:02:30 compute-0 sudo[256441]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:02:30 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:02:30 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_34] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f47200046a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:02:30 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v630: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 10:02:30 compute-0 sudo[256482]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 10:02:30 compute-0 sudo[256482]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:02:30 compute-0 sudo[256482]: pam_unix(sudo:session): session closed for user root
Dec 06 10:02:30 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:02:30 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:02:30 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:02:30.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:02:30 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:02:30 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:02:30 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:02:30.648 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:02:30 compute-0 sudo[256441]: pam_unix(sudo:session): session closed for user root
Dec 06 10:02:30 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:02:30 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f47200046a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:02:30 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 06 10:02:30 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 10:02:30 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 06 10:02:30 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 10:02:30 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 06 10:02:30 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 10:02:30 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Dec 06 10:02:30 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 10:02:30 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 06 10:02:30 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 10:02:30 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 06 10:02:30 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 10:02:30 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 06 10:02:30 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 10:02:30 compute-0 sudo[256524]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 10:02:30 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:02:30] "GET /metrics HTTP/1.1" 200 48264 "" "Prometheus/2.51.0"
Dec 06 10:02:30 compute-0 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:02:30] "GET /metrics HTTP/1.1" 200 48264 "" "Prometheus/2.51.0"
Dec 06 10:02:30 compute-0 sudo[256524]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:02:30 compute-0 sudo[256524]: pam_unix(sudo:session): session closed for user root
Dec 06 10:02:30 compute-0 sudo[256549]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 5ecd3f74-dade-5fc4-92ce-8950ae424258 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Dec 06 10:02:30 compute-0 sudo[256549]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:02:31 compute-0 ceph-mon[74327]: pgmap v630: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 10:02:31 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 10:02:31 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 10:02:31 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 10:02:31 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 10:02:31 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 10:02:31 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 10:02:31 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 10:02:31 compute-0 podman[256615]: 2025-12-06 10:02:31.412732723 +0000 UTC m=+0.047960605 container create a41770bafc3521659e79da732d55696cc8bca97d7524c461bdce4d3cb1dbf504 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_carson, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec 06 10:02:31 compute-0 systemd[1]: Started libpod-conmon-a41770bafc3521659e79da732d55696cc8bca97d7524c461bdce4d3cb1dbf504.scope.
Dec 06 10:02:31 compute-0 systemd[1]: Started libcrun container.
Dec 06 10:02:31 compute-0 podman[256615]: 2025-12-06 10:02:31.395289126 +0000 UTC m=+0.030517028 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 06 10:02:31 compute-0 podman[256615]: 2025-12-06 10:02:31.493743681 +0000 UTC m=+0.128971583 container init a41770bafc3521659e79da732d55696cc8bca97d7524c461bdce4d3cb1dbf504 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_carson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 06 10:02:31 compute-0 podman[256615]: 2025-12-06 10:02:31.500969003 +0000 UTC m=+0.136196885 container start a41770bafc3521659e79da732d55696cc8bca97d7524c461bdce4d3cb1dbf504 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_carson, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 10:02:31 compute-0 podman[256615]: 2025-12-06 10:02:31.50415993 +0000 UTC m=+0.139387812 container attach a41770bafc3521659e79da732d55696cc8bca97d7524c461bdce4d3cb1dbf504 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_carson, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 10:02:31 compute-0 sharp_carson[256632]: 167 167
Dec 06 10:02:31 compute-0 systemd[1]: libpod-a41770bafc3521659e79da732d55696cc8bca97d7524c461bdce4d3cb1dbf504.scope: Deactivated successfully.
Dec 06 10:02:31 compute-0 podman[256615]: 2025-12-06 10:02:31.506937614 +0000 UTC m=+0.142165496 container died a41770bafc3521659e79da732d55696cc8bca97d7524c461bdce4d3cb1dbf504 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_carson, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 10:02:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-468e4e0dfeed2a93f0e8290b73e209f12cd9d0eefc2814b195248c1a64b5f565-merged.mount: Deactivated successfully.
Dec 06 10:02:31 compute-0 podman[256615]: 2025-12-06 10:02:31.561145564 +0000 UTC m=+0.196373496 container remove a41770bafc3521659e79da732d55696cc8bca97d7524c461bdce4d3cb1dbf504 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_carson, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Dec 06 10:02:31 compute-0 systemd[1]: libpod-conmon-a41770bafc3521659e79da732d55696cc8bca97d7524c461bdce4d3cb1dbf504.scope: Deactivated successfully.
Dec 06 10:02:31 compute-0 podman[256657]: 2025-12-06 10:02:31.798838215 +0000 UTC m=+0.079634913 container create a93a09358e1ce1ebf7887851951744ffcf7f11d68865734df89eb989d1b689b6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_lederberg, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 10:02:31 compute-0 systemd[1]: Started libpod-conmon-a93a09358e1ce1ebf7887851951744ffcf7f11d68865734df89eb989d1b689b6.scope.
Dec 06 10:02:31 compute-0 podman[256657]: 2025-12-06 10:02:31.768115333 +0000 UTC m=+0.048912031 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 06 10:02:31 compute-0 systemd[1]: Started libcrun container.
Dec 06 10:02:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f9d42bc2446bbb33cbecc57469797c9f1b2ba74836e19c799bf50d8f5a1ce506/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 10:02:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f9d42bc2446bbb33cbecc57469797c9f1b2ba74836e19c799bf50d8f5a1ce506/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 10:02:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f9d42bc2446bbb33cbecc57469797c9f1b2ba74836e19c799bf50d8f5a1ce506/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 10:02:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f9d42bc2446bbb33cbecc57469797c9f1b2ba74836e19c799bf50d8f5a1ce506/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 10:02:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f9d42bc2446bbb33cbecc57469797c9f1b2ba74836e19c799bf50d8f5a1ce506/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 06 10:02:31 compute-0 podman[256657]: 2025-12-06 10:02:31.917616743 +0000 UTC m=+0.198413451 container init a93a09358e1ce1ebf7887851951744ffcf7f11d68865734df89eb989d1b689b6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_lederberg, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 10:02:31 compute-0 podman[256657]: 2025-12-06 10:02:31.925376201 +0000 UTC m=+0.206172919 container start a93a09358e1ce1ebf7887851951744ffcf7f11d68865734df89eb989d1b689b6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_lederberg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Dec 06 10:02:31 compute-0 podman[256657]: 2025-12-06 10:02:31.929962624 +0000 UTC m=+0.210759302 container attach a93a09358e1ce1ebf7887851951744ffcf7f11d68865734df89eb989d1b689b6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_lederberg, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Dec 06 10:02:32 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:02:32 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f47200046a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:02:32 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:02:32 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f47500025a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:02:32 compute-0 heuristic_lederberg[256674]: --> passed data devices: 0 physical, 1 LVM
Dec 06 10:02:32 compute-0 heuristic_lederberg[256674]: --> All data devices are unavailable
Dec 06 10:02:32 compute-0 systemd[1]: libpod-a93a09358e1ce1ebf7887851951744ffcf7f11d68865734df89eb989d1b689b6.scope: Deactivated successfully.
Dec 06 10:02:32 compute-0 podman[256657]: 2025-12-06 10:02:32.247239574 +0000 UTC m=+0.528036302 container died a93a09358e1ce1ebf7887851951744ffcf7f11d68865734df89eb989d1b689b6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_lederberg, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Dec 06 10:02:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-f9d42bc2446bbb33cbecc57469797c9f1b2ba74836e19c799bf50d8f5a1ce506-merged.mount: Deactivated successfully.
Dec 06 10:02:32 compute-0 podman[256657]: 2025-12-06 10:02:32.300548491 +0000 UTC m=+0.581345179 container remove a93a09358e1ce1ebf7887851951744ffcf7f11d68865734df89eb989d1b689b6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_lederberg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325)
Dec 06 10:02:32 compute-0 systemd[1]: libpod-conmon-a93a09358e1ce1ebf7887851951744ffcf7f11d68865734df89eb989d1b689b6.scope: Deactivated successfully.
Dec 06 10:02:32 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v631: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 10:02:32 compute-0 sudo[256549]: pam_unix(sudo:session): session closed for user root
Dec 06 10:02:32 compute-0 sudo[256703]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 10:02:32 compute-0 sudo[256703]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:02:32 compute-0 sudo[256703]: pam_unix(sudo:session): session closed for user root
Dec 06 10:02:32 compute-0 sudo[256728]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 5ecd3f74-dade-5fc4-92ce-8950ae424258 -- lvm list --format json
Dec 06 10:02:32 compute-0 sudo[256728]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:02:32 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:02:32 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:02:32 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:02:32.583 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:02:32 compute-0 podman[256752]: 2025-12-06 10:02:32.594701992 +0000 UTC m=+0.074721641 container health_status ec1e66d2175087ad7a28a9a6b5a784e30128df3f5e268959d009e364cd5120f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, container_name=ovn_metadata_agent)
Dec 06 10:02:32 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:02:32 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:02:32 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 10:02:32 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:02:32.650 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 10:02:32 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:02:32 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_34] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f47200046a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:02:32 compute-0 podman[256812]: 2025-12-06 10:02:32.904598404 +0000 UTC m=+0.043690549 container create 40dab0f5e3a1a52a796b9048c17708456b0ec25ef3926aaf7c298d014402a11e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_gauss, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1)
Dec 06 10:02:32 compute-0 systemd[1]: Started libpod-conmon-40dab0f5e3a1a52a796b9048c17708456b0ec25ef3926aaf7c298d014402a11e.scope.
Dec 06 10:02:32 compute-0 systemd[1]: Started libcrun container.
Dec 06 10:02:32 compute-0 podman[256812]: 2025-12-06 10:02:32.88533063 +0000 UTC m=+0.024422795 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 06 10:02:32 compute-0 podman[256812]: 2025-12-06 10:02:32.990896114 +0000 UTC m=+0.129988279 container init 40dab0f5e3a1a52a796b9048c17708456b0ec25ef3926aaf7c298d014402a11e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_gauss, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2)
Dec 06 10:02:32 compute-0 podman[256812]: 2025-12-06 10:02:32.99934097 +0000 UTC m=+0.138433115 container start 40dab0f5e3a1a52a796b9048c17708456b0ec25ef3926aaf7c298d014402a11e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_gauss, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1)
Dec 06 10:02:33 compute-0 podman[256812]: 2025-12-06 10:02:33.002885755 +0000 UTC m=+0.141977920 container attach 40dab0f5e3a1a52a796b9048c17708456b0ec25ef3926aaf7c298d014402a11e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_gauss, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default)
Dec 06 10:02:33 compute-0 friendly_gauss[256828]: 167 167
Dec 06 10:02:33 compute-0 systemd[1]: libpod-40dab0f5e3a1a52a796b9048c17708456b0ec25ef3926aaf7c298d014402a11e.scope: Deactivated successfully.
Dec 06 10:02:33 compute-0 podman[256812]: 2025-12-06 10:02:33.007033206 +0000 UTC m=+0.146125351 container died 40dab0f5e3a1a52a796b9048c17708456b0ec25ef3926aaf7c298d014402a11e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_gauss, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 10:02:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-39906dccb330242ee339529710e9c74ef976e86014fe9f2c9c519523e6ef5cc5-merged.mount: Deactivated successfully.
Dec 06 10:02:33 compute-0 podman[256812]: 2025-12-06 10:02:33.046503282 +0000 UTC m=+0.185595427 container remove 40dab0f5e3a1a52a796b9048c17708456b0ec25ef3926aaf7c298d014402a11e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_gauss, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Dec 06 10:02:33 compute-0 systemd[1]: libpod-conmon-40dab0f5e3a1a52a796b9048c17708456b0ec25ef3926aaf7c298d014402a11e.scope: Deactivated successfully.
Dec 06 10:02:33 compute-0 podman[256853]: 2025-12-06 10:02:33.218452643 +0000 UTC m=+0.046161126 container create b794c7ed814456680189040256432609f58ea12f95e099f39ce015c284785635 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_ardinghelli, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 10:02:33 compute-0 systemd[1]: Started libpod-conmon-b794c7ed814456680189040256432609f58ea12f95e099f39ce015c284785635.scope.
Dec 06 10:02:33 compute-0 podman[256853]: 2025-12-06 10:02:33.200600776 +0000 UTC m=+0.028309289 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 06 10:02:33 compute-0 systemd[1]: Started libcrun container.
Dec 06 10:02:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/941f383457af65d9f184e9c3a99452a3cfce5d5f397cbf3afdc6d4403e6b6f49/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 10:02:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/941f383457af65d9f184e9c3a99452a3cfce5d5f397cbf3afdc6d4403e6b6f49/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 10:02:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/941f383457af65d9f184e9c3a99452a3cfce5d5f397cbf3afdc6d4403e6b6f49/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 10:02:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/941f383457af65d9f184e9c3a99452a3cfce5d5f397cbf3afdc6d4403e6b6f49/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 10:02:33 compute-0 podman[256853]: 2025-12-06 10:02:33.313690782 +0000 UTC m=+0.141399275 container init b794c7ed814456680189040256432609f58ea12f95e099f39ce015c284785635 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_ardinghelli, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 10:02:33 compute-0 podman[256853]: 2025-12-06 10:02:33.32183381 +0000 UTC m=+0.149542293 container start b794c7ed814456680189040256432609f58ea12f95e099f39ce015c284785635 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_ardinghelli, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 10:02:33 compute-0 podman[256853]: 2025-12-06 10:02:33.325028286 +0000 UTC m=+0.152736769 container attach b794c7ed814456680189040256432609f58ea12f95e099f39ce015c284785635 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_ardinghelli, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Dec 06 10:02:33 compute-0 ceph-mon[74327]: pgmap v631: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 10:02:33 compute-0 mystifying_ardinghelli[256870]: {
Dec 06 10:02:33 compute-0 mystifying_ardinghelli[256870]:     "1": [
Dec 06 10:02:33 compute-0 mystifying_ardinghelli[256870]:         {
Dec 06 10:02:33 compute-0 mystifying_ardinghelli[256870]:             "devices": [
Dec 06 10:02:33 compute-0 mystifying_ardinghelli[256870]:                 "/dev/loop3"
Dec 06 10:02:33 compute-0 mystifying_ardinghelli[256870]:             ],
Dec 06 10:02:33 compute-0 mystifying_ardinghelli[256870]:             "lv_name": "ceph_lv0",
Dec 06 10:02:33 compute-0 mystifying_ardinghelli[256870]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 10:02:33 compute-0 mystifying_ardinghelli[256870]:             "lv_size": "21470642176",
Dec 06 10:02:33 compute-0 mystifying_ardinghelli[256870]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=a55pcS-v3Vt-CJ4W-fqCV-Baze-9VlY-jG9prS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=5ecd3f74-dade-5fc4-92ce-8950ae424258,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=7899c4d8-edb4-4836-b838-c4aa702ad7af,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 06 10:02:33 compute-0 mystifying_ardinghelli[256870]:             "lv_uuid": "a55pcS-v3Vt-CJ4W-fqCV-Baze-9VlY-jG9prS",
Dec 06 10:02:33 compute-0 mystifying_ardinghelli[256870]:             "name": "ceph_lv0",
Dec 06 10:02:33 compute-0 mystifying_ardinghelli[256870]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 10:02:33 compute-0 mystifying_ardinghelli[256870]:             "tags": {
Dec 06 10:02:33 compute-0 mystifying_ardinghelli[256870]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 06 10:02:33 compute-0 mystifying_ardinghelli[256870]:                 "ceph.block_uuid": "a55pcS-v3Vt-CJ4W-fqCV-Baze-9VlY-jG9prS",
Dec 06 10:02:33 compute-0 mystifying_ardinghelli[256870]:                 "ceph.cephx_lockbox_secret": "",
Dec 06 10:02:33 compute-0 mystifying_ardinghelli[256870]:                 "ceph.cluster_fsid": "5ecd3f74-dade-5fc4-92ce-8950ae424258",
Dec 06 10:02:33 compute-0 mystifying_ardinghelli[256870]:                 "ceph.cluster_name": "ceph",
Dec 06 10:02:33 compute-0 mystifying_ardinghelli[256870]:                 "ceph.crush_device_class": "",
Dec 06 10:02:33 compute-0 mystifying_ardinghelli[256870]:                 "ceph.encrypted": "0",
Dec 06 10:02:33 compute-0 mystifying_ardinghelli[256870]:                 "ceph.osd_fsid": "7899c4d8-edb4-4836-b838-c4aa702ad7af",
Dec 06 10:02:33 compute-0 mystifying_ardinghelli[256870]:                 "ceph.osd_id": "1",
Dec 06 10:02:33 compute-0 mystifying_ardinghelli[256870]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 06 10:02:33 compute-0 mystifying_ardinghelli[256870]:                 "ceph.type": "block",
Dec 06 10:02:33 compute-0 mystifying_ardinghelli[256870]:                 "ceph.vdo": "0",
Dec 06 10:02:33 compute-0 mystifying_ardinghelli[256870]:                 "ceph.with_tpm": "0"
Dec 06 10:02:33 compute-0 mystifying_ardinghelli[256870]:             },
Dec 06 10:02:33 compute-0 mystifying_ardinghelli[256870]:             "type": "block",
Dec 06 10:02:33 compute-0 mystifying_ardinghelli[256870]:             "vg_name": "ceph_vg0"
Dec 06 10:02:33 compute-0 mystifying_ardinghelli[256870]:         }
Dec 06 10:02:33 compute-0 mystifying_ardinghelli[256870]:     ]
Dec 06 10:02:33 compute-0 mystifying_ardinghelli[256870]: }
Dec 06 10:02:33 compute-0 systemd[1]: libpod-b794c7ed814456680189040256432609f58ea12f95e099f39ce015c284785635.scope: Deactivated successfully.
Dec 06 10:02:33 compute-0 conmon[256870]: conmon b794c7ed814456680189 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b794c7ed814456680189040256432609f58ea12f95e099f39ce015c284785635.scope/container/memory.events
Dec 06 10:02:33 compute-0 podman[256853]: 2025-12-06 10:02:33.624120989 +0000 UTC m=+0.451829472 container died b794c7ed814456680189040256432609f58ea12f95e099f39ce015c284785635 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_ardinghelli, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 10:02:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-941f383457af65d9f184e9c3a99452a3cfce5d5f397cbf3afdc6d4403e6b6f49-merged.mount: Deactivated successfully.
Dec 06 10:02:33 compute-0 podman[256853]: 2025-12-06 10:02:33.671931748 +0000 UTC m=+0.499640231 container remove b794c7ed814456680189040256432609f58ea12f95e099f39ce015c284785635 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_ardinghelli, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 10:02:33 compute-0 systemd[1]: libpod-conmon-b794c7ed814456680189040256432609f58ea12f95e099f39ce015c284785635.scope: Deactivated successfully.
Dec 06 10:02:33 compute-0 sudo[256728]: pam_unix(sudo:session): session closed for user root
Dec 06 10:02:33 compute-0 sudo[256891]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 10:02:33 compute-0 sudo[256891]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:02:33 compute-0 sudo[256891]: pam_unix(sudo:session): session closed for user root
Dec 06 10:02:33 compute-0 sudo[256916]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 5ecd3f74-dade-5fc4-92ce-8950ae424258 -- raw list --format json
Dec 06 10:02:33 compute-0 sudo[256916]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:02:34 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:02:34 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f47200046a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:02:34 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:02:34 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f472c004b30 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:02:34 compute-0 podman[256981]: 2025-12-06 10:02:34.220794466 +0000 UTC m=+0.040115644 container create 5ea32c881c5362ce1259b0a334f815b2eff664960f4ded8f10725c9de0fe2a52 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_darwin, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Dec 06 10:02:34 compute-0 systemd[1]: Started libpod-conmon-5ea32c881c5362ce1259b0a334f815b2eff664960f4ded8f10725c9de0fe2a52.scope.
Dec 06 10:02:34 compute-0 systemd[1]: Started libcrun container.
Dec 06 10:02:34 compute-0 podman[256981]: 2025-12-06 10:02:34.294169989 +0000 UTC m=+0.113491187 container init 5ea32c881c5362ce1259b0a334f815b2eff664960f4ded8f10725c9de0fe2a52 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_darwin, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 10:02:34 compute-0 podman[256981]: 2025-12-06 10:02:34.205048015 +0000 UTC m=+0.024369203 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 06 10:02:34 compute-0 podman[256981]: 2025-12-06 10:02:34.300586592 +0000 UTC m=+0.119907780 container start 5ea32c881c5362ce1259b0a334f815b2eff664960f4ded8f10725c9de0fe2a52 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_darwin, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 10:02:34 compute-0 podman[256981]: 2025-12-06 10:02:34.304085515 +0000 UTC m=+0.123406743 container attach 5ea32c881c5362ce1259b0a334f815b2eff664960f4ded8f10725c9de0fe2a52 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_darwin, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 10:02:34 compute-0 jovial_darwin[256998]: 167 167
Dec 06 10:02:34 compute-0 systemd[1]: libpod-5ea32c881c5362ce1259b0a334f815b2eff664960f4ded8f10725c9de0fe2a52.scope: Deactivated successfully.
Dec 06 10:02:34 compute-0 podman[256981]: 2025-12-06 10:02:34.307015983 +0000 UTC m=+0.126337161 container died 5ea32c881c5362ce1259b0a334f815b2eff664960f4ded8f10725c9de0fe2a52 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_darwin, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 10:02:34 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v632: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Dec 06 10:02:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-ffd8fd9b38b8c32c0a50b02e9d1f04a9f7bcda719f427b3aca6a5fca5e077c23-merged.mount: Deactivated successfully.
Dec 06 10:02:34 compute-0 podman[256981]: 2025-12-06 10:02:34.340636413 +0000 UTC m=+0.159957591 container remove 5ea32c881c5362ce1259b0a334f815b2eff664960f4ded8f10725c9de0fe2a52 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_darwin, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2)
Dec 06 10:02:34 compute-0 systemd[1]: libpod-conmon-5ea32c881c5362ce1259b0a334f815b2eff664960f4ded8f10725c9de0fe2a52.scope: Deactivated successfully.
Dec 06 10:02:34 compute-0 podman[257021]: 2025-12-06 10:02:34.521541665 +0000 UTC m=+0.038187054 container create a0ce1ed230f03ee3a723c9bfefce787a4681cfd84a80b5d6b75cbb17e3b9b9f9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_bouman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 10:02:34 compute-0 systemd[1]: Started libpod-conmon-a0ce1ed230f03ee3a723c9bfefce787a4681cfd84a80b5d6b75cbb17e3b9b9f9.scope.
Dec 06 10:02:34 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:02:34 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 10:02:34 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:02:34.585 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 10:02:34 compute-0 systemd[1]: Started libcrun container.
Dec 06 10:02:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/933cf4d84fcce698979f58ffcddb25023e44f446c4f03acb1e3e4865db854e70/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 10:02:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/933cf4d84fcce698979f58ffcddb25023e44f446c4f03acb1e3e4865db854e70/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 10:02:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/933cf4d84fcce698979f58ffcddb25023e44f446c4f03acb1e3e4865db854e70/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 10:02:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/933cf4d84fcce698979f58ffcddb25023e44f446c4f03acb1e3e4865db854e70/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 10:02:34 compute-0 podman[257021]: 2025-12-06 10:02:34.505245488 +0000 UTC m=+0.021890877 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 06 10:02:34 compute-0 podman[257021]: 2025-12-06 10:02:34.608215214 +0000 UTC m=+0.124860633 container init a0ce1ed230f03ee3a723c9bfefce787a4681cfd84a80b5d6b75cbb17e3b9b9f9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_bouman, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec 06 10:02:34 compute-0 podman[257021]: 2025-12-06 10:02:34.615466977 +0000 UTC m=+0.132112406 container start a0ce1ed230f03ee3a723c9bfefce787a4681cfd84a80b5d6b75cbb17e3b9b9f9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_bouman, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, ceph=True)
Dec 06 10:02:34 compute-0 podman[257021]: 2025-12-06 10:02:34.619816144 +0000 UTC m=+0.136461553 container attach a0ce1ed230f03ee3a723c9bfefce787a4681cfd84a80b5d6b75cbb17e3b9b9f9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_bouman, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Dec 06 10:02:34 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:02:34 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 10:02:34 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:02:34.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 10:02:34 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:02:34 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_34] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f472c004b30 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:02:35 compute-0 lvm[257112]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 06 10:02:35 compute-0 lvm[257112]: VG ceph_vg0 finished
Dec 06 10:02:35 compute-0 lucid_bouman[257037]: {}
Dec 06 10:02:35 compute-0 systemd[1]: libpod-a0ce1ed230f03ee3a723c9bfefce787a4681cfd84a80b5d6b75cbb17e3b9b9f9.scope: Deactivated successfully.
Dec 06 10:02:35 compute-0 systemd[1]: libpod-a0ce1ed230f03ee3a723c9bfefce787a4681cfd84a80b5d6b75cbb17e3b9b9f9.scope: Consumed 1.200s CPU time.
Dec 06 10:02:35 compute-0 podman[257021]: 2025-12-06 10:02:35.338518486 +0000 UTC m=+0.855163875 container died a0ce1ed230f03ee3a723c9bfefce787a4681cfd84a80b5d6b75cbb17e3b9b9f9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_bouman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 10:02:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-933cf4d84fcce698979f58ffcddb25023e44f446c4f03acb1e3e4865db854e70-merged.mount: Deactivated successfully.
Dec 06 10:02:35 compute-0 podman[257021]: 2025-12-06 10:02:35.378820535 +0000 UTC m=+0.895465944 container remove a0ce1ed230f03ee3a723c9bfefce787a4681cfd84a80b5d6b75cbb17e3b9b9f9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_bouman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 10:02:35 compute-0 systemd[1]: libpod-conmon-a0ce1ed230f03ee3a723c9bfefce787a4681cfd84a80b5d6b75cbb17e3b9b9f9.scope: Deactivated successfully.
Dec 06 10:02:35 compute-0 sudo[256916]: pam_unix(sudo:session): session closed for user root
Dec 06 10:02:35 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 06 10:02:35 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 10:02:35 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 06 10:02:35 compute-0 ceph-mon[74327]: pgmap v632: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Dec 06 10:02:35 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 10:02:35 compute-0 sudo[257130]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 10:02:35 compute-0 sudo[257130]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:02:35 compute-0 sudo[257130]: pam_unix(sudo:session): session closed for user root
Dec 06 10:02:36 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:02:36 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_34] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f47200046a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:02:36 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:02:36 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f47200046a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:02:36 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v633: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 10:02:36 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 10:02:36 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 10:02:36 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:02:36 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 10:02:36 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:02:36.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 10:02:36 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:02:36 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:02:36 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:02:36.654 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:02:36 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:02:36 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4724003610 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:02:37 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:02:37.112Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 06 10:02:37 compute-0 nova_compute[254819]: 2025-12-06 10:02:37.468 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:02:37 compute-0 ceph-mon[74327]: pgmap v633: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 10:02:37 compute-0 nova_compute[254819]: 2025-12-06 10:02:37.513 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:02:37 compute-0 nova_compute[254819]: 2025-12-06 10:02:37.513 254824 DEBUG nova.compute.manager [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 06 10:02:37 compute-0 nova_compute[254819]: 2025-12-06 10:02:37.514 254824 DEBUG nova.compute.manager [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 06 10:02:37 compute-0 nova_compute[254819]: 2025-12-06 10:02:37.526 254824 DEBUG nova.compute.manager [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 06 10:02:37 compute-0 nova_compute[254819]: 2025-12-06 10:02:37.527 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:02:37 compute-0 nova_compute[254819]: 2025-12-06 10:02:37.528 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:02:37 compute-0 nova_compute[254819]: 2025-12-06 10:02:37.528 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:02:37 compute-0 nova_compute[254819]: 2025-12-06 10:02:37.529 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:02:37 compute-0 nova_compute[254819]: 2025-12-06 10:02:37.529 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:02:37 compute-0 nova_compute[254819]: 2025-12-06 10:02:37.529 254824 DEBUG nova.compute.manager [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 06 10:02:37 compute-0 nova_compute[254819]: 2025-12-06 10:02:37.530 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:02:37 compute-0 nova_compute[254819]: 2025-12-06 10:02:37.559 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:02:37 compute-0 nova_compute[254819]: 2025-12-06 10:02:37.560 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:02:37 compute-0 nova_compute[254819]: 2025-12-06 10:02:37.560 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:02:37 compute-0 nova_compute[254819]: 2025-12-06 10:02:37.560 254824 DEBUG nova.compute.resource_tracker [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 10:02:37 compute-0 nova_compute[254819]: 2025-12-06 10:02:37.561 254824 DEBUG oslo_concurrency.processutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 10:02:37 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:02:38 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 06 10:02:38 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2516502700' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 10:02:38 compute-0 nova_compute[254819]: 2025-12-06 10:02:38.039 254824 DEBUG oslo_concurrency.processutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.479s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 10:02:38 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:02:38 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_36] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f47200046a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:02:38 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:02:38 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f472c004b30 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:02:38 compute-0 nova_compute[254819]: 2025-12-06 10:02:38.231 254824 WARNING nova.virt.libvirt.driver [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 10:02:38 compute-0 nova_compute[254819]: 2025-12-06 10:02:38.233 254824 DEBUG nova.compute.resource_tracker [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4937MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 06 10:02:38 compute-0 nova_compute[254819]: 2025-12-06 10:02:38.233 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:02:38 compute-0 nova_compute[254819]: 2025-12-06 10:02:38.234 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:02:38 compute-0 nova_compute[254819]: 2025-12-06 10:02:38.296 254824 DEBUG nova.compute.resource_tracker [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 10:02:38 compute-0 nova_compute[254819]: 2025-12-06 10:02:38.297 254824 DEBUG nova.compute.resource_tracker [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 06 10:02:38 compute-0 nova_compute[254819]: 2025-12-06 10:02:38.317 254824 DEBUG oslo_concurrency.processutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 10:02:38 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v634: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Dec 06 10:02:38 compute-0 ceph-mon[74327]: from='client.? 192.168.122.100:0/2516502700' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 10:02:38 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:02:38 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:02:38 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:02:38.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:02:38 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:02:38 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:02:38 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:02:38.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:02:38 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:02:38 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4754009ad0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:02:38 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 06 10:02:38 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4280590991' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 10:02:38 compute-0 nova_compute[254819]: 2025-12-06 10:02:38.827 254824 DEBUG oslo_concurrency.processutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.510s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 10:02:38 compute-0 nova_compute[254819]: 2025-12-06 10:02:38.833 254824 DEBUG nova.compute.provider_tree [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Inventory has not changed in ProviderTree for provider: 06a9c7d1-c74c-47ea-9e97-16acfab6aa88 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 10:02:38 compute-0 nova_compute[254819]: 2025-12-06 10:02:38.857 254824 DEBUG nova.scheduler.client.report [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Inventory has not changed for provider 06a9c7d1-c74c-47ea-9e97-16acfab6aa88 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 10:02:38 compute-0 nova_compute[254819]: 2025-12-06 10:02:38.858 254824 DEBUG nova.compute.resource_tracker [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 10:02:38 compute-0 nova_compute[254819]: 2025-12-06 10:02:38.859 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.625s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:02:38 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 06 10:02:38 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 10:02:39 compute-0 nova_compute[254819]: 2025-12-06 10:02:39.080 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:02:39 compute-0 nova_compute[254819]: 2025-12-06 10:02:39.080 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:02:39 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 06 10:02:39 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2010034084' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 10:02:39 compute-0 ceph-mon[74327]: pgmap v634: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Dec 06 10:02:39 compute-0 ceph-mon[74327]: from='client.? 192.168.122.101:0/1539383251' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 10:02:39 compute-0 ceph-mon[74327]: from='client.? 192.168.122.102:0/2540342928' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 10:02:39 compute-0 ceph-mon[74327]: from='client.? 192.168.122.100:0/4280590991' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 10:02:39 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 10:02:39 compute-0 ceph-mon[74327]: from='client.? 192.168.122.101:0/1691930506' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 10:02:39 compute-0 ceph-mon[74327]: from='client.? 192.168.122.102:0/2010034084' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 10:02:40 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:02:40 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4724003610 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:02:40 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:02:40 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_36] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f47200046a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:02:40 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v635: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 10:02:40 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:02:40 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:02:40 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:02:40.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:02:40 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:02:40 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:02:40 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:02:40.660 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:02:40 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:02:40 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f472c004b30 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:02:40 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:02:40] "GET /metrics HTTP/1.1" 200 48262 "" "Prometheus/2.51.0"
Dec 06 10:02:40 compute-0 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:02:40] "GET /metrics HTTP/1.1" 200 48262 "" "Prometheus/2.51.0"
Dec 06 10:02:41 compute-0 ceph-mon[74327]: pgmap v635: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 10:02:42 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:02:42 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4754009ad0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:02:42 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:02:42 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4724003610 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:02:42 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v636: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 10:02:42 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:02:42 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:02:42 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:02:42.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:02:42 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:02:42 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:02:42 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 10:02:42 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:02:42.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 10:02:42 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:02:42 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_36] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f47200046a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:02:43 compute-0 ceph-mon[74327]: pgmap v636: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 10:02:44 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:02:44 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f472c004b30 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:02:44 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:02:44 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4754009ad0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:02:44 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v637: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Dec 06 10:02:44 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:02:44 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:02:44 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:02:44.597 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:02:44 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:02:44 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:02:44 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:02:44.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:02:44 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:02:44 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4724003610 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:02:45 compute-0 ceph-mon[74327]: pgmap v637: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Dec 06 10:02:46 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:02:46 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_36] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f47200046a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:02:46 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:02:46 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f472c004b30 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:02:46 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v638: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 10:02:46 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:02:46 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:02:46 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:02:46.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:02:46 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:02:46 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:02:46 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:02:46.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:02:46 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:02:46 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4754009ad0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:02:47 compute-0 ceph-mon[74327]: from='client.? 192.168.122.10:0/795001534' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 10:02:47 compute-0 ceph-mon[74327]: from='client.? 192.168.122.10:0/795001534' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 10:02:47 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:02:47.233Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec 06 10:02:47 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:02:47.233Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec 06 10:02:47 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:02:48 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:02:48 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4724003610 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:02:48 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:02:48 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_36] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f47200046a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:02:48 compute-0 ceph-mon[74327]: pgmap v638: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 10:02:48 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v639: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Dec 06 10:02:48 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-haproxy-nfs-cephfs-compute-0-fzuvue[96127]: [WARNING] 339/100248 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Dec 06 10:02:48 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:02:48 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:02:48 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:02:48.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:02:48 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:02:48 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:02:48 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:02:48.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:02:48 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:02:48 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f472c004b30 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:02:49 compute-0 ceph-mon[74327]: pgmap v639: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Dec 06 10:02:50 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:02:50 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4754009ad0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:02:50 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:02:50 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4724003610 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:02:50 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v640: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 10:02:50 compute-0 sudo[257215]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 10:02:50 compute-0 sudo[257215]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:02:50 compute-0 sudo[257215]: pam_unix(sudo:session): session closed for user root
Dec 06 10:02:50 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:02:50 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:02:50 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:02:50.603 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:02:50 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:02:50 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 10:02:50 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:02:50.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 10:02:50 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:02:50 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_36] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f47200046a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:02:50 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:02:50] "GET /metrics HTTP/1.1" 200 48262 "" "Prometheus/2.51.0"
Dec 06 10:02:50 compute-0 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:02:50] "GET /metrics HTTP/1.1" 200 48262 "" "Prometheus/2.51.0"
Dec 06 10:02:51 compute-0 ceph-mon[74327]: pgmap v640: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 10:02:52 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:02:52 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f472c004b30 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:02:52 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:02:52 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4754009ad0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:02:52 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v641: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 10:02:52 compute-0 podman[257242]: 2025-12-06 10:02:52.45275461 +0000 UTC m=+0.072517287 container health_status a9cf33c1bf7891b3fdd9db4717ad5f3587fe9e5acf9171920c6d6334b70a502a (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec 06 10:02:52 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:02:52 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:02:52 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:02:52 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:02:52.605 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:02:52 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:02:52 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:02:52 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:02:52.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:02:52 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:02:52 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4724003610 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:02:53 compute-0 ceph-mon[74327]: pgmap v641: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 10:02:53 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 06 10:02:53 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 10:02:53 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 10:02:53 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 10:02:54 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 10:02:54 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 10:02:54 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 10:02:54 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 10:02:54 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:02:54 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_36] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f47200046a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:02:54 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:02:54 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f472c004b30 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:02:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:02:54.235 162267 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:02:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:02:54.235 162267 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:02:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:02:54.235 162267 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:02:54 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v642: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Dec 06 10:02:54 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 10:02:54 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:02:54 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:02:54 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:02:54.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:02:54 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:02:54 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:02:54 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:02:54.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:02:54 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:02:54 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4754009ad0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:02:55 compute-0 ceph-mon[74327]: pgmap v642: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Dec 06 10:02:56 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:02:56 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4724003610 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:02:56 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:02:56 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_36] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f47200046a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:02:56 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v643: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec 06 10:02:56 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:02:56 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:02:56 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:02:56.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:02:56 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:02:56 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 10:02:56 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:02:56.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 10:02:56 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:02:56 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f472c004b30 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:02:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:02:57.234Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec 06 10:02:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:02:57.234Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec 06 10:02:57 compute-0 podman[257267]: 2025-12-06 10:02:57.454072177 +0000 UTC m=+0.088092229 container health_status ed08b04f63d3695f0aa2f04fcc706897af4aa24f286fa3cd10a4323ee2c868ab (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 10:02:57 compute-0 ceph-mon[74327]: pgmap v643: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec 06 10:02:57 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:02:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:02:58 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4754009ad0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:02:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:02:58 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4724003610 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:02:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:02:58 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 06 10:02:58 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v644: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 85 B/s wr, 0 op/s
Dec 06 10:02:58 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:02:58 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:02:58 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:02:58.611 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:02:58 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:02:58 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:02:58 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:02:58.681 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:02:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:02:58 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_36] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f47200046a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:02:59 compute-0 ceph-mon[74327]: pgmap v644: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 85 B/s wr, 0 op/s
Dec 06 10:03:00 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:03:00 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f472c004b30 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:03:00 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:03:00 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4754009ad0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:03:00 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v645: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Dec 06 10:03:00 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:03:00 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:03:00 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:03:00.613 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:03:00 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:03:00 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:03:00 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:03:00.683 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:03:00 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:03:00 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4724003610 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:03:00 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:03:00] "GET /metrics HTTP/1.1" 200 48265 "" "Prometheus/2.51.0"
Dec 06 10:03:00 compute-0 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:03:00] "GET /metrics HTTP/1.1" 200 48265 "" "Prometheus/2.51.0"
Dec 06 10:03:01 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:03:01 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 06 10:03:01 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:03:01 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 06 10:03:01 compute-0 ceph-mon[74327]: pgmap v645: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Dec 06 10:03:02 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:03:02 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_36] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f47200046a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:03:02 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:03:02 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f472c004b30 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:03:02 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v646: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Dec 06 10:03:02 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:03:02 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:03:02 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:03:02 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:03:02.614 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:03:02 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:03:02 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:03:02 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:03:02.684 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:03:02 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:03:02 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4754009ad0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:03:03 compute-0 podman[257300]: 2025-12-06 10:03:03.41320624 +0000 UTC m=+0.050562965 container health_status ec1e66d2175087ad7a28a9a6b5a784e30128df3f5e268959d009e364cd5120f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, managed_by=edpm_ansible, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Dec 06 10:03:03 compute-0 ceph-mon[74327]: pgmap v646: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Dec 06 10:03:04 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:03:04 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4724003610 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:03:04 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:03:04 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_36] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f47200046a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:03:04 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v647: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Dec 06 10:03:04 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:03:04 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Dec 06 10:03:04 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:03:04 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:03:04 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:03:04.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:03:04 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:03:04 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:03:04 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:03:04.686 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:03:04 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:03:04 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f472c004b30 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:03:05 compute-0 ceph-mon[74327]: pgmap v647: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Dec 06 10:03:06 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:03:06 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4754009ad0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:03:06 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:03:06 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4724003610 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:03:06 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v648: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 938 B/s wr, 3 op/s
Dec 06 10:03:06 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:03:06 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:03:06 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:03:06.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:03:06 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:03:06 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:03:06 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:03:06.689 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:03:06 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:03:06 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_36] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f47200046a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:03:06 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:03:06.954 162267 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=2, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '9a:dc:0d', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'b6:0a:c4:b8:be:39'}, ipsec=False) old=SB_Global(nb_cfg=1) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 10:03:06 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:03:06.956 162267 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 06 10:03:06 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:03:06.957 162267 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=d39b5be8-d4cf-41c7-9a64-1ee03801f4e1, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '2'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 10:03:07 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:03:07.235Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 06 10:03:07 compute-0 ceph-mon[74327]: pgmap v648: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 938 B/s wr, 3 op/s
Dec 06 10:03:07 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:03:08 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:03:08 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f472c004b30 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:03:08 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:03:08 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4754009ad0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:03:08 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v649: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 1023 B/s wr, 3 op/s
Dec 06 10:03:08 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:03:08 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:03:08 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:03:08.620 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:03:08 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:03:08 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:03:08 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:03:08.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:03:08 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:03:08 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f47500025a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:03:08 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 06 10:03:08 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 10:03:09 compute-0 ceph-mon[74327]: pgmap v649: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 1023 B/s wr, 3 op/s
Dec 06 10:03:09 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 10:03:10 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:03:10 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_38] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f47200046a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:03:10 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:03:10 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f472c004b30 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:03:10 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v650: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 938 B/s wr, 2 op/s
Dec 06 10:03:10 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-haproxy-nfs-cephfs-compute-0-fzuvue[96127]: [WARNING] 339/100310 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Dec 06 10:03:10 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:03:10 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:03:10 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:03:10.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:03:10 compute-0 sudo[257328]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 10:03:10 compute-0 sudo[257328]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:03:10 compute-0 sudo[257328]: pam_unix(sudo:session): session closed for user root
Dec 06 10:03:10 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:03:10 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:03:10 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:03:10.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:03:10 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:03:10 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f47300023e0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:03:10 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:03:10] "GET /metrics HTTP/1.1" 200 48267 "" "Prometheus/2.51.0"
Dec 06 10:03:10 compute-0 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:03:10] "GET /metrics HTTP/1.1" 200 48267 "" "Prometheus/2.51.0"
Dec 06 10:03:11 compute-0 ceph-mon[74327]: pgmap v650: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 938 B/s wr, 2 op/s
Dec 06 10:03:12 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:03:12 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_39] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4754009ad0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:03:12 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:03:12 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f47200046a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:03:12 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v651: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 938 B/s wr, 2 op/s
Dec 06 10:03:12 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:03:12 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:03:12 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:03:12 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:03:12.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:03:12 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:03:12 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:03:12 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:03:12.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:03:12 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:03:12 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f472c004b30 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:03:13 compute-0 ceph-mon[74327]: pgmap v651: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 938 B/s wr, 2 op/s
Dec 06 10:03:14 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:03:14 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f47300023e0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:03:14 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:03:14 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_39] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4754009ad0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:03:14 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v652: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 938 B/s wr, 2 op/s
Dec 06 10:03:14 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:03:14 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:03:14 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:03:14.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:03:14 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:03:14 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:03:14 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:03:14.698 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:03:14 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:03:14 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f47200046a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:03:15 compute-0 ceph-mon[74327]: pgmap v652: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 938 B/s wr, 2 op/s
Dec 06 10:03:16 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:03:16 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f472c004b30 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:03:16 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:03:16 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f47300023e0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:03:16 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v653: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Dec 06 10:03:16 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:03:16 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:03:16 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:03:16.628 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:03:16 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:03:16 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:03:16 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:03:16.701 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:03:16 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:03:16 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_39] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4754009ad0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:03:17 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:03:17.236Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 06 10:03:17 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:03:17 compute-0 ceph-mon[74327]: pgmap v653: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Dec 06 10:03:18 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:03:18 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f47200046a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:03:18 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:03:18 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f472c004b30 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:03:18 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v654: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Dec 06 10:03:18 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:03:18 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 10:03:18 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:03:18.630 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 10:03:18 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:03:18 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:03:18 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:03:18.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:03:18 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:03:18 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f47300023e0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:03:19 compute-0 ceph-mon[74327]: pgmap v654: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Dec 06 10:03:20 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:03:20 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_39] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4754009ad0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:03:20 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:03:20 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f47200046c0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:03:20 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v655: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 10:03:20 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:03:20 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:03:20 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:03:20.632 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:03:20 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:03:20 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:03:20 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:03:20.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:03:20 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:03:20 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f472c004b30 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:03:20 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:03:20] "GET /metrics HTTP/1.1" 200 48267 "" "Prometheus/2.51.0"
Dec 06 10:03:20 compute-0 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:03:20] "GET /metrics HTTP/1.1" 200 48267 "" "Prometheus/2.51.0"
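The same scrape is logged twice, once by the container unit and once by the mgr's cherrypy access log: Prometheus pulls /metrics from the mgr prometheus module every ten seconds. Fetching the same target manually (port 9283 is the module's usual default, an assumption since the log omits it):

    import urllib.request

    # 9283 is the mgr prometheus module's default port (assumed; not logged).
    with urllib.request.urlopen("http://192.168.122.100:9283/metrics", timeout=5) as r:
        first = r.read().decode().splitlines()[0]
    print(first)  # e.g. a "# HELP ..." line from the ~48 KiB payload scraped above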
Dec 06 10:03:21 compute-0 ceph-mon[74327]: pgmap v655: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 10:03:22 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:03:22 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f47300023e0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:03:22 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:03:22 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_39] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4754009ad0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:03:22 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v656: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 10:03:22 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:03:22 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:03:22 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:03:22 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:03:22.635 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:03:22 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:03:22 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 10:03:22 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:03:22.710 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 10:03:22 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:03:22 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f47200046e0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:03:23 compute-0 podman[257366]: 2025-12-06 10:03:23.45792821 +0000 UTC m=+0.075685083 container health_status a9cf33c1bf7891b3fdd9db4717ad5f3587fe9e5acf9171920c6d6334b70a502a (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
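The health_status=healthy events like the one above come from podman's periodic healthcheck running the configured '/openstack/healthcheck' test inside the container. The same check can be triggered on demand; a sketch assuming rootful podman on this host:

    import subprocess

    # Same test podman's healthcheck timer runs; rc 0 = healthy, 1 = unhealthy.
    rc = subprocess.run(["podman", "healthcheck", "run", "multipathd"]).returncode
    print("healthy" if rc == 0 else f"unhealthy (rc={rc})")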
Dec 06 10:03:23 compute-0 ceph-mon[74327]: pgmap v656: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 10:03:23 compute-0 ceph-mgr[74618]: [balancer INFO root] Optimize plan auto_2025-12-06_10:03:23
Dec 06 10:03:23 compute-0 ceph-mgr[74618]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 06 10:03:23 compute-0 ceph-mgr[74618]: [balancer INFO root] do_upmap
Dec 06 10:03:23 compute-0 ceph-mgr[74618]: [balancer INFO root] pools ['backups', 'default.rgw.control', 'default.rgw.meta', 'volumes', 'cephfs.cephfs.data', '.mgr', 'cephfs.cephfs.meta', 'default.rgw.log', '.nfs', 'images', 'vms', '.rgw.root']
Dec 06 10:03:23 compute-0 ceph-mgr[74618]: [balancer INFO root] prepared 0/10 upmap changes
Dec 06 10:03:23 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 06 10:03:23 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 10:03:23 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 10:03:23 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 10:03:24 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 10:03:24 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 10:03:24 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 10:03:24 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 10:03:24 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:03:24 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f472c004b30 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:03:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] _maybe_adjust
Dec 06 10:03:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:03:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 06 10:03:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:03:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 10:03:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:03:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 10:03:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:03:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 10:03:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:03:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 10:03:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:03:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Dec 06 10:03:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:03:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 10:03:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:03:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec 06 10:03:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:03:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 06 10:03:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:03:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec 06 10:03:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:03:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 10:03:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:03:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
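The raw pg target in each autoscaler line is reproducible from the logged numbers: usage ratio times bias times a PG budget, here consistent with the usual mon_target_pg_per_osd (100) times this cluster's 3 OSDs (both assumptions; the budget itself is not logged). Targets far below a pool's minimum are then quantized back up to the current pg_num, which is why every pool reads "quantized to N (current N)" and nothing changes:

    def pg_target(usage_ratio, bias, pg_budget=300):
        """Raw autoscaler target before power-of-two quantization.

        pg_budget = mon_target_pg_per_osd (100) x 3 OSDs -- an assumption
        that reproduces the logged values exactly.
        """
        return usage_ratio * bias * pg_budget

    # '.mgr' line: 7.185749983720779e-06 of space, bias 1.0
    print(pg_target(7.185749983720779e-06, 1.0))  # 0.0021557249951162337
    # 'cephfs.cephfs.meta' line: 5.087256625643029e-07 of space, bias 4.0
    print(pg_target(5.087256625643029e-07, 4.0))  # 0.0006104707950771635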
Dec 06 10:03:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 06 10:03:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 10:03:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 10:03:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 10:03:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 10:03:24 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:03:24 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f47300023e0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:03:24 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v657: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 10:03:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 06 10:03:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 10:03:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 10:03:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 10:03:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 10:03:24 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:03:24 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:03:24 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:03:24.638 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:03:24 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:03:24 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:03:24 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:03:24.712 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:03:24 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 10:03:24 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:03:24 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_39] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4754009ad0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:03:25 compute-0 ceph-mon[74327]: pgmap v657: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 10:03:26 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:03:26 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4720004700 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:03:26 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:03:26 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f472c004b30 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:03:26 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v658: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 10:03:26 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:03:26 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:03:26 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:03:26.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:03:26 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:03:26 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:03:26 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:03:26.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:03:26 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:03:26 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f47300023e0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:03:27 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:03:27.236Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 06 10:03:27 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:03:27 compute-0 ceph-mon[74327]: pgmap v658: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 10:03:27 compute-0 ceph-mgr[74618]: [devicehealth INFO root] Check health
Dec 06 10:03:28 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:03:28 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_39] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4754009ad0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:03:28 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:03:28 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4720004720 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:03:28 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v659: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:03:28 compute-0 podman[257389]: 2025-12-06 10:03:28.45532544 +0000 UTC m=+0.093160964 container health_status ed08b04f63d3695f0aa2f04fcc706897af4aa24f286fa3cd10a4323ee2c868ab (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Dec 06 10:03:28 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:03:28 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 10:03:28 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:03:28.643 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 10:03:28 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:03:28 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:03:28 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:03:28.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:03:28 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:03:28 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f472c004b30 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:03:29 compute-0 ceph-mon[74327]: pgmap v659: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:03:30 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:03:30 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f47300023e0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:03:30 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:03:30 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_39] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4754009ad0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:03:30 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v660: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 10:03:30 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:03:30 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:03:30 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:03:30.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:03:30 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:03:30 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:03:30 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:03:30.721 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:03:30 compute-0 sudo[257417]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 10:03:30 compute-0 sudo[257417]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:03:30 compute-0 sudo[257417]: pam_unix(sudo:session): session closed for user root
Dec 06 10:03:30 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:03:30 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4720004740 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:03:30 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:03:30] "GET /metrics HTTP/1.1" 200 48266 "" "Prometheus/2.51.0"
Dec 06 10:03:30 compute-0 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:03:30] "GET /metrics HTTP/1.1" 200 48266 "" "Prometheus/2.51.0"
Dec 06 10:03:31 compute-0 ceph-mon[74327]: pgmap v660: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 10:03:32 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:03:32 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f472c004b30 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:03:32 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:03:32 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f47300023e0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:03:32 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v661: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 10:03:32 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:03:32 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:03:32 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:03:32 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:03:32.648 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:03:32 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:03:32 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:03:32 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:03:32.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:03:32 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:03:32 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_39] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4754009ad0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:03:33 compute-0 ceph-mon[74327]: pgmap v661: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 10:03:34 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:03:34 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4720004760 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:03:34 compute-0 podman[257446]: 2025-12-06 10:03:34.208396652 +0000 UTC m=+0.047840592 container health_status ec1e66d2175087ad7a28a9a6b5a784e30128df3f5e268959d009e364cd5120f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Dec 06 10:03:34 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:03:34 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f472c004b30 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:03:34 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v662: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 10:03:34 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:03:34 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:03:34 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:03:34.650 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:03:34 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:03:34 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:03:34 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:03:34.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:03:34 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:03:34 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_39] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f472c004b30 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:03:35 compute-0 nova_compute[254819]: 2025-12-06 10:03:35.748 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:03:35 compute-0 sudo[257468]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 10:03:35 compute-0 sudo[257468]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:03:35 compute-0 sudo[257468]: pam_unix(sudo:session): session closed for user root
Dec 06 10:03:35 compute-0 ceph-mon[74327]: pgmap v662: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 10:03:35 compute-0 sudo[257493]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Dec 06 10:03:35 compute-0 sudo[257493]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
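The sudo command above is cephadm's orchestrator polling this host: it runs the bundled cephadm binary's gather-facts subcommand, which prints a JSON document of host facts back to the mgr. Invoking it directly looks roughly like this (assumes cephadm on PATH and root privileges; the two keys printed are examples of what gather-facts usually emits, not guaranteed names):

    import json
    import subprocess

    # Assumes `cephadm` is on PATH and this runs as root; the key names are
    # examples of typical gather-facts output, hence .get() rather than [].
    facts = json.loads(subprocess.check_output(["cephadm", "gather-facts"]))
    print(facts.get("hostname"), facts.get("memory_total_kb"))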
Dec 06 10:03:36 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:03:36 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_39] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4754009ad0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:03:36 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:03:36 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4720004780 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:03:36 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v663: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 10:03:36 compute-0 sudo[257493]: pam_unix(sudo:session): session closed for user root
Dec 06 10:03:36 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 06 10:03:36 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 10:03:36 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 06 10:03:36 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 10:03:36 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 06 10:03:36 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 10:03:36 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Dec 06 10:03:36 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 10:03:36 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 06 10:03:36 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 10:03:36 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 06 10:03:36 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 10:03:36 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 06 10:03:36 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 10:03:36 compute-0 sudo[257551]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 10:03:36 compute-0 sudo[257551]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:03:36 compute-0 sudo[257551]: pam_unix(sudo:session): session closed for user root
Dec 06 10:03:36 compute-0 sudo[257576]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 5ecd3f74-dade-5fc4-92ce-8950ae424258 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Dec 06 10:03:36 compute-0 sudo[257576]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:03:36 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:03:36 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:03:36 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:03:36.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:03:36 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:03:36 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:03:36 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:03:36.729 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:03:36 compute-0 nova_compute[254819]: 2025-12-06 10:03:36.749 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:03:36 compute-0 nova_compute[254819]: 2025-12-06 10:03:36.749 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:03:36 compute-0 nova_compute[254819]: 2025-12-06 10:03:36.772 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:03:36 compute-0 nova_compute[254819]: 2025-12-06 10:03:36.772 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:03:36 compute-0 nova_compute[254819]: 2025-12-06 10:03:36.773 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:03:36 compute-0 nova_compute[254819]: 2025-12-06 10:03:36.773 254824 DEBUG nova.compute.resource_tracker [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 10:03:36 compute-0 nova_compute[254819]: 2025-12-06 10:03:36.773 254824 DEBUG oslo_concurrency.processutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
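update_available_resource audits Ceph-backed disk capacity by shelling out to ceph df, exactly as the "Running cmd" line records. The equivalent call, parsed (the JSON field names assume the current ceph df schema):

    import json
    import subprocess

    # Command copied verbatim from the log line above.
    out = subprocess.check_output(
        ["ceph", "df", "--format=json", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"])
    stats = json.loads(out)["stats"]  # field names assume the current schema
    print(stats["total_bytes"], stats["total_avail_bytes"])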
Dec 06 10:03:36 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:03:36 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f472c004b30 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:03:36 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 10:03:36 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 10:03:36 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 10:03:36 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 10:03:36 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 10:03:36 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 10:03:36 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 10:03:37 compute-0 podman[257663]: 2025-12-06 10:03:37.036589677 +0000 UTC m=+0.041644854 container create 4ae24b56773542d25cc5a6e2606ce41987993a760cd616bcc01a0806717a17b6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_torvalds, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Dec 06 10:03:37 compute-0 systemd[1]: Started libpod-conmon-4ae24b56773542d25cc5a6e2606ce41987993a760cd616bcc01a0806717a17b6.scope.
Dec 06 10:03:37 compute-0 systemd[1]: Started libcrun container.
Dec 06 10:03:37 compute-0 podman[257663]: 2025-12-06 10:03:37.109439763 +0000 UTC m=+0.114494980 container init 4ae24b56773542d25cc5a6e2606ce41987993a760cd616bcc01a0806717a17b6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_torvalds, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, io.buildah.version=1.40.1)
Dec 06 10:03:37 compute-0 podman[257663]: 2025-12-06 10:03:37.015398036 +0000 UTC m=+0.020453393 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 06 10:03:37 compute-0 podman[257663]: 2025-12-06 10:03:37.117627304 +0000 UTC m=+0.122682481 container start 4ae24b56773542d25cc5a6e2606ce41987993a760cd616bcc01a0806717a17b6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_torvalds, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 10:03:37 compute-0 podman[257663]: 2025-12-06 10:03:37.120401239 +0000 UTC m=+0.125456466 container attach 4ae24b56773542d25cc5a6e2606ce41987993a760cd616bcc01a0806717a17b6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_torvalds, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Dec 06 10:03:37 compute-0 gallant_torvalds[257679]: 167 167
Dec 06 10:03:37 compute-0 systemd[1]: libpod-4ae24b56773542d25cc5a6e2606ce41987993a760cd616bcc01a0806717a17b6.scope: Deactivated successfully.
Dec 06 10:03:37 compute-0 podman[257663]: 2025-12-06 10:03:37.124561621 +0000 UTC m=+0.129616828 container died 4ae24b56773542d25cc5a6e2606ce41987993a760cd616bcc01a0806717a17b6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_torvalds, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Dec 06 10:03:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-b8fa4ef833de826db19c2f441f77d62d91c7e2f15086dca401d84bab21596192-merged.mount: Deactivated successfully.
Dec 06 10:03:37 compute-0 podman[257663]: 2025-12-06 10:03:37.169247207 +0000 UTC m=+0.174302394 container remove 4ae24b56773542d25cc5a6e2606ce41987993a760cd616bcc01a0806717a17b6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_torvalds, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 06 10:03:37 compute-0 systemd[1]: libpod-conmon-4ae24b56773542d25cc5a6e2606ce41987993a760cd616bcc01a0806717a17b6.scope: Deactivated successfully.
Dec 06 10:03:37 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 06 10:03:37 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2391830354' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 10:03:37 compute-0 nova_compute[254819]: 2025-12-06 10:03:37.232 254824 DEBUG oslo_concurrency.processutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 10:03:37 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:03:37.238Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 06 10:03:37 compute-0 podman[257705]: 2025-12-06 10:03:37.356808538 +0000 UTC m=+0.044983214 container create 9d59fc11eb313ee69550f2ac27f5de9707783844ff4beb04babb49ab3e04e426 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_lovelace, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 10:03:37 compute-0 systemd[1]: Started libpod-conmon-9d59fc11eb313ee69550f2ac27f5de9707783844ff4beb04babb49ab3e04e426.scope.
Dec 06 10:03:37 compute-0 podman[257705]: 2025-12-06 10:03:37.337268681 +0000 UTC m=+0.025443397 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 06 10:03:37 compute-0 nova_compute[254819]: 2025-12-06 10:03:37.433 254824 WARNING nova.virt.libvirt.driver [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 10:03:37 compute-0 systemd[1]: Started libcrun container.
Dec 06 10:03:37 compute-0 nova_compute[254819]: 2025-12-06 10:03:37.436 254824 DEBUG nova.compute.resource_tracker [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4865MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 06 10:03:37 compute-0 nova_compute[254819]: 2025-12-06 10:03:37.437 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:03:37 compute-0 nova_compute[254819]: 2025-12-06 10:03:37.437 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:03:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4415406fc8c439408b0a7b1ec67ca974922c6a87924578e0dac95c6e949d989/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 10:03:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4415406fc8c439408b0a7b1ec67ca974922c6a87924578e0dac95c6e949d989/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 10:03:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4415406fc8c439408b0a7b1ec67ca974922c6a87924578e0dac95c6e949d989/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 10:03:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4415406fc8c439408b0a7b1ec67ca974922c6a87924578e0dac95c6e949d989/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 10:03:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4415406fc8c439408b0a7b1ec67ca974922c6a87924578e0dac95c6e949d989/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 06 10:03:37 compute-0 podman[257705]: 2025-12-06 10:03:37.454631628 +0000 UTC m=+0.142806324 container init 9d59fc11eb313ee69550f2ac27f5de9707783844ff4beb04babb49ab3e04e426 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_lovelace, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, ceph=True)
Dec 06 10:03:37 compute-0 podman[257705]: 2025-12-06 10:03:37.461845962 +0000 UTC m=+0.150020648 container start 9d59fc11eb313ee69550f2ac27f5de9707783844ff4beb04babb49ab3e04e426 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_lovelace, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 10:03:37 compute-0 podman[257705]: 2025-12-06 10:03:37.464574386 +0000 UTC m=+0.152749202 container attach 9d59fc11eb313ee69550f2ac27f5de9707783844ff4beb04babb49ab3e04e426 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_lovelace, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_REF=squid)
Dec 06 10:03:37 compute-0 nova_compute[254819]: 2025-12-06 10:03:37.550 254824 DEBUG nova.compute.resource_tracker [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 10:03:37 compute-0 nova_compute[254819]: 2025-12-06 10:03:37.551 254824 DEBUG nova.compute.resource_tracker [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 06 10:03:37 compute-0 nova_compute[254819]: 2025-12-06 10:03:37.569 254824 DEBUG oslo_concurrency.processutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 10:03:37 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:03:37 compute-0 mystifying_lovelace[257721]: --> passed data devices: 0 physical, 1 LVM
Dec 06 10:03:37 compute-0 mystifying_lovelace[257721]: --> All data devices are unavailable
Dec 06 10:03:37 compute-0 systemd[1]: libpod-9d59fc11eb313ee69550f2ac27f5de9707783844ff4beb04babb49ab3e04e426.scope: Deactivated successfully.
Dec 06 10:03:37 compute-0 podman[257705]: 2025-12-06 10:03:37.824307984 +0000 UTC m=+0.512482680 container died 9d59fc11eb313ee69550f2ac27f5de9707783844ff4beb04babb49ab3e04e426 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_lovelace, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 10:03:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-b4415406fc8c439408b0a7b1ec67ca974922c6a87924578e0dac95c6e949d989-merged.mount: Deactivated successfully.
Dec 06 10:03:37 compute-0 podman[257705]: 2025-12-06 10:03:37.874980221 +0000 UTC m=+0.563154907 container remove 9d59fc11eb313ee69550f2ac27f5de9707783844ff4beb04babb49ab3e04e426 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_lovelace, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 10:03:37 compute-0 ceph-mon[74327]: pgmap v663: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 10:03:37 compute-0 ceph-mon[74327]: from='client.? 192.168.122.100:0/2391830354' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 10:03:37 compute-0 systemd[1]: libpod-conmon-9d59fc11eb313ee69550f2ac27f5de9707783844ff4beb04babb49ab3e04e426.scope: Deactivated successfully.
Dec 06 10:03:37 compute-0 sudo[257576]: pam_unix(sudo:session): session closed for user root
Dec 06 10:03:37 compute-0 sudo[257769]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 10:03:38 compute-0 sudo[257769]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:03:38 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 06 10:03:38 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3270802321' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 10:03:38 compute-0 sudo[257769]: pam_unix(sudo:session): session closed for user root
Dec 06 10:03:38 compute-0 nova_compute[254819]: 2025-12-06 10:03:38.028 254824 DEBUG oslo_concurrency.processutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 10:03:38 compute-0 nova_compute[254819]: 2025-12-06 10:03:38.034 254824 DEBUG nova.compute.provider_tree [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Inventory has not changed in ProviderTree for provider: 06a9c7d1-c74c-47ea-9e97-16acfab6aa88 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 10:03:38 compute-0 nova_compute[254819]: 2025-12-06 10:03:38.048 254824 DEBUG nova.scheduler.client.report [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Inventory has not changed for provider 06a9c7d1-c74c-47ea-9e97-16acfab6aa88 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 10:03:38 compute-0 nova_compute[254819]: 2025-12-06 10:03:38.050 254824 DEBUG nova.compute.resource_tracker [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 10:03:38 compute-0 nova_compute[254819]: 2025-12-06 10:03:38.050 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.613s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:03:38 compute-0 sudo[257796]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 5ecd3f74-dade-5fc4-92ce-8950ae424258 -- lvm list --format json
Dec 06 10:03:38 compute-0 sudo[257796]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:03:38 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:03:38 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f47300023e0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:03:38 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:03:38 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_39] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4754009ad0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:03:38 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v664: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:03:38 compute-0 podman[257861]: 2025-12-06 10:03:38.513323416 +0000 UTC m=+0.050544036 container create 1d16e8bb005b0e1a2e5e2bbf18a3ff953337b4725d39870c98f557245f6cfa1e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_hugle, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 06 10:03:38 compute-0 systemd[1]: Started libpod-conmon-1d16e8bb005b0e1a2e5e2bbf18a3ff953337b4725d39870c98f557245f6cfa1e.scope.
Dec 06 10:03:38 compute-0 podman[257861]: 2025-12-06 10:03:38.493423478 +0000 UTC m=+0.030644098 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 06 10:03:38 compute-0 systemd[1]: Started libcrun container.
Dec 06 10:03:38 compute-0 podman[257861]: 2025-12-06 10:03:38.621232957 +0000 UTC m=+0.158453667 container init 1d16e8bb005b0e1a2e5e2bbf18a3ff953337b4725d39870c98f557245f6cfa1e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_hugle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec 06 10:03:38 compute-0 podman[257861]: 2025-12-06 10:03:38.63282586 +0000 UTC m=+0.170046510 container start 1d16e8bb005b0e1a2e5e2bbf18a3ff953337b4725d39870c98f557245f6cfa1e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_hugle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Dec 06 10:03:38 compute-0 elastic_hugle[257877]: 167 167
Dec 06 10:03:38 compute-0 podman[257861]: 2025-12-06 10:03:38.637431294 +0000 UTC m=+0.174651954 container attach 1d16e8bb005b0e1a2e5e2bbf18a3ff953337b4725d39870c98f557245f6cfa1e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_hugle, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325)
Dec 06 10:03:38 compute-0 systemd[1]: libpod-1d16e8bb005b0e1a2e5e2bbf18a3ff953337b4725d39870c98f557245f6cfa1e.scope: Deactivated successfully.
Dec 06 10:03:38 compute-0 conmon[257877]: conmon 1d16e8bb005b0e1a2e5e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-1d16e8bb005b0e1a2e5e2bbf18a3ff953337b4725d39870c98f557245f6cfa1e.scope/container/memory.events
Dec 06 10:03:38 compute-0 podman[257861]: 2025-12-06 10:03:38.640061006 +0000 UTC m=+0.177281656 container died 1d16e8bb005b0e1a2e5e2bbf18a3ff953337b4725d39870c98f557245f6cfa1e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_hugle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=squid, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 06 10:03:38 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:03:38 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 10:03:38 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:03:38.654 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 10:03:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-1e732b3830518341a6b166182c596437093311d0bc7825d75664ede4ccaca28a-merged.mount: Deactivated successfully.
Dec 06 10:03:38 compute-0 podman[257861]: 2025-12-06 10:03:38.694315349 +0000 UTC m=+0.231535969 container remove 1d16e8bb005b0e1a2e5e2bbf18a3ff953337b4725d39870c98f557245f6cfa1e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_hugle, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 06 10:03:38 compute-0 systemd[1]: libpod-conmon-1d16e8bb005b0e1a2e5e2bbf18a3ff953337b4725d39870c98f557245f6cfa1e.scope: Deactivated successfully.
Dec 06 10:03:38 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:03:38 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:03:38 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:03:38.731 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:03:38 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:03:38 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f47200047a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:03:38 compute-0 podman[257900]: 2025-12-06 10:03:38.886240789 +0000 UTC m=+0.043604278 container create 2870b06acfc2efcb0c594f479e98f1297f7ff5e7cea3e3dc289f9d62ef59f471 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_goodall, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 10:03:38 compute-0 ceph-mon[74327]: from='client.? 192.168.122.102:0/2255593849' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 10:03:38 compute-0 ceph-mon[74327]: from='client.? 192.168.122.100:0/3270802321' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 10:03:38 compute-0 ceph-mon[74327]: pgmap v664: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:03:38 compute-0 ceph-mon[74327]: from='client.? 192.168.122.102:0/2555214126' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 10:03:38 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 06 10:03:38 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 10:03:38 compute-0 systemd[1]: Started libpod-conmon-2870b06acfc2efcb0c594f479e98f1297f7ff5e7cea3e3dc289f9d62ef59f471.scope.
Dec 06 10:03:38 compute-0 systemd[1]: Started libcrun container.
Dec 06 10:03:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/784f4b4fadee7f376aaf8f29cb04b6f9287f020b1b286ff8a1446547550bd5c5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 10:03:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/784f4b4fadee7f376aaf8f29cb04b6f9287f020b1b286ff8a1446547550bd5c5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 10:03:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/784f4b4fadee7f376aaf8f29cb04b6f9287f020b1b286ff8a1446547550bd5c5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 10:03:38 compute-0 podman[257900]: 2025-12-06 10:03:38.868276473 +0000 UTC m=+0.025639982 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 06 10:03:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/784f4b4fadee7f376aaf8f29cb04b6f9287f020b1b286ff8a1446547550bd5c5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 10:03:38 compute-0 podman[257900]: 2025-12-06 10:03:38.974611003 +0000 UTC m=+0.131974512 container init 2870b06acfc2efcb0c594f479e98f1297f7ff5e7cea3e3dc289f9d62ef59f471 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_goodall, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 10:03:38 compute-0 podman[257900]: 2025-12-06 10:03:38.984200862 +0000 UTC m=+0.141564351 container start 2870b06acfc2efcb0c594f479e98f1297f7ff5e7cea3e3dc289f9d62ef59f471 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_goodall, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 10:03:38 compute-0 podman[257900]: 2025-12-06 10:03:38.987819449 +0000 UTC m=+0.145182958 container attach 2870b06acfc2efcb0c594f479e98f1297f7ff5e7cea3e3dc289f9d62ef59f471 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_goodall, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 10:03:39 compute-0 nova_compute[254819]: 2025-12-06 10:03:39.050 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:03:39 compute-0 nova_compute[254819]: 2025-12-06 10:03:39.051 254824 DEBUG nova.compute.manager [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 06 10:03:39 compute-0 nova_compute[254819]: 2025-12-06 10:03:39.051 254824 DEBUG nova.compute.manager [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 06 10:03:39 compute-0 nova_compute[254819]: 2025-12-06 10:03:39.068 254824 DEBUG nova.compute.manager [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 06 10:03:39 compute-0 nova_compute[254819]: 2025-12-06 10:03:39.069 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:03:39 compute-0 nova_compute[254819]: 2025-12-06 10:03:39.069 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:03:39 compute-0 nova_compute[254819]: 2025-12-06 10:03:39.069 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:03:39 compute-0 nova_compute[254819]: 2025-12-06 10:03:39.069 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:03:39 compute-0 nova_compute[254819]: 2025-12-06 10:03:39.069 254824 DEBUG nova.compute.manager [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 06 10:03:39 compute-0 adoring_goodall[257917]: {
Dec 06 10:03:39 compute-0 adoring_goodall[257917]:     "1": [
Dec 06 10:03:39 compute-0 adoring_goodall[257917]:         {
Dec 06 10:03:39 compute-0 adoring_goodall[257917]:             "devices": [
Dec 06 10:03:39 compute-0 adoring_goodall[257917]:                 "/dev/loop3"
Dec 06 10:03:39 compute-0 adoring_goodall[257917]:             ],
Dec 06 10:03:39 compute-0 adoring_goodall[257917]:             "lv_name": "ceph_lv0",
Dec 06 10:03:39 compute-0 adoring_goodall[257917]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 10:03:39 compute-0 adoring_goodall[257917]:             "lv_size": "21470642176",
Dec 06 10:03:39 compute-0 adoring_goodall[257917]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=a55pcS-v3Vt-CJ4W-fqCV-Baze-9VlY-jG9prS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=5ecd3f74-dade-5fc4-92ce-8950ae424258,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=7899c4d8-edb4-4836-b838-c4aa702ad7af,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 06 10:03:39 compute-0 adoring_goodall[257917]:             "lv_uuid": "a55pcS-v3Vt-CJ4W-fqCV-Baze-9VlY-jG9prS",
Dec 06 10:03:39 compute-0 adoring_goodall[257917]:             "name": "ceph_lv0",
Dec 06 10:03:39 compute-0 adoring_goodall[257917]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 10:03:39 compute-0 adoring_goodall[257917]:             "tags": {
Dec 06 10:03:39 compute-0 adoring_goodall[257917]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 06 10:03:39 compute-0 adoring_goodall[257917]:                 "ceph.block_uuid": "a55pcS-v3Vt-CJ4W-fqCV-Baze-9VlY-jG9prS",
Dec 06 10:03:39 compute-0 adoring_goodall[257917]:                 "ceph.cephx_lockbox_secret": "",
Dec 06 10:03:39 compute-0 adoring_goodall[257917]:                 "ceph.cluster_fsid": "5ecd3f74-dade-5fc4-92ce-8950ae424258",
Dec 06 10:03:39 compute-0 adoring_goodall[257917]:                 "ceph.cluster_name": "ceph",
Dec 06 10:03:39 compute-0 adoring_goodall[257917]:                 "ceph.crush_device_class": "",
Dec 06 10:03:39 compute-0 adoring_goodall[257917]:                 "ceph.encrypted": "0",
Dec 06 10:03:39 compute-0 adoring_goodall[257917]:                 "ceph.osd_fsid": "7899c4d8-edb4-4836-b838-c4aa702ad7af",
Dec 06 10:03:39 compute-0 adoring_goodall[257917]:                 "ceph.osd_id": "1",
Dec 06 10:03:39 compute-0 adoring_goodall[257917]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 06 10:03:39 compute-0 adoring_goodall[257917]:                 "ceph.type": "block",
Dec 06 10:03:39 compute-0 adoring_goodall[257917]:                 "ceph.vdo": "0",
Dec 06 10:03:39 compute-0 adoring_goodall[257917]:                 "ceph.with_tpm": "0"
Dec 06 10:03:39 compute-0 adoring_goodall[257917]:             },
Dec 06 10:03:39 compute-0 adoring_goodall[257917]:             "type": "block",
Dec 06 10:03:39 compute-0 adoring_goodall[257917]:             "vg_name": "ceph_vg0"
Dec 06 10:03:39 compute-0 adoring_goodall[257917]:         }
Dec 06 10:03:39 compute-0 adoring_goodall[257917]:     ]
Dec 06 10:03:39 compute-0 adoring_goodall[257917]: }
Dec 06 10:03:39 compute-0 systemd[1]: libpod-2870b06acfc2efcb0c594f479e98f1297f7ff5e7cea3e3dc289f9d62ef59f471.scope: Deactivated successfully.
Dec 06 10:03:39 compute-0 podman[257900]: 2025-12-06 10:03:39.273871568 +0000 UTC m=+0.431235057 container died 2870b06acfc2efcb0c594f479e98f1297f7ff5e7cea3e3dc289f9d62ef59f471 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_goodall, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True)
Dec 06 10:03:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-784f4b4fadee7f376aaf8f29cb04b6f9287f020b1b286ff8a1446547550bd5c5-merged.mount: Deactivated successfully.
Dec 06 10:03:39 compute-0 podman[257900]: 2025-12-06 10:03:39.322050578 +0000 UTC m=+0.479414067 container remove 2870b06acfc2efcb0c594f479e98f1297f7ff5e7cea3e3dc289f9d62ef59f471 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_goodall, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Dec 06 10:03:39 compute-0 systemd[1]: libpod-conmon-2870b06acfc2efcb0c594f479e98f1297f7ff5e7cea3e3dc289f9d62ef59f471.scope: Deactivated successfully.
Dec 06 10:03:39 compute-0 sudo[257796]: pam_unix(sudo:session): session closed for user root
Dec 06 10:03:39 compute-0 sudo[257940]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 10:03:39 compute-0 sudo[257940]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:03:39 compute-0 sudo[257940]: pam_unix(sudo:session): session closed for user root
Dec 06 10:03:39 compute-0 sudo[257966]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 5ecd3f74-dade-5fc4-92ce-8950ae424258 -- raw list --format json
Dec 06 10:03:39 compute-0 sudo[257966]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:03:39 compute-0 nova_compute[254819]: 2025-12-06 10:03:39.761 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:03:39 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 10:03:39 compute-0 ceph-mon[74327]: from='client.? 192.168.122.101:0/3965991936' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 10:03:39 compute-0 podman[258031]: 2025-12-06 10:03:39.930375382 +0000 UTC m=+0.051969892 container create e9400f1d2db7c9994685abd3ae5b7fa596e1fd588d549b851f38ecdc263999d4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_vaughan, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Dec 06 10:03:39 compute-0 systemd[1]: Started libpod-conmon-e9400f1d2db7c9994685abd3ae5b7fa596e1fd588d549b851f38ecdc263999d4.scope.
Dec 06 10:03:39 compute-0 systemd[1]: Started libcrun container.
Dec 06 10:03:39 compute-0 podman[258031]: 2025-12-06 10:03:39.90656097 +0000 UTC m=+0.028155520 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 06 10:03:40 compute-0 podman[258031]: 2025-12-06 10:03:40.016751893 +0000 UTC m=+0.138346413 container init e9400f1d2db7c9994685abd3ae5b7fa596e1fd588d549b851f38ecdc263999d4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_vaughan, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Dec 06 10:03:40 compute-0 podman[258031]: 2025-12-06 10:03:40.029426696 +0000 UTC m=+0.151021196 container start e9400f1d2db7c9994685abd3ae5b7fa596e1fd588d549b851f38ecdc263999d4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_vaughan, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 06 10:03:40 compute-0 podman[258031]: 2025-12-06 10:03:40.033236608 +0000 UTC m=+0.154831138 container attach e9400f1d2db7c9994685abd3ae5b7fa596e1fd588d549b851f38ecdc263999d4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_vaughan, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 10:03:40 compute-0 laughing_vaughan[258048]: 167 167
Dec 06 10:03:40 compute-0 systemd[1]: libpod-e9400f1d2db7c9994685abd3ae5b7fa596e1fd588d549b851f38ecdc263999d4.scope: Deactivated successfully.
Dec 06 10:03:40 compute-0 podman[258031]: 2025-12-06 10:03:40.034934544 +0000 UTC m=+0.156529044 container died e9400f1d2db7c9994685abd3ae5b7fa596e1fd588d549b851f38ecdc263999d4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_vaughan, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec 06 10:03:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-9d6dcd015579376f6be1ce1d725de4f82563e5efdb900858466230d697873ff2-merged.mount: Deactivated successfully.
Dec 06 10:03:40 compute-0 podman[258031]: 2025-12-06 10:03:40.083993088 +0000 UTC m=+0.205587588 container remove e9400f1d2db7c9994685abd3ae5b7fa596e1fd588d549b851f38ecdc263999d4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_vaughan, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 10:03:40 compute-0 systemd[1]: libpod-conmon-e9400f1d2db7c9994685abd3ae5b7fa596e1fd588d549b851f38ecdc263999d4.scope: Deactivated successfully.
Dec 06 10:03:40 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:03:40 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f472c004b30 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:03:40 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:03:40 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f472c004b30 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:03:40 compute-0 podman[258072]: 2025-12-06 10:03:40.288229929 +0000 UTC m=+0.061352457 container create 699e7265ae2cc04e714cb6a68f7b6b495f7c68db3281b507d8746455ec2667de (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_shockley, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 10:03:40 compute-0 systemd[1]: Started libpod-conmon-699e7265ae2cc04e714cb6a68f7b6b495f7c68db3281b507d8746455ec2667de.scope.
Dec 06 10:03:40 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v665: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 10:03:40 compute-0 podman[258072]: 2025-12-06 10:03:40.258736013 +0000 UTC m=+0.031858601 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 06 10:03:40 compute-0 systemd[1]: Started libcrun container.
Dec 06 10:03:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c7d2e5b1e3b022a8e4f04651b8f3df83daf854c2bb4358cd1d2a07d610d39d0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 10:03:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c7d2e5b1e3b022a8e4f04651b8f3df83daf854c2bb4358cd1d2a07d610d39d0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 10:03:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c7d2e5b1e3b022a8e4f04651b8f3df83daf854c2bb4358cd1d2a07d610d39d0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 10:03:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c7d2e5b1e3b022a8e4f04651b8f3df83daf854c2bb4358cd1d2a07d610d39d0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 10:03:40 compute-0 podman[258072]: 2025-12-06 10:03:40.399071041 +0000 UTC m=+0.172193599 container init 699e7265ae2cc04e714cb6a68f7b6b495f7c68db3281b507d8746455ec2667de (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_shockley, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, OSD_FLAVOR=default)
Dec 06 10:03:40 compute-0 podman[258072]: 2025-12-06 10:03:40.406228743 +0000 UTC m=+0.179351261 container start 699e7265ae2cc04e714cb6a68f7b6b495f7c68db3281b507d8746455ec2667de (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_shockley, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec 06 10:03:40 compute-0 podman[258072]: 2025-12-06 10:03:40.409973945 +0000 UTC m=+0.183096463 container attach 699e7265ae2cc04e714cb6a68f7b6b495f7c68db3281b507d8746455ec2667de (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_shockley, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 06 10:03:40 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:03:40 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:03:40 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:03:40.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:03:40 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:03:40 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 10:03:40 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:03:40.732 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 10:03:40 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:03:40 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4724003610 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:03:40 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:03:40] "GET /metrics HTTP/1.1" 200 48264 "" "Prometheus/2.51.0"
Dec 06 10:03:40 compute-0 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:03:40] "GET /metrics HTTP/1.1" 200 48264 "" "Prometheus/2.51.0"
Dec 06 10:03:40 compute-0 ceph-mon[74327]: pgmap v665: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 10:03:40 compute-0 ceph-mon[74327]: from='client.? 192.168.122.101:0/3811939761' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 10:03:41 compute-0 lvm[258165]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 06 10:03:41 compute-0 lvm[258165]: VG ceph_vg0 finished
Dec 06 10:03:41 compute-0 agitated_shockley[258089]: {}
Dec 06 10:03:41 compute-0 systemd[1]: libpod-699e7265ae2cc04e714cb6a68f7b6b495f7c68db3281b507d8746455ec2667de.scope: Deactivated successfully.
Dec 06 10:03:41 compute-0 systemd[1]: libpod-699e7265ae2cc04e714cb6a68f7b6b495f7c68db3281b507d8746455ec2667de.scope: Consumed 1.300s CPU time.
Dec 06 10:03:41 compute-0 podman[258072]: 2025-12-06 10:03:41.162592423 +0000 UTC m=+0.935714951 container died 699e7265ae2cc04e714cb6a68f7b6b495f7c68db3281b507d8746455ec2667de (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_shockley, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Dec 06 10:03:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-0c7d2e5b1e3b022a8e4f04651b8f3df83daf854c2bb4358cd1d2a07d610d39d0-merged.mount: Deactivated successfully.
Dec 06 10:03:41 compute-0 podman[258072]: 2025-12-06 10:03:41.217071713 +0000 UTC m=+0.990194221 container remove 699e7265ae2cc04e714cb6a68f7b6b495f7c68db3281b507d8746455ec2667de (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_shockley, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 06 10:03:41 compute-0 systemd[1]: libpod-conmon-699e7265ae2cc04e714cb6a68f7b6b495f7c68db3281b507d8746455ec2667de.scope: Deactivated successfully.
Dec 06 10:03:41 compute-0 sudo[257966]: pam_unix(sudo:session): session closed for user root
Dec 06 10:03:41 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 06 10:03:41 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 10:03:41 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 06 10:03:41 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 10:03:41 compute-0 sudo[258181]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 10:03:41 compute-0 sudo[258181]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:03:41 compute-0 sudo[258181]: pam_unix(sudo:session): session closed for user root
Dec 06 10:03:42 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:03:42 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_40] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f47200047c0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:03:42 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:03:42 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f47200047c0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:03:42 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 10:03:42 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 10:03:42 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v666: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 10:03:42 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:03:42 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:03:42 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:03:42 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:03:42.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:03:42 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:03:42 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:03:42 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:03:42.735 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:03:42 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:03:42 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f47500025a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:03:43 compute-0 ceph-mon[74327]: pgmap v666: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 10:03:44 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:03:44 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_41] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f475400b340 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:03:44 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:03:44 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f47200047c0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:03:44 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v667: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 10:03:44 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:03:44 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:03:44 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:03:44.660 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:03:44 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:03:44 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:03:44 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:03:44.738 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:03:44 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:03:44 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f472c004b30 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:03:45 compute-0 ceph-mon[74327]: pgmap v667: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 10:03:46 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:03:46 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_41] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f472c004b30 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:03:46 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:03:46 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_41] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f475400b340 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:03:46 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v668: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 10:03:46 compute-0 ceph-mon[74327]: from='client.? 192.168.122.10:0/2191484556' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 10:03:46 compute-0 ceph-mon[74327]: from='client.? 192.168.122.10:0/2191484556' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 10:03:46 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:03:46 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:03:46 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:03:46.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:03:46 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:03:46 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:03:46 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:03:46.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:03:46 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:03:46 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f47200047c0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:03:47 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:03:47.239Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec 06 10:03:47 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:03:47.240Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 06 10:03:47 compute-0 ceph-mon[74327]: pgmap v668: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 10:03:47 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:03:48 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:03:48 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f472c004b30 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:03:48 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:03:48 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f47500025a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:03:48 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v669: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:03:48 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:03:48 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:03:48 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:03:48.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:03:48 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:03:48 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:03:48 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:03:48.741 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:03:48 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:03:48 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_41] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f475400b340 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:03:49 compute-0 ceph-mon[74327]: pgmap v669: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:03:50 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:03:50 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f47200047c0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:03:50 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:03:50 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f472c004b30 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:03:50 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v670: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 10:03:50 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:03:50 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:03:50 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:03:50.666 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:03:50 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:03:50 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:03:50 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:03:50.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:03:50 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:03:50 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f47500025a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:03:50 compute-0 sudo[258216]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 10:03:50 compute-0 sudo[258216]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:03:50 compute-0 sudo[258216]: pam_unix(sudo:session): session closed for user root
Dec 06 10:03:50 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:03:50] "GET /metrics HTTP/1.1" 200 48264 "" "Prometheus/2.51.0"
Dec 06 10:03:50 compute-0 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:03:50] "GET /metrics HTTP/1.1" 200 48264 "" "Prometheus/2.51.0"
Dec 06 10:03:51 compute-0 ceph-mon[74327]: pgmap v670: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 10:03:52 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:03:52 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_41] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f475400b340 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:03:52 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:03:52 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f47200047c0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:03:52 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v671: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 10:03:52 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:03:52 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:03:52 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:03:52 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:03:52.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:03:52 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:03:52 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:03:52 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:03:52.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:03:52 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:03:52 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f472c004b30 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:03:53 compute-0 ceph-mon[74327]: pgmap v671: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 10:03:53 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 06 10:03:53 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 10:03:53 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 10:03:53 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 10:03:54 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 10:03:54 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 10:03:54 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 10:03:54 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 10:03:54 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:03:54 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f47500025a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:03:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:03:54.236 162267 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:03:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:03:54.237 162267 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:03:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:03:54.237 162267 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:03:54 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:03:54 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f47500025a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:03:54 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v672: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 10:03:54 compute-0 podman[258245]: 2025-12-06 10:03:54.481607195 +0000 UTC m=+0.100951035 container health_status a9cf33c1bf7891b3fdd9db4717ad5f3587fe9e5acf9171920c6d6334b70a502a (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, tcib_managed=true)
Dec 06 10:03:54 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 10:03:54 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:03:54 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:03:54 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:03:54.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:03:54 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:03:54 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:03:54 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:03:54.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:03:54 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:03:54 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f47200047c0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:03:55 compute-0 ceph-mon[74327]: pgmap v672: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 10:03:56 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:03:56 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f472c004b50 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:03:56 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:03:56 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_41] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f472c004b50 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:03:56 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v673: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 10:03:56 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:03:56 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:03:56 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:03:56.673 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:03:56 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:03:56 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:03:56 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:03:56.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:03:56 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:03:56 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_41] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f472c004b50 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:03:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:03:57.241Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 06 10:03:57 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:03:57 compute-0 ceph-mon[74327]: pgmap v673: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 10:03:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:03:58 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_41] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f47200047c0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:03:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:03:58 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f472c004b50 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:03:58 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v674: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:03:58 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:03:58 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:03:58 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:03:58.675 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:03:58 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:03:58 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 10:03:58 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:03:58.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 10:03:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:03:58 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f472c004b50 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:03:59 compute-0 podman[258270]: 2025-12-06 10:03:59.509988942 +0000 UTC m=+0.138773887 container health_status ed08b04f63d3695f0aa2f04fcc706897af4aa24f286fa3cd10a4323ee2c868ab (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, io.buildah.version=1.41.3)
Dec 06 10:03:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[105048]: logger=cleanup t=2025-12-06T10:03:59.586882406Z level=info msg="Completed cleanup jobs" duration=22.684302ms
Dec 06 10:03:59 compute-0 ceph-mon[74327]: pgmap v674: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:03:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[105048]: logger=plugins.update.checker t=2025-12-06T10:03:59.701658563Z level=info msg="Update check succeeded" duration=75.390685ms
Dec 06 10:03:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[105048]: logger=grafana.update.checker t=2025-12-06T10:03:59.717774508Z level=info msg="Update check succeeded" duration=52.548998ms
Dec 06 10:04:00 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:04:00 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f47500025a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:04:00 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:04:00 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_41] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f47200047e0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:04:00 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v675: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 10:04:00 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:04:00 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 10:04:00 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:04:00.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 10:04:00 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:04:00 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:04:00 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:04:00.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:04:00 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:04:00 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f475400b340 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:04:00 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:04:00] "GET /metrics HTTP/1.1" 200 48262 "" "Prometheus/2.51.0"
Dec 06 10:04:00 compute-0 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:04:00] "GET /metrics HTTP/1.1" 200 48262 "" "Prometheus/2.51.0"
Dec 06 10:04:01 compute-0 ceph-mon[74327]: pgmap v675: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 10:04:02 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:04:02 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f472c004b50 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:04:02 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:04:02 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f47500025a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:04:02 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v676: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 10:04:02 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:04:02 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:04:02 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:04:02 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:04:02.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:04:02 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:04:02 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 10:04:02 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:04:02.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 10:04:02 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:04:02 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_41] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4720004800 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:04:03 compute-0 ceph-mon[74327]: pgmap v676: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 10:04:04 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:04:04 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f475400b340 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:04:04 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:04:04 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f475400b340 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:04:04 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v677: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 10:04:04 compute-0 podman[258301]: 2025-12-06 10:04:04.40950293 +0000 UTC m=+0.044294917 container health_status ec1e66d2175087ad7a28a9a6b5a784e30128df3f5e268959d009e364cd5120f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Dec 06 10:04:04 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:04:04 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:04:04 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:04:04.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:04:04 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:04:04 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:04:04 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:04:04.761 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:04:04 compute-0 ceph-mon[74327]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #45. Immutable memtables: 0.
Dec 06 10:04:04 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:04:04.792134) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 06 10:04:04 compute-0 ceph-mon[74327]: rocksdb: [db/flush_job.cc:856] [default] [JOB 21] Flushing memtable with next log file: 45
Dec 06 10:04:04 compute-0 ceph-mon[74327]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765015444792200, "job": 21, "event": "flush_started", "num_memtables": 1, "num_entries": 2107, "num_deletes": 251, "total_data_size": 4037785, "memory_usage": 4094512, "flush_reason": "Manual Compaction"}
Dec 06 10:04:04 compute-0 ceph-mon[74327]: rocksdb: [db/flush_job.cc:885] [default] [JOB 21] Level-0 flush table #46: started
Dec 06 10:04:04 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:04:04 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f47500025a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:04:04 compute-0 ceph-mon[74327]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765015444824674, "cf_name": "default", "job": 21, "event": "table_file_creation", "file_number": 46, "file_size": 3957290, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 20102, "largest_seqno": 22208, "table_properties": {"data_size": 3947958, "index_size": 5826, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2437, "raw_key_size": 19302, "raw_average_key_size": 20, "raw_value_size": 3929213, "raw_average_value_size": 4084, "num_data_blocks": 257, "num_entries": 962, "num_filter_entries": 962, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765015229, "oldest_key_time": 1765015229, "file_creation_time": 1765015444, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "423e8366-3852-4d2b-aa53-87abab31aff3", "db_session_id": "4WBX5WA2U4DRQ0QUUFCR", "orig_file_number": 46, "seqno_to_time_mapping": "N/A"}}
Dec 06 10:04:04 compute-0 ceph-mon[74327]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 21] Flush lasted 32572 microseconds, and 10396 cpu microseconds.
Dec 06 10:04:04 compute-0 ceph-mon[74327]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 10:04:04 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:04:04.824719) [db/flush_job.cc:967] [default] [JOB 21] Level-0 flush table #46: 3957290 bytes OK
Dec 06 10:04:04 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:04:04.824737) [db/memtable_list.cc:519] [default] Level-0 commit table #46 started
Dec 06 10:04:04 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:04:04.830029) [db/memtable_list.cc:722] [default] Level-0 commit table #46: memtable #1 done
Dec 06 10:04:04 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:04:04.830041) EVENT_LOG_v1 {"time_micros": 1765015444830038, "job": 21, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 06 10:04:04 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:04:04.830085) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 06 10:04:04 compute-0 ceph-mon[74327]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 21] Try to delete WAL files size 4029193, prev total WAL file size 4029193, number of live WAL files 2.
Dec 06 10:04:04 compute-0 ceph-mon[74327]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000042.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 10:04:04 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:04:04.831042) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031353036' seq:72057594037927935, type:22 .. '7061786F730031373538' seq:0, type:0; will stop at (end)
Dec 06 10:04:04 compute-0 ceph-mon[74327]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 22] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 06 10:04:04 compute-0 ceph-mon[74327]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 21 Base level 0, inputs: [46(3864KB)], [44(13MB)]
Dec 06 10:04:04 compute-0 ceph-mon[74327]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765015444831072, "job": 22, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [46], "files_L6": [44], "score": -1, "input_data_size": 17911753, "oldest_snapshot_seqno": -1}
Dec 06 10:04:05 compute-0 ceph-mon[74327]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 22] Generated table #47: 5479 keys, 15736574 bytes, temperature: kUnknown
Dec 06 10:04:05 compute-0 ceph-mon[74327]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765015445002704, "cf_name": "default", "job": 22, "event": "table_file_creation", "file_number": 47, "file_size": 15736574, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 15697085, "index_size": 24659, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13765, "raw_key_size": 138046, "raw_average_key_size": 25, "raw_value_size": 15595058, "raw_average_value_size": 2846, "num_data_blocks": 1018, "num_entries": 5479, "num_filter_entries": 5479, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765013861, "oldest_key_time": 0, "file_creation_time": 1765015444, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "423e8366-3852-4d2b-aa53-87abab31aff3", "db_session_id": "4WBX5WA2U4DRQ0QUUFCR", "orig_file_number": 47, "seqno_to_time_mapping": "N/A"}}
Dec 06 10:04:05 compute-0 ceph-mon[74327]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 10:04:05 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:04:05.003057) [db/compaction/compaction_job.cc:1663] [default] [JOB 22] Compacted 1@0 + 1@6 files to L6 => 15736574 bytes
Dec 06 10:04:05 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:04:05.004638) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 104.3 rd, 91.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.8, 13.3 +0.0 blob) out(15.0 +0.0 blob), read-write-amplify(8.5) write-amplify(4.0) OK, records in: 5995, records dropped: 516 output_compression: NoCompression
Dec 06 10:04:05 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:04:05.004663) EVENT_LOG_v1 {"time_micros": 1765015445004651, "job": 22, "event": "compaction_finished", "compaction_time_micros": 171742, "compaction_time_cpu_micros": 29810, "output_level": 6, "num_output_files": 1, "total_output_size": 15736574, "num_input_records": 5995, "num_output_records": 5479, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 06 10:04:05 compute-0 ceph-mon[74327]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000046.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 10:04:05 compute-0 ceph-mon[74327]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765015445005988, "job": 22, "event": "table_file_deletion", "file_number": 46}
Dec 06 10:04:05 compute-0 ceph-mon[74327]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000044.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 10:04:05 compute-0 ceph-mon[74327]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765015445009364, "job": 22, "event": "table_file_deletion", "file_number": 44}
Dec 06 10:04:05 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:04:04.830940) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 10:04:05 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:04:05.009452) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 10:04:05 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:04:05.009462) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 10:04:05 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:04:05.009465) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 10:04:05 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:04:05.009468) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 10:04:05 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:04:05.009471) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 10:04:05 compute-0 ceph-mon[74327]: pgmap v677: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 10:04:06 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:04:06 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_41] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4720004820 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:04:06 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:04:06 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4720004820 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:04:06 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v678: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 10:04:06 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:04:06 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 10:04:06 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:04:06.747 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 10:04:06 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:04:06 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:04:06 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:04:06.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:04:06 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:04:06 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f472c004b50 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:04:07 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:04:07.241Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 06 10:04:07 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:04:07 compute-0 ceph-mon[74327]: pgmap v678: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 10:04:08 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:04:08 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f47500025a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:04:08 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:04:08 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_41] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4720004820 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:04:08 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v679: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:04:08 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:04:08 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:04:08 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:04:08.750 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:04:08 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:04:08 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:04:08 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:04:08.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:04:08 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:04:08 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f475400b340 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:04:08 compute-0 ceph-mon[74327]: pgmap v679: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:04:08 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 06 10:04:08 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 10:04:09 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 10:04:10 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:04:10 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f472c004b50 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:04:10 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:04:10 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f47500025a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:04:10 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v680: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 10:04:10 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:04:10 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:04:10 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:04:10.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:04:10 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:04:10 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:04:10 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:04:10.767 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:04:10 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:04:10 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_41] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4720004840 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:04:10 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:04:10] "GET /metrics HTTP/1.1" 200 48263 "" "Prometheus/2.51.0"
Dec 06 10:04:10 compute-0 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:04:10] "GET /metrics HTTP/1.1" 200 48263 "" "Prometheus/2.51.0"
Dec 06 10:04:10 compute-0 ceph-mon[74327]: pgmap v680: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 10:04:10 compute-0 sudo[258328]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 10:04:10 compute-0 sudo[258328]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:04:10 compute-0 sudo[258328]: pam_unix(sudo:session): session closed for user root
Dec 06 10:04:12 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:04:12 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f475400b340 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:04:12 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:04:12 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f472c004b50 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:04:12 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v681: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 10:04:12 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:04:12 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:04:12 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:04:12 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:04:12.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:04:12 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:04:12 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:04:12 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:04:12.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:04:12 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:04:12 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f47300014d0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:04:13 compute-0 ceph-mon[74327]: pgmap v681: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 10:04:14 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:04:14 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_42] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f47500025a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:04:14 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:04:14 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_41] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f475400b340 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:04:14 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v682: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 10:04:14 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:04:14 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:04:14 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:04:14.758 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:04:14 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:04:14 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:04:14 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:04:14.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:04:14 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:04:14 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_43] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f475400b340 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:04:15 compute-0 ceph-mon[74327]: pgmap v682: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 10:04:16 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:04:16 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f47300014d0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:04:16 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:04:16 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_42] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f47500025a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:04:16 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v683: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 10:04:16 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:04:16 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:04:16 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:04:16.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:04:16 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:04:16 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:04:16 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:04:16.773 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:04:16 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:04:16 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_41] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4724003610 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:04:17 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:04:17.243Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec 06 10:04:17 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:04:17.244Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec 06 10:04:17 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:04:17 compute-0 ceph-mon[74327]: pgmap v683: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 10:04:18 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:04:18 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_43] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f475400b340 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:04:18 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:04:18 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4730003420 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:04:18 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v684: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:04:18 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:04:18 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:04:18 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:04:18.761 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:04:18 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:04:18 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:04:18 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:04:18.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:04:18 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:04:18 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_42] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f47500025a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:04:18 compute-0 ceph-mon[74327]: pgmap v684: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:04:20 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:04:20 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_41] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4724003610 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:04:20 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:04:20 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_43] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f475400b340 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:04:20 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v685: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 10:04:20 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:04:20 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 10:04:20 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:04:20.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 10:04:20 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:04:20 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:04:20 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:04:20.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:04:20 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:04:20 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4730003420 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:04:20 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:04:20] "GET /metrics HTTP/1.1" 200 48263 "" "Prometheus/2.51.0"
Dec 06 10:04:20 compute-0 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:04:20] "GET /metrics HTTP/1.1" 200 48263 "" "Prometheus/2.51.0"
Dec 06 10:04:21 compute-0 ceph-mon[74327]: pgmap v685: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 10:04:22 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:04:22 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_42] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f47500025a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:04:22 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:04:22 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_41] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4724003610 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:04:22 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v686: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 10:04:22 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:04:22 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:04:22 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:04:22 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:04:22.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:04:22 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:04:22 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:04:22 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:04:22.780 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:04:22 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:04:22 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_43] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f475400b340 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:04:23 compute-0 ceph-mon[74327]: pgmap v686: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 10:04:23 compute-0 ceph-mgr[74618]: [balancer INFO root] Optimize plan auto_2025-12-06_10:04:23
Dec 06 10:04:23 compute-0 ceph-mgr[74618]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 06 10:04:23 compute-0 ceph-mgr[74618]: [balancer INFO root] do_upmap
Dec 06 10:04:23 compute-0 ceph-mgr[74618]: [balancer INFO root] pools ['volumes', 'cephfs.cephfs.data', 'backups', '.mgr', 'default.rgw.meta', 'cephfs.cephfs.meta', 'images', '.rgw.root', 'default.rgw.log', 'default.rgw.control', '.nfs', 'vms']
Dec 06 10:04:23 compute-0 ceph-mgr[74618]: [balancer INFO root] prepared 0/10 upmap changes
Dec 06 10:04:23 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 06 10:04:23 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 10:04:23 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 10:04:23 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 10:04:24 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 10:04:24 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 10:04:24 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 10:04:24 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 10:04:24 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:04:24 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4730003420 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:04:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] _maybe_adjust
Dec 06 10:04:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:04:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 06 10:04:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:04:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 10:04:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:04:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 10:04:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:04:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 10:04:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:04:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 10:04:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:04:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Dec 06 10:04:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:04:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 10:04:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:04:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec 06 10:04:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:04:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 06 10:04:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:04:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec 06 10:04:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:04:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 10:04:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:04:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 06 10:04:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 06 10:04:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 10:04:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 10:04:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 10:04:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 10:04:24 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:04:24 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_43] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4730003420 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:04:24 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v687: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 10:04:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 06 10:04:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 10:04:24 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 10:04:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 10:04:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 10:04:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 10:04:24 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:04:24 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:04:24 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:04:24.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:04:24 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:04:24 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:04:24 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:04:24.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:04:24 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:04:24 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_43] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f47500025a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:04:25 compute-0 podman[258370]: 2025-12-06 10:04:25.431345255 +0000 UTC m=+0.060253467 container health_status a9cf33c1bf7891b3fdd9db4717ad5f3587fe9e5acf9171920c6d6334b70a502a (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 06 10:04:25 compute-0 ceph-mon[74327]: pgmap v687: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 10:04:26 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:04:26 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_41] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f475400b340 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:04:26 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:04:26 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_42] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f475400b340 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:04:26 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v688: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 10:04:26 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:04:26 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 10:04:26 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:04:26.770 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 10:04:26 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:04:26 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:04:26 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:04:26.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:04:26 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:04:26 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_42] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f475400b340 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:04:27 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:04:27.245Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 06 10:04:27 compute-0 ceph-mon[74327]: pgmap v688: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 10:04:27 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:04:28 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:04:28 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_42] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f475400b340 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:04:28 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:04:28 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_42] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f475400b340 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:04:28 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v689: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:04:28 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:04:28 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:04:28 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:04:28.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:04:28 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:04:28 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:04:28 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:04:28.786 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:04:28 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:04:28 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_41] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f475400b340 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:04:29 compute-0 ceph-mon[74327]: pgmap v689: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:04:30 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:04:30 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4724003610 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:04:30 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:04:30 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4730003420 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:04:30 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v690: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 10:04:30 compute-0 podman[258395]: 2025-12-06 10:04:30.450082009 +0000 UTC m=+0.080791291 container health_status ed08b04f63d3695f0aa2f04fcc706897af4aa24f286fa3cd10a4323ee2c868ab (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125)
Dec 06 10:04:30 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:04:30 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:04:30 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:04:30.774 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:04:30 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:04:30 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:04:30 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:04:30.789 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:04:30 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:04:30 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_42] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f47500025a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:04:30 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:04:30] "GET /metrics HTTP/1.1" 200 48265 "" "Prometheus/2.51.0"
Dec 06 10:04:30 compute-0 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:04:30] "GET /metrics HTTP/1.1" 200 48265 "" "Prometheus/2.51.0"
Dec 06 10:04:31 compute-0 sudo[258422]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 10:04:31 compute-0 sudo[258422]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:04:31 compute-0 sudo[258422]: pam_unix(sudo:session): session closed for user root
Dec 06 10:04:31 compute-0 ceph-mon[74327]: pgmap v690: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 10:04:32 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:04:32 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_43] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f47500025a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:04:32 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:04:32 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_43] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f475400b340 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:04:32 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v691: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 10:04:32 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:04:32 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:04:32 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:04:32 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:04:32.776 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:04:32 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:04:32 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:04:32 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:04:32.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:04:32 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:04:32 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4730003420 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:04:33 compute-0 ceph-mon[74327]: pgmap v691: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 10:04:34 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:04:34 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_42] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f47500025a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:04:34 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:04:34 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_43] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f47500025a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:04:34 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v692: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 10:04:34 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:04:34 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 10:04:34 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:04:34.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 10:04:34 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:04:34 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:04:34 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:04:34.792 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:04:34 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:04:34 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_43] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f475400b340 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:04:35 compute-0 podman[258452]: 2025-12-06 10:04:35.417367185 +0000 UTC m=+0.048340625 container health_status ec1e66d2175087ad7a28a9a6b5a784e30128df3f5e268959d009e364cd5120f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec 06 10:04:35 compute-0 ceph-mon[74327]: pgmap v692: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 10:04:36 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:04:36 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4730003420 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:04:36 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:04:36 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_42] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f47500025a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:04:36 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v693: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 10:04:36 compute-0 nova_compute[254819]: 2025-12-06 10:04:36.748 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:04:36 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:04:36 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:04:36 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:04:36.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:04:36 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:04:36 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:04:36 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:04:36.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:04:36 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:04:36 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_41] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4724003610 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:04:37 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:04:37.246Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 06 10:04:37 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:04:37 compute-0 ceph-mon[74327]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #48. Immutable memtables: 0.
Dec 06 10:04:37 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:04:37.714034) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 06 10:04:37 compute-0 ceph-mon[74327]: rocksdb: [db/flush_job.cc:856] [default] [JOB 23] Flushing memtable with next log file: 48
Dec 06 10:04:37 compute-0 ceph-mon[74327]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765015477714070, "job": 23, "event": "flush_started", "num_memtables": 1, "num_entries": 499, "num_deletes": 251, "total_data_size": 561369, "memory_usage": 569888, "flush_reason": "Manual Compaction"}
Dec 06 10:04:37 compute-0 ceph-mon[74327]: rocksdb: [db/flush_job.cc:885] [default] [JOB 23] Level-0 flush table #49: started
Dec 06 10:04:37 compute-0 ceph-mon[74327]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765015477718234, "cf_name": "default", "job": 23, "event": "table_file_creation", "file_number": 49, "file_size": 393629, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 22210, "largest_seqno": 22707, "table_properties": {"data_size": 391065, "index_size": 600, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 901, "raw_key_size": 6539, "raw_average_key_size": 19, "raw_value_size": 385955, "raw_average_value_size": 1148, "num_data_blocks": 27, "num_entries": 336, "num_filter_entries": 336, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765015445, "oldest_key_time": 1765015445, "file_creation_time": 1765015477, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "423e8366-3852-4d2b-aa53-87abab31aff3", "db_session_id": "4WBX5WA2U4DRQ0QUUFCR", "orig_file_number": 49, "seqno_to_time_mapping": "N/A"}}
Dec 06 10:04:37 compute-0 ceph-mon[74327]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 23] Flush lasted 4225 microseconds, and 1586 cpu microseconds.
Dec 06 10:04:37 compute-0 ceph-mon[74327]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 10:04:37 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:04:37.718264) [db/flush_job.cc:967] [default] [JOB 23] Level-0 flush table #49: 393629 bytes OK
Dec 06 10:04:37 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:04:37.718277) [db/memtable_list.cc:519] [default] Level-0 commit table #49 started
Dec 06 10:04:37 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:04:37.721421) [db/memtable_list.cc:722] [default] Level-0 commit table #49: memtable #1 done
Dec 06 10:04:37 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:04:37.721436) EVENT_LOG_v1 {"time_micros": 1765015477721430, "job": 23, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 06 10:04:37 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:04:37.721454) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 06 10:04:37 compute-0 ceph-mon[74327]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 23] Try to delete WAL files size 558525, prev total WAL file size 558525, number of live WAL files 2.
Dec 06 10:04:37 compute-0 ceph-mon[74327]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000045.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 10:04:37 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:04:37.721997) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D67727374617400353030' seq:72057594037927935, type:22 .. '6D67727374617400373532' seq:0, type:0; will stop at (end)
Dec 06 10:04:37 compute-0 ceph-mon[74327]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 24] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 06 10:04:37 compute-0 ceph-mon[74327]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 23 Base level 0, inputs: [49(384KB)], [47(15MB)]
Dec 06 10:04:37 compute-0 ceph-mon[74327]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765015477722065, "job": 24, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [49], "files_L6": [47], "score": -1, "input_data_size": 16130203, "oldest_snapshot_seqno": -1}
Dec 06 10:04:37 compute-0 nova_compute[254819]: 2025-12-06 10:04:37.749 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:04:37 compute-0 nova_compute[254819]: 2025-12-06 10:04:37.749 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:04:37 compute-0 nova_compute[254819]: 2025-12-06 10:04:37.749 254824 DEBUG nova.compute.manager [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 06 10:04:37 compute-0 nova_compute[254819]: 2025-12-06 10:04:37.750 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:04:37 compute-0 nova_compute[254819]: 2025-12-06 10:04:37.776 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:04:37 compute-0 nova_compute[254819]: 2025-12-06 10:04:37.776 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:04:37 compute-0 nova_compute[254819]: 2025-12-06 10:04:37.776 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:04:37 compute-0 nova_compute[254819]: 2025-12-06 10:04:37.777 254824 DEBUG nova.compute.resource_tracker [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 10:04:37 compute-0 nova_compute[254819]: 2025-12-06 10:04:37.777 254824 DEBUG oslo_concurrency.processutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 10:04:37 compute-0 ceph-mon[74327]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 24] Generated table #50: 5315 keys, 12216148 bytes, temperature: kUnknown
Dec 06 10:04:37 compute-0 ceph-mon[74327]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765015477864994, "cf_name": "default", "job": 24, "event": "table_file_creation", "file_number": 50, "file_size": 12216148, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12182049, "index_size": 19717, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13317, "raw_key_size": 135017, "raw_average_key_size": 25, "raw_value_size": 12087106, "raw_average_value_size": 2274, "num_data_blocks": 802, "num_entries": 5315, "num_filter_entries": 5315, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765013861, "oldest_key_time": 0, "file_creation_time": 1765015477, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "423e8366-3852-4d2b-aa53-87abab31aff3", "db_session_id": "4WBX5WA2U4DRQ0QUUFCR", "orig_file_number": 50, "seqno_to_time_mapping": "N/A"}}
Dec 06 10:04:37 compute-0 ceph-mon[74327]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 10:04:37 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:04:37.865423) [db/compaction/compaction_job.cc:1663] [default] [JOB 24] Compacted 1@0 + 1@6 files to L6 => 12216148 bytes
Dec 06 10:04:37 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:04:37.867454) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 112.7 rd, 85.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.4, 15.0 +0.0 blob) out(11.7 +0.0 blob), read-write-amplify(72.0) write-amplify(31.0) OK, records in: 5815, records dropped: 500 output_compression: NoCompression
Dec 06 10:04:37 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:04:37.867506) EVENT_LOG_v1 {"time_micros": 1765015477867470, "job": 24, "event": "compaction_finished", "compaction_time_micros": 143173, "compaction_time_cpu_micros": 36491, "output_level": 6, "num_output_files": 1, "total_output_size": 12216148, "num_input_records": 5815, "num_output_records": 5315, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 06 10:04:37 compute-0 ceph-mon[74327]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000049.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 10:04:37 compute-0 ceph-mon[74327]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765015477868206, "job": 24, "event": "table_file_deletion", "file_number": 49}
Dec 06 10:04:37 compute-0 ceph-mon[74327]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000047.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 10:04:37 compute-0 ceph-mon[74327]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765015477872442, "job": 24, "event": "table_file_deletion", "file_number": 47}
Dec 06 10:04:37 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:04:37.721887) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 10:04:37 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:04:37.872701) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 10:04:37 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:04:37.872707) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 10:04:37 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:04:37.872709) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 10:04:37 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:04:37.872710) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 10:04:37 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:04:37.872712) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 10:04:37 compute-0 ceph-mon[74327]: pgmap v693: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 10:04:38 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:04:38 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_43] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f475400b340 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:04:38 compute-0 nova_compute[254819]: 2025-12-06 10:04:38.242 254824 DEBUG oslo_concurrency.processutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 10:04:38 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:04:38 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4730003420 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:04:38 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v694: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:04:38 compute-0 nova_compute[254819]: 2025-12-06 10:04:38.422 254824 WARNING nova.virt.libvirt.driver [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 10:04:38 compute-0 nova_compute[254819]: 2025-12-06 10:04:38.423 254824 DEBUG nova.compute.resource_tracker [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4908MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 06 10:04:38 compute-0 nova_compute[254819]: 2025-12-06 10:04:38.424 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:04:38 compute-0 nova_compute[254819]: 2025-12-06 10:04:38.424 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:04:38 compute-0 nova_compute[254819]: 2025-12-06 10:04:38.481 254824 DEBUG nova.compute.resource_tracker [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 10:04:38 compute-0 nova_compute[254819]: 2025-12-06 10:04:38.482 254824 DEBUG nova.compute.resource_tracker [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 06 10:04:38 compute-0 nova_compute[254819]: 2025-12-06 10:04:38.496 254824 DEBUG oslo_concurrency.processutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 10:04:38 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:04:38 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=404 latency=0.002000053s ======
Dec 06 10:04:38 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:04:38.725 +0000] "GET /info HTTP/1.1" 404 152 - "python-urllib3/1.26.5" - latency=0.002000053s
Dec 06 10:04:38 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:04:38 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:04:38 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - - [06/Dec/2025:10:04:38.740 +0000] "GET /swift/healthcheck HTTP/1.1" 200 0 - "python-urllib3/1.26.5" - latency=0.001000027s
Dec 06 10:04:38 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:04:38 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:04:38 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:04:38.783 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:04:38 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:04:38 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:04:38 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:04:38.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
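The anonymous HEAD / and GET /swift/healthcheck requests above look like load-balancer health probes; the GET reports the same python-urllib3/1.26.5 client seen elsewhere on this host. A sketch of an equivalent probe, assuming the RGW beast frontend answers on compute-0 port 8080 (the port is not visible in these lines, so treat host and port as assumptions; the paths and expected statuses come from the log):

    import urllib3  # same client library the probe's user agent reports

    http = urllib3.PoolManager()
    for method, path in [("HEAD", "/"),
                         ("GET", "/swift/healthcheck"),
                         ("GET", "/info")]:
        r = http.request(method, f"http://compute-0:8080{path}")
        # The log shows 200 for HEAD / and the healthcheck, 404 for /info.
        print(method, path, r.status)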
Dec 06 10:04:38 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:04:38 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_42] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f47500025a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:04:38 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 06 10:04:38 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 10:04:38 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 06 10:04:38 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/870583344' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 10:04:38 compute-0 nova_compute[254819]: 2025-12-06 10:04:38.964 254824 DEBUG oslo_concurrency.processutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.468s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 10:04:38 compute-0 nova_compute[254819]: 2025-12-06 10:04:38.970 254824 DEBUG nova.compute.provider_tree [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Inventory has not changed in ProviderTree for provider: 06a9c7d1-c74c-47ea-9e97-16acfab6aa88 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 10:04:38 compute-0 nova_compute[254819]: 2025-12-06 10:04:38.987 254824 DEBUG nova.scheduler.client.report [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Inventory has not changed for provider 06a9c7d1-c74c-47ea-9e97-16acfab6aa88 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 10:04:38 compute-0 nova_compute[254819]: 2025-12-06 10:04:38.989 254824 DEBUG nova.compute.resource_tracker [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 10:04:38 compute-0 nova_compute[254819]: 2025-12-06 10:04:38.989 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.565s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
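The inventory dict logged at 10:04:38.987 is what placement turns into schedulable capacity, using its standard formula (total - reserved) * allocation_ratio. Reproducing that arithmetic for the logged values:

    # Inventory copied from the report client log line above.
    inventory = {
        "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "DISK_GB":   {"total": 59,   "reserved": 0,   "allocation_ratio": 0.9},
    }

    for rc, inv in inventory.items():
        capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(f"{rc}: schedulable capacity = {capacity}")
    # VCPU: (8 - 0) * 4.0 = 32.0
    # MEMORY_MB: (7680 - 512) * 1.0 = 7168.0
    # DISK_GB: (59 - 0) * 0.9 = 53.1

So this 8-core, 7680 MB host advertises 32 VCPU, 7168 MB of RAM and 53.1 GB of disk to the scheduler.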
Dec 06 10:04:39 compute-0 ceph-mon[74327]: from='client.? 192.168.122.101:0/2156903250' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 10:04:39 compute-0 ceph-mon[74327]: from='client.? 192.168.122.100:0/3692914369' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 10:04:39 compute-0 ceph-mon[74327]: pgmap v694: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:04:39 compute-0 ceph-mon[74327]: from='client.? 192.168.122.101:0/2295503369' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 10:04:39 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 10:04:39 compute-0 ceph-mon[74327]: from='client.? 192.168.122.100:0/870583344' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
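Each audit line above records a mon_command dispatch, and the JSON in cmd=[...] is the exact structure a client sends. A sketch issuing the same {"prefix": "df", "format": "json"} command through the python-rados binding, assuming /etc/ceph/ceph.conf and the client.openstack keyring are readable from where it runs:

    import json
    import rados

    # Connect as the same entity the audit log shows (client.openstack).
    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf",
                          name="client.openstack")
    cluster.connect()
    try:
        cmd = json.dumps({"prefix": "df", "format": "json"})
        ret, outbuf, outs = cluster.mon_command(cmd, b"")
        if ret == 0:
            df = json.loads(outbuf)
            print(df["stats"]["total_bytes"],
                  df["stats"]["total_used_bytes"])
    finally:
        cluster.shutdown()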
Dec 06 10:04:39 compute-0 nova_compute[254819]: 2025-12-06 10:04:39.983 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:04:40 compute-0 nova_compute[254819]: 2025-12-06 10:04:40.007 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:04:40 compute-0 nova_compute[254819]: 2025-12-06 10:04:40.007 254824 DEBUG nova.compute.manager [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 06 10:04:40 compute-0 nova_compute[254819]: 2025-12-06 10:04:40.007 254824 DEBUG nova.compute.manager [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 06 10:04:40 compute-0 ceph-mon[74327]: from='client.? 192.168.122.102:0/1352342801' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 10:04:40 compute-0 ceph-mon[74327]: from='client.? 192.168.122.102:0/1282252340' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 10:04:40 compute-0 nova_compute[254819]: 2025-12-06 10:04:40.035 254824 DEBUG nova.compute.manager [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 06 10:04:40 compute-0 nova_compute[254819]: 2025-12-06 10:04:40.035 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:04:40 compute-0 nova_compute[254819]: 2025-12-06 10:04:40.035 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:04:40 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:04:40 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_41] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4724003610 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:04:40 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:04:40 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_43] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f475400b340 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:04:40 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v695: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 10:04:40 compute-0 nova_compute[254819]: 2025-12-06 10:04:40.748 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:04:40 compute-0 nova_compute[254819]: 2025-12-06 10:04:40.749 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
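The run_periodic_tasks entries above come from oslo.service's periodic-task framework: each ComputeManager._* method is decorated with @periodic_task.periodic_task and fired by a runner loop. A minimal sketch of the same pattern (the manager class, task name, and 60-second spacing are hypothetical; only the decorator and runner are the real oslo_service API):

    from oslo_config import cfg
    from oslo_service import periodic_task


    class DemoManager(periodic_task.PeriodicTasks):
        """Hypothetical manager mirroring nova's ComputeManager pattern."""

        @periodic_task.periodic_task(spacing=60, run_immediately=True)
        def _heal_something(self, context):
            # Fired at most once per 60s each time run_periodic_tasks() runs.
            print("periodic task fired")


    mgr = DemoManager(cfg.CONF)
    mgr.run_periodic_tasks(None)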
Dec 06 10:04:40 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:04:40 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:04:40 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:04:40.786 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:04:40 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:04:40 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:04:40 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:04:40.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:04:40 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:04:40 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4730003420 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:04:40 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:04:40] "GET /metrics HTTP/1.1" 200 48264 "" "Prometheus/2.51.0"
Dec 06 10:04:40 compute-0 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:04:40] "GET /metrics HTTP/1.1" 200 48264 "" "Prometheus/2.51.0"
Dec 06 10:04:41 compute-0 ceph-mon[74327]: pgmap v695: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 10:04:41 compute-0 sudo[258522]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 10:04:41 compute-0 sudo[258522]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:04:41 compute-0 sudo[258522]: pam_unix(sudo:session): session closed for user root
Dec 06 10:04:41 compute-0 sudo[258547]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Dec 06 10:04:41 compute-0 sudo[258547]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:04:42 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:04:42 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_42] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f47500025a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:04:42 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:04:42 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_41] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4724003610 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:04:42 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v696: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 10:04:42 compute-0 sudo[258547]: pam_unix(sudo:session): session closed for user root
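The sudo session that just closed ran cephadm's gather-facts subcommand, which emits a JSON dump of host facts. A sketch invoking the same copied binary (the path and digest are taken verbatim from the log; the fact key names below are assumptions about cephadm's schema, not confirmed by these lines):

    import json
    import subprocess

    CEPHADM = ("/var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/"
               "cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36")

    facts = json.loads(subprocess.run(
        ["sudo", "python3", CEPHADM, "gather-facts"],
        capture_output=True, text=True, check=True).stdout)
    # Key names here are assumptions about the fact schema.
    print(facts.get("hostname"), facts.get("memory_total_kb"))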
Dec 06 10:04:42 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Dec 06 10:04:42 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Dec 06 10:04:42 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Dec 06 10:04:42 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
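The _set_new_cache_sizes line reports the mon's memory autotuning in raw bytes; converting the logged figures makes them easier to read (pure arithmetic on the values above):

    MiB = 1024 ** 2
    for name, nbytes in [
        ("cache_size", 1020054731),
        ("inc_alloc",  348127232),
        ("full_alloc", 348127232),
        ("kv_alloc",   318767104),
    ]:
        print(f"{name}: {nbytes / MiB:.0f} MiB")
    # cache_size ~973 MiB, inc/full_alloc 332 MiB, kv_alloc 304 MiB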
Dec 06 10:04:42 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:04:42 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:04:42 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:04:42.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:04:42 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:04:42 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:04:42 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:04:42.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:04:42 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:04:42 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_43] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f475400b340 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:04:43 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e146 do_prune osdmap full prune enabled
Dec 06 10:04:43 compute-0 ceph-mon[74327]: pgmap v696: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 10:04:43 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e147 e147: 3 total, 3 up, 3 in
Dec 06 10:04:43 compute-0 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e147: 3 total, 3 up, 3 in
Dec 06 10:04:43 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec 06 10:04:43 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 10:04:43 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec 06 10:04:43 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 10:04:44 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec 06 10:04:44 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 10:04:44 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec 06 10:04:44 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 10:04:44 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:04:44 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4730003420 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:04:44 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:04:44 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_42] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f47500025a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:04:44 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v698: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 307 B/s rd, 0 op/s
Dec 06 10:04:44 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e147 do_prune osdmap full prune enabled
Dec 06 10:04:44 compute-0 ceph-mon[74327]: osdmap e147: 3 total, 3 up, 3 in
Dec 06 10:04:44 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 10:04:44 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 10:04:44 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 10:04:44 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 10:04:44 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e148 e148: 3 total, 3 up, 3 in
Dec 06 10:04:44 compute-0 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e148: 3 total, 3 up, 3 in
Dec 06 10:04:44 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0)
Dec 06 10:04:44 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Dec 06 10:04:44 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0)
Dec 06 10:04:44 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Dec 06 10:04:44 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 06 10:04:44 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 10:04:44 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 06 10:04:44 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 10:04:44 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 06 10:04:44 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 10:04:44 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Dec 06 10:04:44 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 10:04:44 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 06 10:04:44 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 10:04:44 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 06 10:04:44 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 10:04:44 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 06 10:04:44 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 10:04:44 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:04:44 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:04:44 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:04:44.790 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:04:44 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:04:44 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:04:44 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:04:44.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:04:44 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:04:44 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_41] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4720001bd0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:04:44 compute-0 sudo[258606]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 10:04:44 compute-0 sudo[258606]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:04:44 compute-0 sudo[258606]: pam_unix(sudo:session): session closed for user root
Dec 06 10:04:44 compute-0 sudo[258631]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 5ecd3f74-dade-5fc4-92ce-8950ae424258 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Dec 06 10:04:44 compute-0 sudo[258631]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:04:45 compute-0 podman[258699]: 2025-12-06 10:04:45.341724224 +0000 UTC m=+0.052374605 container create 5303f4a347e2f7e13e6be8d3ef61d9fb36f321e3cb1018f274dab817d7bbd5c0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_franklin, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 10:04:45 compute-0 systemd[1]: Started libpod-conmon-5303f4a347e2f7e13e6be8d3ef61d9fb36f321e3cb1018f274dab817d7bbd5c0.scope.
Dec 06 10:04:45 compute-0 systemd[1]: Started libcrun container.
Dec 06 10:04:45 compute-0 podman[258699]: 2025-12-06 10:04:45.320563072 +0000 UTC m=+0.031213493 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 06 10:04:45 compute-0 podman[258699]: 2025-12-06 10:04:45.426352127 +0000 UTC m=+0.137002588 container init 5303f4a347e2f7e13e6be8d3ef61d9fb36f321e3cb1018f274dab817d7bbd5c0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_franklin, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 10:04:45 compute-0 podman[258699]: 2025-12-06 10:04:45.435811933 +0000 UTC m=+0.146462304 container start 5303f4a347e2f7e13e6be8d3ef61d9fb36f321e3cb1018f274dab817d7bbd5c0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_franklin, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec 06 10:04:45 compute-0 podman[258699]: 2025-12-06 10:04:45.43904409 +0000 UTC m=+0.149694541 container attach 5303f4a347e2f7e13e6be8d3ef61d9fb36f321e3cb1018f274dab817d7bbd5c0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_franklin, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 10:04:45 compute-0 eager_franklin[258715]: 167 167
Dec 06 10:04:45 compute-0 systemd[1]: libpod-5303f4a347e2f7e13e6be8d3ef61d9fb36f321e3cb1018f274dab817d7bbd5c0.scope: Deactivated successfully.
Dec 06 10:04:45 compute-0 podman[258699]: 2025-12-06 10:04:45.444742994 +0000 UTC m=+0.155393385 container died 5303f4a347e2f7e13e6be8d3ef61d9fb36f321e3cb1018f274dab817d7bbd5c0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_franklin, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Dec 06 10:04:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-3062be52938bac2a7a03f0815012980e16a102ecdf7b6af112a7cabe0ffebf43-merged.mount: Deactivated successfully.
Dec 06 10:04:45 compute-0 podman[258699]: 2025-12-06 10:04:45.489296136 +0000 UTC m=+0.199946507 container remove 5303f4a347e2f7e13e6be8d3ef61d9fb36f321e3cb1018f274dab817d7bbd5c0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_franklin, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325)
Dec 06 10:04:45 compute-0 systemd[1]: libpod-conmon-5303f4a347e2f7e13e6be8d3ef61d9fb36f321e3cb1018f274dab817d7bbd5c0.scope: Deactivated successfully.
Dec 06 10:04:45 compute-0 ceph-mon[74327]: pgmap v698: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 307 B/s rd, 0 op/s
Dec 06 10:04:45 compute-0 ceph-mon[74327]: osdmap e148: 3 total, 3 up, 3 in
Dec 06 10:04:45 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Dec 06 10:04:45 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Dec 06 10:04:45 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 10:04:45 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 10:04:45 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 10:04:45 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 10:04:45 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 10:04:45 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 10:04:45 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 10:04:45 compute-0 podman[258739]: 2025-12-06 10:04:45.660615029 +0000 UTC m=+0.051697377 container create dd57a2d39502278aacc9e4a6b7006d6af987085d9f20c6a964032827e3dd70a7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_rubin, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 10:04:45 compute-0 systemd[1]: Started libpod-conmon-dd57a2d39502278aacc9e4a6b7006d6af987085d9f20c6a964032827e3dd70a7.scope.
Dec 06 10:04:45 compute-0 podman[258739]: 2025-12-06 10:04:45.634528295 +0000 UTC m=+0.025610733 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 06 10:04:45 compute-0 systemd[1]: Started libcrun container.
Dec 06 10:04:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eba2295cbc7c60b0eaf1ca5d69eed4eec04d8d0f25fd96897d9360bdb28dd364/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 10:04:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eba2295cbc7c60b0eaf1ca5d69eed4eec04d8d0f25fd96897d9360bdb28dd364/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 10:04:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eba2295cbc7c60b0eaf1ca5d69eed4eec04d8d0f25fd96897d9360bdb28dd364/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 10:04:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eba2295cbc7c60b0eaf1ca5d69eed4eec04d8d0f25fd96897d9360bdb28dd364/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 10:04:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eba2295cbc7c60b0eaf1ca5d69eed4eec04d8d0f25fd96897d9360bdb28dd364/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 06 10:04:45 compute-0 podman[258739]: 2025-12-06 10:04:45.748944782 +0000 UTC m=+0.140027150 container init dd57a2d39502278aacc9e4a6b7006d6af987085d9f20c6a964032827e3dd70a7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_rubin, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Dec 06 10:04:45 compute-0 podman[258739]: 2025-12-06 10:04:45.758527721 +0000 UTC m=+0.149610069 container start dd57a2d39502278aacc9e4a6b7006d6af987085d9f20c6a964032827e3dd70a7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_rubin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Dec 06 10:04:45 compute-0 podman[258739]: 2025-12-06 10:04:45.762322003 +0000 UTC m=+0.153404411 container attach dd57a2d39502278aacc9e4a6b7006d6af987085d9f20c6a964032827e3dd70a7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_rubin, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 10:04:45 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e148 do_prune osdmap full prune enabled
Dec 06 10:04:45 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e149 e149: 3 total, 3 up, 3 in
Dec 06 10:04:45 compute-0 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e149: 3 total, 3 up, 3 in
Dec 06 10:04:45 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 06 10:04:45 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3880271287' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 10:04:45 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 06 10:04:45 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3880271287' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 10:04:46 compute-0 friendly_rubin[258756]: --> passed data devices: 0 physical, 1 LVM
Dec 06 10:04:46 compute-0 friendly_rubin[258756]: --> All data devices are unavailable
Dec 06 10:04:46 compute-0 systemd[1]: libpod-dd57a2d39502278aacc9e4a6b7006d6af987085d9f20c6a964032827e3dd70a7.scope: Deactivated successfully.
Dec 06 10:04:46 compute-0 podman[258739]: 2025-12-06 10:04:46.09834964 +0000 UTC m=+0.489432028 container died dd57a2d39502278aacc9e4a6b7006d6af987085d9f20c6a964032827e3dd70a7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_rubin, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 10:04:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-eba2295cbc7c60b0eaf1ca5d69eed4eec04d8d0f25fd96897d9360bdb28dd364-merged.mount: Deactivated successfully.
Dec 06 10:04:46 compute-0 podman[258739]: 2025-12-06 10:04:46.146344945 +0000 UTC m=+0.537427313 container remove dd57a2d39502278aacc9e4a6b7006d6af987085d9f20c6a964032827e3dd70a7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_rubin, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec 06 10:04:46 compute-0 systemd[1]: libpod-conmon-dd57a2d39502278aacc9e4a6b7006d6af987085d9f20c6a964032827e3dd70a7.scope: Deactivated successfully.
Dec 06 10:04:46 compute-0 sudo[258631]: pam_unix(sudo:session): session closed for user root
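The lvm batch run that just finished reported "All data devices are unavailable" — likely because /dev/ceph_vg0/ceph_lv0 already carries an OSD, as the lvm list output below confirms. ceph-volume supports a dry-run flag to preview what batch would do; a sketch invoking it the way cephadm would, via cephadm shell since ceph-volume ships in the container image (the --report flag is real; device path and --no-auto are copied from the logged invocation):

    import subprocess

    # --report previews the batch plan without creating any OSDs.
    cmd = ["sudo", "cephadm", "shell", "--", "ceph-volume", "lvm", "batch",
           "--no-auto", "/dev/ceph_vg0/ceph_lv0", "--report", "--format", "json"]
    res = subprocess.run(cmd, capture_output=True, text=True)
    print(res.returncode, res.stdout or res.stderr)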
Dec 06 10:04:46 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:04:46 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_44] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4724003610 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:04:46 compute-0 sudo[258786]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 10:04:46 compute-0 sudo[258786]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:04:46 compute-0 sudo[258786]: pam_unix(sudo:session): session closed for user root
Dec 06 10:04:46 compute-0 sudo[258811]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 5ecd3f74-dade-5fc4-92ce-8950ae424258 -- lvm list --format json
Dec 06 10:04:46 compute-0 sudo[258811]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:04:46 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:04:46 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_42] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4724003610 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:04:46 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v701: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail
Dec 06 10:04:46 compute-0 podman[258876]: 2025-12-06 10:04:46.78614692 +0000 UTC m=+0.042853168 container create d032418259ffdf4143aed5785371550c152d33cfbb2a643561c5c73d738cfc81 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_satoshi, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 06 10:04:46 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:04:46 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:04:46 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:04:46.792 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:04:46 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:04:46 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:04:46 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:04:46.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:04:46 compute-0 systemd[1]: Started libpod-conmon-d032418259ffdf4143aed5785371550c152d33cfbb2a643561c5c73d738cfc81.scope.
Dec 06 10:04:46 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:04:46 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_43] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f472c003b70 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:04:46 compute-0 systemd[1]: Started libcrun container.
Dec 06 10:04:46 compute-0 podman[258876]: 2025-12-06 10:04:46.766844239 +0000 UTC m=+0.023550517 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 06 10:04:47 compute-0 ceph-mon[74327]: osdmap e149: 3 total, 3 up, 3 in
Dec 06 10:04:47 compute-0 ceph-mon[74327]: from='client.? 192.168.122.10:0/3880271287' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 10:04:47 compute-0 ceph-mon[74327]: from='client.? 192.168.122.10:0/3880271287' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 10:04:47 compute-0 podman[258876]: 2025-12-06 10:04:47.136675519 +0000 UTC m=+0.393381797 container init d032418259ffdf4143aed5785371550c152d33cfbb2a643561c5c73d738cfc81 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_satoshi, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 10:04:47 compute-0 podman[258876]: 2025-12-06 10:04:47.146978896 +0000 UTC m=+0.403685154 container start d032418259ffdf4143aed5785371550c152d33cfbb2a643561c5c73d738cfc81 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_satoshi, CEPH_REF=squid, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 10:04:47 compute-0 podman[258876]: 2025-12-06 10:04:47.150526793 +0000 UTC m=+0.407233061 container attach d032418259ffdf4143aed5785371550c152d33cfbb2a643561c5c73d738cfc81 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_satoshi, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Dec 06 10:04:47 compute-0 amazing_satoshi[258892]: 167 167
Dec 06 10:04:47 compute-0 systemd[1]: libpod-d032418259ffdf4143aed5785371550c152d33cfbb2a643561c5c73d738cfc81.scope: Deactivated successfully.
Dec 06 10:04:47 compute-0 podman[258876]: 2025-12-06 10:04:47.154657894 +0000 UTC m=+0.411364172 container died d032418259ffdf4143aed5785371550c152d33cfbb2a643561c5c73d738cfc81 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_satoshi, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS)
Dec 06 10:04:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-6667462582bc4ce1e1e541ee0d4b14f983ea6d061537e775481d3f8ed3d45c73-merged.mount: Deactivated successfully.
Dec 06 10:04:47 compute-0 podman[258876]: 2025-12-06 10:04:47.198777915 +0000 UTC m=+0.455484163 container remove d032418259ffdf4143aed5785371550c152d33cfbb2a643561c5c73d738cfc81 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_satoshi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Dec 06 10:04:47 compute-0 systemd[1]: libpod-conmon-d032418259ffdf4143aed5785371550c152d33cfbb2a643561c5c73d738cfc81.scope: Deactivated successfully.
Dec 06 10:04:47 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:04:47.247Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec 06 10:04:47 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:04:47.248Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
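The alertmanager warnings above show the ceph-dashboard webhook receivers on compute-1 and compute-2 timing out on port 8443. A quick reachability sketch against the URL taken from the error message, mirroring the webhook delivery (the empty JSON body and 5-second timeout are arbitrary illustrative choices):

    import urllib3

    url = "http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver"
    http = urllib3.PoolManager()
    try:
        r = http.request("POST", url, body=b"{}",
                         headers={"Content-Type": "application/json"},
                         timeout=urllib3.Timeout(total=5.0), retries=False)
        print("status:", r.status)
    except urllib3.exceptions.HTTPError as exc:
        print("unreachable:", exc)  # matches the i/o timeout seen in the log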
Dec 06 10:04:47 compute-0 podman[258918]: 2025-12-06 10:04:47.360768945 +0000 UTC m=+0.038437848 container create f3cdde4d11ce6d825eadf262ba6783049bf3a3ed88adf280c0415b04a44f66af (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_morse, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Dec 06 10:04:47 compute-0 systemd[1]: Started libpod-conmon-f3cdde4d11ce6d825eadf262ba6783049bf3a3ed88adf280c0415b04a44f66af.scope.
Dec 06 10:04:47 compute-0 systemd[1]: Started libcrun container.
Dec 06 10:04:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c04d17e4ce926a331d9fef412fe02b965b9f37858282b8e8c204fc52a526abdd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 10:04:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c04d17e4ce926a331d9fef412fe02b965b9f37858282b8e8c204fc52a526abdd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 10:04:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c04d17e4ce926a331d9fef412fe02b965b9f37858282b8e8c204fc52a526abdd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 10:04:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c04d17e4ce926a331d9fef412fe02b965b9f37858282b8e8c204fc52a526abdd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 10:04:47 compute-0 podman[258918]: 2025-12-06 10:04:47.427926716 +0000 UTC m=+0.105595629 container init f3cdde4d11ce6d825eadf262ba6783049bf3a3ed88adf280c0415b04a44f66af (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_morse, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True)
Dec 06 10:04:47 compute-0 podman[258918]: 2025-12-06 10:04:47.433980009 +0000 UTC m=+0.111648932 container start f3cdde4d11ce6d825eadf262ba6783049bf3a3ed88adf280c0415b04a44f66af (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_morse, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 10:04:47 compute-0 podman[258918]: 2025-12-06 10:04:47.438081499 +0000 UTC m=+0.115750402 container attach f3cdde4d11ce6d825eadf262ba6783049bf3a3ed88adf280c0415b04a44f66af (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_morse, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Dec 06 10:04:47 compute-0 podman[258918]: 2025-12-06 10:04:47.345054711 +0000 UTC m=+0.022723614 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 06 10:04:47 compute-0 stupefied_morse[258935]: {
Dec 06 10:04:47 compute-0 stupefied_morse[258935]:     "1": [
Dec 06 10:04:47 compute-0 stupefied_morse[258935]:         {
Dec 06 10:04:47 compute-0 stupefied_morse[258935]:             "devices": [
Dec 06 10:04:47 compute-0 stupefied_morse[258935]:                 "/dev/loop3"
Dec 06 10:04:47 compute-0 stupefied_morse[258935]:             ],
Dec 06 10:04:47 compute-0 stupefied_morse[258935]:             "lv_name": "ceph_lv0",
Dec 06 10:04:47 compute-0 stupefied_morse[258935]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 10:04:47 compute-0 stupefied_morse[258935]:             "lv_size": "21470642176",
Dec 06 10:04:47 compute-0 stupefied_morse[258935]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=a55pcS-v3Vt-CJ4W-fqCV-Baze-9VlY-jG9prS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=5ecd3f74-dade-5fc4-92ce-8950ae424258,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=7899c4d8-edb4-4836-b838-c4aa702ad7af,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 06 10:04:47 compute-0 stupefied_morse[258935]:             "lv_uuid": "a55pcS-v3Vt-CJ4W-fqCV-Baze-9VlY-jG9prS",
Dec 06 10:04:47 compute-0 stupefied_morse[258935]:             "name": "ceph_lv0",
Dec 06 10:04:47 compute-0 stupefied_morse[258935]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 10:04:47 compute-0 stupefied_morse[258935]:             "tags": {
Dec 06 10:04:47 compute-0 stupefied_morse[258935]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 06 10:04:47 compute-0 stupefied_morse[258935]:                 "ceph.block_uuid": "a55pcS-v3Vt-CJ4W-fqCV-Baze-9VlY-jG9prS",
Dec 06 10:04:47 compute-0 stupefied_morse[258935]:                 "ceph.cephx_lockbox_secret": "",
Dec 06 10:04:47 compute-0 stupefied_morse[258935]:                 "ceph.cluster_fsid": "5ecd3f74-dade-5fc4-92ce-8950ae424258",
Dec 06 10:04:47 compute-0 stupefied_morse[258935]:                 "ceph.cluster_name": "ceph",
Dec 06 10:04:47 compute-0 stupefied_morse[258935]:                 "ceph.crush_device_class": "",
Dec 06 10:04:47 compute-0 stupefied_morse[258935]:                 "ceph.encrypted": "0",
Dec 06 10:04:47 compute-0 stupefied_morse[258935]:                 "ceph.osd_fsid": "7899c4d8-edb4-4836-b838-c4aa702ad7af",
Dec 06 10:04:47 compute-0 stupefied_morse[258935]:                 "ceph.osd_id": "1",
Dec 06 10:04:47 compute-0 stupefied_morse[258935]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 06 10:04:47 compute-0 stupefied_morse[258935]:                 "ceph.type": "block",
Dec 06 10:04:47 compute-0 stupefied_morse[258935]:                 "ceph.vdo": "0",
Dec 06 10:04:47 compute-0 stupefied_morse[258935]:                 "ceph.with_tpm": "0"
Dec 06 10:04:47 compute-0 stupefied_morse[258935]:             },
Dec 06 10:04:47 compute-0 stupefied_morse[258935]:             "type": "block",
Dec 06 10:04:47 compute-0 stupefied_morse[258935]:             "vg_name": "ceph_vg0"
Dec 06 10:04:47 compute-0 stupefied_morse[258935]:         }
Dec 06 10:04:47 compute-0 stupefied_morse[258935]:     ]
Dec 06 10:04:47 compute-0 stupefied_morse[258935]: }
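[Editor's note] The JSON block above is the output of `ceph-volume lvm list --format json` run in a throwaway podman container: a map keyed by OSD id, each entry describing the backing LV, its physical devices, and the ceph.* LV tags. A minimal sketch for pulling out the OSD-to-device mapping, assuming the JSON was saved to a file (the filename is hypothetical):

```python
# Sketch: parse `ceph-volume lvm list --format json` output as captured above.
# "lvm_list.json" is a hypothetical file holding the JSON block from the log.
import json

with open("lvm_list.json") as f:
    osds = json.load(f)

for osd_id, volumes in osds.items():
    for vol in volumes:
        tags = vol.get("tags", {})
        print(f"osd.{osd_id}: lv={vol['lv_path']} "
              f"devices={','.join(vol['devices'])} "
              f"fsid={tags.get('ceph.osd_fsid', '?')} "
              f"encrypted={tags.get('ceph.encrypted', '?')}")
```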
Dec 06 10:04:47 compute-0 systemd[1]: libpod-f3cdde4d11ce6d825eadf262ba6783049bf3a3ed88adf280c0415b04a44f66af.scope: Deactivated successfully.
Dec 06 10:04:47 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:04:47 compute-0 podman[258945]: 2025-12-06 10:04:47.746172998 +0000 UTC m=+0.024206083 container died f3cdde4d11ce6d825eadf262ba6783049bf3a3ed88adf280c0415b04a44f66af (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_morse, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0)
Dec 06 10:04:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-c04d17e4ce926a331d9fef412fe02b965b9f37858282b8e8c204fc52a526abdd-merged.mount: Deactivated successfully.
Dec 06 10:04:47 compute-0 podman[258945]: 2025-12-06 10:04:47.795609581 +0000 UTC m=+0.073642656 container remove f3cdde4d11ce6d825eadf262ba6783049bf3a3ed88adf280c0415b04a44f66af (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_morse, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 10:04:47 compute-0 systemd[1]: libpod-conmon-f3cdde4d11ce6d825eadf262ba6783049bf3a3ed88adf280c0415b04a44f66af.scope: Deactivated successfully.
Dec 06 10:04:47 compute-0 sudo[258811]: pam_unix(sudo:session): session closed for user root
Dec 06 10:04:47 compute-0 sudo[258961]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 10:04:47 compute-0 sudo[258961]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:04:47 compute-0 sudo[258961]: pam_unix(sudo:session): session closed for user root
Dec 06 10:04:47 compute-0 sudo[258986]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 5ecd3f74-dade-5fc4-92ce-8950ae424258 -- raw list --format json
Dec 06 10:04:47 compute-0 sudo[258986]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
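[Editor's note] The sudo line above shows cephadm's remote-exec pattern: the mgr stages a content-addressed copy of the cephadm binary under /var/lib/ceph/<fsid>/ and runs it with `sudo /bin/python3`, passing --image, a --timeout in seconds, and the wrapped ceph-volume subcommand. A hedged sketch of issuing the same call programmatically (path, digest, and arguments copied from the log line; this is not a supported API):

```python
# Sketch: replay the cephadm ceph-volume call from the log via subprocess.
# Binary path and image digest are verbatim from the journal entry above.
import subprocess

fsid = "5ecd3f74-dade-5fc4-92ce-8950ae424258"
cephadm = (f"/var/lib/ceph/{fsid}/cephadm."
           "1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36")
image = ("quay.io/ceph/ceph@sha256:"
         "7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec")

out = subprocess.run(
    ["sudo", "/bin/python3", cephadm, "--image", image, "--timeout", "895",
     "ceph-volume", "--fsid", fsid, "--", "raw", "list", "--format", "json"],
    capture_output=True, text=True, check=True,
).stdout
print(out)  # "{}" here, per the container output logged a few lines below
```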
Dec 06 10:04:48 compute-0 ceph-mon[74327]: pgmap v701: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail
Dec 06 10:04:48 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e149 do_prune osdmap full prune enabled
Dec 06 10:04:48 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e150 e150: 3 total, 3 up, 3 in
Dec 06 10:04:48 compute-0 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e150: 3 total, 3 up, 3 in
Dec 06 10:04:48 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:04:48 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_45] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4720001bd0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:04:48 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:04:48 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_44] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4730003420 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:04:48 compute-0 podman[259052]: 2025-12-06 10:04:48.389035865 +0000 UTC m=+0.042142068 container create 4c0167ef16ca51b9e9d77ea8780d0950c06841be2798e79c362abfd181337535 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_hertz, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 10:04:48 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v703: 337 pgs: 337 active+clean; 41 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 47 KiB/s rd, 8.3 MiB/s wr, 68 op/s
Dec 06 10:04:48 compute-0 systemd[1]: Started libpod-conmon-4c0167ef16ca51b9e9d77ea8780d0950c06841be2798e79c362abfd181337535.scope.
Dec 06 10:04:48 compute-0 systemd[1]: Started libcrun container.
Dec 06 10:04:48 compute-0 podman[259052]: 2025-12-06 10:04:48.372640073 +0000 UTC m=+0.025746296 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 06 10:04:48 compute-0 podman[259052]: 2025-12-06 10:04:48.469979538 +0000 UTC m=+0.123085751 container init 4c0167ef16ca51b9e9d77ea8780d0950c06841be2798e79c362abfd181337535 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_hertz, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325)
Dec 06 10:04:48 compute-0 podman[259052]: 2025-12-06 10:04:48.476491773 +0000 UTC m=+0.129597976 container start 4c0167ef16ca51b9e9d77ea8780d0950c06841be2798e79c362abfd181337535 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_hertz, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 10:04:48 compute-0 podman[259052]: 2025-12-06 10:04:48.48081837 +0000 UTC m=+0.133924593 container attach 4c0167ef16ca51b9e9d77ea8780d0950c06841be2798e79c362abfd181337535 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_hertz, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1)
Dec 06 10:04:48 compute-0 confident_hertz[259069]: 167 167
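[Editor's note] The bare `167 167` printed by this short-lived container is consistent with cephadm's uid/gid probe: it runs the Ceph image once and stats /var/lib/ceph inside it to learn which uid:gid the daemons should own files as (167:167 is the ceph user and group in these images). A sketch of that probe, assuming podman is available; the exact invocation is an assumption, not cephadm's literal code:

```python
# Sketch: discover the ceph uid/gid baked into an image, as cephadm does.
# The stat target /var/lib/ceph and the image ref come from the log above.
import subprocess

image = ("quay.io/ceph/ceph@sha256:"
         "7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec")
uid_gid = subprocess.run(
    ["podman", "run", "--rm", "--entrypoint", "stat", image,
     "-c", "%u %g", "/var/lib/ceph"],
    capture_output=True, text=True, check=True,
).stdout.split()
print(uid_gid)  # expected: ['167', '167']
```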
Dec 06 10:04:48 compute-0 systemd[1]: libpod-4c0167ef16ca51b9e9d77ea8780d0950c06841be2798e79c362abfd181337535.scope: Deactivated successfully.
Dec 06 10:04:48 compute-0 podman[259052]: 2025-12-06 10:04:48.483566944 +0000 UTC m=+0.136673187 container died 4c0167ef16ca51b9e9d77ea8780d0950c06841be2798e79c362abfd181337535 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_hertz, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 10:04:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-903912ea43a427dd923d88a56fa2a0ed5e1af5a61a7615237329fd76357dbcb9-merged.mount: Deactivated successfully.
Dec 06 10:04:48 compute-0 podman[259052]: 2025-12-06 10:04:48.531154228 +0000 UTC m=+0.184260441 container remove 4c0167ef16ca51b9e9d77ea8780d0950c06841be2798e79c362abfd181337535 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_hertz, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 10:04:48 compute-0 systemd[1]: libpod-conmon-4c0167ef16ca51b9e9d77ea8780d0950c06841be2798e79c362abfd181337535.scope: Deactivated successfully.
Dec 06 10:04:48 compute-0 podman[259093]: 2025-12-06 10:04:48.705322935 +0000 UTC m=+0.055305673 container create 0ef3eecedb91c2fd8b389871ca85d196820a595e39fe0d1d293e82d9a6c515f9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_varahamihira, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.build-date=20250325)
Dec 06 10:04:48 compute-0 systemd[1]: Started libpod-conmon-0ef3eecedb91c2fd8b389871ca85d196820a595e39fe0d1d293e82d9a6c515f9.scope.
Dec 06 10:04:48 compute-0 podman[259093]: 2025-12-06 10:04:48.680917646 +0000 UTC m=+0.030900404 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 06 10:04:48 compute-0 systemd[1]: Started libcrun container.
Dec 06 10:04:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/437206563ac1df4174920cbe452782b112617d3d5963c0f0bfd7aee5df96eee2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 10:04:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/437206563ac1df4174920cbe452782b112617d3d5963c0f0bfd7aee5df96eee2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 10:04:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/437206563ac1df4174920cbe452782b112617d3d5963c0f0bfd7aee5df96eee2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 10:04:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/437206563ac1df4174920cbe452782b112617d3d5963c0f0bfd7aee5df96eee2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
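[Editor's note] The repeated "supports timestamps until 2038 (0x7fffffff)" warnings mean the XFS backing these overlay mounts stores inode timestamps as 32-bit signed seconds (typically an XFS formatted without the bigtime feature), so its timestamp range ends at the classic y2038 limit. Quick check of what 0x7fffffff means as an epoch value:

```python
# 0x7fffffff seconds after the Unix epoch is the y2038 rollover point.
from datetime import datetime, timezone

print(datetime.fromtimestamp(0x7FFFFFFF, tz=timezone.utc))
# 2038-01-19 03:14:07+00:00
```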
Dec 06 10:04:48 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:04:48 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:04:48 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:04:48.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:04:48 compute-0 podman[259093]: 2025-12-06 10:04:48.807363356 +0000 UTC m=+0.157346084 container init 0ef3eecedb91c2fd8b389871ca85d196820a595e39fe0d1d293e82d9a6c515f9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_varahamihira, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 10:04:48 compute-0 podman[259093]: 2025-12-06 10:04:48.818138937 +0000 UTC m=+0.168121665 container start 0ef3eecedb91c2fd8b389871ca85d196820a595e39fe0d1d293e82d9a6c515f9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_varahamihira, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 06 10:04:48 compute-0 podman[259093]: 2025-12-06 10:04:48.822551396 +0000 UTC m=+0.172534194 container attach 0ef3eecedb91c2fd8b389871ca85d196820a595e39fe0d1d293e82d9a6c515f9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_varahamihira, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=squid)
Dec 06 10:04:48 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:04:48 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:04:48 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:04:48.822 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
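[Editor's note] The beast access-log pattern here, anonymous `HEAD / HTTP/1.0` from 192.168.122.100 and .102 every two seconds, always 200, looks like load-balancer health checks against radosgw rather than real S3 traffic. An equivalent probe for comparison; the RGW frontend port is an assumption since the log does not show it:

```python
# Sketch: the same kind of HEAD / probe the radosgw access log records.
# Host and port are assumptions; adjust to the actual RGW frontend.
import http.client

conn = http.client.HTTPConnection("192.168.122.100", 8080, timeout=2)
conn.request("HEAD", "/")
resp = conn.getresponse()
print(resp.status)  # 200 from radosgw means the frontend is answering
conn.close()
```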
Dec 06 10:04:48 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:04:48 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_42] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4724003610 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:04:49 compute-0 ceph-mon[74327]: osdmap e150: 3 total, 3 up, 3 in
Dec 06 10:04:49 compute-0 ceph-mon[74327]: pgmap v703: 337 pgs: 337 active+clean; 41 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 47 KiB/s rd, 8.3 MiB/s wr, 68 op/s
Dec 06 10:04:49 compute-0 lvm[259186]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 06 10:04:49 compute-0 lvm[259186]: VG ceph_vg0 finished
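[Editor's note] These two lvm[] lines are LVM's event-based autoactivation path: once the last PV (/dev/loop3) shows up via udev, the VG is declared complete and activation finishes. The same VG/LV state seen in the JSON earlier can be queried directly as JSON; a sketch (typically needs root):

```python
# Sketch: query LVM for the ceph VG as a JSON report (lvs --reportformat json).
import json
import subprocess

report = json.loads(subprocess.run(
    ["lvs", "--reportformat", "json",
     "-o", "lv_name,vg_name,lv_size,lv_path", "ceph_vg0"],
    capture_output=True, text=True, check=True,
).stdout)
for lv in report["report"][0]["lv"]:
    print(lv["vg_name"], lv["lv_name"], lv["lv_size"], lv["lv_path"])
```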
Dec 06 10:04:49 compute-0 upbeat_varahamihira[259110]: {}
Dec 06 10:04:49 compute-0 systemd[1]: libpod-0ef3eecedb91c2fd8b389871ca85d196820a595e39fe0d1d293e82d9a6c515f9.scope: Deactivated successfully.
Dec 06 10:04:49 compute-0 systemd[1]: libpod-0ef3eecedb91c2fd8b389871ca85d196820a595e39fe0d1d293e82d9a6c515f9.scope: Consumed 1.209s CPU time.
Dec 06 10:04:49 compute-0 podman[259093]: 2025-12-06 10:04:49.602058868 +0000 UTC m=+0.952041576 container died 0ef3eecedb91c2fd8b389871ca85d196820a595e39fe0d1d293e82d9a6c515f9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_varahamihira, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Dec 06 10:04:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-437206563ac1df4174920cbe452782b112617d3d5963c0f0bfd7aee5df96eee2-merged.mount: Deactivated successfully.
Dec 06 10:04:49 compute-0 podman[259093]: 2025-12-06 10:04:49.647103802 +0000 UTC m=+0.997086510 container remove 0ef3eecedb91c2fd8b389871ca85d196820a595e39fe0d1d293e82d9a6c515f9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_varahamihira, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 06 10:04:49 compute-0 systemd[1]: libpod-conmon-0ef3eecedb91c2fd8b389871ca85d196820a595e39fe0d1d293e82d9a6c515f9.scope: Deactivated successfully.
Dec 06 10:04:49 compute-0 sudo[258986]: pam_unix(sudo:session): session closed for user root
Dec 06 10:04:49 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 06 10:04:49 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 10:04:49 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 06 10:04:49 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
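[Editor's note] The two handle_command entries show the cephadm mgr module persisting its host inventory into the monitors' config-key store (keys mgr/cephadm/host.compute-0.devices.0 and mgr/cephadm/host.compute-0). Those keys can be read back with the standard CLI; sketch:

```python
# Sketch: read back the device inventory cephadm just stored.
# The key name is taken verbatim from the audit log above.
import subprocess

key = "mgr/cephadm/host.compute-0.devices.0"
val = subprocess.run(["ceph", "config-key", "get", key],
                     capture_output=True, text=True, check=True).stdout
print(val[:200])  # JSON blob describing compute-0's devices
```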
Dec 06 10:04:49 compute-0 sudo[259201]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 10:04:49 compute-0 sudo[259201]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:04:49 compute-0 sudo[259201]: pam_unix(sudo:session): session closed for user root
Dec 06 10:04:50 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:04:50 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_43] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f472c003b70 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:04:50 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:04:50 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_45] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4720001d70 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:04:50 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v704: 337 pgs: 337 active+clean; 41 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 39 KiB/s rd, 6.8 MiB/s wr, 56 op/s
Dec 06 10:04:50 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:04:50 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 10:04:50 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:04:50.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 10:04:50 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 10:04:50 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 10:04:50 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:04:50 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 10:04:50 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:04:50.825 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 10:04:50 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:04:50 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_44] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4730003420 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:04:50 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:04:50] "GET /metrics HTTP/1.1" 200 48264 "" "Prometheus/2.51.0"
Dec 06 10:04:50 compute-0 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:04:50] "GET /metrics HTTP/1.1" 200 48264 "" "Prometheus/2.51.0"
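[Editor's note] The container unit and the mgr process both record the same cherrypy access line: Prometheus scraping the mgr's /metrics exporter every 10 s (responses around 48 KiB here). A sketch of fetching the same endpoint; port 9283 is the mgr prometheus module's default and an assumption for this deployment:

```python
# Sketch: pull the ceph-mgr prometheus exporter (default port 9283, assumed).
import urllib.request

with urllib.request.urlopen("http://192.168.122.100:9283/metrics", timeout=5) as r:
    body = r.read().decode()
print(len(body), "bytes")  # the log shows ~48 KiB responses
print(next((l for l in body.splitlines() if l.startswith("ceph_health_status")),
           "ceph_health_status not found"))
```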
Dec 06 10:04:51 compute-0 sudo[259226]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 10:04:51 compute-0 sudo[259226]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:04:51 compute-0 sudo[259226]: pam_unix(sudo:session): session closed for user root
Dec 06 10:04:51 compute-0 ceph-mon[74327]: pgmap v704: 337 pgs: 337 active+clean; 41 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 39 KiB/s rd, 6.8 MiB/s wr, 56 op/s
Dec 06 10:04:52 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:04:52 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_42] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4724003610 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:04:52 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:04:52 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_43] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f472c003b70 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:04:52 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v705: 337 pgs: 337 active+clean; 41 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 29 KiB/s rd, 5.2 MiB/s wr, 42 op/s
Dec 06 10:04:52 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:04:52 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e150 do_prune osdmap full prune enabled
Dec 06 10:04:52 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e151 e151: 3 total, 3 up, 3 in
Dec 06 10:04:52 compute-0 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e151: 3 total, 3 up, 3 in
Dec 06 10:04:52 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:04:52 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:04:52 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:04:52.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:04:52 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:04:52 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:04:52 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:04:52.829 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:04:52 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:04:52 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_45] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4720001d70 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:04:53 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 10:04:53 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 10:04:54 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 10:04:54 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 10:04:54 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 10:04:54 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 10:04:54 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 06 10:04:54 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 10:04:54 compute-0 ceph-mon[74327]: pgmap v705: 337 pgs: 337 active+clean; 41 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 29 KiB/s rd, 5.2 MiB/s wr, 42 op/s
Dec 06 10:04:54 compute-0 ceph-mon[74327]: osdmap e151: 3 total, 3 up, 3 in
Dec 06 10:04:54 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:04:54 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_44] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4730003420 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:04:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:04:54.237 162267 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:04:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:04:54.237 162267 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:04:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:04:54.237 162267 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
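[Editor's note] The three lockutils lines are oslo.concurrency's standard acquire/acquired/released triplet, reporting how long the caller waited for the lock and how long it was held (here ~1 ms wait, ~0 ms hold). A stdlib sketch reproducing that accounting, not the oslo implementation itself:

```python
# Sketch: time lock wait vs. hold, mirroring the oslo lockutils log lines.
import threading
import time

lock = threading.Lock()

def check_child_processes():
    t0 = time.monotonic()
    with lock:                          # log: "Acquiring" / "acquired ... waited"
        waited = time.monotonic() - t0
        t1 = time.monotonic()
        # ... child-process monitoring would run here ...
        held = time.monotonic() - t1    # log: "released ... held" (approximate)
    print(f'Lock "_check_child_processes" waited {waited:.3f}s, held {held:.3f}s')

check_child_processes()
```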
Dec 06 10:04:54 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:04:54 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_42] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4724003610 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:04:54 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v707: 337 pgs: 337 active+clean; 41 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 33 KiB/s rd, 5.1 MiB/s wr, 48 op/s
Dec 06 10:04:54 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:04:54 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:04:54 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:04:54.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:04:54 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:04:54 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:04:54 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:04:54.832 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:04:54 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:04:54 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_43] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f472c004490 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:04:55 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 10:04:55 compute-0 ceph-mon[74327]: pgmap v707: 337 pgs: 337 active+clean; 41 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 33 KiB/s rd, 5.1 MiB/s wr, 48 op/s
Dec 06 10:04:56 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:04:56 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_45] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4720001d70 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:04:56 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:04:56 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_44] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4730003420 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:04:56 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v708: 337 pgs: 337 active+clean; 41 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 4.0 KiB/s rd, 621 B/s wr, 5 op/s
Dec 06 10:04:56 compute-0 podman[259257]: 2025-12-06 10:04:56.430306351 +0000 UTC m=+0.060927734 container health_status a9cf33c1bf7891b3fdd9db4717ad5f3587fe9e5acf9171920c6d6334b70a502a (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team)
Dec 06 10:04:56 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:04:56 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:04:56 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:04:56.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:04:56 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:04:56 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:04:56 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:04:56.835 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:04:56 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:04:56 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_42] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4724003610 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:04:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:04:57.248Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 06 10:04:57 compute-0 ceph-mon[74327]: pgmap v708: 337 pgs: 337 active+clean; 41 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 4.0 KiB/s rd, 621 B/s wr, 5 op/s
Dec 06 10:04:57 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:04:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:04:58 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_43] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f472c004490 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:04:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:04:58 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_45] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4720001d70 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:04:58 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v709: 337 pgs: 337 active+clean; 41 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 3.6 KiB/s rd, 511 B/s wr, 4 op/s
Dec 06 10:04:58 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:04:58 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:04:58 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:04:58.809 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:04:58 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:04:58 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:04:58 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:04:58.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:04:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:04:58 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_44] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4730003420 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:04:59 compute-0 ceph-mon[74327]: pgmap v709: 337 pgs: 337 active+clean; 41 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 3.6 KiB/s rd, 511 B/s wr, 4 op/s
Dec 06 10:05:00 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:05:00 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_42] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4724003610 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:05:00 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:05:00 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_43] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f472c004490 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:05:00 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v710: 337 pgs: 337 active+clean; 41 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 3.6 KiB/s rd, 511 B/s wr, 4 op/s
Dec 06 10:05:00 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:05:00 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:05:00 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:05:00.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:05:00 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:05:00 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 10:05:00 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:05:00.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 10:05:00 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:05:00 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_45] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4720001d90 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:05:00 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:05:00] "GET /metrics HTTP/1.1" 200 48326 "" "Prometheus/2.51.0"
Dec 06 10:05:00 compute-0 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:05:00] "GET /metrics HTTP/1.1" 200 48326 "" "Prometheus/2.51.0"
Dec 06 10:05:01 compute-0 podman[259283]: 2025-12-06 10:05:01.469428236 +0000 UTC m=+0.094102329 container health_status ed08b04f63d3695f0aa2f04fcc706897af4aa24f286fa3cd10a4323ee2c868ab (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 06 10:05:01 compute-0 ceph-mon[74327]: pgmap v710: 337 pgs: 337 active+clean; 41 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 3.6 KiB/s rd, 511 B/s wr, 4 op/s
Dec 06 10:05:02 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:05:02 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_44] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4730003420 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:05:02 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:05:02 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_42] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4724003610 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:05:02 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v711: 337 pgs: 337 active+clean; 41 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 3.6 KiB/s rd, 511 B/s wr, 4 op/s
Dec 06 10:05:02 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:05:02 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:05:02 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:05:02 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:05:02.813 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:05:02 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:05:02 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 10:05:02 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:05:02.845 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 10:05:02 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:05:02 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_43] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f472c004490 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:05:03 compute-0 ceph-mon[74327]: pgmap v711: 337 pgs: 337 active+clean; 41 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 3.6 KiB/s rd, 511 B/s wr, 4 op/s
Dec 06 10:05:04 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:05:04 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_45] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4720001db0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:05:04 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:05:04 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_44] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4730003420 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:05:04 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v712: 337 pgs: 337 active+clean; 41 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 3.1 KiB/s rd, 438 B/s wr, 4 op/s
Dec 06 10:05:04 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:05:04 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:05:04 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:05:04.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:05:04 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[213774]: 06/12/2025 10:05:04 : epoch 6933fdbc : compute-0 : ganesha.nfsd-2[svc_42] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4724003610 fd 42 proxy ignored for local
Dec 06 10:05:04 compute-0 kernel: ganesha.nfsd[258355]: segfault at 50 ip 00007f4803bbf32e sp 00007f47d0ff8210 error 4 in libntirpc.so.5.8[7f4803ba4000+2c000] likely on CPU 5 (core 0, socket 5)
Dec 06 10:05:04 compute-0 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Dec 06 10:05:04 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:05:04 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:05:04 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:05:04.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:05:04 compute-0 systemd[1]: Started Process Core Dump (PID 259312/UID 0).
Dec 06 10:05:05 compute-0 ceph-mon[74327]: pgmap v712: 337 pgs: 337 active+clean; 41 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 3.1 KiB/s rd, 438 B/s wr, 4 op/s
Dec 06 10:05:06 compute-0 systemd-coredump[259313]: Process 213778 (ganesha.nfsd) of user 0 dumped core.
                                                    
                                                    Stack trace of thread 86:
                                                    #0  0x00007f4803bbf32e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)
                                                    ELF object binary architecture: AMD x86-64
Dec 06 10:05:06 compute-0 systemd[1]: systemd-coredump@6-259312-0.service: Deactivated successfully.
Dec 06 10:05:06 compute-0 systemd[1]: systemd-coredump@6-259312-0.service: Consumed 1.248s CPU time.
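[Editor's note] The kernel line plus the coredump record locate the crash: "segfault at 50" with error 4 is a user-mode read of a not-present page at address 0x50, i.e. an offset-0x50 dereference through a NULL struct pointer, with the instruction pointer inside libntirpc.so.5.8. With the matching debuginfo installed, the logged frame can be resolved to a symbol and source line; a sketch (tool availability and the usefulness of the raw offset are assumptions, since the stack trace shows no symbol names):

```python
# Sketch: resolve the crashing frame from the coredump stack trace above.
# Offset 0x2232e is copied from the journal; needs libntirpc debuginfo to
# print more than "??". For a full backtrace, `coredumpctl debug` with gdb
# may be more reliable than raw addr2line offsets.
import subprocess

# Show the most recent ganesha.nfsd dump held by systemd-coredump.
subprocess.run(["coredumpctl", "info", "ganesha.nfsd"], check=True)

# Map the logged offset back to a function/source line.
subprocess.run(["addr2line", "-f", "-e", "/usr/lib64/libntirpc.so.5.8", "0x2232e"],
               check=True)
```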
Dec 06 10:05:06 compute-0 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 06 10:05:06 compute-0 podman[259322]: 2025-12-06 10:05:06.239862406 +0000 UTC m=+0.027054961 container died 5d860964edcc2ae02d2071e13089b9e2f2642e3853757c3cef05b9c593c1e765 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Dec 06 10:05:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-38bb679519899423a10fd5aec53519d66c5cf90e4dcb5edc1f193a3cb3ab5273-merged.mount: Deactivated successfully.
Dec 06 10:05:06 compute-0 podman[259322]: 2025-12-06 10:05:06.285176318 +0000 UTC m=+0.072368853 container remove 5d860964edcc2ae02d2071e13089b9e2f2642e3853757c3cef05b9c593c1e765 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 10:05:06 compute-0 systemd[1]: ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258@nfs.cephfs.2.0.compute-0.dfwxck.service: Main process exited, code=exited, status=139/n/a
Dec 06 10:05:06 compute-0 podman[259320]: 2025-12-06 10:05:06.303380619 +0000 UTC m=+0.084486580 container health_status ec1e66d2175087ad7a28a9a6b5a784e30128df3f5e268959d009e364cd5120f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec 06 10:05:06 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v713: 337 pgs: 337 active+clean; 41 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 10:05:06 compute-0 systemd[1]: ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258@nfs.cephfs.2.0.compute-0.dfwxck.service: Failed with result 'exit-code'.
Dec 06 10:05:06 compute-0 systemd[1]: ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258@nfs.cephfs.2.0.compute-0.dfwxck.service: Consumed 2.549s CPU time.
Dec 06 10:05:06 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:05:06 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:05:06 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:05:06.817 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:05:06 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:05:06 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:05:06 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:05:06.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:05:07 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:05:07.250Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec 06 10:05:07 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:05:07.251Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec 06 10:05:07 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:05:07.251Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 06 10:05:07 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:05:07 compute-0 ceph-mon[74327]: pgmap v713: 337 pgs: 337 active+clean; 41 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 10:05:08 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:05:08.037 162267 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=3, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '9a:dc:0d', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'b6:0a:c4:b8:be:39'}, ipsec=False) old=SB_Global(nb_cfg=2) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 10:05:08 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:05:08.038 162267 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 06 10:05:08 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v714: 337 pgs: 337 active+clean; 41 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:05:08 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:05:08 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:05:08 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:05:08.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:05:08 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:05:08 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:05:08 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:05:08.856 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:05:08 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 06 10:05:08 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 10:05:09 compute-0 ceph-mon[74327]: pgmap v714: 337 pgs: 337 active+clean; 41 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:05:09 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 10:05:10 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v715: 337 pgs: 337 active+clean; 41 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 10:05:10 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:05:10 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:05:10 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:05:10.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:05:10 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-haproxy-nfs-cephfs-compute-0-fzuvue[96127]: [WARNING] 339/100510 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Dec 06 10:05:10 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:05:10 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:05:10 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:05:10.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:05:10 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:05:10] "GET /metrics HTTP/1.1" 200 48323 "" "Prometheus/2.51.0"
Dec 06 10:05:10 compute-0 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:05:10] "GET /metrics HTTP/1.1" 200 48323 "" "Prometheus/2.51.0"
Dec 06 10:05:11 compute-0 sudo[259381]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 10:05:11 compute-0 sudo[259381]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:05:11 compute-0 sudo[259381]: pam_unix(sudo:session): session closed for user root
Dec 06 10:05:11 compute-0 ceph-mon[74327]: pgmap v715: 337 pgs: 337 active+clean; 41 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 10:05:12 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v716: 337 pgs: 337 active+clean; 41 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 10:05:12 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:05:12 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:05:12 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:05:12 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:05:12.825 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:05:12 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:05:12 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:05:12 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:05:12.865 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:05:13 compute-0 ceph-mon[74327]: pgmap v716: 337 pgs: 337 active+clean; 41 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 10:05:14 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v717: 337 pgs: 337 active+clean; 41 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 10:05:14 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:05:14 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:05:14 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:05:14.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:05:14 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:05:14 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:05:14 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:05:14.868 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:05:15 compute-0 ceph-mon[74327]: pgmap v717: 337 pgs: 337 active+clean; 41 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 10:05:16 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:05:16.040 162267 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=d39b5be8-d4cf-41c7-9a64-1ee03801f4e1, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '3'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 10:05:16 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v718: 337 pgs: 337 active+clean; 41 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 10:05:16 compute-0 systemd[1]: ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258@nfs.cephfs.2.0.compute-0.dfwxck.service: Scheduled restart job, restart counter is at 7.
Dec 06 10:05:16 compute-0 systemd[1]: Stopped Ceph nfs.cephfs.2.0.compute-0.dfwxck for 5ecd3f74-dade-5fc4-92ce-8950ae424258.
Dec 06 10:05:16 compute-0 systemd[1]: ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258@nfs.cephfs.2.0.compute-0.dfwxck.service: Consumed 2.549s CPU time.
Dec 06 10:05:16 compute-0 systemd[1]: Starting Ceph nfs.cephfs.2.0.compute-0.dfwxck for 5ecd3f74-dade-5fc4-92ce-8950ae424258...
Dec 06 10:05:16 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-haproxy-nfs-cephfs-compute-0-fzuvue[96127]: [WARNING] 339/100516 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Dec 06 10:05:16 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:05:16 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:05:16 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:05:16.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:05:16 compute-0 podman[259461]: 2025-12-06 10:05:16.860622247 +0000 UTC m=+0.055980031 container create cb12feac15a0669dd612ec520b2008fd4691d61a8859fee5c73829837afae350 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True)
Dec 06 10:05:16 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:05:16 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:05:16 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:05:16.871 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:05:16 compute-0 podman[259461]: 2025-12-06 10:05:16.83331576 +0000 UTC m=+0.028673584 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 06 10:05:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/33760f1e5dff0f58c4ebac2793030140ffc34f481b06aa408ec990465208878b/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Dec 06 10:05:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/33760f1e5dff0f58c4ebac2793030140ffc34f481b06aa408ec990465208878b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 10:05:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/33760f1e5dff0f58c4ebac2793030140ffc34f481b06aa408ec990465208878b/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Dec 06 10:05:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/33760f1e5dff0f58c4ebac2793030140ffc34f481b06aa408ec990465208878b/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.2.0.compute-0.dfwxck-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Dec 06 10:05:16 compute-0 podman[259461]: 2025-12-06 10:05:16.958152767 +0000 UTC m=+0.153510531 container init cb12feac15a0669dd612ec520b2008fd4691d61a8859fee5c73829837afae350 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, OSD_FLAVOR=default)
Dec 06 10:05:16 compute-0 podman[259461]: 2025-12-06 10:05:16.963797679 +0000 UTC m=+0.159155423 container start cb12feac15a0669dd612ec520b2008fd4691d61a8859fee5c73829837afae350 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325)
Dec 06 10:05:16 compute-0 bash[259461]: cb12feac15a0669dd612ec520b2008fd4691d61a8859fee5c73829837afae350
Dec 06 10:05:16 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[259476]: 06/12/2025 10:05:16 : epoch 6933ffdc : compute-0 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Dec 06 10:05:16 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[259476]: 06/12/2025 10:05:16 : epoch 6933ffdc : compute-0 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Dec 06 10:05:16 compute-0 systemd[1]: Started Ceph nfs.cephfs.2.0.compute-0.dfwxck for 5ecd3f74-dade-5fc4-92ce-8950ae424258.
Dec 06 10:05:17 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[259476]: 06/12/2025 10:05:17 : epoch 6933ffdc : compute-0 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Dec 06 10:05:17 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[259476]: 06/12/2025 10:05:17 : epoch 6933ffdc : compute-0 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Dec 06 10:05:17 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[259476]: 06/12/2025 10:05:17 : epoch 6933ffdc : compute-0 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Dec 06 10:05:17 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[259476]: 06/12/2025 10:05:17 : epoch 6933ffdc : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Dec 06 10:05:17 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[259476]: 06/12/2025 10:05:17 : epoch 6933ffdc : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Dec 06 10:05:17 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[259476]: 06/12/2025 10:05:17 : epoch 6933ffdc : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 06 10:05:17 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:05:17.252Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 06 10:05:17 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:05:18 compute-0 ceph-mon[74327]: pgmap v718: 337 pgs: 337 active+clean; 41 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 10:05:18 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v719: 337 pgs: 337 active+clean; 41 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 85 B/s wr, 0 op/s
Dec 06 10:05:18 compute-0 sshd-session[259520]: Connection closed by authenticating user root 45.10.175.77 port 40066 [preauth]
Dec 06 10:05:18 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:05:18 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:05:18 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:05:18.832 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:05:18 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:05:18 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 10:05:18 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:05:18.874 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 10:05:19 compute-0 ceph-mon[74327]: pgmap v719: 337 pgs: 337 active+clean; 41 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 85 B/s wr, 0 op/s
Dec 06 10:05:20 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v720: 337 pgs: 337 active+clean; 41 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Dec 06 10:05:20 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:05:20 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 10:05:20 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:05:20.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 10:05:20 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:05:20 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 10:05:20 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:05:20.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 10:05:20 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:05:20] "GET /metrics HTTP/1.1" 200 48323 "" "Prometheus/2.51.0"
Dec 06 10:05:20 compute-0 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:05:20] "GET /metrics HTTP/1.1" 200 48323 "" "Prometheus/2.51.0"
Dec 06 10:05:21 compute-0 ceph-mon[74327]: pgmap v720: 337 pgs: 337 active+clean; 41 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Dec 06 10:05:22 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v721: 337 pgs: 337 active+clean; 41 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Dec 06 10:05:22 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:05:22 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:05:22 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:05:22 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:05:22.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:05:22 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:05:22 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 10:05:22 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:05:22.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 10:05:23 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[259476]: 06/12/2025 10:05:23 : epoch 6933ffdc : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 06 10:05:23 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[259476]: 06/12/2025 10:05:23 : epoch 6933ffdc : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 06 10:05:23 compute-0 ceph-mon[74327]: pgmap v721: 337 pgs: 337 active+clean; 41 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Dec 06 10:05:23 compute-0 ceph-mgr[74618]: [balancer INFO root] Optimize plan auto_2025-12-06_10:05:23
Dec 06 10:05:23 compute-0 ceph-mgr[74618]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 06 10:05:23 compute-0 ceph-mgr[74618]: [balancer INFO root] do_upmap
Dec 06 10:05:23 compute-0 ceph-mgr[74618]: [balancer INFO root] pools ['default.rgw.log', '.mgr', 'vms', '.rgw.root', 'backups', '.nfs', 'cephfs.cephfs.meta', 'images', 'volumes', 'cephfs.cephfs.data', 'default.rgw.meta', 'default.rgw.control']
Dec 06 10:05:23 compute-0 ceph-mgr[74618]: [balancer INFO root] prepared 0/10 upmap changes
Dec 06 10:05:23 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 06 10:05:23 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 10:05:23 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 10:05:23 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 10:05:24 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 10:05:24 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 10:05:24 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 10:05:24 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 10:05:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] _maybe_adjust
Dec 06 10:05:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:05:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 06 10:05:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:05:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 10:05:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:05:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 10:05:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:05:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 10:05:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:05:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Dec 06 10:05:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:05:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Dec 06 10:05:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:05:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 10:05:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:05:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec 06 10:05:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:05:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 06 10:05:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:05:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec 06 10:05:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:05:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 10:05:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:05:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 06 10:05:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 06 10:05:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 10:05:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 10:05:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 10:05:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 10:05:24 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v722: 337 pgs: 337 active+clean; 41 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 511 B/s wr, 1 op/s
Dec 06 10:05:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 06 10:05:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 10:05:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 10:05:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 10:05:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 10:05:24 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 10:05:24 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:05:24 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:05:24 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:05:24.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:05:24 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:05:24 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:05:24 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:05:24.884 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:05:25 compute-0 ceph-mon[74327]: pgmap v722: 337 pgs: 337 active+clean; 41 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 511 B/s wr, 1 op/s
Dec 06 10:05:26 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v723: 337 pgs: 337 active+clean; 41 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 511 B/s wr, 1 op/s
Dec 06 10:05:26 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:05:26 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:05:26 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:05:26.841 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:05:26 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:05:26 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:05:26 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:05:26.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:05:27 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:05:27.253Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec 06 10:05:27 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:05:27.253Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec 06 10:05:27 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:05:27.253Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 06 10:05:27 compute-0 podman[259532]: 2025-12-06 10:05:27.429117218 +0000 UTC m=+0.060693918 container health_status a9cf33c1bf7891b3fdd9db4717ad5f3587fe9e5acf9171920c6d6334b70a502a (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Dec 06 10:05:27 compute-0 ceph-mon[74327]: pgmap v723: 337 pgs: 337 active+clean; 41 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 511 B/s wr, 1 op/s
Dec 06 10:05:27 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:05:28 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v724: 337 pgs: 337 active+clean; 41 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 1023 B/s wr, 3 op/s
Dec 06 10:05:28 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:05:28 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 10:05:28 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:05:28.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 10:05:28 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:05:28 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:05:28 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:05:28.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:05:29 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[259476]: 06/12/2025 10:05:29 : epoch 6933ffdc : compute-0 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Dec 06 10:05:29 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[259476]: 06/12/2025 10:05:29 : epoch 6933ffdc : compute-0 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Dec 06 10:05:29 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[259476]: 06/12/2025 10:05:29 : epoch 6933ffdc : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Dec 06 10:05:29 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[259476]: 06/12/2025 10:05:29 : epoch 6933ffdc : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Dec 06 10:05:29 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[259476]: 06/12/2025 10:05:29 : epoch 6933ffdc : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Dec 06 10:05:29 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[259476]: 06/12/2025 10:05:29 : epoch 6933ffdc : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Dec 06 10:05:29 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[259476]: 06/12/2025 10:05:29 : epoch 6933ffdc : compute-0 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Dec 06 10:05:29 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[259476]: 06/12/2025 10:05:29 : epoch 6933ffdc : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec 06 10:05:29 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[259476]: 06/12/2025 10:05:29 : epoch 6933ffdc : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec 06 10:05:29 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[259476]: 06/12/2025 10:05:29 : epoch 6933ffdc : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec 06 10:05:29 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[259476]: 06/12/2025 10:05:29 : epoch 6933ffdc : compute-0 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Dec 06 10:05:29 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[259476]: 06/12/2025 10:05:29 : epoch 6933ffdc : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec 06 10:05:29 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[259476]: 06/12/2025 10:05:29 : epoch 6933ffdc : compute-0 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Dec 06 10:05:29 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[259476]: 06/12/2025 10:05:29 : epoch 6933ffdc : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Dec 06 10:05:29 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[259476]: 06/12/2025 10:05:29 : epoch 6933ffdc : compute-0 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Dec 06 10:05:29 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[259476]: 06/12/2025 10:05:29 : epoch 6933ffdc : compute-0 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Dec 06 10:05:29 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[259476]: 06/12/2025 10:05:29 : epoch 6933ffdc : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Dec 06 10:05:29 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[259476]: 06/12/2025 10:05:29 : epoch 6933ffdc : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Dec 06 10:05:29 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[259476]: 06/12/2025 10:05:29 : epoch 6933ffdc : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Dec 06 10:05:29 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[259476]: 06/12/2025 10:05:29 : epoch 6933ffdc : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Dec 06 10:05:29 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[259476]: 06/12/2025 10:05:29 : epoch 6933ffdc : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Dec 06 10:05:29 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[259476]: 06/12/2025 10:05:29 : epoch 6933ffdc : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Dec 06 10:05:29 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[259476]: 06/12/2025 10:05:29 : epoch 6933ffdc : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Dec 06 10:05:29 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[259476]: 06/12/2025 10:05:29 : epoch 6933ffdc : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Dec 06 10:05:29 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[259476]: 06/12/2025 10:05:29 : epoch 6933ffdc : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Dec 06 10:05:29 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[259476]: 06/12/2025 10:05:29 : epoch 6933ffdc : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Dec 06 10:05:29 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[259476]: 06/12/2025 10:05:29 : epoch 6933ffdc : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Dec 06 10:05:29 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[259476]: 06/12/2025 10:05:29 : epoch 6933ffdc : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 06 10:05:29 compute-0 ceph-mon[74327]: pgmap v724: 337 pgs: 337 active+clean; 41 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 1023 B/s wr, 3 op/s
Dec 06 10:05:30 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[259476]: 06/12/2025 10:05:30 : epoch 6933ffdc : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5d4000df0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:05:30 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[259476]: 06/12/2025 10:05:30 : epoch 6933ffdc : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5c0000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:05:30 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v725: 337 pgs: 337 active+clean; 41 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 938 B/s wr, 3 op/s
Dec 06 10:05:30 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:05:30 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 10:05:30 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:05:30.845 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 10:05:30 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[259476]: 06/12/2025 10:05:30 : epoch 6933ffdc : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5b0000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:05:30 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:05:30 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 10:05:30 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:05:30.893 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 10:05:30 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:05:30] "GET /metrics HTTP/1.1" 200 48321 "" "Prometheus/2.51.0"
Dec 06 10:05:30 compute-0 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:05:30] "GET /metrics HTTP/1.1" 200 48321 "" "Prometheus/2.51.0"
Dec 06 10:05:31 compute-0 sudo[259572]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 10:05:31 compute-0 sudo[259572]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:05:31 compute-0 sudo[259572]: pam_unix(sudo:session): session closed for user root
Dec 06 10:05:31 compute-0 ceph-mon[74327]: pgmap v725: 337 pgs: 337 active+clean; 41 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 938 B/s wr, 3 op/s
Dec 06 10:05:32 compute-0 nova_compute[254819]: 2025-12-06 10:05:32.218 254824 DEBUG oslo_concurrency.lockutils [None req-077a8872-63a0-4c44-b143-5dd05fa6825f 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Acquiring lock "9f4c3de7-de9e-45d5-b170-3469a0bd0959" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:05:32 compute-0 nova_compute[254819]: 2025-12-06 10:05:32.219 254824 DEBUG oslo_concurrency.lockutils [None req-077a8872-63a0-4c44-b143-5dd05fa6825f 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lock "9f4c3de7-de9e-45d5-b170-3469a0bd0959" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:05:32 compute-0 nova_compute[254819]: 2025-12-06 10:05:32.239 254824 DEBUG nova.compute.manager [None req-077a8872-63a0-4c44-b143-5dd05fa6825f 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 9f4c3de7-de9e-45d5-b170-3469a0bd0959] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 06 10:05:32 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[259476]: 06/12/2025 10:05:32 : epoch 6933ffdc : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5a8000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:05:32 compute-0 nova_compute[254819]: 2025-12-06 10:05:32.357 254824 DEBUG oslo_concurrency.lockutils [None req-077a8872-63a0-4c44-b143-5dd05fa6825f 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:05:32 compute-0 nova_compute[254819]: 2025-12-06 10:05:32.357 254824 DEBUG oslo_concurrency.lockutils [None req-077a8872-63a0-4c44-b143-5dd05fa6825f 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:05:32 compute-0 nova_compute[254819]: 2025-12-06 10:05:32.366 254824 DEBUG nova.virt.hardware [None req-077a8872-63a0-4c44-b143-5dd05fa6825f 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 06 10:05:32 compute-0 nova_compute[254819]: 2025-12-06 10:05:32.366 254824 INFO nova.compute.claims [None req-077a8872-63a0-4c44-b143-5dd05fa6825f 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 9f4c3de7-de9e-45d5-b170-3469a0bd0959] Claim successful on node compute-0.ctlplane.example.com
Dec 06 10:05:32 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[259476]: 06/12/2025 10:05:32 : epoch 6933ffdc : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5b4000fa0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:05:32 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v726: 337 pgs: 337 active+clean; 41 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 938 B/s wr, 3 op/s
Dec 06 10:05:32 compute-0 podman[259599]: 2025-12-06 10:05:32.448158431 +0000 UTC m=+0.074791538 container health_status ed08b04f63d3695f0aa2f04fcc706897af4aa24f286fa3cd10a4323ee2c868ab (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, config_id=ovn_controller, container_name=ovn_controller)
Dec 06 10:05:32 compute-0 nova_compute[254819]: 2025-12-06 10:05:32.479 254824 DEBUG oslo_concurrency.processutils [None req-077a8872-63a0-4c44-b143-5dd05fa6825f 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 10:05:32 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[259476]: 06/12/2025 10:05:32 : epoch 6933ffdc : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 06 10:05:32 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[259476]: 06/12/2025 10:05:32 : epoch 6933ffdc : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 06 10:05:32 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:05:32 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:05:32 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:05:32 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:05:32.847 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:05:32 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-haproxy-nfs-cephfs-compute-0-fzuvue[96127]: [WARNING] 339/100532 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Dec 06 10:05:32 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[259476]: 06/12/2025 10:05:32 : epoch 6933ffdc : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5c0001680 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:05:32 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 06 10:05:32 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2348345609' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 10:05:32 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:05:32 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:05:32 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:05:32.894 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:05:32 compute-0 nova_compute[254819]: 2025-12-06 10:05:32.915 254824 DEBUG oslo_concurrency.processutils [None req-077a8872-63a0-4c44-b143-5dd05fa6825f 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.436s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
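The "Running cmd" / "returned: 0" pair above brackets one oslo.concurrency subprocess call: nova's Ceph-backed storage code shells out to ceph df to size the backing pool. A minimal sketch of the same pattern, assuming oslo.concurrency is installed and the cluster is reachable; the arguments are copied from the logged command line:

    import json

    from oslo_concurrency import processutils

    # execute() returns (stdout, stderr) and raises ProcessExecutionError on a
    # non-zero exit status, which is how the DEBUG lines above can report
    # 'returned: 0' together with the elapsed time.
    out, _err = processutils.execute(
        'ceph', 'df', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')
    print(json.loads(out)['stats'])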
Dec 06 10:05:32 compute-0 nova_compute[254819]: 2025-12-06 10:05:32.924 254824 DEBUG nova.compute.provider_tree [None req-077a8872-63a0-4c44-b143-5dd05fa6825f 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Inventory has not changed in ProviderTree for provider: 06a9c7d1-c74c-47ea-9e97-16acfab6aa88 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 10:05:32 compute-0 nova_compute[254819]: 2025-12-06 10:05:32.947 254824 DEBUG nova.scheduler.client.report [None req-077a8872-63a0-4c44-b143-5dd05fa6825f 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Inventory has not changed for provider 06a9c7d1-c74c-47ea-9e97-16acfab6aa88 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
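The inventory record above is what nova reports to placement; the capacity placement may hand out follows directly from the logged fields as (total - reserved) * allocation_ratio. A hypothetical helper reproducing that arithmetic with the values from the log entry:

    def effective_capacity(total, reserved, allocation_ratio):
        # Placement treats (total - reserved) * allocation_ratio as the amount
        # it may allocate from a resource class.
        return int((total - reserved) * allocation_ratio)

    print(effective_capacity(7680, 512, 1.0))  # MEMORY_MB -> 7168
    print(effective_capacity(8, 0, 4.0))       # VCPU      -> 32
    print(effective_capacity(59, 0, 0.9))      # DISK_GB   -> 53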
Dec 06 10:05:32 compute-0 nova_compute[254819]: 2025-12-06 10:05:32.971 254824 DEBUG oslo_concurrency.lockutils [None req-077a8872-63a0-4c44-b143-5dd05fa6825f 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.613s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:05:32 compute-0 nova_compute[254819]: 2025-12-06 10:05:32.972 254824 DEBUG nova.compute.manager [None req-077a8872-63a0-4c44-b143-5dd05fa6825f 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 9f4c3de7-de9e-45d5-b170-3469a0bd0959] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 06 10:05:33 compute-0 nova_compute[254819]: 2025-12-06 10:05:33.027 254824 DEBUG nova.compute.manager [None req-077a8872-63a0-4c44-b143-5dd05fa6825f 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 9f4c3de7-de9e-45d5-b170-3469a0bd0959] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 06 10:05:33 compute-0 nova_compute[254819]: 2025-12-06 10:05:33.028 254824 DEBUG nova.network.neutron [None req-077a8872-63a0-4c44-b143-5dd05fa6825f 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 9f4c3de7-de9e-45d5-b170-3469a0bd0959] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 06 10:05:33 compute-0 nova_compute[254819]: 2025-12-06 10:05:33.073 254824 INFO nova.virt.libvirt.driver [None req-077a8872-63a0-4c44-b143-5dd05fa6825f 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 9f4c3de7-de9e-45d5-b170-3469a0bd0959] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 06 10:05:33 compute-0 nova_compute[254819]: 2025-12-06 10:05:33.122 254824 DEBUG nova.compute.manager [None req-077a8872-63a0-4c44-b143-5dd05fa6825f 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 9f4c3de7-de9e-45d5-b170-3469a0bd0959] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 06 10:05:33 compute-0 nova_compute[254819]: 2025-12-06 10:05:33.235 254824 DEBUG nova.compute.manager [None req-077a8872-63a0-4c44-b143-5dd05fa6825f 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 9f4c3de7-de9e-45d5-b170-3469a0bd0959] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 06 10:05:33 compute-0 nova_compute[254819]: 2025-12-06 10:05:33.238 254824 DEBUG nova.virt.libvirt.driver [None req-077a8872-63a0-4c44-b143-5dd05fa6825f 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 9f4c3de7-de9e-45d5-b170-3469a0bd0959] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 06 10:05:33 compute-0 nova_compute[254819]: 2025-12-06 10:05:33.239 254824 INFO nova.virt.libvirt.driver [None req-077a8872-63a0-4c44-b143-5dd05fa6825f 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 9f4c3de7-de9e-45d5-b170-3469a0bd0959] Creating image(s)
Dec 06 10:05:33 compute-0 nova_compute[254819]: 2025-12-06 10:05:33.281 254824 DEBUG nova.storage.rbd_utils [None req-077a8872-63a0-4c44-b143-5dd05fa6825f 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] rbd image 9f4c3de7-de9e-45d5-b170-3469a0bd0959_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 10:05:33 compute-0 nova_compute[254819]: 2025-12-06 10:05:33.328 254824 DEBUG nova.storage.rbd_utils [None req-077a8872-63a0-4c44-b143-5dd05fa6825f 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] rbd image 9f4c3de7-de9e-45d5-b170-3469a0bd0959_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 10:05:33 compute-0 nova_compute[254819]: 2025-12-06 10:05:33.364 254824 DEBUG nova.storage.rbd_utils [None req-077a8872-63a0-4c44-b143-5dd05fa6825f 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] rbd image 9f4c3de7-de9e-45d5-b170-3469a0bd0959_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 10:05:33 compute-0 nova_compute[254819]: 2025-12-06 10:05:33.368 254824 DEBUG oslo_concurrency.lockutils [None req-077a8872-63a0-4c44-b143-5dd05fa6825f 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Acquiring lock "1b7208203e670301d076a006cb3364d3eb842050" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:05:33 compute-0 nova_compute[254819]: 2025-12-06 10:05:33.369 254824 DEBUG oslo_concurrency.lockutils [None req-077a8872-63a0-4c44-b143-5dd05fa6825f 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lock "1b7208203e670301d076a006cb3364d3eb842050" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:05:33 compute-0 nova_compute[254819]: 2025-12-06 10:05:33.606 254824 DEBUG nova.virt.libvirt.imagebackend [None req-077a8872-63a0-4c44-b143-5dd05fa6825f 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Image locations are: [{'url': 'rbd://5ecd3f74-dade-5fc4-92ce-8950ae424258/images/9489b8a5-a798-4e26-87f9-59bb1eb2e6fd/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://5ecd3f74-dade-5fc4-92ce-8950ae424258/images/9489b8a5-a798-4e26-87f9-59bb1eb2e6fd/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085
Dec 06 10:05:33 compute-0 ceph-mon[74327]: pgmap v726: 337 pgs: 337 active+clean; 41 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 938 B/s wr, 3 op/s
Dec 06 10:05:33 compute-0 ceph-mon[74327]: from='client.? 192.168.122.100:0/2348345609' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 10:05:33 compute-0 nova_compute[254819]: 2025-12-06 10:05:33.818 254824 WARNING oslo_policy.policy [None req-077a8872-63a0-4c44-b143-5dd05fa6825f 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] JSON formatted policy_file support is deprecated since Victoria release. You need to use YAML format which will be default in future. You can use ``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted policy file to YAML-formatted in backward compatible way: https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.
Dec 06 10:05:33 compute-0 nova_compute[254819]: 2025-12-06 10:05:33.818 254824 WARNING oslo_policy.policy [None req-077a8872-63a0-4c44-b143-5dd05fa6825f 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] JSON formatted policy_file support is deprecated since Victoria release. You need to use YAML format which will be default in future. You can use ``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted policy file to YAML-formatted in backward compatible way: https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.
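The warning above (logged once per policy enforcer, hence the repetition) names its own remedy: convert the JSON policy file to YAML with the linked tool. A sketch of driving that conversion, assuming oslo.policy's console scripts are installed; the file paths here are hypothetical:

    from oslo_concurrency import processutils

    # oslopolicy-convert-json-to-yaml rewrites a JSON policy file as YAML,
    # commenting out rules that match the registered defaults for the namespace.
    processutils.execute(
        'oslopolicy-convert-json-to-yaml',
        '--namespace', 'nova',
        '--policy-file', '/etc/nova/policy.json',
        '--output-file', '/etc/nova/policy.yaml')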
Dec 06 10:05:33 compute-0 nova_compute[254819]: 2025-12-06 10:05:33.820 254824 DEBUG nova.policy [None req-077a8872-63a0-4c44-b143-5dd05fa6825f 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '03615580775245e6ae335ee9d785611f', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '92b402c8d3e2476abc98be42a1e6d34e', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 06 10:05:34 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[259476]: 06/12/2025 10:05:34 : epoch 6933ffdc : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5b00016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:05:34 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[259476]: 06/12/2025 10:05:34 : epoch 6933ffdc : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5a80016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:05:34 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v727: 337 pgs: 337 active+clean; 41 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 3.6 KiB/s rd, 1.6 KiB/s wr, 5 op/s
Dec 06 10:05:34 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:05:34 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:05:34 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:05:34.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:05:34 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[259476]: 06/12/2025 10:05:34 : epoch 6933ffdc : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5b4001ac0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:05:34 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:05:34 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:05:34 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:05:34.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:05:35 compute-0 nova_compute[254819]: 2025-12-06 10:05:35.031 254824 DEBUG oslo_concurrency.processutils [None req-077a8872-63a0-4c44-b143-5dd05fa6825f 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/1b7208203e670301d076a006cb3364d3eb842050.part --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 10:05:35 compute-0 nova_compute[254819]: 2025-12-06 10:05:35.050 254824 DEBUG nova.network.neutron [None req-077a8872-63a0-4c44-b143-5dd05fa6825f 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 9f4c3de7-de9e-45d5-b170-3469a0bd0959] Successfully created port: d4daf2d1-1774-4e84-b69b-60ba95ce1518 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 06 10:05:35 compute-0 nova_compute[254819]: 2025-12-06 10:05:35.087 254824 DEBUG oslo_concurrency.processutils [None req-077a8872-63a0-4c44-b143-5dd05fa6825f 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/1b7208203e670301d076a006cb3364d3eb842050.part --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 10:05:35 compute-0 nova_compute[254819]: 2025-12-06 10:05:35.089 254824 DEBUG nova.virt.images [None req-077a8872-63a0-4c44-b143-5dd05fa6825f 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] 9489b8a5-a798-4e26-87f9-59bb1eb2e6fd was qcow2, converting to raw fetch_to_raw /usr/lib/python3.9/site-packages/nova/virt/images.py:242
Dec 06 10:05:35 compute-0 nova_compute[254819]: 2025-12-06 10:05:35.091 254824 DEBUG nova.privsep.utils [None req-077a8872-63a0-4c44-b143-5dd05fa6825f 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63
Dec 06 10:05:35 compute-0 nova_compute[254819]: 2025-12-06 10:05:35.092 254824 DEBUG oslo_concurrency.processutils [None req-077a8872-63a0-4c44-b143-5dd05fa6825f 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Running cmd (subprocess): qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/1b7208203e670301d076a006cb3364d3eb842050.part /var/lib/nova/instances/_base/1b7208203e670301d076a006cb3364d3eb842050.converted execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 10:05:35 compute-0 nova_compute[254819]: 2025-12-06 10:05:35.294 254824 DEBUG oslo_concurrency.processutils [None req-077a8872-63a0-4c44-b143-5dd05fa6825f 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] CMD "qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/1b7208203e670301d076a006cb3364d3eb842050.part /var/lib/nova/instances/_base/1b7208203e670301d076a006cb3364d3eb842050.converted" returned: 0 in 0.201s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
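The entries above are the image-cache conversion step: the qcow2 base image fetched from Glance is rewritten as raw before being imported into RBD, since the Ceph backend works on raw data. A sketch of the same invocation via subprocess, with the long cache paths shortened; assumes qemu-img is installed:

    import subprocess

    # -f/-O name the source and destination formats and -t none bypasses the
    # host page cache for the destination, matching the logged command.
    subprocess.run(
        ['qemu-img', 'convert', '-t', 'none', '-O', 'raw', '-f', 'qcow2',
         'base.part', 'base.converted'],
        check=True)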
Dec 06 10:05:35 compute-0 nova_compute[254819]: 2025-12-06 10:05:35.299 254824 DEBUG oslo_concurrency.processutils [None req-077a8872-63a0-4c44-b143-5dd05fa6825f 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/1b7208203e670301d076a006cb3364d3eb842050.converted --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 10:05:35 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[259476]: 06/12/2025 10:05:35 : epoch 6933ffdc : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Dec 06 10:05:35 compute-0 nova_compute[254819]: 2025-12-06 10:05:35.369 254824 DEBUG oslo_concurrency.processutils [None req-077a8872-63a0-4c44-b143-5dd05fa6825f 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/1b7208203e670301d076a006cb3364d3eb842050.converted --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 10:05:35 compute-0 nova_compute[254819]: 2025-12-06 10:05:35.371 254824 DEBUG oslo_concurrency.lockutils [None req-077a8872-63a0-4c44-b143-5dd05fa6825f 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lock "1b7208203e670301d076a006cb3364d3eb842050" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 2.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:05:35 compute-0 nova_compute[254819]: 2025-12-06 10:05:35.399 254824 DEBUG nova.storage.rbd_utils [None req-077a8872-63a0-4c44-b143-5dd05fa6825f 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] rbd image 9f4c3de7-de9e-45d5-b170-3469a0bd0959_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 10:05:35 compute-0 nova_compute[254819]: 2025-12-06 10:05:35.404 254824 DEBUG oslo_concurrency.processutils [None req-077a8872-63a0-4c44-b143-5dd05fa6825f 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/1b7208203e670301d076a006cb3364d3eb842050 9f4c3de7-de9e-45d5-b170-3469a0bd0959_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 10:05:35 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e151 do_prune osdmap full prune enabled
Dec 06 10:05:35 compute-0 ceph-mon[74327]: pgmap v727: 337 pgs: 337 active+clean; 41 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 3.6 KiB/s rd, 1.6 KiB/s wr, 5 op/s
Dec 06 10:05:35 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e152 e152: 3 total, 3 up, 3 in
Dec 06 10:05:35 compute-0 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e152: 3 total, 3 up, 3 in
Dec 06 10:05:35 compute-0 nova_compute[254819]: 2025-12-06 10:05:35.750 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:05:35 compute-0 nova_compute[254819]: 2025-12-06 10:05:35.752 254824 DEBUG nova.compute.manager [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Dec 06 10:05:35 compute-0 nova_compute[254819]: 2025-12-06 10:05:35.775 254824 DEBUG nova.compute.manager [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Dec 06 10:05:35 compute-0 nova_compute[254819]: 2025-12-06 10:05:35.777 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:05:35 compute-0 nova_compute[254819]: 2025-12-06 10:05:35.778 254824 DEBUG nova.compute.manager [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Dec 06 10:05:35 compute-0 nova_compute[254819]: 2025-12-06 10:05:35.796 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:05:36 compute-0 nova_compute[254819]: 2025-12-06 10:05:36.186 254824 DEBUG nova.network.neutron [None req-077a8872-63a0-4c44-b143-5dd05fa6825f 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 9f4c3de7-de9e-45d5-b170-3469a0bd0959] Successfully updated port: d4daf2d1-1774-4e84-b69b-60ba95ce1518 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 06 10:05:36 compute-0 nova_compute[254819]: 2025-12-06 10:05:36.209 254824 DEBUG oslo_concurrency.lockutils [None req-077a8872-63a0-4c44-b143-5dd05fa6825f 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Acquiring lock "refresh_cache-9f4c3de7-de9e-45d5-b170-3469a0bd0959" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 10:05:36 compute-0 nova_compute[254819]: 2025-12-06 10:05:36.210 254824 DEBUG oslo_concurrency.lockutils [None req-077a8872-63a0-4c44-b143-5dd05fa6825f 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Acquired lock "refresh_cache-9f4c3de7-de9e-45d5-b170-3469a0bd0959" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 10:05:36 compute-0 nova_compute[254819]: 2025-12-06 10:05:36.210 254824 DEBUG nova.network.neutron [None req-077a8872-63a0-4c44-b143-5dd05fa6825f 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 9f4c3de7-de9e-45d5-b170-3469a0bd0959] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 06 10:05:36 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[259476]: 06/12/2025 10:05:36 : epoch 6933ffdc : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5c0001680 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:05:36 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[259476]: 06/12/2025 10:05:36 : epoch 6933ffdc : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5b00016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:05:36 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v729: 337 pgs: 337 active+clean; 41 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 3.5 KiB/s rd, 1.4 KiB/s wr, 5 op/s
Dec 06 10:05:36 compute-0 nova_compute[254819]: 2025-12-06 10:05:36.416 254824 DEBUG nova.network.neutron [None req-077a8872-63a0-4c44-b143-5dd05fa6825f 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 9f4c3de7-de9e-45d5-b170-3469a0bd0959] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 06 10:05:36 compute-0 podman[259753]: 2025-12-06 10:05:36.461137075 +0000 UTC m=+0.082547027 container health_status ec1e66d2175087ad7a28a9a6b5a784e30128df3f5e268959d009e364cd5120f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec 06 10:05:36 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e152 do_prune osdmap full prune enabled
Dec 06 10:05:36 compute-0 ceph-mon[74327]: osdmap e152: 3 total, 3 up, 3 in
Dec 06 10:05:36 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e153 e153: 3 total, 3 up, 3 in
Dec 06 10:05:36 compute-0 nova_compute[254819]: 2025-12-06 10:05:36.700 254824 DEBUG nova.compute.manager [req-c7d1b1d9-855d-414b-b808-09f861f642d9 req-25e460c7-d3ec-4c27-9b3d-01552be57518 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 9f4c3de7-de9e-45d5-b170-3469a0bd0959] Received event network-changed-d4daf2d1-1774-4e84-b69b-60ba95ce1518 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 10:05:36 compute-0 nova_compute[254819]: 2025-12-06 10:05:36.700 254824 DEBUG nova.compute.manager [req-c7d1b1d9-855d-414b-b808-09f861f642d9 req-25e460c7-d3ec-4c27-9b3d-01552be57518 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 9f4c3de7-de9e-45d5-b170-3469a0bd0959] Refreshing instance network info cache due to event network-changed-d4daf2d1-1774-4e84-b69b-60ba95ce1518. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 10:05:36 compute-0 nova_compute[254819]: 2025-12-06 10:05:36.700 254824 DEBUG oslo_concurrency.lockutils [req-c7d1b1d9-855d-414b-b808-09f861f642d9 req-25e460c7-d3ec-4c27-9b3d-01552be57518 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Acquiring lock "refresh_cache-9f4c3de7-de9e-45d5-b170-3469a0bd0959" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 10:05:36 compute-0 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e153: 3 total, 3 up, 3 in
Dec 06 10:05:36 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:05:36 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:05:36 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:05:36.851 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:05:36 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[259476]: 06/12/2025 10:05:36 : epoch 6933ffdc : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5a80016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:05:36 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:05:36 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 10:05:36 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:05:36.898 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 10:05:36 compute-0 nova_compute[254819]: 2025-12-06 10:05:36.917 254824 DEBUG oslo_concurrency.processutils [None req-077a8872-63a0-4c44-b143-5dd05fa6825f 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/1b7208203e670301d076a006cb3364d3eb842050 9f4c3de7-de9e-45d5-b170-3469a0bd0959_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.513s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
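The rbd import above pushes the converted base file into the vms pool as the instance's root disk image. A sketch of verifying the result with the Python rbd bindings (the same library nova.storage.rbd_utils wraps); the pool, image, and client names are from the log, everything else is an assumption:

    import rados
    import rbd

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf', rados_id='openstack')
    cluster.connect()
    try:
        ioctx = cluster.open_ioctx('vms')
        try:
            # After the import, the pool should list the new image.
            print('9f4c3de7-de9e-45d5-b170-3469a0bd0959_disk'
                  in rbd.RBD().list(ioctx))
        finally:
            ioctx.close()
    finally:
        cluster.shutdown()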
Dec 06 10:05:36 compute-0 nova_compute[254819]: 2025-12-06 10:05:36.994 254824 DEBUG nova.storage.rbd_utils [None req-077a8872-63a0-4c44-b143-5dd05fa6825f 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] resizing rbd image 9f4c3de7-de9e-45d5-b170-3469a0bd0959_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 06 10:05:37 compute-0 nova_compute[254819]: 2025-12-06 10:05:37.085 254824 DEBUG nova.objects.instance [None req-077a8872-63a0-4c44-b143-5dd05fa6825f 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lazy-loading 'migration_context' on Instance uuid 9f4c3de7-de9e-45d5-b170-3469a0bd0959 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 10:05:37 compute-0 nova_compute[254819]: 2025-12-06 10:05:37.108 254824 DEBUG nova.virt.libvirt.driver [None req-077a8872-63a0-4c44-b143-5dd05fa6825f 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 9f4c3de7-de9e-45d5-b170-3469a0bd0959] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 06 10:05:37 compute-0 nova_compute[254819]: 2025-12-06 10:05:37.108 254824 DEBUG nova.virt.libvirt.driver [None req-077a8872-63a0-4c44-b143-5dd05fa6825f 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 9f4c3de7-de9e-45d5-b170-3469a0bd0959] Ensure instance console log exists: /var/lib/nova/instances/9f4c3de7-de9e-45d5-b170-3469a0bd0959/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 06 10:05:37 compute-0 nova_compute[254819]: 2025-12-06 10:05:37.110 254824 DEBUG oslo_concurrency.lockutils [None req-077a8872-63a0-4c44-b143-5dd05fa6825f 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:05:37 compute-0 nova_compute[254819]: 2025-12-06 10:05:37.110 254824 DEBUG oslo_concurrency.lockutils [None req-077a8872-63a0-4c44-b143-5dd05fa6825f 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:05:37 compute-0 nova_compute[254819]: 2025-12-06 10:05:37.110 254824 DEBUG oslo_concurrency.lockutils [None req-077a8872-63a0-4c44-b143-5dd05fa6825f 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:05:37 compute-0 nova_compute[254819]: 2025-12-06 10:05:37.157 254824 DEBUG nova.network.neutron [None req-077a8872-63a0-4c44-b143-5dd05fa6825f 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 9f4c3de7-de9e-45d5-b170-3469a0bd0959] Updating instance_info_cache with network_info: [{"id": "d4daf2d1-1774-4e84-b69b-60ba95ce1518", "address": "fa:16:3e:a5:32:83", "network": {"id": "971faad6-f548-4a54-bc9c-3aa3cca72c6f", "bridge": "br-int", "label": "tempest-network-smoke--878146770", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd4daf2d1-17", "ovs_interfaceid": "d4daf2d1-1774-4e84-b69b-60ba95ce1518", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 10:05:37 compute-0 nova_compute[254819]: 2025-12-06 10:05:37.177 254824 DEBUG oslo_concurrency.lockutils [None req-077a8872-63a0-4c44-b143-5dd05fa6825f 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Releasing lock "refresh_cache-9f4c3de7-de9e-45d5-b170-3469a0bd0959" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 10:05:37 compute-0 nova_compute[254819]: 2025-12-06 10:05:37.177 254824 DEBUG nova.compute.manager [None req-077a8872-63a0-4c44-b143-5dd05fa6825f 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 9f4c3de7-de9e-45d5-b170-3469a0bd0959] Instance network_info: |[{"id": "d4daf2d1-1774-4e84-b69b-60ba95ce1518", "address": "fa:16:3e:a5:32:83", "network": {"id": "971faad6-f548-4a54-bc9c-3aa3cca72c6f", "bridge": "br-int", "label": "tempest-network-smoke--878146770", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd4daf2d1-17", "ovs_interfaceid": "d4daf2d1-1774-4e84-b69b-60ba95ce1518", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
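The network_info blob logged above carries everything the VIF plug needs; extracting the port's fixed address is a plain dictionary walk. A sketch over an abbreviated copy of the logged structure (only the keys the walk touches are kept):

    # Abbreviated from the network_info entry above.
    network_info = [{
        "id": "d4daf2d1-1774-4e84-b69b-60ba95ce1518",
        "network": {"subnets": [{"ips": [{"address": "10.100.0.14"}]}]},
    }]

    for vif in network_info:
        for subnet in vif["network"]["subnets"]:
            for ip in subnet["ips"]:
                print(vif["id"], ip["address"])  # -> ... 10.100.0.14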
Dec 06 10:05:37 compute-0 nova_compute[254819]: 2025-12-06 10:05:37.178 254824 DEBUG oslo_concurrency.lockutils [req-c7d1b1d9-855d-414b-b808-09f861f642d9 req-25e460c7-d3ec-4c27-9b3d-01552be57518 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Acquired lock "refresh_cache-9f4c3de7-de9e-45d5-b170-3469a0bd0959" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 10:05:37 compute-0 nova_compute[254819]: 2025-12-06 10:05:37.178 254824 DEBUG nova.network.neutron [req-c7d1b1d9-855d-414b-b808-09f861f642d9 req-25e460c7-d3ec-4c27-9b3d-01552be57518 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 9f4c3de7-de9e-45d5-b170-3469a0bd0959] Refreshing network info cache for port d4daf2d1-1774-4e84-b69b-60ba95ce1518 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 06 10:05:37 compute-0 nova_compute[254819]: 2025-12-06 10:05:37.181 254824 DEBUG nova.virt.libvirt.driver [None req-077a8872-63a0-4c44-b143-5dd05fa6825f 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 9f4c3de7-de9e-45d5-b170-3469a0bd0959] Start _get_guest_xml network_info=[{"id": "d4daf2d1-1774-4e84-b69b-60ba95ce1518", "address": "fa:16:3e:a5:32:83", "network": {"id": "971faad6-f548-4a54-bc9c-3aa3cca72c6f", "bridge": "br-int", "label": "tempest-network-smoke--878146770", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd4daf2d1-17", "ovs_interfaceid": "d4daf2d1-1774-4e84-b69b-60ba95ce1518", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T10:04:42Z,direct_url=<?>,disk_format='qcow2',id=9489b8a5-a798-4e26-87f9-59bb1eb2e6fd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3e0ab101ca7547d4a515169a0f2edef3',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T10:04:45Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'encrypted': False, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '9489b8a5-a798-4e26-87f9-59bb1eb2e6fd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 06 10:05:37 compute-0 nova_compute[254819]: 2025-12-06 10:05:37.186 254824 WARNING nova.virt.libvirt.driver [None req-077a8872-63a0-4c44-b143-5dd05fa6825f 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 10:05:37 compute-0 nova_compute[254819]: 2025-12-06 10:05:37.191 254824 DEBUG nova.virt.libvirt.host [None req-077a8872-63a0-4c44-b143-5dd05fa6825f 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 06 10:05:37 compute-0 nova_compute[254819]: 2025-12-06 10:05:37.192 254824 DEBUG nova.virt.libvirt.host [None req-077a8872-63a0-4c44-b143-5dd05fa6825f 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 06 10:05:37 compute-0 nova_compute[254819]: 2025-12-06 10:05:37.198 254824 DEBUG nova.virt.libvirt.host [None req-077a8872-63a0-4c44-b143-5dd05fa6825f 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 06 10:05:37 compute-0 nova_compute[254819]: 2025-12-06 10:05:37.199 254824 DEBUG nova.virt.libvirt.host [None req-077a8872-63a0-4c44-b143-5dd05fa6825f 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 06 10:05:37 compute-0 nova_compute[254819]: 2025-12-06 10:05:37.199 254824 DEBUG nova.virt.libvirt.driver [None req-077a8872-63a0-4c44-b143-5dd05fa6825f 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 06 10:05:37 compute-0 nova_compute[254819]: 2025-12-06 10:05:37.200 254824 DEBUG nova.virt.hardware [None req-077a8872-63a0-4c44-b143-5dd05fa6825f 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T10:04:41Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='0a252b9c-cc5f-41b2-a8b2-94fcf6e74d22',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T10:04:42Z,direct_url=<?>,disk_format='qcow2',id=9489b8a5-a798-4e26-87f9-59bb1eb2e6fd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3e0ab101ca7547d4a515169a0f2edef3',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T10:04:45Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 06 10:05:37 compute-0 nova_compute[254819]: 2025-12-06 10:05:37.200 254824 DEBUG nova.virt.hardware [None req-077a8872-63a0-4c44-b143-5dd05fa6825f 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 06 10:05:37 compute-0 nova_compute[254819]: 2025-12-06 10:05:37.200 254824 DEBUG nova.virt.hardware [None req-077a8872-63a0-4c44-b143-5dd05fa6825f 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 06 10:05:37 compute-0 nova_compute[254819]: 2025-12-06 10:05:37.200 254824 DEBUG nova.virt.hardware [None req-077a8872-63a0-4c44-b143-5dd05fa6825f 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 06 10:05:37 compute-0 nova_compute[254819]: 2025-12-06 10:05:37.201 254824 DEBUG nova.virt.hardware [None req-077a8872-63a0-4c44-b143-5dd05fa6825f 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 06 10:05:37 compute-0 nova_compute[254819]: 2025-12-06 10:05:37.201 254824 DEBUG nova.virt.hardware [None req-077a8872-63a0-4c44-b143-5dd05fa6825f 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 06 10:05:37 compute-0 nova_compute[254819]: 2025-12-06 10:05:37.201 254824 DEBUG nova.virt.hardware [None req-077a8872-63a0-4c44-b143-5dd05fa6825f 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 06 10:05:37 compute-0 nova_compute[254819]: 2025-12-06 10:05:37.201 254824 DEBUG nova.virt.hardware [None req-077a8872-63a0-4c44-b143-5dd05fa6825f 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 06 10:05:37 compute-0 nova_compute[254819]: 2025-12-06 10:05:37.201 254824 DEBUG nova.virt.hardware [None req-077a8872-63a0-4c44-b143-5dd05fa6825f 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 06 10:05:37 compute-0 nova_compute[254819]: 2025-12-06 10:05:37.201 254824 DEBUG nova.virt.hardware [None req-077a8872-63a0-4c44-b143-5dd05fa6825f 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 06 10:05:37 compute-0 nova_compute[254819]: 2025-12-06 10:05:37.201 254824 DEBUG nova.virt.hardware [None req-077a8872-63a0-4c44-b143-5dd05fa6825f 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
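The nova.virt.hardware lines above trace the CPU-topology search: with no flavor or image constraints the limits default to 65536 each, and a single vCPU admits exactly one (sockets, cores, threads) split. A hypothetical re-derivation of that enumeration, not nova's actual code:

    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536,
                            max_threads=65536):
        # Yield every (sockets, cores, threads) split whose product equals the
        # vCPU count, within the limits logged above.
        for s in range(1, min(vcpus, max_sockets) + 1):
            for c in range(1, min(vcpus, max_cores) + 1):
                for t in range(1, min(vcpus, max_threads) + 1):
                    if s * c * t == vcpus:
                        yield (s, c, t)

    print(list(possible_topologies(1)))  # [(1, 1, 1)], matching "Got 1 possible topologies"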
Dec 06 10:05:37 compute-0 nova_compute[254819]: 2025-12-06 10:05:37.205 254824 DEBUG nova.privsep.utils [None req-077a8872-63a0-4c44-b143-5dd05fa6825f 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63
Dec 06 10:05:37 compute-0 nova_compute[254819]: 2025-12-06 10:05:37.205 254824 DEBUG oslo_concurrency.processutils [None req-077a8872-63a0-4c44-b143-5dd05fa6825f 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 10:05:37 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:05:37.254Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 06 10:05:37 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 06 10:05:37 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2654563727' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 10:05:37 compute-0 nova_compute[254819]: 2025-12-06 10:05:37.659 254824 DEBUG oslo_concurrency.processutils [None req-077a8872-63a0-4c44-b143-5dd05fa6825f 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 10:05:37 compute-0 nova_compute[254819]: 2025-12-06 10:05:37.686 254824 DEBUG nova.storage.rbd_utils [None req-077a8872-63a0-4c44-b143-5dd05fa6825f 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] rbd image 9f4c3de7-de9e-45d5-b170-3469a0bd0959_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 10:05:37 compute-0 nova_compute[254819]: 2025-12-06 10:05:37.691 254824 DEBUG oslo_concurrency.processutils [None req-077a8872-63a0-4c44-b143-5dd05fa6825f 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 10:05:37 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:05:37 compute-0 ceph-mon[74327]: pgmap v729: 337 pgs: 337 active+clean; 41 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 3.5 KiB/s rd, 1.4 KiB/s wr, 5 op/s
Dec 06 10:05:37 compute-0 ceph-mon[74327]: osdmap e153: 3 total, 3 up, 3 in
Dec 06 10:05:37 compute-0 ceph-mon[74327]: from='client.? 192.168.122.100:0/2654563727' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 10:05:37 compute-0 nova_compute[254819]: 2025-12-06 10:05:37.808 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:05:37 compute-0 nova_compute[254819]: 2025-12-06 10:05:37.853 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:05:37 compute-0 nova_compute[254819]: 2025-12-06 10:05:37.853 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:05:37 compute-0 nova_compute[254819]: 2025-12-06 10:05:37.854 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:05:37 compute-0 nova_compute[254819]: 2025-12-06 10:05:37.854 254824 DEBUG nova.compute.resource_tracker [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 10:05:37 compute-0 nova_compute[254819]: 2025-12-06 10:05:37.854 254824 DEBUG oslo_concurrency.processutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 10:05:38 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 06 10:05:38 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/82878470' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 10:05:38 compute-0 nova_compute[254819]: 2025-12-06 10:05:38.149 254824 DEBUG oslo_concurrency.processutils [None req-077a8872-63a0-4c44-b143-5dd05fa6825f 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 10:05:38 compute-0 nova_compute[254819]: 2025-12-06 10:05:38.151 254824 DEBUG nova.virt.libvirt.vif [None req-077a8872-63a0-4c44-b143-5dd05fa6825f 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T10:05:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1430712907',display_name='tempest-TestNetworkBasicOps-server-1430712907',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1430712907',id=1,image_ref='9489b8a5-a798-4e26-87f9-59bb1eb2e6fd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCAfMPOvgHaRlqGgLXkto0FcIKRTuQseDyB3UM7MdJ4qc4V82jaOJG1wyoIF6xrRvoJcXVE+RFVPueMCiHrP5rYBgCoIkNmahi09ifuS6NMzBYr/VB4Uf4Lhhp6Gu2WU0Q==',key_name='tempest-TestNetworkBasicOps-1259992561',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='92b402c8d3e2476abc98be42a1e6d34e',ramdisk_id='',reservation_id='r-m1904u1h',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='9489b8a5-a798-4e26-87f9-59bb1eb2e6fd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1971100882',owner_user_name='tempest-TestNetworkBasicOps-1971100882-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T10:05:33Z,user_data=None,user_id='03615580775245e6ae335ee9d785611f',uuid=9f4c3de7-de9e-45d5-b170-3469a0bd0959,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d4daf2d1-1774-4e84-b69b-60ba95ce1518", "address": "fa:16:3e:a5:32:83", "network": {"id": "971faad6-f548-4a54-bc9c-3aa3cca72c6f", "bridge": "br-int", "label": "tempest-network-smoke--878146770", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd4daf2d1-17", "ovs_interfaceid": "d4daf2d1-1774-4e84-b69b-60ba95ce1518", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 06 10:05:38 compute-0 nova_compute[254819]: 2025-12-06 10:05:38.151 254824 DEBUG nova.network.os_vif_util [None req-077a8872-63a0-4c44-b143-5dd05fa6825f 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Converting VIF {"id": "d4daf2d1-1774-4e84-b69b-60ba95ce1518", "address": "fa:16:3e:a5:32:83", "network": {"id": "971faad6-f548-4a54-bc9c-3aa3cca72c6f", "bridge": "br-int", "label": "tempest-network-smoke--878146770", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd4daf2d1-17", "ovs_interfaceid": "d4daf2d1-1774-4e84-b69b-60ba95ce1518", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 10:05:38 compute-0 nova_compute[254819]: 2025-12-06 10:05:38.152 254824 DEBUG nova.network.os_vif_util [None req-077a8872-63a0-4c44-b143-5dd05fa6825f 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a5:32:83,bridge_name='br-int',has_traffic_filtering=True,id=d4daf2d1-1774-4e84-b69b-60ba95ce1518,network=Network(971faad6-f548-4a54-bc9c-3aa3cca72c6f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd4daf2d1-17') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 10:05:38 compute-0 nova_compute[254819]: 2025-12-06 10:05:38.154 254824 DEBUG nova.objects.instance [None req-077a8872-63a0-4c44-b143-5dd05fa6825f 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lazy-loading 'pci_devices' on Instance uuid 9f4c3de7-de9e-45d5-b170-3469a0bd0959 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 10:05:38 compute-0 nova_compute[254819]: 2025-12-06 10:05:38.176 254824 DEBUG nova.virt.libvirt.driver [None req-077a8872-63a0-4c44-b143-5dd05fa6825f 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 9f4c3de7-de9e-45d5-b170-3469a0bd0959] End _get_guest_xml xml=<domain type="kvm">
Dec 06 10:05:38 compute-0 nova_compute[254819]:   <uuid>9f4c3de7-de9e-45d5-b170-3469a0bd0959</uuid>
Dec 06 10:05:38 compute-0 nova_compute[254819]:   <name>instance-00000001</name>
Dec 06 10:05:38 compute-0 nova_compute[254819]:   <memory>131072</memory>
Dec 06 10:05:38 compute-0 nova_compute[254819]:   <vcpu>1</vcpu>
Dec 06 10:05:38 compute-0 nova_compute[254819]:   <metadata>
Dec 06 10:05:38 compute-0 nova_compute[254819]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 06 10:05:38 compute-0 nova_compute[254819]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 06 10:05:38 compute-0 nova_compute[254819]:       <nova:name>tempest-TestNetworkBasicOps-server-1430712907</nova:name>
Dec 06 10:05:38 compute-0 nova_compute[254819]:       <nova:creationTime>2025-12-06 10:05:37</nova:creationTime>
Dec 06 10:05:38 compute-0 nova_compute[254819]:       <nova:flavor name="m1.nano">
Dec 06 10:05:38 compute-0 nova_compute[254819]:         <nova:memory>128</nova:memory>
Dec 06 10:05:38 compute-0 nova_compute[254819]:         <nova:disk>1</nova:disk>
Dec 06 10:05:38 compute-0 nova_compute[254819]:         <nova:swap>0</nova:swap>
Dec 06 10:05:38 compute-0 nova_compute[254819]:         <nova:ephemeral>0</nova:ephemeral>
Dec 06 10:05:38 compute-0 nova_compute[254819]:         <nova:vcpus>1</nova:vcpus>
Dec 06 10:05:38 compute-0 nova_compute[254819]:       </nova:flavor>
Dec 06 10:05:38 compute-0 nova_compute[254819]:       <nova:owner>
Dec 06 10:05:38 compute-0 nova_compute[254819]:         <nova:user uuid="03615580775245e6ae335ee9d785611f">tempest-TestNetworkBasicOps-1971100882-project-member</nova:user>
Dec 06 10:05:38 compute-0 nova_compute[254819]:         <nova:project uuid="92b402c8d3e2476abc98be42a1e6d34e">tempest-TestNetworkBasicOps-1971100882</nova:project>
Dec 06 10:05:38 compute-0 nova_compute[254819]:       </nova:owner>
Dec 06 10:05:38 compute-0 nova_compute[254819]:       <nova:root type="image" uuid="9489b8a5-a798-4e26-87f9-59bb1eb2e6fd"/>
Dec 06 10:05:38 compute-0 nova_compute[254819]:       <nova:ports>
Dec 06 10:05:38 compute-0 nova_compute[254819]:         <nova:port uuid="d4daf2d1-1774-4e84-b69b-60ba95ce1518">
Dec 06 10:05:38 compute-0 nova_compute[254819]:           <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Dec 06 10:05:38 compute-0 nova_compute[254819]:         </nova:port>
Dec 06 10:05:38 compute-0 nova_compute[254819]:       </nova:ports>
Dec 06 10:05:38 compute-0 nova_compute[254819]:     </nova:instance>
Dec 06 10:05:38 compute-0 nova_compute[254819]:   </metadata>
Dec 06 10:05:38 compute-0 nova_compute[254819]:   <sysinfo type="smbios">
Dec 06 10:05:38 compute-0 nova_compute[254819]:     <system>
Dec 06 10:05:38 compute-0 nova_compute[254819]:       <entry name="manufacturer">RDO</entry>
Dec 06 10:05:38 compute-0 nova_compute[254819]:       <entry name="product">OpenStack Compute</entry>
Dec 06 10:05:38 compute-0 nova_compute[254819]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 06 10:05:38 compute-0 nova_compute[254819]:       <entry name="serial">9f4c3de7-de9e-45d5-b170-3469a0bd0959</entry>
Dec 06 10:05:38 compute-0 nova_compute[254819]:       <entry name="uuid">9f4c3de7-de9e-45d5-b170-3469a0bd0959</entry>
Dec 06 10:05:38 compute-0 nova_compute[254819]:       <entry name="family">Virtual Machine</entry>
Dec 06 10:05:38 compute-0 nova_compute[254819]:     </system>
Dec 06 10:05:38 compute-0 nova_compute[254819]:   </sysinfo>
Dec 06 10:05:38 compute-0 nova_compute[254819]:   <os>
Dec 06 10:05:38 compute-0 nova_compute[254819]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 06 10:05:38 compute-0 nova_compute[254819]:     <boot dev="hd"/>
Dec 06 10:05:38 compute-0 nova_compute[254819]:     <smbios mode="sysinfo"/>
Dec 06 10:05:38 compute-0 nova_compute[254819]:   </os>
Dec 06 10:05:38 compute-0 nova_compute[254819]:   <features>
Dec 06 10:05:38 compute-0 nova_compute[254819]:     <acpi/>
Dec 06 10:05:38 compute-0 nova_compute[254819]:     <apic/>
Dec 06 10:05:38 compute-0 nova_compute[254819]:     <vmcoreinfo/>
Dec 06 10:05:38 compute-0 nova_compute[254819]:   </features>
Dec 06 10:05:38 compute-0 nova_compute[254819]:   <clock offset="utc">
Dec 06 10:05:38 compute-0 nova_compute[254819]:     <timer name="pit" tickpolicy="delay"/>
Dec 06 10:05:38 compute-0 nova_compute[254819]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 06 10:05:38 compute-0 nova_compute[254819]:     <timer name="hpet" present="no"/>
Dec 06 10:05:38 compute-0 nova_compute[254819]:   </clock>
Dec 06 10:05:38 compute-0 nova_compute[254819]:   <cpu mode="host-model" match="exact">
Dec 06 10:05:38 compute-0 nova_compute[254819]:     <topology sockets="1" cores="1" threads="1"/>
Dec 06 10:05:38 compute-0 nova_compute[254819]:   </cpu>
Dec 06 10:05:38 compute-0 nova_compute[254819]:   <devices>
Dec 06 10:05:38 compute-0 nova_compute[254819]:     <disk type="network" device="disk">
Dec 06 10:05:38 compute-0 nova_compute[254819]:       <driver type="raw" cache="none"/>
Dec 06 10:05:38 compute-0 nova_compute[254819]:       <source protocol="rbd" name="vms/9f4c3de7-de9e-45d5-b170-3469a0bd0959_disk">
Dec 06 10:05:38 compute-0 nova_compute[254819]:         <host name="192.168.122.100" port="6789"/>
Dec 06 10:05:38 compute-0 nova_compute[254819]:         <host name="192.168.122.102" port="6789"/>
Dec 06 10:05:38 compute-0 nova_compute[254819]:         <host name="192.168.122.101" port="6789"/>
Dec 06 10:05:38 compute-0 nova_compute[254819]:       </source>
Dec 06 10:05:38 compute-0 nova_compute[254819]:       <auth username="openstack">
Dec 06 10:05:38 compute-0 nova_compute[254819]:         <secret type="ceph" uuid="5ecd3f74-dade-5fc4-92ce-8950ae424258"/>
Dec 06 10:05:38 compute-0 nova_compute[254819]:       </auth>
Dec 06 10:05:38 compute-0 nova_compute[254819]:       <target dev="vda" bus="virtio"/>
Dec 06 10:05:38 compute-0 nova_compute[254819]:     </disk>
Dec 06 10:05:38 compute-0 nova_compute[254819]:     <disk type="network" device="cdrom">
Dec 06 10:05:38 compute-0 nova_compute[254819]:       <driver type="raw" cache="none"/>
Dec 06 10:05:38 compute-0 nova_compute[254819]:       <source protocol="rbd" name="vms/9f4c3de7-de9e-45d5-b170-3469a0bd0959_disk.config">
Dec 06 10:05:38 compute-0 nova_compute[254819]:         <host name="192.168.122.100" port="6789"/>
Dec 06 10:05:38 compute-0 nova_compute[254819]:         <host name="192.168.122.102" port="6789"/>
Dec 06 10:05:38 compute-0 nova_compute[254819]:         <host name="192.168.122.101" port="6789"/>
Dec 06 10:05:38 compute-0 nova_compute[254819]:       </source>
Dec 06 10:05:38 compute-0 nova_compute[254819]:       <auth username="openstack">
Dec 06 10:05:38 compute-0 nova_compute[254819]:         <secret type="ceph" uuid="5ecd3f74-dade-5fc4-92ce-8950ae424258"/>
Dec 06 10:05:38 compute-0 nova_compute[254819]:       </auth>
Dec 06 10:05:38 compute-0 nova_compute[254819]:       <target dev="sda" bus="sata"/>
Dec 06 10:05:38 compute-0 nova_compute[254819]:     </disk>
Dec 06 10:05:38 compute-0 nova_compute[254819]:     <interface type="ethernet">
Dec 06 10:05:38 compute-0 nova_compute[254819]:       <mac address="fa:16:3e:a5:32:83"/>
Dec 06 10:05:38 compute-0 nova_compute[254819]:       <model type="virtio"/>
Dec 06 10:05:38 compute-0 nova_compute[254819]:       <driver name="vhost" rx_queue_size="512"/>
Dec 06 10:05:38 compute-0 nova_compute[254819]:       <mtu size="1442"/>
Dec 06 10:05:38 compute-0 nova_compute[254819]:       <target dev="tapd4daf2d1-17"/>
Dec 06 10:05:38 compute-0 nova_compute[254819]:     </interface>
Dec 06 10:05:38 compute-0 nova_compute[254819]:     <serial type="pty">
Dec 06 10:05:38 compute-0 nova_compute[254819]:       <log file="/var/lib/nova/instances/9f4c3de7-de9e-45d5-b170-3469a0bd0959/console.log" append="off"/>
Dec 06 10:05:38 compute-0 nova_compute[254819]:     </serial>
Dec 06 10:05:38 compute-0 nova_compute[254819]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 06 10:05:38 compute-0 nova_compute[254819]:     <video>
Dec 06 10:05:38 compute-0 nova_compute[254819]:       <model type="virtio"/>
Dec 06 10:05:38 compute-0 nova_compute[254819]:     </video>
Dec 06 10:05:38 compute-0 nova_compute[254819]:     <input type="tablet" bus="usb"/>
Dec 06 10:05:38 compute-0 nova_compute[254819]:     <rng model="virtio">
Dec 06 10:05:38 compute-0 nova_compute[254819]:       <backend model="random">/dev/urandom</backend>
Dec 06 10:05:38 compute-0 nova_compute[254819]:     </rng>
Dec 06 10:05:38 compute-0 nova_compute[254819]:     <controller type="pci" model="pcie-root"/>
Dec 06 10:05:38 compute-0 nova_compute[254819]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:05:38 compute-0 nova_compute[254819]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:05:38 compute-0 nova_compute[254819]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:05:38 compute-0 nova_compute[254819]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:05:38 compute-0 nova_compute[254819]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:05:38 compute-0 nova_compute[254819]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:05:38 compute-0 nova_compute[254819]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:05:38 compute-0 nova_compute[254819]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:05:38 compute-0 nova_compute[254819]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:05:38 compute-0 nova_compute[254819]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:05:38 compute-0 nova_compute[254819]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:05:38 compute-0 nova_compute[254819]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:05:38 compute-0 nova_compute[254819]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:05:38 compute-0 nova_compute[254819]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:05:38 compute-0 nova_compute[254819]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:05:38 compute-0 nova_compute[254819]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:05:38 compute-0 nova_compute[254819]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:05:38 compute-0 nova_compute[254819]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:05:38 compute-0 nova_compute[254819]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:05:38 compute-0 nova_compute[254819]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:05:38 compute-0 nova_compute[254819]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:05:38 compute-0 nova_compute[254819]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:05:38 compute-0 nova_compute[254819]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:05:38 compute-0 nova_compute[254819]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:05:38 compute-0 nova_compute[254819]:     <controller type="usb" index="0"/>
Dec 06 10:05:38 compute-0 nova_compute[254819]:     <memballoon model="virtio">
Dec 06 10:05:38 compute-0 nova_compute[254819]:       <stats period="10"/>
Dec 06 10:05:38 compute-0 nova_compute[254819]:     </memballoon>
Dec 06 10:05:38 compute-0 nova_compute[254819]:   </devices>
Dec 06 10:05:38 compute-0 nova_compute[254819]: </domain>
Dec 06 10:05:38 compute-0 nova_compute[254819]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 06 10:05:38 compute-0 nova_compute[254819]: 2025-12-06 10:05:38.176 254824 DEBUG nova.compute.manager [None req-077a8872-63a0-4c44-b143-5dd05fa6825f 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 9f4c3de7-de9e-45d5-b170-3469a0bd0959] Preparing to wait for external event network-vif-plugged-d4daf2d1-1774-4e84-b69b-60ba95ce1518 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 06 10:05:38 compute-0 nova_compute[254819]: 2025-12-06 10:05:38.177 254824 DEBUG oslo_concurrency.lockutils [None req-077a8872-63a0-4c44-b143-5dd05fa6825f 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Acquiring lock "9f4c3de7-de9e-45d5-b170-3469a0bd0959-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:05:38 compute-0 nova_compute[254819]: 2025-12-06 10:05:38.177 254824 DEBUG oslo_concurrency.lockutils [None req-077a8872-63a0-4c44-b143-5dd05fa6825f 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lock "9f4c3de7-de9e-45d5-b170-3469a0bd0959-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:05:38 compute-0 nova_compute[254819]: 2025-12-06 10:05:38.177 254824 DEBUG oslo_concurrency.lockutils [None req-077a8872-63a0-4c44-b143-5dd05fa6825f 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lock "9f4c3de7-de9e-45d5-b170-3469a0bd0959-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:05:38 compute-0 nova_compute[254819]: 2025-12-06 10:05:38.177 254824 DEBUG nova.virt.libvirt.vif [None req-077a8872-63a0-4c44-b143-5dd05fa6825f 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T10:05:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1430712907',display_name='tempest-TestNetworkBasicOps-server-1430712907',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1430712907',id=1,image_ref='9489b8a5-a798-4e26-87f9-59bb1eb2e6fd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCAfMPOvgHaRlqGgLXkto0FcIKRTuQseDyB3UM7MdJ4qc4V82jaOJG1wyoIF6xrRvoJcXVE+RFVPueMCiHrP5rYBgCoIkNmahi09ifuS6NMzBYr/VB4Uf4Lhhp6Gu2WU0Q==',key_name='tempest-TestNetworkBasicOps-1259992561',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='92b402c8d3e2476abc98be42a1e6d34e',ramdisk_id='',reservation_id='r-m1904u1h',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='9489b8a5-a798-4e26-87f9-59bb1eb2e6fd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1971100882',owner_user_name='tempest-TestNetworkBasicOps-1971100882-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T10:05:33Z,user_data=None,user_id='03615580775245e6ae335ee9d785611f',uuid=9f4c3de7-de9e-45d5-b170-3469a0bd0959,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d4daf2d1-1774-4e84-b69b-60ba95ce1518", "address": "fa:16:3e:a5:32:83", "network": {"id": "971faad6-f548-4a54-bc9c-3aa3cca72c6f", "bridge": "br-int", "label": "tempest-network-smoke--878146770", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd4daf2d1-17", "ovs_interfaceid": "d4daf2d1-1774-4e84-b69b-60ba95ce1518", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 06 10:05:38 compute-0 nova_compute[254819]: 2025-12-06 10:05:38.178 254824 DEBUG nova.network.os_vif_util [None req-077a8872-63a0-4c44-b143-5dd05fa6825f 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Converting VIF {"id": "d4daf2d1-1774-4e84-b69b-60ba95ce1518", "address": "fa:16:3e:a5:32:83", "network": {"id": "971faad6-f548-4a54-bc9c-3aa3cca72c6f", "bridge": "br-int", "label": "tempest-network-smoke--878146770", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd4daf2d1-17", "ovs_interfaceid": "d4daf2d1-1774-4e84-b69b-60ba95ce1518", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 10:05:38 compute-0 nova_compute[254819]: 2025-12-06 10:05:38.178 254824 DEBUG nova.network.os_vif_util [None req-077a8872-63a0-4c44-b143-5dd05fa6825f 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a5:32:83,bridge_name='br-int',has_traffic_filtering=True,id=d4daf2d1-1774-4e84-b69b-60ba95ce1518,network=Network(971faad6-f548-4a54-bc9c-3aa3cca72c6f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd4daf2d1-17') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 10:05:38 compute-0 nova_compute[254819]: 2025-12-06 10:05:38.178 254824 DEBUG os_vif [None req-077a8872-63a0-4c44-b143-5dd05fa6825f 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a5:32:83,bridge_name='br-int',has_traffic_filtering=True,id=d4daf2d1-1774-4e84-b69b-60ba95ce1518,network=Network(971faad6-f548-4a54-bc9c-3aa3cca72c6f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd4daf2d1-17') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 06 10:05:38 compute-0 nova_compute[254819]: 2025-12-06 10:05:38.210 254824 DEBUG ovsdbapp.backend.ovs_idl [None req-077a8872-63a0-4c44-b143-5dd05fa6825f 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Dec 06 10:05:38 compute-0 nova_compute[254819]: 2025-12-06 10:05:38.211 254824 DEBUG ovsdbapp.backend.ovs_idl [None req-077a8872-63a0-4c44-b143-5dd05fa6825f 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Dec 06 10:05:38 compute-0 nova_compute[254819]: 2025-12-06 10:05:38.211 254824 DEBUG ovsdbapp.backend.ovs_idl [None req-077a8872-63a0-4c44-b143-5dd05fa6825f 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Dec 06 10:05:38 compute-0 nova_compute[254819]: 2025-12-06 10:05:38.211 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-077a8872-63a0-4c44-b143-5dd05fa6825f 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] tcp:127.0.0.1:6640: entering CONNECTING _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec 06 10:05:38 compute-0 nova_compute[254819]: 2025-12-06 10:05:38.212 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-077a8872-63a0-4c44-b143-5dd05fa6825f 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [POLLOUT] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:05:38 compute-0 nova_compute[254819]: 2025-12-06 10:05:38.212 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-077a8872-63a0-4c44-b143-5dd05fa6825f 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec 06 10:05:38 compute-0 nova_compute[254819]: 2025-12-06 10:05:38.212 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-077a8872-63a0-4c44-b143-5dd05fa6825f 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:05:38 compute-0 nova_compute[254819]: 2025-12-06 10:05:38.214 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-077a8872-63a0-4c44-b143-5dd05fa6825f 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:05:38 compute-0 nova_compute[254819]: 2025-12-06 10:05:38.216 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-077a8872-63a0-4c44-b143-5dd05fa6825f 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:05:38 compute-0 nova_compute[254819]: 2025-12-06 10:05:38.224 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:05:38 compute-0 nova_compute[254819]: 2025-12-06 10:05:38.225 254824 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 10:05:38 compute-0 nova_compute[254819]: 2025-12-06 10:05:38.225 254824 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 10:05:38 compute-0 nova_compute[254819]: 2025-12-06 10:05:38.226 254824 INFO oslo.privsep.daemon [None req-077a8872-63a0-4c44-b143-5dd05fa6825f 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'vif_plug_ovs.privsep.vif_plug', '--privsep_sock_path', '/tmp/tmph1_9zsm8/privsep.sock']
Dec 06 10:05:38 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[259476]: 06/12/2025 10:05:38 : epoch 6933ffdc : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5b4001ac0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:05:38 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 06 10:05:38 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/551840607' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 10:05:38 compute-0 nova_compute[254819]: 2025-12-06 10:05:38.304 254824 DEBUG nova.network.neutron [req-c7d1b1d9-855d-414b-b808-09f861f642d9 req-25e460c7-d3ec-4c27-9b3d-01552be57518 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 9f4c3de7-de9e-45d5-b170-3469a0bd0959] Updated VIF entry in instance network info cache for port d4daf2d1-1774-4e84-b69b-60ba95ce1518. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 06 10:05:38 compute-0 nova_compute[254819]: 2025-12-06 10:05:38.305 254824 DEBUG nova.network.neutron [req-c7d1b1d9-855d-414b-b808-09f861f642d9 req-25e460c7-d3ec-4c27-9b3d-01552be57518 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 9f4c3de7-de9e-45d5-b170-3469a0bd0959] Updating instance_info_cache with network_info: [{"id": "d4daf2d1-1774-4e84-b69b-60ba95ce1518", "address": "fa:16:3e:a5:32:83", "network": {"id": "971faad6-f548-4a54-bc9c-3aa3cca72c6f", "bridge": "br-int", "label": "tempest-network-smoke--878146770", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd4daf2d1-17", "ovs_interfaceid": "d4daf2d1-1774-4e84-b69b-60ba95ce1518", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 10:05:38 compute-0 nova_compute[254819]: 2025-12-06 10:05:38.314 254824 DEBUG oslo_concurrency.processutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 10:05:38 compute-0 nova_compute[254819]: 2025-12-06 10:05:38.323 254824 DEBUG oslo_concurrency.lockutils [req-c7d1b1d9-855d-414b-b808-09f861f642d9 req-25e460c7-d3ec-4c27-9b3d-01552be57518 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Releasing lock "refresh_cache-9f4c3de7-de9e-45d5-b170-3469a0bd0959" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 10:05:38 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[259476]: 06/12/2025 10:05:38 : epoch 6933ffdc : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5c0001680 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:05:38 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v731: 337 pgs: 337 active+clean; 88 MiB data, 220 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 2.7 MiB/s wr, 54 op/s
Dec 06 10:05:38 compute-0 nova_compute[254819]: 2025-12-06 10:05:38.486 254824 WARNING nova.virt.libvirt.driver [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 10:05:38 compute-0 nova_compute[254819]: 2025-12-06 10:05:38.487 254824 DEBUG nova.compute.resource_tracker [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4802MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 06 10:05:38 compute-0 nova_compute[254819]: 2025-12-06 10:05:38.487 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:05:38 compute-0 nova_compute[254819]: 2025-12-06 10:05:38.488 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:05:38 compute-0 nova_compute[254819]: 2025-12-06 10:05:38.599 254824 DEBUG nova.compute.resource_tracker [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Instance 9f4c3de7-de9e-45d5-b170-3469a0bd0959 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 06 10:05:38 compute-0 nova_compute[254819]: 2025-12-06 10:05:38.599 254824 DEBUG nova.compute.resource_tracker [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 10:05:38 compute-0 nova_compute[254819]: 2025-12-06 10:05:38.600 254824 DEBUG nova.compute.resource_tracker [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 06 10:05:38 compute-0 nova_compute[254819]: 2025-12-06 10:05:38.663 254824 DEBUG nova.scheduler.client.report [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Refreshing inventories for resource provider 06a9c7d1-c74c-47ea-9e97-16acfab6aa88 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Dec 06 10:05:38 compute-0 nova_compute[254819]: 2025-12-06 10:05:38.696 254824 DEBUG nova.scheduler.client.report [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Updating ProviderTree inventory for provider 06a9c7d1-c74c-47ea-9e97-16acfab6aa88 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Dec 06 10:05:38 compute-0 nova_compute[254819]: 2025-12-06 10:05:38.699 254824 DEBUG nova.compute.provider_tree [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Updating inventory in ProviderTree for provider 06a9c7d1-c74c-47ea-9e97-16acfab6aa88 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Dec 06 10:05:38 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-haproxy-nfs-cephfs-compute-0-fzuvue[96127]: [WARNING] 339/100538 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Dec 06 10:05:38 compute-0 nova_compute[254819]: 2025-12-06 10:05:38.727 254824 DEBUG nova.scheduler.client.report [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Refreshing aggregate associations for resource provider 06a9c7d1-c74c-47ea-9e97-16acfab6aa88, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Dec 06 10:05:38 compute-0 nova_compute[254819]: 2025-12-06 10:05:38.750 254824 DEBUG nova.scheduler.client.report [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Refreshing trait associations for resource provider 06a9c7d1-c74c-47ea-9e97-16acfab6aa88, traits: HW_CPU_X86_AVX,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSE4A,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_SSSE3,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_SECURITY_TPM_1_2,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_SSE41,COMPUTE_STORAGE_BUS_IDE,COMPUTE_STORAGE_BUS_FDC,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_SSE42,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_SSE2,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_BMI2,COMPUTE_TRUSTED_CERTS,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_RESCUE_BFV,COMPUTE_DEVICE_TAGGING,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_CLMUL,HW_CPU_X86_BMI,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_MMX,HW_CPU_X86_SHA,HW_CPU_X86_AMD_SVM,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_AVX2,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_FMA3,HW_CPU_X86_AESNI,HW_CPU_X86_SVM,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_F16C,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_ABM,COMPUTE_ACCELERATORS,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_NODE,HW_CPU_X86_SSE,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_GRAPHICS_MODEL_VGA _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Dec 06 10:05:38 compute-0 nova_compute[254819]: 2025-12-06 10:05:38.797 254824 DEBUG oslo_concurrency.processutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 10:05:38 compute-0 ceph-mon[74327]: from='client.? 192.168.122.100:0/82878470' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 10:05:38 compute-0 ceph-mon[74327]: from='client.? 192.168.122.100:0/551840607' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 10:05:38 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:05:38 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:05:38 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:05:38.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:05:38 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[259476]: 06/12/2025 10:05:38 : epoch 6933ffdc : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5b00016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:05:38 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:05:38 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:05:38 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:05:38.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:05:38 compute-0 nova_compute[254819]: 2025-12-06 10:05:38.904 254824 INFO oslo.privsep.daemon [None req-077a8872-63a0-4c44-b143-5dd05fa6825f 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Spawned new privsep daemon via rootwrap
Dec 06 10:05:38 compute-0 nova_compute[254819]: 2025-12-06 10:05:38.748 259938 INFO oslo.privsep.daemon [-] privsep daemon starting
Dec 06 10:05:38 compute-0 nova_compute[254819]: 2025-12-06 10:05:38.755 259938 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Dec 06 10:05:38 compute-0 nova_compute[254819]: 2025-12-06 10:05:38.759 259938 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_NET_ADMIN/CAP_DAC_OVERRIDE|CAP_NET_ADMIN/none
Dec 06 10:05:38 compute-0 nova_compute[254819]: 2025-12-06 10:05:38.760 259938 INFO oslo.privsep.daemon [-] privsep daemon running as pid 259938
Dec 06 10:05:38 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 06 10:05:38 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 10:05:39 compute-0 nova_compute[254819]: 2025-12-06 10:05:39.221 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:05:39 compute-0 nova_compute[254819]: 2025-12-06 10:05:39.223 254824 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd4daf2d1-17, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 10:05:39 compute-0 nova_compute[254819]: 2025-12-06 10:05:39.224 254824 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapd4daf2d1-17, col_values=(('external_ids', {'iface-id': 'd4daf2d1-1774-4e84-b69b-60ba95ce1518', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:a5:32:83', 'vm-uuid': '9f4c3de7-de9e-45d5-b170-3469a0bd0959'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 10:05:39 compute-0 nova_compute[254819]: 2025-12-06 10:05:39.228 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:05:39 compute-0 NetworkManager[48882]: <info>  [1765015539.2304] manager: (tapd4daf2d1-17): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/23)
Dec 06 10:05:39 compute-0 nova_compute[254819]: 2025-12-06 10:05:39.235 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 10:05:39 compute-0 nova_compute[254819]: 2025-12-06 10:05:39.240 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:05:39 compute-0 nova_compute[254819]: 2025-12-06 10:05:39.243 254824 INFO os_vif [None req-077a8872-63a0-4c44-b143-5dd05fa6825f 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a5:32:83,bridge_name='br-int',has_traffic_filtering=True,id=d4daf2d1-1774-4e84-b69b-60ba95ce1518,network=Network(971faad6-f548-4a54-bc9c-3aa3cca72c6f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd4daf2d1-17')
Dec 06 10:05:39 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 06 10:05:39 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2829121739' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 10:05:39 compute-0 nova_compute[254819]: 2025-12-06 10:05:39.282 254824 DEBUG oslo_concurrency.processutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.485s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 10:05:39 compute-0 nova_compute[254819]: 2025-12-06 10:05:39.287 254824 DEBUG nova.compute.provider_tree [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Updating inventory in ProviderTree for provider 06a9c7d1-c74c-47ea-9e97-16acfab6aa88 with inventory: {'MEMORY_MB': {'total': 7680, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Dec 06 10:05:39 compute-0 nova_compute[254819]: 2025-12-06 10:05:39.343 254824 DEBUG nova.virt.libvirt.driver [None req-077a8872-63a0-4c44-b143-5dd05fa6825f 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 10:05:39 compute-0 nova_compute[254819]: 2025-12-06 10:05:39.344 254824 DEBUG nova.virt.libvirt.driver [None req-077a8872-63a0-4c44-b143-5dd05fa6825f 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 10:05:39 compute-0 nova_compute[254819]: 2025-12-06 10:05:39.344 254824 DEBUG nova.virt.libvirt.driver [None req-077a8872-63a0-4c44-b143-5dd05fa6825f 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] No VIF found with MAC fa:16:3e:a5:32:83, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 06 10:05:39 compute-0 nova_compute[254819]: 2025-12-06 10:05:39.344 254824 INFO nova.virt.libvirt.driver [None req-077a8872-63a0-4c44-b143-5dd05fa6825f 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 9f4c3de7-de9e-45d5-b170-3469a0bd0959] Using config drive
Dec 06 10:05:39 compute-0 nova_compute[254819]: 2025-12-06 10:05:39.374 254824 DEBUG nova.storage.rbd_utils [None req-077a8872-63a0-4c44-b143-5dd05fa6825f 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] rbd image 9f4c3de7-de9e-45d5-b170-3469a0bd0959_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 10:05:39 compute-0 nova_compute[254819]: 2025-12-06 10:05:39.381 254824 DEBUG nova.scheduler.client.report [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Updated inventory for provider 06a9c7d1-c74c-47ea-9e97-16acfab6aa88 with generation 3 in Placement from set_inventory_for_provider using data: {'MEMORY_MB': {'total': 7680, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:957
Dec 06 10:05:39 compute-0 nova_compute[254819]: 2025-12-06 10:05:39.381 254824 DEBUG nova.compute.provider_tree [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Updating resource provider 06a9c7d1-c74c-47ea-9e97-16acfab6aa88 generation from 3 to 4 during operation: update_inventory _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164
Dec 06 10:05:39 compute-0 nova_compute[254819]: 2025-12-06 10:05:39.382 254824 DEBUG nova.compute.provider_tree [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Updating inventory in ProviderTree for provider 06a9c7d1-c74c-47ea-9e97-16acfab6aa88 with inventory: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Dec 06 10:05:39 compute-0 nova_compute[254819]: 2025-12-06 10:05:39.403 254824 DEBUG nova.compute.resource_tracker [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 10:05:39 compute-0 nova_compute[254819]: 2025-12-06 10:05:39.403 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.916s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:05:39 compute-0 ceph-mon[74327]: pgmap v731: 337 pgs: 337 active+clean; 88 MiB data, 220 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 2.7 MiB/s wr, 54 op/s
Dec 06 10:05:39 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 10:05:39 compute-0 ceph-mon[74327]: from='client.? 192.168.122.100:0/2829121739' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 10:05:40 compute-0 nova_compute[254819]: 2025-12-06 10:05:40.213 254824 INFO nova.virt.libvirt.driver [None req-077a8872-63a0-4c44-b143-5dd05fa6825f 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 9f4c3de7-de9e-45d5-b170-3469a0bd0959] Creating config drive at /var/lib/nova/instances/9f4c3de7-de9e-45d5-b170-3469a0bd0959/disk.config
Dec 06 10:05:40 compute-0 nova_compute[254819]: 2025-12-06 10:05:40.217 254824 DEBUG oslo_concurrency.processutils [None req-077a8872-63a0-4c44-b143-5dd05fa6825f 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/9f4c3de7-de9e-45d5-b170-3469a0bd0959/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmphgc54fy_ execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 10:05:40 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[259476]: 06/12/2025 10:05:40 : epoch 6933ffdc : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5a80016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:05:40 compute-0 nova_compute[254819]: 2025-12-06 10:05:40.305 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:05:40 compute-0 nova_compute[254819]: 2025-12-06 10:05:40.343 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:05:40 compute-0 nova_compute[254819]: 2025-12-06 10:05:40.344 254824 DEBUG nova.compute.manager [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 06 10:05:40 compute-0 nova_compute[254819]: 2025-12-06 10:05:40.344 254824 DEBUG nova.compute.manager [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 06 10:05:40 compute-0 nova_compute[254819]: 2025-12-06 10:05:40.346 254824 DEBUG oslo_concurrency.processutils [None req-077a8872-63a0-4c44-b143-5dd05fa6825f 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/9f4c3de7-de9e-45d5-b170-3469a0bd0959/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmphgc54fy_" returned: 0 in 0.129s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 10:05:40 compute-0 nova_compute[254819]: 2025-12-06 10:05:40.374 254824 DEBUG nova.storage.rbd_utils [None req-077a8872-63a0-4c44-b143-5dd05fa6825f 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] rbd image 9f4c3de7-de9e-45d5-b170-3469a0bd0959_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 10:05:40 compute-0 nova_compute[254819]: 2025-12-06 10:05:40.378 254824 DEBUG oslo_concurrency.processutils [None req-077a8872-63a0-4c44-b143-5dd05fa6825f 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/9f4c3de7-de9e-45d5-b170-3469a0bd0959/disk.config 9f4c3de7-de9e-45d5-b170-3469a0bd0959_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 10:05:40 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[259476]: 06/12/2025 10:05:40 : epoch 6933ffdc : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5b4001ac0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:05:40 compute-0 nova_compute[254819]: 2025-12-06 10:05:40.399 254824 DEBUG nova.compute.manager [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] [instance: 9f4c3de7-de9e-45d5-b170-3469a0bd0959] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Dec 06 10:05:40 compute-0 nova_compute[254819]: 2025-12-06 10:05:40.400 254824 DEBUG nova.compute.manager [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 06 10:05:40 compute-0 nova_compute[254819]: 2025-12-06 10:05:40.400 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:05:40 compute-0 nova_compute[254819]: 2025-12-06 10:05:40.400 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:05:40 compute-0 nova_compute[254819]: 2025-12-06 10:05:40.401 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:05:40 compute-0 nova_compute[254819]: 2025-12-06 10:05:40.401 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:05:40 compute-0 nova_compute[254819]: 2025-12-06 10:05:40.401 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:05:40 compute-0 nova_compute[254819]: 2025-12-06 10:05:40.401 254824 DEBUG nova.compute.manager [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 06 10:05:40 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v732: 337 pgs: 337 active+clean; 88 MiB data, 220 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 2.7 MiB/s wr, 54 op/s
Dec 06 10:05:40 compute-0 nova_compute[254819]: 2025-12-06 10:05:40.545 254824 DEBUG oslo_concurrency.processutils [None req-077a8872-63a0-4c44-b143-5dd05fa6825f 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/9f4c3de7-de9e-45d5-b170-3469a0bd0959/disk.config 9f4c3de7-de9e-45d5-b170-3469a0bd0959_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.167s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 10:05:40 compute-0 nova_compute[254819]: 2025-12-06 10:05:40.546 254824 INFO nova.virt.libvirt.driver [None req-077a8872-63a0-4c44-b143-5dd05fa6825f 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 9f4c3de7-de9e-45d5-b170-3469a0bd0959] Deleting local config drive /var/lib/nova/instances/9f4c3de7-de9e-45d5-b170-3469a0bd0959/disk.config because it was imported into RBD.
Dec 06 10:05:40 compute-0 systemd[1]: Starting libvirt secret daemon...
Dec 06 10:05:40 compute-0 systemd[1]: Started libvirt secret daemon.
Dec 06 10:05:40 compute-0 kernel: tun: Universal TUN/TAP device driver, 1.6
Dec 06 10:05:40 compute-0 kernel: tapd4daf2d1-17: entered promiscuous mode
Dec 06 10:05:40 compute-0 NetworkManager[48882]: <info>  [1765015540.7111] manager: (tapd4daf2d1-17): new Tun device (/org/freedesktop/NetworkManager/Devices/24)
Dec 06 10:05:40 compute-0 ovn_controller[152417]: 2025-12-06T10:05:40Z|00027|binding|INFO|Claiming lport d4daf2d1-1774-4e84-b69b-60ba95ce1518 for this chassis.
Dec 06 10:05:40 compute-0 ovn_controller[152417]: 2025-12-06T10:05:40Z|00028|binding|INFO|d4daf2d1-1774-4e84-b69b-60ba95ce1518: Claiming fa:16:3e:a5:32:83 10.100.0.14
Dec 06 10:05:40 compute-0 nova_compute[254819]: 2025-12-06 10:05:40.732 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:05:40 compute-0 nova_compute[254819]: 2025-12-06 10:05:40.735 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:05:40 compute-0 nova_compute[254819]: 2025-12-06 10:05:40.750 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:05:40 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:05:40.755 162267 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a5:32:83 10.100.0.14'], port_security=['fa:16:3e:a5:32:83 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '9f4c3de7-de9e-45d5-b170-3469a0bd0959', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-971faad6-f548-4a54-bc9c-3aa3cca72c6f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '92b402c8d3e2476abc98be42a1e6d34e', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'c7c9b5ec-d7a8-44ba-8a79-a0a05df423dd', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=83e40234-7108-4b28-a3a7-b2ef4fad45ac, chassis=[<ovs.db.idl.Row object at 0x7f70c28558b0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f70c28558b0>], logical_port=d4daf2d1-1774-4e84-b69b-60ba95ce1518) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 10:05:40 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:05:40.757 162267 INFO neutron.agent.ovn.metadata.agent [-] Port d4daf2d1-1774-4e84-b69b-60ba95ce1518 in datapath 971faad6-f548-4a54-bc9c-3aa3cca72c6f bound to our chassis
Dec 06 10:05:40 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:05:40.760 162267 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 971faad6-f548-4a54-bc9c-3aa3cca72c6f
Dec 06 10:05:40 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:05:40.763 162267 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.default', '--privsep_sock_path', '/tmp/tmppprpmqyr/privsep.sock']
Dec 06 10:05:40 compute-0 systemd-udevd[260064]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 10:05:40 compute-0 NetworkManager[48882]: <info>  [1765015540.7909] device (tapd4daf2d1-17): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 06 10:05:40 compute-0 NetworkManager[48882]: <info>  [1765015540.7917] device (tapd4daf2d1-17): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 06 10:05:40 compute-0 systemd-machined[216202]: New machine qemu-1-instance-00000001.
Dec 06 10:05:40 compute-0 systemd[1]: Started Virtual Machine qemu-1-instance-00000001.
Dec 06 10:05:40 compute-0 nova_compute[254819]: 2025-12-06 10:05:40.828 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:05:40 compute-0 ovn_controller[152417]: 2025-12-06T10:05:40Z|00029|binding|INFO|Setting lport d4daf2d1-1774-4e84-b69b-60ba95ce1518 ovn-installed in OVS
Dec 06 10:05:40 compute-0 ovn_controller[152417]: 2025-12-06T10:05:40Z|00030|binding|INFO|Setting lport d4daf2d1-1774-4e84-b69b-60ba95ce1518 up in Southbound
Dec 06 10:05:40 compute-0 nova_compute[254819]: 2025-12-06 10:05:40.836 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:05:40 compute-0 ceph-mon[74327]: from='client.? 192.168.122.101:0/3018802899' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 10:05:40 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:05:40 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:05:40 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:05:40.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:05:40 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[259476]: 06/12/2025 10:05:40 : epoch 6933ffdc : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5c0002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:05:40 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:05:40] "GET /metrics HTTP/1.1" 200 48379 "" "Prometheus/2.51.0"
Dec 06 10:05:40 compute-0 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:05:40] "GET /metrics HTTP/1.1" 200 48379 "" "Prometheus/2.51.0"
Dec 06 10:05:40 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:05:40 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:05:40 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:05:40.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:05:40 compute-0 sshd-session[259987]: Received disconnect from 193.46.255.99 port 56384:11:  [preauth]
Dec 06 10:05:40 compute-0 sshd-session[259987]: Disconnected from authenticating user root 193.46.255.99 port 56384 [preauth]
Dec 06 10:05:41 compute-0 nova_compute[254819]: 2025-12-06 10:05:41.364 254824 DEBUG nova.compute.manager [req-2e47052c-98c0-4483-8c48-8137237a8bcc req-72005371-0ee1-4553-89f9-8481d0b35e9b d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 9f4c3de7-de9e-45d5-b170-3469a0bd0959] Received event network-vif-plugged-d4daf2d1-1774-4e84-b69b-60ba95ce1518 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 10:05:41 compute-0 nova_compute[254819]: 2025-12-06 10:05:41.364 254824 DEBUG oslo_concurrency.lockutils [req-2e47052c-98c0-4483-8c48-8137237a8bcc req-72005371-0ee1-4553-89f9-8481d0b35e9b d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Acquiring lock "9f4c3de7-de9e-45d5-b170-3469a0bd0959-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:05:41 compute-0 nova_compute[254819]: 2025-12-06 10:05:41.365 254824 DEBUG oslo_concurrency.lockutils [req-2e47052c-98c0-4483-8c48-8137237a8bcc req-72005371-0ee1-4553-89f9-8481d0b35e9b d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Lock "9f4c3de7-de9e-45d5-b170-3469a0bd0959-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:05:41 compute-0 nova_compute[254819]: 2025-12-06 10:05:41.365 254824 DEBUG oslo_concurrency.lockutils [req-2e47052c-98c0-4483-8c48-8137237a8bcc req-72005371-0ee1-4553-89f9-8481d0b35e9b d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Lock "9f4c3de7-de9e-45d5-b170-3469a0bd0959-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:05:41 compute-0 nova_compute[254819]: 2025-12-06 10:05:41.365 254824 DEBUG nova.compute.manager [req-2e47052c-98c0-4483-8c48-8137237a8bcc req-72005371-0ee1-4553-89f9-8481d0b35e9b d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 9f4c3de7-de9e-45d5-b170-3469a0bd0959] Processing event network-vif-plugged-d4daf2d1-1774-4e84-b69b-60ba95ce1518 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 06 10:05:41 compute-0 nova_compute[254819]: 2025-12-06 10:05:41.405 254824 DEBUG nova.virt.driver [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] Emitting event <LifecycleEvent: 1765015541.4049911, 9f4c3de7-de9e-45d5-b170-3469a0bd0959 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 10:05:41 compute-0 nova_compute[254819]: 2025-12-06 10:05:41.405 254824 INFO nova.compute.manager [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] [instance: 9f4c3de7-de9e-45d5-b170-3469a0bd0959] VM Started (Lifecycle Event)
Dec 06 10:05:41 compute-0 nova_compute[254819]: 2025-12-06 10:05:41.407 254824 DEBUG nova.compute.manager [None req-077a8872-63a0-4c44-b143-5dd05fa6825f 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 9f4c3de7-de9e-45d5-b170-3469a0bd0959] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 06 10:05:41 compute-0 nova_compute[254819]: 2025-12-06 10:05:41.428 254824 DEBUG nova.virt.libvirt.driver [None req-077a8872-63a0-4c44-b143-5dd05fa6825f 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 9f4c3de7-de9e-45d5-b170-3469a0bd0959] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 06 10:05:41 compute-0 nova_compute[254819]: 2025-12-06 10:05:41.432 254824 INFO nova.virt.libvirt.driver [-] [instance: 9f4c3de7-de9e-45d5-b170-3469a0bd0959] Instance spawned successfully.
Dec 06 10:05:41 compute-0 nova_compute[254819]: 2025-12-06 10:05:41.432 254824 DEBUG nova.virt.libvirt.driver [None req-077a8872-63a0-4c44-b143-5dd05fa6825f 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 9f4c3de7-de9e-45d5-b170-3469a0bd0959] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 06 10:05:41 compute-0 nova_compute[254819]: 2025-12-06 10:05:41.451 254824 DEBUG nova.compute.manager [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] [instance: 9f4c3de7-de9e-45d5-b170-3469a0bd0959] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 10:05:41 compute-0 nova_compute[254819]: 2025-12-06 10:05:41.457 254824 DEBUG nova.compute.manager [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] [instance: 9f4c3de7-de9e-45d5-b170-3469a0bd0959] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 10:05:41 compute-0 nova_compute[254819]: 2025-12-06 10:05:41.461 254824 DEBUG nova.virt.libvirt.driver [None req-077a8872-63a0-4c44-b143-5dd05fa6825f 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 9f4c3de7-de9e-45d5-b170-3469a0bd0959] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 10:05:41 compute-0 nova_compute[254819]: 2025-12-06 10:05:41.462 254824 DEBUG nova.virt.libvirt.driver [None req-077a8872-63a0-4c44-b143-5dd05fa6825f 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 9f4c3de7-de9e-45d5-b170-3469a0bd0959] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 10:05:41 compute-0 nova_compute[254819]: 2025-12-06 10:05:41.462 254824 DEBUG nova.virt.libvirt.driver [None req-077a8872-63a0-4c44-b143-5dd05fa6825f 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 9f4c3de7-de9e-45d5-b170-3469a0bd0959] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 10:05:41 compute-0 nova_compute[254819]: 2025-12-06 10:05:41.462 254824 DEBUG nova.virt.libvirt.driver [None req-077a8872-63a0-4c44-b143-5dd05fa6825f 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 9f4c3de7-de9e-45d5-b170-3469a0bd0959] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 10:05:41 compute-0 nova_compute[254819]: 2025-12-06 10:05:41.463 254824 DEBUG nova.virt.libvirt.driver [None req-077a8872-63a0-4c44-b143-5dd05fa6825f 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 9f4c3de7-de9e-45d5-b170-3469a0bd0959] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 10:05:41 compute-0 nova_compute[254819]: 2025-12-06 10:05:41.463 254824 DEBUG nova.virt.libvirt.driver [None req-077a8872-63a0-4c44-b143-5dd05fa6825f 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 9f4c3de7-de9e-45d5-b170-3469a0bd0959] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 10:05:41 compute-0 nova_compute[254819]: 2025-12-06 10:05:41.490 254824 INFO nova.compute.manager [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] [instance: 9f4c3de7-de9e-45d5-b170-3469a0bd0959] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 06 10:05:41 compute-0 nova_compute[254819]: 2025-12-06 10:05:41.491 254824 DEBUG nova.virt.driver [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] Emitting event <LifecycleEvent: 1765015541.4073138, 9f4c3de7-de9e-45d5-b170-3469a0bd0959 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 10:05:41 compute-0 nova_compute[254819]: 2025-12-06 10:05:41.491 254824 INFO nova.compute.manager [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] [instance: 9f4c3de7-de9e-45d5-b170-3469a0bd0959] VM Paused (Lifecycle Event)
Dec 06 10:05:41 compute-0 nova_compute[254819]: 2025-12-06 10:05:41.516 254824 DEBUG nova.compute.manager [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] [instance: 9f4c3de7-de9e-45d5-b170-3469a0bd0959] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 10:05:41 compute-0 nova_compute[254819]: 2025-12-06 10:05:41.521 254824 DEBUG nova.virt.driver [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] Emitting event <LifecycleEvent: 1765015541.409797, 9f4c3de7-de9e-45d5-b170-3469a0bd0959 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 10:05:41 compute-0 nova_compute[254819]: 2025-12-06 10:05:41.521 254824 INFO nova.compute.manager [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] [instance: 9f4c3de7-de9e-45d5-b170-3469a0bd0959] VM Resumed (Lifecycle Event)
Dec 06 10:05:41 compute-0 nova_compute[254819]: 2025-12-06 10:05:41.543 254824 INFO nova.compute.manager [None req-077a8872-63a0-4c44-b143-5dd05fa6825f 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 9f4c3de7-de9e-45d5-b170-3469a0bd0959] Took 8.31 seconds to spawn the instance on the hypervisor.
Dec 06 10:05:41 compute-0 nova_compute[254819]: 2025-12-06 10:05:41.543 254824 DEBUG nova.compute.manager [None req-077a8872-63a0-4c44-b143-5dd05fa6825f 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 9f4c3de7-de9e-45d5-b170-3469a0bd0959] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 10:05:41 compute-0 nova_compute[254819]: 2025-12-06 10:05:41.544 254824 DEBUG nova.compute.manager [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] [instance: 9f4c3de7-de9e-45d5-b170-3469a0bd0959] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 10:05:41 compute-0 nova_compute[254819]: 2025-12-06 10:05:41.550 254824 DEBUG nova.compute.manager [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] [instance: 9f4c3de7-de9e-45d5-b170-3469a0bd0959] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 10:05:41 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:05:41.567 162267 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Dec 06 10:05:41 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:05:41.568 162267 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmppprpmqyr/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Dec 06 10:05:41 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:05:41.397 260126 INFO oslo.privsep.daemon [-] privsep daemon starting
Dec 06 10:05:41 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:05:41.402 260126 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Dec 06 10:05:41 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:05:41.406 260126 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/none
Dec 06 10:05:41 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:05:41.406 260126 INFO oslo.privsep.daemon [-] privsep daemon running as pid 260126
Dec 06 10:05:41 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:05:41.571 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[f1ca09dc-06a6-4b3d-9297-acd6d37daca0]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:05:41 compute-0 nova_compute[254819]: 2025-12-06 10:05:41.585 254824 INFO nova.compute.manager [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] [instance: 9f4c3de7-de9e-45d5-b170-3469a0bd0959] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 06 10:05:41 compute-0 nova_compute[254819]: 2025-12-06 10:05:41.605 254824 INFO nova.compute.manager [None req-077a8872-63a0-4c44-b143-5dd05fa6825f 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 9f4c3de7-de9e-45d5-b170-3469a0bd0959] Took 9.29 seconds to build instance.
Dec 06 10:05:41 compute-0 nova_compute[254819]: 2025-12-06 10:05:41.621 254824 DEBUG oslo_concurrency.lockutils [None req-077a8872-63a0-4c44-b143-5dd05fa6825f 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lock "9f4c3de7-de9e-45d5-b170-3469a0bd0959" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.403s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:05:41 compute-0 nova_compute[254819]: 2025-12-06 10:05:41.742 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:05:41 compute-0 ceph-mon[74327]: pgmap v732: 337 pgs: 337 active+clean; 88 MiB data, 220 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 2.7 MiB/s wr, 54 op/s
Dec 06 10:05:41 compute-0 ceph-mon[74327]: from='client.? 192.168.122.101:0/2320957764' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 10:05:41 compute-0 ceph-mon[74327]: from='client.? 192.168.122.102:0/3766974847' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 10:05:42 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:05:42.166 260126 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:05:42 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:05:42.167 260126 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:05:42 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:05:42.167 260126 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:05:42 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[259476]: 06/12/2025 10:05:42 : epoch 6933ffdc : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5c0002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:05:42 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[259476]: 06/12/2025 10:05:42 : epoch 6933ffdc : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5a8002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:05:42 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v733: 337 pgs: 337 active+clean; 88 MiB data, 220 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 2.7 MiB/s wr, 51 op/s
Dec 06 10:05:42 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:05:42 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e153 do_prune osdmap full prune enabled
Dec 06 10:05:42 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 e154: 3 total, 3 up, 3 in
Dec 06 10:05:42 compute-0 ceph-mon[74327]: log_channel(cluster) log [DBG] : osdmap e154: 3 total, 3 up, 3 in
Dec 06 10:05:42 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:05:42 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:05:42 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:05:42.857 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:05:42 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[259476]: 06/12/2025 10:05:42 : epoch 6933ffdc : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5a8002b10 fd 39 proxy ignored for local
Dec 06 10:05:42 compute-0 kernel: ganesha.nfsd[259570]: segfault at 50 ip 00007fb67f92232e sp 00007fb6337fd210 error 4 in libntirpc.so.5.8[7fb67f907000+2c000] likely on CPU 3 (core 0, socket 3)
Dec 06 10:05:42 compute-0 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Dec 06 10:05:42 compute-0 ceph-mon[74327]: from='client.? 192.168.122.102:0/3852513152' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 10:05:42 compute-0 ceph-mon[74327]: osdmap e154: 3 total, 3 up, 3 in
Dec 06 10:05:42 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:05:42.888 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[34f28924-f297-46ff-8459-15fb59753abf]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:05:42 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:05:42.889 162267 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap971faad6-f1 in ovnmeta-971faad6-f548-4a54-bc9c-3aa3cca72c6f namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 06 10:05:42 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:05:42.892 260126 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap971faad6-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 06 10:05:42 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:05:42.892 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[5a538acf-ab2b-4eb9-9818-a57661d4625e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:05:42 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:05:42.896 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[723eafe8-c11a-4257-9dde-6171b876a920]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:05:42 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:05:42 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:05:42 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:05:42.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:05:42 compute-0 systemd[1]: Started Process Core Dump (PID 260134/UID 0).
Dec 06 10:05:42 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:05:42.928 162385 DEBUG oslo.privsep.daemon [-] privsep: reply[f4aa2210-61e5-4e7e-bbe0-48d7814b60f7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:05:42 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:05:42.962 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[cad93995-b1d0-4f03-9100-1badd7fdfe3f]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:05:42 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:05:42.965 162267 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.link_cmd', '--privsep_sock_path', '/tmp/tmplxt6y2rc/privsep.sock']
Dec 06 10:05:43 compute-0 nova_compute[254819]: 2025-12-06 10:05:43.443 254824 DEBUG nova.compute.manager [req-d5da9121-5fa8-4c66-b7f7-9f60e814632e req-18e06190-aab4-4977-9685-554cccbd7f57 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 9f4c3de7-de9e-45d5-b170-3469a0bd0959] Received event network-vif-plugged-d4daf2d1-1774-4e84-b69b-60ba95ce1518 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 10:05:43 compute-0 nova_compute[254819]: 2025-12-06 10:05:43.443 254824 DEBUG oslo_concurrency.lockutils [req-d5da9121-5fa8-4c66-b7f7-9f60e814632e req-18e06190-aab4-4977-9685-554cccbd7f57 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Acquiring lock "9f4c3de7-de9e-45d5-b170-3469a0bd0959-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:05:43 compute-0 nova_compute[254819]: 2025-12-06 10:05:43.443 254824 DEBUG oslo_concurrency.lockutils [req-d5da9121-5fa8-4c66-b7f7-9f60e814632e req-18e06190-aab4-4977-9685-554cccbd7f57 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Lock "9f4c3de7-de9e-45d5-b170-3469a0bd0959-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:05:43 compute-0 nova_compute[254819]: 2025-12-06 10:05:43.443 254824 DEBUG oslo_concurrency.lockutils [req-d5da9121-5fa8-4c66-b7f7-9f60e814632e req-18e06190-aab4-4977-9685-554cccbd7f57 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Lock "9f4c3de7-de9e-45d5-b170-3469a0bd0959-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:05:43 compute-0 nova_compute[254819]: 2025-12-06 10:05:43.444 254824 DEBUG nova.compute.manager [req-d5da9121-5fa8-4c66-b7f7-9f60e814632e req-18e06190-aab4-4977-9685-554cccbd7f57 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 9f4c3de7-de9e-45d5-b170-3469a0bd0959] No waiting events found dispatching network-vif-plugged-d4daf2d1-1774-4e84-b69b-60ba95ce1518 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 10:05:43 compute-0 nova_compute[254819]: 2025-12-06 10:05:43.444 254824 WARNING nova.compute.manager [req-d5da9121-5fa8-4c66-b7f7-9f60e814632e req-18e06190-aab4-4977-9685-554cccbd7f57 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 9f4c3de7-de9e-45d5-b170-3469a0bd0959] Received unexpected event network-vif-plugged-d4daf2d1-1774-4e84-b69b-60ba95ce1518 for instance with vm_state active and task_state None.
Dec 06 10:05:43 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:05:43.697 162267 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Dec 06 10:05:43 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:05:43.698 162267 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmplxt6y2rc/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Dec 06 10:05:43 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:05:43.535 260145 INFO oslo.privsep.daemon [-] privsep daemon starting
Dec 06 10:05:43 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:05:43.540 260145 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Dec 06 10:05:43 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:05:43.543 260145 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_NET_ADMIN|CAP_SYS_ADMIN/none
Dec 06 10:05:43 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:05:43.544 260145 INFO oslo.privsep.daemon [-] privsep daemon running as pid 260145
Dec 06 10:05:43 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:05:43.701 260145 DEBUG oslo.privsep.daemon [-] privsep: reply[627293ea-b333-4c4d-ae91-a70579c39528]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:05:44 compute-0 nova_compute[254819]: 2025-12-06 10:05:44.230 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:05:44 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:05:44.242 260145 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:05:44 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:05:44.242 260145 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:05:44 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:05:44.242 260145 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:05:44 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v735: 337 pgs: 337 active+clean; 88 MiB data, 220 MiB used, 60 GiB / 60 GiB avail; 5.1 MiB/s rd, 2.7 MiB/s wr, 151 op/s
Dec 06 10:05:44 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:05:44 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:05:44 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:05:44.859 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:05:44 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:05:44 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:05:44 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:05:44.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:05:44 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:05:44.935 260145 DEBUG oslo.privsep.daemon [-] privsep: reply[e7e3c80a-2006-449b-86f2-b352e1168717]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:05:45 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:05:45.107 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[c9a9c466-123b-42d1-8b4c-094a8b804267]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:05:45 compute-0 NetworkManager[48882]: <info>  [1765015545.1087] manager: (tap971faad6-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/25)
Dec 06 10:05:45 compute-0 systemd-udevd[260157]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 10:05:45 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:05:45.159 260145 DEBUG oslo.privsep.daemon [-] privsep: reply[cf008c7e-ad7b-41dc-99a5-ed0f5c8a0b3e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:05:45 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:05:45.164 260145 DEBUG oslo.privsep.daemon [-] privsep: reply[0ca46fab-577e-49cb-bcaa-387455a89511]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:05:45 compute-0 NetworkManager[48882]: <info>  [1765015545.2063] device (tap971faad6-f0): carrier: link connected
Dec 06 10:05:45 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:05:45.217 260145 DEBUG oslo.privsep.daemon [-] privsep: reply[3cf903e1-719a-4fae-9efb-f9686f4cb7ee]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:05:45 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:05:45.239 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[234e443e-8485-4373-a539-2717a19bdf81]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap971faad6-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:36:87:10'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 13], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 391569, 'reachable_time': 24502, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 260175, 'error': None, 'target': 'ovnmeta-971faad6-f548-4a54-bc9c-3aa3cca72c6f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:05:45 compute-0 NetworkManager[48882]: <info>  [1765015545.2457] manager: (patch-br-int-to-provnet-c81e973e-7ff9-4cd2-9994-daf87649321f): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/26)
Dec 06 10:05:45 compute-0 NetworkManager[48882]: <info>  [1765015545.2461] device (patch-br-int-to-provnet-c81e973e-7ff9-4cd2-9994-daf87649321f)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 06 10:05:45 compute-0 nova_compute[254819]: 2025-12-06 10:05:45.244 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:05:45 compute-0 NetworkManager[48882]: <info>  [1765015545.2472] manager: (patch-provnet-c81e973e-7ff9-4cd2-9994-daf87649321f-to-br-int): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/27)
Dec 06 10:05:45 compute-0 NetworkManager[48882]: <info>  [1765015545.2475] device (patch-provnet-c81e973e-7ff9-4cd2-9994-daf87649321f-to-br-int)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 06 10:05:45 compute-0 NetworkManager[48882]: <info>  [1765015545.2483] manager: (patch-br-int-to-provnet-c81e973e-7ff9-4cd2-9994-daf87649321f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/28)
Dec 06 10:05:45 compute-0 NetworkManager[48882]: <info>  [1765015545.2489] manager: (patch-provnet-c81e973e-7ff9-4cd2-9994-daf87649321f-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/29)
Dec 06 10:05:45 compute-0 NetworkManager[48882]: <info>  [1765015545.2494] device (patch-br-int-to-provnet-c81e973e-7ff9-4cd2-9994-daf87649321f)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Dec 06 10:05:45 compute-0 NetworkManager[48882]: <info>  [1765015545.2499] device (patch-provnet-c81e973e-7ff9-4cd2-9994-daf87649321f-to-br-int)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Dec 06 10:05:45 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:05:45.262 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[790452b7-a657-4fbf-84dc-77fbbc046aed]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe36:8710'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 391569, 'tstamp': 391569}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 260177, 'error': None, 'target': 'ovnmeta-971faad6-f548-4a54-bc9c-3aa3cca72c6f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:05:45 compute-0 nova_compute[254819]: 2025-12-06 10:05:45.265 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:05:45 compute-0 nova_compute[254819]: 2025-12-06 10:05:45.269 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:05:45 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:05:45.285 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[f37a421c-e222-4500-92ab-79ea49957054]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap971faad6-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:36:87:10'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 13], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 391569, 'reachable_time': 24502, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 260179, 'error': None, 'target': 'ovnmeta-971faad6-f548-4a54-bc9c-3aa3cca72c6f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:05:45 compute-0 nova_compute[254819]: 2025-12-06 10:05:45.307 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:05:45 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:05:45.328 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[ca6d206e-0745-43ee-a67c-b280e3d44c00]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:05:45 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:05:45.404 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[91b5a33b-4edc-49c8-98c9-86b19407af9c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:05:45 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:05:45.406 162267 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap971faad6-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 10:05:45 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:05:45.407 162267 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 10:05:45 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:05:45.408 162267 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap971faad6-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 10:05:45 compute-0 nova_compute[254819]: 2025-12-06 10:05:45.409 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:05:45 compute-0 kernel: tap971faad6-f0: entered promiscuous mode
Dec 06 10:05:45 compute-0 NetworkManager[48882]: <info>  [1765015545.4105] manager: (tap971faad6-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/30)
Dec 06 10:05:45 compute-0 ceph-mon[74327]: pgmap v733: 337 pgs: 337 active+clean; 88 MiB data, 220 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 2.7 MiB/s wr, 51 op/s
Dec 06 10:05:45 compute-0 nova_compute[254819]: 2025-12-06 10:05:45.827 254824 DEBUG nova.compute.manager [req-12f9ebcf-26e3-4b6e-9648-4030d5783a5a req-f1f46f9e-23ca-4f30-a26a-4a88233f03bc d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 9f4c3de7-de9e-45d5-b170-3469a0bd0959] Received event network-changed-d4daf2d1-1774-4e84-b69b-60ba95ce1518 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 10:05:45 compute-0 nova_compute[254819]: 2025-12-06 10:05:45.828 254824 DEBUG nova.compute.manager [req-12f9ebcf-26e3-4b6e-9648-4030d5783a5a req-f1f46f9e-23ca-4f30-a26a-4a88233f03bc d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 9f4c3de7-de9e-45d5-b170-3469a0bd0959] Refreshing instance network info cache due to event network-changed-d4daf2d1-1774-4e84-b69b-60ba95ce1518. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 10:05:45 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:05:45.828 162267 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap971faad6-f0, col_values=(('external_ids', {'iface-id': '5fb89a54-8c63-4d33-bca3-d7130382f3f8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 10:05:45 compute-0 nova_compute[254819]: 2025-12-06 10:05:45.829 254824 DEBUG oslo_concurrency.lockutils [req-12f9ebcf-26e3-4b6e-9648-4030d5783a5a req-f1f46f9e-23ca-4f30-a26a-4a88233f03bc d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Acquiring lock "refresh_cache-9f4c3de7-de9e-45d5-b170-3469a0bd0959" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 10:05:45 compute-0 nova_compute[254819]: 2025-12-06 10:05:45.829 254824 DEBUG oslo_concurrency.lockutils [req-12f9ebcf-26e3-4b6e-9648-4030d5783a5a req-f1f46f9e-23ca-4f30-a26a-4a88233f03bc d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Acquired lock "refresh_cache-9f4c3de7-de9e-45d5-b170-3469a0bd0959" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 10:05:45 compute-0 nova_compute[254819]: 2025-12-06 10:05:45.830 254824 DEBUG nova.network.neutron [req-12f9ebcf-26e3-4b6e-9648-4030d5783a5a req-f1f46f9e-23ca-4f30-a26a-4a88233f03bc d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 9f4c3de7-de9e-45d5-b170-3469a0bd0959] Refreshing network info cache for port d4daf2d1-1774-4e84-b69b-60ba95ce1518 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 06 10:05:45 compute-0 ovn_controller[152417]: 2025-12-06T10:05:45Z|00031|binding|INFO|Releasing lport 5fb89a54-8c63-4d33-bca3-d7130382f3f8 from this chassis (sb_readonly=0)
Dec 06 10:05:45 compute-0 nova_compute[254819]: 2025-12-06 10:05:45.831 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:05:45 compute-0 nova_compute[254819]: 2025-12-06 10:05:45.858 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:05:45 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:05:45.860 162267 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/971faad6-f548-4a54-bc9c-3aa3cca72c6f.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/971faad6-f548-4a54-bc9c-3aa3cca72c6f.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 06 10:05:45 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:05:45.861 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[202f5845-401e-413f-85ba-2f5e3fc0e1df]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:05:45 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:05:45.863 162267 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 06 10:05:45 compute-0 ovn_metadata_agent[162262]: global
Dec 06 10:05:45 compute-0 ovn_metadata_agent[162262]:     log         /dev/log local0 debug
Dec 06 10:05:45 compute-0 ovn_metadata_agent[162262]:     log-tag     haproxy-metadata-proxy-971faad6-f548-4a54-bc9c-3aa3cca72c6f
Dec 06 10:05:45 compute-0 ovn_metadata_agent[162262]:     user        root
Dec 06 10:05:45 compute-0 ovn_metadata_agent[162262]:     group       root
Dec 06 10:05:45 compute-0 ovn_metadata_agent[162262]:     maxconn     1024
Dec 06 10:05:45 compute-0 ovn_metadata_agent[162262]:     pidfile     /var/lib/neutron/external/pids/971faad6-f548-4a54-bc9c-3aa3cca72c6f.pid.haproxy
Dec 06 10:05:45 compute-0 ovn_metadata_agent[162262]:     daemon
Dec 06 10:05:45 compute-0 ovn_metadata_agent[162262]: 
Dec 06 10:05:45 compute-0 ovn_metadata_agent[162262]: defaults
Dec 06 10:05:45 compute-0 ovn_metadata_agent[162262]:     log global
Dec 06 10:05:45 compute-0 ovn_metadata_agent[162262]:     mode http
Dec 06 10:05:45 compute-0 ovn_metadata_agent[162262]:     option httplog
Dec 06 10:05:45 compute-0 ovn_metadata_agent[162262]:     option dontlognull
Dec 06 10:05:45 compute-0 ovn_metadata_agent[162262]:     option http-server-close
Dec 06 10:05:45 compute-0 ovn_metadata_agent[162262]:     option forwardfor
Dec 06 10:05:45 compute-0 ovn_metadata_agent[162262]:     retries                 3
Dec 06 10:05:45 compute-0 ovn_metadata_agent[162262]:     timeout http-request    30s
Dec 06 10:05:45 compute-0 ovn_metadata_agent[162262]:     timeout connect         30s
Dec 06 10:05:45 compute-0 ovn_metadata_agent[162262]:     timeout client          32s
Dec 06 10:05:45 compute-0 ovn_metadata_agent[162262]:     timeout server          32s
Dec 06 10:05:45 compute-0 ovn_metadata_agent[162262]:     timeout http-keep-alive 30s
Dec 06 10:05:45 compute-0 ovn_metadata_agent[162262]: 
Dec 06 10:05:45 compute-0 ovn_metadata_agent[162262]: 
Dec 06 10:05:45 compute-0 ovn_metadata_agent[162262]: listen listener
Dec 06 10:05:45 compute-0 ovn_metadata_agent[162262]:     bind 169.254.169.254:80
Dec 06 10:05:45 compute-0 ovn_metadata_agent[162262]:     server metadata /var/lib/neutron/metadata_proxy
Dec 06 10:05:45 compute-0 ovn_metadata_agent[162262]:     http-request add-header X-OVN-Network-ID 971faad6-f548-4a54-bc9c-3aa3cca72c6f
Dec 06 10:05:45 compute-0 ovn_metadata_agent[162262]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 06 10:05:45 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:05:45.864 162267 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-971faad6-f548-4a54-bc9c-3aa3cca72c6f', 'env', 'PROCESS_TAG=haproxy-971faad6-f548-4a54-bc9c-3aa3cca72c6f', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/971faad6-f548-4a54-bc9c-3aa3cca72c6f.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
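The haproxy configuration rendered above binds the metadata VIP 169.254.169.254:80 inside the ovnmeta- namespace, forwards requests to the agent's UNIX socket at /var/lib/neutron/metadata_proxy, and tags each request with the network ID via the X-OVN-Network-ID header. A minimal sketch of how one might confirm the proxy is answering, assuming it is run from a guest on that network (the metadata path is the standard Nova one, not something taken from this log):

    # Hypothetical check from inside a guest attached to network 971faad6-...:
    # fetch instance metadata through the 169.254.169.254 listener shown above.
    import json
    import urllib.request

    URL = "http://169.254.169.254/openstack/latest/meta_data.json"

    with urllib.request.urlopen(URL, timeout=10) as resp:
        meta = json.load(resp)

    # The returned UUID should match the instance being wired up in this log.
    print(meta["uuid"])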
Dec 06 10:05:46 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v736: 337 pgs: 337 active+clean; 88 MiB data, 220 MiB used, 60 GiB / 60 GiB avail; 4.2 MiB/s rd, 2.2 MiB/s wr, 124 op/s
Dec 06 10:05:46 compute-0 podman[260212]: 2025-12-06 10:05:46.352322641 +0000 UTC m=+0.030821172 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3
Dec 06 10:05:46 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:05:46 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:05:46 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:05:46.862 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:05:46 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:05:46 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:05:46 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:05:46.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:05:46 compute-0 systemd-coredump[260137]: Process 259480 (ganesha.nfsd) of user 0 dumped core.
                                                    
                                                    Stack trace of thread 54:
                                                    #0  0x00007fb67f92232e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)
                                                    ELF object binary architecture: AMD x86-64
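The entry above records a crash in ganesha.nfsd (PID 259480); the matching service failure appears a few lines below with status=139, i.e. 128 + 11, a SIGSEGV. A short sketch for pulling the stored trace back out of systemd-coredump, assuming the dump was retained and gdb/debuginfo are available on the host:

    import subprocess

    PID = "259480"  # the dumped ganesha.nfsd PID from the journal entry above

    # "coredumpctl info <PID>" prints the stored metadata plus the stack trace.
    print(subprocess.run(["coredumpctl", "info", PID],
                         capture_output=True, text=True, check=True).stdout)

    # Extract the core file itself for offline analysis with gdb.
    subprocess.run(["coredumpctl", "dump", PID, "-o", "/tmp/ganesha.core"],
                   check=True)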
Dec 06 10:05:46 compute-0 ceph-mon[74327]: pgmap v735: 337 pgs: 337 active+clean; 88 MiB data, 220 MiB used, 60 GiB / 60 GiB avail; 5.1 MiB/s rd, 2.7 MiB/s wr, 151 op/s
Dec 06 10:05:46 compute-0 podman[260212]: 2025-12-06 10:05:46.964339795 +0000 UTC m=+0.642838526 container create 21554fb920b8cd6e77291647b87089df9cd158749cc638bf38ae1f864899c4e4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=neutron-haproxy-ovnmeta-971faad6-f548-4a54-bc9c-3aa3cca72c6f, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Dec 06 10:05:47 compute-0 systemd[1]: Started libpod-conmon-21554fb920b8cd6e77291647b87089df9cd158749cc638bf38ae1f864899c4e4.scope.
Dec 06 10:05:47 compute-0 systemd[1]: Started libcrun container.
Dec 06 10:05:47 compute-0 systemd[1]: systemd-coredump@7-260134-0.service: Deactivated successfully.
Dec 06 10:05:47 compute-0 systemd[1]: systemd-coredump@7-260134-0.service: Consumed 1.230s CPU time.
Dec 06 10:05:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d46aa1eb56c671473c7b08a45b3cc7be7a0d7e60ad9f8373b5056483f751a6f5/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 06 10:05:47 compute-0 podman[260212]: 2025-12-06 10:05:47.066434069 +0000 UTC m=+0.744932570 container init 21554fb920b8cd6e77291647b87089df9cd158749cc638bf38ae1f864899c4e4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=neutron-haproxy-ovnmeta-971faad6-f548-4a54-bc9c-3aa3cca72c6f, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team)
Dec 06 10:05:47 compute-0 podman[260212]: 2025-12-06 10:05:47.071844185 +0000 UTC m=+0.750342686 container start 21554fb920b8cd6e77291647b87089df9cd158749cc638bf38ae1f864899c4e4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=neutron-haproxy-ovnmeta-971faad6-f548-4a54-bc9c-3aa3cca72c6f, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 10:05:47 compute-0 neutron-haproxy-ovnmeta-971faad6-f548-4a54-bc9c-3aa3cca72c6f[260226]: [NOTICE]   (260240) : New worker (260247) forked
Dec 06 10:05:47 compute-0 neutron-haproxy-ovnmeta-971faad6-f548-4a54-bc9c-3aa3cca72c6f[260226]: [NOTICE]   (260240) : Loading success.
Dec 06 10:05:47 compute-0 podman[260232]: 2025-12-06 10:05:47.110266211 +0000 UTC m=+0.040437251 container died cb12feac15a0669dd612ec520b2008fd4691d61a8859fee5c73829837afae350 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325)
Dec 06 10:05:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-33760f1e5dff0f58c4ebac2793030140ffc34f481b06aa408ec990465208878b-merged.mount: Deactivated successfully.
Dec 06 10:05:47 compute-0 podman[260232]: 2025-12-06 10:05:47.152608372 +0000 UTC m=+0.082779382 container remove cb12feac15a0669dd612ec520b2008fd4691d61a8859fee5c73829837afae350 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1)
Dec 06 10:05:47 compute-0 systemd[1]: ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258@nfs.cephfs.2.0.compute-0.dfwxck.service: Main process exited, code=exited, status=139/n/a
Dec 06 10:05:47 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:05:47.255Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec 06 10:05:47 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:05:47.255Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 06 10:05:47 compute-0 systemd[1]: ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258@nfs.cephfs.2.0.compute-0.dfwxck.service: Failed with result 'exit-code'.
Dec 06 10:05:47 compute-0 systemd[1]: ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258@nfs.cephfs.2.0.compute-0.dfwxck.service: Consumed 1.579s CPU time.
Dec 06 10:05:47 compute-0 nova_compute[254819]: 2025-12-06 10:05:47.393 254824 DEBUG nova.network.neutron [req-12f9ebcf-26e3-4b6e-9648-4030d5783a5a req-f1f46f9e-23ca-4f30-a26a-4a88233f03bc d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 9f4c3de7-de9e-45d5-b170-3469a0bd0959] Updated VIF entry in instance network info cache for port d4daf2d1-1774-4e84-b69b-60ba95ce1518. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 06 10:05:47 compute-0 nova_compute[254819]: 2025-12-06 10:05:47.394 254824 DEBUG nova.network.neutron [req-12f9ebcf-26e3-4b6e-9648-4030d5783a5a req-f1f46f9e-23ca-4f30-a26a-4a88233f03bc d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 9f4c3de7-de9e-45d5-b170-3469a0bd0959] Updating instance_info_cache with network_info: [{"id": "d4daf2d1-1774-4e84-b69b-60ba95ce1518", "address": "fa:16:3e:a5:32:83", "network": {"id": "971faad6-f548-4a54-bc9c-3aa3cca72c6f", "bridge": "br-int", "label": "tempest-network-smoke--878146770", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.212", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd4daf2d1-17", "ovs_interfaceid": "d4daf2d1-1774-4e84-b69b-60ba95ce1518", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 10:05:47 compute-0 nova_compute[254819]: 2025-12-06 10:05:47.440 254824 DEBUG oslo_concurrency.lockutils [req-12f9ebcf-26e3-4b6e-9648-4030d5783a5a req-f1f46f9e-23ca-4f30-a26a-4a88233f03bc d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Releasing lock "refresh_cache-9f4c3de7-de9e-45d5-b170-3469a0bd0959" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
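The instance_info_cache update above carries the full network_info structure as JSON: one OVN-bound VIF with a fixed address (10.100.0.14) and an associated floating IP (192.168.122.212). A small sketch that walks that structure, assuming network_info has already been loaded from the logged JSON list:

    def fixed_and_floating(network_info):
        """Yield (fixed_ip, floating_ip) pairs from Nova's network_info cache."""
        for vif in network_info:
            for subnet in vif["network"]["subnets"]:
                for ip in subnet["ips"]:
                    for fip in ip.get("floating_ips", []):
                        yield ip["address"], fip["address"]

    # Applied to the list logged above, this yields ("10.100.0.14", "192.168.122.212").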
Dec 06 10:05:47 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:05:47 compute-0 ceph-mon[74327]: pgmap v736: 337 pgs: 337 active+clean; 88 MiB data, 220 MiB used, 60 GiB / 60 GiB avail; 4.2 MiB/s rd, 2.2 MiB/s wr, 124 op/s
Dec 06 10:05:47 compute-0 ceph-mon[74327]: from='client.? 192.168.122.10:0/850046515' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 10:05:47 compute-0 ceph-mon[74327]: from='client.? 192.168.122.10:0/850046515' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 10:05:48 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v737: 337 pgs: 337 active+clean; 88 MiB data, 220 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 15 KiB/s wr, 88 op/s
Dec 06 10:05:48 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:05:48 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:05:48 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:05:48.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:05:48 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:05:48 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:05:48 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:05:48.917 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:05:48 compute-0 ceph-mon[74327]: pgmap v737: 337 pgs: 337 active+clean; 88 MiB data, 220 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 15 KiB/s wr, 88 op/s
Dec 06 10:05:49 compute-0 nova_compute[254819]: 2025-12-06 10:05:49.233 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:05:50 compute-0 sudo[260298]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 10:05:50 compute-0 sudo[260298]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:05:50 compute-0 sudo[260298]: pam_unix(sudo:session): session closed for user root
Dec 06 10:05:50 compute-0 sudo[260323]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ls
Dec 06 10:05:50 compute-0 sudo[260323]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:05:50 compute-0 nova_compute[254819]: 2025-12-06 10:05:50.308 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:05:50 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v738: 337 pgs: 337 active+clean; 88 MiB data, 220 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 15 KiB/s wr, 88 op/s
Dec 06 10:05:50 compute-0 podman[260418]: 2025-12-06 10:05:50.78830304 +0000 UTC m=+0.072417114 container exec 484d6ed1039c50317cf4b6067525b7ed0f8de7c568c9445500e62194ab25d04d (image=quay.io/ceph/ceph:v19, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 06 10:05:50 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:05:50 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:05:50 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:05:50.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:05:50 compute-0 podman[260418]: 2025-12-06 10:05:50.896927899 +0000 UTC m=+0.181041953 container exec_died 484d6ed1039c50317cf4b6067525b7ed0f8de7c568c9445500e62194ab25d04d (image=quay.io/ceph/ceph:v19, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 10:05:50 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:05:50] "GET /metrics HTTP/1.1" 200 48379 "" "Prometheus/2.51.0"
Dec 06 10:05:50 compute-0 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:05:50] "GET /metrics HTTP/1.1" 200 48379 "" "Prometheus/2.51.0"
Dec 06 10:05:50 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:05:50 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:05:50 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:05:50.920 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:05:51 compute-0 sudo[260511]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 10:05:51 compute-0 sudo[260511]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:05:51 compute-0 sudo[260511]: pam_unix(sudo:session): session closed for user root
Dec 06 10:05:51 compute-0 podman[260562]: 2025-12-06 10:05:51.505711757 +0000 UTC m=+0.058142259 container exec 43e1f8986e07f4e6b99d6750812eff4d21013fd9f773d9f6d6eef82549df3333 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 06 10:05:51 compute-0 ceph-mon[74327]: pgmap v738: 337 pgs: 337 active+clean; 88 MiB data, 220 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 15 KiB/s wr, 88 op/s
Dec 06 10:05:51 compute-0 podman[260562]: 2025-12-06 10:05:51.520288651 +0000 UTC m=+0.072719183 container exec_died 43e1f8986e07f4e6b99d6750812eff4d21013fd9f773d9f6d6eef82549df3333 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 06 10:05:52 compute-0 podman[260701]: 2025-12-06 10:05:52.257234884 +0000 UTC m=+0.086432632 container exec 0300cb0bc272de309f3d242ba0627369d0948f1b63b3476dccdba4375a8e539d (image=quay.io/ceph/haproxy:2.3, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-haproxy-nfs-cephfs-compute-0-fzuvue)
Dec 06 10:05:52 compute-0 podman[260701]: 2025-12-06 10:05:52.292905376 +0000 UTC m=+0.122103024 container exec_died 0300cb0bc272de309f3d242ba0627369d0948f1b63b3476dccdba4375a8e539d (image=quay.io/ceph/haproxy:2.3, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-haproxy-nfs-cephfs-compute-0-fzuvue)
Dec 06 10:05:52 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v739: 337 pgs: 337 active+clean; 88 MiB data, 220 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 15 KiB/s wr, 88 op/s
Dec 06 10:05:52 compute-0 podman[260767]: 2025-12-06 10:05:52.555137778 +0000 UTC m=+0.056847794 container exec d7d5239f75d84aa9a07cad1cdfa31e3b4f3983263aaaa27687e6c7454ab8fe3f (image=quay.io/ceph/keepalived:2.2.4, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-keepalived-nfs-cephfs-compute-0-ylrrzf, build-date=2023-02-22T09:23:20, io.openshift.tags=Ceph keepalived, name=keepalived, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-type=git, distribution-scope=public, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.28.2, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, version=2.2.4, com.redhat.component=keepalived-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, release=1793, io.k8s.display-name=Keepalived on RHEL 9, vendor=Red Hat, Inc., summary=Provides keepalived on RHEL 9 for Ceph., description=keepalived for Ceph, io.openshift.expose-services=)
Dec 06 10:05:52 compute-0 podman[260767]: 2025-12-06 10:05:52.566877934 +0000 UTC m=+0.068587950 container exec_died d7d5239f75d84aa9a07cad1cdfa31e3b4f3983263aaaa27687e6c7454ab8fe3f (image=quay.io/ceph/keepalived:2.2.4, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-keepalived-nfs-cephfs-compute-0-ylrrzf, vendor=Red Hat, Inc., io.buildah.version=1.28.2, release=1793, description=keepalived for Ceph, io.k8s.display-name=Keepalived on RHEL 9, distribution-scope=public, version=2.2.4, com.redhat.component=keepalived-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=Ceph keepalived, vcs-type=git, io.openshift.expose-services=, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, build-date=2023-02-22T09:23:20, architecture=x86_64, name=keepalived, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, summary=Provides keepalived on RHEL 9 for Ceph.)
Dec 06 10:05:52 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:05:52 compute-0 ceph-mon[74327]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #51. Immutable memtables: 0.
Dec 06 10:05:52 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:05:52.792287) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 06 10:05:52 compute-0 ceph-mon[74327]: rocksdb: [db/flush_job.cc:856] [default] [JOB 25] Flushing memtable with next log file: 51
Dec 06 10:05:52 compute-0 ceph-mon[74327]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765015552792329, "job": 25, "event": "flush_started", "num_memtables": 1, "num_entries": 1050, "num_deletes": 256, "total_data_size": 1696934, "memory_usage": 1722496, "flush_reason": "Manual Compaction"}
Dec 06 10:05:52 compute-0 ceph-mon[74327]: rocksdb: [db/flush_job.cc:885] [default] [JOB 25] Level-0 flush table #52: started
Dec 06 10:05:52 compute-0 ceph-mon[74327]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765015552805630, "cf_name": "default", "job": 25, "event": "table_file_creation", "file_number": 52, "file_size": 1680401, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 22708, "largest_seqno": 23757, "table_properties": {"data_size": 1675158, "index_size": 2703, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1477, "raw_key_size": 11296, "raw_average_key_size": 19, "raw_value_size": 1664427, "raw_average_value_size": 2864, "num_data_blocks": 118, "num_entries": 581, "num_filter_entries": 581, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765015477, "oldest_key_time": 1765015477, "file_creation_time": 1765015552, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "423e8366-3852-4d2b-aa53-87abab31aff3", "db_session_id": "4WBX5WA2U4DRQ0QUUFCR", "orig_file_number": 52, "seqno_to_time_mapping": "N/A"}}
Dec 06 10:05:52 compute-0 ceph-mon[74327]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 25] Flush lasted 13417 microseconds, and 4424 cpu microseconds.
Dec 06 10:05:52 compute-0 ceph-mon[74327]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 10:05:52 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:05:52.805698) [db/flush_job.cc:967] [default] [JOB 25] Level-0 flush table #52: 1680401 bytes OK
Dec 06 10:05:52 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:05:52.805728) [db/memtable_list.cc:519] [default] Level-0 commit table #52 started
Dec 06 10:05:52 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:05:52.810148) [db/memtable_list.cc:722] [default] Level-0 commit table #52: memtable #1 done
Dec 06 10:05:52 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:05:52.810174) EVENT_LOG_v1 {"time_micros": 1765015552810165, "job": 25, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 06 10:05:52 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:05:52.810197) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 06 10:05:52 compute-0 ceph-mon[74327]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 25] Try to delete WAL files size 1692004, prev total WAL file size 1692004, number of live WAL files 2.
Dec 06 10:05:52 compute-0 ceph-mon[74327]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000048.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 10:05:52 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:05:52.810871) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D00323532' seq:72057594037927935, type:22 .. '6C6F676D00353034' seq:0, type:0; will stop at (end)
Dec 06 10:05:52 compute-0 ceph-mon[74327]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 26] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 06 10:05:52 compute-0 ceph-mon[74327]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 25 Base level 0, inputs: [52(1641KB)], [50(11MB)]
Dec 06 10:05:52 compute-0 ceph-mon[74327]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765015552810994, "job": 26, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [52], "files_L6": [50], "score": -1, "input_data_size": 13896549, "oldest_snapshot_seqno": -1}
Dec 06 10:05:52 compute-0 podman[260831]: 2025-12-06 10:05:52.838559291 +0000 UTC m=+0.074994744 container exec b0127b2874845862d1ff8231029cda7f8d9811cefe028a677c06060e923a3641 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 06 10:05:52 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:05:52 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:05:52 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:05:52.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:05:52 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-haproxy-nfs-cephfs-compute-0-fzuvue[96127]: [WARNING] 339/100552 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Dec 06 10:05:52 compute-0 ceph-mon[74327]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 26] Generated table #53: 5362 keys, 13714498 bytes, temperature: kUnknown
Dec 06 10:05:52 compute-0 ceph-mon[74327]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765015552908015, "cf_name": "default", "job": 26, "event": "table_file_creation", "file_number": 53, "file_size": 13714498, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13678274, "index_size": 21714, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13445, "raw_key_size": 137272, "raw_average_key_size": 25, "raw_value_size": 13580734, "raw_average_value_size": 2532, "num_data_blocks": 884, "num_entries": 5362, "num_filter_entries": 5362, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765013861, "oldest_key_time": 0, "file_creation_time": 1765015552, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "423e8366-3852-4d2b-aa53-87abab31aff3", "db_session_id": "4WBX5WA2U4DRQ0QUUFCR", "orig_file_number": 53, "seqno_to_time_mapping": "N/A"}}
Dec 06 10:05:52 compute-0 ceph-mon[74327]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 10:05:52 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:05:52.908364) [db/compaction/compaction_job.cc:1663] [default] [JOB 26] Compacted 1@0 + 1@6 files to L6 => 13714498 bytes
Dec 06 10:05:52 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:05:52.910290) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 143.1 rd, 141.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.6, 11.7 +0.0 blob) out(13.1 +0.0 blob), read-write-amplify(16.4) write-amplify(8.2) OK, records in: 5896, records dropped: 534 output_compression: NoCompression
Dec 06 10:05:52 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:05:52.910309) EVENT_LOG_v1 {"time_micros": 1765015552910300, "job": 26, "event": "compaction_finished", "compaction_time_micros": 97120, "compaction_time_cpu_micros": 41865, "output_level": 6, "num_output_files": 1, "total_output_size": 13714498, "num_input_records": 5896, "num_output_records": 5362, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 06 10:05:52 compute-0 ceph-mon[74327]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000052.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 10:05:52 compute-0 ceph-mon[74327]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765015552910740, "job": 26, "event": "table_file_deletion", "file_number": 52}
Dec 06 10:05:52 compute-0 ceph-mon[74327]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000050.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 10:05:52 compute-0 ceph-mon[74327]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765015552913072, "job": 26, "event": "table_file_deletion", "file_number": 50}
Dec 06 10:05:52 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:05:52.810740) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 10:05:52 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:05:52.913176) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 10:05:52 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:05:52.913187) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 10:05:52 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:05:52.913189) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 10:05:52 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:05:52.913193) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 10:05:52 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:05:52.913196) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
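The RocksDB flush and compaction entries above embed machine-readable records as JSON after the EVENT_LOG_v1 marker. A minimal sketch that filters those records out of journal text piped on stdin (the field names are the ones visible in the entries above):

    import json
    import sys

    MARKER = "EVENT_LOG_v1 "

    for line in sys.stdin:
        _, found, payload = line.partition(MARKER)
        if found:
            event = json.loads(payload)
            if event.get("event") in ("flush_finished", "compaction_finished"):
                print(event["time_micros"], event["event"], event.get("lsm_state"))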
Dec 06 10:05:52 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:05:52 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:05:52 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:05:52.924 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:05:52 compute-0 podman[260861]: 2025-12-06 10:05:52.952552146 +0000 UTC m=+0.072951989 container exec_died b0127b2874845862d1ff8231029cda7f8d9811cefe028a677c06060e923a3641 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 06 10:05:52 compute-0 podman[260831]: 2025-12-06 10:05:52.958638299 +0000 UTC m=+0.195073723 container exec_died b0127b2874845862d1ff8231029cda7f8d9811cefe028a677c06060e923a3641 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 06 10:05:53 compute-0 podman[260904]: 2025-12-06 10:05:53.19040831 +0000 UTC m=+0.063381000 container exec fc223e2a5fd06c66f839f6f48305e72a1403c44b345b53752763fbbf064c41b3 (image=quay.io/ceph/grafana:10.4.0, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Dec 06 10:05:53 compute-0 podman[260904]: 2025-12-06 10:05:53.401138303 +0000 UTC m=+0.274111013 container exec_died fc223e2a5fd06c66f839f6f48305e72a1403c44b345b53752763fbbf064c41b3 (image=quay.io/ceph/grafana:10.4.0, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Dec 06 10:05:53 compute-0 ceph-mon[74327]: pgmap v739: 337 pgs: 337 active+clean; 88 MiB data, 220 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 15 KiB/s wr, 88 op/s
Dec 06 10:05:53 compute-0 podman[261014]: 2025-12-06 10:05:53.842453454 +0000 UTC m=+0.054927552 container exec cfe4d69091434e5154fa760292bba767b8875965fa71cf21268b9ec1632f0d9e (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 06 10:05:53 compute-0 podman[261014]: 2025-12-06 10:05:53.886910363 +0000 UTC m=+0.099384441 container exec_died cfe4d69091434e5154fa760292bba767b8875965fa71cf21268b9ec1632f0d9e (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 06 10:05:53 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 06 10:05:53 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 10:05:53 compute-0 sudo[260323]: pam_unix(sudo:session): session closed for user root
Dec 06 10:05:53 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 06 10:05:53 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 10:05:53 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 06 10:05:53 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 10:05:53 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 10:05:53 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 10:05:54 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 10:05:54 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 10:05:54 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 10:05:54 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 10:05:54 compute-0 sudo[261056]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 10:05:54 compute-0 sudo[261056]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:05:54 compute-0 sudo[261056]: pam_unix(sudo:session): session closed for user root
Dec 06 10:05:54 compute-0 sudo[261081]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Dec 06 10:05:54 compute-0 sudo[261081]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:05:54 compute-0 ovn_controller[152417]: 2025-12-06T10:05:54Z|00004|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:a5:32:83 10.100.0.14
Dec 06 10:05:54 compute-0 ovn_controller[152417]: 2025-12-06T10:05:54Z|00005|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:a5:32:83 10.100.0.14
Dec 06 10:05:54 compute-0 nova_compute[254819]: 2025-12-06 10:05:54.236 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:05:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:05:54.238 162267 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:05:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:05:54.239 162267 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:05:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:05:54.239 162267 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:05:54 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v740: 337 pgs: 337 active+clean; 88 MiB data, 220 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 13 KiB/s wr, 77 op/s
Dec 06 10:05:54 compute-0 sudo[261081]: pam_unix(sudo:session): session closed for user root
Dec 06 10:05:54 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 06 10:05:54 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 10:05:54 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 06 10:05:54 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 10:05:54 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v741: 337 pgs: 337 active+clean; 88 MiB data, 220 MiB used, 60 GiB / 60 GiB avail; 266 KiB/s rd, 9 op/s
Dec 06 10:05:54 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 06 10:05:54 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 10:05:54 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Dec 06 10:05:54 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 10:05:54 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 06 10:05:54 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 10:05:54 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 06 10:05:54 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 10:05:54 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 06 10:05:54 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 10:05:54 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 10:05:54 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 10:05:54 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 10:05:54 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 10:05:54 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 10:05:54 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 10:05:54 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 10:05:54 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 10:05:54 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 10:05:54 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 10:05:54 compute-0 sudo[261137]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 10:05:54 compute-0 sudo[261137]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:05:54 compute-0 sudo[261137]: pam_unix(sudo:session): session closed for user root
Dec 06 10:05:54 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:05:54 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:05:54 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:05:54.872 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:05:54 compute-0 sudo[261162]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 5ecd3f74-dade-5fc4-92ce-8950ae424258 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Dec 06 10:05:54 compute-0 sudo[261162]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:05:54 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:05:54 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 10:05:54 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:05:54.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 10:05:54 compute-0 ceph-mon[74327]: log_channel(cluster) log [WRN] : Health check failed: 1 failed cephadm daemon(s) (CEPHADM_FAILED_DAEMON)
Dec 06 10:05:55 compute-0 nova_compute[254819]: 2025-12-06 10:05:55.312 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:05:55 compute-0 podman[261228]: 2025-12-06 10:05:55.45534457 +0000 UTC m=+0.048544659 container create df1887ed2846fa0f575c845213dcbb7c1c1a1dd723680c6559fcde8cf4df70bc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_liskov, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325)
Dec 06 10:05:55 compute-0 systemd[1]: Started libpod-conmon-df1887ed2846fa0f575c845213dcbb7c1c1a1dd723680c6559fcde8cf4df70bc.scope.
Dec 06 10:05:55 compute-0 podman[261228]: 2025-12-06 10:05:55.433030749 +0000 UTC m=+0.026230848 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 06 10:05:55 compute-0 systemd[1]: Started libcrun container.
Dec 06 10:05:55 compute-0 podman[261228]: 2025-12-06 10:05:55.564172175 +0000 UTC m=+0.157372264 container init df1887ed2846fa0f575c845213dcbb7c1c1a1dd723680c6559fcde8cf4df70bc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_liskov, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec 06 10:05:55 compute-0 podman[261228]: 2025-12-06 10:05:55.572284264 +0000 UTC m=+0.165484343 container start df1887ed2846fa0f575c845213dcbb7c1c1a1dd723680c6559fcde8cf4df70bc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_liskov, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 10:05:55 compute-0 podman[261228]: 2025-12-06 10:05:55.576302362 +0000 UTC m=+0.169502461 container attach df1887ed2846fa0f575c845213dcbb7c1c1a1dd723680c6559fcde8cf4df70bc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_liskov, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Dec 06 10:05:55 compute-0 stupefied_liskov[261246]: 167 167
Dec 06 10:05:55 compute-0 podman[261228]: 2025-12-06 10:05:55.58027223 +0000 UTC m=+0.173472289 container died df1887ed2846fa0f575c845213dcbb7c1c1a1dd723680c6559fcde8cf4df70bc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_liskov, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS)
Dec 06 10:05:55 compute-0 systemd[1]: libpod-df1887ed2846fa0f575c845213dcbb7c1c1a1dd723680c6559fcde8cf4df70bc.scope: Deactivated successfully.
Dec 06 10:05:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-6162db8090cf4fe73870df4b1f5ba78d5770cfcd495a1cf27e3d48d47958e226-merged.mount: Deactivated successfully.
Dec 06 10:05:55 compute-0 podman[261228]: 2025-12-06 10:05:55.649368353 +0000 UTC m=+0.242568412 container remove df1887ed2846fa0f575c845213dcbb7c1c1a1dd723680c6559fcde8cf4df70bc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_liskov, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Dec 06 10:05:55 compute-0 systemd[1]: libpod-conmon-df1887ed2846fa0f575c845213dcbb7c1c1a1dd723680c6559fcde8cf4df70bc.scope: Deactivated successfully.
Dec 06 10:05:55 compute-0 ceph-mon[74327]: pgmap v740: 337 pgs: 337 active+clean; 88 MiB data, 220 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 13 KiB/s wr, 77 op/s
Dec 06 10:05:55 compute-0 ceph-mon[74327]: pgmap v741: 337 pgs: 337 active+clean; 88 MiB data, 220 MiB used, 60 GiB / 60 GiB avail; 266 KiB/s rd, 9 op/s
Dec 06 10:05:55 compute-0 ceph-mon[74327]: Health check failed: 1 failed cephadm daemon(s) (CEPHADM_FAILED_DAEMON)
Dec 06 10:05:55 compute-0 podman[261270]: 2025-12-06 10:05:55.834578448 +0000 UTC m=+0.057881322 container create 2a6b206042f4cf78acbd912dcd7d3c65d3c74d36983ea59e047e38194fa03ae5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigorous_raman, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec 06 10:05:55 compute-0 systemd[1]: Started libpod-conmon-2a6b206042f4cf78acbd912dcd7d3c65d3c74d36983ea59e047e38194fa03ae5.scope.
Dec 06 10:05:55 compute-0 systemd[1]: Started libcrun container.
Dec 06 10:05:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf3146f5d122286e4545e112f7fa30c2d93c0a0e1f24bef7f836c497986d5b49/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 10:05:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf3146f5d122286e4545e112f7fa30c2d93c0a0e1f24bef7f836c497986d5b49/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 10:05:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf3146f5d122286e4545e112f7fa30c2d93c0a0e1f24bef7f836c497986d5b49/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 10:05:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf3146f5d122286e4545e112f7fa30c2d93c0a0e1f24bef7f836c497986d5b49/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 10:05:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf3146f5d122286e4545e112f7fa30c2d93c0a0e1f24bef7f836c497986d5b49/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 06 10:05:55 compute-0 podman[261270]: 2025-12-06 10:05:55.806526542 +0000 UTC m=+0.029829476 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 06 10:05:55 compute-0 podman[261270]: 2025-12-06 10:05:55.913009384 +0000 UTC m=+0.136312258 container init 2a6b206042f4cf78acbd912dcd7d3c65d3c74d36983ea59e047e38194fa03ae5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigorous_raman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 10:05:55 compute-0 podman[261270]: 2025-12-06 10:05:55.921935914 +0000 UTC m=+0.145238758 container start 2a6b206042f4cf78acbd912dcd7d3c65d3c74d36983ea59e047e38194fa03ae5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigorous_raman, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 10:05:55 compute-0 podman[261270]: 2025-12-06 10:05:55.924842872 +0000 UTC m=+0.148145716 container attach 2a6b206042f4cf78acbd912dcd7d3c65d3c74d36983ea59e047e38194fa03ae5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigorous_raman, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Dec 06 10:05:56 compute-0 vigorous_raman[261287]: --> passed data devices: 0 physical, 1 LVM
Dec 06 10:05:56 compute-0 vigorous_raman[261287]: --> All data devices are unavailable
Dec 06 10:05:56 compute-0 systemd[1]: libpod-2a6b206042f4cf78acbd912dcd7d3c65d3c74d36983ea59e047e38194fa03ae5.scope: Deactivated successfully.
Dec 06 10:05:56 compute-0 podman[261270]: 2025-12-06 10:05:56.360273025 +0000 UTC m=+0.583575889 container died 2a6b206042f4cf78acbd912dcd7d3c65d3c74d36983ea59e047e38194fa03ae5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigorous_raman, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 10:05:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-bf3146f5d122286e4545e112f7fa30c2d93c0a0e1f24bef7f836c497986d5b49-merged.mount: Deactivated successfully.
Dec 06 10:05:56 compute-0 podman[261270]: 2025-12-06 10:05:56.406129532 +0000 UTC m=+0.629432376 container remove 2a6b206042f4cf78acbd912dcd7d3c65d3c74d36983ea59e047e38194fa03ae5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigorous_raman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec 06 10:05:56 compute-0 systemd[1]: libpod-conmon-2a6b206042f4cf78acbd912dcd7d3c65d3c74d36983ea59e047e38194fa03ae5.scope: Deactivated successfully.
Dec 06 10:05:56 compute-0 sudo[261162]: pam_unix(sudo:session): session closed for user root
Dec 06 10:05:56 compute-0 sudo[261316]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 10:05:56 compute-0 sudo[261316]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:05:56 compute-0 sudo[261316]: pam_unix(sudo:session): session closed for user root
Dec 06 10:05:56 compute-0 sudo[261341]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 5ecd3f74-dade-5fc4-92ce-8950ae424258 -- lvm list --format json
Dec 06 10:05:56 compute-0 sudo[261341]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:05:56 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v742: 337 pgs: 337 active+clean; 88 MiB data, 220 MiB used, 60 GiB / 60 GiB avail; 266 KiB/s rd, 9 op/s
Dec 06 10:05:56 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:05:56 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:05:56 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:05:56.874 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:05:56 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:05:56 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 10:05:56 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:05:56.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 10:05:57 compute-0 podman[261406]: 2025-12-06 10:05:57.046807269 +0000 UTC m=+0.051977052 container create f17d0d2b46c8f9fa31925587b4c0b51288a90a9fc21e4547c45b8a061a726c48 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_shirley, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 10:05:57 compute-0 systemd[1]: Started libpod-conmon-f17d0d2b46c8f9fa31925587b4c0b51288a90a9fc21e4547c45b8a061a726c48.scope.
Dec 06 10:05:57 compute-0 podman[261406]: 2025-12-06 10:05:57.028741122 +0000 UTC m=+0.033910935 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 06 10:05:57 compute-0 systemd[1]: Started libcrun container.
Dec 06 10:05:57 compute-0 podman[261406]: 2025-12-06 10:05:57.144148204 +0000 UTC m=+0.149318017 container init f17d0d2b46c8f9fa31925587b4c0b51288a90a9fc21e4547c45b8a061a726c48 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_shirley, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325)
Dec 06 10:05:57 compute-0 podman[261406]: 2025-12-06 10:05:57.156108627 +0000 UTC m=+0.161278450 container start f17d0d2b46c8f9fa31925587b4c0b51288a90a9fc21e4547c45b8a061a726c48 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_shirley, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 06 10:05:57 compute-0 xenodochial_shirley[261423]: 167 167
Dec 06 10:05:57 compute-0 podman[261406]: 2025-12-06 10:05:57.165916301 +0000 UTC m=+0.171086134 container attach f17d0d2b46c8f9fa31925587b4c0b51288a90a9fc21e4547c45b8a061a726c48 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_shirley, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Dec 06 10:05:57 compute-0 systemd[1]: libpod-f17d0d2b46c8f9fa31925587b4c0b51288a90a9fc21e4547c45b8a061a726c48.scope: Deactivated successfully.
Dec 06 10:05:57 compute-0 podman[261406]: 2025-12-06 10:05:57.16697479 +0000 UTC m=+0.172144593 container died f17d0d2b46c8f9fa31925587b4c0b51288a90a9fc21e4547c45b8a061a726c48 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_shirley, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325)
Dec 06 10:05:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-c60764e23f8dd7aa2a260b374ffccebd588ec8bef7b228a814907be676f388c6-merged.mount: Deactivated successfully.
Dec 06 10:05:57 compute-0 podman[261406]: 2025-12-06 10:05:57.22296858 +0000 UTC m=+0.228138383 container remove f17d0d2b46c8f9fa31925587b4c0b51288a90a9fc21e4547c45b8a061a726c48 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_shirley, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 10:05:57 compute-0 systemd[1]: libpod-conmon-f17d0d2b46c8f9fa31925587b4c0b51288a90a9fc21e4547c45b8a061a726c48.scope: Deactivated successfully.
Dec 06 10:05:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:05:57.257Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec 06 10:05:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:05:57.257Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec 06 10:05:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:05:57.258Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec 06 10:05:57 compute-0 systemd[1]: ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258@nfs.cephfs.2.0.compute-0.dfwxck.service: Scheduled restart job, restart counter is at 8.
Dec 06 10:05:57 compute-0 systemd[1]: Stopped Ceph nfs.cephfs.2.0.compute-0.dfwxck for 5ecd3f74-dade-5fc4-92ce-8950ae424258.
Dec 06 10:05:57 compute-0 systemd[1]: ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258@nfs.cephfs.2.0.compute-0.dfwxck.service: Consumed 1.579s CPU time.
Dec 06 10:05:57 compute-0 systemd[1]: Starting Ceph nfs.cephfs.2.0.compute-0.dfwxck for 5ecd3f74-dade-5fc4-92ce-8950ae424258...
Dec 06 10:05:57 compute-0 podman[261448]: 2025-12-06 10:05:57.458228034 +0000 UTC m=+0.068238141 container create df48eafb00aa37020507812da5f939d714c9c2fd365c30eb073cd54ee3069558 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_allen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0)
Dec 06 10:05:57 compute-0 systemd[1]: Started libpod-conmon-df48eafb00aa37020507812da5f939d714c9c2fd365c30eb073cd54ee3069558.scope.
Dec 06 10:05:57 compute-0 podman[261448]: 2025-12-06 10:05:57.422257784 +0000 UTC m=+0.032267991 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 06 10:05:57 compute-0 systemd[1]: Started libcrun container.
Dec 06 10:05:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8cb0fdaa6da5aac2de4d6525e1ade05771d8ddabfc549185b32eb6b1e37d9d05/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 10:05:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8cb0fdaa6da5aac2de4d6525e1ade05771d8ddabfc549185b32eb6b1e37d9d05/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 10:05:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8cb0fdaa6da5aac2de4d6525e1ade05771d8ddabfc549185b32eb6b1e37d9d05/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 10:05:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8cb0fdaa6da5aac2de4d6525e1ade05771d8ddabfc549185b32eb6b1e37d9d05/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 10:05:57 compute-0 podman[261448]: 2025-12-06 10:05:57.542381954 +0000 UTC m=+0.152392081 container init df48eafb00aa37020507812da5f939d714c9c2fd365c30eb073cd54ee3069558 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_allen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 06 10:05:57 compute-0 podman[261448]: 2025-12-06 10:05:57.551984403 +0000 UTC m=+0.161994500 container start df48eafb00aa37020507812da5f939d714c9c2fd365c30eb073cd54ee3069558 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_allen, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 10:05:57 compute-0 podman[261448]: 2025-12-06 10:05:57.560275597 +0000 UTC m=+0.170285754 container attach df48eafb00aa37020507812da5f939d714c9c2fd365c30eb073cd54ee3069558 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_allen, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec 06 10:05:57 compute-0 podman[261482]: 2025-12-06 10:05:57.568011145 +0000 UTC m=+0.072205518 container health_status a9cf33c1bf7891b3fdd9db4717ad5f3587fe9e5acf9171920c6d6334b70a502a (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Dec 06 10:05:57 compute-0 podman[261534]: 2025-12-06 10:05:57.676259104 +0000 UTC m=+0.062117246 container create 9c07cd8f5a4cefc3df35c5c289279dafc1d082a8f635dd4ffda3a0fb0dfa9d8c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Dec 06 10:05:57 compute-0 podman[261534]: 2025-12-06 10:05:57.646098251 +0000 UTC m=+0.031956413 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 06 10:05:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66d9152d1c4e28a2f475bd786475ef6ecf46d90f6ad0d9809f534e4818d75aaf/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Dec 06 10:05:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66d9152d1c4e28a2f475bd786475ef6ecf46d90f6ad0d9809f534e4818d75aaf/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Dec 06 10:05:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66d9152d1c4e28a2f475bd786475ef6ecf46d90f6ad0d9809f534e4818d75aaf/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 10:05:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66d9152d1c4e28a2f475bd786475ef6ecf46d90f6ad0d9809f534e4818d75aaf/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.2.0.compute-0.dfwxck-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Dec 06 10:05:57 compute-0 podman[261534]: 2025-12-06 10:05:57.768780269 +0000 UTC m=+0.154638401 container init 9c07cd8f5a4cefc3df35c5c289279dafc1d082a8f635dd4ffda3a0fb0dfa9d8c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 10:05:57 compute-0 podman[261534]: 2025-12-06 10:05:57.773397894 +0000 UTC m=+0.159256006 container start 9c07cd8f5a4cefc3df35c5c289279dafc1d082a8f635dd4ffda3a0fb0dfa9d8c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 10:05:57 compute-0 bash[261534]: 9c07cd8f5a4cefc3df35c5c289279dafc1d082a8f635dd4ffda3a0fb0dfa9d8c
Dec 06 10:05:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[261550]: 06/12/2025 10:05:57 : epoch 69340005 : compute-0 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Dec 06 10:05:57 compute-0 systemd[1]: Started Ceph nfs.cephfs.2.0.compute-0.dfwxck for 5ecd3f74-dade-5fc4-92ce-8950ae424258.
Dec 06 10:05:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[261550]: 06/12/2025 10:05:57 : epoch 69340005 : compute-0 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Dec 06 10:05:57 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:05:57 compute-0 ceph-mon[74327]: pgmap v742: 337 pgs: 337 active+clean; 88 MiB data, 220 MiB used, 60 GiB / 60 GiB avail; 266 KiB/s rd, 9 op/s
Dec 06 10:05:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[261550]: 06/12/2025 10:05:57 : epoch 69340005 : compute-0 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Dec 06 10:05:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[261550]: 06/12/2025 10:05:57 : epoch 69340005 : compute-0 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Dec 06 10:05:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[261550]: 06/12/2025 10:05:57 : epoch 69340005 : compute-0 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Dec 06 10:05:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[261550]: 06/12/2025 10:05:57 : epoch 69340005 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Dec 06 10:05:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[261550]: 06/12/2025 10:05:57 : epoch 69340005 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Dec 06 10:05:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[261550]: 06/12/2025 10:05:57 : epoch 69340005 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 06 10:05:57 compute-0 cool_allen[261489]: {
Dec 06 10:05:57 compute-0 cool_allen[261489]:     "1": [
Dec 06 10:05:57 compute-0 cool_allen[261489]:         {
Dec 06 10:05:57 compute-0 cool_allen[261489]:             "devices": [
Dec 06 10:05:57 compute-0 cool_allen[261489]:                 "/dev/loop3"
Dec 06 10:05:57 compute-0 cool_allen[261489]:             ],
Dec 06 10:05:57 compute-0 cool_allen[261489]:             "lv_name": "ceph_lv0",
Dec 06 10:05:57 compute-0 cool_allen[261489]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 10:05:57 compute-0 cool_allen[261489]:             "lv_size": "21470642176",
Dec 06 10:05:57 compute-0 cool_allen[261489]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=a55pcS-v3Vt-CJ4W-fqCV-Baze-9VlY-jG9prS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=5ecd3f74-dade-5fc4-92ce-8950ae424258,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=7899c4d8-edb4-4836-b838-c4aa702ad7af,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 06 10:05:57 compute-0 cool_allen[261489]:             "lv_uuid": "a55pcS-v3Vt-CJ4W-fqCV-Baze-9VlY-jG9prS",
Dec 06 10:05:57 compute-0 cool_allen[261489]:             "name": "ceph_lv0",
Dec 06 10:05:57 compute-0 cool_allen[261489]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 10:05:57 compute-0 cool_allen[261489]:             "tags": {
Dec 06 10:05:57 compute-0 cool_allen[261489]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 06 10:05:57 compute-0 cool_allen[261489]:                 "ceph.block_uuid": "a55pcS-v3Vt-CJ4W-fqCV-Baze-9VlY-jG9prS",
Dec 06 10:05:57 compute-0 cool_allen[261489]:                 "ceph.cephx_lockbox_secret": "",
Dec 06 10:05:57 compute-0 cool_allen[261489]:                 "ceph.cluster_fsid": "5ecd3f74-dade-5fc4-92ce-8950ae424258",
Dec 06 10:05:57 compute-0 cool_allen[261489]:                 "ceph.cluster_name": "ceph",
Dec 06 10:05:57 compute-0 cool_allen[261489]:                 "ceph.crush_device_class": "",
Dec 06 10:05:57 compute-0 cool_allen[261489]:                 "ceph.encrypted": "0",
Dec 06 10:05:57 compute-0 cool_allen[261489]:                 "ceph.osd_fsid": "7899c4d8-edb4-4836-b838-c4aa702ad7af",
Dec 06 10:05:57 compute-0 cool_allen[261489]:                 "ceph.osd_id": "1",
Dec 06 10:05:57 compute-0 cool_allen[261489]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 06 10:05:57 compute-0 cool_allen[261489]:                 "ceph.type": "block",
Dec 06 10:05:57 compute-0 cool_allen[261489]:                 "ceph.vdo": "0",
Dec 06 10:05:57 compute-0 cool_allen[261489]:                 "ceph.with_tpm": "0"
Dec 06 10:05:57 compute-0 cool_allen[261489]:             },
Dec 06 10:05:57 compute-0 cool_allen[261489]:             "type": "block",
Dec 06 10:05:57 compute-0 cool_allen[261489]:             "vg_name": "ceph_vg0"
Dec 06 10:05:57 compute-0 cool_allen[261489]:         }
Dec 06 10:05:57 compute-0 cool_allen[261489]:     ]
Dec 06 10:05:57 compute-0 cool_allen[261489]: }
Dec 06 10:05:57 compute-0 systemd[1]: libpod-df48eafb00aa37020507812da5f939d714c9c2fd365c30eb073cd54ee3069558.scope: Deactivated successfully.
Dec 06 10:05:57 compute-0 podman[261448]: 2025-12-06 10:05:57.912953087 +0000 UTC m=+0.522963184 container died df48eafb00aa37020507812da5f939d714c9c2fd365c30eb073cd54ee3069558 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_allen, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec 06 10:05:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-8cb0fdaa6da5aac2de4d6525e1ade05771d8ddabfc549185b32eb6b1e37d9d05-merged.mount: Deactivated successfully.
Dec 06 10:05:57 compute-0 podman[261448]: 2025-12-06 10:05:57.963671586 +0000 UTC m=+0.573681683 container remove df48eafb00aa37020507812da5f939d714c9c2fd365c30eb073cd54ee3069558 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_allen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS)
Dec 06 10:05:57 compute-0 systemd[1]: libpod-conmon-df48eafb00aa37020507812da5f939d714c9c2fd365c30eb073cd54ee3069558.scope: Deactivated successfully.
Dec 06 10:05:58 compute-0 sudo[261341]: pam_unix(sudo:session): session closed for user root
Dec 06 10:05:58 compute-0 sudo[261607]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 10:05:58 compute-0 sudo[261607]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:05:58 compute-0 sudo[261607]: pam_unix(sudo:session): session closed for user root
Dec 06 10:05:58 compute-0 sudo[261632]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 5ecd3f74-dade-5fc4-92ce-8950ae424258 -- raw list --format json
Dec 06 10:05:58 compute-0 sudo[261632]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:05:58 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v743: 337 pgs: 337 active+clean; 121 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 378 KiB/s rd, 2.5 MiB/s wr, 74 op/s
Dec 06 10:05:58 compute-0 podman[261700]: 2025-12-06 10:05:58.761939583 +0000 UTC m=+0.072358642 container create d547a272b40dcf4f338a027a2f6d5a8897d0b2511731d30b2628812db902de6a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_almeida, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 10:05:58 compute-0 systemd[1]: Started libpod-conmon-d547a272b40dcf4f338a027a2f6d5a8897d0b2511731d30b2628812db902de6a.scope.
Dec 06 10:05:58 compute-0 podman[261700]: 2025-12-06 10:05:58.737450493 +0000 UTC m=+0.047869652 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 06 10:05:58 compute-0 systemd[1]: Started libcrun container.
Dec 06 10:05:58 compute-0 podman[261700]: 2025-12-06 10:05:58.872004141 +0000 UTC m=+0.182423220 container init d547a272b40dcf4f338a027a2f6d5a8897d0b2511731d30b2628812db902de6a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_almeida, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 10:05:58 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:05:58 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:05:58 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:05:58.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:05:58 compute-0 podman[261700]: 2025-12-06 10:05:58.882036581 +0000 UTC m=+0.192455640 container start d547a272b40dcf4f338a027a2f6d5a8897d0b2511731d30b2628812db902de6a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_almeida, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 10:05:58 compute-0 podman[261700]: 2025-12-06 10:05:58.885453034 +0000 UTC m=+0.195872113 container attach d547a272b40dcf4f338a027a2f6d5a8897d0b2511731d30b2628812db902de6a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_almeida, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, ceph=True)
Dec 06 10:05:58 compute-0 distracted_almeida[261716]: 167 167
Dec 06 10:05:58 compute-0 systemd[1]: libpod-d547a272b40dcf4f338a027a2f6d5a8897d0b2511731d30b2628812db902de6a.scope: Deactivated successfully.
Dec 06 10:05:58 compute-0 podman[261700]: 2025-12-06 10:05:58.892655068 +0000 UTC m=+0.203074127 container died d547a272b40dcf4f338a027a2f6d5a8897d0b2511731d30b2628812db902de6a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_almeida, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Dec 06 10:05:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-17e838d55b9b630dfd74920374842b9b23c38ad51a91b5482d904069421e07d8-merged.mount: Deactivated successfully.
Dec 06 10:05:58 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:05:58 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:05:58 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:05:58.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:05:58 compute-0 podman[261700]: 2025-12-06 10:05:58.945607886 +0000 UTC m=+0.256026945 container remove d547a272b40dcf4f338a027a2f6d5a8897d0b2511731d30b2628812db902de6a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_almeida, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.license=GPLv2)
Dec 06 10:05:58 compute-0 systemd[1]: libpod-conmon-d547a272b40dcf4f338a027a2f6d5a8897d0b2511731d30b2628812db902de6a.scope: Deactivated successfully.
Dec 06 10:05:59 compute-0 podman[261740]: 2025-12-06 10:05:59.141601761 +0000 UTC m=+0.051034397 container create 5e77d9746a2f76903844fc35974424670a3db524a1ea765fd6e31e7613d769a1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_lamarr, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 10:05:59 compute-0 systemd[1]: Started libpod-conmon-5e77d9746a2f76903844fc35974424670a3db524a1ea765fd6e31e7613d769a1.scope.
Dec 06 10:05:59 compute-0 podman[261740]: 2025-12-06 10:05:59.119455414 +0000 UTC m=+0.028888090 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 06 10:05:59 compute-0 systemd[1]: Started libcrun container.
Dec 06 10:05:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ec036c5ceb541843194cb4af76f2b8990bfd2b07030e6c03ed4f470bd972c8f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 10:05:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ec036c5ceb541843194cb4af76f2b8990bfd2b07030e6c03ed4f470bd972c8f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 10:05:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ec036c5ceb541843194cb4af76f2b8990bfd2b07030e6c03ed4f470bd972c8f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 10:05:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ec036c5ceb541843194cb4af76f2b8990bfd2b07030e6c03ed4f470bd972c8f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 10:05:59 compute-0 podman[261740]: 2025-12-06 10:05:59.238226448 +0000 UTC m=+0.147659164 container init 5e77d9746a2f76903844fc35974424670a3db524a1ea765fd6e31e7613d769a1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_lamarr, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 10:05:59 compute-0 nova_compute[254819]: 2025-12-06 10:05:59.240 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:05:59 compute-0 podman[261740]: 2025-12-06 10:05:59.249312747 +0000 UTC m=+0.158745403 container start 5e77d9746a2f76903844fc35974424670a3db524a1ea765fd6e31e7613d769a1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_lamarr, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 06 10:05:59 compute-0 podman[261740]: 2025-12-06 10:05:59.253031757 +0000 UTC m=+0.162464393 container attach 5e77d9746a2f76903844fc35974424670a3db524a1ea765fd6e31e7613d769a1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_lamarr, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1)
Dec 06 10:05:59 compute-0 ceph-mon[74327]: pgmap v743: 337 pgs: 337 active+clean; 121 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 378 KiB/s rd, 2.5 MiB/s wr, 74 op/s
Dec 06 10:05:59 compute-0 lvm[261832]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 06 10:05:59 compute-0 lvm[261832]: VG ceph_vg0 finished
Dec 06 10:06:00 compute-0 sleepy_lamarr[261756]: {}
Dec 06 10:06:00 compute-0 systemd[1]: libpod-5e77d9746a2f76903844fc35974424670a3db524a1ea765fd6e31e7613d769a1.scope: Deactivated successfully.
Dec 06 10:06:00 compute-0 systemd[1]: libpod-5e77d9746a2f76903844fc35974424670a3db524a1ea765fd6e31e7613d769a1.scope: Consumed 1.411s CPU time.
Dec 06 10:06:00 compute-0 nova_compute[254819]: 2025-12-06 10:06:00.127 254824 INFO nova.compute.manager [None req-c381409c-f4e1-4670-9fe8-eae9c687de24 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 9f4c3de7-de9e-45d5-b170-3469a0bd0959] Get console output
Dec 06 10:06:00 compute-0 nova_compute[254819]: 2025-12-06 10:06:00.136 254824 INFO oslo.privsep.daemon [None req-c381409c-f4e1-4670-9fe8-eae9c687de24 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'nova.privsep.sys_admin_pctxt', '--privsep_sock_path', '/tmp/tmp97bp4g8e/privsep.sock']
Dec 06 10:06:00 compute-0 podman[261836]: 2025-12-06 10:06:00.162176764 +0000 UTC m=+0.045616881 container died 5e77d9746a2f76903844fc35974424670a3db524a1ea765fd6e31e7613d769a1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_lamarr, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True)
Dec 06 10:06:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-8ec036c5ceb541843194cb4af76f2b8990bfd2b07030e6c03ed4f470bd972c8f-merged.mount: Deactivated successfully.
Dec 06 10:06:00 compute-0 nova_compute[254819]: 2025-12-06 10:06:00.313 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:06:00 compute-0 podman[261836]: 2025-12-06 10:06:00.34779846 +0000 UTC m=+0.231238527 container remove 5e77d9746a2f76903844fc35974424670a3db524a1ea765fd6e31e7613d769a1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_lamarr, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Dec 06 10:06:00 compute-0 systemd[1]: libpod-conmon-5e77d9746a2f76903844fc35974424670a3db524a1ea765fd6e31e7613d769a1.scope: Deactivated successfully.
Dec 06 10:06:00 compute-0 sudo[261632]: pam_unix(sudo:session): session closed for user root
Dec 06 10:06:00 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 06 10:06:00 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 10:06:00 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 06 10:06:00 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 10:06:00 compute-0 sudo[261856]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 10:06:00 compute-0 sudo[261856]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:06:00 compute-0 sudo[261856]: pam_unix(sudo:session): session closed for user root
Dec 06 10:06:00 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v744: 337 pgs: 337 active+clean; 121 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 378 KiB/s rd, 2.5 MiB/s wr, 74 op/s
Dec 06 10:06:00 compute-0 nova_compute[254819]: 2025-12-06 10:06:00.875 254824 INFO oslo.privsep.daemon [None req-c381409c-f4e1-4670-9fe8-eae9c687de24 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Spawned new privsep daemon via rootwrap
Dec 06 10:06:00 compute-0 nova_compute[254819]: 2025-12-06 10:06:00.740 261881 INFO oslo.privsep.daemon [-] privsep daemon starting
Dec 06 10:06:00 compute-0 nova_compute[254819]: 2025-12-06 10:06:00.747 261881 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Dec 06 10:06:00 compute-0 nova_compute[254819]: 2025-12-06 10:06:00.753 261881 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/none
Dec 06 10:06:00 compute-0 nova_compute[254819]: 2025-12-06 10:06:00.754 261881 INFO oslo.privsep.daemon [-] privsep daemon running as pid 261881
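[annotation] The five nova_compute lines above show a complete oslo.privsep bootstrap: rootwrap spawns the helper, the daemon starts, runs as uid/gid 0/0, and reports a bounded capability set in eff/prm/inh form. A minimal sketch of parsing that capability triple, assuming only the line format visible here; parse_privsep_caps is a hypothetical helper, not an oslo API:

```python
import re

# Parse the oslo.privsep capability triple logged above, e.g.
# "... capabilities (eff/prm/inh): CAP_A|CAP_B/CAP_A|CAP_B/none".
CAPS_RE = re.compile(
    r"capabilities \(eff/prm/inh\): ([^/\s]+)/([^/\s]+)/([^/\s]+)")

def parse_privsep_caps(line):
    m = CAPS_RE.search(line)
    if m is None:
        raise ValueError("not a privsep capabilities line")
    eff, prm, inh = (set() if f == "none" else set(f.split("|"))
                     for f in m.groups())
    return {"effective": eff, "permitted": prm, "inheritable": inh}

line = ("privsep process running with capabilities (eff/prm/inh): "
        "CAP_CHOWN|CAP_SYS_ADMIN/CAP_CHOWN|CAP_SYS_ADMIN/none")
print(parse_privsep_caps(line)["inheritable"])  # set(): nothing inheritable
```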
Dec 06 10:06:00 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:06:00 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:06:00 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:06:00.878 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:06:00 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:06:00] "GET /metrics HTTP/1.1" 200 48383 "" "Prometheus/2.51.0"
Dec 06 10:06:00 compute-0 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:06:00] "GET /metrics HTTP/1.1" 200 48383 "" "Prometheus/2.51.0"
Dec 06 10:06:00 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:06:00 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:06:00 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:06:00.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:06:00 compute-0 nova_compute[254819]: 2025-12-06 10:06:00.983 261881 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
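[annotation] The "can't concat NoneType to bytes" text above is the standard CPython TypeError message raised when bytes concatenation meets None, here swallowed by nova while draining the instance console pty. A minimal reproduction and the defensive pattern the "Ignored error" wording suggests:

```python
# Reproduce the swallowed error: concatenating None onto bytes raises
# TypeError("can't concat NoneType to bytes") on CPython 3.
buf = b""
chunk = None  # what a console read can yield when no data is available
try:
    buf += chunk
except TypeError as exc:
    print(exc)  # can't concat NoneType to bytes

# Defensive variant: treat a None read as "no data", which is effectively
# what nova does by logging and ignoring the error.
buf += chunk or b""
```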
Dec 06 10:06:01 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 10:06:01 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 10:06:01 compute-0 ceph-mon[74327]: pgmap v744: 337 pgs: 337 active+clean; 121 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 378 KiB/s rd, 2.5 MiB/s wr, 74 op/s
Dec 06 10:06:02 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v745: 337 pgs: 337 active+clean; 121 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 378 KiB/s rd, 2.5 MiB/s wr, 74 op/s
Dec 06 10:06:02 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:06:02 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:06:02 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:06:02 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:06:02.881 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:06:02 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:06:02 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:06:02 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:06:02.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:06:03 compute-0 podman[261886]: 2025-12-06 10:06:03.526310197 +0000 UTC m=+0.139834562 container health_status ed08b04f63d3695f0aa2f04fcc706897af4aa24f286fa3cd10a4323ee2c868ab (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
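[annotation] The container labels above expose the healthcheck contract podman is applying: the configured 'test' command ('/openstack/healthcheck', bind-mounted into the container) runs periodically, exit status 0 maps to health_status=healthy, and consecutive failures would grow health_failing_streak. A plain-subprocess model of that contract, not podman's implementation:

```python
import subprocess

# Run a healthcheck command and map its exit status the way the podman
# health_status lines above report it. The command here is illustrative.
def run_healthcheck(test_cmd):
    result = subprocess.run(test_cmd, capture_output=True)
    return "healthy" if result.returncode == 0 else "unhealthy"

print(run_healthcheck(["/bin/true"]))  # healthy
```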
Dec 06 10:06:03 compute-0 ceph-mon[74327]: pgmap v745: 337 pgs: 337 active+clean; 121 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 378 KiB/s rd, 2.5 MiB/s wr, 74 op/s
Dec 06 10:06:03 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[261550]: 06/12/2025 10:06:03 : epoch 69340005 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 06 10:06:03 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[261550]: 06/12/2025 10:06:03 : epoch 69340005 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 06 10:06:04 compute-0 nova_compute[254819]: 2025-12-06 10:06:04.246 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:06:04 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v746: 337 pgs: 337 active+clean; 121 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 382 KiB/s rd, 2.5 MiB/s wr, 75 op/s
Dec 06 10:06:04 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:06:04 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:06:04 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:06:04.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:06:04 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:06:04 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:06:04 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:06:04.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:06:05 compute-0 nova_compute[254819]: 2025-12-06 10:06:05.315 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:06:05 compute-0 ceph-mon[74327]: pgmap v746: 337 pgs: 337 active+clean; 121 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 382 KiB/s rd, 2.5 MiB/s wr, 75 op/s
Dec 06 10:06:06 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v747: 337 pgs: 337 active+clean; 121 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 329 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Dec 06 10:06:06 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:06:06 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:06:06 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:06:06.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:06:06 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:06:06 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:06:06 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:06:06.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:06:07 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:06:07.258Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
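[annotation] The alertmanager dispatcher line above records the shape of its webhook delivery: each receiver gets a bounded number of attempts (two, per the log), each attempt bounded by a deadline ("context deadline exceeded" is Go's timeout error), then the notify is canceled. A standard-library sketch of that retry shape; the attempt count comes from the log, while the 5-second timeout is an illustrative stand-in for the real deadline:

```python
import urllib.request
import urllib.error

# Bounded retry with a per-attempt deadline, mirroring the logged
# "notify retry canceled after 2 attempts" behavior.
def notify(url, payload, attempts=2, timeout=5.0):
    last_err = None
    for _ in range(attempts):
        try:
            req = urllib.request.Request(
                url, data=payload,
                headers={"Content-Type": "application/json"})
            return urllib.request.urlopen(req, timeout=timeout)
        except (urllib.error.URLError, TimeoutError) as exc:
            last_err = exc
    raise RuntimeError(f"notify retry canceled after {attempts} attempts") from last_err
```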
Dec 06 10:06:07 compute-0 podman[261916]: 2025-12-06 10:06:07.438179043 +0000 UTC m=+0.063077152 container health_status ec1e66d2175087ad7a28a9a6b5a784e30128df3f5e268959d009e364cd5120f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Dec 06 10:06:07 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:06:07 compute-0 ceph-mon[74327]: pgmap v747: 337 pgs: 337 active+clean; 121 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 329 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Dec 06 10:06:08 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:06:08.104 162267 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=4, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '9a:dc:0d', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'b6:0a:c4:b8:be:39'}, ipsec=False) old=SB_Global(nb_cfg=3) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 10:06:08 compute-0 nova_compute[254819]: 2025-12-06 10:06:08.104 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:06:08 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:06:08.105 162267 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 06 10:06:08 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v748: 337 pgs: 337 active+clean; 121 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 330 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Dec 06 10:06:08 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:06:08 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:06:08 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:06:08.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:06:08 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:06:08 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:06:08 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:06:08.953 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
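[annotation] The radosgw "beast" frontend emits one access line per request, as in the pair above. A hedged parser for those lines; the field layout (client, user, [timestamp], "request", status, bytes, trailing latency=) is inferred from this log alone, not from radosgw documentation:

```python
import re

# Extract the useful fields from a beast access-log line as seen above.
BEAST_RE = re.compile(
    r'beast: \S+: (?P<client>\S+) - (?P<user>\S+) '
    r'\[(?P<ts>[^\]]+)\] "(?P<request>[^"]+)" (?P<status>\d+) (?P<bytes>\d+)'
    r'.*latency=(?P<latency>[\d.]+)s'
)

line = ('beast: 0x7f53e66225d0: 192.168.122.102 - anonymous '
        '[06/Dec/2025:10:06:08.953 +0000] "HEAD / HTTP/1.0" 200 0 '
        '- - - latency=0.001000027s')
m = BEAST_RE.search(line)
print(m.group("client"), m.group("status"), m.group("latency"))
# 192.168.122.102 200 0.001000027
```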
Dec 06 10:06:09 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/prometheus/health_history}] v 0)
Dec 06 10:06:09 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 10:06:09 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 06 10:06:09 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
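[annotation] The audit lines above show the mgr dispatching a JSON mon command ({"prefix": "osd blocklist ls", "format": "json"}) to the monitor. A hedged sketch of issuing the same command through the python-rados binding; the conffile path and default keyring are assumptions about this node:

```python
import json
import rados

# Send the same mon command the audit log records above and decode the
# JSON reply. mon_command returns (ret, outbuf, outs).
cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
cluster.connect()
cmd = json.dumps({"prefix": "osd blocklist ls", "format": "json"})
ret, outbuf, outs = cluster.mon_command(cmd, b"")
print(ret, json.loads(outbuf or b"[]"))
cluster.shutdown()
```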
Dec 06 10:06:09 compute-0 nova_compute[254819]: 2025-12-06 10:06:09.249 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:06:09 compute-0 ceph-mon[74327]: pgmap v748: 337 pgs: 337 active+clean; 121 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 330 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Dec 06 10:06:09 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 10:06:09 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 10:06:09 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[261550]: 06/12/2025 10:06:09 : epoch 69340005 : compute-0 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Dec 06 10:06:09 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[261550]: 06/12/2025 10:06:09 : epoch 69340005 : compute-0 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Dec 06 10:06:09 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[261550]: 06/12/2025 10:06:09 : epoch 69340005 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Dec 06 10:06:09 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[261550]: 06/12/2025 10:06:09 : epoch 69340005 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Dec 06 10:06:09 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[261550]: 06/12/2025 10:06:09 : epoch 69340005 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Dec 06 10:06:09 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[261550]: 06/12/2025 10:06:09 : epoch 69340005 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Dec 06 10:06:09 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[261550]: 06/12/2025 10:06:09 : epoch 69340005 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Dec 06 10:06:09 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[261550]: 06/12/2025 10:06:09 : epoch 69340005 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec 06 10:06:09 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[261550]: 06/12/2025 10:06:09 : epoch 69340005 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec 06 10:06:09 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[261550]: 06/12/2025 10:06:09 : epoch 69340005 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec 06 10:06:09 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[261550]: 06/12/2025 10:06:09 : epoch 69340005 : compute-0 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Dec 06 10:06:09 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[261550]: 06/12/2025 10:06:09 : epoch 69340005 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec 06 10:06:09 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[261550]: 06/12/2025 10:06:09 : epoch 69340005 : compute-0 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Dec 06 10:06:09 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[261550]: 06/12/2025 10:06:09 : epoch 69340005 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Dec 06 10:06:09 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[261550]: 06/12/2025 10:06:09 : epoch 69340005 : compute-0 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Dec 06 10:06:09 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[261550]: 06/12/2025 10:06:09 : epoch 69340005 : compute-0 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Dec 06 10:06:09 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[261550]: 06/12/2025 10:06:09 : epoch 69340005 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Dec 06 10:06:09 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[261550]: 06/12/2025 10:06:09 : epoch 69340005 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Dec 06 10:06:09 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[261550]: 06/12/2025 10:06:09 : epoch 69340005 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Dec 06 10:06:09 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[261550]: 06/12/2025 10:06:09 : epoch 69340005 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Dec 06 10:06:09 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[261550]: 06/12/2025 10:06:09 : epoch 69340005 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Dec 06 10:06:09 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[261550]: 06/12/2025 10:06:09 : epoch 69340005 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Dec 06 10:06:09 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[261550]: 06/12/2025 10:06:09 : epoch 69340005 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Dec 06 10:06:09 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[261550]: 06/12/2025 10:06:09 : epoch 69340005 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Dec 06 10:06:09 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[261550]: 06/12/2025 10:06:09 : epoch 69340005 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Dec 06 10:06:09 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[261550]: 06/12/2025 10:06:09 : epoch 69340005 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Dec 06 10:06:09 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[261550]: 06/12/2025 10:06:09 : epoch 69340005 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
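[annotation] The ganesha startup above reaches "NFS SERVER INITIALIZED" despite repeated DBUS :CRIT lines, because /run/dbus/system_bus_socket is not present inside the container; the daemon simply runs without its DBus admin interface. A minimal pre-flight check matching those CRIT lines, assuming the socket path from the log:

```python
import os

# The ganesha dbus_bus_get failure above names this exact socket; if it is
# not bind-mounted into the container, the DBus service thread exits while
# NFS itself still starts.
DBUS_SOCKET = "/run/dbus/system_bus_socket"

if not os.path.exists(DBUS_SOCKET):
    print(f"{DBUS_SOCKET} missing: expect dbus_bus_get CRIT logs and "
          "no DBus admin interface")
```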
Dec 06 10:06:10 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[261550]: 06/12/2025 10:06:10 : epoch 69340005 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f44a0000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:06:10 compute-0 nova_compute[254819]: 2025-12-06 10:06:10.318 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:06:10 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[261550]: 06/12/2025 10:06:10 : epoch 69340005 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4498001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:06:10 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v749: 337 pgs: 337 active+clean; 121 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 7.4 KiB/s rd, 17 KiB/s wr, 3 op/s
Dec 06 10:06:10 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[261550]: 06/12/2025 10:06:10 : epoch 69340005 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4474000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:06:10 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:06:10 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:06:10 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:06:10.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:06:10 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:06:10] "GET /metrics HTTP/1.1" 200 48460 "" "Prometheus/2.51.0"
Dec 06 10:06:10 compute-0 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:06:10] "GET /metrics HTTP/1.1" 200 48460 "" "Prometheus/2.51.0"
Dec 06 10:06:10 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:06:10 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:06:10 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:06:10.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:06:11 compute-0 sudo[261954]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 10:06:11 compute-0 sudo[261954]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:06:11 compute-0 sudo[261954]: pam_unix(sudo:session): session closed for user root
Dec 06 10:06:11 compute-0 ceph-mon[74327]: pgmap v749: 337 pgs: 337 active+clean; 121 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 7.4 KiB/s rd, 17 KiB/s wr, 3 op/s
Dec 06 10:06:12 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[261550]: 06/12/2025 10:06:12 : epoch 69340005 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4470000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:06:12 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[261550]: 06/12/2025 10:06:12 : epoch 69340005 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f447c000fa0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:06:12 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v750: 337 pgs: 337 active+clean; 121 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 7.4 KiB/s rd, 17 KiB/s wr, 3 op/s
Dec 06 10:06:12 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:06:12 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-haproxy-nfs-cephfs-compute-0-fzuvue[96127]: [WARNING] 339/100612 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
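[annotation] The haproxy line above marks backend nfs.cephfs.2 UP purely on a "Layer4 check": a TCP connect that completes within the check timeout. A sketch of an equivalent probe; the host and port are placeholders, since the log does not record the backend address (2049 is the conventional NFS port):

```python
import socket

# Layer-4 health probe: a backend counts as UP if a plain TCP connect
# succeeds within the timeout, exactly the signal haproxy logs above.
def layer4_check(host, port, timeout=2.0):
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

print("UP" if layer4_check("127.0.0.1", 2049) else "DOWN")
```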
Dec 06 10:06:12 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[261550]: 06/12/2025 10:06:12 : epoch 69340005 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f44980023e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:06:12 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:06:12 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:06:12 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:06:12.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:06:12 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:06:12 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:06:12 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:06:12.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:06:13 compute-0 ceph-mon[74327]: pgmap v750: 337 pgs: 337 active+clean; 121 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 7.4 KiB/s rd, 17 KiB/s wr, 3 op/s
Dec 06 10:06:13 compute-0 ceph-mon[74327]: from='client.? 192.168.122.102:0/4031308316' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 10:06:14 compute-0 nova_compute[254819]: 2025-12-06 10:06:14.252 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:06:14 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[261550]: 06/12/2025 10:06:14 : epoch 69340005 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f44740016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:06:14 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[261550]: 06/12/2025 10:06:14 : epoch 69340005 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f44700016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:06:14 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v751: 337 pgs: 337 active+clean; 121 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 7.5 KiB/s rd, 18 KiB/s wr, 4 op/s
Dec 06 10:06:14 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[261550]: 06/12/2025 10:06:14 : epoch 69340005 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f44700016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:06:14 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:06:14 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:06:14 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:06:14.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:06:14 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:06:14 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:06:14 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:06:14.961 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:06:15 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:06:15.107 162267 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=d39b5be8-d4cf-41c7-9a64-1ee03801f4e1, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '4'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
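[annotation] The transaction above sets one key in the Chassis_Private row's external_ids map ({'neutron:ovn-metadata-sb-cfg': '4'}, with if_exists=True). For map columns this kind of set typically merges the given keys into the existing map rather than replacing it wholesale; the dict model below illustrates that semantics, with the pre-existing key invented for the example rather than read from the database:

```python
# Plain-dict model of the logged DbSetCommand on a map column: merge,
# not wholesale replacement.
row_external_ids = {"neutron:ovn-metadata-id": "example"}      # assumed prior state
row_external_ids.update({"neutron:ovn-metadata-sb-cfg": "4"})  # the logged update
print(row_external_ids)
```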
Dec 06 10:06:15 compute-0 nova_compute[254819]: 2025-12-06 10:06:15.371 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:06:15 compute-0 ceph-osd[82803]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Dec 06 10:06:15 compute-0 ceph-mon[74327]: pgmap v751: 337 pgs: 337 active+clean; 121 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 7.5 KiB/s rd, 18 KiB/s wr, 4 op/s
Dec 06 10:06:16 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[261550]: 06/12/2025 10:06:16 : epoch 69340005 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f44980023e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:06:16 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[261550]: 06/12/2025 10:06:16 : epoch 69340005 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f44740016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:06:16 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v752: 337 pgs: 337 active+clean; 121 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 4.7 KiB/s wr, 2 op/s
Dec 06 10:06:16 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[261550]: 06/12/2025 10:06:16 : epoch 69340005 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f44700016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:06:16 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:06:16 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:06:16 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:06:16.894 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:06:16 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:06:16 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:06:16 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:06:16.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:06:17 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:06:17.259Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 06 10:06:17 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:06:17 compute-0 ceph-mon[74327]: pgmap v752: 337 pgs: 337 active+clean; 121 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 4.7 KiB/s wr, 2 op/s
Dec 06 10:06:18 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[261550]: 06/12/2025 10:06:18 : epoch 69340005 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f447c001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:06:18 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[261550]: 06/12/2025 10:06:18 : epoch 69340005 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f44980023e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:06:18 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v753: 337 pgs: 337 active+clean; 167 MiB data, 319 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 36 op/s
Dec 06 10:06:18 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[261550]: 06/12/2025 10:06:18 : epoch 69340005 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f44740016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:06:18 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:06:18 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:06:18 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:06:18.897 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:06:18 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:06:18 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:06:18 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:06:18.967 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:06:19 compute-0 nova_compute[254819]: 2025-12-06 10:06:19.254 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:06:19 compute-0 ceph-mon[74327]: pgmap v753: 337 pgs: 337 active+clean; 167 MiB data, 319 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 36 op/s
Dec 06 10:06:20 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[261550]: 06/12/2025 10:06:20 : epoch 69340005 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f44700016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:06:20 compute-0 nova_compute[254819]: 2025-12-06 10:06:20.373 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:06:20 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[261550]: 06/12/2025 10:06:20 : epoch 69340005 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f447c002470 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:06:20 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v754: 337 pgs: 337 active+clean; 167 MiB data, 319 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 34 op/s
Dec 06 10:06:20 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[261550]: 06/12/2025 10:06:20 : epoch 69340005 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f44740016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:06:20 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:06:20] "GET /metrics HTTP/1.1" 200 48460 "" "Prometheus/2.51.0"
Dec 06 10:06:20 compute-0 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:06:20] "GET /metrics HTTP/1.1" 200 48460 "" "Prometheus/2.51.0"
Dec 06 10:06:20 compute-0 ceph-mon[74327]: from='client.? 192.168.122.102:0/1556517146' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 10:06:20 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:06:20 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:06:20 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:06:20.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:06:20 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:06:20 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:06:20 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:06:20.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:06:21 compute-0 ceph-mon[74327]: pgmap v754: 337 pgs: 337 active+clean; 167 MiB data, 319 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 34 op/s
Dec 06 10:06:21 compute-0 ceph-mon[74327]: from='client.? 192.168.122.102:0/3225489691' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 10:06:22 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[261550]: 06/12/2025 10:06:22 : epoch 69340005 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f44980023e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:06:22 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[261550]: 06/12/2025 10:06:22 : epoch 69340005 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f44700016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:06:22 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v755: 337 pgs: 337 active+clean; 167 MiB data, 319 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 34 op/s
Dec 06 10:06:22 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:06:22 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[261550]: 06/12/2025 10:06:22 : epoch 69340005 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f447c002470 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:06:22 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:06:22 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:06:22 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:06:22.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:06:22 compute-0 ceph-mon[74327]: pgmap v755: 337 pgs: 337 active+clean; 167 MiB data, 319 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 34 op/s
Dec 06 10:06:22 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:06:22 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:06:22 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:06:22.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:06:23 compute-0 ceph-mgr[74618]: [balancer INFO root] Optimize plan auto_2025-12-06_10:06:23
Dec 06 10:06:23 compute-0 ceph-mgr[74618]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 06 10:06:23 compute-0 ceph-mgr[74618]: [balancer INFO root] do_upmap
Dec 06 10:06:23 compute-0 ceph-mgr[74618]: [balancer INFO root] pools ['.rgw.root', 'default.rgw.log', 'volumes', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', '.nfs', 'vms', 'images', '.mgr', 'default.rgw.meta', 'backups', 'default.rgw.control']
Dec 06 10:06:23 compute-0 ceph-mgr[74618]: [balancer INFO root] prepared 0/10 upmap changes
Dec 06 10:06:23 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 06 10:06:23 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 10:06:23 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 10:06:23 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 10:06:23 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 10:06:24 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 10:06:24 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 10:06:24 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 10:06:24 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 10:06:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] _maybe_adjust
Dec 06 10:06:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:06:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 06 10:06:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:06:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0011057152275835123 of space, bias 1.0, pg target 0.3317145682750537 quantized to 32 (current 32)
Dec 06 10:06:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:06:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 10:06:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:06:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 10:06:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:06:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Dec 06 10:06:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:06:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Dec 06 10:06:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:06:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 10:06:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:06:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec 06 10:06:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:06:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 06 10:06:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:06:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec 06 10:06:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:06:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 10:06:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:06:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
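[annotation] The pg_autoscaler lines above are internally consistent: every "pg target" equals usage_ratio * bias * K for one constant K. Solving from the '.mgr' line gives K = 300, which would match the default mon_target_pg_per_osd of 100 times three OSDs (60 GiB at 20 GiB per OSD). That factorization is an inference from these numbers, not a documented derivation; the "quantized to" values are simply the pools' current pg_num. A short check against the logged figures:

```python
# Verify pg_target == usage_ratio * bias * K for the pools logged above.
pools = [
    # (name, usage_ratio, bias, logged_pg_target)
    (".mgr",               7.185749983720779e-06,  1.0, 0.0021557249951162337),
    ("vms",                0.0011057152275835123,  1.0, 0.3317145682750537),
    ("images",             0.000665858301588852,   1.0, 0.19975749047665559),
    ("cephfs.cephfs.meta", 5.087256625643029e-07,  4.0, 0.0006104707950771635),
    ("default.rgw.meta",   1.2718141564107572e-07, 4.0, 0.00015261769876929088),
]
K = 300  # plausibly mon_target_pg_per_osd (100) * 3 OSDs
for name, ratio, bias, target in pools:
    assert abs(ratio * bias * K - target) < 1e-12, name
print("all pg targets reproduced with K =", K)
```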
Dec 06 10:06:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 06 10:06:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 10:06:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 10:06:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 10:06:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 10:06:24 compute-0 nova_compute[254819]: 2025-12-06 10:06:24.257 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:06:24 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[261550]: 06/12/2025 10:06:24 : epoch 69340005 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4474002f00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:06:24 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[261550]: 06/12/2025 10:06:24 : epoch 69340005 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f44980023e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:06:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 06 10:06:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 10:06:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 10:06:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 10:06:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 10:06:24 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v756: 337 pgs: 337 active+clean; 167 MiB data, 319 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 34 op/s
Dec 06 10:06:24 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[261550]: 06/12/2025 10:06:24 : epoch 69340005 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f44700032f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:06:24 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:06:24 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:06:24 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:06:24.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:06:24 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:06:24 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:06:24 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:06:24.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:06:24 compute-0 ceph-mon[74327]: pgmap v756: 337 pgs: 337 active+clean; 167 MiB data, 319 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 34 op/s
Dec 06 10:06:25 compute-0 nova_compute[254819]: 2025-12-06 10:06:25.375 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:06:26 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[261550]: 06/12/2025 10:06:26 : epoch 69340005 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f447c002470 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:06:26 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[261550]: 06/12/2025 10:06:26 : epoch 69340005 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4474002f00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:06:26 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v757: 337 pgs: 337 active+clean; 167 MiB data, 319 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 34 op/s
Dec 06 10:06:26 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[261550]: 06/12/2025 10:06:26 : epoch 69340005 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f44980023e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:06:26 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:06:26 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:06:26 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:06:26.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:06:26 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:06:26 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:06:26 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:06:26.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:06:27 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:06:27.259Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 06 10:06:27 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:06:27 compute-0 ceph-mon[74327]: pgmap v757: 337 pgs: 337 active+clean; 167 MiB data, 319 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 34 op/s
Dec 06 10:06:28 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[261550]: 06/12/2025 10:06:28 : epoch 69340005 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f44700032f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:06:28 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[261550]: 06/12/2025 10:06:28 : epoch 69340005 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f447c003730 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:06:28 compute-0 podman[261996]: 2025-12-06 10:06:28.43467583 +0000 UTC m=+0.064632214 container health_status a9cf33c1bf7891b3fdd9db4717ad5f3587fe9e5acf9171920c6d6334b70a502a (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=multipathd, managed_by=edpm_ansible)
Dec 06 10:06:28 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v758: 337 pgs: 337 active+clean; 167 MiB data, 320 MiB used, 60 GiB / 60 GiB avail; 3.6 MiB/s rd, 1.8 MiB/s wr, 108 op/s
Dec 06 10:06:28 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[261550]: 06/12/2025 10:06:28 : epoch 69340005 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f447c003730 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:06:28 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:06:28 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:06:28 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:06:28.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:06:28 compute-0 ceph-mon[74327]: pgmap v758: 337 pgs: 337 active+clean; 167 MiB data, 320 MiB used, 60 GiB / 60 GiB avail; 3.6 MiB/s rd, 1.8 MiB/s wr, 108 op/s
Dec 06 10:06:28 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:06:28 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 10:06:28 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:06:28.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 10:06:29 compute-0 nova_compute[254819]: 2025-12-06 10:06:29.276 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:06:30 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[261550]: 06/12/2025 10:06:30 : epoch 69340005 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f44980023e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:06:30 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[261550]: 06/12/2025 10:06:30 : epoch 69340005 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4470003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:06:30 compute-0 nova_compute[254819]: 2025-12-06 10:06:30.432 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:06:30 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v759: 337 pgs: 337 active+clean; 167 MiB data, 320 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 74 op/s
Dec 06 10:06:30 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[261550]: 06/12/2025 10:06:30 : epoch 69340005 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f447c003730 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:06:30 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:06:30] "GET /metrics HTTP/1.1" 200 48458 "" "Prometheus/2.51.0"
Dec 06 10:06:30 compute-0 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:06:30] "GET /metrics HTTP/1.1" 200 48458 "" "Prometheus/2.51.0"
Dec 06 10:06:30 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:06:30 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:06:30 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:06:30.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:06:30 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:06:30 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:06:30 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:06:30.984 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:06:31 compute-0 sudo[262021]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 10:06:31 compute-0 sudo[262021]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:06:31 compute-0 sudo[262021]: pam_unix(sudo:session): session closed for user root
Dec 06 10:06:31 compute-0 ceph-mon[74327]: pgmap v759: 337 pgs: 337 active+clean; 167 MiB data, 320 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 74 op/s
Dec 06 10:06:32 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[261550]: 06/12/2025 10:06:32 : epoch 69340005 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4474003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:06:32 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[261550]: 06/12/2025 10:06:32 : epoch 69340005 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f44980023e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:06:32 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v760: 337 pgs: 337 active+clean; 167 MiB data, 320 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 74 op/s
Dec 06 10:06:32 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:06:32 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[261550]: 06/12/2025 10:06:32 : epoch 69340005 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4470003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:06:32 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:06:32 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:06:32 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:06:32.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:06:32 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:06:32 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:06:32 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:06:32.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:06:33 compute-0 nova_compute[254819]: 2025-12-06 10:06:33.500 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:06:33 compute-0 ceph-mon[74327]: pgmap v760: 337 pgs: 337 active+clean; 167 MiB data, 320 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 74 op/s
Dec 06 10:06:34 compute-0 nova_compute[254819]: 2025-12-06 10:06:34.279 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:06:34 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[261550]: 06/12/2025 10:06:34 : epoch 69340005 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f447c003730 fd 38 proxy ignored for local
Dec 06 10:06:34 compute-0 kernel: ganesha.nfsd[261944]: segfault at 50 ip 00007f454c3ee32e sp 00007f4518ff8210 error 4 in libntirpc.so.5.8[7f454c3d3000+2c000] likely on CPU 0 (core 0, socket 0)
Dec 06 10:06:34 compute-0 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Dec 06 10:06:34 compute-0 systemd[1]: Started Process Core Dump (PID 262049/UID 0).
Dec 06 10:06:34 compute-0 podman[262048]: 2025-12-06 10:06:34.395347146 +0000 UTC m=+0.085837095 container health_status ed08b04f63d3695f0aa2f04fcc706897af4aa24f286fa3cd10a4323ee2c868ab (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Dec 06 10:06:34 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v761: 337 pgs: 337 active+clean; 167 MiB data, 320 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 74 op/s
Dec 06 10:06:34 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:06:34 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:06:34 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:06:34.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:06:34 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:06:34 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 10:06:34 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:06:34.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 10:06:35 compute-0 nova_compute[254819]: 2025-12-06 10:06:35.434 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:06:35 compute-0 systemd-coredump[262055]: Process 261556 (ganesha.nfsd) of user 0 dumped core.
                                                    
                                                    Stack trace of thread 46:
                                                    #0  0x00007f454c3ee32e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)
                                                    ELF object binary architecture: AMD x86-64
Dec 06 10:06:35 compute-0 systemd[1]: systemd-coredump@8-262049-0.service: Deactivated successfully.
Dec 06 10:06:35 compute-0 systemd[1]: systemd-coredump@8-262049-0.service: Consumed 1.115s CPU time.
Dec 06 10:06:35 compute-0 podman[262082]: 2025-12-06 10:06:35.88504169 +0000 UTC m=+0.026350271 container died 9c07cd8f5a4cefc3df35c5c289279dafc1d082a8f635dd4ffda3a0fb0dfa9d8c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 10:06:36 compute-0 ceph-mon[74327]: pgmap v761: 337 pgs: 337 active+clean; 167 MiB data, 320 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 74 op/s
Dec 06 10:06:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-66d9152d1c4e28a2f475bd786475ef6ecf46d90f6ad0d9809f534e4818d75aaf-merged.mount: Deactivated successfully.
Dec 06 10:06:36 compute-0 podman[262082]: 2025-12-06 10:06:36.108945418 +0000 UTC m=+0.250253999 container remove 9c07cd8f5a4cefc3df35c5c289279dafc1d082a8f635dd4ffda3a0fb0dfa9d8c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 10:06:36 compute-0 systemd[1]: ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258@nfs.cephfs.2.0.compute-0.dfwxck.service: Main process exited, code=exited, status=139/n/a
Dec 06 10:06:36 compute-0 systemd[1]: ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258@nfs.cephfs.2.0.compute-0.dfwxck.service: Failed with result 'exit-code'.
Dec 06 10:06:36 compute-0 systemd[1]: ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258@nfs.cephfs.2.0.compute-0.dfwxck.service: Consumed 1.448s CPU time.
Dec 06 10:06:36 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v762: 337 pgs: 337 active+clean; 167 MiB data, 320 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 74 op/s
Dec 06 10:06:36 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:06:36 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:06:36 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:06:36.916 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:06:36 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:06:36 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:06:36 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:06:36.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:06:37 compute-0 ceph-mon[74327]: pgmap v762: 337 pgs: 337 active+clean; 167 MiB data, 320 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 74 op/s
Dec 06 10:06:37 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:06:37.260Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 06 10:06:37 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:06:38 compute-0 podman[262126]: 2025-12-06 10:06:38.415575254 +0000 UTC m=+0.047969266 container health_status ec1e66d2175087ad7a28a9a6b5a784e30128df3f5e268959d009e364cd5120f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125)
Dec 06 10:06:38 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v763: 337 pgs: 337 active+clean; 195 MiB data, 344 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 128 op/s
Dec 06 10:06:38 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:06:38 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:06:38 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:06:38.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:06:38 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 06 10:06:38 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 10:06:38 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:06:38 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:06:38 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:06:38.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:06:39 compute-0 nova_compute[254819]: 2025-12-06 10:06:39.280 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:06:39 compute-0 nova_compute[254819]: 2025-12-06 10:06:39.749 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:06:39 compute-0 nova_compute[254819]: 2025-12-06 10:06:39.749 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:06:39 compute-0 nova_compute[254819]: 2025-12-06 10:06:39.749 254824 DEBUG nova.compute.manager [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 06 10:06:39 compute-0 nova_compute[254819]: 2025-12-06 10:06:39.749 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:06:39 compute-0 nova_compute[254819]: 2025-12-06 10:06:39.780 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:06:39 compute-0 nova_compute[254819]: 2025-12-06 10:06:39.781 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:06:39 compute-0 nova_compute[254819]: 2025-12-06 10:06:39.781 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:06:39 compute-0 nova_compute[254819]: 2025-12-06 10:06:39.781 254824 DEBUG nova.compute.resource_tracker [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 10:06:39 compute-0 nova_compute[254819]: 2025-12-06 10:06:39.782 254824 DEBUG oslo_concurrency.processutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 10:06:39 compute-0 ceph-mon[74327]: pgmap v763: 337 pgs: 337 active+clean; 195 MiB data, 344 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 128 op/s
Dec 06 10:06:39 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 10:06:40 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 06 10:06:40 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1103758457' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 10:06:40 compute-0 nova_compute[254819]: 2025-12-06 10:06:40.236 254824 DEBUG oslo_concurrency.processutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 10:06:40 compute-0 nova_compute[254819]: 2025-12-06 10:06:40.309 254824 DEBUG nova.virt.libvirt.driver [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 10:06:40 compute-0 nova_compute[254819]: 2025-12-06 10:06:40.309 254824 DEBUG nova.virt.libvirt.driver [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 10:06:40 compute-0 nova_compute[254819]: 2025-12-06 10:06:40.471 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:06:40 compute-0 nova_compute[254819]: 2025-12-06 10:06:40.483 254824 WARNING nova.virt.libvirt.driver [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 10:06:40 compute-0 nova_compute[254819]: 2025-12-06 10:06:40.484 254824 DEBUG nova.compute.resource_tracker [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4448MB free_disk=59.897621154785156GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 06 10:06:40 compute-0 nova_compute[254819]: 2025-12-06 10:06:40.484 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:06:40 compute-0 nova_compute[254819]: 2025-12-06 10:06:40.485 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:06:40 compute-0 nova_compute[254819]: 2025-12-06 10:06:40.574 254824 DEBUG nova.compute.resource_tracker [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Instance 9f4c3de7-de9e-45d5-b170-3469a0bd0959 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 06 10:06:40 compute-0 nova_compute[254819]: 2025-12-06 10:06:40.575 254824 DEBUG nova.compute.resource_tracker [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 10:06:40 compute-0 nova_compute[254819]: 2025-12-06 10:06:40.575 254824 DEBUG nova.compute.resource_tracker [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 06 10:06:40 compute-0 nova_compute[254819]: 2025-12-06 10:06:40.675 254824 DEBUG oslo_concurrency.processutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 10:06:40 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v764: 337 pgs: 337 active+clean; 195 MiB data, 344 MiB used, 60 GiB / 60 GiB avail; 330 KiB/s rd, 2.1 MiB/s wr, 53 op/s
Dec 06 10:06:40 compute-0 ceph-mon[74327]: from='client.? 192.168.122.101:0/172757125' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 10:06:40 compute-0 ceph-mon[74327]: from='client.? 192.168.122.100:0/1103758457' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 10:06:40 compute-0 ceph-mon[74327]: from='client.? 192.168.122.101:0/1754887358' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 10:06:40 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-haproxy-nfs-cephfs-compute-0-fzuvue[96127]: [WARNING] 339/100640 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Dec 06 10:06:40 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:06:40] "GET /metrics HTTP/1.1" 200 48462 "" "Prometheus/2.51.0"
Dec 06 10:06:40 compute-0 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:06:40] "GET /metrics HTTP/1.1" 200 48462 "" "Prometheus/2.51.0"
Dec 06 10:06:40 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:06:40 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:06:40 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:06:40.920 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:06:41 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:06:41 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:06:41 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:06:41.000 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:06:41 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 06 10:06:41 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1975280571' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 10:06:41 compute-0 nova_compute[254819]: 2025-12-06 10:06:41.145 254824 DEBUG oslo_concurrency.processutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 10:06:41 compute-0 nova_compute[254819]: 2025-12-06 10:06:41.152 254824 DEBUG nova.compute.provider_tree [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Inventory has not changed in ProviderTree for provider: 06a9c7d1-c74c-47ea-9e97-16acfab6aa88 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 10:06:41 compute-0 nova_compute[254819]: 2025-12-06 10:06:41.171 254824 DEBUG nova.scheduler.client.report [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Inventory has not changed for provider 06a9c7d1-c74c-47ea-9e97-16acfab6aa88 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 10:06:41 compute-0 nova_compute[254819]: 2025-12-06 10:06:41.202 254824 DEBUG nova.compute.resource_tracker [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 10:06:41 compute-0 nova_compute[254819]: 2025-12-06 10:06:41.203 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.718s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:06:41 compute-0 ceph-mon[74327]: pgmap v764: 337 pgs: 337 active+clean; 195 MiB data, 344 MiB used, 60 GiB / 60 GiB avail; 330 KiB/s rd, 2.1 MiB/s wr, 53 op/s
Dec 06 10:06:41 compute-0 ceph-mon[74327]: from='client.? 192.168.122.100:0/1975280571' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 10:06:42 compute-0 nova_compute[254819]: 2025-12-06 10:06:42.197 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:06:42 compute-0 nova_compute[254819]: 2025-12-06 10:06:42.221 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:06:42 compute-0 nova_compute[254819]: 2025-12-06 10:06:42.222 254824 DEBUG nova.compute.manager [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 06 10:06:42 compute-0 nova_compute[254819]: 2025-12-06 10:06:42.222 254824 DEBUG nova.compute.manager [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 06 10:06:42 compute-0 nova_compute[254819]: 2025-12-06 10:06:42.446 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Acquiring lock "refresh_cache-9f4c3de7-de9e-45d5-b170-3469a0bd0959" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 10:06:42 compute-0 nova_compute[254819]: 2025-12-06 10:06:42.447 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Acquired lock "refresh_cache-9f4c3de7-de9e-45d5-b170-3469a0bd0959" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 10:06:42 compute-0 nova_compute[254819]: 2025-12-06 10:06:42.447 254824 DEBUG nova.network.neutron [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] [instance: 9f4c3de7-de9e-45d5-b170-3469a0bd0959] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec 06 10:06:42 compute-0 nova_compute[254819]: 2025-12-06 10:06:42.447 254824 DEBUG nova.objects.instance [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Lazy-loading 'info_cache' on Instance uuid 9f4c3de7-de9e-45d5-b170-3469a0bd0959 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 10:06:42 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v765: 337 pgs: 337 active+clean; 195 MiB data, 344 MiB used, 60 GiB / 60 GiB avail; 330 KiB/s rd, 2.1 MiB/s wr, 53 op/s
Dec 06 10:06:42 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:06:42 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:06:42 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:06:42 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:06:42.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:06:43 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:06:43 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:06:43 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:06:43.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:06:43 compute-0 nova_compute[254819]: 2025-12-06 10:06:43.781 254824 DEBUG nova.network.neutron [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] [instance: 9f4c3de7-de9e-45d5-b170-3469a0bd0959] Updating instance_info_cache with network_info: [{"id": "d4daf2d1-1774-4e84-b69b-60ba95ce1518", "address": "fa:16:3e:a5:32:83", "network": {"id": "971faad6-f548-4a54-bc9c-3aa3cca72c6f", "bridge": "br-int", "label": "tempest-network-smoke--878146770", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.212", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd4daf2d1-17", "ovs_interfaceid": "d4daf2d1-1774-4e84-b69b-60ba95ce1518", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 10:06:43 compute-0 nova_compute[254819]: 2025-12-06 10:06:43.797 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Releasing lock "refresh_cache-9f4c3de7-de9e-45d5-b170-3469a0bd0959" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 10:06:43 compute-0 nova_compute[254819]: 2025-12-06 10:06:43.798 254824 DEBUG nova.compute.manager [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] [instance: 9f4c3de7-de9e-45d5-b170-3469a0bd0959] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Dec 06 10:06:43 compute-0 nova_compute[254819]: 2025-12-06 10:06:43.799 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:06:43 compute-0 nova_compute[254819]: 2025-12-06 10:06:43.799 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:06:43 compute-0 nova_compute[254819]: 2025-12-06 10:06:43.800 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:06:43 compute-0 nova_compute[254819]: 2025-12-06 10:06:43.800 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:06:43 compute-0 ceph-mon[74327]: pgmap v765: 337 pgs: 337 active+clean; 195 MiB data, 344 MiB used, 60 GiB / 60 GiB avail; 330 KiB/s rd, 2.1 MiB/s wr, 53 op/s
Dec 06 10:06:44 compute-0 nova_compute[254819]: 2025-12-06 10:06:44.283 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:06:44 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v766: 337 pgs: 337 active+clean; 200 MiB data, 345 MiB used, 60 GiB / 60 GiB avail; 391 KiB/s rd, 2.1 MiB/s wr, 67 op/s
Dec 06 10:06:44 compute-0 ceph-mon[74327]: from='client.? 192.168.122.102:0/3034468691' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 10:06:44 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:06:44 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:06:44 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:06:44.924 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:06:45 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:06:45 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:06:45 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:06:45.006 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:06:45 compute-0 nova_compute[254819]: 2025-12-06 10:06:45.345 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:06:45 compute-0 nova_compute[254819]: 2025-12-06 10:06:45.474 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:06:46 compute-0 systemd[1]: ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258@nfs.cephfs.2.0.compute-0.dfwxck.service: Scheduled restart job, restart counter is at 9.
Dec 06 10:06:46 compute-0 systemd[1]: Stopped Ceph nfs.cephfs.2.0.compute-0.dfwxck for 5ecd3f74-dade-5fc4-92ce-8950ae424258.
Dec 06 10:06:46 compute-0 systemd[1]: ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258@nfs.cephfs.2.0.compute-0.dfwxck.service: Consumed 1.448s CPU time.
Dec 06 10:06:46 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:06:46 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:06:46 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:06:46.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:06:47 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:06:47 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:06:47 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:06:47.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:06:47 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:06:47.262Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 06 10:06:47 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v767: 337 pgs: 337 active+clean; 200 MiB data, 345 MiB used, 60 GiB / 60 GiB avail; 390 KiB/s rd, 2.1 MiB/s wr, 67 op/s
Dec 06 10:06:47 compute-0 systemd[1]: Starting Ceph nfs.cephfs.2.0.compute-0.dfwxck for 5ecd3f74-dade-5fc4-92ce-8950ae424258...
Dec 06 10:06:47 compute-0 ceph-mon[74327]: pgmap v766: 337 pgs: 337 active+clean; 200 MiB data, 345 MiB used, 60 GiB / 60 GiB avail; 391 KiB/s rd, 2.1 MiB/s wr, 67 op/s
Dec 06 10:06:47 compute-0 ceph-mon[74327]: from='client.? 192.168.122.102:0/490054839' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 10:06:47 compute-0 ceph-mon[74327]: from='client.? 192.168.122.102:0/2091989865' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 10:06:47 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 06 10:06:47 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3518606672' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 10:06:47 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 06 10:06:47 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3518606672' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 10:06:47 compute-0 podman[262251]: 2025-12-06 10:06:47.547203276 +0000 UTC m=+0.022871908 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 06 10:06:47 compute-0 podman[262251]: 2025-12-06 10:06:47.68010306 +0000 UTC m=+0.155771662 container create f2727a14c8c776c3cd7e91838d6e5e786e1c034f81a93b6d591f7a9fc5c736a2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 10:06:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a06d1e4ef00f96bb3b2a4a87962e3ae00f248f55a7d8371c9603028aaf9dae7/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Dec 06 10:06:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a06d1e4ef00f96bb3b2a4a87962e3ae00f248f55a7d8371c9603028aaf9dae7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 10:06:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a06d1e4ef00f96bb3b2a4a87962e3ae00f248f55a7d8371c9603028aaf9dae7/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Dec 06 10:06:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a06d1e4ef00f96bb3b2a4a87962e3ae00f248f55a7d8371c9603028aaf9dae7/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.2.0.compute-0.dfwxck-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Dec 06 10:06:47 compute-0 podman[262251]: 2025-12-06 10:06:47.749124041 +0000 UTC m=+0.224792723 container init f2727a14c8c776c3cd7e91838d6e5e786e1c034f81a93b6d591f7a9fc5c736a2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 10:06:47 compute-0 podman[262251]: 2025-12-06 10:06:47.754116476 +0000 UTC m=+0.229785108 container start f2727a14c8c776c3cd7e91838d6e5e786e1c034f81a93b6d591f7a9fc5c736a2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 06 10:06:47 compute-0 bash[262251]: f2727a14c8c776c3cd7e91838d6e5e786e1c034f81a93b6d591f7a9fc5c736a2
Dec 06 10:06:47 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:06:47 : epoch 69340037 : compute-0 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Dec 06 10:06:47 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:06:47 : epoch 69340037 : compute-0 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Dec 06 10:06:47 compute-0 systemd[1]: Started Ceph nfs.cephfs.2.0.compute-0.dfwxck for 5ecd3f74-dade-5fc4-92ce-8950ae424258.
Dec 06 10:06:47 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:06:47 : epoch 69340037 : compute-0 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Dec 06 10:06:47 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:06:47 : epoch 69340037 : compute-0 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Dec 06 10:06:47 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:06:47 : epoch 69340037 : compute-0 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Dec 06 10:06:47 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:06:47 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:06:47 : epoch 69340037 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Dec 06 10:06:47 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:06:47 : epoch 69340037 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Dec 06 10:06:47 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:06:47 : epoch 69340037 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 06 10:06:48 compute-0 ceph-mon[74327]: pgmap v767: 337 pgs: 337 active+clean; 200 MiB data, 345 MiB used, 60 GiB / 60 GiB avail; 390 KiB/s rd, 2.1 MiB/s wr, 67 op/s
Dec 06 10:06:48 compute-0 ceph-mon[74327]: from='client.? 192.168.122.10:0/3518606672' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 10:06:48 compute-0 ceph-mon[74327]: from='client.? 192.168.122.10:0/3518606672' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 10:06:48 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v768: 337 pgs: 337 active+clean; 178 MiB data, 345 MiB used, 60 GiB / 60 GiB avail; 399 KiB/s rd, 2.1 MiB/s wr, 79 op/s
Dec 06 10:06:48 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:06:48 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:06:48 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:06:48.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:06:49 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:06:49 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:06:49 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:06:49.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:06:49 compute-0 nova_compute[254819]: 2025-12-06 10:06:49.287 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:06:49 compute-0 ceph-mon[74327]: pgmap v768: 337 pgs: 337 active+clean; 178 MiB data, 345 MiB used, 60 GiB / 60 GiB avail; 399 KiB/s rd, 2.1 MiB/s wr, 79 op/s
Dec 06 10:06:49 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 06 10:06:49 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3586066368' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 10:06:50 compute-0 ceph-mon[74327]: from='client.? 192.168.122.102:0/3586066368' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 10:06:50 compute-0 nova_compute[254819]: 2025-12-06 10:06:50.476 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:06:50 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v769: 337 pgs: 337 active+clean; 178 MiB data, 345 MiB used, 60 GiB / 60 GiB avail; 70 KiB/s rd, 70 KiB/s wr, 25 op/s
Dec 06 10:06:50 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:06:50] "GET /metrics HTTP/1.1" 200 48462 "" "Prometheus/2.51.0"
Dec 06 10:06:50 compute-0 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:06:50] "GET /metrics HTTP/1.1" 200 48462 "" "Prometheus/2.51.0"
Dec 06 10:06:50 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:06:50 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:06:50 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:06:50.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:06:51 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:06:51 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:06:51 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:06:51.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:06:51 compute-0 ceph-mon[74327]: pgmap v769: 337 pgs: 337 active+clean; 178 MiB data, 345 MiB used, 60 GiB / 60 GiB avail; 70 KiB/s rd, 70 KiB/s wr, 25 op/s
Dec 06 10:06:51 compute-0 sudo[262313]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 10:06:51 compute-0 sudo[262313]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:06:51 compute-0 sudo[262313]: pam_unix(sudo:session): session closed for user root
Dec 06 10:06:52 compute-0 ovn_controller[152417]: 2025-12-06T10:06:52Z|00032|binding|INFO|Releasing lport 5fb89a54-8c63-4d33-bca3-d7130382f3f8 from this chassis (sb_readonly=0)
Dec 06 10:06:52 compute-0 nova_compute[254819]: 2025-12-06 10:06:52.516 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:06:52 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v770: 337 pgs: 337 active+clean; 178 MiB data, 345 MiB used, 60 GiB / 60 GiB avail; 70 KiB/s rd, 70 KiB/s wr, 25 op/s
Dec 06 10:06:52 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:06:52 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:06:52 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:06:52 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:06:52.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:06:53 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:06:53 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:06:53 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:06:53.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:06:53 compute-0 nova_compute[254819]: 2025-12-06 10:06:53.347 254824 DEBUG nova.compute.manager [req-f9e86d60-d842-4860-8235-15343b77bb8d req-885348a8-7759-4b1c-8d8e-ca905092f03a d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 9f4c3de7-de9e-45d5-b170-3469a0bd0959] Received event network-changed-d4daf2d1-1774-4e84-b69b-60ba95ce1518 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 10:06:53 compute-0 nova_compute[254819]: 2025-12-06 10:06:53.347 254824 DEBUG nova.compute.manager [req-f9e86d60-d842-4860-8235-15343b77bb8d req-885348a8-7759-4b1c-8d8e-ca905092f03a d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 9f4c3de7-de9e-45d5-b170-3469a0bd0959] Refreshing instance network info cache due to event network-changed-d4daf2d1-1774-4e84-b69b-60ba95ce1518. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 10:06:53 compute-0 nova_compute[254819]: 2025-12-06 10:06:53.348 254824 DEBUG oslo_concurrency.lockutils [req-f9e86d60-d842-4860-8235-15343b77bb8d req-885348a8-7759-4b1c-8d8e-ca905092f03a d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Acquiring lock "refresh_cache-9f4c3de7-de9e-45d5-b170-3469a0bd0959" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 10:06:53 compute-0 nova_compute[254819]: 2025-12-06 10:06:53.348 254824 DEBUG oslo_concurrency.lockutils [req-f9e86d60-d842-4860-8235-15343b77bb8d req-885348a8-7759-4b1c-8d8e-ca905092f03a d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Acquired lock "refresh_cache-9f4c3de7-de9e-45d5-b170-3469a0bd0959" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 10:06:53 compute-0 nova_compute[254819]: 2025-12-06 10:06:53.348 254824 DEBUG nova.network.neutron [req-f9e86d60-d842-4860-8235-15343b77bb8d req-885348a8-7759-4b1c-8d8e-ca905092f03a d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 9f4c3de7-de9e-45d5-b170-3469a0bd0959] Refreshing network info cache for port d4daf2d1-1774-4e84-b69b-60ba95ce1518 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 06 10:06:53 compute-0 nova_compute[254819]: 2025-12-06 10:06:53.442 254824 DEBUG oslo_concurrency.lockutils [None req-4fffce70-9b62-45c0-9f41-db6016a5ec2c 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Acquiring lock "9f4c3de7-de9e-45d5-b170-3469a0bd0959" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:06:53 compute-0 nova_compute[254819]: 2025-12-06 10:06:53.443 254824 DEBUG oslo_concurrency.lockutils [None req-4fffce70-9b62-45c0-9f41-db6016a5ec2c 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lock "9f4c3de7-de9e-45d5-b170-3469a0bd0959" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:06:53 compute-0 nova_compute[254819]: 2025-12-06 10:06:53.443 254824 DEBUG oslo_concurrency.lockutils [None req-4fffce70-9b62-45c0-9f41-db6016a5ec2c 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Acquiring lock "9f4c3de7-de9e-45d5-b170-3469a0bd0959-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:06:53 compute-0 nova_compute[254819]: 2025-12-06 10:06:53.444 254824 DEBUG oslo_concurrency.lockutils [None req-4fffce70-9b62-45c0-9f41-db6016a5ec2c 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lock "9f4c3de7-de9e-45d5-b170-3469a0bd0959-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:06:53 compute-0 nova_compute[254819]: 2025-12-06 10:06:53.444 254824 DEBUG oslo_concurrency.lockutils [None req-4fffce70-9b62-45c0-9f41-db6016a5ec2c 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lock "9f4c3de7-de9e-45d5-b170-3469a0bd0959-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
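
Note: the Acquiring/acquired/released triplets above come from oslo.concurrency. nova-compute serializes work per instance by taking an in-process lock named after the instance UUID, with an "-events" suffix for the event-bookkeeping lock. A rough sketch of the pattern, assuming oslo.concurrency is installed:

    from oslo_concurrency import lockutils

    # Context-manager form, mirroring the do_terminate_instance lock above;
    # the instance UUID is copied from the log and serves only as a lock name.
    with lockutils.lock("9f4c3de7-de9e-45d5-b170-3469a0bd0959"):
        pass  # only one terminate path runs per instance at a time

    # Decorator form, as used for the "-events" lock:
    @lockutils.synchronized("9f4c3de7-de9e-45d5-b170-3469a0bd0959-events")
    def _clear_events():
        pass
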
Dec 06 10:06:53 compute-0 nova_compute[254819]: 2025-12-06 10:06:53.446 254824 INFO nova.compute.manager [None req-4fffce70-9b62-45c0-9f41-db6016a5ec2c 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 9f4c3de7-de9e-45d5-b170-3469a0bd0959] Terminating instance
Dec 06 10:06:53 compute-0 nova_compute[254819]: 2025-12-06 10:06:53.447 254824 DEBUG nova.compute.manager [None req-4fffce70-9b62-45c0-9f41-db6016a5ec2c 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 9f4c3de7-de9e-45d5-b170-3469a0bd0959] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 06 10:06:53 compute-0 kernel: tapd4daf2d1-17 (unregistering): left promiscuous mode
Dec 06 10:06:53 compute-0 NetworkManager[48882]: <info>  [1765015613.5091] device (tapd4daf2d1-17): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 06 10:06:53 compute-0 nova_compute[254819]: 2025-12-06 10:06:53.523 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:06:53 compute-0 ovn_controller[152417]: 2025-12-06T10:06:53Z|00033|binding|INFO|Releasing lport d4daf2d1-1774-4e84-b69b-60ba95ce1518 from this chassis (sb_readonly=0)
Dec 06 10:06:53 compute-0 ovn_controller[152417]: 2025-12-06T10:06:53Z|00034|binding|INFO|Setting lport d4daf2d1-1774-4e84-b69b-60ba95ce1518 down in Southbound
Dec 06 10:06:53 compute-0 ovn_controller[152417]: 2025-12-06T10:06:53Z|00035|binding|INFO|Removing iface tapd4daf2d1-17 ovn-installed in OVS
Dec 06 10:06:53 compute-0 nova_compute[254819]: 2025-12-06 10:06:53.527 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:06:53 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:06:53.532 162267 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a5:32:83 10.100.0.14'], port_security=['fa:16:3e:a5:32:83 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '9f4c3de7-de9e-45d5-b170-3469a0bd0959', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-971faad6-f548-4a54-bc9c-3aa3cca72c6f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '92b402c8d3e2476abc98be42a1e6d34e', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'c7c9b5ec-d7a8-44ba-8a79-a0a05df423dd', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=83e40234-7108-4b28-a3a7-b2ef4fad45ac, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f70c28558b0>], logical_port=d4daf2d1-1774-4e84-b69b-60ba95ce1518) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f70c28558b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 10:06:53 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:06:53.534 162267 INFO neutron.agent.ovn.metadata.agent [-] Port d4daf2d1-1774-4e84-b69b-60ba95ce1518 in datapath 971faad6-f548-4a54-bc9c-3aa3cca72c6f unbound from our chassis
Dec 06 10:06:53 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:06:53.535 162267 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 971faad6-f548-4a54-bc9c-3aa3cca72c6f, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 06 10:06:53 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:06:53.536 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[cb4ab85e-8a3f-4d2d-b735-7461844b8433]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:06:53 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:06:53.537 162267 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-971faad6-f548-4a54-bc9c-3aa3cca72c6f namespace which is not needed anymore
Dec 06 10:06:53 compute-0 nova_compute[254819]: 2025-12-06 10:06:53.546 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:06:53 compute-0 systemd[1]: machine-qemu\x2d1\x2dinstance\x2d00000001.scope: Deactivated successfully.
Dec 06 10:06:53 compute-0 systemd[1]: machine-qemu\x2d1\x2dinstance\x2d00000001.scope: Consumed 16.686s CPU time.
Dec 06 10:06:53 compute-0 systemd-machined[216202]: Machine qemu-1-instance-00000001 terminated.
Dec 06 10:06:53 compute-0 nova_compute[254819]: 2025-12-06 10:06:53.690 254824 INFO nova.virt.libvirt.driver [-] [instance: 9f4c3de7-de9e-45d5-b170-3469a0bd0959] Instance destroyed successfully.
Dec 06 10:06:53 compute-0 nova_compute[254819]: 2025-12-06 10:06:53.691 254824 DEBUG nova.objects.instance [None req-4fffce70-9b62-45c0-9f41-db6016a5ec2c 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lazy-loading 'resources' on Instance uuid 9f4c3de7-de9e-45d5-b170-3469a0bd0959 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 10:06:53 compute-0 nova_compute[254819]: 2025-12-06 10:06:53.703 254824 DEBUG nova.virt.libvirt.vif [None req-4fffce70-9b62-45c0-9f41-db6016a5ec2c 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T10:05:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1430712907',display_name='tempest-TestNetworkBasicOps-server-1430712907',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1430712907',id=1,image_ref='9489b8a5-a798-4e26-87f9-59bb1eb2e6fd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCAfMPOvgHaRlqGgLXkto0FcIKRTuQseDyB3UM7MdJ4qc4V82jaOJG1wyoIF6xrRvoJcXVE+RFVPueMCiHrP5rYBgCoIkNmahi09ifuS6NMzBYr/VB4Uf4Lhhp6Gu2WU0Q==',key_name='tempest-TestNetworkBasicOps-1259992561',keypairs=<?>,launch_index=0,launched_at=2025-12-06T10:05:41Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='92b402c8d3e2476abc98be42a1e6d34e',ramdisk_id='',reservation_id='r-m1904u1h',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='9489b8a5-a798-4e26-87f9-59bb1eb2e6fd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1971100882',owner_user_name='tempest-TestNetworkBasicOps-1971100882-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T10:05:41Z,user_data=None,user_id='03615580775245e6ae335ee9d785611f',uuid=9f4c3de7-de9e-45d5-b170-3469a0bd0959,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d4daf2d1-1774-4e84-b69b-60ba95ce1518", "address": "fa:16:3e:a5:32:83", "network": {"id": "971faad6-f548-4a54-bc9c-3aa3cca72c6f", "bridge": "br-int", "label": "tempest-network-smoke--878146770", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.212", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd4daf2d1-17", "ovs_interfaceid": "d4daf2d1-1774-4e84-b69b-60ba95ce1518", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 06 10:06:53 compute-0 nova_compute[254819]: 2025-12-06 10:06:53.704 254824 DEBUG nova.network.os_vif_util [None req-4fffce70-9b62-45c0-9f41-db6016a5ec2c 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Converting VIF {"id": "d4daf2d1-1774-4e84-b69b-60ba95ce1518", "address": "fa:16:3e:a5:32:83", "network": {"id": "971faad6-f548-4a54-bc9c-3aa3cca72c6f", "bridge": "br-int", "label": "tempest-network-smoke--878146770", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.212", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd4daf2d1-17", "ovs_interfaceid": "d4daf2d1-1774-4e84-b69b-60ba95ce1518", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 10:06:53 compute-0 nova_compute[254819]: 2025-12-06 10:06:53.705 254824 DEBUG nova.network.os_vif_util [None req-4fffce70-9b62-45c0-9f41-db6016a5ec2c 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:a5:32:83,bridge_name='br-int',has_traffic_filtering=True,id=d4daf2d1-1774-4e84-b69b-60ba95ce1518,network=Network(971faad6-f548-4a54-bc9c-3aa3cca72c6f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd4daf2d1-17') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 10:06:53 compute-0 nova_compute[254819]: 2025-12-06 10:06:53.705 254824 DEBUG os_vif [None req-4fffce70-9b62-45c0-9f41-db6016a5ec2c 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:a5:32:83,bridge_name='br-int',has_traffic_filtering=True,id=d4daf2d1-1774-4e84-b69b-60ba95ce1518,network=Network(971faad6-f548-4a54-bc9c-3aa3cca72c6f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd4daf2d1-17') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 06 10:06:53 compute-0 neutron-haproxy-ovnmeta-971faad6-f548-4a54-bc9c-3aa3cca72c6f[260226]: [NOTICE]   (260240) : haproxy version is 2.8.14-c23fe91
Dec 06 10:06:53 compute-0 neutron-haproxy-ovnmeta-971faad6-f548-4a54-bc9c-3aa3cca72c6f[260226]: [NOTICE]   (260240) : path to executable is /usr/sbin/haproxy
Dec 06 10:06:53 compute-0 neutron-haproxy-ovnmeta-971faad6-f548-4a54-bc9c-3aa3cca72c6f[260226]: [WARNING]  (260240) : Exiting Master process...
Dec 06 10:06:53 compute-0 nova_compute[254819]: 2025-12-06 10:06:53.708 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:06:53 compute-0 nova_compute[254819]: 2025-12-06 10:06:53.708 254824 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd4daf2d1-17, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
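
Note: the DelPortCommand transaction above is ovsdbapp removing the instance's tap port from br-int; if_exists=True makes the delete idempotent. A sketch of a roughly equivalent CLI call, driven from Python here only to keep one language across these notes:

    import subprocess

    # Rough CLI equivalent of DelPortCommand(port=tapd4daf2d1-17,
    # bridge=br-int, if_exists=True); --if-exists suppresses the error
    # when the port is already gone.
    subprocess.run(
        ["ovs-vsctl", "--if-exists", "del-port", "br-int", "tapd4daf2d1-17"],
        check=True,
    )
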
Dec 06 10:06:53 compute-0 neutron-haproxy-ovnmeta-971faad6-f548-4a54-bc9c-3aa3cca72c6f[260226]: [ALERT]    (260240) : Current worker (260247) exited with code 143 (Terminated)
Dec 06 10:06:53 compute-0 neutron-haproxy-ovnmeta-971faad6-f548-4a54-bc9c-3aa3cca72c6f[260226]: [WARNING]  (260240) : All workers exited. Exiting... (0)
Dec 06 10:06:53 compute-0 nova_compute[254819]: 2025-12-06 10:06:53.711 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:06:53 compute-0 systemd[1]: libpod-21554fb920b8cd6e77291647b87089df9cd158749cc638bf38ae1f864899c4e4.scope: Deactivated successfully.
Dec 06 10:06:53 compute-0 conmon[260226]: conmon 21554fb920b8cd6e7729 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-21554fb920b8cd6e77291647b87089df9cd158749cc638bf38ae1f864899c4e4.scope/container/memory.events
Dec 06 10:06:53 compute-0 nova_compute[254819]: 2025-12-06 10:06:53.714 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:06:53 compute-0 podman[262365]: 2025-12-06 10:06:53.71861832 +0000 UTC m=+0.063155684 container died 21554fb920b8cd6e77291647b87089df9cd158749cc638bf38ae1f864899c4e4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=neutron-haproxy-ovnmeta-971faad6-f548-4a54-bc9c-3aa3cca72c6f, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Dec 06 10:06:53 compute-0 nova_compute[254819]: 2025-12-06 10:06:53.720 254824 INFO os_vif [None req-4fffce70-9b62-45c0-9f41-db6016a5ec2c 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:a5:32:83,bridge_name='br-int',has_traffic_filtering=True,id=d4daf2d1-1774-4e84-b69b-60ba95ce1518,network=Network(971faad6-f548-4a54-bc9c-3aa3cca72c6f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd4daf2d1-17')
Dec 06 10:06:53 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-21554fb920b8cd6e77291647b87089df9cd158749cc638bf38ae1f864899c4e4-userdata-shm.mount: Deactivated successfully.
Dec 06 10:06:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-d46aa1eb56c671473c7b08a45b3cc7be7a0d7e60ad9f8373b5056483f751a6f5-merged.mount: Deactivated successfully.
Dec 06 10:06:53 compute-0 podman[262365]: 2025-12-06 10:06:53.790810167 +0000 UTC m=+0.135347531 container cleanup 21554fb920b8cd6e77291647b87089df9cd158749cc638bf38ae1f864899c4e4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=neutron-haproxy-ovnmeta-971faad6-f548-4a54-bc9c-3aa3cca72c6f, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 10:06:53 compute-0 systemd[1]: libpod-conmon-21554fb920b8cd6e77291647b87089df9cd158749cc638bf38ae1f864899c4e4.scope: Deactivated successfully.
Dec 06 10:06:53 compute-0 ceph-mon[74327]: pgmap v770: 337 pgs: 337 active+clean; 178 MiB data, 345 MiB used, 60 GiB / 60 GiB avail; 70 KiB/s rd, 70 KiB/s wr, 25 op/s
Dec 06 10:06:53 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:06:53 : epoch 69340037 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 06 10:06:53 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:06:53 : epoch 69340037 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 06 10:06:53 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:06:53 : epoch 69340037 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 06 10:06:53 compute-0 podman[262425]: 2025-12-06 10:06:53.866752825 +0000 UTC m=+0.051050348 container remove 21554fb920b8cd6e77291647b87089df9cd158749cc638bf38ae1f864899c4e4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=neutron-haproxy-ovnmeta-971faad6-f548-4a54-bc9c-3aa3cca72c6f, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 10:06:53 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:06:53.872 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[d698ad1e-289b-4e39-aa1f-9217c550b24a]: (4, ('Sat Dec  6 10:06:53 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-971faad6-f548-4a54-bc9c-3aa3cca72c6f (21554fb920b8cd6e77291647b87089df9cd158749cc638bf38ae1f864899c4e4)\n21554fb920b8cd6e77291647b87089df9cd158749cc638bf38ae1f864899c4e4\nSat Dec  6 10:06:53 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-971faad6-f548-4a54-bc9c-3aa3cca72c6f (21554fb920b8cd6e77291647b87089df9cd158749cc638bf38ae1f864899c4e4)\n21554fb920b8cd6e77291647b87089df9cd158749cc638bf38ae1f864899c4e4\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:06:53 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:06:53.874 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[7c09be19-4a0c-482e-985e-5ea17f0b1576]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:06:53 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:06:53.875 162267 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap971faad6-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 10:06:53 compute-0 nova_compute[254819]: 2025-12-06 10:06:53.882 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:06:53 compute-0 kernel: tap971faad6-f0: left promiscuous mode
Dec 06 10:06:53 compute-0 nova_compute[254819]: 2025-12-06 10:06:53.900 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:06:53 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:06:53.904 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[fa1a2fdb-0723-461c-bd80-f036b4ffb785]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:06:53 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:06:53.915 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[4d40db6b-8be3-4dce-aeaa-5bb463191d4e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:06:53 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:06:53.917 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[bccfd7ba-8e9d-4b3c-b0fd-202ca92ef82b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:06:53 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 06 10:06:53 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 10:06:53 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:06:53.942 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[db436cca-9741-4406-bf04-3f49288700d6]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 391541, 'reachable_time': 37060, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 262441, 'error': None, 'target': 'ovnmeta-971faad6-f548-4a54-bc9c-3aa3cca72c6f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:06:53 compute-0 systemd[1]: run-netns-ovnmeta\x2d971faad6\x2df548\x2d4a54\x2dbc9c\x2d3aa3cca72c6f.mount: Deactivated successfully.
Dec 06 10:06:53 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:06:53.958 162385 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-971faad6-f548-4a54-bc9c-3aa3cca72c6f deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 06 10:06:53 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:06:53.959 162385 DEBUG oslo.privsep.daemon [-] privsep: reply[b31dd3dd-0b92-470e-a06d-9dc571fe551e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
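
Note: remove_netns above is neutron's privileged helper deleting the now-empty ovnmeta- namespace; to the best of my knowledge neutron's privileged ip_lib does this through pyroute2 rather than by shelling out to `ip netns delete`. A minimal sketch under that assumption (requires root):

    from pyroute2 import netns

    # Delete the per-network metadata namespace once its last VIF is gone,
    # as neutron.privileged.agent.linux.ip_lib.remove_netns logs above.
    ns = "ovnmeta-971faad6-f548-4a54-bc9c-3aa3cca72c6f"
    if ns in netns.listnetns():
        netns.remove(ns)
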
Dec 06 10:06:53 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 10:06:53 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 10:06:54 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 10:06:54 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 10:06:54 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 10:06:54 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 10:06:54 compute-0 nova_compute[254819]: 2025-12-06 10:06:54.120 254824 INFO nova.virt.libvirt.driver [None req-4fffce70-9b62-45c0-9f41-db6016a5ec2c 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 9f4c3de7-de9e-45d5-b170-3469a0bd0959] Deleting instance files /var/lib/nova/instances/9f4c3de7-de9e-45d5-b170-3469a0bd0959_del
Dec 06 10:06:54 compute-0 nova_compute[254819]: 2025-12-06 10:06:54.121 254824 INFO nova.virt.libvirt.driver [None req-4fffce70-9b62-45c0-9f41-db6016a5ec2c 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 9f4c3de7-de9e-45d5-b170-3469a0bd0959] Deletion of /var/lib/nova/instances/9f4c3de7-de9e-45d5-b170-3469a0bd0959_del complete
Dec 06 10:06:54 compute-0 nova_compute[254819]: 2025-12-06 10:06:54.188 254824 DEBUG nova.virt.libvirt.host [None req-4fffce70-9b62-45c0-9f41-db6016a5ec2c 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Checking UEFI support for host arch (x86_64) supports_uefi /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1754
Dec 06 10:06:54 compute-0 nova_compute[254819]: 2025-12-06 10:06:54.188 254824 INFO nova.virt.libvirt.host [None req-4fffce70-9b62-45c0-9f41-db6016a5ec2c 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] UEFI support detected
Dec 06 10:06:54 compute-0 nova_compute[254819]: 2025-12-06 10:06:54.190 254824 INFO nova.compute.manager [None req-4fffce70-9b62-45c0-9f41-db6016a5ec2c 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 9f4c3de7-de9e-45d5-b170-3469a0bd0959] Took 0.74 seconds to destroy the instance on the hypervisor.
Dec 06 10:06:54 compute-0 nova_compute[254819]: 2025-12-06 10:06:54.190 254824 DEBUG oslo.service.loopingcall [None req-4fffce70-9b62-45c0-9f41-db6016a5ec2c 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 06 10:06:54 compute-0 nova_compute[254819]: 2025-12-06 10:06:54.191 254824 DEBUG nova.compute.manager [-] [instance: 9f4c3de7-de9e-45d5-b170-3469a0bd0959] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 06 10:06:54 compute-0 nova_compute[254819]: 2025-12-06 10:06:54.191 254824 DEBUG nova.network.neutron [-] [instance: 9f4c3de7-de9e-45d5-b170-3469a0bd0959] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 06 10:06:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:06:54.238 162267 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:06:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:06:54.239 162267 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:06:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:06:54.240 162267 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:06:54 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v771: 337 pgs: 337 active+clean; 121 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 81 KiB/s rd, 71 KiB/s wr, 43 op/s
Dec 06 10:06:54 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-haproxy-nfs-cephfs-compute-0-fzuvue[96127]: [WARNING] 339/100654 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Dec 06 10:06:54 compute-0 nova_compute[254819]: 2025-12-06 10:06:54.811 254824 DEBUG nova.network.neutron [req-f9e86d60-d842-4860-8235-15343b77bb8d req-885348a8-7759-4b1c-8d8e-ca905092f03a d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 9f4c3de7-de9e-45d5-b170-3469a0bd0959] Updated VIF entry in instance network info cache for port d4daf2d1-1774-4e84-b69b-60ba95ce1518. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 06 10:06:54 compute-0 nova_compute[254819]: 2025-12-06 10:06:54.811 254824 DEBUG nova.network.neutron [req-f9e86d60-d842-4860-8235-15343b77bb8d req-885348a8-7759-4b1c-8d8e-ca905092f03a d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 9f4c3de7-de9e-45d5-b170-3469a0bd0959] Updating instance_info_cache with network_info: [{"id": "d4daf2d1-1774-4e84-b69b-60ba95ce1518", "address": "fa:16:3e:a5:32:83", "network": {"id": "971faad6-f548-4a54-bc9c-3aa3cca72c6f", "bridge": "br-int", "label": "tempest-network-smoke--878146770", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd4daf2d1-17", "ovs_interfaceid": "d4daf2d1-1774-4e84-b69b-60ba95ce1518", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 10:06:54 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 10:06:54 compute-0 nova_compute[254819]: 2025-12-06 10:06:54.844 254824 DEBUG nova.network.neutron [-] [instance: 9f4c3de7-de9e-45d5-b170-3469a0bd0959] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 10:06:54 compute-0 nova_compute[254819]: 2025-12-06 10:06:54.846 254824 DEBUG oslo_concurrency.lockutils [req-f9e86d60-d842-4860-8235-15343b77bb8d req-885348a8-7759-4b1c-8d8e-ca905092f03a d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Releasing lock "refresh_cache-9f4c3de7-de9e-45d5-b170-3469a0bd0959" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 10:06:54 compute-0 nova_compute[254819]: 2025-12-06 10:06:54.868 254824 INFO nova.compute.manager [-] [instance: 9f4c3de7-de9e-45d5-b170-3469a0bd0959] Took 0.68 seconds to deallocate network for instance.
Dec 06 10:06:54 compute-0 nova_compute[254819]: 2025-12-06 10:06:54.914 254824 DEBUG oslo_concurrency.lockutils [None req-4fffce70-9b62-45c0-9f41-db6016a5ec2c 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:06:54 compute-0 nova_compute[254819]: 2025-12-06 10:06:54.915 254824 DEBUG oslo_concurrency.lockutils [None req-4fffce70-9b62-45c0-9f41-db6016a5ec2c 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:06:54 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:06:54 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:06:54 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:06:54.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:06:54 compute-0 nova_compute[254819]: 2025-12-06 10:06:54.968 254824 DEBUG oslo_concurrency.processutils [None req-4fffce70-9b62-45c0-9f41-db6016a5ec2c 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 10:06:55 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:06:55 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:06:55 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:06:55.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:06:55 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 06 10:06:55 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/604483348' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 10:06:55 compute-0 nova_compute[254819]: 2025-12-06 10:06:55.430 254824 DEBUG oslo_concurrency.processutils [None req-4fffce70-9b62-45c0-9f41-db6016a5ec2c 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
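
Note: the processutils pair above (Running cmd / returned: 0 in 0.462s) is nova shelling out to the ceph CLI to read cluster and pool usage for its Ceph-backed storage reporting. A sketch of the same call plus a parse of its output; the JSON key names are an assumption about the `ceph df --format=json` layout and can differ between Ceph releases:

    import json
    import subprocess

    # The exact command logged by oslo_concurrency.processutils above.
    out = subprocess.run(
        ["ceph", "df", "--format=json", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"],
        capture_output=True, text=True, check=True,
    ).stdout

    df = json.loads(out)
    stats = df["stats"]  # assumed top-level totals block
    print("avail:", stats["total_avail_bytes"], "of", stats["total_bytes"])
    for pool in df.get("pools", []):  # assumed per-pool list
        print(pool["name"], pool["stats"].get("bytes_used"))
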
Dec 06 10:06:55 compute-0 nova_compute[254819]: 2025-12-06 10:06:55.439 254824 DEBUG nova.compute.manager [req-cfaebd56-3a50-466f-a428-0b39e62f1d9f req-686a02bf-673c-4947-b39e-5c87abb17cfc d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 9f4c3de7-de9e-45d5-b170-3469a0bd0959] Received event network-vif-unplugged-d4daf2d1-1774-4e84-b69b-60ba95ce1518 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 10:06:55 compute-0 nova_compute[254819]: 2025-12-06 10:06:55.440 254824 DEBUG oslo_concurrency.lockutils [req-cfaebd56-3a50-466f-a428-0b39e62f1d9f req-686a02bf-673c-4947-b39e-5c87abb17cfc d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Acquiring lock "9f4c3de7-de9e-45d5-b170-3469a0bd0959-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:06:55 compute-0 nova_compute[254819]: 2025-12-06 10:06:55.441 254824 DEBUG oslo_concurrency.lockutils [req-cfaebd56-3a50-466f-a428-0b39e62f1d9f req-686a02bf-673c-4947-b39e-5c87abb17cfc d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Lock "9f4c3de7-de9e-45d5-b170-3469a0bd0959-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:06:55 compute-0 nova_compute[254819]: 2025-12-06 10:06:55.442 254824 DEBUG oslo_concurrency.lockutils [req-cfaebd56-3a50-466f-a428-0b39e62f1d9f req-686a02bf-673c-4947-b39e-5c87abb17cfc d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Lock "9f4c3de7-de9e-45d5-b170-3469a0bd0959-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:06:55 compute-0 nova_compute[254819]: 2025-12-06 10:06:55.442 254824 DEBUG nova.compute.manager [req-cfaebd56-3a50-466f-a428-0b39e62f1d9f req-686a02bf-673c-4947-b39e-5c87abb17cfc d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 9f4c3de7-de9e-45d5-b170-3469a0bd0959] No waiting events found dispatching network-vif-unplugged-d4daf2d1-1774-4e84-b69b-60ba95ce1518 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 10:06:55 compute-0 nova_compute[254819]: 2025-12-06 10:06:55.443 254824 WARNING nova.compute.manager [req-cfaebd56-3a50-466f-a428-0b39e62f1d9f req-686a02bf-673c-4947-b39e-5c87abb17cfc d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 9f4c3de7-de9e-45d5-b170-3469a0bd0959] Received unexpected event network-vif-unplugged-d4daf2d1-1774-4e84-b69b-60ba95ce1518 for instance with vm_state deleted and task_state None.
Dec 06 10:06:55 compute-0 nova_compute[254819]: 2025-12-06 10:06:55.443 254824 DEBUG nova.compute.manager [req-cfaebd56-3a50-466f-a428-0b39e62f1d9f req-686a02bf-673c-4947-b39e-5c87abb17cfc d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 9f4c3de7-de9e-45d5-b170-3469a0bd0959] Received event network-vif-plugged-d4daf2d1-1774-4e84-b69b-60ba95ce1518 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 10:06:55 compute-0 nova_compute[254819]: 2025-12-06 10:06:55.444 254824 DEBUG oslo_concurrency.lockutils [req-cfaebd56-3a50-466f-a428-0b39e62f1d9f req-686a02bf-673c-4947-b39e-5c87abb17cfc d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Acquiring lock "9f4c3de7-de9e-45d5-b170-3469a0bd0959-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:06:55 compute-0 nova_compute[254819]: 2025-12-06 10:06:55.444 254824 DEBUG oslo_concurrency.lockutils [req-cfaebd56-3a50-466f-a428-0b39e62f1d9f req-686a02bf-673c-4947-b39e-5c87abb17cfc d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Lock "9f4c3de7-de9e-45d5-b170-3469a0bd0959-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:06:55 compute-0 nova_compute[254819]: 2025-12-06 10:06:55.445 254824 DEBUG oslo_concurrency.lockutils [req-cfaebd56-3a50-466f-a428-0b39e62f1d9f req-686a02bf-673c-4947-b39e-5c87abb17cfc d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Lock "9f4c3de7-de9e-45d5-b170-3469a0bd0959-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:06:55 compute-0 nova_compute[254819]: 2025-12-06 10:06:55.445 254824 DEBUG nova.compute.manager [req-cfaebd56-3a50-466f-a428-0b39e62f1d9f req-686a02bf-673c-4947-b39e-5c87abb17cfc d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 9f4c3de7-de9e-45d5-b170-3469a0bd0959] No waiting events found dispatching network-vif-plugged-d4daf2d1-1774-4e84-b69b-60ba95ce1518 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 10:06:55 compute-0 nova_compute[254819]: 2025-12-06 10:06:55.446 254824 WARNING nova.compute.manager [req-cfaebd56-3a50-466f-a428-0b39e62f1d9f req-686a02bf-673c-4947-b39e-5c87abb17cfc d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 9f4c3de7-de9e-45d5-b170-3469a0bd0959] Received unexpected event network-vif-plugged-d4daf2d1-1774-4e84-b69b-60ba95ce1518 for instance with vm_state deleted and task_state None.
Dec 06 10:06:55 compute-0 nova_compute[254819]: 2025-12-06 10:06:55.446 254824 DEBUG nova.compute.manager [req-cfaebd56-3a50-466f-a428-0b39e62f1d9f req-686a02bf-673c-4947-b39e-5c87abb17cfc d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 9f4c3de7-de9e-45d5-b170-3469a0bd0959] Received event network-vif-deleted-d4daf2d1-1774-4e84-b69b-60ba95ce1518 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 10:06:55 compute-0 nova_compute[254819]: 2025-12-06 10:06:55.454 254824 DEBUG nova.compute.provider_tree [None req-4fffce70-9b62-45c0-9f41-db6016a5ec2c 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Inventory has not changed in ProviderTree for provider: 06a9c7d1-c74c-47ea-9e97-16acfab6aa88 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 10:06:55 compute-0 nova_compute[254819]: 2025-12-06 10:06:55.477 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:06:55 compute-0 nova_compute[254819]: 2025-12-06 10:06:55.484 254824 DEBUG nova.scheduler.client.report [None req-4fffce70-9b62-45c0-9f41-db6016a5ec2c 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Inventory has not changed for provider 06a9c7d1-c74c-47ea-9e97-16acfab6aa88 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
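
Note: the inventory dict in the line above is what the resource tracker reports to placement for provider 06a9c7d1-c74c-47ea-9e97-16acfab6aa88. The capacity the scheduler can consume per resource class is, as far as I understand placement, (total - reserved) * allocation_ratio; a worked example with the logged values:

    # Schedulable capacity derived from the inventory logged above.
    # Formula assumed: (total - reserved) * allocation_ratio.
    inventory = {
        "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        cap = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(rc, cap)  # MEMORY_MB 7168.0, VCPU 32.0, DISK_GB 52.2
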
Dec 06 10:06:55 compute-0 nova_compute[254819]: 2025-12-06 10:06:55.506 254824 DEBUG oslo_concurrency.lockutils [None req-4fffce70-9b62-45c0-9f41-db6016a5ec2c 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.591s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:06:55 compute-0 nova_compute[254819]: 2025-12-06 10:06:55.538 254824 INFO nova.scheduler.client.report [None req-4fffce70-9b62-45c0-9f41-db6016a5ec2c 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Deleted allocations for instance 9f4c3de7-de9e-45d5-b170-3469a0bd0959
Dec 06 10:06:55 compute-0 nova_compute[254819]: 2025-12-06 10:06:55.600 254824 DEBUG oslo_concurrency.lockutils [None req-4fffce70-9b62-45c0-9f41-db6016a5ec2c 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lock "9f4c3de7-de9e-45d5-b170-3469a0bd0959" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.157s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:06:55 compute-0 ceph-mon[74327]: pgmap v771: 337 pgs: 337 active+clean; 121 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 81 KiB/s rd, 71 KiB/s wr, 43 op/s
Dec 06 10:06:55 compute-0 ceph-mon[74327]: from='client.? 192.168.122.100:0/604483348' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 10:06:56 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v772: 337 pgs: 337 active+clean; 121 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 15 KiB/s wr, 29 op/s
Dec 06 10:06:56 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:06:56 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:06:56 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:06:56.935 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:06:57 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:06:57 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:06:57 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:06:57.024 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:06:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:06:57.263Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
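
Here alertmanager's dispatcher gave up on the ceph-dashboard webhook receivers on compute-1 and compute-2 after two attempts each ("context deadline exceeded"), while the compute-0 receiver is still answering (the mgr logs a 200 for POST /api/prometheus_receiver shortly below). A minimal reachability probe against the same endpoints; hostnames and path are taken verbatim from the log line, and the 2 s timeout merely stands in for alertmanager's deadline:

    import urllib.request

    # Minimal sketch: probe the two webhook receivers the dispatcher
    # timed out against in the line above.
    for host in ('compute-1.ctlplane.example.com',
                 'compute-2.ctlplane.example.com'):
        url = f'http://{host}:8443/api/prometheus_receiver'
        req = urllib.request.Request(url, data=b'{}', method='POST')
        try:
            with urllib.request.urlopen(req, timeout=2) as resp:
                print(host, resp.status)
        except Exception as exc:
            print(host, 'unreachable:', exc)
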
Dec 06 10:06:57 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:06:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:06:57 : epoch 69340037 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 06 10:06:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:06:57 : epoch 69340037 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 06 10:06:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:06:57 : epoch 69340037 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 06 10:06:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:06:58 : epoch 69340037 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 06 10:06:58 compute-0 nova_compute[254819]: 2025-12-06 10:06:58.712 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:06:58 compute-0 ceph-mon[74327]: pgmap v772: 337 pgs: 337 active+clean; 121 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 15 KiB/s wr, 29 op/s
Dec 06 10:06:58 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v773: 337 pgs: 337 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 39 KiB/s rd, 16 KiB/s wr, 58 op/s
Dec 06 10:06:58 compute-0 ceph-mgr[74618]: [dashboard INFO request] [192.168.122.100:57504] [POST] [200] [0.002s] [4.0B] [4cb14160-5b65-4afb-a82e-30454655d65e] /api/prometheus_receiver
Dec 06 10:06:58 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:06:58 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:06:58 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:06:58.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:06:59 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:06:59 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:06:59 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:06:59.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:06:59 compute-0 podman[262471]: 2025-12-06 10:06:59.458814931 +0000 UTC m=+0.083482172 container health_status a9cf33c1bf7891b3fdd9db4717ad5f3587fe9e5acf9171920c6d6334b70a502a (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, managed_by=edpm_ansible)
Dec 06 10:06:59 compute-0 ceph-mon[74327]: pgmap v773: 337 pgs: 337 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 39 KiB/s rd, 16 KiB/s wr, 58 op/s
Dec 06 10:07:00 compute-0 nova_compute[254819]: 2025-12-06 10:07:00.479 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:07:00 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:07:00 : epoch 69340037 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 06 10:07:00 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:07:00 : epoch 69340037 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 06 10:07:00 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:07:00 : epoch 69340037 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 06 10:07:00 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v774: 337 pgs: 337 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 31 KiB/s rd, 2.4 KiB/s wr, 45 op/s
Dec 06 10:07:00 compute-0 sudo[262492]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 10:07:00 compute-0 sudo[262492]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:07:00 compute-0 sudo[262492]: pam_unix(sudo:session): session closed for user root
Dec 06 10:07:00 compute-0 sudo[262517]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Dec 06 10:07:00 compute-0 sudo[262517]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:07:00 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:07:00] "GET /metrics HTTP/1.1" 200 48463 "" "Prometheus/2.51.0"
Dec 06 10:07:00 compute-0 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:07:00] "GET /metrics HTTP/1.1" 200 48463 "" "Prometheus/2.51.0"
Dec 06 10:07:00 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:07:00 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:07:00 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:07:00.938 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:07:01 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:07:01 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 10:07:01 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:07:01.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 10:07:01 compute-0 sudo[262517]: pam_unix(sudo:session): session closed for user root
Dec 06 10:07:01 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 06 10:07:01 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 10:07:01 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 06 10:07:01 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 10:07:01 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v775: 337 pgs: 337 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 34 KiB/s rd, 2.7 KiB/s wr, 51 op/s
Dec 06 10:07:01 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 06 10:07:01 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 10:07:01 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Dec 06 10:07:01 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 10:07:01 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 06 10:07:01 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 10:07:01 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 06 10:07:01 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 10:07:01 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 06 10:07:01 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 10:07:01 compute-0 sudo[262577]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 10:07:01 compute-0 sudo[262577]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:07:01 compute-0 sudo[262577]: pam_unix(sudo:session): session closed for user root
Dec 06 10:07:01 compute-0 sudo[262602]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 5ecd3f74-dade-5fc4-92ce-8950ae424258 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Dec 06 10:07:01 compute-0 sudo[262602]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:07:01 compute-0 ceph-mon[74327]: pgmap v774: 337 pgs: 337 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 31 KiB/s rd, 2.4 KiB/s wr, 45 op/s
Dec 06 10:07:01 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 10:07:01 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 10:07:01 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 10:07:01 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 10:07:01 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 10:07:01 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 10:07:01 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
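
The burst of mon_command dispatches above is the cephadm mgr module's periodic reconciliation: regenerate a minimal ceph.conf, fetch the client.admin and client.bootstrap-osd keys, persist its OSD-removal queue and the nfs.cephfs spec, and query for destroyed OSDs to replace. The read-only parts can be reproduced with the standard ceph CLI; a sketch, assuming admin credentials are available on the host:

    import json
    import subprocess

    # Minimal sketch: re-issue two of the dispatched mon commands via the
    # standard ceph CLI (both subcommands exist as spelled).
    minimal_conf = subprocess.check_output(
        ['ceph', 'config', 'generate-minimal-conf'], text=True)
    tree = json.loads(subprocess.check_output(
        ['ceph', 'osd', 'tree', 'destroyed', '--format', 'json'], text=True))
    print(minimal_conf)
    # With the "destroyed" state filter, any osd-type nodes returned are
    # the destroyed ones awaiting replacement.
    print([n['name'] for n in tree.get('nodes', []) if n.get('type') == 'osd'])
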
Dec 06 10:07:02 compute-0 podman[262665]: 2025-12-06 10:07:02.064049169 +0000 UTC m=+0.065613670 container create 0e80c98a324597fd0bb7281ed8d7645d3256d40f7fa756560ee0910b1a3f35a6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_khayyam, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 10:07:02 compute-0 systemd[1]: Started libpod-conmon-0e80c98a324597fd0bb7281ed8d7645d3256d40f7fa756560ee0910b1a3f35a6.scope.
Dec 06 10:07:02 compute-0 podman[262665]: 2025-12-06 10:07:02.036204809 +0000 UTC m=+0.037769370 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 06 10:07:02 compute-0 systemd[1]: Started libcrun container.
Dec 06 10:07:02 compute-0 podman[262665]: 2025-12-06 10:07:02.183652975 +0000 UTC m=+0.185217536 container init 0e80c98a324597fd0bb7281ed8d7645d3256d40f7fa756560ee0910b1a3f35a6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_khayyam, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 10:07:02 compute-0 podman[262665]: 2025-12-06 10:07:02.196639016 +0000 UTC m=+0.198203517 container start 0e80c98a324597fd0bb7281ed8d7645d3256d40f7fa756560ee0910b1a3f35a6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_khayyam, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 10:07:02 compute-0 podman[262665]: 2025-12-06 10:07:02.201764233 +0000 UTC m=+0.203328784 container attach 0e80c98a324597fd0bb7281ed8d7645d3256d40f7fa756560ee0910b1a3f35a6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_khayyam, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, OSD_FLAVOR=default)
Dec 06 10:07:02 compute-0 funny_khayyam[262681]: 167 167
Dec 06 10:07:02 compute-0 systemd[1]: libpod-0e80c98a324597fd0bb7281ed8d7645d3256d40f7fa756560ee0910b1a3f35a6.scope: Deactivated successfully.
Dec 06 10:07:02 compute-0 podman[262665]: 2025-12-06 10:07:02.206116581 +0000 UTC m=+0.207681092 container died 0e80c98a324597fd0bb7281ed8d7645d3256d40f7fa756560ee0910b1a3f35a6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_khayyam, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Dec 06 10:07:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-2def1e8bd4a3c7f75247f391a9a808d0347d552533bc6cb2c6ff24451dcad812-merged.mount: Deactivated successfully.
Dec 06 10:07:02 compute-0 podman[262665]: 2025-12-06 10:07:02.259997054 +0000 UTC m=+0.261561565 container remove 0e80c98a324597fd0bb7281ed8d7645d3256d40f7fa756560ee0910b1a3f35a6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_khayyam, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 10:07:02 compute-0 systemd[1]: libpod-conmon-0e80c98a324597fd0bb7281ed8d7645d3256d40f7fa756560ee0910b1a3f35a6.scope: Deactivated successfully.
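
The funny_khayyam lifecycle above (create, init, start, attach, died, remove, all within roughly 200 ms) is a throwaway container cephadm launches ahead of ceph-volume; its only output, "167 167", is consistent with cephadm's uid/gid probe against the ceph image (167 is the ceph user and group there). The same event sequence journald recorded can be followed directly from podman's event stream; a minimal sketch, with the container name taken from this log:

    import json
    import subprocess

    # Minimal sketch: stream libpod events for one container and print the
    # same create/init/start/attach/died/remove sequence seen above.
    proc = subprocess.Popen(
        ['podman', 'events', '--format', 'json',
         '--filter', 'container=funny_khayyam'],
        stdout=subprocess.PIPE, text=True)
    for raw in proc.stdout:
        ev = json.loads(raw)
        print(ev.get('Time'), ev.get('Status'), ev.get('Name'))
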
Dec 06 10:07:02 compute-0 podman[262707]: 2025-12-06 10:07:02.490723997 +0000 UTC m=+0.057083501 container create ce59a0a3bd28c14d7042ac69743ef42822e39e08c2fca4cafcdd276cfb27f38c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_fermi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS)
Dec 06 10:07:02 compute-0 systemd[1]: Started libpod-conmon-ce59a0a3bd28c14d7042ac69743ef42822e39e08c2fca4cafcdd276cfb27f38c.scope.
Dec 06 10:07:02 compute-0 podman[262707]: 2025-12-06 10:07:02.466736679 +0000 UTC m=+0.033096213 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 06 10:07:02 compute-0 systemd[1]: Started libcrun container.
Dec 06 10:07:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7cf3b041e416d272e1ad027e5e821e6d3b9307f1c78f443039853b31047a3049/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 10:07:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7cf3b041e416d272e1ad027e5e821e6d3b9307f1c78f443039853b31047a3049/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 10:07:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7cf3b041e416d272e1ad027e5e821e6d3b9307f1c78f443039853b31047a3049/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 10:07:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7cf3b041e416d272e1ad027e5e821e6d3b9307f1c78f443039853b31047a3049/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 10:07:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7cf3b041e416d272e1ad027e5e821e6d3b9307f1c78f443039853b31047a3049/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 06 10:07:02 compute-0 podman[262707]: 2025-12-06 10:07:02.577304261 +0000 UTC m=+0.143663815 container init ce59a0a3bd28c14d7042ac69743ef42822e39e08c2fca4cafcdd276cfb27f38c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_fermi, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 10:07:02 compute-0 podman[262707]: 2025-12-06 10:07:02.591898455 +0000 UTC m=+0.158257939 container start ce59a0a3bd28c14d7042ac69743ef42822e39e08c2fca4cafcdd276cfb27f38c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_fermi, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 10:07:02 compute-0 podman[262707]: 2025-12-06 10:07:02.595774849 +0000 UTC m=+0.162134423 container attach ce59a0a3bd28c14d7042ac69743ef42822e39e08c2fca4cafcdd276cfb27f38c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_fermi, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1)
Dec 06 10:07:02 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:07:02 compute-0 ceph-mon[74327]: pgmap v775: 337 pgs: 337 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 34 KiB/s rd, 2.7 KiB/s wr, 51 op/s
Dec 06 10:07:02 compute-0 nova_compute[254819]: 2025-12-06 10:07:02.889 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:07:02 compute-0 sweet_fermi[262724]: --> passed data devices: 0 physical, 1 LVM
Dec 06 10:07:02 compute-0 sweet_fermi[262724]: --> All data devices are unavailable
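
ceph-volume was handed a single LVM data device (/dev/ceph_vg0/ceph_lv0, per the lvm batch command at 10:07:01) and rejected it: an LV that already carries ceph.* lv_tags is treated as consumed, and the lvm list output further below confirms this LV is already osd.1. The batch run is therefore an idempotent no-op, not a failure. A sketch of that availability test; the rejection rule here is inferred from the observed behaviour, not quoted from ceph-volume's source:

    import json
    import subprocess

    # Minimal sketch: flag LVs that already carry a ceph.osd_id lv_tag as
    # unavailable for a new OSD. The lvs invocation and its JSON report
    # layout are standard LVM.
    out = subprocess.check_output(
        ['lvs', '--reportformat', 'json', '-o', 'lv_path,lv_tags'], text=True)
    for lv in json.loads(out)['report'][0]['lv']:
        taken = 'ceph.osd_id=' in lv['lv_tags']
        print(lv['lv_path'],
              'unavailable (already an OSD)' if taken else 'available')
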
Dec 06 10:07:02 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:07:02 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:07:02 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:07:02.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:07:02 compute-0 systemd[1]: libpod-ce59a0a3bd28c14d7042ac69743ef42822e39e08c2fca4cafcdd276cfb27f38c.scope: Deactivated successfully.
Dec 06 10:07:02 compute-0 conmon[262724]: conmon ce59a0a3bd28c14d7042 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ce59a0a3bd28c14d7042ac69743ef42822e39e08c2fca4cafcdd276cfb27f38c.scope/container/memory.events
Dec 06 10:07:02 compute-0 podman[262707]: 2025-12-06 10:07:02.958642706 +0000 UTC m=+0.525002270 container died ce59a0a3bd28c14d7042ac69743ef42822e39e08c2fca4cafcdd276cfb27f38c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_fermi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec 06 10:07:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-7cf3b041e416d272e1ad027e5e821e6d3b9307f1c78f443039853b31047a3049-merged.mount: Deactivated successfully.
Dec 06 10:07:02 compute-0 nova_compute[254819]: 2025-12-06 10:07:02.997 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:07:03 compute-0 podman[262707]: 2025-12-06 10:07:03.020304068 +0000 UTC m=+0.586663562 container remove ce59a0a3bd28c14d7042ac69743ef42822e39e08c2fca4cafcdd276cfb27f38c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_fermi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 10:07:03 compute-0 systemd[1]: libpod-conmon-ce59a0a3bd28c14d7042ac69743ef42822e39e08c2fca4cafcdd276cfb27f38c.scope: Deactivated successfully.
Dec 06 10:07:03 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:07:03 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:07:03 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:07:03.033 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:07:03 compute-0 sudo[262602]: pam_unix(sudo:session): session closed for user root
Dec 06 10:07:03 compute-0 sudo[262752]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 10:07:03 compute-0 sudo[262752]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:07:03 compute-0 sudo[262752]: pam_unix(sudo:session): session closed for user root
Dec 06 10:07:03 compute-0 sudo[262777]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 5ecd3f74-dade-5fc4-92ce-8950ae424258 -- lvm list --format json
Dec 06 10:07:03 compute-0 sudo[262777]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:07:03 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v776: 337 pgs: 337 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 35 KiB/s rd, 3.1 KiB/s wr, 52 op/s
Dec 06 10:07:03 compute-0 nova_compute[254819]: 2025-12-06 10:07:03.750 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:07:03 compute-0 podman[262846]: 2025-12-06 10:07:03.7716402 +0000 UTC m=+0.061540380 container create 171aaaf99588610e77638d459ad890e89ea1d234cc85f57e026d3698a8442beb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_goodall, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_REF=squid)
Dec 06 10:07:03 compute-0 systemd[1]: Started libpod-conmon-171aaaf99588610e77638d459ad890e89ea1d234cc85f57e026d3698a8442beb.scope.
Dec 06 10:07:03 compute-0 podman[262846]: 2025-12-06 10:07:03.743004708 +0000 UTC m=+0.032904938 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 06 10:07:03 compute-0 systemd[1]: Started libcrun container.
Dec 06 10:07:03 compute-0 podman[262846]: 2025-12-06 10:07:03.868093081 +0000 UTC m=+0.157993281 container init 171aaaf99588610e77638d459ad890e89ea1d234cc85f57e026d3698a8442beb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_goodall, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Dec 06 10:07:03 compute-0 podman[262846]: 2025-12-06 10:07:03.878589524 +0000 UTC m=+0.168489694 container start 171aaaf99588610e77638d459ad890e89ea1d234cc85f57e026d3698a8442beb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_goodall, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 10:07:03 compute-0 podman[262846]: 2025-12-06 10:07:03.882580481 +0000 UTC m=+0.172480661 container attach 171aaaf99588610e77638d459ad890e89ea1d234cc85f57e026d3698a8442beb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_goodall, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Dec 06 10:07:03 compute-0 affectionate_goodall[262862]: 167 167
Dec 06 10:07:03 compute-0 systemd[1]: libpod-171aaaf99588610e77638d459ad890e89ea1d234cc85f57e026d3698a8442beb.scope: Deactivated successfully.
Dec 06 10:07:03 compute-0 podman[262846]: 2025-12-06 10:07:03.885678606 +0000 UTC m=+0.175578756 container died 171aaaf99588610e77638d459ad890e89ea1d234cc85f57e026d3698a8442beb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_goodall, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Dec 06 10:07:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-a20e69d3b3cae0743c5620068554120ae0b8d6406cbfffc688e13ecf495dad12-merged.mount: Deactivated successfully.
Dec 06 10:07:03 compute-0 podman[262846]: 2025-12-06 10:07:03.92442136 +0000 UTC m=+0.214321510 container remove 171aaaf99588610e77638d459ad890e89ea1d234cc85f57e026d3698a8442beb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_goodall, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec 06 10:07:03 compute-0 systemd[1]: libpod-conmon-171aaaf99588610e77638d459ad890e89ea1d234cc85f57e026d3698a8442beb.scope: Deactivated successfully.
Dec 06 10:07:04 compute-0 podman[262883]: 2025-12-06 10:07:04.141450303 +0000 UTC m=+0.069648219 container create cdbe4134b65b2132f631750a7b2f9f88b471a726ffe9b5b6c05efdc4acad9abf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_lalande, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True)
Dec 06 10:07:04 compute-0 systemd[1]: Started libpod-conmon-cdbe4134b65b2132f631750a7b2f9f88b471a726ffe9b5b6c05efdc4acad9abf.scope.
Dec 06 10:07:04 compute-0 podman[262883]: 2025-12-06 10:07:04.111900806 +0000 UTC m=+0.040098802 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 06 10:07:04 compute-0 systemd[1]: Started libcrun container.
Dec 06 10:07:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af378cc9f29416217231a62af50ce37004a59efc7271286c10e963cd8efb4282/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 10:07:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af378cc9f29416217231a62af50ce37004a59efc7271286c10e963cd8efb4282/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 10:07:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af378cc9f29416217231a62af50ce37004a59efc7271286c10e963cd8efb4282/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 10:07:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af378cc9f29416217231a62af50ce37004a59efc7271286c10e963cd8efb4282/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 10:07:04 compute-0 podman[262883]: 2025-12-06 10:07:04.245198701 +0000 UTC m=+0.173396627 container init cdbe4134b65b2132f631750a7b2f9f88b471a726ffe9b5b6c05efdc4acad9abf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_lalande, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 06 10:07:04 compute-0 podman[262883]: 2025-12-06 10:07:04.258053037 +0000 UTC m=+0.186250953 container start cdbe4134b65b2132f631750a7b2f9f88b471a726ffe9b5b6c05efdc4acad9abf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_lalande, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 06 10:07:04 compute-0 podman[262883]: 2025-12-06 10:07:04.261884271 +0000 UTC m=+0.190082197 container attach cdbe4134b65b2132f631750a7b2f9f88b471a726ffe9b5b6c05efdc4acad9abf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_lalande, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 06 10:07:04 compute-0 relaxed_lalande[262900]: {
Dec 06 10:07:04 compute-0 relaxed_lalande[262900]:     "1": [
Dec 06 10:07:04 compute-0 relaxed_lalande[262900]:         {
Dec 06 10:07:04 compute-0 relaxed_lalande[262900]:             "devices": [
Dec 06 10:07:04 compute-0 relaxed_lalande[262900]:                 "/dev/loop3"
Dec 06 10:07:04 compute-0 relaxed_lalande[262900]:             ],
Dec 06 10:07:04 compute-0 relaxed_lalande[262900]:             "lv_name": "ceph_lv0",
Dec 06 10:07:04 compute-0 relaxed_lalande[262900]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 10:07:04 compute-0 relaxed_lalande[262900]:             "lv_size": "21470642176",
Dec 06 10:07:04 compute-0 relaxed_lalande[262900]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=a55pcS-v3Vt-CJ4W-fqCV-Baze-9VlY-jG9prS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=5ecd3f74-dade-5fc4-92ce-8950ae424258,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=7899c4d8-edb4-4836-b838-c4aa702ad7af,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 06 10:07:04 compute-0 relaxed_lalande[262900]:             "lv_uuid": "a55pcS-v3Vt-CJ4W-fqCV-Baze-9VlY-jG9prS",
Dec 06 10:07:04 compute-0 relaxed_lalande[262900]:             "name": "ceph_lv0",
Dec 06 10:07:04 compute-0 relaxed_lalande[262900]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 10:07:04 compute-0 relaxed_lalande[262900]:             "tags": {
Dec 06 10:07:04 compute-0 relaxed_lalande[262900]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 06 10:07:04 compute-0 relaxed_lalande[262900]:                 "ceph.block_uuid": "a55pcS-v3Vt-CJ4W-fqCV-Baze-9VlY-jG9prS",
Dec 06 10:07:04 compute-0 relaxed_lalande[262900]:                 "ceph.cephx_lockbox_secret": "",
Dec 06 10:07:04 compute-0 relaxed_lalande[262900]:                 "ceph.cluster_fsid": "5ecd3f74-dade-5fc4-92ce-8950ae424258",
Dec 06 10:07:04 compute-0 relaxed_lalande[262900]:                 "ceph.cluster_name": "ceph",
Dec 06 10:07:04 compute-0 relaxed_lalande[262900]:                 "ceph.crush_device_class": "",
Dec 06 10:07:04 compute-0 relaxed_lalande[262900]:                 "ceph.encrypted": "0",
Dec 06 10:07:04 compute-0 relaxed_lalande[262900]:                 "ceph.osd_fsid": "7899c4d8-edb4-4836-b838-c4aa702ad7af",
Dec 06 10:07:04 compute-0 relaxed_lalande[262900]:                 "ceph.osd_id": "1",
Dec 06 10:07:04 compute-0 relaxed_lalande[262900]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 06 10:07:04 compute-0 relaxed_lalande[262900]:                 "ceph.type": "block",
Dec 06 10:07:04 compute-0 relaxed_lalande[262900]:                 "ceph.vdo": "0",
Dec 06 10:07:04 compute-0 relaxed_lalande[262900]:                 "ceph.with_tpm": "0"
Dec 06 10:07:04 compute-0 relaxed_lalande[262900]:             },
Dec 06 10:07:04 compute-0 relaxed_lalande[262900]:             "type": "block",
Dec 06 10:07:04 compute-0 relaxed_lalande[262900]:             "vg_name": "ceph_vg0"
Dec 06 10:07:04 compute-0 relaxed_lalande[262900]:         }
Dec 06 10:07:04 compute-0 relaxed_lalande[262900]:     ]
Dec 06 10:07:04 compute-0 relaxed_lalande[262900]: }
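
The JSON above is the output of `ceph-volume lvm list --format json`: a map of OSD id to LV records, whose lv_tags tie /dev/ceph_vg0/ceph_lv0 (backed by /dev/loop3, 21470642176 bytes, roughly 20 GiB) to osd_fsid 7899c4d8-edb4-4836-b838-c4aa702ad7af in cluster 5ecd3f74-dade-5fc4-92ce-8950ae424258. A minimal consumer of the same output, assuming ceph-volume is on PATH (cephadm wraps the identical call in a container, as the sudo COMMAND at 10:07:03 shows):

    import json
    import subprocess

    # Minimal sketch: map OSD ids to their backing LVs from the same
    # `ceph-volume lvm list --format json` output logged above.
    out = subprocess.check_output(
        ['ceph-volume', 'lvm', 'list', '--format', 'json'], text=True)
    for osd_id, lvs in json.loads(out).items():
        for lv in lvs:
            tags = lv['tags']
            print(f"osd.{osd_id}: {lv['lv_path']} "
                  f"(osd_fsid={tags['ceph.osd_fsid']}, type={lv['type']})")
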
Dec 06 10:07:04 compute-0 systemd[1]: libpod-cdbe4134b65b2132f631750a7b2f9f88b471a726ffe9b5b6c05efdc4acad9abf.scope: Deactivated successfully.
Dec 06 10:07:04 compute-0 podman[262883]: 2025-12-06 10:07:04.600226306 +0000 UTC m=+0.528424212 container died cdbe4134b65b2132f631750a7b2f9f88b471a726ffe9b5b6c05efdc4acad9abf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_lalande, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec 06 10:07:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-af378cc9f29416217231a62af50ce37004a59efc7271286c10e963cd8efb4282-merged.mount: Deactivated successfully.
Dec 06 10:07:04 compute-0 podman[262883]: 2025-12-06 10:07:04.644091068 +0000 UTC m=+0.572288994 container remove cdbe4134b65b2132f631750a7b2f9f88b471a726ffe9b5b6c05efdc4acad9abf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_lalande, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 10:07:04 compute-0 systemd[1]: libpod-conmon-cdbe4134b65b2132f631750a7b2f9f88b471a726ffe9b5b6c05efdc4acad9abf.scope: Deactivated successfully.
Dec 06 10:07:04 compute-0 sudo[262777]: pam_unix(sudo:session): session closed for user root
Dec 06 10:07:04 compute-0 podman[262909]: 2025-12-06 10:07:04.735525294 +0000 UTC m=+0.103894523 container health_status ed08b04f63d3695f0aa2f04fcc706897af4aa24f286fa3cd10a4323ee2c868ab (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller)
Dec 06 10:07:04 compute-0 sudo[262939]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 10:07:04 compute-0 sudo[262939]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:07:04 compute-0 sudo[262939]: pam_unix(sudo:session): session closed for user root
Dec 06 10:07:04 compute-0 sudo[262970]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 5ecd3f74-dade-5fc4-92ce-8950ae424258 -- raw list --format json
Dec 06 10:07:04 compute-0 sudo[262970]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:07:04 compute-0 ceph-mon[74327]: pgmap v776: 337 pgs: 337 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 35 KiB/s rd, 3.1 KiB/s wr, 52 op/s
Dec 06 10:07:04 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:07:04 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:07:04 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:07:04.944 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:07:05 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:07:05 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:07:05 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:07:05.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:07:05 compute-0 podman[263034]: 2025-12-06 10:07:05.273873502 +0000 UTC m=+0.054360817 container create ab3be5cfdf3842f133d7786282395487a31549deccc97edd9962d1bc097c68b0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_chatelet, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Dec 06 10:07:05 compute-0 systemd[1]: Started libpod-conmon-ab3be5cfdf3842f133d7786282395487a31549deccc97edd9962d1bc097c68b0.scope.
Dec 06 10:07:05 compute-0 podman[263034]: 2025-12-06 10:07:05.250848001 +0000 UTC m=+0.031335346 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 06 10:07:05 compute-0 systemd[1]: Started libcrun container.
Dec 06 10:07:05 compute-0 podman[263034]: 2025-12-06 10:07:05.372465441 +0000 UTC m=+0.152952766 container init ab3be5cfdf3842f133d7786282395487a31549deccc97edd9962d1bc097c68b0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_chatelet, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec 06 10:07:05 compute-0 podman[263034]: 2025-12-06 10:07:05.381866975 +0000 UTC m=+0.162354280 container start ab3be5cfdf3842f133d7786282395487a31549deccc97edd9962d1bc097c68b0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_chatelet, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 10:07:05 compute-0 podman[263034]: 2025-12-06 10:07:05.385723488 +0000 UTC m=+0.166210823 container attach ab3be5cfdf3842f133d7786282395487a31549deccc97edd9962d1bc097c68b0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_chatelet, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 10:07:05 compute-0 frosty_chatelet[263051]: 167 167
Dec 06 10:07:05 compute-0 systemd[1]: libpod-ab3be5cfdf3842f133d7786282395487a31549deccc97edd9962d1bc097c68b0.scope: Deactivated successfully.
Dec 06 10:07:05 compute-0 conmon[263051]: conmon ab3be5cfdf3842f133d7 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ab3be5cfdf3842f133d7786282395487a31549deccc97edd9962d1bc097c68b0.scope/container/memory.events
Dec 06 10:07:05 compute-0 podman[263034]: 2025-12-06 10:07:05.394173606 +0000 UTC m=+0.174660941 container died ab3be5cfdf3842f133d7786282395487a31549deccc97edd9962d1bc097c68b0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_chatelet, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 10:07:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-78f8d974e6e22a60de7e98ac26d15a3765683d3df75263191e336edd3dfc1bc7-merged.mount: Deactivated successfully.
Dec 06 10:07:05 compute-0 podman[263034]: 2025-12-06 10:07:05.436925269 +0000 UTC m=+0.217412604 container remove ab3be5cfdf3842f133d7786282395487a31549deccc97edd9962d1bc097c68b0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_chatelet, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 10:07:05 compute-0 systemd[1]: libpod-conmon-ab3be5cfdf3842f133d7786282395487a31549deccc97edd9962d1bc097c68b0.scope: Deactivated successfully.
Dec 06 10:07:05 compute-0 nova_compute[254819]: 2025-12-06 10:07:05.481 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:07:05 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v777: 337 pgs: 337 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 1.7 KiB/s wr, 33 op/s
Dec 06 10:07:05 compute-0 podman[263076]: 2025-12-06 10:07:05.660232611 +0000 UTC m=+0.046345391 container create 540aeedfc8c24406eed5baf559b3ce5b3b9d9577915d47201793d62282f818ee (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_wescoff, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Dec 06 10:07:05 compute-0 systemd[1]: Started libpod-conmon-540aeedfc8c24406eed5baf559b3ce5b3b9d9577915d47201793d62282f818ee.scope.
Dec 06 10:07:05 compute-0 systemd[1]: Started libcrun container.
Dec 06 10:07:05 compute-0 podman[263076]: 2025-12-06 10:07:05.640966372 +0000 UTC m=+0.027079142 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 06 10:07:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d8e08af108a894f3bbba13ad20ff45f5d928ee103dfb60c44940ffebaf15b1f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 10:07:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d8e08af108a894f3bbba13ad20ff45f5d928ee103dfb60c44940ffebaf15b1f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 10:07:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d8e08af108a894f3bbba13ad20ff45f5d928ee103dfb60c44940ffebaf15b1f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 10:07:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d8e08af108a894f3bbba13ad20ff45f5d928ee103dfb60c44940ffebaf15b1f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 10:07:05 compute-0 podman[263076]: 2025-12-06 10:07:05.754441342 +0000 UTC m=+0.140554082 container init 540aeedfc8c24406eed5baf559b3ce5b3b9d9577915d47201793d62282f818ee (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_wescoff, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec 06 10:07:05 compute-0 podman[263076]: 2025-12-06 10:07:05.766908888 +0000 UTC m=+0.153021628 container start 540aeedfc8c24406eed5baf559b3ce5b3b9d9577915d47201793d62282f818ee (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_wescoff, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Dec 06 10:07:05 compute-0 podman[263076]: 2025-12-06 10:07:05.770225478 +0000 UTC m=+0.156338218 container attach 540aeedfc8c24406eed5baf559b3ce5b3b9d9577915d47201793d62282f818ee (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_wescoff, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 10:07:06 compute-0 lvm[263167]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 06 10:07:06 compute-0 lvm[263167]: VG ceph_vg0 finished
Dec 06 10:07:06 compute-0 boring_wescoff[263092]: {}
Dec 06 10:07:06 compute-0 systemd[1]: libpod-540aeedfc8c24406eed5baf559b3ce5b3b9d9577915d47201793d62282f818ee.scope: Deactivated successfully.
Dec 06 10:07:06 compute-0 systemd[1]: libpod-540aeedfc8c24406eed5baf559b3ce5b3b9d9577915d47201793d62282f818ee.scope: Consumed 1.283s CPU time.
Dec 06 10:07:06 compute-0 podman[263076]: 2025-12-06 10:07:06.565383821 +0000 UTC m=+0.951496561 container died 540aeedfc8c24406eed5baf559b3ce5b3b9d9577915d47201793d62282f818ee (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_wescoff, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Dec 06 10:07:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-0d8e08af108a894f3bbba13ad20ff45f5d928ee103dfb60c44940ffebaf15b1f-merged.mount: Deactivated successfully.
Dec 06 10:07:06 compute-0 podman[263076]: 2025-12-06 10:07:06.61240531 +0000 UTC m=+0.998518050 container remove 540aeedfc8c24406eed5baf559b3ce5b3b9d9577915d47201793d62282f818ee (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_wescoff, ceph=True, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 10:07:06 compute-0 systemd[1]: libpod-conmon-540aeedfc8c24406eed5baf559b3ce5b3b9d9577915d47201793d62282f818ee.scope: Deactivated successfully.
Dec 06 10:07:06 compute-0 sudo[262970]: pam_unix(sudo:session): session closed for user root
Dec 06 10:07:06 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 06 10:07:06 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 10:07:06 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 06 10:07:06 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 10:07:06 compute-0 sudo[263180]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 10:07:06 compute-0 sudo[263180]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:07:06 compute-0 sudo[263180]: pam_unix(sudo:session): session closed for user root
Dec 06 10:07:06 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:07:06 : epoch 69340037 : compute-0 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Dec 06 10:07:06 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:07:06 : epoch 69340037 : compute-0 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Dec 06 10:07:06 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:07:06 : epoch 69340037 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Dec 06 10:07:06 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:07:06 : epoch 69340037 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Dec 06 10:07:06 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:07:06 : epoch 69340037 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Dec 06 10:07:06 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:07:06 : epoch 69340037 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Dec 06 10:07:06 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:07:06 : epoch 69340037 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Dec 06 10:07:06 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:07:06 : epoch 69340037 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec 06 10:07:06 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:07:06 : epoch 69340037 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec 06 10:07:06 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:07:06 : epoch 69340037 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec 06 10:07:06 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:07:06 : epoch 69340037 : compute-0 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Dec 06 10:07:06 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:07:06 : epoch 69340037 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec 06 10:07:06 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:07:06 : epoch 69340037 : compute-0 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Dec 06 10:07:06 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:07:06 : epoch 69340037 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Dec 06 10:07:06 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:07:06 : epoch 69340037 : compute-0 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Dec 06 10:07:06 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:07:06 : epoch 69340037 : compute-0 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Dec 06 10:07:06 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:07:06 : epoch 69340037 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Dec 06 10:07:06 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:07:06 : epoch 69340037 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Dec 06 10:07:06 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:07:06 : epoch 69340037 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Dec 06 10:07:06 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:07:06 : epoch 69340037 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Dec 06 10:07:06 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:07:06 : epoch 69340037 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Dec 06 10:07:06 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:07:06 : epoch 69340037 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Dec 06 10:07:06 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:07:06 : epoch 69340037 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Dec 06 10:07:06 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:07:06 : epoch 69340037 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Dec 06 10:07:06 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:07:06 : epoch 69340037 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Dec 06 10:07:06 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:07:06 : epoch 69340037 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Dec 06 10:07:06 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:07:06 : epoch 69340037 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Dec 06 10:07:06 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:07:06 : epoch 69340037 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 06 10:07:06 compute-0 ceph-mon[74327]: pgmap v777: 337 pgs: 337 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 1.7 KiB/s wr, 33 op/s
Dec 06 10:07:06 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 10:07:06 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 10:07:06 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:07:06 : epoch 69340037 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f32ac000df0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:07:06 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:07:06 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 10:07:06 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:07:06.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 10:07:07 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:07:07 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 10:07:07 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:07:07.040 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 10:07:07 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:07:07.264Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 06 10:07:07 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v778: 337 pgs: 337 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 1.7 KiB/s wr, 33 op/s
Dec 06 10:07:07 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:07:08 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:07:08 : epoch 69340037 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f328c000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:07:08 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:07:08 : epoch 69340037 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3288000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:07:08 compute-0 nova_compute[254819]: 2025-12-06 10:07:08.688 254824 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765015613.6867456, 9f4c3de7-de9e-45d5-b170-3469a0bd0959 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 10:07:08 compute-0 nova_compute[254819]: 2025-12-06 10:07:08.688 254824 INFO nova.compute.manager [-] [instance: 9f4c3de7-de9e-45d5-b170-3469a0bd0959] VM Stopped (Lifecycle Event)
Dec 06 10:07:08 compute-0 nova_compute[254819]: 2025-12-06 10:07:08.709 254824 DEBUG nova.compute.manager [None req-4b996106-44a3-4e11-9ee0-853ed8978bb7 - - - - - -] [instance: 9f4c3de7-de9e-45d5-b170-3469a0bd0959] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 10:07:08 compute-0 nova_compute[254819]: 2025-12-06 10:07:08.752 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:07:08 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:07:08.846Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec 06 10:07:08 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:07:08.846Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec 06 10:07:08 compute-0 ceph-mon[74327]: pgmap v778: 337 pgs: 337 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 1.7 KiB/s wr, 33 op/s
Dec 06 10:07:08 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-haproxy-nfs-cephfs-compute-0-fzuvue[96127]: [WARNING] 339/100708 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Dec 06 10:07:08 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:07:08 : epoch 69340037 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3280000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:07:08 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 06 10:07:08 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 10:07:08 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:07:08 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:07:08 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:07:08.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:07:09 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:07:09 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 10:07:09 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:07:09.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 10:07:09 compute-0 podman[263223]: 2025-12-06 10:07:09.485853311 +0000 UTC m=+0.106384300 container health_status ec1e66d2175087ad7a28a9a6b5a784e30128df3f5e268959d009e364cd5120f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true, container_name=ovn_metadata_agent, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 06 10:07:09 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v779: 337 pgs: 337 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 3.2 KiB/s rd, 1.3 KiB/s wr, 4 op/s
Dec 06 10:07:09 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:07:09 : epoch 69340037 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 06 10:07:09 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:07:09 : epoch 69340037 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 06 10:07:09 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 10:07:10 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:07:10 : epoch 69340037 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f32a4001ac0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:07:10 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:07:10 : epoch 69340037 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f328c0016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:07:10 compute-0 nova_compute[254819]: 2025-12-06 10:07:10.483 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:07:10 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:07:10] "GET /metrics HTTP/1.1" 200 48443 "" "Prometheus/2.51.0"
Dec 06 10:07:10 compute-0 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:07:10] "GET /metrics HTTP/1.1" 200 48443 "" "Prometheus/2.51.0"
Dec 06 10:07:10 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:07:10 : epoch 69340037 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f32880016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:07:10 compute-0 ceph-mon[74327]: pgmap v779: 337 pgs: 337 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 3.2 KiB/s rd, 1.3 KiB/s wr, 4 op/s
Dec 06 10:07:10 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:07:10 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:07:10 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:07:10.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:07:11 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:07:11 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.002000053s ======
Dec 06 10:07:11 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:07:11.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000053s
Dec 06 10:07:11 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v780: 337 pgs: 337 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 3.2 KiB/s rd, 1.3 KiB/s wr, 4 op/s
Dec 06 10:07:11 compute-0 sudo[263245]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 10:07:11 compute-0 sudo[263245]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:07:11 compute-0 sudo[263245]: pam_unix(sudo:session): session closed for user root
Dec 06 10:07:11 compute-0 ceph-mon[74327]: pgmap v780: 337 pgs: 337 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 3.2 KiB/s rd, 1.3 KiB/s wr, 4 op/s
Dec 06 10:07:12 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:07:12 : epoch 69340037 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f32800016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:07:12 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:07:12 : epoch 69340037 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f32a40023e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:07:12 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:07:12 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:07:12 : epoch 69340037 : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Dec 06 10:07:12 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:07:12 : epoch 69340037 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f328c0016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:07:12 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:07:12.942 162267 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=5, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '9a:dc:0d', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'b6:0a:c4:b8:be:39'}, ipsec=False) old=SB_Global(nb_cfg=4) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 10:07:12 compute-0 nova_compute[254819]: 2025-12-06 10:07:12.943 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:07:12 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:07:12.945 162267 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 06 10:07:12 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:07:12 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:07:12 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:07:12.954 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:07:13 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:07:13 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:07:13 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:07:13.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:07:13 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v781: 337 pgs: 337 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 3.4 KiB/s rd, 1.3 KiB/s wr, 5 op/s
Dec 06 10:07:13 compute-0 nova_compute[254819]: 2025-12-06 10:07:13.755 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:07:14 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:07:14 : epoch 69340037 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f32880016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:07:14 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:07:14 : epoch 69340037 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f32800016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:07:14 compute-0 ceph-mon[74327]: pgmap v781: 337 pgs: 337 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 3.4 KiB/s rd, 1.3 KiB/s wr, 5 op/s
Dec 06 10:07:14 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:07:14 : epoch 69340037 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f32a40023e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:07:14 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:07:14 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:07:14 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:07:14.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:07:15 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:07:15 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 10:07:15 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:07:15.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 10:07:15 compute-0 nova_compute[254819]: 2025-12-06 10:07:15.484 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:07:15 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v782: 337 pgs: 337 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 1023 B/s wr, 3 op/s
Dec 06 10:07:16 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:07:16 : epoch 69340037 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f328c0016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:07:16 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:07:16 : epoch 69340037 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f32880016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:07:16 compute-0 ceph-mon[74327]: pgmap v782: 337 pgs: 337 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 1023 B/s wr, 3 op/s
Dec 06 10:07:16 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-haproxy-nfs-cephfs-compute-0-fzuvue[96127]: [WARNING] 339/100716 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Dec 06 10:07:16 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:07:16 : epoch 69340037 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f32800016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:07:16 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:07:16 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:07:16 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:07:16.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:07:17 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:07:17 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:07:17 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:07:17.059 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:07:17 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:07:17.266Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 06 10:07:17 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v783: 337 pgs: 337 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 1023 B/s wr, 3 op/s
Dec 06 10:07:17 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:07:17 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:07:17.949 162267 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=d39b5be8-d4cf-41c7-9a64-1ee03801f4e1, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '5'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 10:07:18 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:07:18 : epoch 69340037 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f32a40023e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:07:18 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:07:18 : epoch 69340037 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f328c002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:07:18 compute-0 ceph-mon[74327]: pgmap v783: 337 pgs: 337 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 1023 B/s wr, 3 op/s
Dec 06 10:07:18 compute-0 nova_compute[254819]: 2025-12-06 10:07:18.794 254824 DEBUG oslo_concurrency.lockutils [None req-3ce5c96a-eaa1-4d77-94c1-f122462b803d 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Acquiring lock "2ef62e22-52fc-44f3-9964-8dc9b3c20686" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:07:18 compute-0 nova_compute[254819]: 2025-12-06 10:07:18.794 254824 DEBUG oslo_concurrency.lockutils [None req-3ce5c96a-eaa1-4d77-94c1-f122462b803d 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lock "2ef62e22-52fc-44f3-9964-8dc9b3c20686" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:07:18 compute-0 nova_compute[254819]: 2025-12-06 10:07:18.800 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:07:18 compute-0 nova_compute[254819]: 2025-12-06 10:07:18.813 254824 DEBUG nova.compute.manager [None req-3ce5c96a-eaa1-4d77-94c1-f122462b803d 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 2ef62e22-52fc-44f3-9964-8dc9b3c20686] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 06 10:07:18 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:07:18.847Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 06 10:07:18 compute-0 nova_compute[254819]: 2025-12-06 10:07:18.900 254824 DEBUG oslo_concurrency.lockutils [None req-3ce5c96a-eaa1-4d77-94c1-f122462b803d 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:07:18 compute-0 nova_compute[254819]: 2025-12-06 10:07:18.901 254824 DEBUG oslo_concurrency.lockutils [None req-3ce5c96a-eaa1-4d77-94c1-f122462b803d 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:07:18 compute-0 nova_compute[254819]: 2025-12-06 10:07:18.914 254824 DEBUG nova.virt.hardware [None req-3ce5c96a-eaa1-4d77-94c1-f122462b803d 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 06 10:07:18 compute-0 nova_compute[254819]: 2025-12-06 10:07:18.915 254824 INFO nova.compute.claims [None req-3ce5c96a-eaa1-4d77-94c1-f122462b803d 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 2ef62e22-52fc-44f3-9964-8dc9b3c20686] Claim successful on node compute-0.ctlplane.example.com
Dec 06 10:07:18 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:07:18 : epoch 69340037 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3288002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:07:18 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:07:18 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:07:18 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:07:18.961 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:07:19 compute-0 nova_compute[254819]: 2025-12-06 10:07:19.048 254824 DEBUG oslo_concurrency.processutils [None req-3ce5c96a-eaa1-4d77-94c1-f122462b803d 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 10:07:19 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:07:19 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:07:19 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:07:19.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:07:19 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v784: 337 pgs: 337 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 1023 B/s wr, 3 op/s
Dec 06 10:07:19 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 06 10:07:19 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1919256904' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 10:07:19 compute-0 nova_compute[254819]: 2025-12-06 10:07:19.538 254824 DEBUG oslo_concurrency.processutils [None req-3ce5c96a-eaa1-4d77-94c1-f122462b803d 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.490s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 10:07:19 compute-0 nova_compute[254819]: 2025-12-06 10:07:19.545 254824 DEBUG nova.compute.provider_tree [None req-3ce5c96a-eaa1-4d77-94c1-f122462b803d 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Inventory has not changed in ProviderTree for provider: 06a9c7d1-c74c-47ea-9e97-16acfab6aa88 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 10:07:19 compute-0 nova_compute[254819]: 2025-12-06 10:07:19.571 254824 DEBUG nova.scheduler.client.report [None req-3ce5c96a-eaa1-4d77-94c1-f122462b803d 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Inventory has not changed for provider 06a9c7d1-c74c-47ea-9e97-16acfab6aa88 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 10:07:19 compute-0 nova_compute[254819]: 2025-12-06 10:07:19.602 254824 DEBUG oslo_concurrency.lockutils [None req-3ce5c96a-eaa1-4d77-94c1-f122462b803d 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.701s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:07:19 compute-0 nova_compute[254819]: 2025-12-06 10:07:19.603 254824 DEBUG nova.compute.manager [None req-3ce5c96a-eaa1-4d77-94c1-f122462b803d 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 2ef62e22-52fc-44f3-9964-8dc9b3c20686] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 06 10:07:19 compute-0 ceph-mon[74327]: from='client.? 192.168.122.100:0/1919256904' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 10:07:19 compute-0 nova_compute[254819]: 2025-12-06 10:07:19.846 254824 DEBUG nova.compute.manager [None req-3ce5c96a-eaa1-4d77-94c1-f122462b803d 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 2ef62e22-52fc-44f3-9964-8dc9b3c20686] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 06 10:07:19 compute-0 nova_compute[254819]: 2025-12-06 10:07:19.847 254824 DEBUG nova.network.neutron [None req-3ce5c96a-eaa1-4d77-94c1-f122462b803d 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 2ef62e22-52fc-44f3-9964-8dc9b3c20686] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 06 10:07:19 compute-0 nova_compute[254819]: 2025-12-06 10:07:19.873 254824 INFO nova.virt.libvirt.driver [None req-3ce5c96a-eaa1-4d77-94c1-f122462b803d 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 2ef62e22-52fc-44f3-9964-8dc9b3c20686] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 06 10:07:19 compute-0 nova_compute[254819]: 2025-12-06 10:07:19.901 254824 DEBUG nova.compute.manager [None req-3ce5c96a-eaa1-4d77-94c1-f122462b803d 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 2ef62e22-52fc-44f3-9964-8dc9b3c20686] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 06 10:07:20 compute-0 nova_compute[254819]: 2025-12-06 10:07:20.016 254824 DEBUG nova.compute.manager [None req-3ce5c96a-eaa1-4d77-94c1-f122462b803d 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 2ef62e22-52fc-44f3-9964-8dc9b3c20686] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 06 10:07:20 compute-0 nova_compute[254819]: 2025-12-06 10:07:20.017 254824 DEBUG nova.virt.libvirt.driver [None req-3ce5c96a-eaa1-4d77-94c1-f122462b803d 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 2ef62e22-52fc-44f3-9964-8dc9b3c20686] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 06 10:07:20 compute-0 nova_compute[254819]: 2025-12-06 10:07:20.018 254824 INFO nova.virt.libvirt.driver [None req-3ce5c96a-eaa1-4d77-94c1-f122462b803d 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 2ef62e22-52fc-44f3-9964-8dc9b3c20686] Creating image(s)
Dec 06 10:07:20 compute-0 nova_compute[254819]: 2025-12-06 10:07:20.047 254824 DEBUG nova.storage.rbd_utils [None req-3ce5c96a-eaa1-4d77-94c1-f122462b803d 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] rbd image 2ef62e22-52fc-44f3-9964-8dc9b3c20686_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 10:07:20 compute-0 nova_compute[254819]: 2025-12-06 10:07:20.076 254824 DEBUG nova.storage.rbd_utils [None req-3ce5c96a-eaa1-4d77-94c1-f122462b803d 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] rbd image 2ef62e22-52fc-44f3-9964-8dc9b3c20686_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 10:07:20 compute-0 nova_compute[254819]: 2025-12-06 10:07:20.107 254824 DEBUG nova.storage.rbd_utils [None req-3ce5c96a-eaa1-4d77-94c1-f122462b803d 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] rbd image 2ef62e22-52fc-44f3-9964-8dc9b3c20686_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 10:07:20 compute-0 nova_compute[254819]: 2025-12-06 10:07:20.111 254824 DEBUG oslo_concurrency.processutils [None req-3ce5c96a-eaa1-4d77-94c1-f122462b803d 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/1b7208203e670301d076a006cb3364d3eb842050 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 10:07:20 compute-0 nova_compute[254819]: 2025-12-06 10:07:20.135 254824 DEBUG nova.policy [None req-3ce5c96a-eaa1-4d77-94c1-f122462b803d 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '03615580775245e6ae335ee9d785611f', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '92b402c8d3e2476abc98be42a1e6d34e', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 06 10:07:20 compute-0 nova_compute[254819]: 2025-12-06 10:07:20.183 254824 DEBUG oslo_concurrency.processutils [None req-3ce5c96a-eaa1-4d77-94c1-f122462b803d 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/1b7208203e670301d076a006cb3364d3eb842050 --force-share --output=json" returned: 0 in 0.073s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
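The two processutils lines above show Nova probing its cached base image with qemu-img under oslo.concurrency's prlimit wrapper (a 1 GiB address-space cap and a 30 s CPU cap). A minimal sketch of the same probe in Python, assuming only that qemu-img is installed and that the hashed base-image path from the log exists:

```python
import json
import os
import subprocess

# Same invocation as the logged CMD, minus the prlimit wrapper:
#   env LC_ALL=C LANG=C qemu-img info <base> --force-share --output=json
BASE = "/var/lib/nova/instances/_base/1b7208203e670301d076a006cb3364d3eb842050"

out = subprocess.run(
    ["qemu-img", "info", BASE, "--force-share", "--output=json"],
    env={**os.environ, "LC_ALL": "C", "LANG": "C"},  # pin the locale as the log does
    capture_output=True, text=True, check=True,
).stdout
info = json.loads(out)
print(info["format"], info["virtual-size"])  # e.g. qcow2 and the size in bytes
```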
Dec 06 10:07:20 compute-0 nova_compute[254819]: 2025-12-06 10:07:20.184 254824 DEBUG oslo_concurrency.lockutils [None req-3ce5c96a-eaa1-4d77-94c1-f122462b803d 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Acquiring lock "1b7208203e670301d076a006cb3364d3eb842050" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:07:20 compute-0 nova_compute[254819]: 2025-12-06 10:07:20.186 254824 DEBUG oslo_concurrency.lockutils [None req-3ce5c96a-eaa1-4d77-94c1-f122462b803d 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lock "1b7208203e670301d076a006cb3364d3eb842050" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:07:20 compute-0 nova_compute[254819]: 2025-12-06 10:07:20.186 254824 DEBUG oslo_concurrency.lockutils [None req-3ce5c96a-eaa1-4d77-94c1-f122462b803d 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lock "1b7208203e670301d076a006cb3364d3eb842050" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:07:20 compute-0 nova_compute[254819]: 2025-12-06 10:07:20.218 254824 DEBUG nova.storage.rbd_utils [None req-3ce5c96a-eaa1-4d77-94c1-f122462b803d 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] rbd image 2ef62e22-52fc-44f3-9964-8dc9b3c20686_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 10:07:20 compute-0 nova_compute[254819]: 2025-12-06 10:07:20.222 254824 DEBUG oslo_concurrency.processutils [None req-3ce5c96a-eaa1-4d77-94c1-f122462b803d 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/1b7208203e670301d076a006cb3364d3eb842050 2ef62e22-52fc-44f3-9964-8dc9b3c20686_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 10:07:20 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:07:20 : epoch 69340037 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3280002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:07:20 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:07:20 : epoch 69340037 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f32a40034e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:07:20 compute-0 nova_compute[254819]: 2025-12-06 10:07:20.487 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:07:20 compute-0 nova_compute[254819]: 2025-12-06 10:07:20.556 254824 DEBUG oslo_concurrency.processutils [None req-3ce5c96a-eaa1-4d77-94c1-f122462b803d 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/1b7208203e670301d076a006cb3364d3eb842050 2ef62e22-52fc-44f3-9964-8dc9b3c20686_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.334s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 10:07:20 compute-0 nova_compute[254819]: 2025-12-06 10:07:20.620 254824 DEBUG nova.storage.rbd_utils [None req-3ce5c96a-eaa1-4d77-94c1-f122462b803d 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] resizing rbd image 2ef62e22-52fc-44f3-9964-8dc9b3c20686_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
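With the RBD image backend, the base image is pushed into the vms pool with rbd import and then grown to the flavor's 1 GiB root disk (the resize to 1073741824 bytes above; nova's rbd_utils does the resize through the Python rbd binding). A sketch of the same two steps using the rbd CLI instead, with the pool, image name, client id and conf path taken verbatim from the logged command:

```python
import subprocess

POOL = "vms"
NAME = "2ef62e22-52fc-44f3-9964-8dc9b3c20686_disk"
BASE = "/var/lib/nova/instances/_base/1b7208203e670301d076a006cb3364d3eb842050"
CEPH = ["--id", "openstack", "--conf", "/etc/ceph/ceph.conf"]

# Import the flat base image as a format-2 RBD image, exactly as logged ...
subprocess.run(["rbd", "import", "--pool", POOL, BASE, NAME,
                "--image-format=2", *CEPH], check=True)
# ... then grow it to the flavor's root disk; 1G == the logged 1073741824 bytes.
subprocess.run(["rbd", "resize", f"{POOL}/{NAME}", "--size", "1G", *CEPH], check=True)
```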
Dec 06 10:07:20 compute-0 nova_compute[254819]: 2025-12-06 10:07:20.721 254824 DEBUG nova.objects.instance [None req-3ce5c96a-eaa1-4d77-94c1-f122462b803d 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lazy-loading 'migration_context' on Instance uuid 2ef62e22-52fc-44f3-9964-8dc9b3c20686 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 10:07:20 compute-0 ceph-mon[74327]: pgmap v784: 337 pgs: 337 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 1023 B/s wr, 3 op/s
Dec 06 10:07:20 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:07:20] "GET /metrics HTTP/1.1" 200 48443 "" "Prometheus/2.51.0"
Dec 06 10:07:20 compute-0 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:07:20] "GET /metrics HTTP/1.1" 200 48443 "" "Prometheus/2.51.0"
Dec 06 10:07:20 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:07:20 : epoch 69340037 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f328c002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:07:20 compute-0 nova_compute[254819]: 2025-12-06 10:07:20.962 254824 DEBUG nova.virt.libvirt.driver [None req-3ce5c96a-eaa1-4d77-94c1-f122462b803d 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 2ef62e22-52fc-44f3-9964-8dc9b3c20686] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 06 10:07:20 compute-0 nova_compute[254819]: 2025-12-06 10:07:20.963 254824 DEBUG nova.virt.libvirt.driver [None req-3ce5c96a-eaa1-4d77-94c1-f122462b803d 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 2ef62e22-52fc-44f3-9964-8dc9b3c20686] Ensure instance console log exists: /var/lib/nova/instances/2ef62e22-52fc-44f3-9964-8dc9b3c20686/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 06 10:07:20 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:07:20 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:07:20 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:07:20.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:07:20 compute-0 nova_compute[254819]: 2025-12-06 10:07:20.963 254824 DEBUG oslo_concurrency.lockutils [None req-3ce5c96a-eaa1-4d77-94c1-f122462b803d 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:07:20 compute-0 nova_compute[254819]: 2025-12-06 10:07:20.964 254824 DEBUG oslo_concurrency.lockutils [None req-3ce5c96a-eaa1-4d77-94c1-f122462b803d 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:07:20 compute-0 nova_compute[254819]: 2025-12-06 10:07:20.965 254824 DEBUG oslo_concurrency.lockutils [None req-3ce5c96a-eaa1-4d77-94c1-f122462b803d 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
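The Acquiring/acquired/released triplet around "vgpu_resources" (and the fetch_func_sync one earlier) is oslo.concurrency's standard lock tracing from lockutils.py. A minimal sketch of the pattern that emits it, assuming the default in-process, non-external lock:

```python
from oslo_concurrency import lockutils

# Entering the decorated function logs "Acquiring lock ..." then
# "Lock ... acquired"; returning logs "Lock ... released" - the same
# triplet seen above, emitted by lockutils.py's inner() wrapper.
@lockutils.synchronized("vgpu_resources")
def allocate_mdevs_sketch():
    return []  # nothing to allocate on a host without vGPUs, as here

allocate_mdevs_sketch()
```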
Dec 06 10:07:21 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:07:21 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:07:21 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:07:21.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:07:21 compute-0 nova_compute[254819]: 2025-12-06 10:07:21.128 254824 DEBUG nova.network.neutron [None req-3ce5c96a-eaa1-4d77-94c1-f122462b803d 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 2ef62e22-52fc-44f3-9964-8dc9b3c20686] Successfully created port: a7f5880e-0fb8-4f37-8a6c-7f0e8558dcf7 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 06 10:07:21 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v785: 337 pgs: 337 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 170 B/s wr, 0 op/s
Dec 06 10:07:22 compute-0 nova_compute[254819]: 2025-12-06 10:07:22.174 254824 DEBUG nova.network.neutron [None req-3ce5c96a-eaa1-4d77-94c1-f122462b803d 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 2ef62e22-52fc-44f3-9964-8dc9b3c20686] Successfully updated port: a7f5880e-0fb8-4f37-8a6c-7f0e8558dcf7 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 06 10:07:22 compute-0 nova_compute[254819]: 2025-12-06 10:07:22.188 254824 DEBUG oslo_concurrency.lockutils [None req-3ce5c96a-eaa1-4d77-94c1-f122462b803d 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Acquiring lock "refresh_cache-2ef62e22-52fc-44f3-9964-8dc9b3c20686" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 10:07:22 compute-0 nova_compute[254819]: 2025-12-06 10:07:22.188 254824 DEBUG oslo_concurrency.lockutils [None req-3ce5c96a-eaa1-4d77-94c1-f122462b803d 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Acquired lock "refresh_cache-2ef62e22-52fc-44f3-9964-8dc9b3c20686" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 10:07:22 compute-0 nova_compute[254819]: 2025-12-06 10:07:22.188 254824 DEBUG nova.network.neutron [None req-3ce5c96a-eaa1-4d77-94c1-f122462b803d 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 2ef62e22-52fc-44f3-9964-8dc9b3c20686] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 06 10:07:22 compute-0 nova_compute[254819]: 2025-12-06 10:07:22.318 254824 DEBUG nova.compute.manager [req-8995ccd3-fc20-4525-9bf0-cbf15b074d89 req-3049b225-a5ec-45c4-8487-c20a172f5e30 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 2ef62e22-52fc-44f3-9964-8dc9b3c20686] Received event network-changed-a7f5880e-0fb8-4f37-8a6c-7f0e8558dcf7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 10:07:22 compute-0 nova_compute[254819]: 2025-12-06 10:07:22.319 254824 DEBUG nova.compute.manager [req-8995ccd3-fc20-4525-9bf0-cbf15b074d89 req-3049b225-a5ec-45c4-8487-c20a172f5e30 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 2ef62e22-52fc-44f3-9964-8dc9b3c20686] Refreshing instance network info cache due to event network-changed-a7f5880e-0fb8-4f37-8a6c-7f0e8558dcf7. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 10:07:22 compute-0 nova_compute[254819]: 2025-12-06 10:07:22.319 254824 DEBUG oslo_concurrency.lockutils [req-8995ccd3-fc20-4525-9bf0-cbf15b074d89 req-3049b225-a5ec-45c4-8487-c20a172f5e30 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Acquiring lock "refresh_cache-2ef62e22-52fc-44f3-9964-8dc9b3c20686" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 10:07:22 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:07:22 : epoch 69340037 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3288002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:07:22 compute-0 nova_compute[254819]: 2025-12-06 10:07:22.409 254824 DEBUG nova.network.neutron [None req-3ce5c96a-eaa1-4d77-94c1-f122462b803d 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 2ef62e22-52fc-44f3-9964-8dc9b3c20686] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 06 10:07:22 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:07:22 : epoch 69340037 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3280002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:07:22 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:07:22 compute-0 ceph-mon[74327]: pgmap v785: 337 pgs: 337 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 170 B/s wr, 0 op/s
Dec 06 10:07:22 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:07:22 : epoch 69340037 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f32a40034e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:07:22 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:07:22 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:07:22 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:07:22.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:07:23 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:07:23 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:07:23 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:07:23.070 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:07:23 compute-0 nova_compute[254819]: 2025-12-06 10:07:23.322 254824 DEBUG nova.network.neutron [None req-3ce5c96a-eaa1-4d77-94c1-f122462b803d 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 2ef62e22-52fc-44f3-9964-8dc9b3c20686] Updating instance_info_cache with network_info: [{"id": "a7f5880e-0fb8-4f37-8a6c-7f0e8558dcf7", "address": "fa:16:3e:6c:29:20", "network": {"id": "4d9eb8be-73ac-4cfc-8821-fb41b5868957", "bridge": "br-int", "label": "tempest-network-smoke--165851366", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa7f5880e-0f", "ovs_interfaceid": "a7f5880e-0fb8-4f37-8a6c-7f0e8558dcf7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
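Stripped of the surrounding log text, the network_info payload above is ordinary JSON, so the fields usually needed when debugging a port binding (port id, MAC, fixed IPs, MTU) can be pulled out directly. A sketch over a copy of the logged payload trimmed to just those fields:

```python
import json

# Trimmed from the logged network_info; values are verbatim from the log.
raw = """[{"id": "a7f5880e-0fb8-4f37-8a6c-7f0e8558dcf7",
           "address": "fa:16:3e:6c:29:20",
           "network": {"subnets": [{"cidr": "10.100.0.0/28",
                                    "ips": [{"address": "10.100.0.12"}]}],
                       "meta": {"mtu": 1442}}}]"""

vif = json.loads(raw)[0]
ips = [ip["address"] for s in vif["network"]["subnets"] for ip in s["ips"]]
print(vif["id"], vif["address"], ips, vif["network"]["meta"]["mtu"])
# -> a7f5880e-0fb8-4f37-8a6c-7f0e8558dcf7 fa:16:3e:6c:29:20 ['10.100.0.12'] 1442
```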
Dec 06 10:07:23 compute-0 nova_compute[254819]: 2025-12-06 10:07:23.351 254824 DEBUG oslo_concurrency.lockutils [None req-3ce5c96a-eaa1-4d77-94c1-f122462b803d 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Releasing lock "refresh_cache-2ef62e22-52fc-44f3-9964-8dc9b3c20686" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 10:07:23 compute-0 nova_compute[254819]: 2025-12-06 10:07:23.351 254824 DEBUG nova.compute.manager [None req-3ce5c96a-eaa1-4d77-94c1-f122462b803d 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 2ef62e22-52fc-44f3-9964-8dc9b3c20686] Instance network_info: |[{"id": "a7f5880e-0fb8-4f37-8a6c-7f0e8558dcf7", "address": "fa:16:3e:6c:29:20", "network": {"id": "4d9eb8be-73ac-4cfc-8821-fb41b5868957", "bridge": "br-int", "label": "tempest-network-smoke--165851366", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa7f5880e-0f", "ovs_interfaceid": "a7f5880e-0fb8-4f37-8a6c-7f0e8558dcf7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 06 10:07:23 compute-0 nova_compute[254819]: 2025-12-06 10:07:23.352 254824 DEBUG oslo_concurrency.lockutils [req-8995ccd3-fc20-4525-9bf0-cbf15b074d89 req-3049b225-a5ec-45c4-8487-c20a172f5e30 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Acquired lock "refresh_cache-2ef62e22-52fc-44f3-9964-8dc9b3c20686" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 10:07:23 compute-0 nova_compute[254819]: 2025-12-06 10:07:23.353 254824 DEBUG nova.network.neutron [req-8995ccd3-fc20-4525-9bf0-cbf15b074d89 req-3049b225-a5ec-45c4-8487-c20a172f5e30 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 2ef62e22-52fc-44f3-9964-8dc9b3c20686] Refreshing network info cache for port a7f5880e-0fb8-4f37-8a6c-7f0e8558dcf7 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 06 10:07:23 compute-0 nova_compute[254819]: 2025-12-06 10:07:23.359 254824 DEBUG nova.virt.libvirt.driver [None req-3ce5c96a-eaa1-4d77-94c1-f122462b803d 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 2ef62e22-52fc-44f3-9964-8dc9b3c20686] Start _get_guest_xml network_info=[{"id": "a7f5880e-0fb8-4f37-8a6c-7f0e8558dcf7", "address": "fa:16:3e:6c:29:20", "network": {"id": "4d9eb8be-73ac-4cfc-8821-fb41b5868957", "bridge": "br-int", "label": "tempest-network-smoke--165851366", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa7f5880e-0f", "ovs_interfaceid": "a7f5880e-0fb8-4f37-8a6c-7f0e8558dcf7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T10:04:42Z,direct_url=<?>,disk_format='qcow2',id=9489b8a5-a798-4e26-87f9-59bb1eb2e6fd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3e0ab101ca7547d4a515169a0f2edef3',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T10:04:45Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'encrypted': False, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '9489b8a5-a798-4e26-87f9-59bb1eb2e6fd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 06 10:07:23 compute-0 nova_compute[254819]: 2025-12-06 10:07:23.365 254824 WARNING nova.virt.libvirt.driver [None req-3ce5c96a-eaa1-4d77-94c1-f122462b803d 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 10:07:23 compute-0 nova_compute[254819]: 2025-12-06 10:07:23.370 254824 DEBUG nova.virt.libvirt.host [None req-3ce5c96a-eaa1-4d77-94c1-f122462b803d 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 06 10:07:23 compute-0 nova_compute[254819]: 2025-12-06 10:07:23.370 254824 DEBUG nova.virt.libvirt.host [None req-3ce5c96a-eaa1-4d77-94c1-f122462b803d 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 06 10:07:23 compute-0 nova_compute[254819]: 2025-12-06 10:07:23.377 254824 DEBUG nova.virt.libvirt.host [None req-3ce5c96a-eaa1-4d77-94c1-f122462b803d 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 06 10:07:23 compute-0 nova_compute[254819]: 2025-12-06 10:07:23.377 254824 DEBUG nova.virt.libvirt.host [None req-3ce5c96a-eaa1-4d77-94c1-f122462b803d 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 06 10:07:23 compute-0 nova_compute[254819]: 2025-12-06 10:07:23.378 254824 DEBUG nova.virt.libvirt.driver [None req-3ce5c96a-eaa1-4d77-94c1-f122462b803d 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 06 10:07:23 compute-0 nova_compute[254819]: 2025-12-06 10:07:23.378 254824 DEBUG nova.virt.hardware [None req-3ce5c96a-eaa1-4d77-94c1-f122462b803d 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T10:04:41Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='0a252b9c-cc5f-41b2-a8b2-94fcf6e74d22',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T10:04:42Z,direct_url=<?>,disk_format='qcow2',id=9489b8a5-a798-4e26-87f9-59bb1eb2e6fd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3e0ab101ca7547d4a515169a0f2edef3',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T10:04:45Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 06 10:07:23 compute-0 nova_compute[254819]: 2025-12-06 10:07:23.378 254824 DEBUG nova.virt.hardware [None req-3ce5c96a-eaa1-4d77-94c1-f122462b803d 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 06 10:07:23 compute-0 nova_compute[254819]: 2025-12-06 10:07:23.379 254824 DEBUG nova.virt.hardware [None req-3ce5c96a-eaa1-4d77-94c1-f122462b803d 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 06 10:07:23 compute-0 nova_compute[254819]: 2025-12-06 10:07:23.379 254824 DEBUG nova.virt.hardware [None req-3ce5c96a-eaa1-4d77-94c1-f122462b803d 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 06 10:07:23 compute-0 nova_compute[254819]: 2025-12-06 10:07:23.379 254824 DEBUG nova.virt.hardware [None req-3ce5c96a-eaa1-4d77-94c1-f122462b803d 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 06 10:07:23 compute-0 nova_compute[254819]: 2025-12-06 10:07:23.379 254824 DEBUG nova.virt.hardware [None req-3ce5c96a-eaa1-4d77-94c1-f122462b803d 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 06 10:07:23 compute-0 nova_compute[254819]: 2025-12-06 10:07:23.379 254824 DEBUG nova.virt.hardware [None req-3ce5c96a-eaa1-4d77-94c1-f122462b803d 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 06 10:07:23 compute-0 nova_compute[254819]: 2025-12-06 10:07:23.380 254824 DEBUG nova.virt.hardware [None req-3ce5c96a-eaa1-4d77-94c1-f122462b803d 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 06 10:07:23 compute-0 nova_compute[254819]: 2025-12-06 10:07:23.380 254824 DEBUG nova.virt.hardware [None req-3ce5c96a-eaa1-4d77-94c1-f122462b803d 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 06 10:07:23 compute-0 nova_compute[254819]: 2025-12-06 10:07:23.380 254824 DEBUG nova.virt.hardware [None req-3ce5c96a-eaa1-4d77-94c1-f122462b803d 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 06 10:07:23 compute-0 nova_compute[254819]: 2025-12-06 10:07:23.380 254824 DEBUG nova.virt.hardware [None req-3ce5c96a-eaa1-4d77-94c1-f122462b803d 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 06 10:07:23 compute-0 nova_compute[254819]: 2025-12-06 10:07:23.383 254824 DEBUG oslo_concurrency.processutils [None req-3ce5c96a-eaa1-4d77-94c1-f122462b803d 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 10:07:23 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v786: 337 pgs: 337 active+clean; 88 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Dec 06 10:07:23 compute-0 nova_compute[254819]: 2025-12-06 10:07:23.803 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:07:23 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 06 10:07:23 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4093221170' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 10:07:23 compute-0 ceph-mgr[74618]: [balancer INFO root] Optimize plan auto_2025-12-06_10:07:23
Dec 06 10:07:23 compute-0 ceph-mgr[74618]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 06 10:07:23 compute-0 ceph-mgr[74618]: [balancer INFO root] do_upmap
Dec 06 10:07:23 compute-0 ceph-mgr[74618]: [balancer INFO root] pools ['.nfs', 'default.rgw.log', 'vms', '.rgw.root', 'volumes', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'default.rgw.meta', '.mgr', 'images', 'backups', 'default.rgw.control']
Dec 06 10:07:23 compute-0 ceph-mgr[74618]: [balancer INFO root] prepared 0/10 upmap changes
Dec 06 10:07:23 compute-0 nova_compute[254819]: 2025-12-06 10:07:23.869 254824 DEBUG oslo_concurrency.processutils [None req-3ce5c96a-eaa1-4d77-94c1-f122462b803d 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.486s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 10:07:23 compute-0 ceph-mon[74327]: from='client.? 192.168.122.100:0/4093221170' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
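Nova's RBD driver shells out to "ceph mon dump --format=json" to learn the monitor addresses it will embed in the guest's disk XML; on this converged host the mon's audit log shows the same command being dispatched. A sketch of the query; the JSON field names below are an assumption, as their exact layout varies across Ceph releases:

```python
import json
import subprocess

out = subprocess.run(
    ["ceph", "mon", "dump", "--format=json",
     "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
    capture_output=True, text=True, check=True,
).stdout
dump = json.loads(out)
# Assumed layout: each entry in "mons" carries the mon's name and address.
for mon in dump["mons"]:
    print(mon["name"], mon.get("public_addr") or mon.get("addr"))
```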
Dec 06 10:07:23 compute-0 nova_compute[254819]: 2025-12-06 10:07:23.908 254824 DEBUG nova.storage.rbd_utils [None req-3ce5c96a-eaa1-4d77-94c1-f122462b803d 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] rbd image 2ef62e22-52fc-44f3-9964-8dc9b3c20686_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 10:07:23 compute-0 ceph-mon[74327]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #54. Immutable memtables: 0.
Dec 06 10:07:23 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:07:23.909717) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 06 10:07:23 compute-0 ceph-mon[74327]: rocksdb: [db/flush_job.cc:856] [default] [JOB 27] Flushing memtable with next log file: 54
Dec 06 10:07:23 compute-0 ceph-mon[74327]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765015643909800, "job": 27, "event": "flush_started", "num_memtables": 1, "num_entries": 1098, "num_deletes": 251, "total_data_size": 1883115, "memory_usage": 1915712, "flush_reason": "Manual Compaction"}
Dec 06 10:07:23 compute-0 ceph-mon[74327]: rocksdb: [db/flush_job.cc:885] [default] [JOB 27] Level-0 flush table #55: started
Dec 06 10:07:23 compute-0 nova_compute[254819]: 2025-12-06 10:07:23.914 254824 DEBUG oslo_concurrency.processutils [None req-3ce5c96a-eaa1-4d77-94c1-f122462b803d 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 10:07:23 compute-0 ceph-mon[74327]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765015643929674, "cf_name": "default", "job": 27, "event": "table_file_creation", "file_number": 55, "file_size": 1820143, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 23758, "largest_seqno": 24855, "table_properties": {"data_size": 1815003, "index_size": 2600, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1477, "raw_key_size": 11653, "raw_average_key_size": 19, "raw_value_size": 1804422, "raw_average_value_size": 3084, "num_data_blocks": 116, "num_entries": 585, "num_filter_entries": 585, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765015553, "oldest_key_time": 1765015553, "file_creation_time": 1765015643, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "423e8366-3852-4d2b-aa53-87abab31aff3", "db_session_id": "4WBX5WA2U4DRQ0QUUFCR", "orig_file_number": 55, "seqno_to_time_mapping": "N/A"}}
Dec 06 10:07:23 compute-0 ceph-mon[74327]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 27] Flush lasted 19986 microseconds, and 8497 cpu microseconds.
Dec 06 10:07:23 compute-0 ceph-mon[74327]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 10:07:23 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 06 10:07:23 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 10:07:23 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:07:23.929713) [db/flush_job.cc:967] [default] [JOB 27] Level-0 flush table #55: 1820143 bytes OK
Dec 06 10:07:23 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:07:23.929733) [db/memtable_list.cc:519] [default] Level-0 commit table #55 started
Dec 06 10:07:23 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:07:23.932558) [db/memtable_list.cc:722] [default] Level-0 commit table #55: memtable #1 done
Dec 06 10:07:23 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:07:23.932573) EVENT_LOG_v1 {"time_micros": 1765015643932568, "job": 27, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 06 10:07:23 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:07:23.932593) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 06 10:07:23 compute-0 ceph-mon[74327]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 27] Try to delete WAL files size 1878082, prev total WAL file size 1878082, number of live WAL files 2.
Dec 06 10:07:23 compute-0 ceph-mon[74327]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000051.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 10:07:23 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:07:23.933497) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031373537' seq:72057594037927935, type:22 .. '7061786F730032303039' seq:0, type:0; will stop at (end)
Dec 06 10:07:23 compute-0 ceph-mon[74327]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 28] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 06 10:07:23 compute-0 ceph-mon[74327]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 27 Base level 0, inputs: [55(1777KB)], [53(13MB)]
Dec 06 10:07:23 compute-0 ceph-mon[74327]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765015643933557, "job": 28, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [55], "files_L6": [53], "score": -1, "input_data_size": 15534641, "oldest_snapshot_seqno": -1}
Dec 06 10:07:23 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 10:07:23 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 10:07:24 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 10:07:24 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 10:07:24 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 10:07:24 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 10:07:24 compute-0 ceph-mon[74327]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 28] Generated table #56: 5430 keys, 13333901 bytes, temperature: kUnknown
Dec 06 10:07:24 compute-0 ceph-mon[74327]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765015644105422, "cf_name": "default", "job": 28, "event": "table_file_creation", "file_number": 56, "file_size": 13333901, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13297640, "index_size": 21559, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13637, "raw_key_size": 139361, "raw_average_key_size": 25, "raw_value_size": 13199277, "raw_average_value_size": 2430, "num_data_blocks": 875, "num_entries": 5430, "num_filter_entries": 5430, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765013861, "oldest_key_time": 0, "file_creation_time": 1765015643, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "423e8366-3852-4d2b-aa53-87abab31aff3", "db_session_id": "4WBX5WA2U4DRQ0QUUFCR", "orig_file_number": 56, "seqno_to_time_mapping": "N/A"}}
Dec 06 10:07:24 compute-0 ceph-mon[74327]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 10:07:24 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:07:24.105914) [db/compaction/compaction_job.cc:1663] [default] [JOB 28] Compacted 1@0 + 1@6 files to L6 => 13333901 bytes
Dec 06 10:07:24 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:07:24.107645) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 90.2 rd, 77.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.7, 13.1 +0.0 blob) out(12.7 +0.0 blob), read-write-amplify(15.9) write-amplify(7.3) OK, records in: 5947, records dropped: 517 output_compression: NoCompression
Dec 06 10:07:24 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:07:24.107701) EVENT_LOG_v1 {"time_micros": 1765015644107656, "job": 28, "event": "compaction_finished", "compaction_time_micros": 172166, "compaction_time_cpu_micros": 26388, "output_level": 6, "num_output_files": 1, "total_output_size": 13333901, "num_input_records": 5947, "num_output_records": 5430, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 06 10:07:24 compute-0 ceph-mon[74327]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000055.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 10:07:24 compute-0 ceph-mon[74327]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765015644108539, "job": 28, "event": "table_file_deletion", "file_number": 55}
Dec 06 10:07:24 compute-0 ceph-mon[74327]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000053.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 10:07:24 compute-0 ceph-mon[74327]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765015644111625, "job": 28, "event": "table_file_deletion", "file_number": 53}
Dec 06 10:07:24 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:07:23.933394) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 10:07:24 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:07:24.111810) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 10:07:24 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:07:24.111817) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 10:07:24 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:07:24.111820) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 10:07:24 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:07:24.111823) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 10:07:24 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:07:24.111825) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 10:07:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] _maybe_adjust
Dec 06 10:07:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:07:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 06 10:07:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:07:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0003459970412515465 of space, bias 1.0, pg target 0.10379911237546395 quantized to 32 (current 32)
Dec 06 10:07:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:07:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 10:07:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:07:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 10:07:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:07:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Dec 06 10:07:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:07:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Dec 06 10:07:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:07:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 10:07:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:07:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec 06 10:07:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:07:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 06 10:07:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:07:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec 06 10:07:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:07:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 10:07:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:07:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
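Each pg_autoscaler line above fits pg_target = used_fraction * bias * pg_budget, where the budget is mon_target_pg_per_osd times the OSD count. The logged values are consistent with the default 100 PGs per OSD on a 3-OSD, 60 GiB cluster (budget 300); that OSD count is an assumption about this deployment, not something the log states. Reproducing two of the logged targets:

```python
budget = 100 * 3  # assumed: default mon_target_pg_per_osd=100 on 3 OSDs

# Pool 'vms': used fraction 0.0003459970412515465, bias 1.0.
print(0.0003459970412515465 * 1.0 * budget)  # 0.10379911237546395, as logged
# Pool 'cephfs.cephfs.meta': metadata pools get a 4.0 bias.
print(5.087256625643029e-07 * 4.0 * budget)  # 0.0006104707950771635, as logged
```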
Dec 06 10:07:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 06 10:07:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 10:07:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 10:07:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 10:07:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 10:07:24 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:07:24 : epoch 69340037 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f328c002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:07:24 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 06 10:07:24 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/175523667' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 10:07:24 compute-0 nova_compute[254819]: 2025-12-06 10:07:24.440 254824 DEBUG oslo_concurrency.processutils [None req-3ce5c96a-eaa1-4d77-94c1-f122462b803d 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.527s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 10:07:24 compute-0 nova_compute[254819]: 2025-12-06 10:07:24.442 254824 DEBUG nova.virt.libvirt.vif [None req-3ce5c96a-eaa1-4d77-94c1-f122462b803d 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T10:07:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1205802956',display_name='tempest-TestNetworkBasicOps-server-1205802956',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1205802956',id=3,image_ref='9489b8a5-a798-4e26-87f9-59bb1eb2e6fd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJ5T1qcHH05a9NmUaQjnoDRANzOfCWA0bQySUh/2laJiduU/bwXdkcdraO/GcO81J8j8CnPS5RyrjJyMRbGp/po0cthjI8Tgw893oNF7dd79URxvc2r73z8/7tKvZVwU9A==',key_name='tempest-TestNetworkBasicOps-2032054379',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='92b402c8d3e2476abc98be42a1e6d34e',ramdisk_id='',reservation_id='r-hrg57eo7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='9489b8a5-a798-4e26-87f9-59bb1eb2e6fd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1971100882',owner_user_name='tempest-TestNetworkBasicOps-1971100882-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T10:07:19Z,user_data=None,user_id='03615580775245e6ae335ee9d785611f',uuid=2ef62e22-52fc-44f3-9964-8dc9b3c20686,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a7f5880e-0fb8-4f37-8a6c-7f0e8558dcf7", "address": "fa:16:3e:6c:29:20", "network": {"id": "4d9eb8be-73ac-4cfc-8821-fb41b5868957", "bridge": "br-int", "label": "tempest-network-smoke--165851366", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa7f5880e-0f", "ovs_interfaceid": "a7f5880e-0fb8-4f37-8a6c-7f0e8558dcf7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 06 10:07:24 compute-0 nova_compute[254819]: 2025-12-06 10:07:24.443 254824 DEBUG nova.network.os_vif_util [None req-3ce5c96a-eaa1-4d77-94c1-f122462b803d 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Converting VIF {"id": "a7f5880e-0fb8-4f37-8a6c-7f0e8558dcf7", "address": "fa:16:3e:6c:29:20", "network": {"id": "4d9eb8be-73ac-4cfc-8821-fb41b5868957", "bridge": "br-int", "label": "tempest-network-smoke--165851366", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa7f5880e-0f", "ovs_interfaceid": "a7f5880e-0fb8-4f37-8a6c-7f0e8558dcf7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 10:07:24 compute-0 nova_compute[254819]: 2025-12-06 10:07:24.443 254824 DEBUG nova.network.os_vif_util [None req-3ce5c96a-eaa1-4d77-94c1-f122462b803d 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6c:29:20,bridge_name='br-int',has_traffic_filtering=True,id=a7f5880e-0fb8-4f37-8a6c-7f0e8558dcf7,network=Network(4d9eb8be-73ac-4cfc-8821-fb41b5868957),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa7f5880e-0f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 10:07:24 compute-0 nova_compute[254819]: 2025-12-06 10:07:24.444 254824 DEBUG nova.objects.instance [None req-3ce5c96a-eaa1-4d77-94c1-f122462b803d 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lazy-loading 'pci_devices' on Instance uuid 2ef62e22-52fc-44f3-9964-8dc9b3c20686 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 10:07:24 compute-0 nova_compute[254819]: 2025-12-06 10:07:24.461 254824 DEBUG nova.virt.libvirt.driver [None req-3ce5c96a-eaa1-4d77-94c1-f122462b803d 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 2ef62e22-52fc-44f3-9964-8dc9b3c20686] End _get_guest_xml xml=<domain type="kvm">
Dec 06 10:07:24 compute-0 nova_compute[254819]:   <uuid>2ef62e22-52fc-44f3-9964-8dc9b3c20686</uuid>
Dec 06 10:07:24 compute-0 nova_compute[254819]:   <name>instance-00000003</name>
Dec 06 10:07:24 compute-0 nova_compute[254819]:   <memory>131072</memory>
Dec 06 10:07:24 compute-0 nova_compute[254819]:   <vcpu>1</vcpu>
Dec 06 10:07:24 compute-0 nova_compute[254819]:   <metadata>
Dec 06 10:07:24 compute-0 nova_compute[254819]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 06 10:07:24 compute-0 nova_compute[254819]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 06 10:07:24 compute-0 nova_compute[254819]:       <nova:name>tempest-TestNetworkBasicOps-server-1205802956</nova:name>
Dec 06 10:07:24 compute-0 nova_compute[254819]:       <nova:creationTime>2025-12-06 10:07:23</nova:creationTime>
Dec 06 10:07:24 compute-0 nova_compute[254819]:       <nova:flavor name="m1.nano">
Dec 06 10:07:24 compute-0 nova_compute[254819]:         <nova:memory>128</nova:memory>
Dec 06 10:07:24 compute-0 nova_compute[254819]:         <nova:disk>1</nova:disk>
Dec 06 10:07:24 compute-0 nova_compute[254819]:         <nova:swap>0</nova:swap>
Dec 06 10:07:24 compute-0 nova_compute[254819]:         <nova:ephemeral>0</nova:ephemeral>
Dec 06 10:07:24 compute-0 nova_compute[254819]:         <nova:vcpus>1</nova:vcpus>
Dec 06 10:07:24 compute-0 nova_compute[254819]:       </nova:flavor>
Dec 06 10:07:24 compute-0 nova_compute[254819]:       <nova:owner>
Dec 06 10:07:24 compute-0 nova_compute[254819]:         <nova:user uuid="03615580775245e6ae335ee9d785611f">tempest-TestNetworkBasicOps-1971100882-project-member</nova:user>
Dec 06 10:07:24 compute-0 nova_compute[254819]:         <nova:project uuid="92b402c8d3e2476abc98be42a1e6d34e">tempest-TestNetworkBasicOps-1971100882</nova:project>
Dec 06 10:07:24 compute-0 nova_compute[254819]:       </nova:owner>
Dec 06 10:07:24 compute-0 nova_compute[254819]:       <nova:root type="image" uuid="9489b8a5-a798-4e26-87f9-59bb1eb2e6fd"/>
Dec 06 10:07:24 compute-0 nova_compute[254819]:       <nova:ports>
Dec 06 10:07:24 compute-0 nova_compute[254819]:         <nova:port uuid="a7f5880e-0fb8-4f37-8a6c-7f0e8558dcf7">
Dec 06 10:07:24 compute-0 nova_compute[254819]:           <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Dec 06 10:07:24 compute-0 nova_compute[254819]:         </nova:port>
Dec 06 10:07:24 compute-0 nova_compute[254819]:       </nova:ports>
Dec 06 10:07:24 compute-0 nova_compute[254819]:     </nova:instance>
Dec 06 10:07:24 compute-0 nova_compute[254819]:   </metadata>
Dec 06 10:07:24 compute-0 nova_compute[254819]:   <sysinfo type="smbios">
Dec 06 10:07:24 compute-0 nova_compute[254819]:     <system>
Dec 06 10:07:24 compute-0 nova_compute[254819]:       <entry name="manufacturer">RDO</entry>
Dec 06 10:07:24 compute-0 nova_compute[254819]:       <entry name="product">OpenStack Compute</entry>
Dec 06 10:07:24 compute-0 nova_compute[254819]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 06 10:07:24 compute-0 nova_compute[254819]:       <entry name="serial">2ef62e22-52fc-44f3-9964-8dc9b3c20686</entry>
Dec 06 10:07:24 compute-0 nova_compute[254819]:       <entry name="uuid">2ef62e22-52fc-44f3-9964-8dc9b3c20686</entry>
Dec 06 10:07:24 compute-0 nova_compute[254819]:       <entry name="family">Virtual Machine</entry>
Dec 06 10:07:24 compute-0 nova_compute[254819]:     </system>
Dec 06 10:07:24 compute-0 nova_compute[254819]:   </sysinfo>
Dec 06 10:07:24 compute-0 nova_compute[254819]:   <os>
Dec 06 10:07:24 compute-0 nova_compute[254819]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 06 10:07:24 compute-0 nova_compute[254819]:     <boot dev="hd"/>
Dec 06 10:07:24 compute-0 nova_compute[254819]:     <smbios mode="sysinfo"/>
Dec 06 10:07:24 compute-0 nova_compute[254819]:   </os>
Dec 06 10:07:24 compute-0 nova_compute[254819]:   <features>
Dec 06 10:07:24 compute-0 nova_compute[254819]:     <acpi/>
Dec 06 10:07:24 compute-0 nova_compute[254819]:     <apic/>
Dec 06 10:07:24 compute-0 nova_compute[254819]:     <vmcoreinfo/>
Dec 06 10:07:24 compute-0 nova_compute[254819]:   </features>
Dec 06 10:07:24 compute-0 nova_compute[254819]:   <clock offset="utc">
Dec 06 10:07:24 compute-0 nova_compute[254819]:     <timer name="pit" tickpolicy="delay"/>
Dec 06 10:07:24 compute-0 nova_compute[254819]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 06 10:07:24 compute-0 nova_compute[254819]:     <timer name="hpet" present="no"/>
Dec 06 10:07:24 compute-0 nova_compute[254819]:   </clock>
Dec 06 10:07:24 compute-0 nova_compute[254819]:   <cpu mode="host-model" match="exact">
Dec 06 10:07:24 compute-0 nova_compute[254819]:     <topology sockets="1" cores="1" threads="1"/>
Dec 06 10:07:24 compute-0 nova_compute[254819]:   </cpu>
Dec 06 10:07:24 compute-0 nova_compute[254819]:   <devices>
Dec 06 10:07:24 compute-0 nova_compute[254819]:     <disk type="network" device="disk">
Dec 06 10:07:24 compute-0 nova_compute[254819]:       <driver type="raw" cache="none"/>
Dec 06 10:07:24 compute-0 nova_compute[254819]:       <source protocol="rbd" name="vms/2ef62e22-52fc-44f3-9964-8dc9b3c20686_disk">
Dec 06 10:07:24 compute-0 nova_compute[254819]:         <host name="192.168.122.100" port="6789"/>
Dec 06 10:07:24 compute-0 nova_compute[254819]:         <host name="192.168.122.102" port="6789"/>
Dec 06 10:07:24 compute-0 nova_compute[254819]:         <host name="192.168.122.101" port="6789"/>
Dec 06 10:07:24 compute-0 nova_compute[254819]:       </source>
Dec 06 10:07:24 compute-0 nova_compute[254819]:       <auth username="openstack">
Dec 06 10:07:24 compute-0 nova_compute[254819]:         <secret type="ceph" uuid="5ecd3f74-dade-5fc4-92ce-8950ae424258"/>
Dec 06 10:07:24 compute-0 nova_compute[254819]:       </auth>
Dec 06 10:07:24 compute-0 nova_compute[254819]:       <target dev="vda" bus="virtio"/>
Dec 06 10:07:24 compute-0 nova_compute[254819]:     </disk>
Dec 06 10:07:24 compute-0 nova_compute[254819]:     <disk type="network" device="cdrom">
Dec 06 10:07:24 compute-0 nova_compute[254819]:       <driver type="raw" cache="none"/>
Dec 06 10:07:24 compute-0 nova_compute[254819]:       <source protocol="rbd" name="vms/2ef62e22-52fc-44f3-9964-8dc9b3c20686_disk.config">
Dec 06 10:07:24 compute-0 nova_compute[254819]:         <host name="192.168.122.100" port="6789"/>
Dec 06 10:07:24 compute-0 nova_compute[254819]:         <host name="192.168.122.102" port="6789"/>
Dec 06 10:07:24 compute-0 nova_compute[254819]:         <host name="192.168.122.101" port="6789"/>
Dec 06 10:07:24 compute-0 nova_compute[254819]:       </source>
Dec 06 10:07:24 compute-0 nova_compute[254819]:       <auth username="openstack">
Dec 06 10:07:24 compute-0 nova_compute[254819]:         <secret type="ceph" uuid="5ecd3f74-dade-5fc4-92ce-8950ae424258"/>
Dec 06 10:07:24 compute-0 nova_compute[254819]:       </auth>
Dec 06 10:07:24 compute-0 nova_compute[254819]:       <target dev="sda" bus="sata"/>
Dec 06 10:07:24 compute-0 nova_compute[254819]:     </disk>
Dec 06 10:07:24 compute-0 nova_compute[254819]:     <interface type="ethernet">
Dec 06 10:07:24 compute-0 nova_compute[254819]:       <mac address="fa:16:3e:6c:29:20"/>
Dec 06 10:07:24 compute-0 nova_compute[254819]:       <model type="virtio"/>
Dec 06 10:07:24 compute-0 nova_compute[254819]:       <driver name="vhost" rx_queue_size="512"/>
Dec 06 10:07:24 compute-0 nova_compute[254819]:       <mtu size="1442"/>
Dec 06 10:07:24 compute-0 nova_compute[254819]:       <target dev="tapa7f5880e-0f"/>
Dec 06 10:07:24 compute-0 nova_compute[254819]:     </interface>
Dec 06 10:07:24 compute-0 nova_compute[254819]:     <serial type="pty">
Dec 06 10:07:24 compute-0 nova_compute[254819]:       <log file="/var/lib/nova/instances/2ef62e22-52fc-44f3-9964-8dc9b3c20686/console.log" append="off"/>
Dec 06 10:07:24 compute-0 nova_compute[254819]:     </serial>
Dec 06 10:07:24 compute-0 nova_compute[254819]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 06 10:07:24 compute-0 nova_compute[254819]:     <video>
Dec 06 10:07:24 compute-0 nova_compute[254819]:       <model type="virtio"/>
Dec 06 10:07:24 compute-0 nova_compute[254819]:     </video>
Dec 06 10:07:24 compute-0 nova_compute[254819]:     <input type="tablet" bus="usb"/>
Dec 06 10:07:24 compute-0 nova_compute[254819]:     <rng model="virtio">
Dec 06 10:07:24 compute-0 nova_compute[254819]:       <backend model="random">/dev/urandom</backend>
Dec 06 10:07:24 compute-0 nova_compute[254819]:     </rng>
Dec 06 10:07:24 compute-0 nova_compute[254819]:     <controller type="pci" model="pcie-root"/>
Dec 06 10:07:24 compute-0 nova_compute[254819]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:07:24 compute-0 nova_compute[254819]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:07:24 compute-0 nova_compute[254819]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:07:24 compute-0 nova_compute[254819]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:07:24 compute-0 nova_compute[254819]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:07:24 compute-0 nova_compute[254819]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:07:24 compute-0 nova_compute[254819]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:07:24 compute-0 nova_compute[254819]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:07:24 compute-0 nova_compute[254819]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:07:24 compute-0 nova_compute[254819]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:07:24 compute-0 nova_compute[254819]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:07:24 compute-0 nova_compute[254819]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:07:24 compute-0 nova_compute[254819]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:07:24 compute-0 nova_compute[254819]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:07:24 compute-0 nova_compute[254819]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:07:24 compute-0 nova_compute[254819]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:07:24 compute-0 nova_compute[254819]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:07:24 compute-0 nova_compute[254819]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:07:24 compute-0 nova_compute[254819]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:07:24 compute-0 nova_compute[254819]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:07:24 compute-0 nova_compute[254819]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:07:24 compute-0 nova_compute[254819]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:07:24 compute-0 nova_compute[254819]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:07:24 compute-0 nova_compute[254819]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:07:24 compute-0 nova_compute[254819]:     <controller type="usb" index="0"/>
Dec 06 10:07:24 compute-0 nova_compute[254819]:     <memballoon model="virtio">
Dec 06 10:07:24 compute-0 nova_compute[254819]:       <stats period="10"/>
Dec 06 10:07:24 compute-0 nova_compute[254819]:     </memballoon>
Dec 06 10:07:24 compute-0 nova_compute[254819]:   </devices>
Dec 06 10:07:24 compute-0 nova_compute[254819]: </domain>
Dec 06 10:07:24 compute-0 nova_compute[254819]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
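Annotation: the block above is the complete libvirt domain XML nova generated for instance-00000003: 131072 KiB (128 MiB) of RAM, one vCPU, an RBD-backed root disk in the vms pool, an RBD config-drive CD-ROM, and a virtio interface wired for OVS. When triaging such a dump, the standard library is enough to pull out the devices of interest; a minimal sketch, assuming the <domain>...</domain> block was saved to guest.xml (a hypothetical file name):

    # Sketch: extracting the key devices from the domain XML dumped above.
    # Assumes the <domain>...</domain> block was saved to 'guest.xml' (hypothetical).
    import xml.etree.ElementTree as ET

    dom = ET.parse('guest.xml').getroot()
    print(dom.findtext('name'))      # instance-00000003
    print(dom.findtext('memory'))    # 131072 (KiB, i.e. the 128 MiB flavor)
    for disk in dom.findall('./devices/disk'):
        src = disk.find('source')
        print(disk.get('device'), src.get('protocol'), src.get('name'))
    print(dom.find('./devices/interface/mac').get('address'))  # fa:16:3e:6c:29:20

Note that the nova:* metadata elements live in the http://openstack.org/xmlns/libvirt/nova/1.1 namespace and would need a namespace map to query.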
Dec 06 10:07:24 compute-0 nova_compute[254819]: 2025-12-06 10:07:24.462 254824 DEBUG nova.compute.manager [None req-3ce5c96a-eaa1-4d77-94c1-f122462b803d 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 2ef62e22-52fc-44f3-9964-8dc9b3c20686] Preparing to wait for external event network-vif-plugged-a7f5880e-0fb8-4f37-8a6c-7f0e8558dcf7 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 06 10:07:24 compute-0 nova_compute[254819]: 2025-12-06 10:07:24.463 254824 DEBUG oslo_concurrency.lockutils [None req-3ce5c96a-eaa1-4d77-94c1-f122462b803d 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Acquiring lock "2ef62e22-52fc-44f3-9964-8dc9b3c20686-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:07:24 compute-0 nova_compute[254819]: 2025-12-06 10:07:24.463 254824 DEBUG oslo_concurrency.lockutils [None req-3ce5c96a-eaa1-4d77-94c1-f122462b803d 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lock "2ef62e22-52fc-44f3-9964-8dc9b3c20686-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:07:24 compute-0 nova_compute[254819]: 2025-12-06 10:07:24.463 254824 DEBUG oslo_concurrency.lockutils [None req-3ce5c96a-eaa1-4d77-94c1-f122462b803d 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lock "2ef62e22-52fc-44f3-9964-8dc9b3c20686-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
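Annotation: the three lockutils lines above are the standard oslo.concurrency locking pattern; nova serializes access to its per-instance event table while registering a waiter for network-vif-plugged. A sketch of the same pattern, with an illustrative function body (not nova's code):

    # Sketch: the oslo.concurrency pattern behind the Acquiring/acquired/released
    # lines above. Lock name is from the log; the body is illustrative.
    from oslo_concurrency import lockutils

    @lockutils.synchronized('2ef62e22-52fc-44f3-9964-8dc9b3c20686-events')
    def _create_or_get_event():
        # the table of registered event waiters is mutated under the lock
        return 'event'

    _create_or_get_event()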
Dec 06 10:07:24 compute-0 nova_compute[254819]: 2025-12-06 10:07:24.464 254824 DEBUG nova.virt.libvirt.vif [None req-3ce5c96a-eaa1-4d77-94c1-f122462b803d 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T10:07:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1205802956',display_name='tempest-TestNetworkBasicOps-server-1205802956',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1205802956',id=3,image_ref='9489b8a5-a798-4e26-87f9-59bb1eb2e6fd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJ5T1qcHH05a9NmUaQjnoDRANzOfCWA0bQySUh/2laJiduU/bwXdkcdraO/GcO81J8j8CnPS5RyrjJyMRbGp/po0cthjI8Tgw893oNF7dd79URxvc2r73z8/7tKvZVwU9A==',key_name='tempest-TestNetworkBasicOps-2032054379',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='92b402c8d3e2476abc98be42a1e6d34e',ramdisk_id='',reservation_id='r-hrg57eo7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='9489b8a5-a798-4e26-87f9-59bb1eb2e6fd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1971100882',owner_user_name='tempest-TestNetworkBasicOps-1971100882-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T10:07:19Z,user_data=None,user_id='03615580775245e6ae335ee9d785611f',uuid=2ef62e22-52fc-44f3-9964-8dc9b3c20686,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a7f5880e-0fb8-4f37-8a6c-7f0e8558dcf7", "address": "fa:16:3e:6c:29:20", "network": {"id": "4d9eb8be-73ac-4cfc-8821-fb41b5868957", "bridge": "br-int", "label": "tempest-network-smoke--165851366", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa7f5880e-0f", "ovs_interfaceid": "a7f5880e-0fb8-4f37-8a6c-7f0e8558dcf7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 06 10:07:24 compute-0 nova_compute[254819]: 2025-12-06 10:07:24.465 254824 DEBUG nova.network.os_vif_util [None req-3ce5c96a-eaa1-4d77-94c1-f122462b803d 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Converting VIF {"id": "a7f5880e-0fb8-4f37-8a6c-7f0e8558dcf7", "address": "fa:16:3e:6c:29:20", "network": {"id": "4d9eb8be-73ac-4cfc-8821-fb41b5868957", "bridge": "br-int", "label": "tempest-network-smoke--165851366", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa7f5880e-0f", "ovs_interfaceid": "a7f5880e-0fb8-4f37-8a6c-7f0e8558dcf7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 10:07:24 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:07:24 : epoch 69340037 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3288002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:07:24 compute-0 nova_compute[254819]: 2025-12-06 10:07:24.465 254824 DEBUG nova.network.os_vif_util [None req-3ce5c96a-eaa1-4d77-94c1-f122462b803d 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6c:29:20,bridge_name='br-int',has_traffic_filtering=True,id=a7f5880e-0fb8-4f37-8a6c-7f0e8558dcf7,network=Network(4d9eb8be-73ac-4cfc-8821-fb41b5868957),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa7f5880e-0f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 10:07:24 compute-0 nova_compute[254819]: 2025-12-06 10:07:24.466 254824 DEBUG os_vif [None req-3ce5c96a-eaa1-4d77-94c1-f122462b803d 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:6c:29:20,bridge_name='br-int',has_traffic_filtering=True,id=a7f5880e-0fb8-4f37-8a6c-7f0e8558dcf7,network=Network(4d9eb8be-73ac-4cfc-8821-fb41b5868957),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa7f5880e-0f') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 06 10:07:24 compute-0 nova_compute[254819]: 2025-12-06 10:07:24.466 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:07:24 compute-0 nova_compute[254819]: 2025-12-06 10:07:24.466 254824 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 10:07:24 compute-0 nova_compute[254819]: 2025-12-06 10:07:24.467 254824 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 10:07:24 compute-0 nova_compute[254819]: 2025-12-06 10:07:24.473 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:07:24 compute-0 nova_compute[254819]: 2025-12-06 10:07:24.473 254824 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa7f5880e-0f, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 10:07:24 compute-0 nova_compute[254819]: 2025-12-06 10:07:24.474 254824 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapa7f5880e-0f, col_values=(('external_ids', {'iface-id': 'a7f5880e-0fb8-4f37-8a6c-7f0e8558dcf7', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:6c:29:20', 'vm-uuid': '2ef62e22-52fc-44f3-9964-8dc9b3c20686'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
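Annotation: the two ovsdbapp commands above are the whole of the "plug": add the tap port to br-int (a no-op if it already exists) and stamp the Interface row's external_ids with the Neutron port id so ovn-controller can match and claim it. A rough standalone equivalent through ovsdbapp follows; the socket path and timeout are assumptions about this host, the values come from the log:

    # Sketch: replaying the AddPortCommand/DbSetCommand transaction logged above.
    # Socket path and timeout are assumptions; column values are from the log.
    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server('unix:/run/openvswitch/db.sock',
                                          'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))
    with api.transaction(check_error=True) as txn:
        txn.add(api.add_port('br-int', 'tapa7f5880e-0f', may_exist=True))
        txn.add(api.db_set(
            'Interface', 'tapa7f5880e-0f',
            ('external_ids', {'iface-id': 'a7f5880e-0fb8-4f37-8a6c-7f0e8558dcf7',
                              'iface-status': 'active',
                              'attached-mac': 'fa:16:3e:6c:29:20',
                              'vm-uuid': '2ef62e22-52fc-44f3-9964-8dc9b3c20686'})))

It is the iface-id set here that ovn-controller matches against the Southbound logical port when it claims the binding at 10:07:26 below.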
Dec 06 10:07:24 compute-0 nova_compute[254819]: 2025-12-06 10:07:24.476 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:07:24 compute-0 NetworkManager[48882]: <info>  [1765015644.4776] manager: (tapa7f5880e-0f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/31)
Dec 06 10:07:24 compute-0 nova_compute[254819]: 2025-12-06 10:07:24.478 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 10:07:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 06 10:07:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 10:07:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 10:07:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 10:07:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 10:07:24 compute-0 nova_compute[254819]: 2025-12-06 10:07:24.487 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:07:24 compute-0 nova_compute[254819]: 2025-12-06 10:07:24.488 254824 INFO os_vif [None req-3ce5c96a-eaa1-4d77-94c1-f122462b803d 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:6c:29:20,bridge_name='br-int',has_traffic_filtering=True,id=a7f5880e-0fb8-4f37-8a6c-7f0e8558dcf7,network=Network(4d9eb8be-73ac-4cfc-8821-fb41b5868957),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa7f5880e-0f')
Dec 06 10:07:24 compute-0 nova_compute[254819]: 2025-12-06 10:07:24.557 254824 DEBUG nova.virt.libvirt.driver [None req-3ce5c96a-eaa1-4d77-94c1-f122462b803d 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 10:07:24 compute-0 nova_compute[254819]: 2025-12-06 10:07:24.557 254824 DEBUG nova.virt.libvirt.driver [None req-3ce5c96a-eaa1-4d77-94c1-f122462b803d 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 10:07:24 compute-0 nova_compute[254819]: 2025-12-06 10:07:24.558 254824 DEBUG nova.virt.libvirt.driver [None req-3ce5c96a-eaa1-4d77-94c1-f122462b803d 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] No VIF found with MAC fa:16:3e:6c:29:20, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 06 10:07:24 compute-0 nova_compute[254819]: 2025-12-06 10:07:24.558 254824 INFO nova.virt.libvirt.driver [None req-3ce5c96a-eaa1-4d77-94c1-f122462b803d 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 2ef62e22-52fc-44f3-9964-8dc9b3c20686] Using config drive
Dec 06 10:07:24 compute-0 nova_compute[254819]: 2025-12-06 10:07:24.584 254824 DEBUG nova.storage.rbd_utils [None req-3ce5c96a-eaa1-4d77-94c1-f122462b803d 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] rbd image 2ef62e22-52fc-44f3-9964-8dc9b3c20686_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 10:07:24 compute-0 nova_compute[254819]: 2025-12-06 10:07:24.657 254824 DEBUG nova.network.neutron [req-8995ccd3-fc20-4525-9bf0-cbf15b074d89 req-3049b225-a5ec-45c4-8487-c20a172f5e30 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 2ef62e22-52fc-44f3-9964-8dc9b3c20686] Updated VIF entry in instance network info cache for port a7f5880e-0fb8-4f37-8a6c-7f0e8558dcf7. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 06 10:07:24 compute-0 nova_compute[254819]: 2025-12-06 10:07:24.657 254824 DEBUG nova.network.neutron [req-8995ccd3-fc20-4525-9bf0-cbf15b074d89 req-3049b225-a5ec-45c4-8487-c20a172f5e30 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 2ef62e22-52fc-44f3-9964-8dc9b3c20686] Updating instance_info_cache with network_info: [{"id": "a7f5880e-0fb8-4f37-8a6c-7f0e8558dcf7", "address": "fa:16:3e:6c:29:20", "network": {"id": "4d9eb8be-73ac-4cfc-8821-fb41b5868957", "bridge": "br-int", "label": "tempest-network-smoke--165851366", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa7f5880e-0f", "ovs_interfaceid": "a7f5880e-0fb8-4f37-8a6c-7f0e8558dcf7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
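Annotation: once the journal prefix is stripped, the cache payload above is plain JSON, so the addressing facts are easy to pull out when troubleshooting. A small sketch, assuming the [...] payload was captured to network_info.json (hypothetical file name):

    # Sketch: extracting addressing facts from the network_info payload above.
    # Assumes the [...] JSON was saved to 'network_info.json' (hypothetical).
    import json

    with open('network_info.json') as f:
        vifs = json.load(f)
    for v in vifs:
        for subnet in v['network']['subnets']:
            for ip in subnet['ips']:
                print(v['id'], ip['address'], subnet['cidr'],
                      'mtu', v['network']['meta']['mtu'])
    # -> a7f5880e-0fb8-4f37-8a6c-7f0e8558dcf7 10.100.0.12 10.100.0.0/28 mtu 1442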
Dec 06 10:07:24 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:07:24 : epoch 69340037 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3280002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:07:24 compute-0 ceph-mon[74327]: pgmap v786: 337 pgs: 337 active+clean; 88 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Dec 06 10:07:24 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 10:07:24 compute-0 ceph-mon[74327]: from='client.? 192.168.122.100:0/175523667' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 10:07:24 compute-0 nova_compute[254819]: 2025-12-06 10:07:24.930 254824 DEBUG oslo_concurrency.lockutils [req-8995ccd3-fc20-4525-9bf0-cbf15b074d89 req-3049b225-a5ec-45c4-8487-c20a172f5e30 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Releasing lock "refresh_cache-2ef62e22-52fc-44f3-9964-8dc9b3c20686" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 10:07:24 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:07:24 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:07:24 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:07:24.967 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:07:25 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:07:25 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:07:25 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:07:25.073 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:07:25 compute-0 nova_compute[254819]: 2025-12-06 10:07:25.489 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:07:25 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v787: 337 pgs: 337 active+clean; 88 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 06 10:07:25 compute-0 ceph-mon[74327]: pgmap v787: 337 pgs: 337 active+clean; 88 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 06 10:07:25 compute-0 nova_compute[254819]: 2025-12-06 10:07:25.948 254824 INFO nova.virt.libvirt.driver [None req-3ce5c96a-eaa1-4d77-94c1-f122462b803d 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 2ef62e22-52fc-44f3-9964-8dc9b3c20686] Creating config drive at /var/lib/nova/instances/2ef62e22-52fc-44f3-9964-8dc9b3c20686/disk.config
Dec 06 10:07:25 compute-0 nova_compute[254819]: 2025-12-06 10:07:25.954 254824 DEBUG oslo_concurrency.processutils [None req-3ce5c96a-eaa1-4d77-94c1-f122462b803d 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/2ef62e22-52fc-44f3-9964-8dc9b3c20686/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpn9fzg95k execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 10:07:26 compute-0 nova_compute[254819]: 2025-12-06 10:07:26.095 254824 DEBUG oslo_concurrency.processutils [None req-3ce5c96a-eaa1-4d77-94c1-f122462b803d 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/2ef62e22-52fc-44f3-9964-8dc9b3c20686/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpn9fzg95k" returned: 0 in 0.141s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 10:07:26 compute-0 nova_compute[254819]: 2025-12-06 10:07:26.133 254824 DEBUG nova.storage.rbd_utils [None req-3ce5c96a-eaa1-4d77-94c1-f122462b803d 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] rbd image 2ef62e22-52fc-44f3-9964-8dc9b3c20686_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 10:07:26 compute-0 nova_compute[254819]: 2025-12-06 10:07:26.138 254824 DEBUG oslo_concurrency.processutils [None req-3ce5c96a-eaa1-4d77-94c1-f122462b803d 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/2ef62e22-52fc-44f3-9964-8dc9b3c20686/disk.config 2ef62e22-52fc-44f3-9964-8dc9b3c20686_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 10:07:26 compute-0 nova_compute[254819]: 2025-12-06 10:07:26.321 254824 DEBUG oslo_concurrency.processutils [None req-3ce5c96a-eaa1-4d77-94c1-f122462b803d 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/2ef62e22-52fc-44f3-9964-8dc9b3c20686/disk.config 2ef62e22-52fc-44f3-9964-8dc9b3c20686_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.183s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 10:07:26 compute-0 nova_compute[254819]: 2025-12-06 10:07:26.323 254824 INFO nova.virt.libvirt.driver [None req-3ce5c96a-eaa1-4d77-94c1-f122462b803d 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 2ef62e22-52fc-44f3-9964-8dc9b3c20686] Deleting local config drive /var/lib/nova/instances/2ef62e22-52fc-44f3-9964-8dc9b3c20686/disk.config because it was imported into RBD.
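Annotation: the entries from 10:07:25.948 through 10:07:26.323 show the config-drive flow on a Ceph-backed deployment: build the ISO9660 image locally with mkisofs, import it into the vms pool as <uuid>_disk.config, then delete the local copy. A sketch of the same three steps; the command lines are copied from the log, only the Python wrapping is new:

    # Sketch: the config-drive flow logged above (mkisofs -> rbd import -> cleanup).
    import os
    import subprocess

    base = '/var/lib/nova/instances/2ef62e22-52fc-44f3-9964-8dc9b3c20686'
    iso = base + '/disk.config'
    subprocess.run(['/usr/bin/mkisofs', '-o', iso, '-ldots', '-allow-lowercase',
                    '-allow-multidot', '-l', '-publisher',
                    'OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9',
                    '-quiet', '-J', '-r', '-V', 'config-2',
                    '/tmp/tmpn9fzg95k'], check=True)   # staging dir from the log
    subprocess.run(['rbd', 'import', '--pool', 'vms', iso,
                    '2ef62e22-52fc-44f3-9964-8dc9b3c20686_disk.config',
                    '--image-format=2', '--id', 'openstack',
                    '--conf', '/etc/ceph/ceph.conf'], check=True)
    os.unlink(iso)   # "Deleting local config drive ... imported into RBD"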
Dec 06 10:07:26 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:07:26 : epoch 69340037 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f32a40034e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:07:26 compute-0 kernel: tapa7f5880e-0f: entered promiscuous mode
Dec 06 10:07:26 compute-0 NetworkManager[48882]: <info>  [1765015646.3861] manager: (tapa7f5880e-0f): new Tun device (/org/freedesktop/NetworkManager/Devices/32)
Dec 06 10:07:26 compute-0 ovn_controller[152417]: 2025-12-06T10:07:26Z|00036|binding|INFO|Claiming lport a7f5880e-0fb8-4f37-8a6c-7f0e8558dcf7 for this chassis.
Dec 06 10:07:26 compute-0 ovn_controller[152417]: 2025-12-06T10:07:26Z|00037|binding|INFO|a7f5880e-0fb8-4f37-8a6c-7f0e8558dcf7: Claiming fa:16:3e:6c:29:20 10.100.0.12
Dec 06 10:07:26 compute-0 nova_compute[254819]: 2025-12-06 10:07:26.388 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:07:26 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:07:26.407 162267 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6c:29:20 10.100.0.12'], port_security=['fa:16:3e:6c:29:20 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '2ef62e22-52fc-44f3-9964-8dc9b3c20686', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4d9eb8be-73ac-4cfc-8821-fb41b5868957', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '92b402c8d3e2476abc98be42a1e6d34e', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'f18b54b7-70a3-4b32-8644-f822c2e837c5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d75f33c5-f6d1-4d65-a2b0-b56ec14fd7b3, chassis=[<ovs.db.idl.Row object at 0x7f70c28558b0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f70c28558b0>], logical_port=a7f5880e-0fb8-4f37-8a6c-7f0e8558dcf7) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 10:07:26 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:07:26.408 162267 INFO neutron.agent.ovn.metadata.agent [-] Port a7f5880e-0fb8-4f37-8a6c-7f0e8558dcf7 in datapath 4d9eb8be-73ac-4cfc-8821-fb41b5868957 bound to our chassis
Dec 06 10:07:26 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:07:26.410 162267 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4d9eb8be-73ac-4cfc-8821-fb41b5868957
Dec 06 10:07:26 compute-0 systemd-machined[216202]: New machine qemu-2-instance-00000003.
Dec 06 10:07:26 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:07:26.426 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[01f1754f-f155-4c22-ae27-839bef3fe411]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:07:26 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:07:26.428 162267 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap4d9eb8be-71 in ovnmeta-4d9eb8be-73ac-4cfc-8821-fb41b5868957 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 06 10:07:26 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:07:26.430 260126 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap4d9eb8be-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 06 10:07:26 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:07:26.430 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[aeafe6e1-66b3-4ddd-9238-8d58bd2e1898]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:07:26 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:07:26.431 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[ec8a03b3-6a30-4048-8737-a1be75475028]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:07:26 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:07:26.445 162385 DEBUG oslo.privsep.daemon [-] privsep: reply[39671493-c429-4cbe-b558-949edd7f98e4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:07:26 compute-0 systemd[1]: Started Virtual Machine qemu-2-instance-00000003.
Dec 06 10:07:26 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:07:26 : epoch 69340037 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3288003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:07:26 compute-0 systemd-udevd[263611]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 10:07:26 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:07:26.481 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[addc850e-3ec3-45c2-8afe-79e23c250958]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:07:26 compute-0 NetworkManager[48882]: <info>  [1765015646.4877] device (tapa7f5880e-0f): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 06 10:07:26 compute-0 NetworkManager[48882]: <info>  [1765015646.4884] device (tapa7f5880e-0f): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 06 10:07:26 compute-0 nova_compute[254819]: 2025-12-06 10:07:26.496 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:07:26 compute-0 ovn_controller[152417]: 2025-12-06T10:07:26Z|00038|binding|INFO|Setting lport a7f5880e-0fb8-4f37-8a6c-7f0e8558dcf7 ovn-installed in OVS
Dec 06 10:07:26 compute-0 ovn_controller[152417]: 2025-12-06T10:07:26Z|00039|binding|INFO|Setting lport a7f5880e-0fb8-4f37-8a6c-7f0e8558dcf7 up in Southbound
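Annotation: the ovn-controller messages 00036 through 00039 are the port-binding handshake: claim the lport for this chassis, mark the OVS interface ovn-installed, then set the binding up in the Southbound DB. That up transition is what lets Neutron emit the network-vif-plugged event nova registered for at 10:07:24. The state can be checked from the chassis; a sketch shelling out to ovn-sbctl, assuming it is installed and can reach the SB database:

    # Sketch: inspecting the Southbound Port_Binding for the lport claimed above.
    # Assumes ovn-sbctl is on PATH and the SB DB is reachable from this host.
    import subprocess

    out = subprocess.run(
        ['ovn-sbctl', '--columns=logical_port,chassis,up',
         'list', 'Port_Binding', 'a7f5880e-0fb8-4f37-8a6c-7f0e8558dcf7'],
        capture_output=True, text=True, check=True)
    print(out.stdout)   # expect up : [true] once the claim has completed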
Dec 06 10:07:26 compute-0 nova_compute[254819]: 2025-12-06 10:07:26.502 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:07:26 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:07:26.520 260145 DEBUG oslo.privsep.daemon [-] privsep: reply[e0ef51f6-074d-4e1c-9fa4-c158f91272b3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:07:26 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:07:26.525 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[d5acd5b8-1362-4597-938b-2e3da6418870]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:07:26 compute-0 NetworkManager[48882]: <info>  [1765015646.5264] manager: (tap4d9eb8be-70): new Veth device (/org/freedesktop/NetworkManager/Devices/33)
Dec 06 10:07:26 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:07:26.558 260145 DEBUG oslo.privsep.daemon [-] privsep: reply[ee37e136-21dc-4c05-b6bd-d8cd2134e12b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:07:26 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:07:26.561 260145 DEBUG oslo.privsep.daemon [-] privsep: reply[0c649a44-e408-4344-a2e1-475429562418]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:07:26 compute-0 NetworkManager[48882]: <info>  [1765015646.5833] device (tap4d9eb8be-70): carrier: link connected
Dec 06 10:07:26 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:07:26.588 260145 DEBUG oslo.privsep.daemon [-] privsep: reply[16d18a8f-5307-46af-8e57-6bd94d52f9cb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:07:26 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:07:26.605 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[2c76331a-3945-40ab-a4e5-195349676238]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4d9eb8be-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:82:61:06'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 16], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 401706, 'reachable_time': 22692, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 263641, 'error': None, 'target': 'ovnmeta-4d9eb8be-73ac-4cfc-8821-fb41b5868957', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:07:26 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:07:26.621 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[d0f51035-e4e9-46c1-8880-bdc699e66254]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe82:6106'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 401706, 'tstamp': 401706}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 263642, 'error': None, 'target': 'ovnmeta-4d9eb8be-73ac-4cfc-8821-fb41b5868957', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:07:26 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:07:26.637 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[c58c1c6a-e80f-47cc-8ac9-36d6e09f5e3f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4d9eb8be-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:82:61:06'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 16], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 401706, 'reachable_time': 22692, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 263643, 'error': None, 'target': 'ovnmeta-4d9eb8be-73ac-4cfc-8821-fb41b5868957', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:07:26 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:07:26.679 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[84e892f4-e501-4da9-bd09-ceec99119b60]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:07:26 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:07:26.737 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[00f0868a-0204-4d30-9f5d-a3d5ec1aa069]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:07:26 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:07:26.738 162267 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4d9eb8be-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 10:07:26 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:07:26.739 162267 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 10:07:26 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:07:26.739 162267 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4d9eb8be-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 10:07:26 compute-0 nova_compute[254819]: 2025-12-06 10:07:26.741 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:07:26 compute-0 NetworkManager[48882]: <info>  [1765015646.7418] manager: (tap4d9eb8be-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/34)
Dec 06 10:07:26 compute-0 kernel: tap4d9eb8be-70: entered promiscuous mode
Dec 06 10:07:26 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:07:26.743 162267 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4d9eb8be-70, col_values=(('external_ids', {'iface-id': '614c688d-e8cc-4f61-86da-0aa3c3ee7fd1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 10:07:26 compute-0 ovn_controller[152417]: 2025-12-06T10:07:26Z|00040|binding|INFO|Releasing lport 614c688d-e8cc-4f61-86da-0aa3c3ee7fd1 from this chassis (sb_readonly=0)
Dec 06 10:07:26 compute-0 nova_compute[254819]: 2025-12-06 10:07:26.765 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:07:26 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:07:26.767 162267 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/4d9eb8be-73ac-4cfc-8821-fb41b5868957.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/4d9eb8be-73ac-4cfc-8821-fb41b5868957.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 06 10:07:26 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:07:26.768 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[8fc7650f-f71c-4be1-b5ea-492ff678423c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:07:26 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:07:26.769 162267 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 06 10:07:26 compute-0 ovn_metadata_agent[162262]: global
Dec 06 10:07:26 compute-0 ovn_metadata_agent[162262]:     log         /dev/log local0 debug
Dec 06 10:07:26 compute-0 ovn_metadata_agent[162262]:     log-tag     haproxy-metadata-proxy-4d9eb8be-73ac-4cfc-8821-fb41b5868957
Dec 06 10:07:26 compute-0 ovn_metadata_agent[162262]:     user        root
Dec 06 10:07:26 compute-0 ovn_metadata_agent[162262]:     group       root
Dec 06 10:07:26 compute-0 ovn_metadata_agent[162262]:     maxconn     1024
Dec 06 10:07:26 compute-0 ovn_metadata_agent[162262]:     pidfile     /var/lib/neutron/external/pids/4d9eb8be-73ac-4cfc-8821-fb41b5868957.pid.haproxy
Dec 06 10:07:26 compute-0 ovn_metadata_agent[162262]:     daemon
Dec 06 10:07:26 compute-0 ovn_metadata_agent[162262]: 
Dec 06 10:07:26 compute-0 ovn_metadata_agent[162262]: defaults
Dec 06 10:07:26 compute-0 ovn_metadata_agent[162262]:     log global
Dec 06 10:07:26 compute-0 ovn_metadata_agent[162262]:     mode http
Dec 06 10:07:26 compute-0 ovn_metadata_agent[162262]:     option httplog
Dec 06 10:07:26 compute-0 ovn_metadata_agent[162262]:     option dontlognull
Dec 06 10:07:26 compute-0 ovn_metadata_agent[162262]:     option http-server-close
Dec 06 10:07:26 compute-0 ovn_metadata_agent[162262]:     option forwardfor
Dec 06 10:07:26 compute-0 ovn_metadata_agent[162262]:     retries                 3
Dec 06 10:07:26 compute-0 ovn_metadata_agent[162262]:     timeout http-request    30s
Dec 06 10:07:26 compute-0 ovn_metadata_agent[162262]:     timeout connect         30s
Dec 06 10:07:26 compute-0 ovn_metadata_agent[162262]:     timeout client          32s
Dec 06 10:07:26 compute-0 ovn_metadata_agent[162262]:     timeout server          32s
Dec 06 10:07:26 compute-0 ovn_metadata_agent[162262]:     timeout http-keep-alive 30s
Dec 06 10:07:26 compute-0 ovn_metadata_agent[162262]: 
Dec 06 10:07:26 compute-0 ovn_metadata_agent[162262]: 
Dec 06 10:07:26 compute-0 ovn_metadata_agent[162262]: listen listener
Dec 06 10:07:26 compute-0 ovn_metadata_agent[162262]:     bind 169.254.169.254:80
Dec 06 10:07:26 compute-0 ovn_metadata_agent[162262]:     server metadata /var/lib/neutron/metadata_proxy
Dec 06 10:07:26 compute-0 ovn_metadata_agent[162262]:     http-request add-header X-OVN-Network-ID 4d9eb8be-73ac-4cfc-8821-fb41b5868957
Dec 06 10:07:26 compute-0 ovn_metadata_agent[162262]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 06 10:07:26 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:07:26.770 162267 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-4d9eb8be-73ac-4cfc-8821-fb41b5868957', 'env', 'PROCESS_TAG=haproxy-4d9eb8be-73ac-4cfc-8821-fb41b5868957', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/4d9eb8be-73ac-4cfc-8821-fb41b5868957.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Dec 06 10:07:26 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:07:26 : epoch 69340037 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f328c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:07:26 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:07:26 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 10:07:26 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:07:26.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 10:07:26 compute-0 nova_compute[254819]: 2025-12-06 10:07:26.971 254824 DEBUG nova.virt.driver [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] Emitting event <LifecycleEvent: 1765015646.9711125, 2ef62e22-52fc-44f3-9964-8dc9b3c20686 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 10:07:26 compute-0 nova_compute[254819]: 2025-12-06 10:07:26.972 254824 INFO nova.compute.manager [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] [instance: 2ef62e22-52fc-44f3-9964-8dc9b3c20686] VM Started (Lifecycle Event)
Dec 06 10:07:27 compute-0 nova_compute[254819]: 2025-12-06 10:07:27.003 254824 DEBUG nova.compute.manager [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] [instance: 2ef62e22-52fc-44f3-9964-8dc9b3c20686] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 10:07:27 compute-0 nova_compute[254819]: 2025-12-06 10:07:27.007 254824 DEBUG nova.virt.driver [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] Emitting event <LifecycleEvent: 1765015646.971947, 2ef62e22-52fc-44f3-9964-8dc9b3c20686 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 10:07:27 compute-0 nova_compute[254819]: 2025-12-06 10:07:27.007 254824 INFO nova.compute.manager [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] [instance: 2ef62e22-52fc-44f3-9964-8dc9b3c20686] VM Paused (Lifecycle Event)
Dec 06 10:07:27 compute-0 nova_compute[254819]: 2025-12-06 10:07:27.025 254824 DEBUG nova.compute.manager [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] [instance: 2ef62e22-52fc-44f3-9964-8dc9b3c20686] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 10:07:27 compute-0 nova_compute[254819]: 2025-12-06 10:07:27.029 254824 DEBUG nova.compute.manager [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] [instance: 2ef62e22-52fc-44f3-9964-8dc9b3c20686] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 10:07:27 compute-0 nova_compute[254819]: 2025-12-06 10:07:27.049 254824 INFO nova.compute.manager [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] [instance: 2ef62e22-52fc-44f3-9964-8dc9b3c20686] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 06 10:07:27 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:07:27 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:07:27 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:07:27.075 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:07:27 compute-0 nova_compute[254819]: 2025-12-06 10:07:27.150 254824 DEBUG nova.compute.manager [req-ffa705c9-3ad4-4c59-946c-85400704ae93 req-ab0bf95c-7275-4363-adfc-44c111e45eba d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 2ef62e22-52fc-44f3-9964-8dc9b3c20686] Received event network-vif-plugged-a7f5880e-0fb8-4f37-8a6c-7f0e8558dcf7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 10:07:27 compute-0 nova_compute[254819]: 2025-12-06 10:07:27.150 254824 DEBUG oslo_concurrency.lockutils [req-ffa705c9-3ad4-4c59-946c-85400704ae93 req-ab0bf95c-7275-4363-adfc-44c111e45eba d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Acquiring lock "2ef62e22-52fc-44f3-9964-8dc9b3c20686-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:07:27 compute-0 nova_compute[254819]: 2025-12-06 10:07:27.151 254824 DEBUG oslo_concurrency.lockutils [req-ffa705c9-3ad4-4c59-946c-85400704ae93 req-ab0bf95c-7275-4363-adfc-44c111e45eba d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Lock "2ef62e22-52fc-44f3-9964-8dc9b3c20686-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:07:27 compute-0 nova_compute[254819]: 2025-12-06 10:07:27.151 254824 DEBUG oslo_concurrency.lockutils [req-ffa705c9-3ad4-4c59-946c-85400704ae93 req-ab0bf95c-7275-4363-adfc-44c111e45eba d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Lock "2ef62e22-52fc-44f3-9964-8dc9b3c20686-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:07:27 compute-0 nova_compute[254819]: 2025-12-06 10:07:27.151 254824 DEBUG nova.compute.manager [req-ffa705c9-3ad4-4c59-946c-85400704ae93 req-ab0bf95c-7275-4363-adfc-44c111e45eba d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 2ef62e22-52fc-44f3-9964-8dc9b3c20686] Processing event network-vif-plugged-a7f5880e-0fb8-4f37-8a6c-7f0e8558dcf7 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 06 10:07:27 compute-0 nova_compute[254819]: 2025-12-06 10:07:27.152 254824 DEBUG nova.compute.manager [None req-3ce5c96a-eaa1-4d77-94c1-f122462b803d 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 2ef62e22-52fc-44f3-9964-8dc9b3c20686] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 06 10:07:27 compute-0 nova_compute[254819]: 2025-12-06 10:07:27.155 254824 DEBUG nova.virt.driver [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] Emitting event <LifecycleEvent: 1765015647.1555302, 2ef62e22-52fc-44f3-9964-8dc9b3c20686 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 10:07:27 compute-0 nova_compute[254819]: 2025-12-06 10:07:27.156 254824 INFO nova.compute.manager [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] [instance: 2ef62e22-52fc-44f3-9964-8dc9b3c20686] VM Resumed (Lifecycle Event)
Dec 06 10:07:27 compute-0 nova_compute[254819]: 2025-12-06 10:07:27.157 254824 DEBUG nova.virt.libvirt.driver [None req-3ce5c96a-eaa1-4d77-94c1-f122462b803d 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 2ef62e22-52fc-44f3-9964-8dc9b3c20686] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 06 10:07:27 compute-0 podman[263717]: 2025-12-06 10:07:27.159579705 +0000 UTC m=+0.050960696 container create d17cd1fc39d9acdee42e31c47c202c46b9385a0d9467a86eeebaee27ffb7dacb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=neutron-haproxy-ovnmeta-4d9eb8be-73ac-4cfc-8821-fb41b5868957, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 10:07:27 compute-0 nova_compute[254819]: 2025-12-06 10:07:27.160 254824 INFO nova.virt.libvirt.driver [-] [instance: 2ef62e22-52fc-44f3-9964-8dc9b3c20686] Instance spawned successfully.
Dec 06 10:07:27 compute-0 nova_compute[254819]: 2025-12-06 10:07:27.161 254824 DEBUG nova.virt.libvirt.driver [None req-3ce5c96a-eaa1-4d77-94c1-f122462b803d 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 2ef62e22-52fc-44f3-9964-8dc9b3c20686] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 06 10:07:27 compute-0 nova_compute[254819]: 2025-12-06 10:07:27.176 254824 DEBUG nova.compute.manager [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] [instance: 2ef62e22-52fc-44f3-9964-8dc9b3c20686] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 10:07:27 compute-0 nova_compute[254819]: 2025-12-06 10:07:27.183 254824 DEBUG nova.compute.manager [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] [instance: 2ef62e22-52fc-44f3-9964-8dc9b3c20686] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 10:07:27 compute-0 nova_compute[254819]: 2025-12-06 10:07:27.185 254824 DEBUG nova.virt.libvirt.driver [None req-3ce5c96a-eaa1-4d77-94c1-f122462b803d 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 2ef62e22-52fc-44f3-9964-8dc9b3c20686] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 10:07:27 compute-0 nova_compute[254819]: 2025-12-06 10:07:27.185 254824 DEBUG nova.virt.libvirt.driver [None req-3ce5c96a-eaa1-4d77-94c1-f122462b803d 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 2ef62e22-52fc-44f3-9964-8dc9b3c20686] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 10:07:27 compute-0 nova_compute[254819]: 2025-12-06 10:07:27.185 254824 DEBUG nova.virt.libvirt.driver [None req-3ce5c96a-eaa1-4d77-94c1-f122462b803d 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 2ef62e22-52fc-44f3-9964-8dc9b3c20686] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 10:07:27 compute-0 nova_compute[254819]: 2025-12-06 10:07:27.186 254824 DEBUG nova.virt.libvirt.driver [None req-3ce5c96a-eaa1-4d77-94c1-f122462b803d 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 2ef62e22-52fc-44f3-9964-8dc9b3c20686] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 10:07:27 compute-0 nova_compute[254819]: 2025-12-06 10:07:27.186 254824 DEBUG nova.virt.libvirt.driver [None req-3ce5c96a-eaa1-4d77-94c1-f122462b803d 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 2ef62e22-52fc-44f3-9964-8dc9b3c20686] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 10:07:27 compute-0 nova_compute[254819]: 2025-12-06 10:07:27.187 254824 DEBUG nova.virt.libvirt.driver [None req-3ce5c96a-eaa1-4d77-94c1-f122462b803d 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 2ef62e22-52fc-44f3-9964-8dc9b3c20686] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 10:07:27 compute-0 systemd[1]: Started libpod-conmon-d17cd1fc39d9acdee42e31c47c202c46b9385a0d9467a86eeebaee27ffb7dacb.scope.
Dec 06 10:07:27 compute-0 nova_compute[254819]: 2025-12-06 10:07:27.221 254824 INFO nova.compute.manager [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] [instance: 2ef62e22-52fc-44f3-9964-8dc9b3c20686] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 06 10:07:27 compute-0 systemd[1]: Started libcrun container.
Dec 06 10:07:27 compute-0 podman[263717]: 2025-12-06 10:07:27.132500394 +0000 UTC m=+0.023881405 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3
Dec 06 10:07:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/96304682dba270089a316a6ea2c840eb8d50d3698a98881b517984b3b6c64718/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 06 10:07:27 compute-0 podman[263717]: 2025-12-06 10:07:27.24358931 +0000 UTC m=+0.134970401 container init d17cd1fc39d9acdee42e31c47c202c46b9385a0d9467a86eeebaee27ffb7dacb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=neutron-haproxy-ovnmeta-4d9eb8be-73ac-4cfc-8821-fb41b5868957, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Dec 06 10:07:27 compute-0 podman[263717]: 2025-12-06 10:07:27.257172216 +0000 UTC m=+0.148553207 container start d17cd1fc39d9acdee42e31c47c202c46b9385a0d9467a86eeebaee27ffb7dacb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=neutron-haproxy-ovnmeta-4d9eb8be-73ac-4cfc-8821-fb41b5868957, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125)
Dec 06 10:07:27 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:07:27.267Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec 06 10:07:27 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:07:27.268Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec 06 10:07:27 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:07:27.269Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 06 10:07:27 compute-0 nova_compute[254819]: 2025-12-06 10:07:27.272 254824 INFO nova.compute.manager [None req-3ce5c96a-eaa1-4d77-94c1-f122462b803d 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 2ef62e22-52fc-44f3-9964-8dc9b3c20686] Took 7.26 seconds to spawn the instance on the hypervisor.
Dec 06 10:07:27 compute-0 nova_compute[254819]: 2025-12-06 10:07:27.273 254824 DEBUG nova.compute.manager [None req-3ce5c96a-eaa1-4d77-94c1-f122462b803d 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 2ef62e22-52fc-44f3-9964-8dc9b3c20686] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 10:07:27 compute-0 neutron-haproxy-ovnmeta-4d9eb8be-73ac-4cfc-8821-fb41b5868957[263732]: [NOTICE]   (263737) : New worker (263739) forked
Dec 06 10:07:27 compute-0 neutron-haproxy-ovnmeta-4d9eb8be-73ac-4cfc-8821-fb41b5868957[263732]: [NOTICE]   (263737) : Loading success.
Dec 06 10:07:27 compute-0 nova_compute[254819]: 2025-12-06 10:07:27.362 254824 INFO nova.compute.manager [None req-3ce5c96a-eaa1-4d77-94c1-f122462b803d 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 2ef62e22-52fc-44f3-9964-8dc9b3c20686] Took 8.50 seconds to build instance.
Dec 06 10:07:27 compute-0 nova_compute[254819]: 2025-12-06 10:07:27.380 254824 DEBUG oslo_concurrency.lockutils [None req-3ce5c96a-eaa1-4d77-94c1-f122462b803d 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lock "2ef62e22-52fc-44f3-9964-8dc9b3c20686" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.586s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:07:27 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v788: 337 pgs: 337 active+clean; 88 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 06 10:07:27 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:07:28 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:07:28 : epoch 69340037 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3280003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:07:28 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:07:28 : epoch 69340037 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3280003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:07:28 compute-0 ceph-mon[74327]: pgmap v788: 337 pgs: 337 active+clean; 88 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 06 10:07:28 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:07:28.848Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 06 10:07:28 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:07:28 : epoch 69340037 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f32a40034e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:07:28 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:07:28 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:07:28 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:07:28.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:07:29 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:07:29 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:07:29 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:07:29.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:07:29 compute-0 nova_compute[254819]: 2025-12-06 10:07:29.248 254824 DEBUG nova.compute.manager [req-3ac67a8b-c543-4020-b289-a81819117029 req-2114bc02-0c8e-4251-9515-f4acccfdc695 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 2ef62e22-52fc-44f3-9964-8dc9b3c20686] Received event network-vif-plugged-a7f5880e-0fb8-4f37-8a6c-7f0e8558dcf7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 10:07:29 compute-0 nova_compute[254819]: 2025-12-06 10:07:29.249 254824 DEBUG oslo_concurrency.lockutils [req-3ac67a8b-c543-4020-b289-a81819117029 req-2114bc02-0c8e-4251-9515-f4acccfdc695 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Acquiring lock "2ef62e22-52fc-44f3-9964-8dc9b3c20686-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:07:29 compute-0 nova_compute[254819]: 2025-12-06 10:07:29.249 254824 DEBUG oslo_concurrency.lockutils [req-3ac67a8b-c543-4020-b289-a81819117029 req-2114bc02-0c8e-4251-9515-f4acccfdc695 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Lock "2ef62e22-52fc-44f3-9964-8dc9b3c20686-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:07:29 compute-0 nova_compute[254819]: 2025-12-06 10:07:29.249 254824 DEBUG oslo_concurrency.lockutils [req-3ac67a8b-c543-4020-b289-a81819117029 req-2114bc02-0c8e-4251-9515-f4acccfdc695 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Lock "2ef62e22-52fc-44f3-9964-8dc9b3c20686-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:07:29 compute-0 nova_compute[254819]: 2025-12-06 10:07:29.250 254824 DEBUG nova.compute.manager [req-3ac67a8b-c543-4020-b289-a81819117029 req-2114bc02-0c8e-4251-9515-f4acccfdc695 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 2ef62e22-52fc-44f3-9964-8dc9b3c20686] No waiting events found dispatching network-vif-plugged-a7f5880e-0fb8-4f37-8a6c-7f0e8558dcf7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 10:07:29 compute-0 nova_compute[254819]: 2025-12-06 10:07:29.250 254824 WARNING nova.compute.manager [req-3ac67a8b-c543-4020-b289-a81819117029 req-2114bc02-0c8e-4251-9515-f4acccfdc695 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 2ef62e22-52fc-44f3-9964-8dc9b3c20686] Received unexpected event network-vif-plugged-a7f5880e-0fb8-4f37-8a6c-7f0e8558dcf7 for instance with vm_state active and task_state None.
Dec 06 10:07:29 compute-0 nova_compute[254819]: 2025-12-06 10:07:29.478 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:07:29 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v789: 337 pgs: 337 active+clean; 88 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 465 KiB/s rd, 1.8 MiB/s wr, 52 op/s
Dec 06 10:07:30 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:07:30 : epoch 69340037 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f328c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:07:30 compute-0 podman[263751]: 2025-12-06 10:07:30.43958511 +0000 UTC m=+0.060908793 container health_status a9cf33c1bf7891b3fdd9db4717ad5f3587fe9e5acf9171920c6d6334b70a502a (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_id=multipathd, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 10:07:30 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:07:30 : epoch 69340037 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3280003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:07:30 compute-0 nova_compute[254819]: 2025-12-06 10:07:30.492 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:07:30 compute-0 ceph-mon[74327]: pgmap v789: 337 pgs: 337 active+clean; 88 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 465 KiB/s rd, 1.8 MiB/s wr, 52 op/s
Dec 06 10:07:30 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:07:30] "GET /metrics HTTP/1.1" 200 48466 "" "Prometheus/2.51.0"
Dec 06 10:07:30 compute-0 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:07:30] "GET /metrics HTTP/1.1" 200 48466 "" "Prometheus/2.51.0"
Dec 06 10:07:30 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:07:30 : epoch 69340037 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3280003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:07:30 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:07:30 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:07:30 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:07:30.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:07:31 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:07:31 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:07:31 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:07:31.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:07:31 compute-0 NetworkManager[48882]: <info>  [1765015651.1053] manager: (patch-provnet-c81e973e-7ff9-4cd2-9994-daf87649321f-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/35)
Dec 06 10:07:31 compute-0 NetworkManager[48882]: <info>  [1765015651.1060] manager: (patch-br-int-to-provnet-c81e973e-7ff9-4cd2-9994-daf87649321f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/36)
Dec 06 10:07:31 compute-0 nova_compute[254819]: 2025-12-06 10:07:31.104 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:07:31 compute-0 ovn_controller[152417]: 2025-12-06T10:07:31Z|00041|binding|INFO|Releasing lport 614c688d-e8cc-4f61-86da-0aa3c3ee7fd1 from this chassis (sb_readonly=0)
Dec 06 10:07:31 compute-0 ovn_controller[152417]: 2025-12-06T10:07:31Z|00042|binding|INFO|Releasing lport 614c688d-e8cc-4f61-86da-0aa3c3ee7fd1 from this chassis (sb_readonly=0)
Dec 06 10:07:31 compute-0 nova_compute[254819]: 2025-12-06 10:07:31.155 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:07:31 compute-0 nova_compute[254819]: 2025-12-06 10:07:31.158 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:07:31 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v790: 337 pgs: 337 active+clean; 88 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 465 KiB/s rd, 1.8 MiB/s wr, 52 op/s
Dec 06 10:07:31 compute-0 nova_compute[254819]: 2025-12-06 10:07:31.608 254824 DEBUG nova.compute.manager [req-5b8454c7-398f-4b3e-aafb-433ea70801c2 req-d341ac27-2a3e-4248-85bd-387c14b5e16c d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 2ef62e22-52fc-44f3-9964-8dc9b3c20686] Received event network-changed-a7f5880e-0fb8-4f37-8a6c-7f0e8558dcf7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 10:07:31 compute-0 nova_compute[254819]: 2025-12-06 10:07:31.609 254824 DEBUG nova.compute.manager [req-5b8454c7-398f-4b3e-aafb-433ea70801c2 req-d341ac27-2a3e-4248-85bd-387c14b5e16c d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 2ef62e22-52fc-44f3-9964-8dc9b3c20686] Refreshing instance network info cache due to event network-changed-a7f5880e-0fb8-4f37-8a6c-7f0e8558dcf7. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 10:07:31 compute-0 nova_compute[254819]: 2025-12-06 10:07:31.609 254824 DEBUG oslo_concurrency.lockutils [req-5b8454c7-398f-4b3e-aafb-433ea70801c2 req-d341ac27-2a3e-4248-85bd-387c14b5e16c d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Acquiring lock "refresh_cache-2ef62e22-52fc-44f3-9964-8dc9b3c20686" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 10:07:31 compute-0 nova_compute[254819]: 2025-12-06 10:07:31.609 254824 DEBUG oslo_concurrency.lockutils [req-5b8454c7-398f-4b3e-aafb-433ea70801c2 req-d341ac27-2a3e-4248-85bd-387c14b5e16c d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Acquired lock "refresh_cache-2ef62e22-52fc-44f3-9964-8dc9b3c20686" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 10:07:31 compute-0 nova_compute[254819]: 2025-12-06 10:07:31.610 254824 DEBUG nova.network.neutron [req-5b8454c7-398f-4b3e-aafb-433ea70801c2 req-d341ac27-2a3e-4248-85bd-387c14b5e16c d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 2ef62e22-52fc-44f3-9964-8dc9b3c20686] Refreshing network info cache for port a7f5880e-0fb8-4f37-8a6c-7f0e8558dcf7 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 06 10:07:31 compute-0 sudo[263775]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 10:07:31 compute-0 sudo[263775]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:07:31 compute-0 sudo[263775]: pam_unix(sudo:session): session closed for user root
Dec 06 10:07:32 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:07:32 : epoch 69340037 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f32a40034e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:07:32 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:07:32 : epoch 69340037 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f32a40034e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:07:32 compute-0 ceph-mon[74327]: pgmap v790: 337 pgs: 337 active+clean; 88 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 465 KiB/s rd, 1.8 MiB/s wr, 52 op/s
Dec 06 10:07:32 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:07:32 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:07:32 : epoch 69340037 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f32a0001e10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:07:32 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:07:32 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:07:32 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:07:32.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:07:33 compute-0 nova_compute[254819]: 2025-12-06 10:07:33.011 254824 DEBUG nova.network.neutron [req-5b8454c7-398f-4b3e-aafb-433ea70801c2 req-d341ac27-2a3e-4248-85bd-387c14b5e16c d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 2ef62e22-52fc-44f3-9964-8dc9b3c20686] Updated VIF entry in instance network info cache for port a7f5880e-0fb8-4f37-8a6c-7f0e8558dcf7. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 06 10:07:33 compute-0 nova_compute[254819]: 2025-12-06 10:07:33.012 254824 DEBUG nova.network.neutron [req-5b8454c7-398f-4b3e-aafb-433ea70801c2 req-d341ac27-2a3e-4248-85bd-387c14b5e16c d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 2ef62e22-52fc-44f3-9964-8dc9b3c20686] Updating instance_info_cache with network_info: [{"id": "a7f5880e-0fb8-4f37-8a6c-7f0e8558dcf7", "address": "fa:16:3e:6c:29:20", "network": {"id": "4d9eb8be-73ac-4cfc-8821-fb41b5868957", "bridge": "br-int", "label": "tempest-network-smoke--165851366", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.201", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa7f5880e-0f", "ovs_interfaceid": "a7f5880e-0fb8-4f37-8a6c-7f0e8558dcf7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 10:07:33 compute-0 nova_compute[254819]: 2025-12-06 10:07:33.035 254824 DEBUG oslo_concurrency.lockutils [req-5b8454c7-398f-4b3e-aafb-433ea70801c2 req-d341ac27-2a3e-4248-85bd-387c14b5e16c d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Releasing lock "refresh_cache-2ef62e22-52fc-44f3-9964-8dc9b3c20686" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 10:07:33 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:07:33 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:07:33 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:07:33.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:07:33 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v791: 337 pgs: 337 active+clean; 88 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Dec 06 10:07:34 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:07:34 : epoch 69340037 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3280003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:07:34 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:07:34 : epoch 69340037 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3280003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:07:34 compute-0 nova_compute[254819]: 2025-12-06 10:07:34.481 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:07:34 compute-0 ceph-mon[74327]: pgmap v791: 337 pgs: 337 active+clean; 88 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Dec 06 10:07:34 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:07:34 : epoch 69340037 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3288003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:07:34 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:07:34 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:07:34 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:07:34.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:07:35 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:07:35 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:07:35 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:07:35.090 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:07:35 compute-0 podman[263803]: 2025-12-06 10:07:35.461437809 +0000 UTC m=+0.089869335 container health_status ed08b04f63d3695f0aa2f04fcc706897af4aa24f286fa3cd10a4323ee2c868ab (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec 06 10:07:35 compute-0 nova_compute[254819]: 2025-12-06 10:07:35.494 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:07:35 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v792: 337 pgs: 337 active+clean; 88 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Dec 06 10:07:36 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:07:36 : epoch 69340037 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f32a0002730 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:07:36 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:07:36 : epoch 69340037 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f328c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:07:36 compute-0 ceph-mon[74327]: pgmap v792: 337 pgs: 337 active+clean; 88 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Dec 06 10:07:36 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:07:36 : epoch 69340037 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f328c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:07:36 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:07:36 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:07:36 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:07:36.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:07:37 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:07:37 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:07:37 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:07:37.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:07:37 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:07:37.270Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 06 10:07:37 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v793: 337 pgs: 337 active+clean; 88 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Dec 06 10:07:37 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:07:38 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:07:38 : epoch 69340037 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f32a40034e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:07:38 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:07:38 : epoch 69340037 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f32a0002730 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:07:38 compute-0 ceph-mon[74327]: pgmap v793: 337 pgs: 337 active+clean; 88 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Dec 06 10:07:38 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:07:38.849Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 06 10:07:38 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:07:38 : epoch 69340037 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f327c000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:07:38 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 06 10:07:38 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 10:07:38 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:07:38 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:07:38 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:07:38.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:07:39 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:07:39 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:07:39 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:07:39.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:07:39 compute-0 nova_compute[254819]: 2025-12-06 10:07:39.485 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:07:39 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v794: 337 pgs: 337 active+clean; 88 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Dec 06 10:07:39 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 10:07:39 compute-0 nova_compute[254819]: 2025-12-06 10:07:39.748 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:07:39 compute-0 nova_compute[254819]: 2025-12-06 10:07:39.749 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:07:39 compute-0 nova_compute[254819]: 2025-12-06 10:07:39.777 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:07:39 compute-0 nova_compute[254819]: 2025-12-06 10:07:39.777 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:07:39 compute-0 nova_compute[254819]: 2025-12-06 10:07:39.777 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:07:39 compute-0 nova_compute[254819]: 2025-12-06 10:07:39.777 254824 DEBUG nova.compute.resource_tracker [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 10:07:39 compute-0 nova_compute[254819]: 2025-12-06 10:07:39.777 254824 DEBUG oslo_concurrency.processutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 10:07:40 compute-0 ovn_controller[152417]: 2025-12-06T10:07:40Z|00006|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:6c:29:20 10.100.0.12
Dec 06 10:07:40 compute-0 ovn_controller[152417]: 2025-12-06T10:07:40Z|00007|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:6c:29:20 10.100.0.12
Dec 06 10:07:40 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 06 10:07:40 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/698280352' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 10:07:40 compute-0 nova_compute[254819]: 2025-12-06 10:07:40.282 254824 DEBUG oslo_concurrency.processutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.504s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 10:07:40 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:07:40 : epoch 69340037 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f328c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:07:40 compute-0 nova_compute[254819]: 2025-12-06 10:07:40.368 254824 DEBUG nova.virt.libvirt.driver [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] skipping disk for instance-00000003 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 10:07:40 compute-0 nova_compute[254819]: 2025-12-06 10:07:40.369 254824 DEBUG nova.virt.libvirt.driver [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] skipping disk for instance-00000003 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 10:07:40 compute-0 podman[263858]: 2025-12-06 10:07:40.404996047 +0000 UTC m=+0.070083781 container health_status ec1e66d2175087ad7a28a9a6b5a784e30128df3f5e268959d009e364cd5120f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Dec 06 10:07:40 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:07:40 : epoch 69340037 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f32a40034e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:07:40 compute-0 nova_compute[254819]: 2025-12-06 10:07:40.497 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:07:40 compute-0 nova_compute[254819]: 2025-12-06 10:07:40.555 254824 WARNING nova.virt.libvirt.driver [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 10:07:40 compute-0 nova_compute[254819]: 2025-12-06 10:07:40.557 254824 DEBUG nova.compute.resource_tracker [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4391MB free_disk=59.96738052368164GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 06 10:07:40 compute-0 nova_compute[254819]: 2025-12-06 10:07:40.558 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:07:40 compute-0 nova_compute[254819]: 2025-12-06 10:07:40.558 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:07:40 compute-0 nova_compute[254819]: 2025-12-06 10:07:40.630 254824 DEBUG nova.compute.resource_tracker [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Instance 2ef62e22-52fc-44f3-9964-8dc9b3c20686 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 06 10:07:40 compute-0 nova_compute[254819]: 2025-12-06 10:07:40.630 254824 DEBUG nova.compute.resource_tracker [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 10:07:40 compute-0 nova_compute[254819]: 2025-12-06 10:07:40.630 254824 DEBUG nova.compute.resource_tracker [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 06 10:07:40 compute-0 nova_compute[254819]: 2025-12-06 10:07:40.668 254824 DEBUG oslo_concurrency.processutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 10:07:40 compute-0 ceph-mon[74327]: pgmap v794: 337 pgs: 337 active+clean; 88 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Dec 06 10:07:40 compute-0 ceph-mon[74327]: from='client.? 192.168.122.100:0/698280352' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 10:07:40 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:07:40] "GET /metrics HTTP/1.1" 200 48458 "" "Prometheus/2.51.0"
Dec 06 10:07:40 compute-0 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:07:40] "GET /metrics HTTP/1.1" 200 48458 "" "Prometheus/2.51.0"
Dec 06 10:07:40 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:07:40 : epoch 69340037 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f32a0002730 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:07:40 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:07:40 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:07:40 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:07:40.984 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:07:41 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:07:41 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:07:41 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:07:41.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
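Annotation: the paired "starting new request" / "req done" beast lines are anonymous HEAD / probes arriving on a roughly 2 s cadence from 192.168.122.100 and 192.168.122.102, i.e. load-balancer health checks rather than user traffic (op status=0, http_status=200, empty body). An equivalent probe, with the RGW frontend port as an assumption since the log does not show it:

    import http.client

    # Same anonymous probe the balancer sends; RGW answers 200 with no body
    # when the beast frontend is up. Port 8080 is assumed, not from the log.
    conn = http.client.HTTPConnection("192.168.122.100", 8080, timeout=2)
    conn.request("HEAD", "/")
    print(conn.getresponse().status)
    conn.close()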
Dec 06 10:07:41 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 06 10:07:41 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3639730022' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 10:07:41 compute-0 nova_compute[254819]: 2025-12-06 10:07:41.126 254824 DEBUG oslo_concurrency.processutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
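Annotation: nova shells out to ceph to size the RBD-backed disk pool; the command started at 10:07:40.668 and returned 0 after 0.458 s, and the mon-side audit dispatches (cmd=[{"prefix": "df" ...}] from client.openstack) are the same request arriving at the monitor. A stripped-down version of the probe, reusing the client id and conf path shown in the log:

    import json, subprocess

    out = subprocess.check_output(
        ["ceph", "df", "--format=json",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"])
    stats = json.loads(out)
    # Cluster-wide totals live under the top-level 'stats' key of ceph df JSON.
    print(stats["stats"]["total_bytes"], stats["stats"]["total_avail_bytes"])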
Dec 06 10:07:41 compute-0 nova_compute[254819]: 2025-12-06 10:07:41.132 254824 DEBUG nova.compute.provider_tree [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Inventory has not changed in ProviderTree for provider: 06a9c7d1-c74c-47ea-9e97-16acfab6aa88 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 10:07:41 compute-0 nova_compute[254819]: 2025-12-06 10:07:41.147 254824 DEBUG nova.scheduler.client.report [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Inventory has not changed for provider 06a9c7d1-c74c-47ea-9e97-16acfab6aa88 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
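Annotation: placement computes effective capacity per resource class as (total - reserved) * allocation_ratio, so the inventory above advertises 7168 MB of schedulable RAM, 32 VCPUs, and 52.2 GB of disk. The arithmetic on the logged values:

    inventory = {
        "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        print(rc, (inv["total"] - inv["reserved"]) * inv["allocation_ratio"])
    # MEMORY_MB 7168.0, VCPU 32.0, DISK_GB 52.2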
Dec 06 10:07:41 compute-0 nova_compute[254819]: 2025-12-06 10:07:41.166 254824 DEBUG nova.compute.resource_tracker [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 10:07:41 compute-0 nova_compute[254819]: 2025-12-06 10:07:41.167 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.609s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
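Annotation: the Acquiring/acquired/released triplet around "compute_resources" (waited 0.001 s, held 0.609 s) is oslo.concurrency's standard lock logging from the resource tracker's update path. The pattern reduced to its documented core, as a sketch rather than nova's exact code:

    from oslo_concurrency import lockutils

    # Every caller naming the same lock is serialized; the DEBUG lines in the
    # journal above are emitted by this wrapper as it waits, acquires, releases.
    @lockutils.synchronized("compute_resources")
    def update_available_resource():
        ...  # mutate tracker state while holding the lock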
Dec 06 10:07:41 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v795: 337 pgs: 337 active+clean; 88 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 48 op/s
Dec 06 10:07:41 compute-0 ceph-mon[74327]: from='client.? 192.168.122.100:0/3639730022' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 10:07:42 compute-0 nova_compute[254819]: 2025-12-06 10:07:42.168 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:07:42 compute-0 nova_compute[254819]: 2025-12-06 10:07:42.169 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:07:42 compute-0 nova_compute[254819]: 2025-12-06 10:07:42.169 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:07:42 compute-0 nova_compute[254819]: 2025-12-06 10:07:42.169 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:07:42 compute-0 nova_compute[254819]: 2025-12-06 10:07:42.169 254824 DEBUG nova.compute.manager [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
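Annotation: the burst of "Running periodic task ComputeManager._*" lines is oslo.service's periodic task loop firing; _reclaim_queued_deletes exits immediately because CONF.reclaim_instance_interval <= 0. The shape of such a task, sketched against the oslo_service API with an assumed 60 s spacing:

    from oslo_service import periodic_task

    class Manager(periodic_task.PeriodicTasks):
        @periodic_task.periodic_task(spacing=60)
        def _reclaim_queued_deletes(self, context):
            # Mirrors the guard visible in the log: a non-positive interval
            # makes the task a no-op.
            if self.conf.reclaim_instance_interval <= 0:
                return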
Dec 06 10:07:42 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:07:42 : epoch 69340037 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f327c0016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:07:42 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:07:42 : epoch 69340037 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f328c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:07:42 compute-0 ceph-mon[74327]: pgmap v795: 337 pgs: 337 active+clean; 88 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 48 op/s
Dec 06 10:07:42 compute-0 ceph-mon[74327]: from='client.? 192.168.122.101:0/656854972' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 10:07:42 compute-0 ceph-mon[74327]: from='client.? 192.168.122.101:0/4086081083' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 10:07:42 compute-0 nova_compute[254819]: 2025-12-06 10:07:42.750 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:07:42 compute-0 nova_compute[254819]: 2025-12-06 10:07:42.751 254824 DEBUG nova.compute.manager [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 06 10:07:42 compute-0 nova_compute[254819]: 2025-12-06 10:07:42.751 254824 DEBUG nova.compute.manager [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 06 10:07:42 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:07:42 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:07:42 : epoch 69340037 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f32a40034e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:07:42 compute-0 nova_compute[254819]: 2025-12-06 10:07:42.984 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Acquiring lock "refresh_cache-2ef62e22-52fc-44f3-9964-8dc9b3c20686" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 10:07:42 compute-0 nova_compute[254819]: 2025-12-06 10:07:42.985 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Acquired lock "refresh_cache-2ef62e22-52fc-44f3-9964-8dc9b3c20686" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 10:07:42 compute-0 nova_compute[254819]: 2025-12-06 10:07:42.985 254824 DEBUG nova.network.neutron [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] [instance: 2ef62e22-52fc-44f3-9964-8dc9b3c20686] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec 06 10:07:42 compute-0 nova_compute[254819]: 2025-12-06 10:07:42.985 254824 DEBUG nova.objects.instance [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Lazy-loading 'info_cache' on Instance uuid 2ef62e22-52fc-44f3-9964-8dc9b3c20686 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 10:07:42 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:07:42 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:07:42 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:07:42.986 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:07:43 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:07:43 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:07:43 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:07:43.102 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:07:43 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v796: 337 pgs: 337 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 1.8 MiB/s rd, 2.1 MiB/s wr, 112 op/s
Dec 06 10:07:43 compute-0 ceph-mon[74327]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 06 10:07:43 compute-0 ceph-mon[74327]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1800.0 total, 600.0 interval
                                           Cumulative writes: 5517 writes, 24K keys, 5516 commit groups, 1.0 writes per commit group, ingest: 0.04 GB, 0.03 MB/s
                                           Cumulative WAL: 5517 writes, 5516 syncs, 1.00 writes per sync, written: 0.04 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1523 writes, 6794 keys, 1523 commit groups, 1.0 writes per commit group, ingest: 11.19 MB, 0.02 MB/s
                                           Interval WAL: 1523 writes, 1523 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     83.0      0.46              0.11        14    0.033       0      0       0.0       0.0
                                             L6      1/0   12.72 MB   0.0      0.2     0.0      0.2       0.2      0.0       0.0   4.4     96.5     83.8      2.02              0.46        13    0.156     67K   6762       0.0       0.0
                                            Sum      1/0   12.72 MB   0.0      0.2     0.0      0.2       0.2      0.0       0.0   5.4     78.6     83.7      2.49              0.57        27    0.092     67K   6762       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   7.9     90.2     91.1      0.99              0.24        12    0.083     34K   3113       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.2     0.0      0.2       0.2      0.0       0.0   0.0     96.5     83.8      2.02              0.46        13    0.156     67K   6762       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     84.2      0.46              0.11        13    0.035       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      7.9      0.01              0.00         1    0.007       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1800.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.038, interval 0.011
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.20 GB write, 0.12 MB/s write, 0.19 GB read, 0.11 MB/s read, 2.5 seconds
                                           Interval compaction: 0.09 GB write, 0.15 MB/s write, 0.09 GB read, 0.15 MB/s read, 1.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55fd9a571350#2 capacity: 304.00 MB usage: 13.65 MB table_size: 0 occupancy: 18446744073709551615 collections: 4 last_copies: 0 last_secs: 0.000118 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(740,13.11 MB,4.3119%) FilterBlock(28,201.30 KB,0.0646641%) IndexBlock(28,356.02 KB,0.114366%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
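Annotation: the rocksdb block above is ceph-mon's routine 600 s stats dump, not an error: about 5.5 k WAL writes over 30 minutes, zero stall time, and a single 12.72 MB L6 SST with cumulative write amplification 5.4. Pulling the headline numbers out of such a dump (the regexes are assumptions tuned to the exact format shown):

    import re

    dump = ("Cumulative writes: 5517 writes, 24K keys, 5516 commit groups, "
            "1.0 writes per commit group, ingest: 0.04 GB, 0.03 MB/s\n"
            "Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent")
    writes = re.search(r"Cumulative writes: (\d+) writes.*ingest: ([\d.]+) GB", dump)
    stall = re.search(r"Cumulative stall: (\S+) H:M:S", dump)
    print(writes.group(1), writes.group(2), stall.group(1))
    # -> 5517 0.04 00:00:0.000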
Dec 06 10:07:44 compute-0 nova_compute[254819]: 2025-12-06 10:07:44.151 254824 DEBUG nova.network.neutron [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] [instance: 2ef62e22-52fc-44f3-9964-8dc9b3c20686] Updating instance_info_cache with network_info: [{"id": "a7f5880e-0fb8-4f37-8a6c-7f0e8558dcf7", "address": "fa:16:3e:6c:29:20", "network": {"id": "4d9eb8be-73ac-4cfc-8821-fb41b5868957", "bridge": "br-int", "label": "tempest-network-smoke--165851366", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.201", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa7f5880e-0f", "ovs_interfaceid": "a7f5880e-0fb8-4f37-8a6c-7f0e8558dcf7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
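Annotation: the instance_info_cache payload above is the serialized network_info for instance 2ef62e22: one OVN-bound OVS port with fixed IP 10.100.0.12, floating IP 192.168.122.201, and MTU 1442. Walking that structure for the addresses, over a pared-down copy of the JSON shape in the log:

    network_info = [{
        "id": "a7f5880e-0fb8-4f37-8a6c-7f0e8558dcf7",
        "network": {"subnets": [{
            "ips": [{"address": "10.100.0.12",
                     "floating_ips": [{"address": "192.168.122.201"}]}],
        }]},
    }]
    for vif in network_info:
        for subnet in vif["network"]["subnets"]:
            for ip in subnet["ips"]:
                print(vif["id"], ip["address"],
                      [f["address"] for f in ip["floating_ips"]])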
Dec 06 10:07:44 compute-0 nova_compute[254819]: 2025-12-06 10:07:44.178 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Releasing lock "refresh_cache-2ef62e22-52fc-44f3-9964-8dc9b3c20686" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 10:07:44 compute-0 nova_compute[254819]: 2025-12-06 10:07:44.179 254824 DEBUG nova.compute.manager [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] [instance: 2ef62e22-52fc-44f3-9964-8dc9b3c20686] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Dec 06 10:07:44 compute-0 nova_compute[254819]: 2025-12-06 10:07:44.180 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:07:44 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:07:44 : epoch 69340037 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f32a0002730 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:07:44 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:07:44 : epoch 69340037 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f327c0016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:07:44 compute-0 nova_compute[254819]: 2025-12-06 10:07:44.488 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:07:44 compute-0 ceph-mon[74327]: pgmap v796: 337 pgs: 337 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 1.8 MiB/s rd, 2.1 MiB/s wr, 112 op/s
Dec 06 10:07:44 compute-0 ceph-mon[74327]: from='client.? 192.168.122.102:0/399366570' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 10:07:44 compute-0 ceph-mon[74327]: from='client.? 192.168.122.102:0/3598102245' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 10:07:44 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:07:44 : epoch 69340037 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f328c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:07:44 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:07:44 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:07:44 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:07:44.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:07:45 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:07:45 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:07:45 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:07:45.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:07:45 compute-0 nova_compute[254819]: 2025-12-06 10:07:45.498 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:07:45 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v797: 337 pgs: 337 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Dec 06 10:07:45 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 06 10:07:45 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3571976026' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 10:07:45 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 06 10:07:45 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3571976026' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 10:07:46 compute-0 nova_compute[254819]: 2025-12-06 10:07:46.026 254824 INFO nova.compute.manager [None req-a4dff3fb-086c-491f-ac98-f0609a3e12cd 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 2ef62e22-52fc-44f3-9964-8dc9b3c20686] Get console output
Dec 06 10:07:46 compute-0 nova_compute[254819]: 2025-12-06 10:07:46.034 261881 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
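Annotation: "can't concat NoneType to bytes" is the literal TypeError text: the console pty read handed back None and the code appended it to a bytes buffer; nova logs the error and carries on. The failure mode and the usual defensive form, in miniature (illustrative, not the nova source):

    buf = b"console output so far"
    chunk = None                # a pty read may yield None instead of b""
    try:
        buf += chunk            # TypeError: can't concat NoneType to bytes
    except TypeError:
        pass
    buf += chunk or b""         # defensive form: treat None as empty bytes
    print(buf)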
Dec 06 10:07:46 compute-0 nova_compute[254819]: 2025-12-06 10:07:46.171 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:07:46 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:07:46 : epoch 69340037 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f32a40034e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:07:46 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:07:46 : epoch 69340037 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f32a40034e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:07:46 compute-0 ceph-mon[74327]: pgmap v797: 337 pgs: 337 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Dec 06 10:07:46 compute-0 ceph-mon[74327]: from='client.? 192.168.122.10:0/3571976026' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 10:07:46 compute-0 ceph-mon[74327]: from='client.? 192.168.122.10:0/3571976026' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 10:07:46 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:07:46 : epoch 69340037 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f327c0016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:07:46 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:07:46 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:07:46 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:07:46.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:07:47 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:07:47 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 10:07:47 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:07:47.110 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 10:07:47 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:07:47.271Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 06 10:07:47 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v798: 337 pgs: 337 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Dec 06 10:07:47 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:07:48 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:07:48 : epoch 69340037 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f328c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:07:48 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:07:48 : epoch 69340037 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f32a40034e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:07:48 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:07:48.851Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 06 10:07:48 compute-0 ceph-mon[74327]: pgmap v798: 337 pgs: 337 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Dec 06 10:07:48 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:07:48 : epoch 69340037 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f32a0002730 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:07:48 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:07:48 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:07:48 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:07:48.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:07:49 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:07:49 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:07:49 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:07:49.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:07:49 compute-0 nova_compute[254819]: 2025-12-06 10:07:49.492 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:07:49 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v799: 337 pgs: 337 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Dec 06 10:07:50 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:07:50 : epoch 69340037 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f327c002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:07:50 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:07:50 : epoch 69340037 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f328c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:07:50 compute-0 nova_compute[254819]: 2025-12-06 10:07:50.503 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:07:50 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:07:50] "GET /metrics HTTP/1.1" 200 48458 "" "Prometheus/2.51.0"
Dec 06 10:07:50 compute-0 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:07:50] "GET /metrics HTTP/1.1" 200 48458 "" "Prometheus/2.51.0"
Dec 06 10:07:50 compute-0 ceph-mon[74327]: pgmap v799: 337 pgs: 337 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Dec 06 10:07:50 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:07:50 : epoch 69340037 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f32a40034e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:07:50 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:07:50 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:07:50 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:07:50.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:07:51 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:07:51 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:07:51 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:07:51.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:07:51 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v800: 337 pgs: 337 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Dec 06 10:07:51 compute-0 sudo[263910]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 10:07:51 compute-0 sudo[263910]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:07:51 compute-0 sudo[263910]: pam_unix(sudo:session): session closed for user root
Dec 06 10:07:52 compute-0 nova_compute[254819]: 2025-12-06 10:07:52.239 254824 DEBUG oslo_concurrency.lockutils [None req-e0183c1c-acb3-4d93-b267-e0c2ccf7ac17 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Acquiring lock "interface-2ef62e22-52fc-44f3-9964-8dc9b3c20686-None" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:07:52 compute-0 nova_compute[254819]: 2025-12-06 10:07:52.240 254824 DEBUG oslo_concurrency.lockutils [None req-e0183c1c-acb3-4d93-b267-e0c2ccf7ac17 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lock "interface-2ef62e22-52fc-44f3-9964-8dc9b3c20686-None" acquired by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:07:52 compute-0 nova_compute[254819]: 2025-12-06 10:07:52.241 254824 DEBUG nova.objects.instance [None req-e0183c1c-acb3-4d93-b267-e0c2ccf7ac17 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lazy-loading 'flavor' on Instance uuid 2ef62e22-52fc-44f3-9964-8dc9b3c20686 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 10:07:52 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:07:52 : epoch 69340037 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f32a0002730 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:07:52 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:07:52 : epoch 69340037 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f327c002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:07:52 compute-0 nova_compute[254819]: 2025-12-06 10:07:52.756 254824 DEBUG nova.objects.instance [None req-e0183c1c-acb3-4d93-b267-e0c2ccf7ac17 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lazy-loading 'pci_requests' on Instance uuid 2ef62e22-52fc-44f3-9964-8dc9b3c20686 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 10:07:52 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:07:52 compute-0 ceph-mon[74327]: pgmap v800: 337 pgs: 337 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Dec 06 10:07:52 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:07:52 : epoch 69340037 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f328c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:07:52 compute-0 nova_compute[254819]: 2025-12-06 10:07:52.963 254824 DEBUG nova.network.neutron [None req-e0183c1c-acb3-4d93-b267-e0c2ccf7ac17 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 2ef62e22-52fc-44f3-9964-8dc9b3c20686] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 06 10:07:52 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:07:52 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:07:52 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:07:52.997 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:07:53 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:07:53 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:07:53 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:07:53.119 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:07:53 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v801: 337 pgs: 337 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Dec 06 10:07:53 compute-0 nova_compute[254819]: 2025-12-06 10:07:53.823 254824 DEBUG nova.policy [None req-e0183c1c-acb3-4d93-b267-e0c2ccf7ac17 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '03615580775245e6ae335ee9d785611f', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '92b402c8d3e2476abc98be42a1e6d34e', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
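Annotation: the policy check fails because the request credentials carry only the reader and member roles with is_admin False, while network:attach_external_network is conventionally an admin-only rule; nova therefore proceeds without external-network privileges. Reproducing the decision with oslo.policy, with the rule string registered by hand as an assumed default rather than read from nova:

    from oslo_config import cfg
    from oslo_policy import policy

    enforcer = policy.Enforcer(cfg.ConfigOpts())
    # 'is_admin:True' is the assumed default for this target.
    enforcer.register_default(
        policy.RuleDefault("network:attach_external_network", "is_admin:True"))
    creds = {"is_admin": False, "roles": ["reader", "member"],
             "project_id": "92b402c8d3e2476abc98be42a1e6d34e"}
    print(enforcer.authorize("network:attach_external_network", {}, creds,
                             do_raise=False))  # -> False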
Dec 06 10:07:53 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 06 10:07:53 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 10:07:53 compute-0 ceph-mon[74327]: pgmap v801: 337 pgs: 337 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Dec 06 10:07:53 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 10:07:53 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 10:07:53 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 10:07:54 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 10:07:54 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 10:07:54 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 10:07:54 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 10:07:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:07:54.239 162267 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:07:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:07:54.240 162267 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:07:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:07:54.241 162267 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:07:54 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:07:54 : epoch 69340037 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f32a40034e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:07:54 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:07:54 : epoch 69340037 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f32a0002730 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:07:54 compute-0 nova_compute[254819]: 2025-12-06 10:07:54.521 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:07:54 compute-0 nova_compute[254819]: 2025-12-06 10:07:54.912 254824 DEBUG nova.network.neutron [None req-e0183c1c-acb3-4d93-b267-e0c2ccf7ac17 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 2ef62e22-52fc-44f3-9964-8dc9b3c20686] Successfully created port: bf396b58-3b48-44ae-92bd-e71275c9883c _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 06 10:07:54 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:07:54 : epoch 69340037 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f327c002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:07:55 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:07:55 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:07:55 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:07:55.000 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:07:55 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:07:55 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:07:55 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:07:55.122 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:07:55 compute-0 nova_compute[254819]: 2025-12-06 10:07:55.506 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:07:55 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v802: 337 pgs: 337 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 14 KiB/s wr, 0 op/s
Dec 06 10:07:55 compute-0 nova_compute[254819]: 2025-12-06 10:07:55.854 254824 DEBUG nova.network.neutron [None req-e0183c1c-acb3-4d93-b267-e0c2ccf7ac17 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 2ef62e22-52fc-44f3-9964-8dc9b3c20686] Successfully updated port: bf396b58-3b48-44ae-92bd-e71275c9883c _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 06 10:07:55 compute-0 nova_compute[254819]: 2025-12-06 10:07:55.879 254824 DEBUG oslo_concurrency.lockutils [None req-e0183c1c-acb3-4d93-b267-e0c2ccf7ac17 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Acquiring lock "refresh_cache-2ef62e22-52fc-44f3-9964-8dc9b3c20686" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 10:07:55 compute-0 nova_compute[254819]: 2025-12-06 10:07:55.879 254824 DEBUG oslo_concurrency.lockutils [None req-e0183c1c-acb3-4d93-b267-e0c2ccf7ac17 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Acquired lock "refresh_cache-2ef62e22-52fc-44f3-9964-8dc9b3c20686" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 10:07:55 compute-0 nova_compute[254819]: 2025-12-06 10:07:55.879 254824 DEBUG nova.network.neutron [None req-e0183c1c-acb3-4d93-b267-e0c2ccf7ac17 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 2ef62e22-52fc-44f3-9964-8dc9b3c20686] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 06 10:07:55 compute-0 nova_compute[254819]: 2025-12-06 10:07:55.984 254824 DEBUG nova.compute.manager [req-52f0b927-333c-4182-a821-c425b0174b97 req-9ca6fa0b-454f-4f84-89ab-e31d0dab0d0f d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 2ef62e22-52fc-44f3-9964-8dc9b3c20686] Received event network-changed-bf396b58-3b48-44ae-92bd-e71275c9883c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 10:07:55 compute-0 nova_compute[254819]: 2025-12-06 10:07:55.985 254824 DEBUG nova.compute.manager [req-52f0b927-333c-4182-a821-c425b0174b97 req-9ca6fa0b-454f-4f84-89ab-e31d0dab0d0f d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 2ef62e22-52fc-44f3-9964-8dc9b3c20686] Refreshing instance network info cache due to event network-changed-bf396b58-3b48-44ae-92bd-e71275c9883c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 10:07:55 compute-0 nova_compute[254819]: 2025-12-06 10:07:55.986 254824 DEBUG oslo_concurrency.lockutils [req-52f0b927-333c-4182-a821-c425b0174b97 req-9ca6fa0b-454f-4f84-89ab-e31d0dab0d0f d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Acquiring lock "refresh_cache-2ef62e22-52fc-44f3-9964-8dc9b3c20686" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 10:07:56 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:07:56 : epoch 69340037 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f328c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:07:56 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:07:56 : epoch 69340037 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f32a40034e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:07:56 compute-0 ceph-mon[74327]: pgmap v802: 337 pgs: 337 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 14 KiB/s wr, 0 op/s
Dec 06 10:07:56 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:07:56 : epoch 69340037 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f32a0002730 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:07:57 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:07:57 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:07:57 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:07:57.001 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:07:57 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:07:57 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:07:57 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:07:57.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:07:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:07:57.272Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec 06 10:07:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:07:57.272Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec 06 10:07:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:07:57.273Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
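Annotation: the alertmanager dispatcher errors come in two flavors: "dial tcp ... i/o timeout" (no TCP connection was established to the receiver at all) and "context deadline exceeded" (the whole notify attempt ran out its timeout). A quick reachability check against the two receivers named in the log, with an assumed 2 s timeout:

    import socket

    for host in ("compute-1.ctlplane.example.com",
                 "compute-2.ctlplane.example.com"):
        try:
            socket.create_connection((host, 8443), timeout=2).close()
            print(host, "reachable")
        except OSError as exc:
            print(host, "unreachable:", exc)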
Dec 06 10:07:57 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v803: 337 pgs: 337 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 14 KiB/s wr, 0 op/s
Dec 06 10:07:57 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:07:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:07:58 : epoch 69340037 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f32a0002730 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:07:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:07:58 : epoch 69340037 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f328c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:07:58 compute-0 ceph-mon[74327]: pgmap v803: 337 pgs: 337 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 14 KiB/s wr, 0 op/s
Dec 06 10:07:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:07:58.852Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 06 10:07:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:07:58 : epoch 69340037 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f32a40034e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:07:59 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:07:59 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:07:59 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:07:59.004 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:07:59 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:07:59 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:07:59 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:07:59.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:07:59 compute-0 nova_compute[254819]: 2025-12-06 10:07:59.177 254824 DEBUG nova.network.neutron [None req-e0183c1c-acb3-4d93-b267-e0c2ccf7ac17 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 2ef62e22-52fc-44f3-9964-8dc9b3c20686] Updating instance_info_cache with network_info: [{"id": "a7f5880e-0fb8-4f37-8a6c-7f0e8558dcf7", "address": "fa:16:3e:6c:29:20", "network": {"id": "4d9eb8be-73ac-4cfc-8821-fb41b5868957", "bridge": "br-int", "label": "tempest-network-smoke--165851366", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.201", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": null, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa7f5880e-0f", "ovs_interfaceid": "a7f5880e-0fb8-4f37-8a6c-7f0e8558dcf7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "bf396b58-3b48-44ae-92bd-e71275c9883c", "address": "fa:16:3e:9c:56:e3", "network": {"id": "b700d432-ed1c-4e29-8f64-6e35196305aa", "bridge": "br-int", "label": "tempest-network-smoke--1192945462", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.20", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf396b58-3b", "ovs_interfaceid": "bf396b58-3b48-44ae-92bd-e71275c9883c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 10:07:59 compute-0 nova_compute[254819]: 2025-12-06 10:07:59.202 254824 DEBUG oslo_concurrency.lockutils [None req-e0183c1c-acb3-4d93-b267-e0c2ccf7ac17 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Releasing lock "refresh_cache-2ef62e22-52fc-44f3-9964-8dc9b3c20686" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 10:07:59 compute-0 nova_compute[254819]: 2025-12-06 10:07:59.204 254824 DEBUG oslo_concurrency.lockutils [req-52f0b927-333c-4182-a821-c425b0174b97 req-9ca6fa0b-454f-4f84-89ab-e31d0dab0d0f d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Acquired lock "refresh_cache-2ef62e22-52fc-44f3-9964-8dc9b3c20686" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 10:07:59 compute-0 nova_compute[254819]: 2025-12-06 10:07:59.204 254824 DEBUG nova.network.neutron [req-52f0b927-333c-4182-a821-c425b0174b97 req-9ca6fa0b-454f-4f84-89ab-e31d0dab0d0f d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 2ef62e22-52fc-44f3-9964-8dc9b3c20686] Refreshing network info cache for port bf396b58-3b48-44ae-92bd-e71275c9883c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 06 10:07:59 compute-0 nova_compute[254819]: 2025-12-06 10:07:59.208 254824 DEBUG nova.virt.libvirt.vif [None req-e0183c1c-acb3-4d93-b267-e0c2ccf7ac17 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T10:07:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1205802956',display_name='tempest-TestNetworkBasicOps-server-1205802956',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1205802956',id=3,image_ref='9489b8a5-a798-4e26-87f9-59bb1eb2e6fd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJ5T1qcHH05a9NmUaQjnoDRANzOfCWA0bQySUh/2laJiduU/bwXdkcdraO/GcO81J8j8CnPS5RyrjJyMRbGp/po0cthjI8Tgw893oNF7dd79URxvc2r73z8/7tKvZVwU9A==',key_name='tempest-TestNetworkBasicOps-2032054379',keypairs=<?>,launch_index=0,launched_at=2025-12-06T10:07:27Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='92b402c8d3e2476abc98be42a1e6d34e',ramdisk_id='',reservation_id='r-hrg57eo7',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='9489b8a5-a798-4e26-87f9-59bb1eb2e6fd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1971100882',owner_user_name='tempest-TestNetworkBasicOps-1971100882-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T10:07:27Z,user_data=None,user_id='03615580775245e6ae335ee9d785611f',uuid=2ef62e22-52fc-44f3-9964-8dc9b3c20686,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "bf396b58-3b48-44ae-92bd-e71275c9883c", "address": "fa:16:3e:9c:56:e3", "network": {"id": "b700d432-ed1c-4e29-8f64-6e35196305aa", "bridge": "br-int", "label": "tempest-network-smoke--1192945462", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.20", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf396b58-3b", "ovs_interfaceid": "bf396b58-3b48-44ae-92bd-e71275c9883c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 06 10:07:59 compute-0 nova_compute[254819]: 2025-12-06 10:07:59.208 254824 DEBUG nova.network.os_vif_util [None req-e0183c1c-acb3-4d93-b267-e0c2ccf7ac17 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Converting VIF {"id": "bf396b58-3b48-44ae-92bd-e71275c9883c", "address": "fa:16:3e:9c:56:e3", "network": {"id": "b700d432-ed1c-4e29-8f64-6e35196305aa", "bridge": "br-int", "label": "tempest-network-smoke--1192945462", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.20", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf396b58-3b", "ovs_interfaceid": "bf396b58-3b48-44ae-92bd-e71275c9883c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 10:07:59 compute-0 nova_compute[254819]: 2025-12-06 10:07:59.209 254824 DEBUG nova.network.os_vif_util [None req-e0183c1c-acb3-4d93-b267-e0c2ccf7ac17 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9c:56:e3,bridge_name='br-int',has_traffic_filtering=True,id=bf396b58-3b48-44ae-92bd-e71275c9883c,network=Network(b700d432-ed1c-4e29-8f64-6e35196305aa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbf396b58-3b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 10:07:59 compute-0 nova_compute[254819]: 2025-12-06 10:07:59.210 254824 DEBUG os_vif [None req-e0183c1c-acb3-4d93-b267-e0c2ccf7ac17 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:9c:56:e3,bridge_name='br-int',has_traffic_filtering=True,id=bf396b58-3b48-44ae-92bd-e71275c9883c,network=Network(b700d432-ed1c-4e29-8f64-6e35196305aa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbf396b58-3b') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 06 10:07:59 compute-0 nova_compute[254819]: 2025-12-06 10:07:59.210 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:07:59 compute-0 nova_compute[254819]: 2025-12-06 10:07:59.211 254824 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 10:07:59 compute-0 nova_compute[254819]: 2025-12-06 10:07:59.211 254824 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 10:07:59 compute-0 nova_compute[254819]: 2025-12-06 10:07:59.215 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:07:59 compute-0 nova_compute[254819]: 2025-12-06 10:07:59.215 254824 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapbf396b58-3b, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 10:07:59 compute-0 nova_compute[254819]: 2025-12-06 10:07:59.216 254824 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapbf396b58-3b, col_values=(('external_ids', {'iface-id': 'bf396b58-3b48-44ae-92bd-e71275c9883c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:9c:56:e3', 'vm-uuid': '2ef62e22-52fc-44f3-9964-8dc9b3c20686'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 10:07:59 compute-0 nova_compute[254819]: 2025-12-06 10:07:59.218 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:07:59 compute-0 NetworkManager[48882]: <info>  [1765015679.2195] manager: (tapbf396b58-3b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/37)
Dec 06 10:07:59 compute-0 nova_compute[254819]: 2025-12-06 10:07:59.223 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 10:07:59 compute-0 nova_compute[254819]: 2025-12-06 10:07:59.228 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:07:59 compute-0 nova_compute[254819]: 2025-12-06 10:07:59.229 254824 INFO os_vif [None req-e0183c1c-acb3-4d93-b267-e0c2ccf7ac17 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:9c:56:e3,bridge_name='br-int',has_traffic_filtering=True,id=bf396b58-3b48-44ae-92bd-e71275c9883c,network=Network(b700d432-ed1c-4e29-8f64-6e35196305aa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbf396b58-3b')
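[Annotation: the three ovsdbapp transactions above (AddBridgeCommand, AddPortCommand, DbSetCommand) are the OVSDB equivalent of the classic ovs-vsctl calls. Below is a minimal sketch of the same plug sequence, assuming ovs-vsctl is installed and the local OVSDB socket is reachable; plug_ovs_port is a hypothetical helper, and the external_ids keys are the ones visible in the DbSetCommand record.]

    import subprocess

    def plug_ovs_port(bridge, port, iface_id, mac, vm_uuid):
        # --may-exist mirrors may_exist=True in the logged commands: the
        # call succeeds without change if the bridge or port already exists.
        subprocess.run(["ovs-vsctl", "--may-exist", "add-br", bridge],
                       check=True)
        subprocess.run(
            ["ovs-vsctl", "--may-exist", "add-port", bridge, port,
             "--", "set", "Interface", port,
             f"external_ids:iface-id={iface_id}",
             "external_ids:iface-status=active",
             f"external_ids:attached-mac={mac}",
             f"external_ids:vm-uuid={vm_uuid}"],
            check=True)

    plug_ovs_port("br-int", "tapbf396b58-3b",
                  "bf396b58-3b48-44ae-92bd-e71275c9883c",
                  "fa:16:3e:9c:56:e3",
                  "2ef62e22-52fc-44f3-9964-8dc9b3c20686")

[The iface-id external_id is what lets ovn-controller match the OVS interface to the Neutron port and claim the logical port, as seen a few records below.]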
Dec 06 10:07:59 compute-0 nova_compute[254819]: 2025-12-06 10:07:59.230 254824 DEBUG nova.virt.libvirt.vif [None req-e0183c1c-acb3-4d93-b267-e0c2ccf7ac17 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T10:07:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1205802956',display_name='tempest-TestNetworkBasicOps-server-1205802956',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1205802956',id=3,image_ref='9489b8a5-a798-4e26-87f9-59bb1eb2e6fd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJ5T1qcHH05a9NmUaQjnoDRANzOfCWA0bQySUh/2laJiduU/bwXdkcdraO/GcO81J8j8CnPS5RyrjJyMRbGp/po0cthjI8Tgw893oNF7dd79URxvc2r73z8/7tKvZVwU9A==',key_name='tempest-TestNetworkBasicOps-2032054379',keypairs=<?>,launch_index=0,launched_at=2025-12-06T10:07:27Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='92b402c8d3e2476abc98be42a1e6d34e',ramdisk_id='',reservation_id='r-hrg57eo7',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='9489b8a5-a798-4e26-87f9-59bb1eb2e6fd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1971100882',owner_user_name='tempest-TestNetworkBasicOps-1971100882-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T10:07:27Z,user_data=None,user_id='03615580775245e6ae335ee9d785611f',uuid=2ef62e22-52fc-44f3-9964-8dc9b3c20686,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "bf396b58-3b48-44ae-92bd-e71275c9883c", "address": "fa:16:3e:9c:56:e3", "network": {"id": "b700d432-ed1c-4e29-8f64-6e35196305aa", "bridge": "br-int", "label": "tempest-network-smoke--1192945462", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.20", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf396b58-3b", "ovs_interfaceid": "bf396b58-3b48-44ae-92bd-e71275c9883c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 06 10:07:59 compute-0 nova_compute[254819]: 2025-12-06 10:07:59.231 254824 DEBUG nova.network.os_vif_util [None req-e0183c1c-acb3-4d93-b267-e0c2ccf7ac17 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Converting VIF {"id": "bf396b58-3b48-44ae-92bd-e71275c9883c", "address": "fa:16:3e:9c:56:e3", "network": {"id": "b700d432-ed1c-4e29-8f64-6e35196305aa", "bridge": "br-int", "label": "tempest-network-smoke--1192945462", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.20", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf396b58-3b", "ovs_interfaceid": "bf396b58-3b48-44ae-92bd-e71275c9883c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 10:07:59 compute-0 nova_compute[254819]: 2025-12-06 10:07:59.231 254824 DEBUG nova.network.os_vif_util [None req-e0183c1c-acb3-4d93-b267-e0c2ccf7ac17 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9c:56:e3,bridge_name='br-int',has_traffic_filtering=True,id=bf396b58-3b48-44ae-92bd-e71275c9883c,network=Network(b700d432-ed1c-4e29-8f64-6e35196305aa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbf396b58-3b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 10:07:59 compute-0 nova_compute[254819]: 2025-12-06 10:07:59.234 254824 DEBUG nova.virt.libvirt.guest [None req-e0183c1c-acb3-4d93-b267-e0c2ccf7ac17 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] attach device xml: <interface type="ethernet">
Dec 06 10:07:59 compute-0 nova_compute[254819]:   <mac address="fa:16:3e:9c:56:e3"/>
Dec 06 10:07:59 compute-0 nova_compute[254819]:   <model type="virtio"/>
Dec 06 10:07:59 compute-0 nova_compute[254819]:   <driver name="vhost" rx_queue_size="512"/>
Dec 06 10:07:59 compute-0 nova_compute[254819]:   <mtu size="1442"/>
Dec 06 10:07:59 compute-0 nova_compute[254819]:   <target dev="tapbf396b58-3b"/>
Dec 06 10:07:59 compute-0 nova_compute[254819]: </interface>
Dec 06 10:07:59 compute-0 nova_compute[254819]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
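[Annotation: Nova hands the interface XML above to libvirt through its Guest wrapper. A minimal standalone sketch of the underlying libvirt-python call, assuming a qemu:///system connection on this host; the XML and UUID are taken verbatim from the surrounding records, and this is not Nova's exact code path.]

    import libvirt

    IFACE_XML = """<interface type="ethernet">
      <mac address="fa:16:3e:9c:56:e3"/>
      <model type="virtio"/>
      <driver name="vhost" rx_queue_size="512"/>
      <mtu size="1442"/>
      <target dev="tapbf396b58-3b"/>
    </interface>"""

    conn = libvirt.open("qemu:///system")
    dom = conn.lookupByUUIDString("2ef62e22-52fc-44f3-9964-8dc9b3c20686")
    # Attach on the live domain and persist the change in the stored
    # config, which is what Nova requests for a running instance.
    dom.attachDeviceFlags(
        IFACE_XML,
        libvirt.VIR_DOMAIN_AFFECT_LIVE | libvirt.VIR_DOMAIN_AFFECT_CONFIG)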
Dec 06 10:07:59 compute-0 kernel: tapbf396b58-3b: entered promiscuous mode
Dec 06 10:07:59 compute-0 NetworkManager[48882]: <info>  [1765015679.2522] manager: (tapbf396b58-3b): new Tun device (/org/freedesktop/NetworkManager/Devices/38)
Dec 06 10:07:59 compute-0 ovn_controller[152417]: 2025-12-06T10:07:59Z|00043|binding|INFO|Claiming lport bf396b58-3b48-44ae-92bd-e71275c9883c for this chassis.
Dec 06 10:07:59 compute-0 ovn_controller[152417]: 2025-12-06T10:07:59Z|00044|binding|INFO|bf396b58-3b48-44ae-92bd-e71275c9883c: Claiming fa:16:3e:9c:56:e3 10.100.0.20
Dec 06 10:07:59 compute-0 nova_compute[254819]: 2025-12-06 10:07:59.252 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:07:59 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:07:59.262 162267 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9c:56:e3 10.100.0.20'], port_security=['fa:16:3e:9c:56:e3 10.100.0.20'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.20/28', 'neutron:device_id': '2ef62e22-52fc-44f3-9964-8dc9b3c20686', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b700d432-ed1c-4e29-8f64-6e35196305aa', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '92b402c8d3e2476abc98be42a1e6d34e', 'neutron:revision_number': '2', 'neutron:security_group_ids': '1e7cc18e-31f3-4bdb-821d-1683a210c530', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8e1a9f4d-accf-4c87-b819-872eff5f1a0b, chassis=[<ovs.db.idl.Row object at 0x7f70c28558b0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f70c28558b0>], logical_port=bf396b58-3b48-44ae-92bd-e71275c9883c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 10:07:59 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:07:59.263 162267 INFO neutron.agent.ovn.metadata.agent [-] Port bf396b58-3b48-44ae-92bd-e71275c9883c in datapath b700d432-ed1c-4e29-8f64-6e35196305aa bound to our chassis
Dec 06 10:07:59 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:07:59.264 162267 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network b700d432-ed1c-4e29-8f64-6e35196305aa
Dec 06 10:07:59 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:07:59.284 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[4fda7e65-103a-407a-a238-9e12dfd1fd4e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:07:59 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:07:59.285 162267 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapb700d432-e1 in ovnmeta-b700d432-ed1c-4e29-8f64-6e35196305aa namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 06 10:07:59 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:07:59.286 260126 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapb700d432-e0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 06 10:07:59 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:07:59.286 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[7a10e162-3754-4dd6-8aa7-282cbf167aee]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:07:59 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:07:59.287 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[e4628cd5-a1f7-46ff-8180-a420270ea038]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:07:59 compute-0 systemd-udevd[263949]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 10:07:59 compute-0 nova_compute[254819]: 2025-12-06 10:07:59.292 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:07:59 compute-0 ovn_controller[152417]: 2025-12-06T10:07:59Z|00045|binding|INFO|Setting lport bf396b58-3b48-44ae-92bd-e71275c9883c ovn-installed in OVS
Dec 06 10:07:59 compute-0 ovn_controller[152417]: 2025-12-06T10:07:59Z|00046|binding|INFO|Setting lport bf396b58-3b48-44ae-92bd-e71275c9883c up in Southbound
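[Annotation: the claim/up sequence logged by ovn-controller is recorded in the Southbound Port_Binding table. A quick way to confirm it, sketched with the ovn-sbctl CLI, assuming it is available on a node with Southbound DB access.]

    import subprocess

    lport = "bf396b58-3b48-44ae-92bd-e71275c9883c"
    # The returned row should show chassis set to this host's chassis
    # record and up=[true] once ovn-controller has completed the steps
    # logged above.
    out = subprocess.run(
        ["ovn-sbctl", "find", "Port_Binding", f"logical_port={lport}"],
        capture_output=True, text=True, check=True)
    print(out.stdout)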
Dec 06 10:07:59 compute-0 nova_compute[254819]: 2025-12-06 10:07:59.303 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:07:59 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:07:59.308 162385 DEBUG oslo.privsep.daemon [-] privsep: reply[a05d6be8-9583-47bb-8103-eadfbdcb0b69]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:07:59 compute-0 NetworkManager[48882]: <info>  [1765015679.3139] device (tapbf396b58-3b): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 06 10:07:59 compute-0 NetworkManager[48882]: <info>  [1765015679.3153] device (tapbf396b58-3b): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 06 10:07:59 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:07:59.334 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[e7ba3fdd-1e2c-4afb-b2fb-cfc1ce6cd24a]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:07:59 compute-0 nova_compute[254819]: 2025-12-06 10:07:59.339 254824 DEBUG nova.virt.libvirt.driver [None req-e0183c1c-acb3-4d93-b267-e0c2ccf7ac17 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 10:07:59 compute-0 nova_compute[254819]: 2025-12-06 10:07:59.339 254824 DEBUG nova.virt.libvirt.driver [None req-e0183c1c-acb3-4d93-b267-e0c2ccf7ac17 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 10:07:59 compute-0 nova_compute[254819]: 2025-12-06 10:07:59.339 254824 DEBUG nova.virt.libvirt.driver [None req-e0183c1c-acb3-4d93-b267-e0c2ccf7ac17 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] No VIF found with MAC fa:16:3e:6c:29:20, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 06 10:07:59 compute-0 nova_compute[254819]: 2025-12-06 10:07:59.340 254824 DEBUG nova.virt.libvirt.driver [None req-e0183c1c-acb3-4d93-b267-e0c2ccf7ac17 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] No VIF found with MAC fa:16:3e:9c:56:e3, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 06 10:07:59 compute-0 nova_compute[254819]: 2025-12-06 10:07:59.370 254824 DEBUG nova.virt.libvirt.guest [None req-e0183c1c-acb3-4d93-b267-e0c2ccf7ac17 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 06 10:07:59 compute-0 nova_compute[254819]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 06 10:07:59 compute-0 nova_compute[254819]:   <nova:name>tempest-TestNetworkBasicOps-server-1205802956</nova:name>
Dec 06 10:07:59 compute-0 nova_compute[254819]:   <nova:creationTime>2025-12-06 10:07:59</nova:creationTime>
Dec 06 10:07:59 compute-0 nova_compute[254819]:   <nova:flavor name="m1.nano">
Dec 06 10:07:59 compute-0 nova_compute[254819]:     <nova:memory>128</nova:memory>
Dec 06 10:07:59 compute-0 nova_compute[254819]:     <nova:disk>1</nova:disk>
Dec 06 10:07:59 compute-0 nova_compute[254819]:     <nova:swap>0</nova:swap>
Dec 06 10:07:59 compute-0 nova_compute[254819]:     <nova:ephemeral>0</nova:ephemeral>
Dec 06 10:07:59 compute-0 nova_compute[254819]:     <nova:vcpus>1</nova:vcpus>
Dec 06 10:07:59 compute-0 nova_compute[254819]:   </nova:flavor>
Dec 06 10:07:59 compute-0 nova_compute[254819]:   <nova:owner>
Dec 06 10:07:59 compute-0 nova_compute[254819]:     <nova:user uuid="03615580775245e6ae335ee9d785611f">tempest-TestNetworkBasicOps-1971100882-project-member</nova:user>
Dec 06 10:07:59 compute-0 nova_compute[254819]:     <nova:project uuid="92b402c8d3e2476abc98be42a1e6d34e">tempest-TestNetworkBasicOps-1971100882</nova:project>
Dec 06 10:07:59 compute-0 nova_compute[254819]:   </nova:owner>
Dec 06 10:07:59 compute-0 nova_compute[254819]:   <nova:root type="image" uuid="9489b8a5-a798-4e26-87f9-59bb1eb2e6fd"/>
Dec 06 10:07:59 compute-0 nova_compute[254819]:   <nova:ports>
Dec 06 10:07:59 compute-0 nova_compute[254819]:     <nova:port uuid="a7f5880e-0fb8-4f37-8a6c-7f0e8558dcf7">
Dec 06 10:07:59 compute-0 nova_compute[254819]:       <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Dec 06 10:07:59 compute-0 nova_compute[254819]:     </nova:port>
Dec 06 10:07:59 compute-0 nova_compute[254819]:     <nova:port uuid="bf396b58-3b48-44ae-92bd-e71275c9883c">
Dec 06 10:07:59 compute-0 nova_compute[254819]:       <nova:ip type="fixed" address="10.100.0.20" ipVersion="4"/>
Dec 06 10:07:59 compute-0 nova_compute[254819]:     </nova:port>
Dec 06 10:07:59 compute-0 nova_compute[254819]:   </nova:ports>
Dec 06 10:07:59 compute-0 nova_compute[254819]: </nova:instance>
Dec 06 10:07:59 compute-0 nova_compute[254819]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
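[Annotation: the nova:instance element above is stored as libvirt domain metadata under the http://openstack.org/xmlns/libvirt/nova/1.1 namespace. It can be read back with libvirt-python; a sketch, assuming the same qemu:///system connection as in the earlier attach example.]

    import libvirt

    conn = libvirt.open("qemu:///system")
    dom = conn.lookupByUUIDString("2ef62e22-52fc-44f3-9964-8dc9b3c20686")
    # Returns the <nova:instance> XML fragment written by set_metadata above.
    print(dom.metadata(
        libvirt.VIR_DOMAIN_METADATA_ELEMENT,
        "http://openstack.org/xmlns/libvirt/nova/1.1"))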
Dec 06 10:07:59 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:07:59.375 260145 DEBUG oslo.privsep.daemon [-] privsep: reply[ea333388-d071-4553-83b3-420f0a3dddfc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:07:59 compute-0 NetworkManager[48882]: <info>  [1765015679.3810] manager: (tapb700d432-e0): new Veth device (/org/freedesktop/NetworkManager/Devices/39)
Dec 06 10:07:59 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:07:59.380 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[6aa4e9bc-f11d-49ab-a278-cb32e7fb1524]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:07:59 compute-0 nova_compute[254819]: 2025-12-06 10:07:59.398 254824 DEBUG oslo_concurrency.lockutils [None req-e0183c1c-acb3-4d93-b267-e0c2ccf7ac17 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lock "interface-2ef62e22-52fc-44f3-9964-8dc9b3c20686-None" "released" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: held 7.157s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:07:59 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:07:59.419 260145 DEBUG oslo.privsep.daemon [-] privsep: reply[d73289dc-faab-4310-ac51-0ca672a095af]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:07:59 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:07:59.422 260145 DEBUG oslo.privsep.daemon [-] privsep: reply[0d808f3e-3f46-4abb-830d-e8e5258673c9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:07:59 compute-0 NetworkManager[48882]: <info>  [1765015679.4459] device (tapb700d432-e0): carrier: link connected
Dec 06 10:07:59 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:07:59.451 260145 DEBUG oslo.privsep.daemon [-] privsep: reply[72bb8b6f-eeca-4dcc-94eb-e4056c09a1fc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:07:59 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:07:59.471 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[86716937-dd6d-4ea1-aa6d-8617cb4a63f6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb700d432-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:0e:b5:22'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 18], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 404993, 'reachable_time': 29733, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 263977, 'error': None, 'target': 'ovnmeta-b700d432-ed1c-4e29-8f64-6e35196305aa', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:07:59 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:07:59.491 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[0b6539b9-0e17-4967-adff-9f7e07c0434b]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe0e:b522'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 404993, 'tstamp': 404993}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 263978, 'error': None, 'target': 'ovnmeta-b700d432-ed1c-4e29-8f64-6e35196305aa', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:07:59 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:07:59.509 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[fe18e223-f0c2-4758-9c03-6962a8bb6f96]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb700d432-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:0e:b5:22'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 18], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 404993, 'reachable_time': 29733, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 263979, 'error': None, 'target': 'ovnmeta-b700d432-ed1c-4e29-8f64-6e35196305aa', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:07:59 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v804: 337 pgs: 337 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 15 KiB/s wr, 1 op/s
Dec 06 10:07:59 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:07:59.542 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[1550d4ce-d0c7-4168-b23d-57e9f8bb0caf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:07:59 compute-0 nova_compute[254819]: 2025-12-06 10:07:59.572 254824 DEBUG nova.compute.manager [req-22c5c3c8-6372-4d62-ba1d-fed2bca51b11 req-6a37efcc-a404-46d1-9412-5a8977ce1ae2 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 2ef62e22-52fc-44f3-9964-8dc9b3c20686] Received event network-vif-plugged-bf396b58-3b48-44ae-92bd-e71275c9883c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 10:07:59 compute-0 nova_compute[254819]: 2025-12-06 10:07:59.573 254824 DEBUG oslo_concurrency.lockutils [req-22c5c3c8-6372-4d62-ba1d-fed2bca51b11 req-6a37efcc-a404-46d1-9412-5a8977ce1ae2 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Acquiring lock "2ef62e22-52fc-44f3-9964-8dc9b3c20686-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:07:59 compute-0 nova_compute[254819]: 2025-12-06 10:07:59.573 254824 DEBUG oslo_concurrency.lockutils [req-22c5c3c8-6372-4d62-ba1d-fed2bca51b11 req-6a37efcc-a404-46d1-9412-5a8977ce1ae2 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Lock "2ef62e22-52fc-44f3-9964-8dc9b3c20686-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:07:59 compute-0 nova_compute[254819]: 2025-12-06 10:07:59.573 254824 DEBUG oslo_concurrency.lockutils [req-22c5c3c8-6372-4d62-ba1d-fed2bca51b11 req-6a37efcc-a404-46d1-9412-5a8977ce1ae2 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Lock "2ef62e22-52fc-44f3-9964-8dc9b3c20686-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:07:59 compute-0 nova_compute[254819]: 2025-12-06 10:07:59.574 254824 DEBUG nova.compute.manager [req-22c5c3c8-6372-4d62-ba1d-fed2bca51b11 req-6a37efcc-a404-46d1-9412-5a8977ce1ae2 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 2ef62e22-52fc-44f3-9964-8dc9b3c20686] No waiting events found dispatching network-vif-plugged-bf396b58-3b48-44ae-92bd-e71275c9883c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 10:07:59 compute-0 nova_compute[254819]: 2025-12-06 10:07:59.574 254824 WARNING nova.compute.manager [req-22c5c3c8-6372-4d62-ba1d-fed2bca51b11 req-6a37efcc-a404-46d1-9412-5a8977ce1ae2 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 2ef62e22-52fc-44f3-9964-8dc9b3c20686] Received unexpected event network-vif-plugged-bf396b58-3b48-44ae-92bd-e71275c9883c for instance with vm_state active and task_state None.
Dec 06 10:07:59 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:07:59.609 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[81595e81-9c6b-45b5-b8b9-cc9851017791]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:07:59 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:07:59.611 162267 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb700d432-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 10:07:59 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:07:59.611 162267 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 10:07:59 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:07:59.611 162267 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb700d432-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 10:07:59 compute-0 NetworkManager[48882]: <info>  [1765015679.6141] manager: (tapb700d432-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/40)
Dec 06 10:07:59 compute-0 nova_compute[254819]: 2025-12-06 10:07:59.613 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:07:59 compute-0 kernel: tapb700d432-e0: entered promiscuous mode
Dec 06 10:07:59 compute-0 nova_compute[254819]: 2025-12-06 10:07:59.616 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:07:59 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:07:59.617 162267 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapb700d432-e0, col_values=(('external_ids', {'iface-id': '3214dd51-8339-49df-a992-3256b03ff074'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 10:07:59 compute-0 ovn_controller[152417]: 2025-12-06T10:07:59Z|00047|binding|INFO|Releasing lport 3214dd51-8339-49df-a992-3256b03ff074 from this chassis (sb_readonly=0)
Dec 06 10:07:59 compute-0 nova_compute[254819]: 2025-12-06 10:07:59.618 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:07:59 compute-0 nova_compute[254819]: 2025-12-06 10:07:59.632 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:07:59 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:07:59.632 162267 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/b700d432-ed1c-4e29-8f64-6e35196305aa.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/b700d432-ed1c-4e29-8f64-6e35196305aa.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 06 10:07:59 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:07:59.633 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[7e2a8fc7-22b4-406c-a586-a6ac6bc90586]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:07:59 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:07:59.634 162267 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 06 10:07:59 compute-0 ovn_metadata_agent[162262]: global
Dec 06 10:07:59 compute-0 ovn_metadata_agent[162262]:     log         /dev/log local0 debug
Dec 06 10:07:59 compute-0 ovn_metadata_agent[162262]:     log-tag     haproxy-metadata-proxy-b700d432-ed1c-4e29-8f64-6e35196305aa
Dec 06 10:07:59 compute-0 ovn_metadata_agent[162262]:     user        root
Dec 06 10:07:59 compute-0 ovn_metadata_agent[162262]:     group       root
Dec 06 10:07:59 compute-0 ovn_metadata_agent[162262]:     maxconn     1024
Dec 06 10:07:59 compute-0 ovn_metadata_agent[162262]:     pidfile     /var/lib/neutron/external/pids/b700d432-ed1c-4e29-8f64-6e35196305aa.pid.haproxy
Dec 06 10:07:59 compute-0 ovn_metadata_agent[162262]:     daemon
Dec 06 10:07:59 compute-0 ovn_metadata_agent[162262]: 
Dec 06 10:07:59 compute-0 ovn_metadata_agent[162262]: defaults
Dec 06 10:07:59 compute-0 ovn_metadata_agent[162262]:     log global
Dec 06 10:07:59 compute-0 ovn_metadata_agent[162262]:     mode http
Dec 06 10:07:59 compute-0 ovn_metadata_agent[162262]:     option httplog
Dec 06 10:07:59 compute-0 ovn_metadata_agent[162262]:     option dontlognull
Dec 06 10:07:59 compute-0 ovn_metadata_agent[162262]:     option http-server-close
Dec 06 10:07:59 compute-0 ovn_metadata_agent[162262]:     option forwardfor
Dec 06 10:07:59 compute-0 ovn_metadata_agent[162262]:     retries                 3
Dec 06 10:07:59 compute-0 ovn_metadata_agent[162262]:     timeout http-request    30s
Dec 06 10:07:59 compute-0 ovn_metadata_agent[162262]:     timeout connect         30s
Dec 06 10:07:59 compute-0 ovn_metadata_agent[162262]:     timeout client          32s
Dec 06 10:07:59 compute-0 ovn_metadata_agent[162262]:     timeout server          32s
Dec 06 10:07:59 compute-0 ovn_metadata_agent[162262]:     timeout http-keep-alive 30s
Dec 06 10:07:59 compute-0 ovn_metadata_agent[162262]: 
Dec 06 10:07:59 compute-0 ovn_metadata_agent[162262]: 
Dec 06 10:07:59 compute-0 ovn_metadata_agent[162262]: listen listener
Dec 06 10:07:59 compute-0 ovn_metadata_agent[162262]:     bind 169.254.169.254:80
Dec 06 10:07:59 compute-0 ovn_metadata_agent[162262]:     server metadata /var/lib/neutron/metadata_proxy
Dec 06 10:07:59 compute-0 ovn_metadata_agent[162262]:     http-request add-header X-OVN-Network-ID b700d432-ed1c-4e29-8f64-6e35196305aa
Dec 06 10:07:59 compute-0 ovn_metadata_agent[162262]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 06 10:07:59 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:07:59.634 162267 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-b700d432-ed1c-4e29-8f64-6e35196305aa', 'env', 'PROCESS_TAG=haproxy-b700d432-ed1c-4e29-8f64-6e35196305aa', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/b700d432-ed1c-4e29-8f64-6e35196305aa.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
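[Annotation: per the generated config above, haproxy binds 169.254.169.254:80 inside the ovnmeta- namespace, tags each request with the X-OVN-Network-ID header, and forwards it to the metadata agent over the /var/lib/neutron/metadata_proxy unix socket (an haproxy server address starting with '/' is a unix socket path). From a guest on that network the standard request needs no credentials; a sketch, assuming python-requests is available inside the guest.]

    import requests

    # 169.254.169.254 is the link-local metadata address served by the
    # haproxy instance configured above; the proxy adds the network-ID
    # header and the agent resolves the instance from the guest's
    # source address and port binding.
    r = requests.get(
        "http://169.254.169.254/openstack/latest/meta_data.json",
        timeout=5)
    print(r.status_code, r.json().get("uuid"))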
Dec 06 10:08:00 compute-0 podman[264011]: 2025-12-06 10:08:00.016531059 +0000 UTC m=+0.049638359 container create 6ca7b65c6b94bde9cc2dd559902f688bc8af04ffa8b5827278d645f0ad840d33 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=neutron-haproxy-ovnmeta-b700d432-ed1c-4e29-8f64-6e35196305aa, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 10:08:00 compute-0 systemd[1]: Started libpod-conmon-6ca7b65c6b94bde9cc2dd559902f688bc8af04ffa8b5827278d645f0ad840d33.scope.
Dec 06 10:08:00 compute-0 podman[264011]: 2025-12-06 10:07:59.988614576 +0000 UTC m=+0.021721876 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3
Dec 06 10:08:00 compute-0 systemd[1]: Started libcrun container.
Dec 06 10:08:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4ab7a22314b67a8e06bacab4c25c79547be2603d131e46433f9dadedd7c6018f/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 06 10:08:00 compute-0 podman[264011]: 2025-12-06 10:08:00.124719087 +0000 UTC m=+0.157826407 container init 6ca7b65c6b94bde9cc2dd559902f688bc8af04ffa8b5827278d645f0ad840d33 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=neutron-haproxy-ovnmeta-b700d432-ed1c-4e29-8f64-6e35196305aa, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec 06 10:08:00 compute-0 podman[264011]: 2025-12-06 10:08:00.132803795 +0000 UTC m=+0.165911085 container start 6ca7b65c6b94bde9cc2dd559902f688bc8af04ffa8b5827278d645f0ad840d33 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=neutron-haproxy-ovnmeta-b700d432-ed1c-4e29-8f64-6e35196305aa, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 10:08:00 compute-0 neutron-haproxy-ovnmeta-b700d432-ed1c-4e29-8f64-6e35196305aa[264025]: [NOTICE]   (264029) : New worker (264031) forked
Dec 06 10:08:00 compute-0 neutron-haproxy-ovnmeta-b700d432-ed1c-4e29-8f64-6e35196305aa[264025]: [NOTICE]   (264029) : Loading success.
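[Annotation: the rootwrap command above ends up running haproxy as the podman container created in the preceding records. Once "Loading success." is logged, the container can be checked from the host; a sketch, assuming podman CLI access, with the name taken from the container-create record above.]

    import subprocess

    # Container names follow the neutron-haproxy-ovnmeta-<network-uuid>
    # pattern used by the metadata agent.
    name = "neutron-haproxy-ovnmeta-b700d432-ed1c-4e29-8f64-6e35196305aa"
    out = subprocess.run(
        ["podman", "ps", "--filter", f"name={name}",
         "--format", "{{.Names}} {{.Status}}"],
        capture_output=True, text=True, check=True)
    print(out.stdout.strip())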
Dec 06 10:08:00 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:08:00 : epoch 69340037 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f327c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:08:00 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:08:00 : epoch 69340037 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f32a0002730 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:08:00 compute-0 nova_compute[254819]: 2025-12-06 10:08:00.508 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:08:00 compute-0 nova_compute[254819]: 2025-12-06 10:08:00.615 254824 DEBUG nova.network.neutron [req-52f0b927-333c-4182-a821-c425b0174b97 req-9ca6fa0b-454f-4f84-89ab-e31d0dab0d0f d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 2ef62e22-52fc-44f3-9964-8dc9b3c20686] Updated VIF entry in instance network info cache for port bf396b58-3b48-44ae-92bd-e71275c9883c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 06 10:08:00 compute-0 nova_compute[254819]: 2025-12-06 10:08:00.616 254824 DEBUG nova.network.neutron [req-52f0b927-333c-4182-a821-c425b0174b97 req-9ca6fa0b-454f-4f84-89ab-e31d0dab0d0f d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 2ef62e22-52fc-44f3-9964-8dc9b3c20686] Updating instance_info_cache with network_info: [{"id": "a7f5880e-0fb8-4f37-8a6c-7f0e8558dcf7", "address": "fa:16:3e:6c:29:20", "network": {"id": "4d9eb8be-73ac-4cfc-8821-fb41b5868957", "bridge": "br-int", "label": "tempest-network-smoke--165851366", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.201", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": null, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa7f5880e-0f", "ovs_interfaceid": "a7f5880e-0fb8-4f37-8a6c-7f0e8558dcf7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "bf396b58-3b48-44ae-92bd-e71275c9883c", "address": "fa:16:3e:9c:56:e3", "network": {"id": "b700d432-ed1c-4e29-8f64-6e35196305aa", "bridge": "br-int", "label": "tempest-network-smoke--1192945462", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.20", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf396b58-3b", "ovs_interfaceid": "bf396b58-3b48-44ae-92bd-e71275c9883c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 10:08:00 compute-0 ceph-mon[74327]: pgmap v804: 337 pgs: 337 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 15 KiB/s wr, 1 op/s
Dec 06 10:08:00 compute-0 nova_compute[254819]: 2025-12-06 10:08:00.631 254824 DEBUG oslo_concurrency.lockutils [req-52f0b927-333c-4182-a821-c425b0174b97 req-9ca6fa0b-454f-4f84-89ab-e31d0dab0d0f d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Releasing lock "refresh_cache-2ef62e22-52fc-44f3-9964-8dc9b3c20686" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 10:08:00 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:08:00] "GET /metrics HTTP/1.1" 200 48460 "" "Prometheus/2.51.0"
Dec 06 10:08:00 compute-0 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:08:00] "GET /metrics HTTP/1.1" 200 48460 "" "Prometheus/2.51.0"
Dec 06 10:08:00 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:08:00 : epoch 69340037 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f32a0002730 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:08:01 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:08:01 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:08:01 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:08:01.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:08:01 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:08:01 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:08:01 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:08:01.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:08:01 compute-0 nova_compute[254819]: 2025-12-06 10:08:01.388 254824 DEBUG oslo_concurrency.lockutils [None req-dc5f9ffd-4751-4397-a315-7e306ced7630 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Acquiring lock "interface-2ef62e22-52fc-44f3-9964-8dc9b3c20686-bf396b58-3b48-44ae-92bd-e71275c9883c" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:08:01 compute-0 nova_compute[254819]: 2025-12-06 10:08:01.389 254824 DEBUG oslo_concurrency.lockutils [None req-dc5f9ffd-4751-4397-a315-7e306ced7630 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lock "interface-2ef62e22-52fc-44f3-9964-8dc9b3c20686-bf396b58-3b48-44ae-92bd-e71275c9883c" acquired by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:08:01 compute-0 nova_compute[254819]: 2025-12-06 10:08:01.405 254824 DEBUG nova.objects.instance [None req-dc5f9ffd-4751-4397-a315-7e306ced7630 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lazy-loading 'flavor' on Instance uuid 2ef62e22-52fc-44f3-9964-8dc9b3c20686 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 10:08:01 compute-0 nova_compute[254819]: 2025-12-06 10:08:01.424 254824 DEBUG nova.virt.libvirt.vif [None req-dc5f9ffd-4751-4397-a315-7e306ced7630 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T10:07:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1205802956',display_name='tempest-TestNetworkBasicOps-server-1205802956',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1205802956',id=3,image_ref='9489b8a5-a798-4e26-87f9-59bb1eb2e6fd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJ5T1qcHH05a9NmUaQjnoDRANzOfCWA0bQySUh/2laJiduU/bwXdkcdraO/GcO81J8j8CnPS5RyrjJyMRbGp/po0cthjI8Tgw893oNF7dd79URxvc2r73z8/7tKvZVwU9A==',key_name='tempest-TestNetworkBasicOps-2032054379',keypairs=<?>,launch_index=0,launched_at=2025-12-06T10:07:27Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='92b402c8d3e2476abc98be42a1e6d34e',ramdisk_id='',reservation_id='r-hrg57eo7',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='9489b8a5-a798-4e26-87f9-59bb1eb2e6fd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1971100882',owner_user_name='tempest-TestNetworkBasicOps-1971100882-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T10:07:27Z,user_data=None,user_id='03615580775245e6ae335ee9d785611f',uuid=2ef62e22-52fc-44f3-9964-8dc9b3c20686,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "bf396b58-3b48-44ae-92bd-e71275c9883c", "address": "fa:16:3e:9c:56:e3", "network": {"id": "b700d432-ed1c-4e29-8f64-6e35196305aa", "bridge": "br-int", "label": "tempest-network-smoke--1192945462", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.20", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf396b58-3b", "ovs_interfaceid": "bf396b58-3b48-44ae-92bd-e71275c9883c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 06 10:08:01 compute-0 nova_compute[254819]: 2025-12-06 10:08:01.425 254824 DEBUG nova.network.os_vif_util [None req-dc5f9ffd-4751-4397-a315-7e306ced7630 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Converting VIF {"id": "bf396b58-3b48-44ae-92bd-e71275c9883c", "address": "fa:16:3e:9c:56:e3", "network": {"id": "b700d432-ed1c-4e29-8f64-6e35196305aa", "bridge": "br-int", "label": "tempest-network-smoke--1192945462", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.20", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf396b58-3b", "ovs_interfaceid": "bf396b58-3b48-44ae-92bd-e71275c9883c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 10:08:01 compute-0 nova_compute[254819]: 2025-12-06 10:08:01.425 254824 DEBUG nova.network.os_vif_util [None req-dc5f9ffd-4751-4397-a315-7e306ced7630 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9c:56:e3,bridge_name='br-int',has_traffic_filtering=True,id=bf396b58-3b48-44ae-92bd-e71275c9883c,network=Network(b700d432-ed1c-4e29-8f64-6e35196305aa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbf396b58-3b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 10:08:01 compute-0 nova_compute[254819]: 2025-12-06 10:08:01.429 254824 DEBUG nova.virt.libvirt.guest [None req-dc5f9ffd-4751-4397-a315-7e306ced7630 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:9c:56:e3"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapbf396b58-3b"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Dec 06 10:08:01 compute-0 nova_compute[254819]: 2025-12-06 10:08:01.431 254824 DEBUG nova.virt.libvirt.guest [None req-dc5f9ffd-4751-4397-a315-7e306ced7630 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:9c:56:e3"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapbf396b58-3b"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Dec 06 10:08:01 compute-0 nova_compute[254819]: 2025-12-06 10:08:01.434 254824 DEBUG nova.virt.libvirt.driver [None req-dc5f9ffd-4751-4397-a315-7e306ced7630 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Attempting to detach device tapbf396b58-3b from instance 2ef62e22-52fc-44f3-9964-8dc9b3c20686 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Dec 06 10:08:01 compute-0 nova_compute[254819]: 2025-12-06 10:08:01.434 254824 DEBUG nova.virt.libvirt.guest [None req-dc5f9ffd-4751-4397-a315-7e306ced7630 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] detach device xml: <interface type="ethernet">
Dec 06 10:08:01 compute-0 nova_compute[254819]:   <mac address="fa:16:3e:9c:56:e3"/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:   <model type="virtio"/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:   <driver name="vhost" rx_queue_size="512"/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:   <mtu size="1442"/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:   <target dev="tapbf396b58-3b"/>
Dec 06 10:08:01 compute-0 nova_compute[254819]: </interface>
Dec 06 10:08:01 compute-0 nova_compute[254819]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Dec 06 10:08:01 compute-0 nova_compute[254819]: 2025-12-06 10:08:01.440 254824 DEBUG nova.virt.libvirt.guest [None req-dc5f9ffd-4751-4397-a315-7e306ced7630 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:9c:56:e3"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapbf396b58-3b"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Dec 06 10:08:01 compute-0 nova_compute[254819]: 2025-12-06 10:08:01.443 254824 DEBUG nova.virt.libvirt.guest [None req-dc5f9ffd-4751-4397-a315-7e306ced7630 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:9c:56:e3"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapbf396b58-3b"/></interface>not found in domain: <domain type='kvm' id='2'>
Dec 06 10:08:01 compute-0 nova_compute[254819]:   <name>instance-00000003</name>
Dec 06 10:08:01 compute-0 nova_compute[254819]:   <uuid>2ef62e22-52fc-44f3-9964-8dc9b3c20686</uuid>
Dec 06 10:08:01 compute-0 nova_compute[254819]:   <metadata>
Dec 06 10:08:01 compute-0 nova_compute[254819]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 06 10:08:01 compute-0 nova_compute[254819]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:   <nova:name>tempest-TestNetworkBasicOps-server-1205802956</nova:name>
Dec 06 10:08:01 compute-0 nova_compute[254819]:   <nova:creationTime>2025-12-06 10:07:59</nova:creationTime>
Dec 06 10:08:01 compute-0 nova_compute[254819]:   <nova:flavor name="m1.nano">
Dec 06 10:08:01 compute-0 nova_compute[254819]:     <nova:memory>128</nova:memory>
Dec 06 10:08:01 compute-0 nova_compute[254819]:     <nova:disk>1</nova:disk>
Dec 06 10:08:01 compute-0 nova_compute[254819]:     <nova:swap>0</nova:swap>
Dec 06 10:08:01 compute-0 nova_compute[254819]:     <nova:ephemeral>0</nova:ephemeral>
Dec 06 10:08:01 compute-0 nova_compute[254819]:     <nova:vcpus>1</nova:vcpus>
Dec 06 10:08:01 compute-0 nova_compute[254819]:   </nova:flavor>
Dec 06 10:08:01 compute-0 nova_compute[254819]:   <nova:owner>
Dec 06 10:08:01 compute-0 nova_compute[254819]:     <nova:user uuid="03615580775245e6ae335ee9d785611f">tempest-TestNetworkBasicOps-1971100882-project-member</nova:user>
Dec 06 10:08:01 compute-0 nova_compute[254819]:     <nova:project uuid="92b402c8d3e2476abc98be42a1e6d34e">tempest-TestNetworkBasicOps-1971100882</nova:project>
Dec 06 10:08:01 compute-0 nova_compute[254819]:   </nova:owner>
Dec 06 10:08:01 compute-0 nova_compute[254819]:   <nova:root type="image" uuid="9489b8a5-a798-4e26-87f9-59bb1eb2e6fd"/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:   <nova:ports>
Dec 06 10:08:01 compute-0 nova_compute[254819]:     <nova:port uuid="a7f5880e-0fb8-4f37-8a6c-7f0e8558dcf7">
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:     </nova:port>
Dec 06 10:08:01 compute-0 nova_compute[254819]:     <nova:port uuid="bf396b58-3b48-44ae-92bd-e71275c9883c">
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <nova:ip type="fixed" address="10.100.0.20" ipVersion="4"/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:     </nova:port>
Dec 06 10:08:01 compute-0 nova_compute[254819]:   </nova:ports>
Dec 06 10:08:01 compute-0 nova_compute[254819]: </nova:instance>
Dec 06 10:08:01 compute-0 nova_compute[254819]:   </metadata>
Dec 06 10:08:01 compute-0 nova_compute[254819]:   <memory unit='KiB'>131072</memory>
Dec 06 10:08:01 compute-0 nova_compute[254819]:   <currentMemory unit='KiB'>131072</currentMemory>
Dec 06 10:08:01 compute-0 nova_compute[254819]:   <vcpu placement='static'>1</vcpu>
Dec 06 10:08:01 compute-0 nova_compute[254819]:   <resource>
Dec 06 10:08:01 compute-0 nova_compute[254819]:     <partition>/machine</partition>
Dec 06 10:08:01 compute-0 nova_compute[254819]:   </resource>
Dec 06 10:08:01 compute-0 nova_compute[254819]:   <sysinfo type='smbios'>
Dec 06 10:08:01 compute-0 nova_compute[254819]:     <system>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <entry name='manufacturer'>RDO</entry>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <entry name='product'>OpenStack Compute</entry>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <entry name='serial'>2ef62e22-52fc-44f3-9964-8dc9b3c20686</entry>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <entry name='uuid'>2ef62e22-52fc-44f3-9964-8dc9b3c20686</entry>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <entry name='family'>Virtual Machine</entry>
Dec 06 10:08:01 compute-0 nova_compute[254819]:     </system>
Dec 06 10:08:01 compute-0 nova_compute[254819]:   </sysinfo>
Dec 06 10:08:01 compute-0 nova_compute[254819]:   <os>
Dec 06 10:08:01 compute-0 nova_compute[254819]:     <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Dec 06 10:08:01 compute-0 nova_compute[254819]:     <boot dev='hd'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:     <smbios mode='sysinfo'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:   </os>
Dec 06 10:08:01 compute-0 nova_compute[254819]:   <features>
Dec 06 10:08:01 compute-0 nova_compute[254819]:     <acpi/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:     <apic/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:     <vmcoreinfo state='on'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:   </features>
Dec 06 10:08:01 compute-0 nova_compute[254819]:   <cpu mode='custom' match='exact' check='full'>
Dec 06 10:08:01 compute-0 nova_compute[254819]:     <model fallback='forbid'>EPYC-Rome</model>
Dec 06 10:08:01 compute-0 nova_compute[254819]:     <vendor>AMD</vendor>
Dec 06 10:08:01 compute-0 nova_compute[254819]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:     <feature policy='require' name='x2apic'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:     <feature policy='require' name='tsc-deadline'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:     <feature policy='require' name='hypervisor'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:     <feature policy='require' name='tsc_adjust'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:     <feature policy='require' name='spec-ctrl'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:     <feature policy='require' name='stibp'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:     <feature policy='require' name='ssbd'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:     <feature policy='require' name='cmp_legacy'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:     <feature policy='require' name='overflow-recov'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:     <feature policy='require' name='succor'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:     <feature policy='require' name='ibrs'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:     <feature policy='require' name='amd-ssbd'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:     <feature policy='require' name='virt-ssbd'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:     <feature policy='disable' name='lbrv'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:     <feature policy='disable' name='tsc-scale'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:     <feature policy='disable' name='vmcb-clean'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:     <feature policy='disable' name='flushbyasid'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:     <feature policy='disable' name='pause-filter'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:     <feature policy='disable' name='pfthreshold'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:     <feature policy='disable' name='svme-addr-chk'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:     <feature policy='require' name='lfence-always-serializing'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:     <feature policy='disable' name='xsaves'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:     <feature policy='disable' name='svm'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:     <feature policy='require' name='topoext'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:     <feature policy='disable' name='npt'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:     <feature policy='disable' name='nrip-save'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:   </cpu>
Dec 06 10:08:01 compute-0 nova_compute[254819]:   <clock offset='utc'>
Dec 06 10:08:01 compute-0 nova_compute[254819]:     <timer name='pit' tickpolicy='delay'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:     <timer name='rtc' tickpolicy='catchup'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:     <timer name='hpet' present='no'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:   </clock>
Dec 06 10:08:01 compute-0 nova_compute[254819]:   <on_poweroff>destroy</on_poweroff>
Dec 06 10:08:01 compute-0 nova_compute[254819]:   <on_reboot>restart</on_reboot>
Dec 06 10:08:01 compute-0 nova_compute[254819]:   <on_crash>destroy</on_crash>
Dec 06 10:08:01 compute-0 nova_compute[254819]:   <devices>
Dec 06 10:08:01 compute-0 nova_compute[254819]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Dec 06 10:08:01 compute-0 nova_compute[254819]:     <disk type='network' device='disk'>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <driver name='qemu' type='raw' cache='none'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <auth username='openstack'>
Dec 06 10:08:01 compute-0 nova_compute[254819]:         <secret type='ceph' uuid='5ecd3f74-dade-5fc4-92ce-8950ae424258'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       </auth>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <source protocol='rbd' name='vms/2ef62e22-52fc-44f3-9964-8dc9b3c20686_disk' index='2'>
Dec 06 10:08:01 compute-0 nova_compute[254819]:         <host name='192.168.122.100' port='6789'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:         <host name='192.168.122.102' port='6789'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:         <host name='192.168.122.101' port='6789'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       </source>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <target dev='vda' bus='virtio'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <alias name='virtio-disk0'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:     </disk>
Dec 06 10:08:01 compute-0 nova_compute[254819]:     <disk type='network' device='cdrom'>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <driver name='qemu' type='raw' cache='none'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <auth username='openstack'>
Dec 06 10:08:01 compute-0 nova_compute[254819]:         <secret type='ceph' uuid='5ecd3f74-dade-5fc4-92ce-8950ae424258'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       </auth>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <source protocol='rbd' name='vms/2ef62e22-52fc-44f3-9964-8dc9b3c20686_disk.config' index='1'>
Dec 06 10:08:01 compute-0 nova_compute[254819]:         <host name='192.168.122.100' port='6789'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:         <host name='192.168.122.102' port='6789'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:         <host name='192.168.122.101' port='6789'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       </source>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <target dev='sda' bus='sata'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <readonly/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <alias name='sata0-0-0'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:     </disk>
Dec 06 10:08:01 compute-0 nova_compute[254819]:     <controller type='pci' index='0' model='pcie-root'>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <alias name='pcie.0'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:     </controller>
Dec 06 10:08:01 compute-0 nova_compute[254819]:     <controller type='pci' index='1' model='pcie-root-port'>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <model name='pcie-root-port'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <target chassis='1' port='0x10'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <alias name='pci.1'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:     </controller>
Dec 06 10:08:01 compute-0 nova_compute[254819]:     <controller type='pci' index='2' model='pcie-root-port'>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <model name='pcie-root-port'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <target chassis='2' port='0x11'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <alias name='pci.2'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:     </controller>
Dec 06 10:08:01 compute-0 nova_compute[254819]:     <controller type='pci' index='3' model='pcie-root-port'>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <model name='pcie-root-port'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <target chassis='3' port='0x12'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <alias name='pci.3'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:     </controller>
Dec 06 10:08:01 compute-0 nova_compute[254819]:     <controller type='pci' index='4' model='pcie-root-port'>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <model name='pcie-root-port'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <target chassis='4' port='0x13'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <alias name='pci.4'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:     </controller>
Dec 06 10:08:01 compute-0 nova_compute[254819]:     <controller type='pci' index='5' model='pcie-root-port'>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <model name='pcie-root-port'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <target chassis='5' port='0x14'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <alias name='pci.5'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:     </controller>
Dec 06 10:08:01 compute-0 nova_compute[254819]:     <controller type='pci' index='6' model='pcie-root-port'>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <model name='pcie-root-port'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <target chassis='6' port='0x15'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <alias name='pci.6'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:     </controller>
Dec 06 10:08:01 compute-0 nova_compute[254819]:     <controller type='pci' index='7' model='pcie-root-port'>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <model name='pcie-root-port'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <target chassis='7' port='0x16'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <alias name='pci.7'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:     </controller>
Dec 06 10:08:01 compute-0 nova_compute[254819]:     <controller type='pci' index='8' model='pcie-root-port'>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <model name='pcie-root-port'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <target chassis='8' port='0x17'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <alias name='pci.8'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:     </controller>
Dec 06 10:08:01 compute-0 nova_compute[254819]:     <controller type='pci' index='9' model='pcie-root-port'>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <model name='pcie-root-port'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <target chassis='9' port='0x18'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <alias name='pci.9'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:     </controller>
Dec 06 10:08:01 compute-0 nova_compute[254819]:     <controller type='pci' index='10' model='pcie-root-port'>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <model name='pcie-root-port'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <target chassis='10' port='0x19'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <alias name='pci.10'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:     </controller>
Dec 06 10:08:01 compute-0 nova_compute[254819]:     <controller type='pci' index='11' model='pcie-root-port'>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <model name='pcie-root-port'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <target chassis='11' port='0x1a'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <alias name='pci.11'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:     </controller>
Dec 06 10:08:01 compute-0 nova_compute[254819]:     <controller type='pci' index='12' model='pcie-root-port'>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <model name='pcie-root-port'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <target chassis='12' port='0x1b'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <alias name='pci.12'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:     </controller>
Dec 06 10:08:01 compute-0 nova_compute[254819]:     <controller type='pci' index='13' model='pcie-root-port'>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <model name='pcie-root-port'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <target chassis='13' port='0x1c'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <alias name='pci.13'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:     </controller>
Dec 06 10:08:01 compute-0 nova_compute[254819]:     <controller type='pci' index='14' model='pcie-root-port'>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <model name='pcie-root-port'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <target chassis='14' port='0x1d'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <alias name='pci.14'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:     </controller>
Dec 06 10:08:01 compute-0 nova_compute[254819]:     <controller type='pci' index='15' model='pcie-root-port'>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <model name='pcie-root-port'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <target chassis='15' port='0x1e'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <alias name='pci.15'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:     </controller>
Dec 06 10:08:01 compute-0 nova_compute[254819]:     <controller type='pci' index='16' model='pcie-root-port'>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <model name='pcie-root-port'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <target chassis='16' port='0x1f'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <alias name='pci.16'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:     </controller>
Dec 06 10:08:01 compute-0 nova_compute[254819]:     <controller type='pci' index='17' model='pcie-root-port'>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <model name='pcie-root-port'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <target chassis='17' port='0x20'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <alias name='pci.17'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:     </controller>
Dec 06 10:08:01 compute-0 nova_compute[254819]:     <controller type='pci' index='18' model='pcie-root-port'>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <model name='pcie-root-port'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <target chassis='18' port='0x21'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <alias name='pci.18'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:     </controller>
Dec 06 10:08:01 compute-0 nova_compute[254819]:     <controller type='pci' index='19' model='pcie-root-port'>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <model name='pcie-root-port'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <target chassis='19' port='0x22'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <alias name='pci.19'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:     </controller>
Dec 06 10:08:01 compute-0 nova_compute[254819]:     <controller type='pci' index='20' model='pcie-root-port'>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <model name='pcie-root-port'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <target chassis='20' port='0x23'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <alias name='pci.20'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:     </controller>
Dec 06 10:08:01 compute-0 nova_compute[254819]:     <controller type='pci' index='21' model='pcie-root-port'>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <model name='pcie-root-port'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <target chassis='21' port='0x24'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <alias name='pci.21'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:     </controller>
Dec 06 10:08:01 compute-0 nova_compute[254819]:     <controller type='pci' index='22' model='pcie-root-port'>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <model name='pcie-root-port'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <target chassis='22' port='0x25'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <alias name='pci.22'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:     </controller>
Dec 06 10:08:01 compute-0 nova_compute[254819]:     <controller type='pci' index='23' model='pcie-root-port'>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <model name='pcie-root-port'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <target chassis='23' port='0x26'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <alias name='pci.23'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:     </controller>
Dec 06 10:08:01 compute-0 nova_compute[254819]:     <controller type='pci' index='24' model='pcie-root-port'>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <model name='pcie-root-port'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <target chassis='24' port='0x27'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <alias name='pci.24'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:     </controller>
Dec 06 10:08:01 compute-0 nova_compute[254819]:     <controller type='pci' index='25' model='pcie-root-port'>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <model name='pcie-root-port'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <target chassis='25' port='0x28'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <alias name='pci.25'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:     </controller>
Dec 06 10:08:01 compute-0 nova_compute[254819]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <model name='pcie-pci-bridge'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <alias name='pci.26'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:     </controller>
Dec 06 10:08:01 compute-0 nova_compute[254819]:     <controller type='usb' index='0' model='piix3-uhci'>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <alias name='usb'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:     </controller>
Dec 06 10:08:01 compute-0 nova_compute[254819]:     <controller type='sata' index='0'>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <alias name='ide'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:     </controller>
Dec 06 10:08:01 compute-0 nova_compute[254819]:     <interface type='ethernet'>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <mac address='fa:16:3e:6c:29:20'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <target dev='tapa7f5880e-0f'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <model type='virtio'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <driver name='vhost' rx_queue_size='512'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <mtu size='1442'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <alias name='net0'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:     </interface>
Dec 06 10:08:01 compute-0 nova_compute[254819]:     <interface type='ethernet'>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <mac address='fa:16:3e:9c:56:e3'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <target dev='tapbf396b58-3b'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <model type='virtio'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <driver name='vhost' rx_queue_size='512'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <mtu size='1442'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <alias name='net1'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:     </interface>
Dec 06 10:08:01 compute-0 nova_compute[254819]:     <serial type='pty'>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <source path='/dev/pts/0'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <log file='/var/lib/nova/instances/2ef62e22-52fc-44f3-9964-8dc9b3c20686/console.log' append='off'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <target type='isa-serial' port='0'>
Dec 06 10:08:01 compute-0 nova_compute[254819]:         <model name='isa-serial'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       </target>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <alias name='serial0'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:     </serial>
Dec 06 10:08:01 compute-0 nova_compute[254819]:     <console type='pty' tty='/dev/pts/0'>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <source path='/dev/pts/0'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <log file='/var/lib/nova/instances/2ef62e22-52fc-44f3-9964-8dc9b3c20686/console.log' append='off'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <target type='serial' port='0'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <alias name='serial0'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:     </console>
Dec 06 10:08:01 compute-0 nova_compute[254819]:     <input type='tablet' bus='usb'>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <alias name='input0'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <address type='usb' bus='0' port='1'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:     </input>
Dec 06 10:08:01 compute-0 nova_compute[254819]:     <input type='mouse' bus='ps2'>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <alias name='input1'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:     </input>
Dec 06 10:08:01 compute-0 nova_compute[254819]:     <input type='keyboard' bus='ps2'>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <alias name='input2'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:     </input>
Dec 06 10:08:01 compute-0 nova_compute[254819]:     <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <listen type='address' address='::0'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:     </graphics>
Dec 06 10:08:01 compute-0 nova_compute[254819]:     <audio id='1' type='none'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:     <video>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <model type='virtio' heads='1' primary='yes'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <alias name='video0'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:     </video>
Dec 06 10:08:01 compute-0 nova_compute[254819]:     <watchdog model='itco' action='reset'>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <alias name='watchdog0'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:     </watchdog>
Dec 06 10:08:01 compute-0 nova_compute[254819]:     <memballoon model='virtio'>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <stats period='10'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <alias name='balloon0'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:     </memballoon>
Dec 06 10:08:01 compute-0 nova_compute[254819]:     <rng model='virtio'>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <backend model='random'>/dev/urandom</backend>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <alias name='rng0'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:     </rng>
Dec 06 10:08:01 compute-0 nova_compute[254819]:   </devices>
Dec 06 10:08:01 compute-0 nova_compute[254819]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Dec 06 10:08:01 compute-0 nova_compute[254819]:     <label>system_u:system_r:svirt_t:s0:c237,c686</label>
Dec 06 10:08:01 compute-0 nova_compute[254819]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c237,c686</imagelabel>
Dec 06 10:08:01 compute-0 nova_compute[254819]:   </seclabel>
Dec 06 10:08:01 compute-0 nova_compute[254819]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Dec 06 10:08:01 compute-0 nova_compute[254819]:     <label>+107:+107</label>
Dec 06 10:08:01 compute-0 nova_compute[254819]:     <imagelabel>+107:+107</imagelabel>
Dec 06 10:08:01 compute-0 nova_compute[254819]:   </seclabel>
Dec 06 10:08:01 compute-0 nova_compute[254819]: </domain>
Dec 06 10:08:01 compute-0 nova_compute[254819]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Dec 06 10:08:01 compute-0 nova_compute[254819]: 2025-12-06 10:08:01.443 254824 INFO nova.virt.libvirt.driver [None req-dc5f9ffd-4751-4397-a315-7e306ced7630 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Successfully detached device tapbf396b58-3b from instance 2ef62e22-52fc-44f3-9964-8dc9b3c20686 from the persistent domain config.
Dec 06 10:08:01 compute-0 nova_compute[254819]: 2025-12-06 10:08:01.444 254824 DEBUG nova.virt.libvirt.driver [None req-dc5f9ffd-4751-4397-a315-7e306ced7630 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] (1/8): Attempting to detach device tapbf396b58-3b with device alias net1 from instance 2ef62e22-52fc-44f3-9964-8dc9b3c20686 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Dec 06 10:08:01 compute-0 nova_compute[254819]: 2025-12-06 10:08:01.444 254824 DEBUG nova.virt.libvirt.guest [None req-dc5f9ffd-4751-4397-a315-7e306ced7630 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] detach device xml: <interface type="ethernet">
Dec 06 10:08:01 compute-0 nova_compute[254819]:   <mac address="fa:16:3e:9c:56:e3"/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:   <model type="virtio"/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:   <driver name="vhost" rx_queue_size="512"/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:   <mtu size="1442"/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:   <target dev="tapbf396b58-3b"/>
Dec 06 10:08:01 compute-0 nova_compute[254819]: </interface>
Dec 06 10:08:01 compute-0 nova_compute[254819]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Dec 06 10:08:01 compute-0 podman[264041]: 2025-12-06 10:08:01.474310094 +0000 UTC m=+0.089155916 container health_status a9cf33c1bf7891b3fdd9db4717ad5f3587fe9e5acf9171920c6d6334b70a502a (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=multipathd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec 06 10:08:01 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v805: 337 pgs: 337 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 3.0 KiB/s wr, 0 op/s
Dec 06 10:08:01 compute-0 kernel: tapbf396b58-3b (unregistering): left promiscuous mode
Dec 06 10:08:01 compute-0 NetworkManager[48882]: <info>  [1765015681.5483] device (tapbf396b58-3b): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 06 10:08:01 compute-0 ovn_controller[152417]: 2025-12-06T10:08:01Z|00048|binding|INFO|Releasing lport bf396b58-3b48-44ae-92bd-e71275c9883c from this chassis (sb_readonly=0)
Dec 06 10:08:01 compute-0 ovn_controller[152417]: 2025-12-06T10:08:01Z|00049|binding|INFO|Setting lport bf396b58-3b48-44ae-92bd-e71275c9883c down in Southbound
Dec 06 10:08:01 compute-0 nova_compute[254819]: 2025-12-06 10:08:01.551 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:08:01 compute-0 ovn_controller[152417]: 2025-12-06T10:08:01Z|00050|binding|INFO|Removing iface tapbf396b58-3b ovn-installed in OVS
Dec 06 10:08:01 compute-0 nova_compute[254819]: 2025-12-06 10:08:01.553 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:08:01 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:08:01.561 162267 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9c:56:e3 10.100.0.20'], port_security=['fa:16:3e:9c:56:e3 10.100.0.20'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.20/28', 'neutron:device_id': '2ef62e22-52fc-44f3-9964-8dc9b3c20686', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b700d432-ed1c-4e29-8f64-6e35196305aa', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '92b402c8d3e2476abc98be42a1e6d34e', 'neutron:revision_number': '4', 'neutron:security_group_ids': '1e7cc18e-31f3-4bdb-821d-1683a210c530', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8e1a9f4d-accf-4c87-b819-872eff5f1a0b, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f70c28558b0>], logical_port=bf396b58-3b48-44ae-92bd-e71275c9883c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f70c28558b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 10:08:01 compute-0 nova_compute[254819]: 2025-12-06 10:08:01.561 254824 DEBUG nova.virt.libvirt.driver [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] Received event <DeviceRemovedEvent: 1765015681.5615783, 2ef62e22-52fc-44f3-9964-8dc9b3c20686 => net1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Dec 06 10:08:01 compute-0 nova_compute[254819]: 2025-12-06 10:08:01.564 254824 DEBUG nova.virt.libvirt.driver [None req-dc5f9ffd-4751-4397-a315-7e306ced7630 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Start waiting for the detach event from libvirt for device tapbf396b58-3b with device alias net1 for instance 2ef62e22-52fc-44f3-9964-8dc9b3c20686 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Dec 06 10:08:01 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:08:01.564 162267 INFO neutron.agent.ovn.metadata.agent [-] Port bf396b58-3b48-44ae-92bd-e71275c9883c in datapath b700d432-ed1c-4e29-8f64-6e35196305aa unbound from our chassis
Dec 06 10:08:01 compute-0 nova_compute[254819]: 2025-12-06 10:08:01.564 254824 DEBUG nova.virt.libvirt.guest [None req-dc5f9ffd-4751-4397-a315-7e306ced7630 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:9c:56:e3"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapbf396b58-3b"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Dec 06 10:08:01 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:08:01.566 162267 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network b700d432-ed1c-4e29-8f64-6e35196305aa, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 06 10:08:01 compute-0 nova_compute[254819]: 2025-12-06 10:08:01.567 254824 DEBUG nova.virt.libvirt.guest [None req-dc5f9ffd-4751-4397-a315-7e306ced7630 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:9c:56:e3"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapbf396b58-3b"/></interface>not found in domain: <domain type='kvm' id='2'>
Dec 06 10:08:01 compute-0 nova_compute[254819]:   <name>instance-00000003</name>
Dec 06 10:08:01 compute-0 nova_compute[254819]:   <uuid>2ef62e22-52fc-44f3-9964-8dc9b3c20686</uuid>
Dec 06 10:08:01 compute-0 nova_compute[254819]:   <metadata>
Dec 06 10:08:01 compute-0 nova_compute[254819]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 06 10:08:01 compute-0 nova_compute[254819]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:   <nova:name>tempest-TestNetworkBasicOps-server-1205802956</nova:name>
Dec 06 10:08:01 compute-0 nova_compute[254819]:   <nova:creationTime>2025-12-06 10:07:59</nova:creationTime>
Dec 06 10:08:01 compute-0 nova_compute[254819]:   <nova:flavor name="m1.nano">
Dec 06 10:08:01 compute-0 nova_compute[254819]:     <nova:memory>128</nova:memory>
Dec 06 10:08:01 compute-0 nova_compute[254819]:     <nova:disk>1</nova:disk>
Dec 06 10:08:01 compute-0 nova_compute[254819]:     <nova:swap>0</nova:swap>
Dec 06 10:08:01 compute-0 nova_compute[254819]:     <nova:ephemeral>0</nova:ephemeral>
Dec 06 10:08:01 compute-0 nova_compute[254819]:     <nova:vcpus>1</nova:vcpus>
Dec 06 10:08:01 compute-0 nova_compute[254819]:   </nova:flavor>
Dec 06 10:08:01 compute-0 nova_compute[254819]:   <nova:owner>
Dec 06 10:08:01 compute-0 nova_compute[254819]:     <nova:user uuid="03615580775245e6ae335ee9d785611f">tempest-TestNetworkBasicOps-1971100882-project-member</nova:user>
Dec 06 10:08:01 compute-0 nova_compute[254819]:     <nova:project uuid="92b402c8d3e2476abc98be42a1e6d34e">tempest-TestNetworkBasicOps-1971100882</nova:project>
Dec 06 10:08:01 compute-0 nova_compute[254819]:   </nova:owner>
Dec 06 10:08:01 compute-0 nova_compute[254819]:   <nova:root type="image" uuid="9489b8a5-a798-4e26-87f9-59bb1eb2e6fd"/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:   <nova:ports>
Dec 06 10:08:01 compute-0 nova_compute[254819]:     <nova:port uuid="a7f5880e-0fb8-4f37-8a6c-7f0e8558dcf7">
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:     </nova:port>
Dec 06 10:08:01 compute-0 nova_compute[254819]:     <nova:port uuid="bf396b58-3b48-44ae-92bd-e71275c9883c">
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <nova:ip type="fixed" address="10.100.0.20" ipVersion="4"/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:     </nova:port>
Dec 06 10:08:01 compute-0 nova_compute[254819]:   </nova:ports>
Dec 06 10:08:01 compute-0 nova_compute[254819]: </nova:instance>
Dec 06 10:08:01 compute-0 nova_compute[254819]:   </metadata>
Dec 06 10:08:01 compute-0 nova_compute[254819]:   <memory unit='KiB'>131072</memory>
Dec 06 10:08:01 compute-0 nova_compute[254819]:   <currentMemory unit='KiB'>131072</currentMemory>
Dec 06 10:08:01 compute-0 nova_compute[254819]:   <vcpu placement='static'>1</vcpu>
Dec 06 10:08:01 compute-0 nova_compute[254819]:   <resource>
Dec 06 10:08:01 compute-0 nova_compute[254819]:     <partition>/machine</partition>
Dec 06 10:08:01 compute-0 nova_compute[254819]:   </resource>
Dec 06 10:08:01 compute-0 nova_compute[254819]:   <sysinfo type='smbios'>
Dec 06 10:08:01 compute-0 nova_compute[254819]:     <system>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <entry name='manufacturer'>RDO</entry>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <entry name='product'>OpenStack Compute</entry>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <entry name='serial'>2ef62e22-52fc-44f3-9964-8dc9b3c20686</entry>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <entry name='uuid'>2ef62e22-52fc-44f3-9964-8dc9b3c20686</entry>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <entry name='family'>Virtual Machine</entry>
Dec 06 10:08:01 compute-0 nova_compute[254819]:     </system>
Dec 06 10:08:01 compute-0 nova_compute[254819]:   </sysinfo>
Dec 06 10:08:01 compute-0 nova_compute[254819]:   <os>
Dec 06 10:08:01 compute-0 nova_compute[254819]:     <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Dec 06 10:08:01 compute-0 nova_compute[254819]:     <boot dev='hd'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:     <smbios mode='sysinfo'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:   </os>
Dec 06 10:08:01 compute-0 nova_compute[254819]:   <features>
Dec 06 10:08:01 compute-0 nova_compute[254819]:     <acpi/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:     <apic/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:     <vmcoreinfo state='on'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:   </features>
Dec 06 10:08:01 compute-0 nova_compute[254819]:   <cpu mode='custom' match='exact' check='full'>
Dec 06 10:08:01 compute-0 nova_compute[254819]:     <model fallback='forbid'>EPYC-Rome</model>
Dec 06 10:08:01 compute-0 nova_compute[254819]:     <vendor>AMD</vendor>
Dec 06 10:08:01 compute-0 nova_compute[254819]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:     <feature policy='require' name='x2apic'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:     <feature policy='require' name='tsc-deadline'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:     <feature policy='require' name='hypervisor'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:     <feature policy='require' name='tsc_adjust'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:     <feature policy='require' name='spec-ctrl'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:     <feature policy='require' name='stibp'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:     <feature policy='require' name='ssbd'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:     <feature policy='require' name='cmp_legacy'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:     <feature policy='require' name='overflow-recov'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:     <feature policy='require' name='succor'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:     <feature policy='require' name='ibrs'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:     <feature policy='require' name='amd-ssbd'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:     <feature policy='require' name='virt-ssbd'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:     <feature policy='disable' name='lbrv'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:     <feature policy='disable' name='tsc-scale'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:     <feature policy='disable' name='vmcb-clean'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:     <feature policy='disable' name='flushbyasid'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:     <feature policy='disable' name='pause-filter'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:     <feature policy='disable' name='pfthreshold'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:     <feature policy='disable' name='svme-addr-chk'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:     <feature policy='require' name='lfence-always-serializing'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:     <feature policy='disable' name='xsaves'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:     <feature policy='disable' name='svm'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:     <feature policy='require' name='topoext'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:     <feature policy='disable' name='npt'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:     <feature policy='disable' name='nrip-save'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:   </cpu>
Dec 06 10:08:01 compute-0 nova_compute[254819]:   <clock offset='utc'>
Dec 06 10:08:01 compute-0 nova_compute[254819]:     <timer name='pit' tickpolicy='delay'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:     <timer name='rtc' tickpolicy='catchup'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:     <timer name='hpet' present='no'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:   </clock>
Dec 06 10:08:01 compute-0 nova_compute[254819]:   <on_poweroff>destroy</on_poweroff>
Dec 06 10:08:01 compute-0 nova_compute[254819]:   <on_reboot>restart</on_reboot>
Dec 06 10:08:01 compute-0 nova_compute[254819]:   <on_crash>destroy</on_crash>
Dec 06 10:08:01 compute-0 nova_compute[254819]:   <devices>
Dec 06 10:08:01 compute-0 nova_compute[254819]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Dec 06 10:08:01 compute-0 nova_compute[254819]:     <disk type='network' device='disk'>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <driver name='qemu' type='raw' cache='none'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <auth username='openstack'>
Dec 06 10:08:01 compute-0 nova_compute[254819]:         <secret type='ceph' uuid='5ecd3f74-dade-5fc4-92ce-8950ae424258'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       </auth>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <source protocol='rbd' name='vms/2ef62e22-52fc-44f3-9964-8dc9b3c20686_disk' index='2'>
Dec 06 10:08:01 compute-0 nova_compute[254819]:         <host name='192.168.122.100' port='6789'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:         <host name='192.168.122.102' port='6789'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:         <host name='192.168.122.101' port='6789'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       </source>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <target dev='vda' bus='virtio'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <alias name='virtio-disk0'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:     </disk>
Dec 06 10:08:01 compute-0 nova_compute[254819]:     <disk type='network' device='cdrom'>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <driver name='qemu' type='raw' cache='none'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <auth username='openstack'>
Dec 06 10:08:01 compute-0 nova_compute[254819]:         <secret type='ceph' uuid='5ecd3f74-dade-5fc4-92ce-8950ae424258'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       </auth>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <source protocol='rbd' name='vms/2ef62e22-52fc-44f3-9964-8dc9b3c20686_disk.config' index='1'>
Dec 06 10:08:01 compute-0 nova_compute[254819]:         <host name='192.168.122.100' port='6789'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:         <host name='192.168.122.102' port='6789'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:         <host name='192.168.122.101' port='6789'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       </source>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <target dev='sda' bus='sata'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <readonly/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <alias name='sata0-0-0'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:     </disk>
Dec 06 10:08:01 compute-0 nova_compute[254819]:     <controller type='pci' index='0' model='pcie-root'>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <alias name='pcie.0'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:     </controller>
Dec 06 10:08:01 compute-0 nova_compute[254819]:     <controller type='pci' index='1' model='pcie-root-port'>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <model name='pcie-root-port'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <target chassis='1' port='0x10'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <alias name='pci.1'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:     </controller>
Dec 06 10:08:01 compute-0 nova_compute[254819]:     <controller type='pci' index='2' model='pcie-root-port'>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <model name='pcie-root-port'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <target chassis='2' port='0x11'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <alias name='pci.2'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:     </controller>
Dec 06 10:08:01 compute-0 nova_compute[254819]:     <controller type='pci' index='3' model='pcie-root-port'>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <model name='pcie-root-port'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <target chassis='3' port='0x12'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <alias name='pci.3'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:     </controller>
Dec 06 10:08:01 compute-0 nova_compute[254819]:     <controller type='pci' index='4' model='pcie-root-port'>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <model name='pcie-root-port'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <target chassis='4' port='0x13'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <alias name='pci.4'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:     </controller>
Dec 06 10:08:01 compute-0 nova_compute[254819]:     <controller type='pci' index='5' model='pcie-root-port'>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <model name='pcie-root-port'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <target chassis='5' port='0x14'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <alias name='pci.5'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:     </controller>
Dec 06 10:08:01 compute-0 nova_compute[254819]:     <controller type='pci' index='6' model='pcie-root-port'>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <model name='pcie-root-port'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <target chassis='6' port='0x15'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <alias name='pci.6'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:     </controller>
Dec 06 10:08:01 compute-0 nova_compute[254819]:     <controller type='pci' index='7' model='pcie-root-port'>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <model name='pcie-root-port'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <target chassis='7' port='0x16'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <alias name='pci.7'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:     </controller>
Dec 06 10:08:01 compute-0 nova_compute[254819]:     <controller type='pci' index='8' model='pcie-root-port'>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <model name='pcie-root-port'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <target chassis='8' port='0x17'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <alias name='pci.8'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:     </controller>
Dec 06 10:08:01 compute-0 nova_compute[254819]:     <controller type='pci' index='9' model='pcie-root-port'>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <model name='pcie-root-port'/>
Dec 06 10:08:01 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:08:01.567 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[c0328f30-4bb6-47e6-950b-ffa2ce7dfd2f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <target chassis='9' port='0x18'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <alias name='pci.9'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:     </controller>
Dec 06 10:08:01 compute-0 nova_compute[254819]:     <controller type='pci' index='10' model='pcie-root-port'>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <model name='pcie-root-port'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <target chassis='10' port='0x19'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <alias name='pci.10'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:     </controller>
Dec 06 10:08:01 compute-0 nova_compute[254819]:     <controller type='pci' index='11' model='pcie-root-port'>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <model name='pcie-root-port'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <target chassis='11' port='0x1a'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <alias name='pci.11'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:     </controller>
Dec 06 10:08:01 compute-0 nova_compute[254819]:     <controller type='pci' index='12' model='pcie-root-port'>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <model name='pcie-root-port'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <target chassis='12' port='0x1b'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <alias name='pci.12'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:     </controller>
Dec 06 10:08:01 compute-0 nova_compute[254819]:     <controller type='pci' index='13' model='pcie-root-port'>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <model name='pcie-root-port'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <target chassis='13' port='0x1c'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <alias name='pci.13'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:     </controller>
Dec 06 10:08:01 compute-0 nova_compute[254819]:     <controller type='pci' index='14' model='pcie-root-port'>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <model name='pcie-root-port'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <target chassis='14' port='0x1d'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <alias name='pci.14'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:     </controller>
Dec 06 10:08:01 compute-0 nova_compute[254819]:     <controller type='pci' index='15' model='pcie-root-port'>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <model name='pcie-root-port'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <target chassis='15' port='0x1e'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <alias name='pci.15'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:     </controller>
Dec 06 10:08:01 compute-0 nova_compute[254819]:     <controller type='pci' index='16' model='pcie-root-port'>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <model name='pcie-root-port'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <target chassis='16' port='0x1f'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <alias name='pci.16'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:     </controller>
Dec 06 10:08:01 compute-0 nova_compute[254819]:     <controller type='pci' index='17' model='pcie-root-port'>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <model name='pcie-root-port'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <target chassis='17' port='0x20'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <alias name='pci.17'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:     </controller>
Dec 06 10:08:01 compute-0 nova_compute[254819]:     <controller type='pci' index='18' model='pcie-root-port'>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <model name='pcie-root-port'/>
Dec 06 10:08:01 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:08:01.568 162267 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-b700d432-ed1c-4e29-8f64-6e35196305aa namespace which is not needed anymore
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <target chassis='18' port='0x21'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <alias name='pci.18'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:     </controller>
Dec 06 10:08:01 compute-0 nova_compute[254819]:     <controller type='pci' index='19' model='pcie-root-port'>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <model name='pcie-root-port'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <target chassis='19' port='0x22'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <alias name='pci.19'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:     </controller>
Dec 06 10:08:01 compute-0 nova_compute[254819]:     <controller type='pci' index='20' model='pcie-root-port'>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <model name='pcie-root-port'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <target chassis='20' port='0x23'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <alias name='pci.20'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:     </controller>
Dec 06 10:08:01 compute-0 nova_compute[254819]:     <controller type='pci' index='21' model='pcie-root-port'>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <model name='pcie-root-port'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <target chassis='21' port='0x24'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <alias name='pci.21'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:     </controller>
Dec 06 10:08:01 compute-0 nova_compute[254819]:     <controller type='pci' index='22' model='pcie-root-port'>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <model name='pcie-root-port'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <target chassis='22' port='0x25'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <alias name='pci.22'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:     </controller>
Dec 06 10:08:01 compute-0 nova_compute[254819]:     <controller type='pci' index='23' model='pcie-root-port'>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <model name='pcie-root-port'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <target chassis='23' port='0x26'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <alias name='pci.23'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:     </controller>
Dec 06 10:08:01 compute-0 nova_compute[254819]:     <controller type='pci' index='24' model='pcie-root-port'>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <model name='pcie-root-port'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <target chassis='24' port='0x27'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <alias name='pci.24'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:     </controller>
Dec 06 10:08:01 compute-0 nova_compute[254819]:     <controller type='pci' index='25' model='pcie-root-port'>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <model name='pcie-root-port'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <target chassis='25' port='0x28'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <alias name='pci.25'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:     </controller>
Dec 06 10:08:01 compute-0 nova_compute[254819]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <model name='pcie-pci-bridge'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <alias name='pci.26'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:     </controller>
Dec 06 10:08:01 compute-0 nova_compute[254819]:     <controller type='usb' index='0' model='piix3-uhci'>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <alias name='usb'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:     </controller>
Dec 06 10:08:01 compute-0 nova_compute[254819]:     <controller type='sata' index='0'>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <alias name='ide'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:     </controller>
Dec 06 10:08:01 compute-0 nova_compute[254819]:     <interface type='ethernet'>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <mac address='fa:16:3e:6c:29:20'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <target dev='tapa7f5880e-0f'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <model type='virtio'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <driver name='vhost' rx_queue_size='512'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <mtu size='1442'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <alias name='net0'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:     </interface>
Dec 06 10:08:01 compute-0 nova_compute[254819]:     <serial type='pty'>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <source path='/dev/pts/0'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <log file='/var/lib/nova/instances/2ef62e22-52fc-44f3-9964-8dc9b3c20686/console.log' append='off'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <target type='isa-serial' port='0'>
Dec 06 10:08:01 compute-0 nova_compute[254819]:         <model name='isa-serial'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       </target>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <alias name='serial0'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:     </serial>
Dec 06 10:08:01 compute-0 nova_compute[254819]:     <console type='pty' tty='/dev/pts/0'>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <source path='/dev/pts/0'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <log file='/var/lib/nova/instances/2ef62e22-52fc-44f3-9964-8dc9b3c20686/console.log' append='off'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <target type='serial' port='0'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <alias name='serial0'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:     </console>
Dec 06 10:08:01 compute-0 nova_compute[254819]:     <input type='tablet' bus='usb'>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <alias name='input0'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <address type='usb' bus='0' port='1'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:     </input>
Dec 06 10:08:01 compute-0 nova_compute[254819]:     <input type='mouse' bus='ps2'>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <alias name='input1'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:     </input>
Dec 06 10:08:01 compute-0 nova_compute[254819]:     <input type='keyboard' bus='ps2'>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <alias name='input2'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:     </input>
Dec 06 10:08:01 compute-0 nova_compute[254819]:     <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <listen type='address' address='::0'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:     </graphics>
Dec 06 10:08:01 compute-0 nova_compute[254819]:     <audio id='1' type='none'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:     <video>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <model type='virtio' heads='1' primary='yes'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <alias name='video0'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:     </video>
Dec 06 10:08:01 compute-0 nova_compute[254819]:     <watchdog model='itco' action='reset'>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <alias name='watchdog0'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:     </watchdog>
Dec 06 10:08:01 compute-0 nova_compute[254819]:     <memballoon model='virtio'>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <stats period='10'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <alias name='balloon0'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:     </memballoon>
Dec 06 10:08:01 compute-0 nova_compute[254819]:     <rng model='virtio'>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <backend model='random'>/dev/urandom</backend>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <alias name='rng0'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:     </rng>
Dec 06 10:08:01 compute-0 nova_compute[254819]:   </devices>
Dec 06 10:08:01 compute-0 nova_compute[254819]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Dec 06 10:08:01 compute-0 nova_compute[254819]:     <label>system_u:system_r:svirt_t:s0:c237,c686</label>
Dec 06 10:08:01 compute-0 nova_compute[254819]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c237,c686</imagelabel>
Dec 06 10:08:01 compute-0 nova_compute[254819]:   </seclabel>
Dec 06 10:08:01 compute-0 nova_compute[254819]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Dec 06 10:08:01 compute-0 nova_compute[254819]:     <label>+107:+107</label>
Dec 06 10:08:01 compute-0 nova_compute[254819]:     <imagelabel>+107:+107</imagelabel>
Dec 06 10:08:01 compute-0 nova_compute[254819]:   </seclabel>
Dec 06 10:08:01 compute-0 nova_compute[254819]: </domain>
Dec 06 10:08:01 compute-0 nova_compute[254819]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Dec 06 10:08:01 compute-0 nova_compute[254819]: 2025-12-06 10:08:01.567 254824 INFO nova.virt.libvirt.driver [None req-dc5f9ffd-4751-4397-a315-7e306ced7630 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Successfully detached device tapbf396b58-3b from instance 2ef62e22-52fc-44f3-9964-8dc9b3c20686 from the live domain config.
Dec 06 10:08:01 compute-0 nova_compute[254819]: 2025-12-06 10:08:01.568 254824 DEBUG nova.virt.libvirt.vif [None req-dc5f9ffd-4751-4397-a315-7e306ced7630 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T10:07:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1205802956',display_name='tempest-TestNetworkBasicOps-server-1205802956',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1205802956',id=3,image_ref='9489b8a5-a798-4e26-87f9-59bb1eb2e6fd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJ5T1qcHH05a9NmUaQjnoDRANzOfCWA0bQySUh/2laJiduU/bwXdkcdraO/GcO81J8j8CnPS5RyrjJyMRbGp/po0cthjI8Tgw893oNF7dd79URxvc2r73z8/7tKvZVwU9A==',key_name='tempest-TestNetworkBasicOps-2032054379',keypairs=<?>,launch_index=0,launched_at=2025-12-06T10:07:27Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='92b402c8d3e2476abc98be42a1e6d34e',ramdisk_id='',reservation_id='r-hrg57eo7',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='9489b8a5-a798-4e26-87f9-59bb1eb2e6fd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1971100882',owner_user_name='tempest-TestNetworkBasicOps-1971100882-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T10:07:27Z,user_data=None,user_id='03615580775245e6ae335ee9d785611f',uuid=2ef62e22-52fc-44f3-9964-8dc9b3c20686,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "bf396b58-3b48-44ae-92bd-e71275c9883c", "address": "fa:16:3e:9c:56:e3", "network": {"id": "b700d432-ed1c-4e29-8f64-6e35196305aa", "bridge": "br-int", "label": "tempest-network-smoke--1192945462", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.20", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf396b58-3b", "ovs_interfaceid": "bf396b58-3b48-44ae-92bd-e71275c9883c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 06 10:08:01 compute-0 nova_compute[254819]: 2025-12-06 10:08:01.568 254824 DEBUG nova.network.os_vif_util [None req-dc5f9ffd-4751-4397-a315-7e306ced7630 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Converting VIF {"id": "bf396b58-3b48-44ae-92bd-e71275c9883c", "address": "fa:16:3e:9c:56:e3", "network": {"id": "b700d432-ed1c-4e29-8f64-6e35196305aa", "bridge": "br-int", "label": "tempest-network-smoke--1192945462", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.20", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf396b58-3b", "ovs_interfaceid": "bf396b58-3b48-44ae-92bd-e71275c9883c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 10:08:01 compute-0 nova_compute[254819]: 2025-12-06 10:08:01.568 254824 DEBUG nova.network.os_vif_util [None req-dc5f9ffd-4751-4397-a315-7e306ced7630 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9c:56:e3,bridge_name='br-int',has_traffic_filtering=True,id=bf396b58-3b48-44ae-92bd-e71275c9883c,network=Network(b700d432-ed1c-4e29-8f64-6e35196305aa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbf396b58-3b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 10:08:01 compute-0 nova_compute[254819]: 2025-12-06 10:08:01.569 254824 DEBUG os_vif [None req-dc5f9ffd-4751-4397-a315-7e306ced7630 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:9c:56:e3,bridge_name='br-int',has_traffic_filtering=True,id=bf396b58-3b48-44ae-92bd-e71275c9883c,network=Network(b700d432-ed1c-4e29-8f64-6e35196305aa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbf396b58-3b') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 06 10:08:01 compute-0 nova_compute[254819]: 2025-12-06 10:08:01.570 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:08:01 compute-0 nova_compute[254819]: 2025-12-06 10:08:01.570 254824 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbf396b58-3b, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 10:08:01 compute-0 nova_compute[254819]: 2025-12-06 10:08:01.571 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:08:01 compute-0 nova_compute[254819]: 2025-12-06 10:08:01.573 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 10:08:01 compute-0 nova_compute[254819]: 2025-12-06 10:08:01.575 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:08:01 compute-0 nova_compute[254819]: 2025-12-06 10:08:01.579 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:08:01 compute-0 nova_compute[254819]: 2025-12-06 10:08:01.581 254824 INFO os_vif [None req-dc5f9ffd-4751-4397-a315-7e306ced7630 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:9c:56:e3,bridge_name='br-int',has_traffic_filtering=True,id=bf396b58-3b48-44ae-92bd-e71275c9883c,network=Network(b700d432-ed1c-4e29-8f64-6e35196305aa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbf396b58-3b')
Dec 06 10:08:01 compute-0 nova_compute[254819]: 2025-12-06 10:08:01.582 254824 DEBUG nova.virt.libvirt.guest [None req-dc5f9ffd-4751-4397-a315-7e306ced7630 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 06 10:08:01 compute-0 nova_compute[254819]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:   <nova:name>tempest-TestNetworkBasicOps-server-1205802956</nova:name>
Dec 06 10:08:01 compute-0 nova_compute[254819]:   <nova:creationTime>2025-12-06 10:08:01</nova:creationTime>
Dec 06 10:08:01 compute-0 nova_compute[254819]:   <nova:flavor name="m1.nano">
Dec 06 10:08:01 compute-0 nova_compute[254819]:     <nova:memory>128</nova:memory>
Dec 06 10:08:01 compute-0 nova_compute[254819]:     <nova:disk>1</nova:disk>
Dec 06 10:08:01 compute-0 nova_compute[254819]:     <nova:swap>0</nova:swap>
Dec 06 10:08:01 compute-0 nova_compute[254819]:     <nova:ephemeral>0</nova:ephemeral>
Dec 06 10:08:01 compute-0 nova_compute[254819]:     <nova:vcpus>1</nova:vcpus>
Dec 06 10:08:01 compute-0 nova_compute[254819]:   </nova:flavor>
Dec 06 10:08:01 compute-0 nova_compute[254819]:   <nova:owner>
Dec 06 10:08:01 compute-0 nova_compute[254819]:     <nova:user uuid="03615580775245e6ae335ee9d785611f">tempest-TestNetworkBasicOps-1971100882-project-member</nova:user>
Dec 06 10:08:01 compute-0 nova_compute[254819]:     <nova:project uuid="92b402c8d3e2476abc98be42a1e6d34e">tempest-TestNetworkBasicOps-1971100882</nova:project>
Dec 06 10:08:01 compute-0 nova_compute[254819]:   </nova:owner>
Dec 06 10:08:01 compute-0 nova_compute[254819]:   <nova:root type="image" uuid="9489b8a5-a798-4e26-87f9-59bb1eb2e6fd"/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:   <nova:ports>
Dec 06 10:08:01 compute-0 nova_compute[254819]:     <nova:port uuid="a7f5880e-0fb8-4f37-8a6c-7f0e8558dcf7">
Dec 06 10:08:01 compute-0 nova_compute[254819]:       <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Dec 06 10:08:01 compute-0 nova_compute[254819]:     </nova:port>
Dec 06 10:08:01 compute-0 nova_compute[254819]:   </nova:ports>
Dec 06 10:08:01 compute-0 nova_compute[254819]: </nova:instance>
Dec 06 10:08:01 compute-0 nova_compute[254819]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Dec 06 10:08:01 compute-0 neutron-haproxy-ovnmeta-b700d432-ed1c-4e29-8f64-6e35196305aa[264025]: [NOTICE]   (264029) : haproxy version is 2.8.14-c23fe91
Dec 06 10:08:01 compute-0 neutron-haproxy-ovnmeta-b700d432-ed1c-4e29-8f64-6e35196305aa[264025]: [NOTICE]   (264029) : path to executable is /usr/sbin/haproxy
Dec 06 10:08:01 compute-0 neutron-haproxy-ovnmeta-b700d432-ed1c-4e29-8f64-6e35196305aa[264025]: [WARNING]  (264029) : Exiting Master process...
Dec 06 10:08:01 compute-0 neutron-haproxy-ovnmeta-b700d432-ed1c-4e29-8f64-6e35196305aa[264025]: [ALERT]    (264029) : Current worker (264031) exited with code 143 (Terminated)
Dec 06 10:08:01 compute-0 neutron-haproxy-ovnmeta-b700d432-ed1c-4e29-8f64-6e35196305aa[264025]: [WARNING]  (264029) : All workers exited. Exiting... (0)
Dec 06 10:08:01 compute-0 systemd[1]: libpod-6ca7b65c6b94bde9cc2dd559902f688bc8af04ffa8b5827278d645f0ad840d33.scope: Deactivated successfully.
Dec 06 10:08:01 compute-0 podman[264084]: 2025-12-06 10:08:01.698064068 +0000 UTC m=+0.042897619 container died 6ca7b65c6b94bde9cc2dd559902f688bc8af04ffa8b5827278d645f0ad840d33 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=neutron-haproxy-ovnmeta-b700d432-ed1c-4e29-8f64-6e35196305aa, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec 06 10:08:01 compute-0 nova_compute[254819]: 2025-12-06 10:08:01.700 254824 DEBUG nova.compute.manager [req-72e202e3-7b1c-4306-b46b-b0ffbe896139 req-5f92e704-b652-44d4-9102-6732d7684129 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 2ef62e22-52fc-44f3-9964-8dc9b3c20686] Received event network-vif-plugged-bf396b58-3b48-44ae-92bd-e71275c9883c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 10:08:01 compute-0 nova_compute[254819]: 2025-12-06 10:08:01.702 254824 DEBUG oslo_concurrency.lockutils [req-72e202e3-7b1c-4306-b46b-b0ffbe896139 req-5f92e704-b652-44d4-9102-6732d7684129 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Acquiring lock "2ef62e22-52fc-44f3-9964-8dc9b3c20686-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:08:01 compute-0 nova_compute[254819]: 2025-12-06 10:08:01.702 254824 DEBUG oslo_concurrency.lockutils [req-72e202e3-7b1c-4306-b46b-b0ffbe896139 req-5f92e704-b652-44d4-9102-6732d7684129 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Lock "2ef62e22-52fc-44f3-9964-8dc9b3c20686-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:08:01 compute-0 nova_compute[254819]: 2025-12-06 10:08:01.703 254824 DEBUG oslo_concurrency.lockutils [req-72e202e3-7b1c-4306-b46b-b0ffbe896139 req-5f92e704-b652-44d4-9102-6732d7684129 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Lock "2ef62e22-52fc-44f3-9964-8dc9b3c20686-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:08:01 compute-0 nova_compute[254819]: 2025-12-06 10:08:01.703 254824 DEBUG nova.compute.manager [req-72e202e3-7b1c-4306-b46b-b0ffbe896139 req-5f92e704-b652-44d4-9102-6732d7684129 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 2ef62e22-52fc-44f3-9964-8dc9b3c20686] No waiting events found dispatching network-vif-plugged-bf396b58-3b48-44ae-92bd-e71275c9883c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 10:08:01 compute-0 nova_compute[254819]: 2025-12-06 10:08:01.703 254824 WARNING nova.compute.manager [req-72e202e3-7b1c-4306-b46b-b0ffbe896139 req-5f92e704-b652-44d4-9102-6732d7684129 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 2ef62e22-52fc-44f3-9964-8dc9b3c20686] Received unexpected event network-vif-plugged-bf396b58-3b48-44ae-92bd-e71275c9883c for instance with vm_state active and task_state None.
Dec 06 10:08:01 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-6ca7b65c6b94bde9cc2dd559902f688bc8af04ffa8b5827278d645f0ad840d33-userdata-shm.mount: Deactivated successfully.
Dec 06 10:08:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-4ab7a22314b67a8e06bacab4c25c79547be2603d131e46433f9dadedd7c6018f-merged.mount: Deactivated successfully.
Dec 06 10:08:01 compute-0 podman[264084]: 2025-12-06 10:08:01.754543041 +0000 UTC m=+0.099376592 container cleanup 6ca7b65c6b94bde9cc2dd559902f688bc8af04ffa8b5827278d645f0ad840d33 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=neutron-haproxy-ovnmeta-b700d432-ed1c-4e29-8f64-6e35196305aa, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3)
Dec 06 10:08:01 compute-0 systemd[1]: libpod-conmon-6ca7b65c6b94bde9cc2dd559902f688bc8af04ffa8b5827278d645f0ad840d33.scope: Deactivated successfully.
Dec 06 10:08:01 compute-0 podman[264113]: 2025-12-06 10:08:01.817911509 +0000 UTC m=+0.042906428 container remove 6ca7b65c6b94bde9cc2dd559902f688bc8af04ffa8b5827278d645f0ad840d33 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=neutron-haproxy-ovnmeta-b700d432-ed1c-4e29-8f64-6e35196305aa, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true)
Dec 06 10:08:01 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:08:01.823 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[17991518-929d-4883-8980-b3aad9353719]: (4, ('Sat Dec  6 10:08:01 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-b700d432-ed1c-4e29-8f64-6e35196305aa (6ca7b65c6b94bde9cc2dd559902f688bc8af04ffa8b5827278d645f0ad840d33)\n6ca7b65c6b94bde9cc2dd559902f688bc8af04ffa8b5827278d645f0ad840d33\nSat Dec  6 10:08:01 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-b700d432-ed1c-4e29-8f64-6e35196305aa (6ca7b65c6b94bde9cc2dd559902f688bc8af04ffa8b5827278d645f0ad840d33)\n6ca7b65c6b94bde9cc2dd559902f688bc8af04ffa8b5827278d645f0ad840d33\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:08:01 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:08:01.824 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[6e9578d6-d0d9-4cca-85eb-1512e994a9c4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:08:01 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:08:01.825 162267 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb700d432-e0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 10:08:01 compute-0 nova_compute[254819]: 2025-12-06 10:08:01.827 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:08:01 compute-0 kernel: tapb700d432-e0: left promiscuous mode
Dec 06 10:08:01 compute-0 nova_compute[254819]: 2025-12-06 10:08:01.841 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:08:01 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:08:01.843 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[54ad943f-bb2e-46db-8adb-60c9e02a9f7e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:08:01 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:08:01.862 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[1f0d6f54-b53d-4560-96c5-4af2c9c3b7d5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:08:01 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:08:01.863 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[3cb7f87d-a120-4761-b10d-d49d003afc6b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:08:01 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:08:01.877 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[9841695b-f082-45a5-adde-6a86475a9463]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 404985, 'reachable_time': 15469, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 264129, 'error': None, 'target': 'ovnmeta-b700d432-ed1c-4e29-8f64-6e35196305aa', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:08:01 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:08:01.880 162385 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-b700d432-ed1c-4e29-8f64-6e35196305aa deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 06 10:08:01 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:08:01.880 162385 DEBUG oslo.privsep.daemon [-] privsep: reply[4aa3b9fb-dcae-4f23-a91e-1f98950be44b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:08:01 compute-0 systemd[1]: run-netns-ovnmeta\x2db700d432\x2ded1c\x2d4e29\x2d8f64\x2d6e35196305aa.mount: Deactivated successfully.
Dec 06 10:08:02 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:08:02 : epoch 69340037 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f32a40034e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:08:02 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:08:02 : epoch 69340037 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f327c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:08:02 compute-0 nova_compute[254819]: 2025-12-06 10:08:02.629 254824 DEBUG oslo_concurrency.lockutils [None req-dc5f9ffd-4751-4397-a315-7e306ced7630 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Acquiring lock "refresh_cache-2ef62e22-52fc-44f3-9964-8dc9b3c20686" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 10:08:02 compute-0 nova_compute[254819]: 2025-12-06 10:08:02.630 254824 DEBUG oslo_concurrency.lockutils [None req-dc5f9ffd-4751-4397-a315-7e306ced7630 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Acquired lock "refresh_cache-2ef62e22-52fc-44f3-9964-8dc9b3c20686" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 10:08:02 compute-0 nova_compute[254819]: 2025-12-06 10:08:02.630 254824 DEBUG nova.network.neutron [None req-dc5f9ffd-4751-4397-a315-7e306ced7630 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 2ef62e22-52fc-44f3-9964-8dc9b3c20686] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 06 10:08:02 compute-0 ceph-mon[74327]: pgmap v805: 337 pgs: 337 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 3.0 KiB/s wr, 0 op/s
Dec 06 10:08:02 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:08:02 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:08:02 : epoch 69340037 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f328c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:08:03 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:08:03 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:08:03 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:08:03.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:08:03 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:08:03 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:08:03 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:08:03.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:08:03 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v806: 337 pgs: 337 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 8.2 KiB/s rd, 3.0 KiB/s wr, 1 op/s
Dec 06 10:08:03 compute-0 ovn_controller[152417]: 2025-12-06T10:08:03Z|00051|binding|INFO|Releasing lport 614c688d-e8cc-4f61-86da-0aa3c3ee7fd1 from this chassis (sb_readonly=0)
Dec 06 10:08:03 compute-0 nova_compute[254819]: 2025-12-06 10:08:03.789 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:08:03 compute-0 nova_compute[254819]: 2025-12-06 10:08:03.814 254824 DEBUG nova.compute.manager [req-d5f3ab94-7021-4442-abb5-1b27eef2404e req-22acac04-f57b-4533-a4e8-72d332eabde4 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 2ef62e22-52fc-44f3-9964-8dc9b3c20686] Received event network-vif-unplugged-bf396b58-3b48-44ae-92bd-e71275c9883c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 10:08:03 compute-0 nova_compute[254819]: 2025-12-06 10:08:03.814 254824 DEBUG oslo_concurrency.lockutils [req-d5f3ab94-7021-4442-abb5-1b27eef2404e req-22acac04-f57b-4533-a4e8-72d332eabde4 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Acquiring lock "2ef62e22-52fc-44f3-9964-8dc9b3c20686-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:08:03 compute-0 nova_compute[254819]: 2025-12-06 10:08:03.815 254824 DEBUG oslo_concurrency.lockutils [req-d5f3ab94-7021-4442-abb5-1b27eef2404e req-22acac04-f57b-4533-a4e8-72d332eabde4 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Lock "2ef62e22-52fc-44f3-9964-8dc9b3c20686-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:08:03 compute-0 nova_compute[254819]: 2025-12-06 10:08:03.815 254824 DEBUG oslo_concurrency.lockutils [req-d5f3ab94-7021-4442-abb5-1b27eef2404e req-22acac04-f57b-4533-a4e8-72d332eabde4 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Lock "2ef62e22-52fc-44f3-9964-8dc9b3c20686-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:08:03 compute-0 nova_compute[254819]: 2025-12-06 10:08:03.815 254824 DEBUG nova.compute.manager [req-d5f3ab94-7021-4442-abb5-1b27eef2404e req-22acac04-f57b-4533-a4e8-72d332eabde4 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 2ef62e22-52fc-44f3-9964-8dc9b3c20686] No waiting events found dispatching network-vif-unplugged-bf396b58-3b48-44ae-92bd-e71275c9883c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 10:08:03 compute-0 nova_compute[254819]: 2025-12-06 10:08:03.815 254824 WARNING nova.compute.manager [req-d5f3ab94-7021-4442-abb5-1b27eef2404e req-22acac04-f57b-4533-a4e8-72d332eabde4 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 2ef62e22-52fc-44f3-9964-8dc9b3c20686] Received unexpected event network-vif-unplugged-bf396b58-3b48-44ae-92bd-e71275c9883c for instance with vm_state active and task_state None.
Dec 06 10:08:03 compute-0 nova_compute[254819]: 2025-12-06 10:08:03.815 254824 DEBUG nova.compute.manager [req-d5f3ab94-7021-4442-abb5-1b27eef2404e req-22acac04-f57b-4533-a4e8-72d332eabde4 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 2ef62e22-52fc-44f3-9964-8dc9b3c20686] Received event network-vif-plugged-bf396b58-3b48-44ae-92bd-e71275c9883c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 10:08:03 compute-0 nova_compute[254819]: 2025-12-06 10:08:03.815 254824 DEBUG oslo_concurrency.lockutils [req-d5f3ab94-7021-4442-abb5-1b27eef2404e req-22acac04-f57b-4533-a4e8-72d332eabde4 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Acquiring lock "2ef62e22-52fc-44f3-9964-8dc9b3c20686-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:08:03 compute-0 nova_compute[254819]: 2025-12-06 10:08:03.816 254824 DEBUG oslo_concurrency.lockutils [req-d5f3ab94-7021-4442-abb5-1b27eef2404e req-22acac04-f57b-4533-a4e8-72d332eabde4 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Lock "2ef62e22-52fc-44f3-9964-8dc9b3c20686-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:08:03 compute-0 nova_compute[254819]: 2025-12-06 10:08:03.816 254824 DEBUG oslo_concurrency.lockutils [req-d5f3ab94-7021-4442-abb5-1b27eef2404e req-22acac04-f57b-4533-a4e8-72d332eabde4 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Lock "2ef62e22-52fc-44f3-9964-8dc9b3c20686-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:08:03 compute-0 nova_compute[254819]: 2025-12-06 10:08:03.816 254824 DEBUG nova.compute.manager [req-d5f3ab94-7021-4442-abb5-1b27eef2404e req-22acac04-f57b-4533-a4e8-72d332eabde4 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 2ef62e22-52fc-44f3-9964-8dc9b3c20686] No waiting events found dispatching network-vif-plugged-bf396b58-3b48-44ae-92bd-e71275c9883c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 10:08:03 compute-0 nova_compute[254819]: 2025-12-06 10:08:03.816 254824 WARNING nova.compute.manager [req-d5f3ab94-7021-4442-abb5-1b27eef2404e req-22acac04-f57b-4533-a4e8-72d332eabde4 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 2ef62e22-52fc-44f3-9964-8dc9b3c20686] Received unexpected event network-vif-plugged-bf396b58-3b48-44ae-92bd-e71275c9883c for instance with vm_state active and task_state None.
Dec 06 10:08:03 compute-0 nova_compute[254819]: 2025-12-06 10:08:03.816 254824 DEBUG nova.compute.manager [req-d5f3ab94-7021-4442-abb5-1b27eef2404e req-22acac04-f57b-4533-a4e8-72d332eabde4 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 2ef62e22-52fc-44f3-9964-8dc9b3c20686] Received event network-vif-deleted-bf396b58-3b48-44ae-92bd-e71275c9883c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 10:08:03 compute-0 nova_compute[254819]: 2025-12-06 10:08:03.816 254824 INFO nova.compute.manager [req-d5f3ab94-7021-4442-abb5-1b27eef2404e req-22acac04-f57b-4533-a4e8-72d332eabde4 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 2ef62e22-52fc-44f3-9964-8dc9b3c20686] Neutron deleted interface bf396b58-3b48-44ae-92bd-e71275c9883c; detaching it from the instance and deleting it from the info cache
Dec 06 10:08:03 compute-0 nova_compute[254819]: 2025-12-06 10:08:03.817 254824 DEBUG nova.network.neutron [req-d5f3ab94-7021-4442-abb5-1b27eef2404e req-22acac04-f57b-4533-a4e8-72d332eabde4 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 2ef62e22-52fc-44f3-9964-8dc9b3c20686] Updating instance_info_cache with network_info: [{"id": "a7f5880e-0fb8-4f37-8a6c-7f0e8558dcf7", "address": "fa:16:3e:6c:29:20", "network": {"id": "4d9eb8be-73ac-4cfc-8821-fb41b5868957", "bridge": "br-int", "label": "tempest-network-smoke--165851366", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.201", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": null, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa7f5880e-0f", "ovs_interfaceid": "a7f5880e-0fb8-4f37-8a6c-7f0e8558dcf7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 10:08:03 compute-0 nova_compute[254819]: 2025-12-06 10:08:03.848 254824 DEBUG nova.objects.instance [req-d5f3ab94-7021-4442-abb5-1b27eef2404e req-22acac04-f57b-4533-a4e8-72d332eabde4 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Lazy-loading 'system_metadata' on Instance uuid 2ef62e22-52fc-44f3-9964-8dc9b3c20686 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 10:08:03 compute-0 nova_compute[254819]: 2025-12-06 10:08:03.914 254824 DEBUG nova.objects.instance [req-d5f3ab94-7021-4442-abb5-1b27eef2404e req-22acac04-f57b-4533-a4e8-72d332eabde4 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Lazy-loading 'flavor' on Instance uuid 2ef62e22-52fc-44f3-9964-8dc9b3c20686 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 10:08:03 compute-0 nova_compute[254819]: 2025-12-06 10:08:03.952 254824 DEBUG nova.virt.libvirt.vif [req-d5f3ab94-7021-4442-abb5-1b27eef2404e req-22acac04-f57b-4533-a4e8-72d332eabde4 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T10:07:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1205802956',display_name='tempest-TestNetworkBasicOps-server-1205802956',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1205802956',id=3,image_ref='9489b8a5-a798-4e26-87f9-59bb1eb2e6fd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJ5T1qcHH05a9NmUaQjnoDRANzOfCWA0bQySUh/2laJiduU/bwXdkcdraO/GcO81J8j8CnPS5RyrjJyMRbGp/po0cthjI8Tgw893oNF7dd79URxvc2r73z8/7tKvZVwU9A==',key_name='tempest-TestNetworkBasicOps-2032054379',keypairs=<?>,launch_index=0,launched_at=2025-12-06T10:07:27Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='92b402c8d3e2476abc98be42a1e6d34e',ramdisk_id='',reservation_id='r-hrg57eo7',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='9489b8a5-a798-4e26-87f9-59bb1eb2e6fd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1971100882',owner_user_name='tempest-TestNetworkBasicOps-1971100882-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T10:07:27Z,user_data=None,user_id='03615580775245e6ae335ee9d785611f',uuid=2ef62e22-52fc-44f3-9964-8dc9b3c20686,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "bf396b58-3b48-44ae-92bd-e71275c9883c", "address": "fa:16:3e:9c:56:e3", "network": {"id": "b700d432-ed1c-4e29-8f64-6e35196305aa", "bridge": "br-int", "label": "tempest-network-smoke--1192945462", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.20", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf396b58-3b", "ovs_interfaceid": "bf396b58-3b48-44ae-92bd-e71275c9883c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 06 10:08:03 compute-0 nova_compute[254819]: 2025-12-06 10:08:03.953 254824 DEBUG nova.network.os_vif_util [req-d5f3ab94-7021-4442-abb5-1b27eef2404e req-22acac04-f57b-4533-a4e8-72d332eabde4 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Converting VIF {"id": "bf396b58-3b48-44ae-92bd-e71275c9883c", "address": "fa:16:3e:9c:56:e3", "network": {"id": "b700d432-ed1c-4e29-8f64-6e35196305aa", "bridge": "br-int", "label": "tempest-network-smoke--1192945462", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.20", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf396b58-3b", "ovs_interfaceid": "bf396b58-3b48-44ae-92bd-e71275c9883c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 10:08:03 compute-0 nova_compute[254819]: 2025-12-06 10:08:03.953 254824 DEBUG nova.network.os_vif_util [req-d5f3ab94-7021-4442-abb5-1b27eef2404e req-22acac04-f57b-4533-a4e8-72d332eabde4 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9c:56:e3,bridge_name='br-int',has_traffic_filtering=True,id=bf396b58-3b48-44ae-92bd-e71275c9883c,network=Network(b700d432-ed1c-4e29-8f64-6e35196305aa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbf396b58-3b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 10:08:03 compute-0 nova_compute[254819]: 2025-12-06 10:08:03.956 254824 DEBUG nova.virt.libvirt.guest [req-d5f3ab94-7021-4442-abb5-1b27eef2404e req-22acac04-f57b-4533-a4e8-72d332eabde4 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:9c:56:e3"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapbf396b58-3b"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Dec 06 10:08:03 compute-0 nova_compute[254819]: 2025-12-06 10:08:03.959 254824 DEBUG nova.virt.libvirt.guest [req-d5f3ab94-7021-4442-abb5-1b27eef2404e req-22acac04-f57b-4533-a4e8-72d332eabde4 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:9c:56:e3"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapbf396b58-3b"/></interface> not found in domain: <domain type='kvm' id='2'>
Dec 06 10:08:03 compute-0 nova_compute[254819]:   <name>instance-00000003</name>
Dec 06 10:08:03 compute-0 nova_compute[254819]:   <uuid>2ef62e22-52fc-44f3-9964-8dc9b3c20686</uuid>
Dec 06 10:08:03 compute-0 nova_compute[254819]:   <metadata>
Dec 06 10:08:03 compute-0 nova_compute[254819]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 06 10:08:03 compute-0 nova_compute[254819]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:   <nova:name>tempest-TestNetworkBasicOps-server-1205802956</nova:name>
Dec 06 10:08:03 compute-0 nova_compute[254819]:   <nova:creationTime>2025-12-06 10:08:01</nova:creationTime>
Dec 06 10:08:03 compute-0 nova_compute[254819]:   <nova:flavor name="m1.nano">
Dec 06 10:08:03 compute-0 nova_compute[254819]:     <nova:memory>128</nova:memory>
Dec 06 10:08:03 compute-0 nova_compute[254819]:     <nova:disk>1</nova:disk>
Dec 06 10:08:03 compute-0 nova_compute[254819]:     <nova:swap>0</nova:swap>
Dec 06 10:08:03 compute-0 nova_compute[254819]:     <nova:ephemeral>0</nova:ephemeral>
Dec 06 10:08:03 compute-0 nova_compute[254819]:     <nova:vcpus>1</nova:vcpus>
Dec 06 10:08:03 compute-0 nova_compute[254819]:   </nova:flavor>
Dec 06 10:08:03 compute-0 nova_compute[254819]:   <nova:owner>
Dec 06 10:08:03 compute-0 nova_compute[254819]:     <nova:user uuid="03615580775245e6ae335ee9d785611f">tempest-TestNetworkBasicOps-1971100882-project-member</nova:user>
Dec 06 10:08:03 compute-0 nova_compute[254819]:     <nova:project uuid="92b402c8d3e2476abc98be42a1e6d34e">tempest-TestNetworkBasicOps-1971100882</nova:project>
Dec 06 10:08:03 compute-0 nova_compute[254819]:   </nova:owner>
Dec 06 10:08:03 compute-0 nova_compute[254819]:   <nova:root type="image" uuid="9489b8a5-a798-4e26-87f9-59bb1eb2e6fd"/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:   <nova:ports>
Dec 06 10:08:03 compute-0 nova_compute[254819]:     <nova:port uuid="a7f5880e-0fb8-4f37-8a6c-7f0e8558dcf7">
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:     </nova:port>
Dec 06 10:08:03 compute-0 nova_compute[254819]:   </nova:ports>
Dec 06 10:08:03 compute-0 nova_compute[254819]: </nova:instance>
Dec 06 10:08:03 compute-0 nova_compute[254819]:   </metadata>
Dec 06 10:08:03 compute-0 nova_compute[254819]:   <memory unit='KiB'>131072</memory>
Dec 06 10:08:03 compute-0 nova_compute[254819]:   <currentMemory unit='KiB'>131072</currentMemory>
Dec 06 10:08:03 compute-0 nova_compute[254819]:   <vcpu placement='static'>1</vcpu>
Dec 06 10:08:03 compute-0 nova_compute[254819]:   <resource>
Dec 06 10:08:03 compute-0 nova_compute[254819]:     <partition>/machine</partition>
Dec 06 10:08:03 compute-0 nova_compute[254819]:   </resource>
Dec 06 10:08:03 compute-0 nova_compute[254819]:   <sysinfo type='smbios'>
Dec 06 10:08:03 compute-0 nova_compute[254819]:     <system>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <entry name='manufacturer'>RDO</entry>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <entry name='product'>OpenStack Compute</entry>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <entry name='serial'>2ef62e22-52fc-44f3-9964-8dc9b3c20686</entry>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <entry name='uuid'>2ef62e22-52fc-44f3-9964-8dc9b3c20686</entry>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <entry name='family'>Virtual Machine</entry>
Dec 06 10:08:03 compute-0 nova_compute[254819]:     </system>
Dec 06 10:08:03 compute-0 nova_compute[254819]:   </sysinfo>
Dec 06 10:08:03 compute-0 nova_compute[254819]:   <os>
Dec 06 10:08:03 compute-0 nova_compute[254819]:     <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Dec 06 10:08:03 compute-0 nova_compute[254819]:     <boot dev='hd'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:     <smbios mode='sysinfo'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:   </os>
Dec 06 10:08:03 compute-0 nova_compute[254819]:   <features>
Dec 06 10:08:03 compute-0 nova_compute[254819]:     <acpi/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:     <apic/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:     <vmcoreinfo state='on'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:   </features>
Dec 06 10:08:03 compute-0 nova_compute[254819]:   <cpu mode='custom' match='exact' check='full'>
Dec 06 10:08:03 compute-0 nova_compute[254819]:     <model fallback='forbid'>EPYC-Rome</model>
Dec 06 10:08:03 compute-0 nova_compute[254819]:     <vendor>AMD</vendor>
Dec 06 10:08:03 compute-0 nova_compute[254819]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:     <feature policy='require' name='x2apic'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:     <feature policy='require' name='tsc-deadline'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:     <feature policy='require' name='hypervisor'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:     <feature policy='require' name='tsc_adjust'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:     <feature policy='require' name='spec-ctrl'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:     <feature policy='require' name='stibp'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:     <feature policy='require' name='ssbd'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:     <feature policy='require' name='cmp_legacy'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:     <feature policy='require' name='overflow-recov'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:     <feature policy='require' name='succor'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:     <feature policy='require' name='ibrs'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:     <feature policy='require' name='amd-ssbd'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:     <feature policy='require' name='virt-ssbd'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:     <feature policy='disable' name='lbrv'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:     <feature policy='disable' name='tsc-scale'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:     <feature policy='disable' name='vmcb-clean'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:     <feature policy='disable' name='flushbyasid'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:     <feature policy='disable' name='pause-filter'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:     <feature policy='disable' name='pfthreshold'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:     <feature policy='disable' name='svme-addr-chk'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:     <feature policy='require' name='lfence-always-serializing'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:     <feature policy='disable' name='xsaves'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:     <feature policy='disable' name='svm'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:     <feature policy='require' name='topoext'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:     <feature policy='disable' name='npt'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:     <feature policy='disable' name='nrip-save'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:   </cpu>
Dec 06 10:08:03 compute-0 nova_compute[254819]:   <clock offset='utc'>
Dec 06 10:08:03 compute-0 nova_compute[254819]:     <timer name='pit' tickpolicy='delay'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:     <timer name='rtc' tickpolicy='catchup'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:     <timer name='hpet' present='no'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:   </clock>
Dec 06 10:08:03 compute-0 nova_compute[254819]:   <on_poweroff>destroy</on_poweroff>
Dec 06 10:08:03 compute-0 nova_compute[254819]:   <on_reboot>restart</on_reboot>
Dec 06 10:08:03 compute-0 nova_compute[254819]:   <on_crash>destroy</on_crash>
Dec 06 10:08:03 compute-0 nova_compute[254819]:   <devices>
Dec 06 10:08:03 compute-0 nova_compute[254819]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Dec 06 10:08:03 compute-0 nova_compute[254819]:     <disk type='network' device='disk'>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <driver name='qemu' type='raw' cache='none'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <auth username='openstack'>
Dec 06 10:08:03 compute-0 nova_compute[254819]:         <secret type='ceph' uuid='5ecd3f74-dade-5fc4-92ce-8950ae424258'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       </auth>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <source protocol='rbd' name='vms/2ef62e22-52fc-44f3-9964-8dc9b3c20686_disk' index='2'>
Dec 06 10:08:03 compute-0 nova_compute[254819]:         <host name='192.168.122.100' port='6789'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:         <host name='192.168.122.102' port='6789'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:         <host name='192.168.122.101' port='6789'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       </source>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <target dev='vda' bus='virtio'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <alias name='virtio-disk0'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:     </disk>
Dec 06 10:08:03 compute-0 nova_compute[254819]:     <disk type='network' device='cdrom'>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <driver name='qemu' type='raw' cache='none'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <auth username='openstack'>
Dec 06 10:08:03 compute-0 nova_compute[254819]:         <secret type='ceph' uuid='5ecd3f74-dade-5fc4-92ce-8950ae424258'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       </auth>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <source protocol='rbd' name='vms/2ef62e22-52fc-44f3-9964-8dc9b3c20686_disk.config' index='1'>
Dec 06 10:08:03 compute-0 nova_compute[254819]:         <host name='192.168.122.100' port='6789'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:         <host name='192.168.122.102' port='6789'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:         <host name='192.168.122.101' port='6789'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       </source>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <target dev='sda' bus='sata'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <readonly/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <alias name='sata0-0-0'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:     </disk>
Dec 06 10:08:03 compute-0 nova_compute[254819]:     <controller type='pci' index='0' model='pcie-root'>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <alias name='pcie.0'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:     </controller>
Dec 06 10:08:03 compute-0 nova_compute[254819]:     <controller type='pci' index='1' model='pcie-root-port'>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <model name='pcie-root-port'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <target chassis='1' port='0x10'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <alias name='pci.1'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:     </controller>
Dec 06 10:08:03 compute-0 nova_compute[254819]:     <controller type='pci' index='2' model='pcie-root-port'>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <model name='pcie-root-port'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <target chassis='2' port='0x11'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <alias name='pci.2'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:     </controller>
Dec 06 10:08:03 compute-0 nova_compute[254819]:     <controller type='pci' index='3' model='pcie-root-port'>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <model name='pcie-root-port'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <target chassis='3' port='0x12'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <alias name='pci.3'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:     </controller>
Dec 06 10:08:03 compute-0 nova_compute[254819]:     <controller type='pci' index='4' model='pcie-root-port'>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <model name='pcie-root-port'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <target chassis='4' port='0x13'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <alias name='pci.4'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:     </controller>
Dec 06 10:08:03 compute-0 nova_compute[254819]:     <controller type='pci' index='5' model='pcie-root-port'>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <model name='pcie-root-port'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <target chassis='5' port='0x14'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <alias name='pci.5'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:     </controller>
Dec 06 10:08:03 compute-0 nova_compute[254819]:     <controller type='pci' index='6' model='pcie-root-port'>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <model name='pcie-root-port'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <target chassis='6' port='0x15'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <alias name='pci.6'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:     </controller>
Dec 06 10:08:03 compute-0 nova_compute[254819]:     <controller type='pci' index='7' model='pcie-root-port'>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <model name='pcie-root-port'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <target chassis='7' port='0x16'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <alias name='pci.7'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:     </controller>
Dec 06 10:08:03 compute-0 nova_compute[254819]:     <controller type='pci' index='8' model='pcie-root-port'>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <model name='pcie-root-port'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <target chassis='8' port='0x17'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <alias name='pci.8'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:     </controller>
Dec 06 10:08:03 compute-0 nova_compute[254819]:     <controller type='pci' index='9' model='pcie-root-port'>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <model name='pcie-root-port'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <target chassis='9' port='0x18'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <alias name='pci.9'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:     </controller>
Dec 06 10:08:03 compute-0 nova_compute[254819]:     <controller type='pci' index='10' model='pcie-root-port'>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <model name='pcie-root-port'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <target chassis='10' port='0x19'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <alias name='pci.10'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:     </controller>
Dec 06 10:08:03 compute-0 nova_compute[254819]:     <controller type='pci' index='11' model='pcie-root-port'>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <model name='pcie-root-port'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <target chassis='11' port='0x1a'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <alias name='pci.11'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:     </controller>
Dec 06 10:08:03 compute-0 nova_compute[254819]:     <controller type='pci' index='12' model='pcie-root-port'>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <model name='pcie-root-port'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <target chassis='12' port='0x1b'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <alias name='pci.12'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:     </controller>
Dec 06 10:08:03 compute-0 nova_compute[254819]:     <controller type='pci' index='13' model='pcie-root-port'>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <model name='pcie-root-port'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <target chassis='13' port='0x1c'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <alias name='pci.13'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:     </controller>
Dec 06 10:08:03 compute-0 nova_compute[254819]:     <controller type='pci' index='14' model='pcie-root-port'>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <model name='pcie-root-port'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <target chassis='14' port='0x1d'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <alias name='pci.14'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:     </controller>
Dec 06 10:08:03 compute-0 nova_compute[254819]:     <controller type='pci' index='15' model='pcie-root-port'>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <model name='pcie-root-port'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <target chassis='15' port='0x1e'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <alias name='pci.15'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:     </controller>
Dec 06 10:08:03 compute-0 nova_compute[254819]:     <controller type='pci' index='16' model='pcie-root-port'>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <model name='pcie-root-port'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <target chassis='16' port='0x1f'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <alias name='pci.16'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:     </controller>
Dec 06 10:08:03 compute-0 nova_compute[254819]:     <controller type='pci' index='17' model='pcie-root-port'>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <model name='pcie-root-port'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <target chassis='17' port='0x20'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <alias name='pci.17'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:     </controller>
Dec 06 10:08:03 compute-0 nova_compute[254819]:     <controller type='pci' index='18' model='pcie-root-port'>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <model name='pcie-root-port'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <target chassis='18' port='0x21'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <alias name='pci.18'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:     </controller>
Dec 06 10:08:03 compute-0 nova_compute[254819]:     <controller type='pci' index='19' model='pcie-root-port'>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <model name='pcie-root-port'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <target chassis='19' port='0x22'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <alias name='pci.19'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:     </controller>
Dec 06 10:08:03 compute-0 nova_compute[254819]:     <controller type='pci' index='20' model='pcie-root-port'>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <model name='pcie-root-port'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <target chassis='20' port='0x23'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <alias name='pci.20'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:     </controller>
Dec 06 10:08:03 compute-0 nova_compute[254819]:     <controller type='pci' index='21' model='pcie-root-port'>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <model name='pcie-root-port'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <target chassis='21' port='0x24'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <alias name='pci.21'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:     </controller>
Dec 06 10:08:03 compute-0 nova_compute[254819]:     <controller type='pci' index='22' model='pcie-root-port'>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <model name='pcie-root-port'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <target chassis='22' port='0x25'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <alias name='pci.22'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:     </controller>
Dec 06 10:08:03 compute-0 nova_compute[254819]:     <controller type='pci' index='23' model='pcie-root-port'>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <model name='pcie-root-port'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <target chassis='23' port='0x26'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <alias name='pci.23'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:     </controller>
Dec 06 10:08:03 compute-0 nova_compute[254819]:     <controller type='pci' index='24' model='pcie-root-port'>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <model name='pcie-root-port'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <target chassis='24' port='0x27'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <alias name='pci.24'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:     </controller>
Dec 06 10:08:03 compute-0 nova_compute[254819]:     <controller type='pci' index='25' model='pcie-root-port'>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <model name='pcie-root-port'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <target chassis='25' port='0x28'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <alias name='pci.25'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:     </controller>
Dec 06 10:08:03 compute-0 nova_compute[254819]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <model name='pcie-pci-bridge'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <alias name='pci.26'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:     </controller>
Dec 06 10:08:03 compute-0 nova_compute[254819]:     <controller type='usb' index='0' model='piix3-uhci'>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <alias name='usb'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:     </controller>
Dec 06 10:08:03 compute-0 nova_compute[254819]:     <controller type='sata' index='0'>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <alias name='ide'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:     </controller>
Dec 06 10:08:03 compute-0 nova_compute[254819]:     <interface type='ethernet'>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <mac address='fa:16:3e:6c:29:20'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <target dev='tapa7f5880e-0f'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <model type='virtio'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <driver name='vhost' rx_queue_size='512'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <mtu size='1442'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <alias name='net0'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:     </interface>
Dec 06 10:08:03 compute-0 nova_compute[254819]:     <serial type='pty'>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <source path='/dev/pts/0'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <log file='/var/lib/nova/instances/2ef62e22-52fc-44f3-9964-8dc9b3c20686/console.log' append='off'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <target type='isa-serial' port='0'>
Dec 06 10:08:03 compute-0 nova_compute[254819]:         <model name='isa-serial'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       </target>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <alias name='serial0'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:     </serial>
Dec 06 10:08:03 compute-0 nova_compute[254819]:     <console type='pty' tty='/dev/pts/0'>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <source path='/dev/pts/0'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <log file='/var/lib/nova/instances/2ef62e22-52fc-44f3-9964-8dc9b3c20686/console.log' append='off'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <target type='serial' port='0'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <alias name='serial0'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:     </console>
Dec 06 10:08:03 compute-0 nova_compute[254819]:     <input type='tablet' bus='usb'>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <alias name='input0'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <address type='usb' bus='0' port='1'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:     </input>
Dec 06 10:08:03 compute-0 nova_compute[254819]:     <input type='mouse' bus='ps2'>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <alias name='input1'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:     </input>
Dec 06 10:08:03 compute-0 nova_compute[254819]:     <input type='keyboard' bus='ps2'>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <alias name='input2'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:     </input>
Dec 06 10:08:03 compute-0 nova_compute[254819]:     <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <listen type='address' address='::0'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:     </graphics>
Dec 06 10:08:03 compute-0 nova_compute[254819]:     <audio id='1' type='none'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:     <video>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <model type='virtio' heads='1' primary='yes'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <alias name='video0'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:     </video>
Dec 06 10:08:03 compute-0 nova_compute[254819]:     <watchdog model='itco' action='reset'>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <alias name='watchdog0'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:     </watchdog>
Dec 06 10:08:03 compute-0 nova_compute[254819]:     <memballoon model='virtio'>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <stats period='10'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <alias name='balloon0'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:     </memballoon>
Dec 06 10:08:03 compute-0 nova_compute[254819]:     <rng model='virtio'>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <backend model='random'>/dev/urandom</backend>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <alias name='rng0'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:     </rng>
Dec 06 10:08:03 compute-0 nova_compute[254819]:   </devices>
Dec 06 10:08:03 compute-0 nova_compute[254819]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Dec 06 10:08:03 compute-0 nova_compute[254819]:     <label>system_u:system_r:svirt_t:s0:c237,c686</label>
Dec 06 10:08:03 compute-0 nova_compute[254819]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c237,c686</imagelabel>
Dec 06 10:08:03 compute-0 nova_compute[254819]:   </seclabel>
Dec 06 10:08:03 compute-0 nova_compute[254819]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Dec 06 10:08:03 compute-0 nova_compute[254819]:     <label>+107:+107</label>
Dec 06 10:08:03 compute-0 nova_compute[254819]:     <imagelabel>+107:+107</imagelabel>
Dec 06 10:08:03 compute-0 nova_compute[254819]:   </seclabel>
Dec 06 10:08:03 compute-0 nova_compute[254819]: </domain>
Dec 06 10:08:03 compute-0 nova_compute[254819]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Dec 06 10:08:03 compute-0 nova_compute[254819]: 2025-12-06 10:08:03.961 254824 DEBUG nova.virt.libvirt.guest [req-d5f3ab94-7021-4442-abb5-1b27eef2404e req-22acac04-f57b-4533-a4e8-72d332eabde4 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:9c:56:e3"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapbf396b58-3b"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Dec 06 10:08:03 compute-0 nova_compute[254819]: 2025-12-06 10:08:03.966 254824 DEBUG nova.virt.libvirt.guest [req-d5f3ab94-7021-4442-abb5-1b27eef2404e req-22acac04-f57b-4533-a4e8-72d332eabde4 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:9c:56:e3"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapbf396b58-3b"/></interface> not found in domain: <domain type='kvm' id='2'>
Dec 06 10:08:03 compute-0 nova_compute[254819]:   <name>instance-00000003</name>
Dec 06 10:08:03 compute-0 nova_compute[254819]:   <uuid>2ef62e22-52fc-44f3-9964-8dc9b3c20686</uuid>
Dec 06 10:08:03 compute-0 nova_compute[254819]:   <metadata>
Dec 06 10:08:03 compute-0 nova_compute[254819]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 06 10:08:03 compute-0 nova_compute[254819]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:   <nova:name>tempest-TestNetworkBasicOps-server-1205802956</nova:name>
Dec 06 10:08:03 compute-0 nova_compute[254819]:   <nova:creationTime>2025-12-06 10:08:01</nova:creationTime>
Dec 06 10:08:03 compute-0 nova_compute[254819]:   <nova:flavor name="m1.nano">
Dec 06 10:08:03 compute-0 nova_compute[254819]:     <nova:memory>128</nova:memory>
Dec 06 10:08:03 compute-0 nova_compute[254819]:     <nova:disk>1</nova:disk>
Dec 06 10:08:03 compute-0 nova_compute[254819]:     <nova:swap>0</nova:swap>
Dec 06 10:08:03 compute-0 nova_compute[254819]:     <nova:ephemeral>0</nova:ephemeral>
Dec 06 10:08:03 compute-0 nova_compute[254819]:     <nova:vcpus>1</nova:vcpus>
Dec 06 10:08:03 compute-0 nova_compute[254819]:   </nova:flavor>
Dec 06 10:08:03 compute-0 nova_compute[254819]:   <nova:owner>
Dec 06 10:08:03 compute-0 nova_compute[254819]:     <nova:user uuid="03615580775245e6ae335ee9d785611f">tempest-TestNetworkBasicOps-1971100882-project-member</nova:user>
Dec 06 10:08:03 compute-0 nova_compute[254819]:     <nova:project uuid="92b402c8d3e2476abc98be42a1e6d34e">tempest-TestNetworkBasicOps-1971100882</nova:project>
Dec 06 10:08:03 compute-0 nova_compute[254819]:   </nova:owner>
Dec 06 10:08:03 compute-0 nova_compute[254819]:   <nova:root type="image" uuid="9489b8a5-a798-4e26-87f9-59bb1eb2e6fd"/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:   <nova:ports>
Dec 06 10:08:03 compute-0 nova_compute[254819]:     <nova:port uuid="a7f5880e-0fb8-4f37-8a6c-7f0e8558dcf7">
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:     </nova:port>
Dec 06 10:08:03 compute-0 nova_compute[254819]:   </nova:ports>
Dec 06 10:08:03 compute-0 nova_compute[254819]: </nova:instance>
Dec 06 10:08:03 compute-0 nova_compute[254819]:   </metadata>
Dec 06 10:08:03 compute-0 nova_compute[254819]:   <memory unit='KiB'>131072</memory>
Dec 06 10:08:03 compute-0 nova_compute[254819]:   <currentMemory unit='KiB'>131072</currentMemory>
Dec 06 10:08:03 compute-0 nova_compute[254819]:   <vcpu placement='static'>1</vcpu>
Dec 06 10:08:03 compute-0 nova_compute[254819]:   <resource>
Dec 06 10:08:03 compute-0 nova_compute[254819]:     <partition>/machine</partition>
Dec 06 10:08:03 compute-0 nova_compute[254819]:   </resource>
Dec 06 10:08:03 compute-0 nova_compute[254819]:   <sysinfo type='smbios'>
Dec 06 10:08:03 compute-0 nova_compute[254819]:     <system>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <entry name='manufacturer'>RDO</entry>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <entry name='product'>OpenStack Compute</entry>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <entry name='serial'>2ef62e22-52fc-44f3-9964-8dc9b3c20686</entry>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <entry name='uuid'>2ef62e22-52fc-44f3-9964-8dc9b3c20686</entry>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <entry name='family'>Virtual Machine</entry>
Dec 06 10:08:03 compute-0 nova_compute[254819]:     </system>
Dec 06 10:08:03 compute-0 nova_compute[254819]:   </sysinfo>
Dec 06 10:08:03 compute-0 nova_compute[254819]:   <os>
Dec 06 10:08:03 compute-0 nova_compute[254819]:     <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Dec 06 10:08:03 compute-0 nova_compute[254819]:     <boot dev='hd'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:     <smbios mode='sysinfo'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:   </os>
Dec 06 10:08:03 compute-0 nova_compute[254819]:   <features>
Dec 06 10:08:03 compute-0 nova_compute[254819]:     <acpi/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:     <apic/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:     <vmcoreinfo state='on'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:   </features>
Dec 06 10:08:03 compute-0 nova_compute[254819]:   <cpu mode='custom' match='exact' check='full'>
Dec 06 10:08:03 compute-0 nova_compute[254819]:     <model fallback='forbid'>EPYC-Rome</model>
Dec 06 10:08:03 compute-0 nova_compute[254819]:     <vendor>AMD</vendor>
Dec 06 10:08:03 compute-0 nova_compute[254819]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:     <feature policy='require' name='x2apic'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:     <feature policy='require' name='tsc-deadline'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:     <feature policy='require' name='hypervisor'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:     <feature policy='require' name='tsc_adjust'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:     <feature policy='require' name='spec-ctrl'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:     <feature policy='require' name='stibp'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:     <feature policy='require' name='ssbd'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:     <feature policy='require' name='cmp_legacy'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:     <feature policy='require' name='overflow-recov'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:     <feature policy='require' name='succor'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:     <feature policy='require' name='ibrs'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:     <feature policy='require' name='amd-ssbd'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:     <feature policy='require' name='virt-ssbd'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:     <feature policy='disable' name='lbrv'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:     <feature policy='disable' name='tsc-scale'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:     <feature policy='disable' name='vmcb-clean'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:     <feature policy='disable' name='flushbyasid'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:     <feature policy='disable' name='pause-filter'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:     <feature policy='disable' name='pfthreshold'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:     <feature policy='disable' name='svme-addr-chk'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:     <feature policy='require' name='lfence-always-serializing'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:     <feature policy='disable' name='xsaves'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:     <feature policy='disable' name='svm'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:     <feature policy='require' name='topoext'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:     <feature policy='disable' name='npt'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:     <feature policy='disable' name='nrip-save'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:   </cpu>
Dec 06 10:08:03 compute-0 nova_compute[254819]:   <clock offset='utc'>
Dec 06 10:08:03 compute-0 nova_compute[254819]:     <timer name='pit' tickpolicy='delay'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:     <timer name='rtc' tickpolicy='catchup'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:     <timer name='hpet' present='no'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:   </clock>
Dec 06 10:08:03 compute-0 nova_compute[254819]:   <on_poweroff>destroy</on_poweroff>
Dec 06 10:08:03 compute-0 nova_compute[254819]:   <on_reboot>restart</on_reboot>
Dec 06 10:08:03 compute-0 nova_compute[254819]:   <on_crash>destroy</on_crash>
Dec 06 10:08:03 compute-0 nova_compute[254819]:   <devices>
Dec 06 10:08:03 compute-0 nova_compute[254819]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Dec 06 10:08:03 compute-0 nova_compute[254819]:     <disk type='network' device='disk'>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <driver name='qemu' type='raw' cache='none'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <auth username='openstack'>
Dec 06 10:08:03 compute-0 nova_compute[254819]:         <secret type='ceph' uuid='5ecd3f74-dade-5fc4-92ce-8950ae424258'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       </auth>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <source protocol='rbd' name='vms/2ef62e22-52fc-44f3-9964-8dc9b3c20686_disk' index='2'>
Dec 06 10:08:03 compute-0 nova_compute[254819]:         <host name='192.168.122.100' port='6789'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:         <host name='192.168.122.102' port='6789'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:         <host name='192.168.122.101' port='6789'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       </source>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <target dev='vda' bus='virtio'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <alias name='virtio-disk0'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:     </disk>
Dec 06 10:08:03 compute-0 nova_compute[254819]:     <disk type='network' device='cdrom'>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <driver name='qemu' type='raw' cache='none'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <auth username='openstack'>
Dec 06 10:08:03 compute-0 nova_compute[254819]:         <secret type='ceph' uuid='5ecd3f74-dade-5fc4-92ce-8950ae424258'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       </auth>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <source protocol='rbd' name='vms/2ef62e22-52fc-44f3-9964-8dc9b3c20686_disk.config' index='1'>
Dec 06 10:08:03 compute-0 nova_compute[254819]:         <host name='192.168.122.100' port='6789'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:         <host name='192.168.122.102' port='6789'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:         <host name='192.168.122.101' port='6789'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       </source>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <target dev='sda' bus='sata'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <readonly/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <alias name='sata0-0-0'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:     </disk>
Dec 06 10:08:03 compute-0 nova_compute[254819]:     <controller type='pci' index='0' model='pcie-root'>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <alias name='pcie.0'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:     </controller>
Dec 06 10:08:03 compute-0 nova_compute[254819]:     <controller type='pci' index='1' model='pcie-root-port'>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <model name='pcie-root-port'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <target chassis='1' port='0x10'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <alias name='pci.1'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:     </controller>
Dec 06 10:08:03 compute-0 nova_compute[254819]:     <controller type='pci' index='2' model='pcie-root-port'>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <model name='pcie-root-port'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <target chassis='2' port='0x11'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <alias name='pci.2'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:     </controller>
Dec 06 10:08:03 compute-0 nova_compute[254819]:     <controller type='pci' index='3' model='pcie-root-port'>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <model name='pcie-root-port'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <target chassis='3' port='0x12'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <alias name='pci.3'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:     </controller>
Dec 06 10:08:03 compute-0 nova_compute[254819]:     <controller type='pci' index='4' model='pcie-root-port'>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <model name='pcie-root-port'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <target chassis='4' port='0x13'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <alias name='pci.4'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:     </controller>
Dec 06 10:08:03 compute-0 nova_compute[254819]:     <controller type='pci' index='5' model='pcie-root-port'>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <model name='pcie-root-port'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <target chassis='5' port='0x14'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <alias name='pci.5'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:     </controller>
Dec 06 10:08:03 compute-0 nova_compute[254819]:     <controller type='pci' index='6' model='pcie-root-port'>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <model name='pcie-root-port'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <target chassis='6' port='0x15'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <alias name='pci.6'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:     </controller>
Dec 06 10:08:03 compute-0 nova_compute[254819]:     <controller type='pci' index='7' model='pcie-root-port'>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <model name='pcie-root-port'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <target chassis='7' port='0x16'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <alias name='pci.7'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:     </controller>
Dec 06 10:08:03 compute-0 nova_compute[254819]:     <controller type='pci' index='8' model='pcie-root-port'>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <model name='pcie-root-port'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <target chassis='8' port='0x17'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <alias name='pci.8'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:     </controller>
Dec 06 10:08:03 compute-0 nova_compute[254819]:     <controller type='pci' index='9' model='pcie-root-port'>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <model name='pcie-root-port'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <target chassis='9' port='0x18'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <alias name='pci.9'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:     </controller>
Dec 06 10:08:03 compute-0 nova_compute[254819]:     <controller type='pci' index='10' model='pcie-root-port'>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <model name='pcie-root-port'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <target chassis='10' port='0x19'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <alias name='pci.10'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:     </controller>
Dec 06 10:08:03 compute-0 nova_compute[254819]:     <controller type='pci' index='11' model='pcie-root-port'>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <model name='pcie-root-port'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <target chassis='11' port='0x1a'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <alias name='pci.11'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:     </controller>
Dec 06 10:08:03 compute-0 nova_compute[254819]:     <controller type='pci' index='12' model='pcie-root-port'>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <model name='pcie-root-port'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <target chassis='12' port='0x1b'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <alias name='pci.12'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:     </controller>
Dec 06 10:08:03 compute-0 nova_compute[254819]:     <controller type='pci' index='13' model='pcie-root-port'>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <model name='pcie-root-port'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <target chassis='13' port='0x1c'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <alias name='pci.13'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:     </controller>
Dec 06 10:08:03 compute-0 nova_compute[254819]:     <controller type='pci' index='14' model='pcie-root-port'>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <model name='pcie-root-port'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <target chassis='14' port='0x1d'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <alias name='pci.14'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:     </controller>
Dec 06 10:08:03 compute-0 nova_compute[254819]:     <controller type='pci' index='15' model='pcie-root-port'>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <model name='pcie-root-port'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <target chassis='15' port='0x1e'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <alias name='pci.15'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:     </controller>
Dec 06 10:08:03 compute-0 nova_compute[254819]:     <controller type='pci' index='16' model='pcie-root-port'>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <model name='pcie-root-port'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <target chassis='16' port='0x1f'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <alias name='pci.16'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:     </controller>
Dec 06 10:08:03 compute-0 nova_compute[254819]:     <controller type='pci' index='17' model='pcie-root-port'>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <model name='pcie-root-port'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <target chassis='17' port='0x20'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <alias name='pci.17'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:     </controller>
Dec 06 10:08:03 compute-0 nova_compute[254819]:     <controller type='pci' index='18' model='pcie-root-port'>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <model name='pcie-root-port'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <target chassis='18' port='0x21'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <alias name='pci.18'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:     </controller>
Dec 06 10:08:03 compute-0 nova_compute[254819]:     <controller type='pci' index='19' model='pcie-root-port'>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <model name='pcie-root-port'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <target chassis='19' port='0x22'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <alias name='pci.19'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:     </controller>
Dec 06 10:08:03 compute-0 nova_compute[254819]:     <controller type='pci' index='20' model='pcie-root-port'>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <model name='pcie-root-port'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <target chassis='20' port='0x23'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <alias name='pci.20'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:     </controller>
Dec 06 10:08:03 compute-0 nova_compute[254819]:     <controller type='pci' index='21' model='pcie-root-port'>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <model name='pcie-root-port'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <target chassis='21' port='0x24'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <alias name='pci.21'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:     </controller>
Dec 06 10:08:03 compute-0 nova_compute[254819]:     <controller type='pci' index='22' model='pcie-root-port'>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <model name='pcie-root-port'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <target chassis='22' port='0x25'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <alias name='pci.22'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:     </controller>
Dec 06 10:08:03 compute-0 nova_compute[254819]:     <controller type='pci' index='23' model='pcie-root-port'>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <model name='pcie-root-port'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <target chassis='23' port='0x26'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <alias name='pci.23'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:     </controller>
Dec 06 10:08:03 compute-0 nova_compute[254819]:     <controller type='pci' index='24' model='pcie-root-port'>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <model name='pcie-root-port'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <target chassis='24' port='0x27'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <alias name='pci.24'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:     </controller>
Dec 06 10:08:03 compute-0 nova_compute[254819]:     <controller type='pci' index='25' model='pcie-root-port'>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <model name='pcie-root-port'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <target chassis='25' port='0x28'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <alias name='pci.25'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:     </controller>
Dec 06 10:08:03 compute-0 nova_compute[254819]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <model name='pcie-pci-bridge'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <alias name='pci.26'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:     </controller>
Dec 06 10:08:03 compute-0 nova_compute[254819]:     <controller type='usb' index='0' model='piix3-uhci'>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <alias name='usb'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:     </controller>
Dec 06 10:08:03 compute-0 nova_compute[254819]:     <controller type='sata' index='0'>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <alias name='ide'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:     </controller>
Dec 06 10:08:03 compute-0 nova_compute[254819]:     <interface type='ethernet'>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <mac address='fa:16:3e:6c:29:20'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <target dev='tapa7f5880e-0f'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <model type='virtio'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <driver name='vhost' rx_queue_size='512'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <mtu size='1442'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <alias name='net0'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:     </interface>
Dec 06 10:08:03 compute-0 nova_compute[254819]:     <serial type='pty'>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <source path='/dev/pts/0'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <log file='/var/lib/nova/instances/2ef62e22-52fc-44f3-9964-8dc9b3c20686/console.log' append='off'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <target type='isa-serial' port='0'>
Dec 06 10:08:03 compute-0 nova_compute[254819]:         <model name='isa-serial'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       </target>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <alias name='serial0'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:     </serial>
Dec 06 10:08:03 compute-0 nova_compute[254819]:     <console type='pty' tty='/dev/pts/0'>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <source path='/dev/pts/0'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <log file='/var/lib/nova/instances/2ef62e22-52fc-44f3-9964-8dc9b3c20686/console.log' append='off'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <target type='serial' port='0'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <alias name='serial0'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:     </console>
Dec 06 10:08:03 compute-0 nova_compute[254819]:     <input type='tablet' bus='usb'>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <alias name='input0'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <address type='usb' bus='0' port='1'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:     </input>
Dec 06 10:08:03 compute-0 nova_compute[254819]:     <input type='mouse' bus='ps2'>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <alias name='input1'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:     </input>
Dec 06 10:08:03 compute-0 nova_compute[254819]:     <input type='keyboard' bus='ps2'>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <alias name='input2'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:     </input>
Dec 06 10:08:03 compute-0 nova_compute[254819]:     <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <listen type='address' address='::0'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:     </graphics>
Dec 06 10:08:03 compute-0 nova_compute[254819]:     <audio id='1' type='none'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:     <video>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <model type='virtio' heads='1' primary='yes'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <alias name='video0'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:     </video>
Dec 06 10:08:03 compute-0 nova_compute[254819]:     <watchdog model='itco' action='reset'>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <alias name='watchdog0'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:     </watchdog>
Dec 06 10:08:03 compute-0 nova_compute[254819]:     <memballoon model='virtio'>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <stats period='10'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <alias name='balloon0'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:     </memballoon>
Dec 06 10:08:03 compute-0 nova_compute[254819]:     <rng model='virtio'>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <backend model='random'>/dev/urandom</backend>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <alias name='rng0'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:     </rng>
Dec 06 10:08:03 compute-0 nova_compute[254819]:   </devices>
Dec 06 10:08:03 compute-0 nova_compute[254819]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Dec 06 10:08:03 compute-0 nova_compute[254819]:     <label>system_u:system_r:svirt_t:s0:c237,c686</label>
Dec 06 10:08:03 compute-0 nova_compute[254819]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c237,c686</imagelabel>
Dec 06 10:08:03 compute-0 nova_compute[254819]:   </seclabel>
Dec 06 10:08:03 compute-0 nova_compute[254819]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Dec 06 10:08:03 compute-0 nova_compute[254819]:     <label>+107:+107</label>
Dec 06 10:08:03 compute-0 nova_compute[254819]:     <imagelabel>+107:+107</imagelabel>
Dec 06 10:08:03 compute-0 nova_compute[254819]:   </seclabel>
Dec 06 10:08:03 compute-0 nova_compute[254819]: </domain>
Dec 06 10:08:03 compute-0 nova_compute[254819]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Dec 06 10:08:03 compute-0 nova_compute[254819]: 2025-12-06 10:08:03.967 254824 WARNING nova.virt.libvirt.driver [req-d5f3ab94-7021-4442-abb5-1b27eef2404e req-22acac04-f57b-4533-a4e8-72d332eabde4 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 2ef62e22-52fc-44f3-9964-8dc9b3c20686] Detaching interface fa:16:3e:9c:56:e3 failed because the device is no longer found on the guest.: nova.exception.DeviceNotFound: Device 'tapbf396b58-3b' not found.
Dec 06 10:08:03 compute-0 nova_compute[254819]: 2025-12-06 10:08:03.969 254824 DEBUG nova.virt.libvirt.vif [req-d5f3ab94-7021-4442-abb5-1b27eef2404e req-22acac04-f57b-4533-a4e8-72d332eabde4 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T10:07:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1205802956',display_name='tempest-TestNetworkBasicOps-server-1205802956',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1205802956',id=3,image_ref='9489b8a5-a798-4e26-87f9-59bb1eb2e6fd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJ5T1qcHH05a9NmUaQjnoDRANzOfCWA0bQySUh/2laJiduU/bwXdkcdraO/GcO81J8j8CnPS5RyrjJyMRbGp/po0cthjI8Tgw893oNF7dd79URxvc2r73z8/7tKvZVwU9A==',key_name='tempest-TestNetworkBasicOps-2032054379',keypairs=<?>,launch_index=0,launched_at=2025-12-06T10:07:27Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='92b402c8d3e2476abc98be42a1e6d34e',ramdisk_id='',reservation_id='r-hrg57eo7',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='9489b8a5-a798-4e26-87f9-59bb1eb2e6fd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1971100882',owner_user_name='tempest-TestNetworkBasicOps-1971100882-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T10:07:27Z,user_data=None,user_id='03615580775245e6ae335ee9d785611f',uuid=2ef62e22-52fc-44f3-9964-8dc9b3c20686,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "bf396b58-3b48-44ae-92bd-e71275c9883c", "address": "fa:16:3e:9c:56:e3", "network": {"id": "b700d432-ed1c-4e29-8f64-6e35196305aa", "bridge": "br-int", "label": "tempest-network-smoke--1192945462", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.20", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf396b58-3b", "ovs_interfaceid": "bf396b58-3b48-44ae-92bd-e71275c9883c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 06 10:08:03 compute-0 nova_compute[254819]: 2025-12-06 10:08:03.970 254824 DEBUG nova.network.os_vif_util [req-d5f3ab94-7021-4442-abb5-1b27eef2404e req-22acac04-f57b-4533-a4e8-72d332eabde4 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Converting VIF {"id": "bf396b58-3b48-44ae-92bd-e71275c9883c", "address": "fa:16:3e:9c:56:e3", "network": {"id": "b700d432-ed1c-4e29-8f64-6e35196305aa", "bridge": "br-int", "label": "tempest-network-smoke--1192945462", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.20", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf396b58-3b", "ovs_interfaceid": "bf396b58-3b48-44ae-92bd-e71275c9883c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 10:08:03 compute-0 nova_compute[254819]: 2025-12-06 10:08:03.971 254824 DEBUG nova.network.os_vif_util [req-d5f3ab94-7021-4442-abb5-1b27eef2404e req-22acac04-f57b-4533-a4e8-72d332eabde4 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9c:56:e3,bridge_name='br-int',has_traffic_filtering=True,id=bf396b58-3b48-44ae-92bd-e71275c9883c,network=Network(b700d432-ed1c-4e29-8f64-6e35196305aa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbf396b58-3b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 10:08:03 compute-0 nova_compute[254819]: 2025-12-06 10:08:03.971 254824 DEBUG os_vif [req-d5f3ab94-7021-4442-abb5-1b27eef2404e req-22acac04-f57b-4533-a4e8-72d332eabde4 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:9c:56:e3,bridge_name='br-int',has_traffic_filtering=True,id=bf396b58-3b48-44ae-92bd-e71275c9883c,network=Network(b700d432-ed1c-4e29-8f64-6e35196305aa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbf396b58-3b') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 06 10:08:03 compute-0 nova_compute[254819]: 2025-12-06 10:08:03.974 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:08:03 compute-0 nova_compute[254819]: 2025-12-06 10:08:03.975 254824 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbf396b58-3b, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 10:08:03 compute-0 nova_compute[254819]: 2025-12-06 10:08:03.975 254824 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 10:08:03 compute-0 nova_compute[254819]: 2025-12-06 10:08:03.978 254824 INFO os_vif [req-d5f3ab94-7021-4442-abb5-1b27eef2404e req-22acac04-f57b-4533-a4e8-72d332eabde4 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:9c:56:e3,bridge_name='br-int',has_traffic_filtering=True,id=bf396b58-3b48-44ae-92bd-e71275c9883c,network=Network(b700d432-ed1c-4e29-8f64-6e35196305aa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbf396b58-3b')
Dec 06 10:08:03 compute-0 nova_compute[254819]: 2025-12-06 10:08:03.979 254824 DEBUG nova.virt.libvirt.guest [req-d5f3ab94-7021-4442-abb5-1b27eef2404e req-22acac04-f57b-4533-a4e8-72d332eabde4 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 06 10:08:03 compute-0 nova_compute[254819]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:   <nova:name>tempest-TestNetworkBasicOps-server-1205802956</nova:name>
Dec 06 10:08:03 compute-0 nova_compute[254819]:   <nova:creationTime>2025-12-06 10:08:03</nova:creationTime>
Dec 06 10:08:03 compute-0 nova_compute[254819]:   <nova:flavor name="m1.nano">
Dec 06 10:08:03 compute-0 nova_compute[254819]:     <nova:memory>128</nova:memory>
Dec 06 10:08:03 compute-0 nova_compute[254819]:     <nova:disk>1</nova:disk>
Dec 06 10:08:03 compute-0 nova_compute[254819]:     <nova:swap>0</nova:swap>
Dec 06 10:08:03 compute-0 nova_compute[254819]:     <nova:ephemeral>0</nova:ephemeral>
Dec 06 10:08:03 compute-0 nova_compute[254819]:     <nova:vcpus>1</nova:vcpus>
Dec 06 10:08:03 compute-0 nova_compute[254819]:   </nova:flavor>
Dec 06 10:08:03 compute-0 nova_compute[254819]:   <nova:owner>
Dec 06 10:08:03 compute-0 nova_compute[254819]:     <nova:user uuid="03615580775245e6ae335ee9d785611f">tempest-TestNetworkBasicOps-1971100882-project-member</nova:user>
Dec 06 10:08:03 compute-0 nova_compute[254819]:     <nova:project uuid="92b402c8d3e2476abc98be42a1e6d34e">tempest-TestNetworkBasicOps-1971100882</nova:project>
Dec 06 10:08:03 compute-0 nova_compute[254819]:   </nova:owner>
Dec 06 10:08:03 compute-0 nova_compute[254819]:   <nova:root type="image" uuid="9489b8a5-a798-4e26-87f9-59bb1eb2e6fd"/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:   <nova:ports>
Dec 06 10:08:03 compute-0 nova_compute[254819]:     <nova:port uuid="a7f5880e-0fb8-4f37-8a6c-7f0e8558dcf7">
Dec 06 10:08:03 compute-0 nova_compute[254819]:       <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Dec 06 10:08:03 compute-0 nova_compute[254819]:     </nova:port>
Dec 06 10:08:03 compute-0 nova_compute[254819]:   </nova:ports>
Dec 06 10:08:03 compute-0 nova_compute[254819]: </nova:instance>
Dec 06 10:08:03 compute-0 nova_compute[254819]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Dec 06 10:08:04 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:08:04 : epoch 69340037 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f32a0002730 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:08:04 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:08:04 : epoch 69340037 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f32a40034e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:08:04 compute-0 nova_compute[254819]: 2025-12-06 10:08:04.508 254824 INFO nova.network.neutron [None req-dc5f9ffd-4751-4397-a315-7e306ced7630 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 2ef62e22-52fc-44f3-9964-8dc9b3c20686] Port bf396b58-3b48-44ae-92bd-e71275c9883c from network info_cache is no longer associated with instance in Neutron. Removing from network info_cache.
Dec 06 10:08:04 compute-0 nova_compute[254819]: 2025-12-06 10:08:04.509 254824 DEBUG nova.network.neutron [None req-dc5f9ffd-4751-4397-a315-7e306ced7630 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 2ef62e22-52fc-44f3-9964-8dc9b3c20686] Updating instance_info_cache with network_info: [{"id": "a7f5880e-0fb8-4f37-8a6c-7f0e8558dcf7", "address": "fa:16:3e:6c:29:20", "network": {"id": "4d9eb8be-73ac-4cfc-8821-fb41b5868957", "bridge": "br-int", "label": "tempest-network-smoke--165851366", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.201", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa7f5880e-0f", "ovs_interfaceid": "a7f5880e-0fb8-4f37-8a6c-7f0e8558dcf7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 10:08:04 compute-0 nova_compute[254819]: 2025-12-06 10:08:04.532 254824 DEBUG oslo_concurrency.lockutils [None req-dc5f9ffd-4751-4397-a315-7e306ced7630 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Releasing lock "refresh_cache-2ef62e22-52fc-44f3-9964-8dc9b3c20686" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 10:08:04 compute-0 nova_compute[254819]: 2025-12-06 10:08:04.574 254824 DEBUG oslo_concurrency.lockutils [None req-dc5f9ffd-4751-4397-a315-7e306ced7630 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lock "interface-2ef62e22-52fc-44f3-9964-8dc9b3c20686-bf396b58-3b48-44ae-92bd-e71275c9883c" "released" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: held 3.185s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:08:04 compute-0 ceph-mon[74327]: pgmap v806: 337 pgs: 337 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 8.2 KiB/s rd, 3.0 KiB/s wr, 1 op/s
Dec 06 10:08:04 compute-0 nova_compute[254819]: 2025-12-06 10:08:04.846 254824 DEBUG nova.compute.manager [req-61d9d951-d5e6-485c-aca1-236719b3219b req-08c57f5b-1416-4663-a89b-8f183405a302 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 2ef62e22-52fc-44f3-9964-8dc9b3c20686] Received event network-changed-a7f5880e-0fb8-4f37-8a6c-7f0e8558dcf7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 10:08:04 compute-0 nova_compute[254819]: 2025-12-06 10:08:04.846 254824 DEBUG nova.compute.manager [req-61d9d951-d5e6-485c-aca1-236719b3219b req-08c57f5b-1416-4663-a89b-8f183405a302 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 2ef62e22-52fc-44f3-9964-8dc9b3c20686] Refreshing instance network info cache due to event network-changed-a7f5880e-0fb8-4f37-8a6c-7f0e8558dcf7. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 10:08:04 compute-0 nova_compute[254819]: 2025-12-06 10:08:04.847 254824 DEBUG oslo_concurrency.lockutils [req-61d9d951-d5e6-485c-aca1-236719b3219b req-08c57f5b-1416-4663-a89b-8f183405a302 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Acquiring lock "refresh_cache-2ef62e22-52fc-44f3-9964-8dc9b3c20686" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 10:08:04 compute-0 nova_compute[254819]: 2025-12-06 10:08:04.847 254824 DEBUG oslo_concurrency.lockutils [req-61d9d951-d5e6-485c-aca1-236719b3219b req-08c57f5b-1416-4663-a89b-8f183405a302 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Acquired lock "refresh_cache-2ef62e22-52fc-44f3-9964-8dc9b3c20686" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 10:08:04 compute-0 nova_compute[254819]: 2025-12-06 10:08:04.847 254824 DEBUG nova.network.neutron [req-61d9d951-d5e6-485c-aca1-236719b3219b req-08c57f5b-1416-4663-a89b-8f183405a302 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 2ef62e22-52fc-44f3-9964-8dc9b3c20686] Refreshing network info cache for port a7f5880e-0fb8-4f37-8a6c-7f0e8558dcf7 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 06 10:08:04 compute-0 nova_compute[254819]: 2025-12-06 10:08:04.901 254824 DEBUG oslo_concurrency.lockutils [None req-7ff82df7-7550-40dc-b57e-ce7ea15b5b1a 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Acquiring lock "2ef62e22-52fc-44f3-9964-8dc9b3c20686" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:08:04 compute-0 nova_compute[254819]: 2025-12-06 10:08:04.902 254824 DEBUG oslo_concurrency.lockutils [None req-7ff82df7-7550-40dc-b57e-ce7ea15b5b1a 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lock "2ef62e22-52fc-44f3-9964-8dc9b3c20686" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:08:04 compute-0 nova_compute[254819]: 2025-12-06 10:08:04.902 254824 DEBUG oslo_concurrency.lockutils [None req-7ff82df7-7550-40dc-b57e-ce7ea15b5b1a 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Acquiring lock "2ef62e22-52fc-44f3-9964-8dc9b3c20686-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:08:04 compute-0 nova_compute[254819]: 2025-12-06 10:08:04.903 254824 DEBUG oslo_concurrency.lockutils [None req-7ff82df7-7550-40dc-b57e-ce7ea15b5b1a 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lock "2ef62e22-52fc-44f3-9964-8dc9b3c20686-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:08:04 compute-0 nova_compute[254819]: 2025-12-06 10:08:04.903 254824 DEBUG oslo_concurrency.lockutils [None req-7ff82df7-7550-40dc-b57e-ce7ea15b5b1a 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lock "2ef62e22-52fc-44f3-9964-8dc9b3c20686-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:08:04 compute-0 nova_compute[254819]: 2025-12-06 10:08:04.904 254824 INFO nova.compute.manager [None req-7ff82df7-7550-40dc-b57e-ce7ea15b5b1a 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 2ef62e22-52fc-44f3-9964-8dc9b3c20686] Terminating instance
Dec 06 10:08:04 compute-0 nova_compute[254819]: 2025-12-06 10:08:04.905 254824 DEBUG nova.compute.manager [None req-7ff82df7-7550-40dc-b57e-ce7ea15b5b1a 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 2ef62e22-52fc-44f3-9964-8dc9b3c20686] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 06 10:08:04 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:08:04 : epoch 69340037 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f327c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:08:04 compute-0 kernel: tapa7f5880e-0f (unregistering): left promiscuous mode
Dec 06 10:08:04 compute-0 NetworkManager[48882]: <info>  [1765015684.9789] device (tapa7f5880e-0f): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 06 10:08:04 compute-0 nova_compute[254819]: 2025-12-06 10:08:04.984 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:08:04 compute-0 ovn_controller[152417]: 2025-12-06T10:08:04Z|00052|binding|INFO|Releasing lport a7f5880e-0fb8-4f37-8a6c-7f0e8558dcf7 from this chassis (sb_readonly=0)
Dec 06 10:08:04 compute-0 ovn_controller[152417]: 2025-12-06T10:08:04Z|00053|binding|INFO|Setting lport a7f5880e-0fb8-4f37-8a6c-7f0e8558dcf7 down in Southbound
Dec 06 10:08:04 compute-0 ovn_controller[152417]: 2025-12-06T10:08:04Z|00054|binding|INFO|Removing iface tapa7f5880e-0f ovn-installed in OVS
Dec 06 10:08:04 compute-0 nova_compute[254819]: 2025-12-06 10:08:04.987 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:08:04 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:08:04.992 162267 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6c:29:20 10.100.0.12'], port_security=['fa:16:3e:6c:29:20 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '2ef62e22-52fc-44f3-9964-8dc9b3c20686', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4d9eb8be-73ac-4cfc-8821-fb41b5868957', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '92b402c8d3e2476abc98be42a1e6d34e', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'f18b54b7-70a3-4b32-8644-f822c2e837c5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d75f33c5-f6d1-4d65-a2b0-b56ec14fd7b3, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f70c28558b0>], logical_port=a7f5880e-0fb8-4f37-8a6c-7f0e8558dcf7) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f70c28558b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 10:08:04 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:08:04.993 162267 INFO neutron.agent.ovn.metadata.agent [-] Port a7f5880e-0fb8-4f37-8a6c-7f0e8558dcf7 in datapath 4d9eb8be-73ac-4cfc-8821-fb41b5868957 unbound from our chassis
Dec 06 10:08:04 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:08:04.995 162267 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 4d9eb8be-73ac-4cfc-8821-fb41b5868957, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 06 10:08:04 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:08:04.996 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[0b42076e-3e88-4e2f-ac0d-691257f43848]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:08:04 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:08:04.996 162267 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-4d9eb8be-73ac-4cfc-8821-fb41b5868957 namespace which is not needed anymore
Dec 06 10:08:05 compute-0 nova_compute[254819]: 2025-12-06 10:08:05.008 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:08:05 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:08:05 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:08:05 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:08:05.009 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:08:05 compute-0 systemd[1]: machine-qemu\x2d2\x2dinstance\x2d00000003.scope: Deactivated successfully.
Dec 06 10:08:05 compute-0 systemd[1]: machine-qemu\x2d2\x2dinstance\x2d00000003.scope: Consumed 15.042s CPU time.
Dec 06 10:08:05 compute-0 systemd-machined[216202]: Machine qemu-2-instance-00000003 terminated.
Dec 06 10:08:05 compute-0 kernel: tapa7f5880e-0f: entered promiscuous mode
Dec 06 10:08:05 compute-0 kernel: tapa7f5880e-0f (unregistering): left promiscuous mode
Dec 06 10:08:05 compute-0 NetworkManager[48882]: <info>  [1765015685.1315] manager: (tapa7f5880e-0f): new Tun device (/org/freedesktop/NetworkManager/Devices/41)
Dec 06 10:08:05 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:08:05 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:08:05 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:08:05.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:08:05 compute-0 nova_compute[254819]: 2025-12-06 10:08:05.138 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:08:05 compute-0 neutron-haproxy-ovnmeta-4d9eb8be-73ac-4cfc-8821-fb41b5868957[263732]: [NOTICE]   (263737) : haproxy version is 2.8.14-c23fe91
Dec 06 10:08:05 compute-0 neutron-haproxy-ovnmeta-4d9eb8be-73ac-4cfc-8821-fb41b5868957[263732]: [NOTICE]   (263737) : path to executable is /usr/sbin/haproxy
Dec 06 10:08:05 compute-0 neutron-haproxy-ovnmeta-4d9eb8be-73ac-4cfc-8821-fb41b5868957[263732]: [WARNING]  (263737) : Exiting Master process...
Dec 06 10:08:05 compute-0 neutron-haproxy-ovnmeta-4d9eb8be-73ac-4cfc-8821-fb41b5868957[263732]: [WARNING]  (263737) : Exiting Master process...
Dec 06 10:08:05 compute-0 neutron-haproxy-ovnmeta-4d9eb8be-73ac-4cfc-8821-fb41b5868957[263732]: [ALERT]    (263737) : Current worker (263739) exited with code 143 (Terminated)
Dec 06 10:08:05 compute-0 neutron-haproxy-ovnmeta-4d9eb8be-73ac-4cfc-8821-fb41b5868957[263732]: [WARNING]  (263737) : All workers exited. Exiting... (0)
Dec 06 10:08:05 compute-0 systemd[1]: libpod-d17cd1fc39d9acdee42e31c47c202c46b9385a0d9467a86eeebaee27ffb7dacb.scope: Deactivated successfully.
Dec 06 10:08:05 compute-0 nova_compute[254819]: 2025-12-06 10:08:05.150 254824 INFO nova.virt.libvirt.driver [-] [instance: 2ef62e22-52fc-44f3-9964-8dc9b3c20686] Instance destroyed successfully.
Dec 06 10:08:05 compute-0 nova_compute[254819]: 2025-12-06 10:08:05.151 254824 DEBUG nova.objects.instance [None req-7ff82df7-7550-40dc-b57e-ce7ea15b5b1a 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lazy-loading 'resources' on Instance uuid 2ef62e22-52fc-44f3-9964-8dc9b3c20686 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 10:08:05 compute-0 podman[264158]: 2025-12-06 10:08:05.152769406 +0000 UTC m=+0.050995546 container died d17cd1fc39d9acdee42e31c47c202c46b9385a0d9467a86eeebaee27ffb7dacb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=neutron-haproxy-ovnmeta-4d9eb8be-73ac-4cfc-8821-fb41b5868957, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 06 10:08:05 compute-0 nova_compute[254819]: 2025-12-06 10:08:05.165 254824 DEBUG nova.virt.libvirt.vif [None req-7ff82df7-7550-40dc-b57e-ce7ea15b5b1a 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T10:07:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1205802956',display_name='tempest-TestNetworkBasicOps-server-1205802956',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1205802956',id=3,image_ref='9489b8a5-a798-4e26-87f9-59bb1eb2e6fd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJ5T1qcHH05a9NmUaQjnoDRANzOfCWA0bQySUh/2laJiduU/bwXdkcdraO/GcO81J8j8CnPS5RyrjJyMRbGp/po0cthjI8Tgw893oNF7dd79URxvc2r73z8/7tKvZVwU9A==',key_name='tempest-TestNetworkBasicOps-2032054379',keypairs=<?>,launch_index=0,launched_at=2025-12-06T10:07:27Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='92b402c8d3e2476abc98be42a1e6d34e',ramdisk_id='',reservation_id='r-hrg57eo7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='9489b8a5-a798-4e26-87f9-59bb1eb2e6fd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1971100882',owner_user_name='tempest-TestNetworkBasicOps-1971100882-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T10:07:27Z,user_data=None,user_id='03615580775245e6ae335ee9d785611f',uuid=2ef62e22-52fc-44f3-9964-8dc9b3c20686,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a7f5880e-0fb8-4f37-8a6c-7f0e8558dcf7", "address": "fa:16:3e:6c:29:20", "network": {"id": "4d9eb8be-73ac-4cfc-8821-fb41b5868957", "bridge": "br-int", "label": "tempest-network-smoke--165851366", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.201", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa7f5880e-0f", "ovs_interfaceid": "a7f5880e-0fb8-4f37-8a6c-7f0e8558dcf7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 06 10:08:05 compute-0 nova_compute[254819]: 2025-12-06 10:08:05.166 254824 DEBUG nova.network.os_vif_util [None req-7ff82df7-7550-40dc-b57e-ce7ea15b5b1a 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Converting VIF {"id": "a7f5880e-0fb8-4f37-8a6c-7f0e8558dcf7", "address": "fa:16:3e:6c:29:20", "network": {"id": "4d9eb8be-73ac-4cfc-8821-fb41b5868957", "bridge": "br-int", "label": "tempest-network-smoke--165851366", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.201", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa7f5880e-0f", "ovs_interfaceid": "a7f5880e-0fb8-4f37-8a6c-7f0e8558dcf7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 10:08:05 compute-0 nova_compute[254819]: 2025-12-06 10:08:05.167 254824 DEBUG nova.network.os_vif_util [None req-7ff82df7-7550-40dc-b57e-ce7ea15b5b1a 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:6c:29:20,bridge_name='br-int',has_traffic_filtering=True,id=a7f5880e-0fb8-4f37-8a6c-7f0e8558dcf7,network=Network(4d9eb8be-73ac-4cfc-8821-fb41b5868957),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa7f5880e-0f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 10:08:05 compute-0 nova_compute[254819]: 2025-12-06 10:08:05.168 254824 DEBUG os_vif [None req-7ff82df7-7550-40dc-b57e-ce7ea15b5b1a 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:6c:29:20,bridge_name='br-int',has_traffic_filtering=True,id=a7f5880e-0fb8-4f37-8a6c-7f0e8558dcf7,network=Network(4d9eb8be-73ac-4cfc-8821-fb41b5868957),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa7f5880e-0f') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 06 10:08:05 compute-0 nova_compute[254819]: 2025-12-06 10:08:05.169 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:08:05 compute-0 nova_compute[254819]: 2025-12-06 10:08:05.169 254824 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa7f5880e-0f, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 10:08:05 compute-0 nova_compute[254819]: 2025-12-06 10:08:05.171 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:08:05 compute-0 nova_compute[254819]: 2025-12-06 10:08:05.174 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 10:08:05 compute-0 nova_compute[254819]: 2025-12-06 10:08:05.182 254824 INFO os_vif [None req-7ff82df7-7550-40dc-b57e-ce7ea15b5b1a 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:6c:29:20,bridge_name='br-int',has_traffic_filtering=True,id=a7f5880e-0fb8-4f37-8a6c-7f0e8558dcf7,network=Network(4d9eb8be-73ac-4cfc-8821-fb41b5868957),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa7f5880e-0f')
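[Annotation] The unplug sequence above reduces to one OVSDB transaction: DelPortCommand removes tapa7f5880e-0f from br-int with if_exists semantics, so a port that is already gone is not an error. A minimal sketch of the equivalent operation, shelling out to ovs-vsctl from Python; the helper name is illustrative, the bridge and port names are taken from the log, and this is not nova's actual code path (nova goes through os-vif and ovsdbapp).

import subprocess

def del_ovs_port(bridge: str, port: str) -> None:
    # --if-exists mirrors DelPortCommand(if_exists=True): deleting an
    # absent port succeeds silently instead of failing the transaction.
    subprocess.run(
        ["ovs-vsctl", "--if-exists", "del-port", bridge, port],
        check=True,
    )

del_ovs_port("br-int", "tapa7f5880e-0f")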
Dec 06 10:08:05 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-d17cd1fc39d9acdee42e31c47c202c46b9385a0d9467a86eeebaee27ffb7dacb-userdata-shm.mount: Deactivated successfully.
Dec 06 10:08:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-96304682dba270089a316a6ea2c840eb8d50d3698a98881b517984b3b6c64718-merged.mount: Deactivated successfully.
Dec 06 10:08:05 compute-0 podman[264158]: 2025-12-06 10:08:05.196518805 +0000 UTC m=+0.094744915 container cleanup d17cd1fc39d9acdee42e31c47c202c46b9385a0d9467a86eeebaee27ffb7dacb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=neutron-haproxy-ovnmeta-4d9eb8be-73ac-4cfc-8821-fb41b5868957, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec 06 10:08:05 compute-0 systemd[1]: libpod-conmon-d17cd1fc39d9acdee42e31c47c202c46b9385a0d9467a86eeebaee27ffb7dacb.scope: Deactivated successfully.
Dec 06 10:08:05 compute-0 podman[264211]: 2025-12-06 10:08:05.270586393 +0000 UTC m=+0.045748554 container remove d17cd1fc39d9acdee42e31c47c202c46b9385a0d9467a86eeebaee27ffb7dacb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=neutron-haproxy-ovnmeta-4d9eb8be-73ac-4cfc-8821-fb41b5868957, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec 06 10:08:05 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:08:05.277 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[22a8a150-5c73-4ca6-9583-a4c9ec1370e7]: (4, ('Sat Dec  6 10:08:05 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-4d9eb8be-73ac-4cfc-8821-fb41b5868957 (d17cd1fc39d9acdee42e31c47c202c46b9385a0d9467a86eeebaee27ffb7dacb)\nd17cd1fc39d9acdee42e31c47c202c46b9385a0d9467a86eeebaee27ffb7dacb\nSat Dec  6 10:08:05 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-4d9eb8be-73ac-4cfc-8821-fb41b5868957 (d17cd1fc39d9acdee42e31c47c202c46b9385a0d9467a86eeebaee27ffb7dacb)\nd17cd1fc39d9acdee42e31c47c202c46b9385a0d9467a86eeebaee27ffb7dacb\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:08:05 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:08:05.280 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[5c068b25-b985-47fb-8fdb-51e96840c0c2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:08:05 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:08:05.281 162267 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4d9eb8be-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 10:08:05 compute-0 nova_compute[254819]: 2025-12-06 10:08:05.283 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:08:05 compute-0 kernel: tap4d9eb8be-70: left promiscuous mode
Dec 06 10:08:05 compute-0 nova_compute[254819]: 2025-12-06 10:08:05.304 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:08:05 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:08:05.304 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[af636a8d-5811-4bc1-9d3b-03994e3d5ab0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:08:05 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:08:05.318 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[05cd4da6-296f-4ed6-a12b-a8c9529d808a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:08:05 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:08:05.319 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[5651cf36-9559-4b54-9d6c-f643c54caa32]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:08:05 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:08:05.341 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[9b841453-aa53-489d-a8b8-10c4e63c5493]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 401700, 'reachable_time': 24644, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 264228, 'error': None, 'target': 'ovnmeta-4d9eb8be-73ac-4cfc-8821-fb41b5868957', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:08:05 compute-0 systemd[1]: run-netns-ovnmeta\x2d4d9eb8be\x2d73ac\x2d4cfc\x2d8821\x2dfb41b5868957.mount: Deactivated successfully.
Dec 06 10:08:05 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:08:05.344 162385 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-4d9eb8be-73ac-4cfc-8821-fb41b5868957 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 06 10:08:05 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:08:05.345 162385 DEBUG oslo.privsep.daemon [-] privsep: reply[82e3f770-e51f-4ac8-8979-b496287d009f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
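[Annotation] The remove_netns line above is neutron's privileged ip_lib deleting the per-network metadata namespace once its haproxy container is gone. neutron's privileged helpers are built on pyroute2, so a minimal sketch with pyroute2's netns module captures the idea; the idempotent ENOENT handling is an assumption about the desired behaviour, not a copy of neutron's code.

import errno
from pyroute2 import netns

def remove_netns(name: str) -> None:
    # Treat "namespace already gone" as success so the teardown path
    # stays idempotent (assumption).
    try:
        netns.remove(name)
    except OSError as exc:
        if exc.errno != errno.ENOENT:
            raise

remove_netns("ovnmeta-4d9eb8be-73ac-4cfc-8821-fb41b5868957")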
Dec 06 10:08:05 compute-0 nova_compute[254819]: 2025-12-06 10:08:05.510 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:08:05 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v807: 337 pgs: 337 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 7.9 KiB/s rd, 1023 B/s wr, 1 op/s
Dec 06 10:08:05 compute-0 nova_compute[254819]: 2025-12-06 10:08:05.659 254824 INFO nova.virt.libvirt.driver [None req-7ff82df7-7550-40dc-b57e-ce7ea15b5b1a 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 2ef62e22-52fc-44f3-9964-8dc9b3c20686] Deleting instance files /var/lib/nova/instances/2ef62e22-52fc-44f3-9964-8dc9b3c20686_del
Dec 06 10:08:05 compute-0 nova_compute[254819]: 2025-12-06 10:08:05.660 254824 INFO nova.virt.libvirt.driver [None req-7ff82df7-7550-40dc-b57e-ce7ea15b5b1a 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 2ef62e22-52fc-44f3-9964-8dc9b3c20686] Deletion of /var/lib/nova/instances/2ef62e22-52fc-44f3-9964-8dc9b3c20686_del complete
Dec 06 10:08:05 compute-0 nova_compute[254819]: 2025-12-06 10:08:05.722 254824 INFO nova.compute.manager [None req-7ff82df7-7550-40dc-b57e-ce7ea15b5b1a 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 2ef62e22-52fc-44f3-9964-8dc9b3c20686] Took 0.82 seconds to destroy the instance on the hypervisor.
Dec 06 10:08:05 compute-0 nova_compute[254819]: 2025-12-06 10:08:05.723 254824 DEBUG oslo.service.loopingcall [None req-7ff82df7-7550-40dc-b57e-ce7ea15b5b1a 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 06 10:08:05 compute-0 nova_compute[254819]: 2025-12-06 10:08:05.724 254824 DEBUG nova.compute.manager [-] [instance: 2ef62e22-52fc-44f3-9964-8dc9b3c20686] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 06 10:08:05 compute-0 nova_compute[254819]: 2025-12-06 10:08:05.724 254824 DEBUG nova.network.neutron [-] [instance: 2ef62e22-52fc-44f3-9964-8dc9b3c20686] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
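[Annotation] The "Waiting for function ... to return" line comes from oslo.service's looping-call machinery, which nova uses here to retry network deallocation until it succeeds. A minimal sketch of the fixed-interval variant from that same module; the poll callback and retry count are illustrative, not nova's real _deallocate_network_with_retries.

from oslo_service import loopingcall

attempts = {"n": 0}

def poll():
    attempts["n"] += 1
    if attempts["n"] >= 3:
        # Raising LoopingCallDone stops the loop; its retvalue becomes
        # the result of start().wait().
        raise loopingcall.LoopingCallDone(retvalue=True)

timer = loopingcall.FixedIntervalLoopingCall(poll)
result = timer.start(interval=1.0).wait()  # True after three polls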
Dec 06 10:08:06 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:08:06 : epoch 69340037 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f328c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:08:06 compute-0 podman[264231]: 2025-12-06 10:08:06.484503859 +0000 UTC m=+0.108032414 container health_status ed08b04f63d3695f0aa2f04fcc706897af4aa24f286fa3cd10a4323ee2c868ab (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Dec 06 10:08:06 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:08:06 : epoch 69340037 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f32a0002730 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:08:06 compute-0 nova_compute[254819]: 2025-12-06 10:08:06.502 254824 DEBUG nova.network.neutron [req-61d9d951-d5e6-485c-aca1-236719b3219b req-08c57f5b-1416-4663-a89b-8f183405a302 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 2ef62e22-52fc-44f3-9964-8dc9b3c20686] Updated VIF entry in instance network info cache for port a7f5880e-0fb8-4f37-8a6c-7f0e8558dcf7. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 06 10:08:06 compute-0 nova_compute[254819]: 2025-12-06 10:08:06.502 254824 DEBUG nova.network.neutron [req-61d9d951-d5e6-485c-aca1-236719b3219b req-08c57f5b-1416-4663-a89b-8f183405a302 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 2ef62e22-52fc-44f3-9964-8dc9b3c20686] Updating instance_info_cache with network_info: [{"id": "a7f5880e-0fb8-4f37-8a6c-7f0e8558dcf7", "address": "fa:16:3e:6c:29:20", "network": {"id": "4d9eb8be-73ac-4cfc-8821-fb41b5868957", "bridge": "br-int", "label": "tempest-network-smoke--165851366", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa7f5880e-0f", "ovs_interfaceid": "a7f5880e-0fb8-4f37-8a6c-7f0e8558dcf7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 10:08:06 compute-0 nova_compute[254819]: 2025-12-06 10:08:06.744 254824 DEBUG nova.compute.manager [req-5f908acc-58e9-4fec-aaa6-de67acc52ebe req-57ed3163-fcd8-4fb1-839c-a070e076a962 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 2ef62e22-52fc-44f3-9964-8dc9b3c20686] Received event network-vif-unplugged-a7f5880e-0fb8-4f37-8a6c-7f0e8558dcf7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 10:08:06 compute-0 nova_compute[254819]: 2025-12-06 10:08:06.744 254824 DEBUG oslo_concurrency.lockutils [req-5f908acc-58e9-4fec-aaa6-de67acc52ebe req-57ed3163-fcd8-4fb1-839c-a070e076a962 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Acquiring lock "2ef62e22-52fc-44f3-9964-8dc9b3c20686-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:08:06 compute-0 nova_compute[254819]: 2025-12-06 10:08:06.745 254824 DEBUG oslo_concurrency.lockutils [req-5f908acc-58e9-4fec-aaa6-de67acc52ebe req-57ed3163-fcd8-4fb1-839c-a070e076a962 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Lock "2ef62e22-52fc-44f3-9964-8dc9b3c20686-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:08:06 compute-0 nova_compute[254819]: 2025-12-06 10:08:06.745 254824 DEBUG oslo_concurrency.lockutils [req-5f908acc-58e9-4fec-aaa6-de67acc52ebe req-57ed3163-fcd8-4fb1-839c-a070e076a962 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Lock "2ef62e22-52fc-44f3-9964-8dc9b3c20686-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:08:06 compute-0 nova_compute[254819]: 2025-12-06 10:08:06.745 254824 DEBUG nova.compute.manager [req-5f908acc-58e9-4fec-aaa6-de67acc52ebe req-57ed3163-fcd8-4fb1-839c-a070e076a962 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 2ef62e22-52fc-44f3-9964-8dc9b3c20686] No waiting events found dispatching network-vif-unplugged-a7f5880e-0fb8-4f37-8a6c-7f0e8558dcf7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 10:08:06 compute-0 nova_compute[254819]: 2025-12-06 10:08:06.745 254824 DEBUG nova.compute.manager [req-5f908acc-58e9-4fec-aaa6-de67acc52ebe req-57ed3163-fcd8-4fb1-839c-a070e076a962 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 2ef62e22-52fc-44f3-9964-8dc9b3c20686] Received event network-vif-unplugged-a7f5880e-0fb8-4f37-8a6c-7f0e8558dcf7 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 06 10:08:06 compute-0 nova_compute[254819]: 2025-12-06 10:08:06.745 254824 DEBUG nova.compute.manager [req-5f908acc-58e9-4fec-aaa6-de67acc52ebe req-57ed3163-fcd8-4fb1-839c-a070e076a962 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 2ef62e22-52fc-44f3-9964-8dc9b3c20686] Received event network-vif-plugged-a7f5880e-0fb8-4f37-8a6c-7f0e8558dcf7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 10:08:06 compute-0 nova_compute[254819]: 2025-12-06 10:08:06.746 254824 DEBUG oslo_concurrency.lockutils [req-5f908acc-58e9-4fec-aaa6-de67acc52ebe req-57ed3163-fcd8-4fb1-839c-a070e076a962 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Acquiring lock "2ef62e22-52fc-44f3-9964-8dc9b3c20686-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:08:06 compute-0 nova_compute[254819]: 2025-12-06 10:08:06.746 254824 DEBUG oslo_concurrency.lockutils [req-5f908acc-58e9-4fec-aaa6-de67acc52ebe req-57ed3163-fcd8-4fb1-839c-a070e076a962 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Lock "2ef62e22-52fc-44f3-9964-8dc9b3c20686-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:08:06 compute-0 nova_compute[254819]: 2025-12-06 10:08:06.746 254824 DEBUG oslo_concurrency.lockutils [req-5f908acc-58e9-4fec-aaa6-de67acc52ebe req-57ed3163-fcd8-4fb1-839c-a070e076a962 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Lock "2ef62e22-52fc-44f3-9964-8dc9b3c20686-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:08:06 compute-0 nova_compute[254819]: 2025-12-06 10:08:06.746 254824 DEBUG nova.compute.manager [req-5f908acc-58e9-4fec-aaa6-de67acc52ebe req-57ed3163-fcd8-4fb1-839c-a070e076a962 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 2ef62e22-52fc-44f3-9964-8dc9b3c20686] No waiting events found dispatching network-vif-plugged-a7f5880e-0fb8-4f37-8a6c-7f0e8558dcf7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 10:08:06 compute-0 nova_compute[254819]: 2025-12-06 10:08:06.746 254824 WARNING nova.compute.manager [req-5f908acc-58e9-4fec-aaa6-de67acc52ebe req-57ed3163-fcd8-4fb1-839c-a070e076a962 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 2ef62e22-52fc-44f3-9964-8dc9b3c20686] Received unexpected event network-vif-plugged-a7f5880e-0fb8-4f37-8a6c-7f0e8558dcf7 for instance with vm_state active and task_state deleting.
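[Annotation] The acquire/release pairs around pop_instance_event above are oslo.concurrency's named-lock pattern: every reader and writer of the per-instance event queue synchronizes on the same "<uuid>-events" lock, which is why the unexpected network-vif-plugged event is merely logged rather than racing the delete. A minimal sketch of the primitive (lock name taken from the log; the guarded dict is illustrative):

from oslo_concurrency import lockutils

events = {}

@lockutils.synchronized("2ef62e22-52fc-44f3-9964-8dc9b3c20686-events")
def pop_event(name):
    # Same shape as the logged flow: take the lock, pop a registered
    # waiter if present, otherwise "No waiting events found".
    return events.pop(name, None)

pop_event("network-vif-unplugged-a7f5880e-0fb8-4f37-8a6c-7f0e8558dcf7")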
Dec 06 10:08:06 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:08:06 : epoch 69340037 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3280001090 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:08:07 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:08:07 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:08:07 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:08:07.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:08:07 compute-0 sudo[264260]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 10:08:07 compute-0 sudo[264260]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:08:07 compute-0 sudo[264260]: pam_unix(sudo:session): session closed for user root
Dec 06 10:08:07 compute-0 sudo[264285]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Dec 06 10:08:07 compute-0 sudo[264285]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:08:07 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:08:07 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 10:08:07 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:08:07.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 10:08:07 compute-0 ceph-mon[74327]: pgmap v807: 337 pgs: 337 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 7.9 KiB/s rd, 1023 B/s wr, 1 op/s
Dec 06 10:08:07 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:08:07.274Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 06 10:08:07 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v808: 337 pgs: 337 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 7.9 KiB/s rd, 1023 B/s wr, 1 op/s
Dec 06 10:08:07 compute-0 sudo[264285]: pam_unix(sudo:session): session closed for user root
Dec 06 10:08:07 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:08:07 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 06 10:08:07 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 10:08:07 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 06 10:08:07 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 10:08:07 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v809: 337 pgs: 337 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 9.1 KiB/s rd, 1.2 KiB/s wr, 1 op/s
Dec 06 10:08:07 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 06 10:08:07 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 10:08:07 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Dec 06 10:08:07 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 10:08:07 compute-0 nova_compute[254819]: 2025-12-06 10:08:07.982 254824 DEBUG oslo_concurrency.lockutils [req-61d9d951-d5e6-485c-aca1-236719b3219b req-08c57f5b-1416-4663-a89b-8f183405a302 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Releasing lock "refresh_cache-2ef62e22-52fc-44f3-9964-8dc9b3c20686" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 10:08:07 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 06 10:08:07 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 10:08:07 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 06 10:08:07 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 10:08:07 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 06 10:08:07 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 10:08:08 compute-0 nova_compute[254819]: 2025-12-06 10:08:08.004 254824 DEBUG nova.network.neutron [-] [instance: 2ef62e22-52fc-44f3-9964-8dc9b3c20686] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 10:08:08 compute-0 nova_compute[254819]: 2025-12-06 10:08:08.044 254824 INFO nova.compute.manager [-] [instance: 2ef62e22-52fc-44f3-9964-8dc9b3c20686] Took 2.32 seconds to deallocate network for instance.
Dec 06 10:08:08 compute-0 sudo[264343]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 10:08:08 compute-0 sudo[264343]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:08:08 compute-0 sudo[264343]: pam_unix(sudo:session): session closed for user root
Dec 06 10:08:08 compute-0 nova_compute[254819]: 2025-12-06 10:08:08.075 254824 DEBUG nova.compute.manager [req-e30346fe-adb8-487b-b4a1-4f9156dff486 req-65ab3b57-c02c-4538-9e86-356080268524 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 2ef62e22-52fc-44f3-9964-8dc9b3c20686] Received event network-vif-deleted-a7f5880e-0fb8-4f37-8a6c-7f0e8558dcf7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 10:08:08 compute-0 sudo[264368]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 5ecd3f74-dade-5fc4-92ce-8950ae424258 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Dec 06 10:08:08 compute-0 sudo[264368]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:08:08 compute-0 nova_compute[254819]: 2025-12-06 10:08:08.158 254824 DEBUG oslo_concurrency.lockutils [None req-7ff82df7-7550-40dc-b57e-ce7ea15b5b1a 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:08:08 compute-0 nova_compute[254819]: 2025-12-06 10:08:08.159 254824 DEBUG oslo_concurrency.lockutils [None req-7ff82df7-7550-40dc-b57e-ce7ea15b5b1a 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:08:08 compute-0 ceph-mon[74327]: pgmap v808: 337 pgs: 337 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 7.9 KiB/s rd, 1023 B/s wr, 1 op/s
Dec 06 10:08:08 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 10:08:08 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 10:08:08 compute-0 ceph-mon[74327]: pgmap v809: 337 pgs: 337 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 9.1 KiB/s rd, 1.2 KiB/s wr, 1 op/s
Dec 06 10:08:08 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 10:08:08 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 10:08:08 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 10:08:08 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 10:08:08 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 10:08:08 compute-0 nova_compute[254819]: 2025-12-06 10:08:08.220 254824 DEBUG oslo_concurrency.processutils [None req-7ff82df7-7550-40dc-b57e-ce7ea15b5b1a 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 10:08:08 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:08:08 : epoch 69340037 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f327c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:08:08 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:08:08 : epoch 69340037 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f328c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:08:08 compute-0 podman[264452]: 2025-12-06 10:08:08.536711663 +0000 UTC m=+0.050634897 container create 421b4d94fe7a11282cfbdf6350d34019679cfb30257c7971a0d49cf7a5e78acc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_pare, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 10:08:08 compute-0 systemd[1]: Started libpod-conmon-421b4d94fe7a11282cfbdf6350d34019679cfb30257c7971a0d49cf7a5e78acc.scope.
Dec 06 10:08:08 compute-0 podman[264452]: 2025-12-06 10:08:08.51510121 +0000 UTC m=+0.029024494 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 06 10:08:08 compute-0 systemd[1]: Started libcrun container.
Dec 06 10:08:08 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 06 10:08:08 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2093960658' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 10:08:08 compute-0 nova_compute[254819]: 2025-12-06 10:08:08.675 254824 DEBUG oslo_concurrency.processutils [None req-7ff82df7-7550-40dc-b57e-ce7ea15b5b1a 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
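[Annotation] The subprocess above is nova refreshing storage stats from the Ceph cluster: it runs `ceph df --format=json` through oslo.concurrency's processutils with exactly the flags shown in the log, then parses the JSON. A minimal sketch of the same probe; the field names are standard `ceph df` JSON output, and the error handling is deliberately omitted.

import json
from oslo_concurrency import processutils

out, _err = processutils.execute(
    "ceph", "df", "--format=json",
    "--id", "openstack", "--conf", "/etc/ceph/ceph.conf",
)
stats = json.loads(out)
# Cluster-wide totals as reported by ceph df.
total_bytes = stats["stats"]["total_bytes"]
avail_bytes = stats["stats"]["total_avail_bytes"]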
Dec 06 10:08:08 compute-0 nova_compute[254819]: 2025-12-06 10:08:08.683 254824 DEBUG nova.compute.provider_tree [None req-7ff82df7-7550-40dc-b57e-ce7ea15b5b1a 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Inventory has not changed in ProviderTree for provider: 06a9c7d1-c74c-47ea-9e97-16acfab6aa88 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 10:08:08 compute-0 podman[264452]: 2025-12-06 10:08:08.715102523 +0000 UTC m=+0.229025777 container init 421b4d94fe7a11282cfbdf6350d34019679cfb30257c7971a0d49cf7a5e78acc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_pare, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 10:08:08 compute-0 podman[264452]: 2025-12-06 10:08:08.723390126 +0000 UTC m=+0.237313360 container start 421b4d94fe7a11282cfbdf6350d34019679cfb30257c7971a0d49cf7a5e78acc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_pare, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 10:08:08 compute-0 exciting_pare[264469]: 167 167
Dec 06 10:08:08 compute-0 systemd[1]: libpod-421b4d94fe7a11282cfbdf6350d34019679cfb30257c7971a0d49cf7a5e78acc.scope: Deactivated successfully.
Dec 06 10:08:08 compute-0 podman[264452]: 2025-12-06 10:08:08.74351616 +0000 UTC m=+0.257439514 container attach 421b4d94fe7a11282cfbdf6350d34019679cfb30257c7971a0d49cf7a5e78acc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_pare, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_REF=squid)
Dec 06 10:08:08 compute-0 podman[264452]: 2025-12-06 10:08:08.744598859 +0000 UTC m=+0.258522113 container died 421b4d94fe7a11282cfbdf6350d34019679cfb30257c7971a0d49cf7a5e78acc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_pare, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Dec 06 10:08:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-22e9c6ed9bc4d934fc25cd5d0d45183d4acb28a3bda17c8f6cb743c9e8d2015a-merged.mount: Deactivated successfully.
Dec 06 10:08:08 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:08:08.854Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec 06 10:08:08 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:08:08.854Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 06 10:08:08 compute-0 nova_compute[254819]: 2025-12-06 10:08:08.872 254824 DEBUG nova.scheduler.client.report [None req-7ff82df7-7550-40dc-b57e-ce7ea15b5b1a 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Inventory has not changed for provider 06a9c7d1-c74c-47ea-9e97-16acfab6aa88 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
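[Annotation] The inventory dict above is what placement schedules against: effective capacity per resource class is (total - reserved) * allocation_ratio. A quick check against the logged values, pure arithmetic rather than nova code:

inventory = {
    "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
    "VCPU": {"total": 8, "reserved": 0, "allocation_ratio": 4.0},
    "DISK_GB": {"total": 59, "reserved": 1, "allocation_ratio": 0.9},
}

for rc, inv in inventory.items():
    capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
    print(rc, capacity)  # MEMORY_MB 7168.0, VCPU 32.0, DISK_GB 52.2

So this host can overcommit to 32 schedulable vCPUs on 8 physical ones, while disk is undercommitted (ratio 0.9) to keep headroom for the RBD-backed images measured by the ceph df probe above.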
Dec 06 10:08:08 compute-0 podman[264452]: 2025-12-06 10:08:08.893835203 +0000 UTC m=+0.407758447 container remove 421b4d94fe7a11282cfbdf6350d34019679cfb30257c7971a0d49cf7a5e78acc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_pare, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec 06 10:08:08 compute-0 systemd[1]: libpod-conmon-421b4d94fe7a11282cfbdf6350d34019679cfb30257c7971a0d49cf7a5e78acc.scope: Deactivated successfully.
Dec 06 10:08:08 compute-0 nova_compute[254819]: 2025-12-06 10:08:08.911 254824 DEBUG oslo_concurrency.lockutils [None req-7ff82df7-7550-40dc-b57e-ce7ea15b5b1a 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.752s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:08:08 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 06 10:08:08 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 10:08:08 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:08:08 : epoch 69340037 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f32a0002730 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:08:09 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:08:09 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:08:09 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:08:09.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:08:09 compute-0 nova_compute[254819]: 2025-12-06 10:08:09.106 254824 INFO nova.scheduler.client.report [None req-7ff82df7-7550-40dc-b57e-ce7ea15b5b1a 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Deleted allocations for instance 2ef62e22-52fc-44f3-9964-8dc9b3c20686
Dec 06 10:08:09 compute-0 podman[264495]: 2025-12-06 10:08:09.10915497 +0000 UTC m=+0.080503422 container create 9236001d1b94544adf7ed5cb5a358df2fc492966a0f6243bdb31683d583ac051 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_gates, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec 06 10:08:09 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:08:09 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:08:09 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:08:09.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:08:09 compute-0 podman[264495]: 2025-12-06 10:08:09.057147138 +0000 UTC m=+0.028495610 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 06 10:08:09 compute-0 systemd[1]: Started libpod-conmon-9236001d1b94544adf7ed5cb5a358df2fc492966a0f6243bdb31683d583ac051.scope.
Dec 06 10:08:09 compute-0 systemd[1]: Started libcrun container.
Dec 06 10:08:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/57031c587cab0757f1bb17de45cc3e3869d1815b0ee325544d59078303554d3c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 10:08:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/57031c587cab0757f1bb17de45cc3e3869d1815b0ee325544d59078303554d3c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 10:08:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/57031c587cab0757f1bb17de45cc3e3869d1815b0ee325544d59078303554d3c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 10:08:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/57031c587cab0757f1bb17de45cc3e3869d1815b0ee325544d59078303554d3c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 10:08:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/57031c587cab0757f1bb17de45cc3e3869d1815b0ee325544d59078303554d3c/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 06 10:08:09 compute-0 podman[264495]: 2025-12-06 10:08:09.225544259 +0000 UTC m=+0.196892731 container init 9236001d1b94544adf7ed5cb5a358df2fc492966a0f6243bdb31683d583ac051 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_gates, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1)
Dec 06 10:08:09 compute-0 podman[264495]: 2025-12-06 10:08:09.233183264 +0000 UTC m=+0.204531706 container start 9236001d1b94544adf7ed5cb5a358df2fc492966a0f6243bdb31683d583ac051 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_gates, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_REF=squid)
Dec 06 10:08:09 compute-0 podman[264495]: 2025-12-06 10:08:09.236562016 +0000 UTC m=+0.207910468 container attach 9236001d1b94544adf7ed5cb5a358df2fc492966a0f6243bdb31683d583ac051 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_gates, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 10:08:09 compute-0 ceph-mon[74327]: from='client.? 192.168.122.100:0/2093960658' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 10:08:09 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 10:08:09 compute-0 nova_compute[254819]: 2025-12-06 10:08:09.464 254824 DEBUG oslo_concurrency.lockutils [None req-7ff82df7-7550-40dc-b57e-ce7ea15b5b1a 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lock "2ef62e22-52fc-44f3-9964-8dc9b3c20686" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.562s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:08:09 compute-0 brave_gates[264512]: --> passed data devices: 0 physical, 1 LVM
Dec 06 10:08:09 compute-0 brave_gates[264512]: --> All data devices are unavailable
Dec 06 10:08:09 compute-0 systemd[1]: libpod-9236001d1b94544adf7ed5cb5a358df2fc492966a0f6243bdb31683d583ac051.scope: Deactivated successfully.
Dec 06 10:08:09 compute-0 podman[264495]: 2025-12-06 10:08:09.588759454 +0000 UTC m=+0.560107916 container died 9236001d1b94544adf7ed5cb5a358df2fc492966a0f6243bdb31683d583ac051 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_gates, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 10:08:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-57031c587cab0757f1bb17de45cc3e3869d1815b0ee325544d59078303554d3c-merged.mount: Deactivated successfully.
Dec 06 10:08:09 compute-0 podman[264495]: 2025-12-06 10:08:09.720561028 +0000 UTC m=+0.691909480 container remove 9236001d1b94544adf7ed5cb5a358df2fc492966a0f6243bdb31683d583ac051 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_gates, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Dec 06 10:08:09 compute-0 systemd[1]: libpod-conmon-9236001d1b94544adf7ed5cb5a358df2fc492966a0f6243bdb31683d583ac051.scope: Deactivated successfully.
Dec 06 10:08:09 compute-0 sudo[264368]: pam_unix(sudo:session): session closed for user root
Dec 06 10:08:09 compute-0 sudo[264543]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 10:08:09 compute-0 sudo[264543]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:08:09 compute-0 sudo[264543]: pam_unix(sudo:session): session closed for user root
Dec 06 10:08:09 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v810: 337 pgs: 337 active+clean; 41 MiB data, 274 MiB used, 60 GiB / 60 GiB avail; 31 KiB/s rd, 6.0 KiB/s wr, 33 op/s
Dec 06 10:08:09 compute-0 sudo[264568]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 5ecd3f74-dade-5fc4-92ce-8950ae424258 -- lvm list --format json
Dec 06 10:08:09 compute-0 sudo[264568]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:08:10 compute-0 nova_compute[254819]: 2025-12-06 10:08:10.173 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:08:10 compute-0 podman[264634]: 2025-12-06 10:08:10.273391067 +0000 UTC m=+0.027643186 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 06 10:08:10 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:08:10 : epoch 69340037 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3280001070 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:08:10 compute-0 podman[264634]: 2025-12-06 10:08:10.381678517 +0000 UTC m=+0.135930586 container create 5ca6b3defce53dd7a38ef22c1a724002719f66beddd2a2ebe4de11d8fd511c64 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_hermann, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 10:08:10 compute-0 ceph-mon[74327]: pgmap v810: 337 pgs: 337 active+clean; 41 MiB data, 274 MiB used, 60 GiB / 60 GiB avail; 31 KiB/s rd, 6.0 KiB/s wr, 33 op/s
Dec 06 10:08:10 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:08:10 : epoch 69340037 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f327c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:08:10 compute-0 systemd[1]: Started libpod-conmon-5ca6b3defce53dd7a38ef22c1a724002719f66beddd2a2ebe4de11d8fd511c64.scope.
Dec 06 10:08:10 compute-0 nova_compute[254819]: 2025-12-06 10:08:10.563 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:08:10 compute-0 systemd[1]: Started libcrun container.
Dec 06 10:08:10 compute-0 podman[264634]: 2025-12-06 10:08:10.687103324 +0000 UTC m=+0.441355383 container init 5ca6b3defce53dd7a38ef22c1a724002719f66beddd2a2ebe4de11d8fd511c64 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_hermann, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec 06 10:08:10 compute-0 podman[264634]: 2025-12-06 10:08:10.697303339 +0000 UTC m=+0.451555378 container start 5ca6b3defce53dd7a38ef22c1a724002719f66beddd2a2ebe4de11d8fd511c64 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_hermann, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 10:08:10 compute-0 podman[264634]: 2025-12-06 10:08:10.70105185 +0000 UTC m=+0.455303889 container attach 5ca6b3defce53dd7a38ef22c1a724002719f66beddd2a2ebe4de11d8fd511c64 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_hermann, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Dec 06 10:08:10 compute-0 cranky_hermann[264652]: 167 167
Dec 06 10:08:10 compute-0 systemd[1]: libpod-5ca6b3defce53dd7a38ef22c1a724002719f66beddd2a2ebe4de11d8fd511c64.scope: Deactivated successfully.
Dec 06 10:08:10 compute-0 podman[264634]: 2025-12-06 10:08:10.703564498 +0000 UTC m=+0.457816567 container died 5ca6b3defce53dd7a38ef22c1a724002719f66beddd2a2ebe4de11d8fd511c64 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_hermann, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 10:08:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-f8b4937f64b7e19e49e1dfe4e879b86151bc905e579014dc530f3ba0fc51b16d-merged.mount: Deactivated successfully.
Dec 06 10:08:10 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:08:10] "GET /metrics HTTP/1.1" 200 48477 "" "Prometheus/2.51.0"
Dec 06 10:08:10 compute-0 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:08:10] "GET /metrics HTTP/1.1" 200 48477 "" "Prometheus/2.51.0"
Dec 06 10:08:10 compute-0 podman[264634]: 2025-12-06 10:08:10.924150187 +0000 UTC m=+0.678402226 container remove 5ca6b3defce53dd7a38ef22c1a724002719f66beddd2a2ebe4de11d8fd511c64 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_hermann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 10:08:10 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:08:10 : epoch 69340037 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3288002690 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:08:10 compute-0 systemd[1]: libpod-conmon-5ca6b3defce53dd7a38ef22c1a724002719f66beddd2a2ebe4de11d8fd511c64.scope: Deactivated successfully.
Dec 06 10:08:10 compute-0 podman[264654]: 2025-12-06 10:08:10.997822354 +0000 UTC m=+0.414201182 container health_status ec1e66d2175087ad7a28a9a6b5a784e30128df3f5e268959d009e364cd5120f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent)
Dec 06 10:08:11 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:08:11 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:08:11 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:08:11.016 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:08:11 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:08:11 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:08:11 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:08:11.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:08:11 compute-0 podman[264698]: 2025-12-06 10:08:11.181334233 +0000 UTC m=+0.085179339 container create 16fc117aa10ea493852fecccc933a261fbf5156a03d2bbada5c6f4d2cc35444b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_spence, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 10:08:11 compute-0 podman[264698]: 2025-12-06 10:08:11.128049726 +0000 UTC m=+0.031894852 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 06 10:08:11 compute-0 systemd[1]: Started libpod-conmon-16fc117aa10ea493852fecccc933a261fbf5156a03d2bbada5c6f4d2cc35444b.scope.
Dec 06 10:08:11 compute-0 systemd[1]: Started libcrun container.
Dec 06 10:08:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f74efe5d16a326fa79bcf23f81f6c6a5e37029b7baf856f9dd9ad2f1026d7b22/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 10:08:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f74efe5d16a326fa79bcf23f81f6c6a5e37029b7baf856f9dd9ad2f1026d7b22/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 10:08:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f74efe5d16a326fa79bcf23f81f6c6a5e37029b7baf856f9dd9ad2f1026d7b22/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 10:08:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f74efe5d16a326fa79bcf23f81f6c6a5e37029b7baf856f9dd9ad2f1026d7b22/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 10:08:11 compute-0 podman[264698]: 2025-12-06 10:08:11.283294652 +0000 UTC m=+0.187139758 container init 16fc117aa10ea493852fecccc933a261fbf5156a03d2bbada5c6f4d2cc35444b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_spence, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Dec 06 10:08:11 compute-0 podman[264698]: 2025-12-06 10:08:11.292699857 +0000 UTC m=+0.196544963 container start 16fc117aa10ea493852fecccc933a261fbf5156a03d2bbada5c6f4d2cc35444b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_spence, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True)
Dec 06 10:08:11 compute-0 podman[264698]: 2025-12-06 10:08:11.323860927 +0000 UTC m=+0.227706073 container attach 16fc117aa10ea493852fecccc933a261fbf5156a03d2bbada5c6f4d2cc35444b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_spence, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Dec 06 10:08:11 compute-0 xenodochial_spence[264714]: {
Dec 06 10:08:11 compute-0 xenodochial_spence[264714]:     "1": [
Dec 06 10:08:11 compute-0 xenodochial_spence[264714]:         {
Dec 06 10:08:11 compute-0 xenodochial_spence[264714]:             "devices": [
Dec 06 10:08:11 compute-0 xenodochial_spence[264714]:                 "/dev/loop3"
Dec 06 10:08:11 compute-0 xenodochial_spence[264714]:             ],
Dec 06 10:08:11 compute-0 xenodochial_spence[264714]:             "lv_name": "ceph_lv0",
Dec 06 10:08:11 compute-0 xenodochial_spence[264714]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 10:08:11 compute-0 xenodochial_spence[264714]:             "lv_size": "21470642176",
Dec 06 10:08:11 compute-0 xenodochial_spence[264714]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=a55pcS-v3Vt-CJ4W-fqCV-Baze-9VlY-jG9prS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=5ecd3f74-dade-5fc4-92ce-8950ae424258,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=7899c4d8-edb4-4836-b838-c4aa702ad7af,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 06 10:08:11 compute-0 xenodochial_spence[264714]:             "lv_uuid": "a55pcS-v3Vt-CJ4W-fqCV-Baze-9VlY-jG9prS",
Dec 06 10:08:11 compute-0 xenodochial_spence[264714]:             "name": "ceph_lv0",
Dec 06 10:08:11 compute-0 xenodochial_spence[264714]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 10:08:11 compute-0 xenodochial_spence[264714]:             "tags": {
Dec 06 10:08:11 compute-0 xenodochial_spence[264714]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 06 10:08:11 compute-0 xenodochial_spence[264714]:                 "ceph.block_uuid": "a55pcS-v3Vt-CJ4W-fqCV-Baze-9VlY-jG9prS",
Dec 06 10:08:11 compute-0 xenodochial_spence[264714]:                 "ceph.cephx_lockbox_secret": "",
Dec 06 10:08:11 compute-0 xenodochial_spence[264714]:                 "ceph.cluster_fsid": "5ecd3f74-dade-5fc4-92ce-8950ae424258",
Dec 06 10:08:11 compute-0 xenodochial_spence[264714]:                 "ceph.cluster_name": "ceph",
Dec 06 10:08:11 compute-0 xenodochial_spence[264714]:                 "ceph.crush_device_class": "",
Dec 06 10:08:11 compute-0 xenodochial_spence[264714]:                 "ceph.encrypted": "0",
Dec 06 10:08:11 compute-0 xenodochial_spence[264714]:                 "ceph.osd_fsid": "7899c4d8-edb4-4836-b838-c4aa702ad7af",
Dec 06 10:08:11 compute-0 xenodochial_spence[264714]:                 "ceph.osd_id": "1",
Dec 06 10:08:11 compute-0 xenodochial_spence[264714]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 06 10:08:11 compute-0 xenodochial_spence[264714]:                 "ceph.type": "block",
Dec 06 10:08:11 compute-0 xenodochial_spence[264714]:                 "ceph.vdo": "0",
Dec 06 10:08:11 compute-0 xenodochial_spence[264714]:                 "ceph.with_tpm": "0"
Dec 06 10:08:11 compute-0 xenodochial_spence[264714]:             },
Dec 06 10:08:11 compute-0 xenodochial_spence[264714]:             "type": "block",
Dec 06 10:08:11 compute-0 xenodochial_spence[264714]:             "vg_name": "ceph_vg0"
Dec 06 10:08:11 compute-0 xenodochial_spence[264714]:         }
Dec 06 10:08:11 compute-0 xenodochial_spence[264714]:     ]
Dec 06 10:08:11 compute-0 xenodochial_spence[264714]: }
Dec 06 10:08:11 compute-0 systemd[1]: libpod-16fc117aa10ea493852fecccc933a261fbf5156a03d2bbada5c6f4d2cc35444b.scope: Deactivated successfully.
Dec 06 10:08:11 compute-0 podman[264725]: 2025-12-06 10:08:11.651725459 +0000 UTC m=+0.025719535 container died 16fc117aa10ea493852fecccc933a261fbf5156a03d2bbada5c6f4d2cc35444b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_spence, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1)
Dec 06 10:08:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-f74efe5d16a326fa79bcf23f81f6c6a5e37029b7baf856f9dd9ad2f1026d7b22-merged.mount: Deactivated successfully.
Dec 06 10:08:11 compute-0 podman[264725]: 2025-12-06 10:08:11.850060667 +0000 UTC m=+0.224054723 container remove 16fc117aa10ea493852fecccc933a261fbf5156a03d2bbada5c6f4d2cc35444b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_spence, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 10:08:11 compute-0 systemd[1]: libpod-conmon-16fc117aa10ea493852fecccc933a261fbf5156a03d2bbada5c6f4d2cc35444b.scope: Deactivated successfully.
Dec 06 10:08:11 compute-0 sudo[264568]: pam_unix(sudo:session): session closed for user root
Dec 06 10:08:11 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v811: 337 pgs: 337 active+clean; 41 MiB data, 274 MiB used, 60 GiB / 60 GiB avail; 31 KiB/s rd, 6.0 KiB/s wr, 33 op/s
Dec 06 10:08:11 compute-0 sudo[264741]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 10:08:11 compute-0 sudo[264741]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:08:11 compute-0 sudo[264741]: pam_unix(sudo:session): session closed for user root
Dec 06 10:08:12 compute-0 sudo[264764]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 10:08:12 compute-0 sudo[264764]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:08:12 compute-0 sudo[264764]: pam_unix(sudo:session): session closed for user root
Dec 06 10:08:12 compute-0 sudo[264789]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 5ecd3f74-dade-5fc4-92ce-8950ae424258 -- raw list --format json
Dec 06 10:08:12 compute-0 sudo[264789]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:08:12 compute-0 ceph-mon[74327]: pgmap v811: 337 pgs: 337 active+clean; 41 MiB data, 274 MiB used, 60 GiB / 60 GiB avail; 31 KiB/s rd, 6.0 KiB/s wr, 33 op/s
Dec 06 10:08:12 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:08:12 : epoch 69340037 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f328c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:08:12 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:08:12 : epoch 69340037 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f327c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:08:12 compute-0 podman[264858]: 2025-12-06 10:08:12.557888635 +0000 UTC m=+0.024572674 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 06 10:08:12 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:08:12 compute-0 podman[264858]: 2025-12-06 10:08:12.919809636 +0000 UTC m=+0.386493655 container create 98f3af7ab5415812974c9dbd82ee8de1627d005bb8365556b81a4a20c9b36f75 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_volhard, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 10:08:12 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:08:12 : epoch 69340037 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3280001070 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:08:12 compute-0 systemd[1]: Started libpod-conmon-98f3af7ab5415812974c9dbd82ee8de1627d005bb8365556b81a4a20c9b36f75.scope.
Dec 06 10:08:13 compute-0 systemd[1]: Started libcrun container.
Dec 06 10:08:13 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:08:13 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:08:13 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:08:13.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:08:13 compute-0 podman[264858]: 2025-12-06 10:08:13.020382968 +0000 UTC m=+0.487067007 container init 98f3af7ab5415812974c9dbd82ee8de1627d005bb8365556b81a4a20c9b36f75 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_volhard, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 10:08:13 compute-0 podman[264858]: 2025-12-06 10:08:13.027009097 +0000 UTC m=+0.493693116 container start 98f3af7ab5415812974c9dbd82ee8de1627d005bb8365556b81a4a20c9b36f75 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_volhard, CEPH_REF=squid, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 06 10:08:13 compute-0 podman[264858]: 2025-12-06 10:08:13.0308283 +0000 UTC m=+0.497512319 container attach 98f3af7ab5415812974c9dbd82ee8de1627d005bb8365556b81a4a20c9b36f75 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_volhard, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Dec 06 10:08:13 compute-0 nervous_volhard[264876]: 167 167
Dec 06 10:08:13 compute-0 systemd[1]: libpod-98f3af7ab5415812974c9dbd82ee8de1627d005bb8365556b81a4a20c9b36f75.scope: Deactivated successfully.
Dec 06 10:08:13 compute-0 podman[264858]: 2025-12-06 10:08:13.032931687 +0000 UTC m=+0.499615706 container died 98f3af7ab5415812974c9dbd82ee8de1627d005bb8365556b81a4a20c9b36f75 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_volhard, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325)
Dec 06 10:08:13 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:08:13 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:08:13 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:08:13.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:08:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-95a6b7862ca183b57e2888201de853efa44462ccfd3479c4892ac80751aebdce-merged.mount: Deactivated successfully.
Dec 06 10:08:13 compute-0 podman[264858]: 2025-12-06 10:08:13.441318531 +0000 UTC m=+0.908002550 container remove 98f3af7ab5415812974c9dbd82ee8de1627d005bb8365556b81a4a20c9b36f75 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_volhard, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec 06 10:08:13 compute-0 systemd[1]: libpod-conmon-98f3af7ab5415812974c9dbd82ee8de1627d005bb8365556b81a4a20c9b36f75.scope: Deactivated successfully.
Dec 06 10:08:13 compute-0 podman[264904]: 2025-12-06 10:08:13.662740232 +0000 UTC m=+0.100122091 container create 432a28a73760208f80375a2d5d0562e89578e74c4378e582c9038be650106ae9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigorous_perlman, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 06 10:08:13 compute-0 podman[264904]: 2025-12-06 10:08:13.588673264 +0000 UTC m=+0.026055163 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 06 10:08:13 compute-0 systemd[1]: Started libpod-conmon-432a28a73760208f80375a2d5d0562e89578e74c4378e582c9038be650106ae9.scope.
Dec 06 10:08:13 compute-0 systemd[1]: Started libcrun container.
Dec 06 10:08:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a3af1b479af6573b073fea53032f56f695a422467639dc0961447056176a733/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 10:08:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a3af1b479af6573b073fea53032f56f695a422467639dc0961447056176a733/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 10:08:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a3af1b479af6573b073fea53032f56f695a422467639dc0961447056176a733/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 10:08:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a3af1b479af6573b073fea53032f56f695a422467639dc0961447056176a733/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 10:08:13 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v812: 337 pgs: 337 active+clean; 41 MiB data, 274 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 6.0 KiB/s wr, 32 op/s
Dec 06 10:08:13 compute-0 podman[264904]: 2025-12-06 10:08:13.923369951 +0000 UTC m=+0.360751900 container init 432a28a73760208f80375a2d5d0562e89578e74c4378e582c9038be650106ae9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigorous_perlman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 10:08:13 compute-0 podman[264904]: 2025-12-06 10:08:13.937512352 +0000 UTC m=+0.374894211 container start 432a28a73760208f80375a2d5d0562e89578e74c4378e582c9038be650106ae9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigorous_perlman, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 10:08:13 compute-0 podman[264904]: 2025-12-06 10:08:13.941136929 +0000 UTC m=+0.378518808 container attach 432a28a73760208f80375a2d5d0562e89578e74c4378e582c9038be650106ae9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigorous_perlman, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 10:08:14 compute-0 ceph-mon[74327]: pgmap v812: 337 pgs: 337 active+clean; 41 MiB data, 274 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 6.0 KiB/s wr, 32 op/s
Dec 06 10:08:14 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:08:14 : epoch 69340037 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3280001070 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:08:14 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:08:14 : epoch 69340037 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3280001070 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:08:14 compute-0 lvm[264995]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 06 10:08:14 compute-0 lvm[264995]: VG ceph_vg0 finished
Dec 06 10:08:14 compute-0 vigorous_perlman[264921]: {}
Dec 06 10:08:14 compute-0 systemd[1]: libpod-432a28a73760208f80375a2d5d0562e89578e74c4378e582c9038be650106ae9.scope: Deactivated successfully.
Dec 06 10:08:14 compute-0 systemd[1]: libpod-432a28a73760208f80375a2d5d0562e89578e74c4378e582c9038be650106ae9.scope: Consumed 1.140s CPU time.
Dec 06 10:08:14 compute-0 podman[264904]: 2025-12-06 10:08:14.679543723 +0000 UTC m=+1.116925642 container died 432a28a73760208f80375a2d5d0562e89578e74c4378e582c9038be650106ae9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigorous_perlman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 10:08:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-0a3af1b479af6573b073fea53032f56f695a422467639dc0961447056176a733-merged.mount: Deactivated successfully.
Dec 06 10:08:14 compute-0 podman[264904]: 2025-12-06 10:08:14.910537002 +0000 UTC m=+1.347918861 container remove 432a28a73760208f80375a2d5d0562e89578e74c4378e582c9038be650106ae9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigorous_perlman, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 06 10:08:14 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:08:14.916 162267 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=6, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '9a:dc:0d', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'b6:0a:c4:b8:be:39'}, ipsec=False) old=SB_Global(nb_cfg=5) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 10:08:14 compute-0 nova_compute[254819]: 2025-12-06 10:08:14.917 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:08:14 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:08:14.919 162267 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 06 10:08:14 compute-0 systemd[1]: libpod-conmon-432a28a73760208f80375a2d5d0562e89578e74c4378e582c9038be650106ae9.scope: Deactivated successfully.
Dec 06 10:08:14 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:08:14 : epoch 69340037 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f328c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:08:14 compute-0 sudo[264789]: pam_unix(sudo:session): session closed for user root
Dec 06 10:08:14 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 06 10:08:15 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:08:15 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:08:15 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:08:15.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:08:15 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 10:08:15 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 06 10:08:15 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:08:15 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:08:15 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:08:15.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:08:15 compute-0 nova_compute[254819]: 2025-12-06 10:08:15.174 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:08:15 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 10:08:15 compute-0 sudo[265011]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 10:08:15 compute-0 sudo[265011]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:08:15 compute-0 sudo[265011]: pam_unix(sudo:session): session closed for user root
Dec 06 10:08:15 compute-0 nova_compute[254819]: 2025-12-06 10:08:15.565 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:08:15 compute-0 nova_compute[254819]: 2025-12-06 10:08:15.691 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:08:15 compute-0 nova_compute[254819]: 2025-12-06 10:08:15.810 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:08:15 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v813: 337 pgs: 337 active+clean; 41 MiB data, 274 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 6.0 KiB/s wr, 32 op/s
Dec 06 10:08:16 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 10:08:16 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 10:08:16 compute-0 ceph-mon[74327]: pgmap v813: 337 pgs: 337 active+clean; 41 MiB data, 274 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 6.0 KiB/s wr, 32 op/s
Dec 06 10:08:16 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:08:16 : epoch 69340037 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3288002690 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:08:16 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:08:16 : epoch 69340037 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f327c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:08:16 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:08:16 : epoch 69340037 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f327c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:08:17 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:08:17 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:08:17 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:08:17.023 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:08:17 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:08:17 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 10:08:17 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:08:17.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 10:08:17 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:08:17.275Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec 06 10:08:17 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:08:17.275Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec 06 10:08:17 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:08:17.277Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec 06 10:08:17 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:08:17 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v814: 337 pgs: 337 active+clean; 41 MiB data, 274 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 6.0 KiB/s wr, 32 op/s
Dec 06 10:08:17 compute-0 ceph-mon[74327]: pgmap v814: 337 pgs: 337 active+clean; 41 MiB data, 274 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 6.0 KiB/s wr, 32 op/s
Dec 06 10:08:18 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:08:18 : epoch 69340037 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f328c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:08:18 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:08:18 : epoch 69340037 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3288002690 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:08:18 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:08:18.855Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 06 10:08:18 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:08:18 : epoch 69340037 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3280003490 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:08:19 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:08:19 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:08:19 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:08:19.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:08:19 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:08:19 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:08:19 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:08:19.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:08:19 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v815: 337 pgs: 337 active+clean; 41 MiB data, 274 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 5.2 KiB/s wr, 28 op/s
Dec 06 10:08:20 compute-0 nova_compute[254819]: 2025-12-06 10:08:20.148 254824 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765015685.1472943, 2ef62e22-52fc-44f3-9964-8dc9b3c20686 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 10:08:20 compute-0 nova_compute[254819]: 2025-12-06 10:08:20.149 254824 INFO nova.compute.manager [-] [instance: 2ef62e22-52fc-44f3-9964-8dc9b3c20686] VM Stopped (Lifecycle Event)
Dec 06 10:08:20 compute-0 nova_compute[254819]: 2025-12-06 10:08:20.231 254824 DEBUG nova.compute.manager [None req-609a1ee6-6c9e-4245-8c64-7e88cf684358 - - - - - -] [instance: 2ef62e22-52fc-44f3-9964-8dc9b3c20686] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 10:08:20 compute-0 nova_compute[254819]: 2025-12-06 10:08:20.231 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:08:20 compute-0 ceph-mon[74327]: pgmap v815: 337 pgs: 337 active+clean; 41 MiB data, 274 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 5.2 KiB/s wr, 28 op/s
Dec 06 10:08:20 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:08:20 : epoch 69340037 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f327c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:08:20 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:08:20 : epoch 69340037 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f328c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:08:20 compute-0 nova_compute[254819]: 2025-12-06 10:08:20.567 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:08:20 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:08:20] "GET /metrics HTTP/1.1" 200 48477 "" "Prometheus/2.51.0"
Dec 06 10:08:20 compute-0 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:08:20] "GET /metrics HTTP/1.1" 200 48477 "" "Prometheus/2.51.0"
Dec 06 10:08:20 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:08:20 : epoch 69340037 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3288002690 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:08:21 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:08:21 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 10:08:21 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:08:21.026 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 10:08:21 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:08:21 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:08:21 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:08:21.155 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:08:21 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v816: 337 pgs: 337 active+clean; 41 MiB data, 274 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 10:08:21 compute-0 ceph-mon[74327]: pgmap v816: 337 pgs: 337 active+clean; 41 MiB data, 274 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 10:08:22 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:08:22 : epoch 69340037 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3280003490 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:08:22 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:08:22 : epoch 69340037 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3280003490 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:08:22 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:08:22 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:08:22 : epoch 69340037 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f328c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:08:23 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:08:23 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:08:23 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:08:23.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:08:23 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:08:23 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 10:08:23 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:08:23.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 10:08:23 compute-0 ceph-mgr[74618]: [balancer INFO root] Optimize plan auto_2025-12-06_10:08:23
Dec 06 10:08:23 compute-0 ceph-mgr[74618]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 06 10:08:23 compute-0 ceph-mgr[74618]: [balancer INFO root] do_upmap
Dec 06 10:08:23 compute-0 ceph-mgr[74618]: [balancer INFO root] pools ['default.rgw.log', '.rgw.root', 'images', 'cephfs.cephfs.data', 'default.rgw.meta', 'backups', 'vms', 'volumes', 'default.rgw.control', '.mgr', 'cephfs.cephfs.meta', '.nfs']
Dec 06 10:08:23 compute-0 ceph-mgr[74618]: [balancer INFO root] prepared 0/10 upmap changes
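
[annotation] The balancer lines above record one optimization pass: mode upmap, a misplaced-ratio ceiling of 0.05, and "prepared 0/10 upmap changes" on an already even cluster. A paraphrase of that top-level guard (not the mgr module's actual code; calc_pg_upmaps is a stand-in for the OSDMap call of the same name):

    def calc_pg_upmaps(limit):
        # Stand-in: returns no pg-upmap-items changes on an already even
        # cluster, which is what "prepared 0/10 upmap changes" reports.
        return []

    def plan_upmap(misplaced_ratio, max_misplaced=0.05, max_optimizations=10):
        if misplaced_ratio >= max_misplaced:
            return []   # defer: too many objects already moving
        return calc_pg_upmaps(max_optimizations)

    print(len(plan_upmap(0.0)), "/ 10 upmap changes prepared")
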
Dec 06 10:08:23 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v817: 337 pgs: 337 active+clean; 41 MiB data, 274 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:08:23 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:08:23.921 162267 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=d39b5be8-d4cf-41c7-9a64-1ee03801f4e1, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '6'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 10:08:23 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 06 10:08:23 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 10:08:23 compute-0 ceph-mon[74327]: pgmap v817: 337 pgs: 337 active+clean; 41 MiB data, 274 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:08:23 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 10:08:23 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 10:08:23 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 10:08:24 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 10:08:24 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 10:08:24 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 10:08:24 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 10:08:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] _maybe_adjust
Dec 06 10:08:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:08:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 06 10:08:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:08:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec 06 10:08:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:08:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 10:08:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:08:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 10:08:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:08:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Dec 06 10:08:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:08:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Dec 06 10:08:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:08:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 10:08:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:08:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec 06 10:08:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:08:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 06 10:08:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:08:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec 06 10:08:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:08:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 10:08:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:08:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
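
[annotation] The pg_autoscaler numbers above reconstruct exactly: pg target = usage ratio x bias x cluster PG budget. With the assumption of 3 OSDs at the default mon_target_pg_per_osd=100 (so a budget of 300 — the module reads both values from the osdmap, neither appears in the log), the logged targets reproduce to full precision:

    POOL_BUDGET = 100 * 3   # mon_target_pg_per_osd * OSD count (assumed)

    def pg_target(usage_ratio, bias):
        return usage_ratio * bias * POOL_BUDGET

    # The 'images' pool line:
    print(pg_target(0.000665858301588852, 1.0))   # 0.19975749047665559
    # 'cephfs.cephfs.meta' with bias 4.0:
    print(pg_target(5.087256625643029e-07, 4.0))  # 0.0006104707950771635

"Quantized to 32 (current 32)" then means the raw target is rounded to a power of two but the pool is left alone: the autoscaler only changes pg_num when the ideal value differs from the current one by a large factor, so these sub-1 targets do not shrink anything.
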
Dec 06 10:08:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 06 10:08:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 10:08:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 10:08:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 10:08:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 10:08:24 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:08:24 : epoch 69340037 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f328c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:08:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 06 10:08:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 10:08:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 10:08:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 10:08:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 10:08:24 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:08:24 : epoch 69340037 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f32880032f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:08:24 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:08:24 : epoch 69340037 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f327c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:08:25 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:08:25 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:08:25 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:08:25.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:08:25 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:08:25 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:08:25 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:08:25.163 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:08:25 compute-0 nova_compute[254819]: 2025-12-06 10:08:25.261 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:08:25 compute-0 nova_compute[254819]: 2025-12-06 10:08:25.570 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:08:25 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v818: 337 pgs: 337 active+clean; 41 MiB data, 274 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 10:08:26 compute-0 ceph-mon[74327]: pgmap v818: 337 pgs: 337 active+clean; 41 MiB data, 274 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 10:08:26 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:08:26 : epoch 69340037 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f32800041a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:08:26 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:08:26 : epoch 69340037 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f328c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:08:26 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-crash-compute-0[79850]: ERROR:ceph-crash:Error scraping /var/lib/ceph/crash: [Errno 13] Permission denied: '/var/lib/ceph/crash'
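
[annotation] The ceph-crash error above is a plain EACCES: the crash-posting agent, which runs unprivileged, cannot list /var/lib/ceph/crash. A small reproduction of the failing call plus the ownership check one would typically make (a UID mismatch between the container's ceph user and the host directory is a common cause; that diagnosis is an assumption, not something the log states):

    import os
    import pwd
    import stat

    path = "/var/lib/ceph/crash"    # path from the log line
    try:
        os.listdir(path)            # the scrape that raises [Errno 13]
    except PermissionError as exc:
        st = os.stat(path)
        print(exc)
        print("owner:", pwd.getpwuid(st.st_uid).pw_name,
              "mode:", oct(stat.S_IMODE(st.st_mode)))
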
Dec 06 10:08:26 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:08:26 : epoch 69340037 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f32880032f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:08:27 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:08:27 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 10:08:27 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:08:27.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 10:08:27 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:08:27 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:08:27 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:08:27.166 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:08:27 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:08:27.278Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 06 10:08:27 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:08:27 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v819: 337 pgs: 337 active+clean; 41 MiB data, 274 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 10:08:27 compute-0 ceph-mon[74327]: pgmap v819: 337 pgs: 337 active+clean; 41 MiB data, 274 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 10:08:28 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:08:28 : epoch 69340037 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f327c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:08:28 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:08:28 : epoch 69340037 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f32800041a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:08:28 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:08:28.856Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 06 10:08:28 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:08:28 : epoch 69340037 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f328c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:08:29 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:08:29 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:08:29 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:08:29.037 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:08:29 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:08:29 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:08:29 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:08:29.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:08:29 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v820: 337 pgs: 337 active+clean; 41 MiB data, 274 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 10:08:29 compute-0 ceph-mon[74327]: pgmap v820: 337 pgs: 337 active+clean; 41 MiB data, 274 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 10:08:30 compute-0 nova_compute[254819]: 2025-12-06 10:08:30.284 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:08:30 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:08:30 : epoch 69340037 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f32880032f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:08:30 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:08:30 : epoch 69340037 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f327c003c30 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:08:30 compute-0 nova_compute[254819]: 2025-12-06 10:08:30.573 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:08:30 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:08:30] "GET /metrics HTTP/1.1" 200 48459 "" "Prometheus/2.51.0"
Dec 06 10:08:30 compute-0 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:08:30] "GET /metrics HTTP/1.1" 200 48459 "" "Prometheus/2.51.0"
Dec 06 10:08:30 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:08:30 : epoch 69340037 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f32800041a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:08:31 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:08:31 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:08:31 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:08:31.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:08:31 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:08:31 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:08:31 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:08:31.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:08:31 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v821: 337 pgs: 337 active+clean; 41 MiB data, 274 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 10:08:31 compute-0 ceph-mon[74327]: pgmap v821: 337 pgs: 337 active+clean; 41 MiB data, 274 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 10:08:32 compute-0 nova_compute[254819]: 2025-12-06 10:08:32.048 254824 DEBUG oslo_concurrency.lockutils [None req-6bf83856-7801-4aff-8483-fc1e22d37b14 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Acquiring lock "112440c2-8dcc-4a19-9d83-5489df97079a" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:08:32 compute-0 nova_compute[254819]: 2025-12-06 10:08:32.049 254824 DEBUG oslo_concurrency.lockutils [None req-6bf83856-7801-4aff-8483-fc1e22d37b14 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lock "112440c2-8dcc-4a19-9d83-5489df97079a" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:08:32 compute-0 nova_compute[254819]: 2025-12-06 10:08:32.069 254824 DEBUG nova.compute.manager [None req-6bf83856-7801-4aff-8483-fc1e22d37b14 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 112440c2-8dcc-4a19-9d83-5489df97079a] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 06 10:08:32 compute-0 sudo[265055]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 10:08:32 compute-0 sudo[265055]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:08:32 compute-0 sudo[265055]: pam_unix(sudo:session): session closed for user root
Dec 06 10:08:32 compute-0 nova_compute[254819]: 2025-12-06 10:08:32.146 254824 DEBUG oslo_concurrency.lockutils [None req-6bf83856-7801-4aff-8483-fc1e22d37b14 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:08:32 compute-0 nova_compute[254819]: 2025-12-06 10:08:32.146 254824 DEBUG oslo_concurrency.lockutils [None req-6bf83856-7801-4aff-8483-fc1e22d37b14 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:08:32 compute-0 nova_compute[254819]: 2025-12-06 10:08:32.155 254824 DEBUG nova.virt.hardware [None req-6bf83856-7801-4aff-8483-fc1e22d37b14 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 06 10:08:32 compute-0 nova_compute[254819]: 2025-12-06 10:08:32.156 254824 INFO nova.compute.claims [None req-6bf83856-7801-4aff-8483-fc1e22d37b14 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 112440c2-8dcc-4a19-9d83-5489df97079a] Claim successful on node compute-0.ctlplane.example.com
Dec 06 10:08:32 compute-0 podman[265079]: 2025-12-06 10:08:32.226769477 +0000 UTC m=+0.077152951 container health_status a9cf33c1bf7891b3fdd9db4717ad5f3587fe9e5acf9171920c6d6334b70a502a (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 10:08:32 compute-0 nova_compute[254819]: 2025-12-06 10:08:32.272 254824 DEBUG oslo_concurrency.processutils [None req-6bf83856-7801-4aff-8483-fc1e22d37b14 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 10:08:32 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:08:32 : epoch 69340037 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f328c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:08:32 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:08:32 : epoch 69340037 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f32880032f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:08:32 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 06 10:08:32 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1132300413' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 10:08:32 compute-0 nova_compute[254819]: 2025-12-06 10:08:32.712 254824 DEBUG oslo_concurrency.processutils [None req-6bf83856-7801-4aff-8483-fc1e22d37b14 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
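
[annotation] The two processutils lines above bracket nova shelling out to "ceph df --format=json" to refresh RBD pool capacity during the resource claim (the mon audit lines confirm the dispatch as client.openstack). The same query, reusing the command verbatim from the log; the two stats fields read here are from the standard ceph df JSON schema:

    import json
    import subprocess

    cmd = ["ceph", "df", "--format=json",
           "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"]
    out = subprocess.run(cmd, check=True,
                         capture_output=True, text=True).stdout
    df = json.loads(out)
    # Cluster totals; on this cluster they would mirror the pgmap lines
    # (60 GiB total, 274 MiB used).
    print(df["stats"]["total_bytes"], df["stats"]["total_avail_bytes"])
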
Dec 06 10:08:32 compute-0 nova_compute[254819]: 2025-12-06 10:08:32.719 254824 DEBUG nova.compute.provider_tree [None req-6bf83856-7801-4aff-8483-fc1e22d37b14 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Inventory has not changed in ProviderTree for provider: 06a9c7d1-c74c-47ea-9e97-16acfab6aa88 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 10:08:32 compute-0 nova_compute[254819]: 2025-12-06 10:08:32.739 254824 DEBUG nova.scheduler.client.report [None req-6bf83856-7801-4aff-8483-fc1e22d37b14 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Inventory has not changed for provider 06a9c7d1-c74c-47ea-9e97-16acfab6aa88 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
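
[annotation] The inventory dict above determines what the scheduler may place here: usable capacity per resource class is (total - reserved) x allocation_ratio. Worked out from the logged values:

    inv = {  # copied from the report line above
        "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, v in inv.items():
        print(rc, (v["total"] - v["reserved"]) * v["allocation_ratio"])
    # -> MEMORY_MB 7168.0, VCPU 32.0 (4x overcommit), DISK_GB 52.2
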
Dec 06 10:08:32 compute-0 nova_compute[254819]: 2025-12-06 10:08:32.771 254824 DEBUG oslo_concurrency.lockutils [None req-6bf83856-7801-4aff-8483-fc1e22d37b14 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.624s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:08:32 compute-0 nova_compute[254819]: 2025-12-06 10:08:32.771 254824 DEBUG nova.compute.manager [None req-6bf83856-7801-4aff-8483-fc1e22d37b14 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 112440c2-8dcc-4a19-9d83-5489df97079a] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 06 10:08:32 compute-0 nova_compute[254819]: 2025-12-06 10:08:32.842 254824 DEBUG nova.compute.manager [None req-6bf83856-7801-4aff-8483-fc1e22d37b14 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 112440c2-8dcc-4a19-9d83-5489df97079a] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 06 10:08:32 compute-0 nova_compute[254819]: 2025-12-06 10:08:32.843 254824 DEBUG nova.network.neutron [None req-6bf83856-7801-4aff-8483-fc1e22d37b14 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 112440c2-8dcc-4a19-9d83-5489df97079a] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 06 10:08:32 compute-0 nova_compute[254819]: 2025-12-06 10:08:32.872 254824 INFO nova.virt.libvirt.driver [None req-6bf83856-7801-4aff-8483-fc1e22d37b14 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 112440c2-8dcc-4a19-9d83-5489df97079a] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 06 10:08:32 compute-0 nova_compute[254819]: 2025-12-06 10:08:32.893 254824 DEBUG nova.compute.manager [None req-6bf83856-7801-4aff-8483-fc1e22d37b14 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 112440c2-8dcc-4a19-9d83-5489df97079a] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 06 10:08:32 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:08:32 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:08:32 : epoch 69340037 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f327c003c50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:08:32 compute-0 ceph-mon[74327]: from='client.? 192.168.122.100:0/1132300413' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 10:08:33 compute-0 nova_compute[254819]: 2025-12-06 10:08:33.005 254824 DEBUG nova.compute.manager [None req-6bf83856-7801-4aff-8483-fc1e22d37b14 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 112440c2-8dcc-4a19-9d83-5489df97079a] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 06 10:08:33 compute-0 nova_compute[254819]: 2025-12-06 10:08:33.006 254824 DEBUG nova.virt.libvirt.driver [None req-6bf83856-7801-4aff-8483-fc1e22d37b14 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 112440c2-8dcc-4a19-9d83-5489df97079a] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 06 10:08:33 compute-0 nova_compute[254819]: 2025-12-06 10:08:33.006 254824 INFO nova.virt.libvirt.driver [None req-6bf83856-7801-4aff-8483-fc1e22d37b14 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 112440c2-8dcc-4a19-9d83-5489df97079a] Creating image(s)
Dec 06 10:08:33 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:08:33 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:08:33 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:08:33.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:08:33 compute-0 nova_compute[254819]: 2025-12-06 10:08:33.048 254824 DEBUG nova.storage.rbd_utils [None req-6bf83856-7801-4aff-8483-fc1e22d37b14 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] rbd image 112440c2-8dcc-4a19-9d83-5489df97079a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 10:08:33 compute-0 nova_compute[254819]: 2025-12-06 10:08:33.092 254824 DEBUG nova.storage.rbd_utils [None req-6bf83856-7801-4aff-8483-fc1e22d37b14 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] rbd image 112440c2-8dcc-4a19-9d83-5489df97079a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 10:08:33 compute-0 nova_compute[254819]: 2025-12-06 10:08:33.130 254824 DEBUG nova.storage.rbd_utils [None req-6bf83856-7801-4aff-8483-fc1e22d37b14 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] rbd image 112440c2-8dcc-4a19-9d83-5489df97079a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 10:08:33 compute-0 nova_compute[254819]: 2025-12-06 10:08:33.135 254824 DEBUG oslo_concurrency.processutils [None req-6bf83856-7801-4aff-8483-fc1e22d37b14 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/1b7208203e670301d076a006cb3364d3eb842050 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 10:08:33 compute-0 nova_compute[254819]: 2025-12-06 10:08:33.160 254824 DEBUG nova.policy [None req-6bf83856-7801-4aff-8483-fc1e22d37b14 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '03615580775245e6ae335ee9d785611f', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '92b402c8d3e2476abc98be42a1e6d34e', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 06 10:08:33 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:08:33 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 10:08:33 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:08:33.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 10:08:33 compute-0 nova_compute[254819]: 2025-12-06 10:08:33.212 254824 DEBUG oslo_concurrency.processutils [None req-6bf83856-7801-4aff-8483-fc1e22d37b14 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/1b7208203e670301d076a006cb3364d3eb842050 --force-share --output=json" returned: 0 in 0.077s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 10:08:33 compute-0 nova_compute[254819]: 2025-12-06 10:08:33.213 254824 DEBUG oslo_concurrency.lockutils [None req-6bf83856-7801-4aff-8483-fc1e22d37b14 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Acquiring lock "1b7208203e670301d076a006cb3364d3eb842050" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:08:33 compute-0 nova_compute[254819]: 2025-12-06 10:08:33.213 254824 DEBUG oslo_concurrency.lockutils [None req-6bf83856-7801-4aff-8483-fc1e22d37b14 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lock "1b7208203e670301d076a006cb3364d3eb842050" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:08:33 compute-0 nova_compute[254819]: 2025-12-06 10:08:33.213 254824 DEBUG oslo_concurrency.lockutils [None req-6bf83856-7801-4aff-8483-fc1e22d37b14 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lock "1b7208203e670301d076a006cb3364d3eb842050" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:08:33 compute-0 nova_compute[254819]: 2025-12-06 10:08:33.246 254824 DEBUG nova.storage.rbd_utils [None req-6bf83856-7801-4aff-8483-fc1e22d37b14 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] rbd image 112440c2-8dcc-4a19-9d83-5489df97079a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 10:08:33 compute-0 nova_compute[254819]: 2025-12-06 10:08:33.252 254824 DEBUG oslo_concurrency.processutils [None req-6bf83856-7801-4aff-8483-fc1e22d37b14 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/1b7208203e670301d076a006cb3364d3eb842050 112440c2-8dcc-4a19-9d83-5489df97079a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 10:08:33 compute-0 nova_compute[254819]: 2025-12-06 10:08:33.555 254824 DEBUG oslo_concurrency.processutils [None req-6bf83856-7801-4aff-8483-fc1e22d37b14 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/1b7208203e670301d076a006cb3364d3eb842050 112440c2-8dcc-4a19-9d83-5489df97079a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.303s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 10:08:33 compute-0 nova_compute[254819]: 2025-12-06 10:08:33.653 254824 DEBUG nova.storage.rbd_utils [None req-6bf83856-7801-4aff-8483-fc1e22d37b14 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] resizing rbd image 112440c2-8dcc-4a19-9d83-5489df97079a_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
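
[annotation] The sequence above is nova's RBD image backend at work: probe the cached base image with qemu-img info, "rbd import" it into the vms pool, then grow the resulting image to the flavor's root disk (1073741824 bytes = 1 GiB, matching m1.nano's root_gb=1). An equivalent two-step sketch — the import command is verbatim from the log, while the resize is done here via the rbd CLI (whose --size defaults to MiB) rather than the librbd call nova actually uses:

    import subprocess

    base = ("/var/lib/nova/instances/_base/"
            "1b7208203e670301d076a006cb3364d3eb842050")
    disk = "112440c2-8dcc-4a19-9d83-5489df97079a_disk"

    # 1. Import the cached base image into the 'vms' pool.
    subprocess.run(["rbd", "import", "--pool", "vms", base, disk,
                    "--image-format=2", "--id", "openstack",
                    "--conf", "/etc/ceph/ceph.conf"], check=True)
    # 2. Grow it to 1 GiB (1024 MiB), as the resize log line records.
    subprocess.run(["rbd", "resize", "--pool", "vms", disk, "--size", "1024",
                    "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
                   check=True)
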
Dec 06 10:08:33 compute-0 nova_compute[254819]: 2025-12-06 10:08:33.787 254824 DEBUG nova.objects.instance [None req-6bf83856-7801-4aff-8483-fc1e22d37b14 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lazy-loading 'migration_context' on Instance uuid 112440c2-8dcc-4a19-9d83-5489df97079a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 10:08:33 compute-0 nova_compute[254819]: 2025-12-06 10:08:33.807 254824 DEBUG nova.virt.libvirt.driver [None req-6bf83856-7801-4aff-8483-fc1e22d37b14 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 112440c2-8dcc-4a19-9d83-5489df97079a] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 06 10:08:33 compute-0 nova_compute[254819]: 2025-12-06 10:08:33.808 254824 DEBUG nova.virt.libvirt.driver [None req-6bf83856-7801-4aff-8483-fc1e22d37b14 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 112440c2-8dcc-4a19-9d83-5489df97079a] Ensure instance console log exists: /var/lib/nova/instances/112440c2-8dcc-4a19-9d83-5489df97079a/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 06 10:08:33 compute-0 nova_compute[254819]: 2025-12-06 10:08:33.808 254824 DEBUG oslo_concurrency.lockutils [None req-6bf83856-7801-4aff-8483-fc1e22d37b14 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:08:33 compute-0 nova_compute[254819]: 2025-12-06 10:08:33.809 254824 DEBUG oslo_concurrency.lockutils [None req-6bf83856-7801-4aff-8483-fc1e22d37b14 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:08:33 compute-0 nova_compute[254819]: 2025-12-06 10:08:33.809 254824 DEBUG oslo_concurrency.lockutils [None req-6bf83856-7801-4aff-8483-fc1e22d37b14 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
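
[annotation] Every "Acquiring lock ... / Lock ... acquired / released" triple in this log is emitted by oslo.concurrency's lockutils wrapper (the "inner" in each line). The vgpu_resources lock above is held for 0.000s because this host has no mediated devices to allocate. Minimal usage of the same primitives (function body is illustrative):

    from oslo_concurrency import lockutils

    @lockutils.synchronized("vgpu_resources")
    def allocate_mdevs():
        return None   # nothing to do without vGPUs -> "held 0.000s"

    allocate_mdevs()

    with lockutils.lock("compute_resources"):
        pass          # resource-tracker style critical section
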
Dec 06 10:08:33 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v822: 337 pgs: 337 active+clean; 41 MiB data, 274 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:08:34 compute-0 ceph-mon[74327]: pgmap v822: 337 pgs: 337 active+clean; 41 MiB data, 274 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:08:34 compute-0 nova_compute[254819]: 2025-12-06 10:08:34.088 254824 DEBUG nova.network.neutron [None req-6bf83856-7801-4aff-8483-fc1e22d37b14 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 112440c2-8dcc-4a19-9d83-5489df97079a] Successfully created port: 2d0118f7-94f6-43f6-a67f-28e0faf9c3ae _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 06 10:08:34 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:08:34 : epoch 69340037 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f32800041a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:08:34 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:08:34 : epoch 69340037 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f328c003c30 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:08:34 compute-0 nova_compute[254819]: 2025-12-06 10:08:34.715 254824 DEBUG nova.network.neutron [None req-6bf83856-7801-4aff-8483-fc1e22d37b14 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 112440c2-8dcc-4a19-9d83-5489df97079a] Successfully updated port: 2d0118f7-94f6-43f6-a67f-28e0faf9c3ae _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 06 10:08:34 compute-0 nova_compute[254819]: 2025-12-06 10:08:34.728 254824 DEBUG oslo_concurrency.lockutils [None req-6bf83856-7801-4aff-8483-fc1e22d37b14 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Acquiring lock "refresh_cache-112440c2-8dcc-4a19-9d83-5489df97079a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 10:08:34 compute-0 nova_compute[254819]: 2025-12-06 10:08:34.728 254824 DEBUG oslo_concurrency.lockutils [None req-6bf83856-7801-4aff-8483-fc1e22d37b14 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Acquired lock "refresh_cache-112440c2-8dcc-4a19-9d83-5489df97079a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 10:08:34 compute-0 nova_compute[254819]: 2025-12-06 10:08:34.728 254824 DEBUG nova.network.neutron [None req-6bf83856-7801-4aff-8483-fc1e22d37b14 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 112440c2-8dcc-4a19-9d83-5489df97079a] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 06 10:08:34 compute-0 nova_compute[254819]: 2025-12-06 10:08:34.804 254824 DEBUG nova.compute.manager [req-e9ca9422-4334-410b-8d77-338b149a148c req-b2a025e6-3017-4194-a357-1d80c255e50c d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 112440c2-8dcc-4a19-9d83-5489df97079a] Received event network-changed-2d0118f7-94f6-43f6-a67f-28e0faf9c3ae external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 10:08:34 compute-0 nova_compute[254819]: 2025-12-06 10:08:34.804 254824 DEBUG nova.compute.manager [req-e9ca9422-4334-410b-8d77-338b149a148c req-b2a025e6-3017-4194-a357-1d80c255e50c d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 112440c2-8dcc-4a19-9d83-5489df97079a] Refreshing instance network info cache due to event network-changed-2d0118f7-94f6-43f6-a67f-28e0faf9c3ae. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 10:08:34 compute-0 nova_compute[254819]: 2025-12-06 10:08:34.805 254824 DEBUG oslo_concurrency.lockutils [req-e9ca9422-4334-410b-8d77-338b149a148c req-b2a025e6-3017-4194-a357-1d80c255e50c d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Acquiring lock "refresh_cache-112440c2-8dcc-4a19-9d83-5489df97079a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 10:08:34 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:08:34 : epoch 69340037 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f32880032f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:08:35 compute-0 nova_compute[254819]: 2025-12-06 10:08:35.021 254824 DEBUG nova.network.neutron [None req-6bf83856-7801-4aff-8483-fc1e22d37b14 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 112440c2-8dcc-4a19-9d83-5489df97079a] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 06 10:08:35 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:08:35 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:08:35 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:08:35.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:08:35 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:08:35 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:08:35 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:08:35.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:08:35 compute-0 nova_compute[254819]: 2025-12-06 10:08:35.287 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:08:35 compute-0 nova_compute[254819]: 2025-12-06 10:08:35.575 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:08:35 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v823: 337 pgs: 337 active+clean; 41 MiB data, 274 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 10:08:35 compute-0 ceph-mon[74327]: pgmap v823: 337 pgs: 337 active+clean; 41 MiB data, 274 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 10:08:36 compute-0 nova_compute[254819]: 2025-12-06 10:08:36.117 254824 DEBUG nova.network.neutron [None req-6bf83856-7801-4aff-8483-fc1e22d37b14 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 112440c2-8dcc-4a19-9d83-5489df97079a] Updating instance_info_cache with network_info: [{"id": "2d0118f7-94f6-43f6-a67f-28e0faf9c3ae", "address": "fa:16:3e:b4:37:0e", "network": {"id": "dccd9941-4f3e-4086-b9cd-651d8e99e8ec", "bridge": "br-int", "label": "tempest-network-smoke--1290241953", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2d0118f7-94", "ovs_interfaceid": "2d0118f7-94f6-43f6-a67f-28e0faf9c3ae", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 10:08:36 compute-0 nova_compute[254819]: 2025-12-06 10:08:36.140 254824 DEBUG oslo_concurrency.lockutils [None req-6bf83856-7801-4aff-8483-fc1e22d37b14 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Releasing lock "refresh_cache-112440c2-8dcc-4a19-9d83-5489df97079a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 10:08:36 compute-0 nova_compute[254819]: 2025-12-06 10:08:36.141 254824 DEBUG nova.compute.manager [None req-6bf83856-7801-4aff-8483-fc1e22d37b14 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 112440c2-8dcc-4a19-9d83-5489df97079a] Instance network_info: |[{"id": "2d0118f7-94f6-43f6-a67f-28e0faf9c3ae", "address": "fa:16:3e:b4:37:0e", "network": {"id": "dccd9941-4f3e-4086-b9cd-651d8e99e8ec", "bridge": "br-int", "label": "tempest-network-smoke--1290241953", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2d0118f7-94", "ovs_interfaceid": "2d0118f7-94f6-43f6-a67f-28e0faf9c3ae", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
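
[annotation] The network_info blob logged above is the VIF record nova will render into the guest XML. Pulling out the fields that matter for wiring (the JSON below is the logged element truncated to those fields; mtu 1442 is consistent with a 1500-byte physical MTU minus Geneve overlay overhead):

    import json

    vif = json.loads('''{"id": "2d0118f7-94f6-43f6-a67f-28e0faf9c3ae",
      "address": "fa:16:3e:b4:37:0e", "devname": "tap2d0118f7-94",
      "network": {"subnets": [{"cidr": "10.100.0.0/28",
        "ips": [{"address": "10.100.0.5"}], "meta": {"enable_dhcp": true}}],
        "meta": {"mtu": 1442, "tunneled": true}}}''')

    print(vif["address"], vif["devname"])                     # MAC, tap device
    print(vif["network"]["subnets"][0]["ips"][0]["address"])  # 10.100.0.5
    print(vif["network"]["meta"]["mtu"])                      # 1442
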
Dec 06 10:08:36 compute-0 nova_compute[254819]: 2025-12-06 10:08:36.142 254824 DEBUG oslo_concurrency.lockutils [req-e9ca9422-4334-410b-8d77-338b149a148c req-b2a025e6-3017-4194-a357-1d80c255e50c d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Acquired lock "refresh_cache-112440c2-8dcc-4a19-9d83-5489df97079a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 10:08:36 compute-0 nova_compute[254819]: 2025-12-06 10:08:36.142 254824 DEBUG nova.network.neutron [req-e9ca9422-4334-410b-8d77-338b149a148c req-b2a025e6-3017-4194-a357-1d80c255e50c d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 112440c2-8dcc-4a19-9d83-5489df97079a] Refreshing network info cache for port 2d0118f7-94f6-43f6-a67f-28e0faf9c3ae _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 06 10:08:36 compute-0 nova_compute[254819]: 2025-12-06 10:08:36.147 254824 DEBUG nova.virt.libvirt.driver [None req-6bf83856-7801-4aff-8483-fc1e22d37b14 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 112440c2-8dcc-4a19-9d83-5489df97079a] Start _get_guest_xml network_info=[{"id": "2d0118f7-94f6-43f6-a67f-28e0faf9c3ae", "address": "fa:16:3e:b4:37:0e", "network": {"id": "dccd9941-4f3e-4086-b9cd-651d8e99e8ec", "bridge": "br-int", "label": "tempest-network-smoke--1290241953", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2d0118f7-94", "ovs_interfaceid": "2d0118f7-94f6-43f6-a67f-28e0faf9c3ae", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T10:04:42Z,direct_url=<?>,disk_format='qcow2',id=9489b8a5-a798-4e26-87f9-59bb1eb2e6fd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3e0ab101ca7547d4a515169a0f2edef3',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T10:04:45Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'encrypted': False, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '9489b8a5-a798-4e26-87f9-59bb1eb2e6fd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 06 10:08:36 compute-0 nova_compute[254819]: 2025-12-06 10:08:36.152 254824 WARNING nova.virt.libvirt.driver [None req-6bf83856-7801-4aff-8483-fc1e22d37b14 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 10:08:36 compute-0 nova_compute[254819]: 2025-12-06 10:08:36.158 254824 DEBUG nova.virt.libvirt.host [None req-6bf83856-7801-4aff-8483-fc1e22d37b14 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 06 10:08:36 compute-0 nova_compute[254819]: 2025-12-06 10:08:36.159 254824 DEBUG nova.virt.libvirt.host [None req-6bf83856-7801-4aff-8483-fc1e22d37b14 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 06 10:08:36 compute-0 nova_compute[254819]: 2025-12-06 10:08:36.162 254824 DEBUG nova.virt.libvirt.host [None req-6bf83856-7801-4aff-8483-fc1e22d37b14 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 06 10:08:36 compute-0 nova_compute[254819]: 2025-12-06 10:08:36.163 254824 DEBUG nova.virt.libvirt.host [None req-6bf83856-7801-4aff-8483-fc1e22d37b14 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
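The two probes above check for a usable CPU cgroup controller, first on the legacy v1 hierarchy and then on the unified v2 hierarchy, which is what this host reports. A minimal standalone check along the same lines (a sketch against the standard kernel mount points, not Nova's actual helper):

    import os

    def has_cgroup_v1_cpu_controller():
        # cgroup v1 mounts each controller as its own hierarchy under /sys/fs/cgroup
        return os.path.isdir("/sys/fs/cgroup/cpu")

    def has_cgroup_v2_cpu_controller():
        # cgroup v2 lists the enabled controllers in one file at the unified mount
        try:
            with open("/sys/fs/cgroup/cgroup.controllers") as f:
                return "cpu" in f.read().split()
        except FileNotFoundError:
            return False

    print(has_cgroup_v1_cpu_controller(), has_cgroup_v2_cpu_controller())

On a pure cgroups-v2 host such as this one the first check fails and the second succeeds, matching the "CPU controller missing"/"CPU controller found" pair in the log.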
Dec 06 10:08:36 compute-0 nova_compute[254819]: 2025-12-06 10:08:36.164 254824 DEBUG nova.virt.libvirt.driver [None req-6bf83856-7801-4aff-8483-fc1e22d37b14 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 06 10:08:36 compute-0 nova_compute[254819]: 2025-12-06 10:08:36.164 254824 DEBUG nova.virt.hardware [None req-6bf83856-7801-4aff-8483-fc1e22d37b14 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T10:04:41Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='0a252b9c-cc5f-41b2-a8b2-94fcf6e74d22',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T10:04:42Z,direct_url=<?>,disk_format='qcow2',id=9489b8a5-a798-4e26-87f9-59bb1eb2e6fd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3e0ab101ca7547d4a515169a0f2edef3',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T10:04:45Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 06 10:08:36 compute-0 nova_compute[254819]: 2025-12-06 10:08:36.165 254824 DEBUG nova.virt.hardware [None req-6bf83856-7801-4aff-8483-fc1e22d37b14 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 06 10:08:36 compute-0 nova_compute[254819]: 2025-12-06 10:08:36.166 254824 DEBUG nova.virt.hardware [None req-6bf83856-7801-4aff-8483-fc1e22d37b14 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 06 10:08:36 compute-0 nova_compute[254819]: 2025-12-06 10:08:36.166 254824 DEBUG nova.virt.hardware [None req-6bf83856-7801-4aff-8483-fc1e22d37b14 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 06 10:08:36 compute-0 nova_compute[254819]: 2025-12-06 10:08:36.167 254824 DEBUG nova.virt.hardware [None req-6bf83856-7801-4aff-8483-fc1e22d37b14 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 06 10:08:36 compute-0 nova_compute[254819]: 2025-12-06 10:08:36.167 254824 DEBUG nova.virt.hardware [None req-6bf83856-7801-4aff-8483-fc1e22d37b14 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 06 10:08:36 compute-0 nova_compute[254819]: 2025-12-06 10:08:36.167 254824 DEBUG nova.virt.hardware [None req-6bf83856-7801-4aff-8483-fc1e22d37b14 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 06 10:08:36 compute-0 nova_compute[254819]: 2025-12-06 10:08:36.168 254824 DEBUG nova.virt.hardware [None req-6bf83856-7801-4aff-8483-fc1e22d37b14 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 06 10:08:36 compute-0 nova_compute[254819]: 2025-12-06 10:08:36.168 254824 DEBUG nova.virt.hardware [None req-6bf83856-7801-4aff-8483-fc1e22d37b14 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 06 10:08:36 compute-0 nova_compute[254819]: 2025-12-06 10:08:36.169 254824 DEBUG nova.virt.hardware [None req-6bf83856-7801-4aff-8483-fc1e22d37b14 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 06 10:08:36 compute-0 nova_compute[254819]: 2025-12-06 10:08:36.169 254824 DEBUG nova.virt.hardware [None req-6bf83856-7801-4aff-8483-fc1e22d37b14 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
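With no limits or preferences from flavor or image (all 0:0:0) and a ceiling of 65536 per dimension, the only topology whose sockets x cores x threads product equals the single vCPU is 1:1:1. A rough sketch of that enumeration, assuming the product must exactly match the vCPU count as the logged behaviour implies (an illustration, not Nova's exact algorithm):

    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536, max_threads=65536):
        # enumerate all (sockets, cores, threads) with sockets * cores * threads == vcpus
        topologies = []
        for sockets in range(1, min(vcpus, max_sockets) + 1):
            if vcpus % sockets:
                continue
            for cores in range(1, min(vcpus // sockets, max_cores) + 1):
                if (vcpus // sockets) % cores:
                    continue
                threads = vcpus // (sockets * cores)
                if threads <= max_threads:
                    topologies.append((sockets, cores, threads))
        return topologies

    print(possible_topologies(1))  # [(1, 1, 1)] -- the single topology the log reports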
Dec 06 10:08:36 compute-0 nova_compute[254819]: 2025-12-06 10:08:36.176 254824 DEBUG oslo_concurrency.processutils [None req-6bf83856-7801-4aff-8483-fc1e22d37b14 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 10:08:36 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:08:36 : epoch 69340037 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f327c003c70 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:08:36 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:08:36 : epoch 69340037 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f327c003c70 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:08:36 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 06 10:08:36 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/198037829' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 10:08:36 compute-0 nova_compute[254819]: 2025-12-06 10:08:36.693 254824 DEBUG oslo_concurrency.processutils [None req-6bf83856-7801-4aff-8483-fc1e22d37b14 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.518s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
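The "ceph mon dump" subprocess above is how the RBD image backend discovers the monitor endpoints that later show up as <host> entries in the guest disk XML. The same round trip, sketched with the standard library (the "mons" and "addr" JSON keys are typical of recent Ceph releases and may vary by version):

    import json
    import subprocess

    out = subprocess.run(
        ["ceph", "mon", "dump", "--format=json",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
        capture_output=True, text=True, check=True,
    ).stdout
    monmap = json.loads(out)
    # each monitor entry carries an address such as "192.168.122.100:6789/0"
    hosts = [mon["addr"].split("/")[0] for mon in monmap["mons"]]
    print(hosts)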
Dec 06 10:08:36 compute-0 nova_compute[254819]: 2025-12-06 10:08:36.725 254824 DEBUG nova.storage.rbd_utils [None req-6bf83856-7801-4aff-8483-fc1e22d37b14 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] rbd image 112440c2-8dcc-4a19-9d83-5489df97079a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 10:08:36 compute-0 nova_compute[254819]: 2025-12-06 10:08:36.731 254824 DEBUG oslo_concurrency.processutils [None req-6bf83856-7801-4aff-8483-fc1e22d37b14 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 10:08:36 compute-0 kernel: ganesha.nfsd[263832]: segfault at 50 ip 00007f335a2f032e sp 00007f3312ffc210 error 4 in libntirpc.so.5.8[7f335a2d5000+2c000] likely on CPU 4 (core 0, socket 4)
Dec 06 10:08:36 compute-0 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
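In the kernel's Code: dump, the byte in angle brackets is the instruction RIP points at. Disassembling from that byte onward (using Capstone purely as a convenience; the bytes are copied from the line above) shows a 4-byte load from offset 0x50 off r13, which together with "segfault at 50" indicates ganesha dereferenced a NULL structure pointer inside libntirpc:

    from capstone import Cs, CS_ARCH_X86, CS_MODE_64

    # bytes from the faulting instruction onward, per the Code: line above
    code = bytes.fromhex("458b6550" "498b7568" "418bbe28020000" "b940000000")
    md = Cs(CS_ARCH_X86, CS_MODE_64)
    for insn in md.disasm(code, 0x7F335A2F032E):
        print(f"{insn.address:#x}  {insn.mnemonic} {insn.op_str}")
    # first line: 0x7f335a2f032e  mov r12d, dword ptr [r13 + 0x50]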
Dec 06 10:08:36 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[262266]: 06/12/2025 10:08:36 : epoch 69340037 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f327c003c70 fd 39 proxy ignored for local
Dec 06 10:08:36 compute-0 systemd[1]: Started Process Core Dump (PID 265354/UID 0).
Dec 06 10:08:37 compute-0 ceph-mon[74327]: from='client.? 192.168.122.100:0/198037829' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 10:08:37 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:08:37 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:08:37 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:08:37.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:08:37 compute-0 podman[265355]: 2025-12-06 10:08:37.114728505 +0000 UTC m=+0.107203952 container health_status ed08b04f63d3695f0aa2f04fcc706897af4aa24f286fa3cd10a4323ee2c868ab (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, container_name=ovn_controller)
Dec 06 10:08:37 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:08:37 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:08:37 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:08:37.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:08:37 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 06 10:08:37 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3332401644' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 10:08:37 compute-0 nova_compute[254819]: 2025-12-06 10:08:37.242 254824 DEBUG oslo_concurrency.processutils [None req-6bf83856-7801-4aff-8483-fc1e22d37b14 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.511s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 10:08:37 compute-0 nova_compute[254819]: 2025-12-06 10:08:37.244 254824 DEBUG nova.virt.libvirt.vif [None req-6bf83856-7801-4aff-8483-fc1e22d37b14 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T10:08:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-609462386',display_name='tempest-TestNetworkBasicOps-server-609462386',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-609462386',id=4,image_ref='9489b8a5-a798-4e26-87f9-59bb1eb2e6fd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEB71wqy4Vx0ThrIuit7bIMfXK6YLKUBZN1lhipBZkl9t8qtDE6kg/NsSamOzTH/a+zjpG46+Awuo3QHJ780QH0C6lo/2uOHg18NVMuqh+pfDOXzTKYCxhRCIxLSg0ck4w==',key_name='tempest-TestNetworkBasicOps-1991615071',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='92b402c8d3e2476abc98be42a1e6d34e',ramdisk_id='',reservation_id='r-ykqs2wqw',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='9489b8a5-a798-4e26-87f9-59bb1eb2e6fd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1971100882',owner_user_name='tempest-TestNetworkBasicOps-1971100882-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T10:08:32Z,user_data=None,user_id='03615580775245e6ae335ee9d785611f',uuid=112440c2-8dcc-4a19-9d83-5489df97079a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2d0118f7-94f6-43f6-a67f-28e0faf9c3ae", "address": "fa:16:3e:b4:37:0e", "network": {"id": "dccd9941-4f3e-4086-b9cd-651d8e99e8ec", "bridge": "br-int", "label": "tempest-network-smoke--1290241953", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2d0118f7-94", "ovs_interfaceid": "2d0118f7-94f6-43f6-a67f-28e0faf9c3ae", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 06 10:08:37 compute-0 nova_compute[254819]: 2025-12-06 10:08:37.244 254824 DEBUG nova.network.os_vif_util [None req-6bf83856-7801-4aff-8483-fc1e22d37b14 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Converting VIF {"id": "2d0118f7-94f6-43f6-a67f-28e0faf9c3ae", "address": "fa:16:3e:b4:37:0e", "network": {"id": "dccd9941-4f3e-4086-b9cd-651d8e99e8ec", "bridge": "br-int", "label": "tempest-network-smoke--1290241953", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2d0118f7-94", "ovs_interfaceid": "2d0118f7-94f6-43f6-a67f-28e0faf9c3ae", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 10:08:37 compute-0 nova_compute[254819]: 2025-12-06 10:08:37.245 254824 DEBUG nova.network.os_vif_util [None req-6bf83856-7801-4aff-8483-fc1e22d37b14 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b4:37:0e,bridge_name='br-int',has_traffic_filtering=True,id=2d0118f7-94f6-43f6-a67f-28e0faf9c3ae,network=Network(dccd9941-4f3e-4086-b9cd-651d8e99e8ec),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2d0118f7-94') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 10:08:37 compute-0 nova_compute[254819]: 2025-12-06 10:08:37.246 254824 DEBUG nova.objects.instance [None req-6bf83856-7801-4aff-8483-fc1e22d37b14 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lazy-loading 'pci_devices' on Instance uuid 112440c2-8dcc-4a19-9d83-5489df97079a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 10:08:37 compute-0 nova_compute[254819]: 2025-12-06 10:08:37.262 254824 DEBUG nova.virt.libvirt.driver [None req-6bf83856-7801-4aff-8483-fc1e22d37b14 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 112440c2-8dcc-4a19-9d83-5489df97079a] End _get_guest_xml xml=<domain type="kvm">
Dec 06 10:08:37 compute-0 nova_compute[254819]:   <uuid>112440c2-8dcc-4a19-9d83-5489df97079a</uuid>
Dec 06 10:08:37 compute-0 nova_compute[254819]:   <name>instance-00000004</name>
Dec 06 10:08:37 compute-0 nova_compute[254819]:   <memory>131072</memory>
Dec 06 10:08:37 compute-0 nova_compute[254819]:   <vcpu>1</vcpu>
Dec 06 10:08:37 compute-0 nova_compute[254819]:   <metadata>
Dec 06 10:08:37 compute-0 nova_compute[254819]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 06 10:08:37 compute-0 nova_compute[254819]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 06 10:08:37 compute-0 nova_compute[254819]:       <nova:name>tempest-TestNetworkBasicOps-server-609462386</nova:name>
Dec 06 10:08:37 compute-0 nova_compute[254819]:       <nova:creationTime>2025-12-06 10:08:36</nova:creationTime>
Dec 06 10:08:37 compute-0 nova_compute[254819]:       <nova:flavor name="m1.nano">
Dec 06 10:08:37 compute-0 nova_compute[254819]:         <nova:memory>128</nova:memory>
Dec 06 10:08:37 compute-0 nova_compute[254819]:         <nova:disk>1</nova:disk>
Dec 06 10:08:37 compute-0 nova_compute[254819]:         <nova:swap>0</nova:swap>
Dec 06 10:08:37 compute-0 nova_compute[254819]:         <nova:ephemeral>0</nova:ephemeral>
Dec 06 10:08:37 compute-0 nova_compute[254819]:         <nova:vcpus>1</nova:vcpus>
Dec 06 10:08:37 compute-0 nova_compute[254819]:       </nova:flavor>
Dec 06 10:08:37 compute-0 nova_compute[254819]:       <nova:owner>
Dec 06 10:08:37 compute-0 nova_compute[254819]:         <nova:user uuid="03615580775245e6ae335ee9d785611f">tempest-TestNetworkBasicOps-1971100882-project-member</nova:user>
Dec 06 10:08:37 compute-0 nova_compute[254819]:         <nova:project uuid="92b402c8d3e2476abc98be42a1e6d34e">tempest-TestNetworkBasicOps-1971100882</nova:project>
Dec 06 10:08:37 compute-0 nova_compute[254819]:       </nova:owner>
Dec 06 10:08:37 compute-0 nova_compute[254819]:       <nova:root type="image" uuid="9489b8a5-a798-4e26-87f9-59bb1eb2e6fd"/>
Dec 06 10:08:37 compute-0 nova_compute[254819]:       <nova:ports>
Dec 06 10:08:37 compute-0 nova_compute[254819]:         <nova:port uuid="2d0118f7-94f6-43f6-a67f-28e0faf9c3ae">
Dec 06 10:08:37 compute-0 nova_compute[254819]:           <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Dec 06 10:08:37 compute-0 nova_compute[254819]:         </nova:port>
Dec 06 10:08:37 compute-0 nova_compute[254819]:       </nova:ports>
Dec 06 10:08:37 compute-0 nova_compute[254819]:     </nova:instance>
Dec 06 10:08:37 compute-0 nova_compute[254819]:   </metadata>
Dec 06 10:08:37 compute-0 nova_compute[254819]:   <sysinfo type="smbios">
Dec 06 10:08:37 compute-0 nova_compute[254819]:     <system>
Dec 06 10:08:37 compute-0 nova_compute[254819]:       <entry name="manufacturer">RDO</entry>
Dec 06 10:08:37 compute-0 nova_compute[254819]:       <entry name="product">OpenStack Compute</entry>
Dec 06 10:08:37 compute-0 nova_compute[254819]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 06 10:08:37 compute-0 nova_compute[254819]:       <entry name="serial">112440c2-8dcc-4a19-9d83-5489df97079a</entry>
Dec 06 10:08:37 compute-0 nova_compute[254819]:       <entry name="uuid">112440c2-8dcc-4a19-9d83-5489df97079a</entry>
Dec 06 10:08:37 compute-0 nova_compute[254819]:       <entry name="family">Virtual Machine</entry>
Dec 06 10:08:37 compute-0 nova_compute[254819]:     </system>
Dec 06 10:08:37 compute-0 nova_compute[254819]:   </sysinfo>
Dec 06 10:08:37 compute-0 nova_compute[254819]:   <os>
Dec 06 10:08:37 compute-0 nova_compute[254819]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 06 10:08:37 compute-0 nova_compute[254819]:     <boot dev="hd"/>
Dec 06 10:08:37 compute-0 nova_compute[254819]:     <smbios mode="sysinfo"/>
Dec 06 10:08:37 compute-0 nova_compute[254819]:   </os>
Dec 06 10:08:37 compute-0 nova_compute[254819]:   <features>
Dec 06 10:08:37 compute-0 nova_compute[254819]:     <acpi/>
Dec 06 10:08:37 compute-0 nova_compute[254819]:     <apic/>
Dec 06 10:08:37 compute-0 nova_compute[254819]:     <vmcoreinfo/>
Dec 06 10:08:37 compute-0 nova_compute[254819]:   </features>
Dec 06 10:08:37 compute-0 nova_compute[254819]:   <clock offset="utc">
Dec 06 10:08:37 compute-0 nova_compute[254819]:     <timer name="pit" tickpolicy="delay"/>
Dec 06 10:08:37 compute-0 nova_compute[254819]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 06 10:08:37 compute-0 nova_compute[254819]:     <timer name="hpet" present="no"/>
Dec 06 10:08:37 compute-0 nova_compute[254819]:   </clock>
Dec 06 10:08:37 compute-0 nova_compute[254819]:   <cpu mode="host-model" match="exact">
Dec 06 10:08:37 compute-0 nova_compute[254819]:     <topology sockets="1" cores="1" threads="1"/>
Dec 06 10:08:37 compute-0 nova_compute[254819]:   </cpu>
Dec 06 10:08:37 compute-0 nova_compute[254819]:   <devices>
Dec 06 10:08:37 compute-0 nova_compute[254819]:     <disk type="network" device="disk">
Dec 06 10:08:37 compute-0 nova_compute[254819]:       <driver type="raw" cache="none"/>
Dec 06 10:08:37 compute-0 nova_compute[254819]:       <source protocol="rbd" name="vms/112440c2-8dcc-4a19-9d83-5489df97079a_disk">
Dec 06 10:08:37 compute-0 nova_compute[254819]:         <host name="192.168.122.100" port="6789"/>
Dec 06 10:08:37 compute-0 nova_compute[254819]:         <host name="192.168.122.102" port="6789"/>
Dec 06 10:08:37 compute-0 nova_compute[254819]:         <host name="192.168.122.101" port="6789"/>
Dec 06 10:08:37 compute-0 nova_compute[254819]:       </source>
Dec 06 10:08:37 compute-0 nova_compute[254819]:       <auth username="openstack">
Dec 06 10:08:37 compute-0 nova_compute[254819]:         <secret type="ceph" uuid="5ecd3f74-dade-5fc4-92ce-8950ae424258"/>
Dec 06 10:08:37 compute-0 nova_compute[254819]:       </auth>
Dec 06 10:08:37 compute-0 nova_compute[254819]:       <target dev="vda" bus="virtio"/>
Dec 06 10:08:37 compute-0 nova_compute[254819]:     </disk>
Dec 06 10:08:37 compute-0 nova_compute[254819]:     <disk type="network" device="cdrom">
Dec 06 10:08:37 compute-0 nova_compute[254819]:       <driver type="raw" cache="none"/>
Dec 06 10:08:37 compute-0 nova_compute[254819]:       <source protocol="rbd" name="vms/112440c2-8dcc-4a19-9d83-5489df97079a_disk.config">
Dec 06 10:08:37 compute-0 nova_compute[254819]:         <host name="192.168.122.100" port="6789"/>
Dec 06 10:08:37 compute-0 nova_compute[254819]:         <host name="192.168.122.102" port="6789"/>
Dec 06 10:08:37 compute-0 nova_compute[254819]:         <host name="192.168.122.101" port="6789"/>
Dec 06 10:08:37 compute-0 nova_compute[254819]:       </source>
Dec 06 10:08:37 compute-0 nova_compute[254819]:       <auth username="openstack">
Dec 06 10:08:37 compute-0 nova_compute[254819]:         <secret type="ceph" uuid="5ecd3f74-dade-5fc4-92ce-8950ae424258"/>
Dec 06 10:08:37 compute-0 nova_compute[254819]:       </auth>
Dec 06 10:08:37 compute-0 nova_compute[254819]:       <target dev="sda" bus="sata"/>
Dec 06 10:08:37 compute-0 nova_compute[254819]:     </disk>
Dec 06 10:08:37 compute-0 nova_compute[254819]:     <interface type="ethernet">
Dec 06 10:08:37 compute-0 nova_compute[254819]:       <mac address="fa:16:3e:b4:37:0e"/>
Dec 06 10:08:37 compute-0 nova_compute[254819]:       <model type="virtio"/>
Dec 06 10:08:37 compute-0 nova_compute[254819]:       <driver name="vhost" rx_queue_size="512"/>
Dec 06 10:08:37 compute-0 nova_compute[254819]:       <mtu size="1442"/>
Dec 06 10:08:37 compute-0 nova_compute[254819]:       <target dev="tap2d0118f7-94"/>
Dec 06 10:08:37 compute-0 nova_compute[254819]:     </interface>
Dec 06 10:08:37 compute-0 nova_compute[254819]:     <serial type="pty">
Dec 06 10:08:37 compute-0 nova_compute[254819]:       <log file="/var/lib/nova/instances/112440c2-8dcc-4a19-9d83-5489df97079a/console.log" append="off"/>
Dec 06 10:08:37 compute-0 nova_compute[254819]:     </serial>
Dec 06 10:08:37 compute-0 nova_compute[254819]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 06 10:08:37 compute-0 nova_compute[254819]:     <video>
Dec 06 10:08:37 compute-0 nova_compute[254819]:       <model type="virtio"/>
Dec 06 10:08:37 compute-0 nova_compute[254819]:     </video>
Dec 06 10:08:37 compute-0 nova_compute[254819]:     <input type="tablet" bus="usb"/>
Dec 06 10:08:37 compute-0 nova_compute[254819]:     <rng model="virtio">
Dec 06 10:08:37 compute-0 nova_compute[254819]:       <backend model="random">/dev/urandom</backend>
Dec 06 10:08:37 compute-0 nova_compute[254819]:     </rng>
Dec 06 10:08:37 compute-0 nova_compute[254819]:     <controller type="pci" model="pcie-root"/>
Dec 06 10:08:37 compute-0 nova_compute[254819]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:08:37 compute-0 nova_compute[254819]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:08:37 compute-0 nova_compute[254819]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:08:37 compute-0 nova_compute[254819]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:08:37 compute-0 nova_compute[254819]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:08:37 compute-0 nova_compute[254819]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:08:37 compute-0 nova_compute[254819]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:08:37 compute-0 nova_compute[254819]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:08:37 compute-0 nova_compute[254819]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:08:37 compute-0 nova_compute[254819]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:08:37 compute-0 nova_compute[254819]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:08:37 compute-0 nova_compute[254819]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:08:37 compute-0 nova_compute[254819]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:08:37 compute-0 nova_compute[254819]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:08:37 compute-0 nova_compute[254819]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:08:37 compute-0 nova_compute[254819]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:08:37 compute-0 nova_compute[254819]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:08:37 compute-0 nova_compute[254819]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:08:37 compute-0 nova_compute[254819]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:08:37 compute-0 nova_compute[254819]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:08:37 compute-0 nova_compute[254819]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:08:37 compute-0 nova_compute[254819]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:08:37 compute-0 nova_compute[254819]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:08:37 compute-0 nova_compute[254819]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:08:37 compute-0 nova_compute[254819]:     <controller type="usb" index="0"/>
Dec 06 10:08:37 compute-0 nova_compute[254819]:     <memballoon model="virtio">
Dec 06 10:08:37 compute-0 nova_compute[254819]:       <stats period="10"/>
Dec 06 10:08:37 compute-0 nova_compute[254819]:     </memballoon>
Dec 06 10:08:37 compute-0 nova_compute[254819]:   </devices>
Dec 06 10:08:37 compute-0 nova_compute[254819]: </domain>
Dec 06 10:08:37 compute-0 nova_compute[254819]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
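Since the full domain XML is logged, it can be sanity-checked offline; for instance, pulling the RBD disks and their monitor hosts back out with the standard library (a verification sketch assuming the XML above has been saved to domain.xml; Nova itself does nothing of the sort at this point):

    import xml.etree.ElementTree as ET

    root = ET.parse("domain.xml").getroot()
    for disk in root.findall("./devices/disk"):
        source = disk.find("source")
        if source is not None and source.get("protocol") == "rbd":
            hosts = [f"{h.get('name')}:{h.get('port')}" for h in source.findall("host")]
            print(source.get("name"), hosts)
    # vms/112440c2-8dcc-4a19-9d83-5489df97079a_disk ['192.168.122.100:6789', ...]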
Dec 06 10:08:37 compute-0 nova_compute[254819]: 2025-12-06 10:08:37.264 254824 DEBUG nova.compute.manager [None req-6bf83856-7801-4aff-8483-fc1e22d37b14 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 112440c2-8dcc-4a19-9d83-5489df97079a] Preparing to wait for external event network-vif-plugged-2d0118f7-94f6-43f6-a67f-28e0faf9c3ae prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 06 10:08:37 compute-0 nova_compute[254819]: 2025-12-06 10:08:37.264 254824 DEBUG oslo_concurrency.lockutils [None req-6bf83856-7801-4aff-8483-fc1e22d37b14 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Acquiring lock "112440c2-8dcc-4a19-9d83-5489df97079a-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:08:37 compute-0 nova_compute[254819]: 2025-12-06 10:08:37.264 254824 DEBUG oslo_concurrency.lockutils [None req-6bf83856-7801-4aff-8483-fc1e22d37b14 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lock "112440c2-8dcc-4a19-9d83-5489df97079a-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:08:37 compute-0 nova_compute[254819]: 2025-12-06 10:08:37.265 254824 DEBUG oslo_concurrency.lockutils [None req-6bf83856-7801-4aff-8483-fc1e22d37b14 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lock "112440c2-8dcc-4a19-9d83-5489df97079a-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:08:37 compute-0 nova_compute[254819]: 2025-12-06 10:08:37.265 254824 DEBUG nova.virt.libvirt.vif [None req-6bf83856-7801-4aff-8483-fc1e22d37b14 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T10:08:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-609462386',display_name='tempest-TestNetworkBasicOps-server-609462386',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-609462386',id=4,image_ref='9489b8a5-a798-4e26-87f9-59bb1eb2e6fd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEB71wqy4Vx0ThrIuit7bIMfXK6YLKUBZN1lhipBZkl9t8qtDE6kg/NsSamOzTH/a+zjpG46+Awuo3QHJ780QH0C6lo/2uOHg18NVMuqh+pfDOXzTKYCxhRCIxLSg0ck4w==',key_name='tempest-TestNetworkBasicOps-1991615071',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='92b402c8d3e2476abc98be42a1e6d34e',ramdisk_id='',reservation_id='r-ykqs2wqw',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='9489b8a5-a798-4e26-87f9-59bb1eb2e6fd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1971100882',owner_user_name='tempest-TestNetworkBasicOps-1971100882-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T10:08:32Z,user_data=None,user_id='03615580775245e6ae335ee9d785611f',uuid=112440c2-8dcc-4a19-9d83-5489df97079a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2d0118f7-94f6-43f6-a67f-28e0faf9c3ae", "address": "fa:16:3e:b4:37:0e", "network": {"id": "dccd9941-4f3e-4086-b9cd-651d8e99e8ec", "bridge": "br-int", "label": "tempest-network-smoke--1290241953", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2d0118f7-94", "ovs_interfaceid": "2d0118f7-94f6-43f6-a67f-28e0faf9c3ae", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 06 10:08:37 compute-0 nova_compute[254819]: 2025-12-06 10:08:37.265 254824 DEBUG nova.network.os_vif_util [None req-6bf83856-7801-4aff-8483-fc1e22d37b14 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Converting VIF {"id": "2d0118f7-94f6-43f6-a67f-28e0faf9c3ae", "address": "fa:16:3e:b4:37:0e", "network": {"id": "dccd9941-4f3e-4086-b9cd-651d8e99e8ec", "bridge": "br-int", "label": "tempest-network-smoke--1290241953", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2d0118f7-94", "ovs_interfaceid": "2d0118f7-94f6-43f6-a67f-28e0faf9c3ae", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 10:08:37 compute-0 nova_compute[254819]: 2025-12-06 10:08:37.266 254824 DEBUG nova.network.os_vif_util [None req-6bf83856-7801-4aff-8483-fc1e22d37b14 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b4:37:0e,bridge_name='br-int',has_traffic_filtering=True,id=2d0118f7-94f6-43f6-a67f-28e0faf9c3ae,network=Network(dccd9941-4f3e-4086-b9cd-651d8e99e8ec),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2d0118f7-94') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 10:08:37 compute-0 nova_compute[254819]: 2025-12-06 10:08:37.266 254824 DEBUG os_vif [None req-6bf83856-7801-4aff-8483-fc1e22d37b14 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b4:37:0e,bridge_name='br-int',has_traffic_filtering=True,id=2d0118f7-94f6-43f6-a67f-28e0faf9c3ae,network=Network(dccd9941-4f3e-4086-b9cd-651d8e99e8ec),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2d0118f7-94') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 06 10:08:37 compute-0 nova_compute[254819]: 2025-12-06 10:08:37.267 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:08:37 compute-0 nova_compute[254819]: 2025-12-06 10:08:37.267 254824 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 10:08:37 compute-0 nova_compute[254819]: 2025-12-06 10:08:37.268 254824 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 10:08:37 compute-0 nova_compute[254819]: 2025-12-06 10:08:37.271 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:08:37 compute-0 nova_compute[254819]: 2025-12-06 10:08:37.271 254824 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2d0118f7-94, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 10:08:37 compute-0 nova_compute[254819]: 2025-12-06 10:08:37.271 254824 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap2d0118f7-94, col_values=(('external_ids', {'iface-id': '2d0118f7-94f6-43f6-a67f-28e0faf9c3ae', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:b4:37:0e', 'vm-uuid': '112440c2-8dcc-4a19-9d83-5489df97079a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 10:08:37 compute-0 nova_compute[254819]: 2025-12-06 10:08:37.273 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:08:37 compute-0 NetworkManager[48882]: <info>  [1765015717.2739] manager: (tap2d0118f7-94): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/42)
Dec 06 10:08:37 compute-0 nova_compute[254819]: 2025-12-06 10:08:37.275 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 10:08:37 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:08:37.279Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec 06 10:08:37 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:08:37.279Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
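Both webhook receivers fail at the TCP layer ("dial tcp ... i/o timeout"), so the dashboard endpoints on compute-1/compute-2 are unreachable from this node, not merely slow to answer. A minimal reachability probe for the same host:port pairs:

    import socket

    for host in ("compute-1.ctlplane.example.com", "compute-2.ctlplane.example.com"):
        try:
            socket.create_connection((host, 8443), timeout=3).close()
            print(host, "port 8443 reachable")
        except OSError as exc:
            print(host, "port 8443 unreachable:", exc)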
Dec 06 10:08:37 compute-0 nova_compute[254819]: 2025-12-06 10:08:37.284 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:08:37 compute-0 nova_compute[254819]: 2025-12-06 10:08:37.285 254824 INFO os_vif [None req-6bf83856-7801-4aff-8483-fc1e22d37b14 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b4:37:0e,bridge_name='br-int',has_traffic_filtering=True,id=2d0118f7-94f6-43f6-a67f-28e0faf9c3ae,network=Network(dccd9941-4f3e-4086-b9cd-651d8e99e8ec),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2d0118f7-94')
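The successful plug above is the sum of three ovsdbapp commands: an idempotent AddBridgeCommand for br-int (a no-op here, per "Transaction caused no change"), an AddPortCommand for the tap device, and a DbSetCommand writing external_ids on the Interface row so ovn-controller can map the port to its logical port. The ovs-vsctl equivalent, wrapped in subprocess for illustration (all names copied from the log):

    import subprocess

    port = "tap2d0118f7-94"
    subprocess.run(
        ["ovs-vsctl", "--may-exist", "add-br", "br-int",
         "--", "set", "Bridge", "br-int", "datapath_type=system"],
        check=True)
    subprocess.run(
        ["ovs-vsctl", "--may-exist", "add-port", "br-int", port,
         "--", "set", "Interface", port,
         "external_ids:iface-id=2d0118f7-94f6-43f6-a67f-28e0faf9c3ae",
         "external_ids:iface-status=active",
         "external_ids:attached-mac=fa:16:3e:b4:37:0e",
         "external_ids:vm-uuid=112440c2-8dcc-4a19-9d83-5489df97079a"],
        check=True)

It is the iface-id value that ovn-controller matches against the southbound Port_Binding, which is why the "Claiming lport 2d0118f7-..." lines appear further down once the interface exists.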
Dec 06 10:08:37 compute-0 nova_compute[254819]: 2025-12-06 10:08:37.343 254824 DEBUG nova.virt.libvirt.driver [None req-6bf83856-7801-4aff-8483-fc1e22d37b14 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 10:08:37 compute-0 nova_compute[254819]: 2025-12-06 10:08:37.343 254824 DEBUG nova.virt.libvirt.driver [None req-6bf83856-7801-4aff-8483-fc1e22d37b14 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 10:08:37 compute-0 nova_compute[254819]: 2025-12-06 10:08:37.343 254824 DEBUG nova.virt.libvirt.driver [None req-6bf83856-7801-4aff-8483-fc1e22d37b14 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] No VIF found with MAC fa:16:3e:b4:37:0e, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 06 10:08:37 compute-0 nova_compute[254819]: 2025-12-06 10:08:37.344 254824 INFO nova.virt.libvirt.driver [None req-6bf83856-7801-4aff-8483-fc1e22d37b14 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 112440c2-8dcc-4a19-9d83-5489df97079a] Using config drive
Dec 06 10:08:37 compute-0 nova_compute[254819]: 2025-12-06 10:08:37.369 254824 DEBUG nova.storage.rbd_utils [None req-6bf83856-7801-4aff-8483-fc1e22d37b14 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] rbd image 112440c2-8dcc-4a19-9d83-5489df97079a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 10:08:37 compute-0 nova_compute[254819]: 2025-12-06 10:08:37.631 254824 DEBUG nova.network.neutron [req-e9ca9422-4334-410b-8d77-338b149a148c req-b2a025e6-3017-4194-a357-1d80c255e50c d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 112440c2-8dcc-4a19-9d83-5489df97079a] Updated VIF entry in instance network info cache for port 2d0118f7-94f6-43f6-a67f-28e0faf9c3ae. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 06 10:08:37 compute-0 nova_compute[254819]: 2025-12-06 10:08:37.632 254824 DEBUG nova.network.neutron [req-e9ca9422-4334-410b-8d77-338b149a148c req-b2a025e6-3017-4194-a357-1d80c255e50c d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 112440c2-8dcc-4a19-9d83-5489df97079a] Updating instance_info_cache with network_info: [{"id": "2d0118f7-94f6-43f6-a67f-28e0faf9c3ae", "address": "fa:16:3e:b4:37:0e", "network": {"id": "dccd9941-4f3e-4086-b9cd-651d8e99e8ec", "bridge": "br-int", "label": "tempest-network-smoke--1290241953", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2d0118f7-94", "ovs_interfaceid": "2d0118f7-94f6-43f6-a67f-28e0faf9c3ae", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 10:08:37 compute-0 nova_compute[254819]: 2025-12-06 10:08:37.647 254824 DEBUG oslo_concurrency.lockutils [req-e9ca9422-4334-410b-8d77-338b149a148c req-b2a025e6-3017-4194-a357-1d80c255e50c d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Releasing lock "refresh_cache-112440c2-8dcc-4a19-9d83-5489df97079a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 10:08:37 compute-0 nova_compute[254819]: 2025-12-06 10:08:37.684 254824 INFO nova.virt.libvirt.driver [None req-6bf83856-7801-4aff-8483-fc1e22d37b14 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 112440c2-8dcc-4a19-9d83-5489df97079a] Creating config drive at /var/lib/nova/instances/112440c2-8dcc-4a19-9d83-5489df97079a/disk.config
Dec 06 10:08:37 compute-0 nova_compute[254819]: 2025-12-06 10:08:37.688 254824 DEBUG oslo_concurrency.processutils [None req-6bf83856-7801-4aff-8483-fc1e22d37b14 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/112440c2-8dcc-4a19-9d83-5489df97079a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpprc8iw27 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 10:08:37 compute-0 nova_compute[254819]: 2025-12-06 10:08:37.824 254824 DEBUG oslo_concurrency.processutils [None req-6bf83856-7801-4aff-8483-fc1e22d37b14 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/112440c2-8dcc-4a19-9d83-5489df97079a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpprc8iw27" returned: 0 in 0.136s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 10:08:37 compute-0 nova_compute[254819]: 2025-12-06 10:08:37.853 254824 DEBUG nova.storage.rbd_utils [None req-6bf83856-7801-4aff-8483-fc1e22d37b14 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] rbd image 112440c2-8dcc-4a19-9d83-5489df97079a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 10:08:37 compute-0 nova_compute[254819]: 2025-12-06 10:08:37.858 254824 DEBUG oslo_concurrency.processutils [None req-6bf83856-7801-4aff-8483-fc1e22d37b14 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/112440c2-8dcc-4a19-9d83-5489df97079a/disk.config 112440c2-8dcc-4a19-9d83-5489df97079a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 10:08:37 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:08:37 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v824: 337 pgs: 337 active+clean; 41 MiB data, 274 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 10:08:38 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:08:38.857Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 06 10:08:39 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:08:39 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:08:39 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:08:39.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:08:39 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 06 10:08:39 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 10:08:39 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:08:39 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:08:39 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:08:39.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:08:39 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v825: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 06 10:08:39 compute-0 ceph-mon[74327]: from='client.? 192.168.122.100:0/3332401644' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 10:08:39 compute-0 ceph-mon[74327]: pgmap v824: 337 pgs: 337 active+clean; 41 MiB data, 274 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 10:08:39 compute-0 systemd-coredump[265356]: Process 262270 (ganesha.nfsd) of user 0 dumped core.
                                                    
                                                    Stack trace of thread 55:
                                                    #0  0x00007f335a2f032e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)
                                                    ELF object binary architecture: AMD x86-64
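The saved core for PID 262270 resolves no symbol ("n/a" inside libntirpc.so.5.8 at offset 0x2232e, the same offset as the segfault above) and can be inspected after the fact through systemd's coredump tooling, e.g. via subprocess (assumes the journal entry is still present, and gdb plus debuginfo for the second step):

    import subprocess

    # metadata and the stored backtrace for the crashed PID
    subprocess.run(["coredumpctl", "info", "262270"], check=True)
    # open the core in gdb for a full backtrace
    subprocess.run(["coredumpctl", "debug", "262270"], check=False)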
Dec 06 10:08:39 compute-0 nova_compute[254819]: 2025-12-06 10:08:39.958 254824 DEBUG oslo_concurrency.processutils [None req-6bf83856-7801-4aff-8483-fc1e22d37b14 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/112440c2-8dcc-4a19-9d83-5489df97079a/disk.config 112440c2-8dcc-4a19-9d83-5489df97079a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.100s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 10:08:39 compute-0 nova_compute[254819]: 2025-12-06 10:08:39.960 254824 INFO nova.virt.libvirt.driver [None req-6bf83856-7801-4aff-8483-fc1e22d37b14 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 112440c2-8dcc-4a19-9d83-5489df97079a] Deleting local config drive /var/lib/nova/instances/112440c2-8dcc-4a19-9d83-5489df97079a/disk.config because it was imported into RBD.
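The config-drive path taken here: build an ISO9660 image with mkisofs from a staging directory of metadata files, "rbd import" it into the vms pool as <uuid>_disk.config (the image whose absence was logged three times above), then delete the local copy. A condensed sketch of the same sequence with the flags from the logged commands (the staging directory's contents are omitted):

    import subprocess
    import tempfile

    uuid = "112440c2-8dcc-4a19-9d83-5489df97079a"
    iso = f"/var/lib/nova/instances/{uuid}/disk.config"
    with tempfile.TemporaryDirectory() as staging:  # Nova stages openstack/... metadata here
        subprocess.run(
            ["/usr/bin/mkisofs", "-o", iso, "-ldots", "-allow-lowercase",
             "-allow-multidot", "-l", "-publisher", "OpenStack Compute",
             "-quiet", "-J", "-r", "-V", "config-2", staging],
            check=True)
    subprocess.run(
        ["rbd", "import", "--pool", "vms", iso, f"{uuid}_disk.config",
         "--image-format=2", "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
        check=True)

The volume label config-2 is the marker cloud-init's config-drive datasource looks for when the guest boots.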
Dec 06 10:08:40 compute-0 kernel: tap2d0118f7-94: entered promiscuous mode
Dec 06 10:08:40 compute-0 systemd[1]: systemd-coredump@9-265354-0.service: Deactivated successfully.
Dec 06 10:08:40 compute-0 NetworkManager[48882]: <info>  [1765015720.0541] manager: (tap2d0118f7-94): new Tun device (/org/freedesktop/NetworkManager/Devices/43)
Dec 06 10:08:40 compute-0 systemd[1]: systemd-coredump@9-265354-0.service: Consumed 1.214s CPU time.
Dec 06 10:08:40 compute-0 ovn_controller[152417]: 2025-12-06T10:08:40Z|00055|binding|INFO|Claiming lport 2d0118f7-94f6-43f6-a67f-28e0faf9c3ae for this chassis.
Dec 06 10:08:40 compute-0 ovn_controller[152417]: 2025-12-06T10:08:40Z|00056|binding|INFO|2d0118f7-94f6-43f6-a67f-28e0faf9c3ae: Claiming fa:16:3e:b4:37:0e 10.100.0.5
Dec 06 10:08:40 compute-0 nova_compute[254819]: 2025-12-06 10:08:40.101 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:08:40 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:08:40.115 162267 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b4:37:0e 10.100.0.5'], port_security=['fa:16:3e:b4:37:0e 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '112440c2-8dcc-4a19-9d83-5489df97079a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-dccd9941-4f3e-4086-b9cd-651d8e99e8ec', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '92b402c8d3e2476abc98be42a1e6d34e', 'neutron:revision_number': '2', 'neutron:security_group_ids': '3027a471-10b5-4a61-b09a-0f0e6072fde1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=611cd505-2a02-4d45-a906-bd97d1447953, chassis=[<ovs.db.idl.Row object at 0x7f70c28558b0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f70c28558b0>], logical_port=2d0118f7-94f6-43f6-a67f-28e0faf9c3ae) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 10:08:40 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:08:40.116 162267 INFO neutron.agent.ovn.metadata.agent [-] Port 2d0118f7-94f6-43f6-a67f-28e0faf9c3ae in datapath dccd9941-4f3e-4086-b9cd-651d8e99e8ec bound to our chassis
Dec 06 10:08:40 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:08:40.118 162267 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network dccd9941-4f3e-4086-b9cd-651d8e99e8ec
Dec 06 10:08:40 compute-0 systemd-udevd[265471]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 10:08:40 compute-0 systemd-machined[216202]: New machine qemu-3-instance-00000004.
Dec 06 10:08:40 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:08:40.135 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[761d5912-866c-498b-a211-e5a6727da3cc]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:08:40 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:08:40.137 162267 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapdccd9941-41 in ovnmeta-dccd9941-4f3e-4086-b9cd-651d8e99e8ec namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 06 10:08:40 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:08:40.139 260126 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapdccd9941-40 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 06 10:08:40 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:08:40.139 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[36e6d1b0-6033-4712-9612-34cb9fa9ea3e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:08:40 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:08:40.140 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[8303b8e0-9d22-4a32-aa2c-6fd960c961a4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:08:40 compute-0 NetworkManager[48882]: <info>  [1765015720.1462] device (tap2d0118f7-94): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 06 10:08:40 compute-0 NetworkManager[48882]: <info>  [1765015720.1472] device (tap2d0118f7-94): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 06 10:08:40 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:08:40.151 162385 DEBUG oslo.privsep.daemon [-] privsep: reply[c8f91b5c-2592-49b5-9437-bcb28e9b7fa9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:08:40 compute-0 podman[265463]: 2025-12-06 10:08:40.157384788 +0000 UTC m=+0.041415457 container died f2727a14c8c776c3cd7e91838d6e5e786e1c034f81a93b6d591f7a9fc5c736a2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 06 10:08:40 compute-0 systemd[1]: Started Virtual Machine qemu-3-instance-00000004.
Dec 06 10:08:40 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:08:40.179 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[0efe9734-5e51-4591-88da-98170b446a4a]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:08:40 compute-0 nova_compute[254819]: 2025-12-06 10:08:40.186 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:08:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-4a06d1e4ef00f96bb3b2a4a87962e3ae00f248f55a7d8371c9603028aaf9dae7-merged.mount: Deactivated successfully.
Dec 06 10:08:40 compute-0 ovn_controller[152417]: 2025-12-06T10:08:40Z|00057|binding|INFO|Setting lport 2d0118f7-94f6-43f6-a67f-28e0faf9c3ae ovn-installed in OVS
Dec 06 10:08:40 compute-0 ovn_controller[152417]: 2025-12-06T10:08:40Z|00058|binding|INFO|Setting lport 2d0118f7-94f6-43f6-a67f-28e0faf9c3ae up in Southbound
Dec 06 10:08:40 compute-0 nova_compute[254819]: 2025-12-06 10:08:40.191 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:08:40 compute-0 podman[265463]: 2025-12-06 10:08:40.21157909 +0000 UTC m=+0.095609749 container remove f2727a14c8c776c3cd7e91838d6e5e786e1c034f81a93b6d591f7a9fc5c736a2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck, OSD_FLAVOR=default, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 10:08:40 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:08:40.214 260145 DEBUG oslo.privsep.daemon [-] privsep: reply[ea429b55-838d-4006-b764-9193269bfaec]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:08:40 compute-0 systemd[1]: ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258@nfs.cephfs.2.0.compute-0.dfwxck.service: Main process exited, code=exited, status=139/n/a
Dec 06 10:08:40 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:08:40.219 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[bf0f9355-5c76-4fda-b6fc-c4ff649e0112]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:08:40 compute-0 NetworkManager[48882]: <info>  [1765015720.2207] manager: (tapdccd9941-40): new Veth device (/org/freedesktop/NetworkManager/Devices/44)
Dec 06 10:08:40 compute-0 systemd-udevd[265479]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 10:08:40 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:08:40.261 260145 DEBUG oslo.privsep.daemon [-] privsep: reply[b635b53a-b521-47a7-a4e7-73d2d60d7da1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:08:40 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:08:40.265 260145 DEBUG oslo.privsep.daemon [-] privsep: reply[873e958a-7869-47ad-af39-8b22d1686264]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:08:40 compute-0 NetworkManager[48882]: <info>  [1765015720.2935] device (tapdccd9941-40): carrier: link connected
Dec 06 10:08:40 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:08:40.297 260145 DEBUG oslo.privsep.daemon [-] privsep: reply[319e5e10-1cfa-4116-9e24-e189a3835c1b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:08:40 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:08:40.317 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[4e7022e6-b44e-4606-95d7-9af060abd501]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapdccd9941-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e8:b1:b9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 21], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 409077, 'reachable_time': 32278, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 265529, 'error': None, 'target': 'ovnmeta-dccd9941-4f3e-4086-b9cd-651d8e99e8ec', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:08:40 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:08:40.333 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[b84d9611-a72f-4983-a638-c93825fe4c27]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fee8:b1b9'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 409077, 'tstamp': 409077}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 265540, 'error': None, 'target': 'ovnmeta-dccd9941-4f3e-4086-b9cd-651d8e99e8ec', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:08:40 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:08:40.354 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[e96377be-f281-449c-b2cc-8c61b1c64c67]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapdccd9941-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e8:b1:b9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 21], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 409077, 'reachable_time': 32278, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 265543, 'error': None, 'target': 'ovnmeta-dccd9941-4f3e-4086-b9cd-651d8e99e8ec', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
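
The two large RTM_NEWLINK replies (and the RTM_NEWADDR between them) are raw pyroute2 netlink messages ferried back through the privsep daemon while the agent verifies the new veth inside the ovnmeta namespace. A sketch of fetching the same attributes directly with pyroute2, assuming the namespace and interface names from the dump and that pyroute2 is installed:

    from pyroute2 import NetNS

    ns_name = "ovnmeta-dccd9941-4f3e-4086-b9cd-651d8e99e8ec"
    with NetNS(ns_name) as ns:
        idx = ns.link_lookup(ifname="tapdccd9941-41")[0]
        (msg,) = ns.link("get", index=idx)
        # get_attr() walks the same attrs list seen in the raw reply.
        print(msg.get_attr("IFLA_ADDRESS"))    # fa:16:3e:e8:b1:b9
        print(msg.get_attr("IFLA_OPERSTATE"))  # UP
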
Dec 06 10:08:40 compute-0 systemd[1]: ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258@nfs.cephfs.2.0.compute-0.dfwxck.service: Failed with result 'exit-code'.
Dec 06 10:08:40 compute-0 systemd[1]: ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258@nfs.cephfs.2.0.compute-0.dfwxck.service: Consumed 1.354s CPU time.
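
The systemd lines about the nfs.cephfs unit close the loop on the systemd-coredump service seen earlier in this window: status=139 follows the 128+signal convention, i.e. the ganesha container's main process died on signal 11 (SIGSEGV). Decoding such statuses is a one-liner:

    import signal

    status = 139  # "status=139/n/a" from the unit result above
    if status > 128:
        # 128 + n means the main process was killed by signal n.
        print(signal.Signals(status - 128).name)  # SIGSEGV
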
Dec 06 10:08:40 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:08:40.383 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[30ee7516-504a-4278-9291-d9883ec1611d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:08:40 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:08:40.424 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[8a455c1f-ce0a-4702-839e-f1e206e965e6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:08:40 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:08:40.425 162267 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapdccd9941-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 10:08:40 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:08:40.425 162267 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 10:08:40 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:08:40.426 162267 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapdccd9941-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 10:08:40 compute-0 nova_compute[254819]: 2025-12-06 10:08:40.427 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:08:40 compute-0 NetworkManager[48882]: <info>  [1765015720.4280] manager: (tapdccd9941-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/45)
Dec 06 10:08:40 compute-0 kernel: tapdccd9941-40: entered promiscuous mode
Dec 06 10:08:40 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:08:40.430 162267 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapdccd9941-40, col_values=(('external_ids', {'iface-id': '5c84c258-875b-4b17-864b-0a3a247ec558'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
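
The three ovsdbapp transactions above (DelPortCommand on br-ex with if_exists, AddPortCommand on br-int with may_exist, then DbSetCommand on external_ids) plumb the namespace-side veth into the integration bridge and tag it with the OVN port UUID so ovn-controller can bind it. The ovs-vsctl equivalent, as a sketch driven from Python (names copied from the log; a working ovs-vsctl is assumed):

    import subprocess

    port = "tapdccd9941-40"
    iface_id = "5c84c258-875b-4b17-864b-0a3a247ec558"  # OVN logical port UUID

    # One atomic ovs-vsctl transaction mirroring the three commands above.
    subprocess.run(
        ["ovs-vsctl",
         "--", "--if-exists", "del-port", "br-ex", port,
         "--", "--may-exist", "add-port", "br-int", port,
         "--", "set", "Interface", port,
         "external_ids:iface-id=" + iface_id],
        check=True,
    )
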
Dec 06 10:08:40 compute-0 nova_compute[254819]: 2025-12-06 10:08:40.431 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:08:40 compute-0 nova_compute[254819]: 2025-12-06 10:08:40.432 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:08:40 compute-0 ovn_controller[152417]: 2025-12-06T10:08:40Z|00059|binding|INFO|Releasing lport 5c84c258-875b-4b17-864b-0a3a247ec558 from this chassis (sb_readonly=0)
Dec 06 10:08:40 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:08:40.433 162267 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/dccd9941-4f3e-4086-b9cd-651d8e99e8ec.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/dccd9941-4f3e-4086-b9cd-651d8e99e8ec.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 06 10:08:40 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:08:40.434 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[471558ea-9691-42ea-96f0-20d061927c7b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:08:40 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:08:40.435 162267 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 06 10:08:40 compute-0 ovn_metadata_agent[162262]: global
Dec 06 10:08:40 compute-0 ovn_metadata_agent[162262]:     log         /dev/log local0 debug
Dec 06 10:08:40 compute-0 ovn_metadata_agent[162262]:     log-tag     haproxy-metadata-proxy-dccd9941-4f3e-4086-b9cd-651d8e99e8ec
Dec 06 10:08:40 compute-0 ovn_metadata_agent[162262]:     user        root
Dec 06 10:08:40 compute-0 ovn_metadata_agent[162262]:     group       root
Dec 06 10:08:40 compute-0 ovn_metadata_agent[162262]:     maxconn     1024
Dec 06 10:08:40 compute-0 ovn_metadata_agent[162262]:     pidfile     /var/lib/neutron/external/pids/dccd9941-4f3e-4086-b9cd-651d8e99e8ec.pid.haproxy
Dec 06 10:08:40 compute-0 ovn_metadata_agent[162262]:     daemon
Dec 06 10:08:40 compute-0 ovn_metadata_agent[162262]: 
Dec 06 10:08:40 compute-0 ovn_metadata_agent[162262]: defaults
Dec 06 10:08:40 compute-0 ovn_metadata_agent[162262]:     log global
Dec 06 10:08:40 compute-0 ovn_metadata_agent[162262]:     mode http
Dec 06 10:08:40 compute-0 ovn_metadata_agent[162262]:     option httplog
Dec 06 10:08:40 compute-0 ovn_metadata_agent[162262]:     option dontlognull
Dec 06 10:08:40 compute-0 ovn_metadata_agent[162262]:     option http-server-close
Dec 06 10:08:40 compute-0 ovn_metadata_agent[162262]:     option forwardfor
Dec 06 10:08:40 compute-0 ovn_metadata_agent[162262]:     retries                 3
Dec 06 10:08:40 compute-0 ovn_metadata_agent[162262]:     timeout http-request    30s
Dec 06 10:08:40 compute-0 ovn_metadata_agent[162262]:     timeout connect         30s
Dec 06 10:08:40 compute-0 ovn_metadata_agent[162262]:     timeout client          32s
Dec 06 10:08:40 compute-0 ovn_metadata_agent[162262]:     timeout server          32s
Dec 06 10:08:40 compute-0 ovn_metadata_agent[162262]:     timeout http-keep-alive 30s
Dec 06 10:08:40 compute-0 ovn_metadata_agent[162262]: 
Dec 06 10:08:40 compute-0 ovn_metadata_agent[162262]: 
Dec 06 10:08:40 compute-0 ovn_metadata_agent[162262]: listen listener
Dec 06 10:08:40 compute-0 ovn_metadata_agent[162262]:     bind 169.254.169.254:80
Dec 06 10:08:40 compute-0 ovn_metadata_agent[162262]:     server metadata /var/lib/neutron/metadata_proxy
Dec 06 10:08:40 compute-0 ovn_metadata_agent[162262]:     http-request add-header X-OVN-Network-ID dccd9941-4f3e-4086-b9cd-651d8e99e8ec
Dec 06 10:08:40 compute-0 ovn_metadata_agent[162262]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 06 10:08:40 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:08:40.435 162267 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-dccd9941-4f3e-4086-b9cd-651d8e99e8ec', 'env', 'PROCESS_TAG=haproxy-dccd9941-4f3e-4086-b9cd-651d8e99e8ec', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/dccd9941-4f3e-4086-b9cd-651d8e99e8ec.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
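
The haproxy_cfg dump above is what gets written to /var/lib/neutron/ovn-metadata-proxy/dccd9941-4f3e-4086-b9cd-651d8e99e8ec.conf before the rootwrap command launches haproxy inside the namespace; the X-OVN-Network-ID header it adds is how the metadata service maps each request back to its network. Once the proxy is up, the pidfile named in the config (the same one the ENOENT DEBUG line probed before the daemon existed) gives a simple liveness check; a sketch:

    import os

    # Pidfile path taken from the "pidfile" line of the config above.
    pidfile = ("/var/lib/neutron/external/pids/"
               "dccd9941-4f3e-4086-b9cd-651d8e99e8ec.pid.haproxy")

    with open(pidfile) as f:
        pid = int(f.read().split()[0])

    os.kill(pid, 0)  # signal 0 probes for existence without side effects
    print("metadata haproxy alive, pid", pid)
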
Dec 06 10:08:40 compute-0 nova_compute[254819]: 2025-12-06 10:08:40.446 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:08:40 compute-0 nova_compute[254819]: 2025-12-06 10:08:40.578 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:08:40 compute-0 nova_compute[254819]: 2025-12-06 10:08:40.748 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:08:40 compute-0 nova_compute[254819]: 2025-12-06 10:08:40.749 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:08:40 compute-0 nova_compute[254819]: 2025-12-06 10:08:40.749 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:08:40 compute-0 nova_compute[254819]: 2025-12-06 10:08:40.773 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:08:40 compute-0 nova_compute[254819]: 2025-12-06 10:08:40.773 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:08:40 compute-0 nova_compute[254819]: 2025-12-06 10:08:40.774 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:08:40 compute-0 nova_compute[254819]: 2025-12-06 10:08:40.774 254824 DEBUG nova.compute.resource_tracker [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 10:08:40 compute-0 nova_compute[254819]: 2025-12-06 10:08:40.774 254824 DEBUG oslo_concurrency.processutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 10:08:40 compute-0 podman[265617]: 2025-12-06 10:08:40.796108614 +0000 UTC m=+0.055524618 container create b2323c34c8c570910b87213790a21d1c9563369a938b6f81158f55defebfebc9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=neutron-haproxy-ovnmeta-dccd9941-4f3e-4086-b9cd-651d8e99e8ec, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec 06 10:08:40 compute-0 nova_compute[254819]: 2025-12-06 10:08:40.796 254824 DEBUG nova.virt.driver [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] Emitting event <LifecycleEvent: 1765015720.7734509, 112440c2-8dcc-4a19-9d83-5489df97079a => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 10:08:40 compute-0 nova_compute[254819]: 2025-12-06 10:08:40.797 254824 INFO nova.compute.manager [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] [instance: 112440c2-8dcc-4a19-9d83-5489df97079a] VM Started (Lifecycle Event)
Dec 06 10:08:40 compute-0 nova_compute[254819]: 2025-12-06 10:08:40.818 254824 DEBUG nova.compute.manager [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] [instance: 112440c2-8dcc-4a19-9d83-5489df97079a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 10:08:40 compute-0 nova_compute[254819]: 2025-12-06 10:08:40.823 254824 DEBUG nova.virt.driver [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] Emitting event <LifecycleEvent: 1765015720.7739744, 112440c2-8dcc-4a19-9d83-5489df97079a => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 10:08:40 compute-0 nova_compute[254819]: 2025-12-06 10:08:40.824 254824 INFO nova.compute.manager [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] [instance: 112440c2-8dcc-4a19-9d83-5489df97079a] VM Paused (Lifecycle Event)
Dec 06 10:08:40 compute-0 nova_compute[254819]: 2025-12-06 10:08:40.841 254824 DEBUG nova.compute.manager [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] [instance: 112440c2-8dcc-4a19-9d83-5489df97079a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 10:08:40 compute-0 nova_compute[254819]: 2025-12-06 10:08:40.847 254824 DEBUG nova.compute.manager [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] [instance: 112440c2-8dcc-4a19-9d83-5489df97079a] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
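
The numeric states in this sync line come from nova.compute.power_state: the database still records 0 (NOSTATE, the instance is mid-build) while libvirt reports 3 (PAUSED), matching the "VM Paused" lifecycle event above; after the "Resumed" event below, the hypervisor side flips to 1 (RUNNING). For reference:

    # Power-state constants as defined in nova/compute/power_state.py.
    STATE_NAME = {0: "NOSTATE", 1: "RUNNING", 3: "PAUSED",
                  4: "SHUTDOWN", 6: "CRASHED", 7: "SUSPENDED"}
    print(STATE_NAME[0], "->", STATE_NAME[3])  # DB view vs. libvirt view here
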
Dec 06 10:08:40 compute-0 systemd[1]: Started libpod-conmon-b2323c34c8c570910b87213790a21d1c9563369a938b6f81158f55defebfebc9.scope.
Dec 06 10:08:40 compute-0 podman[265617]: 2025-12-06 10:08:40.766295621 +0000 UTC m=+0.025711625 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3
Dec 06 10:08:40 compute-0 nova_compute[254819]: 2025-12-06 10:08:40.870 254824 INFO nova.compute.manager [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] [instance: 112440c2-8dcc-4a19-9d83-5489df97079a] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 06 10:08:40 compute-0 systemd[1]: Started libcrun container.
Dec 06 10:08:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb19a4954b41a05ededd94f9209e0e9572500e71415f9c5c428921ac41b73efd/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 06 10:08:40 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:08:40] "GET /metrics HTTP/1.1" 200 48459 "" "Prometheus/2.51.0"
Dec 06 10:08:40 compute-0 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:08:40] "GET /metrics HTTP/1.1" 200 48459 "" "Prometheus/2.51.0"
Dec 06 10:08:40 compute-0 podman[265617]: 2025-12-06 10:08:40.903368488 +0000 UTC m=+0.162784502 container init b2323c34c8c570910b87213790a21d1c9563369a938b6f81158f55defebfebc9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=neutron-haproxy-ovnmeta-dccd9941-4f3e-4086-b9cd-651d8e99e8ec, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec 06 10:08:40 compute-0 podman[265617]: 2025-12-06 10:08:40.909572135 +0000 UTC m=+0.168988129 container start b2323c34c8c570910b87213790a21d1c9563369a938b6f81158f55defebfebc9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=neutron-haproxy-ovnmeta-dccd9941-4f3e-4086-b9cd-651d8e99e8ec, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 10:08:40 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 10:08:40 compute-0 ceph-mon[74327]: pgmap v825: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 06 10:08:40 compute-0 neutron-haproxy-ovnmeta-dccd9941-4f3e-4086-b9cd-651d8e99e8ec[265633]: [NOTICE]   (265656) : New worker (265658) forked
Dec 06 10:08:40 compute-0 neutron-haproxy-ovnmeta-dccd9941-4f3e-4086-b9cd-651d8e99e8ec[265633]: [NOTICE]   (265656) : Loading success.
Dec 06 10:08:41 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:08:41 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:08:41 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:08:41.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:08:41 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:08:41 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:08:41 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:08:41.184 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:08:41 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 06 10:08:41 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1219827938' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 10:08:41 compute-0 nova_compute[254819]: 2025-12-06 10:08:41.225 254824 DEBUG oslo_concurrency.processutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
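
The resource audit shells out to "ceph df --format=json" because, with the RBD image backend, free disk is a property of the Ceph pool rather than the local filesystem. A sketch of running and parsing that call (field names follow standard ceph df JSON output, which can vary slightly between Ceph releases):

    import json
    import subprocess

    out = subprocess.run(
        ["ceph", "df", "--format=json",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
        check=True, capture_output=True, text=True,
    ).stdout

    df = json.loads(out)
    # Cluster-wide totals live under "stats", per-pool numbers under "pools".
    print(df["stats"]["total_bytes"], df["stats"]["total_avail_bytes"])
    for pool in df["pools"]:
        print(pool["name"], pool["stats"]["bytes_used"])
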
Dec 06 10:08:41 compute-0 nova_compute[254819]: 2025-12-06 10:08:41.238 254824 DEBUG nova.compute.manager [req-0d8ab876-8f13-4fbf-8c51-db2005cbb24a req-336808ce-6499-4000-81e2-6d4a010b67de d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 112440c2-8dcc-4a19-9d83-5489df97079a] Received event network-vif-plugged-2d0118f7-94f6-43f6-a67f-28e0faf9c3ae external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 10:08:41 compute-0 nova_compute[254819]: 2025-12-06 10:08:41.238 254824 DEBUG oslo_concurrency.lockutils [req-0d8ab876-8f13-4fbf-8c51-db2005cbb24a req-336808ce-6499-4000-81e2-6d4a010b67de d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Acquiring lock "112440c2-8dcc-4a19-9d83-5489df97079a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:08:41 compute-0 nova_compute[254819]: 2025-12-06 10:08:41.238 254824 DEBUG oslo_concurrency.lockutils [req-0d8ab876-8f13-4fbf-8c51-db2005cbb24a req-336808ce-6499-4000-81e2-6d4a010b67de d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Lock "112440c2-8dcc-4a19-9d83-5489df97079a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:08:41 compute-0 nova_compute[254819]: 2025-12-06 10:08:41.239 254824 DEBUG oslo_concurrency.lockutils [req-0d8ab876-8f13-4fbf-8c51-db2005cbb24a req-336808ce-6499-4000-81e2-6d4a010b67de d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Lock "112440c2-8dcc-4a19-9d83-5489df97079a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:08:41 compute-0 nova_compute[254819]: 2025-12-06 10:08:41.239 254824 DEBUG nova.compute.manager [req-0d8ab876-8f13-4fbf-8c51-db2005cbb24a req-336808ce-6499-4000-81e2-6d4a010b67de d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 112440c2-8dcc-4a19-9d83-5489df97079a] Processing event network-vif-plugged-2d0118f7-94f6-43f6-a67f-28e0faf9c3ae _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 06 10:08:41 compute-0 nova_compute[254819]: 2025-12-06 10:08:41.239 254824 DEBUG nova.compute.manager [None req-6bf83856-7801-4aff-8483-fc1e22d37b14 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 112440c2-8dcc-4a19-9d83-5489df97079a] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 06 10:08:41 compute-0 nova_compute[254819]: 2025-12-06 10:08:41.242 254824 DEBUG nova.virt.driver [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] Emitting event <LifecycleEvent: 1765015721.2422922, 112440c2-8dcc-4a19-9d83-5489df97079a => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 10:08:41 compute-0 nova_compute[254819]: 2025-12-06 10:08:41.242 254824 INFO nova.compute.manager [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] [instance: 112440c2-8dcc-4a19-9d83-5489df97079a] VM Resumed (Lifecycle Event)
Dec 06 10:08:41 compute-0 nova_compute[254819]: 2025-12-06 10:08:41.246 254824 DEBUG nova.virt.libvirt.driver [None req-6bf83856-7801-4aff-8483-fc1e22d37b14 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 112440c2-8dcc-4a19-9d83-5489df97079a] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 06 10:08:41 compute-0 nova_compute[254819]: 2025-12-06 10:08:41.253 254824 INFO nova.virt.libvirt.driver [-] [instance: 112440c2-8dcc-4a19-9d83-5489df97079a] Instance spawned successfully.
Dec 06 10:08:41 compute-0 nova_compute[254819]: 2025-12-06 10:08:41.255 254824 DEBUG nova.virt.libvirt.driver [None req-6bf83856-7801-4aff-8483-fc1e22d37b14 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 112440c2-8dcc-4a19-9d83-5489df97079a] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 06 10:08:41 compute-0 nova_compute[254819]: 2025-12-06 10:08:41.263 254824 DEBUG nova.compute.manager [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] [instance: 112440c2-8dcc-4a19-9d83-5489df97079a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 10:08:41 compute-0 nova_compute[254819]: 2025-12-06 10:08:41.267 254824 DEBUG nova.compute.manager [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] [instance: 112440c2-8dcc-4a19-9d83-5489df97079a] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 10:08:41 compute-0 nova_compute[254819]: 2025-12-06 10:08:41.280 254824 DEBUG nova.virt.libvirt.driver [None req-6bf83856-7801-4aff-8483-fc1e22d37b14 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 112440c2-8dcc-4a19-9d83-5489df97079a] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 10:08:41 compute-0 nova_compute[254819]: 2025-12-06 10:08:41.280 254824 DEBUG nova.virt.libvirt.driver [None req-6bf83856-7801-4aff-8483-fc1e22d37b14 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 112440c2-8dcc-4a19-9d83-5489df97079a] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 10:08:41 compute-0 nova_compute[254819]: 2025-12-06 10:08:41.280 254824 DEBUG nova.virt.libvirt.driver [None req-6bf83856-7801-4aff-8483-fc1e22d37b14 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 112440c2-8dcc-4a19-9d83-5489df97079a] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 10:08:41 compute-0 nova_compute[254819]: 2025-12-06 10:08:41.281 254824 DEBUG nova.virt.libvirt.driver [None req-6bf83856-7801-4aff-8483-fc1e22d37b14 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 112440c2-8dcc-4a19-9d83-5489df97079a] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 10:08:41 compute-0 nova_compute[254819]: 2025-12-06 10:08:41.281 254824 DEBUG nova.virt.libvirt.driver [None req-6bf83856-7801-4aff-8483-fc1e22d37b14 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 112440c2-8dcc-4a19-9d83-5489df97079a] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 10:08:41 compute-0 nova_compute[254819]: 2025-12-06 10:08:41.281 254824 DEBUG nova.virt.libvirt.driver [None req-6bf83856-7801-4aff-8483-fc1e22d37b14 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 112440c2-8dcc-4a19-9d83-5489df97079a] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 10:08:41 compute-0 nova_compute[254819]: 2025-12-06 10:08:41.288 254824 INFO nova.compute.manager [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] [instance: 112440c2-8dcc-4a19-9d83-5489df97079a] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 06 10:08:41 compute-0 nova_compute[254819]: 2025-12-06 10:08:41.338 254824 DEBUG nova.virt.libvirt.driver [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] skipping disk for instance-00000004 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 10:08:41 compute-0 nova_compute[254819]: 2025-12-06 10:08:41.339 254824 DEBUG nova.virt.libvirt.driver [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] skipping disk for instance-00000004 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 10:08:41 compute-0 nova_compute[254819]: 2025-12-06 10:08:41.354 254824 INFO nova.compute.manager [None req-6bf83856-7801-4aff-8483-fc1e22d37b14 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 112440c2-8dcc-4a19-9d83-5489df97079a] Took 8.35 seconds to spawn the instance on the hypervisor.
Dec 06 10:08:41 compute-0 nova_compute[254819]: 2025-12-06 10:08:41.356 254824 DEBUG nova.compute.manager [None req-6bf83856-7801-4aff-8483-fc1e22d37b14 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 112440c2-8dcc-4a19-9d83-5489df97079a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 10:08:41 compute-0 podman[265669]: 2025-12-06 10:08:41.397961485 +0000 UTC m=+0.108354073 container health_status ec1e66d2175087ad7a28a9a6b5a784e30128df3f5e268959d009e364cd5120f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Dec 06 10:08:41 compute-0 nova_compute[254819]: 2025-12-06 10:08:41.418 254824 INFO nova.compute.manager [None req-6bf83856-7801-4aff-8483-fc1e22d37b14 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 112440c2-8dcc-4a19-9d83-5489df97079a] Took 9.29 seconds to build instance.
Dec 06 10:08:41 compute-0 nova_compute[254819]: 2025-12-06 10:08:41.433 254824 DEBUG oslo_concurrency.lockutils [None req-6bf83856-7801-4aff-8483-fc1e22d37b14 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lock "112440c2-8dcc-4a19-9d83-5489df97079a" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.384s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:08:41 compute-0 nova_compute[254819]: 2025-12-06 10:08:41.537 254824 WARNING nova.virt.libvirt.driver [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 10:08:41 compute-0 nova_compute[254819]: 2025-12-06 10:08:41.538 254824 DEBUG nova.compute.resource_tracker [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4471MB free_disk=59.96752166748047GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
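
Every device in the hypervisor's PCI view is plain type-PCI with numa_node null, consistent with this compute node itself being a single-NUMA KVM guest; the vendor ids decode to the usual suspects. A tiny lookup for reading such dumps (vendor names are standard PCI-SIG assignments, not from the log):

    # 0x1af4 is Red Hat (virtio devices), 0x8086 is Intel.
    VENDORS = {"1af4": "Red Hat / virtio", "8086": "Intel"}
    dev = {"address": "0000:00:04.0", "vendor_id": "1af4", "product_id": "1001"}
    print(dev["address"], "->", VENDORS[dev["vendor_id"]])
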
Dec 06 10:08:41 compute-0 nova_compute[254819]: 2025-12-06 10:08:41.538 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:08:41 compute-0 nova_compute[254819]: 2025-12-06 10:08:41.539 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:08:41 compute-0 nova_compute[254819]: 2025-12-06 10:08:41.826 254824 DEBUG nova.compute.resource_tracker [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Instance 112440c2-8dcc-4a19-9d83-5489df97079a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 06 10:08:41 compute-0 nova_compute[254819]: 2025-12-06 10:08:41.827 254824 DEBUG nova.compute.resource_tracker [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 10:08:41 compute-0 nova_compute[254819]: 2025-12-06 10:08:41.828 254824 DEBUG nova.compute.resource_tracker [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 06 10:08:41 compute-0 nova_compute[254819]: 2025-12-06 10:08:41.865 254824 DEBUG oslo_concurrency.processutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 10:08:41 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v826: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 06 10:08:41 compute-0 ceph-mon[74327]: from='client.? 192.168.122.100:0/1219827938' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 10:08:41 compute-0 ceph-mon[74327]: pgmap v826: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 06 10:08:42 compute-0 nova_compute[254819]: 2025-12-06 10:08:42.310 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:08:42 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 06 10:08:42 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2139792515' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 10:08:42 compute-0 nova_compute[254819]: 2025-12-06 10:08:42.417 254824 DEBUG oslo_concurrency.processutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.552s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 10:08:42 compute-0 nova_compute[254819]: 2025-12-06 10:08:42.425 254824 DEBUG nova.compute.provider_tree [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Inventory has not changed in ProviderTree for provider: 06a9c7d1-c74c-47ea-9e97-16acfab6aa88 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 10:08:42 compute-0 nova_compute[254819]: 2025-12-06 10:08:42.443 254824 DEBUG nova.scheduler.client.report [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Inventory has not changed for provider 06a9c7d1-c74c-47ea-9e97-16acfab6aa88 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
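
The inventory payload shows how the raw hypervisor view (8 vCPUs, 7680 MB RAM, 59 GB disk) becomes schedulable capacity in placement: reserved is subtracted first, then allocation_ratio scales the remainder. Worked out with the values logged above:

    # Placement capacity = (total - reserved) * allocation_ratio.
    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        cap = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(rc, cap)  # VCPU 32.0, MEMORY_MB 7168.0, DISK_GB 52.2
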
Dec 06 10:08:42 compute-0 nova_compute[254819]: 2025-12-06 10:08:42.465 254824 DEBUG nova.compute.resource_tracker [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 10:08:42 compute-0 nova_compute[254819]: 2025-12-06 10:08:42.466 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.928s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:08:42 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:08:42 compute-0 ceph-mon[74327]: from='client.? 192.168.122.100:0/2139792515' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 10:08:42 compute-0 ceph-mon[74327]: from='client.? 192.168.122.101:0/3506121761' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 10:08:43 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:08:43 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 10:08:43 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:08:43.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 10:08:43 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:08:43 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:08:43 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:08:43.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:08:43 compute-0 nova_compute[254819]: 2025-12-06 10:08:43.339 254824 DEBUG nova.compute.manager [req-5416705b-7e3a-4f64-bdbd-cf57d3f42dbc req-bad830cb-b182-4af1-8da4-870047e7f1c0 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 112440c2-8dcc-4a19-9d83-5489df97079a] Received event network-vif-plugged-2d0118f7-94f6-43f6-a67f-28e0faf9c3ae external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 10:08:43 compute-0 nova_compute[254819]: 2025-12-06 10:08:43.340 254824 DEBUG oslo_concurrency.lockutils [req-5416705b-7e3a-4f64-bdbd-cf57d3f42dbc req-bad830cb-b182-4af1-8da4-870047e7f1c0 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Acquiring lock "112440c2-8dcc-4a19-9d83-5489df97079a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:08:43 compute-0 nova_compute[254819]: 2025-12-06 10:08:43.341 254824 DEBUG oslo_concurrency.lockutils [req-5416705b-7e3a-4f64-bdbd-cf57d3f42dbc req-bad830cb-b182-4af1-8da4-870047e7f1c0 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Lock "112440c2-8dcc-4a19-9d83-5489df97079a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:08:43 compute-0 nova_compute[254819]: 2025-12-06 10:08:43.341 254824 DEBUG oslo_concurrency.lockutils [req-5416705b-7e3a-4f64-bdbd-cf57d3f42dbc req-bad830cb-b182-4af1-8da4-870047e7f1c0 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Lock "112440c2-8dcc-4a19-9d83-5489df97079a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:08:43 compute-0 nova_compute[254819]: 2025-12-06 10:08:43.341 254824 DEBUG nova.compute.manager [req-5416705b-7e3a-4f64-bdbd-cf57d3f42dbc req-bad830cb-b182-4af1-8da4-870047e7f1c0 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 112440c2-8dcc-4a19-9d83-5489df97079a] No waiting events found dispatching network-vif-plugged-2d0118f7-94f6-43f6-a67f-28e0faf9c3ae pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 10:08:43 compute-0 nova_compute[254819]: 2025-12-06 10:08:43.342 254824 WARNING nova.compute.manager [req-5416705b-7e3a-4f64-bdbd-cf57d3f42dbc req-bad830cb-b182-4af1-8da4-870047e7f1c0 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 112440c2-8dcc-4a19-9d83-5489df97079a] Received unexpected event network-vif-plugged-2d0118f7-94f6-43f6-a67f-28e0faf9c3ae for instance with vm_state active and task_state None.
Dec 06 10:08:43 compute-0 nova_compute[254819]: 2025-12-06 10:08:43.467 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:08:43 compute-0 nova_compute[254819]: 2025-12-06 10:08:43.487 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:08:43 compute-0 nova_compute[254819]: 2025-12-06 10:08:43.488 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:08:43 compute-0 nova_compute[254819]: 2025-12-06 10:08:43.748 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:08:43 compute-0 nova_compute[254819]: 2025-12-06 10:08:43.748 254824 DEBUG nova.compute.manager [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 06 10:08:43 compute-0 nova_compute[254819]: 2025-12-06 10:08:43.749 254824 DEBUG nova.compute.manager [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 06 10:08:43 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v827: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Dec 06 10:08:43 compute-0 nova_compute[254819]: 2025-12-06 10:08:43.939 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Acquiring lock "refresh_cache-112440c2-8dcc-4a19-9d83-5489df97079a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 10:08:43 compute-0 nova_compute[254819]: 2025-12-06 10:08:43.940 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Acquired lock "refresh_cache-112440c2-8dcc-4a19-9d83-5489df97079a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 10:08:43 compute-0 nova_compute[254819]: 2025-12-06 10:08:43.941 254824 DEBUG nova.network.neutron [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] [instance: 112440c2-8dcc-4a19-9d83-5489df97079a] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec 06 10:08:43 compute-0 nova_compute[254819]: 2025-12-06 10:08:43.942 254824 DEBUG nova.objects.instance [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Lazy-loading 'info_cache' on Instance uuid 112440c2-8dcc-4a19-9d83-5489df97079a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 10:08:43 compute-0 ceph-mon[74327]: from='client.? 192.168.122.101:0/1405804968' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 10:08:43 compute-0 ceph-mon[74327]: pgmap v827: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Dec 06 10:08:44 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-haproxy-nfs-cephfs-compute-0-fzuvue[96127]: [WARNING] 339/100844 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Dec 06 10:08:45 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:08:45 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:08:45 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:08:45.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:08:45 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:08:45 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:08:45 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:08:45.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:08:45 compute-0 nova_compute[254819]: 2025-12-06 10:08:45.579 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:08:45 compute-0 nova_compute[254819]: 2025-12-06 10:08:45.825 254824 DEBUG nova.network.neutron [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] [instance: 112440c2-8dcc-4a19-9d83-5489df97079a] Updating instance_info_cache with network_info: [{"id": "2d0118f7-94f6-43f6-a67f-28e0faf9c3ae", "address": "fa:16:3e:b4:37:0e", "network": {"id": "dccd9941-4f3e-4086-b9cd-651d8e99e8ec", "bridge": "br-int", "label": "tempest-network-smoke--1290241953", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2d0118f7-94", "ovs_interfaceid": "2d0118f7-94f6-43f6-a67f-28e0faf9c3ae", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 10:08:45 compute-0 nova_compute[254819]: 2025-12-06 10:08:45.847 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Releasing lock "refresh_cache-112440c2-8dcc-4a19-9d83-5489df97079a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 10:08:45 compute-0 nova_compute[254819]: 2025-12-06 10:08:45.848 254824 DEBUG nova.compute.manager [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] [instance: 112440c2-8dcc-4a19-9d83-5489df97079a] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Dec 06 10:08:45 compute-0 nova_compute[254819]: 2025-12-06 10:08:45.849 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:08:45 compute-0 nova_compute[254819]: 2025-12-06 10:08:45.850 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:08:45 compute-0 nova_compute[254819]: 2025-12-06 10:08:45.850 254824 DEBUG nova.compute.manager [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 06 10:08:45 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v828: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Dec 06 10:08:45 compute-0 ceph-mon[74327]: pgmap v828: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Dec 06 10:08:45 compute-0 ceph-mon[74327]: from='client.? 192.168.122.10:0/4086049669' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 10:08:45 compute-0 ceph-mon[74327]: from='client.? 192.168.122.10:0/4086049669' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 10:08:46 compute-0 nova_compute[254819]: 2025-12-06 10:08:46.067 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:08:46 compute-0 ovn_controller[152417]: 2025-12-06T10:08:46Z|00060|binding|INFO|Releasing lport 5c84c258-875b-4b17-864b-0a3a247ec558 from this chassis (sb_readonly=0)
Dec 06 10:08:46 compute-0 NetworkManager[48882]: <info>  [1765015726.0690] manager: (patch-br-int-to-provnet-c81e973e-7ff9-4cd2-9994-daf87649321f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/46)
Dec 06 10:08:46 compute-0 NetworkManager[48882]: <info>  [1765015726.0701] manager: (patch-provnet-c81e973e-7ff9-4cd2-9994-daf87649321f-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/47)
Dec 06 10:08:46 compute-0 ovn_controller[152417]: 2025-12-06T10:08:46Z|00061|binding|INFO|Releasing lport 5c84c258-875b-4b17-864b-0a3a247ec558 from this chassis (sb_readonly=0)
Dec 06 10:08:46 compute-0 nova_compute[254819]: 2025-12-06 10:08:46.110 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:08:46 compute-0 nova_compute[254819]: 2025-12-06 10:08:46.116 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:08:46 compute-0 nova_compute[254819]: 2025-12-06 10:08:46.341 254824 DEBUG nova.compute.manager [req-76f1ed86-0953-4f15-b783-12ebe200f8c3 req-2ff51087-5146-41fc-bed4-9f5d59195de2 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 112440c2-8dcc-4a19-9d83-5489df97079a] Received event network-changed-2d0118f7-94f6-43f6-a67f-28e0faf9c3ae external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 10:08:46 compute-0 nova_compute[254819]: 2025-12-06 10:08:46.342 254824 DEBUG nova.compute.manager [req-76f1ed86-0953-4f15-b783-12ebe200f8c3 req-2ff51087-5146-41fc-bed4-9f5d59195de2 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 112440c2-8dcc-4a19-9d83-5489df97079a] Refreshing instance network info cache due to event network-changed-2d0118f7-94f6-43f6-a67f-28e0faf9c3ae. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 10:08:46 compute-0 nova_compute[254819]: 2025-12-06 10:08:46.342 254824 DEBUG oslo_concurrency.lockutils [req-76f1ed86-0953-4f15-b783-12ebe200f8c3 req-2ff51087-5146-41fc-bed4-9f5d59195de2 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Acquiring lock "refresh_cache-112440c2-8dcc-4a19-9d83-5489df97079a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 10:08:46 compute-0 nova_compute[254819]: 2025-12-06 10:08:46.343 254824 DEBUG oslo_concurrency.lockutils [req-76f1ed86-0953-4f15-b783-12ebe200f8c3 req-2ff51087-5146-41fc-bed4-9f5d59195de2 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Acquired lock "refresh_cache-112440c2-8dcc-4a19-9d83-5489df97079a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 10:08:46 compute-0 nova_compute[254819]: 2025-12-06 10:08:46.343 254824 DEBUG nova.network.neutron [req-76f1ed86-0953-4f15-b783-12ebe200f8c3 req-2ff51087-5146-41fc-bed4-9f5d59195de2 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 112440c2-8dcc-4a19-9d83-5489df97079a] Refreshing network info cache for port 2d0118f7-94f6-43f6-a67f-28e0faf9c3ae _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 06 10:08:46 compute-0 nova_compute[254819]: 2025-12-06 10:08:46.844 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:08:47 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:08:47 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:08:47 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:08:47.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:08:47 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:08:47 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:08:47 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:08:47.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:08:47 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:08:47.281Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 06 10:08:47 compute-0 nova_compute[254819]: 2025-12-06 10:08:47.315 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:08:47 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v829: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Dec 06 10:08:48 compute-0 nova_compute[254819]: 2025-12-06 10:08:48.360 254824 DEBUG nova.network.neutron [req-76f1ed86-0953-4f15-b783-12ebe200f8c3 req-2ff51087-5146-41fc-bed4-9f5d59195de2 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 112440c2-8dcc-4a19-9d83-5489df97079a] Updated VIF entry in instance network info cache for port 2d0118f7-94f6-43f6-a67f-28e0faf9c3ae. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 06 10:08:48 compute-0 nova_compute[254819]: 2025-12-06 10:08:48.362 254824 DEBUG nova.network.neutron [req-76f1ed86-0953-4f15-b783-12ebe200f8c3 req-2ff51087-5146-41fc-bed4-9f5d59195de2 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 112440c2-8dcc-4a19-9d83-5489df97079a] Updating instance_info_cache with network_info: [{"id": "2d0118f7-94f6-43f6-a67f-28e0faf9c3ae", "address": "fa:16:3e:b4:37:0e", "network": {"id": "dccd9941-4f3e-4086-b9cd-651d8e99e8ec", "bridge": "br-int", "label": "tempest-network-smoke--1290241953", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.202", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2d0118f7-94", "ovs_interfaceid": "2d0118f7-94f6-43f6-a67f-28e0faf9c3ae", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 10:08:48 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:08:48.859Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 06 10:08:49 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:08:49 compute-0 ceph-mon[74327]: from='client.? 192.168.122.102:0/2148744341' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 10:08:49 compute-0 ceph-mon[74327]: from='client.? 192.168.122.102:0/1552448253' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 10:08:49 compute-0 nova_compute[254819]: 2025-12-06 10:08:49.020 254824 DEBUG oslo_concurrency.lockutils [req-76f1ed86-0953-4f15-b783-12ebe200f8c3 req-2ff51087-5146-41fc-bed4-9f5d59195de2 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Releasing lock "refresh_cache-112440c2-8dcc-4a19-9d83-5489df97079a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 10:08:49 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:08:49 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:08:49 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:08:49.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:08:49 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:08:49 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:08:49 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:08:49.194 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:08:49 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v830: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Dec 06 10:08:50 compute-0 ceph-mon[74327]: pgmap v829: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Dec 06 10:08:50 compute-0 ceph-mon[74327]: pgmap v830: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Dec 06 10:08:50 compute-0 nova_compute[254819]: 2025-12-06 10:08:50.582 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:08:50 compute-0 systemd[1]: ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258@nfs.cephfs.2.0.compute-0.dfwxck.service: Scheduled restart job, restart counter is at 10.
Dec 06 10:08:50 compute-0 systemd[1]: Stopped Ceph nfs.cephfs.2.0.compute-0.dfwxck for 5ecd3f74-dade-5fc4-92ce-8950ae424258.
Dec 06 10:08:50 compute-0 systemd[1]: ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258@nfs.cephfs.2.0.compute-0.dfwxck.service: Consumed 1.354s CPU time.
Dec 06 10:08:50 compute-0 systemd[1]: Starting Ceph nfs.cephfs.2.0.compute-0.dfwxck for 5ecd3f74-dade-5fc4-92ce-8950ae424258...
Dec 06 10:08:50 compute-0 podman[265770]: 2025-12-06 10:08:50.863643126 +0000 UTC m=+0.049119326 container create af69e9a47df8ecde800ecab5adbfc1ec516b668507faf977fed781c1bc7fd62d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 06 10:08:50 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:08:50] "GET /metrics HTTP/1.1" 200 48459 "" "Prometheus/2.51.0"
Dec 06 10:08:50 compute-0 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:08:50] "GET /metrics HTTP/1.1" 200 48459 "" "Prometheus/2.51.0"
Dec 06 10:08:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/042dbc87b14c7e62417a7a4804c45e91c691e54e6f21f825478d88a0b2bd6aee/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Dec 06 10:08:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/042dbc87b14c7e62417a7a4804c45e91c691e54e6f21f825478d88a0b2bd6aee/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 10:08:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/042dbc87b14c7e62417a7a4804c45e91c691e54e6f21f825478d88a0b2bd6aee/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Dec 06 10:08:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/042dbc87b14c7e62417a7a4804c45e91c691e54e6f21f825478d88a0b2bd6aee/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.2.0.compute-0.dfwxck-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Dec 06 10:08:50 compute-0 podman[265770]: 2025-12-06 10:08:50.836881794 +0000 UTC m=+0.022357994 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 06 10:08:50 compute-0 podman[265770]: 2025-12-06 10:08:50.940354284 +0000 UTC m=+0.125830504 container init af69e9a47df8ecde800ecab5adbfc1ec516b668507faf977fed781c1bc7fd62d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 10:08:50 compute-0 podman[265770]: 2025-12-06 10:08:50.947684182 +0000 UTC m=+0.133160372 container start af69e9a47df8ecde800ecab5adbfc1ec516b668507faf977fed781c1bc7fd62d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 10:08:50 compute-0 bash[265770]: af69e9a47df8ecde800ecab5adbfc1ec516b668507faf977fed781c1bc7fd62d
Dec 06 10:08:50 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[265787]: 06/12/2025 10:08:50 : epoch 693400b2 : compute-0 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Dec 06 10:08:50 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[265787]: 06/12/2025 10:08:50 : epoch 693400b2 : compute-0 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Dec 06 10:08:50 compute-0 systemd[1]: Started Ceph nfs.cephfs.2.0.compute-0.dfwxck for 5ecd3f74-dade-5fc4-92ce-8950ae424258.
Dec 06 10:08:51 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[265787]: 06/12/2025 10:08:51 : epoch 693400b2 : compute-0 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Dec 06 10:08:51 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[265787]: 06/12/2025 10:08:51 : epoch 693400b2 : compute-0 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Dec 06 10:08:51 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[265787]: 06/12/2025 10:08:51 : epoch 693400b2 : compute-0 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Dec 06 10:08:51 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[265787]: 06/12/2025 10:08:51 : epoch 693400b2 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Dec 06 10:08:51 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[265787]: 06/12/2025 10:08:51 : epoch 693400b2 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Dec 06 10:08:51 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:08:51 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:08:51 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:08:51.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:08:51 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[265787]: 06/12/2025 10:08:51 : epoch 693400b2 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 06 10:08:51 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:08:51 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:08:51 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:08:51.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:08:51 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v831: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Dec 06 10:08:52 compute-0 ceph-mon[74327]: pgmap v831: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Dec 06 10:08:52 compute-0 sudo[265831]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 10:08:52 compute-0 sudo[265831]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:08:52 compute-0 sudo[265831]: pam_unix(sudo:session): session closed for user root
Dec 06 10:08:52 compute-0 nova_compute[254819]: 2025-12-06 10:08:52.319 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:08:53 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:08:53 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 10:08:53 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:08:53.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 10:08:53 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:08:53 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:08:53 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:08:53.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:08:53 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 06 10:08:53 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 10:08:53 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v832: 337 pgs: 337 active+clean; 109 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 2.0 MiB/s wr, 102 op/s
Dec 06 10:08:53 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 10:08:53 compute-0 ceph-mon[74327]: pgmap v832: 337 pgs: 337 active+clean; 109 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 2.0 MiB/s wr, 102 op/s
Dec 06 10:08:53 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 10:08:53 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 10:08:54 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 10:08:54 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 10:08:54 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 10:08:54 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 10:08:54 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:08:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:08:54.241 162267 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:08:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:08:54.241 162267 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:08:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:08:54.242 162267 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:08:55 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:08:55 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:08:55 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:08:55.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:08:55 compute-0 ovn_controller[152417]: 2025-12-06T10:08:55Z|00008|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:b4:37:0e 10.100.0.5
Dec 06 10:08:55 compute-0 ovn_controller[152417]: 2025-12-06T10:08:55Z|00009|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:b4:37:0e 10.100.0.5
Dec 06 10:08:55 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:08:55 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:08:55 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:08:55.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:08:55 compute-0 nova_compute[254819]: 2025-12-06 10:08:55.584 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:08:55 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v833: 337 pgs: 337 active+clean; 109 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 155 KiB/s rd, 2.0 MiB/s wr, 29 op/s
Dec 06 10:08:56 compute-0 ceph-mon[74327]: pgmap v833: 337 pgs: 337 active+clean; 109 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 155 KiB/s rd, 2.0 MiB/s wr, 29 op/s
Dec 06 10:08:57 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:08:57 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:08:57 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:08:57.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:08:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[265787]: 06/12/2025 10:08:57 : epoch 693400b2 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 06 10:08:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[265787]: 06/12/2025 10:08:57 : epoch 693400b2 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 06 10:08:57 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:08:57 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:08:57 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:08:57.205 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:08:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:08:57.281Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec 06 10:08:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:08:57.281Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec 06 10:08:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:08:57.282Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec 06 10:08:57 compute-0 nova_compute[254819]: 2025-12-06 10:08:57.323 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:08:57 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v834: 337 pgs: 337 active+clean; 109 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 155 KiB/s rd, 2.0 MiB/s wr, 29 op/s
Dec 06 10:08:58 compute-0 ceph-mon[74327]: pgmap v834: 337 pgs: 337 active+clean; 109 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 155 KiB/s rd, 2.0 MiB/s wr, 29 op/s
Dec 06 10:08:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:08:58.861Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 06 10:08:59 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:08:59 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:08:59 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:08:59 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:08:59.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:08:59 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:08:59 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:08:59 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:08:59.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:08:59 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v835: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Dec 06 10:09:00 compute-0 ceph-mon[74327]: pgmap v835: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Dec 06 10:09:00 compute-0 nova_compute[254819]: 2025-12-06 10:09:00.587 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:09:00 compute-0 nova_compute[254819]: 2025-12-06 10:09:00.786 254824 INFO nova.compute.manager [None req-2db727e9-e55e-4849-be94-b6f7817bb971 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 112440c2-8dcc-4a19-9d83-5489df97079a] Get console output
Dec 06 10:09:00 compute-0 nova_compute[254819]: 2025-12-06 10:09:00.792 261881 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Dec 06 10:09:00 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:09:00] "GET /metrics HTTP/1.1" 200 48475 "" "Prometheus/2.51.0"
Dec 06 10:09:00 compute-0 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:09:00] "GET /metrics HTTP/1.1" 200 48475 "" "Prometheus/2.51.0"
Dec 06 10:09:01 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:09:01 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:09:01 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:09:01.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:09:01 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:09:01 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 10:09:01 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:09:01.211 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 10:09:01 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v836: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Dec 06 10:09:02 compute-0 ceph-mon[74327]: pgmap v836: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Dec 06 10:09:02 compute-0 nova_compute[254819]: 2025-12-06 10:09:02.327 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:09:02 compute-0 podman[265866]: 2025-12-06 10:09:02.445923949 +0000 UTC m=+0.075573949 container health_status a9cf33c1bf7891b3fdd9db4717ad5f3587fe9e5acf9171920c6d6334b70a502a (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, config_id=multipathd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Dec 06 10:09:02 compute-0 nova_compute[254819]: 2025-12-06 10:09:02.492 254824 DEBUG nova.compute.manager [req-b650c24b-0d01-424c-b1b9-4a6aea98c31e req-1220055d-f909-4281-b22b-305c08155eaa d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 112440c2-8dcc-4a19-9d83-5489df97079a] Received event network-changed-2d0118f7-94f6-43f6-a67f-28e0faf9c3ae external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 10:09:02 compute-0 nova_compute[254819]: 2025-12-06 10:09:02.493 254824 DEBUG nova.compute.manager [req-b650c24b-0d01-424c-b1b9-4a6aea98c31e req-1220055d-f909-4281-b22b-305c08155eaa d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 112440c2-8dcc-4a19-9d83-5489df97079a] Refreshing instance network info cache due to event network-changed-2d0118f7-94f6-43f6-a67f-28e0faf9c3ae. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 10:09:02 compute-0 nova_compute[254819]: 2025-12-06 10:09:02.493 254824 DEBUG oslo_concurrency.lockutils [req-b650c24b-0d01-424c-b1b9-4a6aea98c31e req-1220055d-f909-4281-b22b-305c08155eaa d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Acquiring lock "refresh_cache-112440c2-8dcc-4a19-9d83-5489df97079a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 10:09:02 compute-0 nova_compute[254819]: 2025-12-06 10:09:02.493 254824 DEBUG oslo_concurrency.lockutils [req-b650c24b-0d01-424c-b1b9-4a6aea98c31e req-1220055d-f909-4281-b22b-305c08155eaa d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Acquired lock "refresh_cache-112440c2-8dcc-4a19-9d83-5489df97079a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 10:09:02 compute-0 nova_compute[254819]: 2025-12-06 10:09:02.494 254824 DEBUG nova.network.neutron [req-b650c24b-0d01-424c-b1b9-4a6aea98c31e req-1220055d-f909-4281-b22b-305c08155eaa d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 112440c2-8dcc-4a19-9d83-5489df97079a] Refreshing network info cache for port 2d0118f7-94f6-43f6-a67f-28e0faf9c3ae _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 06 10:09:02 compute-0 anacron[4456]: Job `cron.monthly' started
Dec 06 10:09:02 compute-0 anacron[4456]: Job `cron.monthly' terminated
Dec 06 10:09:02 compute-0 anacron[4456]: Normal exit (3 jobs run)
Dec 06 10:09:03 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:09:03 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:09:03 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:09:03.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:09:03 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[265787]: 06/12/2025 10:09:03 : epoch 693400b2 : compute-0 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Dec 06 10:09:03 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[265787]: 06/12/2025 10:09:03 : epoch 693400b2 : compute-0 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Dec 06 10:09:03 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[265787]: 06/12/2025 10:09:03 : epoch 693400b2 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Dec 06 10:09:03 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[265787]: 06/12/2025 10:09:03 : epoch 693400b2 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Dec 06 10:09:03 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[265787]: 06/12/2025 10:09:03 : epoch 693400b2 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Dec 06 10:09:03 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[265787]: 06/12/2025 10:09:03 : epoch 693400b2 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Dec 06 10:09:03 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[265787]: 06/12/2025 10:09:03 : epoch 693400b2 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Dec 06 10:09:03 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[265787]: 06/12/2025 10:09:03 : epoch 693400b2 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec 06 10:09:03 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[265787]: 06/12/2025 10:09:03 : epoch 693400b2 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec 06 10:09:03 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[265787]: 06/12/2025 10:09:03 : epoch 693400b2 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec 06 10:09:03 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[265787]: 06/12/2025 10:09:03 : epoch 693400b2 : compute-0 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Dec 06 10:09:03 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[265787]: 06/12/2025 10:09:03 : epoch 693400b2 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec 06 10:09:03 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[265787]: 06/12/2025 10:09:03 : epoch 693400b2 : compute-0 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Dec 06 10:09:03 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[265787]: 06/12/2025 10:09:03 : epoch 693400b2 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Dec 06 10:09:03 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[265787]: 06/12/2025 10:09:03 : epoch 693400b2 : compute-0 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Dec 06 10:09:03 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[265787]: 06/12/2025 10:09:03 : epoch 693400b2 : compute-0 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Dec 06 10:09:03 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[265787]: 06/12/2025 10:09:03 : epoch 693400b2 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Dec 06 10:09:03 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[265787]: 06/12/2025 10:09:03 : epoch 693400b2 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Dec 06 10:09:03 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[265787]: 06/12/2025 10:09:03 : epoch 693400b2 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Dec 06 10:09:03 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[265787]: 06/12/2025 10:09:03 : epoch 693400b2 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Dec 06 10:09:03 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[265787]: 06/12/2025 10:09:03 : epoch 693400b2 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Dec 06 10:09:03 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[265787]: 06/12/2025 10:09:03 : epoch 693400b2 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Dec 06 10:09:03 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[265787]: 06/12/2025 10:09:03 : epoch 693400b2 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Dec 06 10:09:03 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[265787]: 06/12/2025 10:09:03 : epoch 693400b2 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Dec 06 10:09:03 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[265787]: 06/12/2025 10:09:03 : epoch 693400b2 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Dec 06 10:09:03 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[265787]: 06/12/2025 10:09:03 : epoch 693400b2 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Dec 06 10:09:03 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[265787]: 06/12/2025 10:09:03 : epoch 693400b2 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Dec 06 10:09:03 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:09:03 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:09:03 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:09:03.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
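The anonymous "HEAD / HTTP/1.0" requests radosgw logs every two seconds from 192.168.122.100 and 192.168.122.102 are load-balancer health probes, not client traffic; op status=0 with http_status=200 is the expected steady state. A sketch of an equivalent probe; the port is an assumption, since it does not appear in these lines:

    import http.client

    conn = http.client.HTTPConnection("192.168.122.100", 8080, timeout=2)
    conn.request("HEAD", "/")   # same request the balancer sends
    resp = conn.getresponse()
    print(resp.status)          # 200 while radosgw is healthy
    conn.close()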
Dec 06 10:09:03 compute-0 nova_compute[254819]: 2025-12-06 10:09:03.555 254824 DEBUG nova.network.neutron [req-b650c24b-0d01-424c-b1b9-4a6aea98c31e req-1220055d-f909-4281-b22b-305c08155eaa d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 112440c2-8dcc-4a19-9d83-5489df97079a] Updated VIF entry in instance network info cache for port 2d0118f7-94f6-43f6-a67f-28e0faf9c3ae. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 06 10:09:03 compute-0 nova_compute[254819]: 2025-12-06 10:09:03.556 254824 DEBUG nova.network.neutron [req-b650c24b-0d01-424c-b1b9-4a6aea98c31e req-1220055d-f909-4281-b22b-305c08155eaa d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 112440c2-8dcc-4a19-9d83-5489df97079a] Updating instance_info_cache with network_info: [{"id": "2d0118f7-94f6-43f6-a67f-28e0faf9c3ae", "address": "fa:16:3e:b4:37:0e", "network": {"id": "dccd9941-4f3e-4086-b9cd-651d8e99e8ec", "bridge": "br-int", "label": "tempest-network-smoke--1290241953", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2d0118f7-94", "ovs_interfaceid": "2d0118f7-94f6-43f6-a67f-28e0faf9c3ae", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 10:09:03 compute-0 nova_compute[254819]: 2025-12-06 10:09:03.585 254824 DEBUG oslo_concurrency.lockutils [req-b650c24b-0d01-424c-b1b9-4a6aea98c31e req-1220055d-f909-4281-b22b-305c08155eaa d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Releasing lock "refresh_cache-112440c2-8dcc-4a19-9d83-5489df97079a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
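The instance_info_cache payload logged above is plain JSON: a list of VIFs, each carrying a port id, MAC, and a network with subnets and fixed IPs. A short sketch pulling those fields out, with the literal shortened to just the keys used here:

    import json

    # Shortened copy of the network_info list logged above.
    network_info_json = '''[{"id": "2d0118f7-94f6-43f6-a67f-28e0faf9c3ae",
      "address": "fa:16:3e:b4:37:0e",
      "network": {"subnets": [{"ips": [{"address": "10.100.0.5"}]}]}}]'''

    for vif in json.loads(network_info_json):
        print(vif["id"], vif["address"])            # port UUID and MAC
        for subnet in vif["network"]["subnets"]:
            for ip in subnet["ips"]:
                print("  fixed ip:", ip["address"])  # e.g. 10.100.0.5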
Dec 06 10:09:03 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v837: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 333 KiB/s rd, 2.1 MiB/s wr, 68 op/s
Dec 06 10:09:04 compute-0 ceph-mon[74327]: pgmap v837: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 333 KiB/s rd, 2.1 MiB/s wr, 68 op/s
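The pgmap lines the mgr emits and the mon echoes every couple of seconds are the cluster heartbeat: PG states, logical data vs. raw usage, and current client throughput. A sketch parsing one of them, using the line above as input:

    import re

    line = ("pgmap v837: 337 pgs: 337 active+clean; 121 MiB data, "
            "306 MiB used, 60 GiB / 60 GiB avail; 333 KiB/s rd, 2.1 MiB/s wr, 68 op/s")
    m = re.search(r"pgmap v(\d+): (\d+) pgs: (.+?); (.+?) data, (.+?) used, "
                  r"(.+?) / (.+?) avail(?:; (.+))?$", line)
    version, pgs, states, data, used, avail, total, rates = m.groups()
    print(version, pgs, states, rates)   # 837 337 '337 active+clean' '333 KiB/s rd, ...'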
Dec 06 10:09:04 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:09:04 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[265787]: 06/12/2025 10:09:04 : epoch 693400b2 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3e24000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:09:04 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[265787]: 06/12/2025 10:09:04 : epoch 693400b2 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3e14001970 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:09:04 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[265787]: 06/12/2025 10:09:04 : epoch 693400b2 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3e00000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:09:05 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:09:05 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:09:05 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:09:05.079 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:09:05 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:09:05 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:09:05 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:09:05.217 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:09:05 compute-0 nova_compute[254819]: 2025-12-06 10:09:05.590 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:09:05 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v838: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 178 KiB/s rd, 107 KiB/s wr, 38 op/s
Dec 06 10:09:06 compute-0 ceph-mon[74327]: pgmap v838: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 178 KiB/s rd, 107 KiB/s wr, 38 op/s
Dec 06 10:09:06 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[265787]: 06/12/2025 10:09:06 : epoch 693400b2 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3e20001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:09:06 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[265787]: 06/12/2025 10:09:06 : epoch 693400b2 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3e20001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:09:06 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-haproxy-nfs-cephfs-compute-0-fzuvue[96127]: [WARNING] 339/100906 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Dec 06 10:09:06 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[265787]: 06/12/2025 10:09:06 : epoch 693400b2 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3e14001970 fd 38 proxy header rest len failed header rlen = % (will set dead)
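The recurring svc_vc_recv "proxy header rest len failed" events line up with the haproxy Layer4 checks above: in this ingress layout the ganesha backend appears to expect a PROXY protocol preamble on each connection, and a Layer4 check opens and closes a bare TCP connection without sending one, so the transport reader fails to parse a proxy header and marks that connection dead. Noisy but, by this reading, harmless, recurring once per check interval. A sketch of what a well-formed probe would send first, assuming PROXY protocol v1; the address and ports are illustrative, since the backend port is not shown in these lines:

    import socket

    # PROXY v1 preamble: "PROXY TCP4 <src> <dst> <sport> <dport>\r\n"
    preamble = b"PROXY TCP4 192.168.122.100 192.168.122.100 40000 2049\r\n"
    with socket.create_connection(("192.168.122.100", 2049), timeout=2) as s:
        s.sendall(preamble)   # omitting this is what triggers the svc_vc_recv events above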
Dec 06 10:09:07 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:09:07 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:09:07 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:09:07.082 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:09:07 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:09:07 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:09:07 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:09:07.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:09:07 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:09:07.283Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec 06 10:09:07 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:09:07.284Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
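Alertmanager on this host cannot deliver ceph-dashboard webhook notifications: the POST to compute-1 times out at the TCP level, and both receivers then exhaust their retries ("context deadline exceeded"), so the alerts are dropped for this dispatch cycle. For local debugging, a minimal stand-in receiver that accepts the same POST and returns 200; port 8443 and plain HTTP match the URLs in the log, but the real dashboard receiver may differ:

    from http.server import BaseHTTPRequestHandler, HTTPServer
    import json

    class Receiver(BaseHTTPRequestHandler):
        def do_POST(self):
            length = int(self.headers.get("Content-Length", 0))
            body = self.rfile.read(length)
            payload = json.loads(body) if body else {}
            # Alertmanager webhook payloads carry a status and an alerts list.
            print(payload.get("status"), len(payload.get("alerts", [])))
            self.send_response(200)
            self.end_headers()

    HTTPServer(("", 8443), Receiver).serve_forever()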
Dec 06 10:09:07 compute-0 nova_compute[254819]: 2025-12-06 10:09:07.331 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:09:07 compute-0 podman[265908]: 2025-12-06 10:09:07.457373848 +0000 UTC m=+0.087852660 container health_status ed08b04f63d3695f0aa2f04fcc706897af4aa24f286fa3cd10a4323ee2c868ab (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
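The podman event above is a periodic healthcheck result for ovn_controller (health_status=healthy, failing streak 0), driven by the /openstack/healthcheck test visible in its config_data. The same state can be read back on demand; a sketch, assuming podman is invoked with sufficient privileges to see the container:

    import json
    import subprocess

    out = subprocess.run(
        ["podman", "inspect", "--format", "{{json .State.Health}}", "ovn_controller"],
        capture_output=True, text=True, check=True).stdout
    health = json.loads(out)
    print(health.get("Status"), health.get("FailingStreak"))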
Dec 06 10:09:07 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v839: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 178 KiB/s rd, 107 KiB/s wr, 38 op/s
Dec 06 10:09:08 compute-0 ceph-mon[74327]: pgmap v839: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 178 KiB/s rd, 107 KiB/s wr, 38 op/s
Dec 06 10:09:08 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[265787]: 06/12/2025 10:09:08 : epoch 693400b2 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3e000016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:09:08 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[265787]: 06/12/2025 10:09:08 : epoch 693400b2 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3e180013d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:09:08 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:09:08.862Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 06 10:09:08 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 06 10:09:08 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
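The mgr's cephadm module polls the OSD blocklist through the mon command interface, and the audit channel records each dispatch, as above. The same query from the CLI, assuming a working admin keyring on the host and that the JSON output is a list of blocklist entries:

    import json
    import subprocess

    # Same mon command the mgr dispatches above.
    out = subprocess.run(["ceph", "osd", "blocklist", "ls", "--format", "json"],
                         capture_output=True, text=True, check=True).stdout
    for entry in (json.loads(out) if out.strip() else []):
        print(entry)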
Dec 06 10:09:08 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[265787]: 06/12/2025 10:09:08 : epoch 693400b2 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3e200029b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:09:09 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:09:09 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 10:09:09 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:09:09 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:09:09 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:09:09.083 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:09:09 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:09:09 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:09:09 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:09:09.223 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:09:09 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v840: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 178 KiB/s rd, 113 KiB/s wr, 38 op/s
Dec 06 10:09:10 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[265787]: 06/12/2025 10:09:10 : epoch 693400b2 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3e14001970 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:09:10 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[265787]: 06/12/2025 10:09:10 : epoch 693400b2 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3e000016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:09:10 compute-0 nova_compute[254819]: 2025-12-06 10:09:10.592 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:09:10 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:09:10] "GET /metrics HTTP/1.1" 200 48479 "" "Prometheus/2.51.0"
Dec 06 10:09:10 compute-0 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:09:10] "GET /metrics HTTP/1.1" 200 48479 "" "Prometheus/2.51.0"
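The two lines above are the mgr prometheus module serving a 48 KiB /metrics page to Prometheus 2.51.0. The scrape is easy to reproduce; port 9283 is the module's usual default and is an assumption here, since the port is not visible in the log line:

    import urllib.request

    url = "http://192.168.122.100:9283/metrics"
    with urllib.request.urlopen(url, timeout=5) as resp:
        text = resp.read().decode()
    # Pick out cluster health; metric names follow the module's usual scheme.
    print([l for l in text.splitlines() if l.startswith("ceph_health_status")])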
Dec 06 10:09:10 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[265787]: 06/12/2025 10:09:10 : epoch 693400b2 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3e18001ef0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:09:11 compute-0 ceph-mon[74327]: pgmap v840: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 178 KiB/s rd, 113 KiB/s wr, 38 op/s
Dec 06 10:09:11 compute-0 ceph-mon[74327]: from='client.? 192.168.122.102:0/465585946' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 10:09:11 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:09:11 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:09:11 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:09:11.085 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:09:11 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:09:11 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:09:11 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:09:11.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:09:11 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v841: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 5.9 KiB/s rd, 17 KiB/s wr, 1 op/s
Dec 06 10:09:12 compute-0 nova_compute[254819]: 2025-12-06 10:09:12.335 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:09:12 compute-0 sudo[265939]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 10:09:12 compute-0 sudo[265939]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:09:12 compute-0 sudo[265939]: pam_unix(sudo:session): session closed for user root
Dec 06 10:09:12 compute-0 podman[265963]: 2025-12-06 10:09:12.418653487 +0000 UTC m=+0.057129798 container health_status ec1e66d2175087ad7a28a9a6b5a784e30128df3f5e268959d009e364cd5120f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 06 10:09:12 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[265787]: 06/12/2025 10:09:12 : epoch 693400b2 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3e200029b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:09:12 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[265787]: 06/12/2025 10:09:12 : epoch 693400b2 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3e14001970 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:09:12 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[265787]: 06/12/2025 10:09:12 : epoch 693400b2 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3e000016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:09:13 compute-0 ceph-mon[74327]: pgmap v841: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 5.9 KiB/s rd, 17 KiB/s wr, 1 op/s
Dec 06 10:09:13 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:09:13 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:09:13 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:09:13.088 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:09:13 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:09:13 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 10:09:13 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:09:13.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 10:09:13 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v842: 337 pgs: 337 active+clean; 167 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 1.8 MiB/s wr, 29 op/s
Dec 06 10:09:14 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:09:14 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[265787]: 06/12/2025 10:09:14 : epoch 693400b2 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3e18001ef0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:09:14 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[265787]: 06/12/2025 10:09:14 : epoch 693400b2 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3e200029b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:09:14 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[265787]: 06/12/2025 10:09:14 : epoch 693400b2 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3e14001970 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:09:15 compute-0 ceph-mon[74327]: pgmap v842: 337 pgs: 337 active+clean; 167 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 1.8 MiB/s wr, 29 op/s
Dec 06 10:09:15 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:09:15 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:09:15 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:09:15.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:09:15 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:09:15 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 10:09:15 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:09:15.230 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 10:09:15 compute-0 sudo[265988]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 10:09:15 compute-0 sudo[265988]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:09:15 compute-0 sudo[265988]: pam_unix(sudo:session): session closed for user root
Dec 06 10:09:15 compute-0 nova_compute[254819]: 2025-12-06 10:09:15.593 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:09:15 compute-0 sudo[266013]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 check-host
Dec 06 10:09:15 compute-0 sudo[266013]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:09:15 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v843: 337 pgs: 337 active+clean; 167 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 06 10:09:15 compute-0 sudo[266013]: pam_unix(sudo:session): session closed for user root
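The sudo triplets above are the mgr's cephadm module driving this host over SSH as ceph-admin: probe with /bin/true, locate python3, then run the staged cephadm binary with check-host (and, further down, gather-facts and ceph-volume). The check-host step can be run by hand using the exact path from the log:

    import subprocess

    cephadm = ("/var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/"
               "cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36")
    subprocess.run(["sudo", "/bin/python3", cephadm, "--timeout", "895", "check-host"],
                   check=True)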
Dec 06 10:09:15 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec 06 10:09:15 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 06 10:09:15 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 10:09:15 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec 06 10:09:15 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 10:09:16 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 06 10:09:16 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 10:09:16 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 10:09:16 compute-0 ceph-mon[74327]: from='client.? 192.168.122.102:0/3189126703' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 10:09:16 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 10:09:16 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 10:09:16 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 10:09:16 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 10:09:16 compute-0 sudo[266060]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 10:09:16 compute-0 sudo[266060]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:09:16 compute-0 sudo[266060]: pam_unix(sudo:session): session closed for user root
Dec 06 10:09:16 compute-0 sudo[266085]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Dec 06 10:09:16 compute-0 sudo[266085]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:09:16 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[265787]: 06/12/2025 10:09:16 : epoch 693400b2 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3e00002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:09:16 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[265787]: 06/12/2025 10:09:16 : epoch 693400b2 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3e18001ef0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:09:16 compute-0 sudo[266085]: pam_unix(sudo:session): session closed for user root
Dec 06 10:09:16 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 06 10:09:16 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 10:09:16 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 06 10:09:16 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 10:09:16 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v844: 337 pgs: 337 active+clean; 167 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 2.0 MiB/s wr, 30 op/s
Dec 06 10:09:16 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 06 10:09:16 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 10:09:16 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Dec 06 10:09:16 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 10:09:16 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 06 10:09:16 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 10:09:16 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 06 10:09:16 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 10:09:16 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 06 10:09:16 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 10:09:16 compute-0 sudo[266142]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 10:09:16 compute-0 sudo[266142]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:09:16 compute-0 sudo[266142]: pam_unix(sudo:session): session closed for user root
Dec 06 10:09:16 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[265787]: 06/12/2025 10:09:16 : epoch 693400b2 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3e20003e40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:09:16 compute-0 sudo[266167]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 5ecd3f74-dade-5fc4-92ce-8950ae424258 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Dec 06 10:09:16 compute-0 sudo[266167]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:09:17 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:09:17 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:09:17 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:09:17.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:09:17 compute-0 ceph-mon[74327]: pgmap v843: 337 pgs: 337 active+clean; 167 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 06 10:09:17 compute-0 ceph-mon[74327]: from='client.? 192.168.122.102:0/3347373695' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 10:09:17 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 10:09:17 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 10:09:17 compute-0 ceph-mon[74327]: pgmap v844: 337 pgs: 337 active+clean; 167 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 2.0 MiB/s wr, 30 op/s
Dec 06 10:09:17 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 10:09:17 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 10:09:17 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 10:09:17 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 10:09:17 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 10:09:17 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:09:17 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:09:17 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:09:17.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:09:17 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:09:17.285Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 06 10:09:17 compute-0 nova_compute[254819]: 2025-12-06 10:09:17.338 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:09:17 compute-0 podman[266235]: 2025-12-06 10:09:17.463754513 +0000 UTC m=+0.057428275 container create f3c10d973b152b0aece4174b79659043472ccc7ddf0745cc54b60fae8e32387e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_napier, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 06 10:09:17 compute-0 systemd[1]: Started libpod-conmon-f3c10d973b152b0aece4174b79659043472ccc7ddf0745cc54b60fae8e32387e.scope.
Dec 06 10:09:17 compute-0 podman[266235]: 2025-12-06 10:09:17.438850959 +0000 UTC m=+0.032524701 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 06 10:09:17 compute-0 systemd[1]: Started libcrun container.
Dec 06 10:09:17 compute-0 podman[266235]: 2025-12-06 10:09:17.575950922 +0000 UTC m=+0.169624674 container init f3c10d973b152b0aece4174b79659043472ccc7ddf0745cc54b60fae8e32387e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_napier, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 10:09:17 compute-0 podman[266235]: 2025-12-06 10:09:17.592056133 +0000 UTC m=+0.185729845 container start f3c10d973b152b0aece4174b79659043472ccc7ddf0745cc54b60fae8e32387e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_napier, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 10:09:17 compute-0 podman[266235]: 2025-12-06 10:09:17.595447703 +0000 UTC m=+0.189121505 container attach f3c10d973b152b0aece4174b79659043472ccc7ddf0745cc54b60fae8e32387e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_napier, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 10:09:17 compute-0 angry_napier[266253]: 167 167
Dec 06 10:09:17 compute-0 systemd[1]: libpod-f3c10d973b152b0aece4174b79659043472ccc7ddf0745cc54b60fae8e32387e.scope: Deactivated successfully.
Dec 06 10:09:17 compute-0 podman[266235]: 2025-12-06 10:09:17.603343624 +0000 UTC m=+0.197017386 container died f3c10d973b152b0aece4174b79659043472ccc7ddf0745cc54b60fae8e32387e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_napier, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 10:09:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-7571f8847bc2b336cebb8e4b0b1fc704059f2058e7693785ee9bd13791e0c578-merged.mount: Deactivated successfully.
Dec 06 10:09:17 compute-0 podman[266235]: 2025-12-06 10:09:17.657660547 +0000 UTC m=+0.251334309 container remove f3c10d973b152b0aece4174b79659043472ccc7ddf0745cc54b60fae8e32387e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_napier, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Dec 06 10:09:17 compute-0 systemd[1]: libpod-conmon-f3c10d973b152b0aece4174b79659043472ccc7ddf0745cc54b60fae8e32387e.scope: Deactivated successfully.
Dec 06 10:09:17 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:09:17.796 162267 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=7, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '9a:dc:0d', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'b6:0a:c4:b8:be:39'}, ipsec=False) old=SB_Global(nb_cfg=6) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 10:09:17 compute-0 nova_compute[254819]: 2025-12-06 10:09:17.799 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:09:17 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:09:17.799 162267 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 06 10:09:17 compute-0 podman[266277]: 2025-12-06 10:09:17.910442122 +0000 UTC m=+0.063126378 container create d8e2d2afa247b0defa23e2af0f662854889c141a6cb66c58768ab4287ec48f70 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_rhodes, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1)
Dec 06 10:09:17 compute-0 systemd[1]: Started libpod-conmon-d8e2d2afa247b0defa23e2af0f662854889c141a6cb66c58768ab4287ec48f70.scope.
Dec 06 10:09:17 compute-0 podman[266277]: 2025-12-06 10:09:17.883072391 +0000 UTC m=+0.035756647 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 06 10:09:17 compute-0 systemd[1]: Started libcrun container.
Dec 06 10:09:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd497d5cff615d87194c873b4e4c547b4de1e8653f54a6f013293fb1b7e7bcfc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 10:09:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd497d5cff615d87194c873b4e4c547b4de1e8653f54a6f013293fb1b7e7bcfc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 10:09:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd497d5cff615d87194c873b4e4c547b4de1e8653f54a6f013293fb1b7e7bcfc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 10:09:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd497d5cff615d87194c873b4e4c547b4de1e8653f54a6f013293fb1b7e7bcfc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 10:09:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd497d5cff615d87194c873b4e4c547b4de1e8653f54a6f013293fb1b7e7bcfc/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
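The kernel lines above note that these overlay mounts sit on XFS filesystems with 32-bit inode timestamps, which only reach 0x7fffffff seconds after the epoch; nothing is wrong today, it is a year-2038 heads-up (an XFS bigtime format would extend the range). The printed limit converts as follows:

    from datetime import datetime, timezone

    # 0x7fffffff is the cutoff printed by the kernel above.
    print(datetime.fromtimestamp(0x7FFFFFFF, tz=timezone.utc))
    # -> 2038-01-19 03:14:07+00:00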
Dec 06 10:09:18 compute-0 podman[266277]: 2025-12-06 10:09:18.021046318 +0000 UTC m=+0.173730554 container init d8e2d2afa247b0defa23e2af0f662854889c141a6cb66c58768ab4287ec48f70 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_rhodes, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Dec 06 10:09:18 compute-0 podman[266277]: 2025-12-06 10:09:18.027527672 +0000 UTC m=+0.180211898 container start d8e2d2afa247b0defa23e2af0f662854889c141a6cb66c58768ab4287ec48f70 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_rhodes, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0)
Dec 06 10:09:18 compute-0 podman[266277]: 2025-12-06 10:09:18.031992781 +0000 UTC m=+0.184677017 container attach d8e2d2afa247b0defa23e2af0f662854889c141a6cb66c58768ab4287ec48f70 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_rhodes, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 10:09:18 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[265787]: 06/12/2025 10:09:18 : epoch 693400b2 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3e14001970 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:09:18 compute-0 flamboyant_rhodes[266293]: --> passed data devices: 0 physical, 1 LVM
Dec 06 10:09:18 compute-0 flamboyant_rhodes[266293]: --> All data devices are unavailable
Dec 06 10:09:18 compute-0 systemd[1]: libpod-d8e2d2afa247b0defa23e2af0f662854889c141a6cb66c58768ab4287ec48f70.scope: Deactivated successfully.
Dec 06 10:09:18 compute-0 podman[266277]: 2025-12-06 10:09:18.49217938 +0000 UTC m=+0.644863606 container died d8e2d2afa247b0defa23e2af0f662854889c141a6cb66c58768ab4287ec48f70 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_rhodes, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid)
Dec 06 10:09:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-dd497d5cff615d87194c873b4e4c547b4de1e8653f54a6f013293fb1b7e7bcfc-merged.mount: Deactivated successfully.
Dec 06 10:09:18 compute-0 podman[266277]: 2025-12-06 10:09:18.54303803 +0000 UTC m=+0.695722266 container remove d8e2d2afa247b0defa23e2af0f662854889c141a6cb66c58768ab4287ec48f70 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_rhodes, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1)
Dec 06 10:09:18 compute-0 systemd[1]: libpod-conmon-d8e2d2afa247b0defa23e2af0f662854889c141a6cb66c58768ab4287ec48f70.scope: Deactivated successfully.
Dec 06 10:09:18 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[265787]: 06/12/2025 10:09:18 : epoch 693400b2 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3e00002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:09:18 compute-0 sudo[266167]: pam_unix(sudo:session): session closed for user root
Dec 06 10:09:18 compute-0 sudo[266321]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 10:09:18 compute-0 sudo[266321]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:09:18 compute-0 sudo[266321]: pam_unix(sudo:session): session closed for user root
Dec 06 10:09:18 compute-0 sudo[266346]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 5ecd3f74-dade-5fc4-92ce-8950ae424258 -- lvm list --format json
Dec 06 10:09:18 compute-0 sudo[266346]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
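The short-lived ceph containers above are cephadm's OSD provisioning pass: ceph-volume lvm batch against /dev/ceph_vg0/ceph_lv0 reports "All data devices are unavailable" (by the usual reading, the LV is already consumed by an existing OSD), so nothing new is created and cephadm falls back to inventorying what exists with the lvm list call just logged. A sketch of reading that inventory, mirroring the logged command and assuming ceph-volume's usual JSON shape (a mapping of OSD id to device records):

    import json
    import subprocess

    fsid = "5ecd3f74-dade-5fc4-92ce-8950ae424258"
    cephadm = (f"/var/lib/ceph/{fsid}/"
               "cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36")
    out = subprocess.run(
        ["sudo", "/bin/python3", cephadm, "--timeout", "895",
         "ceph-volume", "--fsid", fsid, "--", "lvm", "list", "--format", "json"],
        capture_output=True, text=True, check=True).stdout
    for osd_id, devices in json.loads(out).items():
        for dev in devices:
            print(osd_id, dev.get("lv_path"), dev.get("type"))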
Dec 06 10:09:18 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v845: 337 pgs: 337 active+clean; 167 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 27 KiB/s rd, 2.0 MiB/s wr, 41 op/s
Dec 06 10:09:18 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:09:18.863Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec 06 10:09:18 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:09:18.863Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec 06 10:09:18 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:09:18.865Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 06 10:09:18 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[265787]: 06/12/2025 10:09:18 : epoch 693400b2 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3e18003380 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:09:19 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:09:19 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:09:19 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:09:19 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:09:19.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:09:19 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:09:19 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:09:19 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:09:19.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:09:19 compute-0 podman[266414]: 2025-12-06 10:09:19.380451651 +0000 UTC m=+0.061209607 container create 82fe9d465ef37788ca7a09849e0e16eb89f68ccd8dba668ffd0ecbbbf331d06d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_shannon, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, OSD_FLAVOR=default)
Dec 06 10:09:19 compute-0 systemd[1]: Started libpod-conmon-82fe9d465ef37788ca7a09849e0e16eb89f68ccd8dba668ffd0ecbbbf331d06d.scope.
Dec 06 10:09:19 compute-0 podman[266414]: 2025-12-06 10:09:19.361143795 +0000 UTC m=+0.041901771 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 06 10:09:19 compute-0 systemd[1]: Started libcrun container.
Dec 06 10:09:19 compute-0 podman[266414]: 2025-12-06 10:09:19.481979364 +0000 UTC m=+0.162737330 container init 82fe9d465ef37788ca7a09849e0e16eb89f68ccd8dba668ffd0ecbbbf331d06d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_shannon, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Dec 06 10:09:19 compute-0 podman[266414]: 2025-12-06 10:09:19.494747075 +0000 UTC m=+0.175505061 container start 82fe9d465ef37788ca7a09849e0e16eb89f68ccd8dba668ffd0ecbbbf331d06d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_shannon, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Dec 06 10:09:19 compute-0 podman[266414]: 2025-12-06 10:09:19.503245422 +0000 UTC m=+0.184003388 container attach 82fe9d465ef37788ca7a09849e0e16eb89f68ccd8dba668ffd0ecbbbf331d06d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_shannon, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Dec 06 10:09:19 compute-0 dreamy_shannon[266430]: 167 167
Dec 06 10:09:19 compute-0 systemd[1]: libpod-82fe9d465ef37788ca7a09849e0e16eb89f68ccd8dba668ffd0ecbbbf331d06d.scope: Deactivated successfully.
Dec 06 10:09:19 compute-0 conmon[266430]: conmon 82fe9d465ef37788ca7a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-82fe9d465ef37788ca7a09849e0e16eb89f68ccd8dba668ffd0ecbbbf331d06d.scope/container/memory.events
Dec 06 10:09:19 compute-0 podman[266414]: 2025-12-06 10:09:19.508832361 +0000 UTC m=+0.189590347 container died 82fe9d465ef37788ca7a09849e0e16eb89f68ccd8dba668ffd0ecbbbf331d06d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_shannon, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 10:09:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-d30384813c6d1f79449a6b85afed6f70b869d2639df526d1ef9c3f7a97ec1749-merged.mount: Deactivated successfully.
Dec 06 10:09:19 compute-0 podman[266414]: 2025-12-06 10:09:19.563586155 +0000 UTC m=+0.244344141 container remove 82fe9d465ef37788ca7a09849e0e16eb89f68ccd8dba668ffd0ecbbbf331d06d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_shannon, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 10:09:19 compute-0 systemd[1]: libpod-conmon-82fe9d465ef37788ca7a09849e0e16eb89f68ccd8dba668ffd0ecbbbf331d06d.scope: Deactivated successfully.
Dec 06 10:09:19 compute-0 podman[266455]: 2025-12-06 10:09:19.786460792 +0000 UTC m=+0.068378859 container create 692e832a55e2efe1330cc2549f3e7838ad8daf085ef0fa7172f509d7d177ce25 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_hofstadter, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 10:09:19 compute-0 systemd[1]: Started libpod-conmon-692e832a55e2efe1330cc2549f3e7838ad8daf085ef0fa7172f509d7d177ce25.scope.
Dec 06 10:09:19 compute-0 podman[266455]: 2025-12-06 10:09:19.761374402 +0000 UTC m=+0.043292479 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 06 10:09:19 compute-0 ceph-mon[74327]: pgmap v845: 337 pgs: 337 active+clean; 167 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 27 KiB/s rd, 2.0 MiB/s wr, 41 op/s
Dec 06 10:09:19 compute-0 systemd[1]: Started libcrun container.
Dec 06 10:09:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/57e5cb8b54655ed78e5552bce28f8558596b166bcdb18ecaa9b1bc71e999b642/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 10:09:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/57e5cb8b54655ed78e5552bce28f8558596b166bcdb18ecaa9b1bc71e999b642/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 10:09:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/57e5cb8b54655ed78e5552bce28f8558596b166bcdb18ecaa9b1bc71e999b642/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 10:09:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/57e5cb8b54655ed78e5552bce28f8558596b166bcdb18ecaa9b1bc71e999b642/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 10:09:19 compute-0 podman[266455]: 2025-12-06 10:09:19.917264298 +0000 UTC m=+0.199182425 container init 692e832a55e2efe1330cc2549f3e7838ad8daf085ef0fa7172f509d7d177ce25 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_hofstadter, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 06 10:09:19 compute-0 podman[266455]: 2025-12-06 10:09:19.931547759 +0000 UTC m=+0.213465806 container start 692e832a55e2efe1330cc2549f3e7838ad8daf085ef0fa7172f509d7d177ce25 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_hofstadter, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 10:09:19 compute-0 podman[266455]: 2025-12-06 10:09:19.937654553 +0000 UTC m=+0.219572600 container attach 692e832a55e2efe1330cc2549f3e7838ad8daf085ef0fa7172f509d7d177ce25 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_hofstadter, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 10:09:20 compute-0 affectionate_hofstadter[266471]: {
Dec 06 10:09:20 compute-0 affectionate_hofstadter[266471]:     "1": [
Dec 06 10:09:20 compute-0 affectionate_hofstadter[266471]:         {
Dec 06 10:09:20 compute-0 affectionate_hofstadter[266471]:             "devices": [
Dec 06 10:09:20 compute-0 affectionate_hofstadter[266471]:                 "/dev/loop3"
Dec 06 10:09:20 compute-0 affectionate_hofstadter[266471]:             ],
Dec 06 10:09:20 compute-0 affectionate_hofstadter[266471]:             "lv_name": "ceph_lv0",
Dec 06 10:09:20 compute-0 affectionate_hofstadter[266471]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 10:09:20 compute-0 affectionate_hofstadter[266471]:             "lv_size": "21470642176",
Dec 06 10:09:20 compute-0 affectionate_hofstadter[266471]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=a55pcS-v3Vt-CJ4W-fqCV-Baze-9VlY-jG9prS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=5ecd3f74-dade-5fc4-92ce-8950ae424258,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=7899c4d8-edb4-4836-b838-c4aa702ad7af,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 06 10:09:20 compute-0 affectionate_hofstadter[266471]:             "lv_uuid": "a55pcS-v3Vt-CJ4W-fqCV-Baze-9VlY-jG9prS",
Dec 06 10:09:20 compute-0 affectionate_hofstadter[266471]:             "name": "ceph_lv0",
Dec 06 10:09:20 compute-0 affectionate_hofstadter[266471]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 10:09:20 compute-0 affectionate_hofstadter[266471]:             "tags": {
Dec 06 10:09:20 compute-0 affectionate_hofstadter[266471]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 06 10:09:20 compute-0 affectionate_hofstadter[266471]:                 "ceph.block_uuid": "a55pcS-v3Vt-CJ4W-fqCV-Baze-9VlY-jG9prS",
Dec 06 10:09:20 compute-0 affectionate_hofstadter[266471]:                 "ceph.cephx_lockbox_secret": "",
Dec 06 10:09:20 compute-0 affectionate_hofstadter[266471]:                 "ceph.cluster_fsid": "5ecd3f74-dade-5fc4-92ce-8950ae424258",
Dec 06 10:09:20 compute-0 affectionate_hofstadter[266471]:                 "ceph.cluster_name": "ceph",
Dec 06 10:09:20 compute-0 affectionate_hofstadter[266471]:                 "ceph.crush_device_class": "",
Dec 06 10:09:20 compute-0 affectionate_hofstadter[266471]:                 "ceph.encrypted": "0",
Dec 06 10:09:20 compute-0 affectionate_hofstadter[266471]:                 "ceph.osd_fsid": "7899c4d8-edb4-4836-b838-c4aa702ad7af",
Dec 06 10:09:20 compute-0 affectionate_hofstadter[266471]:                 "ceph.osd_id": "1",
Dec 06 10:09:20 compute-0 affectionate_hofstadter[266471]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 06 10:09:20 compute-0 affectionate_hofstadter[266471]:                 "ceph.type": "block",
Dec 06 10:09:20 compute-0 affectionate_hofstadter[266471]:                 "ceph.vdo": "0",
Dec 06 10:09:20 compute-0 affectionate_hofstadter[266471]:                 "ceph.with_tpm": "0"
Dec 06 10:09:20 compute-0 affectionate_hofstadter[266471]:             },
Dec 06 10:09:20 compute-0 affectionate_hofstadter[266471]:             "type": "block",
Dec 06 10:09:20 compute-0 affectionate_hofstadter[266471]:             "vg_name": "ceph_vg0"
Dec 06 10:09:20 compute-0 affectionate_hofstadter[266471]:         }
Dec 06 10:09:20 compute-0 affectionate_hofstadter[266471]:     ]
Dec 06 10:09:20 compute-0 affectionate_hofstadter[266471]: }
Dec 06 10:09:20 compute-0 systemd[1]: libpod-692e832a55e2efe1330cc2549f3e7838ad8daf085ef0fa7172f509d7d177ce25.scope: Deactivated successfully.
Dec 06 10:09:20 compute-0 podman[266455]: 2025-12-06 10:09:20.273454397 +0000 UTC m=+0.555372434 container died 692e832a55e2efe1330cc2549f3e7838ad8daf085ef0fa7172f509d7d177ce25 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_hofstadter, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1)
Dec 06 10:09:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-57e5cb8b54655ed78e5552bce28f8558596b166bcdb18ecaa9b1bc71e999b642-merged.mount: Deactivated successfully.
Dec 06 10:09:20 compute-0 podman[266455]: 2025-12-06 10:09:20.319594171 +0000 UTC m=+0.601512208 container remove 692e832a55e2efe1330cc2549f3e7838ad8daf085ef0fa7172f509d7d177ce25 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_hofstadter, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True)
Dec 06 10:09:20 compute-0 systemd[1]: libpod-conmon-692e832a55e2efe1330cc2549f3e7838ad8daf085ef0fa7172f509d7d177ce25.scope: Deactivated successfully.
Dec 06 10:09:20 compute-0 sudo[266346]: pam_unix(sudo:session): session closed for user root
Dec 06 10:09:20 compute-0 sudo[266492]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 10:09:20 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[265787]: 06/12/2025 10:09:20 : epoch 693400b2 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3e20003e40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:09:20 compute-0 sudo[266492]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:09:20 compute-0 sudo[266492]: pam_unix(sudo:session): session closed for user root
Dec 06 10:09:20 compute-0 sudo[266517]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 5ecd3f74-dade-5fc4-92ce-8950ae424258 -- raw list --format json
Dec 06 10:09:20 compute-0 sudo[266517]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:09:20 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[265787]: 06/12/2025 10:09:20 : epoch 693400b2 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3e14001970 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:09:20 compute-0 nova_compute[254819]: 2025-12-06 10:09:20.596 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:09:20 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v846: 337 pgs: 337 active+clean; 167 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 27 KiB/s rd, 2.0 MiB/s wr, 41 op/s
Dec 06 10:09:20 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:09:20] "GET /metrics HTTP/1.1" 200 48479 "" "Prometheus/2.51.0"
Dec 06 10:09:20 compute-0 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:09:20] "GET /metrics HTTP/1.1" 200 48479 "" "Prometheus/2.51.0"
Dec 06 10:09:20 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[265787]: 06/12/2025 10:09:20 : epoch 693400b2 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3e00002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:09:21 compute-0 podman[266584]: 2025-12-06 10:09:21.090079233 +0000 UTC m=+0.049061133 container create 7b128d1ede8b3e1cc107290b46b3765acd6f20035df3413e88fd83dfee052df4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_banzai, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 10:09:21 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:09:21 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:09:21 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:09:21.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:09:21 compute-0 systemd[1]: Started libpod-conmon-7b128d1ede8b3e1cc107290b46b3765acd6f20035df3413e88fd83dfee052df4.scope.
Dec 06 10:09:21 compute-0 podman[266584]: 2025-12-06 10:09:21.068714102 +0000 UTC m=+0.027696042 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 06 10:09:21 compute-0 systemd[1]: Started libcrun container.
Dec 06 10:09:21 compute-0 podman[266584]: 2025-12-06 10:09:21.194785111 +0000 UTC m=+0.153767071 container init 7b128d1ede8b3e1cc107290b46b3765acd6f20035df3413e88fd83dfee052df4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_banzai, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec 06 10:09:21 compute-0 podman[266584]: 2025-12-06 10:09:21.20220486 +0000 UTC m=+0.161186770 container start 7b128d1ede8b3e1cc107290b46b3765acd6f20035df3413e88fd83dfee052df4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_banzai, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 10:09:21 compute-0 podman[266584]: 2025-12-06 10:09:21.206089703 +0000 UTC m=+0.165071653 container attach 7b128d1ede8b3e1cc107290b46b3765acd6f20035df3413e88fd83dfee052df4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_banzai, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 06 10:09:21 compute-0 distracted_banzai[266601]: 167 167
Dec 06 10:09:21 compute-0 systemd[1]: libpod-7b128d1ede8b3e1cc107290b46b3765acd6f20035df3413e88fd83dfee052df4.scope: Deactivated successfully.
Dec 06 10:09:21 compute-0 podman[266584]: 2025-12-06 10:09:21.210237764 +0000 UTC m=+0.169219664 container died 7b128d1ede8b3e1cc107290b46b3765acd6f20035df3413e88fd83dfee052df4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_banzai, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 10:09:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-2629bcebe6ab5ae7a4096184b891d164b52b8863b31a2259ea4b879e097226cc-merged.mount: Deactivated successfully.
Dec 06 10:09:21 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:09:21 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:09:21 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:09:21.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:09:21 compute-0 podman[266584]: 2025-12-06 10:09:21.254783565 +0000 UTC m=+0.213765505 container remove 7b128d1ede8b3e1cc107290b46b3765acd6f20035df3413e88fd83dfee052df4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_banzai, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec 06 10:09:21 compute-0 systemd[1]: libpod-conmon-7b128d1ede8b3e1cc107290b46b3765acd6f20035df3413e88fd83dfee052df4.scope: Deactivated successfully.
Dec 06 10:09:21 compute-0 podman[266624]: 2025-12-06 10:09:21.483475806 +0000 UTC m=+0.064682179 container create 77613a172534c2612e1272f3376c47474102f962cb7dd159cb9a0b90a5290221 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_ptolemy, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 10:09:21 compute-0 systemd[1]: Started libpod-conmon-77613a172534c2612e1272f3376c47474102f962cb7dd159cb9a0b90a5290221.scope.
Dec 06 10:09:21 compute-0 podman[266624]: 2025-12-06 10:09:21.463138293 +0000 UTC m=+0.044344686 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 06 10:09:21 compute-0 systemd[1]: Started libcrun container.
Dec 06 10:09:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7deb981d59206cf91756416fcad763f204b1a7c8aa45cc6124f9f520051166b3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 10:09:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7deb981d59206cf91756416fcad763f204b1a7c8aa45cc6124f9f520051166b3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 10:09:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7deb981d59206cf91756416fcad763f204b1a7c8aa45cc6124f9f520051166b3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 10:09:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7deb981d59206cf91756416fcad763f204b1a7c8aa45cc6124f9f520051166b3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 10:09:21 compute-0 podman[266624]: 2025-12-06 10:09:21.581350702 +0000 UTC m=+0.162557175 container init 77613a172534c2612e1272f3376c47474102f962cb7dd159cb9a0b90a5290221 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_ptolemy, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 10:09:21 compute-0 podman[266624]: 2025-12-06 10:09:21.591926095 +0000 UTC m=+0.173132478 container start 77613a172534c2612e1272f3376c47474102f962cb7dd159cb9a0b90a5290221 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_ptolemy, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 06 10:09:21 compute-0 podman[266624]: 2025-12-06 10:09:21.59587536 +0000 UTC m=+0.177081733 container attach 77613a172534c2612e1272f3376c47474102f962cb7dd159cb9a0b90a5290221 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_ptolemy, ceph=True, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 10:09:21 compute-0 ceph-mon[74327]: pgmap v846: 337 pgs: 337 active+clean; 167 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 27 KiB/s rd, 2.0 MiB/s wr, 41 op/s
Dec 06 10:09:22 compute-0 lvm[266716]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 06 10:09:22 compute-0 lvm[266716]: VG ceph_vg0 finished
Dec 06 10:09:22 compute-0 nova_compute[254819]: 2025-12-06 10:09:22.389 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:09:22 compute-0 elegant_ptolemy[266641]: {}
Dec 06 10:09:22 compute-0 systemd[1]: libpod-77613a172534c2612e1272f3376c47474102f962cb7dd159cb9a0b90a5290221.scope: Deactivated successfully.
Dec 06 10:09:22 compute-0 systemd[1]: libpod-77613a172534c2612e1272f3376c47474102f962cb7dd159cb9a0b90a5290221.scope: Consumed 1.325s CPU time.
Dec 06 10:09:22 compute-0 podman[266624]: 2025-12-06 10:09:22.431093643 +0000 UTC m=+1.012300046 container died 77613a172534c2612e1272f3376c47474102f962cb7dd159cb9a0b90a5290221 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_ptolemy, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 10:09:22 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[265787]: 06/12/2025 10:09:22 : epoch 693400b2 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3e18003380 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:09:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-7deb981d59206cf91756416fcad763f204b1a7c8aa45cc6124f9f520051166b3-merged.mount: Deactivated successfully.
Dec 06 10:09:22 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[265787]: 06/12/2025 10:09:22 : epoch 693400b2 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3e20003e40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:09:22 compute-0 podman[266624]: 2025-12-06 10:09:22.620599328 +0000 UTC m=+1.201805701 container remove 77613a172534c2612e1272f3376c47474102f962cb7dd159cb9a0b90a5290221 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_ptolemy, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1)
Dec 06 10:09:22 compute-0 systemd[1]: libpod-conmon-77613a172534c2612e1272f3376c47474102f962cb7dd159cb9a0b90a5290221.scope: Deactivated successfully.
Dec 06 10:09:22 compute-0 sudo[266517]: pam_unix(sudo:session): session closed for user root
Dec 06 10:09:22 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 06 10:09:22 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 10:09:22 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 06 10:09:22 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 10:09:22 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v847: 337 pgs: 337 active+clean; 167 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 27 KiB/s rd, 2.0 MiB/s wr, 41 op/s
Dec 06 10:09:22 compute-0 sudo[266734]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 10:09:22 compute-0 sudo[266734]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:09:22 compute-0 sudo[266734]: pam_unix(sudo:session): session closed for user root
Dec 06 10:09:22 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[265787]: 06/12/2025 10:09:22 : epoch 693400b2 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3e14001970 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:09:23 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:09:23 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 10:09:23 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:09:23.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 10:09:23 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:09:23 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 10:09:23 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:09:23.240 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 10:09:23 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 10:09:23 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 10:09:23 compute-0 ceph-mon[74327]: pgmap v847: 337 pgs: 337 active+clean; 167 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 27 KiB/s rd, 2.0 MiB/s wr, 41 op/s
Dec 06 10:09:23 compute-0 ceph-mgr[74618]: [balancer INFO root] Optimize plan auto_2025-12-06_10:09:23
Dec 06 10:09:23 compute-0 ceph-mgr[74618]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 06 10:09:23 compute-0 ceph-mgr[74618]: [balancer INFO root] do_upmap
Dec 06 10:09:23 compute-0 ceph-mgr[74618]: [balancer INFO root] pools ['default.rgw.meta', '.nfs', 'volumes', 'images', 'cephfs.cephfs.data', '.rgw.root', 'vms', 'cephfs.cephfs.meta', 'default.rgw.log', 'backups', '.mgr', 'default.rgw.control']
Dec 06 10:09:23 compute-0 ceph-mgr[74618]: [balancer INFO root] prepared 0/10 upmap changes
Dec 06 10:09:23 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 06 10:09:23 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 10:09:23 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 10:09:23 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 10:09:24 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 10:09:24 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 10:09:24 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 10:09:24 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 10:09:24 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:09:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] _maybe_adjust
Dec 06 10:09:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:09:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 06 10:09:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:09:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0011083224466041544 of space, bias 1.0, pg target 0.3324967339812463 quantized to 32 (current 32)
Dec 06 10:09:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:09:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 10:09:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:09:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 10:09:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:09:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Dec 06 10:09:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:09:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Dec 06 10:09:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:09:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 10:09:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:09:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec 06 10:09:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:09:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 06 10:09:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:09:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec 06 10:09:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:09:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 10:09:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:09:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 06 10:09:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 06 10:09:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 10:09:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 10:09:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 10:09:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 10:09:24 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[265787]: 06/12/2025 10:09:24 : epoch 693400b2 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3e00003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:09:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 06 10:09:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 10:09:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 10:09:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 10:09:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 10:09:24 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[265787]: 06/12/2025 10:09:24 : epoch 693400b2 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3e18003380 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:09:24 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 10:09:24 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:09:24.802 162267 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=d39b5be8-d4cf-41c7-9a64-1ee03801f4e1, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '7'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 10:09:24 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v848: 337 pgs: 337 active+clean; 167 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 14 KiB/s wr, 81 op/s
Dec 06 10:09:24 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[265787]: 06/12/2025 10:09:24 : epoch 693400b2 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3e20003e40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:09:25 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:09:25 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:09:25 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:09:25.101 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:09:25 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:09:25 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 10:09:25 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:09:25.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 10:09:25 compute-0 nova_compute[254819]: 2025-12-06 10:09:25.598 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:09:25 compute-0 ceph-mon[74327]: pgmap v848: 337 pgs: 337 active+clean; 167 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 14 KiB/s wr, 81 op/s
Dec 06 10:09:26 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[265787]: 06/12/2025 10:09:26 : epoch 693400b2 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3e14001970 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:09:26 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[265787]: 06/12/2025 10:09:26 : epoch 693400b2 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3e00003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:09:26 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v849: 337 pgs: 337 active+clean; 167 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 14 KiB/s wr, 81 op/s
Dec 06 10:09:26 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[265787]: 06/12/2025 10:09:26 : epoch 693400b2 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3e18003380 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:09:27 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:09:27 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:09:27 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:09:27.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:09:27 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:09:27 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:09:27 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:09:27.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:09:27 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:09:27.286Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec 06 10:09:27 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:09:27.286Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 06 10:09:27 compute-0 nova_compute[254819]: 2025-12-06 10:09:27.393 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:09:27 compute-0 ceph-mon[74327]: pgmap v849: 337 pgs: 337 active+clean; 167 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 14 KiB/s wr, 81 op/s
Dec 06 10:09:28 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[265787]: 06/12/2025 10:09:28 : epoch 693400b2 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3e20003e40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:09:28 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[265787]: 06/12/2025 10:09:28 : epoch 693400b2 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3e14001970 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:09:28 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v850: 337 pgs: 337 active+clean; 167 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 74 op/s
Dec 06 10:09:28 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:09:28.865Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec 06 10:09:28 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:09:28.866Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec 06 10:09:28 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:09:28.866Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec 06 10:09:28 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[265787]: 06/12/2025 10:09:28 : epoch 693400b2 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3e00003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:09:29 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:09:29 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:09:29 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:09:29 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:09:29.104 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:09:29 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:09:29 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:09:29 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:09:29.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:09:29 compute-0 ceph-mon[74327]: pgmap v850: 337 pgs: 337 active+clean; 167 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 74 op/s
Dec 06 10:09:30 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[265787]: 06/12/2025 10:09:30 : epoch 693400b2 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3e18003380 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:09:30 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[265787]: 06/12/2025 10:09:30 : epoch 693400b2 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3e20003e40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:09:30 compute-0 nova_compute[254819]: 2025-12-06 10:09:30.600 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:09:30 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v851: 337 pgs: 337 active+clean; 167 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.7 KiB/s wr, 64 op/s
Dec 06 10:09:30 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:09:30] "GET /metrics HTTP/1.1" 200 48473 "" "Prometheus/2.51.0"
Dec 06 10:09:30 compute-0 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:09:30] "GET /metrics HTTP/1.1" 200 48473 "" "Prometheus/2.51.0"
Dec 06 10:09:30 compute-0 ceph-mon[74327]: pgmap v851: 337 pgs: 337 active+clean; 167 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.7 KiB/s wr, 64 op/s
Dec 06 10:09:30 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[265787]: 06/12/2025 10:09:30 : epoch 693400b2 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3e14003a30 fd 38 proxy ignored for local
Dec 06 10:09:30 compute-0 kernel: ganesha.nfsd[265892]: segfault at 50 ip 00007f3ed46ea32e sp 00007f3ea67fb210 error 4 in libntirpc.so.5.8[7f3ed46cf000+2c000] likely on CPU 1 (core 0, socket 1)
Dec 06 10:09:30 compute-0 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
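
This kernel line is the root cause of everything that follows in this window: a ganesha.nfsd thread faulted reading address 0x50 inside libntirpc.so.5.8's executable mapping, with page-fault error code 4. Error code 4 decodes to a user-mode read of a not-present page, and the faulting bytes <45> 8b 65 50 disassemble to mov r12d,[r13+0x50], consistent with reading a field at offset 0x50 off a NULL struct pointer. The error-code bits can be decoded mechanically (bit layout from the x86 architecture; the helper below is a sketch):

    # x86 page-fault error-code bits (see arch/x86/include/asm/trap_pf.h)
    FAULT_BITS = [(1,  "protection violation (else: page not present)"),
                  (2,  "write access (else: read)"),
                  (4,  "user mode"),
                  (8,  "reserved-bit violation"),
                  (16, "instruction fetch")]

    def decode(err: int) -> list[str]:
        return [desc for bit, desc in FAULT_BITS if err & bit]

    print(decode(4))   # ['user mode'] -> user-mode *read* of a missing page
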
Dec 06 10:09:31 compute-0 systemd[1]: Started Process Core Dump (PID 266767/UID 0).
Dec 06 10:09:31 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:09:31 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:09:31 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:09:31.106 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:09:31 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:09:31 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:09:31 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:09:31.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:09:32 compute-0 nova_compute[254819]: 2025-12-06 10:09:32.397 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:09:32 compute-0 sudo[266771]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 10:09:32 compute-0 sudo[266771]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:09:32 compute-0 sudo[266771]: pam_unix(sudo:session): session closed for user root
Dec 06 10:09:32 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v852: 337 pgs: 337 active+clean; 167 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.7 KiB/s wr, 64 op/s
Dec 06 10:09:33 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:09:33 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 10:09:33 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:09:33.108 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 10:09:33 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:09:33 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:09:33 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:09:33.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:09:33 compute-0 podman[266797]: 2025-12-06 10:09:33.455503875 +0000 UTC m=+0.081096569 container health_status a9cf33c1bf7891b3fdd9db4717ad5f3587fe9e5acf9171920c6d6334b70a502a (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_id=multipathd)
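
The long podman lines like the one above are healthcheck transactions: the configured test ('/openstack/healthcheck', bind-mounted from /var/lib/openstack/healthchecks/multipathd) ran inside the container and reported health_status=healthy with a failing streak of 0. The same check can be invoked on demand; a minimal sketch via the podman CLI, with the container name taken from the log:

    import subprocess

    # 'podman healthcheck run' executes the container's configured check and
    # exits 0 when healthy, non-zero otherwise.
    result = subprocess.run(["podman", "healthcheck", "run", "multipathd"],
                            capture_output=True, text=True)
    print("healthy" if result.returncode == 0
          else f"unhealthy: {result.stdout or result.stderr}")
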
Dec 06 10:09:34 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:09:34 compute-0 systemd-coredump[266768]: Process 265791 (ganesha.nfsd) of user 0 dumped core.
                                                    
                                                    Stack trace of thread 44:
                                                    #0  0x00007f3ed46ea32e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)
                                                    ELF object binary architecture: AMD x86-64
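
systemd-coredump caught the crash, but the stack trace resolves to a single "n/a" frame because debuginfo for libntirpc is not installed. The core can be pulled back out of the journal for offline analysis; a sketch using coredumpctl (present on RHEL 9), keyed on the dumped PID from the record above:

    import subprocess

    PID = "265791"   # 'Process 265791 (ganesha.nfsd) ... dumped core'

    # Show metadata for this crash: signal, executable, storage location.
    subprocess.run(["coredumpctl", "info", PID], check=True)

    # Export the core itself, e.g. to feed gdb once debuginfo is installed:
    #   gdb /usr/bin/ganesha.nfsd ganesha.core
    subprocess.run(["coredumpctl", "dump", PID, "-o", "ganesha.core"],
                   check=True)
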
Dec 06 10:09:34 compute-0 ceph-mon[74327]: pgmap v852: 337 pgs: 337 active+clean; 167 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.7 KiB/s wr, 64 op/s
Dec 06 10:09:34 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v853: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 128 op/s
Dec 06 10:09:34 compute-0 systemd[1]: systemd-coredump@10-266767-0.service: Deactivated successfully.
Dec 06 10:09:34 compute-0 systemd[1]: systemd-coredump@10-266767-0.service: Consumed 1.292s CPU time.
Dec 06 10:09:34 compute-0 podman[266823]: 2025-12-06 10:09:34.903554506 +0000 UTC m=+0.030575608 container died af69e9a47df8ecde800ecab5adbfc1ec516b668507faf977fed781c1bc7fd62d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 10:09:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-042dbc87b14c7e62417a7a4804c45e91c691e54e6f21f825478d88a0b2bd6aee-merged.mount: Deactivated successfully.
Dec 06 10:09:35 compute-0 podman[266823]: 2025-12-06 10:09:35.065626708 +0000 UTC m=+0.192647770 container remove af69e9a47df8ecde800ecab5adbfc1ec516b668507faf977fed781c1bc7fd62d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default)
Dec 06 10:09:35 compute-0 systemd[1]: ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258@nfs.cephfs.2.0.compute-0.dfwxck.service: Main process exited, code=exited, status=139/n/a
Dec 06 10:09:35 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:09:35 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.002000053s ======
Dec 06 10:09:35 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:09:35.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000053s
Dec 06 10:09:35 compute-0 systemd[1]: ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258@nfs.cephfs.2.0.compute-0.dfwxck.service: Failed with result 'exit-code'.
Dec 06 10:09:35 compute-0 systemd[1]: ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258@nfs.cephfs.2.0.compute-0.dfwxck.service: Consumed 1.496s CPU time.
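
The unit's main process exits with status=139, the shell-style encoding 128 + signal number propagated by the container runtime: 139 - 128 = 11 = SIGSEGV, matching the kernel segfault above. The decoding in two lines:

    import signal

    status = 139
    sig = status - 128                    # exit codes >128 mean "killed by signal"
    print(sig, signal.Signals(sig).name)  # 11 SIGSEGV
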
Dec 06 10:09:35 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:09:35 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 10:09:35 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:09:35.263 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 10:09:35 compute-0 nova_compute[254819]: 2025-12-06 10:09:35.643 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:09:36 compute-0 ceph-mon[74327]: pgmap v853: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 128 op/s
Dec 06 10:09:36 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v854: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Dec 06 10:09:37 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:09:37 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:09:37 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:09:37.115 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:09:37 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:09:37 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:09:37 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:09:37.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:09:37 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:09:37.287Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec 06 10:09:37 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:09:37.288Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 06 10:09:37 compute-0 ceph-mon[74327]: pgmap v854: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Dec 06 10:09:37 compute-0 nova_compute[254819]: 2025-12-06 10:09:37.401 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:09:37 compute-0 ceph-osd[82803]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 06 10:09:37 compute-0 ceph-osd[82803]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1800.1 total, 600.0 interval
                                           Cumulative writes: 11K writes, 41K keys, 11K commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.02 MB/s
                                           Cumulative WAL: 11K writes, 2905 syncs, 3.80 writes per sync, written: 0.03 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1859 writes, 5432 keys, 1859 commit groups, 1.0 writes per commit group, ingest: 5.24 MB, 0.01 MB/s
                                           Interval WAL: 1859 writes, 801 syncs, 2.32 writes per sync, written: 0.01 GB, 0.01 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
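
The OSD's RocksDB stats dump is routine bookkeeping, unrelated to the ganesha crash. Its "writes per sync" figures are derived values and can be checked against the absolute counters in the same block:

    cumulative = {"writes": 11_000, "syncs": 2_905}   # "11K writes, 2905 syncs"
    interval   = {"writes": 1_859,  "syncs": 801}

    for name, d in (("cumulative", cumulative), ("interval", interval)):
        print(f"{name}: {d['writes'] / d['syncs']:.2f} writes per sync")
    # cumulative: 3.79  (log shows 3.80 -- "11K" is a rounded counter)
    # interval:   2.32  (matches the log)
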
Dec 06 10:09:38 compute-0 podman[266870]: 2025-12-06 10:09:38.499715659 +0000 UTC m=+0.125453814 container health_status ed08b04f63d3695f0aa2f04fcc706897af4aa24f286fa3cd10a4323ee2c868ab (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec 06 10:09:38 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v855: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Dec 06 10:09:38 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:09:38.867Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 06 10:09:38 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 06 10:09:38 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 10:09:38 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-haproxy-nfs-cephfs-compute-0-fzuvue[96127]: [WARNING] 339/100938 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
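
Eight seconds after the segfault, the ingress haproxy marks backend nfs.cephfs.2 DOWN at layer 4 ("Connection refused"): the crashed ganesha no longer listens, while 2 active servers remain to carry client traffic. An equivalent layer-4 probe is sketched below; note the backend port is defined in the generated haproxy config and does not appear in this log, so 2049 (the NFS default) is an assumption:

    import socket

    def l4_check(host: str, port: int, timeout: float = 2.0) -> str:
        """TCP connect check, the same signal haproxy's Layer4 check uses."""
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return "UP"
        except ConnectionRefusedError:
            return "DOWN (Connection refused)"   # as in the haproxy warning
        except OSError as e:
            return f"DOWN ({e})"

    print(l4_check("compute-0.ctlplane.example.com", 2049))
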
Dec 06 10:09:39 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:09:39 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:09:39 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:09:39.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:09:39 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:09:39 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:09:39 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:09:39.269 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:09:39 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:09:39 compute-0 ceph-mon[74327]: pgmap v855: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Dec 06 10:09:39 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 10:09:40 compute-0 nova_compute[254819]: 2025-12-06 10:09:40.692 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:09:40 compute-0 nova_compute[254819]: 2025-12-06 10:09:40.748 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:09:40 compute-0 nova_compute[254819]: 2025-12-06 10:09:40.749 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:09:40 compute-0 nova_compute[254819]: 2025-12-06 10:09:40.776 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:09:40 compute-0 nova_compute[254819]: 2025-12-06 10:09:40.777 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:09:40 compute-0 nova_compute[254819]: 2025-12-06 10:09:40.777 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
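
The resource tracker serializes its periodic audit behind the "compute_resources" lock and logs both how long it waited to acquire and how long it held the lock. A stripped-down stdlib mimic of that acquire/measure/release pattern (illustrative only, not oslo.concurrency's implementation; timed_lock is a made-up helper name):

    import threading, time
    from contextlib import contextmanager

    _locks = {"compute_resources": threading.Lock()}

    @contextmanager
    def timed_lock(name: str):
        lock, t0 = _locks[name], time.monotonic()
        lock.acquire()
        print(f'Lock "{name}" acquired :: waited {time.monotonic() - t0:.3f}s')
        t1 = time.monotonic()
        try:
            yield
        finally:
            lock.release()
            print(f'Lock "{name}" released :: held {time.monotonic() - t1:.3f}s')

    with timed_lock("compute_resources"):
        pass   # audit work goes here
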
Dec 06 10:09:40 compute-0 nova_compute[254819]: 2025-12-06 10:09:40.777 254824 DEBUG nova.compute.resource_tracker [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 10:09:40 compute-0 nova_compute[254819]: 2025-12-06 10:09:40.778 254824 DEBUG oslo_concurrency.processutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 10:09:40 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v856: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Dec 06 10:09:40 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:09:40] "GET /metrics HTTP/1.1" 200 48483 "" "Prometheus/2.51.0"
Dec 06 10:09:40 compute-0 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:09:40] "GET /metrics HTTP/1.1" 200 48483 "" "Prometheus/2.51.0"
Dec 06 10:09:41 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:09:41 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:09:41 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:09:41.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:09:41 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:09:41 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 10:09:41 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:09:41.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 10:09:41 compute-0 nova_compute[254819]: 2025-12-06 10:09:41.580 254824 DEBUG oslo_concurrency.processutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.802s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
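
Each audit run shells out to `ceph df --format=json --id openstack` to learn pool capacity, which is what the mon's audit channel records below as dispatched "df" commands from client.openstack. A sketch of consuming that output; the command line is copied from the log, while the JSON key names (stats, total_bytes, total_avail_bytes) are assumed from current Ceph releases:

    import json, subprocess

    out = subprocess.run(
        ["ceph", "df", "--format=json", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"],
        capture_output=True, text=True, check=True).stdout

    stats = json.loads(out)["stats"]
    GiB = 1024 ** 3
    # Key names assumed from current ceph json output.
    print(f"avail: {stats['total_avail_bytes'] / GiB:.1f} GiB "
          f"of {stats['total_bytes'] / GiB:.1f} GiB")
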
Dec 06 10:09:41 compute-0 ceph-mon[74327]: pgmap v856: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Dec 06 10:09:41 compute-0 nova_compute[254819]: 2025-12-06 10:09:41.805 254824 DEBUG nova.virt.libvirt.driver [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] skipping disk for instance-00000004 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 10:09:41 compute-0 nova_compute[254819]: 2025-12-06 10:09:41.805 254824 DEBUG nova.virt.libvirt.driver [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] skipping disk for instance-00000004 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 10:09:42 compute-0 nova_compute[254819]: 2025-12-06 10:09:42.009 254824 WARNING nova.virt.libvirt.driver [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 10:09:42 compute-0 nova_compute[254819]: 2025-12-06 10:09:42.010 254824 DEBUG nova.compute.resource_tracker [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4410MB free_disk=59.89716339111328GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 06 10:09:42 compute-0 nova_compute[254819]: 2025-12-06 10:09:42.011 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:09:42 compute-0 nova_compute[254819]: 2025-12-06 10:09:42.011 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:09:42 compute-0 nova_compute[254819]: 2025-12-06 10:09:42.090 254824 DEBUG nova.compute.resource_tracker [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Instance 112440c2-8dcc-4a19-9d83-5489df97079a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 06 10:09:42 compute-0 nova_compute[254819]: 2025-12-06 10:09:42.091 254824 DEBUG nova.compute.resource_tracker [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 10:09:42 compute-0 nova_compute[254819]: 2025-12-06 10:09:42.091 254824 DEBUG nova.compute.resource_tracker [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 06 10:09:42 compute-0 nova_compute[254819]: 2025-12-06 10:09:42.130 254824 DEBUG oslo_concurrency.processutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 10:09:42 compute-0 nova_compute[254819]: 2025-12-06 10:09:42.404 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:09:42 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 06 10:09:42 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/979909076' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 10:09:42 compute-0 nova_compute[254819]: 2025-12-06 10:09:42.635 254824 DEBUG oslo_concurrency.processutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.505s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 10:09:42 compute-0 nova_compute[254819]: 2025-12-06 10:09:42.642 254824 DEBUG nova.compute.provider_tree [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Inventory has not changed in ProviderTree for provider: 06a9c7d1-c74c-47ea-9e97-16acfab6aa88 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 10:09:42 compute-0 nova_compute[254819]: 2025-12-06 10:09:42.677 254824 DEBUG nova.scheduler.client.report [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Inventory has not changed for provider 06a9c7d1-c74c-47ea-9e97-16acfab6aa88 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
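
The inventory record shows how placement capacity is derived: schedulable capacity per resource class is (total - reserved) × allocation_ratio, which is why this host's 8 physical vCPUs advertise as 32 schedulable ones. Reproducing the arithmetic from the values in the log:

    inventory = {
        "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
    }

    for rc, inv in inventory.items():
        cap = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(f"{rc}: {cap:g} schedulable")
    # MEMORY_MB: 7168, VCPU: 32, DISK_GB: 52.2
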
Dec 06 10:09:42 compute-0 nova_compute[254819]: 2025-12-06 10:09:42.682 254824 DEBUG nova.compute.resource_tracker [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 10:09:42 compute-0 nova_compute[254819]: 2025-12-06 10:09:42.683 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.672s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:09:42 compute-0 ceph-mon[74327]: from='client.? 192.168.122.100:0/4286393203' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 10:09:42 compute-0 ceph-mon[74327]: from='client.? 192.168.122.100:0/979909076' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 10:09:42 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v857: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Dec 06 10:09:43 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:09:43 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:09:43 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:09:43.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:09:43 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:09:43 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:09:43 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:09:43.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:09:43 compute-0 podman[266946]: 2025-12-06 10:09:43.454579614 +0000 UTC m=+0.087645923 container health_status ec1e66d2175087ad7a28a9a6b5a784e30128df3f5e268959d009e364cd5120f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Dec 06 10:09:44 compute-0 ceph-mon[74327]: pgmap v857: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Dec 06 10:09:44 compute-0 ceph-mon[74327]: from='client.? 192.168.122.102:0/3729956179' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 10:09:44 compute-0 nova_compute[254819]: 2025-12-06 10:09:44.684 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:09:44 compute-0 nova_compute[254819]: 2025-12-06 10:09:44.684 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:09:44 compute-0 nova_compute[254819]: 2025-12-06 10:09:44.685 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:09:44 compute-0 nova_compute[254819]: 2025-12-06 10:09:44.685 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:09:44 compute-0 nova_compute[254819]: 2025-12-06 10:09:44.749 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:09:44 compute-0 nova_compute[254819]: 2025-12-06 10:09:44.749 254824 DEBUG nova.compute.manager [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 06 10:09:44 compute-0 nova_compute[254819]: 2025-12-06 10:09:44.750 254824 DEBUG nova.compute.manager [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 06 10:09:44 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:09:44 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v858: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 346 KiB/s rd, 2.1 MiB/s wr, 92 op/s
Dec 06 10:09:44 compute-0 nova_compute[254819]: 2025-12-06 10:09:44.958 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Acquiring lock "refresh_cache-112440c2-8dcc-4a19-9d83-5489df97079a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 10:09:44 compute-0 nova_compute[254819]: 2025-12-06 10:09:44.959 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Acquired lock "refresh_cache-112440c2-8dcc-4a19-9d83-5489df97079a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 10:09:44 compute-0 nova_compute[254819]: 2025-12-06 10:09:44.959 254824 DEBUG nova.network.neutron [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] [instance: 112440c2-8dcc-4a19-9d83-5489df97079a] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec 06 10:09:44 compute-0 nova_compute[254819]: 2025-12-06 10:09:44.960 254824 DEBUG nova.objects.instance [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Lazy-loading 'info_cache' on Instance uuid 112440c2-8dcc-4a19-9d83-5489df97079a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 10:09:45 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:09:45 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:09:45 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:09:45.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:09:45 compute-0 ceph-mon[74327]: from='client.? 192.168.122.101:0/3709469109' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 10:09:45 compute-0 ceph-mon[74327]: from='client.? 192.168.122.101:0/3647347951' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 10:09:45 compute-0 ceph-mon[74327]: pgmap v858: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 346 KiB/s rd, 2.1 MiB/s wr, 92 op/s
Dec 06 10:09:45 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:09:45 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:09:45 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:09:45.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:09:45 compute-0 systemd[1]: ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258@nfs.cephfs.2.0.compute-0.dfwxck.service: Scheduled restart job, restart counter is at 11.
Dec 06 10:09:45 compute-0 systemd[1]: Stopped Ceph nfs.cephfs.2.0.compute-0.dfwxck for 5ecd3f74-dade-5fc4-92ce-8950ae424258.
Dec 06 10:09:45 compute-0 systemd[1]: ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258@nfs.cephfs.2.0.compute-0.dfwxck.service: Consumed 1.496s CPU time.
Dec 06 10:09:45 compute-0 systemd[1]: Starting Ceph nfs.cephfs.2.0.compute-0.dfwxck for 5ecd3f74-dade-5fc4-92ce-8950ae424258...
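
systemd's restart policy brings ganesha straight back, and the counter gives the scope of the problem: this is scheduled restart number 11, i.e. the unit has been crash-looping, not failing once. Counting those restarts straight from the journal is a one-liner over journalctl; the unit name is taken from the log:

    import subprocess

    UNIT = ("ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258"
            "@nfs.cephfs.2.0.compute-0.dfwxck.service")

    log = subprocess.run(["journalctl", "-u", UNIT, "--no-pager"],
                         capture_output=True, text=True).stdout
    print(log.count("Scheduled restart job"), "scheduled restarts of", UNIT)
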
Dec 06 10:09:45 compute-0 nova_compute[254819]: 2025-12-06 10:09:45.620 254824 DEBUG oslo_concurrency.lockutils [None req-4d612999-ad2d-46a2-bbb2-018c60ac15c8 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Acquiring lock "112440c2-8dcc-4a19-9d83-5489df97079a" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:09:45 compute-0 nova_compute[254819]: 2025-12-06 10:09:45.621 254824 DEBUG oslo_concurrency.lockutils [None req-4d612999-ad2d-46a2-bbb2-018c60ac15c8 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lock "112440c2-8dcc-4a19-9d83-5489df97079a" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:09:45 compute-0 nova_compute[254819]: 2025-12-06 10:09:45.621 254824 DEBUG oslo_concurrency.lockutils [None req-4d612999-ad2d-46a2-bbb2-018c60ac15c8 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Acquiring lock "112440c2-8dcc-4a19-9d83-5489df97079a-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:09:45 compute-0 nova_compute[254819]: 2025-12-06 10:09:45.621 254824 DEBUG oslo_concurrency.lockutils [None req-4d612999-ad2d-46a2-bbb2-018c60ac15c8 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lock "112440c2-8dcc-4a19-9d83-5489df97079a-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:09:45 compute-0 nova_compute[254819]: 2025-12-06 10:09:45.622 254824 DEBUG oslo_concurrency.lockutils [None req-4d612999-ad2d-46a2-bbb2-018c60ac15c8 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lock "112440c2-8dcc-4a19-9d83-5489df97079a-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:09:45 compute-0 nova_compute[254819]: 2025-12-06 10:09:45.624 254824 INFO nova.compute.manager [None req-4d612999-ad2d-46a2-bbb2-018c60ac15c8 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 112440c2-8dcc-4a19-9d83-5489df97079a] Terminating instance
Dec 06 10:09:45 compute-0 nova_compute[254819]: 2025-12-06 10:09:45.625 254824 DEBUG nova.compute.manager [None req-4d612999-ad2d-46a2-bbb2-018c60ac15c8 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 112440c2-8dcc-4a19-9d83-5489df97079a] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 06 10:09:45 compute-0 podman[267021]: 2025-12-06 10:09:45.680899575 +0000 UTC m=+0.071234224 container create c075298cf4218136c3d2292ce2beb5212b60757ab32882219e2a8e8be2cdcf16 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Dec 06 10:09:45 compute-0 kernel: tap2d0118f7-94 (unregistering): left promiscuous mode
Dec 06 10:09:45 compute-0 NetworkManager[48882]: <info>  [1765015785.6914] device (tap2d0118f7-94): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 06 10:09:45 compute-0 ovn_controller[152417]: 2025-12-06T10:09:45Z|00062|binding|INFO|Releasing lport 2d0118f7-94f6-43f6-a67f-28e0faf9c3ae from this chassis (sb_readonly=0)
Dec 06 10:09:45 compute-0 ovn_controller[152417]: 2025-12-06T10:09:45Z|00063|binding|INFO|Setting lport 2d0118f7-94f6-43f6-a67f-28e0faf9c3ae down in Southbound
Dec 06 10:09:45 compute-0 ovn_controller[152417]: 2025-12-06T10:09:45Z|00064|binding|INFO|Removing iface tap2d0118f7-94 ovn-installed in OVS
Dec 06 10:09:45 compute-0 nova_compute[254819]: 2025-12-06 10:09:45.736 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:09:45 compute-0 nova_compute[254819]: 2025-12-06 10:09:45.739 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:09:45 compute-0 podman[267021]: 2025-12-06 10:09:45.657131251 +0000 UTC m=+0.047465980 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 06 10:09:45 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:09:45.750 162267 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b4:37:0e 10.100.0.5'], port_security=['fa:16:3e:b4:37:0e 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '112440c2-8dcc-4a19-9d83-5489df97079a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-dccd9941-4f3e-4086-b9cd-651d8e99e8ec', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '92b402c8d3e2476abc98be42a1e6d34e', 'neutron:revision_number': '4', 'neutron:security_group_ids': '3027a471-10b5-4a61-b09a-0f0e6072fde1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=611cd505-2a02-4d45-a906-bd97d1447953, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f70c28558b0>], logical_port=2d0118f7-94f6-43f6-a67f-28e0faf9c3ae) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f70c28558b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 10:09:45 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:09:45.751 162267 INFO neutron.agent.ovn.metadata.agent [-] Port 2d0118f7-94f6-43f6-a67f-28e0faf9c3ae in datapath dccd9941-4f3e-4086-b9cd-651d8e99e8ec unbound from our chassis
Dec 06 10:09:45 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:09:45.753 162267 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network dccd9941-4f3e-4086-b9cd-651d8e99e8ec, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 06 10:09:45 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:09:45.754 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[fee4a75d-615e-465b-ab9b-aebdfe48c8d0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:09:45 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:09:45.755 162267 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-dccd9941-4f3e-4086-b9cd-651d8e99e8ec namespace which is not needed anymore
Dec 06 10:09:45 compute-0 nova_compute[254819]: 2025-12-06 10:09:45.766 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:09:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1734ccd679f2dc6c6c68ccfec5ec524b9e349d18b823990645a69f0aafaa48d8/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Dec 06 10:09:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1734ccd679f2dc6c6c68ccfec5ec524b9e349d18b823990645a69f0aafaa48d8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 10:09:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1734ccd679f2dc6c6c68ccfec5ec524b9e349d18b823990645a69f0aafaa48d8/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Dec 06 10:09:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1734ccd679f2dc6c6c68ccfec5ec524b9e349d18b823990645a69f0aafaa48d8/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.2.0.compute-0.dfwxck-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Dec 06 10:09:45 compute-0 systemd[1]: machine-qemu\x2d3\x2dinstance\x2d00000004.scope: Deactivated successfully.
Dec 06 10:09:45 compute-0 podman[267021]: 2025-12-06 10:09:45.802245688 +0000 UTC m=+0.192580387 container init c075298cf4218136c3d2292ce2beb5212b60757ab32882219e2a8e8be2cdcf16 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1)
Dec 06 10:09:45 compute-0 systemd[1]: machine-qemu\x2d3\x2dinstance\x2d00000004.scope: Consumed 15.777s CPU time.
Dec 06 10:09:45 compute-0 systemd-machined[216202]: Machine qemu-3-instance-00000004 terminated.
Dec 06 10:09:45 compute-0 podman[267021]: 2025-12-06 10:09:45.809884253 +0000 UTC m=+0.200218912 container start c075298cf4218136c3d2292ce2beb5212b60757ab32882219e2a8e8be2cdcf16 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 10:09:45 compute-0 bash[267021]: c075298cf4218136c3d2292ce2beb5212b60757ab32882219e2a8e8be2cdcf16
Dec 06 10:09:45 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:09:45 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Dec 06 10:09:45 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:09:45 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Dec 06 10:09:45 compute-0 systemd[1]: Started Ceph nfs.cephfs.2.0.compute-0.dfwxck for 5ecd3f74-dade-5fc4-92ce-8950ae424258.
Dec 06 10:09:45 compute-0 nova_compute[254819]: 2025-12-06 10:09:45.850 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:09:45 compute-0 nova_compute[254819]: 2025-12-06 10:09:45.856 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:09:45 compute-0 nova_compute[254819]: 2025-12-06 10:09:45.866 254824 INFO nova.virt.libvirt.driver [-] [instance: 112440c2-8dcc-4a19-9d83-5489df97079a] Instance destroyed successfully.
Dec 06 10:09:45 compute-0 nova_compute[254819]: 2025-12-06 10:09:45.867 254824 DEBUG nova.objects.instance [None req-4d612999-ad2d-46a2-bbb2-018c60ac15c8 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lazy-loading 'resources' on Instance uuid 112440c2-8dcc-4a19-9d83-5489df97079a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 10:09:45 compute-0 nova_compute[254819]: 2025-12-06 10:09:45.880 254824 DEBUG nova.virt.libvirt.vif [None req-4d612999-ad2d-46a2-bbb2-018c60ac15c8 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T10:08:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-609462386',display_name='tempest-TestNetworkBasicOps-server-609462386',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-609462386',id=4,image_ref='9489b8a5-a798-4e26-87f9-59bb1eb2e6fd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEB71wqy4Vx0ThrIuit7bIMfXK6YLKUBZN1lhipBZkl9t8qtDE6kg/NsSamOzTH/a+zjpG46+Awuo3QHJ780QH0C6lo/2uOHg18NVMuqh+pfDOXzTKYCxhRCIxLSg0ck4w==',key_name='tempest-TestNetworkBasicOps-1991615071',keypairs=<?>,launch_index=0,launched_at=2025-12-06T10:08:41Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='92b402c8d3e2476abc98be42a1e6d34e',ramdisk_id='',reservation_id='r-ykqs2wqw',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='9489b8a5-a798-4e26-87f9-59bb1eb2e6fd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1971100882',owner_user_name='tempest-TestNetworkBasicOps-1971100882-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T10:08:41Z,user_data=None,user_id='03615580775245e6ae335ee9d785611f',uuid=112440c2-8dcc-4a19-9d83-5489df97079a,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "2d0118f7-94f6-43f6-a67f-28e0faf9c3ae", "address": "fa:16:3e:b4:37:0e", "network": {"id": "dccd9941-4f3e-4086-b9cd-651d8e99e8ec", "bridge": "br-int", "label": "tempest-network-smoke--1290241953", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2d0118f7-94", "ovs_interfaceid": "2d0118f7-94f6-43f6-a67f-28e0faf9c3ae", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 06 10:09:45 compute-0 nova_compute[254819]: 2025-12-06 10:09:45.880 254824 DEBUG nova.network.os_vif_util [None req-4d612999-ad2d-46a2-bbb2-018c60ac15c8 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Converting VIF {"id": "2d0118f7-94f6-43f6-a67f-28e0faf9c3ae", "address": "fa:16:3e:b4:37:0e", "network": {"id": "dccd9941-4f3e-4086-b9cd-651d8e99e8ec", "bridge": "br-int", "label": "tempest-network-smoke--1290241953", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2d0118f7-94", "ovs_interfaceid": "2d0118f7-94f6-43f6-a67f-28e0faf9c3ae", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 10:09:45 compute-0 nova_compute[254819]: 2025-12-06 10:09:45.881 254824 DEBUG nova.network.os_vif_util [None req-4d612999-ad2d-46a2-bbb2-018c60ac15c8 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:b4:37:0e,bridge_name='br-int',has_traffic_filtering=True,id=2d0118f7-94f6-43f6-a67f-28e0faf9c3ae,network=Network(dccd9941-4f3e-4086-b9cd-651d8e99e8ec),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2d0118f7-94') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 10:09:45 compute-0 nova_compute[254819]: 2025-12-06 10:09:45.882 254824 DEBUG os_vif [None req-4d612999-ad2d-46a2-bbb2-018c60ac15c8 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:b4:37:0e,bridge_name='br-int',has_traffic_filtering=True,id=2d0118f7-94f6-43f6-a67f-28e0faf9c3ae,network=Network(dccd9941-4f3e-4086-b9cd-651d8e99e8ec),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2d0118f7-94') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 06 10:09:45 compute-0 nova_compute[254819]: 2025-12-06 10:09:45.884 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:09:45 compute-0 nova_compute[254819]: 2025-12-06 10:09:45.885 254824 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2d0118f7-94, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 10:09:45 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:09:45 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Dec 06 10:09:45 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:09:45 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Dec 06 10:09:45 compute-0 nova_compute[254819]: 2025-12-06 10:09:45.887 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:09:45 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:09:45 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Dec 06 10:09:45 compute-0 nova_compute[254819]: 2025-12-06 10:09:45.890 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 10:09:45 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:09:45 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Dec 06 10:09:45 compute-0 nova_compute[254819]: 2025-12-06 10:09:45.897 254824 INFO os_vif [None req-4d612999-ad2d-46a2-bbb2-018c60ac15c8 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:b4:37:0e,bridge_name='br-int',has_traffic_filtering=True,id=2d0118f7-94f6-43f6-a67f-28e0faf9c3ae,network=Network(dccd9941-4f3e-4086-b9cd-651d8e99e8ec),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2d0118f7-94')
Dec 06 10:09:45 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:09:45 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Dec 06 10:09:45 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:09:45 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 06 10:09:45 compute-0 neutron-haproxy-ovnmeta-dccd9941-4f3e-4086-b9cd-651d8e99e8ec[265633]: [NOTICE]   (265656) : haproxy version is 2.8.14-c23fe91
Dec 06 10:09:45 compute-0 neutron-haproxy-ovnmeta-dccd9941-4f3e-4086-b9cd-651d8e99e8ec[265633]: [NOTICE]   (265656) : path to executable is /usr/sbin/haproxy
Dec 06 10:09:45 compute-0 neutron-haproxy-ovnmeta-dccd9941-4f3e-4086-b9cd-651d8e99e8ec[265633]: [WARNING]  (265656) : Exiting Master process...
Dec 06 10:09:45 compute-0 neutron-haproxy-ovnmeta-dccd9941-4f3e-4086-b9cd-651d8e99e8ec[265633]: [ALERT]    (265656) : Current worker (265658) exited with code 143 (Terminated)
Dec 06 10:09:45 compute-0 neutron-haproxy-ovnmeta-dccd9941-4f3e-4086-b9cd-651d8e99e8ec[265633]: [WARNING]  (265656) : All workers exited. Exiting... (0)
Dec 06 10:09:45 compute-0 systemd[1]: libpod-b2323c34c8c570910b87213790a21d1c9563369a938b6f81158f55defebfebc9.scope: Deactivated successfully.
Dec 06 10:09:45 compute-0 podman[267090]: 2025-12-06 10:09:45.968823741 +0000 UTC m=+0.055850414 container died b2323c34c8c570910b87213790a21d1c9563369a938b6f81158f55defebfebc9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=neutron-haproxy-ovnmeta-dccd9941-4f3e-4086-b9cd-651d8e99e8ec, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Dec 06 10:09:46 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b2323c34c8c570910b87213790a21d1c9563369a938b6f81158f55defebfebc9-userdata-shm.mount: Deactivated successfully.
Dec 06 10:09:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-cb19a4954b41a05ededd94f9209e0e9572500e71415f9c5c428921ac41b73efd-merged.mount: Deactivated successfully.
Dec 06 10:09:46 compute-0 podman[267090]: 2025-12-06 10:09:46.020979524 +0000 UTC m=+0.108006197 container cleanup b2323c34c8c570910b87213790a21d1c9563369a938b6f81158f55defebfebc9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=neutron-haproxy-ovnmeta-dccd9941-4f3e-4086-b9cd-651d8e99e8ec, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 10:09:46 compute-0 systemd[1]: libpod-conmon-b2323c34c8c570910b87213790a21d1c9563369a938b6f81158f55defebfebc9.scope: Deactivated successfully.
Dec 06 10:09:46 compute-0 podman[267161]: 2025-12-06 10:09:46.087746068 +0000 UTC m=+0.044496550 container remove b2323c34c8c570910b87213790a21d1c9563369a938b6f81158f55defebfebc9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=neutron-haproxy-ovnmeta-dccd9941-4f3e-4086-b9cd-651d8e99e8ec, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 10:09:46 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:09:46.106 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[e5b6462c-3d40-4af0-9ec2-c5dcb6a12ada]: (4, ('Sat Dec  6 10:09:45 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-dccd9941-4f3e-4086-b9cd-651d8e99e8ec (b2323c34c8c570910b87213790a21d1c9563369a938b6f81158f55defebfebc9)\nb2323c34c8c570910b87213790a21d1c9563369a938b6f81158f55defebfebc9\nSat Dec  6 10:09:46 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-dccd9941-4f3e-4086-b9cd-651d8e99e8ec (b2323c34c8c570910b87213790a21d1c9563369a938b6f81158f55defebfebc9)\nb2323c34c8c570910b87213790a21d1c9563369a938b6f81158f55defebfebc9\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:09:46 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:09:46.109 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[3f682c74-65bb-4317-b8e8-dea8c4ef13b2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:09:46 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:09:46.111 162267 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapdccd9941-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 10:09:46 compute-0 nova_compute[254819]: 2025-12-06 10:09:46.114 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:09:46 compute-0 kernel: tapdccd9941-40: left promiscuous mode
Dec 06 10:09:46 compute-0 nova_compute[254819]: 2025-12-06 10:09:46.127 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:09:46 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:09:46.132 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[2f97218f-6b79-4539-84ad-9661af72f9fb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:09:46 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:09:46.155 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[141c129e-5d91-4a08-975a-3246fe731e1a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:09:46 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:09:46.157 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[2fa083e4-d77d-4d59-96fe-04cea738c0e8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:09:46 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:09:46.188 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[78d7f307-d689-4a41-b71c-c053d10a1a99]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 409069, 'reachable_time': 44376, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 267177, 'error': None, 'target': 'ovnmeta-dccd9941-4f3e-4086-b9cd-651d8e99e8ec', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:09:46 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:09:46.191 162385 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-dccd9941-4f3e-4086-b9cd-651d8e99e8ec deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 06 10:09:46 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:09:46.191 162385 DEBUG oslo.privsep.daemon [-] privsep: reply[2c85a3d2-3c5a-42b5-b9ee-d2ddeb756e69]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:09:46 compute-0 systemd[1]: run-netns-ovnmeta\x2ddccd9941\x2d4f3e\x2d4086\x2db9cd\x2d651d8e99e8ec.mount: Deactivated successfully.
Dec 06 10:09:46 compute-0 ceph-mon[74327]: from='client.? 192.168.122.10:0/2942484271' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 10:09:46 compute-0 ceph-mon[74327]: from='client.? 192.168.122.10:0/2942484271' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 10:09:46 compute-0 nova_compute[254819]: 2025-12-06 10:09:46.340 254824 INFO nova.virt.libvirt.driver [None req-4d612999-ad2d-46a2-bbb2-018c60ac15c8 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 112440c2-8dcc-4a19-9d83-5489df97079a] Deleting instance files /var/lib/nova/instances/112440c2-8dcc-4a19-9d83-5489df97079a_del
Dec 06 10:09:46 compute-0 nova_compute[254819]: 2025-12-06 10:09:46.341 254824 INFO nova.virt.libvirt.driver [None req-4d612999-ad2d-46a2-bbb2-018c60ac15c8 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 112440c2-8dcc-4a19-9d83-5489df97079a] Deletion of /var/lib/nova/instances/112440c2-8dcc-4a19-9d83-5489df97079a_del complete
Dec 06 10:09:46 compute-0 nova_compute[254819]: 2025-12-06 10:09:46.392 254824 DEBUG nova.compute.manager [req-dc0abd97-ec69-4a80-858e-f55932d06c64 req-75b66201-a11e-4a04-a75d-70768bf5a872 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 112440c2-8dcc-4a19-9d83-5489df97079a] Received event network-vif-unplugged-2d0118f7-94f6-43f6-a67f-28e0faf9c3ae external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 10:09:46 compute-0 nova_compute[254819]: 2025-12-06 10:09:46.393 254824 DEBUG oslo_concurrency.lockutils [req-dc0abd97-ec69-4a80-858e-f55932d06c64 req-75b66201-a11e-4a04-a75d-70768bf5a872 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Acquiring lock "112440c2-8dcc-4a19-9d83-5489df97079a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:09:46 compute-0 nova_compute[254819]: 2025-12-06 10:09:46.393 254824 DEBUG oslo_concurrency.lockutils [req-dc0abd97-ec69-4a80-858e-f55932d06c64 req-75b66201-a11e-4a04-a75d-70768bf5a872 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Lock "112440c2-8dcc-4a19-9d83-5489df97079a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:09:46 compute-0 nova_compute[254819]: 2025-12-06 10:09:46.394 254824 DEBUG oslo_concurrency.lockutils [req-dc0abd97-ec69-4a80-858e-f55932d06c64 req-75b66201-a11e-4a04-a75d-70768bf5a872 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Lock "112440c2-8dcc-4a19-9d83-5489df97079a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:09:46 compute-0 nova_compute[254819]: 2025-12-06 10:09:46.394 254824 DEBUG nova.compute.manager [req-dc0abd97-ec69-4a80-858e-f55932d06c64 req-75b66201-a11e-4a04-a75d-70768bf5a872 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 112440c2-8dcc-4a19-9d83-5489df97079a] No waiting events found dispatching network-vif-unplugged-2d0118f7-94f6-43f6-a67f-28e0faf9c3ae pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 10:09:46 compute-0 nova_compute[254819]: 2025-12-06 10:09:46.395 254824 DEBUG nova.compute.manager [req-dc0abd97-ec69-4a80-858e-f55932d06c64 req-75b66201-a11e-4a04-a75d-70768bf5a872 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 112440c2-8dcc-4a19-9d83-5489df97079a] Received event network-vif-unplugged-2d0118f7-94f6-43f6-a67f-28e0faf9c3ae for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 06 10:09:46 compute-0 nova_compute[254819]: 2025-12-06 10:09:46.422 254824 INFO nova.compute.manager [None req-4d612999-ad2d-46a2-bbb2-018c60ac15c8 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 112440c2-8dcc-4a19-9d83-5489df97079a] Took 0.80 seconds to destroy the instance on the hypervisor.
Dec 06 10:09:46 compute-0 nova_compute[254819]: 2025-12-06 10:09:46.423 254824 DEBUG oslo.service.loopingcall [None req-4d612999-ad2d-46a2-bbb2-018c60ac15c8 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 06 10:09:46 compute-0 nova_compute[254819]: 2025-12-06 10:09:46.424 254824 DEBUG nova.compute.manager [-] [instance: 112440c2-8dcc-4a19-9d83-5489df97079a] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 06 10:09:46 compute-0 nova_compute[254819]: 2025-12-06 10:09:46.425 254824 DEBUG nova.network.neutron [-] [instance: 112440c2-8dcc-4a19-9d83-5489df97079a] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 06 10:09:46 compute-0 nova_compute[254819]: 2025-12-06 10:09:46.470 254824 DEBUG nova.network.neutron [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] [instance: 112440c2-8dcc-4a19-9d83-5489df97079a] Updating instance_info_cache with network_info: [{"id": "2d0118f7-94f6-43f6-a67f-28e0faf9c3ae", "address": "fa:16:3e:b4:37:0e", "network": {"id": "dccd9941-4f3e-4086-b9cd-651d8e99e8ec", "bridge": "br-int", "label": "tempest-network-smoke--1290241953", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2d0118f7-94", "ovs_interfaceid": "2d0118f7-94f6-43f6-a67f-28e0faf9c3ae", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 10:09:46 compute-0 nova_compute[254819]: 2025-12-06 10:09:46.493 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Releasing lock "refresh_cache-112440c2-8dcc-4a19-9d83-5489df97079a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 10:09:46 compute-0 nova_compute[254819]: 2025-12-06 10:09:46.494 254824 DEBUG nova.compute.manager [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] [instance: 112440c2-8dcc-4a19-9d83-5489df97079a] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Dec 06 10:09:46 compute-0 nova_compute[254819]: 2025-12-06 10:09:46.494 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:09:46 compute-0 nova_compute[254819]: 2025-12-06 10:09:46.494 254824 DEBUG nova.compute.manager [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 06 10:09:46 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v859: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 13 KiB/s wr, 28 op/s
Dec 06 10:09:47 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:09:47 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:09:47 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:09:47.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:09:47 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:09:47 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:09:47 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:09:47.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:09:47 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:09:47.288Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec 06 10:09:47 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:09:47.288Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec 06 10:09:47 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:09:47.289Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 06 10:09:47 compute-0 nova_compute[254819]: 2025-12-06 10:09:47.303 254824 DEBUG nova.network.neutron [-] [instance: 112440c2-8dcc-4a19-9d83-5489df97079a] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 10:09:47 compute-0 nova_compute[254819]: 2025-12-06 10:09:47.323 254824 INFO nova.compute.manager [-] [instance: 112440c2-8dcc-4a19-9d83-5489df97079a] Took 0.90 seconds to deallocate network for instance.
Dec 06 10:09:47 compute-0 nova_compute[254819]: 2025-12-06 10:09:47.365 254824 DEBUG nova.compute.manager [req-5c35bc26-66a7-4a41-9e24-dca0e7864753 req-414fb5e5-1b72-4d5b-836f-a936427cdaf3 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 112440c2-8dcc-4a19-9d83-5489df97079a] Received event network-vif-deleted-2d0118f7-94f6-43f6-a67f-28e0faf9c3ae external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 10:09:47 compute-0 nova_compute[254819]: 2025-12-06 10:09:47.368 254824 DEBUG oslo_concurrency.lockutils [None req-4d612999-ad2d-46a2-bbb2-018c60ac15c8 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:09:47 compute-0 nova_compute[254819]: 2025-12-06 10:09:47.368 254824 DEBUG oslo_concurrency.lockutils [None req-4d612999-ad2d-46a2-bbb2-018c60ac15c8 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:09:47 compute-0 nova_compute[254819]: 2025-12-06 10:09:47.416 254824 DEBUG oslo_concurrency.processutils [None req-4d612999-ad2d-46a2-bbb2-018c60ac15c8 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 10:09:47 compute-0 ceph-mon[74327]: pgmap v859: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 13 KiB/s wr, 28 op/s
Dec 06 10:09:47 compute-0 ceph-mon[74327]: from='client.? 192.168.122.102:0/1962556856' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 10:09:48 compute-0 nova_compute[254819]: 2025-12-06 10:09:48.480 254824 DEBUG nova.compute.manager [req-e41b1a2b-d301-400a-9056-61a7e4ed1042 req-2f8680d5-bb05-4958-b750-5cc17eaa14bd d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 112440c2-8dcc-4a19-9d83-5489df97079a] Received event network-vif-plugged-2d0118f7-94f6-43f6-a67f-28e0faf9c3ae external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 10:09:48 compute-0 nova_compute[254819]: 2025-12-06 10:09:48.482 254824 DEBUG oslo_concurrency.lockutils [req-e41b1a2b-d301-400a-9056-61a7e4ed1042 req-2f8680d5-bb05-4958-b750-5cc17eaa14bd d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Acquiring lock "112440c2-8dcc-4a19-9d83-5489df97079a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:09:48 compute-0 nova_compute[254819]: 2025-12-06 10:09:48.483 254824 DEBUG oslo_concurrency.lockutils [req-e41b1a2b-d301-400a-9056-61a7e4ed1042 req-2f8680d5-bb05-4958-b750-5cc17eaa14bd d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Lock "112440c2-8dcc-4a19-9d83-5489df97079a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:09:48 compute-0 nova_compute[254819]: 2025-12-06 10:09:48.483 254824 DEBUG oslo_concurrency.lockutils [req-e41b1a2b-d301-400a-9056-61a7e4ed1042 req-2f8680d5-bb05-4958-b750-5cc17eaa14bd d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Lock "112440c2-8dcc-4a19-9d83-5489df97079a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:09:48 compute-0 nova_compute[254819]: 2025-12-06 10:09:48.484 254824 DEBUG nova.compute.manager [req-e41b1a2b-d301-400a-9056-61a7e4ed1042 req-2f8680d5-bb05-4958-b750-5cc17eaa14bd d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 112440c2-8dcc-4a19-9d83-5489df97079a] No waiting events found dispatching network-vif-plugged-2d0118f7-94f6-43f6-a67f-28e0faf9c3ae pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 10:09:48 compute-0 nova_compute[254819]: 2025-12-06 10:09:48.485 254824 WARNING nova.compute.manager [req-e41b1a2b-d301-400a-9056-61a7e4ed1042 req-2f8680d5-bb05-4958-b750-5cc17eaa14bd d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 112440c2-8dcc-4a19-9d83-5489df97079a] Received unexpected event network-vif-plugged-2d0118f7-94f6-43f6-a67f-28e0faf9c3ae for instance with vm_state deleted and task_state None.
Dec 06 10:09:48 compute-0 nova_compute[254819]: 2025-12-06 10:09:48.487 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:09:48 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v860: 337 pgs: 337 active+clean; 48 MiB data, 266 MiB used, 60 GiB / 60 GiB avail; 37 KiB/s rd, 14 KiB/s wr, 53 op/s
Dec 06 10:09:49 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:09:49.005Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 06 10:09:49 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 06 10:09:49 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/279661780' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 10:09:49 compute-0 nova_compute[254819]: 2025-12-06 10:09:49.046 254824 DEBUG oslo_concurrency.processutils [None req-4d612999-ad2d-46a2-bbb2-018c60ac15c8 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.630s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 10:09:49 compute-0 nova_compute[254819]: 2025-12-06 10:09:49.055 254824 DEBUG nova.compute.provider_tree [None req-4d612999-ad2d-46a2-bbb2-018c60ac15c8 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Inventory has not changed in ProviderTree for provider: 06a9c7d1-c74c-47ea-9e97-16acfab6aa88 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 10:09:49 compute-0 ceph-mon[74327]: from='client.? 192.168.122.102:0/3467548251' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 10:09:49 compute-0 ceph-mon[74327]: pgmap v860: 337 pgs: 337 active+clean; 48 MiB data, 266 MiB used, 60 GiB / 60 GiB avail; 37 KiB/s rd, 14 KiB/s wr, 53 op/s
Dec 06 10:09:49 compute-0 ceph-mon[74327]: from='client.? 192.168.122.100:0/279661780' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 10:09:49 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:09:49 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:09:49 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:09:49.127 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:09:49 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:09:49 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:09:49 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:09:49.284 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:09:49 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:09:50 compute-0 nova_compute[254819]: 2025-12-06 10:09:50.743 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:09:50 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v861: 337 pgs: 337 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 39 KiB/s rd, 2.4 KiB/s wr, 56 op/s
Dec 06 10:09:50 compute-0 nova_compute[254819]: 2025-12-06 10:09:50.887 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:09:50 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:09:50] "GET /metrics HTTP/1.1" 200 48483 "" "Prometheus/2.51.0"
Dec 06 10:09:50 compute-0 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:09:50] "GET /metrics HTTP/1.1" 200 48483 "" "Prometheus/2.51.0"
Dec 06 10:09:51 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:09:51 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:09:51 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:09:51.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:09:51 compute-0 nova_compute[254819]: 2025-12-06 10:09:51.138 254824 DEBUG nova.scheduler.client.report [None req-4d612999-ad2d-46a2-bbb2-018c60ac15c8 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Inventory has not changed for provider 06a9c7d1-c74c-47ea-9e97-16acfab6aa88 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 10:09:51 compute-0 nova_compute[254819]: 2025-12-06 10:09:51.177 254824 DEBUG oslo_concurrency.lockutils [None req-4d612999-ad2d-46a2-bbb2-018c60ac15c8 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 3.809s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:09:51 compute-0 ceph-mon[74327]: pgmap v861: 337 pgs: 337 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 39 KiB/s rd, 2.4 KiB/s wr, 56 op/s
Dec 06 10:09:51 compute-0 nova_compute[254819]: 2025-12-06 10:09:51.248 254824 INFO nova.scheduler.client.report [None req-4d612999-ad2d-46a2-bbb2-018c60ac15c8 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Deleted allocations for instance 112440c2-8dcc-4a19-9d83-5489df97079a
Dec 06 10:09:51 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:09:51 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:09:51 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:09:51.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:09:51 compute-0 nova_compute[254819]: 2025-12-06 10:09:51.309 254824 DEBUG oslo_concurrency.lockutils [None req-4d612999-ad2d-46a2-bbb2-018c60ac15c8 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lock "112440c2-8dcc-4a19-9d83-5489df97079a" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.688s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:09:52 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:09:52 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 06 10:09:52 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:09:52 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 06 10:09:52 compute-0 sudo[267207]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 10:09:52 compute-0 sudo[267207]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:09:52 compute-0 sudo[267207]: pam_unix(sudo:session): session closed for user root
Dec 06 10:09:52 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v862: 337 pgs: 337 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 39 KiB/s rd, 2.4 KiB/s wr, 56 op/s
Dec 06 10:09:53 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:09:53 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:09:53 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:09:53.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:09:53 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:09:53 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:09:53 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:09:53.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:09:53 compute-0 ceph-mon[74327]: pgmap v862: 337 pgs: 337 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 39 KiB/s rd, 2.4 KiB/s wr, 56 op/s
Dec 06 10:09:53 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 06 10:09:53 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 10:09:54 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 10:09:54 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 10:09:54 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 10:09:54 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 10:09:54 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 10:09:54 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 10:09:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:09:54.241 162267 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:09:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:09:54.242 162267 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:09:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:09:54.242 162267 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:09:54 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:09:54 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v863: 337 pgs: 337 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 40 KiB/s rd, 3.2 KiB/s wr, 58 op/s
Dec 06 10:09:54 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 10:09:55 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:09:55 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:09:55 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:09:55.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:09:55 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:09:55 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 10:09:55 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:09:55.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 10:09:55 compute-0 nova_compute[254819]: 2025-12-06 10:09:55.744 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:09:55 compute-0 nova_compute[254819]: 2025-12-06 10:09:55.889 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:09:55 compute-0 ceph-mon[74327]: pgmap v863: 337 pgs: 337 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 40 KiB/s rd, 3.2 KiB/s wr, 58 op/s
Dec 06 10:09:56 compute-0 nova_compute[254819]: 2025-12-06 10:09:56.292 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:09:56 compute-0 nova_compute[254819]: 2025-12-06 10:09:56.405 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:09:56 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v864: 337 pgs: 337 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 2.1 KiB/s wr, 30 op/s
Dec 06 10:09:57 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:09:57 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:09:57 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:09:57.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:09:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:09:57.290Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 06 10:09:57 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:09:57 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:09:57 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:09:57.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:09:57 compute-0 ceph-mon[74327]: pgmap v864: 337 pgs: 337 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 2.1 KiB/s wr, 30 op/s
Dec 06 10:09:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:09:58 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Dec 06 10:09:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:09:58 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Dec 06 10:09:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:09:58 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Dec 06 10:09:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:09:58 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Dec 06 10:09:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:09:58 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Dec 06 10:09:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:09:58 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Dec 06 10:09:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:09:58 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Dec 06 10:09:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:09:58 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec 06 10:09:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:09:58 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec 06 10:09:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:09:58 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec 06 10:09:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:09:58 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Dec 06 10:09:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:09:58 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec 06 10:09:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:09:58 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Dec 06 10:09:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:09:58 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Dec 06 10:09:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:09:58 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Dec 06 10:09:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:09:58 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Dec 06 10:09:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:09:58 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Dec 06 10:09:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:09:58 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Dec 06 10:09:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:09:58 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Dec 06 10:09:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:09:58 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Dec 06 10:09:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:09:58 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Dec 06 10:09:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:09:58 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Dec 06 10:09:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:09:58 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Dec 06 10:09:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:09:58 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Dec 06 10:09:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:09:58 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Dec 06 10:09:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:09:58 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Dec 06 10:09:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:09:58 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Dec 06 10:09:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:09:58 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c000df0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:09:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:09:58 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f4000fb0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:09:58 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v865: 337 pgs: 337 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 2.2 KiB/s wr, 31 op/s
Dec 06 10:09:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:09:58 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65e8000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:09:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:09:59.006Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 06 10:09:59 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:09:59 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:09:59 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:09:59.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:09:59 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:09:59 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:09:59 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:09:59.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:09:59 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:09:59 compute-0 ceph-mon[74327]: pgmap v865: 337 pgs: 337 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 2.2 KiB/s wr, 31 op/s
Dec 06 10:10:00 compute-0 ceph-mon[74327]: log_channel(cluster) log [WRN] : Health detail: HEALTH_WARN 1 failed cephadm daemon(s)
Dec 06 10:10:00 compute-0 ceph-mon[74327]: log_channel(cluster) log [WRN] : [WRN] CEPHADM_FAILED_DAEMON: 1 failed cephadm daemon(s)
Dec 06 10:10:00 compute-0 ceph-mon[74327]: log_channel(cluster) log [WRN] :     daemon nfs.cephfs.2.0.compute-0.dfwxck on compute-0 is in unknown state
Dec 06 10:10:00 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:10:00 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c000df0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:10:00 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:10:00 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6608001ac0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:10:00 compute-0 nova_compute[254819]: 2025-12-06 10:10:00.748 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:10:00 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v866: 337 pgs: 337 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 3.5 KiB/s rd, 1.4 KiB/s wr, 5 op/s
Dec 06 10:10:00 compute-0 nova_compute[254819]: 2025-12-06 10:10:00.864 254824 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765015785.8629587, 112440c2-8dcc-4a19-9d83-5489df97079a => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 10:10:00 compute-0 nova_compute[254819]: 2025-12-06 10:10:00.864 254824 INFO nova.compute.manager [-] [instance: 112440c2-8dcc-4a19-9d83-5489df97079a] VM Stopped (Lifecycle Event)
Dec 06 10:10:00 compute-0 nova_compute[254819]: 2025-12-06 10:10:00.887 254824 DEBUG nova.compute.manager [None req-0208a515-bbfb-4354-9b62-fa978d41f879 - - - - - -] [instance: 112440c2-8dcc-4a19-9d83-5489df97079a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 10:10:00 compute-0 nova_compute[254819]: 2025-12-06 10:10:00.891 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:10:00 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:10:00] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Dec 06 10:10:00 compute-0 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:10:00] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Dec 06 10:10:00 compute-0 ceph-mon[74327]: Health detail: HEALTH_WARN 1 failed cephadm daemon(s)
Dec 06 10:10:00 compute-0 ceph-mon[74327]: [WRN] CEPHADM_FAILED_DAEMON: 1 failed cephadm daemon(s)
Dec 06 10:10:00 compute-0 ceph-mon[74327]:     daemon nfs.cephfs.2.0.compute-0.dfwxck on compute-0 is in unknown state
Dec 06 10:10:01 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-haproxy-nfs-cephfs-compute-0-fzuvue[96127]: [WARNING] 339/101001 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Dec 06 10:10:01 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:10:00 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f4001cb0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:10:01 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:10:01 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:10:01 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:10:01.142 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:10:01 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:10:01 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:10:01 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:10:01.301 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:10:02 compute-0 ceph-mon[74327]: pgmap v866: 337 pgs: 337 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 3.5 KiB/s rd, 1.4 KiB/s wr, 5 op/s
Dec 06 10:10:02 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:10:02 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65e80016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:10:02 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:10:02 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c000df0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:10:02 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v867: 337 pgs: 337 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 938 B/s wr, 2 op/s
Dec 06 10:10:03 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:10:03 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f66080023e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:10:03 compute-0 ceph-mon[74327]: pgmap v867: 337 pgs: 337 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 938 B/s wr, 2 op/s
Dec 06 10:10:03 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:10:03 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 10:10:03 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:10:03.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 10:10:03 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:10:03 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 10:10:03 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:10:03.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 10:10:04 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:10:04 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f4001cb0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:10:04 compute-0 podman[267261]: 2025-12-06 10:10:04.495685982 +0000 UTC m=+0.109299553 container health_status a9cf33c1bf7891b3fdd9db4717ad5f3587fe9e5acf9171920c6d6334b70a502a (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Dec 06 10:10:04 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:10:04 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65e80016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:10:04 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:10:04 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v868: 337 pgs: 337 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 938 B/s wr, 2 op/s
Dec 06 10:10:05 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:10:05 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65e80016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:10:05 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:10:05 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:10:05 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:10:05.147 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:10:05 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:10:05 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:10:05 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:10:05.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:10:05 compute-0 nova_compute[254819]: 2025-12-06 10:10:05.749 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:10:05 compute-0 nova_compute[254819]: 2025-12-06 10:10:05.892 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:10:05 compute-0 ceph-mon[74327]: pgmap v868: 337 pgs: 337 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 938 B/s wr, 2 op/s
Dec 06 10:10:06 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:10:06 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f66080023e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:10:06 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:10:06 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f4001cb0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:10:06 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v869: 337 pgs: 337 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Dec 06 10:10:07 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:10:07 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c008dc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:10:07 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:10:07 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 10:10:07 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:10:07.149 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 10:10:07 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:10:07.291Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 06 10:10:07 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:10:07 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:10:07 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:10:07.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:10:07 compute-0 ceph-mon[74327]: pgmap v869: 337 pgs: 337 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Dec 06 10:10:08 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:10:08 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65e80016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:10:08 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:10:08 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f66080023e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:10:08 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v870: 337 pgs: 337 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Dec 06 10:10:08 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 06 10:10:08 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 10:10:08 compute-0 ceph-mon[74327]: pgmap v870: 337 pgs: 337 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Dec 06 10:10:08 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 10:10:09 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:10:09 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f66080023e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:10:09 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:10:09.007Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 06 10:10:09 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:10:09 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:10:09 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:10:09.152 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:10:09 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:10:09 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:10:09 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:10:09.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:10:09 compute-0 podman[267287]: 2025-12-06 10:10:09.502378673 +0000 UTC m=+0.130604411 container health_status ed08b04f63d3695f0aa2f04fcc706897af4aa24f286fa3cd10a4323ee2c868ab (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec 06 10:10:09 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:10:10 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:10:10 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c008dc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:10:10 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:10:10 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65e8002f00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:10:10 compute-0 nova_compute[254819]: 2025-12-06 10:10:10.750 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:10:10 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v871: 337 pgs: 337 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Dec 06 10:10:10 compute-0 nova_compute[254819]: 2025-12-06 10:10:10.894 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:10:10 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:10:10] "GET /metrics HTTP/1.1" 200 48454 "" "Prometheus/2.51.0"
Dec 06 10:10:10 compute-0 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:10:10] "GET /metrics HTTP/1.1" 200 48454 "" "Prometheus/2.51.0"
Dec 06 10:10:11 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:10:11 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f66080023e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:10:11 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:10:11 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 10:10:11 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:10:11.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 10:10:11 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:10:11 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:10:11 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:10:11.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:10:11 compute-0 ceph-mon[74327]: pgmap v871: 337 pgs: 337 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Dec 06 10:10:12 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:10:12 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f4001cb0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:10:12 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:10:12 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c009ad0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:10:12 compute-0 sudo[267316]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 10:10:12 compute-0 sudo[267316]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:10:12 compute-0 sudo[267316]: pam_unix(sudo:session): session closed for user root
Dec 06 10:10:12 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v872: 337 pgs: 337 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Dec 06 10:10:13 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:10:13 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65e8002f00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:10:13 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:10:13 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:10:13 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:10:13.157 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:10:13 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:10:13 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:10:13 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:10:13.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:10:13 compute-0 nova_compute[254819]: 2025-12-06 10:10:13.728 254824 DEBUG oslo_concurrency.lockutils [None req-55520d59-3b01-45c1-af21-e2cf387624ad 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Acquiring lock "467f8e9a-e166-409e-920c-689fea4ea3f6" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:10:13 compute-0 nova_compute[254819]: 2025-12-06 10:10:13.729 254824 DEBUG oslo_concurrency.lockutils [None req-55520d59-3b01-45c1-af21-e2cf387624ad 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lock "467f8e9a-e166-409e-920c-689fea4ea3f6" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:10:13 compute-0 nova_compute[254819]: 2025-12-06 10:10:13.750 254824 DEBUG nova.compute.manager [None req-55520d59-3b01-45c1-af21-e2cf387624ad 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 467f8e9a-e166-409e-920c-689fea4ea3f6] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 06 10:10:13 compute-0 nova_compute[254819]: 2025-12-06 10:10:13.837 254824 DEBUG oslo_concurrency.lockutils [None req-55520d59-3b01-45c1-af21-e2cf387624ad 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:10:13 compute-0 nova_compute[254819]: 2025-12-06 10:10:13.838 254824 DEBUG oslo_concurrency.lockutils [None req-55520d59-3b01-45c1-af21-e2cf387624ad 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:10:13 compute-0 nova_compute[254819]: 2025-12-06 10:10:13.849 254824 DEBUG nova.virt.hardware [None req-55520d59-3b01-45c1-af21-e2cf387624ad 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 06 10:10:13 compute-0 nova_compute[254819]: 2025-12-06 10:10:13.850 254824 INFO nova.compute.claims [None req-55520d59-3b01-45c1-af21-e2cf387624ad 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 467f8e9a-e166-409e-920c-689fea4ea3f6] Claim successful on node compute-0.ctlplane.example.com
Dec 06 10:10:13 compute-0 ceph-mon[74327]: pgmap v872: 337 pgs: 337 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Dec 06 10:10:14 compute-0 nova_compute[254819]: 2025-12-06 10:10:14.000 254824 DEBUG oslo_concurrency.processutils [None req-55520d59-3b01-45c1-af21-e2cf387624ad 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 10:10:14 compute-0 podman[267363]: 2025-12-06 10:10:14.458074541 +0000 UTC m=+0.084056388 container health_status ec1e66d2175087ad7a28a9a6b5a784e30128df3f5e268959d009e364cd5120f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Dec 06 10:10:14 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:10:14 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6608003e00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:10:14 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 06 10:10:14 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3879393744' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 10:10:14 compute-0 nova_compute[254819]: 2025-12-06 10:10:14.547 254824 DEBUG oslo_concurrency.processutils [None req-55520d59-3b01-45c1-af21-e2cf387624ad 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.547s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 10:10:14 compute-0 nova_compute[254819]: 2025-12-06 10:10:14.556 254824 DEBUG nova.compute.provider_tree [None req-55520d59-3b01-45c1-af21-e2cf387624ad 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Inventory has not changed in ProviderTree for provider: 06a9c7d1-c74c-47ea-9e97-16acfab6aa88 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 10:10:14 compute-0 nova_compute[254819]: 2025-12-06 10:10:14.595 254824 DEBUG nova.scheduler.client.report [None req-55520d59-3b01-45c1-af21-e2cf387624ad 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Inventory has not changed for provider 06a9c7d1-c74c-47ea-9e97-16acfab6aa88 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 10:10:14 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:10:14 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f4001cb0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:10:14 compute-0 nova_compute[254819]: 2025-12-06 10:10:14.628 254824 DEBUG oslo_concurrency.lockutils [None req-55520d59-3b01-45c1-af21-e2cf387624ad 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.790s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:10:14 compute-0 nova_compute[254819]: 2025-12-06 10:10:14.630 254824 DEBUG nova.compute.manager [None req-55520d59-3b01-45c1-af21-e2cf387624ad 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 467f8e9a-e166-409e-920c-689fea4ea3f6] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 06 10:10:14 compute-0 nova_compute[254819]: 2025-12-06 10:10:14.681 254824 DEBUG nova.compute.manager [None req-55520d59-3b01-45c1-af21-e2cf387624ad 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 467f8e9a-e166-409e-920c-689fea4ea3f6] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 06 10:10:14 compute-0 nova_compute[254819]: 2025-12-06 10:10:14.682 254824 DEBUG nova.network.neutron [None req-55520d59-3b01-45c1-af21-e2cf387624ad 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 467f8e9a-e166-409e-920c-689fea4ea3f6] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 06 10:10:14 compute-0 nova_compute[254819]: 2025-12-06 10:10:14.702 254824 INFO nova.virt.libvirt.driver [None req-55520d59-3b01-45c1-af21-e2cf387624ad 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 467f8e9a-e166-409e-920c-689fea4ea3f6] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 06 10:10:14 compute-0 nova_compute[254819]: 2025-12-06 10:10:14.719 254824 DEBUG nova.compute.manager [None req-55520d59-3b01-45c1-af21-e2cf387624ad 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 467f8e9a-e166-409e-920c-689fea4ea3f6] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 06 10:10:14 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:10:14 compute-0 nova_compute[254819]: 2025-12-06 10:10:14.811 254824 DEBUG nova.compute.manager [None req-55520d59-3b01-45c1-af21-e2cf387624ad 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 467f8e9a-e166-409e-920c-689fea4ea3f6] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 06 10:10:14 compute-0 nova_compute[254819]: 2025-12-06 10:10:14.813 254824 DEBUG nova.virt.libvirt.driver [None req-55520d59-3b01-45c1-af21-e2cf387624ad 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 467f8e9a-e166-409e-920c-689fea4ea3f6] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 06 10:10:14 compute-0 nova_compute[254819]: 2025-12-06 10:10:14.814 254824 INFO nova.virt.libvirt.driver [None req-55520d59-3b01-45c1-af21-e2cf387624ad 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 467f8e9a-e166-409e-920c-689fea4ea3f6] Creating image(s)
Dec 06 10:10:14 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v873: 337 pgs: 337 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Dec 06 10:10:14 compute-0 nova_compute[254819]: 2025-12-06 10:10:14.852 254824 DEBUG nova.storage.rbd_utils [None req-55520d59-3b01-45c1-af21-e2cf387624ad 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] rbd image 467f8e9a-e166-409e-920c-689fea4ea3f6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 10:10:14 compute-0 nova_compute[254819]: 2025-12-06 10:10:14.881 254824 DEBUG nova.storage.rbd_utils [None req-55520d59-3b01-45c1-af21-e2cf387624ad 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] rbd image 467f8e9a-e166-409e-920c-689fea4ea3f6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 10:10:14 compute-0 nova_compute[254819]: 2025-12-06 10:10:14.913 254824 DEBUG nova.storage.rbd_utils [None req-55520d59-3b01-45c1-af21-e2cf387624ad 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] rbd image 467f8e9a-e166-409e-920c-689fea4ea3f6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 10:10:14 compute-0 ceph-mon[74327]: from='client.? 192.168.122.100:0/3879393744' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 10:10:14 compute-0 nova_compute[254819]: 2025-12-06 10:10:14.918 254824 DEBUG oslo_concurrency.processutils [None req-55520d59-3b01-45c1-af21-e2cf387624ad 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/1b7208203e670301d076a006cb3364d3eb842050 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 10:10:14 compute-0 nova_compute[254819]: 2025-12-06 10:10:14.948 254824 DEBUG nova.policy [None req-55520d59-3b01-45c1-af21-e2cf387624ad 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '03615580775245e6ae335ee9d785611f', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '92b402c8d3e2476abc98be42a1e6d34e', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 06 10:10:15 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:10:15 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c009ad0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:10:15 compute-0 nova_compute[254819]: 2025-12-06 10:10:15.008 254824 DEBUG oslo_concurrency.processutils [None req-55520d59-3b01-45c1-af21-e2cf387624ad 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/1b7208203e670301d076a006cb3364d3eb842050 --force-share --output=json" returned: 0 in 0.090s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 10:10:15 compute-0 nova_compute[254819]: 2025-12-06 10:10:15.009 254824 DEBUG oslo_concurrency.lockutils [None req-55520d59-3b01-45c1-af21-e2cf387624ad 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Acquiring lock "1b7208203e670301d076a006cb3364d3eb842050" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:10:15 compute-0 nova_compute[254819]: 2025-12-06 10:10:15.010 254824 DEBUG oslo_concurrency.lockutils [None req-55520d59-3b01-45c1-af21-e2cf387624ad 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lock "1b7208203e670301d076a006cb3364d3eb842050" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:10:15 compute-0 nova_compute[254819]: 2025-12-06 10:10:15.010 254824 DEBUG oslo_concurrency.lockutils [None req-55520d59-3b01-45c1-af21-e2cf387624ad 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lock "1b7208203e670301d076a006cb3364d3eb842050" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:10:15 compute-0 nova_compute[254819]: 2025-12-06 10:10:15.038 254824 DEBUG nova.storage.rbd_utils [None req-55520d59-3b01-45c1-af21-e2cf387624ad 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] rbd image 467f8e9a-e166-409e-920c-689fea4ea3f6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 10:10:15 compute-0 nova_compute[254819]: 2025-12-06 10:10:15.045 254824 DEBUG oslo_concurrency.processutils [None req-55520d59-3b01-45c1-af21-e2cf387624ad 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/1b7208203e670301d076a006cb3364d3eb842050 467f8e9a-e166-409e-920c-689fea4ea3f6_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 10:10:15 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:10:15 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:10:15 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:10:15.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:10:15 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:10:15 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:10:15 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:10:15.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:10:15 compute-0 nova_compute[254819]: 2025-12-06 10:10:15.352 254824 DEBUG oslo_concurrency.processutils [None req-55520d59-3b01-45c1-af21-e2cf387624ad 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/1b7208203e670301d076a006cb3364d3eb842050 467f8e9a-e166-409e-920c-689fea4ea3f6_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.307s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 10:10:15 compute-0 nova_compute[254819]: 2025-12-06 10:10:15.437 254824 DEBUG nova.storage.rbd_utils [None req-55520d59-3b01-45c1-af21-e2cf387624ad 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] resizing rbd image 467f8e9a-e166-409e-920c-689fea4ea3f6_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 06 10:10:15 compute-0 nova_compute[254819]: 2025-12-06 10:10:15.569 254824 DEBUG nova.objects.instance [None req-55520d59-3b01-45c1-af21-e2cf387624ad 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lazy-loading 'migration_context' on Instance uuid 467f8e9a-e166-409e-920c-689fea4ea3f6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 10:10:15 compute-0 nova_compute[254819]: 2025-12-06 10:10:15.586 254824 DEBUG nova.virt.libvirt.driver [None req-55520d59-3b01-45c1-af21-e2cf387624ad 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 467f8e9a-e166-409e-920c-689fea4ea3f6] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 06 10:10:15 compute-0 nova_compute[254819]: 2025-12-06 10:10:15.587 254824 DEBUG nova.virt.libvirt.driver [None req-55520d59-3b01-45c1-af21-e2cf387624ad 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 467f8e9a-e166-409e-920c-689fea4ea3f6] Ensure instance console log exists: /var/lib/nova/instances/467f8e9a-e166-409e-920c-689fea4ea3f6/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 06 10:10:15 compute-0 nova_compute[254819]: 2025-12-06 10:10:15.588 254824 DEBUG oslo_concurrency.lockutils [None req-55520d59-3b01-45c1-af21-e2cf387624ad 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:10:15 compute-0 nova_compute[254819]: 2025-12-06 10:10:15.589 254824 DEBUG oslo_concurrency.lockutils [None req-55520d59-3b01-45c1-af21-e2cf387624ad 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:10:15 compute-0 nova_compute[254819]: 2025-12-06 10:10:15.590 254824 DEBUG oslo_concurrency.lockutils [None req-55520d59-3b01-45c1-af21-e2cf387624ad 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:10:15 compute-0 nova_compute[254819]: 2025-12-06 10:10:15.601 254824 DEBUG nova.network.neutron [None req-55520d59-3b01-45c1-af21-e2cf387624ad 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 467f8e9a-e166-409e-920c-689fea4ea3f6] Successfully created port: ec2bc9a6-1578-4c92-b8c9-4d286a1a6f4b _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 06 10:10:15 compute-0 nova_compute[254819]: 2025-12-06 10:10:15.765 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:10:15 compute-0 nova_compute[254819]: 2025-12-06 10:10:15.896 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:10:15 compute-0 ceph-mon[74327]: pgmap v873: 337 pgs: 337 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Dec 06 10:10:16 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:10:16 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65e8003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:10:16 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:10:16 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6608003e00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:10:16 compute-0 nova_compute[254819]: 2025-12-06 10:10:16.671 254824 DEBUG nova.network.neutron [None req-55520d59-3b01-45c1-af21-e2cf387624ad 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 467f8e9a-e166-409e-920c-689fea4ea3f6] Successfully updated port: ec2bc9a6-1578-4c92-b8c9-4d286a1a6f4b _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 06 10:10:16 compute-0 nova_compute[254819]: 2025-12-06 10:10:16.689 254824 DEBUG oslo_concurrency.lockutils [None req-55520d59-3b01-45c1-af21-e2cf387624ad 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Acquiring lock "refresh_cache-467f8e9a-e166-409e-920c-689fea4ea3f6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 10:10:16 compute-0 nova_compute[254819]: 2025-12-06 10:10:16.690 254824 DEBUG oslo_concurrency.lockutils [None req-55520d59-3b01-45c1-af21-e2cf387624ad 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Acquired lock "refresh_cache-467f8e9a-e166-409e-920c-689fea4ea3f6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 10:10:16 compute-0 nova_compute[254819]: 2025-12-06 10:10:16.690 254824 DEBUG nova.network.neutron [None req-55520d59-3b01-45c1-af21-e2cf387624ad 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 467f8e9a-e166-409e-920c-689fea4ea3f6] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 06 10:10:16 compute-0 nova_compute[254819]: 2025-12-06 10:10:16.761 254824 DEBUG nova.compute.manager [req-60ab4015-ade8-4b94-92dd-e6ea7917faee req-1b4bdfd5-040d-4146-9ea8-2bd77c9cde2c d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 467f8e9a-e166-409e-920c-689fea4ea3f6] Received event network-changed-ec2bc9a6-1578-4c92-b8c9-4d286a1a6f4b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 10:10:16 compute-0 nova_compute[254819]: 2025-12-06 10:10:16.762 254824 DEBUG nova.compute.manager [req-60ab4015-ade8-4b94-92dd-e6ea7917faee req-1b4bdfd5-040d-4146-9ea8-2bd77c9cde2c d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 467f8e9a-e166-409e-920c-689fea4ea3f6] Refreshing instance network info cache due to event network-changed-ec2bc9a6-1578-4c92-b8c9-4d286a1a6f4b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 10:10:16 compute-0 nova_compute[254819]: 2025-12-06 10:10:16.762 254824 DEBUG oslo_concurrency.lockutils [req-60ab4015-ade8-4b94-92dd-e6ea7917faee req-1b4bdfd5-040d-4146-9ea8-2bd77c9cde2c d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Acquiring lock "refresh_cache-467f8e9a-e166-409e-920c-689fea4ea3f6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
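[annotation] The lockutils lines above are oslo.concurrency's named-lock pattern: nova serializes all work on an instance's network-info cache behind a "refresh_cache-<uuid>" lock, so the spawn path and the external-event handler cannot rebuild the cache concurrently. A minimal sketch of the same pattern, assuming oslo.concurrency is installed; the block body is a hypothetical stand-in for the cache refresh:

    # Named-lock pattern behind the "Acquiring"/"Acquired"/"Releasing"
    # debug lines above (oslo_concurrency.lockutils). The lock name
    # mirrors the log; the block body is a hypothetical placeholder.
    from oslo_concurrency import lockutils

    instance_uuid = "467f8e9a-e166-409e-920c-689fea4ea3f6"

    # lockutils.lock() is a context manager; holding it is what the
    # two request contexts in the log are queueing on.
    with lockutils.lock("refresh_cache-%s" % instance_uuid):
        pass  # rebuild the instance network info cache here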
Dec 06 10:10:16 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v874: 337 pgs: 337 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 10:10:17 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:10:17 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f4001cb0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:10:17 compute-0 nova_compute[254819]: 2025-12-06 10:10:17.146 254824 DEBUG nova.network.neutron [None req-55520d59-3b01-45c1-af21-e2cf387624ad 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 467f8e9a-e166-409e-920c-689fea4ea3f6] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 06 10:10:17 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:10:17 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:10:17 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:10:17.161 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:10:17 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:10:17.292Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec 06 10:10:17 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:10:17.293Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
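[annotation] Both ceph-dashboard receivers are unreachable from this node (i/o timeouts and deadline exceedances against 192.168.122.101/102 on 8443). A minimal connectivity probe, assuming the URL from the log and an empty JSON list as a hypothetical stand-in for alertmanager's webhook payload:

    # Probe the receiver endpoint alertmanager cannot reach above.
    # URL copied from the log; the empty-list payload is hypothetical.
    import json
    import urllib.request

    url = "http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver"
    req = urllib.request.Request(url, data=json.dumps([]).encode(),
                                 headers={"Content-Type": "application/json"})
    try:
        with urllib.request.urlopen(req, timeout=5) as resp:
            print("reachable, HTTP", resp.status)
    except OSError as exc:  # URLError/timeouts both subclass OSError
        print("unreachable:", exc)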
Dec 06 10:10:17 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:10:17 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:10:17 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:10:17.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:10:17 compute-0 nova_compute[254819]: 2025-12-06 10:10:17.796 254824 DEBUG nova.network.neutron [None req-55520d59-3b01-45c1-af21-e2cf387624ad 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 467f8e9a-e166-409e-920c-689fea4ea3f6] Updating instance_info_cache with network_info: [{"id": "ec2bc9a6-1578-4c92-b8c9-4d286a1a6f4b", "address": "fa:16:3e:64:9d:d4", "network": {"id": "4d76af3c-ede9-445b-bea0-ba96a2eaeddd", "bridge": "br-int", "label": "tempest-network-smoke--1753144487", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapec2bc9a6-15", "ovs_interfaceid": "ec2bc9a6-1578-4c92-b8c9-4d286a1a6f4b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
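[annotation] The network_info blob nova caches above is plain JSON, so the binding details are easy to pull out when the one-line dumps get unwieldy. A minimal sketch with the stdlib parser; the literal below is an excerpt of the logged entry, trimmed to the keys actually read:

    # Extract the fields that matter for port binding from the
    # network_info JSON above (excerpted to the keys used here).
    import json

    vif_json = '''
    [{"id": "ec2bc9a6-1578-4c92-b8c9-4d286a1a6f4b",
      "address": "fa:16:3e:64:9d:d4",
      "network": {"subnets": [{"cidr": "10.100.0.0/28",
                               "ips": [{"address": "10.100.0.14"}]}],
                  "meta": {"mtu": 1442}},
      "devname": "tapec2bc9a6-15",
      "active": false}]
    '''

    for vif in json.loads(vif_json):
        ips = [ip["address"]
               for subnet in vif["network"]["subnets"]
               for ip in subnet["ips"]]
        print(vif["id"], vif["address"], vif["devname"],
              ips, "mtu=%d" % vif["network"]["meta"]["mtu"])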
Dec 06 10:10:17 compute-0 nova_compute[254819]: 2025-12-06 10:10:17.820 254824 DEBUG oslo_concurrency.lockutils [None req-55520d59-3b01-45c1-af21-e2cf387624ad 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Releasing lock "refresh_cache-467f8e9a-e166-409e-920c-689fea4ea3f6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 10:10:17 compute-0 nova_compute[254819]: 2025-12-06 10:10:17.821 254824 DEBUG nova.compute.manager [None req-55520d59-3b01-45c1-af21-e2cf387624ad 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 467f8e9a-e166-409e-920c-689fea4ea3f6] Instance network_info: |[{"id": "ec2bc9a6-1578-4c92-b8c9-4d286a1a6f4b", "address": "fa:16:3e:64:9d:d4", "network": {"id": "4d76af3c-ede9-445b-bea0-ba96a2eaeddd", "bridge": "br-int", "label": "tempest-network-smoke--1753144487", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapec2bc9a6-15", "ovs_interfaceid": "ec2bc9a6-1578-4c92-b8c9-4d286a1a6f4b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 06 10:10:17 compute-0 nova_compute[254819]: 2025-12-06 10:10:17.823 254824 DEBUG oslo_concurrency.lockutils [req-60ab4015-ade8-4b94-92dd-e6ea7917faee req-1b4bdfd5-040d-4146-9ea8-2bd77c9cde2c d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Acquired lock "refresh_cache-467f8e9a-e166-409e-920c-689fea4ea3f6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 10:10:17 compute-0 nova_compute[254819]: 2025-12-06 10:10:17.824 254824 DEBUG nova.network.neutron [req-60ab4015-ade8-4b94-92dd-e6ea7917faee req-1b4bdfd5-040d-4146-9ea8-2bd77c9cde2c d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 467f8e9a-e166-409e-920c-689fea4ea3f6] Refreshing network info cache for port ec2bc9a6-1578-4c92-b8c9-4d286a1a6f4b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 06 10:10:17 compute-0 nova_compute[254819]: 2025-12-06 10:10:17.830 254824 DEBUG nova.virt.libvirt.driver [None req-55520d59-3b01-45c1-af21-e2cf387624ad 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 467f8e9a-e166-409e-920c-689fea4ea3f6] Start _get_guest_xml network_info=[{"id": "ec2bc9a6-1578-4c92-b8c9-4d286a1a6f4b", "address": "fa:16:3e:64:9d:d4", "network": {"id": "4d76af3c-ede9-445b-bea0-ba96a2eaeddd", "bridge": "br-int", "label": "tempest-network-smoke--1753144487", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapec2bc9a6-15", "ovs_interfaceid": "ec2bc9a6-1578-4c92-b8c9-4d286a1a6f4b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T10:04:42Z,direct_url=<?>,disk_format='qcow2',id=9489b8a5-a798-4e26-87f9-59bb1eb2e6fd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3e0ab101ca7547d4a515169a0f2edef3',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T10:04:45Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'encrypted': False, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '9489b8a5-a798-4e26-87f9-59bb1eb2e6fd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 06 10:10:17 compute-0 nova_compute[254819]: 2025-12-06 10:10:17.837 254824 WARNING nova.virt.libvirt.driver [None req-55520d59-3b01-45c1-af21-e2cf387624ad 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 10:10:17 compute-0 nova_compute[254819]: 2025-12-06 10:10:17.847 254824 DEBUG nova.virt.libvirt.host [None req-55520d59-3b01-45c1-af21-e2cf387624ad 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 06 10:10:17 compute-0 nova_compute[254819]: 2025-12-06 10:10:17.848 254824 DEBUG nova.virt.libvirt.host [None req-55520d59-3b01-45c1-af21-e2cf387624ad 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 06 10:10:17 compute-0 nova_compute[254819]: 2025-12-06 10:10:17.852 254824 DEBUG nova.virt.libvirt.host [None req-55520d59-3b01-45c1-af21-e2cf387624ad 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 06 10:10:17 compute-0 nova_compute[254819]: 2025-12-06 10:10:17.853 254824 DEBUG nova.virt.libvirt.host [None req-55520d59-3b01-45c1-af21-e2cf387624ad 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 06 10:10:17 compute-0 nova_compute[254819]: 2025-12-06 10:10:17.853 254824 DEBUG nova.virt.libvirt.driver [None req-55520d59-3b01-45c1-af21-e2cf387624ad 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 06 10:10:17 compute-0 nova_compute[254819]: 2025-12-06 10:10:17.854 254824 DEBUG nova.virt.hardware [None req-55520d59-3b01-45c1-af21-e2cf387624ad 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T10:04:41Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='0a252b9c-cc5f-41b2-a8b2-94fcf6e74d22',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T10:04:42Z,direct_url=<?>,disk_format='qcow2',id=9489b8a5-a798-4e26-87f9-59bb1eb2e6fd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3e0ab101ca7547d4a515169a0f2edef3',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T10:04:45Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 06 10:10:17 compute-0 nova_compute[254819]: 2025-12-06 10:10:17.855 254824 DEBUG nova.virt.hardware [None req-55520d59-3b01-45c1-af21-e2cf387624ad 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 06 10:10:17 compute-0 nova_compute[254819]: 2025-12-06 10:10:17.856 254824 DEBUG nova.virt.hardware [None req-55520d59-3b01-45c1-af21-e2cf387624ad 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 06 10:10:17 compute-0 nova_compute[254819]: 2025-12-06 10:10:17.856 254824 DEBUG nova.virt.hardware [None req-55520d59-3b01-45c1-af21-e2cf387624ad 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 06 10:10:17 compute-0 nova_compute[254819]: 2025-12-06 10:10:17.857 254824 DEBUG nova.virt.hardware [None req-55520d59-3b01-45c1-af21-e2cf387624ad 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 06 10:10:17 compute-0 nova_compute[254819]: 2025-12-06 10:10:17.857 254824 DEBUG nova.virt.hardware [None req-55520d59-3b01-45c1-af21-e2cf387624ad 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 06 10:10:17 compute-0 nova_compute[254819]: 2025-12-06 10:10:17.858 254824 DEBUG nova.virt.hardware [None req-55520d59-3b01-45c1-af21-e2cf387624ad 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 06 10:10:17 compute-0 nova_compute[254819]: 2025-12-06 10:10:17.858 254824 DEBUG nova.virt.hardware [None req-55520d59-3b01-45c1-af21-e2cf387624ad 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 06 10:10:17 compute-0 nova_compute[254819]: 2025-12-06 10:10:17.859 254824 DEBUG nova.virt.hardware [None req-55520d59-3b01-45c1-af21-e2cf387624ad 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 06 10:10:17 compute-0 nova_compute[254819]: 2025-12-06 10:10:17.859 254824 DEBUG nova.virt.hardware [None req-55520d59-3b01-45c1-af21-e2cf387624ad 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 06 10:10:17 compute-0 nova_compute[254819]: 2025-12-06 10:10:17.859 254824 DEBUG nova.virt.hardware [None req-55520d59-3b01-45c1-af21-e2cf387624ad 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 06 10:10:17 compute-0 nova_compute[254819]: 2025-12-06 10:10:17.865 254824 DEBUG oslo_concurrency.processutils [None req-55520d59-3b01-45c1-af21-e2cf387624ad 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 10:10:17 compute-0 ceph-mon[74327]: pgmap v874: 337 pgs: 337 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 10:10:18 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 06 10:10:18 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4273770741' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 10:10:18 compute-0 nova_compute[254819]: 2025-12-06 10:10:18.325 254824 DEBUG oslo_concurrency.processutils [None req-55520d59-3b01-45c1-af21-e2cf387624ad 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
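[annotation] For monitor discovery the RBD backend shells out to the ceph CLI rather than using a binding; the exact command line is in the processutils entries above. A sketch that re-runs it and lists the monitor addresses, assuming the same --id/--conf as the log and the standard "mons" layout of the JSON output:

    # Re-run nova's monitor-discovery command and print mon addresses.
    # Command, --id and --conf are copied verbatim from the log.
    import json
    import subprocess

    out = subprocess.check_output(
        ["ceph", "mon", "dump", "--format=json",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"])

    for mon in json.loads(out)["mons"]:
        print(mon["name"], mon.get("public_addr"))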
Dec 06 10:10:18 compute-0 nova_compute[254819]: 2025-12-06 10:10:18.398 254824 DEBUG nova.storage.rbd_utils [None req-55520d59-3b01-45c1-af21-e2cf387624ad 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] rbd image 467f8e9a-e166-409e-920c-689fea4ea3f6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 10:10:18 compute-0 nova_compute[254819]: 2025-12-06 10:10:18.403 254824 DEBUG oslo_concurrency.processutils [None req-55520d59-3b01-45c1-af21-e2cf387624ad 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 10:10:18 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:10:18 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c009ad0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:10:18 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:10:18 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65e8003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:10:18 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 06 10:10:18 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2944879998' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 10:10:18 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v875: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 06 10:10:18 compute-0 nova_compute[254819]: 2025-12-06 10:10:18.845 254824 DEBUG oslo_concurrency.processutils [None req-55520d59-3b01-45c1-af21-e2cf387624ad 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 10:10:18 compute-0 nova_compute[254819]: 2025-12-06 10:10:18.847 254824 DEBUG nova.virt.libvirt.vif [None req-55520d59-3b01-45c1-af21-e2cf387624ad 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T10:10:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-883828898',display_name='tempest-TestNetworkBasicOps-server-883828898',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-883828898',id=6,image_ref='9489b8a5-a798-4e26-87f9-59bb1eb2e6fd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBavG4AKWHlfpiq0SQasTveyxdMuqwUIBzXgDHnQ7us03WRPTjmnHIL9KdumxPOuSQ7mS9TjZaDU1Z0fZMB9bCP4vMT4dbs0/4ZtyRDMtJHhAJtsWO/6Dg3g/pdboWhC+A==',key_name='tempest-TestNetworkBasicOps-875879575',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='92b402c8d3e2476abc98be42a1e6d34e',ramdisk_id='',reservation_id='r-qxktas63',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='9489b8a5-a798-4e26-87f9-59bb1eb2e6fd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1971100882',owner_user_name='tempest-TestNetworkBasicOps-1971100882-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T10:10:14Z,user_data=None,user_id='03615580775245e6ae335ee9d785611f',uuid=467f8e9a-e166-409e-920c-689fea4ea3f6,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ec2bc9a6-1578-4c92-b8c9-4d286a1a6f4b", "address": "fa:16:3e:64:9d:d4", "network": {"id": "4d76af3c-ede9-445b-bea0-ba96a2eaeddd", "bridge": "br-int", "label": "tempest-network-smoke--1753144487", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapec2bc9a6-15", "ovs_interfaceid": "ec2bc9a6-1578-4c92-b8c9-4d286a1a6f4b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 06 10:10:18 compute-0 nova_compute[254819]: 2025-12-06 10:10:18.847 254824 DEBUG nova.network.os_vif_util [None req-55520d59-3b01-45c1-af21-e2cf387624ad 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Converting VIF {"id": "ec2bc9a6-1578-4c92-b8c9-4d286a1a6f4b", "address": "fa:16:3e:64:9d:d4", "network": {"id": "4d76af3c-ede9-445b-bea0-ba96a2eaeddd", "bridge": "br-int", "label": "tempest-network-smoke--1753144487", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapec2bc9a6-15", "ovs_interfaceid": "ec2bc9a6-1578-4c92-b8c9-4d286a1a6f4b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 10:10:18 compute-0 nova_compute[254819]: 2025-12-06 10:10:18.848 254824 DEBUG nova.network.os_vif_util [None req-55520d59-3b01-45c1-af21-e2cf387624ad 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:64:9d:d4,bridge_name='br-int',has_traffic_filtering=True,id=ec2bc9a6-1578-4c92-b8c9-4d286a1a6f4b,network=Network(4d76af3c-ede9-445b-bea0-ba96a2eaeddd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapec2bc9a6-15') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 10:10:18 compute-0 nova_compute[254819]: 2025-12-06 10:10:18.849 254824 DEBUG nova.objects.instance [None req-55520d59-3b01-45c1-af21-e2cf387624ad 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lazy-loading 'pci_devices' on Instance uuid 467f8e9a-e166-409e-920c-689fea4ea3f6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 10:10:18 compute-0 nova_compute[254819]: 2025-12-06 10:10:18.866 254824 DEBUG nova.virt.libvirt.driver [None req-55520d59-3b01-45c1-af21-e2cf387624ad 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 467f8e9a-e166-409e-920c-689fea4ea3f6] End _get_guest_xml xml=<domain type="kvm">
Dec 06 10:10:18 compute-0 nova_compute[254819]:   <uuid>467f8e9a-e166-409e-920c-689fea4ea3f6</uuid>
Dec 06 10:10:18 compute-0 nova_compute[254819]:   <name>instance-00000006</name>
Dec 06 10:10:18 compute-0 nova_compute[254819]:   <memory>131072</memory>
Dec 06 10:10:18 compute-0 nova_compute[254819]:   <vcpu>1</vcpu>
Dec 06 10:10:18 compute-0 nova_compute[254819]:   <metadata>
Dec 06 10:10:18 compute-0 nova_compute[254819]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 06 10:10:18 compute-0 nova_compute[254819]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 06 10:10:18 compute-0 nova_compute[254819]:       <nova:name>tempest-TestNetworkBasicOps-server-883828898</nova:name>
Dec 06 10:10:18 compute-0 nova_compute[254819]:       <nova:creationTime>2025-12-06 10:10:17</nova:creationTime>
Dec 06 10:10:18 compute-0 nova_compute[254819]:       <nova:flavor name="m1.nano">
Dec 06 10:10:18 compute-0 nova_compute[254819]:         <nova:memory>128</nova:memory>
Dec 06 10:10:18 compute-0 nova_compute[254819]:         <nova:disk>1</nova:disk>
Dec 06 10:10:18 compute-0 nova_compute[254819]:         <nova:swap>0</nova:swap>
Dec 06 10:10:18 compute-0 nova_compute[254819]:         <nova:ephemeral>0</nova:ephemeral>
Dec 06 10:10:18 compute-0 nova_compute[254819]:         <nova:vcpus>1</nova:vcpus>
Dec 06 10:10:18 compute-0 nova_compute[254819]:       </nova:flavor>
Dec 06 10:10:18 compute-0 nova_compute[254819]:       <nova:owner>
Dec 06 10:10:18 compute-0 nova_compute[254819]:         <nova:user uuid="03615580775245e6ae335ee9d785611f">tempest-TestNetworkBasicOps-1971100882-project-member</nova:user>
Dec 06 10:10:18 compute-0 nova_compute[254819]:         <nova:project uuid="92b402c8d3e2476abc98be42a1e6d34e">tempest-TestNetworkBasicOps-1971100882</nova:project>
Dec 06 10:10:18 compute-0 nova_compute[254819]:       </nova:owner>
Dec 06 10:10:18 compute-0 nova_compute[254819]:       <nova:root type="image" uuid="9489b8a5-a798-4e26-87f9-59bb1eb2e6fd"/>
Dec 06 10:10:18 compute-0 nova_compute[254819]:       <nova:ports>
Dec 06 10:10:18 compute-0 nova_compute[254819]:         <nova:port uuid="ec2bc9a6-1578-4c92-b8c9-4d286a1a6f4b">
Dec 06 10:10:18 compute-0 nova_compute[254819]:           <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Dec 06 10:10:18 compute-0 nova_compute[254819]:         </nova:port>
Dec 06 10:10:18 compute-0 nova_compute[254819]:       </nova:ports>
Dec 06 10:10:18 compute-0 nova_compute[254819]:     </nova:instance>
Dec 06 10:10:18 compute-0 nova_compute[254819]:   </metadata>
Dec 06 10:10:18 compute-0 nova_compute[254819]:   <sysinfo type="smbios">
Dec 06 10:10:18 compute-0 nova_compute[254819]:     <system>
Dec 06 10:10:18 compute-0 nova_compute[254819]:       <entry name="manufacturer">RDO</entry>
Dec 06 10:10:18 compute-0 nova_compute[254819]:       <entry name="product">OpenStack Compute</entry>
Dec 06 10:10:18 compute-0 nova_compute[254819]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 06 10:10:18 compute-0 nova_compute[254819]:       <entry name="serial">467f8e9a-e166-409e-920c-689fea4ea3f6</entry>
Dec 06 10:10:18 compute-0 nova_compute[254819]:       <entry name="uuid">467f8e9a-e166-409e-920c-689fea4ea3f6</entry>
Dec 06 10:10:18 compute-0 nova_compute[254819]:       <entry name="family">Virtual Machine</entry>
Dec 06 10:10:18 compute-0 nova_compute[254819]:     </system>
Dec 06 10:10:18 compute-0 nova_compute[254819]:   </sysinfo>
Dec 06 10:10:18 compute-0 nova_compute[254819]:   <os>
Dec 06 10:10:18 compute-0 nova_compute[254819]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 06 10:10:18 compute-0 nova_compute[254819]:     <boot dev="hd"/>
Dec 06 10:10:18 compute-0 nova_compute[254819]:     <smbios mode="sysinfo"/>
Dec 06 10:10:18 compute-0 nova_compute[254819]:   </os>
Dec 06 10:10:18 compute-0 nova_compute[254819]:   <features>
Dec 06 10:10:18 compute-0 nova_compute[254819]:     <acpi/>
Dec 06 10:10:18 compute-0 nova_compute[254819]:     <apic/>
Dec 06 10:10:18 compute-0 nova_compute[254819]:     <vmcoreinfo/>
Dec 06 10:10:18 compute-0 nova_compute[254819]:   </features>
Dec 06 10:10:18 compute-0 nova_compute[254819]:   <clock offset="utc">
Dec 06 10:10:18 compute-0 nova_compute[254819]:     <timer name="pit" tickpolicy="delay"/>
Dec 06 10:10:18 compute-0 nova_compute[254819]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 06 10:10:18 compute-0 nova_compute[254819]:     <timer name="hpet" present="no"/>
Dec 06 10:10:18 compute-0 nova_compute[254819]:   </clock>
Dec 06 10:10:18 compute-0 nova_compute[254819]:   <cpu mode="host-model" match="exact">
Dec 06 10:10:18 compute-0 nova_compute[254819]:     <topology sockets="1" cores="1" threads="1"/>
Dec 06 10:10:18 compute-0 nova_compute[254819]:   </cpu>
Dec 06 10:10:18 compute-0 nova_compute[254819]:   <devices>
Dec 06 10:10:18 compute-0 nova_compute[254819]:     <disk type="network" device="disk">
Dec 06 10:10:18 compute-0 nova_compute[254819]:       <driver type="raw" cache="none"/>
Dec 06 10:10:18 compute-0 nova_compute[254819]:       <source protocol="rbd" name="vms/467f8e9a-e166-409e-920c-689fea4ea3f6_disk">
Dec 06 10:10:18 compute-0 nova_compute[254819]:         <host name="192.168.122.100" port="6789"/>
Dec 06 10:10:18 compute-0 nova_compute[254819]:         <host name="192.168.122.102" port="6789"/>
Dec 06 10:10:18 compute-0 nova_compute[254819]:         <host name="192.168.122.101" port="6789"/>
Dec 06 10:10:18 compute-0 nova_compute[254819]:       </source>
Dec 06 10:10:18 compute-0 nova_compute[254819]:       <auth username="openstack">
Dec 06 10:10:18 compute-0 nova_compute[254819]:         <secret type="ceph" uuid="5ecd3f74-dade-5fc4-92ce-8950ae424258"/>
Dec 06 10:10:18 compute-0 nova_compute[254819]:       </auth>
Dec 06 10:10:18 compute-0 nova_compute[254819]:       <target dev="vda" bus="virtio"/>
Dec 06 10:10:18 compute-0 nova_compute[254819]:     </disk>
Dec 06 10:10:18 compute-0 nova_compute[254819]:     <disk type="network" device="cdrom">
Dec 06 10:10:18 compute-0 nova_compute[254819]:       <driver type="raw" cache="none"/>
Dec 06 10:10:18 compute-0 nova_compute[254819]:       <source protocol="rbd" name="vms/467f8e9a-e166-409e-920c-689fea4ea3f6_disk.config">
Dec 06 10:10:18 compute-0 nova_compute[254819]:         <host name="192.168.122.100" port="6789"/>
Dec 06 10:10:18 compute-0 nova_compute[254819]:         <host name="192.168.122.102" port="6789"/>
Dec 06 10:10:18 compute-0 nova_compute[254819]:         <host name="192.168.122.101" port="6789"/>
Dec 06 10:10:18 compute-0 nova_compute[254819]:       </source>
Dec 06 10:10:18 compute-0 nova_compute[254819]:       <auth username="openstack">
Dec 06 10:10:18 compute-0 nova_compute[254819]:         <secret type="ceph" uuid="5ecd3f74-dade-5fc4-92ce-8950ae424258"/>
Dec 06 10:10:18 compute-0 nova_compute[254819]:       </auth>
Dec 06 10:10:18 compute-0 nova_compute[254819]:       <target dev="sda" bus="sata"/>
Dec 06 10:10:18 compute-0 nova_compute[254819]:     </disk>
Dec 06 10:10:18 compute-0 nova_compute[254819]:     <interface type="ethernet">
Dec 06 10:10:18 compute-0 nova_compute[254819]:       <mac address="fa:16:3e:64:9d:d4"/>
Dec 06 10:10:18 compute-0 nova_compute[254819]:       <model type="virtio"/>
Dec 06 10:10:18 compute-0 nova_compute[254819]:       <driver name="vhost" rx_queue_size="512"/>
Dec 06 10:10:18 compute-0 nova_compute[254819]:       <mtu size="1442"/>
Dec 06 10:10:18 compute-0 nova_compute[254819]:       <target dev="tapec2bc9a6-15"/>
Dec 06 10:10:18 compute-0 nova_compute[254819]:     </interface>
Dec 06 10:10:18 compute-0 nova_compute[254819]:     <serial type="pty">
Dec 06 10:10:18 compute-0 nova_compute[254819]:       <log file="/var/lib/nova/instances/467f8e9a-e166-409e-920c-689fea4ea3f6/console.log" append="off"/>
Dec 06 10:10:18 compute-0 nova_compute[254819]:     </serial>
Dec 06 10:10:18 compute-0 nova_compute[254819]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 06 10:10:18 compute-0 nova_compute[254819]:     <video>
Dec 06 10:10:18 compute-0 nova_compute[254819]:       <model type="virtio"/>
Dec 06 10:10:18 compute-0 nova_compute[254819]:     </video>
Dec 06 10:10:18 compute-0 nova_compute[254819]:     <input type="tablet" bus="usb"/>
Dec 06 10:10:18 compute-0 nova_compute[254819]:     <rng model="virtio">
Dec 06 10:10:18 compute-0 nova_compute[254819]:       <backend model="random">/dev/urandom</backend>
Dec 06 10:10:18 compute-0 nova_compute[254819]:     </rng>
Dec 06 10:10:18 compute-0 nova_compute[254819]:     <controller type="pci" model="pcie-root"/>
Dec 06 10:10:18 compute-0 nova_compute[254819]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:10:18 compute-0 nova_compute[254819]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:10:18 compute-0 nova_compute[254819]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:10:18 compute-0 nova_compute[254819]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:10:18 compute-0 nova_compute[254819]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:10:18 compute-0 nova_compute[254819]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:10:18 compute-0 nova_compute[254819]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:10:18 compute-0 nova_compute[254819]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:10:18 compute-0 nova_compute[254819]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:10:18 compute-0 nova_compute[254819]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:10:18 compute-0 nova_compute[254819]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:10:18 compute-0 nova_compute[254819]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:10:18 compute-0 nova_compute[254819]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:10:18 compute-0 nova_compute[254819]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:10:18 compute-0 nova_compute[254819]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:10:18 compute-0 nova_compute[254819]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:10:18 compute-0 nova_compute[254819]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:10:18 compute-0 nova_compute[254819]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:10:18 compute-0 nova_compute[254819]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:10:18 compute-0 nova_compute[254819]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:10:18 compute-0 nova_compute[254819]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:10:18 compute-0 nova_compute[254819]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:10:18 compute-0 nova_compute[254819]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:10:18 compute-0 nova_compute[254819]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:10:18 compute-0 nova_compute[254819]:     <controller type="usb" index="0"/>
Dec 06 10:10:18 compute-0 nova_compute[254819]:     <memballoon model="virtio">
Dec 06 10:10:18 compute-0 nova_compute[254819]:       <stats period="10"/>
Dec 06 10:10:18 compute-0 nova_compute[254819]:     </memballoon>
Dec 06 10:10:18 compute-0 nova_compute[254819]:   </devices>
Dec 06 10:10:18 compute-0 nova_compute[254819]: </domain>
Dec 06 10:10:18 compute-0 nova_compute[254819]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
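[annotation] The domain definition dumped above is ordinary libvirt XML, so it can be checked mechanically, e.g. that both disks point at the same three Ceph monitors. A minimal sketch with the stdlib parser, assuming the XML has been saved to a local file (the filename is hypothetical):

    # List each RBD-backed disk in the guest XML above together with
    # the monitor endpoints it uses. "instance-00000006.xml" is a
    # hypothetical local copy of the dumped domain XML.
    import xml.etree.ElementTree as ET

    tree = ET.parse("instance-00000006.xml")
    for disk in tree.findall("./devices/disk"):
        source = disk.find("source")
        target = disk.find("target")
        if source is None or source.get("protocol") != "rbd":
            continue
        hosts = ["%s:%s" % (h.get("name"), h.get("port"))
                 for h in source.findall("host")]
        print(target.get("dev"), source.get("name"), hosts)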
Dec 06 10:10:18 compute-0 nova_compute[254819]: 2025-12-06 10:10:18.868 254824 DEBUG nova.compute.manager [None req-55520d59-3b01-45c1-af21-e2cf387624ad 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 467f8e9a-e166-409e-920c-689fea4ea3f6] Preparing to wait for external event network-vif-plugged-ec2bc9a6-1578-4c92-b8c9-4d286a1a6f4b prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 06 10:10:18 compute-0 nova_compute[254819]: 2025-12-06 10:10:18.868 254824 DEBUG oslo_concurrency.lockutils [None req-55520d59-3b01-45c1-af21-e2cf387624ad 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Acquiring lock "467f8e9a-e166-409e-920c-689fea4ea3f6-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:10:18 compute-0 nova_compute[254819]: 2025-12-06 10:10:18.869 254824 DEBUG oslo_concurrency.lockutils [None req-55520d59-3b01-45c1-af21-e2cf387624ad 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lock "467f8e9a-e166-409e-920c-689fea4ea3f6-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:10:18 compute-0 nova_compute[254819]: 2025-12-06 10:10:18.869 254824 DEBUG oslo_concurrency.lockutils [None req-55520d59-3b01-45c1-af21-e2cf387624ad 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lock "467f8e9a-e166-409e-920c-689fea4ea3f6-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:10:18 compute-0 nova_compute[254819]: 2025-12-06 10:10:18.870 254824 DEBUG nova.virt.libvirt.vif [None req-55520d59-3b01-45c1-af21-e2cf387624ad 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T10:10:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-883828898',display_name='tempest-TestNetworkBasicOps-server-883828898',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-883828898',id=6,image_ref='9489b8a5-a798-4e26-87f9-59bb1eb2e6fd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBavG4AKWHlfpiq0SQasTveyxdMuqwUIBzXgDHnQ7us03WRPTjmnHIL9KdumxPOuSQ7mS9TjZaDU1Z0fZMB9bCP4vMT4dbs0/4ZtyRDMtJHhAJtsWO/6Dg3g/pdboWhC+A==',key_name='tempest-TestNetworkBasicOps-875879575',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='92b402c8d3e2476abc98be42a1e6d34e',ramdisk_id='',reservation_id='r-qxktas63',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='9489b8a5-a798-4e26-87f9-59bb1eb2e6fd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1971100882',owner_user_name='tempest-TestNetworkBasicOps-1971100882-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T10:10:14Z,user_data=None,user_id='03615580775245e6ae335ee9d785611f',uuid=467f8e9a-e166-409e-920c-689fea4ea3f6,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ec2bc9a6-1578-4c92-b8c9-4d286a1a6f4b", "address": "fa:16:3e:64:9d:d4", "network": {"id": "4d76af3c-ede9-445b-bea0-ba96a2eaeddd", "bridge": "br-int", "label": "tempest-network-smoke--1753144487", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapec2bc9a6-15", "ovs_interfaceid": "ec2bc9a6-1578-4c92-b8c9-4d286a1a6f4b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 06 10:10:18 compute-0 nova_compute[254819]: 2025-12-06 10:10:18.870 254824 DEBUG nova.network.os_vif_util [None req-55520d59-3b01-45c1-af21-e2cf387624ad 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Converting VIF {"id": "ec2bc9a6-1578-4c92-b8c9-4d286a1a6f4b", "address": "fa:16:3e:64:9d:d4", "network": {"id": "4d76af3c-ede9-445b-bea0-ba96a2eaeddd", "bridge": "br-int", "label": "tempest-network-smoke--1753144487", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapec2bc9a6-15", "ovs_interfaceid": "ec2bc9a6-1578-4c92-b8c9-4d286a1a6f4b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 10:10:18 compute-0 nova_compute[254819]: 2025-12-06 10:10:18.871 254824 DEBUG nova.network.os_vif_util [None req-55520d59-3b01-45c1-af21-e2cf387624ad 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:64:9d:d4,bridge_name='br-int',has_traffic_filtering=True,id=ec2bc9a6-1578-4c92-b8c9-4d286a1a6f4b,network=Network(4d76af3c-ede9-445b-bea0-ba96a2eaeddd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapec2bc9a6-15') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 10:10:18 compute-0 nova_compute[254819]: 2025-12-06 10:10:18.872 254824 DEBUG os_vif [None req-55520d59-3b01-45c1-af21-e2cf387624ad 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:64:9d:d4,bridge_name='br-int',has_traffic_filtering=True,id=ec2bc9a6-1578-4c92-b8c9-4d286a1a6f4b,network=Network(4d76af3c-ede9-445b-bea0-ba96a2eaeddd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapec2bc9a6-15') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 06 10:10:18 compute-0 nova_compute[254819]: 2025-12-06 10:10:18.873 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:10:18 compute-0 nova_compute[254819]: 2025-12-06 10:10:18.873 254824 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 10:10:18 compute-0 nova_compute[254819]: 2025-12-06 10:10:18.874 254824 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 10:10:18 compute-0 nova_compute[254819]: 2025-12-06 10:10:18.878 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:10:18 compute-0 nova_compute[254819]: 2025-12-06 10:10:18.878 254824 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapec2bc9a6-15, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 10:10:18 compute-0 nova_compute[254819]: 2025-12-06 10:10:18.879 254824 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapec2bc9a6-15, col_values=(('external_ids', {'iface-id': 'ec2bc9a6-1578-4c92-b8c9-4d286a1a6f4b', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:64:9d:d4', 'vm-uuid': '467f8e9a-e166-409e-920c-689fea4ea3f6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 10:10:18 compute-0 nova_compute[254819]: 2025-12-06 10:10:18.880 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:10:18 compute-0 NetworkManager[48882]: <info>  [1765015818.8814] manager: (tapec2bc9a6-15): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/48)
Dec 06 10:10:18 compute-0 nova_compute[254819]: 2025-12-06 10:10:18.882 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 10:10:18 compute-0 nova_compute[254819]: 2025-12-06 10:10:18.891 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:10:18 compute-0 nova_compute[254819]: 2025-12-06 10:10:18.892 254824 INFO os_vif [None req-55520d59-3b01-45c1-af21-e2cf387624ad 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:64:9d:d4,bridge_name='br-int',has_traffic_filtering=True,id=ec2bc9a6-1578-4c92-b8c9-4d286a1a6f4b,network=Network(4d76af3c-ede9-445b-bea0-ba96a2eaeddd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapec2bc9a6-15')
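[annotation] The plug above is two ovsdbapp transactions: first AddBridgeCommand (a no-op here, hence "Transaction caused no change"), then AddPortCommand plus a DbSetCommand writing the external_ids that OVN uses to bind the port. A sketch of the equivalent ovs-vsctl calls, with the bridge, port name and external_ids values copied from the log:

    # CLI equivalent (sketch) of the ovsdbapp commands logged above;
    # bridge, port name and external_ids are copied from the log.
    import subprocess

    port = "tapec2bc9a6-15"
    subprocess.check_call(["ovs-vsctl", "--may-exist", "add-br", "br-int"])
    subprocess.check_call(["ovs-vsctl", "--may-exist",
                           "add-port", "br-int", port])
    subprocess.check_call(
        ["ovs-vsctl", "set", "Interface", port,
         "external_ids:iface-id=ec2bc9a6-1578-4c92-b8c9-4d286a1a6f4b",
         "external_ids:iface-status=active",
         "external_ids:attached-mac=fa:16:3e:64:9d:d4",
         "external_ids:vm-uuid=467f8e9a-e166-409e-920c-689fea4ea3f6"])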
Dec 06 10:10:18 compute-0 nova_compute[254819]: 2025-12-06 10:10:18.917 254824 DEBUG nova.network.neutron [req-60ab4015-ade8-4b94-92dd-e6ea7917faee req-1b4bdfd5-040d-4146-9ea8-2bd77c9cde2c d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 467f8e9a-e166-409e-920c-689fea4ea3f6] Updated VIF entry in instance network info cache for port ec2bc9a6-1578-4c92-b8c9-4d286a1a6f4b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 06 10:10:18 compute-0 nova_compute[254819]: 2025-12-06 10:10:18.918 254824 DEBUG nova.network.neutron [req-60ab4015-ade8-4b94-92dd-e6ea7917faee req-1b4bdfd5-040d-4146-9ea8-2bd77c9cde2c d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 467f8e9a-e166-409e-920c-689fea4ea3f6] Updating instance_info_cache with network_info: [{"id": "ec2bc9a6-1578-4c92-b8c9-4d286a1a6f4b", "address": "fa:16:3e:64:9d:d4", "network": {"id": "4d76af3c-ede9-445b-bea0-ba96a2eaeddd", "bridge": "br-int", "label": "tempest-network-smoke--1753144487", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapec2bc9a6-15", "ovs_interfaceid": "ec2bc9a6-1578-4c92-b8c9-4d286a1a6f4b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 10:10:18 compute-0 nova_compute[254819]: 2025-12-06 10:10:18.952 254824 DEBUG oslo_concurrency.lockutils [req-60ab4015-ade8-4b94-92dd-e6ea7917faee req-1b4bdfd5-040d-4146-9ea8-2bd77c9cde2c d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Releasing lock "refresh_cache-467f8e9a-e166-409e-920c-689fea4ea3f6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 10:10:18 compute-0 nova_compute[254819]: 2025-12-06 10:10:18.975 254824 DEBUG nova.virt.libvirt.driver [None req-55520d59-3b01-45c1-af21-e2cf387624ad 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 10:10:18 compute-0 ceph-mon[74327]: from='client.? 192.168.122.100:0/4273770741' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 10:10:18 compute-0 ceph-mon[74327]: from='client.? 192.168.122.100:0/2944879998' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 10:10:18 compute-0 nova_compute[254819]: 2025-12-06 10:10:18.975 254824 DEBUG nova.virt.libvirt.driver [None req-55520d59-3b01-45c1-af21-e2cf387624ad 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 10:10:18 compute-0 nova_compute[254819]: 2025-12-06 10:10:18.976 254824 DEBUG nova.virt.libvirt.driver [None req-55520d59-3b01-45c1-af21-e2cf387624ad 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] No VIF found with MAC fa:16:3e:64:9d:d4, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 06 10:10:18 compute-0 nova_compute[254819]: 2025-12-06 10:10:18.977 254824 INFO nova.virt.libvirt.driver [None req-55520d59-3b01-45c1-af21-e2cf387624ad 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 467f8e9a-e166-409e-920c-689fea4ea3f6] Using config drive
Dec 06 10:10:19 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:10:19.008Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec 06 10:10:19 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:10:19.008Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec 06 10:10:19 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:10:19 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6608003e00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:10:19 compute-0 nova_compute[254819]: 2025-12-06 10:10:19.019 254824 DEBUG nova.storage.rbd_utils [None req-55520d59-3b01-45c1-af21-e2cf387624ad 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] rbd image 467f8e9a-e166-409e-920c-689fea4ea3f6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 10:10:19 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:10:19 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:10:19 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:10:19.164 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:10:19 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:10:19 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:10:19 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:10:19.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:10:19 compute-0 nova_compute[254819]: 2025-12-06 10:10:19.396 254824 INFO nova.virt.libvirt.driver [None req-55520d59-3b01-45c1-af21-e2cf387624ad 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 467f8e9a-e166-409e-920c-689fea4ea3f6] Creating config drive at /var/lib/nova/instances/467f8e9a-e166-409e-920c-689fea4ea3f6/disk.config
Dec 06 10:10:19 compute-0 nova_compute[254819]: 2025-12-06 10:10:19.404 254824 DEBUG oslo_concurrency.processutils [None req-55520d59-3b01-45c1-af21-e2cf387624ad 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/467f8e9a-e166-409e-920c-689fea4ea3f6/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp5_jpp5rc execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 10:10:19 compute-0 nova_compute[254819]: 2025-12-06 10:10:19.550 254824 DEBUG oslo_concurrency.processutils [None req-55520d59-3b01-45c1-af21-e2cf387624ad 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/467f8e9a-e166-409e-920c-689fea4ea3f6/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp5_jpp5rc" returned: 0 in 0.146s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 10:10:19 compute-0 nova_compute[254819]: 2025-12-06 10:10:19.601 254824 DEBUG nova.storage.rbd_utils [None req-55520d59-3b01-45c1-af21-e2cf387624ad 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] rbd image 467f8e9a-e166-409e-920c-689fea4ea3f6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 10:10:19 compute-0 nova_compute[254819]: 2025-12-06 10:10:19.607 254824 DEBUG oslo_concurrency.processutils [None req-55520d59-3b01-45c1-af21-e2cf387624ad 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/467f8e9a-e166-409e-920c-689fea4ea3f6/disk.config 467f8e9a-e166-409e-920c-689fea4ea3f6_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 10:10:19 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:10:19 compute-0 nova_compute[254819]: 2025-12-06 10:10:19.783 254824 DEBUG oslo_concurrency.processutils [None req-55520d59-3b01-45c1-af21-e2cf387624ad 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/467f8e9a-e166-409e-920c-689fea4ea3f6/disk.config 467f8e9a-e166-409e-920c-689fea4ea3f6_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.176s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 10:10:19 compute-0 nova_compute[254819]: 2025-12-06 10:10:19.784 254824 INFO nova.virt.libvirt.driver [None req-55520d59-3b01-45c1-af21-e2cf387624ad 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 467f8e9a-e166-409e-920c-689fea4ea3f6] Deleting local config drive /var/lib/nova/instances/467f8e9a-e166-409e-920c-689fea4ea3f6/disk.config because it was imported into RBD.
Dec 06 10:10:19 compute-0 kernel: tapec2bc9a6-15: entered promiscuous mode
Dec 06 10:10:19 compute-0 NetworkManager[48882]: <info>  [1765015819.8679] manager: (tapec2bc9a6-15): new Tun device (/org/freedesktop/NetworkManager/Devices/49)
Dec 06 10:10:19 compute-0 ovn_controller[152417]: 2025-12-06T10:10:19Z|00065|binding|INFO|Claiming lport ec2bc9a6-1578-4c92-b8c9-4d286a1a6f4b for this chassis.
Dec 06 10:10:19 compute-0 ovn_controller[152417]: 2025-12-06T10:10:19Z|00066|binding|INFO|ec2bc9a6-1578-4c92-b8c9-4d286a1a6f4b: Claiming fa:16:3e:64:9d:d4 10.100.0.14
Dec 06 10:10:19 compute-0 nova_compute[254819]: 2025-12-06 10:10:19.872 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:10:19 compute-0 nova_compute[254819]: 2025-12-06 10:10:19.877 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:10:19 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:10:19.888 162267 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:64:9d:d4 10.100.0.14'], port_security=['fa:16:3e:64:9d:d4 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '467f8e9a-e166-409e-920c-689fea4ea3f6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4d76af3c-ede9-445b-bea0-ba96a2eaeddd', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '92b402c8d3e2476abc98be42a1e6d34e', 'neutron:revision_number': '2', 'neutron:security_group_ids': '04450372-2efd-4ce5-88c7-781d38bca802', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=25f33b62-e011-4e1d-9dc2-7927e4f8e59b, chassis=[<ovs.db.idl.Row object at 0x7f70c28558b0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f70c28558b0>], logical_port=ec2bc9a6-1578-4c92-b8c9-4d286a1a6f4b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 10:10:19 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:10:19.890 162267 INFO neutron.agent.ovn.metadata.agent [-] Port ec2bc9a6-1578-4c92-b8c9-4d286a1a6f4b in datapath 4d76af3c-ede9-445b-bea0-ba96a2eaeddd bound to our chassis
Dec 06 10:10:19 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:10:19.892 162267 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4d76af3c-ede9-445b-bea0-ba96a2eaeddd
Dec 06 10:10:19 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:10:19.910 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[09b5113e-bfbf-435f-ad16-0d3391f6265b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:10:19 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:10:19.912 162267 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap4d76af3c-e1 in ovnmeta-4d76af3c-ede9-445b-bea0-ba96a2eaeddd namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 06 10:10:19 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:10:19.915 260126 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap4d76af3c-e0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 06 10:10:19 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:10:19.915 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[5a26638b-4c5c-488b-b7b4-c3fbf67bf72f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:10:19 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:10:19.916 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[99280c6c-76e3-4ab0-a872-222d858fc90d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:10:19 compute-0 systemd-udevd[267690]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 10:10:19 compute-0 systemd-machined[216202]: New machine qemu-4-instance-00000006.
Dec 06 10:10:19 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:10:19.933 162385 DEBUG oslo.privsep.daemon [-] privsep: reply[3abf1e0f-ca8e-4f09-9bff-e48b8acba54a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:10:19 compute-0 NetworkManager[48882]: <info>  [1765015819.9373] device (tapec2bc9a6-15): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 06 10:10:19 compute-0 NetworkManager[48882]: <info>  [1765015819.9386] device (tapec2bc9a6-15): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 06 10:10:19 compute-0 nova_compute[254819]: 2025-12-06 10:10:19.941 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:10:19 compute-0 systemd[1]: Started Virtual Machine qemu-4-instance-00000006.
Dec 06 10:10:19 compute-0 nova_compute[254819]: 2025-12-06 10:10:19.948 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:10:19 compute-0 ovn_controller[152417]: 2025-12-06T10:10:19Z|00067|binding|INFO|Setting lport ec2bc9a6-1578-4c92-b8c9-4d286a1a6f4b ovn-installed in OVS
Dec 06 10:10:19 compute-0 ovn_controller[152417]: 2025-12-06T10:10:19Z|00068|binding|INFO|Setting lport ec2bc9a6-1578-4c92-b8c9-4d286a1a6f4b up in Southbound
Dec 06 10:10:19 compute-0 nova_compute[254819]: 2025-12-06 10:10:19.951 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:10:19 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:10:19.955 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[bd727bd4-1aac-46f7-9f04-fe6203924d53]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:10:20 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:10:19.999 260145 DEBUG oslo.privsep.daemon [-] privsep: reply[67ede21b-c457-4beb-8ed4-95d0d576a32e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:10:20 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:10:20.006 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[986af9f7-dd31-4b3f-b740-0a53e66b2cbb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:10:20 compute-0 systemd-udevd[267693]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 10:10:20 compute-0 NetworkManager[48882]: <info>  [1765015820.0070] manager: (tap4d76af3c-e0): new Veth device (/org/freedesktop/NetworkManager/Devices/50)
Dec 06 10:10:20 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:10:20.048 260145 DEBUG oslo.privsep.daemon [-] privsep: reply[27a86b5e-d1b9-4d1e-94ea-eccbe5fdf1d7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:10:20 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:10:20.052 260145 DEBUG oslo.privsep.daemon [-] privsep: reply[6b0e404d-2a7b-4d5a-878f-f146463fdafe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:10:20 compute-0 ceph-mon[74327]: pgmap v875: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 06 10:10:20 compute-0 NetworkManager[48882]: <info>  [1765015820.0895] device (tap4d76af3c-e0): carrier: link connected
Dec 06 10:10:20 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:10:20.100 260145 DEBUG oslo.privsep.daemon [-] privsep: reply[6e8ed00a-2570-42f3-8fd0-230f5d398141]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:10:20 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:10:20.126 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[67b9480f-fbeb-46e0-b425-1874484a3ac5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4d76af3c-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d2:d2:f9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 24], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 419057, 'reachable_time': 23811, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 267722, 'error': None, 'target': 'ovnmeta-4d76af3c-ede9-445b-bea0-ba96a2eaeddd', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:10:20 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:10:20.145 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[cf59e66a-ffa1-495b-9d6e-99ff633c2a64]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fed2:d2f9'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 419057, 'tstamp': 419057}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 267738, 'error': None, 'target': 'ovnmeta-4d76af3c-ede9-445b-bea0-ba96a2eaeddd', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:10:20 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:10:20.164 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[e6927880-4b98-4d34-ab0d-58e7f279b1fb]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4d76af3c-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d2:d2:f9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 24], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 419057, 'reachable_time': 23811, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 267742, 'error': None, 'target': 'ovnmeta-4d76af3c-ede9-445b-bea0-ba96a2eaeddd', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:10:20 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:10:20.200 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[9b799b68-51cb-4059-a312-5e2ceb34b198]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:10:20 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:10:20.278 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[6bead5ac-1116-41ff-8f4e-b29ae413be6f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:10:20 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:10:20.280 162267 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4d76af3c-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 10:10:20 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:10:20.280 162267 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 10:10:20 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:10:20.280 162267 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4d76af3c-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 10:10:20 compute-0 NetworkManager[48882]: <info>  [1765015820.2839] manager: (tap4d76af3c-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/51)
Dec 06 10:10:20 compute-0 kernel: tap4d76af3c-e0: entered promiscuous mode
Dec 06 10:10:20 compute-0 nova_compute[254819]: 2025-12-06 10:10:20.285 254824 DEBUG nova.compute.manager [req-3726917c-b704-4a0f-a91b-4ce5a7ff5b6c req-71a70845-7dcd-4852-ab9f-b108e2909f77 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 467f8e9a-e166-409e-920c-689fea4ea3f6] Received event network-vif-plugged-ec2bc9a6-1578-4c92-b8c9-4d286a1a6f4b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 10:10:20 compute-0 nova_compute[254819]: 2025-12-06 10:10:20.286 254824 DEBUG oslo_concurrency.lockutils [req-3726917c-b704-4a0f-a91b-4ce5a7ff5b6c req-71a70845-7dcd-4852-ab9f-b108e2909f77 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Acquiring lock "467f8e9a-e166-409e-920c-689fea4ea3f6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:10:20 compute-0 nova_compute[254819]: 2025-12-06 10:10:20.286 254824 DEBUG oslo_concurrency.lockutils [req-3726917c-b704-4a0f-a91b-4ce5a7ff5b6c req-71a70845-7dcd-4852-ab9f-b108e2909f77 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Lock "467f8e9a-e166-409e-920c-689fea4ea3f6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:10:20 compute-0 nova_compute[254819]: 2025-12-06 10:10:20.286 254824 DEBUG oslo_concurrency.lockutils [req-3726917c-b704-4a0f-a91b-4ce5a7ff5b6c req-71a70845-7dcd-4852-ab9f-b108e2909f77 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Lock "467f8e9a-e166-409e-920c-689fea4ea3f6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:10:20 compute-0 nova_compute[254819]: 2025-12-06 10:10:20.287 254824 DEBUG nova.compute.manager [req-3726917c-b704-4a0f-a91b-4ce5a7ff5b6c req-71a70845-7dcd-4852-ab9f-b108e2909f77 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 467f8e9a-e166-409e-920c-689fea4ea3f6] Processing event network-vif-plugged-ec2bc9a6-1578-4c92-b8c9-4d286a1a6f4b _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 06 10:10:20 compute-0 nova_compute[254819]: 2025-12-06 10:10:20.287 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:10:20 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:10:20.287 162267 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4d76af3c-e0, col_values=(('external_ids', {'iface-id': '9f6682d5-4069-4017-8320-2e242e2a8f66'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 10:10:20 compute-0 ovn_controller[152417]: 2025-12-06T10:10:20Z|00069|binding|INFO|Releasing lport 9f6682d5-4069-4017-8320-2e242e2a8f66 from this chassis (sb_readonly=0)
Dec 06 10:10:20 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:10:20.289 162267 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/4d76af3c-ede9-445b-bea0-ba96a2eaeddd.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/4d76af3c-ede9-445b-bea0-ba96a2eaeddd.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 06 10:10:20 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:10:20.290 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[ad41749b-2941-4c31-a3a3-6b1b35ae7d10]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:10:20 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:10:20.291 162267 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 06 10:10:20 compute-0 ovn_metadata_agent[162262]: global
Dec 06 10:10:20 compute-0 ovn_metadata_agent[162262]:     log         /dev/log local0 debug
Dec 06 10:10:20 compute-0 ovn_metadata_agent[162262]:     log-tag     haproxy-metadata-proxy-4d76af3c-ede9-445b-bea0-ba96a2eaeddd
Dec 06 10:10:20 compute-0 ovn_metadata_agent[162262]:     user        root
Dec 06 10:10:20 compute-0 ovn_metadata_agent[162262]:     group       root
Dec 06 10:10:20 compute-0 ovn_metadata_agent[162262]:     maxconn     1024
Dec 06 10:10:20 compute-0 ovn_metadata_agent[162262]:     pidfile     /var/lib/neutron/external/pids/4d76af3c-ede9-445b-bea0-ba96a2eaeddd.pid.haproxy
Dec 06 10:10:20 compute-0 ovn_metadata_agent[162262]:     daemon
Dec 06 10:10:20 compute-0 ovn_metadata_agent[162262]: 
Dec 06 10:10:20 compute-0 ovn_metadata_agent[162262]: defaults
Dec 06 10:10:20 compute-0 ovn_metadata_agent[162262]:     log global
Dec 06 10:10:20 compute-0 ovn_metadata_agent[162262]:     mode http
Dec 06 10:10:20 compute-0 ovn_metadata_agent[162262]:     option httplog
Dec 06 10:10:20 compute-0 ovn_metadata_agent[162262]:     option dontlognull
Dec 06 10:10:20 compute-0 ovn_metadata_agent[162262]:     option http-server-close
Dec 06 10:10:20 compute-0 ovn_metadata_agent[162262]:     option forwardfor
Dec 06 10:10:20 compute-0 ovn_metadata_agent[162262]:     retries                 3
Dec 06 10:10:20 compute-0 ovn_metadata_agent[162262]:     timeout http-request    30s
Dec 06 10:10:20 compute-0 ovn_metadata_agent[162262]:     timeout connect         30s
Dec 06 10:10:20 compute-0 ovn_metadata_agent[162262]:     timeout client          32s
Dec 06 10:10:20 compute-0 ovn_metadata_agent[162262]:     timeout server          32s
Dec 06 10:10:20 compute-0 ovn_metadata_agent[162262]:     timeout http-keep-alive 30s
Dec 06 10:10:20 compute-0 ovn_metadata_agent[162262]: 
Dec 06 10:10:20 compute-0 ovn_metadata_agent[162262]: 
Dec 06 10:10:20 compute-0 ovn_metadata_agent[162262]: listen listener
Dec 06 10:10:20 compute-0 ovn_metadata_agent[162262]:     bind 169.254.169.254:80
Dec 06 10:10:20 compute-0 ovn_metadata_agent[162262]:     server metadata /var/lib/neutron/metadata_proxy
Dec 06 10:10:20 compute-0 ovn_metadata_agent[162262]:     http-request add-header X-OVN-Network-ID 4d76af3c-ede9-445b-bea0-ba96a2eaeddd
Dec 06 10:10:20 compute-0 ovn_metadata_agent[162262]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 06 10:10:20 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:10:20.291 162267 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-4d76af3c-ede9-445b-bea0-ba96a2eaeddd', 'env', 'PROCESS_TAG=haproxy-4d76af3c-ede9-445b-bea0-ba96a2eaeddd', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/4d76af3c-ede9-445b-bea0-ba96a2eaeddd.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Dec 06 10:10:20 compute-0 nova_compute[254819]: 2025-12-06 10:10:20.303 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:10:20 compute-0 nova_compute[254819]: 2025-12-06 10:10:20.323 254824 DEBUG nova.virt.driver [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] Emitting event <LifecycleEvent: 1765015820.3223214, 467f8e9a-e166-409e-920c-689fea4ea3f6 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 10:10:20 compute-0 nova_compute[254819]: 2025-12-06 10:10:20.323 254824 INFO nova.compute.manager [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] [instance: 467f8e9a-e166-409e-920c-689fea4ea3f6] VM Started (Lifecycle Event)
Dec 06 10:10:20 compute-0 nova_compute[254819]: 2025-12-06 10:10:20.325 254824 DEBUG nova.compute.manager [None req-55520d59-3b01-45c1-af21-e2cf387624ad 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 467f8e9a-e166-409e-920c-689fea4ea3f6] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 06 10:10:20 compute-0 nova_compute[254819]: 2025-12-06 10:10:20.330 254824 DEBUG nova.virt.libvirt.driver [None req-55520d59-3b01-45c1-af21-e2cf387624ad 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 467f8e9a-e166-409e-920c-689fea4ea3f6] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 06 10:10:20 compute-0 nova_compute[254819]: 2025-12-06 10:10:20.334 254824 INFO nova.virt.libvirt.driver [-] [instance: 467f8e9a-e166-409e-920c-689fea4ea3f6] Instance spawned successfully.
Dec 06 10:10:20 compute-0 nova_compute[254819]: 2025-12-06 10:10:20.335 254824 DEBUG nova.virt.libvirt.driver [None req-55520d59-3b01-45c1-af21-e2cf387624ad 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 467f8e9a-e166-409e-920c-689fea4ea3f6] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 06 10:10:20 compute-0 nova_compute[254819]: 2025-12-06 10:10:20.349 254824 DEBUG nova.compute.manager [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] [instance: 467f8e9a-e166-409e-920c-689fea4ea3f6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 10:10:20 compute-0 nova_compute[254819]: 2025-12-06 10:10:20.357 254824 DEBUG nova.compute.manager [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] [instance: 467f8e9a-e166-409e-920c-689fea4ea3f6] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 10:10:20 compute-0 nova_compute[254819]: 2025-12-06 10:10:20.361 254824 DEBUG nova.virt.libvirt.driver [None req-55520d59-3b01-45c1-af21-e2cf387624ad 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 467f8e9a-e166-409e-920c-689fea4ea3f6] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 10:10:20 compute-0 nova_compute[254819]: 2025-12-06 10:10:20.361 254824 DEBUG nova.virt.libvirt.driver [None req-55520d59-3b01-45c1-af21-e2cf387624ad 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 467f8e9a-e166-409e-920c-689fea4ea3f6] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 10:10:20 compute-0 nova_compute[254819]: 2025-12-06 10:10:20.362 254824 DEBUG nova.virt.libvirt.driver [None req-55520d59-3b01-45c1-af21-e2cf387624ad 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 467f8e9a-e166-409e-920c-689fea4ea3f6] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 10:10:20 compute-0 nova_compute[254819]: 2025-12-06 10:10:20.362 254824 DEBUG nova.virt.libvirt.driver [None req-55520d59-3b01-45c1-af21-e2cf387624ad 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 467f8e9a-e166-409e-920c-689fea4ea3f6] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 10:10:20 compute-0 nova_compute[254819]: 2025-12-06 10:10:20.363 254824 DEBUG nova.virt.libvirt.driver [None req-55520d59-3b01-45c1-af21-e2cf387624ad 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 467f8e9a-e166-409e-920c-689fea4ea3f6] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 10:10:20 compute-0 nova_compute[254819]: 2025-12-06 10:10:20.363 254824 DEBUG nova.virt.libvirt.driver [None req-55520d59-3b01-45c1-af21-e2cf387624ad 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 467f8e9a-e166-409e-920c-689fea4ea3f6] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 10:10:20 compute-0 nova_compute[254819]: 2025-12-06 10:10:20.390 254824 INFO nova.compute.manager [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] [instance: 467f8e9a-e166-409e-920c-689fea4ea3f6] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 06 10:10:20 compute-0 nova_compute[254819]: 2025-12-06 10:10:20.391 254824 DEBUG nova.virt.driver [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] Emitting event <LifecycleEvent: 1765015820.322769, 467f8e9a-e166-409e-920c-689fea4ea3f6 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 10:10:20 compute-0 nova_compute[254819]: 2025-12-06 10:10:20.391 254824 INFO nova.compute.manager [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] [instance: 467f8e9a-e166-409e-920c-689fea4ea3f6] VM Paused (Lifecycle Event)
Dec 06 10:10:20 compute-0 nova_compute[254819]: 2025-12-06 10:10:20.402 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:10:20 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:10:20.402 162267 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=8, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '9a:dc:0d', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'b6:0a:c4:b8:be:39'}, ipsec=False) old=SB_Global(nb_cfg=7) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 10:10:20 compute-0 nova_compute[254819]: 2025-12-06 10:10:20.421 254824 DEBUG nova.compute.manager [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] [instance: 467f8e9a-e166-409e-920c-689fea4ea3f6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 10:10:20 compute-0 nova_compute[254819]: 2025-12-06 10:10:20.425 254824 DEBUG nova.virt.driver [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] Emitting event <LifecycleEvent: 1765015820.32821, 467f8e9a-e166-409e-920c-689fea4ea3f6 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 10:10:20 compute-0 nova_compute[254819]: 2025-12-06 10:10:20.425 254824 INFO nova.compute.manager [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] [instance: 467f8e9a-e166-409e-920c-689fea4ea3f6] VM Resumed (Lifecycle Event)
Dec 06 10:10:20 compute-0 nova_compute[254819]: 2025-12-06 10:10:20.429 254824 INFO nova.compute.manager [None req-55520d59-3b01-45c1-af21-e2cf387624ad 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 467f8e9a-e166-409e-920c-689fea4ea3f6] Took 5.62 seconds to spawn the instance on the hypervisor.
Dec 06 10:10:20 compute-0 nova_compute[254819]: 2025-12-06 10:10:20.430 254824 DEBUG nova.compute.manager [None req-55520d59-3b01-45c1-af21-e2cf387624ad 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 467f8e9a-e166-409e-920c-689fea4ea3f6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 10:10:20 compute-0 nova_compute[254819]: 2025-12-06 10:10:20.464 254824 DEBUG nova.compute.manager [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] [instance: 467f8e9a-e166-409e-920c-689fea4ea3f6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 10:10:20 compute-0 nova_compute[254819]: 2025-12-06 10:10:20.467 254824 DEBUG nova.compute.manager [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] [instance: 467f8e9a-e166-409e-920c-689fea4ea3f6] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 10:10:20 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:10:20 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f4001cb0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:10:20 compute-0 nova_compute[254819]: 2025-12-06 10:10:20.501 254824 INFO nova.compute.manager [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] [instance: 467f8e9a-e166-409e-920c-689fea4ea3f6] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 06 10:10:20 compute-0 nova_compute[254819]: 2025-12-06 10:10:20.511 254824 INFO nova.compute.manager [None req-55520d59-3b01-45c1-af21-e2cf387624ad 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 467f8e9a-e166-409e-920c-689fea4ea3f6] Took 6.72 seconds to build instance.
Dec 06 10:10:20 compute-0 nova_compute[254819]: 2025-12-06 10:10:20.530 254824 DEBUG oslo_concurrency.lockutils [None req-55520d59-3b01-45c1-af21-e2cf387624ad 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lock "467f8e9a-e166-409e-920c-689fea4ea3f6" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 6.801s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:10:20 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:10:20 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c009ad0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:10:20 compute-0 podman[267798]: 2025-12-06 10:10:20.690136211 +0000 UTC m=+0.055131805 container create 64301eb34db4547a67ae0f8dfcc1faa503a5e4977d9bb18dc1381f6eb172dd7c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=neutron-haproxy-ovnmeta-4d76af3c-ede9-445b-bea0-ba96a2eaeddd, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Dec 06 10:10:20 compute-0 systemd[1]: Started libpod-conmon-64301eb34db4547a67ae0f8dfcc1faa503a5e4977d9bb18dc1381f6eb172dd7c.scope.
Dec 06 10:10:20 compute-0 podman[267798]: 2025-12-06 10:10:20.660461178 +0000 UTC m=+0.025456802 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3
Dec 06 10:10:20 compute-0 systemd[1]: Started libcrun container.
Dec 06 10:10:20 compute-0 nova_compute[254819]: 2025-12-06 10:10:20.814 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:10:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0c35a0a906865a4663842e5ed6b698da4d1040e57a2b60288990c137c9d3376/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 06 10:10:20 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v876: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 06 10:10:20 compute-0 podman[267798]: 2025-12-06 10:10:20.840577602 +0000 UTC m=+0.205573206 container init 64301eb34db4547a67ae0f8dfcc1faa503a5e4977d9bb18dc1381f6eb172dd7c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=neutron-haproxy-ovnmeta-4d76af3c-ede9-445b-bea0-ba96a2eaeddd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 06 10:10:20 compute-0 podman[267798]: 2025-12-06 10:10:20.852345046 +0000 UTC m=+0.217340650 container start 64301eb34db4547a67ae0f8dfcc1faa503a5e4977d9bb18dc1381f6eb172dd7c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=neutron-haproxy-ovnmeta-4d76af3c-ede9-445b-bea0-ba96a2eaeddd, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 06 10:10:20 compute-0 neutron-haproxy-ovnmeta-4d76af3c-ede9-445b-bea0-ba96a2eaeddd[267811]: [NOTICE]   (267815) : New worker (267817) forked
Dec 06 10:10:20 compute-0 neutron-haproxy-ovnmeta-4d76af3c-ede9-445b-bea0-ba96a2eaeddd[267811]: [NOTICE]   (267815) : Loading success.
Dec 06 10:10:20 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:10:20] "GET /metrics HTTP/1.1" 200 48454 "" "Prometheus/2.51.0"
Dec 06 10:10:20 compute-0 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:10:20] "GET /metrics HTTP/1.1" 200 48454 "" "Prometheus/2.51.0"
Dec 06 10:10:20 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:10:20.938 162267 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 06 10:10:21 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:10:21 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65e8003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:10:21 compute-0 ceph-mon[74327]: pgmap v876: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 06 10:10:21 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:10:21 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:10:21 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:10:21.166 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:10:21 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:10:21 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:10:21 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:10:21.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:10:22 compute-0 nova_compute[254819]: 2025-12-06 10:10:22.479 254824 DEBUG nova.compute.manager [req-4bf544c5-3e1b-4f4c-9974-34dcac780633 req-773d7408-990f-4411-b08e-4f165163fd73 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 467f8e9a-e166-409e-920c-689fea4ea3f6] Received event network-vif-plugged-ec2bc9a6-1578-4c92-b8c9-4d286a1a6f4b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 10:10:22 compute-0 nova_compute[254819]: 2025-12-06 10:10:22.480 254824 DEBUG oslo_concurrency.lockutils [req-4bf544c5-3e1b-4f4c-9974-34dcac780633 req-773d7408-990f-4411-b08e-4f165163fd73 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Acquiring lock "467f8e9a-e166-409e-920c-689fea4ea3f6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:10:22 compute-0 nova_compute[254819]: 2025-12-06 10:10:22.481 254824 DEBUG oslo_concurrency.lockutils [req-4bf544c5-3e1b-4f4c-9974-34dcac780633 req-773d7408-990f-4411-b08e-4f165163fd73 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Lock "467f8e9a-e166-409e-920c-689fea4ea3f6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:10:22 compute-0 nova_compute[254819]: 2025-12-06 10:10:22.481 254824 DEBUG oslo_concurrency.lockutils [req-4bf544c5-3e1b-4f4c-9974-34dcac780633 req-773d7408-990f-4411-b08e-4f165163fd73 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Lock "467f8e9a-e166-409e-920c-689fea4ea3f6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:10:22 compute-0 nova_compute[254819]: 2025-12-06 10:10:22.481 254824 DEBUG nova.compute.manager [req-4bf544c5-3e1b-4f4c-9974-34dcac780633 req-773d7408-990f-4411-b08e-4f165163fd73 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 467f8e9a-e166-409e-920c-689fea4ea3f6] No waiting events found dispatching network-vif-plugged-ec2bc9a6-1578-4c92-b8c9-4d286a1a6f4b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 10:10:22 compute-0 nova_compute[254819]: 2025-12-06 10:10:22.481 254824 WARNING nova.compute.manager [req-4bf544c5-3e1b-4f4c-9974-34dcac780633 req-773d7408-990f-4411-b08e-4f165163fd73 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 467f8e9a-e166-409e-920c-689fea4ea3f6] Received unexpected event network-vif-plugged-ec2bc9a6-1578-4c92-b8c9-4d286a1a6f4b for instance with vm_state active and task_state None.
Dec 06 10:10:22 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:10:22 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6608003e00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:10:22 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:10:22 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6608003e00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:10:22 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v877: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 06 10:10:23 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:10:23 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c009ad0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:10:23 compute-0 sudo[267828]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 10:10:23 compute-0 sudo[267828]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:10:23 compute-0 sudo[267828]: pam_unix(sudo:session): session closed for user root
Dec 06 10:10:23 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:10:23 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:10:23 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:10:23.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:10:23 compute-0 sudo[267853]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Dec 06 10:10:23 compute-0 sudo[267853]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:10:23 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:10:23 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:10:23 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:10:23.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:10:23 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec 06 10:10:23 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 10:10:23 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec 06 10:10:23 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 10:10:23 compute-0 nova_compute[254819]: 2025-12-06 10:10:23.881 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:10:23 compute-0 ceph-mgr[74618]: [balancer INFO root] Optimize plan auto_2025-12-06_10:10:23
Dec 06 10:10:23 compute-0 ceph-mgr[74618]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 06 10:10:23 compute-0 ceph-mgr[74618]: [balancer INFO root] do_upmap
Dec 06 10:10:23 compute-0 ceph-mgr[74618]: [balancer INFO root] pools ['.nfs', '.mgr', 'default.rgw.log', 'cephfs.cephfs.data', 'default.rgw.meta', 'backups', 'images', 'cephfs.cephfs.meta', 'volumes', 'vms', '.rgw.root', 'default.rgw.control']
Dec 06 10:10:23 compute-0 ceph-mon[74327]: pgmap v877: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 06 10:10:23 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 10:10:23 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 10:10:23 compute-0 sudo[267853]: pam_unix(sudo:session): session closed for user root
Dec 06 10:10:23 compute-0 ceph-mgr[74618]: [balancer INFO root] prepared 0/10 upmap changes
Dec 06 10:10:23 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 06 10:10:23 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 10:10:24 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 10:10:24 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 10:10:24 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 10:10:24 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 10:10:24 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 10:10:24 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 10:10:24 compute-0 sudo[267911]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 10:10:24 compute-0 sudo[267911]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:10:24 compute-0 sudo[267911]: pam_unix(sudo:session): session closed for user root
Dec 06 10:10:24 compute-0 sudo[267936]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 5ecd3f74-dade-5fc4-92ce-8950ae424258 -- inventory --format=json-pretty --filter-for-batch
Dec 06 10:10:24 compute-0 sudo[267936]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:10:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] _maybe_adjust
Dec 06 10:10:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:10:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 06 10:10:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:10:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0003459970412515465 of space, bias 1.0, pg target 0.10379911237546395 quantized to 32 (current 32)
Dec 06 10:10:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:10:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 10:10:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:10:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 10:10:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:10:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Dec 06 10:10:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:10:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Dec 06 10:10:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:10:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 10:10:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:10:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec 06 10:10:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:10:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 06 10:10:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:10:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec 06 10:10:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:10:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 10:10:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:10:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 06 10:10:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 06 10:10:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 10:10:24 compute-0 NetworkManager[48882]: <info>  [1765015824.3456] manager: (patch-provnet-c81e973e-7ff9-4cd2-9994-daf87649321f-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/52)
Dec 06 10:10:24 compute-0 NetworkManager[48882]: <info>  [1765015824.3471] manager: (patch-br-int-to-provnet-c81e973e-7ff9-4cd2-9994-daf87649321f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/53)
Dec 06 10:10:24 compute-0 nova_compute[254819]: 2025-12-06 10:10:24.344 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:10:24 compute-0 ovn_controller[152417]: 2025-12-06T10:10:24Z|00070|binding|INFO|Releasing lport 9f6682d5-4069-4017-8320-2e242e2a8f66 from this chassis (sb_readonly=0)
Dec 06 10:10:24 compute-0 ovn_controller[152417]: 2025-12-06T10:10:24Z|00071|binding|INFO|Releasing lport 9f6682d5-4069-4017-8320-2e242e2a8f66 from this chassis (sb_readonly=0)
Dec 06 10:10:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 10:10:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 10:10:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 10:10:24 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:10:24 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65e8003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:10:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 06 10:10:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 10:10:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 10:10:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 10:10:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 10:10:24 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:10:24 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6608003e00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:10:24 compute-0 podman[268003]: 2025-12-06 10:10:24.725013307 +0000 UTC m=+0.062631444 container create 6fe5c235c7eef7743df3afeb80ef9ea91f26622c9f0959d6d971d3a74b85fd07 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_rhodes, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 10:10:24 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:10:24 compute-0 systemd[1]: Started libpod-conmon-6fe5c235c7eef7743df3afeb80ef9ea91f26622c9f0959d6d971d3a74b85fd07.scope.
Dec 06 10:10:24 compute-0 podman[268003]: 2025-12-06 10:10:24.696367502 +0000 UTC m=+0.033985659 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 06 10:10:24 compute-0 nova_compute[254819]: 2025-12-06 10:10:24.808 254824 DEBUG nova.compute.manager [req-d310d941-2466-4420-bab1-37f43ed63ad7 req-3a09d5c2-798a-404b-a6df-cc5ffb5cd022 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 467f8e9a-e166-409e-920c-689fea4ea3f6] Received event network-changed-ec2bc9a6-1578-4c92-b8c9-4d286a1a6f4b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 10:10:24 compute-0 nova_compute[254819]: 2025-12-06 10:10:24.808 254824 DEBUG nova.compute.manager [req-d310d941-2466-4420-bab1-37f43ed63ad7 req-3a09d5c2-798a-404b-a6df-cc5ffb5cd022 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 467f8e9a-e166-409e-920c-689fea4ea3f6] Refreshing instance network info cache due to event network-changed-ec2bc9a6-1578-4c92-b8c9-4d286a1a6f4b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 10:10:24 compute-0 nova_compute[254819]: 2025-12-06 10:10:24.809 254824 DEBUG oslo_concurrency.lockutils [req-d310d941-2466-4420-bab1-37f43ed63ad7 req-3a09d5c2-798a-404b-a6df-cc5ffb5cd022 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Acquiring lock "refresh_cache-467f8e9a-e166-409e-920c-689fea4ea3f6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 10:10:24 compute-0 nova_compute[254819]: 2025-12-06 10:10:24.809 254824 DEBUG oslo_concurrency.lockutils [req-d310d941-2466-4420-bab1-37f43ed63ad7 req-3a09d5c2-798a-404b-a6df-cc5ffb5cd022 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Acquired lock "refresh_cache-467f8e9a-e166-409e-920c-689fea4ea3f6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 10:10:24 compute-0 nova_compute[254819]: 2025-12-06 10:10:24.809 254824 DEBUG nova.network.neutron [req-d310d941-2466-4420-bab1-37f43ed63ad7 req-3a09d5c2-798a-404b-a6df-cc5ffb5cd022 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 467f8e9a-e166-409e-920c-689fea4ea3f6] Refreshing network info cache for port ec2bc9a6-1578-4c92-b8c9-4d286a1a6f4b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 06 10:10:24 compute-0 systemd[1]: Started libcrun container.
Dec 06 10:10:24 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v878: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Dec 06 10:10:24 compute-0 podman[268003]: 2025-12-06 10:10:24.850038509 +0000 UTC m=+0.187656656 container init 6fe5c235c7eef7743df3afeb80ef9ea91f26622c9f0959d6d971d3a74b85fd07 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_rhodes, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Dec 06 10:10:24 compute-0 podman[268003]: 2025-12-06 10:10:24.864857275 +0000 UTC m=+0.202475412 container start 6fe5c235c7eef7743df3afeb80ef9ea91f26622c9f0959d6d971d3a74b85fd07 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_rhodes, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 10:10:24 compute-0 podman[268003]: 2025-12-06 10:10:24.868469972 +0000 UTC m=+0.206088099 container attach 6fe5c235c7eef7743df3afeb80ef9ea91f26622c9f0959d6d971d3a74b85fd07 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_rhodes, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2)
Dec 06 10:10:24 compute-0 condescending_rhodes[268020]: 167 167
Dec 06 10:10:24 compute-0 systemd[1]: libpod-6fe5c235c7eef7743df3afeb80ef9ea91f26622c9f0959d6d971d3a74b85fd07.scope: Deactivated successfully.
Dec 06 10:10:24 compute-0 conmon[268020]: conmon 6fe5c235c7eef7743df3 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-6fe5c235c7eef7743df3afeb80ef9ea91f26622c9f0959d6d971d3a74b85fd07.scope/container/memory.events
Dec 06 10:10:24 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 10:10:24 compute-0 podman[268025]: 2025-12-06 10:10:24.934003453 +0000 UTC m=+0.038446819 container died 6fe5c235c7eef7743df3afeb80ef9ea91f26622c9f0959d6d971d3a74b85fd07 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_rhodes, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Dec 06 10:10:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-d9115c5e0e2ed4420303d811e5ef9dcd85e1bd3007e7c9b9528c62aa6af1814d-merged.mount: Deactivated successfully.
Dec 06 10:10:24 compute-0 podman[268025]: 2025-12-06 10:10:24.977171686 +0000 UTC m=+0.081615042 container remove 6fe5c235c7eef7743df3afeb80ef9ea91f26622c9f0959d6d971d3a74b85fd07 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_rhodes, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Dec 06 10:10:24 compute-0 systemd[1]: libpod-conmon-6fe5c235c7eef7743df3afeb80ef9ea91f26622c9f0959d6d971d3a74b85fd07.scope: Deactivated successfully.
Dec 06 10:10:25 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:10:25 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6608003e00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:10:25 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:10:25 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:10:25 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:10:25.170 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:10:25 compute-0 podman[268047]: 2025-12-06 10:10:25.203550727 +0000 UTC m=+0.066297032 container create 2f38f85bc760d8b82fa2b36cb32616430937f95c08487486b772017c38b701ce (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_colden, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 06 10:10:25 compute-0 systemd[1]: Started libpod-conmon-2f38f85bc760d8b82fa2b36cb32616430937f95c08487486b772017c38b701ce.scope.
Dec 06 10:10:25 compute-0 podman[268047]: 2025-12-06 10:10:25.165990733 +0000 UTC m=+0.028737048 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 06 10:10:25 compute-0 systemd[1]: Started libcrun container.
Dec 06 10:10:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1abd83fc2a8a413cdb1da6a6d03d890b9b2545f64da0ee81224228f974a411e4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 10:10:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1abd83fc2a8a413cdb1da6a6d03d890b9b2545f64da0ee81224228f974a411e4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 10:10:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1abd83fc2a8a413cdb1da6a6d03d890b9b2545f64da0ee81224228f974a411e4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 10:10:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1abd83fc2a8a413cdb1da6a6d03d890b9b2545f64da0ee81224228f974a411e4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 10:10:25 compute-0 podman[268047]: 2025-12-06 10:10:25.333754687 +0000 UTC m=+0.196501032 container init 2f38f85bc760d8b82fa2b36cb32616430937f95c08487486b772017c38b701ce (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_colden, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 10:10:25 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:10:25 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 10:10:25 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:10:25.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 10:10:25 compute-0 podman[268047]: 2025-12-06 10:10:25.342613054 +0000 UTC m=+0.205359339 container start 2f38f85bc760d8b82fa2b36cb32616430937f95c08487486b772017c38b701ce (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_colden, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 10:10:25 compute-0 podman[268047]: 2025-12-06 10:10:25.347238748 +0000 UTC m=+0.209985093 container attach 2f38f85bc760d8b82fa2b36cb32616430937f95c08487486b772017c38b701ce (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_colden, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 10:10:25 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec 06 10:10:25 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 10:10:25 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec 06 10:10:25 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 10:10:25 compute-0 nova_compute[254819]: 2025-12-06 10:10:25.860 254824 DEBUG nova.network.neutron [req-d310d941-2466-4420-bab1-37f43ed63ad7 req-3a09d5c2-798a-404b-a6df-cc5ffb5cd022 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 467f8e9a-e166-409e-920c-689fea4ea3f6] Updated VIF entry in instance network info cache for port ec2bc9a6-1578-4c92-b8c9-4d286a1a6f4b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 06 10:10:25 compute-0 nova_compute[254819]: 2025-12-06 10:10:25.862 254824 DEBUG nova.network.neutron [req-d310d941-2466-4420-bab1-37f43ed63ad7 req-3a09d5c2-798a-404b-a6df-cc5ffb5cd022 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 467f8e9a-e166-409e-920c-689fea4ea3f6] Updating instance_info_cache with network_info: [{"id": "ec2bc9a6-1578-4c92-b8c9-4d286a1a6f4b", "address": "fa:16:3e:64:9d:d4", "network": {"id": "4d76af3c-ede9-445b-bea0-ba96a2eaeddd", "bridge": "br-int", "label": "tempest-network-smoke--1753144487", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.211", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapec2bc9a6-15", "ovs_interfaceid": "ec2bc9a6-1578-4c92-b8c9-4d286a1a6f4b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 10:10:25 compute-0 nova_compute[254819]: 2025-12-06 10:10:25.878 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:10:25 compute-0 nova_compute[254819]: 2025-12-06 10:10:25.885 254824 DEBUG oslo_concurrency.lockutils [req-d310d941-2466-4420-bab1-37f43ed63ad7 req-3a09d5c2-798a-404b-a6df-cc5ffb5cd022 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Releasing lock "refresh_cache-467f8e9a-e166-409e-920c-689fea4ea3f6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 10:10:25 compute-0 ceph-mon[74327]: pgmap v878: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Dec 06 10:10:25 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 10:10:25 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 10:10:26 compute-0 relaxed_colden[268063]: [
Dec 06 10:10:26 compute-0 relaxed_colden[268063]:     {
Dec 06 10:10:26 compute-0 relaxed_colden[268063]:         "available": false,
Dec 06 10:10:26 compute-0 relaxed_colden[268063]:         "being_replaced": false,
Dec 06 10:10:26 compute-0 relaxed_colden[268063]:         "ceph_device_lvm": false,
Dec 06 10:10:26 compute-0 relaxed_colden[268063]:         "device_id": "QEMU_DVD-ROM_QM00001",
Dec 06 10:10:26 compute-0 relaxed_colden[268063]:         "lsm_data": {},
Dec 06 10:10:26 compute-0 relaxed_colden[268063]:         "lvs": [],
Dec 06 10:10:26 compute-0 relaxed_colden[268063]:         "path": "/dev/sr0",
Dec 06 10:10:26 compute-0 relaxed_colden[268063]:         "rejected_reasons": [
Dec 06 10:10:26 compute-0 relaxed_colden[268063]:             "Has a FileSystem",
Dec 06 10:10:26 compute-0 relaxed_colden[268063]:             "Insufficient space (<5GB)"
Dec 06 10:10:26 compute-0 relaxed_colden[268063]:         ],
Dec 06 10:10:26 compute-0 relaxed_colden[268063]:         "sys_api": {
Dec 06 10:10:26 compute-0 relaxed_colden[268063]:             "actuators": null,
Dec 06 10:10:26 compute-0 relaxed_colden[268063]:             "device_nodes": [
Dec 06 10:10:26 compute-0 relaxed_colden[268063]:                 "sr0"
Dec 06 10:10:26 compute-0 relaxed_colden[268063]:             ],
Dec 06 10:10:26 compute-0 relaxed_colden[268063]:             "devname": "sr0",
Dec 06 10:10:26 compute-0 relaxed_colden[268063]:             "human_readable_size": "482.00 KB",
Dec 06 10:10:26 compute-0 relaxed_colden[268063]:             "id_bus": "ata",
Dec 06 10:10:26 compute-0 relaxed_colden[268063]:             "model": "QEMU DVD-ROM",
Dec 06 10:10:26 compute-0 relaxed_colden[268063]:             "nr_requests": "2",
Dec 06 10:10:26 compute-0 relaxed_colden[268063]:             "parent": "/dev/sr0",
Dec 06 10:10:26 compute-0 relaxed_colden[268063]:             "partitions": {},
Dec 06 10:10:26 compute-0 relaxed_colden[268063]:             "path": "/dev/sr0",
Dec 06 10:10:26 compute-0 relaxed_colden[268063]:             "removable": "1",
Dec 06 10:10:26 compute-0 relaxed_colden[268063]:             "rev": "2.5+",
Dec 06 10:10:26 compute-0 relaxed_colden[268063]:             "ro": "0",
Dec 06 10:10:26 compute-0 relaxed_colden[268063]:             "rotational": "1",
Dec 06 10:10:26 compute-0 relaxed_colden[268063]:             "sas_address": "",
Dec 06 10:10:26 compute-0 relaxed_colden[268063]:             "sas_device_handle": "",
Dec 06 10:10:26 compute-0 relaxed_colden[268063]:             "scheduler_mode": "mq-deadline",
Dec 06 10:10:26 compute-0 relaxed_colden[268063]:             "sectors": 0,
Dec 06 10:10:26 compute-0 relaxed_colden[268063]:             "sectorsize": "2048",
Dec 06 10:10:26 compute-0 relaxed_colden[268063]:             "size": 493568.0,
Dec 06 10:10:26 compute-0 relaxed_colden[268063]:             "support_discard": "2048",
Dec 06 10:10:26 compute-0 relaxed_colden[268063]:             "type": "disk",
Dec 06 10:10:26 compute-0 relaxed_colden[268063]:             "vendor": "QEMU"
Dec 06 10:10:26 compute-0 relaxed_colden[268063]:         }
Dec 06 10:10:26 compute-0 relaxed_colden[268063]:     }
Dec 06 10:10:26 compute-0 relaxed_colden[268063]: ]
Dec 06 10:10:26 compute-0 systemd[1]: libpod-2f38f85bc760d8b82fa2b36cb32616430937f95c08487486b772017c38b701ce.scope: Deactivated successfully.
Dec 06 10:10:26 compute-0 podman[268047]: 2025-12-06 10:10:26.05377193 +0000 UTC m=+0.916518225 container died 2f38f85bc760d8b82fa2b36cb32616430937f95c08487486b772017c38b701ce (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_colden, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 10:10:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-1abd83fc2a8a413cdb1da6a6d03d890b9b2545f64da0ee81224228f974a411e4-merged.mount: Deactivated successfully.
Dec 06 10:10:26 compute-0 podman[268047]: 2025-12-06 10:10:26.096725738 +0000 UTC m=+0.959472033 container remove 2f38f85bc760d8b82fa2b36cb32616430937f95c08487486b772017c38b701ce (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_colden, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Dec 06 10:10:26 compute-0 systemd[1]: libpod-conmon-2f38f85bc760d8b82fa2b36cb32616430937f95c08487486b772017c38b701ce.scope: Deactivated successfully.
Dec 06 10:10:26 compute-0 sudo[267936]: pam_unix(sudo:session): session closed for user root
Dec 06 10:10:26 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 06 10:10:26 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 10:10:26 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 06 10:10:26 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 10:10:26 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 06 10:10:26 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 10:10:26 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 06 10:10:26 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 10:10:26 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v879: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.9 MiB/s wr, 106 op/s
Dec 06 10:10:26 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 06 10:10:26 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 10:10:26 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Dec 06 10:10:26 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 10:10:26 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 06 10:10:26 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 10:10:26 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 06 10:10:26 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 10:10:26 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 06 10:10:26 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 10:10:26 compute-0 sudo[269314]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 10:10:26 compute-0 sudo[269314]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:10:26 compute-0 sudo[269314]: pam_unix(sudo:session): session closed for user root
Dec 06 10:10:26 compute-0 sudo[269339]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 5ecd3f74-dade-5fc4-92ce-8950ae424258 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Dec 06 10:10:26 compute-0 sudo[269339]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:10:26 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:10:26 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c009ad0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:10:26 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:10:26 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65e8003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:10:26 compute-0 podman[269403]: 2025-12-06 10:10:26.815026875 +0000 UTC m=+0.041576311 container create 90dce362cad02980bb2cbe72548d8b1dcf1f12ce5a51f1e98285c15b384b7503 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_mclean, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec 06 10:10:26 compute-0 systemd[1]: Started libpod-conmon-90dce362cad02980bb2cbe72548d8b1dcf1f12ce5a51f1e98285c15b384b7503.scope.
Dec 06 10:10:26 compute-0 podman[269403]: 2025-12-06 10:10:26.797385774 +0000 UTC m=+0.023935230 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 06 10:10:26 compute-0 systemd[1]: Started libcrun container.
Dec 06 10:10:26 compute-0 podman[269403]: 2025-12-06 10:10:26.918038138 +0000 UTC m=+0.144587584 container init 90dce362cad02980bb2cbe72548d8b1dcf1f12ce5a51f1e98285c15b384b7503 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_mclean, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Dec 06 10:10:26 compute-0 podman[269403]: 2025-12-06 10:10:26.931241372 +0000 UTC m=+0.157790808 container start 90dce362cad02980bb2cbe72548d8b1dcf1f12ce5a51f1e98285c15b384b7503 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_mclean, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Dec 06 10:10:26 compute-0 systemd[1]: libpod-90dce362cad02980bb2cbe72548d8b1dcf1f12ce5a51f1e98285c15b384b7503.scope: Deactivated successfully.
Dec 06 10:10:26 compute-0 heuristic_mclean[269419]: 167 167
Dec 06 10:10:26 compute-0 conmon[269419]: conmon 90dce362cad02980bb2c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-90dce362cad02980bb2cbe72548d8b1dcf1f12ce5a51f1e98285c15b384b7503.scope/container/memory.events
Dec 06 10:10:26 compute-0 podman[269403]: 2025-12-06 10:10:26.937309974 +0000 UTC m=+0.163859460 container attach 90dce362cad02980bb2cbe72548d8b1dcf1f12ce5a51f1e98285c15b384b7503 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_mclean, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 10:10:26 compute-0 podman[269403]: 2025-12-06 10:10:26.944503196 +0000 UTC m=+0.171052632 container died 90dce362cad02980bb2cbe72548d8b1dcf1f12ce5a51f1e98285c15b384b7503 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_mclean, CEPH_REF=squid, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Dec 06 10:10:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-69e147027509bd18db6962f8365114ebde36d795824121985e7edebdc0a9ca20-merged.mount: Deactivated successfully.
Dec 06 10:10:26 compute-0 podman[269403]: 2025-12-06 10:10:26.990104145 +0000 UTC m=+0.216653591 container remove 90dce362cad02980bb2cbe72548d8b1dcf1f12ce5a51f1e98285c15b384b7503 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_mclean, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Dec 06 10:10:27 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:10:27 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6608003e00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:10:27 compute-0 systemd[1]: libpod-conmon-90dce362cad02980bb2cbe72548d8b1dcf1f12ce5a51f1e98285c15b384b7503.scope: Deactivated successfully.
Dec 06 10:10:27 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:10:27 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 10:10:27 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:10:27.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 10:10:27 compute-0 podman[269444]: 2025-12-06 10:10:27.193012217 +0000 UTC m=+0.045775034 container create e16f556e4fa9d4aa422839e35eafcc054d9dc063adaa1829948945eac03148ce (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_williams, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True)
Dec 06 10:10:27 compute-0 systemd[1]: Started libpod-conmon-e16f556e4fa9d4aa422839e35eafcc054d9dc063adaa1829948945eac03148ce.scope.
Dec 06 10:10:27 compute-0 podman[269444]: 2025-12-06 10:10:27.172928121 +0000 UTC m=+0.025690958 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 06 10:10:27 compute-0 systemd[1]: Started libcrun container.
Dec 06 10:10:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf4e2e2fd6a9ef5a39733a5f4d5c1e533afbc4abea202f37ed114b5543c7fa8f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 10:10:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf4e2e2fd6a9ef5a39733a5f4d5c1e533afbc4abea202f37ed114b5543c7fa8f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 10:10:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf4e2e2fd6a9ef5a39733a5f4d5c1e533afbc4abea202f37ed114b5543c7fa8f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 10:10:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf4e2e2fd6a9ef5a39733a5f4d5c1e533afbc4abea202f37ed114b5543c7fa8f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 10:10:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf4e2e2fd6a9ef5a39733a5f4d5c1e533afbc4abea202f37ed114b5543c7fa8f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 06 10:10:27 compute-0 podman[269444]: 2025-12-06 10:10:27.293150504 +0000 UTC m=+0.145913321 container init e16f556e4fa9d4aa422839e35eafcc054d9dc063adaa1829948945eac03148ce (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_williams, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325)
Dec 06 10:10:27 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:10:27.295Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 06 10:10:27 compute-0 podman[269444]: 2025-12-06 10:10:27.304699082 +0000 UTC m=+0.157461909 container start e16f556e4fa9d4aa422839e35eafcc054d9dc063adaa1829948945eac03148ce (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_williams, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Dec 06 10:10:27 compute-0 podman[269444]: 2025-12-06 10:10:27.309193743 +0000 UTC m=+0.161956560 container attach e16f556e4fa9d4aa422839e35eafcc054d9dc063adaa1829948945eac03148ce (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_williams, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1)
Dec 06 10:10:27 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:10:27 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:10:27 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:10:27.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:10:27 compute-0 cool_williams[269462]: --> passed data devices: 0 physical, 1 LVM
Dec 06 10:10:27 compute-0 cool_williams[269462]: --> All data devices are unavailable
Dec 06 10:10:27 compute-0 systemd[1]: libpod-e16f556e4fa9d4aa422839e35eafcc054d9dc063adaa1829948945eac03148ce.scope: Deactivated successfully.
Dec 06 10:10:27 compute-0 podman[269444]: 2025-12-06 10:10:27.685572902 +0000 UTC m=+0.538335759 container died e16f556e4fa9d4aa422839e35eafcc054d9dc063adaa1829948945eac03148ce (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_williams, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Dec 06 10:10:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-cf4e2e2fd6a9ef5a39733a5f4d5c1e533afbc4abea202f37ed114b5543c7fa8f-merged.mount: Deactivated successfully.
Dec 06 10:10:27 compute-0 podman[269444]: 2025-12-06 10:10:27.751674929 +0000 UTC m=+0.604437746 container remove e16f556e4fa9d4aa422839e35eafcc054d9dc063adaa1829948945eac03148ce (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_williams, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 06 10:10:27 compute-0 systemd[1]: libpod-conmon-e16f556e4fa9d4aa422839e35eafcc054d9dc063adaa1829948945eac03148ce.scope: Deactivated successfully.
Dec 06 10:10:27 compute-0 sudo[269339]: pam_unix(sudo:session): session closed for user root
Dec 06 10:10:27 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 10:10:27 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 10:10:27 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 10:10:27 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 10:10:27 compute-0 ceph-mon[74327]: pgmap v879: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.9 MiB/s wr, 106 op/s
Dec 06 10:10:27 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 10:10:27 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 10:10:27 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 10:10:27 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 10:10:27 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 10:10:27 compute-0 sudo[269493]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 10:10:27 compute-0 sudo[269493]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:10:27 compute-0 sudo[269493]: pam_unix(sudo:session): session closed for user root
Dec 06 10:10:27 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:10:27.942 162267 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=d39b5be8-d4cf-41c7-9a64-1ee03801f4e1, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '8'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 10:10:27 compute-0 sudo[269518]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 5ecd3f74-dade-5fc4-92ce-8950ae424258 -- lvm list --format json
Dec 06 10:10:27 compute-0 sudo[269518]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:10:28 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v880: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.9 MiB/s wr, 107 op/s
Dec 06 10:10:28 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:10:28 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6608003e00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:10:28 compute-0 podman[269586]: 2025-12-06 10:10:28.537767487 +0000 UTC m=+0.048168018 container create c2acfee8c29a8d211bc4939009dd3dbbbadca52355d08fc0f62c80f743560d16 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_jang, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 10:10:28 compute-0 systemd[1]: Started libpod-conmon-c2acfee8c29a8d211bc4939009dd3dbbbadca52355d08fc0f62c80f743560d16.scope.
Dec 06 10:10:28 compute-0 systemd[1]: Started libcrun container.
Dec 06 10:10:28 compute-0 podman[269586]: 2025-12-06 10:10:28.51613746 +0000 UTC m=+0.026538051 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 06 10:10:28 compute-0 podman[269586]: 2025-12-06 10:10:28.622691948 +0000 UTC m=+0.133092549 container init c2acfee8c29a8d211bc4939009dd3dbbbadca52355d08fc0f62c80f743560d16 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_jang, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 10:10:28 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:10:28 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c009ad0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:10:28 compute-0 podman[269586]: 2025-12-06 10:10:28.628841962 +0000 UTC m=+0.139242503 container start c2acfee8c29a8d211bc4939009dd3dbbbadca52355d08fc0f62c80f743560d16 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_jang, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 10:10:28 compute-0 stupefied_jang[269602]: 167 167
Dec 06 10:10:28 compute-0 systemd[1]: libpod-c2acfee8c29a8d211bc4939009dd3dbbbadca52355d08fc0f62c80f743560d16.scope: Deactivated successfully.
Dec 06 10:10:28 compute-0 podman[269586]: 2025-12-06 10:10:28.634056601 +0000 UTC m=+0.144457152 container attach c2acfee8c29a8d211bc4939009dd3dbbbadca52355d08fc0f62c80f743560d16 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_jang, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Dec 06 10:10:28 compute-0 podman[269586]: 2025-12-06 10:10:28.634778711 +0000 UTC m=+0.145179242 container died c2acfee8c29a8d211bc4939009dd3dbbbadca52355d08fc0f62c80f743560d16 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_jang, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 10:10:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-574308e2ecc55184b698d3022324edb2188a4980fe2ce005678cffa7700c15fc-merged.mount: Deactivated successfully.
Dec 06 10:10:28 compute-0 podman[269586]: 2025-12-06 10:10:28.673546887 +0000 UTC m=+0.183947448 container remove c2acfee8c29a8d211bc4939009dd3dbbbadca52355d08fc0f62c80f743560d16 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_jang, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 10:10:28 compute-0 systemd[1]: libpod-conmon-c2acfee8c29a8d211bc4939009dd3dbbbadca52355d08fc0f62c80f743560d16.scope: Deactivated successfully.
Dec 06 10:10:28 compute-0 nova_compute[254819]: 2025-12-06 10:10:28.884 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:10:28 compute-0 podman[269627]: 2025-12-06 10:10:28.905122796 +0000 UTC m=+0.055384191 container create a69fa3e9f33642257c19c6797ba6f201d4c3c1c452f72105a7c07179a65ab0c1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_benz, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 10:10:28 compute-0 systemd[1]: Started libpod-conmon-a69fa3e9f33642257c19c6797ba6f201d4c3c1c452f72105a7c07179a65ab0c1.scope.
Dec 06 10:10:28 compute-0 podman[269627]: 2025-12-06 10:10:28.878157515 +0000 UTC m=+0.028418950 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 06 10:10:28 compute-0 systemd[1]: Started libcrun container.
Dec 06 10:10:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a786c0ee1e90bb90eb7661e004ee583a7ffc3963ace5ebb315a7db85c043bd0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 10:10:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a786c0ee1e90bb90eb7661e004ee583a7ffc3963ace5ebb315a7db85c043bd0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 10:10:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a786c0ee1e90bb90eb7661e004ee583a7ffc3963ace5ebb315a7db85c043bd0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 10:10:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a786c0ee1e90bb90eb7661e004ee583a7ffc3963ace5ebb315a7db85c043bd0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 10:10:29 compute-0 podman[269627]: 2025-12-06 10:10:29.004177033 +0000 UTC m=+0.154438468 container init a69fa3e9f33642257c19c6797ba6f201d4c3c1c452f72105a7c07179a65ab0c1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_benz, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, ceph=True)
Dec 06 10:10:29 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:10:29.009Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 06 10:10:29 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:10:29 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65e8003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:10:29 compute-0 podman[269627]: 2025-12-06 10:10:29.017006507 +0000 UTC m=+0.167267892 container start a69fa3e9f33642257c19c6797ba6f201d4c3c1c452f72105a7c07179a65ab0c1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_benz, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec 06 10:10:29 compute-0 podman[269627]: 2025-12-06 10:10:29.021617339 +0000 UTC m=+0.171878804 container attach a69fa3e9f33642257c19c6797ba6f201d4c3c1c452f72105a7c07179a65ab0c1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_benz, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, ceph=True)
Dec 06 10:10:29 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:10:29 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:10:29 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:10:29.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:10:29 compute-0 jolly_benz[269643]: {
Dec 06 10:10:29 compute-0 jolly_benz[269643]:     "1": [
Dec 06 10:10:29 compute-0 jolly_benz[269643]:         {
Dec 06 10:10:29 compute-0 jolly_benz[269643]:             "devices": [
Dec 06 10:10:29 compute-0 jolly_benz[269643]:                 "/dev/loop3"
Dec 06 10:10:29 compute-0 jolly_benz[269643]:             ],
Dec 06 10:10:29 compute-0 jolly_benz[269643]:             "lv_name": "ceph_lv0",
Dec 06 10:10:29 compute-0 jolly_benz[269643]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 10:10:29 compute-0 jolly_benz[269643]:             "lv_size": "21470642176",
Dec 06 10:10:29 compute-0 jolly_benz[269643]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=a55pcS-v3Vt-CJ4W-fqCV-Baze-9VlY-jG9prS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=5ecd3f74-dade-5fc4-92ce-8950ae424258,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=7899c4d8-edb4-4836-b838-c4aa702ad7af,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 06 10:10:29 compute-0 jolly_benz[269643]:             "lv_uuid": "a55pcS-v3Vt-CJ4W-fqCV-Baze-9VlY-jG9prS",
Dec 06 10:10:29 compute-0 jolly_benz[269643]:             "name": "ceph_lv0",
Dec 06 10:10:29 compute-0 jolly_benz[269643]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 10:10:29 compute-0 jolly_benz[269643]:             "tags": {
Dec 06 10:10:29 compute-0 jolly_benz[269643]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 06 10:10:29 compute-0 jolly_benz[269643]:                 "ceph.block_uuid": "a55pcS-v3Vt-CJ4W-fqCV-Baze-9VlY-jG9prS",
Dec 06 10:10:29 compute-0 jolly_benz[269643]:                 "ceph.cephx_lockbox_secret": "",
Dec 06 10:10:29 compute-0 jolly_benz[269643]:                 "ceph.cluster_fsid": "5ecd3f74-dade-5fc4-92ce-8950ae424258",
Dec 06 10:10:29 compute-0 jolly_benz[269643]:                 "ceph.cluster_name": "ceph",
Dec 06 10:10:29 compute-0 jolly_benz[269643]:                 "ceph.crush_device_class": "",
Dec 06 10:10:29 compute-0 jolly_benz[269643]:                 "ceph.encrypted": "0",
Dec 06 10:10:29 compute-0 jolly_benz[269643]:                 "ceph.osd_fsid": "7899c4d8-edb4-4836-b838-c4aa702ad7af",
Dec 06 10:10:29 compute-0 jolly_benz[269643]:                 "ceph.osd_id": "1",
Dec 06 10:10:29 compute-0 jolly_benz[269643]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 06 10:10:29 compute-0 jolly_benz[269643]:                 "ceph.type": "block",
Dec 06 10:10:29 compute-0 jolly_benz[269643]:                 "ceph.vdo": "0",
Dec 06 10:10:29 compute-0 jolly_benz[269643]:                 "ceph.with_tpm": "0"
Dec 06 10:10:29 compute-0 jolly_benz[269643]:             },
Dec 06 10:10:29 compute-0 jolly_benz[269643]:             "type": "block",
Dec 06 10:10:29 compute-0 jolly_benz[269643]:             "vg_name": "ceph_vg0"
Dec 06 10:10:29 compute-0 jolly_benz[269643]:         }
Dec 06 10:10:29 compute-0 jolly_benz[269643]:     ]
Dec 06 10:10:29 compute-0 jolly_benz[269643]: }
Dec 06 10:10:29 compute-0 systemd[1]: libpod-a69fa3e9f33642257c19c6797ba6f201d4c3c1c452f72105a7c07179a65ab0c1.scope: Deactivated successfully.
Dec 06 10:10:29 compute-0 podman[269627]: 2025-12-06 10:10:29.340881142 +0000 UTC m=+0.491142547 container died a69fa3e9f33642257c19c6797ba6f201d4c3c1c452f72105a7c07179a65ab0c1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_benz, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec 06 10:10:29 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:10:29 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:10:29 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:10:29.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:10:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-7a786c0ee1e90bb90eb7661e004ee583a7ffc3963ace5ebb315a7db85c043bd0-merged.mount: Deactivated successfully.
Dec 06 10:10:29 compute-0 podman[269627]: 2025-12-06 10:10:29.392931103 +0000 UTC m=+0.543192528 container remove a69fa3e9f33642257c19c6797ba6f201d4c3c1c452f72105a7c07179a65ab0c1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_benz, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Dec 06 10:10:29 compute-0 systemd[1]: libpod-conmon-a69fa3e9f33642257c19c6797ba6f201d4c3c1c452f72105a7c07179a65ab0c1.scope: Deactivated successfully.
Dec 06 10:10:29 compute-0 sudo[269518]: pam_unix(sudo:session): session closed for user root
Dec 06 10:10:29 compute-0 sudo[269666]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 10:10:29 compute-0 sudo[269666]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:10:29 compute-0 sudo[269666]: pam_unix(sudo:session): session closed for user root
Dec 06 10:10:29 compute-0 sudo[269691]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 5ecd3f74-dade-5fc4-92ce-8950ae424258 -- raw list --format json
Dec 06 10:10:29 compute-0 sudo[269691]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:10:29 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:10:29 compute-0 ceph-mon[74327]: pgmap v880: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.9 MiB/s wr, 107 op/s
Dec 06 10:10:30 compute-0 podman[269757]: 2025-12-06 10:10:30.023217578 +0000 UTC m=+0.044901561 container create f96c88ad364252e4913c46cba68de5437315dabd99fa5183903252b301ff02ec (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_satoshi, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 10:10:30 compute-0 systemd[1]: Started libpod-conmon-f96c88ad364252e4913c46cba68de5437315dabd99fa5183903252b301ff02ec.scope.
Dec 06 10:10:30 compute-0 systemd[1]: Started libcrun container.
Dec 06 10:10:30 compute-0 podman[269757]: 2025-12-06 10:10:30.007265232 +0000 UTC m=+0.028949244 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 06 10:10:30 compute-0 podman[269757]: 2025-12-06 10:10:30.106495893 +0000 UTC m=+0.128179885 container init f96c88ad364252e4913c46cba68de5437315dabd99fa5183903252b301ff02ec (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_satoshi, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec 06 10:10:30 compute-0 podman[269757]: 2025-12-06 10:10:30.112300369 +0000 UTC m=+0.133984351 container start f96c88ad364252e4913c46cba68de5437315dabd99fa5183903252b301ff02ec (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_satoshi, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid)
Dec 06 10:10:30 compute-0 podman[269757]: 2025-12-06 10:10:30.116417999 +0000 UTC m=+0.138102001 container attach f96c88ad364252e4913c46cba68de5437315dabd99fa5183903252b301ff02ec (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_satoshi, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec 06 10:10:30 compute-0 awesome_satoshi[269773]: 167 167
Dec 06 10:10:30 compute-0 systemd[1]: libpod-f96c88ad364252e4913c46cba68de5437315dabd99fa5183903252b301ff02ec.scope: Deactivated successfully.
Dec 06 10:10:30 compute-0 podman[269757]: 2025-12-06 10:10:30.120258241 +0000 UTC m=+0.141942223 container died f96c88ad364252e4913c46cba68de5437315dabd99fa5183903252b301ff02ec (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_satoshi, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 10:10:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-61b4bfc50a87152324f77b3d413c090c173aa823159075c0bc668bda44058abf-merged.mount: Deactivated successfully.
Dec 06 10:10:30 compute-0 podman[269757]: 2025-12-06 10:10:30.159362137 +0000 UTC m=+0.181046119 container remove f96c88ad364252e4913c46cba68de5437315dabd99fa5183903252b301ff02ec (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_satoshi, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Dec 06 10:10:30 compute-0 systemd[1]: libpod-conmon-f96c88ad364252e4913c46cba68de5437315dabd99fa5183903252b301ff02ec.scope: Deactivated successfully.
Dec 06 10:10:30 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v881: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 13 KiB/s wr, 78 op/s
Dec 06 10:10:30 compute-0 podman[269795]: 2025-12-06 10:10:30.344102934 +0000 UTC m=+0.043807331 container create 70ceb86202459e2153213a4bca0b2491eedc414c7b9720f11db18ebf2a19794e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_montalcini, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, ceph=True)
Dec 06 10:10:30 compute-0 systemd[1]: Started libpod-conmon-70ceb86202459e2153213a4bca0b2491eedc414c7b9720f11db18ebf2a19794e.scope.
Dec 06 10:10:30 compute-0 systemd[1]: Started libcrun container.
Dec 06 10:10:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f35c4987b866724c1b5d2446cfb92cf1cfee69b5caf0d3f90dbb2889855ea4c0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 10:10:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f35c4987b866724c1b5d2446cfb92cf1cfee69b5caf0d3f90dbb2889855ea4c0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 10:10:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f35c4987b866724c1b5d2446cfb92cf1cfee69b5caf0d3f90dbb2889855ea4c0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 10:10:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f35c4987b866724c1b5d2446cfb92cf1cfee69b5caf0d3f90dbb2889855ea4c0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 10:10:30 compute-0 podman[269795]: 2025-12-06 10:10:30.327511401 +0000 UTC m=+0.027215818 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 06 10:10:30 compute-0 podman[269795]: 2025-12-06 10:10:30.430166604 +0000 UTC m=+0.129871001 container init 70ceb86202459e2153213a4bca0b2491eedc414c7b9720f11db18ebf2a19794e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_montalcini, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 10:10:30 compute-0 podman[269795]: 2025-12-06 10:10:30.438450506 +0000 UTC m=+0.138154903 container start 70ceb86202459e2153213a4bca0b2491eedc414c7b9720f11db18ebf2a19794e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_montalcini, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 10:10:30 compute-0 podman[269795]: 2025-12-06 10:10:30.442003951 +0000 UTC m=+0.141708338 container attach 70ceb86202459e2153213a4bca0b2491eedc414c7b9720f11db18ebf2a19794e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_montalcini, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 06 10:10:30 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:10:30 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6608003e00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:10:30 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:10:30 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6608003e00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:10:30 compute-0 nova_compute[254819]: 2025-12-06 10:10:30.881 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:10:30 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:10:30] "GET /metrics HTTP/1.1" 200 48477 "" "Prometheus/2.51.0"
Dec 06 10:10:30 compute-0 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:10:30] "GET /metrics HTTP/1.1" 200 48477 "" "Prometheus/2.51.0"
Dec 06 10:10:31 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:10:31 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d4000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:10:31 compute-0 lvm[269888]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 06 10:10:31 compute-0 lvm[269888]: VG ceph_vg0 finished
Dec 06 10:10:31 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:10:31 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:10:31 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:10:31.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:10:31 compute-0 upbeat_montalcini[269812]: {}
Dec 06 10:10:31 compute-0 systemd[1]: libpod-70ceb86202459e2153213a4bca0b2491eedc414c7b9720f11db18ebf2a19794e.scope: Deactivated successfully.
Dec 06 10:10:31 compute-0 systemd[1]: libpod-70ceb86202459e2153213a4bca0b2491eedc414c7b9720f11db18ebf2a19794e.scope: Consumed 1.260s CPU time.
Dec 06 10:10:31 compute-0 conmon[269812]: conmon 70ceb86202459e215321 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-70ceb86202459e2153213a4bca0b2491eedc414c7b9720f11db18ebf2a19794e.scope/container/memory.events
Dec 06 10:10:31 compute-0 podman[269795]: 2025-12-06 10:10:31.226004164 +0000 UTC m=+0.925708551 container died 70ceb86202459e2153213a4bca0b2491eedc414c7b9720f11db18ebf2a19794e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_montalcini, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, ceph=True)
Dec 06 10:10:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-f35c4987b866724c1b5d2446cfb92cf1cfee69b5caf0d3f90dbb2889855ea4c0-merged.mount: Deactivated successfully.
Dec 06 10:10:31 compute-0 podman[269795]: 2025-12-06 10:10:31.282232357 +0000 UTC m=+0.981936764 container remove 70ceb86202459e2153213a4bca0b2491eedc414c7b9720f11db18ebf2a19794e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_montalcini, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 10:10:31 compute-0 systemd[1]: libpod-conmon-70ceb86202459e2153213a4bca0b2491eedc414c7b9720f11db18ebf2a19794e.scope: Deactivated successfully.
Dec 06 10:10:31 compute-0 sudo[269691]: pam_unix(sudo:session): session closed for user root
Dec 06 10:10:31 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 06 10:10:31 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 10:10:31 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:10:31 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:10:31 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:10:31.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:10:31 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 06 10:10:31 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 10:10:31 compute-0 sudo[269905]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 10:10:31 compute-0 sudo[269905]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:10:31 compute-0 sudo[269905]: pam_unix(sudo:session): session closed for user root
Dec 06 10:10:31 compute-0 ceph-mon[74327]: pgmap v881: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 13 KiB/s wr, 78 op/s
Dec 06 10:10:31 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 10:10:31 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 10:10:32 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v882: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 13 KiB/s wr, 78 op/s
Dec 06 10:10:32 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:10:32 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65e8003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:10:32 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:10:32 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c009ad0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:10:32 compute-0 sudo[269931]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 10:10:32 compute-0 sudo[269931]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:10:32 compute-0 sudo[269931]: pam_unix(sudo:session): session closed for user root
Dec 06 10:10:33 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:10:33 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6608003e00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:10:33 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:10:33 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:10:33 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:10:33.181 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:10:33 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:10:33 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 10:10:33 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:10:33.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 10:10:33 compute-0 nova_compute[254819]: 2025-12-06 10:10:33.889 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:10:33 compute-0 ceph-mon[74327]: pgmap v882: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 13 KiB/s wr, 78 op/s
Dec 06 10:10:34 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v883: 337 pgs: 337 active+clean; 109 MiB data, 287 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.2 MiB/s wr, 113 op/s
Dec 06 10:10:34 compute-0 ovn_controller[152417]: 2025-12-06T10:10:34Z|00010|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:64:9d:d4 10.100.0.14
Dec 06 10:10:34 compute-0 ovn_controller[152417]: 2025-12-06T10:10:34Z|00011|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:64:9d:d4 10.100.0.14
Dec 06 10:10:34 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:10:34 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d40016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:10:34 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:10:34 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d40016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:10:34 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:10:35 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:10:35 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c00a7e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:10:35 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:10:35 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:10:35 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:10:35.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:10:35 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:10:35 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.002000052s ======
Dec 06 10:10:35 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:10:35.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000052s
Dec 06 10:10:35 compute-0 podman[269959]: 2025-12-06 10:10:35.445472925 +0000 UTC m=+0.067465364 container health_status a9cf33c1bf7891b3fdd9db4717ad5f3587fe9e5acf9171920c6d6334b70a502a (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=multipathd, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 10:10:35 compute-0 nova_compute[254819]: 2025-12-06 10:10:35.932 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:10:35 compute-0 ceph-mon[74327]: pgmap v883: 337 pgs: 337 active+clean; 109 MiB data, 287 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.2 MiB/s wr, 113 op/s
Dec 06 10:10:36 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v884: 337 pgs: 337 active+clean; 109 MiB data, 287 MiB used, 60 GiB / 60 GiB avail; 200 KiB/s rd, 2.2 MiB/s wr, 35 op/s
Dec 06 10:10:36 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:10:36 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6608003e00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:10:36 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:10:36 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6608003e00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:10:37 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:10:37 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6608003e00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:10:37 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:10:37 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:10:37 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:10:37.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:10:37 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:10:37.297Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 06 10:10:37 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:10:37 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:10:37 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:10:37.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:10:37 compute-0 ceph-mon[74327]: pgmap v884: 337 pgs: 337 active+clean; 109 MiB data, 287 MiB used, 60 GiB / 60 GiB avail; 200 KiB/s rd, 2.2 MiB/s wr, 35 op/s
Dec 06 10:10:38 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v885: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Dec 06 10:10:38 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:10:38 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65e8003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:10:38 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:10:38 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c00a7e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:10:38 compute-0 nova_compute[254819]: 2025-12-06 10:10:38.895 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:10:38 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 06 10:10:38 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 10:10:39 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:10:39.009Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 06 10:10:39 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:10:39 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c00a7e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:10:39 compute-0 ceph-mon[74327]: pgmap v885: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Dec 06 10:10:39 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 10:10:39 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:10:39 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 10:10:39 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:10:39.187 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 10:10:39 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:10:39 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 10:10:39 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:10:39.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 10:10:39 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:10:40 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v886: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Dec 06 10:10:40 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:10:40 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6608003e00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:10:40 compute-0 podman[269985]: 2025-12-06 10:10:40.540438716 +0000 UTC m=+0.154566672 container health_status ed08b04f63d3695f0aa2f04fcc706897af4aa24f286fa3cd10a4323ee2c868ab (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec 06 10:10:40 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:10:40 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65e8003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:10:40 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:10:40] "GET /metrics HTTP/1.1" 200 48471 "" "Prometheus/2.51.0"
Dec 06 10:10:40 compute-0 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:10:40] "GET /metrics HTTP/1.1" 200 48471 "" "Prometheus/2.51.0"
Dec 06 10:10:40 compute-0 nova_compute[254819]: 2025-12-06 10:10:40.937 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:10:41 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:10:41 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d40016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:10:41 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:10:41 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:10:41 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:10:41.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:10:41 compute-0 ceph-mon[74327]: pgmap v886: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Dec 06 10:10:41 compute-0 nova_compute[254819]: 2025-12-06 10:10:41.321 254824 INFO nova.compute.manager [None req-b2119cf0-fba3-46d3-9d41-5774c762d718 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 467f8e9a-e166-409e-920c-689fea4ea3f6] Get console output
Dec 06 10:10:41 compute-0 nova_compute[254819]: 2025-12-06 10:10:41.327 261881 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Dec 06 10:10:41 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:10:41 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:10:41 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:10:41.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:10:41 compute-0 nova_compute[254819]: 2025-12-06 10:10:41.749 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:10:42 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v887: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Dec 06 10:10:42 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:10:42 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c00a7e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:10:42 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:10:42 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6608003e00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:10:42 compute-0 nova_compute[254819]: 2025-12-06 10:10:42.760 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:10:42 compute-0 nova_compute[254819]: 2025-12-06 10:10:42.761 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:10:42 compute-0 nova_compute[254819]: 2025-12-06 10:10:42.789 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:10:42 compute-0 nova_compute[254819]: 2025-12-06 10:10:42.789 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:10:42 compute-0 nova_compute[254819]: 2025-12-06 10:10:42.790 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:10:42 compute-0 nova_compute[254819]: 2025-12-06 10:10:42.790 254824 DEBUG nova.compute.resource_tracker [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 10:10:42 compute-0 nova_compute[254819]: 2025-12-06 10:10:42.790 254824 DEBUG oslo_concurrency.processutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 10:10:43 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:10:43 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65e8003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:10:43 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:10:43 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:10:43 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:10:43.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:10:43 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 06 10:10:43 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/132234717' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 10:10:43 compute-0 nova_compute[254819]: 2025-12-06 10:10:43.238 254824 DEBUG oslo_concurrency.processutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 10:10:43 compute-0 ceph-mon[74327]: pgmap v887: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Dec 06 10:10:43 compute-0 ceph-mon[74327]: from='client.? 192.168.122.100:0/132234717' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 10:10:43 compute-0 nova_compute[254819]: 2025-12-06 10:10:43.305 254824 DEBUG nova.virt.libvirt.driver [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] skipping disk for instance-00000006 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 10:10:43 compute-0 nova_compute[254819]: 2025-12-06 10:10:43.306 254824 DEBUG nova.virt.libvirt.driver [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] skipping disk for instance-00000006 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 10:10:43 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:10:43 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 10:10:43 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:10:43.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 10:10:43 compute-0 nova_compute[254819]: 2025-12-06 10:10:43.499 254824 WARNING nova.virt.libvirt.driver [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 10:10:43 compute-0 nova_compute[254819]: 2025-12-06 10:10:43.500 254824 DEBUG nova.compute.resource_tracker [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4322MB free_disk=59.94289016723633GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 06 10:10:43 compute-0 nova_compute[254819]: 2025-12-06 10:10:43.501 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:10:43 compute-0 nova_compute[254819]: 2025-12-06 10:10:43.501 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:10:43 compute-0 nova_compute[254819]: 2025-12-06 10:10:43.565 254824 DEBUG nova.compute.resource_tracker [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Instance 467f8e9a-e166-409e-920c-689fea4ea3f6 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 06 10:10:43 compute-0 nova_compute[254819]: 2025-12-06 10:10:43.565 254824 DEBUG nova.compute.resource_tracker [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 10:10:43 compute-0 nova_compute[254819]: 2025-12-06 10:10:43.565 254824 DEBUG nova.compute.resource_tracker [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 06 10:10:43 compute-0 nova_compute[254819]: 2025-12-06 10:10:43.637 254824 DEBUG nova.scheduler.client.report [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Refreshing inventories for resource provider 06a9c7d1-c74c-47ea-9e97-16acfab6aa88 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Dec 06 10:10:43 compute-0 nova_compute[254819]: 2025-12-06 10:10:43.687 254824 DEBUG nova.scheduler.client.report [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Updating ProviderTree inventory for provider 06a9c7d1-c74c-47ea-9e97-16acfab6aa88 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Dec 06 10:10:43 compute-0 nova_compute[254819]: 2025-12-06 10:10:43.687 254824 DEBUG nova.compute.provider_tree [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Updating inventory in ProviderTree for provider 06a9c7d1-c74c-47ea-9e97-16acfab6aa88 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Dec 06 10:10:43 compute-0 nova_compute[254819]: 2025-12-06 10:10:43.705 254824 DEBUG nova.scheduler.client.report [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Refreshing aggregate associations for resource provider 06a9c7d1-c74c-47ea-9e97-16acfab6aa88, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Dec 06 10:10:43 compute-0 nova_compute[254819]: 2025-12-06 10:10:43.725 254824 DEBUG nova.scheduler.client.report [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Refreshing trait associations for resource provider 06a9c7d1-c74c-47ea-9e97-16acfab6aa88, traits: HW_CPU_X86_AVX,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSE4A,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_SSSE3,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_SECURITY_TPM_1_2,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_SSE41,COMPUTE_STORAGE_BUS_IDE,COMPUTE_STORAGE_BUS_FDC,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_SSE42,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_SSE2,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_BMI2,COMPUTE_TRUSTED_CERTS,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_RESCUE_BFV,COMPUTE_DEVICE_TAGGING,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_CLMUL,HW_CPU_X86_BMI,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_MMX,HW_CPU_X86_SHA,HW_CPU_X86_AMD_SVM,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_AVX2,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_FMA3,HW_CPU_X86_AESNI,HW_CPU_X86_SVM,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_F16C,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_ABM,COMPUTE_ACCELERATORS,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_NODE,HW_CPU_X86_SSE,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_GRAPHICS_MODEL_VGA _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Dec 06 10:10:43 compute-0 nova_compute[254819]: 2025-12-06 10:10:43.763 254824 DEBUG oslo_concurrency.processutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 10:10:43 compute-0 nova_compute[254819]: 2025-12-06 10:10:43.897 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:10:44 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 06 10:10:44 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3976439775' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 10:10:44 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v888: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Dec 06 10:10:44 compute-0 nova_compute[254819]: 2025-12-06 10:10:44.213 254824 DEBUG oslo_concurrency.processutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 10:10:44 compute-0 nova_compute[254819]: 2025-12-06 10:10:44.220 254824 DEBUG nova.compute.provider_tree [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Inventory has not changed in ProviderTree for provider: 06a9c7d1-c74c-47ea-9e97-16acfab6aa88 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 10:10:44 compute-0 nova_compute[254819]: 2025-12-06 10:10:44.237 254824 DEBUG nova.scheduler.client.report [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Inventory has not changed for provider 06a9c7d1-c74c-47ea-9e97-16acfab6aa88 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 10:10:44 compute-0 nova_compute[254819]: 2025-12-06 10:10:44.260 254824 DEBUG nova.compute.resource_tracker [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 10:10:44 compute-0 nova_compute[254819]: 2025-12-06 10:10:44.260 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.759s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:10:44 compute-0 ceph-mon[74327]: from='client.? 192.168.122.100:0/3976439775' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 10:10:44 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:10:44 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d4002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:10:44 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:10:44 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c00a7e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:10:44 compute-0 nova_compute[254819]: 2025-12-06 10:10:44.748 254824 DEBUG oslo_concurrency.lockutils [None req-7f16c6ac-3b6d-4683-bc4b-5ce95884b479 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Acquiring lock "interface-467f8e9a-e166-409e-920c-689fea4ea3f6-None" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:10:44 compute-0 nova_compute[254819]: 2025-12-06 10:10:44.749 254824 DEBUG oslo_concurrency.lockutils [None req-7f16c6ac-3b6d-4683-bc4b-5ce95884b479 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lock "interface-467f8e9a-e166-409e-920c-689fea4ea3f6-None" acquired by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:10:44 compute-0 nova_compute[254819]: 2025-12-06 10:10:44.750 254824 DEBUG nova.objects.instance [None req-7f16c6ac-3b6d-4683-bc4b-5ce95884b479 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lazy-loading 'flavor' on Instance uuid 467f8e9a-e166-409e-920c-689fea4ea3f6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 10:10:44 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:10:45 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:10:45 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6608003e00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:10:45 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:10:45 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:10:45 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:10:45.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:10:45 compute-0 nova_compute[254819]: 2025-12-06 10:10:45.343 254824 DEBUG nova.objects.instance [None req-7f16c6ac-3b6d-4683-bc4b-5ce95884b479 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lazy-loading 'pci_requests' on Instance uuid 467f8e9a-e166-409e-920c-689fea4ea3f6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 10:10:45 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:10:45 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:10:45 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:10:45.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:10:45 compute-0 nova_compute[254819]: 2025-12-06 10:10:45.368 254824 DEBUG nova.network.neutron [None req-7f16c6ac-3b6d-4683-bc4b-5ce95884b479 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 467f8e9a-e166-409e-920c-689fea4ea3f6] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 06 10:10:45 compute-0 podman[270061]: 2025-12-06 10:10:45.4515251 +0000 UTC m=+0.073320540 container health_status ec1e66d2175087ad7a28a9a6b5a784e30128df3f5e268959d009e364cd5120f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125)
Dec 06 10:10:45 compute-0 nova_compute[254819]: 2025-12-06 10:10:45.515 254824 DEBUG nova.policy [None req-7f16c6ac-3b6d-4683-bc4b-5ce95884b479 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '03615580775245e6ae335ee9d785611f', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '92b402c8d3e2476abc98be42a1e6d34e', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 06 10:10:45 compute-0 nova_compute[254819]: 2025-12-06 10:10:45.941 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:10:46 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v889: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 137 KiB/s rd, 106 KiB/s wr, 31 op/s
Dec 06 10:10:46 compute-0 nova_compute[254819]: 2025-12-06 10:10:46.243 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:10:46 compute-0 nova_compute[254819]: 2025-12-06 10:10:46.266 254824 DEBUG nova.network.neutron [None req-7f16c6ac-3b6d-4683-bc4b-5ce95884b479 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 467f8e9a-e166-409e-920c-689fea4ea3f6] Successfully created port: 88b1b4c6-36ba-46c8-baa2-da5b266af4d1 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 06 10:10:46 compute-0 nova_compute[254819]: 2025-12-06 10:10:46.270 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:10:46 compute-0 nova_compute[254819]: 2025-12-06 10:10:46.270 254824 DEBUG nova.compute.manager [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 06 10:10:46 compute-0 nova_compute[254819]: 2025-12-06 10:10:46.270 254824 DEBUG nova.compute.manager [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 06 10:10:46 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 06 10:10:46 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/755035782' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 10:10:46 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 06 10:10:46 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/755035782' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 10:10:46 compute-0 nova_compute[254819]: 2025-12-06 10:10:46.498 254824 INFO nova.compute.manager [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] [instance: 467f8e9a-e166-409e-920c-689fea4ea3f6] Updating ports in neutron
Dec 06 10:10:46 compute-0 nova_compute[254819]: 2025-12-06 10:10:46.669 254824 INFO nova.network.neutron [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] [instance: 467f8e9a-e166-409e-920c-689fea4ea3f6] Updating port 88b1b4c6-36ba-46c8-baa2-da5b266af4d1 with attributes {'binding:host_id': 'compute-0.ctlplane.example.com', 'device_owner': 'compute:nova'}
Dec 06 10:10:46 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:10:46 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65e8003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:10:46 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:10:46 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65e8004140 fd 49 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:10:46 compute-0 ceph-mon[74327]: pgmap v888: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Dec 06 10:10:47 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:10:47 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65e8003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:10:47 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:10:47 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:10:47 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:10:47.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:10:47 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:10:47.298Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 06 10:10:47 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:10:47 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:10:47 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:10:47.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:10:47 compute-0 nova_compute[254819]: 2025-12-06 10:10:47.536 254824 DEBUG nova.network.neutron [None req-7f16c6ac-3b6d-4683-bc4b-5ce95884b479 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 467f8e9a-e166-409e-920c-689fea4ea3f6] Successfully updated port: 88b1b4c6-36ba-46c8-baa2-da5b266af4d1 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 06 10:10:47 compute-0 nova_compute[254819]: 2025-12-06 10:10:47.550 254824 DEBUG oslo_concurrency.lockutils [None req-7f16c6ac-3b6d-4683-bc4b-5ce95884b479 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Acquiring lock "refresh_cache-467f8e9a-e166-409e-920c-689fea4ea3f6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 10:10:47 compute-0 nova_compute[254819]: 2025-12-06 10:10:47.550 254824 DEBUG oslo_concurrency.lockutils [None req-7f16c6ac-3b6d-4683-bc4b-5ce95884b479 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Acquired lock "refresh_cache-467f8e9a-e166-409e-920c-689fea4ea3f6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 10:10:47 compute-0 nova_compute[254819]: 2025-12-06 10:10:47.550 254824 DEBUG nova.network.neutron [None req-7f16c6ac-3b6d-4683-bc4b-5ce95884b479 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 467f8e9a-e166-409e-920c-689fea4ea3f6] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 06 10:10:47 compute-0 nova_compute[254819]: 2025-12-06 10:10:47.663 254824 DEBUG nova.compute.manager [req-0d6f2182-3476-46c0-8e17-c056e0bc4fc1 req-fda49460-1d8d-44c1-95d6-29e4bbd58315 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 467f8e9a-e166-409e-920c-689fea4ea3f6] Received event network-changed-88b1b4c6-36ba-46c8-baa2-da5b266af4d1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 10:10:47 compute-0 nova_compute[254819]: 2025-12-06 10:10:47.663 254824 DEBUG nova.compute.manager [req-0d6f2182-3476-46c0-8e17-c056e0bc4fc1 req-fda49460-1d8d-44c1-95d6-29e4bbd58315 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 467f8e9a-e166-409e-920c-689fea4ea3f6] Refreshing instance network info cache due to event network-changed-88b1b4c6-36ba-46c8-baa2-da5b266af4d1. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 10:10:47 compute-0 nova_compute[254819]: 2025-12-06 10:10:47.663 254824 DEBUG oslo_concurrency.lockutils [req-0d6f2182-3476-46c0-8e17-c056e0bc4fc1 req-fda49460-1d8d-44c1-95d6-29e4bbd58315 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Acquiring lock "refresh_cache-467f8e9a-e166-409e-920c-689fea4ea3f6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 10:10:47 compute-0 ceph-mon[74327]: from='client.? 192.168.122.101:0/2163291013' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 10:10:47 compute-0 ceph-mon[74327]: pgmap v889: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 137 KiB/s rd, 106 KiB/s wr, 31 op/s
Dec 06 10:10:47 compute-0 ceph-mon[74327]: from='client.? 192.168.122.10:0/755035782' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 10:10:47 compute-0 ceph-mon[74327]: from='client.? 192.168.122.10:0/755035782' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 10:10:47 compute-0 ceph-mon[74327]: from='client.? 192.168.122.102:0/3587187352' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 10:10:47 compute-0 ceph-mon[74327]: from='client.? 192.168.122.101:0/2493443694' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 10:10:47 compute-0 ceph-mon[74327]: from='client.? 192.168.122.102:0/2299801742' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 10:10:48 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v890: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 138 KiB/s rd, 111 KiB/s wr, 31 op/s
Dec 06 10:10:48 compute-0 nova_compute[254819]: 2025-12-06 10:10:48.272 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Acquiring lock "refresh_cache-467f8e9a-e166-409e-920c-689fea4ea3f6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 10:10:48 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:10:48 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f4001cb0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:10:48 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:10:48 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c00a7e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:10:48 compute-0 nova_compute[254819]: 2025-12-06 10:10:48.899 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:10:49 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:10:49.011Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 06 10:10:49 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:10:49 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6608003e20 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:10:49 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:10:49 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 10:10:49 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:10:49.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 10:10:49 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:10:49 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:10:49 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:10:49.369 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:10:49 compute-0 nova_compute[254819]: 2025-12-06 10:10:49.770 254824 DEBUG nova.compute.manager [req-78cea3e8-0199-4ffa-9012-daeb983068eb req-433066e4-e95d-48e5-a2b9-3c71bbaf303d d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 467f8e9a-e166-409e-920c-689fea4ea3f6] Received event network-changed-88b1b4c6-36ba-46c8-baa2-da5b266af4d1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 10:10:49 compute-0 nova_compute[254819]: 2025-12-06 10:10:49.770 254824 DEBUG nova.compute.manager [req-78cea3e8-0199-4ffa-9012-daeb983068eb req-433066e4-e95d-48e5-a2b9-3c71bbaf303d d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 467f8e9a-e166-409e-920c-689fea4ea3f6] Refreshing instance network info cache due to event network-changed-88b1b4c6-36ba-46c8-baa2-da5b266af4d1. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 10:10:49 compute-0 nova_compute[254819]: 2025-12-06 10:10:49.770 254824 DEBUG oslo_concurrency.lockutils [req-78cea3e8-0199-4ffa-9012-daeb983068eb req-433066e4-e95d-48e5-a2b9-3c71bbaf303d d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Acquiring lock "refresh_cache-467f8e9a-e166-409e-920c-689fea4ea3f6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 10:10:49 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:10:49 compute-0 ceph-mon[74327]: pgmap v890: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 138 KiB/s rd, 111 KiB/s wr, 31 op/s
Dec 06 10:10:50 compute-0 nova_compute[254819]: 2025-12-06 10:10:50.171 254824 DEBUG nova.network.neutron [None req-7f16c6ac-3b6d-4683-bc4b-5ce95884b479 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 467f8e9a-e166-409e-920c-689fea4ea3f6] Updating instance_info_cache with network_info: [{"id": "ec2bc9a6-1578-4c92-b8c9-4d286a1a6f4b", "address": "fa:16:3e:64:9d:d4", "network": {"id": "4d76af3c-ede9-445b-bea0-ba96a2eaeddd", "bridge": "br-int", "label": "tempest-network-smoke--1753144487", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.211", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": null, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapec2bc9a6-15", "ovs_interfaceid": "ec2bc9a6-1578-4c92-b8c9-4d286a1a6f4b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "88b1b4c6-36ba-46c8-baa2-da5b266af4d1", "address": "fa:16:3e:9c:5b:44", "network": {"id": "af11da89-c29d-4ef1-80d5-4b619757b0ff", "bridge": "br-int", "label": "tempest-network-smoke--2039147327", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.24", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap88b1b4c6-36", "ovs_interfaceid": "88b1b4c6-36ba-46c8-baa2-da5b266af4d1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 10:10:50 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v891: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 16 KiB/s wr, 1 op/s
Dec 06 10:10:50 compute-0 nova_compute[254819]: 2025-12-06 10:10:50.194 254824 DEBUG oslo_concurrency.lockutils [None req-7f16c6ac-3b6d-4683-bc4b-5ce95884b479 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Releasing lock "refresh_cache-467f8e9a-e166-409e-920c-689fea4ea3f6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 10:10:50 compute-0 nova_compute[254819]: 2025-12-06 10:10:50.196 254824 DEBUG oslo_concurrency.lockutils [req-0d6f2182-3476-46c0-8e17-c056e0bc4fc1 req-fda49460-1d8d-44c1-95d6-29e4bbd58315 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Acquired lock "refresh_cache-467f8e9a-e166-409e-920c-689fea4ea3f6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 10:10:50 compute-0 nova_compute[254819]: 2025-12-06 10:10:50.197 254824 DEBUG nova.network.neutron [req-0d6f2182-3476-46c0-8e17-c056e0bc4fc1 req-fda49460-1d8d-44c1-95d6-29e4bbd58315 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 467f8e9a-e166-409e-920c-689fea4ea3f6] Refreshing network info cache for port 88b1b4c6-36ba-46c8-baa2-da5b266af4d1 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 06 10:10:50 compute-0 nova_compute[254819]: 2025-12-06 10:10:50.202 254824 DEBUG nova.virt.libvirt.vif [None req-7f16c6ac-3b6d-4683-bc4b-5ce95884b479 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T10:10:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-883828898',display_name='tempest-TestNetworkBasicOps-server-883828898',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-883828898',id=6,image_ref='9489b8a5-a798-4e26-87f9-59bb1eb2e6fd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBavG4AKWHlfpiq0SQasTveyxdMuqwUIBzXgDHnQ7us03WRPTjmnHIL9KdumxPOuSQ7mS9TjZaDU1Z0fZMB9bCP4vMT4dbs0/4ZtyRDMtJHhAJtsWO/6Dg3g/pdboWhC+A==',key_name='tempest-TestNetworkBasicOps-875879575',keypairs=<?>,launch_index=0,launched_at=2025-12-06T10:10:20Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='92b402c8d3e2476abc98be42a1e6d34e',ramdisk_id='',reservation_id='r-qxktas63',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='9489b8a5-a798-4e26-87f9-59bb1eb2e6fd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1971100882',owner_user_name='tempest-TestNetworkBasicOps-1971100882-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T10:10:20Z,user_data=None,user_id='03615580775245e6ae335ee9d785611f',uuid=467f8e9a-e166-409e-920c-689fea4ea3f6,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "88b1b4c6-36ba-46c8-baa2-da5b266af4d1", "address": "fa:16:3e:9c:5b:44", "network": {"id": "af11da89-c29d-4ef1-80d5-4b619757b0ff", "bridge": "br-int", "label": "tempest-network-smoke--2039147327", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.24", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap88b1b4c6-36", "ovs_interfaceid": "88b1b4c6-36ba-46c8-baa2-da5b266af4d1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 06 10:10:50 compute-0 nova_compute[254819]: 2025-12-06 10:10:50.203 254824 DEBUG nova.network.os_vif_util [None req-7f16c6ac-3b6d-4683-bc4b-5ce95884b479 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Converting VIF {"id": "88b1b4c6-36ba-46c8-baa2-da5b266af4d1", "address": "fa:16:3e:9c:5b:44", "network": {"id": "af11da89-c29d-4ef1-80d5-4b619757b0ff", "bridge": "br-int", "label": "tempest-network-smoke--2039147327", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.24", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap88b1b4c6-36", "ovs_interfaceid": "88b1b4c6-36ba-46c8-baa2-da5b266af4d1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 10:10:50 compute-0 nova_compute[254819]: 2025-12-06 10:10:50.204 254824 DEBUG nova.network.os_vif_util [None req-7f16c6ac-3b6d-4683-bc4b-5ce95884b479 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9c:5b:44,bridge_name='br-int',has_traffic_filtering=True,id=88b1b4c6-36ba-46c8-baa2-da5b266af4d1,network=Network(af11da89-c29d-4ef1-80d5-4b619757b0ff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap88b1b4c6-36') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 10:10:50 compute-0 nova_compute[254819]: 2025-12-06 10:10:50.205 254824 DEBUG os_vif [None req-7f16c6ac-3b6d-4683-bc4b-5ce95884b479 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:9c:5b:44,bridge_name='br-int',has_traffic_filtering=True,id=88b1b4c6-36ba-46c8-baa2-da5b266af4d1,network=Network(af11da89-c29d-4ef1-80d5-4b619757b0ff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap88b1b4c6-36') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 06 10:10:50 compute-0 nova_compute[254819]: 2025-12-06 10:10:50.206 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:10:50 compute-0 nova_compute[254819]: 2025-12-06 10:10:50.207 254824 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 10:10:50 compute-0 nova_compute[254819]: 2025-12-06 10:10:50.208 254824 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 10:10:50 compute-0 nova_compute[254819]: 2025-12-06 10:10:50.211 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:10:50 compute-0 nova_compute[254819]: 2025-12-06 10:10:50.212 254824 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap88b1b4c6-36, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 10:10:50 compute-0 nova_compute[254819]: 2025-12-06 10:10:50.213 254824 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap88b1b4c6-36, col_values=(('external_ids', {'iface-id': '88b1b4c6-36ba-46c8-baa2-da5b266af4d1', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:9c:5b:44', 'vm-uuid': '467f8e9a-e166-409e-920c-689fea4ea3f6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 10:10:50 compute-0 NetworkManager[48882]: <info>  [1765015850.2173] manager: (tap88b1b4c6-36): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/54)
Dec 06 10:10:50 compute-0 nova_compute[254819]: 2025-12-06 10:10:50.216 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:10:50 compute-0 nova_compute[254819]: 2025-12-06 10:10:50.223 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 10:10:50 compute-0 nova_compute[254819]: 2025-12-06 10:10:50.227 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:10:50 compute-0 nova_compute[254819]: 2025-12-06 10:10:50.228 254824 INFO os_vif [None req-7f16c6ac-3b6d-4683-bc4b-5ce95884b479 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:9c:5b:44,bridge_name='br-int',has_traffic_filtering=True,id=88b1b4c6-36ba-46c8-baa2-da5b266af4d1,network=Network(af11da89-c29d-4ef1-80d5-4b619757b0ff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap88b1b4c6-36')
Dec 06 10:10:50 compute-0 nova_compute[254819]: 2025-12-06 10:10:50.229 254824 DEBUG nova.virt.libvirt.vif [None req-7f16c6ac-3b6d-4683-bc4b-5ce95884b479 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T10:10:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-883828898',display_name='tempest-TestNetworkBasicOps-server-883828898',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-883828898',id=6,image_ref='9489b8a5-a798-4e26-87f9-59bb1eb2e6fd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBavG4AKWHlfpiq0SQasTveyxdMuqwUIBzXgDHnQ7us03WRPTjmnHIL9KdumxPOuSQ7mS9TjZaDU1Z0fZMB9bCP4vMT4dbs0/4ZtyRDMtJHhAJtsWO/6Dg3g/pdboWhC+A==',key_name='tempest-TestNetworkBasicOps-875879575',keypairs=<?>,launch_index=0,launched_at=2025-12-06T10:10:20Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='92b402c8d3e2476abc98be42a1e6d34e',ramdisk_id='',reservation_id='r-qxktas63',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='9489b8a5-a798-4e26-87f9-59bb1eb2e6fd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1971100882',owner_user_name='tempest-TestNetworkBasicOps-1971100882-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T10:10:20Z,user_data=None,user_id='03615580775245e6ae335ee9d785611f',uuid=467f8e9a-e166-409e-920c-689fea4ea3f6,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "88b1b4c6-36ba-46c8-baa2-da5b266af4d1", "address": "fa:16:3e:9c:5b:44", "network": {"id": "af11da89-c29d-4ef1-80d5-4b619757b0ff", "bridge": "br-int", "label": "tempest-network-smoke--2039147327", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.24", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap88b1b4c6-36", "ovs_interfaceid": "88b1b4c6-36ba-46c8-baa2-da5b266af4d1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 06 10:10:50 compute-0 nova_compute[254819]: 2025-12-06 10:10:50.229 254824 DEBUG nova.network.os_vif_util [None req-7f16c6ac-3b6d-4683-bc4b-5ce95884b479 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Converting VIF {"id": "88b1b4c6-36ba-46c8-baa2-da5b266af4d1", "address": "fa:16:3e:9c:5b:44", "network": {"id": "af11da89-c29d-4ef1-80d5-4b619757b0ff", "bridge": "br-int", "label": "tempest-network-smoke--2039147327", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.24", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap88b1b4c6-36", "ovs_interfaceid": "88b1b4c6-36ba-46c8-baa2-da5b266af4d1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 10:10:50 compute-0 nova_compute[254819]: 2025-12-06 10:10:50.230 254824 DEBUG nova.network.os_vif_util [None req-7f16c6ac-3b6d-4683-bc4b-5ce95884b479 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9c:5b:44,bridge_name='br-int',has_traffic_filtering=True,id=88b1b4c6-36ba-46c8-baa2-da5b266af4d1,network=Network(af11da89-c29d-4ef1-80d5-4b619757b0ff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap88b1b4c6-36') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 10:10:50 compute-0 nova_compute[254819]: 2025-12-06 10:10:50.232 254824 DEBUG nova.virt.libvirt.guest [None req-7f16c6ac-3b6d-4683-bc4b-5ce95884b479 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] attach device xml: <interface type="ethernet">
Dec 06 10:10:50 compute-0 nova_compute[254819]:   <mac address="fa:16:3e:9c:5b:44"/>
Dec 06 10:10:50 compute-0 nova_compute[254819]:   <model type="virtio"/>
Dec 06 10:10:50 compute-0 nova_compute[254819]:   <driver name="vhost" rx_queue_size="512"/>
Dec 06 10:10:50 compute-0 nova_compute[254819]:   <mtu size="1442"/>
Dec 06 10:10:50 compute-0 nova_compute[254819]:   <target dev="tap88b1b4c6-36"/>
Dec 06 10:10:50 compute-0 nova_compute[254819]: </interface>
Dec 06 10:10:50 compute-0 nova_compute[254819]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Dec 06 10:10:50 compute-0 kernel: tap88b1b4c6-36: entered promiscuous mode
Dec 06 10:10:50 compute-0 NetworkManager[48882]: <info>  [1765015850.2481] manager: (tap88b1b4c6-36): new Tun device (/org/freedesktop/NetworkManager/Devices/55)
Dec 06 10:10:50 compute-0 ovn_controller[152417]: 2025-12-06T10:10:50Z|00072|binding|INFO|Claiming lport 88b1b4c6-36ba-46c8-baa2-da5b266af4d1 for this chassis.
Dec 06 10:10:50 compute-0 ovn_controller[152417]: 2025-12-06T10:10:50Z|00073|binding|INFO|88b1b4c6-36ba-46c8-baa2-da5b266af4d1: Claiming fa:16:3e:9c:5b:44 10.100.0.24
Dec 06 10:10:50 compute-0 nova_compute[254819]: 2025-12-06 10:10:50.250 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:10:50 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:10:50.258 162267 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9c:5b:44 10.100.0.24'], port_security=['fa:16:3e:9c:5b:44 10.100.0.24'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.24/28', 'neutron:device_id': '467f8e9a-e166-409e-920c-689fea4ea3f6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-af11da89-c29d-4ef1-80d5-4b619757b0ff', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '92b402c8d3e2476abc98be42a1e6d34e', 'neutron:revision_number': '2', 'neutron:security_group_ids': '1e7cc18e-31f3-4bdb-821d-1683a210c530', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8f5b6720-4878-43e8-9823-306ee6c3568e, chassis=[<ovs.db.idl.Row object at 0x7f70c28558b0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f70c28558b0>], logical_port=88b1b4c6-36ba-46c8-baa2-da5b266af4d1) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 10:10:50 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:10:50.259 162267 INFO neutron.agent.ovn.metadata.agent [-] Port 88b1b4c6-36ba-46c8-baa2-da5b266af4d1 in datapath af11da89-c29d-4ef1-80d5-4b619757b0ff bound to our chassis
Dec 06 10:10:50 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:10:50.261 162267 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network af11da89-c29d-4ef1-80d5-4b619757b0ff
Dec 06 10:10:50 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:10:50.277 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[729d96c5-5f8c-4cae-a435-2987bbfb7bd1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:10:50 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:10:50.278 162267 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapaf11da89-c1 in ovnmeta-af11da89-c29d-4ef1-80d5-4b619757b0ff namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 06 10:10:50 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:10:50.280 260126 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapaf11da89-c0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 06 10:10:50 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:10:50.281 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[076e7d09-fd67-48bb-897e-8a882a943f5c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:10:50 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:10:50.282 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[a0b9ce65-9f21-46cd-9c0e-d246d488cdb4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:10:50 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:10:50.296 162385 DEBUG oslo.privsep.daemon [-] privsep: reply[da7178b2-ac6d-4f6a-a87b-b56c4b380053]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:10:50 compute-0 systemd-udevd[270097]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 10:10:50 compute-0 nova_compute[254819]: 2025-12-06 10:10:50.322 254824 DEBUG nova.virt.libvirt.driver [None req-7f16c6ac-3b6d-4683-bc4b-5ce95884b479 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 10:10:50 compute-0 nova_compute[254819]: 2025-12-06 10:10:50.323 254824 DEBUG nova.virt.libvirt.driver [None req-7f16c6ac-3b6d-4683-bc4b-5ce95884b479 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 10:10:50 compute-0 nova_compute[254819]: 2025-12-06 10:10:50.323 254824 DEBUG nova.virt.libvirt.driver [None req-7f16c6ac-3b6d-4683-bc4b-5ce95884b479 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] No VIF found with MAC fa:16:3e:64:9d:d4, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 06 10:10:50 compute-0 nova_compute[254819]: 2025-12-06 10:10:50.323 254824 DEBUG nova.virt.libvirt.driver [None req-7f16c6ac-3b6d-4683-bc4b-5ce95884b479 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] No VIF found with MAC fa:16:3e:9c:5b:44, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 06 10:10:50 compute-0 nova_compute[254819]: 2025-12-06 10:10:50.326 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:10:50 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:10:50.325 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[8b72e194-6753-4f67-a646-bd9e52d85640]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:10:50 compute-0 NetworkManager[48882]: <info>  [1765015850.3295] device (tap88b1b4c6-36): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 06 10:10:50 compute-0 ovn_controller[152417]: 2025-12-06T10:10:50Z|00074|binding|INFO|Setting lport 88b1b4c6-36ba-46c8-baa2-da5b266af4d1 ovn-installed in OVS
Dec 06 10:10:50 compute-0 ovn_controller[152417]: 2025-12-06T10:10:50Z|00075|binding|INFO|Setting lport 88b1b4c6-36ba-46c8-baa2-da5b266af4d1 up in Southbound
Dec 06 10:10:50 compute-0 NetworkManager[48882]: <info>  [1765015850.3305] device (tap88b1b4c6-36): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 06 10:10:50 compute-0 nova_compute[254819]: 2025-12-06 10:10:50.331 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:10:50 compute-0 nova_compute[254819]: 2025-12-06 10:10:50.357 254824 DEBUG nova.virt.libvirt.guest [None req-7f16c6ac-3b6d-4683-bc4b-5ce95884b479 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 06 10:10:50 compute-0 nova_compute[254819]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 06 10:10:50 compute-0 nova_compute[254819]:   <nova:name>tempest-TestNetworkBasicOps-server-883828898</nova:name>
Dec 06 10:10:50 compute-0 nova_compute[254819]:   <nova:creationTime>2025-12-06 10:10:50</nova:creationTime>
Dec 06 10:10:50 compute-0 nova_compute[254819]:   <nova:flavor name="m1.nano">
Dec 06 10:10:50 compute-0 nova_compute[254819]:     <nova:memory>128</nova:memory>
Dec 06 10:10:50 compute-0 nova_compute[254819]:     <nova:disk>1</nova:disk>
Dec 06 10:10:50 compute-0 nova_compute[254819]:     <nova:swap>0</nova:swap>
Dec 06 10:10:50 compute-0 nova_compute[254819]:     <nova:ephemeral>0</nova:ephemeral>
Dec 06 10:10:50 compute-0 nova_compute[254819]:     <nova:vcpus>1</nova:vcpus>
Dec 06 10:10:50 compute-0 nova_compute[254819]:   </nova:flavor>
Dec 06 10:10:50 compute-0 nova_compute[254819]:   <nova:owner>
Dec 06 10:10:50 compute-0 nova_compute[254819]:     <nova:user uuid="03615580775245e6ae335ee9d785611f">tempest-TestNetworkBasicOps-1971100882-project-member</nova:user>
Dec 06 10:10:50 compute-0 nova_compute[254819]:     <nova:project uuid="92b402c8d3e2476abc98be42a1e6d34e">tempest-TestNetworkBasicOps-1971100882</nova:project>
Dec 06 10:10:50 compute-0 nova_compute[254819]:   </nova:owner>
Dec 06 10:10:50 compute-0 nova_compute[254819]:   <nova:root type="image" uuid="9489b8a5-a798-4e26-87f9-59bb1eb2e6fd"/>
Dec 06 10:10:50 compute-0 nova_compute[254819]:   <nova:ports>
Dec 06 10:10:50 compute-0 nova_compute[254819]:     <nova:port uuid="ec2bc9a6-1578-4c92-b8c9-4d286a1a6f4b">
Dec 06 10:10:50 compute-0 nova_compute[254819]:       <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Dec 06 10:10:50 compute-0 nova_compute[254819]:     </nova:port>
Dec 06 10:10:50 compute-0 nova_compute[254819]:     <nova:port uuid="88b1b4c6-36ba-46c8-baa2-da5b266af4d1">
Dec 06 10:10:50 compute-0 nova_compute[254819]:       <nova:ip type="fixed" address="10.100.0.24" ipVersion="4"/>
Dec 06 10:10:50 compute-0 nova_compute[254819]:     </nova:port>
Dec 06 10:10:50 compute-0 nova_compute[254819]:   </nova:ports>
Dec 06 10:10:50 compute-0 nova_compute[254819]: </nova:instance>
Dec 06 10:10:50 compute-0 nova_compute[254819]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Dec 06 10:10:50 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:10:50.366 260145 DEBUG oslo.privsep.daemon [-] privsep: reply[b9cb642a-b045-43bf-a5de-70926e889be2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:10:50 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:10:50.371 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[6b4b999e-3aa0-4db8-98af-b1c4d7945864]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:10:50 compute-0 systemd-udevd[270099]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 10:10:50 compute-0 NetworkManager[48882]: <info>  [1765015850.3719] manager: (tapaf11da89-c0): new Veth device (/org/freedesktop/NetworkManager/Devices/56)
Dec 06 10:10:50 compute-0 nova_compute[254819]: 2025-12-06 10:10:50.378 254824 DEBUG oslo_concurrency.lockutils [None req-7f16c6ac-3b6d-4683-bc4b-5ce95884b479 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lock "interface-467f8e9a-e166-409e-920c-689fea4ea3f6-None" "released" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: held 5.629s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:10:50 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:10:50.400 260145 DEBUG oslo.privsep.daemon [-] privsep: reply[453b2ea9-59bf-41b9-ad56-1aa72752722c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:10:50 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:10:50.403 260145 DEBUG oslo.privsep.daemon [-] privsep: reply[55f98480-50f2-4a9f-b744-568e1f91e34b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:10:50 compute-0 NetworkManager[48882]: <info>  [1765015850.4241] device (tapaf11da89-c0): carrier: link connected
Dec 06 10:10:50 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:10:50.432 260145 DEBUG oslo.privsep.daemon [-] privsep: reply[03b06b54-9702-4b7e-80a8-586c1653cefe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:10:50 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:10:50.449 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[54e7aa72-08d8-4232-8434-d185af79fb22]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapaf11da89-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ae:fe:2e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 26], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 422091, 'reachable_time': 33829, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 270121, 'error': None, 'target': 'ovnmeta-af11da89-c29d-4ef1-80d5-4b619757b0ff', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:10:50 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:10:50.461 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[0e12f130-b6ba-4f52-82e1-e0bfb2546120]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feae:fe2e'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 422091, 'tstamp': 422091}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 270122, 'error': None, 'target': 'ovnmeta-af11da89-c29d-4ef1-80d5-4b619757b0ff', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:10:50 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:10:50.479 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[758e41a8-4976-4934-acc5-4da1bfa7bf97]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapaf11da89-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ae:fe:2e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 26], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 422091, 'reachable_time': 33829, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 270123, 'error': None, 'target': 'ovnmeta-af11da89-c29d-4ef1-80d5-4b619757b0ff', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:10:50 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:10:50.507 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[2d1fdc9e-63b2-4321-86cc-87217f4326d0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:10:50 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:10:50 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65e8004160 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:10:50 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:10:50.559 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[26102cdd-05e8-46e2-9f49-7a8eed779aa5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:10:50 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:10:50.560 162267 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapaf11da89-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 10:10:50 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:10:50.561 162267 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 10:10:50 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:10:50.561 162267 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapaf11da89-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 10:10:50 compute-0 nova_compute[254819]: 2025-12-06 10:10:50.563 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:10:50 compute-0 kernel: tapaf11da89-c0: entered promiscuous mode
Dec 06 10:10:50 compute-0 NetworkManager[48882]: <info>  [1765015850.5645] manager: (tapaf11da89-c0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/57)
Dec 06 10:10:50 compute-0 nova_compute[254819]: 2025-12-06 10:10:50.566 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:10:50 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:10:50.566 162267 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapaf11da89-c0, col_values=(('external_ids', {'iface-id': '11d93e6a-f3e6-434c-bb3f-39cb96f417cf'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 10:10:50 compute-0 ovn_controller[152417]: 2025-12-06T10:10:50Z|00076|binding|INFO|Releasing lport 11d93e6a-f3e6-434c-bb3f-39cb96f417cf from this chassis (sb_readonly=0)
Dec 06 10:10:50 compute-0 nova_compute[254819]: 2025-12-06 10:10:50.587 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:10:50 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:10:50.588 162267 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/af11da89-c29d-4ef1-80d5-4b619757b0ff.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/af11da89-c29d-4ef1-80d5-4b619757b0ff.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 06 10:10:50 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:10:50.589 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[58b1cd74-e926-4793-b197-93746faa3cdc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:10:50 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:10:50.590 162267 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 06 10:10:50 compute-0 ovn_metadata_agent[162262]: global
Dec 06 10:10:50 compute-0 ovn_metadata_agent[162262]:     log         /dev/log local0 debug
Dec 06 10:10:50 compute-0 ovn_metadata_agent[162262]:     log-tag     haproxy-metadata-proxy-af11da89-c29d-4ef1-80d5-4b619757b0ff
Dec 06 10:10:50 compute-0 ovn_metadata_agent[162262]:     user        root
Dec 06 10:10:50 compute-0 ovn_metadata_agent[162262]:     group       root
Dec 06 10:10:50 compute-0 ovn_metadata_agent[162262]:     maxconn     1024
Dec 06 10:10:50 compute-0 ovn_metadata_agent[162262]:     pidfile     /var/lib/neutron/external/pids/af11da89-c29d-4ef1-80d5-4b619757b0ff.pid.haproxy
Dec 06 10:10:50 compute-0 ovn_metadata_agent[162262]:     daemon
Dec 06 10:10:50 compute-0 ovn_metadata_agent[162262]: 
Dec 06 10:10:50 compute-0 ovn_metadata_agent[162262]: defaults
Dec 06 10:10:50 compute-0 ovn_metadata_agent[162262]:     log global
Dec 06 10:10:50 compute-0 ovn_metadata_agent[162262]:     mode http
Dec 06 10:10:50 compute-0 ovn_metadata_agent[162262]:     option httplog
Dec 06 10:10:50 compute-0 ovn_metadata_agent[162262]:     option dontlognull
Dec 06 10:10:50 compute-0 ovn_metadata_agent[162262]:     option http-server-close
Dec 06 10:10:50 compute-0 ovn_metadata_agent[162262]:     option forwardfor
Dec 06 10:10:50 compute-0 ovn_metadata_agent[162262]:     retries                 3
Dec 06 10:10:50 compute-0 ovn_metadata_agent[162262]:     timeout http-request    30s
Dec 06 10:10:50 compute-0 ovn_metadata_agent[162262]:     timeout connect         30s
Dec 06 10:10:50 compute-0 ovn_metadata_agent[162262]:     timeout client          32s
Dec 06 10:10:50 compute-0 ovn_metadata_agent[162262]:     timeout server          32s
Dec 06 10:10:50 compute-0 ovn_metadata_agent[162262]:     timeout http-keep-alive 30s
Dec 06 10:10:50 compute-0 ovn_metadata_agent[162262]: 
Dec 06 10:10:50 compute-0 ovn_metadata_agent[162262]: 
Dec 06 10:10:50 compute-0 ovn_metadata_agent[162262]: listen listener
Dec 06 10:10:50 compute-0 ovn_metadata_agent[162262]:     bind 169.254.169.254:80
Dec 06 10:10:50 compute-0 ovn_metadata_agent[162262]:     server metadata /var/lib/neutron/metadata_proxy
Dec 06 10:10:50 compute-0 ovn_metadata_agent[162262]:     http-request add-header X-OVN-Network-ID af11da89-c29d-4ef1-80d5-4b619757b0ff
Dec 06 10:10:50 compute-0 ovn_metadata_agent[162262]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 06 10:10:50 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:10:50.591 162267 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-af11da89-c29d-4ef1-80d5-4b619757b0ff', 'env', 'PROCESS_TAG=haproxy-af11da89-c29d-4ef1-80d5-4b619757b0ff', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/af11da89-c29d-4ef1-80d5-4b619757b0ff.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Dec 06 10:10:50 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:10:50 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f4001cb0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:10:50 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:10:50] "GET /metrics HTTP/1.1" 200 48471 "" "Prometheus/2.51.0"
Dec 06 10:10:50 compute-0 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:10:50] "GET /metrics HTTP/1.1" 200 48471 "" "Prometheus/2.51.0"
Dec 06 10:10:50 compute-0 nova_compute[254819]: 2025-12-06 10:10:50.943 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:10:50 compute-0 podman[270155]: 2025-12-06 10:10:50.956564699 +0000 UTC m=+0.054610710 container create 351c3f74895b352c68f68591075c2276eb2709d2dce02e805682d48f4ab285d2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=neutron-haproxy-ovnmeta-af11da89-c29d-4ef1-80d5-4b619757b0ff, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team)
Dec 06 10:10:50 compute-0 systemd[1]: Started libpod-conmon-351c3f74895b352c68f68591075c2276eb2709d2dce02e805682d48f4ab285d2.scope.
Dec 06 10:10:51 compute-0 podman[270155]: 2025-12-06 10:10:50.926230258 +0000 UTC m=+0.024276319 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3
Dec 06 10:10:51 compute-0 systemd[1]: Started libcrun container.
Dec 06 10:10:51 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:10:51 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c00a7e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:10:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/65d556cdea8f60788a15893bebf04b8e9b5c638ceed2e80d5a7f1c58c122409c/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 06 10:10:51 compute-0 podman[270155]: 2025-12-06 10:10:51.042036344 +0000 UTC m=+0.140082375 container init 351c3f74895b352c68f68591075c2276eb2709d2dce02e805682d48f4ab285d2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=neutron-haproxy-ovnmeta-af11da89-c29d-4ef1-80d5-4b619757b0ff, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 10:10:51 compute-0 podman[270155]: 2025-12-06 10:10:51.047440908 +0000 UTC m=+0.145486919 container start 351c3f74895b352c68f68591075c2276eb2709d2dce02e805682d48f4ab285d2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=neutron-haproxy-ovnmeta-af11da89-c29d-4ef1-80d5-4b619757b0ff, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3)
Dec 06 10:10:51 compute-0 neutron-haproxy-ovnmeta-af11da89-c29d-4ef1-80d5-4b619757b0ff[270171]: [NOTICE]   (270175) : New worker (270177) forked
Dec 06 10:10:51 compute-0 neutron-haproxy-ovnmeta-af11da89-c29d-4ef1-80d5-4b619757b0ff[270171]: [NOTICE]   (270175) : Loading success.
Dec 06 10:10:51 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:10:51 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:10:51 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:10:51.202 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:10:51 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:10:51 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:10:51 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:10:51.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:10:51 compute-0 nova_compute[254819]: 2025-12-06 10:10:51.939 254824 DEBUG nova.compute.manager [req-36770c0c-efab-49a7-ba2f-3bef0b768c1f req-b40e13f9-7f61-4a46-b96b-0158507b863e d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 467f8e9a-e166-409e-920c-689fea4ea3f6] Received event network-vif-plugged-88b1b4c6-36ba-46c8-baa2-da5b266af4d1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 10:10:51 compute-0 nova_compute[254819]: 2025-12-06 10:10:51.940 254824 DEBUG oslo_concurrency.lockutils [req-36770c0c-efab-49a7-ba2f-3bef0b768c1f req-b40e13f9-7f61-4a46-b96b-0158507b863e d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Acquiring lock "467f8e9a-e166-409e-920c-689fea4ea3f6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:10:51 compute-0 ceph-mon[74327]: pgmap v891: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 16 KiB/s wr, 1 op/s
Dec 06 10:10:51 compute-0 nova_compute[254819]: 2025-12-06 10:10:51.940 254824 DEBUG oslo_concurrency.lockutils [req-36770c0c-efab-49a7-ba2f-3bef0b768c1f req-b40e13f9-7f61-4a46-b96b-0158507b863e d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Lock "467f8e9a-e166-409e-920c-689fea4ea3f6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:10:51 compute-0 nova_compute[254819]: 2025-12-06 10:10:51.940 254824 DEBUG oslo_concurrency.lockutils [req-36770c0c-efab-49a7-ba2f-3bef0b768c1f req-b40e13f9-7f61-4a46-b96b-0158507b863e d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Lock "467f8e9a-e166-409e-920c-689fea4ea3f6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:10:51 compute-0 nova_compute[254819]: 2025-12-06 10:10:51.941 254824 DEBUG nova.compute.manager [req-36770c0c-efab-49a7-ba2f-3bef0b768c1f req-b40e13f9-7f61-4a46-b96b-0158507b863e d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 467f8e9a-e166-409e-920c-689fea4ea3f6] No waiting events found dispatching network-vif-plugged-88b1b4c6-36ba-46c8-baa2-da5b266af4d1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 10:10:51 compute-0 nova_compute[254819]: 2025-12-06 10:10:51.941 254824 WARNING nova.compute.manager [req-36770c0c-efab-49a7-ba2f-3bef0b768c1f req-b40e13f9-7f61-4a46-b96b-0158507b863e d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 467f8e9a-e166-409e-920c-689fea4ea3f6] Received unexpected event network-vif-plugged-88b1b4c6-36ba-46c8-baa2-da5b266af4d1 for instance with vm_state active and task_state None.
Dec 06 10:10:52 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v892: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 16 KiB/s wr, 1 op/s
Dec 06 10:10:52 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:10:52 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6608003e40 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:10:52 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:10:52 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65e8004160 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:10:52 compute-0 sudo[270188]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 10:10:52 compute-0 sudo[270188]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:10:52 compute-0 sudo[270188]: pam_unix(sudo:session): session closed for user root
Dec 06 10:10:53 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:10:53 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f4001cb0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:10:53 compute-0 nova_compute[254819]: 2025-12-06 10:10:53.124 254824 DEBUG nova.network.neutron [req-0d6f2182-3476-46c0-8e17-c056e0bc4fc1 req-fda49460-1d8d-44c1-95d6-29e4bbd58315 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 467f8e9a-e166-409e-920c-689fea4ea3f6] Updated VIF entry in instance network info cache for port 88b1b4c6-36ba-46c8-baa2-da5b266af4d1. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 06 10:10:53 compute-0 nova_compute[254819]: 2025-12-06 10:10:53.124 254824 DEBUG nova.network.neutron [req-0d6f2182-3476-46c0-8e17-c056e0bc4fc1 req-fda49460-1d8d-44c1-95d6-29e4bbd58315 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 467f8e9a-e166-409e-920c-689fea4ea3f6] Updating instance_info_cache with network_info: [{"id": "ec2bc9a6-1578-4c92-b8c9-4d286a1a6f4b", "address": "fa:16:3e:64:9d:d4", "network": {"id": "4d76af3c-ede9-445b-bea0-ba96a2eaeddd", "bridge": "br-int", "label": "tempest-network-smoke--1753144487", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.211", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": null, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapec2bc9a6-15", "ovs_interfaceid": "ec2bc9a6-1578-4c92-b8c9-4d286a1a6f4b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "88b1b4c6-36ba-46c8-baa2-da5b266af4d1", "address": "fa:16:3e:9c:5b:44", "network": {"id": "af11da89-c29d-4ef1-80d5-4b619757b0ff", "bridge": "br-int", "label": "tempest-network-smoke--2039147327", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.24", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap88b1b4c6-36", "ovs_interfaceid": "88b1b4c6-36ba-46c8-baa2-da5b266af4d1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 10:10:53 compute-0 nova_compute[254819]: 2025-12-06 10:10:53.145 254824 DEBUG oslo_concurrency.lockutils [req-0d6f2182-3476-46c0-8e17-c056e0bc4fc1 req-fda49460-1d8d-44c1-95d6-29e4bbd58315 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Releasing lock "refresh_cache-467f8e9a-e166-409e-920c-689fea4ea3f6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 10:10:53 compute-0 nova_compute[254819]: 2025-12-06 10:10:53.146 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Acquired lock "refresh_cache-467f8e9a-e166-409e-920c-689fea4ea3f6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 10:10:53 compute-0 nova_compute[254819]: 2025-12-06 10:10:53.147 254824 DEBUG nova.network.neutron [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] [instance: 467f8e9a-e166-409e-920c-689fea4ea3f6] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec 06 10:10:53 compute-0 nova_compute[254819]: 2025-12-06 10:10:53.147 254824 DEBUG nova.objects.instance [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Lazy-loading 'info_cache' on Instance uuid 467f8e9a-e166-409e-920c-689fea4ea3f6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 10:10:53 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:10:53 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 10:10:53 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:10:53.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 10:10:53 compute-0 ovn_controller[152417]: 2025-12-06T10:10:53Z|00012|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:9c:5b:44 10.100.0.24
Dec 06 10:10:53 compute-0 ovn_controller[152417]: 2025-12-06T10:10:53Z|00013|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:9c:5b:44 10.100.0.24
Dec 06 10:10:53 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:10:53 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:10:53 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:10:53.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:10:53 compute-0 ceph-mon[74327]: pgmap v892: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 16 KiB/s wr, 1 op/s
Dec 06 10:10:53 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 06 10:10:53 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 10:10:54 compute-0 nova_compute[254819]: 2025-12-06 10:10:54.005 254824 DEBUG nova.compute.manager [req-4ac39618-58c9-4bc1-b947-af8a00cba19e req-3987cc29-402c-4c10-bd4b-09b6552a3849 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 467f8e9a-e166-409e-920c-689fea4ea3f6] Received event network-vif-plugged-88b1b4c6-36ba-46c8-baa2-da5b266af4d1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 10:10:54 compute-0 nova_compute[254819]: 2025-12-06 10:10:54.006 254824 DEBUG oslo_concurrency.lockutils [req-4ac39618-58c9-4bc1-b947-af8a00cba19e req-3987cc29-402c-4c10-bd4b-09b6552a3849 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Acquiring lock "467f8e9a-e166-409e-920c-689fea4ea3f6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:10:54 compute-0 nova_compute[254819]: 2025-12-06 10:10:54.006 254824 DEBUG oslo_concurrency.lockutils [req-4ac39618-58c9-4bc1-b947-af8a00cba19e req-3987cc29-402c-4c10-bd4b-09b6552a3849 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Lock "467f8e9a-e166-409e-920c-689fea4ea3f6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:10:54 compute-0 nova_compute[254819]: 2025-12-06 10:10:54.006 254824 DEBUG oslo_concurrency.lockutils [req-4ac39618-58c9-4bc1-b947-af8a00cba19e req-3987cc29-402c-4c10-bd4b-09b6552a3849 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Lock "467f8e9a-e166-409e-920c-689fea4ea3f6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:10:54 compute-0 nova_compute[254819]: 2025-12-06 10:10:54.007 254824 DEBUG nova.compute.manager [req-4ac39618-58c9-4bc1-b947-af8a00cba19e req-3987cc29-402c-4c10-bd4b-09b6552a3849 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 467f8e9a-e166-409e-920c-689fea4ea3f6] No waiting events found dispatching network-vif-plugged-88b1b4c6-36ba-46c8-baa2-da5b266af4d1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 10:10:54 compute-0 nova_compute[254819]: 2025-12-06 10:10:54.007 254824 WARNING nova.compute.manager [req-4ac39618-58c9-4bc1-b947-af8a00cba19e req-3987cc29-402c-4c10-bd4b-09b6552a3849 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 467f8e9a-e166-409e-920c-689fea4ea3f6] Received unexpected event network-vif-plugged-88b1b4c6-36ba-46c8-baa2-da5b266af4d1 for instance with vm_state active and task_state None.
Dec 06 10:10:54 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 10:10:54 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 10:10:54 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 10:10:54 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 10:10:54 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 10:10:54 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 10:10:54 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v893: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 3.3 KiB/s rd, 20 KiB/s wr, 1 op/s
Dec 06 10:10:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:10:54.242 162267 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:10:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:10:54.243 162267 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:10:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:10:54.244 162267 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:10:54 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:10:54 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c00a7e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:10:54 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:10:54 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65e8004160 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:10:54 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:10:54 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 10:10:54 compute-0 ceph-mon[74327]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #57. Immutable memtables: 0.
Dec 06 10:10:54 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:10:54.984140) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 06 10:10:54 compute-0 ceph-mon[74327]: rocksdb: [db/flush_job.cc:856] [default] [JOB 29] Flushing memtable with next log file: 57
Dec 06 10:10:54 compute-0 ceph-mon[74327]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765015854984200, "job": 29, "event": "flush_started", "num_memtables": 1, "num_entries": 2130, "num_deletes": 251, "total_data_size": 4329660, "memory_usage": 4393624, "flush_reason": "Manual Compaction"}
Dec 06 10:10:54 compute-0 ceph-mon[74327]: rocksdb: [db/flush_job.cc:885] [default] [JOB 29] Level-0 flush table #58: started
Dec 06 10:10:55 compute-0 ceph-mon[74327]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765015855008084, "cf_name": "default", "job": 29, "event": "table_file_creation", "file_number": 58, "file_size": 4184524, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 24856, "largest_seqno": 26985, "table_properties": {"data_size": 4174854, "index_size": 6100, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2501, "raw_key_size": 20016, "raw_average_key_size": 20, "raw_value_size": 4155559, "raw_average_value_size": 4236, "num_data_blocks": 267, "num_entries": 981, "num_filter_entries": 981, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765015644, "oldest_key_time": 1765015644, "file_creation_time": 1765015854, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "423e8366-3852-4d2b-aa53-87abab31aff3", "db_session_id": "4WBX5WA2U4DRQ0QUUFCR", "orig_file_number": 58, "seqno_to_time_mapping": "N/A"}}
Dec 06 10:10:55 compute-0 ceph-mon[74327]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 29] Flush lasted 24027 microseconds, and 9523 cpu microseconds.
Dec 06 10:10:55 compute-0 ceph-mon[74327]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 10:10:55 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:10:55.008166) [db/flush_job.cc:967] [default] [JOB 29] Level-0 flush table #58: 4184524 bytes OK
Dec 06 10:10:55 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:10:55.008197) [db/memtable_list.cc:519] [default] Level-0 commit table #58 started
Dec 06 10:10:55 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:10:55.026893) [db/memtable_list.cc:722] [default] Level-0 commit table #58: memtable #1 done
Dec 06 10:10:55 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:10:55.026925) EVENT_LOG_v1 {"time_micros": 1765015855026917, "job": 29, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 06 10:10:55 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:10:55.026949) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 06 10:10:55 compute-0 ceph-mon[74327]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 29] Try to delete WAL files size 4320895, prev total WAL file size 4320895, number of live WAL files 2.
Dec 06 10:10:55 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:10:55 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6608003e60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:10:55 compute-0 ceph-mon[74327]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000054.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 10:10:55 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:10:55.029286) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032303038' seq:72057594037927935, type:22 .. '7061786F730032323630' seq:0, type:0; will stop at (end)
Dec 06 10:10:55 compute-0 ceph-mon[74327]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 30] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 06 10:10:55 compute-0 ceph-mon[74327]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 29 Base level 0, inputs: [58(4086KB)], [56(12MB)]
Dec 06 10:10:55 compute-0 ceph-mon[74327]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765015855029364, "job": 30, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [58], "files_L6": [56], "score": -1, "input_data_size": 17518425, "oldest_snapshot_seqno": -1}
Dec 06 10:10:55 compute-0 ceph-mon[74327]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 30] Generated table #59: 5891 keys, 15448072 bytes, temperature: kUnknown
Dec 06 10:10:55 compute-0 ceph-mon[74327]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765015855183607, "cf_name": "default", "job": 30, "event": "table_file_creation", "file_number": 59, "file_size": 15448072, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 15407288, "index_size": 24930, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 14789, "raw_key_size": 149649, "raw_average_key_size": 25, "raw_value_size": 15299356, "raw_average_value_size": 2597, "num_data_blocks": 1018, "num_entries": 5891, "num_filter_entries": 5891, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765013861, "oldest_key_time": 0, "file_creation_time": 1765015855, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "423e8366-3852-4d2b-aa53-87abab31aff3", "db_session_id": "4WBX5WA2U4DRQ0QUUFCR", "orig_file_number": 59, "seqno_to_time_mapping": "N/A"}}
Dec 06 10:10:55 compute-0 ceph-mon[74327]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 10:10:55 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:10:55.184092) [db/compaction/compaction_job.cc:1663] [default] [JOB 30] Compacted 1@0 + 1@6 files to L6 => 15448072 bytes
Dec 06 10:10:55 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:10:55.185725) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 113.5 rd, 100.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(4.0, 12.7 +0.0 blob) out(14.7 +0.0 blob), read-write-amplify(7.9) write-amplify(3.7) OK, records in: 6411, records dropped: 520 output_compression: NoCompression
Dec 06 10:10:55 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:10:55.185761) EVENT_LOG_v1 {"time_micros": 1765015855185743, "job": 30, "event": "compaction_finished", "compaction_time_micros": 154363, "compaction_time_cpu_micros": 55100, "output_level": 6, "num_output_files": 1, "total_output_size": 15448072, "num_input_records": 6411, "num_output_records": 5891, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 06 10:10:55 compute-0 ceph-mon[74327]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000058.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 10:10:55 compute-0 ceph-mon[74327]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765015855187424, "job": 30, "event": "table_file_deletion", "file_number": 58}
Dec 06 10:10:55 compute-0 ceph-mon[74327]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000056.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 10:10:55 compute-0 ceph-mon[74327]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765015855192155, "job": 30, "event": "table_file_deletion", "file_number": 56}
Dec 06 10:10:55 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:10:55.029078) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 10:10:55 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:10:55.192345) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 10:10:55 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:10:55.192355) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 10:10:55 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:10:55.192362) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 10:10:55 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:10:55.192365) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 10:10:55 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:10:55.192368) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 10:10:55 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:10:55 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:10:55 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:10:55.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:10:55 compute-0 nova_compute[254819]: 2025-12-06 10:10:55.216 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:10:55 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:10:55 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 10:10:55 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:10:55.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 10:10:55 compute-0 nova_compute[254819]: 2025-12-06 10:10:55.803 254824 DEBUG nova.network.neutron [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] [instance: 467f8e9a-e166-409e-920c-689fea4ea3f6] Updating instance_info_cache with network_info: [{"id": "ec2bc9a6-1578-4c92-b8c9-4d286a1a6f4b", "address": "fa:16:3e:64:9d:d4", "network": {"id": "4d76af3c-ede9-445b-bea0-ba96a2eaeddd", "bridge": "br-int", "label": "tempest-network-smoke--1753144487", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.211", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapec2bc9a6-15", "ovs_interfaceid": "ec2bc9a6-1578-4c92-b8c9-4d286a1a6f4b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "88b1b4c6-36ba-46c8-baa2-da5b266af4d1", "address": "fa:16:3e:9c:5b:44", "network": {"id": "af11da89-c29d-4ef1-80d5-4b619757b0ff", "bridge": "br-int", "label": "tempest-network-smoke--2039147327", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.24", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap88b1b4c6-36", "ovs_interfaceid": "88b1b4c6-36ba-46c8-baa2-da5b266af4d1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 10:10:55 compute-0 nova_compute[254819]: 2025-12-06 10:10:55.824 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Releasing lock "refresh_cache-467f8e9a-e166-409e-920c-689fea4ea3f6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 10:10:55 compute-0 nova_compute[254819]: 2025-12-06 10:10:55.824 254824 DEBUG nova.compute.manager [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] [instance: 467f8e9a-e166-409e-920c-689fea4ea3f6] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Dec 06 10:10:55 compute-0 nova_compute[254819]: 2025-12-06 10:10:55.824 254824 DEBUG oslo_concurrency.lockutils [req-78cea3e8-0199-4ffa-9012-daeb983068eb req-433066e4-e95d-48e5-a2b9-3c71bbaf303d d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Acquired lock "refresh_cache-467f8e9a-e166-409e-920c-689fea4ea3f6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 10:10:55 compute-0 nova_compute[254819]: 2025-12-06 10:10:55.825 254824 DEBUG nova.network.neutron [req-78cea3e8-0199-4ffa-9012-daeb983068eb req-433066e4-e95d-48e5-a2b9-3c71bbaf303d d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 467f8e9a-e166-409e-920c-689fea4ea3f6] Refreshing network info cache for port 88b1b4c6-36ba-46c8-baa2-da5b266af4d1 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 06 10:10:55 compute-0 nova_compute[254819]: 2025-12-06 10:10:55.825 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:10:55 compute-0 nova_compute[254819]: 2025-12-06 10:10:55.826 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:10:55 compute-0 nova_compute[254819]: 2025-12-06 10:10:55.826 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:10:55 compute-0 nova_compute[254819]: 2025-12-06 10:10:55.826 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:10:55 compute-0 nova_compute[254819]: 2025-12-06 10:10:55.826 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:10:55 compute-0 nova_compute[254819]: 2025-12-06 10:10:55.826 254824 DEBUG nova.compute.manager [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 06 10:10:55 compute-0 nova_compute[254819]: 2025-12-06 10:10:55.827 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:10:55 compute-0 nova_compute[254819]: 2025-12-06 10:10:55.827 254824 DEBUG nova.compute.manager [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Dec 06 10:10:55 compute-0 nova_compute[254819]: 2025-12-06 10:10:55.853 254824 DEBUG nova.compute.manager [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Dec 06 10:10:55 compute-0 nova_compute[254819]: 2025-12-06 10:10:55.854 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:10:55 compute-0 nova_compute[254819]: 2025-12-06 10:10:55.854 254824 DEBUG nova.compute.manager [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Dec 06 10:10:55 compute-0 ceph-mon[74327]: pgmap v893: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 3.3 KiB/s rd, 20 KiB/s wr, 1 op/s
Dec 06 10:10:55 compute-0 nova_compute[254819]: 2025-12-06 10:10:55.994 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:10:56 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v894: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 2.6 KiB/s rd, 8.3 KiB/s wr, 1 op/s
Dec 06 10:10:56 compute-0 nova_compute[254819]: 2025-12-06 10:10:56.365 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:10:56 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:10:56 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6608003e60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:10:56 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:10:56 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c00a7e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:10:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:10:57 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65e8004160 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:10:57 compute-0 ceph-mon[74327]: pgmap v894: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 2.6 KiB/s rd, 8.3 KiB/s wr, 1 op/s
Dec 06 10:10:57 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:10:57 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:10:57 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:10:57.209 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:10:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:10:57.299Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 06 10:10:57 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:10:57 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:10:57 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:10:57.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:10:57 compute-0 nova_compute[254819]: 2025-12-06 10:10:57.574 254824 DEBUG nova.network.neutron [req-78cea3e8-0199-4ffa-9012-daeb983068eb req-433066e4-e95d-48e5-a2b9-3c71bbaf303d d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 467f8e9a-e166-409e-920c-689fea4ea3f6] Updated VIF entry in instance network info cache for port 88b1b4c6-36ba-46c8-baa2-da5b266af4d1. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 06 10:10:57 compute-0 nova_compute[254819]: 2025-12-06 10:10:57.575 254824 DEBUG nova.network.neutron [req-78cea3e8-0199-4ffa-9012-daeb983068eb req-433066e4-e95d-48e5-a2b9-3c71bbaf303d d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 467f8e9a-e166-409e-920c-689fea4ea3f6] Updating instance_info_cache with network_info: [{"id": "ec2bc9a6-1578-4c92-b8c9-4d286a1a6f4b", "address": "fa:16:3e:64:9d:d4", "network": {"id": "4d76af3c-ede9-445b-bea0-ba96a2eaeddd", "bridge": "br-int", "label": "tempest-network-smoke--1753144487", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.211", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapec2bc9a6-15", "ovs_interfaceid": "ec2bc9a6-1578-4c92-b8c9-4d286a1a6f4b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "88b1b4c6-36ba-46c8-baa2-da5b266af4d1", "address": "fa:16:3e:9c:5b:44", "network": {"id": "af11da89-c29d-4ef1-80d5-4b619757b0ff", "bridge": "br-int", "label": "tempest-network-smoke--2039147327", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.24", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap88b1b4c6-36", "ovs_interfaceid": "88b1b4c6-36ba-46c8-baa2-da5b266af4d1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 10:10:57 compute-0 nova_compute[254819]: 2025-12-06 10:10:57.592 254824 DEBUG oslo_concurrency.lockutils [req-78cea3e8-0199-4ffa-9012-daeb983068eb req-433066e4-e95d-48e5-a2b9-3c71bbaf303d d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Releasing lock "refresh_cache-467f8e9a-e166-409e-920c-689fea4ea3f6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 10:10:58 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v895: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s rd, 8.7 KiB/s wr, 1 op/s
Dec 06 10:10:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:10:58 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6608003e60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:10:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:10:58 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f4001cb0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:10:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:10:59.012Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec 06 10:10:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:10:59.012Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec 06 10:10:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:10:59 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c00a7e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:10:59 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:10:59 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:10:59 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:10:59.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:10:59 compute-0 ceph-mon[74327]: pgmap v895: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s rd, 8.7 KiB/s wr, 1 op/s
Dec 06 10:10:59 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:10:59 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:10:59 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:10:59.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:10:59 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:11:00 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v896: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 2.6 KiB/s rd, 4.0 KiB/s wr, 1 op/s
Dec 06 10:11:00 compute-0 nova_compute[254819]: 2025-12-06 10:11:00.219 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:11:00 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:11:00 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65e8004160 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:11:00 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:11:00 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6608003e80 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:11:00 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:11:00] "GET /metrics HTTP/1.1" 200 48478 "" "Prometheus/2.51.0"
Dec 06 10:11:00 compute-0 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:11:00] "GET /metrics HTTP/1.1" 200 48478 "" "Prometheus/2.51.0"
Dec 06 10:11:00 compute-0 nova_compute[254819]: 2025-12-06 10:11:00.990 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:11:01 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:11:01 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f4001cb0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:11:01 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:11:01 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 10:11:01 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:11:01.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 10:11:01 compute-0 ceph-mon[74327]: pgmap v896: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 2.6 KiB/s rd, 4.0 KiB/s wr, 1 op/s
Dec 06 10:11:01 compute-0 ceph-mon[74327]: from='client.? 192.168.122.102:0/938827649' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 10:11:01 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:11:01 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:11:01 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:11:01.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:11:02 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v897: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 2.6 KiB/s rd, 4.0 KiB/s wr, 1 op/s
Dec 06 10:11:02 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:11:02 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c00a7e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:11:02 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:11:02 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65e8004160 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:11:03 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:11:03 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f4001cb0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:11:03 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:11:03 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 10:11:03 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:11:03.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 10:11:03 compute-0 ceph-mon[74327]: pgmap v897: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 2.6 KiB/s rd, 4.0 KiB/s wr, 1 op/s
Dec 06 10:11:03 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:11:03 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:11:03 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:11:03.390 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:11:04 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v898: 337 pgs: 337 active+clean; 167 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Dec 06 10:11:04 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:11:04 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6608003ec0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:11:04 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:11:04 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c00a7e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:11:04 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:11:05 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:11:05 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65e8004160 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:11:05 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:11:05 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:11:05 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:11:05.217 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:11:05 compute-0 nova_compute[254819]: 2025-12-06 10:11:05.222 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:11:05 compute-0 ceph-mon[74327]: pgmap v898: 337 pgs: 337 active+clean; 167 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Dec 06 10:11:05 compute-0 ceph-mon[74327]: from='client.? 192.168.122.102:0/737373580' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 10:11:05 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:11:05 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:11:05 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:11:05.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:11:05 compute-0 nova_compute[254819]: 2025-12-06 10:11:05.993 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:11:06 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v899: 337 pgs: 337 active+clean; 167 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 06 10:11:06 compute-0 ceph-mon[74327]: from='client.? 192.168.122.102:0/301725092' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 10:11:06 compute-0 podman[270227]: 2025-12-06 10:11:06.485735175 +0000 UTC m=+0.107132705 container health_status a9cf33c1bf7891b3fdd9db4717ad5f3587fe9e5acf9171920c6d6334b70a502a (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 10:11:06 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:11:06 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f4001cb0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:11:06 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:11:06 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6608003ee0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:11:07 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:11:07 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c00a7e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:11:07 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:11:07 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:11:07 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:11:07.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:11:07 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:11:07.301Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 06 10:11:07 compute-0 ceph-mon[74327]: pgmap v899: 337 pgs: 337 active+clean; 167 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 06 10:11:07 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:11:07 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:11:07 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:11:07.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:11:08 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v900: 337 pgs: 337 active+clean; 167 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 1.8 MiB/s wr, 38 op/s
Dec 06 10:11:08 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:11:08 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65e8004160 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:11:08 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:11:08 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f4001cb0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:11:08 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 06 10:11:08 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 10:11:09 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:11:09.013Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 06 10:11:09 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:11:09 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6608003f00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:11:09 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:11:09 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:11:09 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:11:09.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:11:09 compute-0 ceph-mon[74327]: pgmap v900: 337 pgs: 337 active+clean; 167 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 1.8 MiB/s wr, 38 op/s
Dec 06 10:11:09 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 10:11:09 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:11:09 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:11:09 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:11:09.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:11:09 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:11:10 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v901: 337 pgs: 337 active+clean; 167 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 1.8 MiB/s wr, 38 op/s
Dec 06 10:11:10 compute-0 nova_compute[254819]: 2025-12-06 10:11:10.226 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:11:10 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:11:10 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c00a7e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:11:10 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:11:10 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65e8004160 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:11:10 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:11:10] "GET /metrics HTTP/1.1" 200 48476 "" "Prometheus/2.51.0"
Dec 06 10:11:10 compute-0 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:11:10] "GET /metrics HTTP/1.1" 200 48476 "" "Prometheus/2.51.0"
Dec 06 10:11:10 compute-0 nova_compute[254819]: 2025-12-06 10:11:10.996 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:11:11 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:11:11 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f4001cb0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:11:11 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:11:11 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 10:11:11 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:11:11.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 10:11:11 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:11:11 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 10:11:11 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:11:11.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 10:11:11 compute-0 ceph-mon[74327]: pgmap v901: 337 pgs: 337 active+clean; 167 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 1.8 MiB/s wr, 38 op/s
Dec 06 10:11:11 compute-0 podman[270253]: 2025-12-06 10:11:11.500365988 +0000 UTC m=+0.135166144 container health_status ed08b04f63d3695f0aa2f04fcc706897af4aa24f286fa3cd10a4323ee2c868ab (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 10:11:12 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v902: 337 pgs: 337 active+clean; 167 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 1.8 MiB/s wr, 38 op/s
Dec 06 10:11:12 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:11:12 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6608003f20 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:11:12 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:11:12 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c00a7e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:11:12 compute-0 sudo[270280]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 10:11:12 compute-0 sudo[270280]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:11:12 compute-0 sudo[270280]: pam_unix(sudo:session): session closed for user root
Dec 06 10:11:13 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:11:13 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65e8004160 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:11:13 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:11:13 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:11:13 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:11:13.224 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:11:13 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:11:13 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:11:13 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:11:13.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:11:13 compute-0 ceph-mon[74327]: pgmap v902: 337 pgs: 337 active+clean; 167 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 1.8 MiB/s wr, 38 op/s
Dec 06 10:11:14 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v903: 337 pgs: 337 active+clean; 167 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 102 op/s
Dec 06 10:11:14 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:11:14 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f4001cb0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:11:14 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:11:14 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6608004970 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:11:14 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:11:15 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:11:15 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c00a7e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:11:15 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:11:15 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:11:15 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:11:15.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:11:15 compute-0 nova_compute[254819]: 2025-12-06 10:11:15.229 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:11:15 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:11:15 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:11:15 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:11:15.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:11:15 compute-0 ceph-mon[74327]: pgmap v903: 337 pgs: 337 active+clean; 167 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 102 op/s
Dec 06 10:11:16 compute-0 nova_compute[254819]: 2025-12-06 10:11:16.000 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:11:16 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v904: 337 pgs: 337 active+clean; 167 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 16 KiB/s wr, 75 op/s
Dec 06 10:11:16 compute-0 podman[270309]: 2025-12-06 10:11:16.470037318 +0000 UTC m=+0.088876956 container health_status ec1e66d2175087ad7a28a9a6b5a784e30128df3f5e268959d009e364cd5120f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Dec 06 10:11:16 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:11:16 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65e8004160 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:11:16 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:11:16 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f4001cb0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:11:17 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:11:17 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6608004970 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:11:17 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:11:17 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:11:17 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:11:17.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:11:17 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:11:17.302Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 06 10:11:17 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:11:17 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:11:17 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:11:17.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:11:17 compute-0 ceph-mon[74327]: pgmap v904: 337 pgs: 337 active+clean; 167 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 16 KiB/s wr, 75 op/s
Dec 06 10:11:18 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v905: 337 pgs: 337 active+clean; 167 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 19 KiB/s wr, 75 op/s
Dec 06 10:11:18 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:11:18 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c00a7e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:11:18 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:11:18 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65e8004160 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:11:19 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:11:19.014Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 06 10:11:19 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:11:19 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d4000f30 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:11:19 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:11:19 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:11:19 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:11:19.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:11:19 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:11:19 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:11:19 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:11:19.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:11:19 compute-0 ceph-mon[74327]: pgmap v905: 337 pgs: 337 active+clean; 167 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 19 KiB/s wr, 75 op/s
Dec 06 10:11:19 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:11:20 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v906: 337 pgs: 337 active+clean; 167 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 3.3 KiB/s wr, 64 op/s
Dec 06 10:11:20 compute-0 nova_compute[254819]: 2025-12-06 10:11:20.232 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:11:20 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:11:20 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6608004970 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:11:20 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:11:20 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d0000d90 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:11:20 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:11:20] "GET /metrics HTTP/1.1" 200 48476 "" "Prometheus/2.51.0"
Dec 06 10:11:20 compute-0 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:11:20] "GET /metrics HTTP/1.1" 200 48476 "" "Prometheus/2.51.0"
Dec 06 10:11:21 compute-0 nova_compute[254819]: 2025-12-06 10:11:21.001 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:11:21 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:11:21 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65e8004160 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:11:21 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:11:21 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:11:21 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:11:21.234 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:11:21 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:11:21 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:11:21 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:11:21.415 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:11:21 compute-0 ceph-mon[74327]: pgmap v906: 337 pgs: 337 active+clean; 167 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 3.3 KiB/s wr, 64 op/s
Dec 06 10:11:22 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v907: 337 pgs: 337 active+clean; 167 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 3.3 KiB/s wr, 64 op/s
Dec 06 10:11:22 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:11:22 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d4000f30 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:11:22 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:11:22 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6608004970 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:11:23 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:11:23 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d00018b0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:11:23 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:11:23 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:11:23 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:11:23.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:11:23 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:11:23 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:11:23 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:11:23.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:11:23 compute-0 ceph-mon[74327]: pgmap v907: 337 pgs: 337 active+clean; 167 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 3.3 KiB/s wr, 64 op/s
Dec 06 10:11:23 compute-0 ceph-mgr[74618]: [balancer INFO root] Optimize plan auto_2025-12-06_10:11:23
Dec 06 10:11:23 compute-0 ceph-mgr[74618]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 06 10:11:23 compute-0 ceph-mgr[74618]: [balancer INFO root] do_upmap
Dec 06 10:11:23 compute-0 ceph-mgr[74618]: [balancer INFO root] pools ['backups', 'default.rgw.control', '.mgr', 'cephfs.cephfs.meta', 'vms', 'default.rgw.log', '.nfs', 'volumes', '.rgw.root', 'images', 'cephfs.cephfs.data', 'default.rgw.meta']
Dec 06 10:11:23 compute-0 ceph-mgr[74618]: [balancer INFO root] prepared 0/10 upmap changes
Dec 06 10:11:23 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 06 10:11:23 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 10:11:24 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 10:11:24 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 10:11:24 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 10:11:24 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 10:11:24 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 10:11:24 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 10:11:24 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v908: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 128 op/s
Dec 06 10:11:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] _maybe_adjust
Dec 06 10:11:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:11:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 06 10:11:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:11:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0015184825120466237 of space, bias 1.0, pg target 0.4555447536139871 quantized to 32 (current 32)
Dec 06 10:11:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:11:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 10:11:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:11:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 10:11:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:11:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Dec 06 10:11:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:11:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Dec 06 10:11:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:11:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 10:11:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:11:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec 06 10:11:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:11:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 06 10:11:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:11:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec 06 10:11:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:11:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 10:11:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:11:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 06 10:11:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 06 10:11:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 10:11:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 10:11:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 10:11:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 10:11:24 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 10:11:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 06 10:11:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 10:11:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 10:11:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 10:11:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 10:11:24 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:11:24 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65e8004160 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:11:24 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:11:24 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d4000f30 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:11:24 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:11:25 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:11:25 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6608004970 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:11:25 compute-0 nova_compute[254819]: 2025-12-06 10:11:25.234 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:11:25 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:11:25 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:11:25 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:11:25.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:11:25 compute-0 nova_compute[254819]: 2025-12-06 10:11:25.320 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:11:25 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:11:25 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 10:11:25 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:11:25.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 10:11:25 compute-0 nova_compute[254819]: 2025-12-06 10:11:25.483 254824 DEBUG nova.compute.manager [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Triggering sync for uuid 467f8e9a-e166-409e-920c-689fea4ea3f6 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Dec 06 10:11:25 compute-0 nova_compute[254819]: 2025-12-06 10:11:25.484 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Acquiring lock "467f8e9a-e166-409e-920c-689fea4ea3f6" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:11:25 compute-0 nova_compute[254819]: 2025-12-06 10:11:25.484 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Lock "467f8e9a-e166-409e-920c-689fea4ea3f6" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:11:25 compute-0 ceph-mon[74327]: pgmap v908: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 128 op/s
Dec 06 10:11:25 compute-0 nova_compute[254819]: 2025-12-06 10:11:25.526 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Lock "467f8e9a-e166-409e-920c-689fea4ea3f6" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.042s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:11:26 compute-0 nova_compute[254819]: 2025-12-06 10:11:26.005 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:11:26 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v909: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 305 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Dec 06 10:11:26 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:11:26 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d00018b0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:11:26 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:11:26 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d00018b0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:11:27 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:11:27 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d00018b0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:11:27 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:11:27 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:11:27 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:11:27.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:11:27 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:11:27.302Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec 06 10:11:27 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:11:27.304Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec 06 10:11:27 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:11:27 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 10:11:27 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:11:27.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 10:11:27 compute-0 ceph-mon[74327]: pgmap v909: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 305 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Dec 06 10:11:28 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v910: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 305 KiB/s rd, 2.2 MiB/s wr, 64 op/s
Dec 06 10:11:28 compute-0 nova_compute[254819]: 2025-12-06 10:11:28.209 254824 DEBUG nova.compute.manager [req-a77f395b-a840-4656-9820-0a99e59bc46c req-4c5ea264-a966-4c18-896e-c07f2cadff37 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 467f8e9a-e166-409e-920c-689fea4ea3f6] Received event network-changed-88b1b4c6-36ba-46c8-baa2-da5b266af4d1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 10:11:28 compute-0 nova_compute[254819]: 2025-12-06 10:11:28.209 254824 DEBUG nova.compute.manager [req-a77f395b-a840-4656-9820-0a99e59bc46c req-4c5ea264-a966-4c18-896e-c07f2cadff37 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 467f8e9a-e166-409e-920c-689fea4ea3f6] Refreshing instance network info cache due to event network-changed-88b1b4c6-36ba-46c8-baa2-da5b266af4d1. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 10:11:28 compute-0 nova_compute[254819]: 2025-12-06 10:11:28.210 254824 DEBUG oslo_concurrency.lockutils [req-a77f395b-a840-4656-9820-0a99e59bc46c req-4c5ea264-a966-4c18-896e-c07f2cadff37 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Acquiring lock "refresh_cache-467f8e9a-e166-409e-920c-689fea4ea3f6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 10:11:28 compute-0 nova_compute[254819]: 2025-12-06 10:11:28.210 254824 DEBUG oslo_concurrency.lockutils [req-a77f395b-a840-4656-9820-0a99e59bc46c req-4c5ea264-a966-4c18-896e-c07f2cadff37 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Acquired lock "refresh_cache-467f8e9a-e166-409e-920c-689fea4ea3f6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 10:11:28 compute-0 nova_compute[254819]: 2025-12-06 10:11:28.211 254824 DEBUG nova.network.neutron [req-a77f395b-a840-4656-9820-0a99e59bc46c req-4c5ea264-a966-4c18-896e-c07f2cadff37 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 467f8e9a-e166-409e-920c-689fea4ea3f6] Refreshing network info cache for port 88b1b4c6-36ba-46c8-baa2-da5b266af4d1 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 06 10:11:28 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:11:28 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65dc000e00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:11:28 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:11:28 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c00a7e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:11:29 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:11:29.015Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 06 10:11:29 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:11:29 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d4000f30 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:11:29 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:11:29 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 10:11:29 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:11:29.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 10:11:29 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:11:29 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:11:29 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:11:29.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:11:29 compute-0 ceph-mon[74327]: pgmap v910: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 305 KiB/s rd, 2.2 MiB/s wr, 64 op/s
Dec 06 10:11:29 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:11:30 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v911: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 305 KiB/s rd, 2.2 MiB/s wr, 64 op/s
Dec 06 10:11:30 compute-0 nova_compute[254819]: 2025-12-06 10:11:30.238 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:11:30 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:11:30 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d00018b0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:11:30 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:11:30 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65dc001900 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:11:30 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:11:30] "GET /metrics HTTP/1.1" 200 48475 "" "Prometheus/2.51.0"
Dec 06 10:11:30 compute-0 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:11:30] "GET /metrics HTTP/1.1" 200 48475 "" "Prometheus/2.51.0"
Dec 06 10:11:31 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:11:31 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c00a7e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:11:31 compute-0 nova_compute[254819]: 2025-12-06 10:11:31.050 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:11:31 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:11:31 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:11:31 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:11:31.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:11:31 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:11:31 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:11:31 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:11:31.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:11:31 compute-0 ceph-mon[74327]: pgmap v911: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 305 KiB/s rd, 2.2 MiB/s wr, 64 op/s
Dec 06 10:11:31 compute-0 sudo[270348]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 10:11:31 compute-0 sudo[270348]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:11:31 compute-0 sudo[270348]: pam_unix(sudo:session): session closed for user root
Dec 06 10:11:31 compute-0 sudo[270373]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Dec 06 10:11:31 compute-0 sudo[270373]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:11:32 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v912: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 305 KiB/s rd, 2.2 MiB/s wr, 64 op/s
Dec 06 10:11:32 compute-0 nova_compute[254819]: 2025-12-06 10:11:32.227 254824 DEBUG nova.network.neutron [req-a77f395b-a840-4656-9820-0a99e59bc46c req-4c5ea264-a966-4c18-896e-c07f2cadff37 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 467f8e9a-e166-409e-920c-689fea4ea3f6] Updated VIF entry in instance network info cache for port 88b1b4c6-36ba-46c8-baa2-da5b266af4d1. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 06 10:11:32 compute-0 nova_compute[254819]: 2025-12-06 10:11:32.228 254824 DEBUG nova.network.neutron [req-a77f395b-a840-4656-9820-0a99e59bc46c req-4c5ea264-a966-4c18-896e-c07f2cadff37 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 467f8e9a-e166-409e-920c-689fea4ea3f6] Updating instance_info_cache with network_info: [{"id": "ec2bc9a6-1578-4c92-b8c9-4d286a1a6f4b", "address": "fa:16:3e:64:9d:d4", "network": {"id": "4d76af3c-ede9-445b-bea0-ba96a2eaeddd", "bridge": "br-int", "label": "tempest-network-smoke--1753144487", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.211", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapec2bc9a6-15", "ovs_interfaceid": "ec2bc9a6-1578-4c92-b8c9-4d286a1a6f4b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "88b1b4c6-36ba-46c8-baa2-da5b266af4d1", "address": "fa:16:3e:9c:5b:44", "network": {"id": "af11da89-c29d-4ef1-80d5-4b619757b0ff", "bridge": "br-int", "label": "tempest-network-smoke--2039147327", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.24", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap88b1b4c6-36", "ovs_interfaceid": "88b1b4c6-36ba-46c8-baa2-da5b266af4d1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 10:11:32 compute-0 nova_compute[254819]: 2025-12-06 10:11:32.255 254824 DEBUG oslo_concurrency.lockutils [req-a77f395b-a840-4656-9820-0a99e59bc46c req-4c5ea264-a966-4c18-896e-c07f2cadff37 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Releasing lock "refresh_cache-467f8e9a-e166-409e-920c-689fea4ea3f6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 10:11:32 compute-0 sudo[270373]: pam_unix(sudo:session): session closed for user root
Dec 06 10:11:32 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:11:32 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d4002e00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:11:32 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:11:32 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d0003520 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:11:32 compute-0 sudo[270431]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 10:11:32 compute-0 sudo[270431]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:11:32 compute-0 sudo[270431]: pam_unix(sudo:session): session closed for user root
Dec 06 10:11:33 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:11:33 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65dc001900 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:11:33 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:11:33 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:11:33 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:11:33.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:11:33 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:11:33 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 10:11:33 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:11:33.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 10:11:33 compute-0 ceph-mon[74327]: pgmap v912: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 305 KiB/s rd, 2.2 MiB/s wr, 64 op/s
Dec 06 10:11:34 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v913: 337 pgs: 337 active+clean; 182 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 314 KiB/s rd, 2.2 MiB/s wr, 76 op/s
Dec 06 10:11:34 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec 06 10:11:34 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 10:11:34 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec 06 10:11:34 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 10:11:34 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 06 10:11:34 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 10:11:34 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 06 10:11:34 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 10:11:34 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v914: 337 pgs: 337 active+clean; 182 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 19 KiB/s wr, 14 op/s
Dec 06 10:11:34 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 06 10:11:34 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 10:11:34 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Dec 06 10:11:34 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 10:11:34 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 06 10:11:34 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 10:11:34 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 06 10:11:34 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 10:11:34 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 06 10:11:34 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 10:11:34 compute-0 sudo[270458]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 10:11:34 compute-0 sudo[270458]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:11:34 compute-0 sudo[270458]: pam_unix(sudo:session): session closed for user root
Dec 06 10:11:34 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:11:34 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c00a7e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:11:34 compute-0 sudo[270483]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 5ecd3f74-dade-5fc4-92ce-8950ae424258 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Dec 06 10:11:34 compute-0 sudo[270483]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:11:34 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:11:34 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d4002e00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:11:34 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:11:34 compute-0 sshd-session[270552]: banner exchange: Connection from 3.137.73.221 port 35022: invalid format
Dec 06 10:11:34 compute-0 podman[270550]: 2025-12-06 10:11:34.97095114 +0000 UTC m=+0.035186422 container create d61f761404107f8f3ac2bb36fcbf5a77c89f207b1e886b7366be0dfb6cf60d63 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_faraday, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 10:11:34 compute-0 systemd[1]: Started libpod-conmon-d61f761404107f8f3ac2bb36fcbf5a77c89f207b1e886b7366be0dfb6cf60d63.scope.
Dec 06 10:11:35 compute-0 systemd[1]: Started libcrun container.
Dec 06 10:11:35 compute-0 podman[270550]: 2025-12-06 10:11:35.025364694 +0000 UTC m=+0.089599996 container init d61f761404107f8f3ac2bb36fcbf5a77c89f207b1e886b7366be0dfb6cf60d63 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_faraday, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 10:11:35 compute-0 podman[270550]: 2025-12-06 10:11:35.034777726 +0000 UTC m=+0.099013018 container start d61f761404107f8f3ac2bb36fcbf5a77c89f207b1e886b7366be0dfb6cf60d63 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_faraday, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec 06 10:11:35 compute-0 podman[270550]: 2025-12-06 10:11:35.037700414 +0000 UTC m=+0.101935716 container attach d61f761404107f8f3ac2bb36fcbf5a77c89f207b1e886b7366be0dfb6cf60d63 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_faraday, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 10:11:35 compute-0 lucid_faraday[270568]: 167 167
Dec 06 10:11:35 compute-0 systemd[1]: libpod-d61f761404107f8f3ac2bb36fcbf5a77c89f207b1e886b7366be0dfb6cf60d63.scope: Deactivated successfully.
Dec 06 10:11:35 compute-0 conmon[270568]: conmon d61f761404107f8f3ac2 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d61f761404107f8f3ac2bb36fcbf5a77c89f207b1e886b7366be0dfb6cf60d63.scope/container/memory.events
Dec 06 10:11:35 compute-0 podman[270550]: 2025-12-06 10:11:35.042150132 +0000 UTC m=+0.106385414 container died d61f761404107f8f3ac2bb36fcbf5a77c89f207b1e886b7366be0dfb6cf60d63 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_faraday, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 10:11:35 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:11:35 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d0003520 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:11:35 compute-0 podman[270550]: 2025-12-06 10:11:34.955744113 +0000 UTC m=+0.019979415 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 06 10:11:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-86558498d450bae09abf92f6a397e142002ba15df71e3814649f6f961b7cbff1-merged.mount: Deactivated successfully.
Dec 06 10:11:35 compute-0 sshd-session[270549]: banner exchange: Connection from 3.137.73.221 port 35014: invalid format
Dec 06 10:11:35 compute-0 podman[270550]: 2025-12-06 10:11:35.086857127 +0000 UTC m=+0.151092409 container remove d61f761404107f8f3ac2bb36fcbf5a77c89f207b1e886b7366be0dfb6cf60d63 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_faraday, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Dec 06 10:11:35 compute-0 systemd[1]: libpod-conmon-d61f761404107f8f3ac2bb36fcbf5a77c89f207b1e886b7366be0dfb6cf60d63.scope: Deactivated successfully.
Dec 06 10:11:35 compute-0 nova_compute[254819]: 2025-12-06 10:11:35.241 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:11:35 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:11:35 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:11:35 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:11:35.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:11:35 compute-0 podman[270591]: 2025-12-06 10:11:35.273697561 +0000 UTC m=+0.050819599 container create ec976b2c13d86cd677cb60da508b6b10ce4a2b577567c367c5dc8517afc89a64 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_bouman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 10:11:35 compute-0 systemd[1]: Started libpod-conmon-ec976b2c13d86cd677cb60da508b6b10ce4a2b577567c367c5dc8517afc89a64.scope.
Dec 06 10:11:35 compute-0 podman[270591]: 2025-12-06 10:11:35.247896012 +0000 UTC m=+0.025018080 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 06 10:11:35 compute-0 systemd[1]: Started libcrun container.
Dec 06 10:11:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b119782b08159381f6087ba20ba23753269dc82fa9c482aa1ac299faea89ed7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 10:11:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b119782b08159381f6087ba20ba23753269dc82fa9c482aa1ac299faea89ed7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 10:11:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b119782b08159381f6087ba20ba23753269dc82fa9c482aa1ac299faea89ed7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 10:11:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b119782b08159381f6087ba20ba23753269dc82fa9c482aa1ac299faea89ed7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 10:11:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b119782b08159381f6087ba20ba23753269dc82fa9c482aa1ac299faea89ed7/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 06 10:11:35 compute-0 podman[270591]: 2025-12-06 10:11:35.38813828 +0000 UTC m=+0.165260328 container init ec976b2c13d86cd677cb60da508b6b10ce4a2b577567c367c5dc8517afc89a64 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_bouman, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 10:11:35 compute-0 podman[270591]: 2025-12-06 10:11:35.395106555 +0000 UTC m=+0.172228583 container start ec976b2c13d86cd677cb60da508b6b10ce4a2b577567c367c5dc8517afc89a64 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_bouman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Dec 06 10:11:35 compute-0 podman[270591]: 2025-12-06 10:11:35.399233506 +0000 UTC m=+0.176355554 container attach ec976b2c13d86cd677cb60da508b6b10ce4a2b577567c367c5dc8517afc89a64 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_bouman, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Dec 06 10:11:35 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:11:35 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:11:35 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:11:35.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:11:35 compute-0 ceph-mon[74327]: pgmap v913: 337 pgs: 337 active+clean; 182 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 314 KiB/s rd, 2.2 MiB/s wr, 76 op/s
Dec 06 10:11:35 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 10:11:35 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 10:11:35 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 10:11:35 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 10:11:35 compute-0 ceph-mon[74327]: pgmap v914: 337 pgs: 337 active+clean; 182 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 19 KiB/s wr, 14 op/s
Dec 06 10:11:35 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 10:11:35 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 10:11:35 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 10:11:35 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 10:11:35 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 10:11:35 compute-0 objective_bouman[270609]: --> passed data devices: 0 physical, 1 LVM
Dec 06 10:11:35 compute-0 objective_bouman[270609]: --> All data devices are unavailable
Dec 06 10:11:35 compute-0 systemd[1]: libpod-ec976b2c13d86cd677cb60da508b6b10ce4a2b577567c367c5dc8517afc89a64.scope: Deactivated successfully.
Dec 06 10:11:35 compute-0 podman[270591]: 2025-12-06 10:11:35.750664698 +0000 UTC m=+0.527786766 container died ec976b2c13d86cd677cb60da508b6b10ce4a2b577567c367c5dc8517afc89a64 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_bouman, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 10:11:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-8b119782b08159381f6087ba20ba23753269dc82fa9c482aa1ac299faea89ed7-merged.mount: Deactivated successfully.
Dec 06 10:11:35 compute-0 podman[270591]: 2025-12-06 10:11:35.801500617 +0000 UTC m=+0.578622645 container remove ec976b2c13d86cd677cb60da508b6b10ce4a2b577567c367c5dc8517afc89a64 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_bouman, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Dec 06 10:11:35 compute-0 systemd[1]: libpod-conmon-ec976b2c13d86cd677cb60da508b6b10ce4a2b577567c367c5dc8517afc89a64.scope: Deactivated successfully.
Dec 06 10:11:35 compute-0 sudo[270483]: pam_unix(sudo:session): session closed for user root
Dec 06 10:11:35 compute-0 sudo[270639]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 10:11:35 compute-0 sudo[270639]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:11:35 compute-0 sudo[270639]: pam_unix(sudo:session): session closed for user root
Dec 06 10:11:35 compute-0 sudo[270664]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 5ecd3f74-dade-5fc4-92ce-8950ae424258 -- lvm list --format json
Dec 06 10:11:35 compute-0 sudo[270664]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:11:36 compute-0 nova_compute[254819]: 2025-12-06 10:11:36.089 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:11:36 compute-0 podman[270729]: 2025-12-06 10:11:36.343302238 +0000 UTC m=+0.039236550 container create 77c23db6719720b00df2f09205bd480c38b3109bc496c06c6185509bf8e343f3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_poincare, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1)
Dec 06 10:11:36 compute-0 systemd[1]: Started libpod-conmon-77c23db6719720b00df2f09205bd480c38b3109bc496c06c6185509bf8e343f3.scope.
Dec 06 10:11:36 compute-0 systemd[1]: Started libcrun container.
Dec 06 10:11:36 compute-0 podman[270729]: 2025-12-06 10:11:36.419324269 +0000 UTC m=+0.115258591 container init 77c23db6719720b00df2f09205bd480c38b3109bc496c06c6185509bf8e343f3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_poincare, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Dec 06 10:11:36 compute-0 podman[270729]: 2025-12-06 10:11:36.326790216 +0000 UTC m=+0.022724548 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 06 10:11:36 compute-0 podman[270729]: 2025-12-06 10:11:36.426294405 +0000 UTC m=+0.122228717 container start 77c23db6719720b00df2f09205bd480c38b3109bc496c06c6185509bf8e343f3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_poincare, io.buildah.version=1.40.1, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Dec 06 10:11:36 compute-0 quizzical_poincare[270746]: 167 167
Dec 06 10:11:36 compute-0 systemd[1]: libpod-77c23db6719720b00df2f09205bd480c38b3109bc496c06c6185509bf8e343f3.scope: Deactivated successfully.
Dec 06 10:11:36 compute-0 podman[270729]: 2025-12-06 10:11:36.43060148 +0000 UTC m=+0.126535812 container attach 77c23db6719720b00df2f09205bd480c38b3109bc496c06c6185509bf8e343f3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_poincare, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Dec 06 10:11:36 compute-0 podman[270729]: 2025-12-06 10:11:36.43097507 +0000 UTC m=+0.126909382 container died 77c23db6719720b00df2f09205bd480c38b3109bc496c06c6185509bf8e343f3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_poincare, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2)
Dec 06 10:11:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-d510af21d9173d4e932d516d9ee2d9c51b143154d591ea8b12a3f978879cfd95-merged.mount: Deactivated successfully.
Dec 06 10:11:36 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v915: 337 pgs: 337 active+clean; 182 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 19 KiB/s wr, 14 op/s
Dec 06 10:11:36 compute-0 podman[270729]: 2025-12-06 10:11:36.463329495 +0000 UTC m=+0.159263807 container remove 77c23db6719720b00df2f09205bd480c38b3109bc496c06c6185509bf8e343f3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_poincare, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 10:11:36 compute-0 systemd[1]: libpod-conmon-77c23db6719720b00df2f09205bd480c38b3109bc496c06c6185509bf8e343f3.scope: Deactivated successfully.
Dec 06 10:11:36 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:11:36 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65dc001900 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:11:36 compute-0 ceph-mon[74327]: from='client.? 192.168.122.102:0/3176469227' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 10:11:36 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:11:36 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c00a7e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:11:36 compute-0 podman[270770]: 2025-12-06 10:11:36.68316113 +0000 UTC m=+0.069106408 container create 77eb5f9f3c2a77e21d1847d7d67070446bde264901e027d4ae8d294fc1a826d1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_shirley, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec 06 10:11:36 compute-0 systemd[1]: Started libpod-conmon-77eb5f9f3c2a77e21d1847d7d67070446bde264901e027d4ae8d294fc1a826d1.scope.
Dec 06 10:11:36 compute-0 systemd[1]: Started libcrun container.
Dec 06 10:11:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fbda452e09c46447772048cadbddf62ec01b5ef7166f4bb43bdec17fba1fcf56/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 10:11:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fbda452e09c46447772048cadbddf62ec01b5ef7166f4bb43bdec17fba1fcf56/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 10:11:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fbda452e09c46447772048cadbddf62ec01b5ef7166f4bb43bdec17fba1fcf56/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 10:11:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fbda452e09c46447772048cadbddf62ec01b5ef7166f4bb43bdec17fba1fcf56/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 10:11:36 compute-0 podman[270770]: 2025-12-06 10:11:36.63710822 +0000 UTC m=+0.023053518 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 06 10:11:36 compute-0 podman[270770]: 2025-12-06 10:11:36.782384572 +0000 UTC m=+0.168329880 container init 77eb5f9f3c2a77e21d1847d7d67070446bde264901e027d4ae8d294fc1a826d1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_shirley, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Dec 06 10:11:36 compute-0 podman[270770]: 2025-12-06 10:11:36.790356295 +0000 UTC m=+0.176301573 container start 77eb5f9f3c2a77e21d1847d7d67070446bde264901e027d4ae8d294fc1a826d1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_shirley, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Dec 06 10:11:36 compute-0 podman[270770]: 2025-12-06 10:11:36.811156821 +0000 UTC m=+0.197102109 container attach 77eb5f9f3c2a77e21d1847d7d67070446bde264901e027d4ae8d294fc1a826d1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_shirley, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Dec 06 10:11:36 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:11:36.847 162267 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=9, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '9a:dc:0d', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'b6:0a:c4:b8:be:39'}, ipsec=False) old=SB_Global(nb_cfg=8) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 10:11:36 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:11:36.848 162267 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 06 10:11:36 compute-0 nova_compute[254819]: 2025-12-06 10:11:36.850 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:11:36 compute-0 podman[270784]: 2025-12-06 10:11:36.872715267 +0000 UTC m=+0.151902321 container health_status a9cf33c1bf7891b3fdd9db4717ad5f3587fe9e5acf9171920c6d6334b70a502a (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125)
Dec 06 10:11:37 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:11:37 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d4002e00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:11:37 compute-0 romantic_shirley[270787]: {
Dec 06 10:11:37 compute-0 romantic_shirley[270787]:     "1": [
Dec 06 10:11:37 compute-0 romantic_shirley[270787]:         {
Dec 06 10:11:37 compute-0 romantic_shirley[270787]:             "devices": [
Dec 06 10:11:37 compute-0 romantic_shirley[270787]:                 "/dev/loop3"
Dec 06 10:11:37 compute-0 romantic_shirley[270787]:             ],
Dec 06 10:11:37 compute-0 romantic_shirley[270787]:             "lv_name": "ceph_lv0",
Dec 06 10:11:37 compute-0 romantic_shirley[270787]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 10:11:37 compute-0 romantic_shirley[270787]:             "lv_size": "21470642176",
Dec 06 10:11:37 compute-0 romantic_shirley[270787]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=a55pcS-v3Vt-CJ4W-fqCV-Baze-9VlY-jG9prS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=5ecd3f74-dade-5fc4-92ce-8950ae424258,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=7899c4d8-edb4-4836-b838-c4aa702ad7af,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 06 10:11:37 compute-0 romantic_shirley[270787]:             "lv_uuid": "a55pcS-v3Vt-CJ4W-fqCV-Baze-9VlY-jG9prS",
Dec 06 10:11:37 compute-0 romantic_shirley[270787]:             "name": "ceph_lv0",
Dec 06 10:11:37 compute-0 romantic_shirley[270787]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 10:11:37 compute-0 romantic_shirley[270787]:             "tags": {
Dec 06 10:11:37 compute-0 romantic_shirley[270787]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 06 10:11:37 compute-0 romantic_shirley[270787]:                 "ceph.block_uuid": "a55pcS-v3Vt-CJ4W-fqCV-Baze-9VlY-jG9prS",
Dec 06 10:11:37 compute-0 romantic_shirley[270787]:                 "ceph.cephx_lockbox_secret": "",
Dec 06 10:11:37 compute-0 romantic_shirley[270787]:                 "ceph.cluster_fsid": "5ecd3f74-dade-5fc4-92ce-8950ae424258",
Dec 06 10:11:37 compute-0 romantic_shirley[270787]:                 "ceph.cluster_name": "ceph",
Dec 06 10:11:37 compute-0 romantic_shirley[270787]:                 "ceph.crush_device_class": "",
Dec 06 10:11:37 compute-0 romantic_shirley[270787]:                 "ceph.encrypted": "0",
Dec 06 10:11:37 compute-0 romantic_shirley[270787]:                 "ceph.osd_fsid": "7899c4d8-edb4-4836-b838-c4aa702ad7af",
Dec 06 10:11:37 compute-0 romantic_shirley[270787]:                 "ceph.osd_id": "1",
Dec 06 10:11:37 compute-0 romantic_shirley[270787]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 06 10:11:37 compute-0 romantic_shirley[270787]:                 "ceph.type": "block",
Dec 06 10:11:37 compute-0 romantic_shirley[270787]:                 "ceph.vdo": "0",
Dec 06 10:11:37 compute-0 romantic_shirley[270787]:                 "ceph.with_tpm": "0"
Dec 06 10:11:37 compute-0 romantic_shirley[270787]:             },
Dec 06 10:11:37 compute-0 romantic_shirley[270787]:             "type": "block",
Dec 06 10:11:37 compute-0 romantic_shirley[270787]:             "vg_name": "ceph_vg0"
Dec 06 10:11:37 compute-0 romantic_shirley[270787]:         }
Dec 06 10:11:37 compute-0 romantic_shirley[270787]:     ]
Dec 06 10:11:37 compute-0 romantic_shirley[270787]: }
Dec 06 10:11:37 compute-0 systemd[1]: libpod-77eb5f9f3c2a77e21d1847d7d67070446bde264901e027d4ae8d294fc1a826d1.scope: Deactivated successfully.
Dec 06 10:11:37 compute-0 podman[270770]: 2025-12-06 10:11:37.098385707 +0000 UTC m=+0.484330985 container died 77eb5f9f3c2a77e21d1847d7d67070446bde264901e027d4ae8d294fc1a826d1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_shirley, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default)
Dec 06 10:11:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-fbda452e09c46447772048cadbddf62ec01b5ef7166f4bb43bdec17fba1fcf56-merged.mount: Deactivated successfully.
Dec 06 10:11:37 compute-0 podman[270770]: 2025-12-06 10:11:37.189637807 +0000 UTC m=+0.575583085 container remove 77eb5f9f3c2a77e21d1847d7d67070446bde264901e027d4ae8d294fc1a826d1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_shirley, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Dec 06 10:11:37 compute-0 systemd[1]: libpod-conmon-77eb5f9f3c2a77e21d1847d7d67070446bde264901e027d4ae8d294fc1a826d1.scope: Deactivated successfully.
Dec 06 10:11:37 compute-0 nova_compute[254819]: 2025-12-06 10:11:37.206 254824 DEBUG oslo_concurrency.lockutils [None req-b12601a3-3a4c-4af5-af7f-7c124e1fb718 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Acquiring lock "interface-467f8e9a-e166-409e-920c-689fea4ea3f6-88b1b4c6-36ba-46c8-baa2-da5b266af4d1" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:11:37 compute-0 nova_compute[254819]: 2025-12-06 10:11:37.207 254824 DEBUG oslo_concurrency.lockutils [None req-b12601a3-3a4c-4af5-af7f-7c124e1fb718 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lock "interface-467f8e9a-e166-409e-920c-689fea4ea3f6-88b1b4c6-36ba-46c8-baa2-da5b266af4d1" acquired by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:11:37 compute-0 sudo[270664]: pam_unix(sudo:session): session closed for user root
Dec 06 10:11:37 compute-0 nova_compute[254819]: 2025-12-06 10:11:37.226 254824 DEBUG nova.objects.instance [None req-b12601a3-3a4c-4af5-af7f-7c124e1fb718 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lazy-loading 'flavor' on Instance uuid 467f8e9a-e166-409e-920c-689fea4ea3f6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 10:11:37 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:11:37 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:11:37 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:11:37.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:11:37 compute-0 nova_compute[254819]: 2025-12-06 10:11:37.265 254824 DEBUG nova.virt.libvirt.vif [None req-b12601a3-3a4c-4af5-af7f-7c124e1fb718 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T10:10:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-883828898',display_name='tempest-TestNetworkBasicOps-server-883828898',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-883828898',id=6,image_ref='9489b8a5-a798-4e26-87f9-59bb1eb2e6fd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBavG4AKWHlfpiq0SQasTveyxdMuqwUIBzXgDHnQ7us03WRPTjmnHIL9KdumxPOuSQ7mS9TjZaDU1Z0fZMB9bCP4vMT4dbs0/4ZtyRDMtJHhAJtsWO/6Dg3g/pdboWhC+A==',key_name='tempest-TestNetworkBasicOps-875879575',keypairs=<?>,launch_index=0,launched_at=2025-12-06T10:10:20Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='92b402c8d3e2476abc98be42a1e6d34e',ramdisk_id='',reservation_id='r-qxktas63',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='9489b8a5-a798-4e26-87f9-59bb1eb2e6fd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1971100882',owner_user_name='tempest-TestNetworkBasicOps-1971100882-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T10:10:20Z,user_data=None,user_id='03615580775245e6ae335ee9d785611f',uuid=467f8e9a-e166-409e-920c-689fea4ea3f6,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "88b1b4c6-36ba-46c8-baa2-da5b266af4d1", "address": "fa:16:3e:9c:5b:44", "network": {"id": "af11da89-c29d-4ef1-80d5-4b619757b0ff", "bridge": "br-int", "label": "tempest-network-smoke--2039147327", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.24", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap88b1b4c6-36", "ovs_interfaceid": "88b1b4c6-36ba-46c8-baa2-da5b266af4d1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 06 10:11:37 compute-0 nova_compute[254819]: 2025-12-06 10:11:37.265 254824 DEBUG nova.network.os_vif_util [None req-b12601a3-3a4c-4af5-af7f-7c124e1fb718 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Converting VIF {"id": "88b1b4c6-36ba-46c8-baa2-da5b266af4d1", "address": "fa:16:3e:9c:5b:44", "network": {"id": "af11da89-c29d-4ef1-80d5-4b619757b0ff", "bridge": "br-int", "label": "tempest-network-smoke--2039147327", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.24", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap88b1b4c6-36", "ovs_interfaceid": "88b1b4c6-36ba-46c8-baa2-da5b266af4d1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 10:11:37 compute-0 nova_compute[254819]: 2025-12-06 10:11:37.266 254824 DEBUG nova.network.os_vif_util [None req-b12601a3-3a4c-4af5-af7f-7c124e1fb718 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:9c:5b:44,bridge_name='br-int',has_traffic_filtering=True,id=88b1b4c6-36ba-46c8-baa2-da5b266af4d1,network=Network(af11da89-c29d-4ef1-80d5-4b619757b0ff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap88b1b4c6-36') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 10:11:37 compute-0 nova_compute[254819]: 2025-12-06 10:11:37.269 254824 DEBUG nova.virt.libvirt.guest [None req-b12601a3-3a4c-4af5-af7f-7c124e1fb718 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:9c:5b:44"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap88b1b4c6-36"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Dec 06 10:11:37 compute-0 nova_compute[254819]: 2025-12-06 10:11:37.271 254824 DEBUG nova.virt.libvirt.guest [None req-b12601a3-3a4c-4af5-af7f-7c124e1fb718 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:9c:5b:44"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap88b1b4c6-36"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Dec 06 10:11:37 compute-0 nova_compute[254819]: 2025-12-06 10:11:37.273 254824 DEBUG nova.virt.libvirt.driver [None req-b12601a3-3a4c-4af5-af7f-7c124e1fb718 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Attempting to detach device tap88b1b4c6-36 from instance 467f8e9a-e166-409e-920c-689fea4ea3f6 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Dec 06 10:11:37 compute-0 nova_compute[254819]: 2025-12-06 10:11:37.273 254824 DEBUG nova.virt.libvirt.guest [None req-b12601a3-3a4c-4af5-af7f-7c124e1fb718 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] detach device xml: <interface type="ethernet">
Dec 06 10:11:37 compute-0 nova_compute[254819]:   <mac address="fa:16:3e:9c:5b:44"/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:   <model type="virtio"/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:   <driver name="vhost" rx_queue_size="512"/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:   <mtu size="1442"/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:   <target dev="tap88b1b4c6-36"/>
Dec 06 10:11:37 compute-0 nova_compute[254819]: </interface>
Dec 06 10:11:37 compute-0 nova_compute[254819]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Dec 06 10:11:37 compute-0 nova_compute[254819]: 2025-12-06 10:11:37.279 254824 DEBUG nova.virt.libvirt.guest [None req-b12601a3-3a4c-4af5-af7f-7c124e1fb718 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:9c:5b:44"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap88b1b4c6-36"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Dec 06 10:11:37 compute-0 nova_compute[254819]: 2025-12-06 10:11:37.281 254824 DEBUG nova.virt.libvirt.guest [None req-b12601a3-3a4c-4af5-af7f-7c124e1fb718 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:9c:5b:44"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap88b1b4c6-36"/></interface>not found in domain: <domain type='kvm' id='4'>
Dec 06 10:11:37 compute-0 nova_compute[254819]:   <name>instance-00000006</name>
Dec 06 10:11:37 compute-0 nova_compute[254819]:   <uuid>467f8e9a-e166-409e-920c-689fea4ea3f6</uuid>
Dec 06 10:11:37 compute-0 nova_compute[254819]:   <metadata>
Dec 06 10:11:37 compute-0 nova_compute[254819]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 06 10:11:37 compute-0 nova_compute[254819]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:   <nova:name>tempest-TestNetworkBasicOps-server-883828898</nova:name>
Dec 06 10:11:37 compute-0 nova_compute[254819]:   <nova:creationTime>2025-12-06 10:10:50</nova:creationTime>
Dec 06 10:11:37 compute-0 nova_compute[254819]:   <nova:flavor name="m1.nano">
Dec 06 10:11:37 compute-0 nova_compute[254819]:     <nova:memory>128</nova:memory>
Dec 06 10:11:37 compute-0 nova_compute[254819]:     <nova:disk>1</nova:disk>
Dec 06 10:11:37 compute-0 nova_compute[254819]:     <nova:swap>0</nova:swap>
Dec 06 10:11:37 compute-0 nova_compute[254819]:     <nova:ephemeral>0</nova:ephemeral>
Dec 06 10:11:37 compute-0 nova_compute[254819]:     <nova:vcpus>1</nova:vcpus>
Dec 06 10:11:37 compute-0 nova_compute[254819]:   </nova:flavor>
Dec 06 10:11:37 compute-0 nova_compute[254819]:   <nova:owner>
Dec 06 10:11:37 compute-0 nova_compute[254819]:     <nova:user uuid="03615580775245e6ae335ee9d785611f">tempest-TestNetworkBasicOps-1971100882-project-member</nova:user>
Dec 06 10:11:37 compute-0 nova_compute[254819]:     <nova:project uuid="92b402c8d3e2476abc98be42a1e6d34e">tempest-TestNetworkBasicOps-1971100882</nova:project>
Dec 06 10:11:37 compute-0 nova_compute[254819]:   </nova:owner>
Dec 06 10:11:37 compute-0 nova_compute[254819]:   <nova:root type="image" uuid="9489b8a5-a798-4e26-87f9-59bb1eb2e6fd"/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:   <nova:ports>
Dec 06 10:11:37 compute-0 nova_compute[254819]:     <nova:port uuid="ec2bc9a6-1578-4c92-b8c9-4d286a1a6f4b">
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:     </nova:port>
Dec 06 10:11:37 compute-0 nova_compute[254819]:     <nova:port uuid="88b1b4c6-36ba-46c8-baa2-da5b266af4d1">
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <nova:ip type="fixed" address="10.100.0.24" ipVersion="4"/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:     </nova:port>
Dec 06 10:11:37 compute-0 nova_compute[254819]:   </nova:ports>
Dec 06 10:11:37 compute-0 nova_compute[254819]: </nova:instance>
Dec 06 10:11:37 compute-0 nova_compute[254819]:   </metadata>
Dec 06 10:11:37 compute-0 nova_compute[254819]:   <memory unit='KiB'>131072</memory>
Dec 06 10:11:37 compute-0 nova_compute[254819]:   <currentMemory unit='KiB'>131072</currentMemory>
Dec 06 10:11:37 compute-0 nova_compute[254819]:   <vcpu placement='static'>1</vcpu>
Dec 06 10:11:37 compute-0 nova_compute[254819]:   <resource>
Dec 06 10:11:37 compute-0 nova_compute[254819]:     <partition>/machine</partition>
Dec 06 10:11:37 compute-0 nova_compute[254819]:   </resource>
Dec 06 10:11:37 compute-0 nova_compute[254819]:   <sysinfo type='smbios'>
Dec 06 10:11:37 compute-0 nova_compute[254819]:     <system>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <entry name='manufacturer'>RDO</entry>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <entry name='product'>OpenStack Compute</entry>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <entry name='serial'>467f8e9a-e166-409e-920c-689fea4ea3f6</entry>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <entry name='uuid'>467f8e9a-e166-409e-920c-689fea4ea3f6</entry>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <entry name='family'>Virtual Machine</entry>
Dec 06 10:11:37 compute-0 nova_compute[254819]:     </system>
Dec 06 10:11:37 compute-0 nova_compute[254819]:   </sysinfo>
Dec 06 10:11:37 compute-0 nova_compute[254819]:   <os>
Dec 06 10:11:37 compute-0 nova_compute[254819]:     <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Dec 06 10:11:37 compute-0 nova_compute[254819]:     <boot dev='hd'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:     <smbios mode='sysinfo'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:   </os>
Dec 06 10:11:37 compute-0 nova_compute[254819]:   <features>
Dec 06 10:11:37 compute-0 nova_compute[254819]:     <acpi/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:     <apic/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:     <vmcoreinfo state='on'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:   </features>
Dec 06 10:11:37 compute-0 nova_compute[254819]:   <cpu mode='custom' match='exact' check='full'>
Dec 06 10:11:37 compute-0 nova_compute[254819]:     <model fallback='forbid'>EPYC-Rome</model>
Dec 06 10:11:37 compute-0 nova_compute[254819]:     <vendor>AMD</vendor>
Dec 06 10:11:37 compute-0 nova_compute[254819]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:     <feature policy='require' name='x2apic'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:     <feature policy='require' name='tsc-deadline'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:     <feature policy='require' name='hypervisor'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:     <feature policy='require' name='tsc_adjust'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:     <feature policy='require' name='spec-ctrl'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:     <feature policy='require' name='stibp'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:     <feature policy='require' name='ssbd'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:     <feature policy='require' name='cmp_legacy'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:     <feature policy='require' name='overflow-recov'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:     <feature policy='require' name='succor'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:     <feature policy='require' name='ibrs'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:     <feature policy='require' name='amd-ssbd'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:     <feature policy='require' name='virt-ssbd'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:     <feature policy='disable' name='lbrv'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:     <feature policy='disable' name='tsc-scale'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:     <feature policy='disable' name='vmcb-clean'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:     <feature policy='disable' name='flushbyasid'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:     <feature policy='disable' name='pause-filter'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:     <feature policy='disable' name='pfthreshold'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:     <feature policy='disable' name='svme-addr-chk'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:     <feature policy='require' name='lfence-always-serializing'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:     <feature policy='disable' name='xsaves'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:     <feature policy='disable' name='svm'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:     <feature policy='require' name='topoext'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:     <feature policy='disable' name='npt'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:     <feature policy='disable' name='nrip-save'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:   </cpu>
Dec 06 10:11:37 compute-0 nova_compute[254819]:   <clock offset='utc'>
Dec 06 10:11:37 compute-0 nova_compute[254819]:     <timer name='pit' tickpolicy='delay'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:     <timer name='rtc' tickpolicy='catchup'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:     <timer name='hpet' present='no'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:   </clock>
Dec 06 10:11:37 compute-0 nova_compute[254819]:   <on_poweroff>destroy</on_poweroff>
Dec 06 10:11:37 compute-0 nova_compute[254819]:   <on_reboot>restart</on_reboot>
Dec 06 10:11:37 compute-0 nova_compute[254819]:   <on_crash>destroy</on_crash>
Dec 06 10:11:37 compute-0 nova_compute[254819]:   <devices>
Dec 06 10:11:37 compute-0 nova_compute[254819]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Dec 06 10:11:37 compute-0 nova_compute[254819]:     <disk type='network' device='disk'>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <driver name='qemu' type='raw' cache='none'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <auth username='openstack'>
Dec 06 10:11:37 compute-0 nova_compute[254819]:         <secret type='ceph' uuid='5ecd3f74-dade-5fc4-92ce-8950ae424258'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       </auth>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <source protocol='rbd' name='vms/467f8e9a-e166-409e-920c-689fea4ea3f6_disk' index='2'>
Dec 06 10:11:37 compute-0 nova_compute[254819]:         <host name='192.168.122.100' port='6789'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:         <host name='192.168.122.102' port='6789'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:         <host name='192.168.122.101' port='6789'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       </source>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <target dev='vda' bus='virtio'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <alias name='virtio-disk0'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:     </disk>
Dec 06 10:11:37 compute-0 nova_compute[254819]:     <disk type='network' device='cdrom'>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <driver name='qemu' type='raw' cache='none'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <auth username='openstack'>
Dec 06 10:11:37 compute-0 nova_compute[254819]:         <secret type='ceph' uuid='5ecd3f74-dade-5fc4-92ce-8950ae424258'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       </auth>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <source protocol='rbd' name='vms/467f8e9a-e166-409e-920c-689fea4ea3f6_disk.config' index='1'>
Dec 06 10:11:37 compute-0 nova_compute[254819]:         <host name='192.168.122.100' port='6789'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:         <host name='192.168.122.102' port='6789'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:         <host name='192.168.122.101' port='6789'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       </source>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <target dev='sda' bus='sata'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <readonly/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <alias name='sata0-0-0'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:     </disk>
Dec 06 10:11:37 compute-0 nova_compute[254819]:     <controller type='pci' index='0' model='pcie-root'>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <alias name='pcie.0'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:     </controller>
Dec 06 10:11:37 compute-0 nova_compute[254819]:     <controller type='pci' index='1' model='pcie-root-port'>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <model name='pcie-root-port'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <target chassis='1' port='0x10'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <alias name='pci.1'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:     </controller>
Dec 06 10:11:37 compute-0 nova_compute[254819]:     <controller type='pci' index='2' model='pcie-root-port'>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <model name='pcie-root-port'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <target chassis='2' port='0x11'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <alias name='pci.2'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:     </controller>
Dec 06 10:11:37 compute-0 nova_compute[254819]:     <controller type='pci' index='3' model='pcie-root-port'>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <model name='pcie-root-port'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <target chassis='3' port='0x12'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <alias name='pci.3'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:     </controller>
Dec 06 10:11:37 compute-0 nova_compute[254819]:     <controller type='pci' index='4' model='pcie-root-port'>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <model name='pcie-root-port'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <target chassis='4' port='0x13'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <alias name='pci.4'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:     </controller>
Dec 06 10:11:37 compute-0 nova_compute[254819]:     <controller type='pci' index='5' model='pcie-root-port'>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <model name='pcie-root-port'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <target chassis='5' port='0x14'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <alias name='pci.5'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:     </controller>
Dec 06 10:11:37 compute-0 nova_compute[254819]:     <controller type='pci' index='6' model='pcie-root-port'>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <model name='pcie-root-port'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <target chassis='6' port='0x15'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <alias name='pci.6'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:     </controller>
Dec 06 10:11:37 compute-0 nova_compute[254819]:     <controller type='pci' index='7' model='pcie-root-port'>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <model name='pcie-root-port'/>
Dec 06 10:11:37 compute-0 sudo[270829]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <target chassis='7' port='0x16'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <alias name='pci.7'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:     </controller>
Dec 06 10:11:37 compute-0 nova_compute[254819]:     <controller type='pci' index='8' model='pcie-root-port'>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <model name='pcie-root-port'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <target chassis='8' port='0x17'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <alias name='pci.8'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:     </controller>
Dec 06 10:11:37 compute-0 nova_compute[254819]:     <controller type='pci' index='9' model='pcie-root-port'>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <model name='pcie-root-port'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <target chassis='9' port='0x18'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <alias name='pci.9'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:     </controller>
Dec 06 10:11:37 compute-0 nova_compute[254819]:     <controller type='pci' index='10' model='pcie-root-port'>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <model name='pcie-root-port'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <target chassis='10' port='0x19'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <alias name='pci.10'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:     </controller>
Dec 06 10:11:37 compute-0 nova_compute[254819]:     <controller type='pci' index='11' model='pcie-root-port'>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <model name='pcie-root-port'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <target chassis='11' port='0x1a'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <alias name='pci.11'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:     </controller>
Dec 06 10:11:37 compute-0 nova_compute[254819]:     <controller type='pci' index='12' model='pcie-root-port'>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <model name='pcie-root-port'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <target chassis='12' port='0x1b'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <alias name='pci.12'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:     </controller>
Dec 06 10:11:37 compute-0 nova_compute[254819]:     <controller type='pci' index='13' model='pcie-root-port'>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <model name='pcie-root-port'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <target chassis='13' port='0x1c'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <alias name='pci.13'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:     </controller>
Dec 06 10:11:37 compute-0 sudo[270829]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:11:37 compute-0 nova_compute[254819]:     <controller type='pci' index='14' model='pcie-root-port'>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <model name='pcie-root-port'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <target chassis='14' port='0x1d'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <alias name='pci.14'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:     </controller>
Dec 06 10:11:37 compute-0 nova_compute[254819]:     <controller type='pci' index='15' model='pcie-root-port'>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <model name='pcie-root-port'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <target chassis='15' port='0x1e'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <alias name='pci.15'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:     </controller>
Dec 06 10:11:37 compute-0 nova_compute[254819]:     <controller type='pci' index='16' model='pcie-root-port'>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <model name='pcie-root-port'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <target chassis='16' port='0x1f'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <alias name='pci.16'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:     </controller>
Dec 06 10:11:37 compute-0 nova_compute[254819]:     <controller type='pci' index='17' model='pcie-root-port'>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <model name='pcie-root-port'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <target chassis='17' port='0x20'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <alias name='pci.17'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:     </controller>
Dec 06 10:11:37 compute-0 nova_compute[254819]:     <controller type='pci' index='18' model='pcie-root-port'>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <model name='pcie-root-port'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <target chassis='18' port='0x21'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <alias name='pci.18'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:     </controller>
Dec 06 10:11:37 compute-0 nova_compute[254819]:     <controller type='pci' index='19' model='pcie-root-port'>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <model name='pcie-root-port'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <target chassis='19' port='0x22'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <alias name='pci.19'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:     </controller>
Dec 06 10:11:37 compute-0 nova_compute[254819]:     <controller type='pci' index='20' model='pcie-root-port'>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <model name='pcie-root-port'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <target chassis='20' port='0x23'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <alias name='pci.20'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:     </controller>
Dec 06 10:11:37 compute-0 nova_compute[254819]:     <controller type='pci' index='21' model='pcie-root-port'>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <model name='pcie-root-port'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <target chassis='21' port='0x24'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <alias name='pci.21'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Dec 06 10:11:37 compute-0 sudo[270829]: pam_unix(sudo:session): session closed for user root
Dec 06 10:11:37 compute-0 nova_compute[254819]:     </controller>
Dec 06 10:11:37 compute-0 nova_compute[254819]:     <controller type='pci' index='22' model='pcie-root-port'>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <model name='pcie-root-port'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <target chassis='22' port='0x25'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <alias name='pci.22'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:     </controller>
Dec 06 10:11:37 compute-0 nova_compute[254819]:     <controller type='pci' index='23' model='pcie-root-port'>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <model name='pcie-root-port'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <target chassis='23' port='0x26'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <alias name='pci.23'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:     </controller>
Dec 06 10:11:37 compute-0 nova_compute[254819]:     <controller type='pci' index='24' model='pcie-root-port'>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <model name='pcie-root-port'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <target chassis='24' port='0x27'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <alias name='pci.24'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:     </controller>
Dec 06 10:11:37 compute-0 nova_compute[254819]:     <controller type='pci' index='25' model='pcie-root-port'>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <model name='pcie-root-port'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <target chassis='25' port='0x28'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <alias name='pci.25'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:     </controller>
Dec 06 10:11:37 compute-0 nova_compute[254819]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <model name='pcie-pci-bridge'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <alias name='pci.26'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:     </controller>
Dec 06 10:11:37 compute-0 nova_compute[254819]:     <controller type='usb' index='0' model='piix3-uhci'>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <alias name='usb'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:     </controller>
Dec 06 10:11:37 compute-0 nova_compute[254819]:     <controller type='sata' index='0'>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <alias name='ide'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:     </controller>
Dec 06 10:11:37 compute-0 nova_compute[254819]:     <interface type='ethernet'>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <mac address='fa:16:3e:64:9d:d4'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <target dev='tapec2bc9a6-15'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <model type='virtio'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <driver name='vhost' rx_queue_size='512'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <mtu size='1442'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <alias name='net0'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:     </interface>
Dec 06 10:11:37 compute-0 nova_compute[254819]:     <interface type='ethernet'>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <mac address='fa:16:3e:9c:5b:44'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <target dev='tap88b1b4c6-36'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <model type='virtio'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <driver name='vhost' rx_queue_size='512'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <mtu size='1442'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <alias name='net1'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:     </interface>
Dec 06 10:11:37 compute-0 nova_compute[254819]:     <serial type='pty'>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <source path='/dev/pts/0'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <log file='/var/lib/nova/instances/467f8e9a-e166-409e-920c-689fea4ea3f6/console.log' append='off'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <target type='isa-serial' port='0'>
Dec 06 10:11:37 compute-0 nova_compute[254819]:         <model name='isa-serial'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       </target>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <alias name='serial0'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:     </serial>
Dec 06 10:11:37 compute-0 nova_compute[254819]:     <console type='pty' tty='/dev/pts/0'>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <source path='/dev/pts/0'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <log file='/var/lib/nova/instances/467f8e9a-e166-409e-920c-689fea4ea3f6/console.log' append='off'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <target type='serial' port='0'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <alias name='serial0'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:     </console>
Dec 06 10:11:37 compute-0 nova_compute[254819]:     <input type='tablet' bus='usb'>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <alias name='input0'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <address type='usb' bus='0' port='1'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:     </input>
Dec 06 10:11:37 compute-0 nova_compute[254819]:     <input type='mouse' bus='ps2'>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <alias name='input1'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:     </input>
Dec 06 10:11:37 compute-0 nova_compute[254819]:     <input type='keyboard' bus='ps2'>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <alias name='input2'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:     </input>
Dec 06 10:11:37 compute-0 nova_compute[254819]:     <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <listen type='address' address='::0'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:     </graphics>
Dec 06 10:11:37 compute-0 nova_compute[254819]:     <audio id='1' type='none'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:     <video>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <model type='virtio' heads='1' primary='yes'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <alias name='video0'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:     </video>
Dec 06 10:11:37 compute-0 nova_compute[254819]:     <watchdog model='itco' action='reset'>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <alias name='watchdog0'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:     </watchdog>
Dec 06 10:11:37 compute-0 nova_compute[254819]:     <memballoon model='virtio'>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <stats period='10'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <alias name='balloon0'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:     </memballoon>
Dec 06 10:11:37 compute-0 nova_compute[254819]:     <rng model='virtio'>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <backend model='random'>/dev/urandom</backend>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <alias name='rng0'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:     </rng>
Dec 06 10:11:37 compute-0 nova_compute[254819]:   </devices>
Dec 06 10:11:37 compute-0 nova_compute[254819]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Dec 06 10:11:37 compute-0 nova_compute[254819]:     <label>system_u:system_r:svirt_t:s0:c464,c770</label>
Dec 06 10:11:37 compute-0 nova_compute[254819]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c464,c770</imagelabel>
Dec 06 10:11:37 compute-0 nova_compute[254819]:   </seclabel>
Dec 06 10:11:37 compute-0 nova_compute[254819]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Dec 06 10:11:37 compute-0 nova_compute[254819]:     <label>+107:+107</label>
Dec 06 10:11:37 compute-0 nova_compute[254819]:     <imagelabel>+107:+107</imagelabel>
Dec 06 10:11:37 compute-0 nova_compute[254819]:   </seclabel>
Dec 06 10:11:37 compute-0 nova_compute[254819]: </domain>
Dec 06 10:11:37 compute-0 nova_compute[254819]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Dec 06 10:11:37 compute-0 nova_compute[254819]: 2025-12-06 10:11:37.282 254824 INFO nova.virt.libvirt.driver [None req-b12601a3-3a4c-4af5-af7f-7c124e1fb718 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Successfully detached device tap88b1b4c6-36 from instance 467f8e9a-e166-409e-920c-689fea4ea3f6 from the persistent domain config.
Dec 06 10:11:37 compute-0 nova_compute[254819]: 2025-12-06 10:11:37.282 254824 DEBUG nova.virt.libvirt.driver [None req-b12601a3-3a4c-4af5-af7f-7c124e1fb718 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] (1/8): Attempting to detach device tap88b1b4c6-36 with device alias net1 from instance 467f8e9a-e166-409e-920c-689fea4ea3f6 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Dec 06 10:11:37 compute-0 nova_compute[254819]: 2025-12-06 10:11:37.282 254824 DEBUG nova.virt.libvirt.guest [None req-b12601a3-3a4c-4af5-af7f-7c124e1fb718 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] detach device xml: <interface type="ethernet">
Dec 06 10:11:37 compute-0 nova_compute[254819]:   <mac address="fa:16:3e:9c:5b:44"/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:   <model type="virtio"/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:   <driver name="vhost" rx_queue_size="512"/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:   <mtu size="1442"/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:   <target dev="tap88b1b4c6-36"/>
Dec 06 10:11:37 compute-0 nova_compute[254819]: </interface>
Dec 06 10:11:37 compute-0 nova_compute[254819]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Dec 06 10:11:37 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:11:37.305Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec 06 10:11:37 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:11:37.305Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec 06 10:11:37 compute-0 kernel: tap88b1b4c6-36 (unregistering): left promiscuous mode
Dec 06 10:11:37 compute-0 NetworkManager[48882]: <info>  [1765015897.3349] device (tap88b1b4c6-36): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 06 10:11:37 compute-0 nova_compute[254819]: 2025-12-06 10:11:37.345 254824 DEBUG nova.virt.libvirt.driver [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] Received event <DeviceRemovedEvent: 1765015897.3447344, 467f8e9a-e166-409e-920c-689fea4ea3f6 => net1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Dec 06 10:11:37 compute-0 nova_compute[254819]: 2025-12-06 10:11:37.346 254824 DEBUG nova.virt.libvirt.driver [None req-b12601a3-3a4c-4af5-af7f-7c124e1fb718 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Start waiting for the detach event from libvirt for device tap88b1b4c6-36 with device alias net1 for instance 467f8e9a-e166-409e-920c-689fea4ea3f6 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Dec 06 10:11:37 compute-0 nova_compute[254819]: 2025-12-06 10:11:37.346 254824 DEBUG nova.virt.libvirt.guest [None req-b12601a3-3a4c-4af5-af7f-7c124e1fb718 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:9c:5b:44"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap88b1b4c6-36"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Dec 06 10:11:37 compute-0 sudo[270856]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 5ecd3f74-dade-5fc4-92ce-8950ae424258 -- raw list --format json
Dec 06 10:11:37 compute-0 sudo[270856]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:11:37 compute-0 ovn_controller[152417]: 2025-12-06T10:11:37Z|00077|binding|INFO|Releasing lport 88b1b4c6-36ba-46c8-baa2-da5b266af4d1 from this chassis (sb_readonly=0)
Dec 06 10:11:37 compute-0 ovn_controller[152417]: 2025-12-06T10:11:37Z|00078|binding|INFO|Setting lport 88b1b4c6-36ba-46c8-baa2-da5b266af4d1 down in Southbound
Dec 06 10:11:37 compute-0 ovn_controller[152417]: 2025-12-06T10:11:37Z|00079|binding|INFO|Removing iface tap88b1b4c6-36 ovn-installed in OVS
Dec 06 10:11:37 compute-0 nova_compute[254819]: 2025-12-06 10:11:37.408 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:11:37 compute-0 nova_compute[254819]: 2025-12-06 10:11:37.411 254824 DEBUG nova.virt.libvirt.guest [None req-b12601a3-3a4c-4af5-af7f-7c124e1fb718 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:9c:5b:44"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap88b1b4c6-36"/></interface>not found in domain: <domain type='kvm' id='4'>
Dec 06 10:11:37 compute-0 nova_compute[254819]:   <name>instance-00000006</name>
Dec 06 10:11:37 compute-0 nova_compute[254819]:   <uuid>467f8e9a-e166-409e-920c-689fea4ea3f6</uuid>
Dec 06 10:11:37 compute-0 nova_compute[254819]:   <metadata>
Dec 06 10:11:37 compute-0 nova_compute[254819]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 06 10:11:37 compute-0 nova_compute[254819]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:   <nova:name>tempest-TestNetworkBasicOps-server-883828898</nova:name>
Dec 06 10:11:37 compute-0 nova_compute[254819]:   <nova:creationTime>2025-12-06 10:10:50</nova:creationTime>
Dec 06 10:11:37 compute-0 nova_compute[254819]:   <nova:flavor name="m1.nano">
Dec 06 10:11:37 compute-0 nova_compute[254819]:     <nova:memory>128</nova:memory>
Dec 06 10:11:37 compute-0 nova_compute[254819]:     <nova:disk>1</nova:disk>
Dec 06 10:11:37 compute-0 nova_compute[254819]:     <nova:swap>0</nova:swap>
Dec 06 10:11:37 compute-0 nova_compute[254819]:     <nova:ephemeral>0</nova:ephemeral>
Dec 06 10:11:37 compute-0 nova_compute[254819]:     <nova:vcpus>1</nova:vcpus>
Dec 06 10:11:37 compute-0 nova_compute[254819]:   </nova:flavor>
Dec 06 10:11:37 compute-0 nova_compute[254819]:   <nova:owner>
Dec 06 10:11:37 compute-0 nova_compute[254819]:     <nova:user uuid="03615580775245e6ae335ee9d785611f">tempest-TestNetworkBasicOps-1971100882-project-member</nova:user>
Dec 06 10:11:37 compute-0 nova_compute[254819]:     <nova:project uuid="92b402c8d3e2476abc98be42a1e6d34e">tempest-TestNetworkBasicOps-1971100882</nova:project>
Dec 06 10:11:37 compute-0 nova_compute[254819]:   </nova:owner>
Dec 06 10:11:37 compute-0 nova_compute[254819]:   <nova:root type="image" uuid="9489b8a5-a798-4e26-87f9-59bb1eb2e6fd"/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:   <nova:ports>
Dec 06 10:11:37 compute-0 nova_compute[254819]:     <nova:port uuid="ec2bc9a6-1578-4c92-b8c9-4d286a1a6f4b">
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:     </nova:port>
Dec 06 10:11:37 compute-0 nova_compute[254819]:     <nova:port uuid="88b1b4c6-36ba-46c8-baa2-da5b266af4d1">
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <nova:ip type="fixed" address="10.100.0.24" ipVersion="4"/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:     </nova:port>
Dec 06 10:11:37 compute-0 nova_compute[254819]:   </nova:ports>
Dec 06 10:11:37 compute-0 nova_compute[254819]: </nova:instance>
Dec 06 10:11:37 compute-0 nova_compute[254819]:   </metadata>
Dec 06 10:11:37 compute-0 nova_compute[254819]:   <memory unit='KiB'>131072</memory>
Dec 06 10:11:37 compute-0 nova_compute[254819]:   <currentMemory unit='KiB'>131072</currentMemory>
Dec 06 10:11:37 compute-0 nova_compute[254819]:   <vcpu placement='static'>1</vcpu>
Dec 06 10:11:37 compute-0 nova_compute[254819]:   <resource>
Dec 06 10:11:37 compute-0 nova_compute[254819]:     <partition>/machine</partition>
Dec 06 10:11:37 compute-0 nova_compute[254819]:   </resource>
Dec 06 10:11:37 compute-0 nova_compute[254819]:   <sysinfo type='smbios'>
Dec 06 10:11:37 compute-0 nova_compute[254819]:     <system>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <entry name='manufacturer'>RDO</entry>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <entry name='product'>OpenStack Compute</entry>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <entry name='serial'>467f8e9a-e166-409e-920c-689fea4ea3f6</entry>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <entry name='uuid'>467f8e9a-e166-409e-920c-689fea4ea3f6</entry>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <entry name='family'>Virtual Machine</entry>
Dec 06 10:11:37 compute-0 nova_compute[254819]:     </system>
Dec 06 10:11:37 compute-0 nova_compute[254819]:   </sysinfo>
Dec 06 10:11:37 compute-0 nova_compute[254819]:   <os>
Dec 06 10:11:37 compute-0 nova_compute[254819]:     <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Dec 06 10:11:37 compute-0 nova_compute[254819]:     <boot dev='hd'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:     <smbios mode='sysinfo'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:   </os>
Dec 06 10:11:37 compute-0 nova_compute[254819]:   <features>
Dec 06 10:11:37 compute-0 nova_compute[254819]:     <acpi/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:     <apic/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:     <vmcoreinfo state='on'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:   </features>
Dec 06 10:11:37 compute-0 nova_compute[254819]:   <cpu mode='custom' match='exact' check='full'>
Dec 06 10:11:37 compute-0 nova_compute[254819]:     <model fallback='forbid'>EPYC-Rome</model>
Dec 06 10:11:37 compute-0 nova_compute[254819]:     <vendor>AMD</vendor>
Dec 06 10:11:37 compute-0 nova_compute[254819]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:     <feature policy='require' name='x2apic'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:     <feature policy='require' name='tsc-deadline'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:     <feature policy='require' name='hypervisor'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:     <feature policy='require' name='tsc_adjust'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:     <feature policy='require' name='spec-ctrl'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:     <feature policy='require' name='stibp'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:     <feature policy='require' name='ssbd'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:     <feature policy='require' name='cmp_legacy'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:     <feature policy='require' name='overflow-recov'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:     <feature policy='require' name='succor'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:     <feature policy='require' name='ibrs'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:     <feature policy='require' name='amd-ssbd'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:     <feature policy='require' name='virt-ssbd'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:     <feature policy='disable' name='lbrv'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:     <feature policy='disable' name='tsc-scale'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:     <feature policy='disable' name='vmcb-clean'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:     <feature policy='disable' name='flushbyasid'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:     <feature policy='disable' name='pause-filter'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:     <feature policy='disable' name='pfthreshold'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:     <feature policy='disable' name='svme-addr-chk'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:     <feature policy='require' name='lfence-always-serializing'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:     <feature policy='disable' name='xsaves'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:     <feature policy='disable' name='svm'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:     <feature policy='require' name='topoext'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:     <feature policy='disable' name='npt'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:     <feature policy='disable' name='nrip-save'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:   </cpu>
Dec 06 10:11:37 compute-0 nova_compute[254819]:   <clock offset='utc'>
Dec 06 10:11:37 compute-0 nova_compute[254819]:     <timer name='pit' tickpolicy='delay'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:     <timer name='rtc' tickpolicy='catchup'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:     <timer name='hpet' present='no'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:   </clock>
Dec 06 10:11:37 compute-0 nova_compute[254819]:   <on_poweroff>destroy</on_poweroff>
Dec 06 10:11:37 compute-0 nova_compute[254819]:   <on_reboot>restart</on_reboot>
Dec 06 10:11:37 compute-0 nova_compute[254819]:   <on_crash>destroy</on_crash>
Dec 06 10:11:37 compute-0 nova_compute[254819]:   <devices>
Dec 06 10:11:37 compute-0 nova_compute[254819]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Dec 06 10:11:37 compute-0 nova_compute[254819]:     <disk type='network' device='disk'>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <driver name='qemu' type='raw' cache='none'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <auth username='openstack'>
Dec 06 10:11:37 compute-0 nova_compute[254819]:         <secret type='ceph' uuid='5ecd3f74-dade-5fc4-92ce-8950ae424258'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       </auth>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <source protocol='rbd' name='vms/467f8e9a-e166-409e-920c-689fea4ea3f6_disk' index='2'>
Dec 06 10:11:37 compute-0 nova_compute[254819]:         <host name='192.168.122.100' port='6789'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:         <host name='192.168.122.102' port='6789'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:         <host name='192.168.122.101' port='6789'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       </source>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <target dev='vda' bus='virtio'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <alias name='virtio-disk0'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:     </disk>
Dec 06 10:11:37 compute-0 nova_compute[254819]:     <disk type='network' device='cdrom'>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <driver name='qemu' type='raw' cache='none'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <auth username='openstack'>
Dec 06 10:11:37 compute-0 nova_compute[254819]:         <secret type='ceph' uuid='5ecd3f74-dade-5fc4-92ce-8950ae424258'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       </auth>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <source protocol='rbd' name='vms/467f8e9a-e166-409e-920c-689fea4ea3f6_disk.config' index='1'>
Dec 06 10:11:37 compute-0 nova_compute[254819]:         <host name='192.168.122.100' port='6789'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:         <host name='192.168.122.102' port='6789'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:         <host name='192.168.122.101' port='6789'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       </source>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <target dev='sda' bus='sata'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <readonly/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <alias name='sata0-0-0'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:     </disk>
Dec 06 10:11:37 compute-0 nova_compute[254819]:     <controller type='pci' index='0' model='pcie-root'>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <alias name='pcie.0'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:     </controller>
Dec 06 10:11:37 compute-0 nova_compute[254819]:     <controller type='pci' index='1' model='pcie-root-port'>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <model name='pcie-root-port'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <target chassis='1' port='0x10'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <alias name='pci.1'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:     </controller>
Dec 06 10:11:37 compute-0 nova_compute[254819]:     <controller type='pci' index='2' model='pcie-root-port'>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <model name='pcie-root-port'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <target chassis='2' port='0x11'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <alias name='pci.2'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:     </controller>
Dec 06 10:11:37 compute-0 nova_compute[254819]:     <controller type='pci' index='3' model='pcie-root-port'>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <model name='pcie-root-port'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <target chassis='3' port='0x12'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <alias name='pci.3'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:     </controller>
Dec 06 10:11:37 compute-0 nova_compute[254819]:     <controller type='pci' index='4' model='pcie-root-port'>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <model name='pcie-root-port'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <target chassis='4' port='0x13'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <alias name='pci.4'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:     </controller>
Dec 06 10:11:37 compute-0 nova_compute[254819]:     <controller type='pci' index='5' model='pcie-root-port'>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <model name='pcie-root-port'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <target chassis='5' port='0x14'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <alias name='pci.5'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:     </controller>
Dec 06 10:11:37 compute-0 nova_compute[254819]:     <controller type='pci' index='6' model='pcie-root-port'>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <model name='pcie-root-port'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <target chassis='6' port='0x15'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <alias name='pci.6'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:     </controller>
Dec 06 10:11:37 compute-0 nova_compute[254819]:     <controller type='pci' index='7' model='pcie-root-port'>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <model name='pcie-root-port'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <target chassis='7' port='0x16'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <alias name='pci.7'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:     </controller>
Dec 06 10:11:37 compute-0 nova_compute[254819]:     <controller type='pci' index='8' model='pcie-root-port'>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <model name='pcie-root-port'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <target chassis='8' port='0x17'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <alias name='pci.8'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:     </controller>
Dec 06 10:11:37 compute-0 nova_compute[254819]:     <controller type='pci' index='9' model='pcie-root-port'>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <model name='pcie-root-port'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <target chassis='9' port='0x18'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <alias name='pci.9'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:     </controller>
Dec 06 10:11:37 compute-0 nova_compute[254819]:     <controller type='pci' index='10' model='pcie-root-port'>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <model name='pcie-root-port'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <target chassis='10' port='0x19'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <alias name='pci.10'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:     </controller>
Dec 06 10:11:37 compute-0 nova_compute[254819]:     <controller type='pci' index='11' model='pcie-root-port'>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <model name='pcie-root-port'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <target chassis='11' port='0x1a'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <alias name='pci.11'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:     </controller>
Dec 06 10:11:37 compute-0 nova_compute[254819]:     <controller type='pci' index='12' model='pcie-root-port'>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <model name='pcie-root-port'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <target chassis='12' port='0x1b'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <alias name='pci.12'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:     </controller>
Dec 06 10:11:37 compute-0 nova_compute[254819]:     <controller type='pci' index='13' model='pcie-root-port'>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <model name='pcie-root-port'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <target chassis='13' port='0x1c'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <alias name='pci.13'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:     </controller>
Dec 06 10:11:37 compute-0 nova_compute[254819]:     <controller type='pci' index='14' model='pcie-root-port'>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <model name='pcie-root-port'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <target chassis='14' port='0x1d'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <alias name='pci.14'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:     </controller>
Dec 06 10:11:37 compute-0 nova_compute[254819]:     <controller type='pci' index='15' model='pcie-root-port'>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <model name='pcie-root-port'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <target chassis='15' port='0x1e'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <alias name='pci.15'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:     </controller>
Dec 06 10:11:37 compute-0 nova_compute[254819]:     <controller type='pci' index='16' model='pcie-root-port'>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <model name='pcie-root-port'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <target chassis='16' port='0x1f'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <alias name='pci.16'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:     </controller>
Dec 06 10:11:37 compute-0 nova_compute[254819]:     <controller type='pci' index='17' model='pcie-root-port'>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <model name='pcie-root-port'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <target chassis='17' port='0x20'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <alias name='pci.17'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:     </controller>
Dec 06 10:11:37 compute-0 nova_compute[254819]:     <controller type='pci' index='18' model='pcie-root-port'>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <model name='pcie-root-port'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <target chassis='18' port='0x21'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <alias name='pci.18'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:     </controller>
Dec 06 10:11:37 compute-0 nova_compute[254819]:     <controller type='pci' index='19' model='pcie-root-port'>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <model name='pcie-root-port'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <target chassis='19' port='0x22'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <alias name='pci.19'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:     </controller>
Dec 06 10:11:37 compute-0 nova_compute[254819]:     <controller type='pci' index='20' model='pcie-root-port'>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <model name='pcie-root-port'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <target chassis='20' port='0x23'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <alias name='pci.20'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:     </controller>
Dec 06 10:11:37 compute-0 nova_compute[254819]:     <controller type='pci' index='21' model='pcie-root-port'>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <model name='pcie-root-port'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <target chassis='21' port='0x24'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <alias name='pci.21'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:     </controller>
Dec 06 10:11:37 compute-0 nova_compute[254819]:     <controller type='pci' index='22' model='pcie-root-port'>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <model name='pcie-root-port'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <target chassis='22' port='0x25'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <alias name='pci.22'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:     </controller>
Dec 06 10:11:37 compute-0 nova_compute[254819]:     <controller type='pci' index='23' model='pcie-root-port'>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <model name='pcie-root-port'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <target chassis='23' port='0x26'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <alias name='pci.23'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:     </controller>
Dec 06 10:11:37 compute-0 nova_compute[254819]:     <controller type='pci' index='24' model='pcie-root-port'>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <model name='pcie-root-port'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <target chassis='24' port='0x27'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <alias name='pci.24'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:     </controller>
Dec 06 10:11:37 compute-0 nova_compute[254819]:     <controller type='pci' index='25' model='pcie-root-port'>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <model name='pcie-root-port'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <target chassis='25' port='0x28'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <alias name='pci.25'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:     </controller>
Dec 06 10:11:37 compute-0 nova_compute[254819]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <model name='pcie-pci-bridge'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <alias name='pci.26'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:     </controller>
Dec 06 10:11:37 compute-0 nova_compute[254819]:     <controller type='usb' index='0' model='piix3-uhci'>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <alias name='usb'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:     </controller>
Dec 06 10:11:37 compute-0 nova_compute[254819]:     <controller type='sata' index='0'>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <alias name='ide'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:     </controller>
Dec 06 10:11:37 compute-0 nova_compute[254819]:     <interface type='ethernet'>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <mac address='fa:16:3e:64:9d:d4'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <target dev='tapec2bc9a6-15'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <model type='virtio'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <driver name='vhost' rx_queue_size='512'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <mtu size='1442'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <alias name='net0'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:     </interface>
Dec 06 10:11:37 compute-0 nova_compute[254819]:     <serial type='pty'>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <source path='/dev/pts/0'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <log file='/var/lib/nova/instances/467f8e9a-e166-409e-920c-689fea4ea3f6/console.log' append='off'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <target type='isa-serial' port='0'>
Dec 06 10:11:37 compute-0 nova_compute[254819]:         <model name='isa-serial'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       </target>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <alias name='serial0'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:     </serial>
Dec 06 10:11:37 compute-0 nova_compute[254819]:     <console type='pty' tty='/dev/pts/0'>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <source path='/dev/pts/0'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <log file='/var/lib/nova/instances/467f8e9a-e166-409e-920c-689fea4ea3f6/console.log' append='off'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <target type='serial' port='0'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <alias name='serial0'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:     </console>
Dec 06 10:11:37 compute-0 nova_compute[254819]:     <input type='tablet' bus='usb'>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <alias name='input0'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <address type='usb' bus='0' port='1'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:     </input>
Dec 06 10:11:37 compute-0 nova_compute[254819]:     <input type='mouse' bus='ps2'>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <alias name='input1'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:     </input>
Dec 06 10:11:37 compute-0 nova_compute[254819]:     <input type='keyboard' bus='ps2'>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <alias name='input2'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:     </input>
Dec 06 10:11:37 compute-0 nova_compute[254819]:     <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <listen type='address' address='::0'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:     </graphics>
Dec 06 10:11:37 compute-0 nova_compute[254819]:     <audio id='1' type='none'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:     <video>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <model type='virtio' heads='1' primary='yes'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <alias name='video0'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:     </video>
Dec 06 10:11:37 compute-0 nova_compute[254819]:     <watchdog model='itco' action='reset'>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <alias name='watchdog0'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:     </watchdog>
Dec 06 10:11:37 compute-0 nova_compute[254819]:     <memballoon model='virtio'>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <stats period='10'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <alias name='balloon0'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:     </memballoon>
Dec 06 10:11:37 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:11:37.413 162267 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9c:5b:44 10.100.0.24', 'unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.24/28', 'neutron:device_id': '467f8e9a-e166-409e-920c-689fea4ea3f6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-af11da89-c29d-4ef1-80d5-4b619757b0ff', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '92b402c8d3e2476abc98be42a1e6d34e', 'neutron:revision_number': '5', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8f5b6720-4878-43e8-9823-306ee6c3568e, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f70c28558b0>], logical_port=88b1b4c6-36ba-46c8-baa2-da5b266af4d1) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f70c28558b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 10:11:37 compute-0 nova_compute[254819]:     <rng model='virtio'>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <backend model='random'>/dev/urandom</backend>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <alias name='rng0'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:     </rng>
Dec 06 10:11:37 compute-0 nova_compute[254819]:   </devices>
Dec 06 10:11:37 compute-0 nova_compute[254819]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Dec 06 10:11:37 compute-0 nova_compute[254819]:     <label>system_u:system_r:svirt_t:s0:c464,c770</label>
Dec 06 10:11:37 compute-0 nova_compute[254819]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c464,c770</imagelabel>
Dec 06 10:11:37 compute-0 nova_compute[254819]:   </seclabel>
Dec 06 10:11:37 compute-0 nova_compute[254819]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Dec 06 10:11:37 compute-0 nova_compute[254819]:     <label>+107:+107</label>
Dec 06 10:11:37 compute-0 nova_compute[254819]:     <imagelabel>+107:+107</imagelabel>
Dec 06 10:11:37 compute-0 nova_compute[254819]:   </seclabel>
Dec 06 10:11:37 compute-0 nova_compute[254819]: </domain>
Dec 06 10:11:37 compute-0 nova_compute[254819]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Dec 06 10:11:37 compute-0 nova_compute[254819]: 2025-12-06 10:11:37.411 254824 INFO nova.virt.libvirt.driver [None req-b12601a3-3a4c-4af5-af7f-7c124e1fb718 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Successfully detached device tap88b1b4c6-36 from instance 467f8e9a-e166-409e-920c-689fea4ea3f6 from the live domain config.
Dec 06 10:11:37 compute-0 nova_compute[254819]: 2025-12-06 10:11:37.412 254824 DEBUG nova.virt.libvirt.vif [None req-b12601a3-3a4c-4af5-af7f-7c124e1fb718 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T10:10:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-883828898',display_name='tempest-TestNetworkBasicOps-server-883828898',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-883828898',id=6,image_ref='9489b8a5-a798-4e26-87f9-59bb1eb2e6fd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBavG4AKWHlfpiq0SQasTveyxdMuqwUIBzXgDHnQ7us03WRPTjmnHIL9KdumxPOuSQ7mS9TjZaDU1Z0fZMB9bCP4vMT4dbs0/4ZtyRDMtJHhAJtsWO/6Dg3g/pdboWhC+A==',key_name='tempest-TestNetworkBasicOps-875879575',keypairs=<?>,launch_index=0,launched_at=2025-12-06T10:10:20Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='92b402c8d3e2476abc98be42a1e6d34e',ramdisk_id='',reservation_id='r-qxktas63',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='9489b8a5-a798-4e26-87f9-59bb1eb2e6fd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1971100882',owner_user_name='tempest-TestNetworkBasicOps-1971100882-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T10:10:20Z,user_data=None,user_id='03615580775245e6ae335ee9d785611f',uuid=467f8e9a-e166-409e-920c-689fea4ea3f6,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "88b1b4c6-36ba-46c8-baa2-da5b266af4d1", "address": "fa:16:3e:9c:5b:44", "network": {"id": "af11da89-c29d-4ef1-80d5-4b619757b0ff", "bridge": "br-int", "label": "tempest-network-smoke--2039147327", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.24", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap88b1b4c6-36", "ovs_interfaceid": "88b1b4c6-36ba-46c8-baa2-da5b266af4d1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 06 10:11:37 compute-0 nova_compute[254819]: 2025-12-06 10:11:37.412 254824 DEBUG nova.network.os_vif_util [None req-b12601a3-3a4c-4af5-af7f-7c124e1fb718 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Converting VIF {"id": "88b1b4c6-36ba-46c8-baa2-da5b266af4d1", "address": "fa:16:3e:9c:5b:44", "network": {"id": "af11da89-c29d-4ef1-80d5-4b619757b0ff", "bridge": "br-int", "label": "tempest-network-smoke--2039147327", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.24", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap88b1b4c6-36", "ovs_interfaceid": "88b1b4c6-36ba-46c8-baa2-da5b266af4d1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
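The two records above show nova converting its legacy VIF dict into the os-vif VIFOpenVSwitch object (nova_to_osvif_vif) before handing it to os-vif for unplugging. A minimal, dependency-free sketch of that field mapping, assuming only the JSON payload logged above; the real conversion in nova/network/os_vif_util.py builds versioned os_vif.objects rather than a plain dict:

    import json

    def nova_vif_to_osvif_fields(vif_json):
        """Map the legacy nova VIF dict onto the fields visible in the
        VIFOpenVSwitch repr below. Illustration only; nova constructs
        os_vif versioned objects, not dicts."""
        vif = json.loads(vif_json) if isinstance(vif_json, str) else vif_json
        details = vif.get("details", {})
        return {
            "id": vif["id"],
            "address": vif["address"],                      # MAC address
            "vif_name": vif["devname"],                     # tap device name
            "bridge_name": details.get("bridge_name"),      # br-int here
            "has_traffic_filtering": details.get("port_filter", False),
            "preserve_on_delete": vif.get("preserve_on_delete", False),
            "network_id": vif["network"]["id"],
            "plugin": "ovs" if vif["type"] == "ovs" else vif["type"],
        }

Fed the payload from the log, this yields the same values as the "Converted object VIFOpenVSwitch(...)" record that follows.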
Dec 06 10:11:37 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:11:37.414 162267 INFO neutron.agent.ovn.metadata.agent [-] Port 88b1b4c6-36ba-46c8-baa2-da5b266af4d1 in datapath af11da89-c29d-4ef1-80d5-4b619757b0ff unbound from our chassis
Dec 06 10:11:37 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:11:37.416 162267 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network af11da89-c29d-4ef1-80d5-4b619757b0ff, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 06 10:11:37 compute-0 nova_compute[254819]: 2025-12-06 10:11:37.416 254824 DEBUG nova.network.os_vif_util [None req-b12601a3-3a4c-4af5-af7f-7c124e1fb718 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:9c:5b:44,bridge_name='br-int',has_traffic_filtering=True,id=88b1b4c6-36ba-46c8-baa2-da5b266af4d1,network=Network(af11da89-c29d-4ef1-80d5-4b619757b0ff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap88b1b4c6-36') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 10:11:37 compute-0 nova_compute[254819]: 2025-12-06 10:11:37.417 254824 DEBUG os_vif [None req-b12601a3-3a4c-4af5-af7f-7c124e1fb718 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:9c:5b:44,bridge_name='br-int',has_traffic_filtering=True,id=88b1b4c6-36ba-46c8-baa2-da5b266af4d1,network=Network(af11da89-c29d-4ef1-80d5-4b619757b0ff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap88b1b4c6-36') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 06 10:11:37 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:11:37.417 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[469db3e3-627d-4107-b4dc-0ade42ee9b0b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:11:37 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:11:37.418 162267 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-af11da89-c29d-4ef1-80d5-4b619757b0ff namespace which is not needed anymore
Dec 06 10:11:37 compute-0 nova_compute[254819]: 2025-12-06 10:11:37.420 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:11:37 compute-0 nova_compute[254819]: 2025-12-06 10:11:37.420 254824 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap88b1b4c6-36, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
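The DelPortCommand transaction above is the programmatic equivalent of deleting the tap port from br-int with ovs-vsctl. A sketch of the same operation from the shell side, assuming ovs-vsctl is on PATH and the caller can reach the OVSDB socket; nova itself goes through ovsdbapp's native IDL rather than the CLI:

    import subprocess

    def del_ovs_port(bridge, port):
        # --if-exists mirrors DelPortCommand(if_exists=True): succeed
        # silently if the port has already been removed.
        subprocess.run(
            ["ovs-vsctl", "--if-exists", "del-port", bridge, port],
            check=True,
        )

    # del_ovs_port("br-int", "tap88b1b4c6-36")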
Dec 06 10:11:37 compute-0 nova_compute[254819]: 2025-12-06 10:11:37.421 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:11:37 compute-0 nova_compute[254819]: 2025-12-06 10:11:37.424 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 10:11:37 compute-0 nova_compute[254819]: 2025-12-06 10:11:37.426 254824 INFO os_vif [None req-b12601a3-3a4c-4af5-af7f-7c124e1fb718 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:9c:5b:44,bridge_name='br-int',has_traffic_filtering=True,id=88b1b4c6-36ba-46c8-baa2-da5b266af4d1,network=Network(af11da89-c29d-4ef1-80d5-4b619757b0ff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap88b1b4c6-36')
Dec 06 10:11:37 compute-0 nova_compute[254819]: 2025-12-06 10:11:37.427 254824 DEBUG nova.virt.libvirt.guest [None req-b12601a3-3a4c-4af5-af7f-7c124e1fb718 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 06 10:11:37 compute-0 nova_compute[254819]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:   <nova:name>tempest-TestNetworkBasicOps-server-883828898</nova:name>
Dec 06 10:11:37 compute-0 nova_compute[254819]:   <nova:creationTime>2025-12-06 10:11:37</nova:creationTime>
Dec 06 10:11:37 compute-0 nova_compute[254819]:   <nova:flavor name="m1.nano">
Dec 06 10:11:37 compute-0 nova_compute[254819]:     <nova:memory>128</nova:memory>
Dec 06 10:11:37 compute-0 nova_compute[254819]:     <nova:disk>1</nova:disk>
Dec 06 10:11:37 compute-0 nova_compute[254819]:     <nova:swap>0</nova:swap>
Dec 06 10:11:37 compute-0 nova_compute[254819]:     <nova:ephemeral>0</nova:ephemeral>
Dec 06 10:11:37 compute-0 nova_compute[254819]:     <nova:vcpus>1</nova:vcpus>
Dec 06 10:11:37 compute-0 nova_compute[254819]:   </nova:flavor>
Dec 06 10:11:37 compute-0 nova_compute[254819]:   <nova:owner>
Dec 06 10:11:37 compute-0 nova_compute[254819]:     <nova:user uuid="03615580775245e6ae335ee9d785611f">tempest-TestNetworkBasicOps-1971100882-project-member</nova:user>
Dec 06 10:11:37 compute-0 nova_compute[254819]:     <nova:project uuid="92b402c8d3e2476abc98be42a1e6d34e">tempest-TestNetworkBasicOps-1971100882</nova:project>
Dec 06 10:11:37 compute-0 nova_compute[254819]:   </nova:owner>
Dec 06 10:11:37 compute-0 nova_compute[254819]:   <nova:root type="image" uuid="9489b8a5-a798-4e26-87f9-59bb1eb2e6fd"/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:   <nova:ports>
Dec 06 10:11:37 compute-0 nova_compute[254819]:     <nova:port uuid="ec2bc9a6-1578-4c92-b8c9-4d286a1a6f4b">
Dec 06 10:11:37 compute-0 nova_compute[254819]:       <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Dec 06 10:11:37 compute-0 nova_compute[254819]:     </nova:port>
Dec 06 10:11:37 compute-0 nova_compute[254819]:   </nova:ports>
Dec 06 10:11:37 compute-0 nova_compute[254819]: </nova:instance>
Dec 06 10:11:37 compute-0 nova_compute[254819]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
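The <nova:instance> block nova just pushed into the domain is plain namespaced XML. A short sketch of generating an equivalent (abbreviated) document with the standard library, assuming only the namespace URI shown above; nova's real serializer in nova/virt/libvirt/config.py carries more fields:

    import xml.etree.ElementTree as ET

    NOVA_NS = "http://openstack.org/xmlns/libvirt/nova/1.1"
    ET.register_namespace("nova", NOVA_NS)

    def q(tag):
        # Qualify a tag with the nova metadata namespace.
        return "{%s}%s" % (NOVA_NS, tag)

    root = ET.Element(q("instance"))
    ET.SubElement(root, q("name")).text = "tempest-TestNetworkBasicOps-server-883828898"
    flavor = ET.SubElement(root, q("flavor"), name="m1.nano")
    for tag, val in (("memory", "128"), ("disk", "1"), ("vcpus", "1")):
        ET.SubElement(flavor, q(tag)).text = val
    print(ET.tostring(root, encoding="unicode"))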
Dec 06 10:11:37 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:11:37 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:11:37 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:11:37.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:11:37 compute-0 neutron-haproxy-ovnmeta-af11da89-c29d-4ef1-80d5-4b619757b0ff[270171]: [NOTICE]   (270175) : haproxy version is 2.8.14-c23fe91
Dec 06 10:11:37 compute-0 neutron-haproxy-ovnmeta-af11da89-c29d-4ef1-80d5-4b619757b0ff[270171]: [NOTICE]   (270175) : path to executable is /usr/sbin/haproxy
Dec 06 10:11:37 compute-0 neutron-haproxy-ovnmeta-af11da89-c29d-4ef1-80d5-4b619757b0ff[270171]: [WARNING]  (270175) : Exiting Master process...
Dec 06 10:11:37 compute-0 neutron-haproxy-ovnmeta-af11da89-c29d-4ef1-80d5-4b619757b0ff[270171]: [WARNING]  (270175) : Exiting Master process...
Dec 06 10:11:37 compute-0 neutron-haproxy-ovnmeta-af11da89-c29d-4ef1-80d5-4b619757b0ff[270171]: [ALERT]    (270175) : Current worker (270177) exited with code 143 (Terminated)
Dec 06 10:11:37 compute-0 neutron-haproxy-ovnmeta-af11da89-c29d-4ef1-80d5-4b619757b0ff[270171]: [WARNING]  (270175) : All workers exited. Exiting... (0)
Dec 06 10:11:37 compute-0 systemd[1]: libpod-351c3f74895b352c68f68591075c2276eb2709d2dce02e805682d48f4ab285d2.scope: Deactivated successfully.
Dec 06 10:11:37 compute-0 conmon[270171]: conmon 351c3f74895b352c68f6 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-351c3f74895b352c68f68591075c2276eb2709d2dce02e805682d48f4ab285d2.scope/container/memory.events
Dec 06 10:11:37 compute-0 podman[270905]: 2025-12-06 10:11:37.564225098 +0000 UTC m=+0.047604203 container died 351c3f74895b352c68f68591075c2276eb2709d2dce02e805682d48f4ab285d2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=neutron-haproxy-ovnmeta-af11da89-c29d-4ef1-80d5-4b619757b0ff, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 10:11:37 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-351c3f74895b352c68f68591075c2276eb2709d2dce02e805682d48f4ab285d2-userdata-shm.mount: Deactivated successfully.
Dec 06 10:11:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-65d556cdea8f60788a15893bebf04b8e9b5c638ceed2e80d5a7f1c58c122409c-merged.mount: Deactivated successfully.
Dec 06 10:11:37 compute-0 podman[270905]: 2025-12-06 10:11:37.604156075 +0000 UTC m=+0.087535190 container cleanup 351c3f74895b352c68f68591075c2276eb2709d2dce02e805682d48f4ab285d2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=neutron-haproxy-ovnmeta-af11da89-c29d-4ef1-80d5-4b619757b0ff, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Dec 06 10:11:37 compute-0 systemd[1]: libpod-conmon-351c3f74895b352c68f68591075c2276eb2709d2dce02e805682d48f4ab285d2.scope: Deactivated successfully.
Dec 06 10:11:37 compute-0 ceph-mon[74327]: pgmap v915: 337 pgs: 337 active+clean; 182 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 19 KiB/s wr, 14 op/s
Dec 06 10:11:37 compute-0 podman[270958]: 2025-12-06 10:11:37.661569259 +0000 UTC m=+0.037469382 container remove 351c3f74895b352c68f68591075c2276eb2709d2dce02e805682d48f4ab285d2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=neutron-haproxy-ovnmeta-af11da89-c29d-4ef1-80d5-4b619757b0ff, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true)
Dec 06 10:11:37 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:11:37.668 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[b1d0b25a-84a8-4823-9826-40f087f792be]: (4, ('Sat Dec  6 10:11:37 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-af11da89-c29d-4ef1-80d5-4b619757b0ff (351c3f74895b352c68f68591075c2276eb2709d2dce02e805682d48f4ab285d2)\n351c3f74895b352c68f68591075c2276eb2709d2dce02e805682d48f4ab285d2\nSat Dec  6 10:11:37 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-af11da89-c29d-4ef1-80d5-4b619757b0ff (351c3f74895b352c68f68591075c2276eb2709d2dce02e805682d48f4ab285d2)\n351c3f74895b352c68f68591075c2276eb2709d2dce02e805682d48f4ab285d2\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
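The privsep reply above carries the output of the agent's cleanup script stopping and then deleting the per-network haproxy container. The same two steps expressed directly against podman, as a hedged sketch; the agent runs this through a privileged helper rather than shelling out like this:

    import subprocess

    def remove_sidecar(name):
        # "podman stop" sends SIGTERM, which is why the haproxy worker
        # above exited with code 143 (128 + 15); "podman rm" then deletes
        # the stopped container.
        subprocess.run(["podman", "stop", name], check=True)
        subprocess.run(["podman", "rm", name], check=True)

    # remove_sidecar("neutron-haproxy-ovnmeta-af11da89-c29d-4ef1-80d5-4b619757b0ff")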
Dec 06 10:11:37 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:11:37.670 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[0b3b4cd8-2e6e-49bc-a7c6-9b6fddd88872]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:11:37 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:11:37.671 162267 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapaf11da89-c0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 10:11:37 compute-0 nova_compute[254819]: 2025-12-06 10:11:37.673 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:11:37 compute-0 kernel: tapaf11da89-c0: left promiscuous mode
Dec 06 10:11:37 compute-0 nova_compute[254819]: 2025-12-06 10:11:37.676 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:11:37 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:11:37.680 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[8a418272-80e6-493f-8a13-9c6e8bfb89f7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:11:37 compute-0 nova_compute[254819]: 2025-12-06 10:11:37.688 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:11:37 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:11:37.699 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[e0e88782-f9af-4918-bf0f-f4f30f058333]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:11:37 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:11:37.700 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[0f06d820-4828-43e2-a0af-8a940b9eea82]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:11:37 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:11:37.719 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[85c3d405-d849-4332-bc83-c6249e5d75bb]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 422084, 'reachable_time': 15380, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 270991, 'error': None, 'target': 'ovnmeta-af11da89-c29d-4ef1-80d5-4b619757b0ff', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:11:37 compute-0 systemd[1]: run-netns-ovnmeta\x2daf11da89\x2dc29d\x2d4ef1\x2d80d5\x2d4b619757b0ff.mount: Deactivated successfully.
Dec 06 10:11:37 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:11:37.722 162385 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-af11da89-c29d-4ef1-80d5-4b619757b0ff deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 06 10:11:37 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:11:37.722 162385 DEBUG oslo.privsep.daemon [-] privsep: reply[56daa704-65fb-4b1c-a8f3-3880788ea376]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
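The remove_netns call above deletes the per-network metadata namespace through neutron's privsep helper (pyroute2 under the hood). The same teardown from the command line, a sketch assuming iproute2 is installed and the caller has root privileges:

    import subprocess

    def delete_netns(name):
        # Equivalent of the privileged remove_netns() in
        # neutron/privileged/agent/linux/ip_lib.py; needs CAP_SYS_ADMIN.
        subprocess.run(["ip", "netns", "delete", name], check=True)

    # delete_netns("ovnmeta-af11da89-c29d-4ef1-80d5-4b619757b0ff")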
Dec 06 10:11:37 compute-0 podman[270992]: 2025-12-06 10:11:37.780318783 +0000 UTC m=+0.044891720 container create 864ca532e0eb06706f0fb4310d2ad3d7b588dfc93893f5828b9bfc77a10b5c47 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_ride, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 10:11:37 compute-0 systemd[1]: Started libpod-conmon-864ca532e0eb06706f0fb4310d2ad3d7b588dfc93893f5828b9bfc77a10b5c47.scope.
Dec 06 10:11:37 compute-0 systemd[1]: Started libcrun container.
Dec 06 10:11:37 compute-0 podman[270992]: 2025-12-06 10:11:37.76262976 +0000 UTC m=+0.027202727 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 06 10:11:37 compute-0 podman[270992]: 2025-12-06 10:11:37.858064141 +0000 UTC m=+0.122637098 container init 864ca532e0eb06706f0fb4310d2ad3d7b588dfc93893f5828b9bfc77a10b5c47 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_ride, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, io.buildah.version=1.40.1)
Dec 06 10:11:37 compute-0 podman[270992]: 2025-12-06 10:11:37.871602263 +0000 UTC m=+0.136175200 container start 864ca532e0eb06706f0fb4310d2ad3d7b588dfc93893f5828b9bfc77a10b5c47 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_ride, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 10:11:37 compute-0 podman[270992]: 2025-12-06 10:11:37.874974683 +0000 UTC m=+0.139547640 container attach 864ca532e0eb06706f0fb4310d2ad3d7b588dfc93893f5828b9bfc77a10b5c47 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_ride, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, OSD_FLAVOR=default)
Dec 06 10:11:37 compute-0 zealous_ride[271008]: 167 167
Dec 06 10:11:37 compute-0 systemd[1]: libpod-864ca532e0eb06706f0fb4310d2ad3d7b588dfc93893f5828b9bfc77a10b5c47.scope: Deactivated successfully.
Dec 06 10:11:37 compute-0 podman[270992]: 2025-12-06 10:11:37.878713673 +0000 UTC m=+0.143286630 container died 864ca532e0eb06706f0fb4310d2ad3d7b588dfc93893f5828b9bfc77a10b5c47 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_ride, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 10:11:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-246626891fab1a4b287eadfbe2b353aed297b4caf91ad2cd442c089fcdf33463-merged.mount: Deactivated successfully.
Dec 06 10:11:37 compute-0 podman[270992]: 2025-12-06 10:11:37.920601193 +0000 UTC m=+0.185174130 container remove 864ca532e0eb06706f0fb4310d2ad3d7b588dfc93893f5828b9bfc77a10b5c47 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_ride, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 10:11:37 compute-0 systemd[1]: libpod-conmon-864ca532e0eb06706f0fb4310d2ad3d7b588dfc93893f5828b9bfc77a10b5c47.scope: Deactivated successfully.
Dec 06 10:11:38 compute-0 podman[271034]: 2025-12-06 10:11:38.118753969 +0000 UTC m=+0.056313047 container create 0479e60197c1e85fe1d43b1bf6e1b21510bd5f523a3d1e00cd6a89740f79a27b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_bohr, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec 06 10:11:38 compute-0 systemd[1]: Started libpod-conmon-0479e60197c1e85fe1d43b1bf6e1b21510bd5f523a3d1e00cd6a89740f79a27b.scope.
Dec 06 10:11:38 compute-0 podman[271034]: 2025-12-06 10:11:38.09972622 +0000 UTC m=+0.037285328 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 06 10:11:38 compute-0 nova_compute[254819]: 2025-12-06 10:11:38.193 254824 DEBUG oslo_concurrency.lockutils [None req-b12601a3-3a4c-4af5-af7f-7c124e1fb718 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Acquiring lock "refresh_cache-467f8e9a-e166-409e-920c-689fea4ea3f6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 10:11:38 compute-0 nova_compute[254819]: 2025-12-06 10:11:38.194 254824 DEBUG oslo_concurrency.lockutils [None req-b12601a3-3a4c-4af5-af7f-7c124e1fb718 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Acquired lock "refresh_cache-467f8e9a-e166-409e-920c-689fea4ea3f6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 10:11:38 compute-0 nova_compute[254819]: 2025-12-06 10:11:38.195 254824 DEBUG nova.network.neutron [None req-b12601a3-3a4c-4af5-af7f-7c124e1fb718 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 467f8e9a-e166-409e-920c-689fea4ea3f6] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 06 10:11:38 compute-0 systemd[1]: Started libcrun container.
Dec 06 10:11:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ef27c0b0bd5570ac9eaf09d6c56335c3056deceea4418feec3a1b3b91e5da4e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 10:11:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ef27c0b0bd5570ac9eaf09d6c56335c3056deceea4418feec3a1b3b91e5da4e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 10:11:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ef27c0b0bd5570ac9eaf09d6c56335c3056deceea4418feec3a1b3b91e5da4e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 10:11:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ef27c0b0bd5570ac9eaf09d6c56335c3056deceea4418feec3a1b3b91e5da4e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 10:11:38 compute-0 podman[271034]: 2025-12-06 10:11:38.217712033 +0000 UTC m=+0.155271211 container init 0479e60197c1e85fe1d43b1bf6e1b21510bd5f523a3d1e00cd6a89740f79a27b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_bohr, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS)
Dec 06 10:11:38 compute-0 podman[271034]: 2025-12-06 10:11:38.227840953 +0000 UTC m=+0.165400041 container start 0479e60197c1e85fe1d43b1bf6e1b21510bd5f523a3d1e00cd6a89740f79a27b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_bohr, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 10:11:38 compute-0 podman[271034]: 2025-12-06 10:11:38.233705971 +0000 UTC m=+0.171265089 container attach 0479e60197c1e85fe1d43b1bf6e1b21510bd5f523a3d1e00cd6a89740f79a27b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_bohr, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 10:11:38 compute-0 nova_compute[254819]: 2025-12-06 10:11:38.328 254824 DEBUG nova.compute.manager [req-afa703e1-1bb2-44b0-9bf6-6f74289c2e12 req-c550dd0d-a3e3-465a-be15-f0f2e2f41801 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 467f8e9a-e166-409e-920c-689fea4ea3f6] Received event network-vif-deleted-88b1b4c6-36ba-46c8-baa2-da5b266af4d1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 10:11:38 compute-0 nova_compute[254819]: 2025-12-06 10:11:38.328 254824 INFO nova.compute.manager [req-afa703e1-1bb2-44b0-9bf6-6f74289c2e12 req-c550dd0d-a3e3-465a-be15-f0f2e2f41801 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 467f8e9a-e166-409e-920c-689fea4ea3f6] Neutron deleted interface 88b1b4c6-36ba-46c8-baa2-da5b266af4d1; detaching it from the instance and deleting it from the info cache
Dec 06 10:11:38 compute-0 nova_compute[254819]: 2025-12-06 10:11:38.328 254824 DEBUG nova.network.neutron [req-afa703e1-1bb2-44b0-9bf6-6f74289c2e12 req-c550dd0d-a3e3-465a-be15-f0f2e2f41801 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 467f8e9a-e166-409e-920c-689fea4ea3f6] Updating instance_info_cache with network_info: [{"id": "ec2bc9a6-1578-4c92-b8c9-4d286a1a6f4b", "address": "fa:16:3e:64:9d:d4", "network": {"id": "4d76af3c-ede9-445b-bea0-ba96a2eaeddd", "bridge": "br-int", "label": "tempest-network-smoke--1753144487", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.211", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapec2bc9a6-15", "ovs_interfaceid": "ec2bc9a6-1578-4c92-b8c9-4d286a1a6f4b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
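Updating instance_info_cache after a port delete amounts to filtering the deleted VIF out of the cached network_info list; the record above shows only the surviving port ec2bc9a6-... remaining. A minimal sketch of that pruning step, assuming network_info is the list-of-dicts shape logged here; nova operates on its NetworkInfo model rather than raw dicts:

    def prune_deleted_vif(network_info, deleted_port_id):
        """Return the cache contents minus the VIF neutron deleted."""
        return [vif for vif in network_info if vif["id"] != deleted_port_id]

    # prune_deleted_vif(cache, "88b1b4c6-36ba-46c8-baa2-da5b266af4d1")
    # leaves only port ec2bc9a6-1578-4c92-b8c9-4d286a1a6f4b, matching the log.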
Dec 06 10:11:38 compute-0 nova_compute[254819]: 2025-12-06 10:11:38.348 254824 DEBUG nova.objects.instance [req-afa703e1-1bb2-44b0-9bf6-6f74289c2e12 req-c550dd0d-a3e3-465a-be15-f0f2e2f41801 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Lazy-loading 'system_metadata' on Instance uuid 467f8e9a-e166-409e-920c-689fea4ea3f6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 10:11:38 compute-0 nova_compute[254819]: 2025-12-06 10:11:38.368 254824 DEBUG nova.objects.instance [req-afa703e1-1bb2-44b0-9bf6-6f74289c2e12 req-c550dd0d-a3e3-465a-be15-f0f2e2f41801 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Lazy-loading 'flavor' on Instance uuid 467f8e9a-e166-409e-920c-689fea4ea3f6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 10:11:38 compute-0 nova_compute[254819]: 2025-12-06 10:11:38.386 254824 DEBUG nova.virt.libvirt.vif [req-afa703e1-1bb2-44b0-9bf6-6f74289c2e12 req-c550dd0d-a3e3-465a-be15-f0f2e2f41801 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T10:10:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-883828898',display_name='tempest-TestNetworkBasicOps-server-883828898',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-883828898',id=6,image_ref='9489b8a5-a798-4e26-87f9-59bb1eb2e6fd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBavG4AKWHlfpiq0SQasTveyxdMuqwUIBzXgDHnQ7us03WRPTjmnHIL9KdumxPOuSQ7mS9TjZaDU1Z0fZMB9bCP4vMT4dbs0/4ZtyRDMtJHhAJtsWO/6Dg3g/pdboWhC+A==',key_name='tempest-TestNetworkBasicOps-875879575',keypairs=<?>,launch_index=0,launched_at=2025-12-06T10:10:20Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='92b402c8d3e2476abc98be42a1e6d34e',ramdisk_id='',reservation_id='r-qxktas63',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='9489b8a5-a798-4e26-87f9-59bb1eb2e6fd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1971100882',owner_user_name='tempest-TestNetworkBasicOps-1971100882-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T10:10:20Z,user_data=None,user_id='03615580775245e6ae335ee9d785611f',uuid=467f8e9a-e166-409e-920c-689fea4ea3f6,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "88b1b4c6-36ba-46c8-baa2-da5b266af4d1", "address": "fa:16:3e:9c:5b:44", "network": {"id": "af11da89-c29d-4ef1-80d5-4b619757b0ff", "bridge": "br-int", "label": "tempest-network-smoke--2039147327", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.24", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap88b1b4c6-36", "ovs_interfaceid": "88b1b4c6-36ba-46c8-baa2-da5b266af4d1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 06 10:11:38 compute-0 nova_compute[254819]: 2025-12-06 10:11:38.387 254824 DEBUG nova.network.os_vif_util [req-afa703e1-1bb2-44b0-9bf6-6f74289c2e12 req-c550dd0d-a3e3-465a-be15-f0f2e2f41801 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Converting VIF {"id": "88b1b4c6-36ba-46c8-baa2-da5b266af4d1", "address": "fa:16:3e:9c:5b:44", "network": {"id": "af11da89-c29d-4ef1-80d5-4b619757b0ff", "bridge": "br-int", "label": "tempest-network-smoke--2039147327", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.24", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap88b1b4c6-36", "ovs_interfaceid": "88b1b4c6-36ba-46c8-baa2-da5b266af4d1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 10:11:38 compute-0 nova_compute[254819]: 2025-12-06 10:11:38.388 254824 DEBUG nova.network.os_vif_util [req-afa703e1-1bb2-44b0-9bf6-6f74289c2e12 req-c550dd0d-a3e3-465a-be15-f0f2e2f41801 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:9c:5b:44,bridge_name='br-int',has_traffic_filtering=True,id=88b1b4c6-36ba-46c8-baa2-da5b266af4d1,network=Network(af11da89-c29d-4ef1-80d5-4b619757b0ff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap88b1b4c6-36') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 10:11:38 compute-0 nova_compute[254819]: 2025-12-06 10:11:38.393 254824 DEBUG nova.virt.libvirt.guest [req-afa703e1-1bb2-44b0-9bf6-6f74289c2e12 req-c550dd0d-a3e3-465a-be15-f0f2e2f41801 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:9c:5b:44"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap88b1b4c6-36"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
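get_interface_by_cfg searches the live domain XML for a device matching the generated <interface> config; because the tap was already detached, the lookup in the next record comes back empty and nova logs the full domain for reference. A sketch of the matching idea with the standard library, assuming MAC address and target dev are sufficient keys; nova's real comparison in nova/virt/libvirt/guest.py normalizes the configs first:

    import xml.etree.ElementTree as ET

    def find_interface(domain_xml, mac, dev):
        """Return the first <interface> whose MAC and target dev match,
        or None when the device is absent from the domain."""
        root = ET.fromstring(domain_xml)
        for iface in root.findall("./devices/interface"):
            m = iface.find("mac")
            t = iface.find("target")
            if (m is not None and m.get("address") == mac and
                    t is not None and t.get("dev") == dev):
                return iface
        return None  # the case below: tap88b1b4c6-36 is already gone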
Dec 06 10:11:38 compute-0 nova_compute[254819]: 2025-12-06 10:11:38.397 254824 DEBUG nova.virt.libvirt.guest [req-afa703e1-1bb2-44b0-9bf6-6f74289c2e12 req-c550dd0d-a3e3-465a-be15-f0f2e2f41801 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:9c:5b:44"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap88b1b4c6-36"/></interface>not found in domain: <domain type='kvm' id='4'>
Dec 06 10:11:38 compute-0 nova_compute[254819]:   <name>instance-00000006</name>
Dec 06 10:11:38 compute-0 nova_compute[254819]:   <uuid>467f8e9a-e166-409e-920c-689fea4ea3f6</uuid>
Dec 06 10:11:38 compute-0 nova_compute[254819]:   <metadata>
Dec 06 10:11:38 compute-0 nova_compute[254819]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 06 10:11:38 compute-0 nova_compute[254819]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:   <nova:name>tempest-TestNetworkBasicOps-server-883828898</nova:name>
Dec 06 10:11:38 compute-0 nova_compute[254819]:   <nova:creationTime>2025-12-06 10:11:37</nova:creationTime>
Dec 06 10:11:38 compute-0 nova_compute[254819]:   <nova:flavor name="m1.nano">
Dec 06 10:11:38 compute-0 nova_compute[254819]:     <nova:memory>128</nova:memory>
Dec 06 10:11:38 compute-0 nova_compute[254819]:     <nova:disk>1</nova:disk>
Dec 06 10:11:38 compute-0 nova_compute[254819]:     <nova:swap>0</nova:swap>
Dec 06 10:11:38 compute-0 nova_compute[254819]:     <nova:ephemeral>0</nova:ephemeral>
Dec 06 10:11:38 compute-0 nova_compute[254819]:     <nova:vcpus>1</nova:vcpus>
Dec 06 10:11:38 compute-0 nova_compute[254819]:   </nova:flavor>
Dec 06 10:11:38 compute-0 nova_compute[254819]:   <nova:owner>
Dec 06 10:11:38 compute-0 nova_compute[254819]:     <nova:user uuid="03615580775245e6ae335ee9d785611f">tempest-TestNetworkBasicOps-1971100882-project-member</nova:user>
Dec 06 10:11:38 compute-0 nova_compute[254819]:     <nova:project uuid="92b402c8d3e2476abc98be42a1e6d34e">tempest-TestNetworkBasicOps-1971100882</nova:project>
Dec 06 10:11:38 compute-0 nova_compute[254819]:   </nova:owner>
Dec 06 10:11:38 compute-0 nova_compute[254819]:   <nova:root type="image" uuid="9489b8a5-a798-4e26-87f9-59bb1eb2e6fd"/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:   <nova:ports>
Dec 06 10:11:38 compute-0 nova_compute[254819]:     <nova:port uuid="ec2bc9a6-1578-4c92-b8c9-4d286a1a6f4b">
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:     </nova:port>
Dec 06 10:11:38 compute-0 nova_compute[254819]:   </nova:ports>
Dec 06 10:11:38 compute-0 nova_compute[254819]: </nova:instance>
Dec 06 10:11:38 compute-0 nova_compute[254819]:   </metadata>
Dec 06 10:11:38 compute-0 nova_compute[254819]:   <memory unit='KiB'>131072</memory>
Dec 06 10:11:38 compute-0 nova_compute[254819]:   <currentMemory unit='KiB'>131072</currentMemory>
Dec 06 10:11:38 compute-0 nova_compute[254819]:   <vcpu placement='static'>1</vcpu>
Dec 06 10:11:38 compute-0 nova_compute[254819]:   <resource>
Dec 06 10:11:38 compute-0 nova_compute[254819]:     <partition>/machine</partition>
Dec 06 10:11:38 compute-0 nova_compute[254819]:   </resource>
Dec 06 10:11:38 compute-0 nova_compute[254819]:   <sysinfo type='smbios'>
Dec 06 10:11:38 compute-0 nova_compute[254819]:     <system>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <entry name='manufacturer'>RDO</entry>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <entry name='product'>OpenStack Compute</entry>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <entry name='serial'>467f8e9a-e166-409e-920c-689fea4ea3f6</entry>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <entry name='uuid'>467f8e9a-e166-409e-920c-689fea4ea3f6</entry>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <entry name='family'>Virtual Machine</entry>
Dec 06 10:11:38 compute-0 nova_compute[254819]:     </system>
Dec 06 10:11:38 compute-0 nova_compute[254819]:   </sysinfo>
Dec 06 10:11:38 compute-0 nova_compute[254819]:   <os>
Dec 06 10:11:38 compute-0 nova_compute[254819]:     <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Dec 06 10:11:38 compute-0 nova_compute[254819]:     <boot dev='hd'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:     <smbios mode='sysinfo'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:   </os>
Dec 06 10:11:38 compute-0 nova_compute[254819]:   <features>
Dec 06 10:11:38 compute-0 nova_compute[254819]:     <acpi/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:     <apic/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:     <vmcoreinfo state='on'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:   </features>
Dec 06 10:11:38 compute-0 nova_compute[254819]:   <cpu mode='custom' match='exact' check='full'>
Dec 06 10:11:38 compute-0 nova_compute[254819]:     <model fallback='forbid'>EPYC-Rome</model>
Dec 06 10:11:38 compute-0 nova_compute[254819]:     <vendor>AMD</vendor>
Dec 06 10:11:38 compute-0 nova_compute[254819]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:     <feature policy='require' name='x2apic'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:     <feature policy='require' name='tsc-deadline'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:     <feature policy='require' name='hypervisor'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:     <feature policy='require' name='tsc_adjust'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:     <feature policy='require' name='spec-ctrl'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:     <feature policy='require' name='stibp'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:     <feature policy='require' name='ssbd'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:     <feature policy='require' name='cmp_legacy'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:     <feature policy='require' name='overflow-recov'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:     <feature policy='require' name='succor'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:     <feature policy='require' name='ibrs'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:     <feature policy='require' name='amd-ssbd'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:     <feature policy='require' name='virt-ssbd'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:     <feature policy='disable' name='lbrv'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:     <feature policy='disable' name='tsc-scale'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:     <feature policy='disable' name='vmcb-clean'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:     <feature policy='disable' name='flushbyasid'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:     <feature policy='disable' name='pause-filter'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:     <feature policy='disable' name='pfthreshold'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:     <feature policy='disable' name='svme-addr-chk'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:     <feature policy='require' name='lfence-always-serializing'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:     <feature policy='disable' name='xsaves'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:     <feature policy='disable' name='svm'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:     <feature policy='require' name='topoext'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:     <feature policy='disable' name='npt'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:     <feature policy='disable' name='nrip-save'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:   </cpu>
Dec 06 10:11:38 compute-0 nova_compute[254819]:   <clock offset='utc'>
Dec 06 10:11:38 compute-0 nova_compute[254819]:     <timer name='pit' tickpolicy='delay'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:     <timer name='rtc' tickpolicy='catchup'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:     <timer name='hpet' present='no'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:   </clock>
Dec 06 10:11:38 compute-0 nova_compute[254819]:   <on_poweroff>destroy</on_poweroff>
Dec 06 10:11:38 compute-0 nova_compute[254819]:   <on_reboot>restart</on_reboot>
Dec 06 10:11:38 compute-0 nova_compute[254819]:   <on_crash>destroy</on_crash>
Dec 06 10:11:38 compute-0 nova_compute[254819]:   <devices>
Dec 06 10:11:38 compute-0 nova_compute[254819]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Dec 06 10:11:38 compute-0 nova_compute[254819]:     <disk type='network' device='disk'>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <driver name='qemu' type='raw' cache='none'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <auth username='openstack'>
Dec 06 10:11:38 compute-0 nova_compute[254819]:         <secret type='ceph' uuid='5ecd3f74-dade-5fc4-92ce-8950ae424258'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       </auth>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <source protocol='rbd' name='vms/467f8e9a-e166-409e-920c-689fea4ea3f6_disk' index='2'>
Dec 06 10:11:38 compute-0 nova_compute[254819]:         <host name='192.168.122.100' port='6789'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:         <host name='192.168.122.102' port='6789'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:         <host name='192.168.122.101' port='6789'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       </source>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <target dev='vda' bus='virtio'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <alias name='virtio-disk0'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:     </disk>
Dec 06 10:11:38 compute-0 nova_compute[254819]:     <disk type='network' device='cdrom'>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <driver name='qemu' type='raw' cache='none'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <auth username='openstack'>
Dec 06 10:11:38 compute-0 nova_compute[254819]:         <secret type='ceph' uuid='5ecd3f74-dade-5fc4-92ce-8950ae424258'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       </auth>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <source protocol='rbd' name='vms/467f8e9a-e166-409e-920c-689fea4ea3f6_disk.config' index='1'>
Dec 06 10:11:38 compute-0 nova_compute[254819]:         <host name='192.168.122.100' port='6789'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:         <host name='192.168.122.102' port='6789'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:         <host name='192.168.122.101' port='6789'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       </source>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <target dev='sda' bus='sata'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <readonly/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <alias name='sata0-0-0'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:     </disk>
Dec 06 10:11:38 compute-0 nova_compute[254819]:     <controller type='pci' index='0' model='pcie-root'>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <alias name='pcie.0'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:     </controller>
Dec 06 10:11:38 compute-0 nova_compute[254819]:     <controller type='pci' index='1' model='pcie-root-port'>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <model name='pcie-root-port'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <target chassis='1' port='0x10'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <alias name='pci.1'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:     </controller>
Dec 06 10:11:38 compute-0 nova_compute[254819]:     <controller type='pci' index='2' model='pcie-root-port'>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <model name='pcie-root-port'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <target chassis='2' port='0x11'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <alias name='pci.2'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:     </controller>
Dec 06 10:11:38 compute-0 nova_compute[254819]:     <controller type='pci' index='3' model='pcie-root-port'>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <model name='pcie-root-port'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <target chassis='3' port='0x12'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <alias name='pci.3'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:     </controller>
Dec 06 10:11:38 compute-0 nova_compute[254819]:     <controller type='pci' index='4' model='pcie-root-port'>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <model name='pcie-root-port'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <target chassis='4' port='0x13'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <alias name='pci.4'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:     </controller>
Dec 06 10:11:38 compute-0 nova_compute[254819]:     <controller type='pci' index='5' model='pcie-root-port'>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <model name='pcie-root-port'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <target chassis='5' port='0x14'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <alias name='pci.5'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:     </controller>
Dec 06 10:11:38 compute-0 nova_compute[254819]:     <controller type='pci' index='6' model='pcie-root-port'>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <model name='pcie-root-port'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <target chassis='6' port='0x15'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <alias name='pci.6'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:     </controller>
Dec 06 10:11:38 compute-0 nova_compute[254819]:     <controller type='pci' index='7' model='pcie-root-port'>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <model name='pcie-root-port'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <target chassis='7' port='0x16'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <alias name='pci.7'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:     </controller>
Dec 06 10:11:38 compute-0 nova_compute[254819]:     <controller type='pci' index='8' model='pcie-root-port'>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <model name='pcie-root-port'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <target chassis='8' port='0x17'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <alias name='pci.8'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:     </controller>
Dec 06 10:11:38 compute-0 nova_compute[254819]:     <controller type='pci' index='9' model='pcie-root-port'>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <model name='pcie-root-port'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <target chassis='9' port='0x18'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <alias name='pci.9'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:     </controller>
Dec 06 10:11:38 compute-0 nova_compute[254819]:     <controller type='pci' index='10' model='pcie-root-port'>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <model name='pcie-root-port'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <target chassis='10' port='0x19'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <alias name='pci.10'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:     </controller>
Dec 06 10:11:38 compute-0 nova_compute[254819]:     <controller type='pci' index='11' model='pcie-root-port'>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <model name='pcie-root-port'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <target chassis='11' port='0x1a'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <alias name='pci.11'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:     </controller>
Dec 06 10:11:38 compute-0 nova_compute[254819]:     <controller type='pci' index='12' model='pcie-root-port'>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <model name='pcie-root-port'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <target chassis='12' port='0x1b'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <alias name='pci.12'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:     </controller>
Dec 06 10:11:38 compute-0 nova_compute[254819]:     <controller type='pci' index='13' model='pcie-root-port'>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <model name='pcie-root-port'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <target chassis='13' port='0x1c'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <alias name='pci.13'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:     </controller>
Dec 06 10:11:38 compute-0 nova_compute[254819]:     <controller type='pci' index='14' model='pcie-root-port'>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <model name='pcie-root-port'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <target chassis='14' port='0x1d'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <alias name='pci.14'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:     </controller>
Dec 06 10:11:38 compute-0 nova_compute[254819]:     <controller type='pci' index='15' model='pcie-root-port'>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <model name='pcie-root-port'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <target chassis='15' port='0x1e'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <alias name='pci.15'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:     </controller>
Dec 06 10:11:38 compute-0 nova_compute[254819]:     <controller type='pci' index='16' model='pcie-root-port'>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <model name='pcie-root-port'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <target chassis='16' port='0x1f'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <alias name='pci.16'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:     </controller>
Dec 06 10:11:38 compute-0 nova_compute[254819]:     <controller type='pci' index='17' model='pcie-root-port'>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <model name='pcie-root-port'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <target chassis='17' port='0x20'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <alias name='pci.17'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:     </controller>
Dec 06 10:11:38 compute-0 nova_compute[254819]:     <controller type='pci' index='18' model='pcie-root-port'>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <model name='pcie-root-port'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <target chassis='18' port='0x21'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <alias name='pci.18'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:     </controller>
Dec 06 10:11:38 compute-0 nova_compute[254819]:     <controller type='pci' index='19' model='pcie-root-port'>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <model name='pcie-root-port'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <target chassis='19' port='0x22'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <alias name='pci.19'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:     </controller>
Dec 06 10:11:38 compute-0 nova_compute[254819]:     <controller type='pci' index='20' model='pcie-root-port'>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <model name='pcie-root-port'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <target chassis='20' port='0x23'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <alias name='pci.20'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:     </controller>
Dec 06 10:11:38 compute-0 nova_compute[254819]:     <controller type='pci' index='21' model='pcie-root-port'>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <model name='pcie-root-port'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <target chassis='21' port='0x24'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <alias name='pci.21'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:     </controller>
Dec 06 10:11:38 compute-0 nova_compute[254819]:     <controller type='pci' index='22' model='pcie-root-port'>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <model name='pcie-root-port'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <target chassis='22' port='0x25'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <alias name='pci.22'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:     </controller>
Dec 06 10:11:38 compute-0 nova_compute[254819]:     <controller type='pci' index='23' model='pcie-root-port'>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <model name='pcie-root-port'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <target chassis='23' port='0x26'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <alias name='pci.23'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:     </controller>
Dec 06 10:11:38 compute-0 nova_compute[254819]:     <controller type='pci' index='24' model='pcie-root-port'>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <model name='pcie-root-port'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <target chassis='24' port='0x27'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <alias name='pci.24'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:     </controller>
Dec 06 10:11:38 compute-0 nova_compute[254819]:     <controller type='pci' index='25' model='pcie-root-port'>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <model name='pcie-root-port'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <target chassis='25' port='0x28'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <alias name='pci.25'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:     </controller>
Dec 06 10:11:38 compute-0 nova_compute[254819]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <model name='pcie-pci-bridge'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <alias name='pci.26'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:     </controller>
Dec 06 10:11:38 compute-0 nova_compute[254819]:     <controller type='usb' index='0' model='piix3-uhci'>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <alias name='usb'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:     </controller>
Dec 06 10:11:38 compute-0 nova_compute[254819]:     <controller type='sata' index='0'>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <alias name='ide'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:     </controller>
Dec 06 10:11:38 compute-0 nova_compute[254819]:     <interface type='ethernet'>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <mac address='fa:16:3e:64:9d:d4'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <target dev='tapec2bc9a6-15'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <model type='virtio'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <driver name='vhost' rx_queue_size='512'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <mtu size='1442'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <alias name='net0'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:     </interface>
Dec 06 10:11:38 compute-0 nova_compute[254819]:     <serial type='pty'>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <source path='/dev/pts/0'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <log file='/var/lib/nova/instances/467f8e9a-e166-409e-920c-689fea4ea3f6/console.log' append='off'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <target type='isa-serial' port='0'>
Dec 06 10:11:38 compute-0 nova_compute[254819]:         <model name='isa-serial'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       </target>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <alias name='serial0'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:     </serial>
Dec 06 10:11:38 compute-0 nova_compute[254819]:     <console type='pty' tty='/dev/pts/0'>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <source path='/dev/pts/0'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <log file='/var/lib/nova/instances/467f8e9a-e166-409e-920c-689fea4ea3f6/console.log' append='off'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <target type='serial' port='0'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <alias name='serial0'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:     </console>
Dec 06 10:11:38 compute-0 nova_compute[254819]:     <input type='tablet' bus='usb'>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <alias name='input0'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <address type='usb' bus='0' port='1'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:     </input>
Dec 06 10:11:38 compute-0 nova_compute[254819]:     <input type='mouse' bus='ps2'>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <alias name='input1'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:     </input>
Dec 06 10:11:38 compute-0 nova_compute[254819]:     <input type='keyboard' bus='ps2'>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <alias name='input2'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:     </input>
Dec 06 10:11:38 compute-0 nova_compute[254819]:     <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <listen type='address' address='::0'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:     </graphics>
Dec 06 10:11:38 compute-0 nova_compute[254819]:     <audio id='1' type='none'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:     <video>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <model type='virtio' heads='1' primary='yes'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <alias name='video0'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:     </video>
Dec 06 10:11:38 compute-0 nova_compute[254819]:     <watchdog model='itco' action='reset'>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <alias name='watchdog0'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:     </watchdog>
Dec 06 10:11:38 compute-0 nova_compute[254819]:     <memballoon model='virtio'>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <stats period='10'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <alias name='balloon0'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:     </memballoon>
Dec 06 10:11:38 compute-0 nova_compute[254819]:     <rng model='virtio'>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <backend model='random'>/dev/urandom</backend>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <alias name='rng0'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:     </rng>
Dec 06 10:11:38 compute-0 nova_compute[254819]:   </devices>
Dec 06 10:11:38 compute-0 nova_compute[254819]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Dec 06 10:11:38 compute-0 nova_compute[254819]:     <label>system_u:system_r:svirt_t:s0:c464,c770</label>
Dec 06 10:11:38 compute-0 nova_compute[254819]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c464,c770</imagelabel>
Dec 06 10:11:38 compute-0 nova_compute[254819]:   </seclabel>
Dec 06 10:11:38 compute-0 nova_compute[254819]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Dec 06 10:11:38 compute-0 nova_compute[254819]:     <label>+107:+107</label>
Dec 06 10:11:38 compute-0 nova_compute[254819]:     <imagelabel>+107:+107</imagelabel>
Dec 06 10:11:38 compute-0 nova_compute[254819]:   </seclabel>
Dec 06 10:11:38 compute-0 nova_compute[254819]: </domain>
Dec 06 10:11:38 compute-0 nova_compute[254819]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Dec 06 10:11:38 compute-0 nova_compute[254819]: 2025-12-06 10:11:38.397 254824 DEBUG nova.virt.libvirt.guest [req-afa703e1-1bb2-44b0-9bf6-6f74289c2e12 req-c550dd0d-a3e3-465a-be15-f0f2e2f41801 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:9c:5b:44"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap88b1b4c6-36"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Dec 06 10:11:38 compute-0 nova_compute[254819]: 2025-12-06 10:11:38.404 254824 DEBUG nova.virt.libvirt.guest [req-afa703e1-1bb2-44b0-9bf6-6f74289c2e12 req-c550dd0d-a3e3-465a-be15-f0f2e2f41801 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:9c:5b:44"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap88b1b4c6-36"/></interface> not found in domain: <domain type='kvm' id='4'>
Dec 06 10:11:38 compute-0 nova_compute[254819]:   <name>instance-00000006</name>
Dec 06 10:11:38 compute-0 nova_compute[254819]:   <uuid>467f8e9a-e166-409e-920c-689fea4ea3f6</uuid>
Dec 06 10:11:38 compute-0 nova_compute[254819]:   <metadata>
Dec 06 10:11:38 compute-0 nova_compute[254819]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 06 10:11:38 compute-0 nova_compute[254819]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:   <nova:name>tempest-TestNetworkBasicOps-server-883828898</nova:name>
Dec 06 10:11:38 compute-0 nova_compute[254819]:   <nova:creationTime>2025-12-06 10:11:37</nova:creationTime>
Dec 06 10:11:38 compute-0 nova_compute[254819]:   <nova:flavor name="m1.nano">
Dec 06 10:11:38 compute-0 nova_compute[254819]:     <nova:memory>128</nova:memory>
Dec 06 10:11:38 compute-0 nova_compute[254819]:     <nova:disk>1</nova:disk>
Dec 06 10:11:38 compute-0 nova_compute[254819]:     <nova:swap>0</nova:swap>
Dec 06 10:11:38 compute-0 nova_compute[254819]:     <nova:ephemeral>0</nova:ephemeral>
Dec 06 10:11:38 compute-0 nova_compute[254819]:     <nova:vcpus>1</nova:vcpus>
Dec 06 10:11:38 compute-0 nova_compute[254819]:   </nova:flavor>
Dec 06 10:11:38 compute-0 nova_compute[254819]:   <nova:owner>
Dec 06 10:11:38 compute-0 nova_compute[254819]:     <nova:user uuid="03615580775245e6ae335ee9d785611f">tempest-TestNetworkBasicOps-1971100882-project-member</nova:user>
Dec 06 10:11:38 compute-0 nova_compute[254819]:     <nova:project uuid="92b402c8d3e2476abc98be42a1e6d34e">tempest-TestNetworkBasicOps-1971100882</nova:project>
Dec 06 10:11:38 compute-0 nova_compute[254819]:   </nova:owner>
Dec 06 10:11:38 compute-0 nova_compute[254819]:   <nova:root type="image" uuid="9489b8a5-a798-4e26-87f9-59bb1eb2e6fd"/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:   <nova:ports>
Dec 06 10:11:38 compute-0 nova_compute[254819]:     <nova:port uuid="ec2bc9a6-1578-4c92-b8c9-4d286a1a6f4b">
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:     </nova:port>
Dec 06 10:11:38 compute-0 nova_compute[254819]:   </nova:ports>
Dec 06 10:11:38 compute-0 nova_compute[254819]: </nova:instance>
Dec 06 10:11:38 compute-0 nova_compute[254819]:   </metadata>
Dec 06 10:11:38 compute-0 nova_compute[254819]:   <memory unit='KiB'>131072</memory>
Dec 06 10:11:38 compute-0 nova_compute[254819]:   <currentMemory unit='KiB'>131072</currentMemory>
Dec 06 10:11:38 compute-0 nova_compute[254819]:   <vcpu placement='static'>1</vcpu>
Dec 06 10:11:38 compute-0 nova_compute[254819]:   <resource>
Dec 06 10:11:38 compute-0 nova_compute[254819]:     <partition>/machine</partition>
Dec 06 10:11:38 compute-0 nova_compute[254819]:   </resource>
Dec 06 10:11:38 compute-0 nova_compute[254819]:   <sysinfo type='smbios'>
Dec 06 10:11:38 compute-0 nova_compute[254819]:     <system>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <entry name='manufacturer'>RDO</entry>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <entry name='product'>OpenStack Compute</entry>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <entry name='serial'>467f8e9a-e166-409e-920c-689fea4ea3f6</entry>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <entry name='uuid'>467f8e9a-e166-409e-920c-689fea4ea3f6</entry>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <entry name='family'>Virtual Machine</entry>
Dec 06 10:11:38 compute-0 nova_compute[254819]:     </system>
Dec 06 10:11:38 compute-0 nova_compute[254819]:   </sysinfo>
Dec 06 10:11:38 compute-0 nova_compute[254819]:   <os>
Dec 06 10:11:38 compute-0 nova_compute[254819]:     <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Dec 06 10:11:38 compute-0 nova_compute[254819]:     <boot dev='hd'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:     <smbios mode='sysinfo'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:   </os>
Dec 06 10:11:38 compute-0 nova_compute[254819]:   <features>
Dec 06 10:11:38 compute-0 nova_compute[254819]:     <acpi/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:     <apic/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:     <vmcoreinfo state='on'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:   </features>
Dec 06 10:11:38 compute-0 nova_compute[254819]:   <cpu mode='custom' match='exact' check='full'>
Dec 06 10:11:38 compute-0 nova_compute[254819]:     <model fallback='forbid'>EPYC-Rome</model>
Dec 06 10:11:38 compute-0 nova_compute[254819]:     <vendor>AMD</vendor>
Dec 06 10:11:38 compute-0 nova_compute[254819]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:     <feature policy='require' name='x2apic'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:     <feature policy='require' name='tsc-deadline'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:     <feature policy='require' name='hypervisor'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:     <feature policy='require' name='tsc_adjust'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:     <feature policy='require' name='spec-ctrl'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:     <feature policy='require' name='stibp'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:     <feature policy='require' name='ssbd'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:     <feature policy='require' name='cmp_legacy'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:     <feature policy='require' name='overflow-recov'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:     <feature policy='require' name='succor'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:     <feature policy='require' name='ibrs'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:     <feature policy='require' name='amd-ssbd'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:     <feature policy='require' name='virt-ssbd'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:     <feature policy='disable' name='lbrv'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:     <feature policy='disable' name='tsc-scale'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:     <feature policy='disable' name='vmcb-clean'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:     <feature policy='disable' name='flushbyasid'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:     <feature policy='disable' name='pause-filter'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:     <feature policy='disable' name='pfthreshold'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:     <feature policy='disable' name='svme-addr-chk'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:     <feature policy='require' name='lfence-always-serializing'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:     <feature policy='disable' name='xsaves'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:     <feature policy='disable' name='svm'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:     <feature policy='require' name='topoext'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:     <feature policy='disable' name='npt'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:     <feature policy='disable' name='nrip-save'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:   </cpu>
Dec 06 10:11:38 compute-0 nova_compute[254819]:   <clock offset='utc'>
Dec 06 10:11:38 compute-0 nova_compute[254819]:     <timer name='pit' tickpolicy='delay'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:     <timer name='rtc' tickpolicy='catchup'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:     <timer name='hpet' present='no'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:   </clock>
Dec 06 10:11:38 compute-0 nova_compute[254819]:   <on_poweroff>destroy</on_poweroff>
Dec 06 10:11:38 compute-0 nova_compute[254819]:   <on_reboot>restart</on_reboot>
Dec 06 10:11:38 compute-0 nova_compute[254819]:   <on_crash>destroy</on_crash>
Dec 06 10:11:38 compute-0 nova_compute[254819]:   <devices>
Dec 06 10:11:38 compute-0 nova_compute[254819]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Dec 06 10:11:38 compute-0 nova_compute[254819]:     <disk type='network' device='disk'>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <driver name='qemu' type='raw' cache='none'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <auth username='openstack'>
Dec 06 10:11:38 compute-0 nova_compute[254819]:         <secret type='ceph' uuid='5ecd3f74-dade-5fc4-92ce-8950ae424258'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       </auth>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <source protocol='rbd' name='vms/467f8e9a-e166-409e-920c-689fea4ea3f6_disk' index='2'>
Dec 06 10:11:38 compute-0 nova_compute[254819]:         <host name='192.168.122.100' port='6789'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:         <host name='192.168.122.102' port='6789'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:         <host name='192.168.122.101' port='6789'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       </source>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <target dev='vda' bus='virtio'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <alias name='virtio-disk0'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:     </disk>
Dec 06 10:11:38 compute-0 nova_compute[254819]:     <disk type='network' device='cdrom'>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <driver name='qemu' type='raw' cache='none'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <auth username='openstack'>
Dec 06 10:11:38 compute-0 nova_compute[254819]:         <secret type='ceph' uuid='5ecd3f74-dade-5fc4-92ce-8950ae424258'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       </auth>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <source protocol='rbd' name='vms/467f8e9a-e166-409e-920c-689fea4ea3f6_disk.config' index='1'>
Dec 06 10:11:38 compute-0 nova_compute[254819]:         <host name='192.168.122.100' port='6789'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:         <host name='192.168.122.102' port='6789'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:         <host name='192.168.122.101' port='6789'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       </source>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <target dev='sda' bus='sata'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <readonly/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <alias name='sata0-0-0'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:     </disk>
Dec 06 10:11:38 compute-0 nova_compute[254819]:     <controller type='pci' index='0' model='pcie-root'>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <alias name='pcie.0'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:     </controller>
Dec 06 10:11:38 compute-0 nova_compute[254819]:     <controller type='pci' index='1' model='pcie-root-port'>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <model name='pcie-root-port'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <target chassis='1' port='0x10'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <alias name='pci.1'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:     </controller>
Dec 06 10:11:38 compute-0 nova_compute[254819]:     <controller type='pci' index='2' model='pcie-root-port'>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <model name='pcie-root-port'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <target chassis='2' port='0x11'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <alias name='pci.2'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:     </controller>
Dec 06 10:11:38 compute-0 nova_compute[254819]:     <controller type='pci' index='3' model='pcie-root-port'>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <model name='pcie-root-port'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <target chassis='3' port='0x12'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <alias name='pci.3'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:     </controller>
Dec 06 10:11:38 compute-0 nova_compute[254819]:     <controller type='pci' index='4' model='pcie-root-port'>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <model name='pcie-root-port'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <target chassis='4' port='0x13'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <alias name='pci.4'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:     </controller>
Dec 06 10:11:38 compute-0 nova_compute[254819]:     <controller type='pci' index='5' model='pcie-root-port'>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <model name='pcie-root-port'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <target chassis='5' port='0x14'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <alias name='pci.5'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:     </controller>
Dec 06 10:11:38 compute-0 nova_compute[254819]:     <controller type='pci' index='6' model='pcie-root-port'>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <model name='pcie-root-port'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <target chassis='6' port='0x15'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <alias name='pci.6'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:     </controller>
Dec 06 10:11:38 compute-0 nova_compute[254819]:     <controller type='pci' index='7' model='pcie-root-port'>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <model name='pcie-root-port'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <target chassis='7' port='0x16'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <alias name='pci.7'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:     </controller>
Dec 06 10:11:38 compute-0 nova_compute[254819]:     <controller type='pci' index='8' model='pcie-root-port'>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <model name='pcie-root-port'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <target chassis='8' port='0x17'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <alias name='pci.8'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:     </controller>
Dec 06 10:11:38 compute-0 nova_compute[254819]:     <controller type='pci' index='9' model='pcie-root-port'>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <model name='pcie-root-port'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <target chassis='9' port='0x18'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <alias name='pci.9'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:     </controller>
Dec 06 10:11:38 compute-0 nova_compute[254819]:     <controller type='pci' index='10' model='pcie-root-port'>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <model name='pcie-root-port'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <target chassis='10' port='0x19'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <alias name='pci.10'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:     </controller>
Dec 06 10:11:38 compute-0 nova_compute[254819]:     <controller type='pci' index='11' model='pcie-root-port'>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <model name='pcie-root-port'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <target chassis='11' port='0x1a'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <alias name='pci.11'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:     </controller>
Dec 06 10:11:38 compute-0 nova_compute[254819]:     <controller type='pci' index='12' model='pcie-root-port'>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <model name='pcie-root-port'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <target chassis='12' port='0x1b'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <alias name='pci.12'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:     </controller>
Dec 06 10:11:38 compute-0 nova_compute[254819]:     <controller type='pci' index='13' model='pcie-root-port'>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <model name='pcie-root-port'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <target chassis='13' port='0x1c'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <alias name='pci.13'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:     </controller>
Dec 06 10:11:38 compute-0 nova_compute[254819]:     <controller type='pci' index='14' model='pcie-root-port'>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <model name='pcie-root-port'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <target chassis='14' port='0x1d'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <alias name='pci.14'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:     </controller>
Dec 06 10:11:38 compute-0 nova_compute[254819]:     <controller type='pci' index='15' model='pcie-root-port'>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <model name='pcie-root-port'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <target chassis='15' port='0x1e'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <alias name='pci.15'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:     </controller>
Dec 06 10:11:38 compute-0 nova_compute[254819]:     <controller type='pci' index='16' model='pcie-root-port'>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <model name='pcie-root-port'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <target chassis='16' port='0x1f'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <alias name='pci.16'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:     </controller>
Dec 06 10:11:38 compute-0 nova_compute[254819]:     <controller type='pci' index='17' model='pcie-root-port'>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <model name='pcie-root-port'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <target chassis='17' port='0x20'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <alias name='pci.17'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:     </controller>
Dec 06 10:11:38 compute-0 nova_compute[254819]:     <controller type='pci' index='18' model='pcie-root-port'>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <model name='pcie-root-port'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <target chassis='18' port='0x21'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <alias name='pci.18'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:     </controller>
Dec 06 10:11:38 compute-0 nova_compute[254819]:     <controller type='pci' index='19' model='pcie-root-port'>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <model name='pcie-root-port'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <target chassis='19' port='0x22'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <alias name='pci.19'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:     </controller>
Dec 06 10:11:38 compute-0 nova_compute[254819]:     <controller type='pci' index='20' model='pcie-root-port'>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <model name='pcie-root-port'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <target chassis='20' port='0x23'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <alias name='pci.20'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:     </controller>
Dec 06 10:11:38 compute-0 nova_compute[254819]:     <controller type='pci' index='21' model='pcie-root-port'>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <model name='pcie-root-port'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <target chassis='21' port='0x24'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <alias name='pci.21'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:     </controller>
Dec 06 10:11:38 compute-0 nova_compute[254819]:     <controller type='pci' index='22' model='pcie-root-port'>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <model name='pcie-root-port'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <target chassis='22' port='0x25'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <alias name='pci.22'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:     </controller>
Dec 06 10:11:38 compute-0 nova_compute[254819]:     <controller type='pci' index='23' model='pcie-root-port'>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <model name='pcie-root-port'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <target chassis='23' port='0x26'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <alias name='pci.23'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:     </controller>
Dec 06 10:11:38 compute-0 nova_compute[254819]:     <controller type='pci' index='24' model='pcie-root-port'>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <model name='pcie-root-port'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <target chassis='24' port='0x27'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <alias name='pci.24'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:     </controller>
Dec 06 10:11:38 compute-0 nova_compute[254819]:     <controller type='pci' index='25' model='pcie-root-port'>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <model name='pcie-root-port'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <target chassis='25' port='0x28'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <alias name='pci.25'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:     </controller>
Dec 06 10:11:38 compute-0 nova_compute[254819]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <model name='pcie-pci-bridge'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <alias name='pci.26'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:     </controller>
Dec 06 10:11:38 compute-0 nova_compute[254819]:     <controller type='usb' index='0' model='piix3-uhci'>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <alias name='usb'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:     </controller>
Dec 06 10:11:38 compute-0 nova_compute[254819]:     <controller type='sata' index='0'>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <alias name='ide'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:     </controller>
Dec 06 10:11:38 compute-0 nova_compute[254819]:     <interface type='ethernet'>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <mac address='fa:16:3e:64:9d:d4'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <target dev='tapec2bc9a6-15'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <model type='virtio'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <driver name='vhost' rx_queue_size='512'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <mtu size='1442'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <alias name='net0'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:     </interface>
Dec 06 10:11:38 compute-0 nova_compute[254819]:     <serial type='pty'>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <source path='/dev/pts/0'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <log file='/var/lib/nova/instances/467f8e9a-e166-409e-920c-689fea4ea3f6/console.log' append='off'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <target type='isa-serial' port='0'>
Dec 06 10:11:38 compute-0 nova_compute[254819]:         <model name='isa-serial'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       </target>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <alias name='serial0'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:     </serial>
Dec 06 10:11:38 compute-0 nova_compute[254819]:     <console type='pty' tty='/dev/pts/0'>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <source path='/dev/pts/0'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <log file='/var/lib/nova/instances/467f8e9a-e166-409e-920c-689fea4ea3f6/console.log' append='off'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <target type='serial' port='0'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <alias name='serial0'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:     </console>
Dec 06 10:11:38 compute-0 nova_compute[254819]:     <input type='tablet' bus='usb'>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <alias name='input0'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <address type='usb' bus='0' port='1'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:     </input>
Dec 06 10:11:38 compute-0 nova_compute[254819]:     <input type='mouse' bus='ps2'>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <alias name='input1'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:     </input>
Dec 06 10:11:38 compute-0 nova_compute[254819]:     <input type='keyboard' bus='ps2'>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <alias name='input2'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:     </input>
Dec 06 10:11:38 compute-0 nova_compute[254819]:     <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <listen type='address' address='::0'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:     </graphics>
Dec 06 10:11:38 compute-0 nova_compute[254819]:     <audio id='1' type='none'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:     <video>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <model type='virtio' heads='1' primary='yes'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <alias name='video0'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:     </video>
Dec 06 10:11:38 compute-0 nova_compute[254819]:     <watchdog model='itco' action='reset'>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <alias name='watchdog0'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:     </watchdog>
Dec 06 10:11:38 compute-0 nova_compute[254819]:     <memballoon model='virtio'>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <stats period='10'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <alias name='balloon0'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:     </memballoon>
Dec 06 10:11:38 compute-0 nova_compute[254819]:     <rng model='virtio'>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <backend model='random'>/dev/urandom</backend>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <alias name='rng0'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:     </rng>
Dec 06 10:11:38 compute-0 nova_compute[254819]:   </devices>
Dec 06 10:11:38 compute-0 nova_compute[254819]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Dec 06 10:11:38 compute-0 nova_compute[254819]:     <label>system_u:system_r:svirt_t:s0:c464,c770</label>
Dec 06 10:11:38 compute-0 nova_compute[254819]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c464,c770</imagelabel>
Dec 06 10:11:38 compute-0 nova_compute[254819]:   </seclabel>
Dec 06 10:11:38 compute-0 nova_compute[254819]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Dec 06 10:11:38 compute-0 nova_compute[254819]:     <label>+107:+107</label>
Dec 06 10:11:38 compute-0 nova_compute[254819]:     <imagelabel>+107:+107</imagelabel>
Dec 06 10:11:38 compute-0 nova_compute[254819]:   </seclabel>
Dec 06 10:11:38 compute-0 nova_compute[254819]: </domain>
Dec 06 10:11:38 compute-0 nova_compute[254819]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Dec 06 10:11:38 compute-0 nova_compute[254819]: 2025-12-06 10:11:38.404 254824 WARNING nova.virt.libvirt.driver [req-afa703e1-1bb2-44b0-9bf6-6f74289c2e12 req-c550dd0d-a3e3-465a-be15-f0f2e2f41801 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 467f8e9a-e166-409e-920c-689fea4ea3f6] Detaching interface fa:16:3e:9c:5b:44 failed because the device is no longer found on the guest.: nova.exception.DeviceNotFound: Device 'tap88b1b4c6-36' not found.
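
The DeviceNotFound here is benign: the tap device had already been removed from the domain, so Nova logs a WARNING, treats the interface as detached, and moves on to unplugging the VIF from Open vSwitch. A sketch of that tolerant-detach pattern (illustrative only, not Nova's exact code; guest and cfg stand in for a nova.virt.libvirt.guest.Guest and an interface config object):

    # Illustrative sketch: a detach that treats "device already gone" as
    # success, mirroring the WARNING above.
    from nova import exception

    def detach_interface_best_effort(guest, cfg):
        try:
            # Remove the device from both the persistent and live config.
            guest.detach_device(cfg, persistent=True, live=True)
        except exception.DeviceNotFound:
            # Nothing left to detach; fall through to the VIF unplug.
            pass
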
Dec 06 10:11:38 compute-0 nova_compute[254819]: 2025-12-06 10:11:38.405 254824 DEBUG nova.virt.libvirt.vif [req-afa703e1-1bb2-44b0-9bf6-6f74289c2e12 req-c550dd0d-a3e3-465a-be15-f0f2e2f41801 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T10:10:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-883828898',display_name='tempest-TestNetworkBasicOps-server-883828898',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-883828898',id=6,image_ref='9489b8a5-a798-4e26-87f9-59bb1eb2e6fd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBavG4AKWHlfpiq0SQasTveyxdMuqwUIBzXgDHnQ7us03WRPTjmnHIL9KdumxPOuSQ7mS9TjZaDU1Z0fZMB9bCP4vMT4dbs0/4ZtyRDMtJHhAJtsWO/6Dg3g/pdboWhC+A==',key_name='tempest-TestNetworkBasicOps-875879575',keypairs=<?>,launch_index=0,launched_at=2025-12-06T10:10:20Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='92b402c8d3e2476abc98be42a1e6d34e',ramdisk_id='',reservation_id='r-qxktas63',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='9489b8a5-a798-4e26-87f9-59bb1eb2e6fd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1971100882',owner_user_name='tempest-TestNetworkBasicOps-1971100882-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T10:10:20Z,user_data=None,user_id='03615580775245e6ae335ee9d785611f',uuid=467f8e9a-e166-409e-920c-689fea4ea3f6,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "88b1b4c6-36ba-46c8-baa2-da5b266af4d1", "address": "fa:16:3e:9c:5b:44", "network": {"id": "af11da89-c29d-4ef1-80d5-4b619757b0ff", "bridge": "br-int", "label": "tempest-network-smoke--2039147327", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.24", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap88b1b4c6-36", "ovs_interfaceid": "88b1b4c6-36ba-46c8-baa2-da5b266af4d1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 06 10:11:38 compute-0 nova_compute[254819]: 2025-12-06 10:11:38.406 254824 DEBUG nova.network.os_vif_util [req-afa703e1-1bb2-44b0-9bf6-6f74289c2e12 req-c550dd0d-a3e3-465a-be15-f0f2e2f41801 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Converting VIF {"id": "88b1b4c6-36ba-46c8-baa2-da5b266af4d1", "address": "fa:16:3e:9c:5b:44", "network": {"id": "af11da89-c29d-4ef1-80d5-4b619757b0ff", "bridge": "br-int", "label": "tempest-network-smoke--2039147327", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.24", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap88b1b4c6-36", "ovs_interfaceid": "88b1b4c6-36ba-46c8-baa2-da5b266af4d1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 10:11:38 compute-0 nova_compute[254819]: 2025-12-06 10:11:38.406 254824 DEBUG nova.network.os_vif_util [req-afa703e1-1bb2-44b0-9bf6-6f74289c2e12 req-c550dd0d-a3e3-465a-be15-f0f2e2f41801 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:9c:5b:44,bridge_name='br-int',has_traffic_filtering=True,id=88b1b4c6-36ba-46c8-baa2-da5b266af4d1,network=Network(af11da89-c29d-4ef1-80d5-4b619757b0ff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap88b1b4c6-36') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 10:11:38 compute-0 nova_compute[254819]: 2025-12-06 10:11:38.407 254824 DEBUG os_vif [req-afa703e1-1bb2-44b0-9bf6-6f74289c2e12 req-c550dd0d-a3e3-465a-be15-f0f2e2f41801 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:9c:5b:44,bridge_name='br-int',has_traffic_filtering=True,id=88b1b4c6-36ba-46c8-baa2-da5b266af4d1,network=Network(af11da89-c29d-4ef1-80d5-4b619757b0ff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap88b1b4c6-36') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 06 10:11:38 compute-0 nova_compute[254819]: 2025-12-06 10:11:38.409 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:11:38 compute-0 nova_compute[254819]: 2025-12-06 10:11:38.410 254824 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap88b1b4c6-36, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 10:11:38 compute-0 nova_compute[254819]: 2025-12-06 10:11:38.410 254824 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 10:11:38 compute-0 nova_compute[254819]: 2025-12-06 10:11:38.412 254824 INFO os_vif [req-afa703e1-1bb2-44b0-9bf6-6f74289c2e12 req-c550dd0d-a3e3-465a-be15-f0f2e2f41801 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:9c:5b:44,bridge_name='br-int',has_traffic_filtering=True,id=88b1b4c6-36ba-46c8-baa2-da5b266af4d1,network=Network(af11da89-c29d-4ef1-80d5-4b619757b0ff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap88b1b4c6-36')
Dec 06 10:11:38 compute-0 nova_compute[254819]: 2025-12-06 10:11:38.413 254824 DEBUG nova.virt.libvirt.guest [req-afa703e1-1bb2-44b0-9bf6-6f74289c2e12 req-c550dd0d-a3e3-465a-be15-f0f2e2f41801 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 06 10:11:38 compute-0 nova_compute[254819]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:   <nova:name>tempest-TestNetworkBasicOps-server-883828898</nova:name>
Dec 06 10:11:38 compute-0 nova_compute[254819]:   <nova:creationTime>2025-12-06 10:11:38</nova:creationTime>
Dec 06 10:11:38 compute-0 nova_compute[254819]:   <nova:flavor name="m1.nano">
Dec 06 10:11:38 compute-0 nova_compute[254819]:     <nova:memory>128</nova:memory>
Dec 06 10:11:38 compute-0 nova_compute[254819]:     <nova:disk>1</nova:disk>
Dec 06 10:11:38 compute-0 nova_compute[254819]:     <nova:swap>0</nova:swap>
Dec 06 10:11:38 compute-0 nova_compute[254819]:     <nova:ephemeral>0</nova:ephemeral>
Dec 06 10:11:38 compute-0 nova_compute[254819]:     <nova:vcpus>1</nova:vcpus>
Dec 06 10:11:38 compute-0 nova_compute[254819]:   </nova:flavor>
Dec 06 10:11:38 compute-0 nova_compute[254819]:   <nova:owner>
Dec 06 10:11:38 compute-0 nova_compute[254819]:     <nova:user uuid="03615580775245e6ae335ee9d785611f">tempest-TestNetworkBasicOps-1971100882-project-member</nova:user>
Dec 06 10:11:38 compute-0 nova_compute[254819]:     <nova:project uuid="92b402c8d3e2476abc98be42a1e6d34e">tempest-TestNetworkBasicOps-1971100882</nova:project>
Dec 06 10:11:38 compute-0 nova_compute[254819]:   </nova:owner>
Dec 06 10:11:38 compute-0 nova_compute[254819]:   <nova:root type="image" uuid="9489b8a5-a798-4e26-87f9-59bb1eb2e6fd"/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:   <nova:ports>
Dec 06 10:11:38 compute-0 nova_compute[254819]:     <nova:port uuid="ec2bc9a6-1578-4c92-b8c9-4d286a1a6f4b">
Dec 06 10:11:38 compute-0 nova_compute[254819]:       <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Dec 06 10:11:38 compute-0 nova_compute[254819]:     </nova:port>
Dec 06 10:11:38 compute-0 nova_compute[254819]:   </nova:ports>
Dec 06 10:11:38 compute-0 nova_compute[254819]: </nova:instance>
Dec 06 10:11:38 compute-0 nova_compute[254819]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Dec 06 10:11:38 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v916: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 4.9 KiB/s wr, 33 op/s
Dec 06 10:11:38 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:11:38 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d0003e40 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:11:38 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:11:38 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65dc002d90 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:11:38 compute-0 lvm[271124]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 06 10:11:38 compute-0 lvm[271124]: VG ceph_vg0 finished
Dec 06 10:11:38 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 06 10:11:38 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 10:11:38 compute-0 agitated_bohr[271051]: {}
Dec 06 10:11:39 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:11:39.017Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 06 10:11:39 compute-0 systemd[1]: libpod-0479e60197c1e85fe1d43b1bf6e1b21510bd5f523a3d1e00cd6a89740f79a27b.scope: Deactivated successfully.
Dec 06 10:11:39 compute-0 systemd[1]: libpod-0479e60197c1e85fe1d43b1bf6e1b21510bd5f523a3d1e00cd6a89740f79a27b.scope: Consumed 1.252s CPU time.
Dec 06 10:11:39 compute-0 podman[271034]: 2025-12-06 10:11:39.037581335 +0000 UTC m=+0.975140413 container died 0479e60197c1e85fe1d43b1bf6e1b21510bd5f523a3d1e00cd6a89740f79a27b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_bohr, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Dec 06 10:11:39 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:11:39 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c00a7e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:11:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-1ef27c0b0bd5570ac9eaf09d6c56335c3056deceea4418feec3a1b3b91e5da4e-merged.mount: Deactivated successfully.
Dec 06 10:11:39 compute-0 podman[271034]: 2025-12-06 10:11:39.080895993 +0000 UTC m=+1.018455081 container remove 0479e60197c1e85fe1d43b1bf6e1b21510bd5f523a3d1e00cd6a89740f79a27b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_bohr, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0)
Dec 06 10:11:39 compute-0 systemd[1]: libpod-conmon-0479e60197c1e85fe1d43b1bf6e1b21510bd5f523a3d1e00cd6a89740f79a27b.scope: Deactivated successfully.
Dec 06 10:11:39 compute-0 sudo[270856]: pam_unix(sudo:session): session closed for user root
Dec 06 10:11:39 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 06 10:11:39 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 10:11:39 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 06 10:11:39 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 10:11:39 compute-0 sudo[271138]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 10:11:39 compute-0 sudo[271138]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:11:39 compute-0 sudo[271138]: pam_unix(sudo:session): session closed for user root
Dec 06 10:11:39 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:11:39 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:11:39 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:11:39.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:11:39 compute-0 nova_compute[254819]: 2025-12-06 10:11:39.300 254824 DEBUG nova.compute.manager [req-d8f1dba2-a4e7-4c77-9a0c-9f5ee6241c05 req-c665b510-f144-49c2-abd4-433351fc6e1e d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 467f8e9a-e166-409e-920c-689fea4ea3f6] Received event network-vif-unplugged-88b1b4c6-36ba-46c8-baa2-da5b266af4d1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 10:11:39 compute-0 nova_compute[254819]: 2025-12-06 10:11:39.301 254824 DEBUG oslo_concurrency.lockutils [req-d8f1dba2-a4e7-4c77-9a0c-9f5ee6241c05 req-c665b510-f144-49c2-abd4-433351fc6e1e d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Acquiring lock "467f8e9a-e166-409e-920c-689fea4ea3f6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:11:39 compute-0 nova_compute[254819]: 2025-12-06 10:11:39.302 254824 DEBUG oslo_concurrency.lockutils [req-d8f1dba2-a4e7-4c77-9a0c-9f5ee6241c05 req-c665b510-f144-49c2-abd4-433351fc6e1e d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Lock "467f8e9a-e166-409e-920c-689fea4ea3f6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:11:39 compute-0 nova_compute[254819]: 2025-12-06 10:11:39.302 254824 DEBUG oslo_concurrency.lockutils [req-d8f1dba2-a4e7-4c77-9a0c-9f5ee6241c05 req-c665b510-f144-49c2-abd4-433351fc6e1e d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Lock "467f8e9a-e166-409e-920c-689fea4ea3f6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:11:39 compute-0 nova_compute[254819]: 2025-12-06 10:11:39.302 254824 DEBUG nova.compute.manager [req-d8f1dba2-a4e7-4c77-9a0c-9f5ee6241c05 req-c665b510-f144-49c2-abd4-433351fc6e1e d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 467f8e9a-e166-409e-920c-689fea4ea3f6] No waiting events found dispatching network-vif-unplugged-88b1b4c6-36ba-46c8-baa2-da5b266af4d1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 10:11:39 compute-0 nova_compute[254819]: 2025-12-06 10:11:39.302 254824 WARNING nova.compute.manager [req-d8f1dba2-a4e7-4c77-9a0c-9f5ee6241c05 req-c665b510-f144-49c2-abd4-433351fc6e1e d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 467f8e9a-e166-409e-920c-689fea4ea3f6] Received unexpected event network-vif-unplugged-88b1b4c6-36ba-46c8-baa2-da5b266af4d1 for instance with vm_state active and task_state None.
Dec 06 10:11:39 compute-0 nova_compute[254819]: 2025-12-06 10:11:39.302 254824 DEBUG nova.compute.manager [req-d8f1dba2-a4e7-4c77-9a0c-9f5ee6241c05 req-c665b510-f144-49c2-abd4-433351fc6e1e d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 467f8e9a-e166-409e-920c-689fea4ea3f6] Received event network-vif-plugged-88b1b4c6-36ba-46c8-baa2-da5b266af4d1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 10:11:39 compute-0 nova_compute[254819]: 2025-12-06 10:11:39.302 254824 DEBUG oslo_concurrency.lockutils [req-d8f1dba2-a4e7-4c77-9a0c-9f5ee6241c05 req-c665b510-f144-49c2-abd4-433351fc6e1e d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Acquiring lock "467f8e9a-e166-409e-920c-689fea4ea3f6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:11:39 compute-0 nova_compute[254819]: 2025-12-06 10:11:39.303 254824 DEBUG oslo_concurrency.lockutils [req-d8f1dba2-a4e7-4c77-9a0c-9f5ee6241c05 req-c665b510-f144-49c2-abd4-433351fc6e1e d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Lock "467f8e9a-e166-409e-920c-689fea4ea3f6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:11:39 compute-0 nova_compute[254819]: 2025-12-06 10:11:39.303 254824 DEBUG oslo_concurrency.lockutils [req-d8f1dba2-a4e7-4c77-9a0c-9f5ee6241c05 req-c665b510-f144-49c2-abd4-433351fc6e1e d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Lock "467f8e9a-e166-409e-920c-689fea4ea3f6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:11:39 compute-0 nova_compute[254819]: 2025-12-06 10:11:39.304 254824 DEBUG nova.compute.manager [req-d8f1dba2-a4e7-4c77-9a0c-9f5ee6241c05 req-c665b510-f144-49c2-abd4-433351fc6e1e d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 467f8e9a-e166-409e-920c-689fea4ea3f6] No waiting events found dispatching network-vif-plugged-88b1b4c6-36ba-46c8-baa2-da5b266af4d1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 10:11:39 compute-0 nova_compute[254819]: 2025-12-06 10:11:39.304 254824 WARNING nova.compute.manager [req-d8f1dba2-a4e7-4c77-9a0c-9f5ee6241c05 req-c665b510-f144-49c2-abd4-433351fc6e1e d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 467f8e9a-e166-409e-920c-689fea4ea3f6] Received unexpected event network-vif-plugged-88b1b4c6-36ba-46c8-baa2-da5b266af4d1 for instance with vm_state active and task_state None.
Dec 06 10:11:39 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:11:39 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:11:39 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:11:39.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:11:39 compute-0 ceph-mon[74327]: pgmap v916: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 4.9 KiB/s wr, 33 op/s
Dec 06 10:11:39 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 10:11:39 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 10:11:39 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 10:11:39 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:11:40 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v917: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 4.9 KiB/s wr, 33 op/s
Dec 06 10:11:40 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:11:40 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d4002e00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:11:40 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:11:40 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d0003e40 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:11:40 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:11:40] "GET /metrics HTTP/1.1" 200 48474 "" "Prometheus/2.51.0"
Dec 06 10:11:40 compute-0 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:11:40] "GET /metrics HTTP/1.1" 200 48474 "" "Prometheus/2.51.0"
Dec 06 10:11:41 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:11:41 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65dc002d90 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:11:41 compute-0 nova_compute[254819]: 2025-12-06 10:11:41.089 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:11:41 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:11:41 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:11:41 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:11:41.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:11:41 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:11:41 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:11:41 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:11:41.446 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:11:41 compute-0 ceph-mon[74327]: pgmap v917: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 4.9 KiB/s wr, 33 op/s
Dec 06 10:11:42 compute-0 nova_compute[254819]: 2025-12-06 10:11:42.423 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:11:42 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v918: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 4.9 KiB/s wr, 33 op/s
Dec 06 10:11:42 compute-0 podman[271168]: 2025-12-06 10:11:42.526467101 +0000 UTC m=+0.134090575 container health_status ed08b04f63d3695f0aa2f04fcc706897af4aa24f286fa3cd10a4323ee2c868ab (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 10:11:42 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:11:42 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c00a7e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:11:42 compute-0 sshd-session[271167]: banner exchange: Connection from 3.137.73.221 port 46872: invalid format
Dec 06 10:11:42 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:11:42 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d4002e00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:11:42 compute-0 nova_compute[254819]: 2025-12-06 10:11:42.748 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:11:42 compute-0 nova_compute[254819]: 2025-12-06 10:11:42.774 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:11:42 compute-0 nova_compute[254819]: 2025-12-06 10:11:42.775 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:11:42 compute-0 nova_compute[254819]: 2025-12-06 10:11:42.775 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:11:42 compute-0 nova_compute[254819]: 2025-12-06 10:11:42.775 254824 DEBUG nova.compute.resource_tracker [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 10:11:42 compute-0 nova_compute[254819]: 2025-12-06 10:11:42.776 254824 DEBUG oslo_concurrency.processutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 10:11:42 compute-0 ovn_controller[152417]: 2025-12-06T10:11:42Z|00080|binding|INFO|Releasing lport 9f6682d5-4069-4017-8320-2e242e2a8f66 from this chassis (sb_readonly=0)
Dec 06 10:11:42 compute-0 nova_compute[254819]: 2025-12-06 10:11:42.960 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:11:43 compute-0 sshd-session[271189]: Connection closed by 3.137.73.221 port 46876
Dec 06 10:11:43 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:11:43 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d0003e40 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:11:43 compute-0 nova_compute[254819]: 2025-12-06 10:11:43.054 254824 INFO nova.network.neutron [None req-b12601a3-3a4c-4af5-af7f-7c124e1fb718 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 467f8e9a-e166-409e-920c-689fea4ea3f6] Port 88b1b4c6-36ba-46c8-baa2-da5b266af4d1 from network info_cache is no longer associated with instance in Neutron. Removing from network info_cache.
Dec 06 10:11:43 compute-0 nova_compute[254819]: 2025-12-06 10:11:43.054 254824 DEBUG nova.network.neutron [None req-b12601a3-3a4c-4af5-af7f-7c124e1fb718 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 467f8e9a-e166-409e-920c-689fea4ea3f6] Updating instance_info_cache with network_info: [{"id": "ec2bc9a6-1578-4c92-b8c9-4d286a1a6f4b", "address": "fa:16:3e:64:9d:d4", "network": {"id": "4d76af3c-ede9-445b-bea0-ba96a2eaeddd", "bridge": "br-int", "label": "tempest-network-smoke--1753144487", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.211", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapec2bc9a6-15", "ovs_interfaceid": "ec2bc9a6-1578-4c92-b8c9-4d286a1a6f4b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 10:11:43 compute-0 nova_compute[254819]: 2025-12-06 10:11:43.069 254824 DEBUG oslo_concurrency.lockutils [None req-b12601a3-3a4c-4af5-af7f-7c124e1fb718 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Releasing lock "refresh_cache-467f8e9a-e166-409e-920c-689fea4ea3f6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 10:11:43 compute-0 nova_compute[254819]: 2025-12-06 10:11:43.089 254824 DEBUG oslo_concurrency.lockutils [None req-b12601a3-3a4c-4af5-af7f-7c124e1fb718 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lock "interface-467f8e9a-e166-409e-920c-689fea4ea3f6-88b1b4c6-36ba-46c8-baa2-da5b266af4d1" "released" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: held 5.882s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:11:43 compute-0 sshd-session[271196]: Connection closed by 3.137.73.221 port 46892
Dec 06 10:11:43 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 06 10:11:43 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1436422311' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 10:11:43 compute-0 nova_compute[254819]: 2025-12-06 10:11:43.247 254824 DEBUG oslo_concurrency.processutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.471s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 10:11:43 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:11:43 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 10:11:43 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:11:43.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 10:11:43 compute-0 nova_compute[254819]: 2025-12-06 10:11:43.323 254824 DEBUG nova.virt.libvirt.driver [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] skipping disk for instance-00000006 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 10:11:43 compute-0 nova_compute[254819]: 2025-12-06 10:11:43.323 254824 DEBUG nova.virt.libvirt.driver [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] skipping disk for instance-00000006 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 10:11:43 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:11:43 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:11:43 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:11:43.448 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:11:43 compute-0 nova_compute[254819]: 2025-12-06 10:11:43.498 254824 WARNING nova.virt.libvirt.driver [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 10:11:43 compute-0 nova_compute[254819]: 2025-12-06 10:11:43.500 254824 DEBUG nova.compute.resource_tracker [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4359MB free_disk=59.942543029785156GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 06 10:11:43 compute-0 nova_compute[254819]: 2025-12-06 10:11:43.500 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:11:43 compute-0 nova_compute[254819]: 2025-12-06 10:11:43.500 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:11:43 compute-0 nova_compute[254819]: 2025-12-06 10:11:43.560 254824 DEBUG nova.compute.manager [req-2c6fb01d-28eb-4026-bcb4-4fcd51c9ff56 req-e45e0ba2-3975-4d7d-8d08-dd4e7cba44ce d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 467f8e9a-e166-409e-920c-689fea4ea3f6] Received event network-changed-ec2bc9a6-1578-4c92-b8c9-4d286a1a6f4b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 10:11:43 compute-0 nova_compute[254819]: 2025-12-06 10:11:43.561 254824 DEBUG nova.compute.manager [req-2c6fb01d-28eb-4026-bcb4-4fcd51c9ff56 req-e45e0ba2-3975-4d7d-8d08-dd4e7cba44ce d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 467f8e9a-e166-409e-920c-689fea4ea3f6] Refreshing instance network info cache due to event network-changed-ec2bc9a6-1578-4c92-b8c9-4d286a1a6f4b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 10:11:43 compute-0 nova_compute[254819]: 2025-12-06 10:11:43.561 254824 DEBUG oslo_concurrency.lockutils [req-2c6fb01d-28eb-4026-bcb4-4fcd51c9ff56 req-e45e0ba2-3975-4d7d-8d08-dd4e7cba44ce d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Acquiring lock "refresh_cache-467f8e9a-e166-409e-920c-689fea4ea3f6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 10:11:43 compute-0 nova_compute[254819]: 2025-12-06 10:11:43.562 254824 DEBUG oslo_concurrency.lockutils [req-2c6fb01d-28eb-4026-bcb4-4fcd51c9ff56 req-e45e0ba2-3975-4d7d-8d08-dd4e7cba44ce d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Acquired lock "refresh_cache-467f8e9a-e166-409e-920c-689fea4ea3f6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 10:11:43 compute-0 nova_compute[254819]: 2025-12-06 10:11:43.562 254824 DEBUG nova.network.neutron [req-2c6fb01d-28eb-4026-bcb4-4fcd51c9ff56 req-e45e0ba2-3975-4d7d-8d08-dd4e7cba44ce d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 467f8e9a-e166-409e-920c-689fea4ea3f6] Refreshing network info cache for port ec2bc9a6-1578-4c92-b8c9-4d286a1a6f4b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 06 10:11:43 compute-0 nova_compute[254819]: 2025-12-06 10:11:43.587 254824 DEBUG nova.compute.resource_tracker [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Instance 467f8e9a-e166-409e-920c-689fea4ea3f6 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 06 10:11:43 compute-0 nova_compute[254819]: 2025-12-06 10:11:43.588 254824 DEBUG nova.compute.resource_tracker [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 10:11:43 compute-0 nova_compute[254819]: 2025-12-06 10:11:43.589 254824 DEBUG nova.compute.resource_tracker [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 06 10:11:43 compute-0 nova_compute[254819]: 2025-12-06 10:11:43.671 254824 DEBUG oslo_concurrency.lockutils [None req-cf8327f4-d6f0-4585-9751-4d85e5e2283c 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Acquiring lock "467f8e9a-e166-409e-920c-689fea4ea3f6" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:11:43 compute-0 nova_compute[254819]: 2025-12-06 10:11:43.672 254824 DEBUG oslo_concurrency.lockutils [None req-cf8327f4-d6f0-4585-9751-4d85e5e2283c 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lock "467f8e9a-e166-409e-920c-689fea4ea3f6" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:11:43 compute-0 nova_compute[254819]: 2025-12-06 10:11:43.673 254824 DEBUG oslo_concurrency.lockutils [None req-cf8327f4-d6f0-4585-9751-4d85e5e2283c 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Acquiring lock "467f8e9a-e166-409e-920c-689fea4ea3f6-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:11:43 compute-0 nova_compute[254819]: 2025-12-06 10:11:43.674 254824 DEBUG oslo_concurrency.lockutils [None req-cf8327f4-d6f0-4585-9751-4d85e5e2283c 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lock "467f8e9a-e166-409e-920c-689fea4ea3f6-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:11:43 compute-0 nova_compute[254819]: 2025-12-06 10:11:43.674 254824 DEBUG oslo_concurrency.lockutils [None req-cf8327f4-d6f0-4585-9751-4d85e5e2283c 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lock "467f8e9a-e166-409e-920c-689fea4ea3f6-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:11:43 compute-0 nova_compute[254819]: 2025-12-06 10:11:43.677 254824 INFO nova.compute.manager [None req-cf8327f4-d6f0-4585-9751-4d85e5e2283c 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 467f8e9a-e166-409e-920c-689fea4ea3f6] Terminating instance
Dec 06 10:11:43 compute-0 nova_compute[254819]: 2025-12-06 10:11:43.679 254824 DEBUG nova.compute.manager [None req-cf8327f4-d6f0-4585-9751-4d85e5e2283c 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 467f8e9a-e166-409e-920c-689fea4ea3f6] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 06 10:11:43 compute-0 ceph-mon[74327]: pgmap v918: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 4.9 KiB/s wr, 33 op/s
Dec 06 10:11:43 compute-0 ceph-mon[74327]: from='client.? 192.168.122.100:0/1436422311' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 10:11:43 compute-0 nova_compute[254819]: 2025-12-06 10:11:43.713 254824 DEBUG oslo_concurrency.processutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 10:11:43 compute-0 kernel: tapec2bc9a6-15 (unregistering): left promiscuous mode
Dec 06 10:11:43 compute-0 NetworkManager[48882]: <info>  [1765015903.7467] device (tapec2bc9a6-15): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 06 10:11:43 compute-0 ovn_controller[152417]: 2025-12-06T10:11:43Z|00081|binding|INFO|Releasing lport ec2bc9a6-1578-4c92-b8c9-4d286a1a6f4b from this chassis (sb_readonly=0)
Dec 06 10:11:43 compute-0 ovn_controller[152417]: 2025-12-06T10:11:43Z|00082|binding|INFO|Setting lport ec2bc9a6-1578-4c92-b8c9-4d286a1a6f4b down in Southbound
Dec 06 10:11:43 compute-0 ovn_controller[152417]: 2025-12-06T10:11:43Z|00083|binding|INFO|Removing iface tapec2bc9a6-15 ovn-installed in OVS
Dec 06 10:11:43 compute-0 nova_compute[254819]: 2025-12-06 10:11:43.751 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:11:43 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:11:43.766 162267 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:64:9d:d4 10.100.0.14'], port_security=['fa:16:3e:64:9d:d4 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '467f8e9a-e166-409e-920c-689fea4ea3f6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4d76af3c-ede9-445b-bea0-ba96a2eaeddd', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '92b402c8d3e2476abc98be42a1e6d34e', 'neutron:revision_number': '4', 'neutron:security_group_ids': '04450372-2efd-4ce5-88c7-781d38bca802', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=25f33b62-e011-4e1d-9dc2-7927e4f8e59b, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f70c28558b0>], logical_port=ec2bc9a6-1578-4c92-b8c9-4d286a1a6f4b) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f70c28558b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 10:11:43 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:11:43.768 162267 INFO neutron.agent.ovn.metadata.agent [-] Port ec2bc9a6-1578-4c92-b8c9-4d286a1a6f4b in datapath 4d76af3c-ede9-445b-bea0-ba96a2eaeddd unbound from our chassis
Dec 06 10:11:43 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:11:43.769 162267 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 4d76af3c-ede9-445b-bea0-ba96a2eaeddd, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 06 10:11:43 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:11:43.771 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[558d4aed-f3b8-4641-b62c-887275f749bc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:11:43 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:11:43.772 162267 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-4d76af3c-ede9-445b-bea0-ba96a2eaeddd namespace which is not needed anymore
Dec 06 10:11:43 compute-0 nova_compute[254819]: 2025-12-06 10:11:43.780 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:11:43 compute-0 systemd[1]: machine-qemu\x2d4\x2dinstance\x2d00000006.scope: Deactivated successfully.
Dec 06 10:11:43 compute-0 systemd[1]: machine-qemu\x2d4\x2dinstance\x2d00000006.scope: Consumed 17.785s CPU time.
Dec 06 10:11:43 compute-0 systemd-machined[216202]: Machine qemu-4-instance-00000006 terminated.
Dec 06 10:11:43 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:11:43.851 162267 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=d39b5be8-d4cf-41c7-9a64-1ee03801f4e1, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '9'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 10:11:43 compute-0 kernel: tapec2bc9a6-15: entered promiscuous mode
Dec 06 10:11:43 compute-0 NetworkManager[48882]: <info>  [1765015903.9033] manager: (tapec2bc9a6-15): new Tun device (/org/freedesktop/NetworkManager/Devices/58)
Dec 06 10:11:43 compute-0 nova_compute[254819]: 2025-12-06 10:11:43.903 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:11:43 compute-0 ovn_controller[152417]: 2025-12-06T10:11:43Z|00084|binding|INFO|Claiming lport ec2bc9a6-1578-4c92-b8c9-4d286a1a6f4b for this chassis.
Dec 06 10:11:43 compute-0 ovn_controller[152417]: 2025-12-06T10:11:43Z|00085|binding|INFO|ec2bc9a6-1578-4c92-b8c9-4d286a1a6f4b: Claiming fa:16:3e:64:9d:d4 10.100.0.14
Dec 06 10:11:43 compute-0 systemd-udevd[271228]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 10:11:43 compute-0 kernel: tapec2bc9a6-15 (unregistering): left promiscuous mode
Dec 06 10:11:43 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:11:43.915 162267 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:64:9d:d4 10.100.0.14'], port_security=['fa:16:3e:64:9d:d4 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '467f8e9a-e166-409e-920c-689fea4ea3f6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4d76af3c-ede9-445b-bea0-ba96a2eaeddd', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '92b402c8d3e2476abc98be42a1e6d34e', 'neutron:revision_number': '4', 'neutron:security_group_ids': '04450372-2efd-4ce5-88c7-781d38bca802', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=25f33b62-e011-4e1d-9dc2-7927e4f8e59b, chassis=[<ovs.db.idl.Row object at 0x7f70c28558b0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f70c28558b0>], logical_port=ec2bc9a6-1578-4c92-b8c9-4d286a1a6f4b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 10:11:43 compute-0 ovn_controller[152417]: 2025-12-06T10:11:43Z|00086|binding|INFO|Setting lport ec2bc9a6-1578-4c92-b8c9-4d286a1a6f4b ovn-installed in OVS
Dec 06 10:11:43 compute-0 ovn_controller[152417]: 2025-12-06T10:11:43Z|00087|binding|INFO|Setting lport ec2bc9a6-1578-4c92-b8c9-4d286a1a6f4b up in Southbound
Dec 06 10:11:43 compute-0 nova_compute[254819]: 2025-12-06 10:11:43.926 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:11:43 compute-0 nova_compute[254819]: 2025-12-06 10:11:43.930 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:11:43 compute-0 ovn_controller[152417]: 2025-12-06T10:11:43Z|00088|binding|INFO|Releasing lport ec2bc9a6-1578-4c92-b8c9-4d286a1a6f4b from this chassis (sb_readonly=0)
Dec 06 10:11:43 compute-0 ovn_controller[152417]: 2025-12-06T10:11:43Z|00089|binding|INFO|Setting lport ec2bc9a6-1578-4c92-b8c9-4d286a1a6f4b down in Southbound
Dec 06 10:11:43 compute-0 neutron-haproxy-ovnmeta-4d76af3c-ede9-445b-bea0-ba96a2eaeddd[267811]: [NOTICE]   (267815) : haproxy version is 2.8.14-c23fe91
Dec 06 10:11:43 compute-0 neutron-haproxy-ovnmeta-4d76af3c-ede9-445b-bea0-ba96a2eaeddd[267811]: [NOTICE]   (267815) : path to executable is /usr/sbin/haproxy
Dec 06 10:11:43 compute-0 neutron-haproxy-ovnmeta-4d76af3c-ede9-445b-bea0-ba96a2eaeddd[267811]: [WARNING]  (267815) : Exiting Master process...
Dec 06 10:11:43 compute-0 neutron-haproxy-ovnmeta-4d76af3c-ede9-445b-bea0-ba96a2eaeddd[267811]: [WARNING]  (267815) : Exiting Master process...
Dec 06 10:11:43 compute-0 ovn_controller[152417]: 2025-12-06T10:11:43Z|00090|binding|INFO|Removing iface tapec2bc9a6-15 ovn-installed in OVS
Dec 06 10:11:43 compute-0 neutron-haproxy-ovnmeta-4d76af3c-ede9-445b-bea0-ba96a2eaeddd[267811]: [ALERT]    (267815) : Current worker (267817) exited with code 143 (Terminated)
Dec 06 10:11:43 compute-0 neutron-haproxy-ovnmeta-4d76af3c-ede9-445b-bea0-ba96a2eaeddd[267811]: [WARNING]  (267815) : All workers exited. Exiting... (0)
Dec 06 10:11:43 compute-0 nova_compute[254819]: 2025-12-06 10:11:43.936 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:11:43 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:11:43.939 162267 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:64:9d:d4 10.100.0.14'], port_security=['fa:16:3e:64:9d:d4 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '467f8e9a-e166-409e-920c-689fea4ea3f6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4d76af3c-ede9-445b-bea0-ba96a2eaeddd', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '92b402c8d3e2476abc98be42a1e6d34e', 'neutron:revision_number': '4', 'neutron:security_group_ids': '04450372-2efd-4ce5-88c7-781d38bca802', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=25f33b62-e011-4e1d-9dc2-7927e4f8e59b, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f70c28558b0>], logical_port=ec2bc9a6-1578-4c92-b8c9-4d286a1a6f4b) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f70c28558b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 10:11:43 compute-0 systemd[1]: libpod-64301eb34db4547a67ae0f8dfcc1faa503a5e4977d9bb18dc1381f6eb172dd7c.scope: Deactivated successfully.
Dec 06 10:11:43 compute-0 podman[271262]: 2025-12-06 10:11:43.946133373 +0000 UTC m=+0.056472190 container died 64301eb34db4547a67ae0f8dfcc1faa503a5e4977d9bb18dc1381f6eb172dd7c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=neutron-haproxy-ovnmeta-4d76af3c-ede9-445b-bea0-ba96a2eaeddd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Dec 06 10:11:43 compute-0 nova_compute[254819]: 2025-12-06 10:11:43.954 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:11:43 compute-0 nova_compute[254819]: 2025-12-06 10:11:43.957 254824 INFO nova.virt.libvirt.driver [-] [instance: 467f8e9a-e166-409e-920c-689fea4ea3f6] Instance destroyed successfully.
Dec 06 10:11:43 compute-0 nova_compute[254819]: 2025-12-06 10:11:43.957 254824 DEBUG nova.objects.instance [None req-cf8327f4-d6f0-4585-9751-4d85e5e2283c 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lazy-loading 'resources' on Instance uuid 467f8e9a-e166-409e-920c-689fea4ea3f6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 10:11:43 compute-0 nova_compute[254819]: 2025-12-06 10:11:43.975 254824 DEBUG nova.virt.libvirt.vif [None req-cf8327f4-d6f0-4585-9751-4d85e5e2283c 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T10:10:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-883828898',display_name='tempest-TestNetworkBasicOps-server-883828898',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-883828898',id=6,image_ref='9489b8a5-a798-4e26-87f9-59bb1eb2e6fd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBavG4AKWHlfpiq0SQasTveyxdMuqwUIBzXgDHnQ7us03WRPTjmnHIL9KdumxPOuSQ7mS9TjZaDU1Z0fZMB9bCP4vMT4dbs0/4ZtyRDMtJHhAJtsWO/6Dg3g/pdboWhC+A==',key_name='tempest-TestNetworkBasicOps-875879575',keypairs=<?>,launch_index=0,launched_at=2025-12-06T10:10:20Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='92b402c8d3e2476abc98be42a1e6d34e',ramdisk_id='',reservation_id='r-qxktas63',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='9489b8a5-a798-4e26-87f9-59bb1eb2e6fd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1971100882',owner_user_name='tempest-TestNetworkBasicOps-1971100882-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T10:10:20Z,user_data=None,user_id='03615580775245e6ae335ee9d785611f',uuid=467f8e9a-e166-409e-920c-689fea4ea3f6,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ec2bc9a6-1578-4c92-b8c9-4d286a1a6f4b", "address": "fa:16:3e:64:9d:d4", "network": {"id": "4d76af3c-ede9-445b-bea0-ba96a2eaeddd", "bridge": "br-int", "label": "tempest-network-smoke--1753144487", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.211", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapec2bc9a6-15", "ovs_interfaceid": "ec2bc9a6-1578-4c92-b8c9-4d286a1a6f4b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 06 10:11:43 compute-0 nova_compute[254819]: 2025-12-06 10:11:43.976 254824 DEBUG nova.network.os_vif_util [None req-cf8327f4-d6f0-4585-9751-4d85e5e2283c 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Converting VIF {"id": "ec2bc9a6-1578-4c92-b8c9-4d286a1a6f4b", "address": "fa:16:3e:64:9d:d4", "network": {"id": "4d76af3c-ede9-445b-bea0-ba96a2eaeddd", "bridge": "br-int", "label": "tempest-network-smoke--1753144487", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.211", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapec2bc9a6-15", "ovs_interfaceid": "ec2bc9a6-1578-4c92-b8c9-4d286a1a6f4b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 10:11:43 compute-0 nova_compute[254819]: 2025-12-06 10:11:43.977 254824 DEBUG nova.network.os_vif_util [None req-cf8327f4-d6f0-4585-9751-4d85e5e2283c 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:64:9d:d4,bridge_name='br-int',has_traffic_filtering=True,id=ec2bc9a6-1578-4c92-b8c9-4d286a1a6f4b,network=Network(4d76af3c-ede9-445b-bea0-ba96a2eaeddd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapec2bc9a6-15') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 10:11:43 compute-0 nova_compute[254819]: 2025-12-06 10:11:43.978 254824 DEBUG os_vif [None req-cf8327f4-d6f0-4585-9751-4d85e5e2283c 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:64:9d:d4,bridge_name='br-int',has_traffic_filtering=True,id=ec2bc9a6-1578-4c92-b8c9-4d286a1a6f4b,network=Network(4d76af3c-ede9-445b-bea0-ba96a2eaeddd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapec2bc9a6-15') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
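The unplug sequence above is the hand-off from Nova to the os-vif library: nova_to_osvif_vif converts the Neutron port dict into the VIFOpenVSwitch object, and os_vif.unplug() then asks the 'ovs' plugin to remove the port. A minimal sketch of driving that same call path directly, assuming the os-vif package and its ovs plugin are installed; the field values are copied from the converted object logged above, but the constructor usage is illustrative, not Nova's exact code.

```python
# Sketch: unplugging an OVS VIF through os-vif directly (values copied
# from the log above; run with enough privilege to touch br-int).
import os_vif
from os_vif.objects import instance_info, network, vif

os_vif.initialize()  # loads the 'ovs' plugin via stevedore

net = network.Network(id='4d76af3c-ede9-445b-bea0-ba96a2eaeddd',
                      bridge='br-int')
my_vif = vif.VIFOpenVSwitch(
    id='ec2bc9a6-1578-4c92-b8c9-4d286a1a6f4b',
    address='fa:16:3e:64:9d:d4',
    vif_name='tapec2bc9a6-15',
    bridge_name='br-int',
    network=net)
inst = instance_info.InstanceInfo(
    uuid='467f8e9a-e166-409e-920c-689fea4ea3f6',
    name='tempest-TestNetworkBasicOps-server-883828898')

# Produces the same "Unplugging vif ..." / "Successfully unplugged vif ..."
# pair that nova_compute logs above.
os_vif.unplug(my_vif, inst)
```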
Dec 06 10:11:43 compute-0 nova_compute[254819]: 2025-12-06 10:11:43.980 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:11:43 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-64301eb34db4547a67ae0f8dfcc1faa503a5e4977d9bb18dc1381f6eb172dd7c-userdata-shm.mount: Deactivated successfully.
Dec 06 10:11:43 compute-0 nova_compute[254819]: 2025-12-06 10:11:43.980 254824 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapec2bc9a6-15, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
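The DelPortCommand above is ovsdbapp queueing a one-command transaction against the local Open_vSwitch database. A sketch of issuing the same idempotent delete standalone; the connection string is an assumption, since the log does not record it:

```python
# Sketch: the same DelPortCommand via ovsdbapp (socket path assumed).
from ovsdbapp.backend.ovs_idl import connection
from ovsdbapp.schema.open_vswitch import impl_idl

idl = connection.OvsdbIdl.from_server('unix:/run/openvswitch/db.sock',
                                      'Open_vSwitch')
api = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=10))

# if_exists=True commits as a no-op when the port is already gone,
# which keeps teardown retries harmless.
api.del_port('tapec2bc9a6-15', bridge='br-int',
             if_exists=True).execute(check_error=True)
```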
Dec 06 10:11:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-b0c35a0a906865a4663842e5ed6b698da4d1040e57a2b60288990c137c9d3376-merged.mount: Deactivated successfully.
Dec 06 10:11:43 compute-0 nova_compute[254819]: 2025-12-06 10:11:43.985 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:11:43 compute-0 nova_compute[254819]: 2025-12-06 10:11:43.988 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 10:11:43 compute-0 podman[271262]: 2025-12-06 10:11:43.991132546 +0000 UTC m=+0.101471363 container cleanup 64301eb34db4547a67ae0f8dfcc1faa503a5e4977d9bb18dc1381f6eb172dd7c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=neutron-haproxy-ovnmeta-4d76af3c-ede9-445b-bea0-ba96a2eaeddd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 06 10:11:43 compute-0 nova_compute[254819]: 2025-12-06 10:11:43.991 254824 INFO os_vif [None req-cf8327f4-d6f0-4585-9751-4d85e5e2283c 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:64:9d:d4,bridge_name='br-int',has_traffic_filtering=True,id=ec2bc9a6-1578-4c92-b8c9-4d286a1a6f4b,network=Network(4d76af3c-ede9-445b-bea0-ba96a2eaeddd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapec2bc9a6-15')
Dec 06 10:11:44 compute-0 systemd[1]: libpod-conmon-64301eb34db4547a67ae0f8dfcc1faa503a5e4977d9bb18dc1381f6eb172dd7c.scope: Deactivated successfully.
Dec 06 10:11:44 compute-0 podman[271302]: 2025-12-06 10:11:44.072424228 +0000 UTC m=+0.054433106 container remove 64301eb34db4547a67ae0f8dfcc1faa503a5e4977d9bb18dc1381f6eb172dd7c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=neutron-haproxy-ovnmeta-4d76af3c-ede9-445b-bea0-ba96a2eaeddd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 06 10:11:44 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:11:44.087 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[3ca67610-5b10-42c7-aeb6-b352b159fbaa]: (4, ('Sat Dec  6 10:11:43 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-4d76af3c-ede9-445b-bea0-ba96a2eaeddd (64301eb34db4547a67ae0f8dfcc1faa503a5e4977d9bb18dc1381f6eb172dd7c)\n64301eb34db4547a67ae0f8dfcc1faa503a5e4977d9bb18dc1381f6eb172dd7c\nSat Dec  6 10:11:44 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-4d76af3c-ede9-445b-bea0-ba96a2eaeddd (64301eb34db4547a67ae0f8dfcc1faa503a5e4977d9bb18dc1381f6eb172dd7c)\n64301eb34db4547a67ae0f8dfcc1faa503a5e4977d9bb18dc1381f6eb172dd7c\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:11:44 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:11:44.089 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[63c0082e-72f9-4441-9306-145423ddf235]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:11:44 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:11:44.090 162267 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4d76af3c-e0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 10:11:44 compute-0 kernel: tap4d76af3c-e0: left promiscuous mode
Dec 06 10:11:44 compute-0 nova_compute[254819]: 2025-12-06 10:11:44.092 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:11:44 compute-0 nova_compute[254819]: 2025-12-06 10:11:44.113 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:11:44 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:11:44.116 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[b872c5e4-332e-4697-8c94-e4fc807be9f6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:11:44 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:11:44.130 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[2ea3dbdc-98f1-4c81-b74f-9002ec2e8609]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:11:44 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:11:44.131 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[0ab6844b-32d4-4e09-b2e0-212f1bed689a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:11:44 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:11:44.161 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[df22c8c2-9cce-41b2-9207-e9664a0adb9d]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 419047, 'reachable_time': 15621, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 271328, 'error': None, 'target': 'ovnmeta-4d76af3c-ede9-445b-bea0-ba96a2eaeddd', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:11:44 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:11:44.164 162385 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-4d76af3c-ede9-445b-bea0-ba96a2eaeddd deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
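remove_netns above is neutron's privsep helper deleting the now-empty ovnmeta- namespace; neutron drives this through pyroute2. A sketch of the equivalent standalone removal, which must run as root:

```python
# Sketch: deleting the metadata namespace the way neutron's privileged
# ip_lib does (equivalent to `ip netns delete <name>`).
from pyroute2 import netns

NS = 'ovnmeta-4d76af3c-ede9-445b-bea0-ba96a2eaeddd'
if NS in netns.listnetns():   # skip if already torn down
    netns.remove(NS)
```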
Dec 06 10:11:44 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:11:44.164 162385 DEBUG oslo.privsep.daemon [-] privsep: reply[4435c0b6-253d-4794-907d-d8f0b626f421]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:11:44 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:11:44.165 162267 INFO neutron.agent.ovn.metadata.agent [-] Port ec2bc9a6-1578-4c92-b8c9-4d286a1a6f4b in datapath 4d76af3c-ede9-445b-bea0-ba96a2eaeddd unbound from our chassis
Dec 06 10:11:44 compute-0 systemd[1]: run-netns-ovnmeta\x2d4d76af3c\x2dede9\x2d445b\x2dbea0\x2dba96a2eaeddd.mount: Deactivated successfully.
Dec 06 10:11:44 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:11:44.166 162267 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 4d76af3c-ede9-445b-bea0-ba96a2eaeddd, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 06 10:11:44 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:11:44.167 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[e4aeb864-2171-4369-8078-c22cfd32d552]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:11:44 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:11:44.168 162267 INFO neutron.agent.ovn.metadata.agent [-] Port ec2bc9a6-1578-4c92-b8c9-4d286a1a6f4b in datapath 4d76af3c-ede9-445b-bea0-ba96a2eaeddd unbound from our chassis
Dec 06 10:11:44 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:11:44.168 162267 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 4d76af3c-ede9-445b-bea0-ba96a2eaeddd, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 06 10:11:44 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:11:44.169 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[9f7a8c08-cd30-4cd7-83a5-8237d834c15e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:11:44 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 06 10:11:44 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3230595324' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 10:11:44 compute-0 nova_compute[254819]: 2025-12-06 10:11:44.233 254824 DEBUG oslo_concurrency.processutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.521s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 10:11:44 compute-0 nova_compute[254819]: 2025-12-06 10:11:44.243 254824 DEBUG nova.compute.provider_tree [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Inventory has not changed in ProviderTree for provider: 06a9c7d1-c74c-47ea-9e97-16acfab6aa88 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 10:11:44 compute-0 nova_compute[254819]: 2025-12-06 10:11:44.264 254824 DEBUG nova.scheduler.client.report [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Inventory has not changed for provider 06a9c7d1-c74c-47ea-9e97-16acfab6aa88 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
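The inventory dict in that line is what the resource tracker reports to Placement each cycle; usable capacity per resource class follows the standard Placement rule capacity = (total - reserved) * allocation_ratio. A worked version of the arithmetic for the values logged above:

```python
# Worked example: schedulable capacity from the logged inventory.
inventory = {
    'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
    'MEMORY_MB': {'total': 7680, 'reserved': 512, 'allocation_ratio': 1.0},
    'DISK_GB':   {'total': 59,   'reserved': 1,   'allocation_ratio': 0.9},
}
for rc, inv in inventory.items():
    capacity = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
    print(f"{rc}: {capacity:g}")
# VCPU: 32, MEMORY_MB: 7168, DISK_GB: 52.2
```

So this host can hand out up to 32 vCPUs, 7168 MB of RAM, and 52.2 GB of disk before Placement refuses new allocations.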
Dec 06 10:11:44 compute-0 nova_compute[254819]: 2025-12-06 10:11:44.267 254824 DEBUG nova.compute.resource_tracker [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 10:11:44 compute-0 nova_compute[254819]: 2025-12-06 10:11:44.268 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.767s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
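The Acquiring/acquired/released triplets around "compute_resources" come from oslo.concurrency's synchronized decorator, which logs wait and hold times (the 0.767s held here spans the whole _update_available_resource pass). A sketch of the pattern, with an illustrative function body:

```python
# Sketch: the lock pattern behind the "compute_resources" lines above.
from oslo_concurrency import lockutils

synchronized = lockutils.synchronized_with_prefix('nova-')

@synchronized('compute_resources')
def update_available_resource():
    # Runs with the lock held; lockutils emits the waited/held timings
    # seen in the journal at DEBUG level.
    pass
```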
Dec 06 10:11:44 compute-0 nova_compute[254819]: 2025-12-06 10:11:44.427 254824 INFO nova.virt.libvirt.driver [None req-cf8327f4-d6f0-4585-9751-4d85e5e2283c 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 467f8e9a-e166-409e-920c-689fea4ea3f6] Deleting instance files /var/lib/nova/instances/467f8e9a-e166-409e-920c-689fea4ea3f6_del
Dec 06 10:11:44 compute-0 nova_compute[254819]: 2025-12-06 10:11:44.428 254824 INFO nova.virt.libvirt.driver [None req-cf8327f4-d6f0-4585-9751-4d85e5e2283c 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 467f8e9a-e166-409e-920c-689fea4ea3f6] Deletion of /var/lib/nova/instances/467f8e9a-e166-409e-920c-689fea4ea3f6_del complete
Dec 06 10:11:44 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v919: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 12 KiB/s rd, 3.0 KiB/s wr, 19 op/s
Dec 06 10:11:44 compute-0 nova_compute[254819]: 2025-12-06 10:11:44.504 254824 INFO nova.compute.manager [None req-cf8327f4-d6f0-4585-9751-4d85e5e2283c 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 467f8e9a-e166-409e-920c-689fea4ea3f6] Took 0.82 seconds to destroy the instance on the hypervisor.
Dec 06 10:11:44 compute-0 nova_compute[254819]: 2025-12-06 10:11:44.505 254824 DEBUG oslo.service.loopingcall [None req-cf8327f4-d6f0-4585-9751-4d85e5e2283c 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 06 10:11:44 compute-0 nova_compute[254819]: 2025-12-06 10:11:44.505 254824 DEBUG nova.compute.manager [-] [instance: 467f8e9a-e166-409e-920c-689fea4ea3f6] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 06 10:11:44 compute-0 nova_compute[254819]: 2025-12-06 10:11:44.505 254824 DEBUG nova.network.neutron [-] [instance: 467f8e9a-e166-409e-920c-689fea4ea3f6] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 06 10:11:44 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:11:44 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65dc002d90 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:11:44 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:11:44 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65dc002d90 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:11:44 compute-0 ceph-mon[74327]: from='client.? 192.168.122.100:0/3230595324' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 10:11:44 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:11:45 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:11:45 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c00a7e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:11:45 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:11:45 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:11:45 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:11:45.260 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
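The anonymous "HEAD / HTTP/1.0" 200 entries from beast are external health probes hitting the radosgw frontend every couple of seconds. An equivalent probe in stdlib Python; the endpoint host and port are assumptions, since the access log records only the client address:

```python
# Sketch: an anonymous HEAD health probe against radosgw (endpoint assumed).
import http.client

conn = http.client.HTTPConnection('compute-0.ctlplane.example.com', 8080,
                                  timeout=5)
conn.request('HEAD', '/')
print(conn.getresponse().status)  # 200, matching the beast access lines
conn.close()
```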
Dec 06 10:11:45 compute-0 nova_compute[254819]: 2025-12-06 10:11:45.270 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:11:45 compute-0 nova_compute[254819]: 2025-12-06 10:11:45.271 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:11:45 compute-0 nova_compute[254819]: 2025-12-06 10:11:45.272 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:11:45 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:11:45 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 10:11:45 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:11:45.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 10:11:45 compute-0 nova_compute[254819]: 2025-12-06 10:11:45.455 254824 DEBUG nova.network.neutron [-] [instance: 467f8e9a-e166-409e-920c-689fea4ea3f6] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 10:11:45 compute-0 nova_compute[254819]: 2025-12-06 10:11:45.480 254824 INFO nova.compute.manager [-] [instance: 467f8e9a-e166-409e-920c-689fea4ea3f6] Took 0.98 seconds to deallocate network for instance.
Dec 06 10:11:45 compute-0 nova_compute[254819]: 2025-12-06 10:11:45.488 254824 DEBUG nova.network.neutron [req-2c6fb01d-28eb-4026-bcb4-4fcd51c9ff56 req-e45e0ba2-3975-4d7d-8d08-dd4e7cba44ce d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 467f8e9a-e166-409e-920c-689fea4ea3f6] Updated VIF entry in instance network info cache for port ec2bc9a6-1578-4c92-b8c9-4d286a1a6f4b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 06 10:11:45 compute-0 nova_compute[254819]: 2025-12-06 10:11:45.489 254824 DEBUG nova.network.neutron [req-2c6fb01d-28eb-4026-bcb4-4fcd51c9ff56 req-e45e0ba2-3975-4d7d-8d08-dd4e7cba44ce d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 467f8e9a-e166-409e-920c-689fea4ea3f6] Updating instance_info_cache with network_info: [{"id": "ec2bc9a6-1578-4c92-b8c9-4d286a1a6f4b", "address": "fa:16:3e:64:9d:d4", "network": {"id": "4d76af3c-ede9-445b-bea0-ba96a2eaeddd", "bridge": "br-int", "label": "tempest-network-smoke--1753144487", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapec2bc9a6-15", "ovs_interfaceid": "ec2bc9a6-1578-4c92-b8c9-4d286a1a6f4b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 10:11:45 compute-0 nova_compute[254819]: 2025-12-06 10:11:45.540 254824 DEBUG oslo_concurrency.lockutils [req-2c6fb01d-28eb-4026-bcb4-4fcd51c9ff56 req-e45e0ba2-3975-4d7d-8d08-dd4e7cba44ce d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Releasing lock "refresh_cache-467f8e9a-e166-409e-920c-689fea4ea3f6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 10:11:45 compute-0 nova_compute[254819]: 2025-12-06 10:11:45.551 254824 DEBUG oslo_concurrency.lockutils [None req-cf8327f4-d6f0-4585-9751-4d85e5e2283c 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:11:45 compute-0 nova_compute[254819]: 2025-12-06 10:11:45.552 254824 DEBUG oslo_concurrency.lockutils [None req-cf8327f4-d6f0-4585-9751-4d85e5e2283c 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:11:45 compute-0 nova_compute[254819]: 2025-12-06 10:11:45.565 254824 DEBUG nova.compute.manager [req-027256ff-1fdc-423d-8bdc-89230f21652f req-5485ea86-6d04-4cbe-8fa3-3612b7311a2a d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 467f8e9a-e166-409e-920c-689fea4ea3f6] Received event network-vif-deleted-ec2bc9a6-1578-4c92-b8c9-4d286a1a6f4b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 10:11:45 compute-0 nova_compute[254819]: 2025-12-06 10:11:45.566 254824 INFO nova.compute.manager [req-027256ff-1fdc-423d-8bdc-89230f21652f req-5485ea86-6d04-4cbe-8fa3-3612b7311a2a d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 467f8e9a-e166-409e-920c-689fea4ea3f6] Neutron deleted interface ec2bc9a6-1578-4c92-b8c9-4d286a1a6f4b; detaching it from the instance and deleting it from the info cache
Dec 06 10:11:45 compute-0 nova_compute[254819]: 2025-12-06 10:11:45.566 254824 DEBUG nova.network.neutron [req-027256ff-1fdc-423d-8bdc-89230f21652f req-5485ea86-6d04-4cbe-8fa3-3612b7311a2a d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 467f8e9a-e166-409e-920c-689fea4ea3f6] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 10:11:45 compute-0 nova_compute[254819]: 2025-12-06 10:11:45.596 254824 DEBUG nova.compute.manager [req-027256ff-1fdc-423d-8bdc-89230f21652f req-5485ea86-6d04-4cbe-8fa3-3612b7311a2a d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 467f8e9a-e166-409e-920c-689fea4ea3f6] Detach interface failed, port_id=ec2bc9a6-1578-4c92-b8c9-4d286a1a6f4b, reason: Instance 467f8e9a-e166-409e-920c-689fea4ea3f6 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Dec 06 10:11:45 compute-0 nova_compute[254819]: 2025-12-06 10:11:45.609 254824 DEBUG oslo_concurrency.processutils [None req-cf8327f4-d6f0-4585-9751-4d85e5e2283c 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
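Nova's RBD image backend measures pool capacity by shelling out to ceph df, as this Running cmd / CMD returned pair shows (each call costs roughly 0.5s here). A sketch of the same call through oslo.concurrency, plus parsing the JSON it returns:

```python
# Sketch: the `ceph df` probe logged above, with JSON parsing.
import json
from oslo_concurrency import processutils

out, _err = processutils.execute(
    'ceph', 'df', '--format=json',
    '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')
stats = json.loads(out)
print(stats['stats']['total_bytes'], stats['stats']['total_avail_bytes'])
```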
Dec 06 10:11:45 compute-0 nova_compute[254819]: 2025-12-06 10:11:45.696 254824 DEBUG nova.compute.manager [req-92d89134-8424-4158-b02a-015d1026046d req-5e6a8033-8fad-4359-9532-cdeadbbc80a2 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 467f8e9a-e166-409e-920c-689fea4ea3f6] Received event network-vif-unplugged-ec2bc9a6-1578-4c92-b8c9-4d286a1a6f4b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 10:11:45 compute-0 nova_compute[254819]: 2025-12-06 10:11:45.697 254824 DEBUG oslo_concurrency.lockutils [req-92d89134-8424-4158-b02a-015d1026046d req-5e6a8033-8fad-4359-9532-cdeadbbc80a2 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Acquiring lock "467f8e9a-e166-409e-920c-689fea4ea3f6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:11:45 compute-0 nova_compute[254819]: 2025-12-06 10:11:45.698 254824 DEBUG oslo_concurrency.lockutils [req-92d89134-8424-4158-b02a-015d1026046d req-5e6a8033-8fad-4359-9532-cdeadbbc80a2 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Lock "467f8e9a-e166-409e-920c-689fea4ea3f6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:11:45 compute-0 nova_compute[254819]: 2025-12-06 10:11:45.698 254824 DEBUG oslo_concurrency.lockutils [req-92d89134-8424-4158-b02a-015d1026046d req-5e6a8033-8fad-4359-9532-cdeadbbc80a2 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Lock "467f8e9a-e166-409e-920c-689fea4ea3f6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:11:45 compute-0 nova_compute[254819]: 2025-12-06 10:11:45.698 254824 DEBUG nova.compute.manager [req-92d89134-8424-4158-b02a-015d1026046d req-5e6a8033-8fad-4359-9532-cdeadbbc80a2 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 467f8e9a-e166-409e-920c-689fea4ea3f6] No waiting events found dispatching network-vif-unplugged-ec2bc9a6-1578-4c92-b8c9-4d286a1a6f4b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 10:11:45 compute-0 nova_compute[254819]: 2025-12-06 10:11:45.699 254824 WARNING nova.compute.manager [req-92d89134-8424-4158-b02a-015d1026046d req-5e6a8033-8fad-4359-9532-cdeadbbc80a2 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 467f8e9a-e166-409e-920c-689fea4ea3f6] Received unexpected event network-vif-unplugged-ec2bc9a6-1578-4c92-b8c9-4d286a1a6f4b for instance with vm_state deleted and task_state None.
Dec 06 10:11:45 compute-0 nova_compute[254819]: 2025-12-06 10:11:45.699 254824 DEBUG nova.compute.manager [req-92d89134-8424-4158-b02a-015d1026046d req-5e6a8033-8fad-4359-9532-cdeadbbc80a2 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 467f8e9a-e166-409e-920c-689fea4ea3f6] Received event network-vif-plugged-ec2bc9a6-1578-4c92-b8c9-4d286a1a6f4b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 10:11:45 compute-0 nova_compute[254819]: 2025-12-06 10:11:45.700 254824 DEBUG oslo_concurrency.lockutils [req-92d89134-8424-4158-b02a-015d1026046d req-5e6a8033-8fad-4359-9532-cdeadbbc80a2 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Acquiring lock "467f8e9a-e166-409e-920c-689fea4ea3f6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:11:45 compute-0 nova_compute[254819]: 2025-12-06 10:11:45.700 254824 DEBUG oslo_concurrency.lockutils [req-92d89134-8424-4158-b02a-015d1026046d req-5e6a8033-8fad-4359-9532-cdeadbbc80a2 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Lock "467f8e9a-e166-409e-920c-689fea4ea3f6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:11:45 compute-0 nova_compute[254819]: 2025-12-06 10:11:45.700 254824 DEBUG oslo_concurrency.lockutils [req-92d89134-8424-4158-b02a-015d1026046d req-5e6a8033-8fad-4359-9532-cdeadbbc80a2 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Lock "467f8e9a-e166-409e-920c-689fea4ea3f6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:11:45 compute-0 nova_compute[254819]: 2025-12-06 10:11:45.701 254824 DEBUG nova.compute.manager [req-92d89134-8424-4158-b02a-015d1026046d req-5e6a8033-8fad-4359-9532-cdeadbbc80a2 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 467f8e9a-e166-409e-920c-689fea4ea3f6] No waiting events found dispatching network-vif-plugged-ec2bc9a6-1578-4c92-b8c9-4d286a1a6f4b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 10:11:45 compute-0 nova_compute[254819]: 2025-12-06 10:11:45.701 254824 WARNING nova.compute.manager [req-92d89134-8424-4158-b02a-015d1026046d req-5e6a8033-8fad-4359-9532-cdeadbbc80a2 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 467f8e9a-e166-409e-920c-689fea4ea3f6] Received unexpected event network-vif-plugged-ec2bc9a6-1578-4c92-b8c9-4d286a1a6f4b for instance with vm_state deleted and task_state None.
Dec 06 10:11:45 compute-0 nova_compute[254819]: 2025-12-06 10:11:45.701 254824 DEBUG nova.compute.manager [req-92d89134-8424-4158-b02a-015d1026046d req-5e6a8033-8fad-4359-9532-cdeadbbc80a2 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 467f8e9a-e166-409e-920c-689fea4ea3f6] Received event network-vif-plugged-ec2bc9a6-1578-4c92-b8c9-4d286a1a6f4b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 10:11:45 compute-0 nova_compute[254819]: 2025-12-06 10:11:45.702 254824 DEBUG oslo_concurrency.lockutils [req-92d89134-8424-4158-b02a-015d1026046d req-5e6a8033-8fad-4359-9532-cdeadbbc80a2 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Acquiring lock "467f8e9a-e166-409e-920c-689fea4ea3f6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:11:45 compute-0 nova_compute[254819]: 2025-12-06 10:11:45.702 254824 DEBUG oslo_concurrency.lockutils [req-92d89134-8424-4158-b02a-015d1026046d req-5e6a8033-8fad-4359-9532-cdeadbbc80a2 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Lock "467f8e9a-e166-409e-920c-689fea4ea3f6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:11:45 compute-0 nova_compute[254819]: 2025-12-06 10:11:45.702 254824 DEBUG oslo_concurrency.lockutils [req-92d89134-8424-4158-b02a-015d1026046d req-5e6a8033-8fad-4359-9532-cdeadbbc80a2 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Lock "467f8e9a-e166-409e-920c-689fea4ea3f6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:11:45 compute-0 nova_compute[254819]: 2025-12-06 10:11:45.703 254824 DEBUG nova.compute.manager [req-92d89134-8424-4158-b02a-015d1026046d req-5e6a8033-8fad-4359-9532-cdeadbbc80a2 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 467f8e9a-e166-409e-920c-689fea4ea3f6] No waiting events found dispatching network-vif-plugged-ec2bc9a6-1578-4c92-b8c9-4d286a1a6f4b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 10:11:45 compute-0 nova_compute[254819]: 2025-12-06 10:11:45.703 254824 WARNING nova.compute.manager [req-92d89134-8424-4158-b02a-015d1026046d req-5e6a8033-8fad-4359-9532-cdeadbbc80a2 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 467f8e9a-e166-409e-920c-689fea4ea3f6] Received unexpected event network-vif-plugged-ec2bc9a6-1578-4c92-b8c9-4d286a1a6f4b for instance with vm_state deleted and task_state None.
Dec 06 10:11:45 compute-0 nova_compute[254819]: 2025-12-06 10:11:45.704 254824 DEBUG nova.compute.manager [req-92d89134-8424-4158-b02a-015d1026046d req-5e6a8033-8fad-4359-9532-cdeadbbc80a2 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 467f8e9a-e166-409e-920c-689fea4ea3f6] Received event network-vif-plugged-ec2bc9a6-1578-4c92-b8c9-4d286a1a6f4b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 10:11:45 compute-0 nova_compute[254819]: 2025-12-06 10:11:45.704 254824 DEBUG oslo_concurrency.lockutils [req-92d89134-8424-4158-b02a-015d1026046d req-5e6a8033-8fad-4359-9532-cdeadbbc80a2 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Acquiring lock "467f8e9a-e166-409e-920c-689fea4ea3f6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:11:45 compute-0 nova_compute[254819]: 2025-12-06 10:11:45.704 254824 DEBUG oslo_concurrency.lockutils [req-92d89134-8424-4158-b02a-015d1026046d req-5e6a8033-8fad-4359-9532-cdeadbbc80a2 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Lock "467f8e9a-e166-409e-920c-689fea4ea3f6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:11:45 compute-0 nova_compute[254819]: 2025-12-06 10:11:45.705 254824 DEBUG oslo_concurrency.lockutils [req-92d89134-8424-4158-b02a-015d1026046d req-5e6a8033-8fad-4359-9532-cdeadbbc80a2 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Lock "467f8e9a-e166-409e-920c-689fea4ea3f6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:11:45 compute-0 nova_compute[254819]: 2025-12-06 10:11:45.705 254824 DEBUG nova.compute.manager [req-92d89134-8424-4158-b02a-015d1026046d req-5e6a8033-8fad-4359-9532-cdeadbbc80a2 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 467f8e9a-e166-409e-920c-689fea4ea3f6] No waiting events found dispatching network-vif-plugged-ec2bc9a6-1578-4c92-b8c9-4d286a1a6f4b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 10:11:45 compute-0 nova_compute[254819]: 2025-12-06 10:11:45.705 254824 WARNING nova.compute.manager [req-92d89134-8424-4158-b02a-015d1026046d req-5e6a8033-8fad-4359-9532-cdeadbbc80a2 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 467f8e9a-e166-409e-920c-689fea4ea3f6] Received unexpected event network-vif-plugged-ec2bc9a6-1578-4c92-b8c9-4d286a1a6f4b for instance with vm_state deleted and task_state None.
Dec 06 10:11:45 compute-0 nova_compute[254819]: 2025-12-06 10:11:45.706 254824 DEBUG nova.compute.manager [req-92d89134-8424-4158-b02a-015d1026046d req-5e6a8033-8fad-4359-9532-cdeadbbc80a2 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 467f8e9a-e166-409e-920c-689fea4ea3f6] Received event network-vif-unplugged-ec2bc9a6-1578-4c92-b8c9-4d286a1a6f4b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 10:11:45 compute-0 nova_compute[254819]: 2025-12-06 10:11:45.706 254824 DEBUG oslo_concurrency.lockutils [req-92d89134-8424-4158-b02a-015d1026046d req-5e6a8033-8fad-4359-9532-cdeadbbc80a2 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Acquiring lock "467f8e9a-e166-409e-920c-689fea4ea3f6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:11:45 compute-0 nova_compute[254819]: 2025-12-06 10:11:45.706 254824 DEBUG oslo_concurrency.lockutils [req-92d89134-8424-4158-b02a-015d1026046d req-5e6a8033-8fad-4359-9532-cdeadbbc80a2 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Lock "467f8e9a-e166-409e-920c-689fea4ea3f6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:11:45 compute-0 nova_compute[254819]: 2025-12-06 10:11:45.707 254824 DEBUG oslo_concurrency.lockutils [req-92d89134-8424-4158-b02a-015d1026046d req-5e6a8033-8fad-4359-9532-cdeadbbc80a2 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Lock "467f8e9a-e166-409e-920c-689fea4ea3f6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:11:45 compute-0 nova_compute[254819]: 2025-12-06 10:11:45.707 254824 DEBUG nova.compute.manager [req-92d89134-8424-4158-b02a-015d1026046d req-5e6a8033-8fad-4359-9532-cdeadbbc80a2 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 467f8e9a-e166-409e-920c-689fea4ea3f6] No waiting events found dispatching network-vif-unplugged-ec2bc9a6-1578-4c92-b8c9-4d286a1a6f4b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 10:11:45 compute-0 nova_compute[254819]: 2025-12-06 10:11:45.708 254824 WARNING nova.compute.manager [req-92d89134-8424-4158-b02a-015d1026046d req-5e6a8033-8fad-4359-9532-cdeadbbc80a2 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 467f8e9a-e166-409e-920c-689fea4ea3f6] Received unexpected event network-vif-unplugged-ec2bc9a6-1578-4c92-b8c9-4d286a1a6f4b for instance with vm_state deleted and task_state None.
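The run of "No waiting events found dispatching ..." lines followed by "Received unexpected event ... vm_state deleted" warnings is Nova's external-event plumbing at work: an operation that expects a Neutron event registers a waiter first, and events that arrive with no registered waiter are logged and dropped, which is harmless for an instance that is already gone. A purely illustrative stdlib sketch of that prepare/pop pattern; the class and method names here are hypothetical (the real code lives in nova.compute.manager.InstanceEvents):

```python
# Illustrative sketch of the prepare/pop waiter pattern (hypothetical names).
import threading

class InstanceEvents:
    def __init__(self):
        self._waiters = {}            # (instance_uuid, event_name) -> Event
        self._lock = threading.Lock()

    def prepare(self, uuid, name):
        ev = threading.Event()
        with self._lock:
            self._waiters[(uuid, name)] = ev
        return ev                     # caller blocks on ev.wait()

    def pop(self, uuid, name):
        with self._lock:
            ev = self._waiters.pop((uuid, name), None)
        if ev is None:                # nobody was waiting: log and drop
            print(f'No waiting events found dispatching {name}')
        else:
            ev.set()

events = InstanceEvents()
# The instance was already deleted, so no prepare() ran for this event:
events.pop('467f8e9a-e166-409e-920c-689fea4ea3f6', 'network-vif-plugged')
```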
Dec 06 10:11:45 compute-0 ceph-mon[74327]: pgmap v919: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 12 KiB/s rd, 3.0 KiB/s wr, 19 op/s
Dec 06 10:11:45 compute-0 nova_compute[254819]: 2025-12-06 10:11:45.748 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:11:45 compute-0 nova_compute[254819]: 2025-12-06 10:11:45.749 254824 DEBUG nova.compute.manager [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 06 10:11:45 compute-0 nova_compute[254819]: 2025-12-06 10:11:45.749 254824 DEBUG nova.compute.manager [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
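The Running periodic task lines come from oslo.service walking the decorated methods on the compute manager. A sketch of how such a task is declared and driven; the 60-second spacing is an assumption, not Nova's configured value:

```python
# Sketch: declaring and running an oslo.service periodic task.
from oslo_config import cfg
from oslo_service import periodic_task

class Manager(periodic_task.PeriodicTasks):
    def __init__(self):
        super().__init__(cfg.CONF)

    @periodic_task.periodic_task(spacing=60)
    def _heal_instance_info_cache(self, context):
        # One instance's network info cache is refreshed per run, as the
        # "Starting heal instance info cache" line above shows.
        pass

mgr = Manager()
mgr.run_periodic_tasks(None)   # emits "Running periodic task ..." at DEBUG
```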
Dec 06 10:11:46 compute-0 nova_compute[254819]: 2025-12-06 10:11:46.093 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:11:46 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 06 10:11:46 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2501168822' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 10:11:46 compute-0 nova_compute[254819]: 2025-12-06 10:11:46.156 254824 DEBUG oslo_concurrency.processutils [None req-cf8327f4-d6f0-4585-9751-4d85e5e2283c 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.547s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 10:11:46 compute-0 nova_compute[254819]: 2025-12-06 10:11:46.165 254824 DEBUG nova.compute.provider_tree [None req-cf8327f4-d6f0-4585-9751-4d85e5e2283c 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Inventory has not changed in ProviderTree for provider: 06a9c7d1-c74c-47ea-9e97-16acfab6aa88 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 10:11:46 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 06 10:11:46 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3014585911' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 10:11:46 compute-0 nova_compute[254819]: 2025-12-06 10:11:46.185 254824 DEBUG nova.scheduler.client.report [None req-cf8327f4-d6f0-4585-9751-4d85e5e2283c 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Inventory has not changed for provider 06a9c7d1-c74c-47ea-9e97-16acfab6aa88 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 10:11:46 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 06 10:11:46 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3014585911' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 10:11:46 compute-0 nova_compute[254819]: 2025-12-06 10:11:46.226 254824 DEBUG oslo_concurrency.lockutils [None req-cf8327f4-d6f0-4585-9751-4d85e5e2283c 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.675s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:11:46 compute-0 nova_compute[254819]: 2025-12-06 10:11:46.261 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Acquiring lock "refresh_cache-467f8e9a-e166-409e-920c-689fea4ea3f6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 10:11:46 compute-0 nova_compute[254819]: 2025-12-06 10:11:46.262 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Acquired lock "refresh_cache-467f8e9a-e166-409e-920c-689fea4ea3f6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 10:11:46 compute-0 nova_compute[254819]: 2025-12-06 10:11:46.262 254824 DEBUG nova.network.neutron [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] [instance: 467f8e9a-e166-409e-920c-689fea4ea3f6] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec 06 10:11:46 compute-0 nova_compute[254819]: 2025-12-06 10:11:46.262 254824 DEBUG nova.objects.instance [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Lazy-loading 'info_cache' on Instance uuid 467f8e9a-e166-409e-920c-689fea4ea3f6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 10:11:46 compute-0 nova_compute[254819]: 2025-12-06 10:11:46.282 254824 INFO nova.scheduler.client.report [None req-cf8327f4-d6f0-4585-9751-4d85e5e2283c 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Deleted allocations for instance 467f8e9a-e166-409e-920c-689fea4ea3f6
Dec 06 10:11:46 compute-0 nova_compute[254819]: 2025-12-06 10:11:46.373 254824 DEBUG oslo_concurrency.lockutils [None req-cf8327f4-d6f0-4585-9751-4d85e5e2283c 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lock "467f8e9a-e166-409e-920c-689fea4ea3f6" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.701s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:11:46 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v920: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 11 KiB/s rd, 2.6 KiB/s wr, 16 op/s
Dec 06 10:11:46 compute-0 nova_compute[254819]: 2025-12-06 10:11:46.469 254824 DEBUG nova.network.neutron [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] [instance: 467f8e9a-e166-409e-920c-689fea4ea3f6] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 06 10:11:46 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:11:46 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d0003e40 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:11:46 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:11:46 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d4002e00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:11:46 compute-0 ceph-mon[74327]: from='client.? 192.168.122.101:0/343641707' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 10:11:46 compute-0 ceph-mon[74327]: from='client.? 192.168.122.100:0/2501168822' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 10:11:46 compute-0 ceph-mon[74327]: from='client.? 192.168.122.10:0/3014585911' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 10:11:46 compute-0 ceph-mon[74327]: from='client.? 192.168.122.10:0/3014585911' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 10:11:47 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:11:47 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65dc003e90 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:11:47 compute-0 nova_compute[254819]: 2025-12-06 10:11:47.610 254824 DEBUG nova.network.neutron [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] [instance: 467f8e9a-e166-409e-920c-689fea4ea3f6] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 10:11:47 compute-0 nova_compute[254819]: 2025-12-06 10:11:47.627 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Releasing lock "refresh_cache-467f8e9a-e166-409e-920c-689fea4ea3f6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 10:11:47 compute-0 nova_compute[254819]: 2025-12-06 10:11:47.628 254824 DEBUG nova.compute.manager [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] [instance: 467f8e9a-e166-409e-920c-689fea4ea3f6] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Dec 06 10:11:47 compute-0 nova_compute[254819]: 2025-12-06 10:11:47.628 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:11:47 compute-0 nova_compute[254819]: 2025-12-06 10:11:47.628 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:11:47 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e65a15d0 =====
Dec 06 10:11:47 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:11:47 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.003000079s ======
Dec 06 10:11:47 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:11:47.632 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.003000079s
Dec 06 10:11:47 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:11:47.636Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
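The alertmanager error above means both ceph-dashboard webhook receivers (compute-1 and compute-2 on port 8443) were unreachable, so the notification was dropped after two retries. A minimal stdlib sketch of a receiver that would accept that POST; the port and path are copied from the error message, everything else is illustrative:

```python
# Sketch: a webhook receiver for the failing Alertmanager POSTs above.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class Receiver(BaseHTTPRequestHandler):
    def do_POST(self):
        if self.path != '/api/prometheus_receiver':
            self.send_error(404)
            return
        length = int(self.headers.get('Content-Length', 0))
        payload = json.loads(self.rfile.read(length) or b'{}')
        print('received', len(payload.get('alerts', [])), 'alert(s)')
        self.send_response(200)
        self.end_headers()

HTTPServer(('0.0.0.0', 8443), Receiver).serve_forever()
```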
Dec 06 10:11:47 compute-0 radosgw[94308]: ====== req done req=0x7f53e65a15d0 op status=0 http_status=200 latency=0.004000106s ======
Dec 06 10:11:47 compute-0 radosgw[94308]: beast: 0x7f53e65a15d0: 192.168.122.102 - anonymous [06/Dec/2025:10:11:47.632 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.004000106s
Dec 06 10:11:47 compute-0 podman[271357]: 2025-12-06 10:11:47.706651817 +0000 UTC m=+0.048661481 container health_status ec1e66d2175087ad7a28a9a6b5a784e30128df3f5e268959d009e364cd5120f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec 06 10:11:47 compute-0 nova_compute[254819]: 2025-12-06 10:11:47.847 254824 DEBUG nova.compute.manager [req-ef2c6c0e-50fb-465a-bef3-0b72304ac37c req-39faf58d-07b8-4396-b68c-f44f761e8bab d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 467f8e9a-e166-409e-920c-689fea4ea3f6] Received event network-vif-plugged-ec2bc9a6-1578-4c92-b8c9-4d286a1a6f4b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 10:11:47 compute-0 nova_compute[254819]: 2025-12-06 10:11:47.847 254824 DEBUG oslo_concurrency.lockutils [req-ef2c6c0e-50fb-465a-bef3-0b72304ac37c req-39faf58d-07b8-4396-b68c-f44f761e8bab d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Acquiring lock "467f8e9a-e166-409e-920c-689fea4ea3f6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:11:47 compute-0 nova_compute[254819]: 2025-12-06 10:11:47.847 254824 DEBUG oslo_concurrency.lockutils [req-ef2c6c0e-50fb-465a-bef3-0b72304ac37c req-39faf58d-07b8-4396-b68c-f44f761e8bab d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Lock "467f8e9a-e166-409e-920c-689fea4ea3f6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:11:47 compute-0 nova_compute[254819]: 2025-12-06 10:11:47.847 254824 DEBUG oslo_concurrency.lockutils [req-ef2c6c0e-50fb-465a-bef3-0b72304ac37c req-39faf58d-07b8-4396-b68c-f44f761e8bab d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Lock "467f8e9a-e166-409e-920c-689fea4ea3f6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:11:47 compute-0 nova_compute[254819]: 2025-12-06 10:11:47.848 254824 DEBUG nova.compute.manager [req-ef2c6c0e-50fb-465a-bef3-0b72304ac37c req-39faf58d-07b8-4396-b68c-f44f761e8bab d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 467f8e9a-e166-409e-920c-689fea4ea3f6] No waiting events found dispatching network-vif-plugged-ec2bc9a6-1578-4c92-b8c9-4d286a1a6f4b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 10:11:47 compute-0 nova_compute[254819]: 2025-12-06 10:11:47.848 254824 WARNING nova.compute.manager [req-ef2c6c0e-50fb-465a-bef3-0b72304ac37c req-39faf58d-07b8-4396-b68c-f44f761e8bab d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 467f8e9a-e166-409e-920c-689fea4ea3f6] Received unexpected event network-vif-plugged-ec2bc9a6-1578-4c92-b8c9-4d286a1a6f4b for instance with vm_state deleted and task_state None.
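
The six nova-compute lines above show the whole external-event path for a VIF-plug that arrived too late: the manager receives network-vif-plugged, takes the per-instance "-events" lock, pops any waiter registered for that event, finds none, and downgrades the event to the WARNING on the last line because the instance's vm_state is already deleted. A minimal sketch of that pop-or-warn pattern, with illustrative names rather than nova's actual internals:

    import threading

    class InstanceEvents:
        """Registry of waiters keyed by (instance_uuid, event_name)."""
        def __init__(self):
            self._lock = threading.Lock()   # stands in for the "<uuid>-events" lock
            self._waiters = {}

        def pop_instance_event(self, instance_uuid, event_name):
            with self._lock:                # the acquire/release pair seen at DEBUG
                return self._waiters.pop((instance_uuid, event_name), None)

    def handle_external_event(events, instance, event_name):
        waiter = events.pop_instance_event(instance["uuid"], event_name)
        if waiter is None:
            # no one is waiting: the instance was already torn down, so the
            # event is logged as unexpected instead of being dispatched
            print(f"Received unexpected event {event_name} for instance "
                  f"with vm_state {instance['vm_state']}")
        else:
            waiter.set()                    # wake the thread blocked on this event

    handle_external_event(
        InstanceEvents(),
        {"uuid": "467f8e9a-e166-409e-920c-689fea4ea3f6", "vm_state": "deleted"},
        "network-vif-plugged")
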
Dec 06 10:11:47 compute-0 ceph-mon[74327]: pgmap v920: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 11 KiB/s rd, 2.6 KiB/s wr, 16 op/s
Dec 06 10:11:47 compute-0 ceph-mon[74327]: from='client.? 192.168.122.101:0/1506613304' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 10:11:48 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v921: 337 pgs: 337 active+clean; 41 MiB data, 261 MiB used, 60 GiB / 60 GiB avail; 30 KiB/s rd, 3.7 KiB/s wr, 44 op/s
Dec 06 10:11:48 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:11:48 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c00a7e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:11:48 compute-0 nova_compute[254819]: 2025-12-06 10:11:48.656 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:11:48 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:11:48 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d0003e40 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:11:48 compute-0 nova_compute[254819]: 2025-12-06 10:11:48.748 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:11:48 compute-0 nova_compute[254819]: 2025-12-06 10:11:48.749 254824 DEBUG nova.compute.manager [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 06 10:11:48 compute-0 nova_compute[254819]: 2025-12-06 10:11:48.985 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:11:49 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:11:49.018Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 06 10:11:49 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:11:49 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d4002e00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:11:49 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:11:49 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:11:49 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:11:49.637 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:11:49 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e65a15d0 =====
Dec 06 10:11:49 compute-0 radosgw[94308]: ====== req done req=0x7f53e65a15d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:11:49 compute-0 radosgw[94308]: beast: 0x7f53e65a15d0: 192.168.122.100 - anonymous [06/Dec/2025:10:11:49.639 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
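
The anonymous "HEAD / HTTP/1.0" requests above land every two seconds, alternating between 192.168.122.100 and 192.168.122.102 — the signature of peer health probes against the RGW beast frontend rather than user traffic. A sketch of such a probe follows; the target host and port are assumptions (7480 is beast's default), since the log does not record which local endpoint radosgw listens on:

    import http.client
    import time

    def probe(host="compute-0.ctlplane.example.com", port=7480):
        conn = http.client.HTTPConnection(host, port, timeout=2)
        try:
            conn.request("HEAD", "/")            # the request the beast lines record
            return conn.getresponse().status     # 200 from a healthy RGW
        finally:
            conn.close()

    for _ in range(3):
        print(probe())
        time.sleep(2)                            # matches the observed cadence
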
Dec 06 10:11:49 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:11:49 compute-0 ceph-mon[74327]: pgmap v921: 337 pgs: 337 active+clean; 41 MiB data, 261 MiB used, 60 GiB / 60 GiB avail; 30 KiB/s rd, 3.7 KiB/s wr, 44 op/s
Dec 06 10:11:49 compute-0 ceph-mon[74327]: from='client.? 192.168.122.102:0/468755846' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 10:11:49 compute-0 ceph-mon[74327]: from='client.? 192.168.122.102:0/1105957875' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 10:11:50 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v922: 337 pgs: 337 active+clean; 41 MiB data, 261 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Dec 06 10:11:50 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:11:50 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65dc003e90 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:11:50 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:11:50 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c00a7e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:11:50 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:11:50] "GET /metrics HTTP/1.1" 200 48474 "" "Prometheus/2.51.0"
Dec 06 10:11:50 compute-0 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:11:50] "GET /metrics HTTP/1.1" 200 48474 "" "Prometheus/2.51.0"
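
The paired GET /metrics lines above (one from the container's stdout, one from the mgr's own log channel) are Prometheus scraping ceph-mgr's prometheus module every ten seconds. The same endpoint can be fetched by hand; port 9283 is the module's default and is an assumption here, since the log records only the source address:

    import urllib.request

    url = "http://192.168.122.100:9283/metrics"   # assumed port (module default)
    text = urllib.request.urlopen(url, timeout=5).read().decode()
    print(len(text), "bytes of metrics")          # ~48 KB per the access log
    print("\n".join(text.splitlines()[:5]))       # first few exposition lines
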
Dec 06 10:11:51 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:11:51 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d0003e40 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:11:51 compute-0 nova_compute[254819]: 2025-12-06 10:11:51.094 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:11:51 compute-0 nova_compute[254819]: 2025-12-06 10:11:51.272 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:11:51 compute-0 nova_compute[254819]: 2025-12-06 10:11:51.377 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:11:51 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:11:51 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e65a15d0 =====
Dec 06 10:11:51 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 10:11:51 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:11:51.639 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 10:11:51 compute-0 radosgw[94308]: ====== req done req=0x7f53e65a15d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:11:51 compute-0 radosgw[94308]: beast: 0x7f53e65a15d0: 192.168.122.102 - anonymous [06/Dec/2025:10:11:51.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:11:51 compute-0 ceph-mon[74327]: pgmap v922: 337 pgs: 337 active+clean; 41 MiB data, 261 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Dec 06 10:11:52 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v923: 337 pgs: 337 active+clean; 41 MiB data, 261 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Dec 06 10:11:52 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:11:52 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d4002e00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:11:52 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:11:52 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65dc003e90 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:11:53 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:11:53 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c00a7e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:11:53 compute-0 sudo[271386]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 10:11:53 compute-0 sudo[271386]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:11:53 compute-0 sudo[271386]: pam_unix(sudo:session): session closed for user root
Dec 06 10:11:53 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:11:53 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 10:11:53 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:11:53.641 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 10:11:53 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:11:53 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 10:11:53 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:11:53.642 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 10:11:53 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 06 10:11:53 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 10:11:53 compute-0 ceph-mon[74327]: pgmap v923: 337 pgs: 337 active+clean; 41 MiB data, 261 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Dec 06 10:11:53 compute-0 nova_compute[254819]: 2025-12-06 10:11:53.988 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:11:54 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 10:11:54 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 10:11:54 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 10:11:54 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 10:11:54 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 10:11:54 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 10:11:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:11:54.243 162267 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:11:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:11:54.243 162267 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:11:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:11:54.243 162267 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
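
The acquiring/acquired/released trio above is the DEBUG trace oslo.concurrency emits around any synchronized section, here neutron's ProcessMonitor._check_child_processes. The conventional way such a section is written (a sketch, not neutron's actual code):

    from oslo_concurrency import lockutils

    @lockutils.synchronized('_check_child_processes')
    def _check_child_processes():
        # runs with the named lock held; entering and leaving produce the
        # "acquired ... waited Ns" and "released ... held Ns" DEBUG lines
        pass

    _check_child_processes()
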
Dec 06 10:11:54 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v924: 337 pgs: 337 active+clean; 41 MiB data, 261 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Dec 06 10:11:54 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:11:54 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d0003e40 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:11:54 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:11:54 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d4002e00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:11:54 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:11:54 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 10:11:55 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:11:55 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65dc003e90 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:11:55 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:11:55 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:11:55 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:11:55.643 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:11:55 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:11:55 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:11:55 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:11:55.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:11:56 compute-0 ceph-mon[74327]: pgmap v924: 337 pgs: 337 active+clean; 41 MiB data, 261 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Dec 06 10:11:56 compute-0 nova_compute[254819]: 2025-12-06 10:11:56.127 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:11:56 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v925: 337 pgs: 337 active+clean; 41 MiB data, 261 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Dec 06 10:11:56 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:11:56 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c00a7e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:11:56 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:11:56 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d0003e40 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:11:57 compute-0 ceph-mon[74327]: pgmap v925: 337 pgs: 337 active+clean; 41 MiB data, 261 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Dec 06 10:11:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:11:57 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d4002e00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:11:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:11:57.637Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 06 10:11:57 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:11:57 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:11:57 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:11:57.645 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:11:57 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:11:57 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:11:57 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:11:57.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:11:58 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v926: 337 pgs: 337 active+clean; 41 MiB data, 261 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Dec 06 10:11:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:11:58 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65dc003e90 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:11:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:11:58 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65dc003e90 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:11:58 compute-0 nova_compute[254819]: 2025-12-06 10:11:58.944 254824 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765015903.9427285, 467f8e9a-e166-409e-920c-689fea4ea3f6 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 10:11:58 compute-0 nova_compute[254819]: 2025-12-06 10:11:58.945 254824 INFO nova.compute.manager [-] [instance: 467f8e9a-e166-409e-920c-689fea4ea3f6] VM Stopped (Lifecycle Event)
Dec 06 10:11:58 compute-0 nova_compute[254819]: 2025-12-06 10:11:58.971 254824 DEBUG nova.compute.manager [None req-40ed7fed-b130-48b0-af51-6aaa006778d1 - - - - - -] [instance: 467f8e9a-e166-409e-920c-689fea4ea3f6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 10:11:58 compute-0 nova_compute[254819]: 2025-12-06 10:11:58.989 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:11:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:11:59.018Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 06 10:11:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:11:59 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6608001ff0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:11:59 compute-0 ceph-mon[74327]: pgmap v926: 337 pgs: 337 active+clean; 41 MiB data, 261 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Dec 06 10:11:59 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:11:59 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:11:59 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:11:59.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:11:59 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:11:59 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:11:59 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:11:59.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:11:59 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:12:00 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v927: 337 pgs: 337 active+clean; 41 MiB data, 261 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 10:12:00 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:12:00 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d4002e00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:12:00 compute-0 radosgw[94308]: INFO: RGWReshardLock::lock found lock on reshard.0000000001 to be held by another RGW process; skipping for now
Dec 06 10:12:00 compute-0 radosgw[94308]: INFO: RGWReshardLock::lock found lock on reshard.0000000003 to be held by another RGW process; skipping for now
Dec 06 10:12:00 compute-0 radosgw[94308]: INFO: RGWReshardLock::lock found lock on reshard.0000000005 to be held by another RGW process; skipping for now
Dec 06 10:12:00 compute-0 radosgw[94308]: INFO: RGWReshardLock::lock found lock on reshard.0000000007 to be held by another RGW process; skipping for now
Dec 06 10:12:00 compute-0 radosgw[94308]: INFO: RGWReshardLock::lock found lock on reshard.0000000009 to be held by another RGW process; skipping for now
Dec 06 10:12:00 compute-0 radosgw[94308]: INFO: RGWReshardLock::lock found lock on reshard.0000000011 to be held by another RGW process; skipping for now
Dec 06 10:12:00 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:12:00 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d4002e00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:12:00 compute-0 radosgw[94308]: INFO: RGWReshardLock::lock found lock on reshard.0000000013 to be held by another RGW process; skipping for now
Dec 06 10:12:00 compute-0 radosgw[94308]: INFO: RGWReshardLock::lock found lock on reshard.0000000015 to be held by another RGW process; skipping for now
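
The RGWReshardLock INFO lines above are this radosgw's background reshard worker walking the reshard-log shards (reshard.0000000001, 0000000003, ...) and finding each lock already held by a peer gateway, so it skips them — expected behavior when several RGW instances share the work. The queue itself can be inspected with radosgw-admin, sketched here under the assumption that admin credentials are available on the host:

    import json
    import subprocess

    # "radosgw-admin reshard list" prints a JSON array of buckets queued
    # for resharding; an empty array means the workers have drained the log
    entries = json.loads(subprocess.check_output(
        ["radosgw-admin", "reshard", "list"]))
    print(f"{len(entries)} bucket(s) queued for resharding")
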
Dec 06 10:12:00 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:12:00] "GET /metrics HTTP/1.1" 200 48460 "" "Prometheus/2.51.0"
Dec 06 10:12:00 compute-0 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:12:00] "GET /metrics HTTP/1.1" 200 48460 "" "Prometheus/2.51.0"
Dec 06 10:12:01 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:12:01 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d4002e00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:12:01 compute-0 nova_compute[254819]: 2025-12-06 10:12:01.129 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:12:01 compute-0 ceph-mon[74327]: pgmap v927: 337 pgs: 337 active+clean; 41 MiB data, 261 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 10:12:01 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:12:01 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:12:01 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:12:01.648 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:12:01 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:12:01 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:12:01 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:12:01.654 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:12:02 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v928: 337 pgs: 337 active+clean; 41 MiB data, 261 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 10:12:02 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:12:02 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6608001ff0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:12:02 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:12:02 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d4002e00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:12:03 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:12:03 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d4002e00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:12:03 compute-0 ceph-mon[74327]: pgmap v928: 337 pgs: 337 active+clean; 41 MiB data, 261 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 10:12:03 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:12:03 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:12:03 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:12:03.650 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:12:03 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:12:03 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:12:03 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:12:03.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:12:03 compute-0 nova_compute[254819]: 2025-12-06 10:12:03.991 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:12:04 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v929: 337 pgs: 337 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 94 KiB/s rd, 0 B/s wr, 156 op/s
Dec 06 10:12:04 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:12:04 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65dc003e90 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:12:04 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:12:04 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6608001ff0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:12:04 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:12:05 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:12:05 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c00a7e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:12:05 compute-0 ceph-mon[74327]: pgmap v929: 337 pgs: 337 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 94 KiB/s rd, 0 B/s wr, 156 op/s
Dec 06 10:12:05 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:12:05 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:12:05 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:12:05.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:12:05 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:12:05 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:12:05 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:12:05.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:12:06 compute-0 nova_compute[254819]: 2025-12-06 10:12:06.132 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:12:06 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v930: 337 pgs: 337 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 94 KiB/s rd, 0 B/s wr, 156 op/s
Dec 06 10:12:06 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:12:06 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d4002e00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:12:06 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:12:06 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65dc003e90 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:12:07 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-haproxy-nfs-cephfs-compute-0-fzuvue[96127]: [WARNING] 339/101207 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Dec 06 10:12:07 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:12:07 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6608001ff0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:12:07 compute-0 podman[271429]: 2025-12-06 10:12:07.437125669 +0000 UTC m=+0.062777359 container health_status a9cf33c1bf7891b3fdd9db4717ad5f3587fe9e5acf9171920c6d6334b70a502a (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec 06 10:12:07 compute-0 ceph-mon[74327]: pgmap v930: 337 pgs: 337 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 94 KiB/s rd, 0 B/s wr, 156 op/s
Dec 06 10:12:07 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:12:07.639Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 06 10:12:07 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:12:07 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 10:12:07 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:12:07.654 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 10:12:07 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:12:07 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:12:07 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:12:07.663 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:12:07 compute-0 nova_compute[254819]: 2025-12-06 10:12:07.865 254824 DEBUG oslo_concurrency.lockutils [None req-f695b3bd-187c-4c70-9cae-541f02555ed2 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Acquiring lock "38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:12:07 compute-0 nova_compute[254819]: 2025-12-06 10:12:07.865 254824 DEBUG oslo_concurrency.lockutils [None req-f695b3bd-187c-4c70-9cae-541f02555ed2 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lock "38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:12:07 compute-0 nova_compute[254819]: 2025-12-06 10:12:07.888 254824 DEBUG nova.compute.manager [None req-f695b3bd-187c-4c70-9cae-541f02555ed2 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 06 10:12:07 compute-0 nova_compute[254819]: 2025-12-06 10:12:07.958 254824 DEBUG oslo_concurrency.lockutils [None req-f695b3bd-187c-4c70-9cae-541f02555ed2 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:12:07 compute-0 nova_compute[254819]: 2025-12-06 10:12:07.959 254824 DEBUG oslo_concurrency.lockutils [None req-f695b3bd-187c-4c70-9cae-541f02555ed2 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:12:07 compute-0 nova_compute[254819]: 2025-12-06 10:12:07.966 254824 DEBUG nova.virt.hardware [None req-f695b3bd-187c-4c70-9cae-541f02555ed2 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 06 10:12:07 compute-0 nova_compute[254819]: 2025-12-06 10:12:07.967 254824 INFO nova.compute.claims [None req-f695b3bd-187c-4c70-9cae-541f02555ed2 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971] Claim successful on node compute-0.ctlplane.example.com
Dec 06 10:12:08 compute-0 nova_compute[254819]: 2025-12-06 10:12:08.074 254824 DEBUG oslo_concurrency.processutils [None req-f695b3bd-187c-4c70-9cae-541f02555ed2 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 10:12:08 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v931: 337 pgs: 337 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 94 KiB/s rd, 0 B/s wr, 156 op/s
Dec 06 10:12:08 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 06 10:12:08 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2628881591' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 10:12:08 compute-0 nova_compute[254819]: 2025-12-06 10:12:08.531 254824 DEBUG oslo_concurrency.processutils [None req-f695b3bd-187c-4c70-9cae-541f02555ed2 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 10:12:08 compute-0 nova_compute[254819]: 2025-12-06 10:12:08.538 254824 DEBUG nova.compute.provider_tree [None req-f695b3bd-187c-4c70-9cae-541f02555ed2 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Inventory has not changed in ProviderTree for provider: 06a9c7d1-c74c-47ea-9e97-16acfab6aa88 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 10:12:08 compute-0 nova_compute[254819]: 2025-12-06 10:12:08.557 254824 DEBUG nova.scheduler.client.report [None req-f695b3bd-187c-4c70-9cae-541f02555ed2 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Inventory has not changed for provider 06a9c7d1-c74c-47ea-9e97-16acfab6aa88 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
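
The inventory nova reports to placement at 10:12:08.557 fixes the schedulable capacity of this node: placement derives capacity as (total - reserved) * allocation_ratio per resource class. Worked out from the exact values in the log:

    inventory = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7680, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 59,   'reserved': 1,   'allocation_ratio': 0.9},
    }

    for rc, inv in inventory.items():
        capacity = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
        print(rc, capacity)   # VCPU 32.0, MEMORY_MB 7168.0, DISK_GB 52.2
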
Dec 06 10:12:08 compute-0 nova_compute[254819]: 2025-12-06 10:12:08.588 254824 DEBUG oslo_concurrency.lockutils [None req-f695b3bd-187c-4c70-9cae-541f02555ed2 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.629s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:12:08 compute-0 nova_compute[254819]: 2025-12-06 10:12:08.589 254824 DEBUG nova.compute.manager [None req-f695b3bd-187c-4c70-9cae-541f02555ed2 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 06 10:12:08 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:12:08 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c00a7e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:12:08 compute-0 ceph-mon[74327]: from='client.? 192.168.122.100:0/2628881591' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 10:12:08 compute-0 nova_compute[254819]: 2025-12-06 10:12:08.667 254824 DEBUG nova.compute.manager [None req-f695b3bd-187c-4c70-9cae-541f02555ed2 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 06 10:12:08 compute-0 nova_compute[254819]: 2025-12-06 10:12:08.668 254824 DEBUG nova.network.neutron [None req-f695b3bd-187c-4c70-9cae-541f02555ed2 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 06 10:12:08 compute-0 nova_compute[254819]: 2025-12-06 10:12:08.686 254824 INFO nova.virt.libvirt.driver [None req-f695b3bd-187c-4c70-9cae-541f02555ed2 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 06 10:12:08 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:12:08 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d4002e00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:12:08 compute-0 nova_compute[254819]: 2025-12-06 10:12:08.704 254824 DEBUG nova.compute.manager [None req-f695b3bd-187c-4c70-9cae-541f02555ed2 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 06 10:12:08 compute-0 nova_compute[254819]: 2025-12-06 10:12:08.800 254824 DEBUG nova.compute.manager [None req-f695b3bd-187c-4c70-9cae-541f02555ed2 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 06 10:12:08 compute-0 nova_compute[254819]: 2025-12-06 10:12:08.802 254824 DEBUG nova.virt.libvirt.driver [None req-f695b3bd-187c-4c70-9cae-541f02555ed2 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 06 10:12:08 compute-0 nova_compute[254819]: 2025-12-06 10:12:08.803 254824 INFO nova.virt.libvirt.driver [None req-f695b3bd-187c-4c70-9cae-541f02555ed2 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971] Creating image(s)
Dec 06 10:12:08 compute-0 nova_compute[254819]: 2025-12-06 10:12:08.835 254824 DEBUG nova.storage.rbd_utils [None req-f695b3bd-187c-4c70-9cae-541f02555ed2 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] rbd image 38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 10:12:08 compute-0 nova_compute[254819]: 2025-12-06 10:12:08.866 254824 DEBUG nova.storage.rbd_utils [None req-f695b3bd-187c-4c70-9cae-541f02555ed2 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] rbd image 38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 10:12:08 compute-0 nova_compute[254819]: 2025-12-06 10:12:08.891 254824 DEBUG nova.storage.rbd_utils [None req-f695b3bd-187c-4c70-9cae-541f02555ed2 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] rbd image 38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 10:12:08 compute-0 nova_compute[254819]: 2025-12-06 10:12:08.895 254824 DEBUG oslo_concurrency.processutils [None req-f695b3bd-187c-4c70-9cae-541f02555ed2 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/1b7208203e670301d076a006cb3364d3eb842050 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 10:12:08 compute-0 nova_compute[254819]: 2025-12-06 10:12:08.961 254824 DEBUG oslo_concurrency.processutils [None req-f695b3bd-187c-4c70-9cae-541f02555ed2 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/1b7208203e670301d076a006cb3364d3eb842050 --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 10:12:08 compute-0 nova_compute[254819]: 2025-12-06 10:12:08.962 254824 DEBUG oslo_concurrency.lockutils [None req-f695b3bd-187c-4c70-9cae-541f02555ed2 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Acquiring lock "1b7208203e670301d076a006cb3364d3eb842050" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:12:08 compute-0 nova_compute[254819]: 2025-12-06 10:12:08.963 254824 DEBUG oslo_concurrency.lockutils [None req-f695b3bd-187c-4c70-9cae-541f02555ed2 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lock "1b7208203e670301d076a006cb3364d3eb842050" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:12:08 compute-0 nova_compute[254819]: 2025-12-06 10:12:08.964 254824 DEBUG oslo_concurrency.lockutils [None req-f695b3bd-187c-4c70-9cae-541f02555ed2 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lock "1b7208203e670301d076a006cb3364d3eb842050" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:12:08 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 06 10:12:08 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 10:12:08 compute-0 nova_compute[254819]: 2025-12-06 10:12:08.990 254824 DEBUG nova.storage.rbd_utils [None req-f695b3bd-187c-4c70-9cae-541f02555ed2 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] rbd image 38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 10:12:08 compute-0 nova_compute[254819]: 2025-12-06 10:12:08.994 254824 DEBUG oslo_concurrency.processutils [None req-f695b3bd-187c-4c70-9cae-541f02555ed2 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/1b7208203e670301d076a006cb3364d3eb842050 38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 10:12:09 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:12:09.019Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 06 10:12:09 compute-0 nova_compute[254819]: 2025-12-06 10:12:09.020 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:12:09 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:12:09 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65dc003e90 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:12:09 compute-0 nova_compute[254819]: 2025-12-06 10:12:09.272 254824 DEBUG oslo_concurrency.processutils [None req-f695b3bd-187c-4c70-9cae-541f02555ed2 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/1b7208203e670301d076a006cb3364d3eb842050 38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.278s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 10:12:09 compute-0 nova_compute[254819]: 2025-12-06 10:12:09.367 254824 DEBUG nova.storage.rbd_utils [None req-f695b3bd-187c-4c70-9cae-541f02555ed2 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] resizing rbd image 38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 06 10:12:09 compute-0 nova_compute[254819]: 2025-12-06 10:12:09.494 254824 DEBUG nova.objects.instance [None req-f695b3bd-187c-4c70-9cae-541f02555ed2 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lazy-loading 'migration_context' on Instance uuid 38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 10:12:09 compute-0 nova_compute[254819]: 2025-12-06 10:12:09.517 254824 DEBUG nova.virt.libvirt.driver [None req-f695b3bd-187c-4c70-9cae-541f02555ed2 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
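
Between 10:12:08.895 and 10:12:09.517 the spawn path materialized the root disk: qemu-img info on the cached base image, rbd import of that file into the vms pool, then a resize to 1073741824 bytes (1 GiB, evidently the flavor's root-disk size). The first two steps replay exactly as logged; the resize is sketched via the rbd CLI, whereas nova itself resizes through the librbd Python binding:

    import json
    import subprocess

    base = "/var/lib/nova/instances/_base/1b7208203e670301d076a006cb3364d3eb842050"
    image = "38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971_disk"

    # 1) inspect the cached base image (command taken verbatim from the log)
    info = json.loads(subprocess.check_output(
        ["qemu-img", "info", base, "--force-share", "--output=json"]))
    print(info["format"], info["virtual-size"])

    # 2) import it into the Ceph "vms" pool as the instance's root disk
    subprocess.check_call(
        ["rbd", "import", "--pool", "vms", base, image,
         "--image-format=2", "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"])

    # 3) grow it to the flavor's root disk; 1073741824 B = 1024 MiB
    subprocess.check_call(
        ["rbd", "resize", "--size", "1024", f"vms/{image}",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"])
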
Dec 06 10:12:09 compute-0 nova_compute[254819]: 2025-12-06 10:12:09.518 254824 DEBUG nova.virt.libvirt.driver [None req-f695b3bd-187c-4c70-9cae-541f02555ed2 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971] Ensure instance console log exists: /var/lib/nova/instances/38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 06 10:12:09 compute-0 nova_compute[254819]: 2025-12-06 10:12:09.519 254824 DEBUG oslo_concurrency.lockutils [None req-f695b3bd-187c-4c70-9cae-541f02555ed2 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:12:09 compute-0 nova_compute[254819]: 2025-12-06 10:12:09.519 254824 DEBUG oslo_concurrency.lockutils [None req-f695b3bd-187c-4c70-9cae-541f02555ed2 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:12:09 compute-0 nova_compute[254819]: 2025-12-06 10:12:09.520 254824 DEBUG oslo_concurrency.lockutils [None req-f695b3bd-187c-4c70-9cae-541f02555ed2 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:12:09 compute-0 nova_compute[254819]: 2025-12-06 10:12:09.529 254824 DEBUG nova.policy [None req-f695b3bd-187c-4c70-9cae-541f02555ed2 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '03615580775245e6ae335ee9d785611f', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '92b402c8d3e2476abc98be42a1e6d34e', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
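The failed network:attach_external_network check is expected for a token carrying only the reader and member roles; Nova probes it to decide whether the user may attach to external networks and proceeds quietly when it is denied. A sketch of the same decision with oslo.policy (the enforcer wiring here is hypothetical; Nova registers its policy defaults elsewhere):

    from oslo_config import cfg
    from oslo_policy import policy

    # Hypothetical standalone enforcer, for illustration only.
    enforcer = policy.Enforcer(cfg.CONF)
    enforcer.register_default(policy.RuleDefault(
        "network:attach_external_network", "role:admin"))

    creds = {"roles": ["reader", "member"], "is_admin": False}
    # enforce() returns False instead of raising because do_raise defaults
    # to False; that maps to the "Policy check ... failed" DEBUG line.
    allowed = enforcer.enforce("network:attach_external_network", {}, creds)
    print(allowed)   # False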
Dec 06 10:12:09 compute-0 ceph-mon[74327]: pgmap v931: 337 pgs: 337 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 94 KiB/s rd, 0 B/s wr, 156 op/s
Dec 06 10:12:09 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 10:12:09 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:12:09 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:12:09 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:12:09.657 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:12:09 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:12:09 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:12:09 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:12:09.668 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
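The paired "starting new request" / "req done" / beast lines are anonymous HEAD / probes arriving every couple of seconds from 192.168.122.100 and .102, the signature of load-balancer health checks against the radosgw beast frontend. An equivalent probe (both the target address and the port are assumptions; neither is recorded in these lines):

    import http.client

    # Endpoint assumed: radosgw listening on this host, port 8080.
    conn = http.client.HTTPConnection("192.168.122.100", 8080, timeout=2)
    conn.request("HEAD", "/")         # anonymous: no auth headers at all
    print(conn.getresponse().status)  # expect 200 with an empty body
    conn.close()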
Dec 06 10:12:09 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:12:10 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v932: 337 pgs: 337 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 94 KiB/s rd, 0 B/s wr, 156 op/s
Dec 06 10:12:10 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:12:10 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6608001ff0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:12:10 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:12:10 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c00a7e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:12:10 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:12:10] "GET /metrics HTTP/1.1" 200 48458 "" "Prometheus/2.51.0"
Dec 06 10:12:10 compute-0 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:12:10] "GET /metrics HTTP/1.1" 200 48458 "" "Prometheus/2.51.0"
Dec 06 10:12:11 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:12:11 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d4002e00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:12:11 compute-0 nova_compute[254819]: 2025-12-06 10:12:11.135 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:12:11 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:12:11 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:12:11 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:12:11.660 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:12:11 compute-0 ceph-mon[74327]: pgmap v932: 337 pgs: 337 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 94 KiB/s rd, 0 B/s wr, 156 op/s
Dec 06 10:12:11 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:12:11 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:12:11 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:12:11.673 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:12:11 compute-0 nova_compute[254819]: 2025-12-06 10:12:11.703 254824 DEBUG nova.network.neutron [None req-f695b3bd-187c-4c70-9cae-541f02555ed2 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971] Successfully updated port: 4c8ce68f-8ad3-4266-aa7d-5ad833b39c1e _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 06 10:12:11 compute-0 nova_compute[254819]: 2025-12-06 10:12:11.719 254824 DEBUG oslo_concurrency.lockutils [None req-f695b3bd-187c-4c70-9cae-541f02555ed2 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Acquiring lock "refresh_cache-38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 10:12:11 compute-0 nova_compute[254819]: 2025-12-06 10:12:11.719 254824 DEBUG oslo_concurrency.lockutils [None req-f695b3bd-187c-4c70-9cae-541f02555ed2 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Acquired lock "refresh_cache-38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 10:12:11 compute-0 nova_compute[254819]: 2025-12-06 10:12:11.720 254824 DEBUG nova.network.neutron [None req-f695b3bd-187c-4c70-9cae-541f02555ed2 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 06 10:12:11 compute-0 nova_compute[254819]: 2025-12-06 10:12:11.824 254824 DEBUG nova.compute.manager [req-c488b580-df7f-43bf-a095-bd121577d26c req-2a8b9b3f-644f-41fa-a808-31d03a16e7cd d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971] Received event network-changed-4c8ce68f-8ad3-4266-aa7d-5ad833b39c1e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 10:12:11 compute-0 nova_compute[254819]: 2025-12-06 10:12:11.825 254824 DEBUG nova.compute.manager [req-c488b580-df7f-43bf-a095-bd121577d26c req-2a8b9b3f-644f-41fa-a808-31d03a16e7cd d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971] Refreshing instance network info cache due to event network-changed-4c8ce68f-8ad3-4266-aa7d-5ad833b39c1e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 10:12:11 compute-0 nova_compute[254819]: 2025-12-06 10:12:11.825 254824 DEBUG oslo_concurrency.lockutils [req-c488b580-df7f-43bf-a095-bd121577d26c req-2a8b9b3f-644f-41fa-a808-31d03a16e7cd d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Acquiring lock "refresh_cache-38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 10:12:12 compute-0 nova_compute[254819]: 2025-12-06 10:12:12.294 254824 DEBUG nova.network.neutron [None req-f695b3bd-187c-4c70-9cae-541f02555ed2 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
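Two contexts are racing here: the spawn path (req-f695b3bd) and the external-event handler (req-c488b580) both need the instance's network info cache, so each serializes on the same per-instance lock before touching it. The pattern, reduced to its core (a sketch, not Nova's code):

    from oslo_concurrency import lockutils

    uuid = "38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971"

    # Both the spawn path and the network-changed handler take this lock
    # name, so cache reads/writes for one instance never interleave.
    with lockutils.lock(f"refresh_cache-{uuid}"):
        pass  # query Neutron for the port, rewrite instance_info_cache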
Dec 06 10:12:12 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v933: 337 pgs: 337 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 94 KiB/s rd, 0 B/s wr, 156 op/s
Dec 06 10:12:12 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:12:12 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65dc003e90 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:12:12 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:12:12 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65dc003e90 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:12:13 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:12:13 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c00a7e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:12:13 compute-0 sudo[271644]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 10:12:13 compute-0 sudo[271644]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:12:13 compute-0 sudo[271644]: pam_unix(sudo:session): session closed for user root
Dec 06 10:12:13 compute-0 podman[271668]: 2025-12-06 10:12:13.337432292 +0000 UTC m=+0.122420802 container health_status ed08b04f63d3695f0aa2f04fcc706897af4aa24f286fa3cd10a4323ee2c868ab (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller)
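The podman event above is a passing periodic healthcheck for the ovn_controller container; per its config_data, the check is the /openstack/healthcheck script bind-mounted into the container. The current state can be read back the same way the event reports it (a sketch):

    import json
    import subprocess

    # Mirrors health_status=healthy / health_failing_streak=0 in the event.
    out = subprocess.run(
        ["podman", "inspect", "ovn_controller",
         "--format", "{{json .State.Health}}"],
        capture_output=True, text=True, check=True).stdout
    health = json.loads(out)
    print(health["Status"], health["FailingStreak"])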
Dec 06 10:12:13 compute-0 nova_compute[254819]: 2025-12-06 10:12:13.545 254824 DEBUG nova.network.neutron [None req-f695b3bd-187c-4c70-9cae-541f02555ed2 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971] Updating instance_info_cache with network_info: [{"id": "4c8ce68f-8ad3-4266-aa7d-5ad833b39c1e", "address": "fa:16:3e:6f:25:fa", "network": {"id": "c2ce21d9-e711-470f-89f6-0db58ded70b9", "bridge": "br-int", "label": "tempest-network-smoke--1291548226", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4c8ce68f-8a", "ovs_interfaceid": "4c8ce68f-8ad3-4266-aa7d-5ad833b39c1e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 10:12:13 compute-0 nova_compute[254819]: 2025-12-06 10:12:13.567 254824 DEBUG oslo_concurrency.lockutils [None req-f695b3bd-187c-4c70-9cae-541f02555ed2 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Releasing lock "refresh_cache-38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 10:12:13 compute-0 nova_compute[254819]: 2025-12-06 10:12:13.567 254824 DEBUG nova.compute.manager [None req-f695b3bd-187c-4c70-9cae-541f02555ed2 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971] Instance network_info: |[{"id": "4c8ce68f-8ad3-4266-aa7d-5ad833b39c1e", "address": "fa:16:3e:6f:25:fa", "network": {"id": "c2ce21d9-e711-470f-89f6-0db58ded70b9", "bridge": "br-int", "label": "tempest-network-smoke--1291548226", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4c8ce68f-8a", "ovs_interfaceid": "4c8ce68f-8ad3-4266-aa7d-5ad833b39c1e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 06 10:12:13 compute-0 nova_compute[254819]: 2025-12-06 10:12:13.567 254824 DEBUG oslo_concurrency.lockutils [req-c488b580-df7f-43bf-a095-bd121577d26c req-2a8b9b3f-644f-41fa-a808-31d03a16e7cd d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Acquired lock "refresh_cache-38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 10:12:13 compute-0 nova_compute[254819]: 2025-12-06 10:12:13.567 254824 DEBUG nova.network.neutron [req-c488b580-df7f-43bf-a095-bd121577d26c req-2a8b9b3f-644f-41fa-a808-31d03a16e7cd d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971] Refreshing network info cache for port 4c8ce68f-8ad3-4266-aa7d-5ad833b39c1e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 06 10:12:13 compute-0 nova_compute[254819]: 2025-12-06 10:12:13.570 254824 DEBUG nova.virt.libvirt.driver [None req-f695b3bd-187c-4c70-9cae-541f02555ed2 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971] Start _get_guest_xml network_info=[{"id": "4c8ce68f-8ad3-4266-aa7d-5ad833b39c1e", "address": "fa:16:3e:6f:25:fa", "network": {"id": "c2ce21d9-e711-470f-89f6-0db58ded70b9", "bridge": "br-int", "label": "tempest-network-smoke--1291548226", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4c8ce68f-8a", "ovs_interfaceid": "4c8ce68f-8ad3-4266-aa7d-5ad833b39c1e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T10:04:42Z,direct_url=<?>,disk_format='qcow2',id=9489b8a5-a798-4e26-87f9-59bb1eb2e6fd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3e0ab101ca7547d4a515169a0f2edef3',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T10:04:45Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'encrypted': False, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '9489b8a5-a798-4e26-87f9-59bb1eb2e6fd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 06 10:12:13 compute-0 nova_compute[254819]: 2025-12-06 10:12:13.575 254824 WARNING nova.virt.libvirt.driver [None req-f695b3bd-187c-4c70-9cae-541f02555ed2 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 10:12:13 compute-0 nova_compute[254819]: 2025-12-06 10:12:13.588 254824 DEBUG nova.virt.libvirt.host [None req-f695b3bd-187c-4c70-9cae-541f02555ed2 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 06 10:12:13 compute-0 nova_compute[254819]: 2025-12-06 10:12:13.589 254824 DEBUG nova.virt.libvirt.host [None req-f695b3bd-187c-4c70-9cae-541f02555ed2 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 06 10:12:13 compute-0 nova_compute[254819]: 2025-12-06 10:12:13.594 254824 DEBUG nova.virt.libvirt.host [None req-f695b3bd-187c-4c70-9cae-541f02555ed2 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 06 10:12:13 compute-0 nova_compute[254819]: 2025-12-06 10:12:13.595 254824 DEBUG nova.virt.libvirt.host [None req-f695b3bd-187c-4c70-9cae-541f02555ed2 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
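The probe order above is v1 first, then v2: on this EL9 host the legacy v1 hierarchy has no cpu controller, while the unified v2 hierarchy does, which is all Nova needs for CPU shares. A standalone check equivalent in spirit (not Nova's implementation):

    from pathlib import Path

    def host_has_cpu_controller() -> bool:
        """True when the host can do CPU control via cgroups v2 or v1."""
        v2 = Path("/sys/fs/cgroup/cgroup.controllers")
        if v2.exists():                               # unified (v2) hierarchy
            return "cpu" in v2.read_text().split()
        return Path("/sys/fs/cgroup/cpu").is_dir()    # legacy (v1) fallback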
Dec 06 10:12:13 compute-0 nova_compute[254819]: 2025-12-06 10:12:13.595 254824 DEBUG nova.virt.libvirt.driver [None req-f695b3bd-187c-4c70-9cae-541f02555ed2 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 06 10:12:13 compute-0 nova_compute[254819]: 2025-12-06 10:12:13.595 254824 DEBUG nova.virt.hardware [None req-f695b3bd-187c-4c70-9cae-541f02555ed2 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T10:04:41Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='0a252b9c-cc5f-41b2-a8b2-94fcf6e74d22',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T10:04:42Z,direct_url=<?>,disk_format='qcow2',id=9489b8a5-a798-4e26-87f9-59bb1eb2e6fd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3e0ab101ca7547d4a515169a0f2edef3',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T10:04:45Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 06 10:12:13 compute-0 nova_compute[254819]: 2025-12-06 10:12:13.596 254824 DEBUG nova.virt.hardware [None req-f695b3bd-187c-4c70-9cae-541f02555ed2 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 06 10:12:13 compute-0 nova_compute[254819]: 2025-12-06 10:12:13.596 254824 DEBUG nova.virt.hardware [None req-f695b3bd-187c-4c70-9cae-541f02555ed2 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 06 10:12:13 compute-0 nova_compute[254819]: 2025-12-06 10:12:13.596 254824 DEBUG nova.virt.hardware [None req-f695b3bd-187c-4c70-9cae-541f02555ed2 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 06 10:12:13 compute-0 nova_compute[254819]: 2025-12-06 10:12:13.597 254824 DEBUG nova.virt.hardware [None req-f695b3bd-187c-4c70-9cae-541f02555ed2 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 06 10:12:13 compute-0 nova_compute[254819]: 2025-12-06 10:12:13.597 254824 DEBUG nova.virt.hardware [None req-f695b3bd-187c-4c70-9cae-541f02555ed2 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 06 10:12:13 compute-0 nova_compute[254819]: 2025-12-06 10:12:13.597 254824 DEBUG nova.virt.hardware [None req-f695b3bd-187c-4c70-9cae-541f02555ed2 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 06 10:12:13 compute-0 nova_compute[254819]: 2025-12-06 10:12:13.598 254824 DEBUG nova.virt.hardware [None req-f695b3bd-187c-4c70-9cae-541f02555ed2 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 06 10:12:13 compute-0 nova_compute[254819]: 2025-12-06 10:12:13.598 254824 DEBUG nova.virt.hardware [None req-f695b3bd-187c-4c70-9cae-541f02555ed2 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 06 10:12:13 compute-0 nova_compute[254819]: 2025-12-06 10:12:13.598 254824 DEBUG nova.virt.hardware [None req-f695b3bd-187c-4c70-9cae-541f02555ed2 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 06 10:12:13 compute-0 nova_compute[254819]: 2025-12-06 10:12:13.599 254824 DEBUG nova.virt.hardware [None req-f695b3bd-187c-4c70-9cae-541f02555ed2 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
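With flavor and image limits all 0:0:0, the topology search is unconstrained and simply enumerates factorizations of the vCPU count under the 65536 caps, which for 1 vCPU collapses to the single 1:1:1 result that gets sorted and chosen. A simplified sketch of that enumeration (nova.virt.hardware does the same walk with preference ordering on top):

    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536,
                            max_threads=65536):
        """Yield (sockets, cores, threads) that multiply to exactly vcpus."""
        for s in range(1, min(max_sockets, vcpus) + 1):
            for c in range(1, min(max_cores, vcpus) + 1):
                for t in range(1, min(max_threads, vcpus) + 1):
                    if s * c * t == vcpus:
                        yield (s, c, t)

    print(list(possible_topologies(1)))   # [(1, 1, 1)], matching the log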
Dec 06 10:12:13 compute-0 nova_compute[254819]: 2025-12-06 10:12:13.603 254824 DEBUG oslo_concurrency.processutils [None req-f695b3bd-187c-4c70-9cae-541f02555ed2 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 10:12:13 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:12:13 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:12:13 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:12:13.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:12:13 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:12:13 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:12:13 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:12:13.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:12:13 compute-0 ceph-mon[74327]: pgmap v933: 337 pgs: 337 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 94 KiB/s rd, 0 B/s wr, 156 op/s
Dec 06 10:12:14 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 06 10:12:14 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1421545061' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 10:12:14 compute-0 nova_compute[254819]: 2025-12-06 10:12:14.022 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:12:14 compute-0 nova_compute[254819]: 2025-12-06 10:12:14.032 254824 DEBUG oslo_concurrency.processutils [None req-f695b3bd-187c-4c70-9cae-541f02555ed2 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.429s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 10:12:14 compute-0 nova_compute[254819]: 2025-12-06 10:12:14.060 254824 DEBUG nova.storage.rbd_utils [None req-f695b3bd-187c-4c70-9cae-541f02555ed2 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] rbd image 38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 10:12:14 compute-0 nova_compute[254819]: 2025-12-06 10:12:14.064 254824 DEBUG oslo_concurrency.processutils [None req-f695b3bd-187c-4c70-9cae-541f02555ed2 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 10:12:14 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v934: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 111 KiB/s rd, 1.8 MiB/s wr, 183 op/s
Dec 06 10:12:14 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 06 10:12:14 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1184900673' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 10:12:14 compute-0 nova_compute[254819]: 2025-12-06 10:12:14.583 254824 DEBUG oslo_concurrency.processutils [None req-f695b3bd-187c-4c70-9cae-541f02555ed2 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.519s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
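Both "ceph mon dump" runs exist to resolve the monitor address list, once for the root disk and once for the .config image; the v1 addresses they return become the three <host> elements in the disk XML that follows. Extracting them from the JSON (assuming the usual mons[].public_addrs.addrvec layout):

    import json
    import subprocess

    out = subprocess.run(
        ["ceph", "mon", "dump", "--format=json",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
        capture_output=True, text=True, check=True).stdout
    mons = json.loads(out)["mons"]
    hosts = [a["addr"]                       # e.g. "192.168.122.100:6789"
             for mon in mons
             for a in mon["public_addrs"]["addrvec"]
             if a["type"] == "v1"]
    print(hosts)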
Dec 06 10:12:14 compute-0 nova_compute[254819]: 2025-12-06 10:12:14.585 254824 DEBUG nova.virt.libvirt.vif [None req-f695b3bd-187c-4c70-9cae-541f02555ed2 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T10:12:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-284526286',display_name='tempest-TestNetworkBasicOps-server-284526286',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-284526286',id=8,image_ref='9489b8a5-a798-4e26-87f9-59bb1eb2e6fd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBH5WKiUV8xkMkAsnSbmzedlPzfsh0aXQ19j5QoS/ZDmv+Vks7yaRYH6rFdpbJ+HzL9PhlMkojs6PG37wLmd0XymAGnK31KjajjkwaxDm0frZ4gN7dvsIumy7dBgoLu6Aiw==',key_name='tempest-TestNetworkBasicOps-1751669676',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='92b402c8d3e2476abc98be42a1e6d34e',ramdisk_id='',reservation_id='r-4hshqkm6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='9489b8a5-a798-4e26-87f9-59bb1eb2e6fd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1971100882',owner_user_name='tempest-TestNetworkBasicOps-1971100882-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T10:12:08Z,user_data=None,user_id='03615580775245e6ae335ee9d785611f',uuid=38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "4c8ce68f-8ad3-4266-aa7d-5ad833b39c1e", "address": "fa:16:3e:6f:25:fa", "network": {"id": "c2ce21d9-e711-470f-89f6-0db58ded70b9", "bridge": "br-int", "label": "tempest-network-smoke--1291548226", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4c8ce68f-8a", "ovs_interfaceid": "4c8ce68f-8ad3-4266-aa7d-5ad833b39c1e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 06 10:12:14 compute-0 nova_compute[254819]: 2025-12-06 10:12:14.585 254824 DEBUG nova.network.os_vif_util [None req-f695b3bd-187c-4c70-9cae-541f02555ed2 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Converting VIF {"id": "4c8ce68f-8ad3-4266-aa7d-5ad833b39c1e", "address": "fa:16:3e:6f:25:fa", "network": {"id": "c2ce21d9-e711-470f-89f6-0db58ded70b9", "bridge": "br-int", "label": "tempest-network-smoke--1291548226", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4c8ce68f-8a", "ovs_interfaceid": "4c8ce68f-8ad3-4266-aa7d-5ad833b39c1e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 10:12:14 compute-0 nova_compute[254819]: 2025-12-06 10:12:14.586 254824 DEBUG nova.network.os_vif_util [None req-f695b3bd-187c-4c70-9cae-541f02555ed2 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6f:25:fa,bridge_name='br-int',has_traffic_filtering=True,id=4c8ce68f-8ad3-4266-aa7d-5ad833b39c1e,network=Network(c2ce21d9-e711-470f-89f6-0db58ded70b9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap4c8ce68f-8a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 10:12:14 compute-0 nova_compute[254819]: 2025-12-06 10:12:14.588 254824 DEBUG nova.objects.instance [None req-f695b3bd-187c-4c70-9cae-541f02555ed2 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lazy-loading 'pci_devices' on Instance uuid 38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 10:12:14 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:12:14 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d4002e00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:12:14 compute-0 nova_compute[254819]: 2025-12-06 10:12:14.652 254824 DEBUG nova.virt.libvirt.driver [None req-f695b3bd-187c-4c70-9cae-541f02555ed2 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971] End _get_guest_xml xml=<domain type="kvm">
Dec 06 10:12:14 compute-0 nova_compute[254819]:   <uuid>38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971</uuid>
Dec 06 10:12:14 compute-0 nova_compute[254819]:   <name>instance-00000008</name>
Dec 06 10:12:14 compute-0 nova_compute[254819]:   <memory>131072</memory>
Dec 06 10:12:14 compute-0 nova_compute[254819]:   <vcpu>1</vcpu>
Dec 06 10:12:14 compute-0 nova_compute[254819]:   <metadata>
Dec 06 10:12:14 compute-0 nova_compute[254819]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 06 10:12:14 compute-0 nova_compute[254819]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 06 10:12:14 compute-0 nova_compute[254819]:       <nova:name>tempest-TestNetworkBasicOps-server-284526286</nova:name>
Dec 06 10:12:14 compute-0 nova_compute[254819]:       <nova:creationTime>2025-12-06 10:12:13</nova:creationTime>
Dec 06 10:12:14 compute-0 nova_compute[254819]:       <nova:flavor name="m1.nano">
Dec 06 10:12:14 compute-0 nova_compute[254819]:         <nova:memory>128</nova:memory>
Dec 06 10:12:14 compute-0 nova_compute[254819]:         <nova:disk>1</nova:disk>
Dec 06 10:12:14 compute-0 nova_compute[254819]:         <nova:swap>0</nova:swap>
Dec 06 10:12:14 compute-0 nova_compute[254819]:         <nova:ephemeral>0</nova:ephemeral>
Dec 06 10:12:14 compute-0 nova_compute[254819]:         <nova:vcpus>1</nova:vcpus>
Dec 06 10:12:14 compute-0 nova_compute[254819]:       </nova:flavor>
Dec 06 10:12:14 compute-0 nova_compute[254819]:       <nova:owner>
Dec 06 10:12:14 compute-0 nova_compute[254819]:         <nova:user uuid="03615580775245e6ae335ee9d785611f">tempest-TestNetworkBasicOps-1971100882-project-member</nova:user>
Dec 06 10:12:14 compute-0 nova_compute[254819]:         <nova:project uuid="92b402c8d3e2476abc98be42a1e6d34e">tempest-TestNetworkBasicOps-1971100882</nova:project>
Dec 06 10:12:14 compute-0 nova_compute[254819]:       </nova:owner>
Dec 06 10:12:14 compute-0 nova_compute[254819]:       <nova:root type="image" uuid="9489b8a5-a798-4e26-87f9-59bb1eb2e6fd"/>
Dec 06 10:12:14 compute-0 nova_compute[254819]:       <nova:ports>
Dec 06 10:12:14 compute-0 nova_compute[254819]:         <nova:port uuid="4c8ce68f-8ad3-4266-aa7d-5ad833b39c1e">
Dec 06 10:12:14 compute-0 nova_compute[254819]:           <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Dec 06 10:12:14 compute-0 nova_compute[254819]:         </nova:port>
Dec 06 10:12:14 compute-0 nova_compute[254819]:       </nova:ports>
Dec 06 10:12:14 compute-0 nova_compute[254819]:     </nova:instance>
Dec 06 10:12:14 compute-0 nova_compute[254819]:   </metadata>
Dec 06 10:12:14 compute-0 nova_compute[254819]:   <sysinfo type="smbios">
Dec 06 10:12:14 compute-0 nova_compute[254819]:     <system>
Dec 06 10:12:14 compute-0 nova_compute[254819]:       <entry name="manufacturer">RDO</entry>
Dec 06 10:12:14 compute-0 nova_compute[254819]:       <entry name="product">OpenStack Compute</entry>
Dec 06 10:12:14 compute-0 nova_compute[254819]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 06 10:12:14 compute-0 nova_compute[254819]:       <entry name="serial">38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971</entry>
Dec 06 10:12:14 compute-0 nova_compute[254819]:       <entry name="uuid">38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971</entry>
Dec 06 10:12:14 compute-0 nova_compute[254819]:       <entry name="family">Virtual Machine</entry>
Dec 06 10:12:14 compute-0 nova_compute[254819]:     </system>
Dec 06 10:12:14 compute-0 nova_compute[254819]:   </sysinfo>
Dec 06 10:12:14 compute-0 nova_compute[254819]:   <os>
Dec 06 10:12:14 compute-0 nova_compute[254819]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 06 10:12:14 compute-0 nova_compute[254819]:     <boot dev="hd"/>
Dec 06 10:12:14 compute-0 nova_compute[254819]:     <smbios mode="sysinfo"/>
Dec 06 10:12:14 compute-0 nova_compute[254819]:   </os>
Dec 06 10:12:14 compute-0 nova_compute[254819]:   <features>
Dec 06 10:12:14 compute-0 nova_compute[254819]:     <acpi/>
Dec 06 10:12:14 compute-0 nova_compute[254819]:     <apic/>
Dec 06 10:12:14 compute-0 nova_compute[254819]:     <vmcoreinfo/>
Dec 06 10:12:14 compute-0 nova_compute[254819]:   </features>
Dec 06 10:12:14 compute-0 nova_compute[254819]:   <clock offset="utc">
Dec 06 10:12:14 compute-0 nova_compute[254819]:     <timer name="pit" tickpolicy="delay"/>
Dec 06 10:12:14 compute-0 nova_compute[254819]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 06 10:12:14 compute-0 nova_compute[254819]:     <timer name="hpet" present="no"/>
Dec 06 10:12:14 compute-0 nova_compute[254819]:   </clock>
Dec 06 10:12:14 compute-0 nova_compute[254819]:   <cpu mode="host-model" match="exact">
Dec 06 10:12:14 compute-0 nova_compute[254819]:     <topology sockets="1" cores="1" threads="1"/>
Dec 06 10:12:14 compute-0 nova_compute[254819]:   </cpu>
Dec 06 10:12:14 compute-0 nova_compute[254819]:   <devices>
Dec 06 10:12:14 compute-0 nova_compute[254819]:     <disk type="network" device="disk">
Dec 06 10:12:14 compute-0 nova_compute[254819]:       <driver type="raw" cache="none"/>
Dec 06 10:12:14 compute-0 nova_compute[254819]:       <source protocol="rbd" name="vms/38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971_disk">
Dec 06 10:12:14 compute-0 nova_compute[254819]:         <host name="192.168.122.100" port="6789"/>
Dec 06 10:12:14 compute-0 nova_compute[254819]:         <host name="192.168.122.102" port="6789"/>
Dec 06 10:12:14 compute-0 nova_compute[254819]:         <host name="192.168.122.101" port="6789"/>
Dec 06 10:12:14 compute-0 nova_compute[254819]:       </source>
Dec 06 10:12:14 compute-0 nova_compute[254819]:       <auth username="openstack">
Dec 06 10:12:14 compute-0 nova_compute[254819]:         <secret type="ceph" uuid="5ecd3f74-dade-5fc4-92ce-8950ae424258"/>
Dec 06 10:12:14 compute-0 nova_compute[254819]:       </auth>
Dec 06 10:12:14 compute-0 nova_compute[254819]:       <target dev="vda" bus="virtio"/>
Dec 06 10:12:14 compute-0 nova_compute[254819]:     </disk>
Dec 06 10:12:14 compute-0 nova_compute[254819]:     <disk type="network" device="cdrom">
Dec 06 10:12:14 compute-0 nova_compute[254819]:       <driver type="raw" cache="none"/>
Dec 06 10:12:14 compute-0 nova_compute[254819]:       <source protocol="rbd" name="vms/38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971_disk.config">
Dec 06 10:12:14 compute-0 nova_compute[254819]:         <host name="192.168.122.100" port="6789"/>
Dec 06 10:12:14 compute-0 nova_compute[254819]:         <host name="192.168.122.102" port="6789"/>
Dec 06 10:12:14 compute-0 nova_compute[254819]:         <host name="192.168.122.101" port="6789"/>
Dec 06 10:12:14 compute-0 nova_compute[254819]:       </source>
Dec 06 10:12:14 compute-0 nova_compute[254819]:       <auth username="openstack">
Dec 06 10:12:14 compute-0 nova_compute[254819]:         <secret type="ceph" uuid="5ecd3f74-dade-5fc4-92ce-8950ae424258"/>
Dec 06 10:12:14 compute-0 nova_compute[254819]:       </auth>
Dec 06 10:12:14 compute-0 nova_compute[254819]:       <target dev="sda" bus="sata"/>
Dec 06 10:12:14 compute-0 nova_compute[254819]:     </disk>
Dec 06 10:12:14 compute-0 nova_compute[254819]:     <interface type="ethernet">
Dec 06 10:12:14 compute-0 nova_compute[254819]:       <mac address="fa:16:3e:6f:25:fa"/>
Dec 06 10:12:14 compute-0 nova_compute[254819]:       <model type="virtio"/>
Dec 06 10:12:14 compute-0 nova_compute[254819]:       <driver name="vhost" rx_queue_size="512"/>
Dec 06 10:12:14 compute-0 nova_compute[254819]:       <mtu size="1442"/>
Dec 06 10:12:14 compute-0 nova_compute[254819]:       <target dev="tap4c8ce68f-8a"/>
Dec 06 10:12:14 compute-0 nova_compute[254819]:     </interface>
Dec 06 10:12:14 compute-0 nova_compute[254819]:     <serial type="pty">
Dec 06 10:12:14 compute-0 nova_compute[254819]:       <log file="/var/lib/nova/instances/38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971/console.log" append="off"/>
Dec 06 10:12:14 compute-0 nova_compute[254819]:     </serial>
Dec 06 10:12:14 compute-0 nova_compute[254819]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 06 10:12:14 compute-0 nova_compute[254819]:     <video>
Dec 06 10:12:14 compute-0 nova_compute[254819]:       <model type="virtio"/>
Dec 06 10:12:14 compute-0 nova_compute[254819]:     </video>
Dec 06 10:12:14 compute-0 nova_compute[254819]:     <input type="tablet" bus="usb"/>
Dec 06 10:12:14 compute-0 nova_compute[254819]:     <rng model="virtio">
Dec 06 10:12:14 compute-0 nova_compute[254819]:       <backend model="random">/dev/urandom</backend>
Dec 06 10:12:14 compute-0 nova_compute[254819]:     </rng>
Dec 06 10:12:14 compute-0 nova_compute[254819]:     <controller type="pci" model="pcie-root"/>
Dec 06 10:12:14 compute-0 nova_compute[254819]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:12:14 compute-0 nova_compute[254819]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:12:14 compute-0 nova_compute[254819]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:12:14 compute-0 nova_compute[254819]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:12:14 compute-0 nova_compute[254819]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:12:14 compute-0 nova_compute[254819]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:12:14 compute-0 nova_compute[254819]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:12:14 compute-0 nova_compute[254819]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:12:14 compute-0 nova_compute[254819]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:12:14 compute-0 nova_compute[254819]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:12:14 compute-0 nova_compute[254819]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:12:14 compute-0 nova_compute[254819]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:12:14 compute-0 nova_compute[254819]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:12:14 compute-0 nova_compute[254819]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:12:14 compute-0 nova_compute[254819]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:12:14 compute-0 nova_compute[254819]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:12:14 compute-0 nova_compute[254819]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:12:14 compute-0 nova_compute[254819]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:12:14 compute-0 nova_compute[254819]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:12:14 compute-0 nova_compute[254819]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:12:14 compute-0 nova_compute[254819]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:12:14 compute-0 nova_compute[254819]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:12:14 compute-0 nova_compute[254819]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:12:14 compute-0 nova_compute[254819]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:12:14 compute-0 nova_compute[254819]:     <controller type="usb" index="0"/>
Dec 06 10:12:14 compute-0 nova_compute[254819]:     <memballoon model="virtio">
Dec 06 10:12:14 compute-0 nova_compute[254819]:       <stats period="10"/>
Dec 06 10:12:14 compute-0 nova_compute[254819]:     </memballoon>
Dec 06 10:12:14 compute-0 nova_compute[254819]:   </devices>
Dec 06 10:12:14 compute-0 nova_compute[254819]: </domain>
Dec 06 10:12:14 compute-0 nova_compute[254819]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
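The domain XML dumped above is what Nova hands to libvirt next. Defining and starting such a guest with the libvirt Python bindings looks roughly like this (the XML placeholder stands for the document printed above; Nova's actual spawn path adds flags and event plumbing on top):

    import libvirt

    DOMAIN_XML = "..."  # the <domain type="kvm"> document printed above

    conn = libvirt.open("qemu:///system")
    try:
        dom = conn.defineXML(DOMAIN_XML)   # persistent definition
        dom.create()                       # boot instance-00000008
        print(dom.name(), dom.UUIDString())
    finally:
        conn.close()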
Dec 06 10:12:14 compute-0 nova_compute[254819]: 2025-12-06 10:12:14.653 254824 DEBUG nova.compute.manager [None req-f695b3bd-187c-4c70-9cae-541f02555ed2 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971] Preparing to wait for external event network-vif-plugged-4c8ce68f-8ad3-4266-aa7d-5ad833b39c1e prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 06 10:12:14 compute-0 nova_compute[254819]: 2025-12-06 10:12:14.654 254824 DEBUG oslo_concurrency.lockutils [None req-f695b3bd-187c-4c70-9cae-541f02555ed2 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Acquiring lock "38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:12:14 compute-0 nova_compute[254819]: 2025-12-06 10:12:14.654 254824 DEBUG oslo_concurrency.lockutils [None req-f695b3bd-187c-4c70-9cae-541f02555ed2 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lock "38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:12:14 compute-0 nova_compute[254819]: 2025-12-06 10:12:14.654 254824 DEBUG oslo_concurrency.lockutils [None req-f695b3bd-187c-4c70-9cae-541f02555ed2 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lock "38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
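Note the ordering: the event waiter for network-vif-plugged is registered before the VIF is plugged, so a fast Neutron notification cannot be lost in the window between plug and wait. Stripped to its essentials, the mechanism is a lock-protected map of threading events (a sketch, not Nova's code):

    import threading

    class InstanceEvents:
        """Register-then-wait pattern from nova.compute.manager, reduced."""

        def __init__(self):
            self._events = {}              # (instance_uuid, event) -> Event
            self._lock = threading.Lock()  # the "<uuid>-events" lock above

        def prepare(self, uuid, name):
            with self._lock:
                return self._events.setdefault((uuid, name), threading.Event())

        def deliver(self, uuid, name):
            with self._lock:
                ev = self._events.pop((uuid, name), None)
            if ev:
                ev.set()

    events = InstanceEvents()
    waiter = events.prepare("38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971",
                            "network-vif-plugged")
    # ... plug the VIF, then block (with a timeout) until Neutron delivers:
    waiter.wait(timeout=300)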
Dec 06 10:12:14 compute-0 nova_compute[254819]: 2025-12-06 10:12:14.655 254824 DEBUG nova.virt.libvirt.vif [None req-f695b3bd-187c-4c70-9cae-541f02555ed2 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T10:12:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-284526286',display_name='tempest-TestNetworkBasicOps-server-284526286',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-284526286',id=8,image_ref='9489b8a5-a798-4e26-87f9-59bb1eb2e6fd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBH5WKiUV8xkMkAsnSbmzedlPzfsh0aXQ19j5QoS/ZDmv+Vks7yaRYH6rFdpbJ+HzL9PhlMkojs6PG37wLmd0XymAGnK31KjajjkwaxDm0frZ4gN7dvsIumy7dBgoLu6Aiw==',key_name='tempest-TestNetworkBasicOps-1751669676',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='92b402c8d3e2476abc98be42a1e6d34e',ramdisk_id='',reservation_id='r-4hshqkm6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='9489b8a5-a798-4e26-87f9-59bb1eb2e6fd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1971100882',owner_user_name='tempest-TestNetworkBasicOps-1971100882-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T10:12:08Z,user_data=None,user_id='03615580775245e6ae335ee9d785611f',uuid=38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "4c8ce68f-8ad3-4266-aa7d-5ad833b39c1e", "address": "fa:16:3e:6f:25:fa", "network": {"id": "c2ce21d9-e711-470f-89f6-0db58ded70b9", "bridge": "br-int", "label": "tempest-network-smoke--1291548226", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4c8ce68f-8a", "ovs_interfaceid": "4c8ce68f-8ad3-4266-aa7d-5ad833b39c1e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 06 10:12:14 compute-0 nova_compute[254819]: 2025-12-06 10:12:14.655 254824 DEBUG nova.network.os_vif_util [None req-f695b3bd-187c-4c70-9cae-541f02555ed2 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Converting VIF {"id": "4c8ce68f-8ad3-4266-aa7d-5ad833b39c1e", "address": "fa:16:3e:6f:25:fa", "network": {"id": "c2ce21d9-e711-470f-89f6-0db58ded70b9", "bridge": "br-int", "label": "tempest-network-smoke--1291548226", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4c8ce68f-8a", "ovs_interfaceid": "4c8ce68f-8ad3-4266-aa7d-5ad833b39c1e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 10:12:14 compute-0 nova_compute[254819]: 2025-12-06 10:12:14.656 254824 DEBUG nova.network.os_vif_util [None req-f695b3bd-187c-4c70-9cae-541f02555ed2 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6f:25:fa,bridge_name='br-int',has_traffic_filtering=True,id=4c8ce68f-8ad3-4266-aa7d-5ad833b39c1e,network=Network(c2ce21d9-e711-470f-89f6-0db58ded70b9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap4c8ce68f-8a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 10:12:14 compute-0 nova_compute[254819]: 2025-12-06 10:12:14.656 254824 DEBUG os_vif [None req-f695b3bd-187c-4c70-9cae-541f02555ed2 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:6f:25:fa,bridge_name='br-int',has_traffic_filtering=True,id=4c8ce68f-8ad3-4266-aa7d-5ad833b39c1e,network=Network(c2ce21d9-e711-470f-89f6-0db58ded70b9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap4c8ce68f-8a') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 06 10:12:14 compute-0 nova_compute[254819]: 2025-12-06 10:12:14.657 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:12:14 compute-0 nova_compute[254819]: 2025-12-06 10:12:14.658 254824 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 10:12:14 compute-0 nova_compute[254819]: 2025-12-06 10:12:14.658 254824 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 10:12:14 compute-0 nova_compute[254819]: 2025-12-06 10:12:14.660 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:12:14 compute-0 nova_compute[254819]: 2025-12-06 10:12:14.661 254824 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4c8ce68f-8a, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 10:12:14 compute-0 nova_compute[254819]: 2025-12-06 10:12:14.661 254824 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap4c8ce68f-8a, col_values=(('external_ids', {'iface-id': '4c8ce68f-8ad3-4266-aa7d-5ad833b39c1e', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:6f:25:fa', 'vm-uuid': '38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
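The transactions above (AddBridgeCommand with may_exist, then AddPortCommand plus a DbSetCommand on the Interface row) are what the ovs plugin does instead of shelling out to ovs-vsctl. A sketch driving the same commands through ovsdbapp directly; the OVSDB socket path is an assumption for this host.

```python
# Sketch only: the AddBridge/AddPort/DbSet transactions logged above,
# issued through ovsdbapp. Socket path is an assumption.
from ovsdbapp.backend.ovs_idl import connection
from ovsdbapp.schema.open_vswitch import impl_idl

idl = connection.OvsdbIdl.from_server('unix:/run/openvswitch/db.sock',
                                      'Open_vSwitch')
api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

with api.transaction(check_error=True) as txn:
    # "Transaction caused no change" above: br-int already existed.
    txn.add(api.add_br('br-int', may_exist=True, datapath_type='system'))

with api.transaction(check_error=True) as txn:
    txn.add(api.add_port('br-int', 'tap4c8ce68f-8a', may_exist=True))
    txn.add(api.db_set('Interface', 'tap4c8ce68f-8a',
                       ('external_ids',
                        {'iface-id': '4c8ce68f-8ad3-4266-aa7d-5ad833b39c1e',
                         'iface-status': 'active',
                         'attached-mac': 'fa:16:3e:6f:25:fa',
                         'vm-uuid': '38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971'})))
```

The iface-id in external_ids is what lets ovn-controller later match this OVS interface to the logical switch port it claims below.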
Dec 06 10:12:14 compute-0 nova_compute[254819]: 2025-12-06 10:12:14.663 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:12:14 compute-0 NetworkManager[48882]: <info>  [1765015934.6644] manager: (tap4c8ce68f-8a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/59)
Dec 06 10:12:14 compute-0 nova_compute[254819]: 2025-12-06 10:12:14.666 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 10:12:14 compute-0 nova_compute[254819]: 2025-12-06 10:12:14.673 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:12:14 compute-0 nova_compute[254819]: 2025-12-06 10:12:14.675 254824 INFO os_vif [None req-f695b3bd-187c-4c70-9cae-541f02555ed2 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:6f:25:fa,bridge_name='br-int',has_traffic_filtering=True,id=4c8ce68f-8ad3-4266-aa7d-5ad833b39c1e,network=Network(c2ce21d9-e711-470f-89f6-0db58ded70b9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap4c8ce68f-8a')
Dec 06 10:12:14 compute-0 ceph-mon[74327]: from='client.? 192.168.122.100:0/1421545061' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 10:12:14 compute-0 ceph-mon[74327]: from='client.? 192.168.122.100:0/1184900673' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 10:12:14 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:12:14 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d4002e00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:12:14 compute-0 nova_compute[254819]: 2025-12-06 10:12:14.730 254824 DEBUG nova.virt.libvirt.driver [None req-f695b3bd-187c-4c70-9cae-541f02555ed2 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 10:12:14 compute-0 nova_compute[254819]: 2025-12-06 10:12:14.730 254824 DEBUG nova.virt.libvirt.driver [None req-f695b3bd-187c-4c70-9cae-541f02555ed2 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 10:12:14 compute-0 nova_compute[254819]: 2025-12-06 10:12:14.730 254824 DEBUG nova.virt.libvirt.driver [None req-f695b3bd-187c-4c70-9cae-541f02555ed2 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] No VIF found with MAC fa:16:3e:6f:25:fa, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 06 10:12:14 compute-0 nova_compute[254819]: 2025-12-06 10:12:14.731 254824 INFO nova.virt.libvirt.driver [None req-f695b3bd-187c-4c70-9cae-541f02555ed2 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971] Using config drive
Dec 06 10:12:14 compute-0 nova_compute[254819]: 2025-12-06 10:12:14.764 254824 DEBUG nova.storage.rbd_utils [None req-f695b3bd-187c-4c70-9cae-541f02555ed2 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] rbd image 38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 10:12:14 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:12:15 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:12:15 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6608002030 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:12:15 compute-0 nova_compute[254819]: 2025-12-06 10:12:15.125 254824 INFO nova.virt.libvirt.driver [None req-f695b3bd-187c-4c70-9cae-541f02555ed2 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971] Creating config drive at /var/lib/nova/instances/38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971/disk.config
Dec 06 10:12:15 compute-0 nova_compute[254819]: 2025-12-06 10:12:15.134 254824 DEBUG oslo_concurrency.processutils [None req-f695b3bd-187c-4c70-9cae-541f02555ed2 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp99hdhhok execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 10:12:15 compute-0 nova_compute[254819]: 2025-12-06 10:12:15.270 254824 DEBUG oslo_concurrency.processutils [None req-f695b3bd-187c-4c70-9cae-541f02555ed2 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp99hdhhok" returned: 0 in 0.136s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
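The config drive is built by shelling out through oslo.concurrency, as the Running cmd/CMD returned pair above shows. A sketch reproducing that invocation; the argument list is copied from the logged command, including the throwaway /tmp/tmp99hdhhok staging directory nova filled with the metadata tree.

```python
# Sketch only: the logged mkisofs run, replayed via oslo.concurrency.
from oslo_concurrency import processutils

inst = '38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971'
out, err = processutils.execute(
    '/usr/bin/mkisofs',
    '-o', '/var/lib/nova/instances/%s/disk.config' % inst,
    '-ldots', '-allow-lowercase', '-allow-multidot', '-l',
    '-publisher', 'OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9',
    '-quiet', '-J', '-r', '-V', 'config-2',   # 'config-2' label is what
    '/tmp/tmp99hdhhok')                       # cloud-init probes for
```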
Dec 06 10:12:15 compute-0 nova_compute[254819]: 2025-12-06 10:12:15.317 254824 DEBUG nova.storage.rbd_utils [None req-f695b3bd-187c-4c70-9cae-541f02555ed2 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] rbd image 38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 10:12:15 compute-0 nova_compute[254819]: 2025-12-06 10:12:15.323 254824 DEBUG oslo_concurrency.processutils [None req-f695b3bd-187c-4c70-9cae-541f02555ed2 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971/disk.config 38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 10:12:15 compute-0 nova_compute[254819]: 2025-12-06 10:12:15.342 254824 DEBUG nova.network.neutron [req-c488b580-df7f-43bf-a095-bd121577d26c req-2a8b9b3f-644f-41fa-a808-31d03a16e7cd d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971] Updated VIF entry in instance network info cache for port 4c8ce68f-8ad3-4266-aa7d-5ad833b39c1e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 06 10:12:15 compute-0 nova_compute[254819]: 2025-12-06 10:12:15.343 254824 DEBUG nova.network.neutron [req-c488b580-df7f-43bf-a095-bd121577d26c req-2a8b9b3f-644f-41fa-a808-31d03a16e7cd d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971] Updating instance_info_cache with network_info: [{"id": "4c8ce68f-8ad3-4266-aa7d-5ad833b39c1e", "address": "fa:16:3e:6f:25:fa", "network": {"id": "c2ce21d9-e711-470f-89f6-0db58ded70b9", "bridge": "br-int", "label": "tempest-network-smoke--1291548226", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4c8ce68f-8a", "ovs_interfaceid": "4c8ce68f-8ad3-4266-aa7d-5ad833b39c1e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 10:12:15 compute-0 nova_compute[254819]: 2025-12-06 10:12:15.367 254824 DEBUG oslo_concurrency.lockutils [req-c488b580-df7f-43bf-a095-bd121577d26c req-2a8b9b3f-644f-41fa-a808-31d03a16e7cd d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Releasing lock "refresh_cache-38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 10:12:15 compute-0 nova_compute[254819]: 2025-12-06 10:12:15.489 254824 DEBUG oslo_concurrency.processutils [None req-f695b3bd-187c-4c70-9cae-541f02555ed2 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971/disk.config 38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.166s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 10:12:15 compute-0 nova_compute[254819]: 2025-12-06 10:12:15.490 254824 INFO nova.virt.libvirt.driver [None req-f695b3bd-187c-4c70-9cae-541f02555ed2 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971] Deleting local config drive /var/lib/nova/instances/38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971/disk.config because it was imported into RBD.
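Because the image backend is RBD, the freshly built ISO is imported into the vms pool and the local copy removed, exactly as the two lines above describe. The same two steps outside nova, mirroring the logged command line:

```python
# Sketch only: replay of the logged "rbd import" plus the local cleanup.
import os
import subprocess

inst = '38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971'
src = '/var/lib/nova/instances/%s/disk.config' % inst
subprocess.run(['rbd', 'import', '--pool', 'vms', src,
                '%s_disk.config' % inst, '--image-format=2',
                '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf'],
               check=True)
os.remove(src)  # the config drive now lives only in the vms pool
```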
Dec 06 10:12:15 compute-0 kernel: tap4c8ce68f-8a: entered promiscuous mode
Dec 06 10:12:15 compute-0 NetworkManager[48882]: <info>  [1765015935.5435] manager: (tap4c8ce68f-8a): new Tun device (/org/freedesktop/NetworkManager/Devices/60)
Dec 06 10:12:15 compute-0 nova_compute[254819]: 2025-12-06 10:12:15.544 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:12:15 compute-0 ovn_controller[152417]: 2025-12-06T10:12:15Z|00091|binding|INFO|Claiming lport 4c8ce68f-8ad3-4266-aa7d-5ad833b39c1e for this chassis.
Dec 06 10:12:15 compute-0 ovn_controller[152417]: 2025-12-06T10:12:15Z|00092|binding|INFO|4c8ce68f-8ad3-4266-aa7d-5ad833b39c1e: Claiming fa:16:3e:6f:25:fa 10.100.0.9
Dec 06 10:12:15 compute-0 nova_compute[254819]: 2025-12-06 10:12:15.552 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:12:15 compute-0 nova_compute[254819]: 2025-12-06 10:12:15.556 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:12:15 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:12:15.563 162267 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6f:25:fa 10.100.0.9'], port_security=['fa:16:3e:6f:25:fa 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-TestNetworkBasicOps-1269654245', 'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c2ce21d9-e711-470f-89f6-0db58ded70b9', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-TestNetworkBasicOps-1269654245', 'neutron:project_id': '92b402c8d3e2476abc98be42a1e6d34e', 'neutron:revision_number': '2', 'neutron:security_group_ids': '1e7cc18e-31f3-4bdb-821d-1683a210c530', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=093e5b40-935f-42c8-a85f-385c1c7048be, chassis=[<ovs.db.idl.Row object at 0x7f70c28558b0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f70c28558b0>], logical_port=4c8ce68f-8ad3-4266-aa7d-5ad833b39c1e) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 10:12:15 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:12:15.564 162267 INFO neutron.agent.ovn.metadata.agent [-] Port 4c8ce68f-8ad3-4266-aa7d-5ad833b39c1e in datapath c2ce21d9-e711-470f-89f6-0db58ded70b9 bound to our chassis
Dec 06 10:12:15 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:12:15.565 162267 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network c2ce21d9-e711-470f-89f6-0db58ded70b9
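The agent reacted because its PortBindingUpdatedEvent matched the Port_Binding update in which the chassis column flipped from [] to this chassis (the old=Port_Binding(chassis=[]) tail of the match above). A sketch of that event shape with ovsdbapp's RowEvent; the match_fn logic is an illustrative reduction of what neutron actually checks, and registration with the IDL's notify handler is omitted.

```python
# Sketch only: a Port_Binding watch like the one the agent logs above.
from ovsdbapp.backend.ovs_idl import event as row_event

CHASSIS_NAME = 'compute-0.ctlplane.example.com'

class PortBindingUpdatedEvent(row_event.RowEvent):
    def __init__(self):
        # Mirrors the logged repr: events=('update',), table='Port_Binding'
        super().__init__((self.ROW_UPDATE,), 'Port_Binding', None)

    def match_fn(self, event, row, old):
        # Fire only when the chassis column just changed and now points at us.
        if not hasattr(old, 'chassis'):
            return False
        return bool(row.chassis) and row.chassis[0].name == CHASSIS_NAME

    def run(self, event, row, old):
        print('Port %s bound to our chassis' % row.logical_port)
```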
Dec 06 10:12:15 compute-0 systemd-udevd[271832]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 10:12:15 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:12:15.580 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[66e4b8b3-c4b5-4f04-857d-2e507f53e082]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:12:15 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:12:15.581 162267 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapc2ce21d9-e1 in ovnmeta-c2ce21d9-e711-470f-89f6-0db58ded70b9 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 06 10:12:15 compute-0 systemd-machined[216202]: New machine qemu-5-instance-00000008.
Dec 06 10:12:15 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:12:15.584 260126 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapc2ce21d9-e0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 06 10:12:15 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:12:15.584 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[2266fc6d-ec4e-4c4a-a2be-1c19054f4676]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:12:15 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:12:15.585 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[1987f26b-56df-4499-b2ca-0548f19f513e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:12:15 compute-0 NetworkManager[48882]: <info>  [1765015935.5890] device (tap4c8ce68f-8a): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 06 10:12:15 compute-0 NetworkManager[48882]: <info>  [1765015935.5902] device (tap4c8ce68f-8a): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 06 10:12:15 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:12:15.598 162385 DEBUG oslo.privsep.daemon [-] privsep: reply[9e153dca-6d72-4286-8cf1-889391f90fc5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:12:15 compute-0 systemd[1]: Started Virtual Machine qemu-5-instance-00000008.
Dec 06 10:12:15 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:12:15.630 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[4f94dbe1-c335-4326-9fa7-418f39ea4cdb]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:12:15 compute-0 nova_compute[254819]: 2025-12-06 10:12:15.641 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:12:15 compute-0 ovn_controller[152417]: 2025-12-06T10:12:15Z|00093|binding|INFO|Setting lport 4c8ce68f-8ad3-4266-aa7d-5ad833b39c1e ovn-installed in OVS
Dec 06 10:12:15 compute-0 ovn_controller[152417]: 2025-12-06T10:12:15Z|00094|binding|INFO|Setting lport 4c8ce68f-8ad3-4266-aa7d-5ad833b39c1e up in Southbound
Dec 06 10:12:15 compute-0 nova_compute[254819]: 2025-12-06 10:12:15.647 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:12:15 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:12:15 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:12:15 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:12:15.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:12:15 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:12:15.667 260145 DEBUG oslo.privsep.daemon [-] privsep: reply[bbe7e475-d870-4391-847c-b37e7bbf348b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:12:15 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:12:15.673 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[77e04807-2608-46d9-80e4-d015d27e2974]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:12:15 compute-0 systemd-udevd[271835]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 10:12:15 compute-0 NetworkManager[48882]: <info>  [1765015935.6755] manager: (tapc2ce21d9-e0): new Veth device (/org/freedesktop/NetworkManager/Devices/61)
Dec 06 10:12:15 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:12:15 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:12:15 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:12:15.679 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:12:15 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:12:15.706 260145 DEBUG oslo.privsep.daemon [-] privsep: reply[6bb7682a-cc8a-4892-a74f-b11382759a6c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:12:15 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:12:15.709 260145 DEBUG oslo.privsep.daemon [-] privsep: reply[39088c1e-b4ec-4144-910b-6818ef8fb60a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:12:15 compute-0 ceph-mon[74327]: pgmap v934: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 111 KiB/s rd, 1.8 MiB/s wr, 183 op/s
Dec 06 10:12:15 compute-0 NetworkManager[48882]: <info>  [1765015935.7287] device (tapc2ce21d9-e0): carrier: link connected
Dec 06 10:12:15 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:12:15.734 260145 DEBUG oslo.privsep.daemon [-] privsep: reply[13b343fc-aef4-4916-8bff-2d1147986895]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:12:15 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:12:15.752 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[8d05b2a7-d7fa-43a2-8c6c-efde55e15fd6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc2ce21d9-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:af:58:64'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 29], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 430621, 'reachable_time': 21540, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 271867, 'error': None, 'target': 'ovnmeta-c2ce21d9-e711-470f-89f6-0db58ded70b9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:12:15 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:12:15.766 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[0237e422-6e19-4289-a5a2-a6dd70db8272]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feaf:5864'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 430621, 'tstamp': 430621}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 271868, 'error': None, 'target': 'ovnmeta-c2ce21d9-e711-470f-89f6-0db58ded70b9', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:12:15 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:12:15.786 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[24750187-e1f5-481e-a03f-867a53145d86]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc2ce21d9-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:af:58:64'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 29], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 430621, 'reachable_time': 21540, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 271869, 'error': None, 'target': 'ovnmeta-c2ce21d9-e711-470f-89f6-0db58ded70b9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:12:15 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:12:15.815 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[2a60eae7-c67d-428a-b226-1d9d184e03b9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:12:15 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:12:15.881 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[f11432ba-74d5-400a-bd20-196274539ee6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
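Provisioning the datapath creates a veth pair with one end (tapc2ce21d9-e1) inside the ovnmeta- namespace; the privsep replies above are the netlink RTM_NEWLINK/RTM_NEWADDR dumps from that work. Roughly the same steps done directly with pyroute2, under the assumption that the namespace does not yet exist (neutron performs this through its privsep daemon rather than in-process):

```python
# Sketch only: veth provisioning like the agent logs above, via pyroute2.
from pyroute2 import IPRoute, netns

NS = 'ovnmeta-c2ce21d9-e711-470f-89f6-0db58ded70b9'
netns.create(NS)  # raises if the namespace already exists

with IPRoute() as ipr:
    ipr.link('add', ifname='tapc2ce21d9-e0', kind='veth',
             peer='tapc2ce21d9-e1')
    peer = ipr.link_lookup(ifname='tapc2ce21d9-e1')[0]
    ipr.link('set', index=peer, net_ns_fd=NS)  # push one end into the netns
    host = ipr.link_lookup(ifname='tapc2ce21d9-e0')[0]
    ipr.link('set', index=host, state='up')
```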
Dec 06 10:12:15 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:12:15.883 162267 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc2ce21d9-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 10:12:15 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:12:15.883 162267 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 10:12:15 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:12:15.883 162267 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc2ce21d9-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 10:12:15 compute-0 NetworkManager[48882]: <info>  [1765015935.8868] manager: (tapc2ce21d9-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/62)
Dec 06 10:12:15 compute-0 kernel: tapc2ce21d9-e0: entered promiscuous mode
Dec 06 10:12:15 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:12:15.890 162267 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapc2ce21d9-e0, col_values=(('external_ids', {'iface-id': '52d33d15-d96f-4c26-a63e-0415fca27e6a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 10:12:15 compute-0 ovn_controller[152417]: 2025-12-06T10:12:15Z|00095|binding|INFO|Releasing lport 52d33d15-d96f-4c26-a63e-0415fca27e6a from this chassis (sb_readonly=0)
Dec 06 10:12:15 compute-0 nova_compute[254819]: 2025-12-06 10:12:15.892 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:12:15 compute-0 nova_compute[254819]: 2025-12-06 10:12:15.914 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:12:15 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:12:15.915 162267 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/c2ce21d9-e711-470f-89f6-0db58ded70b9.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/c2ce21d9-e711-470f-89f6-0db58ded70b9.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 06 10:12:15 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:12:15.916 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[8f05fd07-d5c7-4fb1-b5ea-4e2fdfdf43d9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:12:15 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:12:15.917 162267 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 06 10:12:15 compute-0 ovn_metadata_agent[162262]: global
Dec 06 10:12:15 compute-0 ovn_metadata_agent[162262]:     log         /dev/log local0 debug
Dec 06 10:12:15 compute-0 ovn_metadata_agent[162262]:     log-tag     haproxy-metadata-proxy-c2ce21d9-e711-470f-89f6-0db58ded70b9
Dec 06 10:12:15 compute-0 ovn_metadata_agent[162262]:     user        root
Dec 06 10:12:15 compute-0 ovn_metadata_agent[162262]:     group       root
Dec 06 10:12:15 compute-0 ovn_metadata_agent[162262]:     maxconn     1024
Dec 06 10:12:15 compute-0 ovn_metadata_agent[162262]:     pidfile     /var/lib/neutron/external/pids/c2ce21d9-e711-470f-89f6-0db58ded70b9.pid.haproxy
Dec 06 10:12:15 compute-0 ovn_metadata_agent[162262]:     daemon
Dec 06 10:12:15 compute-0 ovn_metadata_agent[162262]: 
Dec 06 10:12:15 compute-0 ovn_metadata_agent[162262]: defaults
Dec 06 10:12:15 compute-0 ovn_metadata_agent[162262]:     log global
Dec 06 10:12:15 compute-0 ovn_metadata_agent[162262]:     mode http
Dec 06 10:12:15 compute-0 ovn_metadata_agent[162262]:     option httplog
Dec 06 10:12:15 compute-0 ovn_metadata_agent[162262]:     option dontlognull
Dec 06 10:12:15 compute-0 ovn_metadata_agent[162262]:     option http-server-close
Dec 06 10:12:15 compute-0 ovn_metadata_agent[162262]:     option forwardfor
Dec 06 10:12:15 compute-0 ovn_metadata_agent[162262]:     retries                 3
Dec 06 10:12:15 compute-0 ovn_metadata_agent[162262]:     timeout http-request    30s
Dec 06 10:12:15 compute-0 ovn_metadata_agent[162262]:     timeout connect         30s
Dec 06 10:12:15 compute-0 ovn_metadata_agent[162262]:     timeout client          32s
Dec 06 10:12:15 compute-0 ovn_metadata_agent[162262]:     timeout server          32s
Dec 06 10:12:15 compute-0 ovn_metadata_agent[162262]:     timeout http-keep-alive 30s
Dec 06 10:12:15 compute-0 ovn_metadata_agent[162262]: 
Dec 06 10:12:15 compute-0 ovn_metadata_agent[162262]: 
Dec 06 10:12:15 compute-0 ovn_metadata_agent[162262]: listen listener
Dec 06 10:12:15 compute-0 ovn_metadata_agent[162262]:     bind 169.254.169.254:80
Dec 06 10:12:15 compute-0 ovn_metadata_agent[162262]:     server metadata /var/lib/neutron/metadata_proxy
Dec 06 10:12:15 compute-0 ovn_metadata_agent[162262]:     http-request add-header X-OVN-Network-ID c2ce21d9-e711-470f-89f6-0db58ded70b9
Dec 06 10:12:15 compute-0 ovn_metadata_agent[162262]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 06 10:12:15 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:12:15.917 162267 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-c2ce21d9-e711-470f-89f6-0db58ded70b9', 'env', 'PROCESS_TAG=haproxy-c2ce21d9-e711-470f-89f6-0db58ded70b9', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/c2ce21d9-e711-470f-89f6-0db58ded70b9.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
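Stripped of the neutron-rootwrap layer, the logged launch reduces to running haproxy against the generated config inside the metadata namespace. Note that in this deployment 'haproxy' resolves to the ovn_metadata_haproxy_wrapper mounted into the agent container (see the volume list in the healthcheck entry later in this log), which is why a podman container appears moments later.

```python
# Sketch only: the logged launch with the rootwrap wrapper removed.
import subprocess

NET = 'c2ce21d9-e711-470f-89f6-0db58ded70b9'
subprocess.run(
    ['ip', 'netns', 'exec', 'ovnmeta-%s' % NET,
     'env', 'PROCESS_TAG=haproxy-%s' % NET,
     'haproxy', '-f',
     '/var/lib/neutron/ovn-metadata-proxy/%s.conf' % NET],
    check=True)
```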
Dec 06 10:12:15 compute-0 nova_compute[254819]: 2025-12-06 10:12:15.953 254824 DEBUG nova.compute.manager [req-226530a5-8c8e-474c-97fc-3f170d512b65 req-e6517a8d-35ba-4281-89f0-e8f812fa2956 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971] Received event network-vif-plugged-4c8ce68f-8ad3-4266-aa7d-5ad833b39c1e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 10:12:15 compute-0 nova_compute[254819]: 2025-12-06 10:12:15.953 254824 DEBUG oslo_concurrency.lockutils [req-226530a5-8c8e-474c-97fc-3f170d512b65 req-e6517a8d-35ba-4281-89f0-e8f812fa2956 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Acquiring lock "38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:12:15 compute-0 nova_compute[254819]: 2025-12-06 10:12:15.953 254824 DEBUG oslo_concurrency.lockutils [req-226530a5-8c8e-474c-97fc-3f170d512b65 req-e6517a8d-35ba-4281-89f0-e8f812fa2956 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Lock "38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:12:15 compute-0 nova_compute[254819]: 2025-12-06 10:12:15.954 254824 DEBUG oslo_concurrency.lockutils [req-226530a5-8c8e-474c-97fc-3f170d512b65 req-e6517a8d-35ba-4281-89f0-e8f812fa2956 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Lock "38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:12:15 compute-0 nova_compute[254819]: 2025-12-06 10:12:15.954 254824 DEBUG nova.compute.manager [req-226530a5-8c8e-474c-97fc-3f170d512b65 req-e6517a8d-35ba-4281-89f0-e8f812fa2956 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971] Processing event network-vif-plugged-4c8ce68f-8ad3-4266-aa7d-5ad833b39c1e _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
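The lock acquire/release churn above is nova popping the network-vif-plugged event that the spawn thread is blocked on. A toy analogue of that wait-and-signal pattern, not nova's code; the 300-second figure stands in for nova's vif_plugging_timeout default.

```python
# Toy analogue: the spawn path blocks until the external-event handler
# above pops network-vif-plugged for this instance's port.
import threading

PORT = '4c8ce68f-8ad3-4266-aa7d-5ad833b39c1e'
vif_plugged = threading.Event()

def external_instance_event(name):
    if name == 'network-vif-plugged-%s' % PORT:
        vif_plugged.set()  # roughly what _pop_event achieves

def finish_spawn():
    # ... VIFs plugged, guest defined; now wait for neutron's confirmation ...
    if not vif_plugged.wait(timeout=300):  # cf. nova's vif_plugging_timeout
        raise RuntimeError('timed out waiting for network-vif-plugged')
```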
Dec 06 10:12:16 compute-0 nova_compute[254819]: 2025-12-06 10:12:16.137 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:12:16 compute-0 nova_compute[254819]: 2025-12-06 10:12:16.221 254824 DEBUG nova.virt.driver [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] Emitting event <LifecycleEvent: 1765015936.2212744, 38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 10:12:16 compute-0 nova_compute[254819]: 2025-12-06 10:12:16.222 254824 INFO nova.compute.manager [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] [instance: 38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971] VM Started (Lifecycle Event)
Dec 06 10:12:16 compute-0 nova_compute[254819]: 2025-12-06 10:12:16.224 254824 DEBUG nova.compute.manager [None req-f695b3bd-187c-4c70-9cae-541f02555ed2 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 06 10:12:16 compute-0 nova_compute[254819]: 2025-12-06 10:12:16.227 254824 DEBUG nova.virt.libvirt.driver [None req-f695b3bd-187c-4c70-9cae-541f02555ed2 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 06 10:12:16 compute-0 nova_compute[254819]: 2025-12-06 10:12:16.230 254824 INFO nova.virt.libvirt.driver [-] [instance: 38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971] Instance spawned successfully.
Dec 06 10:12:16 compute-0 nova_compute[254819]: 2025-12-06 10:12:16.230 254824 DEBUG nova.virt.libvirt.driver [None req-f695b3bd-187c-4c70-9cae-541f02555ed2 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 06 10:12:16 compute-0 nova_compute[254819]: 2025-12-06 10:12:16.258 254824 DEBUG nova.compute.manager [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] [instance: 38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 10:12:16 compute-0 nova_compute[254819]: 2025-12-06 10:12:16.262 254824 DEBUG nova.virt.libvirt.driver [None req-f695b3bd-187c-4c70-9cae-541f02555ed2 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 10:12:16 compute-0 nova_compute[254819]: 2025-12-06 10:12:16.262 254824 DEBUG nova.virt.libvirt.driver [None req-f695b3bd-187c-4c70-9cae-541f02555ed2 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 10:12:16 compute-0 nova_compute[254819]: 2025-12-06 10:12:16.263 254824 DEBUG nova.virt.libvirt.driver [None req-f695b3bd-187c-4c70-9cae-541f02555ed2 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 10:12:16 compute-0 nova_compute[254819]: 2025-12-06 10:12:16.263 254824 DEBUG nova.virt.libvirt.driver [None req-f695b3bd-187c-4c70-9cae-541f02555ed2 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 10:12:16 compute-0 nova_compute[254819]: 2025-12-06 10:12:16.263 254824 DEBUG nova.virt.libvirt.driver [None req-f695b3bd-187c-4c70-9cae-541f02555ed2 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 10:12:16 compute-0 nova_compute[254819]: 2025-12-06 10:12:16.264 254824 DEBUG nova.virt.libvirt.driver [None req-f695b3bd-187c-4c70-9cae-541f02555ed2 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
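The driver is backfilling defaults (hw_disk_bus=virtio, hw_cdrom_bus=sata, and so on) because the image defines none of these properties. Pinning them on the image makes the choice explicit rather than driver-version dependent; a sketch with openstacksdk, assuming a configured clouds.yaml ('mycloud' is a placeholder) and that unknown keyword arguments on update_image are stored as image properties:

```python
# Sketch only: pin the same bus defaults on the logged image.
import openstack

conn = openstack.connect(cloud='mycloud')  # placeholder cloud name
conn.image.update_image(
    '9489b8a5-a798-4e26-87f9-59bb1eb2e6fd',
    hw_cdrom_bus='sata', hw_disk_bus='virtio', hw_input_bus='usb',
    hw_pointer_model='usbtablet', hw_video_model='virtio',
    hw_vif_model='virtio')
```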
Dec 06 10:12:16 compute-0 nova_compute[254819]: 2025-12-06 10:12:16.267 254824 DEBUG nova.compute.manager [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] [instance: 38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 10:12:16 compute-0 nova_compute[254819]: 2025-12-06 10:12:16.308 254824 INFO nova.compute.manager [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] [instance: 38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 06 10:12:16 compute-0 nova_compute[254819]: 2025-12-06 10:12:16.308 254824 DEBUG nova.virt.driver [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] Emitting event <LifecycleEvent: 1765015936.2214634, 38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 10:12:16 compute-0 nova_compute[254819]: 2025-12-06 10:12:16.309 254824 INFO nova.compute.manager [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] [instance: 38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971] VM Paused (Lifecycle Event)
Dec 06 10:12:16 compute-0 podman[271943]: 2025-12-06 10:12:16.317760144 +0000 UTC m=+0.056574812 container create 97ddbd51aee0b14d02334bcb69777ed59598f44b021709ab3326fb5492771b24 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=neutron-haproxy-ovnmeta-c2ce21d9-e711-470f-89f6-0db58ded70b9, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 06 10:12:16 compute-0 nova_compute[254819]: 2025-12-06 10:12:16.346 254824 DEBUG nova.compute.manager [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] [instance: 38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 10:12:16 compute-0 nova_compute[254819]: 2025-12-06 10:12:16.351 254824 INFO nova.compute.manager [None req-f695b3bd-187c-4c70-9cae-541f02555ed2 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971] Took 7.55 seconds to spawn the instance on the hypervisor.
Dec 06 10:12:16 compute-0 nova_compute[254819]: 2025-12-06 10:12:16.351 254824 DEBUG nova.compute.manager [None req-f695b3bd-187c-4c70-9cae-541f02555ed2 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 10:12:16 compute-0 nova_compute[254819]: 2025-12-06 10:12:16.355 254824 DEBUG nova.virt.driver [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] Emitting event <LifecycleEvent: 1765015936.2263675, 38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 10:12:16 compute-0 nova_compute[254819]: 2025-12-06 10:12:16.356 254824 INFO nova.compute.manager [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] [instance: 38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971] VM Resumed (Lifecycle Event)
Dec 06 10:12:16 compute-0 systemd[1]: Started libpod-conmon-97ddbd51aee0b14d02334bcb69777ed59598f44b021709ab3326fb5492771b24.scope.
Dec 06 10:12:16 compute-0 sshd-session[271427]: Connection closed by 3.137.73.221 port 47866 [preauth]
Dec 06 10:12:16 compute-0 podman[271943]: 2025-12-06 10:12:16.286990812 +0000 UTC m=+0.025805520 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3
Dec 06 10:12:16 compute-0 systemd[1]: Started libcrun container.
Dec 06 10:12:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d6d1a25401d5ac4822de4fb50bc3620447da04f31525bd103aac8567c3654c9e/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 06 10:12:16 compute-0 podman[271943]: 2025-12-06 10:12:16.416211935 +0000 UTC m=+0.155026623 container init 97ddbd51aee0b14d02334bcb69777ed59598f44b021709ab3326fb5492771b24 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=neutron-haproxy-ovnmeta-c2ce21d9-e711-470f-89f6-0db58ded70b9, org.label-schema.build-date=20251125, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3)
Dec 06 10:12:16 compute-0 nova_compute[254819]: 2025-12-06 10:12:16.423 254824 DEBUG nova.compute.manager [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] [instance: 38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 10:12:16 compute-0 podman[271943]: 2025-12-06 10:12:16.42346043 +0000 UTC m=+0.162275098 container start 97ddbd51aee0b14d02334bcb69777ed59598f44b021709ab3326fb5492771b24 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=neutron-haproxy-ovnmeta-c2ce21d9-e711-470f-89f6-0db58ded70b9, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Dec 06 10:12:16 compute-0 nova_compute[254819]: 2025-12-06 10:12:16.427 254824 DEBUG nova.compute.manager [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] [instance: 38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 10:12:16 compute-0 neutron-haproxy-ovnmeta-c2ce21d9-e711-470f-89f6-0db58ded70b9[271958]: [NOTICE]   (271962) : New worker (271964) forked
Dec 06 10:12:16 compute-0 neutron-haproxy-ovnmeta-c2ce21d9-e711-470f-89f6-0db58ded70b9[271958]: [NOTICE]   (271962) : Loading success.
Dec 06 10:12:16 compute-0 nova_compute[254819]: 2025-12-06 10:12:16.456 254824 INFO nova.compute.manager [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] [instance: 38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 06 10:12:16 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v935: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 06 10:12:16 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:12:16 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c00a7e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:12:16 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:12:16 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 06 10:12:16 compute-0 nova_compute[254819]: 2025-12-06 10:12:16.693 254824 INFO nova.compute.manager [None req-f695b3bd-187c-4c70-9cae-541f02555ed2 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971] Took 8.76 seconds to build instance.
Dec 06 10:12:16 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:12:16 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c00a7e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:12:16 compute-0 nova_compute[254819]: 2025-12-06 10:12:16.725 254824 DEBUG oslo_concurrency.lockutils [None req-f695b3bd-187c-4c70-9cae-541f02555ed2 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lock "38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.860s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:12:17 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:12:17 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c00a7e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:12:17 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:12:17.641Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec 06 10:12:17 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:12:17.642Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
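Both alertmanager webhook targets (compute-1 and compute-2 on 8443) are timing out, so the ceph-dashboard notification is dropped after retries. For triage, a throwaway plain-HTTP receiver can confirm whether alertmanager can reach a listener at all; the real prometheus_receiver endpoint on 8443 may expect TLS, so treat this strictly as a network-path probe.

```python
# Throwaway connectivity probe, not the real ceph-dashboard receiver:
# accept the alertmanager POST and dump its JSON payload.
from http.server import BaseHTTPRequestHandler, HTTPServer

class Receiver(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get('Content-Length', 0))
        print(self.rfile.read(length).decode())  # alert payload
        self.send_response(200)
        self.end_headers()

HTTPServer(('0.0.0.0', 8443), Receiver).serve_forever()
```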
Dec 06 10:12:17 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:12:17 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:12:17 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:12:17.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:12:17 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:12:17 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:12:17 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:12:17.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:12:17 compute-0 ceph-mon[74327]: pgmap v935: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 06 10:12:18 compute-0 nova_compute[254819]: 2025-12-06 10:12:18.055 254824 DEBUG nova.compute.manager [req-e5cc6175-653e-4dcc-a8f5-072b895264c4 req-9ed28b3c-e90c-409e-89cb-56788b408daa d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971] Received event network-vif-plugged-4c8ce68f-8ad3-4266-aa7d-5ad833b39c1e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 10:12:18 compute-0 nova_compute[254819]: 2025-12-06 10:12:18.056 254824 DEBUG oslo_concurrency.lockutils [req-e5cc6175-653e-4dcc-a8f5-072b895264c4 req-9ed28b3c-e90c-409e-89cb-56788b408daa d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Acquiring lock "38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:12:18 compute-0 nova_compute[254819]: 2025-12-06 10:12:18.056 254824 DEBUG oslo_concurrency.lockutils [req-e5cc6175-653e-4dcc-a8f5-072b895264c4 req-9ed28b3c-e90c-409e-89cb-56788b408daa d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Lock "38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:12:18 compute-0 nova_compute[254819]: 2025-12-06 10:12:18.057 254824 DEBUG oslo_concurrency.lockutils [req-e5cc6175-653e-4dcc-a8f5-072b895264c4 req-9ed28b3c-e90c-409e-89cb-56788b408daa d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Lock "38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:12:18 compute-0 nova_compute[254819]: 2025-12-06 10:12:18.057 254824 DEBUG nova.compute.manager [req-e5cc6175-653e-4dcc-a8f5-072b895264c4 req-9ed28b3c-e90c-409e-89cb-56788b408daa d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971] No waiting events found dispatching network-vif-plugged-4c8ce68f-8ad3-4266-aa7d-5ad833b39c1e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 10:12:18 compute-0 nova_compute[254819]: 2025-12-06 10:12:18.058 254824 WARNING nova.compute.manager [req-e5cc6175-653e-4dcc-a8f5-072b895264c4 req-9ed28b3c-e90c-409e-89cb-56788b408daa d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971] Received unexpected event network-vif-plugged-4c8ce68f-8ad3-4266-aa7d-5ad833b39c1e for instance with vm_state active and task_state None.
Dec 06 10:12:18 compute-0 podman[271975]: 2025-12-06 10:12:18.428531428 +0000 UTC m=+0.056596514 container health_status ec1e66d2175087ad7a28a9a6b5a784e30128df3f5e268959d009e364cd5120f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent)
Dec 06 10:12:18 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v936: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 102 op/s
Dec 06 10:12:18 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:12:18 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6608004350 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:12:18 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:12:18 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65dc003e90 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:12:19 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:12:19.021Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 06 10:12:19 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:12:19 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d4002e00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:12:19 compute-0 nova_compute[254819]: 2025-12-06 10:12:19.664 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:12:19 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:12:19 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:12:19 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:12:19.668 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:12:19 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:12:19 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 06 10:12:19 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:12:19 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 06 10:12:19 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:12:19 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:12:19 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:12:19.685 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:12:19 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:12:19 compute-0 ceph-mon[74327]: pgmap v936: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 102 op/s
Dec 06 10:12:20 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v937: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 102 op/s
Dec 06 10:12:20 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:12:20 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c00a7e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:12:20 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:12:20 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6608004350 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:12:20 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:12:20] "GET /metrics HTTP/1.1" 200 48458 "" "Prometheus/2.51.0"
Dec 06 10:12:20 compute-0 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:12:20] "GET /metrics HTTP/1.1" 200 48458 "" "Prometheus/2.51.0"
Dec 06 10:12:21 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:12:21 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65dc003e90 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:12:21 compute-0 nova_compute[254819]: 2025-12-06 10:12:21.139 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:12:21 compute-0 ovn_controller[152417]: 2025-12-06T10:12:21Z|00096|binding|INFO|Releasing lport 52d33d15-d96f-4c26-a63e-0415fca27e6a from this chassis (sb_readonly=0)
Dec 06 10:12:21 compute-0 nova_compute[254819]: 2025-12-06 10:12:21.371 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:12:21 compute-0 NetworkManager[48882]: <info>  [1765015941.3755] manager: (patch-br-int-to-provnet-c81e973e-7ff9-4cd2-9994-daf87649321f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/63)
Dec 06 10:12:21 compute-0 NetworkManager[48882]: <info>  [1765015941.3765] manager: (patch-provnet-c81e973e-7ff9-4cd2-9994-daf87649321f-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/64)
Dec 06 10:12:21 compute-0 ovn_controller[152417]: 2025-12-06T10:12:21Z|00097|binding|INFO|Releasing lport 52d33d15-d96f-4c26-a63e-0415fca27e6a from this chassis (sb_readonly=0)
Dec 06 10:12:21 compute-0 nova_compute[254819]: 2025-12-06 10:12:21.438 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:12:21 compute-0 nova_compute[254819]: 2025-12-06 10:12:21.447 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:12:21 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:12:21 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:12:21 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:12:21.670 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:12:21 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:12:21 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:12:21 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:12:21.687 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:12:21 compute-0 ceph-mon[74327]: pgmap v937: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 102 op/s
Dec 06 10:12:22 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v938: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 102 op/s
Dec 06 10:12:22 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:12:22 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d4002e00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:12:22 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:12:22 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Dec 06 10:12:22 compute-0 nova_compute[254819]: 2025-12-06 10:12:22.703 254824 DEBUG nova.compute.manager [req-f1e3cb92-e040-4d63-ac0b-ae859b8b6058 req-c0b96a0c-f580-4692-9d47-950fb602745b d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971] Received event network-changed-4c8ce68f-8ad3-4266-aa7d-5ad833b39c1e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 10:12:22 compute-0 nova_compute[254819]: 2025-12-06 10:12:22.704 254824 DEBUG nova.compute.manager [req-f1e3cb92-e040-4d63-ac0b-ae859b8b6058 req-c0b96a0c-f580-4692-9d47-950fb602745b d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971] Refreshing instance network info cache due to event network-changed-4c8ce68f-8ad3-4266-aa7d-5ad833b39c1e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 10:12:22 compute-0 nova_compute[254819]: 2025-12-06 10:12:22.704 254824 DEBUG oslo_concurrency.lockutils [req-f1e3cb92-e040-4d63-ac0b-ae859b8b6058 req-c0b96a0c-f580-4692-9d47-950fb602745b d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Acquiring lock "refresh_cache-38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 10:12:22 compute-0 nova_compute[254819]: 2025-12-06 10:12:22.704 254824 DEBUG oslo_concurrency.lockutils [req-f1e3cb92-e040-4d63-ac0b-ae859b8b6058 req-c0b96a0c-f580-4692-9d47-950fb602745b d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Acquired lock "refresh_cache-38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 10:12:22 compute-0 nova_compute[254819]: 2025-12-06 10:12:22.704 254824 DEBUG nova.network.neutron [req-f1e3cb92-e040-4d63-ac0b-ae859b8b6058 req-c0b96a0c-f580-4692-9d47-950fb602745b d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971] Refreshing network info cache for port 4c8ce68f-8ad3-4266-aa7d-5ad833b39c1e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 06 10:12:22 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:12:22 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c00a7e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:12:22 compute-0 nova_compute[254819]: 2025-12-06 10:12:22.918 254824 DEBUG oslo_concurrency.lockutils [None req-ee600647-5941-47c6-be62-219da0f84046 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Acquiring lock "38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:12:22 compute-0 nova_compute[254819]: 2025-12-06 10:12:22.918 254824 DEBUG oslo_concurrency.lockutils [None req-ee600647-5941-47c6-be62-219da0f84046 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lock "38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:12:22 compute-0 nova_compute[254819]: 2025-12-06 10:12:22.919 254824 DEBUG oslo_concurrency.lockutils [None req-ee600647-5941-47c6-be62-219da0f84046 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Acquiring lock "38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:12:22 compute-0 nova_compute[254819]: 2025-12-06 10:12:22.919 254824 DEBUG oslo_concurrency.lockutils [None req-ee600647-5941-47c6-be62-219da0f84046 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lock "38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:12:22 compute-0 nova_compute[254819]: 2025-12-06 10:12:22.919 254824 DEBUG oslo_concurrency.lockutils [None req-ee600647-5941-47c6-be62-219da0f84046 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lock "38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:12:22 compute-0 nova_compute[254819]: 2025-12-06 10:12:22.920 254824 INFO nova.compute.manager [None req-ee600647-5941-47c6-be62-219da0f84046 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971] Terminating instance
Dec 06 10:12:22 compute-0 nova_compute[254819]: 2025-12-06 10:12:22.922 254824 DEBUG nova.compute.manager [None req-ee600647-5941-47c6-be62-219da0f84046 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 06 10:12:22 compute-0 kernel: tap4c8ce68f-8a (unregistering): left promiscuous mode
Dec 06 10:12:22 compute-0 NetworkManager[48882]: <info>  [1765015942.9617] device (tap4c8ce68f-8a): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 06 10:12:22 compute-0 nova_compute[254819]: 2025-12-06 10:12:22.975 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:12:22 compute-0 ovn_controller[152417]: 2025-12-06T10:12:22Z|00098|binding|INFO|Releasing lport 4c8ce68f-8ad3-4266-aa7d-5ad833b39c1e from this chassis (sb_readonly=0)
Dec 06 10:12:22 compute-0 ovn_controller[152417]: 2025-12-06T10:12:22Z|00099|binding|INFO|Setting lport 4c8ce68f-8ad3-4266-aa7d-5ad833b39c1e down in Southbound
Dec 06 10:12:22 compute-0 ovn_controller[152417]: 2025-12-06T10:12:22Z|00100|binding|INFO|Removing iface tap4c8ce68f-8a ovn-installed in OVS
Dec 06 10:12:22 compute-0 nova_compute[254819]: 2025-12-06 10:12:22.978 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:12:22 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:12:22.987 162267 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6f:25:fa 10.100.0.9'], port_security=['fa:16:3e:6f:25:fa 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-TestNetworkBasicOps-1269654245', 'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c2ce21d9-e711-470f-89f6-0db58ded70b9', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-TestNetworkBasicOps-1269654245', 'neutron:project_id': '92b402c8d3e2476abc98be42a1e6d34e', 'neutron:revision_number': '4', 'neutron:security_group_ids': '1e7cc18e-31f3-4bdb-821d-1683a210c530', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.198'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=093e5b40-935f-42c8-a85f-385c1c7048be, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f70c28558b0>], logical_port=4c8ce68f-8ad3-4266-aa7d-5ad833b39c1e) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f70c28558b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 10:12:22 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:12:22.988 162267 INFO neutron.agent.ovn.metadata.agent [-] Port 4c8ce68f-8ad3-4266-aa7d-5ad833b39c1e in datapath c2ce21d9-e711-470f-89f6-0db58ded70b9 unbound from our chassis
Dec 06 10:12:22 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:12:22.990 162267 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network c2ce21d9-e711-470f-89f6-0db58ded70b9, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 06 10:12:22 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:12:22.991 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[9bf214a5-8e2f-49a7-83c0-d03e22d810f3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:12:22 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:12:22.991 162267 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-c2ce21d9-e711-470f-89f6-0db58ded70b9 namespace which is not needed anymore
Dec 06 10:12:23 compute-0 nova_compute[254819]: 2025-12-06 10:12:23.004 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:12:23 compute-0 systemd[1]: machine-qemu\x2d5\x2dinstance\x2d00000008.scope: Deactivated successfully.
Dec 06 10:12:23 compute-0 systemd[1]: machine-qemu\x2d5\x2dinstance\x2d00000008.scope: Consumed 7.556s CPU time.
Dec 06 10:12:23 compute-0 systemd-machined[216202]: Machine qemu-5-instance-00000008 terminated.
Dec 06 10:12:23 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:12:23 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6608004350 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:12:23 compute-0 neutron-haproxy-ovnmeta-c2ce21d9-e711-470f-89f6-0db58ded70b9[271958]: [NOTICE]   (271962) : haproxy version is 2.8.14-c23fe91
Dec 06 10:12:23 compute-0 neutron-haproxy-ovnmeta-c2ce21d9-e711-470f-89f6-0db58ded70b9[271958]: [NOTICE]   (271962) : path to executable is /usr/sbin/haproxy
Dec 06 10:12:23 compute-0 neutron-haproxy-ovnmeta-c2ce21d9-e711-470f-89f6-0db58ded70b9[271958]: [WARNING]  (271962) : Exiting Master process...
Dec 06 10:12:23 compute-0 neutron-haproxy-ovnmeta-c2ce21d9-e711-470f-89f6-0db58ded70b9[271958]: [WARNING]  (271962) : Exiting Master process...
Dec 06 10:12:23 compute-0 neutron-haproxy-ovnmeta-c2ce21d9-e711-470f-89f6-0db58ded70b9[271958]: [ALERT]    (271962) : Current worker (271964) exited with code 143 (Terminated)
Dec 06 10:12:23 compute-0 neutron-haproxy-ovnmeta-c2ce21d9-e711-470f-89f6-0db58ded70b9[271958]: [WARNING]  (271962) : All workers exited. Exiting... (0)
Dec 06 10:12:23 compute-0 systemd[1]: libpod-97ddbd51aee0b14d02334bcb69777ed59598f44b021709ab3326fb5492771b24.scope: Deactivated successfully.
Dec 06 10:12:23 compute-0 podman[272024]: 2025-12-06 10:12:23.130858233 +0000 UTC m=+0.044018557 container died 97ddbd51aee0b14d02334bcb69777ed59598f44b021709ab3326fb5492771b24 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=neutron-haproxy-ovnmeta-c2ce21d9-e711-470f-89f6-0db58ded70b9, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 06 10:12:23 compute-0 nova_compute[254819]: 2025-12-06 10:12:23.155 254824 INFO nova.virt.libvirt.driver [-] [instance: 38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971] Instance destroyed successfully.
Dec 06 10:12:23 compute-0 nova_compute[254819]: 2025-12-06 10:12:23.156 254824 DEBUG nova.objects.instance [None req-ee600647-5941-47c6-be62-219da0f84046 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lazy-loading 'resources' on Instance uuid 38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 10:12:23 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-97ddbd51aee0b14d02334bcb69777ed59598f44b021709ab3326fb5492771b24-userdata-shm.mount: Deactivated successfully.
Dec 06 10:12:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-d6d1a25401d5ac4822de4fb50bc3620447da04f31525bd103aac8567c3654c9e-merged.mount: Deactivated successfully.
Dec 06 10:12:23 compute-0 podman[272024]: 2025-12-06 10:12:23.175037324 +0000 UTC m=+0.088197658 container cleanup 97ddbd51aee0b14d02334bcb69777ed59598f44b021709ab3326fb5492771b24 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=neutron-haproxy-ovnmeta-c2ce21d9-e711-470f-89f6-0db58ded70b9, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 10:12:23 compute-0 nova_compute[254819]: 2025-12-06 10:12:23.182 254824 DEBUG nova.virt.libvirt.vif [None req-ee600647-5941-47c6-be62-219da0f84046 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T10:12:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-284526286',display_name='tempest-TestNetworkBasicOps-server-284526286',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-284526286',id=8,image_ref='9489b8a5-a798-4e26-87f9-59bb1eb2e6fd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBH5WKiUV8xkMkAsnSbmzedlPzfsh0aXQ19j5QoS/ZDmv+Vks7yaRYH6rFdpbJ+HzL9PhlMkojs6PG37wLmd0XymAGnK31KjajjkwaxDm0frZ4gN7dvsIumy7dBgoLu6Aiw==',key_name='tempest-TestNetworkBasicOps-1751669676',keypairs=<?>,launch_index=0,launched_at=2025-12-06T10:12:16Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='92b402c8d3e2476abc98be42a1e6d34e',ramdisk_id='',reservation_id='r-4hshqkm6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='9489b8a5-a798-4e26-87f9-59bb1eb2e6fd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1971100882',owner_user_name='tempest-TestNetworkBasicOps-1971100882-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T10:12:16Z,user_data=None,user_id='03615580775245e6ae335ee9d785611f',uuid=38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "4c8ce68f-8ad3-4266-aa7d-5ad833b39c1e", "address": "fa:16:3e:6f:25:fa", "network": {"id": "c2ce21d9-e711-470f-89f6-0db58ded70b9", "bridge": "br-int", "label": "tempest-network-smoke--1291548226", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4c8ce68f-8a", "ovs_interfaceid": "4c8ce68f-8ad3-4266-aa7d-5ad833b39c1e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 06 10:12:23 compute-0 nova_compute[254819]: 2025-12-06 10:12:23.182 254824 DEBUG nova.network.os_vif_util [None req-ee600647-5941-47c6-be62-219da0f84046 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Converting VIF {"id": "4c8ce68f-8ad3-4266-aa7d-5ad833b39c1e", "address": "fa:16:3e:6f:25:fa", "network": {"id": "c2ce21d9-e711-470f-89f6-0db58ded70b9", "bridge": "br-int", "label": "tempest-network-smoke--1291548226", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4c8ce68f-8a", "ovs_interfaceid": "4c8ce68f-8ad3-4266-aa7d-5ad833b39c1e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 10:12:23 compute-0 nova_compute[254819]: 2025-12-06 10:12:23.183 254824 DEBUG nova.network.os_vif_util [None req-ee600647-5941-47c6-be62-219da0f84046 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6f:25:fa,bridge_name='br-int',has_traffic_filtering=True,id=4c8ce68f-8ad3-4266-aa7d-5ad833b39c1e,network=Network(c2ce21d9-e711-470f-89f6-0db58ded70b9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap4c8ce68f-8a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 10:12:23 compute-0 nova_compute[254819]: 2025-12-06 10:12:23.183 254824 DEBUG os_vif [None req-ee600647-5941-47c6-be62-219da0f84046 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:6f:25:fa,bridge_name='br-int',has_traffic_filtering=True,id=4c8ce68f-8ad3-4266-aa7d-5ad833b39c1e,network=Network(c2ce21d9-e711-470f-89f6-0db58ded70b9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap4c8ce68f-8a') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 06 10:12:23 compute-0 nova_compute[254819]: 2025-12-06 10:12:23.185 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:12:23 compute-0 nova_compute[254819]: 2025-12-06 10:12:23.185 254824 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4c8ce68f-8a, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 10:12:23 compute-0 nova_compute[254819]: 2025-12-06 10:12:23.187 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:12:23 compute-0 nova_compute[254819]: 2025-12-06 10:12:23.188 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:12:23 compute-0 nova_compute[254819]: 2025-12-06 10:12:23.191 254824 INFO os_vif [None req-ee600647-5941-47c6-be62-219da0f84046 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:6f:25:fa,bridge_name='br-int',has_traffic_filtering=True,id=4c8ce68f-8ad3-4266-aa7d-5ad833b39c1e,network=Network(c2ce21d9-e711-470f-89f6-0db58ded70b9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap4c8ce68f-8a')
Dec 06 10:12:23 compute-0 systemd[1]: libpod-conmon-97ddbd51aee0b14d02334bcb69777ed59598f44b021709ab3326fb5492771b24.scope: Deactivated successfully.
Dec 06 10:12:23 compute-0 podman[272063]: 2025-12-06 10:12:23.244372217 +0000 UTC m=+0.042857276 container remove 97ddbd51aee0b14d02334bcb69777ed59598f44b021709ab3326fb5492771b24 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=neutron-haproxy-ovnmeta-c2ce21d9-e711-470f-89f6-0db58ded70b9, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Dec 06 10:12:23 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:12:23.251 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[62e5d06d-739a-4886-a56d-9c38e551312d]: (4, ('Sat Dec  6 10:12:23 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-c2ce21d9-e711-470f-89f6-0db58ded70b9 (97ddbd51aee0b14d02334bcb69777ed59598f44b021709ab3326fb5492771b24)\n97ddbd51aee0b14d02334bcb69777ed59598f44b021709ab3326fb5492771b24\nSat Dec  6 10:12:23 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-c2ce21d9-e711-470f-89f6-0db58ded70b9 (97ddbd51aee0b14d02334bcb69777ed59598f44b021709ab3326fb5492771b24)\n97ddbd51aee0b14d02334bcb69777ed59598f44b021709ab3326fb5492771b24\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:12:23 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:12:23.253 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[05c73cea-9d36-4475-b186-bb73f6f1b33d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:12:23 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:12:23.254 162267 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc2ce21d9-e0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 10:12:23 compute-0 kernel: tapc2ce21d9-e0: left promiscuous mode
Dec 06 10:12:23 compute-0 nova_compute[254819]: 2025-12-06 10:12:23.256 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:12:23 compute-0 nova_compute[254819]: 2025-12-06 10:12:23.268 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:12:23 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:12:23.272 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[b87d7559-5820-49f5-8dfc-d1473cba12d4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:12:23 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:12:23.292 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[cddc806a-ec8d-41dd-b2fe-fe4de853bf4e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:12:23 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:12:23.295 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[3b97e147-07e4-4f85-b9c1-2b1f88844b3d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:12:23 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:12:23.308 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[156285ea-ecbd-46c0-ac8d-51b1eaec11b5]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 430614, 'reachable_time': 25478, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 272096, 'error': None, 'target': 'ovnmeta-c2ce21d9-e711-470f-89f6-0db58ded70b9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:12:23 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:12:23.311 162385 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-c2ce21d9-e711-470f-89f6-0db58ded70b9 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 06 10:12:23 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:12:23.311 162385 DEBUG oslo.privsep.daemon [-] privsep: reply[d053615f-5fdb-4089-9f15-4df63481fb7b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:12:23 compute-0 systemd[1]: run-netns-ovnmeta\x2dc2ce21d9\x2de711\x2d470f\x2d89f6\x2d0db58ded70b9.mount: Deactivated successfully.
Dec 06 10:12:23 compute-0 nova_compute[254819]: 2025-12-06 10:12:23.549 254824 INFO nova.virt.libvirt.driver [None req-ee600647-5941-47c6-be62-219da0f84046 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971] Deleting instance files /var/lib/nova/instances/38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971_del
Dec 06 10:12:23 compute-0 nova_compute[254819]: 2025-12-06 10:12:23.551 254824 INFO nova.virt.libvirt.driver [None req-ee600647-5941-47c6-be62-219da0f84046 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971] Deletion of /var/lib/nova/instances/38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971_del complete
Dec 06 10:12:23 compute-0 nova_compute[254819]: 2025-12-06 10:12:23.610 254824 INFO nova.compute.manager [None req-ee600647-5941-47c6-be62-219da0f84046 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971] Took 0.69 seconds to destroy the instance on the hypervisor.
Dec 06 10:12:23 compute-0 nova_compute[254819]: 2025-12-06 10:12:23.611 254824 DEBUG oslo.service.loopingcall [None req-ee600647-5941-47c6-be62-219da0f84046 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 06 10:12:23 compute-0 nova_compute[254819]: 2025-12-06 10:12:23.611 254824 DEBUG nova.compute.manager [-] [instance: 38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 06 10:12:23 compute-0 nova_compute[254819]: 2025-12-06 10:12:23.611 254824 DEBUG nova.network.neutron [-] [instance: 38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 06 10:12:23 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:12:23 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:12:23 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:12:23.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:12:23 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:12:23 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:12:23 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:12:23.691 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:12:23 compute-0 ceph-mon[74327]: pgmap v938: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 102 op/s
Dec 06 10:12:23 compute-0 ceph-mgr[74618]: [balancer INFO root] Optimize plan auto_2025-12-06_10:12:23
Dec 06 10:12:23 compute-0 ceph-mgr[74618]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 06 10:12:23 compute-0 ceph-mgr[74618]: [balancer INFO root] do_upmap
Dec 06 10:12:23 compute-0 ceph-mgr[74618]: [balancer INFO root] pools ['volumes', 'cephfs.cephfs.data', '.nfs', 'default.rgw.meta', 'vms', 'images', 'backups', '.rgw.root', 'cephfs.cephfs.meta', 'default.rgw.control', 'default.rgw.log', '.mgr']
Dec 06 10:12:23 compute-0 ceph-mgr[74618]: [balancer INFO root] prepared 0/10 upmap changes
Dec 06 10:12:23 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 06 10:12:23 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 10:12:24 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 10:12:24 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 10:12:24 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 10:12:24 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 10:12:24 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 10:12:24 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 10:12:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] _maybe_adjust
Dec 06 10:12:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:12:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 06 10:12:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:12:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.00034841348814872695 of space, bias 1.0, pg target 0.10452404644461809 quantized to 32 (current 32)
Dec 06 10:12:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:12:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 10:12:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:12:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 10:12:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:12:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Dec 06 10:12:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:12:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Dec 06 10:12:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:12:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 10:12:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:12:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec 06 10:12:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:12:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 06 10:12:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:12:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec 06 10:12:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:12:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 10:12:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:12:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 06 10:12:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 06 10:12:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 10:12:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 10:12:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 10:12:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 10:12:24 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v939: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 104 op/s
Dec 06 10:12:24 compute-0 nova_compute[254819]: 2025-12-06 10:12:24.511 254824 DEBUG nova.network.neutron [req-f1e3cb92-e040-4d63-ac0b-ae859b8b6058 req-c0b96a0c-f580-4692-9d47-950fb602745b d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971] Updated VIF entry in instance network info cache for port 4c8ce68f-8ad3-4266-aa7d-5ad833b39c1e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 06 10:12:24 compute-0 nova_compute[254819]: 2025-12-06 10:12:24.512 254824 DEBUG nova.network.neutron [req-f1e3cb92-e040-4d63-ac0b-ae859b8b6058 req-c0b96a0c-f580-4692-9d47-950fb602745b d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971] Updating instance_info_cache with network_info: [{"id": "4c8ce68f-8ad3-4266-aa7d-5ad833b39c1e", "address": "fa:16:3e:6f:25:fa", "network": {"id": "c2ce21d9-e711-470f-89f6-0db58ded70b9", "bridge": "br-int", "label": "tempest-network-smoke--1291548226", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.198", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4c8ce68f-8a", "ovs_interfaceid": "4c8ce68f-8ad3-4266-aa7d-5ad833b39c1e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 10:12:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 06 10:12:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 10:12:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 10:12:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 10:12:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 10:12:24 compute-0 nova_compute[254819]: 2025-12-06 10:12:24.540 254824 DEBUG oslo_concurrency.lockutils [req-f1e3cb92-e040-4d63-ac0b-ae859b8b6058 req-c0b96a0c-f580-4692-9d47-950fb602745b d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Releasing lock "refresh_cache-38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 10:12:24 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:12:24 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65dc003e90 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:12:24 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:12:24 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d4002e00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:12:24 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:12:24 compute-0 nova_compute[254819]: 2025-12-06 10:12:24.823 254824 DEBUG nova.compute.manager [req-218a54f4-c457-4a69-a036-7fe0267da5ff req-53f3d048-405e-4f16-8420-85b46c210569 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971] Received event network-vif-unplugged-4c8ce68f-8ad3-4266-aa7d-5ad833b39c1e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 10:12:24 compute-0 nova_compute[254819]: 2025-12-06 10:12:24.824 254824 DEBUG oslo_concurrency.lockutils [req-218a54f4-c457-4a69-a036-7fe0267da5ff req-53f3d048-405e-4f16-8420-85b46c210569 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Acquiring lock "38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:12:24 compute-0 nova_compute[254819]: 2025-12-06 10:12:24.824 254824 DEBUG oslo_concurrency.lockutils [req-218a54f4-c457-4a69-a036-7fe0267da5ff req-53f3d048-405e-4f16-8420-85b46c210569 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Lock "38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:12:24 compute-0 nova_compute[254819]: 2025-12-06 10:12:24.824 254824 DEBUG oslo_concurrency.lockutils [req-218a54f4-c457-4a69-a036-7fe0267da5ff req-53f3d048-405e-4f16-8420-85b46c210569 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Lock "38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:12:24 compute-0 nova_compute[254819]: 2025-12-06 10:12:24.825 254824 DEBUG nova.compute.manager [req-218a54f4-c457-4a69-a036-7fe0267da5ff req-53f3d048-405e-4f16-8420-85b46c210569 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971] No waiting events found dispatching network-vif-unplugged-4c8ce68f-8ad3-4266-aa7d-5ad833b39c1e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 10:12:24 compute-0 nova_compute[254819]: 2025-12-06 10:12:24.825 254824 DEBUG nova.compute.manager [req-218a54f4-c457-4a69-a036-7fe0267da5ff req-53f3d048-405e-4f16-8420-85b46c210569 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971] Received event network-vif-unplugged-4c8ce68f-8ad3-4266-aa7d-5ad833b39c1e for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 06 10:12:24 compute-0 nova_compute[254819]: 2025-12-06 10:12:24.825 254824 DEBUG nova.compute.manager [req-218a54f4-c457-4a69-a036-7fe0267da5ff req-53f3d048-405e-4f16-8420-85b46c210569 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971] Received event network-vif-plugged-4c8ce68f-8ad3-4266-aa7d-5ad833b39c1e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 10:12:24 compute-0 nova_compute[254819]: 2025-12-06 10:12:24.826 254824 DEBUG oslo_concurrency.lockutils [req-218a54f4-c457-4a69-a036-7fe0267da5ff req-53f3d048-405e-4f16-8420-85b46c210569 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Acquiring lock "38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:12:24 compute-0 nova_compute[254819]: 2025-12-06 10:12:24.826 254824 DEBUG oslo_concurrency.lockutils [req-218a54f4-c457-4a69-a036-7fe0267da5ff req-53f3d048-405e-4f16-8420-85b46c210569 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Lock "38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:12:24 compute-0 nova_compute[254819]: 2025-12-06 10:12:24.826 254824 DEBUG oslo_concurrency.lockutils [req-218a54f4-c457-4a69-a036-7fe0267da5ff req-53f3d048-405e-4f16-8420-85b46c210569 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Lock "38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:12:24 compute-0 nova_compute[254819]: 2025-12-06 10:12:24.827 254824 DEBUG nova.compute.manager [req-218a54f4-c457-4a69-a036-7fe0267da5ff req-53f3d048-405e-4f16-8420-85b46c210569 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971] No waiting events found dispatching network-vif-plugged-4c8ce68f-8ad3-4266-aa7d-5ad833b39c1e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 10:12:24 compute-0 nova_compute[254819]: 2025-12-06 10:12:24.827 254824 WARNING nova.compute.manager [req-218a54f4-c457-4a69-a036-7fe0267da5ff req-53f3d048-405e-4f16-8420-85b46c210569 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971] Received unexpected event network-vif-plugged-4c8ce68f-8ad3-4266-aa7d-5ad833b39c1e for instance with vm_state active and task_state deleting.
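The Acquiring/acquired/released triplets in the nova_compute lines above are emitted by oslo.concurrency's named-lock helpers (lockutils.py:404/409/423). A sketch of the same pattern; the lock name and function here are placeholders, not values from this log:

    from oslo_concurrency import lockutils

    # Decorator form: callers sharing the lock name are serialized in-process.
    @lockutils.synchronized('instance-events-demo')
    def pop_event(pending, key):
        return pending.pop(key, None)

    # Context-manager form, matching the acquire/release pairs logged above.
    with lockutils.lock('instance-events-demo'):
        pass  # critical section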
Dec 06 10:12:24 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 10:12:25 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:12:25 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c00a7e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:12:25 compute-0 nova_compute[254819]: 2025-12-06 10:12:25.488 254824 DEBUG nova.network.neutron [-] [instance: 38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 10:12:25 compute-0 nova_compute[254819]: 2025-12-06 10:12:25.507 254824 INFO nova.compute.manager [-] [instance: 38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971] Took 1.90 seconds to deallocate network for instance.
Dec 06 10:12:25 compute-0 nova_compute[254819]: 2025-12-06 10:12:25.556 254824 DEBUG oslo_concurrency.lockutils [None req-ee600647-5941-47c6-be62-219da0f84046 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:12:25 compute-0 nova_compute[254819]: 2025-12-06 10:12:25.556 254824 DEBUG oslo_concurrency.lockutils [None req-ee600647-5941-47c6-be62-219da0f84046 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:12:25 compute-0 nova_compute[254819]: 2025-12-06 10:12:25.620 254824 DEBUG oslo_concurrency.processutils [None req-ee600647-5941-47c6-be62-219da0f84046 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 10:12:25 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:12:25 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:12:25 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:12:25.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:12:25 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:12:25 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:12:25 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:12:25.694 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:12:25 compute-0 ceph-mon[74327]: pgmap v939: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 104 op/s
Dec 06 10:12:26 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 06 10:12:26 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/453190227' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 10:12:26 compute-0 nova_compute[254819]: 2025-12-06 10:12:26.074 254824 DEBUG oslo_concurrency.processutils [None req-ee600647-5941-47c6-be62-219da0f84046 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
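To size RBD-backed disk inventory, nova shells out to ceph df, as the two processutils lines above show. A sketch that runs the same command and reads the cluster-wide and per-pool figures; the JSON field names follow recent Ceph releases and should be verified against the deployed version:

    import json
    import subprocess

    # Same command nova logs above; needs the openstack keyring on this host.
    out = subprocess.check_output(
        ['ceph', 'df', '--format=json',
         '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf'])
    df = json.loads(out)

    print(df['stats']['total_bytes'], df['stats']['total_avail_bytes'])
    for pool in df['pools']:
        print(pool['name'], pool['stats']['max_avail'])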
Dec 06 10:12:26 compute-0 nova_compute[254819]: 2025-12-06 10:12:26.081 254824 DEBUG nova.compute.provider_tree [None req-ee600647-5941-47c6-be62-219da0f84046 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Inventory has not changed in ProviderTree for provider: 06a9c7d1-c74c-47ea-9e97-16acfab6aa88 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 10:12:26 compute-0 nova_compute[254819]: 2025-12-06 10:12:26.101 254824 DEBUG nova.scheduler.client.report [None req-ee600647-5941-47c6-be62-219da0f84046 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Inventory has not changed for provider 06a9c7d1-c74c-47ea-9e97-16acfab6aa88 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
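The inventory dict logged above fixes the schedulable capacity: placement treats (total - reserved) * allocation_ratio as the limit per resource class. Checked against the logged numbers:

    # Capacity implied by the inventory in the log line above.
    inv = {'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
           'MEMORY_MB': {'total': 7680, 'reserved': 512, 'allocation_ratio': 1.0},
           'DISK_GB':   {'total': 59,   'reserved': 1,   'allocation_ratio': 0.9}}
    for rc, v in inv.items():
        print(rc, (v['total'] - v['reserved']) * v['allocation_ratio'])
    # VCPU 32.0, MEMORY_MB 7168.0, DISK_GB 52.2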
Dec 06 10:12:26 compute-0 nova_compute[254819]: 2025-12-06 10:12:26.142 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:12:26 compute-0 nova_compute[254819]: 2025-12-06 10:12:26.145 254824 DEBUG oslo_concurrency.lockutils [None req-ee600647-5941-47c6-be62-219da0f84046 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.589s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:12:26 compute-0 nova_compute[254819]: 2025-12-06 10:12:26.173 254824 INFO nova.scheduler.client.report [None req-ee600647-5941-47c6-be62-219da0f84046 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Deleted allocations for instance 38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971
Dec 06 10:12:26 compute-0 nova_compute[254819]: 2025-12-06 10:12:26.260 254824 DEBUG oslo_concurrency.lockutils [None req-ee600647-5941-47c6-be62-219da0f84046 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lock "38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.342s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:12:26 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v940: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 77 op/s
Dec 06 10:12:26 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:12:26 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c00a7e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:12:26 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:12:26 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c00a7e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:12:26 compute-0 ceph-mon[74327]: from='client.? 192.168.122.100:0/453190227' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 10:12:27 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:12:27 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6608004350 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:12:27 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:12:27.643Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
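Both dashboard receivers time out in the alertmanager line above (and again at 10:12:29 and 10:12:37). A quick reachability probe against one of the logged receiver URLs, assuming only that the endpoint accepts a plain POST:

    import urllib.error
    import urllib.request

    # Receiver URL exactly as logged by alertmanager above.
    url = 'http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver'
    try:
        urllib.request.urlopen(url, data=b'{}', timeout=5)
        print('receiver reachable')
    except urllib.error.URLError as exc:
        print('receiver unreachable:', exc.reason)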
Dec 06 10:12:27 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:12:27 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:12:27 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:12:27.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:12:27 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:12:27 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:12:27 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:12:27.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:12:27 compute-0 ceph-mon[74327]: pgmap v940: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 77 op/s
Dec 06 10:12:28 compute-0 nova_compute[254819]: 2025-12-06 10:12:28.190 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:12:28 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v941: 337 pgs: 337 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 103 op/s
Dec 06 10:12:28 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:12:28 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65dc003e90 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:12:28 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:12:28 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65dc003e90 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:12:29 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:12:29.021Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 06 10:12:29 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:12:29 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c00a7e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:12:29 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-haproxy-nfs-cephfs-compute-0-fzuvue[96127]: [WARNING] 339/101229 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Dec 06 10:12:29 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:12:29 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:12:29 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:12:29.677 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:12:29 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:12:29 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:12:29 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:12:29.698 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:12:29 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:12:29 compute-0 ceph-mon[74327]: pgmap v941: 337 pgs: 337 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 103 op/s
Dec 06 10:12:30 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v942: 337 pgs: 337 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.6 KiB/s wr, 28 op/s
Dec 06 10:12:30 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:12:30 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6608004350 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:12:30 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:12:30 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d4004290 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:12:30 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:12:30] "GET /metrics HTTP/1.1" 200 48476 "" "Prometheus/2.51.0"
Dec 06 10:12:30 compute-0 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:12:30] "GET /metrics HTTP/1.1" 200 48476 "" "Prometheus/2.51.0"
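The scrape above is served by the ceph-mgr prometheus module. A sketch that fetches the same endpoint; port 9283 is the module's default and is an assumption here, since the access log records only the request path:

    import urllib.request

    # compute-0 hosts the active mgr per the log; port 9283 is assumed.
    with urllib.request.urlopen('http://compute-0:9283/metrics', timeout=5) as resp:
        body = resp.read().decode()
    print([line for line in body.splitlines()
           if line.startswith('ceph_health_status')])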
Dec 06 10:12:31 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:12:31 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65e8001090 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:12:31 compute-0 nova_compute[254819]: 2025-12-06 10:12:31.144 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:12:31 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:12:31 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:12:31 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:12:31.679 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:12:31 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:12:31 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:12:31 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:12:31.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:12:31 compute-0 ceph-mon[74327]: pgmap v942: 337 pgs: 337 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.6 KiB/s wr, 28 op/s
Dec 06 10:12:32 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v943: 337 pgs: 337 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.6 KiB/s wr, 28 op/s
Dec 06 10:12:32 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:12:32 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c00a7e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:12:32 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:12:32 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d00012c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:12:33 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:12:33 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d4004290 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:12:33 compute-0 nova_compute[254819]: 2025-12-06 10:12:33.193 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:12:33 compute-0 sudo[272132]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 10:12:33 compute-0 sudo[272132]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:12:33 compute-0 sudo[272132]: pam_unix(sudo:session): session closed for user root
Dec 06 10:12:33 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:12:33 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 10:12:33 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:12:33.681 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 10:12:33 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:12:33 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:12:33 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:12:33.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:12:33 compute-0 ceph-mon[74327]: pgmap v943: 337 pgs: 337 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.6 KiB/s wr, 28 op/s
Dec 06 10:12:34 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v944: 337 pgs: 337 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.6 KiB/s wr, 28 op/s
Dec 06 10:12:34 compute-0 nova_compute[254819]: 2025-12-06 10:12:34.597 254824 DEBUG oslo_concurrency.lockutils [None req-1009fe59-3b29-4500-9e9a-a8857be734ff 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Acquiring lock "588b3b1f-9845-438c-89c4-744f95204b42" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:12:34 compute-0 nova_compute[254819]: 2025-12-06 10:12:34.598 254824 DEBUG oslo_concurrency.lockutils [None req-1009fe59-3b29-4500-9e9a-a8857be734ff 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lock "588b3b1f-9845-438c-89c4-744f95204b42" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:12:34 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:12:34 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65e8001090 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:12:34 compute-0 nova_compute[254819]: 2025-12-06 10:12:34.622 254824 DEBUG nova.compute.manager [None req-1009fe59-3b29-4500-9e9a-a8857be734ff 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 588b3b1f-9845-438c-89c4-744f95204b42] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 06 10:12:34 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:12:34 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c00a800 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:12:34 compute-0 nova_compute[254819]: 2025-12-06 10:12:34.728 254824 DEBUG oslo_concurrency.lockutils [None req-1009fe59-3b29-4500-9e9a-a8857be734ff 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:12:34 compute-0 nova_compute[254819]: 2025-12-06 10:12:34.728 254824 DEBUG oslo_concurrency.lockutils [None req-1009fe59-3b29-4500-9e9a-a8857be734ff 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:12:34 compute-0 nova_compute[254819]: 2025-12-06 10:12:34.736 254824 DEBUG nova.virt.hardware [None req-1009fe59-3b29-4500-9e9a-a8857be734ff 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 06 10:12:34 compute-0 nova_compute[254819]: 2025-12-06 10:12:34.736 254824 INFO nova.compute.claims [None req-1009fe59-3b29-4500-9e9a-a8857be734ff 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 588b3b1f-9845-438c-89c4-744f95204b42] Claim successful on node compute-0.ctlplane.example.com
Dec 06 10:12:34 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:12:34 compute-0 nova_compute[254819]: 2025-12-06 10:12:34.835 254824 DEBUG oslo_concurrency.processutils [None req-1009fe59-3b29-4500-9e9a-a8857be734ff 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 10:12:35 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:12:35 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d0002110 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:12:35 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 06 10:12:35 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1575946514' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 10:12:35 compute-0 nova_compute[254819]: 2025-12-06 10:12:35.282 254824 DEBUG oslo_concurrency.processutils [None req-1009fe59-3b29-4500-9e9a-a8857be734ff 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 10:12:35 compute-0 nova_compute[254819]: 2025-12-06 10:12:35.289 254824 DEBUG nova.compute.provider_tree [None req-1009fe59-3b29-4500-9e9a-a8857be734ff 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Inventory has not changed in ProviderTree for provider: 06a9c7d1-c74c-47ea-9e97-16acfab6aa88 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 10:12:35 compute-0 nova_compute[254819]: 2025-12-06 10:12:35.306 254824 DEBUG nova.scheduler.client.report [None req-1009fe59-3b29-4500-9e9a-a8857be734ff 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Inventory has not changed for provider 06a9c7d1-c74c-47ea-9e97-16acfab6aa88 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 10:12:35 compute-0 nova_compute[254819]: 2025-12-06 10:12:35.342 254824 DEBUG oslo_concurrency.lockutils [None req-1009fe59-3b29-4500-9e9a-a8857be734ff 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.614s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:12:35 compute-0 nova_compute[254819]: 2025-12-06 10:12:35.344 254824 DEBUG nova.compute.manager [None req-1009fe59-3b29-4500-9e9a-a8857be734ff 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 588b3b1f-9845-438c-89c4-744f95204b42] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 06 10:12:35 compute-0 nova_compute[254819]: 2025-12-06 10:12:35.428 254824 DEBUG nova.compute.manager [None req-1009fe59-3b29-4500-9e9a-a8857be734ff 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 588b3b1f-9845-438c-89c4-744f95204b42] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 06 10:12:35 compute-0 nova_compute[254819]: 2025-12-06 10:12:35.428 254824 DEBUG nova.network.neutron [None req-1009fe59-3b29-4500-9e9a-a8857be734ff 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 588b3b1f-9845-438c-89c4-744f95204b42] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 06 10:12:35 compute-0 nova_compute[254819]: 2025-12-06 10:12:35.451 254824 INFO nova.virt.libvirt.driver [None req-1009fe59-3b29-4500-9e9a-a8857be734ff 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 588b3b1f-9845-438c-89c4-744f95204b42] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 06 10:12:35 compute-0 nova_compute[254819]: 2025-12-06 10:12:35.473 254824 DEBUG nova.compute.manager [None req-1009fe59-3b29-4500-9e9a-a8857be734ff 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 588b3b1f-9845-438c-89c4-744f95204b42] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 06 10:12:35 compute-0 nova_compute[254819]: 2025-12-06 10:12:35.598 254824 DEBUG nova.compute.manager [None req-1009fe59-3b29-4500-9e9a-a8857be734ff 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 588b3b1f-9845-438c-89c4-744f95204b42] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 06 10:12:35 compute-0 nova_compute[254819]: 2025-12-06 10:12:35.600 254824 DEBUG nova.virt.libvirt.driver [None req-1009fe59-3b29-4500-9e9a-a8857be734ff 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 588b3b1f-9845-438c-89c4-744f95204b42] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 06 10:12:35 compute-0 nova_compute[254819]: 2025-12-06 10:12:35.601 254824 INFO nova.virt.libvirt.driver [None req-1009fe59-3b29-4500-9e9a-a8857be734ff 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 588b3b1f-9845-438c-89c4-744f95204b42] Creating image(s)
Dec 06 10:12:35 compute-0 nova_compute[254819]: 2025-12-06 10:12:35.639 254824 DEBUG nova.storage.rbd_utils [None req-1009fe59-3b29-4500-9e9a-a8857be734ff 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] rbd image 588b3b1f-9845-438c-89c4-744f95204b42_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 10:12:35 compute-0 nova_compute[254819]: 2025-12-06 10:12:35.677 254824 DEBUG nova.storage.rbd_utils [None req-1009fe59-3b29-4500-9e9a-a8857be734ff 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] rbd image 588b3b1f-9845-438c-89c4-744f95204b42_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 10:12:35 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:12:35 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:12:35 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:12:35.684 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:12:35 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:12:35 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:12:35 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:12:35.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:12:35 compute-0 nova_compute[254819]: 2025-12-06 10:12:35.710 254824 DEBUG nova.storage.rbd_utils [None req-1009fe59-3b29-4500-9e9a-a8857be734ff 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] rbd image 588b3b1f-9845-438c-89c4-744f95204b42_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 10:12:35 compute-0 nova_compute[254819]: 2025-12-06 10:12:35.715 254824 DEBUG oslo_concurrency.processutils [None req-1009fe59-3b29-4500-9e9a-a8857be734ff 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/1b7208203e670301d076a006cb3364d3eb842050 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 10:12:35 compute-0 nova_compute[254819]: 2025-12-06 10:12:35.746 254824 DEBUG nova.policy [None req-1009fe59-3b29-4500-9e9a-a8857be734ff 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '03615580775245e6ae335ee9d785611f', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '92b402c8d3e2476abc98be42a1e6d34e', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 06 10:12:35 compute-0 nova_compute[254819]: 2025-12-06 10:12:35.804 254824 DEBUG oslo_concurrency.processutils [None req-1009fe59-3b29-4500-9e9a-a8857be734ff 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/1b7208203e670301d076a006cb3364d3eb842050 --force-share --output=json" returned: 0 in 0.089s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
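nova runs qemu-img info under oslo.concurrency's prlimit wrapper (1 GiB address space, 30 s of CPU) so a malformed or oversized image cannot wedge the compute agent. The same guard, sketched with the stdlib resource module:

    import json
    import os
    import resource
    import subprocess

    def limited(as_bytes=1 << 30, cpu_secs=30):
        # Runs in the child between fork and exec, mirroring
        # `python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30`.
        def _set():
            resource.setrlimit(resource.RLIMIT_AS, (as_bytes, as_bytes))
            resource.setrlimit(resource.RLIMIT_CPU, (cpu_secs, cpu_secs))
        return _set

    out = subprocess.check_output(
        ['qemu-img', 'info', '--force-share', '--output=json',
         '/var/lib/nova/instances/_base/1b7208203e670301d076a006cb3364d3eb842050'],
        preexec_fn=limited(), env=dict(os.environ, LC_ALL='C', LANG='C'))
    print(json.loads(out)['format'])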
Dec 06 10:12:35 compute-0 nova_compute[254819]: 2025-12-06 10:12:35.805 254824 DEBUG oslo_concurrency.lockutils [None req-1009fe59-3b29-4500-9e9a-a8857be734ff 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Acquiring lock "1b7208203e670301d076a006cb3364d3eb842050" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:12:35 compute-0 nova_compute[254819]: 2025-12-06 10:12:35.806 254824 DEBUG oslo_concurrency.lockutils [None req-1009fe59-3b29-4500-9e9a-a8857be734ff 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lock "1b7208203e670301d076a006cb3364d3eb842050" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:12:35 compute-0 nova_compute[254819]: 2025-12-06 10:12:35.806 254824 DEBUG oslo_concurrency.lockutils [None req-1009fe59-3b29-4500-9e9a-a8857be734ff 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lock "1b7208203e670301d076a006cb3364d3eb842050" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:12:35 compute-0 nova_compute[254819]: 2025-12-06 10:12:35.839 254824 DEBUG nova.storage.rbd_utils [None req-1009fe59-3b29-4500-9e9a-a8857be734ff 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] rbd image 588b3b1f-9845-438c-89c4-744f95204b42_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 10:12:35 compute-0 nova_compute[254819]: 2025-12-06 10:12:35.844 254824 DEBUG oslo_concurrency.processutils [None req-1009fe59-3b29-4500-9e9a-a8857be734ff 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/1b7208203e670301d076a006cb3364d3eb842050 588b3b1f-9845-438c-89c4-744f95204b42_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 10:12:35 compute-0 ceph-mon[74327]: pgmap v944: 337 pgs: 337 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.6 KiB/s wr, 28 op/s
Dec 06 10:12:35 compute-0 ceph-mon[74327]: from='client.? 192.168.122.100:0/1575946514' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 10:12:36 compute-0 nova_compute[254819]: 2025-12-06 10:12:36.134 254824 DEBUG oslo_concurrency.processutils [None req-1009fe59-3b29-4500-9e9a-a8857be734ff 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/1b7208203e670301d076a006cb3364d3eb842050 588b3b1f-9845-438c-89c4-744f95204b42_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.290s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 10:12:36 compute-0 nova_compute[254819]: 2025-12-06 10:12:36.176 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:12:36 compute-0 nova_compute[254819]: 2025-12-06 10:12:36.229 254824 DEBUG nova.storage.rbd_utils [None req-1009fe59-3b29-4500-9e9a-a8857be734ff 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] resizing rbd image 588b3b1f-9845-438c-89c4-744f95204b42_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
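The root disk is materialized in two steps visible above: the cached base file is imported into the vms pool as a format-2 RBD image, then grown to 1073741824 bytes (1 GiB). nova performs the resize through the librbd binding (rbd_utils.py:288); a CLI sketch of the equivalent sequence:

    import subprocess

    base = '/var/lib/nova/instances/_base/1b7208203e670301d076a006cb3364d3eb842050'
    img = '588b3b1f-9845-438c-89c4-744f95204b42_disk'
    ceph = ['--id', 'openstack', '--conf', '/etc/ceph/ceph.conf']

    # Import, exactly as logged above.
    subprocess.check_call(['rbd', 'import', '--pool', 'vms', base, img,
                           '--image-format=2'] + ceph)
    # Resize to 1 GiB (rbd sizes default to MiB units).
    subprocess.check_call(['rbd', 'resize', '--pool', 'vms', '--size', '1024',
                           img] + ceph)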
Dec 06 10:12:36 compute-0 nova_compute[254819]: 2025-12-06 10:12:36.368 254824 DEBUG nova.objects.instance [None req-1009fe59-3b29-4500-9e9a-a8857be734ff 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lazy-loading 'migration_context' on Instance uuid 588b3b1f-9845-438c-89c4-744f95204b42 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 10:12:36 compute-0 nova_compute[254819]: 2025-12-06 10:12:36.384 254824 DEBUG nova.virt.libvirt.driver [None req-1009fe59-3b29-4500-9e9a-a8857be734ff 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 588b3b1f-9845-438c-89c4-744f95204b42] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 06 10:12:36 compute-0 nova_compute[254819]: 2025-12-06 10:12:36.384 254824 DEBUG nova.virt.libvirt.driver [None req-1009fe59-3b29-4500-9e9a-a8857be734ff 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 588b3b1f-9845-438c-89c4-744f95204b42] Ensure instance console log exists: /var/lib/nova/instances/588b3b1f-9845-438c-89c4-744f95204b42/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 06 10:12:36 compute-0 nova_compute[254819]: 2025-12-06 10:12:36.385 254824 DEBUG oslo_concurrency.lockutils [None req-1009fe59-3b29-4500-9e9a-a8857be734ff 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:12:36 compute-0 nova_compute[254819]: 2025-12-06 10:12:36.385 254824 DEBUG oslo_concurrency.lockutils [None req-1009fe59-3b29-4500-9e9a-a8857be734ff 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:12:36 compute-0 nova_compute[254819]: 2025-12-06 10:12:36.385 254824 DEBUG oslo_concurrency.lockutils [None req-1009fe59-3b29-4500-9e9a-a8857be734ff 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:12:36 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v945: 337 pgs: 337 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 26 op/s
Dec 06 10:12:36 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:12:36 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d4004290 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:12:36 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:12:36 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65e8002690 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:12:37 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:12:37 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c00a820 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:12:37 compute-0 nova_compute[254819]: 2025-12-06 10:12:37.596 254824 DEBUG nova.network.neutron [None req-1009fe59-3b29-4500-9e9a-a8857be734ff 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 588b3b1f-9845-438c-89c4-744f95204b42] Successfully updated port: 4c8ce68f-8ad3-4266-aa7d-5ad833b39c1e _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 06 10:12:37 compute-0 nova_compute[254819]: 2025-12-06 10:12:37.616 254824 DEBUG oslo_concurrency.lockutils [None req-1009fe59-3b29-4500-9e9a-a8857be734ff 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Acquiring lock "refresh_cache-588b3b1f-9845-438c-89c4-744f95204b42" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 10:12:37 compute-0 nova_compute[254819]: 2025-12-06 10:12:37.616 254824 DEBUG oslo_concurrency.lockutils [None req-1009fe59-3b29-4500-9e9a-a8857be734ff 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Acquired lock "refresh_cache-588b3b1f-9845-438c-89c4-744f95204b42" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 10:12:37 compute-0 nova_compute[254819]: 2025-12-06 10:12:37.616 254824 DEBUG nova.network.neutron [None req-1009fe59-3b29-4500-9e9a-a8857be734ff 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 588b3b1f-9845-438c-89c4-744f95204b42] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 06 10:12:37 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:12:37.644Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 06 10:12:37 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:12:37 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:12:37 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:12:37.686 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:12:37 compute-0 nova_compute[254819]: 2025-12-06 10:12:37.708 254824 DEBUG nova.compute.manager [req-3dd7f5b3-5dd6-4a26-8c08-0cb75a4a46fe req-832939c5-8563-4c9b-ba33-c3148053159a d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 588b3b1f-9845-438c-89c4-744f95204b42] Received event network-changed-4c8ce68f-8ad3-4266-aa7d-5ad833b39c1e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 10:12:37 compute-0 nova_compute[254819]: 2025-12-06 10:12:37.709 254824 DEBUG nova.compute.manager [req-3dd7f5b3-5dd6-4a26-8c08-0cb75a4a46fe req-832939c5-8563-4c9b-ba33-c3148053159a d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 588b3b1f-9845-438c-89c4-744f95204b42] Refreshing instance network info cache due to event network-changed-4c8ce68f-8ad3-4266-aa7d-5ad833b39c1e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 10:12:37 compute-0 nova_compute[254819]: 2025-12-06 10:12:37.709 254824 DEBUG oslo_concurrency.lockutils [req-3dd7f5b3-5dd6-4a26-8c08-0cb75a4a46fe req-832939c5-8563-4c9b-ba33-c3148053159a d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Acquiring lock "refresh_cache-588b3b1f-9845-438c-89c4-744f95204b42" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 10:12:37 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:12:37 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:12:37 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:12:37.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:12:37 compute-0 nova_compute[254819]: 2025-12-06 10:12:37.759 254824 DEBUG nova.network.neutron [None req-1009fe59-3b29-4500-9e9a-a8857be734ff 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 588b3b1f-9845-438c-89c4-744f95204b42] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 06 10:12:37 compute-0 ceph-mon[74327]: pgmap v945: 337 pgs: 337 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 26 op/s
Dec 06 10:12:38 compute-0 nova_compute[254819]: 2025-12-06 10:12:38.154 254824 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765015943.1529648, 38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 10:12:38 compute-0 nova_compute[254819]: 2025-12-06 10:12:38.155 254824 INFO nova.compute.manager [-] [instance: 38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971] VM Stopped (Lifecycle Event)
Dec 06 10:12:38 compute-0 nova_compute[254819]: 2025-12-06 10:12:38.187 254824 DEBUG nova.compute.manager [None req-d52641f1-c4f0-4c75-bde4-5e021ca08454 - - - - - -] [instance: 38ba6c3c-d73d-40c6-ac54-2ec1d3f0b971] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 10:12:38 compute-0 nova_compute[254819]: 2025-12-06 10:12:38.197 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:12:38 compute-0 podman[272352]: 2025-12-06 10:12:38.454443433 +0000 UTC m=+0.081524840 container health_status a9cf33c1bf7891b3fdd9db4717ad5f3587fe9e5acf9171920c6d6334b70a502a (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible)
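The health_status=healthy field above comes from podman's healthcheck timer running the container's configured test (/openstack/healthcheck per the config_data). The recorded state can be read back afterwards; a sketch:

    import json
    import subprocess

    # Last health state of the multipathd container from the event above.
    out = subprocess.check_output(
        ['podman', 'inspect', '--format', '{{json .State.Health}}', 'multipathd'])
    health = json.loads(out)
    print(health['Status'], health['FailingStreak'])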
Dec 06 10:12:38 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v946: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 1.8 MiB/s wr, 53 op/s
Dec 06 10:12:38 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:12:38 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d0002110 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:12:38 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:12:38 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d4004290 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:12:38 compute-0 nova_compute[254819]: 2025-12-06 10:12:38.771 254824 DEBUG nova.network.neutron [None req-1009fe59-3b29-4500-9e9a-a8857be734ff 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 588b3b1f-9845-438c-89c4-744f95204b42] Updating instance_info_cache with network_info: [{"id": "4c8ce68f-8ad3-4266-aa7d-5ad833b39c1e", "address": "fa:16:3e:6f:25:fa", "network": {"id": "c2ce21d9-e711-470f-89f6-0db58ded70b9", "bridge": "br-int", "label": "tempest-network-smoke--1291548226", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.198", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4c8ce68f-8a", "ovs_interfaceid": "4c8ce68f-8ad3-4266-aa7d-5ad833b39c1e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 10:12:38 compute-0 nova_compute[254819]: 2025-12-06 10:12:38.789 254824 DEBUG oslo_concurrency.lockutils [None req-1009fe59-3b29-4500-9e9a-a8857be734ff 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Releasing lock "refresh_cache-588b3b1f-9845-438c-89c4-744f95204b42" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 10:12:38 compute-0 nova_compute[254819]: 2025-12-06 10:12:38.789 254824 DEBUG nova.compute.manager [None req-1009fe59-3b29-4500-9e9a-a8857be734ff 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 588b3b1f-9845-438c-89c4-744f95204b42] Instance network_info: |[{"id": "4c8ce68f-8ad3-4266-aa7d-5ad833b39c1e", "address": "fa:16:3e:6f:25:fa", "network": {"id": "c2ce21d9-e711-470f-89f6-0db58ded70b9", "bridge": "br-int", "label": "tempest-network-smoke--1291548226", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.198", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4c8ce68f-8a", "ovs_interfaceid": "4c8ce68f-8ad3-4266-aa7d-5ad833b39c1e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 06 10:12:38 compute-0 nova_compute[254819]: 2025-12-06 10:12:38.791 254824 DEBUG oslo_concurrency.lockutils [req-3dd7f5b3-5dd6-4a26-8c08-0cb75a4a46fe req-832939c5-8563-4c9b-ba33-c3148053159a d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Acquired lock "refresh_cache-588b3b1f-9845-438c-89c4-744f95204b42" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 10:12:38 compute-0 nova_compute[254819]: 2025-12-06 10:12:38.791 254824 DEBUG nova.network.neutron [req-3dd7f5b3-5dd6-4a26-8c08-0cb75a4a46fe req-832939c5-8563-4c9b-ba33-c3148053159a d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 588b3b1f-9845-438c-89c4-744f95204b42] Refreshing network info cache for port 4c8ce68f-8ad3-4266-aa7d-5ad833b39c1e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 06 10:12:38 compute-0 nova_compute[254819]: 2025-12-06 10:12:38.795 254824 DEBUG nova.virt.libvirt.driver [None req-1009fe59-3b29-4500-9e9a-a8857be734ff 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 588b3b1f-9845-438c-89c4-744f95204b42] Start _get_guest_xml network_info=[{"id": "4c8ce68f-8ad3-4266-aa7d-5ad833b39c1e", "address": "fa:16:3e:6f:25:fa", "network": {"id": "c2ce21d9-e711-470f-89f6-0db58ded70b9", "bridge": "br-int", "label": "tempest-network-smoke--1291548226", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.198", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4c8ce68f-8a", "ovs_interfaceid": "4c8ce68f-8ad3-4266-aa7d-5ad833b39c1e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T10:04:42Z,direct_url=<?>,disk_format='qcow2',id=9489b8a5-a798-4e26-87f9-59bb1eb2e6fd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3e0ab101ca7547d4a515169a0f2edef3',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T10:04:45Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'encrypted': False, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '9489b8a5-a798-4e26-87f9-59bb1eb2e6fd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 06 10:12:38 compute-0 nova_compute[254819]: 2025-12-06 10:12:38.800 254824 WARNING nova.virt.libvirt.driver [None req-1009fe59-3b29-4500-9e9a-a8857be734ff 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 10:12:38 compute-0 nova_compute[254819]: 2025-12-06 10:12:38.807 254824 DEBUG nova.virt.libvirt.host [None req-1009fe59-3b29-4500-9e9a-a8857be734ff 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 06 10:12:38 compute-0 nova_compute[254819]: 2025-12-06 10:12:38.808 254824 DEBUG nova.virt.libvirt.host [None req-1009fe59-3b29-4500-9e9a-a8857be734ff 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 06 10:12:38 compute-0 nova_compute[254819]: 2025-12-06 10:12:38.816 254824 DEBUG nova.virt.libvirt.host [None req-1009fe59-3b29-4500-9e9a-a8857be734ff 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 06 10:12:38 compute-0 nova_compute[254819]: 2025-12-06 10:12:38.816 254824 DEBUG nova.virt.libvirt.host [None req-1009fe59-3b29-4500-9e9a-a8857be734ff 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 06 10:12:38 compute-0 nova_compute[254819]: 2025-12-06 10:12:38.817 254824 DEBUG nova.virt.libvirt.driver [None req-1009fe59-3b29-4500-9e9a-a8857be734ff 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 06 10:12:38 compute-0 nova_compute[254819]: 2025-12-06 10:12:38.817 254824 DEBUG nova.virt.hardware [None req-1009fe59-3b29-4500-9e9a-a8857be734ff 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T10:04:41Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='0a252b9c-cc5f-41b2-a8b2-94fcf6e74d22',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T10:04:42Z,direct_url=<?>,disk_format='qcow2',id=9489b8a5-a798-4e26-87f9-59bb1eb2e6fd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3e0ab101ca7547d4a515169a0f2edef3',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T10:04:45Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 06 10:12:38 compute-0 nova_compute[254819]: 2025-12-06 10:12:38.818 254824 DEBUG nova.virt.hardware [None req-1009fe59-3b29-4500-9e9a-a8857be734ff 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 06 10:12:38 compute-0 nova_compute[254819]: 2025-12-06 10:12:38.818 254824 DEBUG nova.virt.hardware [None req-1009fe59-3b29-4500-9e9a-a8857be734ff 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 06 10:12:38 compute-0 nova_compute[254819]: 2025-12-06 10:12:38.818 254824 DEBUG nova.virt.hardware [None req-1009fe59-3b29-4500-9e9a-a8857be734ff 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 06 10:12:38 compute-0 nova_compute[254819]: 2025-12-06 10:12:38.819 254824 DEBUG nova.virt.hardware [None req-1009fe59-3b29-4500-9e9a-a8857be734ff 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 06 10:12:38 compute-0 nova_compute[254819]: 2025-12-06 10:12:38.819 254824 DEBUG nova.virt.hardware [None req-1009fe59-3b29-4500-9e9a-a8857be734ff 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 06 10:12:38 compute-0 nova_compute[254819]: 2025-12-06 10:12:38.819 254824 DEBUG nova.virt.hardware [None req-1009fe59-3b29-4500-9e9a-a8857be734ff 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 06 10:12:38 compute-0 nova_compute[254819]: 2025-12-06 10:12:38.820 254824 DEBUG nova.virt.hardware [None req-1009fe59-3b29-4500-9e9a-a8857be734ff 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 06 10:12:38 compute-0 nova_compute[254819]: 2025-12-06 10:12:38.820 254824 DEBUG nova.virt.hardware [None req-1009fe59-3b29-4500-9e9a-a8857be734ff 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 06 10:12:38 compute-0 nova_compute[254819]: 2025-12-06 10:12:38.820 254824 DEBUG nova.virt.hardware [None req-1009fe59-3b29-4500-9e9a-a8857be734ff 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 06 10:12:38 compute-0 nova_compute[254819]: 2025-12-06 10:12:38.820 254824 DEBUG nova.virt.hardware [None req-1009fe59-3b29-4500-9e9a-a8857be734ff 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 06 10:12:38 compute-0 nova_compute[254819]: 2025-12-06 10:12:38.825 254824 DEBUG oslo_concurrency.processutils [None req-1009fe59-3b29-4500-9e9a-a8857be734ff 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 10:12:38 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 06 10:12:38 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 10:12:39 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 10:12:39 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:12:39.022Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 06 10:12:39 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:12:39 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65e8002830 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:12:39 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 06 10:12:39 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3480538889' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 10:12:39 compute-0 nova_compute[254819]: 2025-12-06 10:12:39.340 254824 DEBUG oslo_concurrency.processutils [None req-1009fe59-3b29-4500-9e9a-a8857be734ff 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.515s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 10:12:39 compute-0 nova_compute[254819]: 2025-12-06 10:12:39.373 254824 DEBUG nova.storage.rbd_utils [None req-1009fe59-3b29-4500-9e9a-a8857be734ff 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] rbd image 588b3b1f-9845-438c-89c4-744f95204b42_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 10:12:39 compute-0 nova_compute[254819]: 2025-12-06 10:12:39.377 254824 DEBUG oslo_concurrency.processutils [None req-1009fe59-3b29-4500-9e9a-a8857be734ff 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 10:12:39 compute-0 sudo[272415]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 10:12:39 compute-0 sudo[272415]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:12:39 compute-0 sudo[272415]: pam_unix(sudo:session): session closed for user root
Dec 06 10:12:39 compute-0 sudo[272459]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Dec 06 10:12:39 compute-0 sudo[272459]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:12:39 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:12:39 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 10:12:39 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:12:39.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 10:12:39 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:12:39 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:12:39 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:12:39.713 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:12:39 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 06 10:12:39 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3169413128' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 10:12:39 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:12:39 compute-0 nova_compute[254819]: 2025-12-06 10:12:39.826 254824 DEBUG oslo_concurrency.processutils [None req-1009fe59-3b29-4500-9e9a-a8857be734ff 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 10:12:39 compute-0 nova_compute[254819]: 2025-12-06 10:12:39.828 254824 DEBUG nova.virt.libvirt.vif [None req-1009fe59-3b29-4500-9e9a-a8857be734ff 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T10:12:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1549098257',display_name='tempest-TestNetworkBasicOps-server-1549098257',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1549098257',id=9,image_ref='9489b8a5-a798-4e26-87f9-59bb1eb2e6fd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHjlfXiWeP25/+Al9avXS7k5sTY7UpSTwvIPTlqQIhh0XClSeVPzmFV420fI5WFwr8qS2zHe5RQB0WDD7hpreK+FV5EzKAwwCW1d4oQG8NLOPL6t68qoP/9Hs+y9Im3qyA==',key_name='tempest-TestNetworkBasicOps-1342068066',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='92b402c8d3e2476abc98be42a1e6d34e',ramdisk_id='',reservation_id='r-8kktnhof',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='9489b8a5-a798-4e26-87f9-59bb1eb2e6fd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1971100882',owner_user_name='tempest-TestNetworkBasicOps-1971100882-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T10:12:35Z,user_data=None,user_id='03615580775245e6ae335ee9d785611f',uuid=588b3b1f-9845-438c-89c4-744f95204b42,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "4c8ce68f-8ad3-4266-aa7d-5ad833b39c1e", "address": "fa:16:3e:6f:25:fa", "network": {"id": "c2ce21d9-e711-470f-89f6-0db58ded70b9", "bridge": "br-int", "label": "tempest-network-smoke--1291548226", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.198", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4c8ce68f-8a", "ovs_interfaceid": "4c8ce68f-8ad3-4266-aa7d-5ad833b39c1e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 06 10:12:39 compute-0 nova_compute[254819]: 2025-12-06 10:12:39.828 254824 DEBUG nova.network.os_vif_util [None req-1009fe59-3b29-4500-9e9a-a8857be734ff 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Converting VIF {"id": "4c8ce68f-8ad3-4266-aa7d-5ad833b39c1e", "address": "fa:16:3e:6f:25:fa", "network": {"id": "c2ce21d9-e711-470f-89f6-0db58ded70b9", "bridge": "br-int", "label": "tempest-network-smoke--1291548226", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.198", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4c8ce68f-8a", "ovs_interfaceid": "4c8ce68f-8ad3-4266-aa7d-5ad833b39c1e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 10:12:39 compute-0 nova_compute[254819]: 2025-12-06 10:12:39.829 254824 DEBUG nova.network.os_vif_util [None req-1009fe59-3b29-4500-9e9a-a8857be734ff 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6f:25:fa,bridge_name='br-int',has_traffic_filtering=True,id=4c8ce68f-8ad3-4266-aa7d-5ad833b39c1e,network=Network(c2ce21d9-e711-470f-89f6-0db58ded70b9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap4c8ce68f-8a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 10:12:39 compute-0 nova_compute[254819]: 2025-12-06 10:12:39.830 254824 DEBUG nova.objects.instance [None req-1009fe59-3b29-4500-9e9a-a8857be734ff 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lazy-loading 'pci_devices' on Instance uuid 588b3b1f-9845-438c-89c4-744f95204b42 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 10:12:39 compute-0 nova_compute[254819]: 2025-12-06 10:12:39.847 254824 DEBUG nova.virt.libvirt.driver [None req-1009fe59-3b29-4500-9e9a-a8857be734ff 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 588b3b1f-9845-438c-89c4-744f95204b42] End _get_guest_xml xml=<domain type="kvm">
Dec 06 10:12:39 compute-0 nova_compute[254819]:   <uuid>588b3b1f-9845-438c-89c4-744f95204b42</uuid>
Dec 06 10:12:39 compute-0 nova_compute[254819]:   <name>instance-00000009</name>
Dec 06 10:12:39 compute-0 nova_compute[254819]:   <memory>131072</memory>
Dec 06 10:12:39 compute-0 nova_compute[254819]:   <vcpu>1</vcpu>
Dec 06 10:12:39 compute-0 nova_compute[254819]:   <metadata>
Dec 06 10:12:39 compute-0 nova_compute[254819]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 06 10:12:39 compute-0 nova_compute[254819]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 06 10:12:39 compute-0 nova_compute[254819]:       <nova:name>tempest-TestNetworkBasicOps-server-1549098257</nova:name>
Dec 06 10:12:39 compute-0 nova_compute[254819]:       <nova:creationTime>2025-12-06 10:12:38</nova:creationTime>
Dec 06 10:12:39 compute-0 nova_compute[254819]:       <nova:flavor name="m1.nano">
Dec 06 10:12:39 compute-0 nova_compute[254819]:         <nova:memory>128</nova:memory>
Dec 06 10:12:39 compute-0 nova_compute[254819]:         <nova:disk>1</nova:disk>
Dec 06 10:12:39 compute-0 nova_compute[254819]:         <nova:swap>0</nova:swap>
Dec 06 10:12:39 compute-0 nova_compute[254819]:         <nova:ephemeral>0</nova:ephemeral>
Dec 06 10:12:39 compute-0 nova_compute[254819]:         <nova:vcpus>1</nova:vcpus>
Dec 06 10:12:39 compute-0 nova_compute[254819]:       </nova:flavor>
Dec 06 10:12:39 compute-0 nova_compute[254819]:       <nova:owner>
Dec 06 10:12:39 compute-0 nova_compute[254819]:         <nova:user uuid="03615580775245e6ae335ee9d785611f">tempest-TestNetworkBasicOps-1971100882-project-member</nova:user>
Dec 06 10:12:39 compute-0 nova_compute[254819]:         <nova:project uuid="92b402c8d3e2476abc98be42a1e6d34e">tempest-TestNetworkBasicOps-1971100882</nova:project>
Dec 06 10:12:39 compute-0 nova_compute[254819]:       </nova:owner>
Dec 06 10:12:39 compute-0 nova_compute[254819]:       <nova:root type="image" uuid="9489b8a5-a798-4e26-87f9-59bb1eb2e6fd"/>
Dec 06 10:12:39 compute-0 nova_compute[254819]:       <nova:ports>
Dec 06 10:12:39 compute-0 nova_compute[254819]:         <nova:port uuid="4c8ce68f-8ad3-4266-aa7d-5ad833b39c1e">
Dec 06 10:12:39 compute-0 nova_compute[254819]:           <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Dec 06 10:12:39 compute-0 nova_compute[254819]:         </nova:port>
Dec 06 10:12:39 compute-0 nova_compute[254819]:       </nova:ports>
Dec 06 10:12:39 compute-0 nova_compute[254819]:     </nova:instance>
Dec 06 10:12:39 compute-0 nova_compute[254819]:   </metadata>
Dec 06 10:12:39 compute-0 nova_compute[254819]:   <sysinfo type="smbios">
Dec 06 10:12:39 compute-0 nova_compute[254819]:     <system>
Dec 06 10:12:39 compute-0 nova_compute[254819]:       <entry name="manufacturer">RDO</entry>
Dec 06 10:12:39 compute-0 nova_compute[254819]:       <entry name="product">OpenStack Compute</entry>
Dec 06 10:12:39 compute-0 nova_compute[254819]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 06 10:12:39 compute-0 nova_compute[254819]:       <entry name="serial">588b3b1f-9845-438c-89c4-744f95204b42</entry>
Dec 06 10:12:39 compute-0 nova_compute[254819]:       <entry name="uuid">588b3b1f-9845-438c-89c4-744f95204b42</entry>
Dec 06 10:12:39 compute-0 nova_compute[254819]:       <entry name="family">Virtual Machine</entry>
Dec 06 10:12:39 compute-0 nova_compute[254819]:     </system>
Dec 06 10:12:39 compute-0 nova_compute[254819]:   </sysinfo>
Dec 06 10:12:39 compute-0 nova_compute[254819]:   <os>
Dec 06 10:12:39 compute-0 nova_compute[254819]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 06 10:12:39 compute-0 nova_compute[254819]:     <boot dev="hd"/>
Dec 06 10:12:39 compute-0 nova_compute[254819]:     <smbios mode="sysinfo"/>
Dec 06 10:12:39 compute-0 nova_compute[254819]:   </os>
Dec 06 10:12:39 compute-0 nova_compute[254819]:   <features>
Dec 06 10:12:39 compute-0 nova_compute[254819]:     <acpi/>
Dec 06 10:12:39 compute-0 nova_compute[254819]:     <apic/>
Dec 06 10:12:39 compute-0 nova_compute[254819]:     <vmcoreinfo/>
Dec 06 10:12:39 compute-0 nova_compute[254819]:   </features>
Dec 06 10:12:39 compute-0 nova_compute[254819]:   <clock offset="utc">
Dec 06 10:12:39 compute-0 nova_compute[254819]:     <timer name="pit" tickpolicy="delay"/>
Dec 06 10:12:39 compute-0 nova_compute[254819]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 06 10:12:39 compute-0 nova_compute[254819]:     <timer name="hpet" present="no"/>
Dec 06 10:12:39 compute-0 nova_compute[254819]:   </clock>
Dec 06 10:12:39 compute-0 nova_compute[254819]:   <cpu mode="host-model" match="exact">
Dec 06 10:12:39 compute-0 nova_compute[254819]:     <topology sockets="1" cores="1" threads="1"/>
Dec 06 10:12:39 compute-0 nova_compute[254819]:   </cpu>
Dec 06 10:12:39 compute-0 nova_compute[254819]:   <devices>
Dec 06 10:12:39 compute-0 nova_compute[254819]:     <disk type="network" device="disk">
Dec 06 10:12:39 compute-0 nova_compute[254819]:       <driver type="raw" cache="none"/>
Dec 06 10:12:39 compute-0 nova_compute[254819]:       <source protocol="rbd" name="vms/588b3b1f-9845-438c-89c4-744f95204b42_disk">
Dec 06 10:12:39 compute-0 nova_compute[254819]:         <host name="192.168.122.100" port="6789"/>
Dec 06 10:12:39 compute-0 nova_compute[254819]:         <host name="192.168.122.102" port="6789"/>
Dec 06 10:12:39 compute-0 nova_compute[254819]:         <host name="192.168.122.101" port="6789"/>
Dec 06 10:12:39 compute-0 nova_compute[254819]:       </source>
Dec 06 10:12:39 compute-0 nova_compute[254819]:       <auth username="openstack">
Dec 06 10:12:39 compute-0 nova_compute[254819]:         <secret type="ceph" uuid="5ecd3f74-dade-5fc4-92ce-8950ae424258"/>
Dec 06 10:12:39 compute-0 nova_compute[254819]:       </auth>
Dec 06 10:12:39 compute-0 nova_compute[254819]:       <target dev="vda" bus="virtio"/>
Dec 06 10:12:39 compute-0 nova_compute[254819]:     </disk>
Dec 06 10:12:39 compute-0 nova_compute[254819]:     <disk type="network" device="cdrom">
Dec 06 10:12:39 compute-0 nova_compute[254819]:       <driver type="raw" cache="none"/>
Dec 06 10:12:39 compute-0 nova_compute[254819]:       <source protocol="rbd" name="vms/588b3b1f-9845-438c-89c4-744f95204b42_disk.config">
Dec 06 10:12:39 compute-0 nova_compute[254819]:         <host name="192.168.122.100" port="6789"/>
Dec 06 10:12:39 compute-0 nova_compute[254819]:         <host name="192.168.122.102" port="6789"/>
Dec 06 10:12:39 compute-0 nova_compute[254819]:         <host name="192.168.122.101" port="6789"/>
Dec 06 10:12:39 compute-0 nova_compute[254819]:       </source>
Dec 06 10:12:39 compute-0 nova_compute[254819]:       <auth username="openstack">
Dec 06 10:12:39 compute-0 nova_compute[254819]:         <secret type="ceph" uuid="5ecd3f74-dade-5fc4-92ce-8950ae424258"/>
Dec 06 10:12:39 compute-0 nova_compute[254819]:       </auth>
Dec 06 10:12:39 compute-0 nova_compute[254819]:       <target dev="sda" bus="sata"/>
Dec 06 10:12:39 compute-0 nova_compute[254819]:     </disk>
Dec 06 10:12:39 compute-0 nova_compute[254819]:     <interface type="ethernet">
Dec 06 10:12:39 compute-0 nova_compute[254819]:       <mac address="fa:16:3e:6f:25:fa"/>
Dec 06 10:12:39 compute-0 nova_compute[254819]:       <model type="virtio"/>
Dec 06 10:12:39 compute-0 nova_compute[254819]:       <driver name="vhost" rx_queue_size="512"/>
Dec 06 10:12:39 compute-0 nova_compute[254819]:       <mtu size="1442"/>
Dec 06 10:12:39 compute-0 nova_compute[254819]:       <target dev="tap4c8ce68f-8a"/>
Dec 06 10:12:39 compute-0 nova_compute[254819]:     </interface>
Dec 06 10:12:39 compute-0 nova_compute[254819]:     <serial type="pty">
Dec 06 10:12:39 compute-0 nova_compute[254819]:       <log file="/var/lib/nova/instances/588b3b1f-9845-438c-89c4-744f95204b42/console.log" append="off"/>
Dec 06 10:12:39 compute-0 nova_compute[254819]:     </serial>
Dec 06 10:12:39 compute-0 nova_compute[254819]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 06 10:12:39 compute-0 nova_compute[254819]:     <video>
Dec 06 10:12:39 compute-0 nova_compute[254819]:       <model type="virtio"/>
Dec 06 10:12:39 compute-0 nova_compute[254819]:     </video>
Dec 06 10:12:39 compute-0 nova_compute[254819]:     <input type="tablet" bus="usb"/>
Dec 06 10:12:39 compute-0 nova_compute[254819]:     <rng model="virtio">
Dec 06 10:12:39 compute-0 nova_compute[254819]:       <backend model="random">/dev/urandom</backend>
Dec 06 10:12:39 compute-0 nova_compute[254819]:     </rng>
Dec 06 10:12:39 compute-0 nova_compute[254819]:     <controller type="pci" model="pcie-root"/>
Dec 06 10:12:39 compute-0 nova_compute[254819]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:12:39 compute-0 nova_compute[254819]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:12:39 compute-0 nova_compute[254819]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:12:39 compute-0 nova_compute[254819]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:12:39 compute-0 nova_compute[254819]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:12:39 compute-0 nova_compute[254819]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:12:39 compute-0 nova_compute[254819]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:12:39 compute-0 nova_compute[254819]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:12:39 compute-0 nova_compute[254819]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:12:39 compute-0 nova_compute[254819]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:12:39 compute-0 nova_compute[254819]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:12:39 compute-0 nova_compute[254819]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:12:39 compute-0 nova_compute[254819]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:12:39 compute-0 nova_compute[254819]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:12:39 compute-0 nova_compute[254819]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:12:39 compute-0 nova_compute[254819]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:12:39 compute-0 nova_compute[254819]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:12:39 compute-0 nova_compute[254819]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:12:39 compute-0 nova_compute[254819]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:12:39 compute-0 nova_compute[254819]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:12:39 compute-0 nova_compute[254819]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:12:39 compute-0 nova_compute[254819]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:12:39 compute-0 nova_compute[254819]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:12:39 compute-0 nova_compute[254819]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:12:39 compute-0 nova_compute[254819]:     <controller type="usb" index="0"/>
Dec 06 10:12:39 compute-0 nova_compute[254819]:     <memballoon model="virtio">
Dec 06 10:12:39 compute-0 nova_compute[254819]:       <stats period="10"/>
Dec 06 10:12:39 compute-0 nova_compute[254819]:     </memballoon>
Dec 06 10:12:39 compute-0 nova_compute[254819]:   </devices>
Dec 06 10:12:39 compute-0 nova_compute[254819]: </domain>
Dec 06 10:12:39 compute-0 nova_compute[254819]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 06 10:12:39 compute-0 nova_compute[254819]: 2025-12-06 10:12:39.847 254824 DEBUG nova.compute.manager [None req-1009fe59-3b29-4500-9e9a-a8857be734ff 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 588b3b1f-9845-438c-89c4-744f95204b42] Preparing to wait for external event network-vif-plugged-4c8ce68f-8ad3-4266-aa7d-5ad833b39c1e prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 06 10:12:39 compute-0 nova_compute[254819]: 2025-12-06 10:12:39.848 254824 DEBUG oslo_concurrency.lockutils [None req-1009fe59-3b29-4500-9e9a-a8857be734ff 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Acquiring lock "588b3b1f-9845-438c-89c4-744f95204b42-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:12:39 compute-0 nova_compute[254819]: 2025-12-06 10:12:39.848 254824 DEBUG oslo_concurrency.lockutils [None req-1009fe59-3b29-4500-9e9a-a8857be734ff 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lock "588b3b1f-9845-438c-89c4-744f95204b42-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:12:39 compute-0 nova_compute[254819]: 2025-12-06 10:12:39.848 254824 DEBUG oslo_concurrency.lockutils [None req-1009fe59-3b29-4500-9e9a-a8857be734ff 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lock "588b3b1f-9845-438c-89c4-744f95204b42-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:12:39 compute-0 nova_compute[254819]: 2025-12-06 10:12:39.848 254824 DEBUG nova.virt.libvirt.vif [None req-1009fe59-3b29-4500-9e9a-a8857be734ff 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T10:12:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1549098257',display_name='tempest-TestNetworkBasicOps-server-1549098257',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1549098257',id=9,image_ref='9489b8a5-a798-4e26-87f9-59bb1eb2e6fd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHjlfXiWeP25/+Al9avXS7k5sTY7UpSTwvIPTlqQIhh0XClSeVPzmFV420fI5WFwr8qS2zHe5RQB0WDD7hpreK+FV5EzKAwwCW1d4oQG8NLOPL6t68qoP/9Hs+y9Im3qyA==',key_name='tempest-TestNetworkBasicOps-1342068066',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='92b402c8d3e2476abc98be42a1e6d34e',ramdisk_id='',reservation_id='r-8kktnhof',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='9489b8a5-a798-4e26-87f9-59bb1eb2e6fd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1971100882',owner_user_name='tempest-TestNetworkBasicOps-1971100882-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T10:12:35Z,user_data=None,user_id='03615580775245e6ae335ee9d785611f',uuid=588b3b1f-9845-438c-89c4-744f95204b42,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "4c8ce68f-8ad3-4266-aa7d-5ad833b39c1e", "address": "fa:16:3e:6f:25:fa", "network": {"id": "c2ce21d9-e711-470f-89f6-0db58ded70b9", "bridge": "br-int", "label": "tempest-network-smoke--1291548226", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.198", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4c8ce68f-8a", "ovs_interfaceid": "4c8ce68f-8ad3-4266-aa7d-5ad833b39c1e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 06 10:12:39 compute-0 nova_compute[254819]: 2025-12-06 10:12:39.849 254824 DEBUG nova.network.os_vif_util [None req-1009fe59-3b29-4500-9e9a-a8857be734ff 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Converting VIF {"id": "4c8ce68f-8ad3-4266-aa7d-5ad833b39c1e", "address": "fa:16:3e:6f:25:fa", "network": {"id": "c2ce21d9-e711-470f-89f6-0db58ded70b9", "bridge": "br-int", "label": "tempest-network-smoke--1291548226", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.198", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4c8ce68f-8a", "ovs_interfaceid": "4c8ce68f-8ad3-4266-aa7d-5ad833b39c1e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 10:12:39 compute-0 nova_compute[254819]: 2025-12-06 10:12:39.849 254824 DEBUG nova.network.os_vif_util [None req-1009fe59-3b29-4500-9e9a-a8857be734ff 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6f:25:fa,bridge_name='br-int',has_traffic_filtering=True,id=4c8ce68f-8ad3-4266-aa7d-5ad833b39c1e,network=Network(c2ce21d9-e711-470f-89f6-0db58ded70b9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap4c8ce68f-8a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 10:12:39 compute-0 nova_compute[254819]: 2025-12-06 10:12:39.849 254824 DEBUG os_vif [None req-1009fe59-3b29-4500-9e9a-a8857be734ff 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:6f:25:fa,bridge_name='br-int',has_traffic_filtering=True,id=4c8ce68f-8ad3-4266-aa7d-5ad833b39c1e,network=Network(c2ce21d9-e711-470f-89f6-0db58ded70b9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap4c8ce68f-8a') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 06 10:12:39 compute-0 nova_compute[254819]: 2025-12-06 10:12:39.850 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:12:39 compute-0 nova_compute[254819]: 2025-12-06 10:12:39.850 254824 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 10:12:39 compute-0 nova_compute[254819]: 2025-12-06 10:12:39.851 254824 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 10:12:39 compute-0 nova_compute[254819]: 2025-12-06 10:12:39.853 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:12:39 compute-0 nova_compute[254819]: 2025-12-06 10:12:39.853 254824 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4c8ce68f-8a, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 10:12:39 compute-0 nova_compute[254819]: 2025-12-06 10:12:39.854 254824 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap4c8ce68f-8a, col_values=(('external_ids', {'iface-id': '4c8ce68f-8ad3-4266-aa7d-5ad833b39c1e', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:6f:25:fa', 'vm-uuid': '588b3b1f-9845-438c-89c4-744f95204b42'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 10:12:39 compute-0 NetworkManager[48882]: <info>  [1765015959.9038] manager: (tap4c8ce68f-8a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/65)
Dec 06 10:12:39 compute-0 nova_compute[254819]: 2025-12-06 10:12:39.903 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:12:39 compute-0 nova_compute[254819]: 2025-12-06 10:12:39.906 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 10:12:39 compute-0 nova_compute[254819]: 2025-12-06 10:12:39.911 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:12:39 compute-0 nova_compute[254819]: 2025-12-06 10:12:39.912 254824 INFO os_vif [None req-1009fe59-3b29-4500-9e9a-a8857be734ff 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:6f:25:fa,bridge_name='br-int',has_traffic_filtering=True,id=4c8ce68f-8ad3-4266-aa7d-5ad833b39c1e,network=Network(c2ce21d9-e711-470f-89f6-0db58ded70b9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap4c8ce68f-8a')
Dec 06 10:12:39 compute-0 nova_compute[254819]: 2025-12-06 10:12:39.973 254824 DEBUG nova.virt.libvirt.driver [None req-1009fe59-3b29-4500-9e9a-a8857be734ff 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 10:12:39 compute-0 nova_compute[254819]: 2025-12-06 10:12:39.974 254824 DEBUG nova.virt.libvirt.driver [None req-1009fe59-3b29-4500-9e9a-a8857be734ff 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 10:12:39 compute-0 nova_compute[254819]: 2025-12-06 10:12:39.974 254824 DEBUG nova.virt.libvirt.driver [None req-1009fe59-3b29-4500-9e9a-a8857be734ff 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] No VIF found with MAC fa:16:3e:6f:25:fa, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 06 10:12:39 compute-0 nova_compute[254819]: 2025-12-06 10:12:39.974 254824 INFO nova.virt.libvirt.driver [None req-1009fe59-3b29-4500-9e9a-a8857be734ff 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 588b3b1f-9845-438c-89c4-744f95204b42] Using config drive
Dec 06 10:12:40 compute-0 nova_compute[254819]: 2025-12-06 10:12:40.010 254824 DEBUG nova.storage.rbd_utils [None req-1009fe59-3b29-4500-9e9a-a8857be734ff 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] rbd image 588b3b1f-9845-438c-89c4-744f95204b42_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 10:12:40 compute-0 ceph-mon[74327]: pgmap v946: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 1.8 MiB/s wr, 53 op/s
Dec 06 10:12:40 compute-0 ceph-mon[74327]: from='client.? 192.168.122.100:0/3480538889' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 10:12:40 compute-0 ceph-mon[74327]: from='client.? 192.168.122.100:0/3169413128' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 10:12:40 compute-0 sudo[272459]: pam_unix(sudo:session): session closed for user root
Dec 06 10:12:40 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 06 10:12:40 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 10:12:40 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 06 10:12:40 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 10:12:40 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v947: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 06 10:12:40 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 06 10:12:40 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config-key set", "key": "mgr/cephadm/osd_remove_queue"}]: dispatch
Dec 06 10:12:40 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Dec 06 10:12:40 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config-key set", "key": "mgr/cephadm/spec.nfs.cephfs"}]: dispatch
Dec 06 10:12:40 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 06 10:12:40 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 10:12:40 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 06 10:12:40 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 10:12:40 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 06 10:12:40 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 10:12:40 compute-0 sudo[272540]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 10:12:40 compute-0 sudo[272540]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:12:40 compute-0 sudo[272540]: pam_unix(sudo:session): session closed for user root
Dec 06 10:12:40 compute-0 nova_compute[254819]: 2025-12-06 10:12:40.324 254824 DEBUG nova.network.neutron [req-3dd7f5b3-5dd6-4a26-8c08-0cb75a4a46fe req-832939c5-8563-4c9b-ba33-c3148053159a d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 588b3b1f-9845-438c-89c4-744f95204b42] Updated VIF entry in instance network info cache for port 4c8ce68f-8ad3-4266-aa7d-5ad833b39c1e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 06 10:12:40 compute-0 nova_compute[254819]: 2025-12-06 10:12:40.325 254824 DEBUG nova.network.neutron [req-3dd7f5b3-5dd6-4a26-8c08-0cb75a4a46fe req-832939c5-8563-4c9b-ba33-c3148053159a d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 588b3b1f-9845-438c-89c4-744f95204b42] Updating instance_info_cache with network_info: [{"id": "4c8ce68f-8ad3-4266-aa7d-5ad833b39c1e", "address": "fa:16:3e:6f:25:fa", "network": {"id": "c2ce21d9-e711-470f-89f6-0db58ded70b9", "bridge": "br-int", "label": "tempest-network-smoke--1291548226", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.198", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4c8ce68f-8a", "ovs_interfaceid": "4c8ce68f-8ad3-4266-aa7d-5ad833b39c1e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 10:12:40 compute-0 nova_compute[254819]: 2025-12-06 10:12:40.343 254824 DEBUG oslo_concurrency.lockutils [req-3dd7f5b3-5dd6-4a26-8c08-0cb75a4a46fe req-832939c5-8563-4c9b-ba33-c3148053159a d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Releasing lock "refresh_cache-588b3b1f-9845-438c-89c4-744f95204b42" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 10:12:40 compute-0 sudo[272565]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 5ecd3f74-dade-5fc4-92ce-8950ae424258 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Dec 06 10:12:40 compute-0 sudo[272565]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:12:40 compute-0 nova_compute[254819]: 2025-12-06 10:12:40.464 254824 INFO nova.virt.libvirt.driver [None req-1009fe59-3b29-4500-9e9a-a8857be734ff 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 588b3b1f-9845-438c-89c4-744f95204b42] Creating config drive at /var/lib/nova/instances/588b3b1f-9845-438c-89c4-744f95204b42/disk.config
Dec 06 10:12:40 compute-0 nova_compute[254819]: 2025-12-06 10:12:40.471 254824 DEBUG oslo_concurrency.processutils [None req-1009fe59-3b29-4500-9e9a-a8857be734ff 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/588b3b1f-9845-438c-89c4-744f95204b42/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp8tyko1dy execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 10:12:40 compute-0 nova_compute[254819]: 2025-12-06 10:12:40.595 254824 DEBUG oslo_concurrency.processutils [None req-1009fe59-3b29-4500-9e9a-a8857be734ff 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/588b3b1f-9845-438c-89c4-744f95204b42/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp8tyko1dy" returned: 0 in 0.124s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
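
The CMD line above shows how the config drive is produced: nova stages the metadata in a temporary directory (/tmp/tmp8tyko1dy here) and shells out to mkisofs with a fixed flag set and the config-2 volume label that cloud-init probes for. A minimal sketch of the same step, assuming mkisofs is installed; build_config_drive and its inputs are illustrative, not nova's actual API:

    import subprocess
    import tempfile
    from pathlib import Path

    def build_config_drive(output_iso: str, files: dict) -> None:
        """Stage metadata files in a temp dir, then build an ISO9660
        image labelled 'config-2' using the same mkisofs flags as the
        logged command (files maps relative paths to text content)."""
        with tempfile.TemporaryDirectory() as staging:
            for rel_path, data in files.items():
                path = Path(staging, rel_path)
                path.parent.mkdir(parents=True, exist_ok=True)
                path.write_text(data)
            subprocess.run(
                ["/usr/bin/mkisofs", "-o", output_iso,
                 "-ldots", "-allow-lowercase", "-allow-multidot",
                 "-l", "-quiet", "-J", "-r",
                 "-V", "config-2",       # volume label cloud-init looks for
                 staging],
                check=True,
            )
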
Dec 06 10:12:40 compute-0 nova_compute[254819]: 2025-12-06 10:12:40.624 254824 DEBUG nova.storage.rbd_utils [None req-1009fe59-3b29-4500-9e9a-a8857be734ff 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] rbd image 588b3b1f-9845-438c-89c4-744f95204b42_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 10:12:40 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:12:40 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65e8001f20 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:12:40 compute-0 nova_compute[254819]: 2025-12-06 10:12:40.628 254824 DEBUG oslo_concurrency.processutils [None req-1009fe59-3b29-4500-9e9a-a8857be734ff 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/588b3b1f-9845-438c-89c4-744f95204b42/disk.config 588b3b1f-9845-438c-89c4-744f95204b42_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 10:12:40 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:12:40 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d0002110 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:12:40 compute-0 podman[272668]: 2025-12-06 10:12:40.752034568 +0000 UTC m=+0.041597261 container create 4b8042e4230861a6d3581c2088f09c367c797d9b4f30c0c4906c06d56d95d44e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_hawking, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Dec 06 10:12:40 compute-0 nova_compute[254819]: 2025-12-06 10:12:40.784 254824 DEBUG oslo_concurrency.processutils [None req-1009fe59-3b29-4500-9e9a-a8857be734ff 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/588b3b1f-9845-438c-89c4-744f95204b42/disk.config 588b3b1f-9845-438c-89c4-744f95204b42_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.156s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 10:12:40 compute-0 nova_compute[254819]: 2025-12-06 10:12:40.785 254824 INFO nova.virt.libvirt.driver [None req-1009fe59-3b29-4500-9e9a-a8857be734ff 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 588b3b1f-9845-438c-89c4-744f95204b42] Deleting local config drive /var/lib/nova/instances/588b3b1f-9845-438c-89c4-744f95204b42/disk.config because it was imported into RBD.
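
Because this deployment backs instance disks with Ceph, the freshly built ISO is immediately pushed into the vms RBD pool and the local file deleted, as the two lines above show. A sketch of the equivalent import, reusing the exact rbd flags from the logged command; import_config_drive is a hypothetical wrapper:

    import subprocess

    def import_config_drive(local_iso: str, image_name: str) -> None:
        """Import a local file into the 'vms' pool as a format-2 RBD
        image, authenticating as the same 'openstack' cephx client."""
        subprocess.run(
            ["rbd", "import", "--pool", "vms",
             local_iso, image_name,
             "--image-format=2",
             "--id", "openstack",
             "--conf", "/etc/ceph/ceph.conf"],
            check=True,
        )
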
Dec 06 10:12:40 compute-0 systemd[1]: Started libpod-conmon-4b8042e4230861a6d3581c2088f09c367c797d9b4f30c0c4906c06d56d95d44e.scope.
Dec 06 10:12:40 compute-0 podman[272668]: 2025-12-06 10:12:40.734138721 +0000 UTC m=+0.023701434 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 06 10:12:40 compute-0 systemd[1]: Started libcrun container.
Dec 06 10:12:40 compute-0 kernel: tap4c8ce68f-8a: entered promiscuous mode
Dec 06 10:12:40 compute-0 NetworkManager[48882]: <info>  [1765015960.8434] manager: (tap4c8ce68f-8a): new Tun device (/org/freedesktop/NetworkManager/Devices/66)
Dec 06 10:12:40 compute-0 ovn_controller[152417]: 2025-12-06T10:12:40Z|00101|binding|INFO|Claiming lport 4c8ce68f-8ad3-4266-aa7d-5ad833b39c1e for this chassis.
Dec 06 10:12:40 compute-0 ovn_controller[152417]: 2025-12-06T10:12:40Z|00102|binding|INFO|4c8ce68f-8ad3-4266-aa7d-5ad833b39c1e: Claiming fa:16:3e:6f:25:fa 10.100.0.9
Dec 06 10:12:40 compute-0 nova_compute[254819]: 2025-12-06 10:12:40.846 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:12:40 compute-0 podman[272668]: 2025-12-06 10:12:40.853517641 +0000 UTC m=+0.143080354 container init 4b8042e4230861a6d3581c2088f09c367c797d9b4f30c0c4906c06d56d95d44e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_hawking, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 10:12:40 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:12:40.855 162267 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6f:25:fa 10.100.0.9'], port_security=['fa:16:3e:6f:25:fa 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-TestNetworkBasicOps-1269654245', 'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '588b3b1f-9845-438c-89c4-744f95204b42', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c2ce21d9-e711-470f-89f6-0db58ded70b9', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-TestNetworkBasicOps-1269654245', 'neutron:project_id': '92b402c8d3e2476abc98be42a1e6d34e', 'neutron:revision_number': '7', 'neutron:security_group_ids': '1e7cc18e-31f3-4bdb-821d-1683a210c530', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.198'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=093e5b40-935f-42c8-a85f-385c1c7048be, chassis=[<ovs.db.idl.Row object at 0x7f70c28558b0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f70c28558b0>], logical_port=4c8ce68f-8ad3-4266-aa7d-5ad833b39c1e) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 10:12:40 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:12:40.856 162267 INFO neutron.agent.ovn.metadata.agent [-] Port 4c8ce68f-8ad3-4266-aa7d-5ad833b39c1e in datapath c2ce21d9-e711-470f-89f6-0db58ded70b9 bound to our chassis
Dec 06 10:12:40 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:12:40.857 162267 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network c2ce21d9-e711-470f-89f6-0db58ded70b9
Dec 06 10:12:40 compute-0 podman[272668]: 2025-12-06 10:12:40.865041729 +0000 UTC m=+0.154604422 container start 4b8042e4230861a6d3581c2088f09c367c797d9b4f30c0c4906c06d56d95d44e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_hawking, org.label-schema.build-date=20250325, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 06 10:12:40 compute-0 podman[272668]: 2025-12-06 10:12:40.869754996 +0000 UTC m=+0.159317689 container attach 4b8042e4230861a6d3581c2088f09c367c797d9b4f30c0c4906c06d56d95d44e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_hawking, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 10:12:40 compute-0 ovn_controller[152417]: 2025-12-06T10:12:40Z|00103|binding|INFO|Setting lport 4c8ce68f-8ad3-4266-aa7d-5ad833b39c1e ovn-installed in OVS
Dec 06 10:12:40 compute-0 ovn_controller[152417]: 2025-12-06T10:12:40Z|00104|binding|INFO|Setting lport 4c8ce68f-8ad3-4266-aa7d-5ad833b39c1e up in Southbound
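
The four binding messages above (00101-00104) are the normal OVN plug sequence: ovn-controller claims the logical port for this chassis, records its MAC and IP, marks the OVS interface ovn-installed, and sets the port up in the Southbound database, which is what later triggers the network-vif-plugged event. One way to confirm the binding from the chassis, sketched around the standard ovn-sbctl client (the Southbound endpoint defaults may differ per deployment):

    import subprocess

    def show_port_binding(logical_port: str) -> str:
        """Dump the Southbound Port_Binding row for a logical port;
        after the claim above, 'chassis' should reference this host."""
        result = subprocess.run(
            ["ovn-sbctl", "find", "Port_Binding",
             f"logical_port={logical_port}"],
            capture_output=True, text=True, check=True,
        )
        return result.stdout

    print(show_port_binding("4c8ce68f-8ad3-4266-aa7d-5ad833b39c1e"))
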
Dec 06 10:12:40 compute-0 lucid_hawking[272687]: 167 167
Dec 06 10:12:40 compute-0 systemd[1]: libpod-4b8042e4230861a6d3581c2088f09c367c797d9b4f30c0c4906c06d56d95d44e.scope: Deactivated successfully.
Dec 06 10:12:40 compute-0 podman[272668]: 2025-12-06 10:12:40.873175637 +0000 UTC m=+0.162738340 container died 4b8042e4230861a6d3581c2088f09c367c797d9b4f30c0c4906c06d56d95d44e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_hawking, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 06 10:12:40 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:12:40.871 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[0c47f031-5056-484a-ac8e-3b17b4af1392]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:12:40 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:12:40.872 162267 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapc2ce21d9-e1 in ovnmeta-c2ce21d9-e711-470f-89f6-0db58ded70b9 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 06 10:12:40 compute-0 nova_compute[254819]: 2025-12-06 10:12:40.875 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:12:40 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:12:40.877 260126 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapc2ce21d9-e0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 06 10:12:40 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:12:40.877 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[a3dfbfa5-66ea-4f64-aa7a-137559b5dd1a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:12:40 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:12:40.879 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[35deb2f0-c615-4d31-a84c-0aad3d39d80d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:12:40 compute-0 systemd-machined[216202]: New machine qemu-6-instance-00000009.
Dec 06 10:12:40 compute-0 systemd[1]: Started Virtual Machine qemu-6-instance-00000009.
Dec 06 10:12:40 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:12:40.891 162385 DEBUG oslo.privsep.daemon [-] privsep: reply[702b6e31-ed35-4b3c-93b4-ef423bc71668]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:12:40 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:12:40] "GET /metrics HTTP/1.1" 200 48481 "" "Prometheus/2.51.0"
Dec 06 10:12:40 compute-0 systemd-udevd[272713]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 10:12:40 compute-0 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:12:40] "GET /metrics HTTP/1.1" 200 48481 "" "Prometheus/2.51.0"
Dec 06 10:12:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-c399dfdb8edfc87f3fc79dd3c420a2a4c320c15a4237f8470a731242e769846f-merged.mount: Deactivated successfully.
Dec 06 10:12:40 compute-0 NetworkManager[48882]: <info>  [1765015960.9195] device (tap4c8ce68f-8a): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 06 10:12:40 compute-0 NetworkManager[48882]: <info>  [1765015960.9206] device (tap4c8ce68f-8a): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 06 10:12:40 compute-0 podman[272668]: 2025-12-06 10:12:40.921835037 +0000 UTC m=+0.211397730 container remove 4b8042e4230861a6d3581c2088f09c367c797d9b4f30c0c4906c06d56d95d44e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_hawking, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 10:12:40 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:12:40.924 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[4b8822aa-2fcc-48b8-8b40-1b77c6cc40ed]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:12:40 compute-0 systemd[1]: libpod-conmon-4b8042e4230861a6d3581c2088f09c367c797d9b4f30c0c4906c06d56d95d44e.scope: Deactivated successfully.
Dec 06 10:12:40 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:12:40.962 260145 DEBUG oslo.privsep.daemon [-] privsep: reply[f74f2d88-2175-4327-a3f1-d9731ea346ab]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:12:40 compute-0 NetworkManager[48882]: <info>  [1765015960.9701] manager: (tapc2ce21d9-e0): new Veth device (/org/freedesktop/NetworkManager/Devices/67)
Dec 06 10:12:40 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:12:40.972 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[8e11f102-1ca8-4c9a-8220-07ff3c64922c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:12:41 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:12:41.015 260145 DEBUG oslo.privsep.daemon [-] privsep: reply[b7c8fbb5-7813-454e-a1bd-e6f47b1ae821]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:12:41 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:12:41.021 260145 DEBUG oslo.privsep.daemon [-] privsep: reply[bdfc9403-c896-4b42-87fb-8b4e166892b8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:12:41 compute-0 NetworkManager[48882]: <info>  [1765015961.0550] device (tapc2ce21d9-e0): carrier: link connected
Dec 06 10:12:41 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 10:12:41 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 10:12:41 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 10:12:41 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 10:12:41 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 10:12:41 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 10:12:41 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 10:12:41 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:12:41.060 260145 DEBUG oslo.privsep.daemon [-] privsep: reply[58b78731-8a50-49d0-80d4-0144cb3f8cc8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:12:41 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:12:41.082 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[942687c4-b87b-46bd-b35e-7492a053e677]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc2ce21d9-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:af:58:64'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 32], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 433153, 'reachable_time': 37066, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 272756, 'error': None, 'target': 'ovnmeta-c2ce21d9-e711-470f-89f6-0db58ded70b9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:12:41 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:12:41 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d4004290 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:12:41 compute-0 nova_compute[254819]: 2025-12-06 10:12:41.100 254824 DEBUG nova.compute.manager [req-36a3df96-34c4-4dfc-968f-3952ed99be2f req-dc4e44df-2254-4ce0-ab92-e9f8b3da98e7 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 588b3b1f-9845-438c-89c4-744f95204b42] Received event network-vif-plugged-4c8ce68f-8ad3-4266-aa7d-5ad833b39c1e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 10:12:41 compute-0 nova_compute[254819]: 2025-12-06 10:12:41.100 254824 DEBUG oslo_concurrency.lockutils [req-36a3df96-34c4-4dfc-968f-3952ed99be2f req-dc4e44df-2254-4ce0-ab92-e9f8b3da98e7 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Acquiring lock "588b3b1f-9845-438c-89c4-744f95204b42-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:12:41 compute-0 nova_compute[254819]: 2025-12-06 10:12:41.101 254824 DEBUG oslo_concurrency.lockutils [req-36a3df96-34c4-4dfc-968f-3952ed99be2f req-dc4e44df-2254-4ce0-ab92-e9f8b3da98e7 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Lock "588b3b1f-9845-438c-89c4-744f95204b42-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:12:41 compute-0 nova_compute[254819]: 2025-12-06 10:12:41.101 254824 DEBUG oslo_concurrency.lockutils [req-36a3df96-34c4-4dfc-968f-3952ed99be2f req-dc4e44df-2254-4ce0-ab92-e9f8b3da98e7 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Lock "588b3b1f-9845-438c-89c4-744f95204b42-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:12:41 compute-0 nova_compute[254819]: 2025-12-06 10:12:41.101 254824 DEBUG nova.compute.manager [req-36a3df96-34c4-4dfc-968f-3952ed99be2f req-dc4e44df-2254-4ce0-ab92-e9f8b3da98e7 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 588b3b1f-9845-438c-89c4-744f95204b42] Processing event network-vif-plugged-4c8ce68f-8ad3-4266-aa7d-5ad833b39c1e _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
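
The acquire/pop/release sequence above is nova's external-event handshake: the spawning thread registers a waiter for network-vif-plugged and blocks until this Neutron-driven callback pops the matching event. A toy illustration of that wait/pop pattern follows; it is not nova's actual classes, and the names and timings are invented:

    import threading

    class InstanceEvents:
        """Minimal stand-in for the wait/pop handshake seen in the log."""

        def __init__(self):
            self._lock = threading.Lock()
            self._events = {}            # event name -> threading.Event

        def prepare(self, name):
            with self._lock:             # cf. 'Acquiring lock ...-events'
                return self._events.setdefault(name, threading.Event())

        def pop(self, name):
            with self._lock:             # called from the Neutron event path
                waiter = self._events.pop(name, None)
            if waiter:
                waiter.set()             # unblocks the spawning thread

    events = InstanceEvents()
    waiter = events.prepare("network-vif-plugged")
    threading.Timer(0.1, events.pop, args=["network-vif-plugged"]).start()
    waiter.wait(timeout=300)             # spawn resumes once the VIF is plugged
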
Dec 06 10:12:41 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:12:41.108 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[5875606a-2bc1-4b95-aa24-58590aa98390]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feaf:5864'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 433153, 'tstamp': 433153}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 272763, 'error': None, 'target': 'ovnmeta-c2ce21d9-e711-470f-89f6-0db58ded70b9', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:12:41 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:12:41.129 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[4c500899-988f-4102-97a2-65638e870f4a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc2ce21d9-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:af:58:64'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 32], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 433153, 'reachable_time': 37066, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 272769, 'error': None, 'target': 'ovnmeta-c2ce21d9-e711-470f-89f6-0db58ded70b9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:12:41 compute-0 podman[272754]: 2025-12-06 10:12:41.132868487 +0000 UTC m=+0.056163772 container create ec98dd63e4507a77d3a40a0a8f6f013f70db2ed1fafc1e08325f5c9c8527f599 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_gates, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Dec 06 10:12:41 compute-0 nova_compute[254819]: 2025-12-06 10:12:41.150 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:12:41 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:12:41.173 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[d598bfe5-0f35-4dec-819d-98e47b63df80]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:12:41 compute-0 systemd[1]: Started libpod-conmon-ec98dd63e4507a77d3a40a0a8f6f013f70db2ed1fafc1e08325f5c9c8527f599.scope.
Dec 06 10:12:41 compute-0 podman[272754]: 2025-12-06 10:12:41.110131459 +0000 UTC m=+0.033426764 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 06 10:12:41 compute-0 systemd[1]: Started libcrun container.
Dec 06 10:12:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/707b1cd2d285236a77553f12195ea5045033040ec29f70fa510149f885d44ba1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 10:12:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/707b1cd2d285236a77553f12195ea5045033040ec29f70fa510149f885d44ba1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 10:12:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/707b1cd2d285236a77553f12195ea5045033040ec29f70fa510149f885d44ba1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 10:12:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/707b1cd2d285236a77553f12195ea5045033040ec29f70fa510149f885d44ba1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 10:12:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/707b1cd2d285236a77553f12195ea5045033040ec29f70fa510149f885d44ba1/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 06 10:12:41 compute-0 podman[272754]: 2025-12-06 10:12:41.245791286 +0000 UTC m=+0.169086591 container init ec98dd63e4507a77d3a40a0a8f6f013f70db2ed1fafc1e08325f5c9c8527f599 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_gates, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1)
Dec 06 10:12:41 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:12:41.260 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[0dc02cba-d3d5-4858-83d1-949f79bdfbe6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:12:41 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:12:41.262 162267 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc2ce21d9-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 10:12:41 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:12:41.263 162267 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 10:12:41 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:12:41.263 162267 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc2ce21d9-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 10:12:41 compute-0 podman[272754]: 2025-12-06 10:12:41.264430863 +0000 UTC m=+0.187726168 container start ec98dd63e4507a77d3a40a0a8f6f013f70db2ed1fafc1e08325f5c9c8527f599 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_gates, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Dec 06 10:12:41 compute-0 kernel: tapc2ce21d9-e0: entered promiscuous mode
Dec 06 10:12:41 compute-0 nova_compute[254819]: 2025-12-06 10:12:41.265 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:12:41 compute-0 podman[272754]: 2025-12-06 10:12:41.268191614 +0000 UTC m=+0.191486919 container attach ec98dd63e4507a77d3a40a0a8f6f013f70db2ed1fafc1e08325f5c9c8527f599 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_gates, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1)
Dec 06 10:12:41 compute-0 NetworkManager[48882]: <info>  [1765015961.2681] manager: (tapc2ce21d9-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/68)
Dec 06 10:12:41 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:12:41.278 162267 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapc2ce21d9-e0, col_values=(('external_ids', {'iface-id': '52d33d15-d96f-4c26-a63e-0415fca27e6a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 10:12:41 compute-0 ovn_controller[152417]: 2025-12-06T10:12:41Z|00105|binding|INFO|Releasing lport 52d33d15-d96f-4c26-a63e-0415fca27e6a from this chassis (sb_readonly=0)
Dec 06 10:12:41 compute-0 nova_compute[254819]: 2025-12-06 10:12:41.280 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:12:41 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:12:41.282 162267 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/c2ce21d9-e711-470f-89f6-0db58ded70b9.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/c2ce21d9-e711-470f-89f6-0db58ded70b9.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 06 10:12:41 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:12:41.283 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[0b650734-047c-41ab-a05f-a13a3d664431]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:12:41 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:12:41.284 162267 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 06 10:12:41 compute-0 ovn_metadata_agent[162262]: global
Dec 06 10:12:41 compute-0 ovn_metadata_agent[162262]:     log         /dev/log local0 debug
Dec 06 10:12:41 compute-0 ovn_metadata_agent[162262]:     log-tag     haproxy-metadata-proxy-c2ce21d9-e711-470f-89f6-0db58ded70b9
Dec 06 10:12:41 compute-0 ovn_metadata_agent[162262]:     user        root
Dec 06 10:12:41 compute-0 ovn_metadata_agent[162262]:     group       root
Dec 06 10:12:41 compute-0 ovn_metadata_agent[162262]:     maxconn     1024
Dec 06 10:12:41 compute-0 ovn_metadata_agent[162262]:     pidfile     /var/lib/neutron/external/pids/c2ce21d9-e711-470f-89f6-0db58ded70b9.pid.haproxy
Dec 06 10:12:41 compute-0 ovn_metadata_agent[162262]:     daemon
Dec 06 10:12:41 compute-0 ovn_metadata_agent[162262]: 
Dec 06 10:12:41 compute-0 ovn_metadata_agent[162262]: defaults
Dec 06 10:12:41 compute-0 ovn_metadata_agent[162262]:     log global
Dec 06 10:12:41 compute-0 ovn_metadata_agent[162262]:     mode http
Dec 06 10:12:41 compute-0 ovn_metadata_agent[162262]:     option httplog
Dec 06 10:12:41 compute-0 ovn_metadata_agent[162262]:     option dontlognull
Dec 06 10:12:41 compute-0 ovn_metadata_agent[162262]:     option http-server-close
Dec 06 10:12:41 compute-0 ovn_metadata_agent[162262]:     option forwardfor
Dec 06 10:12:41 compute-0 ovn_metadata_agent[162262]:     retries                 3
Dec 06 10:12:41 compute-0 ovn_metadata_agent[162262]:     timeout http-request    30s
Dec 06 10:12:41 compute-0 ovn_metadata_agent[162262]:     timeout connect         30s
Dec 06 10:12:41 compute-0 ovn_metadata_agent[162262]:     timeout client          32s
Dec 06 10:12:41 compute-0 ovn_metadata_agent[162262]:     timeout server          32s
Dec 06 10:12:41 compute-0 ovn_metadata_agent[162262]:     timeout http-keep-alive 30s
Dec 06 10:12:41 compute-0 ovn_metadata_agent[162262]: 
Dec 06 10:12:41 compute-0 ovn_metadata_agent[162262]: 
Dec 06 10:12:41 compute-0 ovn_metadata_agent[162262]: listen listener
Dec 06 10:12:41 compute-0 ovn_metadata_agent[162262]:     bind 169.254.169.254:80
Dec 06 10:12:41 compute-0 ovn_metadata_agent[162262]:     server metadata /var/lib/neutron/metadata_proxy
Dec 06 10:12:41 compute-0 ovn_metadata_agent[162262]:     http-request add-header X-OVN-Network-ID c2ce21d9-e711-470f-89f6-0db58ded70b9
Dec 06 10:12:41 compute-0 ovn_metadata_agent[162262]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 06 10:12:41 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:12:41.286 162267 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-c2ce21d9-e711-470f-89f6-0db58ded70b9', 'env', 'PROCESS_TAG=haproxy-c2ce21d9-e711-470f-89f6-0db58ded70b9', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/c2ce21d9-e711-470f-89f6-0db58ded70b9.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
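
At this point the rendered haproxy config above has been written under /var/lib/neutron/ovn-metadata-proxy/ and the proxy is being launched inside the ovnmeta namespace, so guests on this network can reach 169.254.169.254 and have the X-OVN-Network-ID header added before the request is handed to the agent's unix socket. A sketch for probing the listener from the hypervisor, assuming the namespace name shown in the log and root privileges; probe_metadata_proxy is illustrative:

    import subprocess

    def probe_metadata_proxy(network_id: str) -> int:
        """Return the HTTP status of the metadata listener inside the
        ovnmeta namespace (requires root for 'ip netns exec')."""
        ns = f"ovnmeta-{network_id}"
        result = subprocess.run(
            ["ip", "netns", "exec", ns,
             "curl", "-s", "-o", "/dev/null", "-w", "%{http_code}",
             "http://169.254.169.254/"],
            capture_output=True, text=True, check=True,
        )
        return int(result.stdout)

    print(probe_metadata_proxy("c2ce21d9-e711-470f-89f6-0db58ded70b9"))
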
Dec 06 10:12:41 compute-0 nova_compute[254819]: 2025-12-06 10:12:41.296 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:12:41 compute-0 nova_compute[254819]: 2025-12-06 10:12:41.323 254824 DEBUG nova.compute.manager [None req-1009fe59-3b29-4500-9e9a-a8857be734ff 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 588b3b1f-9845-438c-89c4-744f95204b42] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 06 10:12:41 compute-0 nova_compute[254819]: 2025-12-06 10:12:41.326 254824 DEBUG nova.virt.driver [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] Emitting event <LifecycleEvent: 1765015961.3213038, 588b3b1f-9845-438c-89c4-744f95204b42 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 10:12:41 compute-0 nova_compute[254819]: 2025-12-06 10:12:41.327 254824 INFO nova.compute.manager [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] [instance: 588b3b1f-9845-438c-89c4-744f95204b42] VM Started (Lifecycle Event)
Dec 06 10:12:41 compute-0 nova_compute[254819]: 2025-12-06 10:12:41.334 254824 DEBUG nova.virt.libvirt.driver [None req-1009fe59-3b29-4500-9e9a-a8857be734ff 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 588b3b1f-9845-438c-89c4-744f95204b42] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 06 10:12:41 compute-0 nova_compute[254819]: 2025-12-06 10:12:41.340 254824 INFO nova.virt.libvirt.driver [-] [instance: 588b3b1f-9845-438c-89c4-744f95204b42] Instance spawned successfully.
Dec 06 10:12:41 compute-0 nova_compute[254819]: 2025-12-06 10:12:41.342 254824 DEBUG nova.virt.libvirt.driver [None req-1009fe59-3b29-4500-9e9a-a8857be734ff 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 588b3b1f-9845-438c-89c4-744f95204b42] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 06 10:12:41 compute-0 nova_compute[254819]: 2025-12-06 10:12:41.365 254824 DEBUG nova.compute.manager [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] [instance: 588b3b1f-9845-438c-89c4-744f95204b42] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 10:12:41 compute-0 nova_compute[254819]: 2025-12-06 10:12:41.375 254824 DEBUG nova.compute.manager [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] [instance: 588b3b1f-9845-438c-89c4-744f95204b42] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 10:12:41 compute-0 nova_compute[254819]: 2025-12-06 10:12:41.383 254824 DEBUG nova.virt.libvirt.driver [None req-1009fe59-3b29-4500-9e9a-a8857be734ff 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 588b3b1f-9845-438c-89c4-744f95204b42] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 10:12:41 compute-0 nova_compute[254819]: 2025-12-06 10:12:41.383 254824 DEBUG nova.virt.libvirt.driver [None req-1009fe59-3b29-4500-9e9a-a8857be734ff 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 588b3b1f-9845-438c-89c4-744f95204b42] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 10:12:41 compute-0 nova_compute[254819]: 2025-12-06 10:12:41.384 254824 DEBUG nova.virt.libvirt.driver [None req-1009fe59-3b29-4500-9e9a-a8857be734ff 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 588b3b1f-9845-438c-89c4-744f95204b42] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 10:12:41 compute-0 nova_compute[254819]: 2025-12-06 10:12:41.385 254824 DEBUG nova.virt.libvirt.driver [None req-1009fe59-3b29-4500-9e9a-a8857be734ff 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 588b3b1f-9845-438c-89c4-744f95204b42] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 10:12:41 compute-0 nova_compute[254819]: 2025-12-06 10:12:41.385 254824 DEBUG nova.virt.libvirt.driver [None req-1009fe59-3b29-4500-9e9a-a8857be734ff 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 588b3b1f-9845-438c-89c4-744f95204b42] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 10:12:41 compute-0 nova_compute[254819]: 2025-12-06 10:12:41.385 254824 DEBUG nova.virt.libvirt.driver [None req-1009fe59-3b29-4500-9e9a-a8857be734ff 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 588b3b1f-9845-438c-89c4-744f95204b42] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 10:12:41 compute-0 nova_compute[254819]: 2025-12-06 10:12:41.410 254824 INFO nova.compute.manager [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] [instance: 588b3b1f-9845-438c-89c4-744f95204b42] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 06 10:12:41 compute-0 nova_compute[254819]: 2025-12-06 10:12:41.410 254824 DEBUG nova.virt.driver [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] Emitting event <LifecycleEvent: 1765015961.3217614, 588b3b1f-9845-438c-89c4-744f95204b42 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 10:12:41 compute-0 nova_compute[254819]: 2025-12-06 10:12:41.411 254824 INFO nova.compute.manager [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] [instance: 588b3b1f-9845-438c-89c4-744f95204b42] VM Paused (Lifecycle Event)
Dec 06 10:12:41 compute-0 nova_compute[254819]: 2025-12-06 10:12:41.436 254824 DEBUG nova.compute.manager [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] [instance: 588b3b1f-9845-438c-89c4-744f95204b42] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 10:12:41 compute-0 nova_compute[254819]: 2025-12-06 10:12:41.441 254824 DEBUG nova.virt.driver [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] Emitting event <LifecycleEvent: 1765015961.3327663, 588b3b1f-9845-438c-89c4-744f95204b42 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 10:12:41 compute-0 nova_compute[254819]: 2025-12-06 10:12:41.441 254824 INFO nova.compute.manager [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] [instance: 588b3b1f-9845-438c-89c4-744f95204b42] VM Resumed (Lifecycle Event)
Dec 06 10:12:41 compute-0 nova_compute[254819]: 2025-12-06 10:12:41.445 254824 INFO nova.compute.manager [None req-1009fe59-3b29-4500-9e9a-a8857be734ff 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 588b3b1f-9845-438c-89c4-744f95204b42] Took 5.85 seconds to spawn the instance on the hypervisor.
Dec 06 10:12:41 compute-0 nova_compute[254819]: 2025-12-06 10:12:41.446 254824 DEBUG nova.compute.manager [None req-1009fe59-3b29-4500-9e9a-a8857be734ff 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 588b3b1f-9845-438c-89c4-744f95204b42] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 10:12:41 compute-0 nova_compute[254819]: 2025-12-06 10:12:41.457 254824 DEBUG nova.compute.manager [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] [instance: 588b3b1f-9845-438c-89c4-744f95204b42] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 10:12:41 compute-0 nova_compute[254819]: 2025-12-06 10:12:41.461 254824 DEBUG nova.compute.manager [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] [instance: 588b3b1f-9845-438c-89c4-744f95204b42] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 10:12:41 compute-0 nova_compute[254819]: 2025-12-06 10:12:41.486 254824 INFO nova.compute.manager [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] [instance: 588b3b1f-9845-438c-89c4-744f95204b42] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 06 10:12:41 compute-0 nova_compute[254819]: 2025-12-06 10:12:41.529 254824 INFO nova.compute.manager [None req-1009fe59-3b29-4500-9e9a-a8857be734ff 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 588b3b1f-9845-438c-89c4-744f95204b42] Took 6.82 seconds to build instance.
Dec 06 10:12:41 compute-0 nova_compute[254819]: 2025-12-06 10:12:41.556 254824 DEBUG oslo_concurrency.lockutils [None req-1009fe59-3b29-4500-9e9a-a8857be734ff 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lock "588b3b1f-9845-438c-89c4-744f95204b42" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 6.958s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
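
With the build lock released after 6.958s, the instance should shortly report ACTIVE to API clients. A quick status check, sketched with the standard openstack CLI driven from Python; it assumes suitable credentials are already exported in the environment:

    import subprocess

    def server_status(server_id: str) -> str:
        """Return nova's status field (e.g. BUILD, ACTIVE) for a server."""
        result = subprocess.run(
            ["openstack", "server", "show", server_id,
             "-f", "value", "-c", "status"],
            capture_output=True, text=True, check=True,
        )
        return result.stdout.strip()

    print(server_status("588b3b1f-9845-438c-89c4-744f95204b42"))
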
Dec 06 10:12:41 compute-0 beautiful_gates[272809]: --> passed data devices: 0 physical, 1 LVM
Dec 06 10:12:41 compute-0 beautiful_gates[272809]: --> All data devices are unavailable
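
The two container lines above are ceph-volume's verdict for the drive group: one LVM device was passed in and rejected as unavailable, so no new OSD is prepared in this pass. Adding --report to the same lvm batch arguments turns the run into a dry run that explains each device's eligibility; sketched here assuming the LV path from the earlier cephadm call:

    import subprocess

    def batch_report(lv_path: str) -> str:
        """Dry-run ceph-volume over the same device list; --report
        prints why each device is or is not usable, changing nothing."""
        result = subprocess.run(
            ["ceph-volume", "lvm", "batch", "--no-auto", lv_path, "--report"],
            capture_output=True, text=True, check=True,
        )
        return result.stdout

    print(batch_report("/dev/ceph_vg0/ceph_lv0"))
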
Dec 06 10:12:41 compute-0 systemd[1]: libpod-ec98dd63e4507a77d3a40a0a8f6f013f70db2ed1fafc1e08325f5c9c8527f599.scope: Deactivated successfully.
Dec 06 10:12:41 compute-0 podman[272754]: 2025-12-06 10:12:41.647948413 +0000 UTC m=+0.571243708 container died ec98dd63e4507a77d3a40a0a8f6f013f70db2ed1fafc1e08325f5c9c8527f599 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_gates, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Dec 06 10:12:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-707b1cd2d285236a77553f12195ea5045033040ec29f70fa510149f885d44ba1-merged.mount: Deactivated successfully.
Dec 06 10:12:41 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:12:41 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:12:41 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:12:41.691 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:12:41 compute-0 podman[272754]: 2025-12-06 10:12:41.697529548 +0000 UTC m=+0.620824843 container remove ec98dd63e4507a77d3a40a0a8f6f013f70db2ed1fafc1e08325f5c9c8527f599 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_gates, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325)
Dec 06 10:12:41 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:12:41 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:12:41 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:12:41.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:12:41 compute-0 systemd[1]: libpod-conmon-ec98dd63e4507a77d3a40a0a8f6f013f70db2ed1fafc1e08325f5c9c8527f599.scope: Deactivated successfully.
Dec 06 10:12:41 compute-0 sudo[272565]: pam_unix(sudo:session): session closed for user root
Dec 06 10:12:41 compute-0 podman[272864]: 2025-12-06 10:12:41.754447019 +0000 UTC m=+0.069895689 container create 81cbcd116438af394cc8310bac2d1195c7e71d650f55c26ddd848762038f94d1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=neutron-haproxy-ovnmeta-c2ce21d9-e711-470f-89f6-0db58ded70b9, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Dec 06 10:12:41 compute-0 systemd[1]: Started libpod-conmon-81cbcd116438af394cc8310bac2d1195c7e71d650f55c26ddd848762038f94d1.scope.
Dec 06 10:12:41 compute-0 podman[272864]: 2025-12-06 10:12:41.722192508 +0000 UTC m=+0.037641228 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3
Dec 06 10:12:41 compute-0 systemd[1]: Started libcrun container.
Dec 06 10:12:41 compute-0 sudo[272886]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 10:12:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/030cc584701fc2ae4a0e4246d98dd4c32466f55ec245b3b9cff9771d81d5672e/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 06 10:12:41 compute-0 sudo[272886]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:12:41 compute-0 sudo[272886]: pam_unix(sudo:session): session closed for user root
Dec 06 10:12:41 compute-0 podman[272864]: 2025-12-06 10:12:41.854796032 +0000 UTC m=+0.170244722 container init 81cbcd116438af394cc8310bac2d1195c7e71d650f55c26ddd848762038f94d1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=neutron-haproxy-ovnmeta-c2ce21d9-e711-470f-89f6-0db58ded70b9, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Dec 06 10:12:41 compute-0 podman[272864]: 2025-12-06 10:12:41.861714737 +0000 UTC m=+0.177163407 container start 81cbcd116438af394cc8310bac2d1195c7e71d650f55c26ddd848762038f94d1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=neutron-haproxy-ovnmeta-c2ce21d9-e711-470f-89f6-0db58ded70b9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 10:12:41 compute-0 neutron-haproxy-ovnmeta-c2ce21d9-e711-470f-89f6-0db58ded70b9[272908]: [NOTICE]   (272923) : New worker (272942) forked
Dec 06 10:12:41 compute-0 neutron-haproxy-ovnmeta-c2ce21d9-e711-470f-89f6-0db58ded70b9[272908]: [NOTICE]   (272923) : Loading success.
Dec 06 10:12:41 compute-0 sudo[272916]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 5ecd3f74-dade-5fc4-92ce-8950ae424258 -- lvm list --format json
Dec 06 10:12:41 compute-0 sudo[272916]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:12:42 compute-0 ceph-mon[74327]: pgmap v947: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 06 10:12:42 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v948: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 06 10:12:42 compute-0 podman[272994]: 2025-12-06 10:12:42.421494957 +0000 UTC m=+0.054736123 container create 01206abb147f5c55873d29187dbb93a8f0af14f8245a6023a471635680bab193 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_elgamal, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Dec 06 10:12:42 compute-0 systemd[1]: Started libpod-conmon-01206abb147f5c55873d29187dbb93a8f0af14f8245a6023a471635680bab193.scope.
Dec 06 10:12:42 compute-0 systemd[1]: Started libcrun container.
Dec 06 10:12:42 compute-0 podman[272994]: 2025-12-06 10:12:42.402583762 +0000 UTC m=+0.035824958 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 06 10:12:42 compute-0 podman[272994]: 2025-12-06 10:12:42.516682141 +0000 UTC m=+0.149923317 container init 01206abb147f5c55873d29187dbb93a8f0af14f8245a6023a471635680bab193 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_elgamal, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True)
Dec 06 10:12:42 compute-0 podman[272994]: 2025-12-06 10:12:42.525529828 +0000 UTC m=+0.158770994 container start 01206abb147f5c55873d29187dbb93a8f0af14f8245a6023a471635680bab193 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_elgamal, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 10:12:42 compute-0 podman[272994]: 2025-12-06 10:12:42.528390075 +0000 UTC m=+0.161631271 container attach 01206abb147f5c55873d29187dbb93a8f0af14f8245a6023a471635680bab193 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_elgamal, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 10:12:42 compute-0 adoring_elgamal[273010]: 167 167
Dec 06 10:12:42 compute-0 systemd[1]: libpod-01206abb147f5c55873d29187dbb93a8f0af14f8245a6023a471635680bab193.scope: Deactivated successfully.
Dec 06 10:12:42 compute-0 podman[272994]: 2025-12-06 10:12:42.533680295 +0000 UTC m=+0.166921471 container died 01206abb147f5c55873d29187dbb93a8f0af14f8245a6023a471635680bab193 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_elgamal, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325)
Dec 06 10:12:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-00b20e05dda0fcaf071dfcacb7a162261420d8334a6e6341a4950029f156cc8a-merged.mount: Deactivated successfully.
Dec 06 10:12:42 compute-0 podman[272994]: 2025-12-06 10:12:42.567978693 +0000 UTC m=+0.201219889 container remove 01206abb147f5c55873d29187dbb93a8f0af14f8245a6023a471635680bab193 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_elgamal, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 10:12:42 compute-0 systemd[1]: libpod-conmon-01206abb147f5c55873d29187dbb93a8f0af14f8245a6023a471635680bab193.scope: Deactivated successfully.
Dec 06 10:12:42 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:12:42 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c00a860 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:12:42 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:12:42 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65e8001f20 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:12:42 compute-0 podman[273036]: 2025-12-06 10:12:42.785610338 +0000 UTC m=+0.053976923 container create f09fd1a064540cd79e2116d5a73ad3a4f20c4a8b0a5a594555ff9906415e756c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_jackson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1)
Dec 06 10:12:42 compute-0 systemd[1]: Started libpod-conmon-f09fd1a064540cd79e2116d5a73ad3a4f20c4a8b0a5a594555ff9906415e756c.scope.
Dec 06 10:12:42 compute-0 podman[273036]: 2025-12-06 10:12:42.759440959 +0000 UTC m=+0.027807594 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 06 10:12:42 compute-0 systemd[1]: Started libcrun container.
Dec 06 10:12:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d25a8435fef6b8002b408b06eb4080034d79f4a96d40c43571ee11fdfe74ddfb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 10:12:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d25a8435fef6b8002b408b06eb4080034d79f4a96d40c43571ee11fdfe74ddfb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 10:12:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d25a8435fef6b8002b408b06eb4080034d79f4a96d40c43571ee11fdfe74ddfb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 10:12:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d25a8435fef6b8002b408b06eb4080034d79f4a96d40c43571ee11fdfe74ddfb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 10:12:42 compute-0 podman[273036]: 2025-12-06 10:12:42.917026661 +0000 UTC m=+0.185393326 container init f09fd1a064540cd79e2116d5a73ad3a4f20c4a8b0a5a594555ff9906415e756c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_jackson, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec 06 10:12:42 compute-0 podman[273036]: 2025-12-06 10:12:42.924994164 +0000 UTC m=+0.193360749 container start f09fd1a064540cd79e2116d5a73ad3a4f20c4a8b0a5a594555ff9906415e756c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_jackson, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec 06 10:12:42 compute-0 podman[273036]: 2025-12-06 10:12:42.928527758 +0000 UTC m=+0.196894433 container attach f09fd1a064540cd79e2116d5a73ad3a4f20c4a8b0a5a594555ff9906415e756c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_jackson, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 10:12:43 compute-0 nova_compute[254819]: 2025-12-06 10:12:43.042 254824 DEBUG oslo_concurrency.lockutils [None req-59ce2588-24bb-4ec9-9760-4fee01e8527c 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Acquiring lock "588b3b1f-9845-438c-89c4-744f95204b42" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:12:43 compute-0 nova_compute[254819]: 2025-12-06 10:12:43.043 254824 DEBUG oslo_concurrency.lockutils [None req-59ce2588-24bb-4ec9-9760-4fee01e8527c 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lock "588b3b1f-9845-438c-89c4-744f95204b42" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:12:43 compute-0 nova_compute[254819]: 2025-12-06 10:12:43.043 254824 DEBUG oslo_concurrency.lockutils [None req-59ce2588-24bb-4ec9-9760-4fee01e8527c 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Acquiring lock "588b3b1f-9845-438c-89c4-744f95204b42-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:12:43 compute-0 nova_compute[254819]: 2025-12-06 10:12:43.044 254824 DEBUG oslo_concurrency.lockutils [None req-59ce2588-24bb-4ec9-9760-4fee01e8527c 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lock "588b3b1f-9845-438c-89c4-744f95204b42-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:12:43 compute-0 nova_compute[254819]: 2025-12-06 10:12:43.045 254824 DEBUG oslo_concurrency.lockutils [None req-59ce2588-24bb-4ec9-9760-4fee01e8527c 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lock "588b3b1f-9845-438c-89c4-744f95204b42-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:12:43 compute-0 nova_compute[254819]: 2025-12-06 10:12:43.047 254824 INFO nova.compute.manager [None req-59ce2588-24bb-4ec9-9760-4fee01e8527c 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 588b3b1f-9845-438c-89c4-744f95204b42] Terminating instance
Dec 06 10:12:43 compute-0 nova_compute[254819]: 2025-12-06 10:12:43.049 254824 DEBUG nova.compute.manager [None req-59ce2588-24bb-4ec9-9760-4fee01e8527c 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 588b3b1f-9845-438c-89c4-744f95204b42] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 06 10:12:43 compute-0 ceph-mon[74327]: pgmap v948: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 06 10:12:43 compute-0 kernel: tap4c8ce68f-8a (unregistering): left promiscuous mode
Dec 06 10:12:43 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:12:43 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d00030a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:12:43 compute-0 NetworkManager[48882]: <info>  [1765015963.1071] device (tap4c8ce68f-8a): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 06 10:12:43 compute-0 ovn_controller[152417]: 2025-12-06T10:12:43Z|00106|binding|INFO|Releasing lport 4c8ce68f-8ad3-4266-aa7d-5ad833b39c1e from this chassis (sb_readonly=0)
Dec 06 10:12:43 compute-0 ovn_controller[152417]: 2025-12-06T10:12:43Z|00107|binding|INFO|Setting lport 4c8ce68f-8ad3-4266-aa7d-5ad833b39c1e down in Southbound
Dec 06 10:12:43 compute-0 nova_compute[254819]: 2025-12-06 10:12:43.117 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:12:43 compute-0 ovn_controller[152417]: 2025-12-06T10:12:43Z|00108|binding|INFO|Removing iface tap4c8ce68f-8a ovn-installed in OVS
Dec 06 10:12:43 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:12:43.127 162267 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6f:25:fa 10.100.0.9'], port_security=['fa:16:3e:6f:25:fa 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-TestNetworkBasicOps-1269654245', 'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '588b3b1f-9845-438c-89c4-744f95204b42', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c2ce21d9-e711-470f-89f6-0db58ded70b9', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-TestNetworkBasicOps-1269654245', 'neutron:project_id': '92b402c8d3e2476abc98be42a1e6d34e', 'neutron:revision_number': '9', 'neutron:security_group_ids': '1e7cc18e-31f3-4bdb-821d-1683a210c530', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.198', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=093e5b40-935f-42c8-a85f-385c1c7048be, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f70c28558b0>], logical_port=4c8ce68f-8ad3-4266-aa7d-5ad833b39c1e) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f70c28558b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 10:12:43 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:12:43.129 162267 INFO neutron.agent.ovn.metadata.agent [-] Port 4c8ce68f-8ad3-4266-aa7d-5ad833b39c1e in datapath c2ce21d9-e711-470f-89f6-0db58ded70b9 unbound from our chassis
Dec 06 10:12:43 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:12:43.130 162267 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network c2ce21d9-e711-470f-89f6-0db58ded70b9, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 06 10:12:43 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:12:43.132 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[d9a1282c-5488-4f73-a411-0540e282a538]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:12:43 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:12:43.132 162267 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-c2ce21d9-e711-470f-89f6-0db58ded70b9 namespace which is not needed anymore
Dec 06 10:12:43 compute-0 nova_compute[254819]: 2025-12-06 10:12:43.143 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:12:43 compute-0 systemd[1]: machine-qemu\x2d6\x2dinstance\x2d00000009.scope: Deactivated successfully.
Dec 06 10:12:43 compute-0 systemd[1]: machine-qemu\x2d6\x2dinstance\x2d00000009.scope: Consumed 2.231s CPU time.
Dec 06 10:12:43 compute-0 systemd-machined[216202]: Machine qemu-6-instance-00000009 terminated.
Dec 06 10:12:43 compute-0 nova_compute[254819]: 2025-12-06 10:12:43.184 254824 DEBUG nova.compute.manager [req-06ec377c-8edc-4ae8-a493-83fd386227fe req-481b14d0-280e-4094-b2dc-46be495b2043 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 588b3b1f-9845-438c-89c4-744f95204b42] Received event network-vif-plugged-4c8ce68f-8ad3-4266-aa7d-5ad833b39c1e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 10:12:43 compute-0 nova_compute[254819]: 2025-12-06 10:12:43.186 254824 DEBUG oslo_concurrency.lockutils [req-06ec377c-8edc-4ae8-a493-83fd386227fe req-481b14d0-280e-4094-b2dc-46be495b2043 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Acquiring lock "588b3b1f-9845-438c-89c4-744f95204b42-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:12:43 compute-0 nova_compute[254819]: 2025-12-06 10:12:43.187 254824 DEBUG oslo_concurrency.lockutils [req-06ec377c-8edc-4ae8-a493-83fd386227fe req-481b14d0-280e-4094-b2dc-46be495b2043 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Lock "588b3b1f-9845-438c-89c4-744f95204b42-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:12:43 compute-0 nova_compute[254819]: 2025-12-06 10:12:43.187 254824 DEBUG oslo_concurrency.lockutils [req-06ec377c-8edc-4ae8-a493-83fd386227fe req-481b14d0-280e-4094-b2dc-46be495b2043 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Lock "588b3b1f-9845-438c-89c4-744f95204b42-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:12:43 compute-0 nova_compute[254819]: 2025-12-06 10:12:43.187 254824 DEBUG nova.compute.manager [req-06ec377c-8edc-4ae8-a493-83fd386227fe req-481b14d0-280e-4094-b2dc-46be495b2043 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 588b3b1f-9845-438c-89c4-744f95204b42] No waiting events found dispatching network-vif-plugged-4c8ce68f-8ad3-4266-aa7d-5ad833b39c1e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 10:12:43 compute-0 nova_compute[254819]: 2025-12-06 10:12:43.187 254824 WARNING nova.compute.manager [req-06ec377c-8edc-4ae8-a493-83fd386227fe req-481b14d0-280e-4094-b2dc-46be495b2043 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 588b3b1f-9845-438c-89c4-744f95204b42] Received unexpected event network-vif-plugged-4c8ce68f-8ad3-4266-aa7d-5ad833b39c1e for instance with vm_state active and task_state deleting.
Dec 06 10:12:43 compute-0 recursing_jackson[273052]: {
Dec 06 10:12:43 compute-0 recursing_jackson[273052]:     "1": [
Dec 06 10:12:43 compute-0 recursing_jackson[273052]:         {
Dec 06 10:12:43 compute-0 recursing_jackson[273052]:             "devices": [
Dec 06 10:12:43 compute-0 recursing_jackson[273052]:                 "/dev/loop3"
Dec 06 10:12:43 compute-0 recursing_jackson[273052]:             ],
Dec 06 10:12:43 compute-0 recursing_jackson[273052]:             "lv_name": "ceph_lv0",
Dec 06 10:12:43 compute-0 recursing_jackson[273052]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 10:12:43 compute-0 recursing_jackson[273052]:             "lv_size": "21470642176",
Dec 06 10:12:43 compute-0 recursing_jackson[273052]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=a55pcS-v3Vt-CJ4W-fqCV-Baze-9VlY-jG9prS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=5ecd3f74-dade-5fc4-92ce-8950ae424258,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=7899c4d8-edb4-4836-b838-c4aa702ad7af,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 06 10:12:43 compute-0 recursing_jackson[273052]:             "lv_uuid": "a55pcS-v3Vt-CJ4W-fqCV-Baze-9VlY-jG9prS",
Dec 06 10:12:43 compute-0 recursing_jackson[273052]:             "name": "ceph_lv0",
Dec 06 10:12:43 compute-0 recursing_jackson[273052]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 10:12:43 compute-0 recursing_jackson[273052]:             "tags": {
Dec 06 10:12:43 compute-0 recursing_jackson[273052]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 06 10:12:43 compute-0 recursing_jackson[273052]:                 "ceph.block_uuid": "a55pcS-v3Vt-CJ4W-fqCV-Baze-9VlY-jG9prS",
Dec 06 10:12:43 compute-0 recursing_jackson[273052]:                 "ceph.cephx_lockbox_secret": "",
Dec 06 10:12:43 compute-0 recursing_jackson[273052]:                 "ceph.cluster_fsid": "5ecd3f74-dade-5fc4-92ce-8950ae424258",
Dec 06 10:12:43 compute-0 recursing_jackson[273052]:                 "ceph.cluster_name": "ceph",
Dec 06 10:12:43 compute-0 recursing_jackson[273052]:                 "ceph.crush_device_class": "",
Dec 06 10:12:43 compute-0 recursing_jackson[273052]:                 "ceph.encrypted": "0",
Dec 06 10:12:43 compute-0 recursing_jackson[273052]:                 "ceph.osd_fsid": "7899c4d8-edb4-4836-b838-c4aa702ad7af",
Dec 06 10:12:43 compute-0 recursing_jackson[273052]:                 "ceph.osd_id": "1",
Dec 06 10:12:43 compute-0 recursing_jackson[273052]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 06 10:12:43 compute-0 recursing_jackson[273052]:                 "ceph.type": "block",
Dec 06 10:12:43 compute-0 recursing_jackson[273052]:                 "ceph.vdo": "0",
Dec 06 10:12:43 compute-0 recursing_jackson[273052]:                 "ceph.with_tpm": "0"
Dec 06 10:12:43 compute-0 recursing_jackson[273052]:             },
Dec 06 10:12:43 compute-0 recursing_jackson[273052]:             "type": "block",
Dec 06 10:12:43 compute-0 recursing_jackson[273052]:             "vg_name": "ceph_vg0"
Dec 06 10:12:43 compute-0 recursing_jackson[273052]:         }
Dec 06 10:12:43 compute-0 recursing_jackson[273052]:     ]
Dec 06 10:12:43 compute-0 recursing_jackson[273052]: }
Dec 06 10:12:43 compute-0 nova_compute[254819]: 2025-12-06 10:12:43.274 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:12:43 compute-0 nova_compute[254819]: 2025-12-06 10:12:43.279 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:12:43 compute-0 nova_compute[254819]: 2025-12-06 10:12:43.289 254824 INFO nova.virt.libvirt.driver [-] [instance: 588b3b1f-9845-438c-89c4-744f95204b42] Instance destroyed successfully.
Dec 06 10:12:43 compute-0 nova_compute[254819]: 2025-12-06 10:12:43.290 254824 DEBUG nova.objects.instance [None req-59ce2588-24bb-4ec9-9760-4fee01e8527c 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lazy-loading 'resources' on Instance uuid 588b3b1f-9845-438c-89c4-744f95204b42 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 10:12:43 compute-0 nova_compute[254819]: 2025-12-06 10:12:43.301 254824 DEBUG nova.virt.libvirt.vif [None req-59ce2588-24bb-4ec9-9760-4fee01e8527c 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T10:12:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1549098257',display_name='tempest-TestNetworkBasicOps-server-1549098257',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1549098257',id=9,image_ref='9489b8a5-a798-4e26-87f9-59bb1eb2e6fd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHjlfXiWeP25/+Al9avXS7k5sTY7UpSTwvIPTlqQIhh0XClSeVPzmFV420fI5WFwr8qS2zHe5RQB0WDD7hpreK+FV5EzKAwwCW1d4oQG8NLOPL6t68qoP/9Hs+y9Im3qyA==',key_name='tempest-TestNetworkBasicOps-1342068066',keypairs=<?>,launch_index=0,launched_at=2025-12-06T10:12:41Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='92b402c8d3e2476abc98be42a1e6d34e',ramdisk_id='',reservation_id='r-8kktnhof',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='9489b8a5-a798-4e26-87f9-59bb1eb2e6fd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1971100882',owner_user_name='tempest-TestNetworkBasicOps-1971100882-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T10:12:41Z,user_data=None,user_id='03615580775245e6ae335ee9d785611f',uuid=588b3b1f-9845-438c-89c4-744f95204b42,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "4c8ce68f-8ad3-4266-aa7d-5ad833b39c1e", "address": "fa:16:3e:6f:25:fa", "network": {"id": "c2ce21d9-e711-470f-89f6-0db58ded70b9", "bridge": "br-int", "label": "tempest-network-smoke--1291548226", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.198", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4c8ce68f-8a", "ovs_interfaceid": "4c8ce68f-8ad3-4266-aa7d-5ad833b39c1e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 06 10:12:43 compute-0 nova_compute[254819]: 2025-12-06 10:12:43.302 254824 DEBUG nova.network.os_vif_util [None req-59ce2588-24bb-4ec9-9760-4fee01e8527c 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Converting VIF {"id": "4c8ce68f-8ad3-4266-aa7d-5ad833b39c1e", "address": "fa:16:3e:6f:25:fa", "network": {"id": "c2ce21d9-e711-470f-89f6-0db58ded70b9", "bridge": "br-int", "label": "tempest-network-smoke--1291548226", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.198", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4c8ce68f-8a", "ovs_interfaceid": "4c8ce68f-8ad3-4266-aa7d-5ad833b39c1e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 10:12:43 compute-0 nova_compute[254819]: 2025-12-06 10:12:43.303 254824 DEBUG nova.network.os_vif_util [None req-59ce2588-24bb-4ec9-9760-4fee01e8527c 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6f:25:fa,bridge_name='br-int',has_traffic_filtering=True,id=4c8ce68f-8ad3-4266-aa7d-5ad833b39c1e,network=Network(c2ce21d9-e711-470f-89f6-0db58ded70b9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap4c8ce68f-8a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 10:12:43 compute-0 nova_compute[254819]: 2025-12-06 10:12:43.303 254824 DEBUG os_vif [None req-59ce2588-24bb-4ec9-9760-4fee01e8527c 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:6f:25:fa,bridge_name='br-int',has_traffic_filtering=True,id=4c8ce68f-8ad3-4266-aa7d-5ad833b39c1e,network=Network(c2ce21d9-e711-470f-89f6-0db58ded70b9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap4c8ce68f-8a') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 06 10:12:43 compute-0 systemd[1]: libpod-f09fd1a064540cd79e2116d5a73ad3a4f20c4a8b0a5a594555ff9906415e756c.scope: Deactivated successfully.
Dec 06 10:12:43 compute-0 nova_compute[254819]: 2025-12-06 10:12:43.306 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:12:43 compute-0 nova_compute[254819]: 2025-12-06 10:12:43.307 254824 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4c8ce68f-8a, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 10:12:43 compute-0 podman[273036]: 2025-12-06 10:12:43.310203239 +0000 UTC m=+0.578569814 container died f09fd1a064540cd79e2116d5a73ad3a4f20c4a8b0a5a594555ff9906415e756c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_jackson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec 06 10:12:43 compute-0 nova_compute[254819]: 2025-12-06 10:12:43.312 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:12:43 compute-0 nova_compute[254819]: 2025-12-06 10:12:43.316 254824 INFO os_vif [None req-59ce2588-24bb-4ec9-9760-4fee01e8527c 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:6f:25:fa,bridge_name='br-int',has_traffic_filtering=True,id=4c8ce68f-8ad3-4266-aa7d-5ad833b39c1e,network=Network(c2ce21d9-e711-470f-89f6-0db58ded70b9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap4c8ce68f-8a')
Dec 06 10:12:43 compute-0 neutron-haproxy-ovnmeta-c2ce21d9-e711-470f-89f6-0db58ded70b9[272908]: [NOTICE]   (272923) : haproxy version is 2.8.14-c23fe91
Dec 06 10:12:43 compute-0 neutron-haproxy-ovnmeta-c2ce21d9-e711-470f-89f6-0db58ded70b9[272908]: [NOTICE]   (272923) : path to executable is /usr/sbin/haproxy
Dec 06 10:12:43 compute-0 neutron-haproxy-ovnmeta-c2ce21d9-e711-470f-89f6-0db58ded70b9[272908]: [WARNING]  (272923) : Exiting Master process...
Dec 06 10:12:43 compute-0 neutron-haproxy-ovnmeta-c2ce21d9-e711-470f-89f6-0db58ded70b9[272908]: [WARNING]  (272923) : Exiting Master process...
Dec 06 10:12:43 compute-0 neutron-haproxy-ovnmeta-c2ce21d9-e711-470f-89f6-0db58ded70b9[272908]: [ALERT]    (272923) : Current worker (272942) exited with code 143 (Terminated)
Dec 06 10:12:43 compute-0 neutron-haproxy-ovnmeta-c2ce21d9-e711-470f-89f6-0db58ded70b9[272908]: [WARNING]  (272923) : All workers exited. Exiting... (0)
Dec 06 10:12:43 compute-0 systemd[1]: libpod-81cbcd116438af394cc8310bac2d1195c7e71d650f55c26ddd848762038f94d1.scope: Deactivated successfully.
Dec 06 10:12:43 compute-0 podman[273084]: 2025-12-06 10:12:43.334099158 +0000 UTC m=+0.066452378 container died 81cbcd116438af394cc8310bac2d1195c7e71d650f55c26ddd848762038f94d1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=neutron-haproxy-ovnmeta-c2ce21d9-e711-470f-89f6-0db58ded70b9, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125)
Dec 06 10:12:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-d25a8435fef6b8002b408b06eb4080034d79f4a96d40c43571ee11fdfe74ddfb-merged.mount: Deactivated successfully.
Dec 06 10:12:43 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-81cbcd116438af394cc8310bac2d1195c7e71d650f55c26ddd848762038f94d1-userdata-shm.mount: Deactivated successfully.
Dec 06 10:12:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-030cc584701fc2ae4a0e4246d98dd4c32466f55ec245b3b9cff9771d81d5672e-merged.mount: Deactivated successfully.
Dec 06 10:12:43 compute-0 podman[273036]: 2025-12-06 10:12:43.384634858 +0000 UTC m=+0.653001443 container remove f09fd1a064540cd79e2116d5a73ad3a4f20c4a8b0a5a594555ff9906415e756c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_jackson, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec 06 10:12:43 compute-0 podman[273084]: 2025-12-06 10:12:43.39069367 +0000 UTC m=+0.123046890 container cleanup 81cbcd116438af394cc8310bac2d1195c7e71d650f55c26ddd848762038f94d1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=neutron-haproxy-ovnmeta-c2ce21d9-e711-470f-89f6-0db58ded70b9, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Dec 06 10:12:43 compute-0 systemd[1]: libpod-conmon-81cbcd116438af394cc8310bac2d1195c7e71d650f55c26ddd848762038f94d1.scope: Deactivated successfully.
Dec 06 10:12:43 compute-0 systemd[1]: libpod-conmon-f09fd1a064540cd79e2116d5a73ad3a4f20c4a8b0a5a594555ff9906415e756c.scope: Deactivated successfully.
Dec 06 10:12:43 compute-0 sudo[272916]: pam_unix(sudo:session): session closed for user root
Dec 06 10:12:43 compute-0 podman[273162]: 2025-12-06 10:12:43.468569391 +0000 UTC m=+0.049019970 container remove 81cbcd116438af394cc8310bac2d1195c7e71d650f55c26ddd848762038f94d1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=neutron-haproxy-ovnmeta-c2ce21d9-e711-470f-89f6-0db58ded70b9, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec 06 10:12:43 compute-0 podman[273140]: 2025-12-06 10:12:43.470162814 +0000 UTC m=+0.094085206 container health_status ed08b04f63d3695f0aa2f04fcc706897af4aa24f286fa3cd10a4323ee2c868ab (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec 06 10:12:43 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:12:43.478 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[69ef7658-05b0-426f-83f3-55c42270fe56]: (4, ('Sat Dec  6 10:12:43 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-c2ce21d9-e711-470f-89f6-0db58ded70b9 (81cbcd116438af394cc8310bac2d1195c7e71d650f55c26ddd848762038f94d1)\n81cbcd116438af394cc8310bac2d1195c7e71d650f55c26ddd848762038f94d1\nSat Dec  6 10:12:43 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-c2ce21d9-e711-470f-89f6-0db58ded70b9 (81cbcd116438af394cc8310bac2d1195c7e71d650f55c26ddd848762038f94d1)\n81cbcd116438af394cc8310bac2d1195c7e71d650f55c26ddd848762038f94d1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:12:43 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:12:43.482 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[404700d2-cbee-4289-810b-4c95ceacc00f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:12:43 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:12:43.483 162267 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc2ce21d9-e0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 10:12:43 compute-0 sudo[273185]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 10:12:43 compute-0 sudo[273185]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:12:43 compute-0 sudo[273185]: pam_unix(sudo:session): session closed for user root
Dec 06 10:12:43 compute-0 nova_compute[254819]: 2025-12-06 10:12:43.532 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:12:43 compute-0 kernel: tapc2ce21d9-e0: left promiscuous mode
Dec 06 10:12:43 compute-0 nova_compute[254819]: 2025-12-06 10:12:43.535 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:12:43 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:12:43.538 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[e59d1786-18fe-410a-ad0a-e72af56c6d6d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:12:43 compute-0 nova_compute[254819]: 2025-12-06 10:12:43.553 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:12:43 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:12:43.558 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[021c0d26-4bf3-4257-a9ab-8401f52aebe0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:12:43 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:12:43.560 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[03934674-8366-4dfc-9f62-2ce704013e34]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:12:43 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:12:43.579 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[1fe5f821-f1fc-4339-991c-304ce3cb05a0]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 433144, 'reachable_time': 16542, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 273234, 'error': None, 'target': 'ovnmeta-c2ce21d9-e711-470f-89f6-0db58ded70b9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:12:43 compute-0 systemd[1]: run-netns-ovnmeta\x2dc2ce21d9\x2de711\x2d470f\x2d89f6\x2d0db58ded70b9.mount: Deactivated successfully.
Dec 06 10:12:43 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:12:43.585 162385 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-c2ce21d9-e711-470f-89f6-0db58ded70b9 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 06 10:12:43 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:12:43.585 162385 DEBUG oslo.privsep.daemon [-] privsep: reply[fd1a13a6-3f24-43e3-8db1-ee3a6653d029]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:12:43 compute-0 sudo[273214]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 5ecd3f74-dade-5fc4-92ce-8950ae424258 -- raw list --format json
Dec 06 10:12:43 compute-0 sudo[273214]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:12:43 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:12:43 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 10:12:43 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:12:43.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 10:12:43 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:12:43 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:12:43 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:12:43.719 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:12:43 compute-0 nova_compute[254819]: 2025-12-06 10:12:43.734 254824 INFO nova.virt.libvirt.driver [None req-59ce2588-24bb-4ec9-9760-4fee01e8527c 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 588b3b1f-9845-438c-89c4-744f95204b42] Deleting instance files /var/lib/nova/instances/588b3b1f-9845-438c-89c4-744f95204b42_del
Dec 06 10:12:43 compute-0 nova_compute[254819]: 2025-12-06 10:12:43.735 254824 INFO nova.virt.libvirt.driver [None req-59ce2588-24bb-4ec9-9760-4fee01e8527c 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 588b3b1f-9845-438c-89c4-744f95204b42] Deletion of /var/lib/nova/instances/588b3b1f-9845-438c-89c4-744f95204b42_del complete
Dec 06 10:12:43 compute-0 nova_compute[254819]: 2025-12-06 10:12:43.748 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:12:43 compute-0 nova_compute[254819]: 2025-12-06 10:12:43.880 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:12:43 compute-0 nova_compute[254819]: 2025-12-06 10:12:43.881 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:12:43 compute-0 nova_compute[254819]: 2025-12-06 10:12:43.881 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:12:43 compute-0 nova_compute[254819]: 2025-12-06 10:12:43.881 254824 DEBUG nova.compute.resource_tracker [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 10:12:43 compute-0 nova_compute[254819]: 2025-12-06 10:12:43.881 254824 DEBUG oslo_concurrency.processutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 10:12:43 compute-0 nova_compute[254819]: 2025-12-06 10:12:43.906 254824 INFO nova.compute.manager [None req-59ce2588-24bb-4ec9-9760-4fee01e8527c 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 588b3b1f-9845-438c-89c4-744f95204b42] Took 0.86 seconds to destroy the instance on the hypervisor.
Dec 06 10:12:43 compute-0 nova_compute[254819]: 2025-12-06 10:12:43.907 254824 DEBUG oslo.service.loopingcall [None req-59ce2588-24bb-4ec9-9760-4fee01e8527c 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 06 10:12:43 compute-0 nova_compute[254819]: 2025-12-06 10:12:43.907 254824 DEBUG nova.compute.manager [-] [instance: 588b3b1f-9845-438c-89c4-744f95204b42] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 06 10:12:43 compute-0 nova_compute[254819]: 2025-12-06 10:12:43.908 254824 DEBUG nova.network.neutron [-] [instance: 588b3b1f-9845-438c-89c4-744f95204b42] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 06 10:12:44 compute-0 podman[273305]: 2025-12-06 10:12:44.06754037 +0000 UTC m=+0.045973289 container create 26a9d158b036a450a66b9cda3da457015942b692fd9f0a2ff517899018fff6af (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_haslett, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 10:12:44 compute-0 systemd[1]: Started libpod-conmon-26a9d158b036a450a66b9cda3da457015942b692fd9f0a2ff517899018fff6af.scope.
Dec 06 10:12:44 compute-0 podman[273305]: 2025-12-06 10:12:44.047159345 +0000 UTC m=+0.025592294 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 06 10:12:44 compute-0 systemd[1]: Started libcrun container.
Dec 06 10:12:44 compute-0 podman[273305]: 2025-12-06 10:12:44.177892639 +0000 UTC m=+0.156325708 container init 26a9d158b036a450a66b9cda3da457015942b692fd9f0a2ff517899018fff6af (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_haslett, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Dec 06 10:12:44 compute-0 podman[273305]: 2025-12-06 10:12:44.186580542 +0000 UTC m=+0.165013441 container start 26a9d158b036a450a66b9cda3da457015942b692fd9f0a2ff517899018fff6af (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_haslett, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 10:12:44 compute-0 podman[273305]: 2025-12-06 10:12:44.190064305 +0000 UTC m=+0.168497254 container attach 26a9d158b036a450a66b9cda3da457015942b692fd9f0a2ff517899018fff6af (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_haslett, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Dec 06 10:12:44 compute-0 serene_haslett[273324]: 167 167
Dec 06 10:12:44 compute-0 systemd[1]: libpod-26a9d158b036a450a66b9cda3da457015942b692fd9f0a2ff517899018fff6af.scope: Deactivated successfully.
Dec 06 10:12:44 compute-0 podman[273305]: 2025-12-06 10:12:44.192861549 +0000 UTC m=+0.171294448 container died 26a9d158b036a450a66b9cda3da457015942b692fd9f0a2ff517899018fff6af (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_haslett, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Dec 06 10:12:44 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v949: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 103 op/s
Dec 06 10:12:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-13cbdb1bc3f525d1decc1c450ce49b569783fdc5ec360b025c321d6bdab447e4-merged.mount: Deactivated successfully.
Dec 06 10:12:44 compute-0 podman[273305]: 2025-12-06 10:12:44.239973929 +0000 UTC m=+0.218406838 container remove 26a9d158b036a450a66b9cda3da457015942b692fd9f0a2ff517899018fff6af (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_haslett, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Dec 06 10:12:44 compute-0 systemd[1]: libpod-conmon-26a9d158b036a450a66b9cda3da457015942b692fd9f0a2ff517899018fff6af.scope: Deactivated successfully.
Dec 06 10:12:44 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 06 10:12:44 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3197267841' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 10:12:44 compute-0 nova_compute[254819]: 2025-12-06 10:12:44.353 254824 DEBUG oslo_concurrency.processutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 10:12:44 compute-0 podman[273348]: 2025-12-06 10:12:44.415552121 +0000 UTC m=+0.051433976 container create 3f71d1d9efe7262851c2c0d2adc7347a1ee578424b451e508fd309396bf05f07 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_diffie, CEPH_REF=squid, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Dec 06 10:12:44 compute-0 systemd[1]: Started libpod-conmon-3f71d1d9efe7262851c2c0d2adc7347a1ee578424b451e508fd309396bf05f07.scope.
Dec 06 10:12:44 compute-0 systemd[1]: Started libcrun container.
Dec 06 10:12:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f10f08cd45621bb998d32e6e4cae0fa6d1c886bdd2f9aaa4cad450e4811a2740/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 10:12:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f10f08cd45621bb998d32e6e4cae0fa6d1c886bdd2f9aaa4cad450e4811a2740/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 10:12:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f10f08cd45621bb998d32e6e4cae0fa6d1c886bdd2f9aaa4cad450e4811a2740/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 10:12:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f10f08cd45621bb998d32e6e4cae0fa6d1c886bdd2f9aaa4cad450e4811a2740/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 10:12:44 compute-0 podman[273348]: 2025-12-06 10:12:44.395609908 +0000 UTC m=+0.031491793 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 06 10:12:44 compute-0 podman[273348]: 2025-12-06 10:12:44.49819232 +0000 UTC m=+0.134074165 container init 3f71d1d9efe7262851c2c0d2adc7347a1ee578424b451e508fd309396bf05f07 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_diffie, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 10:12:44 compute-0 podman[273348]: 2025-12-06 10:12:44.512625235 +0000 UTC m=+0.148507130 container start 3f71d1d9efe7262851c2c0d2adc7347a1ee578424b451e508fd309396bf05f07 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_diffie, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 06 10:12:44 compute-0 podman[273348]: 2025-12-06 10:12:44.516622382 +0000 UTC m=+0.152504237 container attach 3f71d1d9efe7262851c2c0d2adc7347a1ee578424b451e508fd309396bf05f07 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_diffie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325)
Dec 06 10:12:44 compute-0 nova_compute[254819]: 2025-12-06 10:12:44.529 254824 WARNING nova.virt.libvirt.driver [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 10:12:44 compute-0 nova_compute[254819]: 2025-12-06 10:12:44.532 254824 DEBUG nova.compute.resource_tracker [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4447MB free_disk=59.96738052368164GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 06 10:12:44 compute-0 nova_compute[254819]: 2025-12-06 10:12:44.532 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:12:44 compute-0 nova_compute[254819]: 2025-12-06 10:12:44.533 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:12:44 compute-0 nova_compute[254819]: 2025-12-06 10:12:44.615 254824 DEBUG nova.compute.resource_tracker [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Instance 588b3b1f-9845-438c-89c4-744f95204b42 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 06 10:12:44 compute-0 nova_compute[254819]: 2025-12-06 10:12:44.616 254824 DEBUG nova.compute.resource_tracker [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 10:12:44 compute-0 nova_compute[254819]: 2025-12-06 10:12:44.616 254824 DEBUG nova.compute.resource_tracker [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 06 10:12:44 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:12:44 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d4004290 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:12:44 compute-0 nova_compute[254819]: 2025-12-06 10:12:44.731 254824 DEBUG oslo_concurrency.processutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 10:12:44 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:12:44 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c00a880 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:12:44 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:12:44 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:12:44.897 162267 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=10, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '9a:dc:0d', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'b6:0a:c4:b8:be:39'}, ipsec=False) old=SB_Global(nb_cfg=9) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 10:12:44 compute-0 nova_compute[254819]: 2025-12-06 10:12:44.898 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:12:44 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:12:44.900 162267 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 06 10:12:45 compute-0 nova_compute[254819]: 2025-12-06 10:12:45.014 254824 DEBUG nova.network.neutron [-] [instance: 588b3b1f-9845-438c-89c4-744f95204b42] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 10:12:45 compute-0 nova_compute[254819]: 2025-12-06 10:12:45.075 254824 INFO nova.compute.manager [-] [instance: 588b3b1f-9845-438c-89c4-744f95204b42] Took 1.17 seconds to deallocate network for instance.
Dec 06 10:12:45 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:12:45 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65e8001f20 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:12:45 compute-0 nova_compute[254819]: 2025-12-06 10:12:45.123 254824 DEBUG oslo_concurrency.lockutils [None req-59ce2588-24bb-4ec9-9760-4fee01e8527c 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:12:45 compute-0 lvm[273460]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 06 10:12:45 compute-0 lvm[273460]: VG ceph_vg0 finished
Dec 06 10:12:45 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 06 10:12:45 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2219237189' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 10:12:45 compute-0 nova_compute[254819]: 2025-12-06 10:12:45.250 254824 DEBUG oslo_concurrency.processutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.519s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 10:12:45 compute-0 nova_compute[254819]: 2025-12-06 10:12:45.257 254824 DEBUG nova.compute.provider_tree [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Inventory has not changed in ProviderTree for provider: 06a9c7d1-c74c-47ea-9e97-16acfab6aa88 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 10:12:45 compute-0 ceph-mon[74327]: pgmap v949: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 103 op/s
Dec 06 10:12:45 compute-0 ceph-mon[74327]: from='client.? 192.168.122.100:0/3197267841' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 10:12:45 compute-0 ceph-mon[74327]: from='client.? 192.168.122.100:0/2219237189' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 10:12:45 compute-0 sweet_diffie[273366]: {}
Dec 06 10:12:45 compute-0 systemd[1]: libpod-3f71d1d9efe7262851c2c0d2adc7347a1ee578424b451e508fd309396bf05f07.scope: Deactivated successfully.
Dec 06 10:12:45 compute-0 systemd[1]: libpod-3f71d1d9efe7262851c2c0d2adc7347a1ee578424b451e508fd309396bf05f07.scope: Consumed 1.315s CPU time.
Dec 06 10:12:45 compute-0 nova_compute[254819]: 2025-12-06 10:12:45.408 254824 DEBUG nova.scheduler.client.report [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Inventory has not changed for provider 06a9c7d1-c74c-47ea-9e97-16acfab6aa88 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 10:12:45 compute-0 podman[273466]: 2025-12-06 10:12:45.411134059 +0000 UTC m=+0.031733589 container died 3f71d1d9efe7262851c2c0d2adc7347a1ee578424b451e508fd309396bf05f07 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_diffie, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0)
Dec 06 10:12:45 compute-0 nova_compute[254819]: 2025-12-06 10:12:45.434 254824 DEBUG nova.compute.resource_tracker [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 10:12:45 compute-0 nova_compute[254819]: 2025-12-06 10:12:45.434 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.902s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:12:45 compute-0 nova_compute[254819]: 2025-12-06 10:12:45.435 254824 DEBUG oslo_concurrency.lockutils [None req-59ce2588-24bb-4ec9-9760-4fee01e8527c 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.312s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:12:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-f10f08cd45621bb998d32e6e4cae0fa6d1c886bdd2f9aaa4cad450e4811a2740-merged.mount: Deactivated successfully.
Dec 06 10:12:45 compute-0 nova_compute[254819]: 2025-12-06 10:12:45.443 254824 DEBUG nova.compute.manager [req-cf4aaa5e-4e94-4ad1-96df-6c40075135e6 req-9febff06-4790-4dff-8bb6-729e82a6cbe8 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 588b3b1f-9845-438c-89c4-744f95204b42] Received event network-vif-unplugged-4c8ce68f-8ad3-4266-aa7d-5ad833b39c1e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 10:12:45 compute-0 nova_compute[254819]: 2025-12-06 10:12:45.443 254824 DEBUG oslo_concurrency.lockutils [req-cf4aaa5e-4e94-4ad1-96df-6c40075135e6 req-9febff06-4790-4dff-8bb6-729e82a6cbe8 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Acquiring lock "588b3b1f-9845-438c-89c4-744f95204b42-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:12:45 compute-0 nova_compute[254819]: 2025-12-06 10:12:45.444 254824 DEBUG oslo_concurrency.lockutils [req-cf4aaa5e-4e94-4ad1-96df-6c40075135e6 req-9febff06-4790-4dff-8bb6-729e82a6cbe8 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Lock "588b3b1f-9845-438c-89c4-744f95204b42-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:12:45 compute-0 nova_compute[254819]: 2025-12-06 10:12:45.444 254824 DEBUG oslo_concurrency.lockutils [req-cf4aaa5e-4e94-4ad1-96df-6c40075135e6 req-9febff06-4790-4dff-8bb6-729e82a6cbe8 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Lock "588b3b1f-9845-438c-89c4-744f95204b42-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:12:45 compute-0 nova_compute[254819]: 2025-12-06 10:12:45.444 254824 DEBUG nova.compute.manager [req-cf4aaa5e-4e94-4ad1-96df-6c40075135e6 req-9febff06-4790-4dff-8bb6-729e82a6cbe8 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 588b3b1f-9845-438c-89c4-744f95204b42] No waiting events found dispatching network-vif-unplugged-4c8ce68f-8ad3-4266-aa7d-5ad833b39c1e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 10:12:45 compute-0 nova_compute[254819]: 2025-12-06 10:12:45.444 254824 WARNING nova.compute.manager [req-cf4aaa5e-4e94-4ad1-96df-6c40075135e6 req-9febff06-4790-4dff-8bb6-729e82a6cbe8 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 588b3b1f-9845-438c-89c4-744f95204b42] Received unexpected event network-vif-unplugged-4c8ce68f-8ad3-4266-aa7d-5ad833b39c1e for instance with vm_state deleted and task_state None.
Dec 06 10:12:45 compute-0 nova_compute[254819]: 2025-12-06 10:12:45.445 254824 DEBUG nova.compute.manager [req-cf4aaa5e-4e94-4ad1-96df-6c40075135e6 req-9febff06-4790-4dff-8bb6-729e82a6cbe8 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 588b3b1f-9845-438c-89c4-744f95204b42] Received event network-vif-plugged-4c8ce68f-8ad3-4266-aa7d-5ad833b39c1e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 10:12:45 compute-0 nova_compute[254819]: 2025-12-06 10:12:45.445 254824 DEBUG oslo_concurrency.lockutils [req-cf4aaa5e-4e94-4ad1-96df-6c40075135e6 req-9febff06-4790-4dff-8bb6-729e82a6cbe8 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Acquiring lock "588b3b1f-9845-438c-89c4-744f95204b42-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:12:45 compute-0 nova_compute[254819]: 2025-12-06 10:12:45.445 254824 DEBUG oslo_concurrency.lockutils [req-cf4aaa5e-4e94-4ad1-96df-6c40075135e6 req-9febff06-4790-4dff-8bb6-729e82a6cbe8 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Lock "588b3b1f-9845-438c-89c4-744f95204b42-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:12:45 compute-0 nova_compute[254819]: 2025-12-06 10:12:45.445 254824 DEBUG oslo_concurrency.lockutils [req-cf4aaa5e-4e94-4ad1-96df-6c40075135e6 req-9febff06-4790-4dff-8bb6-729e82a6cbe8 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Lock "588b3b1f-9845-438c-89c4-744f95204b42-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:12:45 compute-0 nova_compute[254819]: 2025-12-06 10:12:45.445 254824 DEBUG nova.compute.manager [req-cf4aaa5e-4e94-4ad1-96df-6c40075135e6 req-9febff06-4790-4dff-8bb6-729e82a6cbe8 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 588b3b1f-9845-438c-89c4-744f95204b42] No waiting events found dispatching network-vif-plugged-4c8ce68f-8ad3-4266-aa7d-5ad833b39c1e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 10:12:45 compute-0 nova_compute[254819]: 2025-12-06 10:12:45.446 254824 WARNING nova.compute.manager [req-cf4aaa5e-4e94-4ad1-96df-6c40075135e6 req-9febff06-4790-4dff-8bb6-729e82a6cbe8 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 588b3b1f-9845-438c-89c4-744f95204b42] Received unexpected event network-vif-plugged-4c8ce68f-8ad3-4266-aa7d-5ad833b39c1e for instance with vm_state deleted and task_state None.
Dec 06 10:12:45 compute-0 podman[273466]: 2025-12-06 10:12:45.46688853 +0000 UTC m=+0.087488060 container remove 3f71d1d9efe7262851c2c0d2adc7347a1ee578424b451e508fd309396bf05f07 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_diffie, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 10:12:45 compute-0 systemd[1]: libpod-conmon-3f71d1d9efe7262851c2c0d2adc7347a1ee578424b451e508fd309396bf05f07.scope: Deactivated successfully.
Dec 06 10:12:45 compute-0 nova_compute[254819]: 2025-12-06 10:12:45.494 254824 DEBUG oslo_concurrency.processutils [None req-59ce2588-24bb-4ec9-9760-4fee01e8527c 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 10:12:45 compute-0 sudo[273214]: pam_unix(sudo:session): session closed for user root
Dec 06 10:12:45 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 06 10:12:45 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 10:12:45 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 06 10:12:45 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 10:12:45 compute-0 sudo[273483]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 10:12:45 compute-0 sudo[273483]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:12:45 compute-0 sudo[273483]: pam_unix(sudo:session): session closed for user root
Dec 06 10:12:45 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:12:45 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:12:45 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:12:45.695 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:12:45 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:12:45 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:12:45 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:12:45.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:12:45 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 06 10:12:45 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2784791036' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 10:12:45 compute-0 nova_compute[254819]: 2025-12-06 10:12:45.988 254824 DEBUG oslo_concurrency.processutils [None req-59ce2588-24bb-4ec9-9760-4fee01e8527c 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.494s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 10:12:45 compute-0 nova_compute[254819]: 2025-12-06 10:12:45.997 254824 DEBUG nova.compute.provider_tree [None req-59ce2588-24bb-4ec9-9760-4fee01e8527c 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Inventory has not changed in ProviderTree for provider: 06a9c7d1-c74c-47ea-9e97-16acfab6aa88 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 10:12:46 compute-0 nova_compute[254819]: 2025-12-06 10:12:46.020 254824 DEBUG nova.scheduler.client.report [None req-59ce2588-24bb-4ec9-9760-4fee01e8527c 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Inventory has not changed for provider 06a9c7d1-c74c-47ea-9e97-16acfab6aa88 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 10:12:46 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 06 10:12:46 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3659217674' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 10:12:46 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 06 10:12:46 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3659217674' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 10:12:46 compute-0 nova_compute[254819]: 2025-12-06 10:12:46.044 254824 DEBUG oslo_concurrency.lockutils [None req-59ce2588-24bb-4ec9-9760-4fee01e8527c 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.609s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:12:46 compute-0 nova_compute[254819]: 2025-12-06 10:12:46.087 254824 INFO nova.scheduler.client.report [None req-59ce2588-24bb-4ec9-9760-4fee01e8527c 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Deleted allocations for instance 588b3b1f-9845-438c-89c4-744f95204b42
Dec 06 10:12:46 compute-0 nova_compute[254819]: 2025-12-06 10:12:46.151 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:12:46 compute-0 nova_compute[254819]: 2025-12-06 10:12:46.193 254824 DEBUG oslo_concurrency.lockutils [None req-59ce2588-24bb-4ec9-9760-4fee01e8527c 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lock "588b3b1f-9845-438c-89c4-744f95204b42" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.150s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:12:46 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v950: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 103 op/s
Dec 06 10:12:46 compute-0 nova_compute[254819]: 2025-12-06 10:12:46.435 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:12:46 compute-0 nova_compute[254819]: 2025-12-06 10:12:46.436 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:12:46 compute-0 nova_compute[254819]: 2025-12-06 10:12:46.436 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:12:46 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 10:12:46 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 10:12:46 compute-0 ceph-mon[74327]: from='client.? 192.168.122.101:0/2886513219' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 10:12:46 compute-0 ceph-mon[74327]: from='client.? 192.168.122.100:0/2784791036' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 10:12:46 compute-0 ceph-mon[74327]: from='client.? 192.168.122.10:0/3659217674' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 10:12:46 compute-0 ceph-mon[74327]: from='client.? 192.168.122.10:0/3659217674' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 10:12:46 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:12:46 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65e8001f20 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:12:46 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:12:46 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d4004290 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:12:46 compute-0 nova_compute[254819]: 2025-12-06 10:12:46.748 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:12:46 compute-0 nova_compute[254819]: 2025-12-06 10:12:46.748 254824 DEBUG nova.compute.manager [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 06 10:12:46 compute-0 nova_compute[254819]: 2025-12-06 10:12:46.749 254824 DEBUG nova.compute.manager [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 06 10:12:46 compute-0 nova_compute[254819]: 2025-12-06 10:12:46.762 254824 DEBUG nova.compute.manager [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 06 10:12:46 compute-0 nova_compute[254819]: 2025-12-06 10:12:46.763 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:12:46 compute-0 nova_compute[254819]: 2025-12-06 10:12:46.763 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:12:47 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:12:47 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c00a8a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:12:47 compute-0 ceph-mon[74327]: pgmap v950: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 103 op/s
Dec 06 10:12:47 compute-0 ceph-mon[74327]: from='client.? 192.168.122.101:0/1462413702' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 10:12:47 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:12:47.645Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 06 10:12:47 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:12:47 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 10:12:47 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:12:47.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 10:12:47 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:12:47 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:12:47 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:12:47.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:12:47 compute-0 nova_compute[254819]: 2025-12-06 10:12:47.756 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:12:48 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v951: 337 pgs: 337 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 130 op/s
Dec 06 10:12:48 compute-0 nova_compute[254819]: 2025-12-06 10:12:48.356 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:12:48 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:12:48 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65e8001f20 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:12:48 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:12:48 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65e8001f20 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:12:48 compute-0 nova_compute[254819]: 2025-12-06 10:12:48.765 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:12:49 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:12:49.022Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 06 10:12:49 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:12:49 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d4004290 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:12:49 compute-0 podman[273533]: 2025-12-06 10:12:49.49736924 +0000 UTC m=+0.107945786 container health_status ec1e66d2175087ad7a28a9a6b5a784e30128df3f5e268959d009e364cd5120f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Dec 06 10:12:49 compute-0 ceph-mon[74327]: pgmap v951: 337 pgs: 337 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 130 op/s
Dec 06 10:12:49 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:12:49 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:12:49 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:12:49.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:12:49 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:12:49 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:12:49 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:12:49.729 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
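The radosgw beast lines are the gateway's access log; the pair above are the anonymous "HEAD /" health checks arriving every two seconds from 192.168.122.100 and .102. A regex derived from the samples in this journal (not from radosgw documentation) pulls out the useful fields:

    import re

    # Field layout inferred from the beast lines in this log.
    BEAST = re.compile(
        r'beast: 0x[0-9a-f]+: (?P<ip>\S+) - (?P<user>\S+) '
        r'\[(?P<ts>[^\]]+)\] "(?P<request>[^"]+)" (?P<status>\d+) (?P<bytes>\d+) '
        r'.*latency=(?P<latency>[0-9.]+)s'
    )

    line = ('beast: 0x7f53e66225d0: 192.168.122.102 - anonymous '
            '[06/Dec/2025:10:12:49.729 +0000] "HEAD / HTTP/1.0" 200 0 - - - '
            'latency=0.001000027s')

    m = BEAST.match(line)
    print(m.group("ip"), m.group("status"), m.group("latency"))
    # -> 192.168.122.102 200 0.001000027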
Dec 06 10:12:49 compute-0 nova_compute[254819]: 2025-12-06 10:12:49.749 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:12:49 compute-0 nova_compute[254819]: 2025-12-06 10:12:49.749 254824 DEBUG nova.compute.manager [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 06 10:12:49 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:12:49 compute-0 ceph-mon[74327]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #60. Immutable memtables: 0.
Dec 06 10:12:49 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:12:49.819302) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 06 10:12:49 compute-0 ceph-mon[74327]: rocksdb: [db/flush_job.cc:856] [default] [JOB 31] Flushing memtable with next log file: 60
Dec 06 10:12:49 compute-0 ceph-mon[74327]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765015969819449, "job": 31, "event": "flush_started", "num_memtables": 1, "num_entries": 1288, "num_deletes": 255, "total_data_size": 2258539, "memory_usage": 2305904, "flush_reason": "Manual Compaction"}
Dec 06 10:12:49 compute-0 ceph-mon[74327]: rocksdb: [db/flush_job.cc:885] [default] [JOB 31] Level-0 flush table #61: started
Dec 06 10:12:49 compute-0 ceph-mon[74327]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765015969841310, "cf_name": "default", "job": 31, "event": "table_file_creation", "file_number": 61, "file_size": 2208581, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 26987, "largest_seqno": 28273, "table_properties": {"data_size": 2202622, "index_size": 3222, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1669, "raw_key_size": 12816, "raw_average_key_size": 19, "raw_value_size": 2190437, "raw_average_value_size": 3308, "num_data_blocks": 142, "num_entries": 662, "num_filter_entries": 662, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765015855, "oldest_key_time": 1765015855, "file_creation_time": 1765015969, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "423e8366-3852-4d2b-aa53-87abab31aff3", "db_session_id": "4WBX5WA2U4DRQ0QUUFCR", "orig_file_number": 61, "seqno_to_time_mapping": "N/A"}}
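Each EVENT_LOG_v1 record carries a JSON object after the tag, so the flush and compaction telemetry interleaved here can be recovered mechanically. A sketch against the flush_started event above:

    import json

    line = ('rocksdb: EVENT_LOG_v1 {"time_micros": 1765015969819449, "job": 31, '
            '"event": "flush_started", "num_memtables": 1, "num_entries": 1288, '
            '"num_deletes": 255, "total_data_size": 2258539}')

    # Everything after the EVENT_LOG_v1 tag is plain JSON.
    payload = json.loads(line.split("EVENT_LOG_v1 ", 1)[1])
    print(payload["event"], payload["job"], payload["total_data_size"])
    # -> flush_started 31 2258539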
Dec 06 10:12:49 compute-0 ceph-mon[74327]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 31] Flush lasted 22276 microseconds, and 7575 cpu microseconds.
Dec 06 10:12:49 compute-0 ceph-mon[74327]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 10:12:49 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:12:49.841378) [db/flush_job.cc:967] [default] [JOB 31] Level-0 flush table #61: 2208581 bytes OK
Dec 06 10:12:49 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:12:49.841648) [db/memtable_list.cc:519] [default] Level-0 commit table #61 started
Dec 06 10:12:49 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:12:49.845148) [db/memtable_list.cc:722] [default] Level-0 commit table #61: memtable #1 done
Dec 06 10:12:49 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:12:49.845168) EVENT_LOG_v1 {"time_micros": 1765015969845162, "job": 31, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 06 10:12:49 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:12:49.845195) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 06 10:12:49 compute-0 ceph-mon[74327]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 31] Try to delete WAL files size 2252820, prev total WAL file size 2252820, number of live WAL files 2.
Dec 06 10:12:49 compute-0 ceph-mon[74327]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000057.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 10:12:49 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:12:49.846181) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D00353033' seq:72057594037927935, type:22 .. '6C6F676D00373534' seq:0, type:0; will stop at (end)
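The range endpoints in that manual-compaction line are hex-encoded RocksDB keys; decoding them shows the monitor is compacting its "logm" (cluster log message) key range:

    # Hex key endpoints copied from the compaction line above.
    print(bytes.fromhex("6C6F676D00353033"))  # -> b'logm\x00503'
    print(bytes.fromhex("6C6F676D00373534"))  # -> b'logm\x00754'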
Dec 06 10:12:49 compute-0 ceph-mon[74327]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 32] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 06 10:12:49 compute-0 ceph-mon[74327]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 31 Base level 0, inputs: [61(2156KB)], [59(14MB)]
Dec 06 10:12:49 compute-0 ceph-mon[74327]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765015969846293, "job": 32, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [61], "files_L6": [59], "score": -1, "input_data_size": 17656653, "oldest_snapshot_seqno": -1}
Dec 06 10:12:49 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:12:49.903 162267 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=d39b5be8-d4cf-41c7-9a64-1ee03801f4e1, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '10'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 10:12:49 compute-0 ceph-mon[74327]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 32] Generated table #62: 6027 keys, 17524888 bytes, temperature: kUnknown
Dec 06 10:12:49 compute-0 ceph-mon[74327]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765015969987730, "cf_name": "default", "job": 32, "event": "table_file_creation", "file_number": 62, "file_size": 17524888, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 17480988, "index_size": 27726, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15109, "raw_key_size": 153672, "raw_average_key_size": 25, "raw_value_size": 17368468, "raw_average_value_size": 2881, "num_data_blocks": 1135, "num_entries": 6027, "num_filter_entries": 6027, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765013861, "oldest_key_time": 0, "file_creation_time": 1765015969, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "423e8366-3852-4d2b-aa53-87abab31aff3", "db_session_id": "4WBX5WA2U4DRQ0QUUFCR", "orig_file_number": 62, "seqno_to_time_mapping": "N/A"}}
Dec 06 10:12:49 compute-0 ceph-mon[74327]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 10:12:49 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:12:49.988014) [db/compaction/compaction_job.cc:1663] [default] [JOB 32] Compacted 1@0 + 1@6 files to L6 => 17524888 bytes
Dec 06 10:12:49 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:12:49.989708) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 124.8 rd, 123.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.1, 14.7 +0.0 blob) out(16.7 +0.0 blob), read-write-amplify(15.9) write-amplify(7.9) OK, records in: 6553, records dropped: 526 output_compression: NoCompression
Dec 06 10:12:49 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:12:49.989724) EVENT_LOG_v1 {"time_micros": 1765015969989716, "job": 32, "event": "compaction_finished", "compaction_time_micros": 141520, "compaction_time_cpu_micros": 50701, "output_level": 6, "num_output_files": 1, "total_output_size": 17524888, "num_input_records": 6553, "num_output_records": 6027, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
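The job 32 summary is internally consistent and can be checked by hand: write-amplification is output bytes over the L0 input, read-write-amplification counts every byte moved, and the read rate is the input size over the compaction wall time. The logged figures use exact byte counts, so the rounded megabytes reproduce them only approximately:

    # Figures from the job 32 summary and EVENT_LOG records above.
    l0_in_mb, l6_in_mb, out_mb = 2.1, 14.7, 16.7
    input_bytes = 17656653        # compaction_started: input_data_size
    wall_s = 141520 / 1e6         # compaction_finished: compaction_time_micros

    print(f"write-amplify      ~ {out_mb / l0_in_mb:.1f}")                        # logged 7.9
    print(f"read-write-amplify ~ {(l0_in_mb + l6_in_mb + out_mb) / l0_in_mb:.1f}")  # logged 15.9
    print(f"read rate          ~ {input_bytes / wall_s / 1e6:.1f} MB/s")          # logged 124.8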
Dec 06 10:12:49 compute-0 ceph-mon[74327]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000061.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 10:12:49 compute-0 ceph-mon[74327]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765015969990158, "job": 32, "event": "table_file_deletion", "file_number": 61}
Dec 06 10:12:49 compute-0 ceph-mon[74327]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000059.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 10:12:49 compute-0 ceph-mon[74327]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765015969992470, "job": 32, "event": "table_file_deletion", "file_number": 59}
Dec 06 10:12:49 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:12:49.846006) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 10:12:49 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:12:49.992560) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 10:12:49 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:12:49.992566) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 10:12:49 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:12:49.992568) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 10:12:49 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:12:49.992570) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 10:12:49 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:12:49.992572) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 10:12:50 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v952: 337 pgs: 337 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 14 KiB/s wr, 102 op/s
Dec 06 10:12:50 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:12:50 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c00a8c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:12:50 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:12:50 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c00a8c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:12:50 compute-0 ceph-mon[74327]: from='client.? 192.168.122.102:0/4068013306' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 10:12:50 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:12:50] "GET /metrics HTTP/1.1" 200 48481 "" "Prometheus/2.51.0"
Dec 06 10:12:50 compute-0 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:12:50] "GET /metrics HTTP/1.1" 200 48481 "" "Prometheus/2.51.0"
Dec 06 10:12:51 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:12:51 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c00a8c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:12:51 compute-0 nova_compute[254819]: 2025-12-06 10:12:51.153 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:12:51 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:12:51 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:12:51 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:12:51.701 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:12:51 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:12:51 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:12:51 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:12:51.732 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:12:51 compute-0 ceph-mon[74327]: pgmap v952: 337 pgs: 337 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 14 KiB/s wr, 102 op/s
Dec 06 10:12:51 compute-0 ceph-mon[74327]: from='client.? 192.168.122.102:0/219018723' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 10:12:52 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v953: 337 pgs: 337 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 100 op/s
Dec 06 10:12:52 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:12:52 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c00a8c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:12:52 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:12:52 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f4000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:12:53 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:12:53 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65e4000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:12:53 compute-0 nova_compute[254819]: 2025-12-06 10:12:53.359 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:12:53 compute-0 sudo[273559]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 10:12:53 compute-0 sudo[273559]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:12:53 compute-0 sudo[273559]: pam_unix(sudo:session): session closed for user root
Dec 06 10:12:53 compute-0 nova_compute[254819]: 2025-12-06 10:12:53.599 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:12:53 compute-0 nova_compute[254819]: 2025-12-06 10:12:53.681 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:12:53 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:12:53 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 10:12:53 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:12:53.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 10:12:53 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:12:53 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:12:53 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:12:53.735 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:12:53 compute-0 ceph-mon[74327]: pgmap v953: 337 pgs: 337 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 100 op/s
Dec 06 10:12:53 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 06 10:12:53 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
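The audit entries show the mgr polling the monitor with "osd blocklist ls" roughly every 15 seconds (it recurs at 10:13:08 below). The equivalent CLI call, parsed as JSON; this sketch assumes a reachable cluster and a usable keyring on the host:

    import json
    import subprocess

    # Same mon command as in the audit log:
    # {"prefix": "osd blocklist ls", "format": "json"}
    out = subprocess.run(
        ["ceph", "osd", "blocklist", "ls", "--format", "json"],
        capture_output=True, text=True, check=True,
    ).stdout
    print(json.loads(out))  # [] when no clients are blocklisted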
Dec 06 10:12:54 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 10:12:54 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 10:12:54 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 10:12:54 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 10:12:54 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 10:12:54 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 10:12:54 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v954: 337 pgs: 337 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 100 op/s
Dec 06 10:12:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:12:54.243 162267 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:12:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:12:54.244 162267 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:12:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:12:54.244 162267 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
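The acquire/wait/hold bookkeeping in those three lines is oslo.concurrency's lockutils logging at DEBUG. A minimal sketch of the same pattern (the function body is illustrative, not neutron's):

    from oslo_concurrency import lockutils

    @lockutils.synchronized("_check_child_processes")
    def check_child_processes():
        # Critical section; lockutils logs acquire, wait and held times
        # at DEBUG, exactly as in the ovn_metadata_agent lines above.
        pass

    check_child_processes()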
Dec 06 10:12:54 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:12:54 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c00a8c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:12:54 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:12:54 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c00a8c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:12:54 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:12:54 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 10:12:55 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:12:55 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f4001c40 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:12:55 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:12:55 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:12:55 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:12:55.705 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:12:55 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:12:55 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:12:55 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:12:55.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:12:55 compute-0 ceph-mon[74327]: pgmap v954: 337 pgs: 337 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 100 op/s
Dec 06 10:12:56 compute-0 nova_compute[254819]: 2025-12-06 10:12:56.156 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:12:56 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v955: 337 pgs: 337 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 26 op/s
Dec 06 10:12:56 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:12:56 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65e40016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:12:56 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:12:56 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65e40016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:12:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:12:57 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65e40016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:12:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:12:57.646Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 06 10:12:57 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:12:57 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 10:12:57 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:12:57.708 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 10:12:57 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:12:57 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 10:12:57 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:12:57.741 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 10:12:57 compute-0 ceph-mon[74327]: pgmap v955: 337 pgs: 337 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 26 op/s
Dec 06 10:12:58 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v956: 337 pgs: 337 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 26 op/s
Dec 06 10:12:58 compute-0 nova_compute[254819]: 2025-12-06 10:12:58.288 254824 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765015963.2865648, 588b3b1f-9845-438c-89c4-744f95204b42 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 10:12:58 compute-0 nova_compute[254819]: 2025-12-06 10:12:58.288 254824 INFO nova.compute.manager [-] [instance: 588b3b1f-9845-438c-89c4-744f95204b42] VM Stopped (Lifecycle Event)
Dec 06 10:12:58 compute-0 nova_compute[254819]: 2025-12-06 10:12:58.306 254824 DEBUG nova.compute.manager [None req-2d55263f-51c1-44b6-932a-640fdd44757e - - - - - -] [instance: 588b3b1f-9845-438c-89c4-744f95204b42] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 10:12:58 compute-0 nova_compute[254819]: 2025-12-06 10:12:58.391 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:12:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:12:58 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f4001c40 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:12:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:12:58 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65e8001f20 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:12:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:12:59.023Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 06 10:12:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:12:59 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c00a8c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:12:59 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:12:59 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:12:59 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:12:59.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:12:59 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:12:59 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:12:59 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:12:59.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:12:59 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:12:59 compute-0 ceph-mon[74327]: pgmap v956: 337 pgs: 337 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 26 op/s
Dec 06 10:13:00 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v957: 337 pgs: 337 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 10:13:00 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:13:00 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65e40016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:13:00 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:13:00 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f4001c40 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:13:00 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:13:00] "GET /metrics HTTP/1.1" 200 48459 "" "Prometheus/2.51.0"
Dec 06 10:13:00 compute-0 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:13:00] "GET /metrics HTTP/1.1" 200 48459 "" "Prometheus/2.51.0"
Dec 06 10:13:01 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:13:01 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65e8001f20 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:13:01 compute-0 nova_compute[254819]: 2025-12-06 10:13:01.158 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:13:01 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:13:01 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.003000079s ======
Dec 06 10:13:01 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:13:01.713 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.003000079s
Dec 06 10:13:01 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:13:01 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:13:01 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:13:01.747 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:13:02 compute-0 ceph-mon[74327]: pgmap v957: 337 pgs: 337 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 10:13:02 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v958: 337 pgs: 337 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 10:13:02 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:13:02 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c00a8c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:13:02 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:13:02 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65e40016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:13:03 compute-0 ceph-mon[74327]: pgmap v958: 337 pgs: 337 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 10:13:03 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:13:03 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f4002d40 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:13:03 compute-0 nova_compute[254819]: 2025-12-06 10:13:03.432 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:13:03 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:13:03 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 10:13:03 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:13:03.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 10:13:03 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:13:03 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:13:03 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:13:03.750 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:13:04 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v959: 337 pgs: 337 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Dec 06 10:13:04 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:13:04 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65e8001f20 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:13:04 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:13:04 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c00a8c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:13:04 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:13:05 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:13:05 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65e40032f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:13:05 compute-0 ceph-mon[74327]: pgmap v959: 337 pgs: 337 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Dec 06 10:13:05 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:13:05 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:13:05 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:13:05.719 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:13:05 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:13:05 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:13:05 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:13:05.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:13:06 compute-0 nova_compute[254819]: 2025-12-06 10:13:06.159 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:13:06 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v960: 337 pgs: 337 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 10:13:06 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:13:06 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f4002d40 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:13:06 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:13:06 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65e8004790 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:13:07 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:13:07 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c00a8c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:13:07 compute-0 ceph-mon[74327]: pgmap v960: 337 pgs: 337 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 10:13:07 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:13:07.647Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 06 10:13:07 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:13:07 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:13:07 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:13:07.721 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:13:07 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:13:07 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:13:07 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:13:07.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:13:08 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v961: 337 pgs: 337 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Dec 06 10:13:08 compute-0 nova_compute[254819]: 2025-12-06 10:13:08.437 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:13:08 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:13:08 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65e40032f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:13:08 compute-0 nova_compute[254819]: 2025-12-06 10:13:08.710 254824 DEBUG oslo_concurrency.lockutils [None req-6d7fed1a-df52-41f6-9d32-8f1b11efac77 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Acquiring lock "b735e225-377d-4f50-aae2-4bf5dd4eb9fa" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:13:08 compute-0 nova_compute[254819]: 2025-12-06 10:13:08.711 254824 DEBUG oslo_concurrency.lockutils [None req-6d7fed1a-df52-41f6-9d32-8f1b11efac77 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lock "b735e225-377d-4f50-aae2-4bf5dd4eb9fa" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:13:08 compute-0 nova_compute[254819]: 2025-12-06 10:13:08.724 254824 DEBUG nova.compute.manager [None req-6d7fed1a-df52-41f6-9d32-8f1b11efac77 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: b735e225-377d-4f50-aae2-4bf5dd4eb9fa] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 06 10:13:08 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:13:08 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f4002d40 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:13:08 compute-0 nova_compute[254819]: 2025-12-06 10:13:08.790 254824 DEBUG oslo_concurrency.lockutils [None req-6d7fed1a-df52-41f6-9d32-8f1b11efac77 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:13:08 compute-0 nova_compute[254819]: 2025-12-06 10:13:08.790 254824 DEBUG oslo_concurrency.lockutils [None req-6d7fed1a-df52-41f6-9d32-8f1b11efac77 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:13:08 compute-0 nova_compute[254819]: 2025-12-06 10:13:08.797 254824 DEBUG nova.virt.hardware [None req-6d7fed1a-df52-41f6-9d32-8f1b11efac77 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 06 10:13:08 compute-0 nova_compute[254819]: 2025-12-06 10:13:08.797 254824 INFO nova.compute.claims [None req-6d7fed1a-df52-41f6-9d32-8f1b11efac77 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: b735e225-377d-4f50-aae2-4bf5dd4eb9fa] Claim successful on node compute-0.ctlplane.example.com
Dec 06 10:13:08 compute-0 nova_compute[254819]: 2025-12-06 10:13:08.881 254824 DEBUG oslo_concurrency.processutils [None req-6d7fed1a-df52-41f6-9d32-8f1b11efac77 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 10:13:08 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 06 10:13:08 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 10:13:09 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:13:09.025Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 06 10:13:09 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:13:09 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65e8004790 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:13:09 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 06 10:13:09 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3673570831' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 10:13:09 compute-0 ceph-mon[74327]: pgmap v961: 337 pgs: 337 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Dec 06 10:13:09 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 10:13:09 compute-0 nova_compute[254819]: 2025-12-06 10:13:09.336 254824 DEBUG oslo_concurrency.processutils [None req-6d7fed1a-df52-41f6-9d32-8f1b11efac77 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
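That 0.455 s subprocess is how nova's RBD image backend sizes the cluster: it shells out to the exact "ceph df" command logged above and reads the JSON. The same call from Python; the stats/total_avail_bytes field names follow the ceph df --format=json schema:

    import json
    import subprocess

    # Command string copied from the oslo_concurrency.processutils line above.
    out = subprocess.run(
        ["ceph", "df", "--format=json", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"],
        capture_output=True, text=True, check=True,
    ).stdout

    stats = json.loads(out)["stats"]
    print(stats["total_bytes"], stats["total_avail_bytes"])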
Dec 06 10:13:09 compute-0 nova_compute[254819]: 2025-12-06 10:13:09.343 254824 DEBUG nova.compute.provider_tree [None req-6d7fed1a-df52-41f6-9d32-8f1b11efac77 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Inventory has not changed in ProviderTree for provider: 06a9c7d1-c74c-47ea-9e97-16acfab6aa88 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 10:13:09 compute-0 nova_compute[254819]: 2025-12-06 10:13:09.366 254824 DEBUG nova.scheduler.client.report [None req-6d7fed1a-df52-41f6-9d32-8f1b11efac77 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Inventory has not changed for provider 06a9c7d1-c74c-47ea-9e97-16acfab6aa88 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
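The inventory dict explains why the claim at 10:13:08 succeeded: placement capacity per resource class is (total - reserved) x allocation_ratio, so this host advertises 32 VCPUs, 7168 MB of RAM and 52.2 GB of disk:

    # Inventory data copied from the scheduler report line above.
    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
    }

    for rc, v in inventory.items():
        print(rc, (v["total"] - v["reserved"]) * v["allocation_ratio"])
    # -> VCPU 32.0, MEMORY_MB 7168.0, DISK_GB 52.2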
Dec 06 10:13:09 compute-0 nova_compute[254819]: 2025-12-06 10:13:09.387 254824 DEBUG oslo_concurrency.lockutils [None req-6d7fed1a-df52-41f6-9d32-8f1b11efac77 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.596s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:13:09 compute-0 nova_compute[254819]: 2025-12-06 10:13:09.387 254824 DEBUG nova.compute.manager [None req-6d7fed1a-df52-41f6-9d32-8f1b11efac77 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: b735e225-377d-4f50-aae2-4bf5dd4eb9fa] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 06 10:13:09 compute-0 nova_compute[254819]: 2025-12-06 10:13:09.441 254824 DEBUG nova.compute.manager [None req-6d7fed1a-df52-41f6-9d32-8f1b11efac77 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: b735e225-377d-4f50-aae2-4bf5dd4eb9fa] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 06 10:13:09 compute-0 nova_compute[254819]: 2025-12-06 10:13:09.441 254824 DEBUG nova.network.neutron [None req-6d7fed1a-df52-41f6-9d32-8f1b11efac77 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: b735e225-377d-4f50-aae2-4bf5dd4eb9fa] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 06 10:13:09 compute-0 podman[273623]: 2025-12-06 10:13:09.456881662 +0000 UTC m=+0.083928304 container health_status a9cf33c1bf7891b3fdd9db4717ad5f3587fe9e5acf9171920c6d6334b70a502a (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec 06 10:13:09 compute-0 nova_compute[254819]: 2025-12-06 10:13:09.503 254824 INFO nova.virt.libvirt.driver [None req-6d7fed1a-df52-41f6-9d32-8f1b11efac77 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: b735e225-377d-4f50-aae2-4bf5dd4eb9fa] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 06 10:13:09 compute-0 nova_compute[254819]: 2025-12-06 10:13:09.519 254824 DEBUG nova.compute.manager [None req-6d7fed1a-df52-41f6-9d32-8f1b11efac77 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: b735e225-377d-4f50-aae2-4bf5dd4eb9fa] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 06 10:13:09 compute-0 nova_compute[254819]: 2025-12-06 10:13:09.587 254824 DEBUG nova.compute.manager [None req-6d7fed1a-df52-41f6-9d32-8f1b11efac77 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: b735e225-377d-4f50-aae2-4bf5dd4eb9fa] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 06 10:13:09 compute-0 nova_compute[254819]: 2025-12-06 10:13:09.588 254824 DEBUG nova.virt.libvirt.driver [None req-6d7fed1a-df52-41f6-9d32-8f1b11efac77 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: b735e225-377d-4f50-aae2-4bf5dd4eb9fa] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 06 10:13:09 compute-0 nova_compute[254819]: 2025-12-06 10:13:09.588 254824 INFO nova.virt.libvirt.driver [None req-6d7fed1a-df52-41f6-9d32-8f1b11efac77 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: b735e225-377d-4f50-aae2-4bf5dd4eb9fa] Creating image(s)
Dec 06 10:13:09 compute-0 nova_compute[254819]: 2025-12-06 10:13:09.617 254824 DEBUG nova.storage.rbd_utils [None req-6d7fed1a-df52-41f6-9d32-8f1b11efac77 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] rbd image b735e225-377d-4f50-aae2-4bf5dd4eb9fa_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 10:13:09 compute-0 nova_compute[254819]: 2025-12-06 10:13:09.649 254824 DEBUG nova.storage.rbd_utils [None req-6d7fed1a-df52-41f6-9d32-8f1b11efac77 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] rbd image b735e225-377d-4f50-aae2-4bf5dd4eb9fa_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 10:13:09 compute-0 nova_compute[254819]: 2025-12-06 10:13:09.677 254824 DEBUG nova.storage.rbd_utils [None req-6d7fed1a-df52-41f6-9d32-8f1b11efac77 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] rbd image b735e225-377d-4f50-aae2-4bf5dd4eb9fa_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 10:13:09 compute-0 nova_compute[254819]: 2025-12-06 10:13:09.680 254824 DEBUG oslo_concurrency.processutils [None req-6d7fed1a-df52-41f6-9d32-8f1b11efac77 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/1b7208203e670301d076a006cb3364d3eb842050 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 10:13:09 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:13:09 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:13:09 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:13:09.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:13:09 compute-0 nova_compute[254819]: 2025-12-06 10:13:09.731 254824 DEBUG oslo_concurrency.processutils [None req-6d7fed1a-df52-41f6-9d32-8f1b11efac77 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/1b7208203e670301d076a006cb3364d3eb842050 --force-share --output=json" returned: 0 in 0.051s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
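Under the prlimit wrapper (1 GiB address space, 30 s of CPU) that probe is a plain qemu-img info call on the cached base image. Reproduced directly, parsing the JSON it emits; format and virtual-size are standard keys in qemu-img's JSON output:

    import json
    import subprocess

    # Base-image path copied from the nova log line above.
    out = subprocess.run(
        ["qemu-img", "info",
         "/var/lib/nova/instances/_base/1b7208203e670301d076a006cb3364d3eb842050",
         "--force-share", "--output=json"],
        capture_output=True, text=True, check=True,
    ).stdout

    info = json.loads(out)
    print(info["format"], info["virtual-size"])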
Dec 06 10:13:09 compute-0 nova_compute[254819]: 2025-12-06 10:13:09.732 254824 DEBUG oslo_concurrency.lockutils [None req-6d7fed1a-df52-41f6-9d32-8f1b11efac77 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Acquiring lock "1b7208203e670301d076a006cb3364d3eb842050" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:13:09 compute-0 nova_compute[254819]: 2025-12-06 10:13:09.733 254824 DEBUG oslo_concurrency.lockutils [None req-6d7fed1a-df52-41f6-9d32-8f1b11efac77 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lock "1b7208203e670301d076a006cb3364d3eb842050" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:13:09 compute-0 nova_compute[254819]: 2025-12-06 10:13:09.733 254824 DEBUG oslo_concurrency.lockutils [None req-6d7fed1a-df52-41f6-9d32-8f1b11efac77 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lock "1b7208203e670301d076a006cb3364d3eb842050" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:13:09 compute-0 nova_compute[254819]: 2025-12-06 10:13:09.756 254824 DEBUG nova.storage.rbd_utils [None req-6d7fed1a-df52-41f6-9d32-8f1b11efac77 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] rbd image b735e225-377d-4f50-aae2-4bf5dd4eb9fa_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 10:13:09 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:13:09 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:13:09 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:13:09.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:13:09 compute-0 nova_compute[254819]: 2025-12-06 10:13:09.759 254824 DEBUG oslo_concurrency.processutils [None req-6d7fed1a-df52-41f6-9d32-8f1b11efac77 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/1b7208203e670301d076a006cb3364d3eb842050 b735e225-377d-4f50-aae2-4bf5dd4eb9fa_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 10:13:09 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:13:10 compute-0 nova_compute[254819]: 2025-12-06 10:13:10.011 254824 DEBUG oslo_concurrency.processutils [None req-6d7fed1a-df52-41f6-9d32-8f1b11efac77 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/1b7208203e670301d076a006cb3364d3eb842050 b735e225-377d-4f50-aae2-4bf5dd4eb9fa_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.252s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 10:13:10 compute-0 nova_compute[254819]: 2025-12-06 10:13:10.073 254824 DEBUG nova.storage.rbd_utils [None req-6d7fed1a-df52-41f6-9d32-8f1b11efac77 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] resizing rbd image b735e225-377d-4f50-aae2-4bf5dd4eb9fa_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
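With the base image cached, nova imports it into the Ceph vms pool and then grows the RBD image to the flavor's 1 GiB root disk (1073741824 bytes, per the resize line above). A sketch replaying both steps; the import arguments are verbatim from the log, while nova performs the resize through librbd and the rbd CLI call below is only the equivalent:

    import subprocess

    base = "/var/lib/nova/instances/_base/1b7208203e670301d076a006cb3364d3eb842050"
    image = "b735e225-377d-4f50-aae2-4bf5dd4eb9fa_disk"
    common = ["--id", "openstack", "--conf", "/etc/ceph/ceph.conf"]

    # Import the flat base file as an RBD image (format 2 supports cloning).
    subprocess.check_call(["rbd", "import", "--pool", "vms", base, image,
                           "--image-format=2"] + common)
    # Grow it to the flavor's root_gb: 1 GiB == 1073741824 bytes.
    subprocess.check_call(["rbd", "resize", "--pool", "vms", image,
                           "--size", "1G"] + common)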
Dec 06 10:13:10 compute-0 nova_compute[254819]: 2025-12-06 10:13:10.167 254824 DEBUG nova.objects.instance [None req-6d7fed1a-df52-41f6-9d32-8f1b11efac77 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lazy-loading 'migration_context' on Instance uuid b735e225-377d-4f50-aae2-4bf5dd4eb9fa obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 10:13:10 compute-0 nova_compute[254819]: 2025-12-06 10:13:10.187 254824 DEBUG nova.virt.libvirt.driver [None req-6d7fed1a-df52-41f6-9d32-8f1b11efac77 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: b735e225-377d-4f50-aae2-4bf5dd4eb9fa] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 06 10:13:10 compute-0 nova_compute[254819]: 2025-12-06 10:13:10.187 254824 DEBUG nova.virt.libvirt.driver [None req-6d7fed1a-df52-41f6-9d32-8f1b11efac77 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: b735e225-377d-4f50-aae2-4bf5dd4eb9fa] Ensure instance console log exists: /var/lib/nova/instances/b735e225-377d-4f50-aae2-4bf5dd4eb9fa/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 06 10:13:10 compute-0 nova_compute[254819]: 2025-12-06 10:13:10.188 254824 DEBUG oslo_concurrency.lockutils [None req-6d7fed1a-df52-41f6-9d32-8f1b11efac77 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:13:10 compute-0 nova_compute[254819]: 2025-12-06 10:13:10.188 254824 DEBUG oslo_concurrency.lockutils [None req-6d7fed1a-df52-41f6-9d32-8f1b11efac77 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:13:10 compute-0 nova_compute[254819]: 2025-12-06 10:13:10.188 254824 DEBUG oslo_concurrency.lockutils [None req-6d7fed1a-df52-41f6-9d32-8f1b11efac77 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:13:10 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v962: 337 pgs: 337 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 10:13:10 compute-0 ceph-mon[74327]: from='client.? 192.168.122.100:0/3673570831' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 10:13:10 compute-0 nova_compute[254819]: 2025-12-06 10:13:10.391 254824 DEBUG nova.policy [None req-6d7fed1a-df52-41f6-9d32-8f1b11efac77 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '03615580775245e6ae335ee9d785611f', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '92b402c8d3e2476abc98be42a1e6d34e', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
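The policy denial above is expected rather than an error: the tempest user holds only the reader and member roles, and network:attach_external_network defaults to admin, so nova logs the failed check and proceeds without external-network access. A sketch of the same oslo.policy evaluation; the admin-only check string is an assumption standing in for nova's real default:

    from oslo_config import cfg
    from oslo_policy import policy

    conf = cfg.ConfigOpts()
    conf(args=[])  # no config files needed for this sketch
    enforcer = policy.Enforcer(conf)
    # Assumed default; nova's actual rule may be spelled differently.
    enforcer.register_default(policy.RuleDefault(
        "network:attach_external_network", "role:admin"))

    creds = {"roles": ["reader", "member"],
             "user_id": "03615580775245e6ae335ee9d785611f",
             "project_id": "92b402c8d3e2476abc98be42a1e6d34e"}
    print(enforcer.enforce("network:attach_external_network", {}, creds))
    # -> False, matching the "Policy check ... failed" line above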
Dec 06 10:13:10 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:13:10 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65e8004790 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:13:10 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:13:10 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65e8004790 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:13:10 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:13:10] "GET /metrics HTTP/1.1" 200 48458 "" "Prometheus/2.51.0"
Dec 06 10:13:10 compute-0 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:13:10] "GET /metrics HTTP/1.1" 200 48458 "" "Prometheus/2.51.0"
Dec 06 10:13:11 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:13:11 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65e4003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:13:11 compute-0 nova_compute[254819]: 2025-12-06 10:13:11.164 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:13:11 compute-0 ceph-mon[74327]: pgmap v962: 337 pgs: 337 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 10:13:11 compute-0 nova_compute[254819]: 2025-12-06 10:13:11.621 254824 DEBUG nova.network.neutron [None req-6d7fed1a-df52-41f6-9d32-8f1b11efac77 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: b735e225-377d-4f50-aae2-4bf5dd4eb9fa] Successfully created port: 923b504a-09da-476b-a8c8-c6c76c5e8343 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 06 10:13:11 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:13:11 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:13:11 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:13:11.725 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:13:11 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:13:11 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 10:13:11 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:13:11.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 10:13:12 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v963: 337 pgs: 337 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 10:13:12 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:13:12 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c00a8c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:13:12 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:13:12 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f40041d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:13:13 compute-0 nova_compute[254819]: 2025-12-06 10:13:13.061 254824 DEBUG nova.network.neutron [None req-6d7fed1a-df52-41f6-9d32-8f1b11efac77 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: b735e225-377d-4f50-aae2-4bf5dd4eb9fa] Successfully updated port: 923b504a-09da-476b-a8c8-c6c76c5e8343 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 06 10:13:13 compute-0 nova_compute[254819]: 2025-12-06 10:13:13.072 254824 DEBUG oslo_concurrency.lockutils [None req-6d7fed1a-df52-41f6-9d32-8f1b11efac77 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Acquiring lock "refresh_cache-b735e225-377d-4f50-aae2-4bf5dd4eb9fa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 10:13:13 compute-0 nova_compute[254819]: 2025-12-06 10:13:13.073 254824 DEBUG oslo_concurrency.lockutils [None req-6d7fed1a-df52-41f6-9d32-8f1b11efac77 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Acquired lock "refresh_cache-b735e225-377d-4f50-aae2-4bf5dd4eb9fa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 10:13:13 compute-0 nova_compute[254819]: 2025-12-06 10:13:13.073 254824 DEBUG nova.network.neutron [None req-6d7fed1a-df52-41f6-9d32-8f1b11efac77 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: b735e225-377d-4f50-aae2-4bf5dd4eb9fa] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 06 10:13:13 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:13:13 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65e8004790 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:13:13 compute-0 nova_compute[254819]: 2025-12-06 10:13:13.174 254824 DEBUG nova.compute.manager [req-57b84257-7f3a-440e-b000-a3eb14c06090 req-1cfac919-55bc-4c59-8e19-531723fe731b d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: b735e225-377d-4f50-aae2-4bf5dd4eb9fa] Received event network-changed-923b504a-09da-476b-a8c8-c6c76c5e8343 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 10:13:13 compute-0 nova_compute[254819]: 2025-12-06 10:13:13.174 254824 DEBUG nova.compute.manager [req-57b84257-7f3a-440e-b000-a3eb14c06090 req-1cfac919-55bc-4c59-8e19-531723fe731b d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: b735e225-377d-4f50-aae2-4bf5dd4eb9fa] Refreshing instance network info cache due to event network-changed-923b504a-09da-476b-a8c8-c6c76c5e8343. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 10:13:13 compute-0 nova_compute[254819]: 2025-12-06 10:13:13.174 254824 DEBUG oslo_concurrency.lockutils [req-57b84257-7f3a-440e-b000-a3eb14c06090 req-1cfac919-55bc-4c59-8e19-531723fe731b d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Acquiring lock "refresh_cache-b735e225-377d-4f50-aae2-4bf5dd4eb9fa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 10:13:13 compute-0 nova_compute[254819]: 2025-12-06 10:13:13.248 254824 DEBUG nova.network.neutron [None req-6d7fed1a-df52-41f6-9d32-8f1b11efac77 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: b735e225-377d-4f50-aae2-4bf5dd4eb9fa] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 06 10:13:13 compute-0 ceph-mon[74327]: pgmap v963: 337 pgs: 337 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 10:13:13 compute-0 nova_compute[254819]: 2025-12-06 10:13:13.441 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:13:13 compute-0 sudo[273814]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 10:13:13 compute-0 sudo[273814]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:13:13 compute-0 sudo[273814]: pam_unix(sudo:session): session closed for user root
Dec 06 10:13:13 compute-0 podman[273838]: 2025-12-06 10:13:13.673723253 +0000 UTC m=+0.141120102 container health_status ed08b04f63d3695f0aa2f04fcc706897af4aa24f286fa3cd10a4323ee2c868ab (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125)
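The podman event above records a periodic health check of the ovn_controller container: the /openstack/healthcheck script mounted into the container ran, passed, and the failing streak stayed at 0. The same check can be triggered by hand; a sketch, assuming the container name from the log:

    import subprocess

    # Hedged sketch: exit status 0 here corresponds to the
    # health_status=healthy field in the container event above.
    subprocess.check_call(["podman", "healthcheck", "run", "ovn_controller"])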
Dec 06 10:13:13 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:13:13 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:13:13 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:13:13.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:13:13 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:13:13 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:13:13 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:13:13.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:13:14 compute-0 nova_compute[254819]: 2025-12-06 10:13:14.107 254824 DEBUG nova.network.neutron [None req-6d7fed1a-df52-41f6-9d32-8f1b11efac77 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: b735e225-377d-4f50-aae2-4bf5dd4eb9fa] Updating instance_info_cache with network_info: [{"id": "923b504a-09da-476b-a8c8-c6c76c5e8343", "address": "fa:16:3e:b7:ab:4e", "network": {"id": "565d9ab5-f943-4873-8a20-970fba448d46", "bridge": "br-int", "label": "tempest-network-smoke--340972836", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap923b504a-09", "ovs_interfaceid": "923b504a-09da-476b-a8c8-c6c76c5e8343", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 10:13:14 compute-0 nova_compute[254819]: 2025-12-06 10:13:14.135 254824 DEBUG oslo_concurrency.lockutils [None req-6d7fed1a-df52-41f6-9d32-8f1b11efac77 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Releasing lock "refresh_cache-b735e225-377d-4f50-aae2-4bf5dd4eb9fa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 10:13:14 compute-0 nova_compute[254819]: 2025-12-06 10:13:14.136 254824 DEBUG nova.compute.manager [None req-6d7fed1a-df52-41f6-9d32-8f1b11efac77 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: b735e225-377d-4f50-aae2-4bf5dd4eb9fa] Instance network_info: |[{"id": "923b504a-09da-476b-a8c8-c6c76c5e8343", "address": "fa:16:3e:b7:ab:4e", "network": {"id": "565d9ab5-f943-4873-8a20-970fba448d46", "bridge": "br-int", "label": "tempest-network-smoke--340972836", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap923b504a-09", "ovs_interfaceid": "923b504a-09da-476b-a8c8-c6c76c5e8343", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 06 10:13:14 compute-0 nova_compute[254819]: 2025-12-06 10:13:14.136 254824 DEBUG oslo_concurrency.lockutils [req-57b84257-7f3a-440e-b000-a3eb14c06090 req-1cfac919-55bc-4c59-8e19-531723fe731b d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Acquired lock "refresh_cache-b735e225-377d-4f50-aae2-4bf5dd4eb9fa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 10:13:14 compute-0 nova_compute[254819]: 2025-12-06 10:13:14.137 254824 DEBUG nova.network.neutron [req-57b84257-7f3a-440e-b000-a3eb14c06090 req-1cfac919-55bc-4c59-8e19-531723fe731b d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: b735e225-377d-4f50-aae2-4bf5dd4eb9fa] Refreshing network info cache for port 923b504a-09da-476b-a8c8-c6c76c5e8343 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 06 10:13:14 compute-0 nova_compute[254819]: 2025-12-06 10:13:14.142 254824 DEBUG nova.virt.libvirt.driver [None req-6d7fed1a-df52-41f6-9d32-8f1b11efac77 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: b735e225-377d-4f50-aae2-4bf5dd4eb9fa] Start _get_guest_xml network_info=[{"id": "923b504a-09da-476b-a8c8-c6c76c5e8343", "address": "fa:16:3e:b7:ab:4e", "network": {"id": "565d9ab5-f943-4873-8a20-970fba448d46", "bridge": "br-int", "label": "tempest-network-smoke--340972836", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap923b504a-09", "ovs_interfaceid": "923b504a-09da-476b-a8c8-c6c76c5e8343", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T10:04:42Z,direct_url=<?>,disk_format='qcow2',id=9489b8a5-a798-4e26-87f9-59bb1eb2e6fd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3e0ab101ca7547d4a515169a0f2edef3',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T10:04:45Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'encrypted': False, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '9489b8a5-a798-4e26-87f9-59bb1eb2e6fd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 06 10:13:14 compute-0 nova_compute[254819]: 2025-12-06 10:13:14.148 254824 WARNING nova.virt.libvirt.driver [None req-6d7fed1a-df52-41f6-9d32-8f1b11efac77 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 10:13:14 compute-0 nova_compute[254819]: 2025-12-06 10:13:14.154 254824 DEBUG nova.virt.libvirt.host [None req-6d7fed1a-df52-41f6-9d32-8f1b11efac77 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 06 10:13:14 compute-0 nova_compute[254819]: 2025-12-06 10:13:14.154 254824 DEBUG nova.virt.libvirt.host [None req-6d7fed1a-df52-41f6-9d32-8f1b11efac77 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 06 10:13:14 compute-0 nova_compute[254819]: 2025-12-06 10:13:14.165 254824 DEBUG nova.virt.libvirt.host [None req-6d7fed1a-df52-41f6-9d32-8f1b11efac77 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 06 10:13:14 compute-0 nova_compute[254819]: 2025-12-06 10:13:14.166 254824 DEBUG nova.virt.libvirt.host [None req-6d7fed1a-df52-41f6-9d32-8f1b11efac77 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
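The four host.py lines above are nova's CPU-controller probe: nothing under cgroups v1 (RHEL 9 defaults to the unified hierarchy), then a hit under cgroups v2. The v2 side reduces to reading the root cgroup.controllers file; a stand-alone version:

    from pathlib import Path

    # Hedged sketch of the cgroups-v2 probe logged above.
    controllers = Path("/sys/fs/cgroup/cgroup.controllers")
    has_cpu = controllers.exists() and "cpu" in controllers.read_text().split()
    print("CPU controller found" if has_cpu else "CPU controller missing")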
Dec 06 10:13:14 compute-0 nova_compute[254819]: 2025-12-06 10:13:14.166 254824 DEBUG nova.virt.libvirt.driver [None req-6d7fed1a-df52-41f6-9d32-8f1b11efac77 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 06 10:13:14 compute-0 nova_compute[254819]: 2025-12-06 10:13:14.167 254824 DEBUG nova.virt.hardware [None req-6d7fed1a-df52-41f6-9d32-8f1b11efac77 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T10:04:41Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='0a252b9c-cc5f-41b2-a8b2-94fcf6e74d22',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T10:04:42Z,direct_url=<?>,disk_format='qcow2',id=9489b8a5-a798-4e26-87f9-59bb1eb2e6fd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3e0ab101ca7547d4a515169a0f2edef3',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T10:04:45Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 06 10:13:14 compute-0 nova_compute[254819]: 2025-12-06 10:13:14.168 254824 DEBUG nova.virt.hardware [None req-6d7fed1a-df52-41f6-9d32-8f1b11efac77 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 06 10:13:14 compute-0 nova_compute[254819]: 2025-12-06 10:13:14.168 254824 DEBUG nova.virt.hardware [None req-6d7fed1a-df52-41f6-9d32-8f1b11efac77 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 06 10:13:14 compute-0 nova_compute[254819]: 2025-12-06 10:13:14.169 254824 DEBUG nova.virt.hardware [None req-6d7fed1a-df52-41f6-9d32-8f1b11efac77 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 06 10:13:14 compute-0 nova_compute[254819]: 2025-12-06 10:13:14.169 254824 DEBUG nova.virt.hardware [None req-6d7fed1a-df52-41f6-9d32-8f1b11efac77 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 06 10:13:14 compute-0 nova_compute[254819]: 2025-12-06 10:13:14.169 254824 DEBUG nova.virt.hardware [None req-6d7fed1a-df52-41f6-9d32-8f1b11efac77 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 06 10:13:14 compute-0 nova_compute[254819]: 2025-12-06 10:13:14.170 254824 DEBUG nova.virt.hardware [None req-6d7fed1a-df52-41f6-9d32-8f1b11efac77 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 06 10:13:14 compute-0 nova_compute[254819]: 2025-12-06 10:13:14.170 254824 DEBUG nova.virt.hardware [None req-6d7fed1a-df52-41f6-9d32-8f1b11efac77 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 06 10:13:14 compute-0 nova_compute[254819]: 2025-12-06 10:13:14.171 254824 DEBUG nova.virt.hardware [None req-6d7fed1a-df52-41f6-9d32-8f1b11efac77 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 06 10:13:14 compute-0 nova_compute[254819]: 2025-12-06 10:13:14.171 254824 DEBUG nova.virt.hardware [None req-6d7fed1a-df52-41f6-9d32-8f1b11efac77 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 06 10:13:14 compute-0 nova_compute[254819]: 2025-12-06 10:13:14.172 254824 DEBUG nova.virt.hardware [None req-6d7fed1a-df52-41f6-9d32-8f1b11efac77 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
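The hardware.py lines above walk the whole CPU-topology negotiation: flavor and image impose no limits or preferences (all 0:0:0), the limits fall back to 65536 per axis, and the only topology whose product equals one vCPU is sockets=1, cores=1, threads=1, which is exactly what appears in the guest XML below. The enumeration step amounts to factoring the vCPU count under the per-axis limits; a simplified illustration (nova's real ordering and preference handling are richer):

    from itertools import product

    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536,
                            max_threads=65536):
        """Every (sockets, cores, threads) whose product is the vCPU
        count and which respects the per-axis limits."""
        return [(s, c, t)
                for s, c, t in product(
                    range(1, min(vcpus, max_sockets) + 1),
                    range(1, min(vcpus, max_cores) + 1),
                    range(1, min(vcpus, max_threads) + 1))
                if s * c * t == vcpus]

    print(possible_topologies(1))  # [(1, 1, 1)], matching the log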
Dec 06 10:13:14 compute-0 nova_compute[254819]: 2025-12-06 10:13:14.177 254824 DEBUG oslo_concurrency.processutils [None req-6d7fed1a-df52-41f6-9d32-8f1b11efac77 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 10:13:14 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v964: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 06 10:13:14 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 06 10:13:14 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1883968166' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 10:13:14 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:13:14 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65e4003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:13:14 compute-0 nova_compute[254819]: 2025-12-06 10:13:14.674 254824 DEBUG oslo_concurrency.processutils [None req-6d7fed1a-df52-41f6-9d32-8f1b11efac77 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.498s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 10:13:14 compute-0 nova_compute[254819]: 2025-12-06 10:13:14.711 254824 DEBUG nova.storage.rbd_utils [None req-6d7fed1a-df52-41f6-9d32-8f1b11efac77 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] rbd image b735e225-377d-4f50-aae2-4bf5dd4eb9fa_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 10:13:14 compute-0 nova_compute[254819]: 2025-12-06 10:13:14.718 254824 DEBUG oslo_concurrency.processutils [None req-6d7fed1a-df52-41f6-9d32-8f1b11efac77 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 10:13:14 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:13:14 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c00a8c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:13:14 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:13:15 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:13:15 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f40041d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:13:15 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 06 10:13:15 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/840050543' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 10:13:15 compute-0 nova_compute[254819]: 2025-12-06 10:13:15.199 254824 DEBUG oslo_concurrency.processutils [None req-6d7fed1a-df52-41f6-9d32-8f1b11efac77 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.481s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
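The two ceph mon dump calls above (one per disk) are how nova discovers the monitor endpoints it then writes into each RBD <source> element of the guest XML below (192.168.122.100/.101/.102 on port 6789). A sketch of that discovery; the exact JSON field names vary slightly across Ceph releases, so both common ones are tried:

    import json
    import subprocess

    out = subprocess.check_output(
        ["ceph", "mon", "dump", "--format=json",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"])
    dump = json.loads(out)
    # Entries carry addresses like "192.168.122.100:6789/0".
    hosts = [m.get("public_addr", m.get("addr", "")).split("/")[0]
             for m in dump.get("mons", [])]
    print(hosts)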
Dec 06 10:13:15 compute-0 nova_compute[254819]: 2025-12-06 10:13:15.202 254824 DEBUG nova.virt.libvirt.vif [None req-6d7fed1a-df52-41f6-9d32-8f1b11efac77 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T10:13:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-767347043',display_name='tempest-TestNetworkBasicOps-server-767347043',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-767347043',id=10,image_ref='9489b8a5-a798-4e26-87f9-59bb1eb2e6fd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKo7uC0irjYnKyVEGtEn/nYgythvknyTt45P5kPX1NZlUQ4NHagXOXCZs1+RjUHYK3oEDqvVo3L7WEeQEsh2SWgKD0PXaBMlx1FpXYkm1OxP+oK804aHcHmvv61DYBpjSw==',key_name='tempest-TestNetworkBasicOps-1442962553',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='92b402c8d3e2476abc98be42a1e6d34e',ramdisk_id='',reservation_id='r-rc0ojmmg',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='9489b8a5-a798-4e26-87f9-59bb1eb2e6fd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1971100882',owner_user_name='tempest-TestNetworkBasicOps-1971100882-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T10:13:09Z,user_data=None,user_id='03615580775245e6ae335ee9d785611f',uuid=b735e225-377d-4f50-aae2-4bf5dd4eb9fa,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "923b504a-09da-476b-a8c8-c6c76c5e8343", "address": "fa:16:3e:b7:ab:4e", "network": {"id": "565d9ab5-f943-4873-8a20-970fba448d46", "bridge": "br-int", "label": "tempest-network-smoke--340972836", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap923b504a-09", "ovs_interfaceid": "923b504a-09da-476b-a8c8-c6c76c5e8343", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 06 10:13:15 compute-0 nova_compute[254819]: 2025-12-06 10:13:15.202 254824 DEBUG nova.network.os_vif_util [None req-6d7fed1a-df52-41f6-9d32-8f1b11efac77 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Converting VIF {"id": "923b504a-09da-476b-a8c8-c6c76c5e8343", "address": "fa:16:3e:b7:ab:4e", "network": {"id": "565d9ab5-f943-4873-8a20-970fba448d46", "bridge": "br-int", "label": "tempest-network-smoke--340972836", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap923b504a-09", "ovs_interfaceid": "923b504a-09da-476b-a8c8-c6c76c5e8343", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 10:13:15 compute-0 nova_compute[254819]: 2025-12-06 10:13:15.204 254824 DEBUG nova.network.os_vif_util [None req-6d7fed1a-df52-41f6-9d32-8f1b11efac77 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b7:ab:4e,bridge_name='br-int',has_traffic_filtering=True,id=923b504a-09da-476b-a8c8-c6c76c5e8343,network=Network(565d9ab5-f943-4873-8a20-970fba448d46),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap923b504a-09') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 10:13:15 compute-0 nova_compute[254819]: 2025-12-06 10:13:15.205 254824 DEBUG nova.objects.instance [None req-6d7fed1a-df52-41f6-9d32-8f1b11efac77 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lazy-loading 'pci_devices' on Instance uuid b735e225-377d-4f50-aae2-4bf5dd4eb9fa obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 10:13:15 compute-0 nova_compute[254819]: 2025-12-06 10:13:15.229 254824 DEBUG nova.virt.libvirt.driver [None req-6d7fed1a-df52-41f6-9d32-8f1b11efac77 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: b735e225-377d-4f50-aae2-4bf5dd4eb9fa] End _get_guest_xml xml=<domain type="kvm">
Dec 06 10:13:15 compute-0 nova_compute[254819]:   <uuid>b735e225-377d-4f50-aae2-4bf5dd4eb9fa</uuid>
Dec 06 10:13:15 compute-0 nova_compute[254819]:   <name>instance-0000000a</name>
Dec 06 10:13:15 compute-0 nova_compute[254819]:   <memory>131072</memory>
Dec 06 10:13:15 compute-0 nova_compute[254819]:   <vcpu>1</vcpu>
Dec 06 10:13:15 compute-0 nova_compute[254819]:   <metadata>
Dec 06 10:13:15 compute-0 nova_compute[254819]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 06 10:13:15 compute-0 nova_compute[254819]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 06 10:13:15 compute-0 nova_compute[254819]:       <nova:name>tempest-TestNetworkBasicOps-server-767347043</nova:name>
Dec 06 10:13:15 compute-0 nova_compute[254819]:       <nova:creationTime>2025-12-06 10:13:14</nova:creationTime>
Dec 06 10:13:15 compute-0 nova_compute[254819]:       <nova:flavor name="m1.nano">
Dec 06 10:13:15 compute-0 nova_compute[254819]:         <nova:memory>128</nova:memory>
Dec 06 10:13:15 compute-0 nova_compute[254819]:         <nova:disk>1</nova:disk>
Dec 06 10:13:15 compute-0 nova_compute[254819]:         <nova:swap>0</nova:swap>
Dec 06 10:13:15 compute-0 nova_compute[254819]:         <nova:ephemeral>0</nova:ephemeral>
Dec 06 10:13:15 compute-0 nova_compute[254819]:         <nova:vcpus>1</nova:vcpus>
Dec 06 10:13:15 compute-0 nova_compute[254819]:       </nova:flavor>
Dec 06 10:13:15 compute-0 nova_compute[254819]:       <nova:owner>
Dec 06 10:13:15 compute-0 nova_compute[254819]:         <nova:user uuid="03615580775245e6ae335ee9d785611f">tempest-TestNetworkBasicOps-1971100882-project-member</nova:user>
Dec 06 10:13:15 compute-0 nova_compute[254819]:         <nova:project uuid="92b402c8d3e2476abc98be42a1e6d34e">tempest-TestNetworkBasicOps-1971100882</nova:project>
Dec 06 10:13:15 compute-0 nova_compute[254819]:       </nova:owner>
Dec 06 10:13:15 compute-0 nova_compute[254819]:       <nova:root type="image" uuid="9489b8a5-a798-4e26-87f9-59bb1eb2e6fd"/>
Dec 06 10:13:15 compute-0 nova_compute[254819]:       <nova:ports>
Dec 06 10:13:15 compute-0 nova_compute[254819]:         <nova:port uuid="923b504a-09da-476b-a8c8-c6c76c5e8343">
Dec 06 10:13:15 compute-0 nova_compute[254819]:           <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Dec 06 10:13:15 compute-0 nova_compute[254819]:         </nova:port>
Dec 06 10:13:15 compute-0 nova_compute[254819]:       </nova:ports>
Dec 06 10:13:15 compute-0 nova_compute[254819]:     </nova:instance>
Dec 06 10:13:15 compute-0 nova_compute[254819]:   </metadata>
Dec 06 10:13:15 compute-0 nova_compute[254819]:   <sysinfo type="smbios">
Dec 06 10:13:15 compute-0 nova_compute[254819]:     <system>
Dec 06 10:13:15 compute-0 nova_compute[254819]:       <entry name="manufacturer">RDO</entry>
Dec 06 10:13:15 compute-0 nova_compute[254819]:       <entry name="product">OpenStack Compute</entry>
Dec 06 10:13:15 compute-0 nova_compute[254819]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 06 10:13:15 compute-0 nova_compute[254819]:       <entry name="serial">b735e225-377d-4f50-aae2-4bf5dd4eb9fa</entry>
Dec 06 10:13:15 compute-0 nova_compute[254819]:       <entry name="uuid">b735e225-377d-4f50-aae2-4bf5dd4eb9fa</entry>
Dec 06 10:13:15 compute-0 nova_compute[254819]:       <entry name="family">Virtual Machine</entry>
Dec 06 10:13:15 compute-0 nova_compute[254819]:     </system>
Dec 06 10:13:15 compute-0 nova_compute[254819]:   </sysinfo>
Dec 06 10:13:15 compute-0 nova_compute[254819]:   <os>
Dec 06 10:13:15 compute-0 nova_compute[254819]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 06 10:13:15 compute-0 nova_compute[254819]:     <boot dev="hd"/>
Dec 06 10:13:15 compute-0 nova_compute[254819]:     <smbios mode="sysinfo"/>
Dec 06 10:13:15 compute-0 nova_compute[254819]:   </os>
Dec 06 10:13:15 compute-0 nova_compute[254819]:   <features>
Dec 06 10:13:15 compute-0 nova_compute[254819]:     <acpi/>
Dec 06 10:13:15 compute-0 nova_compute[254819]:     <apic/>
Dec 06 10:13:15 compute-0 nova_compute[254819]:     <vmcoreinfo/>
Dec 06 10:13:15 compute-0 nova_compute[254819]:   </features>
Dec 06 10:13:15 compute-0 nova_compute[254819]:   <clock offset="utc">
Dec 06 10:13:15 compute-0 nova_compute[254819]:     <timer name="pit" tickpolicy="delay"/>
Dec 06 10:13:15 compute-0 nova_compute[254819]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 06 10:13:15 compute-0 nova_compute[254819]:     <timer name="hpet" present="no"/>
Dec 06 10:13:15 compute-0 nova_compute[254819]:   </clock>
Dec 06 10:13:15 compute-0 nova_compute[254819]:   <cpu mode="host-model" match="exact">
Dec 06 10:13:15 compute-0 nova_compute[254819]:     <topology sockets="1" cores="1" threads="1"/>
Dec 06 10:13:15 compute-0 nova_compute[254819]:   </cpu>
Dec 06 10:13:15 compute-0 nova_compute[254819]:   <devices>
Dec 06 10:13:15 compute-0 nova_compute[254819]:     <disk type="network" device="disk">
Dec 06 10:13:15 compute-0 nova_compute[254819]:       <driver type="raw" cache="none"/>
Dec 06 10:13:15 compute-0 nova_compute[254819]:       <source protocol="rbd" name="vms/b735e225-377d-4f50-aae2-4bf5dd4eb9fa_disk">
Dec 06 10:13:15 compute-0 nova_compute[254819]:         <host name="192.168.122.100" port="6789"/>
Dec 06 10:13:15 compute-0 nova_compute[254819]:         <host name="192.168.122.102" port="6789"/>
Dec 06 10:13:15 compute-0 nova_compute[254819]:         <host name="192.168.122.101" port="6789"/>
Dec 06 10:13:15 compute-0 nova_compute[254819]:       </source>
Dec 06 10:13:15 compute-0 nova_compute[254819]:       <auth username="openstack">
Dec 06 10:13:15 compute-0 nova_compute[254819]:         <secret type="ceph" uuid="5ecd3f74-dade-5fc4-92ce-8950ae424258"/>
Dec 06 10:13:15 compute-0 nova_compute[254819]:       </auth>
Dec 06 10:13:15 compute-0 nova_compute[254819]:       <target dev="vda" bus="virtio"/>
Dec 06 10:13:15 compute-0 nova_compute[254819]:     </disk>
Dec 06 10:13:15 compute-0 nova_compute[254819]:     <disk type="network" device="cdrom">
Dec 06 10:13:15 compute-0 nova_compute[254819]:       <driver type="raw" cache="none"/>
Dec 06 10:13:15 compute-0 nova_compute[254819]:       <source protocol="rbd" name="vms/b735e225-377d-4f50-aae2-4bf5dd4eb9fa_disk.config">
Dec 06 10:13:15 compute-0 nova_compute[254819]:         <host name="192.168.122.100" port="6789"/>
Dec 06 10:13:15 compute-0 nova_compute[254819]:         <host name="192.168.122.102" port="6789"/>
Dec 06 10:13:15 compute-0 nova_compute[254819]:         <host name="192.168.122.101" port="6789"/>
Dec 06 10:13:15 compute-0 nova_compute[254819]:       </source>
Dec 06 10:13:15 compute-0 nova_compute[254819]:       <auth username="openstack">
Dec 06 10:13:15 compute-0 nova_compute[254819]:         <secret type="ceph" uuid="5ecd3f74-dade-5fc4-92ce-8950ae424258"/>
Dec 06 10:13:15 compute-0 nova_compute[254819]:       </auth>
Dec 06 10:13:15 compute-0 nova_compute[254819]:       <target dev="sda" bus="sata"/>
Dec 06 10:13:15 compute-0 nova_compute[254819]:     </disk>
Dec 06 10:13:15 compute-0 nova_compute[254819]:     <interface type="ethernet">
Dec 06 10:13:15 compute-0 nova_compute[254819]:       <mac address="fa:16:3e:b7:ab:4e"/>
Dec 06 10:13:15 compute-0 nova_compute[254819]:       <model type="virtio"/>
Dec 06 10:13:15 compute-0 nova_compute[254819]:       <driver name="vhost" rx_queue_size="512"/>
Dec 06 10:13:15 compute-0 nova_compute[254819]:       <mtu size="1442"/>
Dec 06 10:13:15 compute-0 nova_compute[254819]:       <target dev="tap923b504a-09"/>
Dec 06 10:13:15 compute-0 nova_compute[254819]:     </interface>
Dec 06 10:13:15 compute-0 nova_compute[254819]:     <serial type="pty">
Dec 06 10:13:15 compute-0 nova_compute[254819]:       <log file="/var/lib/nova/instances/b735e225-377d-4f50-aae2-4bf5dd4eb9fa/console.log" append="off"/>
Dec 06 10:13:15 compute-0 nova_compute[254819]:     </serial>
Dec 06 10:13:15 compute-0 nova_compute[254819]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 06 10:13:15 compute-0 nova_compute[254819]:     <video>
Dec 06 10:13:15 compute-0 nova_compute[254819]:       <model type="virtio"/>
Dec 06 10:13:15 compute-0 nova_compute[254819]:     </video>
Dec 06 10:13:15 compute-0 nova_compute[254819]:     <input type="tablet" bus="usb"/>
Dec 06 10:13:15 compute-0 nova_compute[254819]:     <rng model="virtio">
Dec 06 10:13:15 compute-0 nova_compute[254819]:       <backend model="random">/dev/urandom</backend>
Dec 06 10:13:15 compute-0 nova_compute[254819]:     </rng>
Dec 06 10:13:15 compute-0 nova_compute[254819]:     <controller type="pci" model="pcie-root"/>
Dec 06 10:13:15 compute-0 nova_compute[254819]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:13:15 compute-0 nova_compute[254819]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:13:15 compute-0 nova_compute[254819]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:13:15 compute-0 nova_compute[254819]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:13:15 compute-0 nova_compute[254819]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:13:15 compute-0 nova_compute[254819]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:13:15 compute-0 nova_compute[254819]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:13:15 compute-0 nova_compute[254819]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:13:15 compute-0 nova_compute[254819]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:13:15 compute-0 nova_compute[254819]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:13:15 compute-0 nova_compute[254819]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:13:15 compute-0 nova_compute[254819]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:13:15 compute-0 nova_compute[254819]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:13:15 compute-0 nova_compute[254819]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:13:15 compute-0 nova_compute[254819]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:13:15 compute-0 nova_compute[254819]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:13:15 compute-0 nova_compute[254819]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:13:15 compute-0 nova_compute[254819]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:13:15 compute-0 nova_compute[254819]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:13:15 compute-0 nova_compute[254819]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:13:15 compute-0 nova_compute[254819]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:13:15 compute-0 nova_compute[254819]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:13:15 compute-0 nova_compute[254819]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:13:15 compute-0 nova_compute[254819]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:13:15 compute-0 nova_compute[254819]:     <controller type="usb" index="0"/>
Dec 06 10:13:15 compute-0 nova_compute[254819]:     <memballoon model="virtio">
Dec 06 10:13:15 compute-0 nova_compute[254819]:       <stats period="10"/>
Dec 06 10:13:15 compute-0 nova_compute[254819]:     </memballoon>
Dec 06 10:13:15 compute-0 nova_compute[254819]:   </devices>
Dec 06 10:13:15 compute-0 nova_compute[254819]: </domain>
Dec 06 10:13:15 compute-0 nova_compute[254819]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
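The XML above is the complete domain nova hands to libvirt: an RBD-backed virtio root disk and a config-drive CD-ROM (both authenticating as client.openstack against the three monitors), an ethernet tap with MTU 1442 for the OVS port, a pty serial console logged to console.log, VNC graphics, virtio-rng fed from /dev/urandom, and a q35 PCIe topology with pre-allocated root ports. The hand-off itself is small; a sketch assuming libvirt-python and a domain.xml file holding the document above (nova's real path uses createWithFlags plus event handling):

    import libvirt

    with open("domain.xml") as f:   # hypothetical file with the XML above
        xml = f.read()
    conn = libvirt.open("qemu:///system")
    dom = conn.defineXML(xml)       # persist the definition
    dom.create()                    # power the guest on
    conn.close()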
Dec 06 10:13:15 compute-0 nova_compute[254819]: 2025-12-06 10:13:15.230 254824 DEBUG nova.compute.manager [None req-6d7fed1a-df52-41f6-9d32-8f1b11efac77 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: b735e225-377d-4f50-aae2-4bf5dd4eb9fa] Preparing to wait for external event network-vif-plugged-923b504a-09da-476b-a8c8-c6c76c5e8343 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 06 10:13:15 compute-0 nova_compute[254819]: 2025-12-06 10:13:15.231 254824 DEBUG oslo_concurrency.lockutils [None req-6d7fed1a-df52-41f6-9d32-8f1b11efac77 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Acquiring lock "b735e225-377d-4f50-aae2-4bf5dd4eb9fa-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:13:15 compute-0 nova_compute[254819]: 2025-12-06 10:13:15.231 254824 DEBUG oslo_concurrency.lockutils [None req-6d7fed1a-df52-41f6-9d32-8f1b11efac77 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lock "b735e225-377d-4f50-aae2-4bf5dd4eb9fa-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:13:15 compute-0 nova_compute[254819]: 2025-12-06 10:13:15.231 254824 DEBUG oslo_concurrency.lockutils [None req-6d7fed1a-df52-41f6-9d32-8f1b11efac77 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lock "b735e225-377d-4f50-aae2-4bf5dd4eb9fa-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:13:15 compute-0 nova_compute[254819]: 2025-12-06 10:13:15.232 254824 DEBUG nova.virt.libvirt.vif [None req-6d7fed1a-df52-41f6-9d32-8f1b11efac77 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T10:13:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-767347043',display_name='tempest-TestNetworkBasicOps-server-767347043',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-767347043',id=10,image_ref='9489b8a5-a798-4e26-87f9-59bb1eb2e6fd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKo7uC0irjYnKyVEGtEn/nYgythvknyTt45P5kPX1NZlUQ4NHagXOXCZs1+RjUHYK3oEDqvVo3L7WEeQEsh2SWgKD0PXaBMlx1FpXYkm1OxP+oK804aHcHmvv61DYBpjSw==',key_name='tempest-TestNetworkBasicOps-1442962553',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='92b402c8d3e2476abc98be42a1e6d34e',ramdisk_id='',reservation_id='r-rc0ojmmg',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='9489b8a5-a798-4e26-87f9-59bb1eb2e6fd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1971100882',owner_user_name='tempest-TestNetworkBasicOps-1971100882-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T10:13:09Z,user_data=None,user_id='03615580775245e6ae335ee9d785611f',uuid=b735e225-377d-4f50-aae2-4bf5dd4eb9fa,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "923b504a-09da-476b-a8c8-c6c76c5e8343", "address": "fa:16:3e:b7:ab:4e", "network": {"id": "565d9ab5-f943-4873-8a20-970fba448d46", "bridge": "br-int", "label": "tempest-network-smoke--340972836", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap923b504a-09", "ovs_interfaceid": "923b504a-09da-476b-a8c8-c6c76c5e8343", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 06 10:13:15 compute-0 nova_compute[254819]: 2025-12-06 10:13:15.233 254824 DEBUG nova.network.os_vif_util [None req-6d7fed1a-df52-41f6-9d32-8f1b11efac77 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Converting VIF {"id": "923b504a-09da-476b-a8c8-c6c76c5e8343", "address": "fa:16:3e:b7:ab:4e", "network": {"id": "565d9ab5-f943-4873-8a20-970fba448d46", "bridge": "br-int", "label": "tempest-network-smoke--340972836", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap923b504a-09", "ovs_interfaceid": "923b504a-09da-476b-a8c8-c6c76c5e8343", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 10:13:15 compute-0 nova_compute[254819]: 2025-12-06 10:13:15.233 254824 DEBUG nova.network.os_vif_util [None req-6d7fed1a-df52-41f6-9d32-8f1b11efac77 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b7:ab:4e,bridge_name='br-int',has_traffic_filtering=True,id=923b504a-09da-476b-a8c8-c6c76c5e8343,network=Network(565d9ab5-f943-4873-8a20-970fba448d46),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap923b504a-09') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 10:13:15 compute-0 nova_compute[254819]: 2025-12-06 10:13:15.234 254824 DEBUG os_vif [None req-6d7fed1a-df52-41f6-9d32-8f1b11efac77 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b7:ab:4e,bridge_name='br-int',has_traffic_filtering=True,id=923b504a-09da-476b-a8c8-c6c76c5e8343,network=Network(565d9ab5-f943-4873-8a20-970fba448d46),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap923b504a-09') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
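The convert/plug pair above is the public os-vif entry point (os_vif/__init__.py:76 in the log). A minimal standalone sketch of the same call, with UUIDs, MAC, and names copied from this log; nova builds these objects in nova.network.os_vif_util rather than by hand, so this is an illustration, not the nova code path:

    import os_vif
    from os_vif.objects.instance_info import InstanceInfo
    from os_vif.objects.network import Network
    from os_vif.objects.vif import VIFOpenVSwitch, VIFPortProfileOpenVSwitch

    os_vif.initialize()  # load the os-vif plugins ('ovs' among them)

    # Field values copied from the "Converted object VIFOpenVSwitch(...)" line.
    vif = VIFOpenVSwitch(
        id='923b504a-09da-476b-a8c8-c6c76c5e8343',
        address='fa:16:3e:b7:ab:4e',
        bridge_name='br-int',
        vif_name='tap923b504a-09',
        plugin='ovs',
        has_traffic_filtering=True,
        preserve_on_delete=False,
        network=Network(id='565d9ab5-f943-4873-8a20-970fba448d46'),
        port_profile=VIFPortProfileOpenVSwitch(
            interface_id='923b504a-09da-476b-a8c8-c6c76c5e8343'),
    )
    instance = InstanceInfo(uuid='b735e225-377d-4f50-aae2-4bf5dd4eb9fa',
                            name='instance-0000000a')  # name as seen in the machined line below

    os_vif.plug(vif, instance)  # dispatches on vif.plugin -> the OVSDB transactions that follow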
Dec 06 10:13:15 compute-0 nova_compute[254819]: 2025-12-06 10:13:15.234 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:13:15 compute-0 nova_compute[254819]: 2025-12-06 10:13:15.235 254824 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 10:13:15 compute-0 nova_compute[254819]: 2025-12-06 10:13:15.235 254824 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 10:13:15 compute-0 nova_compute[254819]: 2025-12-06 10:13:15.238 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:13:15 compute-0 nova_compute[254819]: 2025-12-06 10:13:15.239 254824 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap923b504a-09, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 10:13:15 compute-0 nova_compute[254819]: 2025-12-06 10:13:15.239 254824 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap923b504a-09, col_values=(('external_ids', {'iface-id': '923b504a-09da-476b-a8c8-c6c76c5e8343', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:b7:ab:4e', 'vm-uuid': 'b735e225-377d-4f50-aae2-4bf5dd4eb9fa'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
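The three ovsdbapp commands above (AddBridgeCommand, AddPortCommand, DbSetCommand) are transactions against the local Open_vSwitch database. A sketch of the same operations through ovsdbapp's API; the socket path is an assumption (the plugin's ovsdb_connection option decides the real endpoint):

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server('unix:/run/openvswitch/db.sock',
                                          'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

    with api.transaction(check_error=True) as txn:
        txn.add(api.add_br('br-int', may_exist=True, datapath_type='system'))
        txn.add(api.add_port('br-int', 'tap923b504a-09', may_exist=True))
        txn.add(api.db_set(
            'Interface', 'tap923b504a-09',
            ('external_ids',
             {'iface-id': '923b504a-09da-476b-a8c8-c6c76c5e8343',
              'iface-status': 'active',
              'attached-mac': 'fa:16:3e:b7:ab:4e',
              'vm-uuid': 'b735e225-377d-4f50-aae2-4bf5dd4eb9fa'})))

The "Transaction caused no change" line above is the may_exist=True path: br-int already exists, so the bridge add is a no-op.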
Dec 06 10:13:15 compute-0 nova_compute[254819]: 2025-12-06 10:13:15.240 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:13:15 compute-0 NetworkManager[48882]: <info>  [1765015995.2417] manager: (tap923b504a-09): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/69)
Dec 06 10:13:15 compute-0 nova_compute[254819]: 2025-12-06 10:13:15.243 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 10:13:15 compute-0 nova_compute[254819]: 2025-12-06 10:13:15.248 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:13:15 compute-0 nova_compute[254819]: 2025-12-06 10:13:15.249 254824 INFO os_vif [None req-6d7fed1a-df52-41f6-9d32-8f1b11efac77 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b7:ab:4e,bridge_name='br-int',has_traffic_filtering=True,id=923b504a-09da-476b-a8c8-c6c76c5e8343,network=Network(565d9ab5-f943-4873-8a20-970fba448d46),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap923b504a-09')
Dec 06 10:13:15 compute-0 nova_compute[254819]: 2025-12-06 10:13:15.290 254824 DEBUG nova.virt.libvirt.driver [None req-6d7fed1a-df52-41f6-9d32-8f1b11efac77 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 10:13:15 compute-0 nova_compute[254819]: 2025-12-06 10:13:15.290 254824 DEBUG nova.virt.libvirt.driver [None req-6d7fed1a-df52-41f6-9d32-8f1b11efac77 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 10:13:15 compute-0 nova_compute[254819]: 2025-12-06 10:13:15.291 254824 DEBUG nova.virt.libvirt.driver [None req-6d7fed1a-df52-41f6-9d32-8f1b11efac77 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] No VIF found with MAC fa:16:3e:b7:ab:4e, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 06 10:13:15 compute-0 nova_compute[254819]: 2025-12-06 10:13:15.291 254824 INFO nova.virt.libvirt.driver [None req-6d7fed1a-df52-41f6-9d32-8f1b11efac77 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: b735e225-377d-4f50-aae2-4bf5dd4eb9fa] Using config drive
Dec 06 10:13:15 compute-0 nova_compute[254819]: 2025-12-06 10:13:15.317 254824 DEBUG nova.storage.rbd_utils [None req-6d7fed1a-df52-41f6-9d32-8f1b11efac77 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] rbd image b735e225-377d-4f50-aae2-4bf5dd4eb9fa_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 10:13:15 compute-0 nova_compute[254819]: 2025-12-06 10:13:15.345 254824 DEBUG nova.network.neutron [req-57b84257-7f3a-440e-b000-a3eb14c06090 req-1cfac919-55bc-4c59-8e19-531723fe731b d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: b735e225-377d-4f50-aae2-4bf5dd4eb9fa] Updated VIF entry in instance network info cache for port 923b504a-09da-476b-a8c8-c6c76c5e8343. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 06 10:13:15 compute-0 nova_compute[254819]: 2025-12-06 10:13:15.345 254824 DEBUG nova.network.neutron [req-57b84257-7f3a-440e-b000-a3eb14c06090 req-1cfac919-55bc-4c59-8e19-531723fe731b d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: b735e225-377d-4f50-aae2-4bf5dd4eb9fa] Updating instance_info_cache with network_info: [{"id": "923b504a-09da-476b-a8c8-c6c76c5e8343", "address": "fa:16:3e:b7:ab:4e", "network": {"id": "565d9ab5-f943-4873-8a20-970fba448d46", "bridge": "br-int", "label": "tempest-network-smoke--340972836", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap923b504a-09", "ovs_interfaceid": "923b504a-09da-476b-a8c8-c6c76c5e8343", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 10:13:15 compute-0 nova_compute[254819]: 2025-12-06 10:13:15.357 254824 DEBUG oslo_concurrency.lockutils [req-57b84257-7f3a-440e-b000-a3eb14c06090 req-1cfac919-55bc-4c59-8e19-531723fe731b d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Releasing lock "refresh_cache-b735e225-377d-4f50-aae2-4bf5dd4eb9fa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 10:13:15 compute-0 ceph-mon[74327]: pgmap v964: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 06 10:13:15 compute-0 ceph-mon[74327]: from='client.? 192.168.122.100:0/1883968166' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 10:13:15 compute-0 ceph-mon[74327]: from='client.? 192.168.122.100:0/840050543' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 10:13:15 compute-0 nova_compute[254819]: 2025-12-06 10:13:15.619 254824 INFO nova.virt.libvirt.driver [None req-6d7fed1a-df52-41f6-9d32-8f1b11efac77 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: b735e225-377d-4f50-aae2-4bf5dd4eb9fa] Creating config drive at /var/lib/nova/instances/b735e225-377d-4f50-aae2-4bf5dd4eb9fa/disk.config
Dec 06 10:13:15 compute-0 nova_compute[254819]: 2025-12-06 10:13:15.630 254824 DEBUG oslo_concurrency.processutils [None req-6d7fed1a-df52-41f6-9d32-8f1b11efac77 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/b735e225-377d-4f50-aae2-4bf5dd4eb9fa/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpf5q5fp3b execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 10:13:15 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:13:15 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:13:15 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:13:15.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:13:15 compute-0 nova_compute[254819]: 2025-12-06 10:13:15.764 254824 DEBUG oslo_concurrency.processutils [None req-6d7fed1a-df52-41f6-9d32-8f1b11efac77 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/b735e225-377d-4f50-aae2-4bf5dd4eb9fa/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpf5q5fp3b" returned: 0 in 0.134s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
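The "Running cmd (subprocess)"/"returned: 0" pair is oslo.concurrency's subprocess wrapper at work. A sketch of the same invocation, with arguments copied from the log line above:

    from oslo_concurrency import processutils

    out, err = processutils.execute(
        '/usr/bin/mkisofs', '-o',
        '/var/lib/nova/instances/b735e225-377d-4f50-aae2-4bf5dd4eb9fa/disk.config',
        '-ldots', '-allow-lowercase', '-allow-multidot', '-l',
        '-publisher', 'OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9',
        '-quiet', '-J', '-r', '-V', 'config-2', '/tmp/tmpf5q5fp3b')
    # execute() raises ProcessExecutionError on a non-zero exit status,
    # so falling through here corresponds to the "returned: 0" line.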
Dec 06 10:13:15 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:13:15 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:13:15 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:13:15.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:13:15 compute-0 nova_compute[254819]: 2025-12-06 10:13:15.800 254824 DEBUG nova.storage.rbd_utils [None req-6d7fed1a-df52-41f6-9d32-8f1b11efac77 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] rbd image b735e225-377d-4f50-aae2-4bf5dd4eb9fa_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 10:13:15 compute-0 nova_compute[254819]: 2025-12-06 10:13:15.804 254824 DEBUG oslo_concurrency.processutils [None req-6d7fed1a-df52-41f6-9d32-8f1b11efac77 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/b735e225-377d-4f50-aae2-4bf5dd4eb9fa/disk.config b735e225-377d-4f50-aae2-4bf5dd4eb9fa_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 10:13:15 compute-0 nova_compute[254819]: 2025-12-06 10:13:15.984 254824 DEBUG oslo_concurrency.processutils [None req-6d7fed1a-df52-41f6-9d32-8f1b11efac77 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/b735e225-377d-4f50-aae2-4bf5dd4eb9fa/disk.config b735e225-377d-4f50-aae2-4bf5dd4eb9fa_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.181s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 10:13:15 compute-0 nova_compute[254819]: 2025-12-06 10:13:15.985 254824 INFO nova.virt.libvirt.driver [None req-6d7fed1a-df52-41f6-9d32-8f1b11efac77 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: b735e225-377d-4f50-aae2-4bf5dd4eb9fa] Deleting local config drive /var/lib/nova/instances/b735e225-377d-4f50-aae2-4bf5dd4eb9fa/disk.config because it was imported into RBD.
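The config-drive flow above is: probe whether the RBD image exists, generate the ISO locally, "rbd import" it into the vms pool, then delete the local file. A sketch of the existence probe with the python-rbd bindings, assuming the same pool, user, and conf file as the logged CLI command (the import itself is the "rbd import" call shown above):

    import rados
    import rbd

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf', rados_id='openstack')
    cluster.connect()
    ioctx = cluster.open_ioctx('vms')
    try:
        # Opening the image is the cheapest existence test.
        rbd.Image(ioctx, 'b735e225-377d-4f50-aae2-4bf5dd4eb9fa_disk.config').close()
        print('image exists')
    except rbd.ImageNotFound:
        # Matches the "rbd image ..._disk.config does not exist" lines above.
        print('image missing; fall back to "rbd import"')
    finally:
        ioctx.close()
        cluster.shutdown()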
Dec 06 10:13:16 compute-0 kernel: tap923b504a-09: entered promiscuous mode
Dec 06 10:13:16 compute-0 NetworkManager[48882]: <info>  [1765015996.0476] manager: (tap923b504a-09): new Tun device (/org/freedesktop/NetworkManager/Devices/70)
Dec 06 10:13:16 compute-0 ovn_controller[152417]: 2025-12-06T10:13:16Z|00109|binding|INFO|Claiming lport 923b504a-09da-476b-a8c8-c6c76c5e8343 for this chassis.
Dec 06 10:13:16 compute-0 ovn_controller[152417]: 2025-12-06T10:13:16Z|00110|binding|INFO|923b504a-09da-476b-a8c8-c6c76c5e8343: Claiming fa:16:3e:b7:ab:4e 10.100.0.5
Dec 06 10:13:16 compute-0 nova_compute[254819]: 2025-12-06 10:13:16.066 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:13:16 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:13:16.081 162267 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b7:ab:4e 10.100.0.5'], port_security=['fa:16:3e:b7:ab:4e 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'b735e225-377d-4f50-aae2-4bf5dd4eb9fa', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-565d9ab5-f943-4873-8a20-970fba448d46', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '92b402c8d3e2476abc98be42a1e6d34e', 'neutron:revision_number': '2', 'neutron:security_group_ids': '07ac2c97-c1ea-402b-a4af-4b99fec7720e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=86948586-50a4-4571-ad91-ae78b72ed8de, chassis=[<ovs.db.idl.Row object at 0x7f70c28558b0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f70c28558b0>], logical_port=923b504a-09da-476b-a8c8-c6c76c5e8343) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 10:13:16 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:13:16.082 162267 INFO neutron.agent.ovn.metadata.agent [-] Port 923b504a-09da-476b-a8c8-c6c76c5e8343 in datapath 565d9ab5-f943-4873-8a20-970fba448d46 bound to our chassis
Dec 06 10:13:16 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:13:16.083 162267 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 565d9ab5-f943-4873-8a20-970fba448d46
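The "Matched UPDATE: PortBindingUpdatedEvent" line above is ovsdbapp's event machinery firing on the Southbound Port_Binding row. A simplified sketch of such an event class, patterned after (but not identical to) neutron's; the match_fn body and chassis test are assumptions made for illustration:

    from ovsdbapp.backend.ovs_idl import event as row_event

    class PortBindingUpdatedEvent(row_event.RowEvent):
        def __init__(self):
            # Watch updates on the Southbound Port_Binding table.
            super().__init__((self.ROW_UPDATE,), 'Port_Binding', None)

        def match_fn(self, event, row, old):
            # Fire only when the port has just been bound to a chassis;
            # old=Port_Binding(chassis=[]) in the log means chassis changed.
            return bool(row.chassis) and not getattr(old, 'chassis', None)

        def run(self, event, row, old):
            print('Port %s bound to our chassis' % row.logical_port)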
Dec 06 10:13:16 compute-0 systemd-udevd[274001]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 10:13:16 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:13:16.099 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[8a7ffb27-081d-4714-93ca-5438ab16999c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:13:16 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:13:16.100 162267 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap565d9ab5-f1 in ovnmeta-565d9ab5-f943-4873-8a20-970fba448d46 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 06 10:13:16 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:13:16.102 260126 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap565d9ab5-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 06 10:13:16 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:13:16.102 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[6d535c95-1ad6-4716-bf42-4bc3b1fc978c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:13:16 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:13:16.103 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[e0f7ea15-171d-444c-a516-b9d961ffafb8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:13:16 compute-0 NetworkManager[48882]: <info>  [1765015996.1098] device (tap923b504a-09): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 06 10:13:16 compute-0 NetworkManager[48882]: <info>  [1765015996.1110] device (tap923b504a-09): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 06 10:13:16 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:13:16.116 162385 DEBUG oslo.privsep.daemon [-] privsep: reply[05555532-ed56-4482-8abe-b200a48379f8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:13:16 compute-0 systemd-machined[216202]: New machine qemu-7-instance-0000000a.
Dec 06 10:13:16 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:13:16.147 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[f9ed4a6a-cb09-46ec-9db2-e35f79ef3b29]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:13:16 compute-0 nova_compute[254819]: 2025-12-06 10:13:16.149 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:13:16 compute-0 systemd[1]: Started Virtual Machine qemu-7-instance-0000000a.
Dec 06 10:13:16 compute-0 ovn_controller[152417]: 2025-12-06T10:13:16Z|00111|binding|INFO|Setting lport 923b504a-09da-476b-a8c8-c6c76c5e8343 ovn-installed in OVS
Dec 06 10:13:16 compute-0 ovn_controller[152417]: 2025-12-06T10:13:16Z|00112|binding|INFO|Setting lport 923b504a-09da-476b-a8c8-c6c76c5e8343 up in Southbound
Dec 06 10:13:16 compute-0 nova_compute[254819]: 2025-12-06 10:13:16.156 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:13:16 compute-0 nova_compute[254819]: 2025-12-06 10:13:16.165 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:13:16 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:13:16.177 260145 DEBUG oslo.privsep.daemon [-] privsep: reply[a61a9882-11d7-4fd0-9b34-538edd075b3b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:13:16 compute-0 NetworkManager[48882]: <info>  [1765015996.1871] manager: (tap565d9ab5-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/71)
Dec 06 10:13:16 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:13:16.186 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[f3f5dd67-29be-4517-bfb7-0723f6b2f87b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:13:16 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v965: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 06 10:13:16 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:13:16.238 260145 DEBUG oslo.privsep.daemon [-] privsep: reply[2f504f34-0d07-4502-ad5a-da85fe319690]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:13:16 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:13:16.242 260145 DEBUG oslo.privsep.daemon [-] privsep: reply[2a120d21-462a-46ae-a391-4cc3a6c0dda5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:13:16 compute-0 NetworkManager[48882]: <info>  [1765015996.2730] device (tap565d9ab5-f0): carrier: link connected
Dec 06 10:13:16 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:13:16.281 260145 DEBUG oslo.privsep.daemon [-] privsep: reply[649d8d65-36de-420e-9f4a-8dd7964ad300]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:13:16 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:13:16.302 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[0f7da3e5-1c80-4eed-9ba4-538372c97d0d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap565d9ab5-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d4:2f:30'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 35], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 436675, 'reachable_time': 42843, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 274037, 'error': None, 'target': 'ovnmeta-565d9ab5-f943-4873-8a20-970fba448d46', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:13:16 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:13:16.319 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[3962fbba-b32b-4ac3-8811-834635a53b63]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fed4:2f30'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 436675, 'tstamp': 436675}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 274038, 'error': None, 'target': 'ovnmeta-565d9ab5-f943-4873-8a20-970fba448d46', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:13:16 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:13:16.338 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[9fb912b4-ce20-4ceb-9d11-5de09cd6da78]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap565d9ab5-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d4:2f:30'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 35], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 436675, 'reachable_time': 42843, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 274039, 'error': None, 'target': 'ovnmeta-565d9ab5-f943-4873-8a20-970fba448d46', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:13:16 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:13:16.363 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[89caefdd-2acd-4847-9c94-e28f8137fbc7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
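The privsep replies above trace the namespace provisioning: create the ovnmeta- namespace, create a veth pair, keep tap565d9ab5-f0 in the root namespace for br-int, and move tap565d9ab5-f1 inside. A sketch of those steps with pyroute2, which neutron's privileged ip_lib wraps; error handling and the address/sysctl setup are omitted:

    from pyroute2 import IPRoute, netns

    ns = 'ovnmeta-565d9ab5-f943-4873-8a20-970fba448d46'
    netns.create(ns)

    ipr = IPRoute()
    # One end stays in the root namespace; the peer starts inside the netns.
    ipr.link('add', ifname='tap565d9ab5-f0', kind='veth',
             peer={'ifname': 'tap565d9ab5-f1', 'net_ns_fd': ns})
    idx = ipr.link_lookup(ifname='tap565d9ab5-f0')[0]
    ipr.link('set', index=idx, state='up')
    ipr.close()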
Dec 06 10:13:16 compute-0 nova_compute[254819]: 2025-12-06 10:13:16.438 254824 DEBUG nova.compute.manager [req-d874fb07-ad3c-453c-9bdb-cd4ddc135393 req-8506cc0e-ce53-4b50-bb10-9dec860efed5 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: b735e225-377d-4f50-aae2-4bf5dd4eb9fa] Received event network-vif-plugged-923b504a-09da-476b-a8c8-c6c76c5e8343 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 10:13:16 compute-0 nova_compute[254819]: 2025-12-06 10:13:16.439 254824 DEBUG oslo_concurrency.lockutils [req-d874fb07-ad3c-453c-9bdb-cd4ddc135393 req-8506cc0e-ce53-4b50-bb10-9dec860efed5 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Acquiring lock "b735e225-377d-4f50-aae2-4bf5dd4eb9fa-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:13:16 compute-0 nova_compute[254819]: 2025-12-06 10:13:16.440 254824 DEBUG oslo_concurrency.lockutils [req-d874fb07-ad3c-453c-9bdb-cd4ddc135393 req-8506cc0e-ce53-4b50-bb10-9dec860efed5 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Lock "b735e225-377d-4f50-aae2-4bf5dd4eb9fa-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:13:16 compute-0 nova_compute[254819]: 2025-12-06 10:13:16.440 254824 DEBUG oslo_concurrency.lockutils [req-d874fb07-ad3c-453c-9bdb-cd4ddc135393 req-8506cc0e-ce53-4b50-bb10-9dec860efed5 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Lock "b735e225-377d-4f50-aae2-4bf5dd4eb9fa-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
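The Acquiring/acquired/released trio around the per-instance event queue is oslo.concurrency's named-lock pattern. A minimal sketch with the lock name from this log; the queue manipulation inside is nova-internal and elided:

    from oslo_concurrency import lockutils

    with lockutils.lock('b735e225-377d-4f50-aae2-4bf5dd4eb9fa-events'):
        # pop the pending network-vif-plugged event for this instance
        pass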
Dec 06 10:13:16 compute-0 nova_compute[254819]: 2025-12-06 10:13:16.441 254824 DEBUG nova.compute.manager [req-d874fb07-ad3c-453c-9bdb-cd4ddc135393 req-8506cc0e-ce53-4b50-bb10-9dec860efed5 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: b735e225-377d-4f50-aae2-4bf5dd4eb9fa] Processing event network-vif-plugged-923b504a-09da-476b-a8c8-c6c76c5e8343 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 06 10:13:16 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:13:16.454 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[17b23f74-7bde-47f7-aafc-614a3c8ba420]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:13:16 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:13:16.456 162267 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap565d9ab5-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 10:13:16 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:13:16.456 162267 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 10:13:16 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:13:16.456 162267 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap565d9ab5-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 10:13:16 compute-0 kernel: tap565d9ab5-f0: entered promiscuous mode
Dec 06 10:13:16 compute-0 NetworkManager[48882]: <info>  [1765015996.4598] manager: (tap565d9ab5-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/72)
Dec 06 10:13:16 compute-0 nova_compute[254819]: 2025-12-06 10:13:16.458 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:13:16 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:13:16.462 162267 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap565d9ab5-f0, col_values=(('external_ids', {'iface-id': '6aa255c1-2a72-4002-8ac0-9542a75d99f5'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 10:13:16 compute-0 ovn_controller[152417]: 2025-12-06T10:13:16Z|00113|binding|INFO|Releasing lport 6aa255c1-2a72-4002-8ac0-9542a75d99f5 from this chassis (sb_readonly=0)
Dec 06 10:13:16 compute-0 nova_compute[254819]: 2025-12-06 10:13:16.464 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:13:16 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:13:16.466 162267 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/565d9ab5-f943-4873-8a20-970fba448d46.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/565d9ab5-f943-4873-8a20-970fba448d46.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 06 10:13:16 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:13:16.467 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[c5592908-9f74-472f-9999-e624c8329ddd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:13:16 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:13:16.467 162267 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 06 10:13:16 compute-0 ovn_metadata_agent[162262]: global
Dec 06 10:13:16 compute-0 ovn_metadata_agent[162262]:     log         /dev/log local0 debug
Dec 06 10:13:16 compute-0 ovn_metadata_agent[162262]:     log-tag     haproxy-metadata-proxy-565d9ab5-f943-4873-8a20-970fba448d46
Dec 06 10:13:16 compute-0 ovn_metadata_agent[162262]:     user        root
Dec 06 10:13:16 compute-0 ovn_metadata_agent[162262]:     group       root
Dec 06 10:13:16 compute-0 ovn_metadata_agent[162262]:     maxconn     1024
Dec 06 10:13:16 compute-0 ovn_metadata_agent[162262]:     pidfile     /var/lib/neutron/external/pids/565d9ab5-f943-4873-8a20-970fba448d46.pid.haproxy
Dec 06 10:13:16 compute-0 ovn_metadata_agent[162262]:     daemon
Dec 06 10:13:16 compute-0 ovn_metadata_agent[162262]: 
Dec 06 10:13:16 compute-0 ovn_metadata_agent[162262]: defaults
Dec 06 10:13:16 compute-0 ovn_metadata_agent[162262]:     log global
Dec 06 10:13:16 compute-0 ovn_metadata_agent[162262]:     mode http
Dec 06 10:13:16 compute-0 ovn_metadata_agent[162262]:     option httplog
Dec 06 10:13:16 compute-0 ovn_metadata_agent[162262]:     option dontlognull
Dec 06 10:13:16 compute-0 ovn_metadata_agent[162262]:     option http-server-close
Dec 06 10:13:16 compute-0 ovn_metadata_agent[162262]:     option forwardfor
Dec 06 10:13:16 compute-0 ovn_metadata_agent[162262]:     retries                 3
Dec 06 10:13:16 compute-0 ovn_metadata_agent[162262]:     timeout http-request    30s
Dec 06 10:13:16 compute-0 ovn_metadata_agent[162262]:     timeout connect         30s
Dec 06 10:13:16 compute-0 ovn_metadata_agent[162262]:     timeout client          32s
Dec 06 10:13:16 compute-0 ovn_metadata_agent[162262]:     timeout server          32s
Dec 06 10:13:16 compute-0 ovn_metadata_agent[162262]:     timeout http-keep-alive 30s
Dec 06 10:13:16 compute-0 ovn_metadata_agent[162262]: 
Dec 06 10:13:16 compute-0 ovn_metadata_agent[162262]: 
Dec 06 10:13:16 compute-0 ovn_metadata_agent[162262]: listen listener
Dec 06 10:13:16 compute-0 ovn_metadata_agent[162262]:     bind 169.254.169.254:80
Dec 06 10:13:16 compute-0 ovn_metadata_agent[162262]:     server metadata /var/lib/neutron/metadata_proxy
Dec 06 10:13:16 compute-0 ovn_metadata_agent[162262]:     http-request add-header X-OVN-Network-ID 565d9ab5-f943-4873-8a20-970fba448d46
Dec 06 10:13:16 compute-0 ovn_metadata_agent[162262]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 06 10:13:16 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:13:16.469 162267 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-565d9ab5-f943-4873-8a20-970fba448d46', 'env', 'PROCESS_TAG=haproxy-565d9ab5-f943-4873-8a20-970fba448d46', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/565d9ab5-f943-4873-8a20-970fba448d46.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
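The rootwrap command above starts haproxy inside the ovnmeta- namespace with the config just rendered; the "server metadata /var/lib/neutron/metadata_proxy" line in that config points it at the metadata agent's UNIX socket. Stripped of rootwrap, the launch reduces to the following sketch (root privileges assumed):

    import subprocess

    ns = 'ovnmeta-565d9ab5-f943-4873-8a20-970fba448d46'
    conf = ('/var/lib/neutron/ovn-metadata-proxy/'
            '565d9ab5-f943-4873-8a20-970fba448d46.conf')
    subprocess.run(
        ['ip', 'netns', 'exec', ns,
         'env', 'PROCESS_TAG=haproxy-565d9ab5-f943-4873-8a20-970fba448d46',
         'haproxy', '-f', conf],
        check=True)  # haproxy backgrounds itself per the 'daemon' directive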
Dec 06 10:13:16 compute-0 nova_compute[254819]: 2025-12-06 10:13:16.480 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:13:16 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:13:16 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65e8004790 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:13:16 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:13:16 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65e4003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:13:16 compute-0 nova_compute[254819]: 2025-12-06 10:13:16.825 254824 DEBUG nova.virt.driver [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] Emitting event <LifecycleEvent: 1765015996.8246639, b735e225-377d-4f50-aae2-4bf5dd4eb9fa => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 10:13:16 compute-0 nova_compute[254819]: 2025-12-06 10:13:16.826 254824 INFO nova.compute.manager [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] [instance: b735e225-377d-4f50-aae2-4bf5dd4eb9fa] VM Started (Lifecycle Event)
Dec 06 10:13:16 compute-0 nova_compute[254819]: 2025-12-06 10:13:16.830 254824 DEBUG nova.compute.manager [None req-6d7fed1a-df52-41f6-9d32-8f1b11efac77 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: b735e225-377d-4f50-aae2-4bf5dd4eb9fa] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 06 10:13:16 compute-0 nova_compute[254819]: 2025-12-06 10:13:16.835 254824 DEBUG nova.virt.libvirt.driver [None req-6d7fed1a-df52-41f6-9d32-8f1b11efac77 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: b735e225-377d-4f50-aae2-4bf5dd4eb9fa] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 06 10:13:16 compute-0 nova_compute[254819]: 2025-12-06 10:13:16.839 254824 INFO nova.virt.libvirt.driver [-] [instance: b735e225-377d-4f50-aae2-4bf5dd4eb9fa] Instance spawned successfully.
Dec 06 10:13:16 compute-0 nova_compute[254819]: 2025-12-06 10:13:16.840 254824 DEBUG nova.virt.libvirt.driver [None req-6d7fed1a-df52-41f6-9d32-8f1b11efac77 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: b735e225-377d-4f50-aae2-4bf5dd4eb9fa] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 06 10:13:16 compute-0 nova_compute[254819]: 2025-12-06 10:13:16.847 254824 DEBUG nova.compute.manager [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] [instance: b735e225-377d-4f50-aae2-4bf5dd4eb9fa] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 10:13:16 compute-0 nova_compute[254819]: 2025-12-06 10:13:16.852 254824 DEBUG nova.compute.manager [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] [instance: b735e225-377d-4f50-aae2-4bf5dd4eb9fa] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 10:13:16 compute-0 podman[274111]: 2025-12-06 10:13:16.864307135 +0000 UTC m=+0.079468234 container create 46a8115050f94a8e3d27c3de8ef0f2e8245cf9e24d6519fe546e7723bdb02128 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=neutron-haproxy-ovnmeta-565d9ab5-f943-4873-8a20-970fba448d46, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 10:13:16 compute-0 nova_compute[254819]: 2025-12-06 10:13:16.873 254824 DEBUG nova.virt.libvirt.driver [None req-6d7fed1a-df52-41f6-9d32-8f1b11efac77 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: b735e225-377d-4f50-aae2-4bf5dd4eb9fa] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 10:13:16 compute-0 nova_compute[254819]: 2025-12-06 10:13:16.874 254824 DEBUG nova.virt.libvirt.driver [None req-6d7fed1a-df52-41f6-9d32-8f1b11efac77 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: b735e225-377d-4f50-aae2-4bf5dd4eb9fa] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 10:13:16 compute-0 nova_compute[254819]: 2025-12-06 10:13:16.875 254824 DEBUG nova.virt.libvirt.driver [None req-6d7fed1a-df52-41f6-9d32-8f1b11efac77 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: b735e225-377d-4f50-aae2-4bf5dd4eb9fa] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 10:13:16 compute-0 nova_compute[254819]: 2025-12-06 10:13:16.875 254824 DEBUG nova.virt.libvirt.driver [None req-6d7fed1a-df52-41f6-9d32-8f1b11efac77 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: b735e225-377d-4f50-aae2-4bf5dd4eb9fa] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 10:13:16 compute-0 nova_compute[254819]: 2025-12-06 10:13:16.876 254824 DEBUG nova.virt.libvirt.driver [None req-6d7fed1a-df52-41f6-9d32-8f1b11efac77 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: b735e225-377d-4f50-aae2-4bf5dd4eb9fa] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 10:13:16 compute-0 nova_compute[254819]: 2025-12-06 10:13:16.876 254824 DEBUG nova.virt.libvirt.driver [None req-6d7fed1a-df52-41f6-9d32-8f1b11efac77 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: b735e225-377d-4f50-aae2-4bf5dd4eb9fa] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 10:13:16 compute-0 nova_compute[254819]: 2025-12-06 10:13:16.881 254824 INFO nova.compute.manager [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] [instance: b735e225-377d-4f50-aae2-4bf5dd4eb9fa] During sync_power_state the instance has a pending task (spawning). Skip.
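The two sync messages above (manager.py:1396 and the "Skip.") compress to a small rule: a pending task wins over power-state reconciliation. A hedged paraphrase of that decision, not nova's actual code:

    def sync_power_state(instance, vm_power_state):
        # While a task such as 'spawning' is in flight, the DB state is
        # expected to lag the hypervisor -- do nothing.
        if instance.task_state is not None:
            return 'skip'
        # Otherwise make the database catch up with the hypervisor view
        # (DB power_state 0/NOSTATE vs VM power_state 1/RUNNING above).
        if instance.power_state != vm_power_state:
            instance.power_state = vm_power_state
            instance.save()
        return 'synced'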
Dec 06 10:13:16 compute-0 nova_compute[254819]: 2025-12-06 10:13:16.882 254824 DEBUG nova.virt.driver [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] Emitting event <LifecycleEvent: 1765015996.8252559, b735e225-377d-4f50-aae2-4bf5dd4eb9fa => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 10:13:16 compute-0 nova_compute[254819]: 2025-12-06 10:13:16.882 254824 INFO nova.compute.manager [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] [instance: b735e225-377d-4f50-aae2-4bf5dd4eb9fa] VM Paused (Lifecycle Event)
Dec 06 10:13:16 compute-0 podman[274111]: 2025-12-06 10:13:16.827949893 +0000 UTC m=+0.043111022 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3
Dec 06 10:13:16 compute-0 nova_compute[254819]: 2025-12-06 10:13:16.919 254824 DEBUG nova.compute.manager [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] [instance: b735e225-377d-4f50-aae2-4bf5dd4eb9fa] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 10:13:16 compute-0 nova_compute[254819]: 2025-12-06 10:13:16.926 254824 DEBUG nova.virt.driver [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] Emitting event <LifecycleEvent: 1765015996.8341808, b735e225-377d-4f50-aae2-4bf5dd4eb9fa => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 10:13:16 compute-0 nova_compute[254819]: 2025-12-06 10:13:16.927 254824 INFO nova.compute.manager [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] [instance: b735e225-377d-4f50-aae2-4bf5dd4eb9fa] VM Resumed (Lifecycle Event)
Dec 06 10:13:16 compute-0 systemd[1]: Started libpod-conmon-46a8115050f94a8e3d27c3de8ef0f2e8245cf9e24d6519fe546e7723bdb02128.scope.
Dec 06 10:13:16 compute-0 nova_compute[254819]: 2025-12-06 10:13:16.955 254824 DEBUG nova.compute.manager [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] [instance: b735e225-377d-4f50-aae2-4bf5dd4eb9fa] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 10:13:16 compute-0 nova_compute[254819]: 2025-12-06 10:13:16.962 254824 DEBUG nova.compute.manager [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] [instance: b735e225-377d-4f50-aae2-4bf5dd4eb9fa] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 10:13:16 compute-0 systemd[1]: Started libcrun container.
Dec 06 10:13:16 compute-0 nova_compute[254819]: 2025-12-06 10:13:16.970 254824 INFO nova.compute.manager [None req-6d7fed1a-df52-41f6-9d32-8f1b11efac77 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: b735e225-377d-4f50-aae2-4bf5dd4eb9fa] Took 7.38 seconds to spawn the instance on the hypervisor.
Dec 06 10:13:16 compute-0 nova_compute[254819]: 2025-12-06 10:13:16.970 254824 DEBUG nova.compute.manager [None req-6d7fed1a-df52-41f6-9d32-8f1b11efac77 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: b735e225-377d-4f50-aae2-4bf5dd4eb9fa] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 10:13:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9928877ca56a0e966d3eea9b89794c7d2e32547dafcfc2eff997d385c12891b6/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 06 10:13:16 compute-0 nova_compute[254819]: 2025-12-06 10:13:16.988 254824 INFO nova.compute.manager [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] [instance: b735e225-377d-4f50-aae2-4bf5dd4eb9fa] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 06 10:13:16 compute-0 podman[274111]: 2025-12-06 10:13:16.998376668 +0000 UTC m=+0.213537867 container init 46a8115050f94a8e3d27c3de8ef0f2e8245cf9e24d6519fe546e7723bdb02128 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=neutron-haproxy-ovnmeta-565d9ab5-f943-4873-8a20-970fba448d46, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Dec 06 10:13:17 compute-0 podman[274111]: 2025-12-06 10:13:17.004138912 +0000 UTC m=+0.219300051 container start 46a8115050f94a8e3d27c3de8ef0f2e8245cf9e24d6519fe546e7723bdb02128 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=neutron-haproxy-ovnmeta-565d9ab5-f943-4873-8a20-970fba448d46, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
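podman logged the create/init/start trio for the haproxy wrapper container. One way to verify the result from the host, using only standard podman CLI; the container name is taken from the log:

    import subprocess

    name = 'neutron-haproxy-ovnmeta-565d9ab5-f943-4873-8a20-970fba448d46'
    status = subprocess.run(
        ['podman', 'inspect', '--format', '{{.State.Status}}', name],
        capture_output=True, text=True, check=True).stdout.strip()
    print(status)  # 'running' once the start event above has completed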
Dec 06 10:13:17 compute-0 neutron-haproxy-ovnmeta-565d9ab5-f943-4873-8a20-970fba448d46[274127]: [NOTICE]   (274131) : New worker (274133) forked
Dec 06 10:13:17 compute-0 neutron-haproxy-ovnmeta-565d9ab5-f943-4873-8a20-970fba448d46[274127]: [NOTICE]   (274131) : Loading success.
Dec 06 10:13:17 compute-0 nova_compute[254819]: 2025-12-06 10:13:17.051 254824 INFO nova.compute.manager [None req-6d7fed1a-df52-41f6-9d32-8f1b11efac77 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: b735e225-377d-4f50-aae2-4bf5dd4eb9fa] Took 8.29 seconds to build instance.
Dec 06 10:13:17 compute-0 nova_compute[254819]: 2025-12-06 10:13:17.071 254824 DEBUG oslo_concurrency.lockutils [None req-6d7fed1a-df52-41f6-9d32-8f1b11efac77 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lock "b735e225-377d-4f50-aae2-4bf5dd4eb9fa" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.361s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:13:17 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:13:17 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c00a8c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:13:17 compute-0 ceph-mon[74327]: pgmap v965: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 06 10:13:17 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:13:17.648Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec 06 10:13:17 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:13:17.649Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 06 10:13:17 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:13:17 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:13:17 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:13:17.731 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:13:17 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:13:17 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:13:17 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:13:17.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
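The recurring anonymous "HEAD / HTTP/1.0" pairs from 192.168.122.100 and 192.168.122.102, arriving every two seconds and answered with 200, have the cadence of load-balancer health probes against radosgw — an inference from the pattern, not something the log states. A minimal probe of the same shape, assuming radosgw listens on port 8080 on this host (the listening port does not appear in the log):

import http.client

def probe(host, port=8080, timeout=2.0):
    """Send the same anonymous HEAD / the log shows and report the status."""
    conn = http.client.HTTPConnection(host, port, timeout=timeout)
    try:
        conn.request("HEAD", "/")   # http.client speaks HTTP/1.1; the logged
        resp = conn.getresponse()   # probes use HTTP/1.0 with the same meaning
        return resp.status          # 200 means the gateway is answering
    finally:
        conn.close()

print(probe("192.168.122.100"))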
Dec 06 10:13:18 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v966: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.3 MiB/s rd, 1.8 MiB/s wr, 81 op/s
Dec 06 10:13:18 compute-0 nova_compute[254819]: 2025-12-06 10:13:18.511 254824 DEBUG nova.compute.manager [req-8e3dfe6b-c681-434b-a8e8-830b9512cde0 req-9503adb6-3fdf-40f3-8bbd-a77cd09db5b4 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: b735e225-377d-4f50-aae2-4bf5dd4eb9fa] Received event network-vif-plugged-923b504a-09da-476b-a8c8-c6c76c5e8343 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 10:13:18 compute-0 nova_compute[254819]: 2025-12-06 10:13:18.511 254824 DEBUG oslo_concurrency.lockutils [req-8e3dfe6b-c681-434b-a8e8-830b9512cde0 req-9503adb6-3fdf-40f3-8bbd-a77cd09db5b4 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Acquiring lock "b735e225-377d-4f50-aae2-4bf5dd4eb9fa-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:13:18 compute-0 nova_compute[254819]: 2025-12-06 10:13:18.512 254824 DEBUG oslo_concurrency.lockutils [req-8e3dfe6b-c681-434b-a8e8-830b9512cde0 req-9503adb6-3fdf-40f3-8bbd-a77cd09db5b4 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Lock "b735e225-377d-4f50-aae2-4bf5dd4eb9fa-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:13:18 compute-0 nova_compute[254819]: 2025-12-06 10:13:18.512 254824 DEBUG oslo_concurrency.lockutils [req-8e3dfe6b-c681-434b-a8e8-830b9512cde0 req-9503adb6-3fdf-40f3-8bbd-a77cd09db5b4 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Lock "b735e225-377d-4f50-aae2-4bf5dd4eb9fa-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:13:18 compute-0 nova_compute[254819]: 2025-12-06 10:13:18.512 254824 DEBUG nova.compute.manager [req-8e3dfe6b-c681-434b-a8e8-830b9512cde0 req-9503adb6-3fdf-40f3-8bbd-a77cd09db5b4 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: b735e225-377d-4f50-aae2-4bf5dd4eb9fa] No waiting events found dispatching network-vif-plugged-923b504a-09da-476b-a8c8-c6c76c5e8343 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 10:13:18 compute-0 nova_compute[254819]: 2025-12-06 10:13:18.512 254824 WARNING nova.compute.manager [req-8e3dfe6b-c681-434b-a8e8-830b9512cde0 req-9503adb6-3fdf-40f3-8bbd-a77cd09db5b4 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: b735e225-377d-4f50-aae2-4bf5dd4eb9fa] Received unexpected event network-vif-plugged-923b504a-09da-476b-a8c8-c6c76c5e8343 for instance with vm_state active and task_state None.
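The three lockutils lines and the WARNING above trace nova's external-event dispatch: when Neutron reports network-vif-plugged, the compute manager pops the waiter registered for that event under the instance's -events lock; if no waiter exists (here the instance already reached vm_state active), the event is logged as unexpected and dropped. A rough stdlib sketch of that pop-or-warn pattern — illustrative only, not nova's code:

import threading

class InstanceEvents:
    """Toy version of the pop-or-warn dispatch seen in the log above."""
    def __init__(self):
        self._lock = threading.Lock()
        self._waiters = {}  # (instance_uuid, event_name) -> threading.Event

    def prepare(self, instance_uuid, event_name):
        ev = threading.Event()
        with self._lock:
            self._waiters[(instance_uuid, event_name)] = ev
        return ev           # spawner blocks on ev.wait() until the VIF plugs

    def pop_instance_event(self, instance_uuid, event_name):
        with self._lock:
            ev = self._waiters.pop((instance_uuid, event_name), None)
        if ev is None:      # nobody waiting -> the WARNING in the log above
            print(f"Received unexpected event {event_name} "
                  f"for instance {instance_uuid}")
            return
        ev.set()

events = InstanceEvents()
events.pop_instance_event(
    "b735e225-377d-4f50-aae2-4bf5dd4eb9fa",
    "network-vif-plugged-923b504a-09da-476b-a8c8-c6c76c5e8343")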
Dec 06 10:13:18 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:13:18 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f40041d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:13:18 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:13:18 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65e8004790 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:13:19 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:13:19.026Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec 06 10:13:19 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:13:19.027Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec 06 10:13:19 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:13:19 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65e4003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:13:19 compute-0 ceph-mon[74327]: pgmap v966: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.3 MiB/s rd, 1.8 MiB/s wr, 81 op/s
Dec 06 10:13:19 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:13:19 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:13:19 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:13:19.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:13:19 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:13:19 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 10:13:19 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:13:19.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 10:13:19 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:13:19 compute-0 nova_compute[254819]: 2025-12-06 10:13:19.919 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:13:19 compute-0 NetworkManager[48882]: <info>  [1765015999.9215] manager: (patch-provnet-c81e973e-7ff9-4cd2-9994-daf87649321f-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/73)
Dec 06 10:13:19 compute-0 NetworkManager[48882]: <info>  [1765015999.9226] manager: (patch-br-int-to-provnet-c81e973e-7ff9-4cd2-9994-daf87649321f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/74)
Dec 06 10:13:19 compute-0 ovn_controller[152417]: 2025-12-06T10:13:19Z|00114|binding|INFO|Releasing lport 6aa255c1-2a72-4002-8ac0-9542a75d99f5 from this chassis (sb_readonly=0)
Dec 06 10:13:19 compute-0 nova_compute[254819]: 2025-12-06 10:13:19.955 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:13:19 compute-0 ovn_controller[152417]: 2025-12-06T10:13:19Z|00115|binding|INFO|Releasing lport 6aa255c1-2a72-4002-8ac0-9542a75d99f5 from this chassis (sb_readonly=0)
Dec 06 10:13:19 compute-0 nova_compute[254819]: 2025-12-06 10:13:19.959 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:13:20 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v967: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.3 MiB/s rd, 1.8 MiB/s wr, 81 op/s
Dec 06 10:13:20 compute-0 nova_compute[254819]: 2025-12-06 10:13:20.241 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:13:20 compute-0 nova_compute[254819]: 2025-12-06 10:13:20.329 254824 DEBUG nova.compute.manager [req-709e3360-4f54-4f28-a4cd-62d5a88e3949 req-30d4c469-f1a6-460c-bb27-5469f8f3c2dd d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: b735e225-377d-4f50-aae2-4bf5dd4eb9fa] Received event network-changed-923b504a-09da-476b-a8c8-c6c76c5e8343 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 10:13:20 compute-0 nova_compute[254819]: 2025-12-06 10:13:20.329 254824 DEBUG nova.compute.manager [req-709e3360-4f54-4f28-a4cd-62d5a88e3949 req-30d4c469-f1a6-460c-bb27-5469f8f3c2dd d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: b735e225-377d-4f50-aae2-4bf5dd4eb9fa] Refreshing instance network info cache due to event network-changed-923b504a-09da-476b-a8c8-c6c76c5e8343. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 10:13:20 compute-0 nova_compute[254819]: 2025-12-06 10:13:20.330 254824 DEBUG oslo_concurrency.lockutils [req-709e3360-4f54-4f28-a4cd-62d5a88e3949 req-30d4c469-f1a6-460c-bb27-5469f8f3c2dd d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Acquiring lock "refresh_cache-b735e225-377d-4f50-aae2-4bf5dd4eb9fa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 10:13:20 compute-0 nova_compute[254819]: 2025-12-06 10:13:20.330 254824 DEBUG oslo_concurrency.lockutils [req-709e3360-4f54-4f28-a4cd-62d5a88e3949 req-30d4c469-f1a6-460c-bb27-5469f8f3c2dd d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Acquired lock "refresh_cache-b735e225-377d-4f50-aae2-4bf5dd4eb9fa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 10:13:20 compute-0 nova_compute[254819]: 2025-12-06 10:13:20.330 254824 DEBUG nova.network.neutron [req-709e3360-4f54-4f28-a4cd-62d5a88e3949 req-30d4c469-f1a6-460c-bb27-5469f8f3c2dd d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: b735e225-377d-4f50-aae2-4bf5dd4eb9fa] Refreshing network info cache for port 923b504a-09da-476b-a8c8-c6c76c5e8343 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 06 10:13:20 compute-0 podman[274147]: 2025-12-06 10:13:20.430798224 +0000 UTC m=+0.056642135 container health_status ec1e66d2175087ad7a28a9a6b5a784e30128df3f5e268959d009e364cd5120f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 10:13:20 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:13:20 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c00a8c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:13:20 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:13:20 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f40041d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:13:20 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:13:20] "GET /metrics HTTP/1.1" 200 48458 "" "Prometheus/2.51.0"
Dec 06 10:13:20 compute-0 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:13:20] "GET /metrics HTTP/1.1" 200 48458 "" "Prometheus/2.51.0"
Dec 06 10:13:21 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:13:21 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65e8004790 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:13:21 compute-0 nova_compute[254819]: 2025-12-06 10:13:21.167 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:13:21 compute-0 nova_compute[254819]: 2025-12-06 10:13:21.219 254824 DEBUG nova.network.neutron [req-709e3360-4f54-4f28-a4cd-62d5a88e3949 req-30d4c469-f1a6-460c-bb27-5469f8f3c2dd d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: b735e225-377d-4f50-aae2-4bf5dd4eb9fa] Updated VIF entry in instance network info cache for port 923b504a-09da-476b-a8c8-c6c76c5e8343. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 06 10:13:21 compute-0 nova_compute[254819]: 2025-12-06 10:13:21.220 254824 DEBUG nova.network.neutron [req-709e3360-4f54-4f28-a4cd-62d5a88e3949 req-30d4c469-f1a6-460c-bb27-5469f8f3c2dd d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: b735e225-377d-4f50-aae2-4bf5dd4eb9fa] Updating instance_info_cache with network_info: [{"id": "923b504a-09da-476b-a8c8-c6c76c5e8343", "address": "fa:16:3e:b7:ab:4e", "network": {"id": "565d9ab5-f943-4873-8a20-970fba448d46", "bridge": "br-int", "label": "tempest-network-smoke--340972836", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.207", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap923b504a-09", "ovs_interfaceid": "923b504a-09da-476b-a8c8-c6c76c5e8343", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 10:13:21 compute-0 nova_compute[254819]: 2025-12-06 10:13:21.242 254824 DEBUG oslo_concurrency.lockutils [req-709e3360-4f54-4f28-a4cd-62d5a88e3949 req-30d4c469-f1a6-460c-bb27-5469f8f3c2dd d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Releasing lock "refresh_cache-b735e225-377d-4f50-aae2-4bf5dd4eb9fa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
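The instance_info_cache update above carries the port's full network_info document as JSON: one VIF, one subnet, a fixed address with a floating address attached. Extracting the addresses from a blob of that shape takes a few lines; the literal below is a trimmed copy of the logged structure, keeping only the fields used:

import json

network_info = json.loads("""
[{"id": "923b504a-09da-476b-a8c8-c6c76c5e8343",
  "address": "fa:16:3e:b7:ab:4e",
  "network": {"id": "565d9ab5-f943-4873-8a20-970fba448d46",
    "bridge": "br-int",
    "subnets": [{"cidr": "10.100.0.0/28",
      "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4,
               "floating_ips": [{"address": "192.168.122.207",
                                 "type": "floating", "version": 4}]}]}]}}]
""")

for vif in network_info:
    for subnet in vif["network"]["subnets"]:
        for ip in subnet["ips"]:
            floats = [f["address"] for f in ip.get("floating_ips", [])]
            print(vif["id"], ip["address"], "->", floats)
# prints: 923b504a-... 10.100.0.5 -> ['192.168.122.207']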
Dec 06 10:13:21 compute-0 ceph-mon[74327]: pgmap v967: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.3 MiB/s rd, 1.8 MiB/s wr, 81 op/s
Dec 06 10:13:21 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:13:21 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:13:21 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:13:21.735 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:13:21 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:13:21 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:13:21 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:13:21.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:13:22 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v968: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.3 MiB/s rd, 1.8 MiB/s wr, 81 op/s
Dec 06 10:13:22 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:13:22 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65e4003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:13:22 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:13:22 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65e4003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:13:23 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:13:23 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d4002620 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:13:23 compute-0 ceph-mon[74327]: pgmap v968: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.3 MiB/s rd, 1.8 MiB/s wr, 81 op/s
Dec 06 10:13:23 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:13:23 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:13:23 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:13:23.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:13:23 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:13:23 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:13:23 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:13:23.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:13:23 compute-0 ceph-mgr[74618]: [balancer INFO root] Optimize plan auto_2025-12-06_10:13:23
Dec 06 10:13:23 compute-0 ceph-mgr[74618]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 06 10:13:23 compute-0 ceph-mgr[74618]: [balancer INFO root] do_upmap
Dec 06 10:13:23 compute-0 ceph-mgr[74618]: [balancer INFO root] pools ['.mgr', 'cephfs.cephfs.meta', 'volumes', '.rgw.root', 'default.rgw.log', '.nfs', 'vms', 'default.rgw.meta', 'backups', 'cephfs.cephfs.data', 'images', 'default.rgw.control']
Dec 06 10:13:23 compute-0 ceph-mgr[74618]: [balancer INFO root] prepared 0/10 upmap changes
Dec 06 10:13:23 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 06 10:13:23 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 10:13:24 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 10:13:24 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 10:13:24 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 10:13:24 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 10:13:24 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 10:13:24 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 10:13:24 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v969: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Dec 06 10:13:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] _maybe_adjust
Dec 06 10:13:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:13:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 06 10:13:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:13:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.00034841348814872695 of space, bias 1.0, pg target 0.10452404644461809 quantized to 32 (current 32)
Dec 06 10:13:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:13:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 10:13:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:13:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 10:13:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:13:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Dec 06 10:13:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:13:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Dec 06 10:13:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:13:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 10:13:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:13:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec 06 10:13:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:13:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 06 10:13:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:13:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec 06 10:13:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:13:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 10:13:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:13:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
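The pg_autoscaler rows above are internally consistent: each pool's pg target equals its space fraction times its bias times a cluster-wide PG budget, and the result is quantized to the power-of-two shown. Back-solving from the logged values gives a budget of 300 — plausibly mon_target_pg_per_osd=100 across 3 OSDs, though the log does not say so. Checking three rows against that inferred constant:

# Reproduce three pg_autoscaler rows from the log above.
# PG_BUDGET = 300 is inferred by back-solving; 100 PGs/OSD x 3 OSDs is a guess.
PG_BUDGET = 300

rows = [
    # (pool, space fraction, bias, logged pg target)
    (".mgr",               7.185749983720779e-06,  1.0, 0.0021557249951162337),
    ("vms",                0.00034841348814872695, 1.0, 0.10452404644461809),
    ("cephfs.cephfs.meta", 5.087256625643029e-07,  4.0, 0.0006104707950771635),
]

for pool, frac, bias, logged in rows:
    target = frac * bias * PG_BUDGET
    assert abs(target - logged) < 1e-12, pool
    print(f"{pool}: pg target {target:.6g} (matches log)")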
Dec 06 10:13:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 06 10:13:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 10:13:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 10:13:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 10:13:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 10:13:24 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 10:13:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 06 10:13:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 10:13:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 10:13:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 10:13:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 10:13:24 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:13:24 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65e8004790 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:13:24 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:13:24 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65e4003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:13:24 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:13:25 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:13:25 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65e4003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:13:25 compute-0 nova_compute[254819]: 2025-12-06 10:13:25.243 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:13:25 compute-0 ceph-mon[74327]: pgmap v969: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Dec 06 10:13:25 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:13:25 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 10:13:25 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:13:25.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 10:13:25 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:13:25 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:13:25 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:13:25.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:13:26 compute-0 nova_compute[254819]: 2025-12-06 10:13:26.169 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:13:26 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v970: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Dec 06 10:13:26 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:13:26 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d4001dd0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:13:26 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:13:26 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65e8004790 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:13:27 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:13:27 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65e4003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:13:27 compute-0 ceph-mon[74327]: pgmap v970: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Dec 06 10:13:27 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:13:27.649Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec 06 10:13:27 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:13:27.650Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
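Alertmanager keeps failing to POST to http://compute-N.ctlplane.example.com:8443/api/prometheus_receiver, alternating between dial timeouts and exhausted retries — nothing is answering port 8443 on those peers. A throwaway stub for verifying that the path and port are reachable at all; this is only a connectivity probe to run on compute-1/compute-2, not the Ceph dashboard's actual receiver:

from http.server import BaseHTTPRequestHandler, HTTPServer

class Receiver(BaseHTTPRequestHandler):
    def do_POST(self):
        # Accept anything POSTed to the path alertmanager uses in the log.
        if self.path == "/api/prometheus_receiver":
            length = int(self.headers.get("Content-Length", 0))
            self.rfile.read(length)        # drain the alert payload
            self.send_response(200)
        else:
            self.send_response(404)
        self.end_headers()

# Bind the port alertmanager is dialing (8443, per the log above).
HTTPServer(("0.0.0.0", 8443), Receiver).serve_forever()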
Dec 06 10:13:27 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:13:27 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:13:27 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:13:27.742 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:13:27 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:13:27 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:13:27 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:13:27.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:13:27 compute-0 ceph-mgr[74618]: [devicehealth INFO root] Check health
Dec 06 10:13:28 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v971: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Dec 06 10:13:28 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-haproxy-nfs-cephfs-compute-0-fzuvue[96127]: [WARNING] 339/101328 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 1ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Dec 06 10:13:28 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:13:28 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f40041d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:13:28 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:13:28 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d4001dd0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:13:29 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:13:29.028Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 06 10:13:29 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:13:29 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65e8004790 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:13:29 compute-0 ceph-mon[74327]: pgmap v971: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Dec 06 10:13:29 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:13:29 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:13:29 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:13:29.744 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:13:29 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:13:29 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:13:29 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:13:29.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:13:29 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:13:29 compute-0 ovn_controller[152417]: 2025-12-06T10:13:29Z|00014|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:b7:ab:4e 10.100.0.5
Dec 06 10:13:29 compute-0 ovn_controller[152417]: 2025-12-06T10:13:29Z|00015|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:b7:ab:4e 10.100.0.5
Dec 06 10:13:30 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v972: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 634 KiB/s rd, 20 op/s
Dec 06 10:13:30 compute-0 nova_compute[254819]: 2025-12-06 10:13:30.246 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:13:30 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:13:30 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65e4003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:13:30 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:13:30 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f40041d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:13:30 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:13:30] "GET /metrics HTTP/1.1" 200 48474 "" "Prometheus/2.51.0"
Dec 06 10:13:30 compute-0 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:13:30] "GET /metrics HTTP/1.1" 200 48474 "" "Prometheus/2.51.0"
Dec 06 10:13:31 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:13:31 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d4002fe0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:13:31 compute-0 nova_compute[254819]: 2025-12-06 10:13:31.171 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:13:31 compute-0 ceph-mon[74327]: pgmap v972: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 634 KiB/s rd, 20 op/s
Dec 06 10:13:31 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:13:31 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:13:31 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:13:31.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:13:31 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:13:31 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:13:31 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:13:31.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:13:32 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v973: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 634 KiB/s rd, 20 op/s
Dec 06 10:13:32 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:13:32 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65e8004790 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:13:32 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:13:32 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65e8004790 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:13:33 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-haproxy-nfs-cephfs-compute-0-fzuvue[96127]: [WARNING] 339/101333 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Dec 06 10:13:33 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:13:33 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f40041d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:13:33 compute-0 ceph-mon[74327]: pgmap v973: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 634 KiB/s rd, 20 op/s
Dec 06 10:13:33 compute-0 sudo[274180]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 10:13:33 compute-0 sudo[274180]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:13:33 compute-0 sudo[274180]: pam_unix(sudo:session): session closed for user root
Dec 06 10:13:33 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:13:33 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:13:33 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:13:33.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:13:33 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:13:33 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:13:33 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:13:33.794 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:13:34 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v974: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 935 KiB/s rd, 2.1 MiB/s wr, 82 op/s
Dec 06 10:13:34 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:13:34 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d4002fe0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:13:34 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:13:34 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f40041d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:13:34 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:13:35 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:13:35 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65e4003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:13:35 compute-0 nova_compute[254819]: 2025-12-06 10:13:35.247 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:13:35 compute-0 ceph-mon[74327]: pgmap v974: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 935 KiB/s rd, 2.1 MiB/s wr, 82 op/s
Dec 06 10:13:35 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:13:35 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:13:35 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:13:35.750 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:13:35 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:13:35 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:13:35 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:13:35.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:13:35 compute-0 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 06 10:13:36 compute-0 nova_compute[254819]: 2025-12-06 10:13:36.216 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:13:36 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v975: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 305 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Dec 06 10:13:36 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:13:36 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65e4003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:13:36 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:13:36 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d4002fe0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:13:36 compute-0 nova_compute[254819]: 2025-12-06 10:13:36.839 254824 INFO nova.compute.manager [None req-12e67c13-a2bc-4851-b629-051195e0d4aa 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: b735e225-377d-4f50-aae2-4bf5dd4eb9fa] Get console output
Dec 06 10:13:36 compute-0 nova_compute[254819]: 2025-12-06 10:13:36.846 261881 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
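The "can't concat NoneType to bytes" text is CPython's TypeError for adding None to a bytes object, which suggests the console pty read returned None where a bytes chunk was expected; nova catches and ignores it, as the INFO line says. The failure and the usual guard, in miniature:

buf = b"console output so far"
chunk = None                # what a drained/closed pty read can yield
try:
    buf += chunk            # TypeError: can't concat NoneType to bytes
except TypeError as exc:
    print(f"Ignored error while reading from instance console pty: {exc}")
buf += chunk or b""         # the usual guard: treat None as empty bytes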
Dec 06 10:13:37 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:13:37 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f40041d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:13:37 compute-0 ceph-mon[74327]: pgmap v975: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 305 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Dec 06 10:13:37 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:13:37.651Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec 06 10:13:37 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:13:37.651Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec 06 10:13:37 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:13:37 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:13:37 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:13:37.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:13:37 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:13:37 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:13:37 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:13:37.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:13:37 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:13:37 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 06 10:13:38 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v976: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 306 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Dec 06 10:13:38 compute-0 ovn_controller[152417]: 2025-12-06T10:13:38Z|00016|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:b7:ab:4e 10.100.0.5
Dec 06 10:13:38 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:13:38 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65e8004790 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:13:38 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:13:38 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65e4003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:13:38 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 06 10:13:38 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 10:13:39 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:13:39.029Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 06 10:13:39 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:13:39 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d4002fe0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:13:39 compute-0 ceph-mon[74327]: pgmap v976: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 306 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Dec 06 10:13:39 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 10:13:39 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:13:39 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:13:39 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:13:39.754 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:13:39 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:13:39 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:13:39 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:13:39.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:13:39 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:13:40 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v977: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 303 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Dec 06 10:13:40 compute-0 nova_compute[254819]: 2025-12-06 10:13:40.251 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:13:40 compute-0 podman[274214]: 2025-12-06 10:13:40.437728808 +0000 UTC m=+0.065745746 container health_status a9cf33c1bf7891b3fdd9db4717ad5f3587fe9e5acf9171920c6d6334b70a502a (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=multipathd, container_name=multipathd)
Dec 06 10:13:40 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:13:40 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f40041d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:13:40 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:13:40 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65e8004790 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:13:40 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:13:40 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 06 10:13:40 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:13:40 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 06 10:13:40 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:13:40] "GET /metrics HTTP/1.1" 200 48476 "" "Prometheus/2.51.0"
Dec 06 10:13:40 compute-0 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:13:40] "GET /metrics HTTP/1.1" 200 48476 "" "Prometheus/2.51.0"
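
The paired "GET /metrics" lines above are a single Prometheus scrape of the ceph-mgr prometheus module, recorded once by the mgr container unit and once by cherrypy inside ceph-mgr itself. A minimal sketch of the same scrape follows; the endpoint host and port are assumptions (9283 is the module's usual default), since the log records only the scraper's address and the request line:

    # Hedged sketch: plain HTTP GET against the mgr prometheus endpoint.
    # Host and port are assumptions; only the client IP appears in the log.
    import urllib.request

    MGR_METRICS = "http://compute-0.ctlplane.example.com:9283/metrics"

    with urllib.request.urlopen(MGR_METRICS, timeout=5) as resp:
        body = resp.read().decode("utf-8")
        # Exposition format: non-comment lines are "<metric>{<labels>} <value>".
        samples = [ln for ln in body.splitlines() if ln and not ln.startswith("#")]
        print(resp.status, len(body), len(samples))  # log shows 200 / 48476 bytes
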
Dec 06 10:13:41 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:13:41 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65e4003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:13:41 compute-0 nova_compute[254819]: 2025-12-06 10:13:41.220 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:13:41 compute-0 ceph-mon[74327]: pgmap v977: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 303 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Dec 06 10:13:41 compute-0 ovn_controller[152417]: 2025-12-06T10:13:41Z|00017|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:b7:ab:4e 10.100.0.5
Dec 06 10:13:41 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:13:41 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:13:41 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:13:41.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:13:41 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:13:41 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:13:41 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:13:41.806 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
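
The anonymous "HEAD / HTTP/1.0" requests recurring every two seconds from 192.168.122.100 and .102 have the shape of load-balancer health probes against the radosgw beast frontend; radosgw answers 200 with an empty body in about a millisecond. An equivalent probe, assuming beast's default port 7480 (neither the listening port nor the prober is identified in the log):

    # Hypothetical health probe against radosgw; host and port are assumptions.
    # Note: http.client speaks HTTP/1.1, while the logged probes use HTTP/1.0.
    import http.client

    conn = http.client.HTTPConnection("compute-0.ctlplane.example.com", 7480, timeout=3)
    try:
        conn.request("HEAD", "/")   # anonymous: no Authorization header
        resp = conn.getresponse()
        print(resp.status)          # 200 with a 0-byte body, as logged
    finally:
        conn.close()
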
Dec 06 10:13:42 compute-0 nova_compute[254819]: 2025-12-06 10:13:42.009 254824 DEBUG nova.compute.manager [req-04966f8b-ca35-4ed6-a044-a550a28be799 req-d89bc1e2-2552-47a0-9de6-6084cd55538c d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: b735e225-377d-4f50-aae2-4bf5dd4eb9fa] Received event network-changed-923b504a-09da-476b-a8c8-c6c76c5e8343 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 10:13:42 compute-0 nova_compute[254819]: 2025-12-06 10:13:42.009 254824 DEBUG nova.compute.manager [req-04966f8b-ca35-4ed6-a044-a550a28be799 req-d89bc1e2-2552-47a0-9de6-6084cd55538c d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: b735e225-377d-4f50-aae2-4bf5dd4eb9fa] Refreshing instance network info cache due to event network-changed-923b504a-09da-476b-a8c8-c6c76c5e8343. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 10:13:42 compute-0 nova_compute[254819]: 2025-12-06 10:13:42.010 254824 DEBUG oslo_concurrency.lockutils [req-04966f8b-ca35-4ed6-a044-a550a28be799 req-d89bc1e2-2552-47a0-9de6-6084cd55538c d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Acquiring lock "refresh_cache-b735e225-377d-4f50-aae2-4bf5dd4eb9fa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 10:13:42 compute-0 nova_compute[254819]: 2025-12-06 10:13:42.010 254824 DEBUG oslo_concurrency.lockutils [req-04966f8b-ca35-4ed6-a044-a550a28be799 req-d89bc1e2-2552-47a0-9de6-6084cd55538c d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Acquired lock "refresh_cache-b735e225-377d-4f50-aae2-4bf5dd4eb9fa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 10:13:42 compute-0 nova_compute[254819]: 2025-12-06 10:13:42.011 254824 DEBUG nova.network.neutron [req-04966f8b-ca35-4ed6-a044-a550a28be799 req-d89bc1e2-2552-47a0-9de6-6084cd55538c d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: b735e225-377d-4f50-aae2-4bf5dd4eb9fa] Refreshing network info cache for port 923b504a-09da-476b-a8c8-c6c76c5e8343 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 06 10:13:42 compute-0 nova_compute[254819]: 2025-12-06 10:13:42.123 254824 DEBUG oslo_concurrency.lockutils [None req-fd8d9698-63db-4258-8b42-2978f0565098 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Acquiring lock "b735e225-377d-4f50-aae2-4bf5dd4eb9fa" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:13:42 compute-0 nova_compute[254819]: 2025-12-06 10:13:42.124 254824 DEBUG oslo_concurrency.lockutils [None req-fd8d9698-63db-4258-8b42-2978f0565098 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lock "b735e225-377d-4f50-aae2-4bf5dd4eb9fa" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:13:42 compute-0 nova_compute[254819]: 2025-12-06 10:13:42.124 254824 DEBUG oslo_concurrency.lockutils [None req-fd8d9698-63db-4258-8b42-2978f0565098 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Acquiring lock "b735e225-377d-4f50-aae2-4bf5dd4eb9fa-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:13:42 compute-0 nova_compute[254819]: 2025-12-06 10:13:42.124 254824 DEBUG oslo_concurrency.lockutils [None req-fd8d9698-63db-4258-8b42-2978f0565098 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lock "b735e225-377d-4f50-aae2-4bf5dd4eb9fa-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:13:42 compute-0 nova_compute[254819]: 2025-12-06 10:13:42.124 254824 DEBUG oslo_concurrency.lockutils [None req-fd8d9698-63db-4258-8b42-2978f0565098 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lock "b735e225-377d-4f50-aae2-4bf5dd4eb9fa-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
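
The five lockutils lines above show nova serializing this teardown: an outer lock named after the instance UUID around do_terminate_instance, plus a brief inner "<uuid>-events" lock while pending external events are cleared, each logged with its wait and hold times. A minimal sketch of the same oslo.concurrency pattern, with lock names copied from the log and illustrative bodies:

    # lockutils itself emits the "Acquiring lock ..." / "Lock ... acquired" /
    # "Lock ... released" DEBUG lines seen above when debug logging is on.
    from oslo_concurrency import lockutils

    UUID = "b735e225-377d-4f50-aae2-4bf5dd4eb9fa"

    with lockutils.lock(UUID):                   # outer per-instance lock
        with lockutils.lock(UUID + "-events"):   # inner events lock
            pass                                 # clear_events_for_instance
        # ... destroy the instance while still holding the outer lock ...

    # Decorator form, closer to how ComputeManager wraps the inner function:
    @lockutils.synchronized(UUID)
    def do_terminate_instance():
        pass
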
Dec 06 10:13:42 compute-0 nova_compute[254819]: 2025-12-06 10:13:42.125 254824 INFO nova.compute.manager [None req-fd8d9698-63db-4258-8b42-2978f0565098 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: b735e225-377d-4f50-aae2-4bf5dd4eb9fa] Terminating instance
Dec 06 10:13:42 compute-0 nova_compute[254819]: 2025-12-06 10:13:42.126 254824 DEBUG nova.compute.manager [None req-fd8d9698-63db-4258-8b42-2978f0565098 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: b735e225-377d-4f50-aae2-4bf5dd4eb9fa] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 06 10:13:42 compute-0 kernel: tap923b504a-09 (unregistering): left promiscuous mode
Dec 06 10:13:42 compute-0 NetworkManager[48882]: <info>  [1765016022.1903] device (tap923b504a-09): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 06 10:13:42 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v978: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 303 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Dec 06 10:13:42 compute-0 ovn_controller[152417]: 2025-12-06T10:13:42Z|00116|binding|INFO|Releasing lport 923b504a-09da-476b-a8c8-c6c76c5e8343 from this chassis (sb_readonly=0)
Dec 06 10:13:42 compute-0 ovn_controller[152417]: 2025-12-06T10:13:42Z|00117|binding|INFO|Setting lport 923b504a-09da-476b-a8c8-c6c76c5e8343 down in Southbound
Dec 06 10:13:42 compute-0 nova_compute[254819]: 2025-12-06 10:13:42.236 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:13:42 compute-0 ovn_controller[152417]: 2025-12-06T10:13:42Z|00118|binding|INFO|Removing iface tap923b504a-09 ovn-installed in OVS
Dec 06 10:13:42 compute-0 nova_compute[254819]: 2025-12-06 10:13:42.238 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:13:42 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:13:42.246 162267 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b7:ab:4e 10.100.0.5'], port_security=['fa:16:3e:b7:ab:4e 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'b735e225-377d-4f50-aae2-4bf5dd4eb9fa', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-565d9ab5-f943-4873-8a20-970fba448d46', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '92b402c8d3e2476abc98be42a1e6d34e', 'neutron:revision_number': '4', 'neutron:security_group_ids': '07ac2c97-c1ea-402b-a4af-4b99fec7720e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=86948586-50a4-4571-ad91-ae78b72ed8de, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f70c28558b0>], logical_port=923b504a-09da-476b-a8c8-c6c76c5e8343) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f70c28558b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 10:13:42 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:13:42.247 162267 INFO neutron.agent.ovn.metadata.agent [-] Port 923b504a-09da-476b-a8c8-c6c76c5e8343 in datapath 565d9ab5-f943-4873-8a20-970fba448d46 unbound from our chassis
Dec 06 10:13:42 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:13:42.249 162267 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 565d9ab5-f943-4873-8a20-970fba448d46, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
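
The "Matched UPDATE: PortBindingUpdatedEvent(...)" line above is ovsdbapp's row-event machinery: the metadata agent registers event classes against the Southbound Port_Binding table and is called back when an update matches, here because the port's up field flipped from [True] to [False]. A condensed, illustrative sketch of such an event class, assuming ovsdbapp's RowEvent base (neutron's production handler carries chassis checks and goes on to tear the namespace down, as the next lines show):

    # Condensed sketch only; field names mirror the matched row in the log.
    from ovsdbapp.backend.ovs_idl import event as row_event

    class PortBindingUpdatedEvent(row_event.RowEvent):
        def __init__(self):
            # (events, table, conditions), as echoed in the log line above.
            super().__init__((self.ROW_UPDATE,), 'Port_Binding', None)

        def match_fn(self, event, row, old):
            # Fire only when a bound port just went down: up [True] -> [False].
            return getattr(old, 'up', None) == [True] and row.up == [False]

        def run(self, event, row, old):
            print(f"Port {row.logical_port} unbound from our chassis")
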
Dec 06 10:13:42 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:13:42.251 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[639e7267-8362-4c6a-8316-ac7ffdd523eb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:13:42 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:13:42.252 162267 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-565d9ab5-f943-4873-8a20-970fba448d46 namespace which is not needed anymore
Dec 06 10:13:42 compute-0 nova_compute[254819]: 2025-12-06 10:13:42.252 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:13:42 compute-0 systemd[1]: machine-qemu\x2d7\x2dinstance\x2d0000000a.scope: Deactivated successfully.
Dec 06 10:13:42 compute-0 systemd[1]: machine-qemu\x2d7\x2dinstance\x2d0000000a.scope: Consumed 14.560s CPU time.
Dec 06 10:13:42 compute-0 systemd-machined[216202]: Machine qemu-7-instance-0000000a terminated.
Dec 06 10:13:42 compute-0 nova_compute[254819]: 2025-12-06 10:13:42.359 254824 INFO nova.virt.libvirt.driver [-] [instance: b735e225-377d-4f50-aae2-4bf5dd4eb9fa] Instance destroyed successfully.
Dec 06 10:13:42 compute-0 nova_compute[254819]: 2025-12-06 10:13:42.360 254824 DEBUG nova.objects.instance [None req-fd8d9698-63db-4258-8b42-2978f0565098 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lazy-loading 'resources' on Instance uuid b735e225-377d-4f50-aae2-4bf5dd4eb9fa obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 10:13:42 compute-0 neutron-haproxy-ovnmeta-565d9ab5-f943-4873-8a20-970fba448d46[274127]: [NOTICE]   (274131) : haproxy version is 2.8.14-c23fe91
Dec 06 10:13:42 compute-0 neutron-haproxy-ovnmeta-565d9ab5-f943-4873-8a20-970fba448d46[274127]: [NOTICE]   (274131) : path to executable is /usr/sbin/haproxy
Dec 06 10:13:42 compute-0 neutron-haproxy-ovnmeta-565d9ab5-f943-4873-8a20-970fba448d46[274127]: [WARNING]  (274131) : Exiting Master process...
Dec 06 10:13:42 compute-0 neutron-haproxy-ovnmeta-565d9ab5-f943-4873-8a20-970fba448d46[274127]: [ALERT]    (274131) : Current worker (274133) exited with code 143 (Terminated)
Dec 06 10:13:42 compute-0 neutron-haproxy-ovnmeta-565d9ab5-f943-4873-8a20-970fba448d46[274127]: [WARNING]  (274131) : All workers exited. Exiting... (0)
Dec 06 10:13:42 compute-0 systemd[1]: libpod-46a8115050f94a8e3d27c3de8ef0f2e8245cf9e24d6519fe546e7723bdb02128.scope: Deactivated successfully.
Dec 06 10:13:42 compute-0 podman[274261]: 2025-12-06 10:13:42.383548527 +0000 UTC m=+0.044523143 container died 46a8115050f94a8e3d27c3de8ef0f2e8245cf9e24d6519fe546e7723bdb02128 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=neutron-haproxy-ovnmeta-565d9ab5-f943-4873-8a20-970fba448d46, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 10:13:42 compute-0 nova_compute[254819]: 2025-12-06 10:13:42.392 254824 DEBUG nova.virt.libvirt.vif [None req-fd8d9698-63db-4258-8b42-2978f0565098 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T10:13:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-767347043',display_name='tempest-TestNetworkBasicOps-server-767347043',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-767347043',id=10,image_ref='9489b8a5-a798-4e26-87f9-59bb1eb2e6fd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKo7uC0irjYnKyVEGtEn/nYgythvknyTt45P5kPX1NZlUQ4NHagXOXCZs1+RjUHYK3oEDqvVo3L7WEeQEsh2SWgKD0PXaBMlx1FpXYkm1OxP+oK804aHcHmvv61DYBpjSw==',key_name='tempest-TestNetworkBasicOps-1442962553',keypairs=<?>,launch_index=0,launched_at=2025-12-06T10:13:16Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='92b402c8d3e2476abc98be42a1e6d34e',ramdisk_id='',reservation_id='r-rc0ojmmg',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='9489b8a5-a798-4e26-87f9-59bb1eb2e6fd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1971100882',owner_user_name='tempest-TestNetworkBasicOps-1971100882-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T10:13:17Z,user_data=None,user_id='03615580775245e6ae335ee9d785611f',uuid=b735e225-377d-4f50-aae2-4bf5dd4eb9fa,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "923b504a-09da-476b-a8c8-c6c76c5e8343", "address": "fa:16:3e:b7:ab:4e", "network": {"id": "565d9ab5-f943-4873-8a20-970fba448d46", "bridge": "br-int", "label": "tempest-network-smoke--340972836", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.207", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap923b504a-09", "ovs_interfaceid": "923b504a-09da-476b-a8c8-c6c76c5e8343", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 06 10:13:42 compute-0 nova_compute[254819]: 2025-12-06 10:13:42.393 254824 DEBUG nova.network.os_vif_util [None req-fd8d9698-63db-4258-8b42-2978f0565098 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Converting VIF {"id": "923b504a-09da-476b-a8c8-c6c76c5e8343", "address": "fa:16:3e:b7:ab:4e", "network": {"id": "565d9ab5-f943-4873-8a20-970fba448d46", "bridge": "br-int", "label": "tempest-network-smoke--340972836", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.207", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap923b504a-09", "ovs_interfaceid": "923b504a-09da-476b-a8c8-c6c76c5e8343", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 10:13:42 compute-0 nova_compute[254819]: 2025-12-06 10:13:42.393 254824 DEBUG nova.network.os_vif_util [None req-fd8d9698-63db-4258-8b42-2978f0565098 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:b7:ab:4e,bridge_name='br-int',has_traffic_filtering=True,id=923b504a-09da-476b-a8c8-c6c76c5e8343,network=Network(565d9ab5-f943-4873-8a20-970fba448d46),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap923b504a-09') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 10:13:42 compute-0 nova_compute[254819]: 2025-12-06 10:13:42.393 254824 DEBUG os_vif [None req-fd8d9698-63db-4258-8b42-2978f0565098 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:b7:ab:4e,bridge_name='br-int',has_traffic_filtering=True,id=923b504a-09da-476b-a8c8-c6c76c5e8343,network=Network(565d9ab5-f943-4873-8a20-970fba448d46),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap923b504a-09') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 06 10:13:42 compute-0 nova_compute[254819]: 2025-12-06 10:13:42.395 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:13:42 compute-0 nova_compute[254819]: 2025-12-06 10:13:42.395 254824 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap923b504a-09, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 10:13:42 compute-0 nova_compute[254819]: 2025-12-06 10:13:42.396 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:13:42 compute-0 nova_compute[254819]: 2025-12-06 10:13:42.397 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:13:42 compute-0 nova_compute[254819]: 2025-12-06 10:13:42.399 254824 INFO os_vif [None req-fd8d9698-63db-4258-8b42-2978f0565098 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:b7:ab:4e,bridge_name='br-int',has_traffic_filtering=True,id=923b504a-09da-476b-a8c8-c6c76c5e8343,network=Network(565d9ab5-f943-4873-8a20-970fba448d46),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap923b504a-09')
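
The unplug itself is the single ovsdbapp transaction logged above, DelPortCommand(port=tap923b504a-09, bridge=br-int, if_exists=True), which drops the tap from br-int. Roughly the same call through ovsdbapp's public API, assuming the conventional local ovsdb-server socket path (nova uses whatever ovsdb connection it is configured with):

    # Sketch only: remove a port from br-int the way the logged transaction does.
    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    OVSDB = 'unix:/var/run/openvswitch/db.sock'   # assumed socket path

    idl = connection.OvsdbIdl.from_server(OVSDB, 'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

    # if_exists=True keeps the delete idempotent, matching the logged command.
    cmd = api.del_port('tap923b504a-09', bridge='br-int', if_exists=True)
    cmd.execute(check_error=True)
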
Dec 06 10:13:42 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-46a8115050f94a8e3d27c3de8ef0f2e8245cf9e24d6519fe546e7723bdb02128-userdata-shm.mount: Deactivated successfully.
Dec 06 10:13:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-9928877ca56a0e966d3eea9b89794c7d2e32547dafcfc2eff997d385c12891b6-merged.mount: Deactivated successfully.
Dec 06 10:13:42 compute-0 podman[274261]: 2025-12-06 10:13:42.427356851 +0000 UTC m=+0.088331447 container cleanup 46a8115050f94a8e3d27c3de8ef0f2e8245cf9e24d6519fe546e7723bdb02128 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=neutron-haproxy-ovnmeta-565d9ab5-f943-4873-8a20-970fba448d46, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125)
Dec 06 10:13:42 compute-0 systemd[1]: libpod-conmon-46a8115050f94a8e3d27c3de8ef0f2e8245cf9e24d6519fe546e7723bdb02128.scope: Deactivated successfully.
Dec 06 10:13:42 compute-0 podman[274318]: 2025-12-06 10:13:42.492128809 +0000 UTC m=+0.043480775 container remove 46a8115050f94a8e3d27c3de8ef0f2e8245cf9e24d6519fe546e7723bdb02128 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=neutron-haproxy-ovnmeta-565d9ab5-f943-4873-8a20-970fba448d46, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true)
Dec 06 10:13:42 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:13:42.499 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[7e9a2bf4-687e-49b9-b738-4ee083d84a5a]: (4, ('Sat Dec  6 10:13:42 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-565d9ab5-f943-4873-8a20-970fba448d46 (46a8115050f94a8e3d27c3de8ef0f2e8245cf9e24d6519fe546e7723bdb02128)\n46a8115050f94a8e3d27c3de8ef0f2e8245cf9e24d6519fe546e7723bdb02128\nSat Dec  6 10:13:42 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-565d9ab5-f943-4873-8a20-970fba448d46 (46a8115050f94a8e3d27c3de8ef0f2e8245cf9e24d6519fe546e7723bdb02128)\n46a8115050f94a8e3d27c3de8ef0f2e8245cf9e24d6519fe546e7723bdb02128\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:13:42 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:13:42.501 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[7914355c-28b1-4b4f-9c84-fc2bc545c569]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:13:42 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:13:42.502 162267 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap565d9ab5-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 10:13:42 compute-0 nova_compute[254819]: 2025-12-06 10:13:42.504 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:13:42 compute-0 kernel: tap565d9ab5-f0: left promiscuous mode
Dec 06 10:13:42 compute-0 nova_compute[254819]: 2025-12-06 10:13:42.506 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:13:42 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:13:42.511 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[9e3f4715-9fef-4b32-ad08-3a20c048bf5a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:13:42 compute-0 nova_compute[254819]: 2025-12-06 10:13:42.521 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:13:42 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:13:42.527 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[82e13ccd-93bc-47c6-9c22-8f80677fa5cb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:13:42 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:13:42.528 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[36ae2f8e-ef07-4bc4-a31d-c321aa986449]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:13:42 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:13:42.544 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[27c3e4b9-947f-493a-abf2-4c6fc85f38da]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 436665, 'reachable_time': 39423, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 274336, 'error': None, 'target': 'ovnmeta-565d9ab5-f943-4873-8a20-970fba448d46', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:13:42 compute-0 systemd[1]: run-netns-ovnmeta\x2d565d9ab5\x2df943\x2d4873\x2d8a20\x2d970fba448d46.mount: Deactivated successfully.
Dec 06 10:13:42 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:13:42.547 162385 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-565d9ab5-f943-4873-8a20-970fba448d46 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 06 10:13:42 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:13:42.548 162385 DEBUG oslo.privsep.daemon [-] privsep: reply[e0b37365-1fc2-4622-bb90-d0d01bae19c5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
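
The namespace removal above runs inside neutron's privsep daemon (the reply[...] lines are its RPC responses) and ultimately deletes ovnmeta-565d9ab5-... much as pyroute2's netns helpers do. A standalone sketch, assuming root privileges:

    # Rough equivalent of the privileged remove_netns call, run as root.
    from pyroute2 import netns

    NS = 'ovnmeta-565d9ab5-f943-4873-8a20-970fba448d46'

    if NS in netns.listnetns():
        netns.remove(NS)   # unmounts and unlinks /var/run/netns/<NS>
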
Dec 06 10:13:42 compute-0 nova_compute[254819]: 2025-12-06 10:13:42.577 254824 DEBUG nova.compute.manager [req-60e225eb-6e90-496a-8cfe-85ae33dd5989 req-3d92d81b-1ad7-416c-b0ef-7769069a7bd4 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: b735e225-377d-4f50-aae2-4bf5dd4eb9fa] Received event network-vif-unplugged-923b504a-09da-476b-a8c8-c6c76c5e8343 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 10:13:42 compute-0 nova_compute[254819]: 2025-12-06 10:13:42.577 254824 DEBUG oslo_concurrency.lockutils [req-60e225eb-6e90-496a-8cfe-85ae33dd5989 req-3d92d81b-1ad7-416c-b0ef-7769069a7bd4 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Acquiring lock "b735e225-377d-4f50-aae2-4bf5dd4eb9fa-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:13:42 compute-0 nova_compute[254819]: 2025-12-06 10:13:42.578 254824 DEBUG oslo_concurrency.lockutils [req-60e225eb-6e90-496a-8cfe-85ae33dd5989 req-3d92d81b-1ad7-416c-b0ef-7769069a7bd4 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Lock "b735e225-377d-4f50-aae2-4bf5dd4eb9fa-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:13:42 compute-0 nova_compute[254819]: 2025-12-06 10:13:42.578 254824 DEBUG oslo_concurrency.lockutils [req-60e225eb-6e90-496a-8cfe-85ae33dd5989 req-3d92d81b-1ad7-416c-b0ef-7769069a7bd4 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Lock "b735e225-377d-4f50-aae2-4bf5dd4eb9fa-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:13:42 compute-0 nova_compute[254819]: 2025-12-06 10:13:42.578 254824 DEBUG nova.compute.manager [req-60e225eb-6e90-496a-8cfe-85ae33dd5989 req-3d92d81b-1ad7-416c-b0ef-7769069a7bd4 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: b735e225-377d-4f50-aae2-4bf5dd4eb9fa] No waiting events found dispatching network-vif-unplugged-923b504a-09da-476b-a8c8-c6c76c5e8343 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 10:13:42 compute-0 nova_compute[254819]: 2025-12-06 10:13:42.579 254824 DEBUG nova.compute.manager [req-60e225eb-6e90-496a-8cfe-85ae33dd5989 req-3d92d81b-1ad7-416c-b0ef-7769069a7bd4 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: b735e225-377d-4f50-aae2-4bf5dd4eb9fa] Received event network-vif-unplugged-923b504a-09da-476b-a8c8-c6c76c5e8343 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 06 10:13:42 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:13:42 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d4002fe0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:13:42 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:13:42 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f40041d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:13:42 compute-0 nova_compute[254819]: 2025-12-06 10:13:42.777 254824 INFO nova.virt.libvirt.driver [None req-fd8d9698-63db-4258-8b42-2978f0565098 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: b735e225-377d-4f50-aae2-4bf5dd4eb9fa] Deleting instance files /var/lib/nova/instances/b735e225-377d-4f50-aae2-4bf5dd4eb9fa_del
Dec 06 10:13:42 compute-0 nova_compute[254819]: 2025-12-06 10:13:42.778 254824 INFO nova.virt.libvirt.driver [None req-fd8d9698-63db-4258-8b42-2978f0565098 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: b735e225-377d-4f50-aae2-4bf5dd4eb9fa] Deletion of /var/lib/nova/instances/b735e225-377d-4f50-aae2-4bf5dd4eb9fa_del complete
Dec 06 10:13:42 compute-0 nova_compute[254819]: 2025-12-06 10:13:42.837 254824 INFO nova.compute.manager [None req-fd8d9698-63db-4258-8b42-2978f0565098 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: b735e225-377d-4f50-aae2-4bf5dd4eb9fa] Took 0.71 seconds to destroy the instance on the hypervisor.
Dec 06 10:13:42 compute-0 nova_compute[254819]: 2025-12-06 10:13:42.838 254824 DEBUG oslo.service.loopingcall [None req-fd8d9698-63db-4258-8b42-2978f0565098 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 06 10:13:42 compute-0 nova_compute[254819]: 2025-12-06 10:13:42.840 254824 DEBUG nova.compute.manager [-] [instance: b735e225-377d-4f50-aae2-4bf5dd4eb9fa] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 06 10:13:42 compute-0 nova_compute[254819]: 2025-12-06 10:13:42.840 254824 DEBUG nova.network.neutron [-] [instance: b735e225-377d-4f50-aae2-4bf5dd4eb9fa] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 06 10:13:43 compute-0 nova_compute[254819]: 2025-12-06 10:13:43.108 254824 DEBUG nova.network.neutron [req-04966f8b-ca35-4ed6-a044-a550a28be799 req-d89bc1e2-2552-47a0-9de6-6084cd55538c d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: b735e225-377d-4f50-aae2-4bf5dd4eb9fa] Updated VIF entry in instance network info cache for port 923b504a-09da-476b-a8c8-c6c76c5e8343. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 06 10:13:43 compute-0 nova_compute[254819]: 2025-12-06 10:13:43.109 254824 DEBUG nova.network.neutron [req-04966f8b-ca35-4ed6-a044-a550a28be799 req-d89bc1e2-2552-47a0-9de6-6084cd55538c d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: b735e225-377d-4f50-aae2-4bf5dd4eb9fa] Updating instance_info_cache with network_info: [{"id": "923b504a-09da-476b-a8c8-c6c76c5e8343", "address": "fa:16:3e:b7:ab:4e", "network": {"id": "565d9ab5-f943-4873-8a20-970fba448d46", "bridge": "br-int", "label": "tempest-network-smoke--340972836", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "9.8.7.6", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap923b504a-09", "ovs_interfaceid": "923b504a-09da-476b-a8c8-c6c76c5e8343", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 10:13:43 compute-0 nova_compute[254819]: 2025-12-06 10:13:43.130 254824 DEBUG oslo_concurrency.lockutils [req-04966f8b-ca35-4ed6-a044-a550a28be799 req-d89bc1e2-2552-47a0-9de6-6084cd55538c d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Releasing lock "refresh_cache-b735e225-377d-4f50-aae2-4bf5dd4eb9fa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 10:13:43 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:13:43 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65e8004790 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:13:43 compute-0 nova_compute[254819]: 2025-12-06 10:13:43.344 254824 DEBUG nova.network.neutron [-] [instance: b735e225-377d-4f50-aae2-4bf5dd4eb9fa] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 10:13:43 compute-0 nova_compute[254819]: 2025-12-06 10:13:43.374 254824 INFO nova.compute.manager [-] [instance: b735e225-377d-4f50-aae2-4bf5dd4eb9fa] Took 0.53 seconds to deallocate network for instance.
Dec 06 10:13:43 compute-0 nova_compute[254819]: 2025-12-06 10:13:43.415 254824 DEBUG oslo_concurrency.lockutils [None req-fd8d9698-63db-4258-8b42-2978f0565098 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:13:43 compute-0 nova_compute[254819]: 2025-12-06 10:13:43.416 254824 DEBUG oslo_concurrency.lockutils [None req-fd8d9698-63db-4258-8b42-2978f0565098 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:13:43 compute-0 nova_compute[254819]: 2025-12-06 10:13:43.458 254824 DEBUG oslo_concurrency.processutils [None req-fd8d9698-63db-4258-8b42-2978f0565098 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 10:13:43 compute-0 ceph-mon[74327]: pgmap v978: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 303 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Dec 06 10:13:43 compute-0 nova_compute[254819]: 2025-12-06 10:13:43.749 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:13:43 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:13:43 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 10:13:43 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:13:43.758 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 10:13:43 compute-0 nova_compute[254819]: 2025-12-06 10:13:43.788 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:13:43 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:13:43 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 10:13:43 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:13:43.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 10:13:43 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:13:43 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Dec 06 10:13:43 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 06 10:13:43 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2410722622' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 10:13:43 compute-0 nova_compute[254819]: 2025-12-06 10:13:43.950 254824 DEBUG oslo_concurrency.processutils [None req-fd8d9698-63db-4258-8b42-2978f0565098 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.492s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
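
Nova's libvirt driver sizes its RBD-backed storage by shelling out to ceph df, which produced the "Running cmd" / "returned: 0 in 0.492s" pair above and the matching handle_command and audit dispatch lines on the mon. The same round trip, reusing the client id and conf path from the log (it needs a reachable cluster and the client.openstack keyring):

    # Same command nova runs, via the same oslo.concurrency helper.
    import json
    from oslo_concurrency import processutils

    out, _err = processutils.execute(
        'ceph', 'df', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')

    stats = json.loads(out)['stats']
    # The pgmap lines report the same totals: 60 GiB total, ~306 MiB used.
    print(stats['total_bytes'], stats['total_used_bytes'], stats['total_avail_bytes'])
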
Dec 06 10:13:43 compute-0 nova_compute[254819]: 2025-12-06 10:13:43.957 254824 DEBUG nova.compute.provider_tree [None req-fd8d9698-63db-4258-8b42-2978f0565098 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Inventory has not changed in ProviderTree for provider: 06a9c7d1-c74c-47ea-9e97-16acfab6aa88 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 10:13:43 compute-0 nova_compute[254819]: 2025-12-06 10:13:43.976 254824 DEBUG nova.scheduler.client.report [None req-fd8d9698-63db-4258-8b42-2978f0565098 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Inventory has not changed for provider 06a9c7d1-c74c-47ea-9e97-16acfab6aa88 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
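
Placement treats (total - reserved) * allocation_ratio as the schedulable capacity for each resource class, so the unchanged inventory above advertises 32 VCPUs, 7168 MB of RAM, and 52.2 GB of disk. A quick check of that arithmetic:

    # Worked check of the inventory reported to placement above.
    inventory = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7680, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 59,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        capacity = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
        print(rc, capacity)   # -> VCPU 32.0, MEMORY_MB 7168.0, DISK_GB 52.2
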
Dec 06 10:13:44 compute-0 nova_compute[254819]: 2025-12-06 10:13:44.000 254824 DEBUG oslo_concurrency.lockutils [None req-fd8d9698-63db-4258-8b42-2978f0565098 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.584s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:13:44 compute-0 nova_compute[254819]: 2025-12-06 10:13:44.003 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.215s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:13:44 compute-0 nova_compute[254819]: 2025-12-06 10:13:44.003 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:13:44 compute-0 nova_compute[254819]: 2025-12-06 10:13:44.003 254824 DEBUG nova.compute.resource_tracker [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 10:13:44 compute-0 nova_compute[254819]: 2025-12-06 10:13:44.003 254824 DEBUG oslo_concurrency.processutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 10:13:44 compute-0 nova_compute[254819]: 2025-12-06 10:13:44.064 254824 INFO nova.scheduler.client.report [None req-fd8d9698-63db-4258-8b42-2978f0565098 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Deleted allocations for instance b735e225-377d-4f50-aae2-4bf5dd4eb9fa
Dec 06 10:13:44 compute-0 nova_compute[254819]: 2025-12-06 10:13:44.082 254824 DEBUG nova.compute.manager [req-d04a51d0-a100-4724-a9c2-455b40950721 req-9675eff8-be61-4817-afca-8cf18fbc3746 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: b735e225-377d-4f50-aae2-4bf5dd4eb9fa] Received event network-vif-deleted-923b504a-09da-476b-a8c8-c6c76c5e8343 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 10:13:44 compute-0 nova_compute[254819]: 2025-12-06 10:13:44.121 254824 DEBUG oslo_concurrency.lockutils [None req-fd8d9698-63db-4258-8b42-2978f0565098 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lock "b735e225-377d-4f50-aae2-4bf5dd4eb9fa" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 1.997s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:13:44 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v979: 337 pgs: 337 active+clean; 41 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 319 KiB/s rd, 2.1 MiB/s wr, 86 op/s
Dec 06 10:13:44 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 06 10:13:44 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1801167825' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 10:13:44 compute-0 nova_compute[254819]: 2025-12-06 10:13:44.540 254824 DEBUG oslo_concurrency.processutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.536s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 10:13:44 compute-0 podman[274383]: 2025-12-06 10:13:44.547377491 +0000 UTC m=+0.167204235 container health_status ed08b04f63d3695f0aa2f04fcc706897af4aa24f286fa3cd10a4323ee2c868ab (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Dec 06 10:13:44 compute-0 ceph-mon[74327]: from='client.? 192.168.122.100:0/2410722622' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 10:13:44 compute-0 ceph-mon[74327]: from='client.? 192.168.122.100:0/1801167825' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 10:13:44 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:13:44 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65e4003db0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:13:44 compute-0 nova_compute[254819]: 2025-12-06 10:13:44.703 254824 DEBUG nova.compute.manager [req-1a920767-4aec-4e28-9a68-7750c02bd978 req-19415190-1054-452f-bab7-be3f775e0a6e d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: b735e225-377d-4f50-aae2-4bf5dd4eb9fa] Received event network-vif-plugged-923b504a-09da-476b-a8c8-c6c76c5e8343 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 10:13:44 compute-0 nova_compute[254819]: 2025-12-06 10:13:44.704 254824 DEBUG oslo_concurrency.lockutils [req-1a920767-4aec-4e28-9a68-7750c02bd978 req-19415190-1054-452f-bab7-be3f775e0a6e d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Acquiring lock "b735e225-377d-4f50-aae2-4bf5dd4eb9fa-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:13:44 compute-0 nova_compute[254819]: 2025-12-06 10:13:44.704 254824 DEBUG oslo_concurrency.lockutils [req-1a920767-4aec-4e28-9a68-7750c02bd978 req-19415190-1054-452f-bab7-be3f775e0a6e d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Lock "b735e225-377d-4f50-aae2-4bf5dd4eb9fa-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:13:44 compute-0 nova_compute[254819]: 2025-12-06 10:13:44.705 254824 DEBUG oslo_concurrency.lockutils [req-1a920767-4aec-4e28-9a68-7750c02bd978 req-19415190-1054-452f-bab7-be3f775e0a6e d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Lock "b735e225-377d-4f50-aae2-4bf5dd4eb9fa-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:13:44 compute-0 nova_compute[254819]: 2025-12-06 10:13:44.705 254824 DEBUG nova.compute.manager [req-1a920767-4aec-4e28-9a68-7750c02bd978 req-19415190-1054-452f-bab7-be3f775e0a6e d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: b735e225-377d-4f50-aae2-4bf5dd4eb9fa] No waiting events found dispatching network-vif-plugged-923b504a-09da-476b-a8c8-c6c76c5e8343 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 10:13:44 compute-0 nova_compute[254819]: 2025-12-06 10:13:44.705 254824 WARNING nova.compute.manager [req-1a920767-4aec-4e28-9a68-7750c02bd978 req-19415190-1054-452f-bab7-be3f775e0a6e d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: b735e225-377d-4f50-aae2-4bf5dd4eb9fa] Received unexpected event network-vif-plugged-923b504a-09da-476b-a8c8-c6c76c5e8343 for instance with vm_state deleted and task_state None.
Dec 06 10:13:44 compute-0 nova_compute[254819]: 2025-12-06 10:13:44.707 254824 WARNING nova.virt.libvirt.driver [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 10:13:44 compute-0 nova_compute[254819]: 2025-12-06 10:13:44.708 254824 DEBUG nova.compute.resource_tracker [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4476MB free_disk=59.94276428222656GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 06 10:13:44 compute-0 nova_compute[254819]: 2025-12-06 10:13:44.708 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:13:44 compute-0 nova_compute[254819]: 2025-12-06 10:13:44.708 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:13:44 compute-0 nova_compute[254819]: 2025-12-06 10:13:44.764 254824 DEBUG nova.compute.resource_tracker [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 10:13:44 compute-0 nova_compute[254819]: 2025-12-06 10:13:44.764 254824 DEBUG nova.compute.resource_tracker [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 06 10:13:44 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:13:44 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d4002fe0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:13:44 compute-0 nova_compute[254819]: 2025-12-06 10:13:44.793 254824 DEBUG oslo_concurrency.processutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 10:13:44 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:13:45 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:13:45 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f40041d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:13:45 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 06 10:13:45 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2469516180' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 10:13:45 compute-0 nova_compute[254819]: 2025-12-06 10:13:45.290 254824 DEBUG oslo_concurrency.processutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.497s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 10:13:45 compute-0 nova_compute[254819]: 2025-12-06 10:13:45.296 254824 DEBUG nova.compute.provider_tree [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Inventory has not changed in ProviderTree for provider: 06a9c7d1-c74c-47ea-9e97-16acfab6aa88 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 10:13:45 compute-0 nova_compute[254819]: 2025-12-06 10:13:45.311 254824 DEBUG nova.scheduler.client.report [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Inventory has not changed for provider 06a9c7d1-c74c-47ea-9e97-16acfab6aa88 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 10:13:45 compute-0 nova_compute[254819]: 2025-12-06 10:13:45.340 254824 DEBUG nova.compute.resource_tracker [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 10:13:45 compute-0 nova_compute[254819]: 2025-12-06 10:13:45.340 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.632s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:13:45 compute-0 ceph-mon[74327]: pgmap v979: 337 pgs: 337 active+clean; 41 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 319 KiB/s rd, 2.1 MiB/s wr, 86 op/s
Dec 06 10:13:45 compute-0 ceph-mon[74327]: from='client.? 192.168.122.100:0/2469516180' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 10:13:45 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:13:45 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:13:45 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:13:45.761 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:13:45 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:13:45 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:13:45 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:13:45.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:13:45 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 06 10:13:45 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3717895380' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 10:13:45 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 06 10:13:45 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3717895380' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 10:13:45 compute-0 sudo[274436]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 10:13:45 compute-0 sudo[274436]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:13:45 compute-0 sudo[274436]: pam_unix(sudo:session): session closed for user root
Dec 06 10:13:46 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:13:46.013 162267 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=11, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '9a:dc:0d', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'b6:0a:c4:b8:be:39'}, ipsec=False) old=SB_Global(nb_cfg=10) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 10:13:46 compute-0 nova_compute[254819]: 2025-12-06 10:13:46.015 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:13:46 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:13:46.017 162267 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 06 10:13:46 compute-0 sudo[274461]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Dec 06 10:13:46 compute-0 sudo[274461]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:13:46 compute-0 nova_compute[254819]: 2025-12-06 10:13:46.222 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:13:46 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v980: 337 pgs: 337 active+clean; 41 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 19 KiB/s wr, 25 op/s
Dec 06 10:13:46 compute-0 nova_compute[254819]: 2025-12-06 10:13:46.340 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:13:46 compute-0 nova_compute[254819]: 2025-12-06 10:13:46.341 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:13:46 compute-0 sudo[274461]: pam_unix(sudo:session): session closed for user root
Dec 06 10:13:46 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:13:46 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65e8004790 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:13:46 compute-0 sudo[274518]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 10:13:46 compute-0 sudo[274518]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:13:46 compute-0 sudo[274518]: pam_unix(sudo:session): session closed for user root
Dec 06 10:13:46 compute-0 nova_compute[254819]: 2025-12-06 10:13:46.749 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:13:46 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:13:46 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65e8004790 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:13:46 compute-0 sudo[274543]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 list-networks
Dec 06 10:13:46 compute-0 sudo[274543]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:13:46 compute-0 ceph-mon[74327]: from='client.? 192.168.122.10:0/3717895380' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 10:13:46 compute-0 ceph-mon[74327]: from='client.? 192.168.122.10:0/3717895380' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 10:13:46 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:13:46 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 06 10:13:47 compute-0 sudo[274543]: pam_unix(sudo:session): session closed for user root
Dec 06 10:13:47 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec 06 10:13:47 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 06 10:13:47 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 10:13:47 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec 06 10:13:47 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 10:13:47 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 06 10:13:47 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 10:13:47 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 10:13:47 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:13:47 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65e4003dd0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:13:47 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec 06 10:13:47 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 10:13:47 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec 06 10:13:47 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 10:13:47 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 06 10:13:47 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 10:13:47 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 06 10:13:47 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 10:13:47 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v981: 337 pgs: 337 active+clean; 41 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 21 KiB/s wr, 27 op/s
Dec 06 10:13:47 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 06 10:13:47 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 10:13:47 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Dec 06 10:13:47 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 10:13:47 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 06 10:13:47 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 10:13:47 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 06 10:13:47 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 10:13:47 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 06 10:13:47 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 10:13:47 compute-0 sudo[274586]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 10:13:47 compute-0 sudo[274586]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:13:47 compute-0 sudo[274586]: pam_unix(sudo:session): session closed for user root
Dec 06 10:13:47 compute-0 nova_compute[254819]: 2025-12-06 10:13:47.397 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:13:47 compute-0 nova_compute[254819]: 2025-12-06 10:13:47.444 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:13:47 compute-0 sudo[274611]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 5ecd3f74-dade-5fc4-92ce-8950ae424258 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Dec 06 10:13:47 compute-0 sudo[274611]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:13:47 compute-0 nova_compute[254819]: 2025-12-06 10:13:47.531 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:13:47 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:13:47.652Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 06 10:13:47 compute-0 nova_compute[254819]: 2025-12-06 10:13:47.749 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:13:47 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:13:47 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:13:47 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:13:47.764 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:13:47 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:13:47 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:13:47 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:13:47.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:13:47 compute-0 ceph-mon[74327]: pgmap v980: 337 pgs: 337 active+clean; 41 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 19 KiB/s wr, 25 op/s
Dec 06 10:13:47 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 10:13:47 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 10:13:47 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 10:13:47 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 10:13:47 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 10:13:47 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 10:13:47 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 10:13:47 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 10:13:47 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 10:13:47 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 10:13:47 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 10:13:47 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 10:13:47 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 10:13:47 compute-0 podman[274677]: 2025-12-06 10:13:47.965610015 +0000 UTC m=+0.067704019 container create 3aa45433c79a93704932b75659597c03e6e75706353b72c745c99ed88177420d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_diffie, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 10:13:48 compute-0 systemd[1]: Started libpod-conmon-3aa45433c79a93704932b75659597c03e6e75706353b72c745c99ed88177420d.scope.
Dec 06 10:13:48 compute-0 podman[274677]: 2025-12-06 10:13:47.933575761 +0000 UTC m=+0.035669855 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 06 10:13:48 compute-0 systemd[1]: Started libcrun container.
Dec 06 10:13:48 compute-0 podman[274677]: 2025-12-06 10:13:48.080171639 +0000 UTC m=+0.182265673 container init 3aa45433c79a93704932b75659597c03e6e75706353b72c745c99ed88177420d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_diffie, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Dec 06 10:13:48 compute-0 podman[274677]: 2025-12-06 10:13:48.09318573 +0000 UTC m=+0.195279774 container start 3aa45433c79a93704932b75659597c03e6e75706353b72c745c99ed88177420d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_diffie, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Dec 06 10:13:48 compute-0 podman[274677]: 2025-12-06 10:13:48.097813016 +0000 UTC m=+0.199907120 container attach 3aa45433c79a93704932b75659597c03e6e75706353b72c745c99ed88177420d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_diffie, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 10:13:48 compute-0 sweet_diffie[274693]: 167 167
Dec 06 10:13:48 compute-0 systemd[1]: libpod-3aa45433c79a93704932b75659597c03e6e75706353b72c745c99ed88177420d.scope: Deactivated successfully.
Dec 06 10:13:48 compute-0 podman[274677]: 2025-12-06 10:13:48.103734445 +0000 UTC m=+0.205828489 container died 3aa45433c79a93704932b75659597c03e6e75706353b72c745c99ed88177420d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_diffie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, OSD_FLAVOR=default)
Dec 06 10:13:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-9618038b86bac6f984c3cb6df21a54adab478e3266442412b4a9802006b291aa-merged.mount: Deactivated successfully.
Dec 06 10:13:48 compute-0 podman[274677]: 2025-12-06 10:13:48.159693556 +0000 UTC m=+0.261787600 container remove 3aa45433c79a93704932b75659597c03e6e75706353b72c745c99ed88177420d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_diffie, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec 06 10:13:48 compute-0 systemd[1]: libpod-conmon-3aa45433c79a93704932b75659597c03e6e75706353b72c745c99ed88177420d.scope: Deactivated successfully.
Dec 06 10:13:48 compute-0 podman[274717]: 2025-12-06 10:13:48.342848451 +0000 UTC m=+0.059292491 container create 5c8135f913b2f9e8e24913cd105f49797a3682638131623af72510d11ee065ec (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_pasteur, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec 06 10:13:48 compute-0 podman[274717]: 2025-12-06 10:13:48.311807433 +0000 UTC m=+0.028251533 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 06 10:13:48 compute-0 systemd[1]: Started libpod-conmon-5c8135f913b2f9e8e24913cd105f49797a3682638131623af72510d11ee065ec.scope.
Dec 06 10:13:48 compute-0 systemd[1]: Started libcrun container.
Dec 06 10:13:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca0b37dc18a29588bf234ed9600a530526778458c06080925252782ece7fb7f4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 10:13:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca0b37dc18a29588bf234ed9600a530526778458c06080925252782ece7fb7f4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 10:13:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca0b37dc18a29588bf234ed9600a530526778458c06080925252782ece7fb7f4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 10:13:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca0b37dc18a29588bf234ed9600a530526778458c06080925252782ece7fb7f4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 10:13:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca0b37dc18a29588bf234ed9600a530526778458c06080925252782ece7fb7f4/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 06 10:13:48 compute-0 podman[274717]: 2025-12-06 10:13:48.462638536 +0000 UTC m=+0.179082556 container init 5c8135f913b2f9e8e24913cd105f49797a3682638131623af72510d11ee065ec (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_pasteur, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Dec 06 10:13:48 compute-0 podman[274717]: 2025-12-06 10:13:48.470836847 +0000 UTC m=+0.187280847 container start 5c8135f913b2f9e8e24913cd105f49797a3682638131623af72510d11ee065ec (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_pasteur, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 10:13:48 compute-0 podman[274717]: 2025-12-06 10:13:48.47390613 +0000 UTC m=+0.190350200 container attach 5c8135f913b2f9e8e24913cd105f49797a3682638131623af72510d11ee065ec (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_pasteur, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 10:13:48 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:13:48 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f40041d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:13:48 compute-0 nova_compute[254819]: 2025-12-06 10:13:48.751 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:13:48 compute-0 nova_compute[254819]: 2025-12-06 10:13:48.755 254824 DEBUG nova.compute.manager [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 06 10:13:48 compute-0 nova_compute[254819]: 2025-12-06 10:13:48.755 254824 DEBUG nova.compute.manager [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 06 10:13:48 compute-0 nova_compute[254819]: 2025-12-06 10:13:48.772 254824 DEBUG nova.compute.manager [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 06 10:13:48 compute-0 nova_compute[254819]: 2025-12-06 10:13:48.773 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:13:48 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:13:48 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f40041d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:13:48 compute-0 trusting_pasteur[274733]: --> passed data devices: 0 physical, 1 LVM
Dec 06 10:13:48 compute-0 trusting_pasteur[274733]: --> All data devices are unavailable
Dec 06 10:13:48 compute-0 systemd[1]: libpod-5c8135f913b2f9e8e24913cd105f49797a3682638131623af72510d11ee065ec.scope: Deactivated successfully.
Dec 06 10:13:48 compute-0 podman[274717]: 2025-12-06 10:13:48.880861818 +0000 UTC m=+0.597305868 container died 5c8135f913b2f9e8e24913cd105f49797a3682638131623af72510d11ee065ec (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_pasteur, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 10:13:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-ca0b37dc18a29588bf234ed9600a530526778458c06080925252782ece7fb7f4-merged.mount: Deactivated successfully.
Dec 06 10:13:48 compute-0 ceph-mon[74327]: pgmap v981: 337 pgs: 337 active+clean; 41 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 21 KiB/s wr, 27 op/s
Dec 06 10:13:48 compute-0 ceph-mon[74327]: from='client.? 192.168.122.101:0/4031114062' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 10:13:48 compute-0 ceph-mon[74327]: from='client.? 192.168.122.101:0/2281509267' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 10:13:48 compute-0 podman[274717]: 2025-12-06 10:13:48.935914475 +0000 UTC m=+0.652358475 container remove 5c8135f913b2f9e8e24913cd105f49797a3682638131623af72510d11ee065ec (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_pasteur, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Dec 06 10:13:48 compute-0 systemd[1]: libpod-conmon-5c8135f913b2f9e8e24913cd105f49797a3682638131623af72510d11ee065ec.scope: Deactivated successfully.
Dec 06 10:13:48 compute-0 sudo[274611]: pam_unix(sudo:session): session closed for user root
Dec 06 10:13:49 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:13:49.029Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 06 10:13:49 compute-0 sudo[274760]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 10:13:49 compute-0 sudo[274760]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:13:49 compute-0 sudo[274760]: pam_unix(sudo:session): session closed for user root
Dec 06 10:13:49 compute-0 sudo[274785]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 5ecd3f74-dade-5fc4-92ce-8950ae424258 -- lvm list --format json
Dec 06 10:13:49 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:13:49 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f40041d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:13:49 compute-0 sudo[274785]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:13:49 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v982: 337 pgs: 337 active+clean; 41 MiB data, 274 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 9.7 KiB/s wr, 35 op/s
Dec 06 10:13:49 compute-0 podman[274852]: 2025-12-06 10:13:49.684114456 +0000 UTC m=+0.058445549 container create 67642f059f1126fd30a3fe04752162dd2258ae1073d7efc503d99d5514aef06c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_wilbur, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Dec 06 10:13:49 compute-0 systemd[1]: Started libpod-conmon-67642f059f1126fd30a3fe04752162dd2258ae1073d7efc503d99d5514aef06c.scope.
Dec 06 10:13:49 compute-0 podman[274852]: 2025-12-06 10:13:49.65759522 +0000 UTC m=+0.031926313 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 06 10:13:49 compute-0 nova_compute[254819]: 2025-12-06 10:13:49.764 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:13:49 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:13:49 compute-0 systemd[1]: Started libcrun container.
Dec 06 10:13:49 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:13:49 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:13:49.767 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:13:49 compute-0 podman[274852]: 2025-12-06 10:13:49.791822444 +0000 UTC m=+0.166153587 container init 67642f059f1126fd30a3fe04752162dd2258ae1073d7efc503d99d5514aef06c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_wilbur, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 10:13:49 compute-0 podman[274852]: 2025-12-06 10:13:49.800148649 +0000 UTC m=+0.174479732 container start 67642f059f1126fd30a3fe04752162dd2258ae1073d7efc503d99d5514aef06c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_wilbur, CEPH_REF=squid, io.buildah.version=1.40.1, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec 06 10:13:49 compute-0 podman[274852]: 2025-12-06 10:13:49.804540678 +0000 UTC m=+0.178871831 container attach 67642f059f1126fd30a3fe04752162dd2258ae1073d7efc503d99d5514aef06c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_wilbur, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid)
Dec 06 10:13:49 compute-0 happy_wilbur[274870]: 167 167
Dec 06 10:13:49 compute-0 systemd[1]: libpod-67642f059f1126fd30a3fe04752162dd2258ae1073d7efc503d99d5514aef06c.scope: Deactivated successfully.
Dec 06 10:13:49 compute-0 podman[274852]: 2025-12-06 10:13:49.807711403 +0000 UTC m=+0.182042496 container died 67642f059f1126fd30a3fe04752162dd2258ae1073d7efc503d99d5514aef06c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_wilbur, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Dec 06 10:13:49 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:13:49 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:13:49 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:13:49.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:13:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-eb393a135f6f9732f95d8e0a84bac745c67fd4062abeeaa8ed33b22c525b1abd-merged.mount: Deactivated successfully.
Dec 06 10:13:49 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:13:49 compute-0 podman[274852]: 2025-12-06 10:13:49.853771357 +0000 UTC m=+0.228102450 container remove 67642f059f1126fd30a3fe04752162dd2258ae1073d7efc503d99d5514aef06c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_wilbur, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 10:13:49 compute-0 systemd[1]: libpod-conmon-67642f059f1126fd30a3fe04752162dd2258ae1073d7efc503d99d5514aef06c.scope: Deactivated successfully.
Dec 06 10:13:49 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:13:49 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 06 10:13:49 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:13:49 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 06 10:13:50 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:13:50.020 162267 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=d39b5be8-d4cf-41c7-9a64-1ee03801f4e1, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '11'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 10:13:50 compute-0 podman[274894]: 2025-12-06 10:13:50.114682552 +0000 UTC m=+0.059829557 container create d62356aba93d69f0556537b0c2dce63b8c5b164b9882746077a0966c7451c5ef (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_easley, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2)
Dec 06 10:13:50 compute-0 systemd[1]: Started libpod-conmon-d62356aba93d69f0556537b0c2dce63b8c5b164b9882746077a0966c7451c5ef.scope.
Dec 06 10:13:50 compute-0 systemd[1]: Started libcrun container.
Dec 06 10:13:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3fd2145fe832f46982822957549f7f616bd84a4005e6b62ff378091f1c19d69b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 10:13:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3fd2145fe832f46982822957549f7f616bd84a4005e6b62ff378091f1c19d69b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 10:13:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3fd2145fe832f46982822957549f7f616bd84a4005e6b62ff378091f1c19d69b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 10:13:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3fd2145fe832f46982822957549f7f616bd84a4005e6b62ff378091f1c19d69b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 10:13:50 compute-0 podman[274894]: 2025-12-06 10:13:50.097770066 +0000 UTC m=+0.042917091 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 06 10:13:50 compute-0 podman[274894]: 2025-12-06 10:13:50.199945974 +0000 UTC m=+0.145093039 container init d62356aba93d69f0556537b0c2dce63b8c5b164b9882746077a0966c7451c5ef (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_easley, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec 06 10:13:50 compute-0 podman[274894]: 2025-12-06 10:13:50.214050515 +0000 UTC m=+0.159197570 container start d62356aba93d69f0556537b0c2dce63b8c5b164b9882746077a0966c7451c5ef (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_easley, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=squid, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 10:13:50 compute-0 podman[274894]: 2025-12-06 10:13:50.221030214 +0000 UTC m=+0.166177319 container attach d62356aba93d69f0556537b0c2dce63b8c5b164b9882746077a0966c7451c5ef (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_easley, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2)
Dec 06 10:13:50 compute-0 agitated_easley[274910]: {
Dec 06 10:13:50 compute-0 agitated_easley[274910]:     "1": [
Dec 06 10:13:50 compute-0 agitated_easley[274910]:         {
Dec 06 10:13:50 compute-0 agitated_easley[274910]:             "devices": [
Dec 06 10:13:50 compute-0 agitated_easley[274910]:                 "/dev/loop3"
Dec 06 10:13:50 compute-0 agitated_easley[274910]:             ],
Dec 06 10:13:50 compute-0 agitated_easley[274910]:             "lv_name": "ceph_lv0",
Dec 06 10:13:50 compute-0 agitated_easley[274910]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 10:13:50 compute-0 agitated_easley[274910]:             "lv_size": "21470642176",
Dec 06 10:13:50 compute-0 agitated_easley[274910]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=a55pcS-v3Vt-CJ4W-fqCV-Baze-9VlY-jG9prS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=5ecd3f74-dade-5fc4-92ce-8950ae424258,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=7899c4d8-edb4-4836-b838-c4aa702ad7af,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 06 10:13:50 compute-0 agitated_easley[274910]:             "lv_uuid": "a55pcS-v3Vt-CJ4W-fqCV-Baze-9VlY-jG9prS",
Dec 06 10:13:50 compute-0 agitated_easley[274910]:             "name": "ceph_lv0",
Dec 06 10:13:50 compute-0 agitated_easley[274910]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 10:13:50 compute-0 agitated_easley[274910]:             "tags": {
Dec 06 10:13:50 compute-0 agitated_easley[274910]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 06 10:13:50 compute-0 agitated_easley[274910]:                 "ceph.block_uuid": "a55pcS-v3Vt-CJ4W-fqCV-Baze-9VlY-jG9prS",
Dec 06 10:13:50 compute-0 agitated_easley[274910]:                 "ceph.cephx_lockbox_secret": "",
Dec 06 10:13:50 compute-0 agitated_easley[274910]:                 "ceph.cluster_fsid": "5ecd3f74-dade-5fc4-92ce-8950ae424258",
Dec 06 10:13:50 compute-0 agitated_easley[274910]:                 "ceph.cluster_name": "ceph",
Dec 06 10:13:50 compute-0 agitated_easley[274910]:                 "ceph.crush_device_class": "",
Dec 06 10:13:50 compute-0 agitated_easley[274910]:                 "ceph.encrypted": "0",
Dec 06 10:13:50 compute-0 agitated_easley[274910]:                 "ceph.osd_fsid": "7899c4d8-edb4-4836-b838-c4aa702ad7af",
Dec 06 10:13:50 compute-0 agitated_easley[274910]:                 "ceph.osd_id": "1",
Dec 06 10:13:50 compute-0 agitated_easley[274910]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 06 10:13:50 compute-0 agitated_easley[274910]:                 "ceph.type": "block",
Dec 06 10:13:50 compute-0 agitated_easley[274910]:                 "ceph.vdo": "0",
Dec 06 10:13:50 compute-0 agitated_easley[274910]:                 "ceph.with_tpm": "0"
Dec 06 10:13:50 compute-0 agitated_easley[274910]:             },
Dec 06 10:13:50 compute-0 agitated_easley[274910]:             "type": "block",
Dec 06 10:13:50 compute-0 agitated_easley[274910]:             "vg_name": "ceph_vg0"
Dec 06 10:13:50 compute-0 agitated_easley[274910]:         }
Dec 06 10:13:50 compute-0 agitated_easley[274910]:     ]
Dec 06 10:13:50 compute-0 agitated_easley[274910]: }
Dec 06 10:13:50 compute-0 systemd[1]: libpod-d62356aba93d69f0556537b0c2dce63b8c5b164b9882746077a0966c7451c5ef.scope: Deactivated successfully.
Dec 06 10:13:50 compute-0 podman[274894]: 2025-12-06 10:13:50.565173495 +0000 UTC m=+0.510320550 container died d62356aba93d69f0556537b0c2dce63b8c5b164b9882746077a0966c7451c5ef (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_easley, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 10:13:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-3fd2145fe832f46982822957549f7f616bd84a4005e6b62ff378091f1c19d69b-merged.mount: Deactivated successfully.
Dec 06 10:13:50 compute-0 podman[274894]: 2025-12-06 10:13:50.610190412 +0000 UTC m=+0.555337407 container remove d62356aba93d69f0556537b0c2dce63b8c5b164b9882746077a0966c7451c5ef (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_easley, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325)
Dec 06 10:13:50 compute-0 systemd[1]: libpod-conmon-d62356aba93d69f0556537b0c2dce63b8c5b164b9882746077a0966c7451c5ef.scope: Deactivated successfully.
Dec 06 10:13:50 compute-0 sudo[274785]: pam_unix(sudo:session): session closed for user root
Dec 06 10:13:50 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-haproxy-nfs-cephfs-compute-0-fzuvue[96127]: [WARNING] 339/101350 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Dec 06 10:13:50 compute-0 podman[274920]: 2025-12-06 10:13:50.672686489 +0000 UTC m=+0.068417589 container health_status ec1e66d2175087ad7a28a9a6b5a784e30128df3f5e268959d009e364cd5120f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_managed=true)
Dec 06 10:13:50 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:13:50 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65e4003df0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:13:50 compute-0 sudo[274948]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 10:13:50 compute-0 sudo[274948]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:13:50 compute-0 sudo[274948]: pam_unix(sudo:session): session closed for user root
Dec 06 10:13:50 compute-0 nova_compute[254819]: 2025-12-06 10:13:50.748 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:13:50 compute-0 nova_compute[254819]: 2025-12-06 10:13:50.749 254824 DEBUG nova.compute.manager [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 06 10:13:50 compute-0 sudo[274973]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 5ecd3f74-dade-5fc4-92ce-8950ae424258 -- raw list --format json
Dec 06 10:13:50 compute-0 sudo[274973]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:13:50 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:13:50 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f40041d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:13:50 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:13:50] "GET /metrics HTTP/1.1" 200 48476 "" "Prometheus/2.51.0"
Dec 06 10:13:50 compute-0 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:13:50] "GET /metrics HTTP/1.1" 200 48476 "" "Prometheus/2.51.0"
Dec 06 10:13:50 compute-0 ceph-mon[74327]: pgmap v982: 337 pgs: 337 active+clean; 41 MiB data, 274 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 9.7 KiB/s wr, 35 op/s
Dec 06 10:13:51 compute-0 podman[275041]: 2025-12-06 10:13:51.153159531 +0000 UTC m=+0.037123603 container create e4cba682263fd8d456ff7e5dc7c93d7a5c5509edfedd4442a277f99603d3603a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_cerf, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Dec 06 10:13:51 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:13:51 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f40041d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:13:51 compute-0 systemd[1]: Started libpod-conmon-e4cba682263fd8d456ff7e5dc7c93d7a5c5509edfedd4442a277f99603d3603a.scope.
Dec 06 10:13:51 compute-0 systemd[1]: Started libcrun container.
Dec 06 10:13:51 compute-0 nova_compute[254819]: 2025-12-06 10:13:51.224 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:13:51 compute-0 podman[275041]: 2025-12-06 10:13:51.137573741 +0000 UTC m=+0.021537843 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 06 10:13:51 compute-0 podman[275041]: 2025-12-06 10:13:51.233590943 +0000 UTC m=+0.117555025 container init e4cba682263fd8d456ff7e5dc7c93d7a5c5509edfedd4442a277f99603d3603a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_cerf, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 06 10:13:51 compute-0 podman[275041]: 2025-12-06 10:13:51.24050737 +0000 UTC m=+0.124471442 container start e4cba682263fd8d456ff7e5dc7c93d7a5c5509edfedd4442a277f99603d3603a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_cerf, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Dec 06 10:13:51 compute-0 podman[275041]: 2025-12-06 10:13:51.243573863 +0000 UTC m=+0.127537925 container attach e4cba682263fd8d456ff7e5dc7c93d7a5c5509edfedd4442a277f99603d3603a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_cerf, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0)
Dec 06 10:13:51 compute-0 great_cerf[275058]: 167 167
Dec 06 10:13:51 compute-0 systemd[1]: libpod-e4cba682263fd8d456ff7e5dc7c93d7a5c5509edfedd4442a277f99603d3603a.scope: Deactivated successfully.
Dec 06 10:13:51 compute-0 podman[275041]: 2025-12-06 10:13:51.246568464 +0000 UTC m=+0.130532536 container died e4cba682263fd8d456ff7e5dc7c93d7a5c5509edfedd4442a277f99603d3603a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_cerf, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Dec 06 10:13:51 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v983: 337 pgs: 337 active+clean; 41 MiB data, 274 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 9.7 KiB/s wr, 35 op/s
Dec 06 10:13:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-3ec9be8e12e979d505d1fd1d89cc44bf6532af94bfec497a22d9f70ba1e59666-merged.mount: Deactivated successfully.
Dec 06 10:13:51 compute-0 podman[275041]: 2025-12-06 10:13:51.285225417 +0000 UTC m=+0.169189489 container remove e4cba682263fd8d456ff7e5dc7c93d7a5c5509edfedd4442a277f99603d3603a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_cerf, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Dec 06 10:13:51 compute-0 systemd[1]: libpod-conmon-e4cba682263fd8d456ff7e5dc7c93d7a5c5509edfedd4442a277f99603d3603a.scope: Deactivated successfully.
Dec 06 10:13:51 compute-0 podman[275081]: 2025-12-06 10:13:51.433844121 +0000 UTC m=+0.041674707 container create 01f941f5b409ba17a331438c20c5a0a42ae1fe5bb5b6cdfaa837e86ab0513adf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_hawking, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1)
Dec 06 10:13:51 compute-0 systemd[1]: Started libpod-conmon-01f941f5b409ba17a331438c20c5a0a42ae1fe5bb5b6cdfaa837e86ab0513adf.scope.
Dec 06 10:13:51 compute-0 systemd[1]: Started libcrun container.
Dec 06 10:13:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2bf804974685395eedaa06cea982bd65c5e9dade04a472ca9d719a92c293ac27/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 10:13:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2bf804974685395eedaa06cea982bd65c5e9dade04a472ca9d719a92c293ac27/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 10:13:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2bf804974685395eedaa06cea982bd65c5e9dade04a472ca9d719a92c293ac27/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 10:13:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2bf804974685395eedaa06cea982bd65c5e9dade04a472ca9d719a92c293ac27/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 10:13:51 compute-0 podman[275081]: 2025-12-06 10:13:51.414339054 +0000 UTC m=+0.022169670 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 06 10:13:51 compute-0 podman[275081]: 2025-12-06 10:13:51.509324478 +0000 UTC m=+0.117155084 container init 01f941f5b409ba17a331438c20c5a0a42ae1fe5bb5b6cdfaa837e86ab0513adf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_hawking, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 10:13:51 compute-0 podman[275081]: 2025-12-06 10:13:51.516321507 +0000 UTC m=+0.124152093 container start 01f941f5b409ba17a331438c20c5a0a42ae1fe5bb5b6cdfaa837e86ab0513adf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_hawking, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 10:13:51 compute-0 podman[275081]: 2025-12-06 10:13:51.520269363 +0000 UTC m=+0.128099979 container attach 01f941f5b409ba17a331438c20c5a0a42ae1fe5bb5b6cdfaa837e86ab0513adf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_hawking, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, io.buildah.version=1.40.1)
Dec 06 10:13:51 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:13:51 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:13:51 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:13:51.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:13:51 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:13:51 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:13:51 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:13:51.821 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:13:51 compute-0 ceph-mon[74327]: from='client.? 192.168.122.102:0/2027604282' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 10:13:51 compute-0 ceph-mon[74327]: from='client.? 192.168.122.102:0/2774251911' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 10:13:52 compute-0 lvm[275173]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 06 10:13:52 compute-0 lvm[275173]: VG ceph_vg0 finished
Dec 06 10:13:52 compute-0 amazing_hawking[275097]: {}
Dec 06 10:13:52 compute-0 systemd[1]: libpod-01f941f5b409ba17a331438c20c5a0a42ae1fe5bb5b6cdfaa837e86ab0513adf.scope: Deactivated successfully.
Dec 06 10:13:52 compute-0 podman[275081]: 2025-12-06 10:13:52.229793142 +0000 UTC m=+0.837623758 container died 01f941f5b409ba17a331438c20c5a0a42ae1fe5bb5b6cdfaa837e86ab0513adf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_hawking, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 10:13:52 compute-0 systemd[1]: libpod-01f941f5b409ba17a331438c20c5a0a42ae1fe5bb5b6cdfaa837e86ab0513adf.scope: Consumed 1.106s CPU time.
Dec 06 10:13:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-2bf804974685395eedaa06cea982bd65c5e9dade04a472ca9d719a92c293ac27-merged.mount: Deactivated successfully.
Dec 06 10:13:52 compute-0 podman[275081]: 2025-12-06 10:13:52.277624993 +0000 UTC m=+0.885455589 container remove 01f941f5b409ba17a331438c20c5a0a42ae1fe5bb5b6cdfaa837e86ab0513adf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_hawking, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 10:13:52 compute-0 systemd[1]: libpod-conmon-01f941f5b409ba17a331438c20c5a0a42ae1fe5bb5b6cdfaa837e86ab0513adf.scope: Deactivated successfully.
Dec 06 10:13:52 compute-0 sudo[274973]: pam_unix(sudo:session): session closed for user root
Dec 06 10:13:52 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 06 10:13:52 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 10:13:52 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 06 10:13:52 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 10:13:52 compute-0 nova_compute[254819]: 2025-12-06 10:13:52.399 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:13:52 compute-0 sudo[275191]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 10:13:52 compute-0 sudo[275191]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:13:52 compute-0 sudo[275191]: pam_unix(sudo:session): session closed for user root
Dec 06 10:13:52 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:13:52 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f40041d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:13:52 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:13:52 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65e8004790 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:13:52 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:13:52 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Dec 06 10:13:52 compute-0 ceph-mon[74327]: pgmap v983: 337 pgs: 337 active+clean; 41 MiB data, 274 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 9.7 KiB/s wr, 35 op/s
Dec 06 10:13:52 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 10:13:52 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 10:13:53 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:13:53 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d4002fe0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:13:53 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v984: 337 pgs: 337 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 10 KiB/s wr, 36 op/s
Dec 06 10:13:53 compute-0 sudo[275216]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 10:13:53 compute-0 sudo[275216]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:13:53 compute-0 sudo[275216]: pam_unix(sudo:session): session closed for user root
Dec 06 10:13:53 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:13:53 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:13:53 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:13:53.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:13:53 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:13:53 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:13:53 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:13:53.824 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:13:53 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 06 10:13:53 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 10:13:54 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 10:13:54 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 10:13:54 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 10:13:54 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 10:13:54 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 10:13:54 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 10:13:54 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 10:13:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:13:54.244 162267 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:13:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:13:54.245 162267 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:13:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:13:54.245 162267 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:13:54 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:13:54 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65e4003e30 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:13:54 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:13:54 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65e4003e30 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:13:54 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:13:55 compute-0 ceph-mon[74327]: pgmap v984: 337 pgs: 337 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 10 KiB/s wr, 36 op/s
Dec 06 10:13:55 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:13:55 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65e4003e30 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:13:55 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-haproxy-nfs-cephfs-compute-0-fzuvue[96127]: [WARNING] 339/101355 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Dec 06 10:13:55 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v985: 337 pgs: 337 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 7.2 KiB/s rd, 1.7 KiB/s wr, 10 op/s
Dec 06 10:13:55 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:13:55 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:13:55 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:13:55.774 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:13:55 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:13:55 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:13:55 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:13:55.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:13:56 compute-0 ceph-mon[74327]: pgmap v985: 337 pgs: 337 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 7.2 KiB/s rd, 1.7 KiB/s wr, 10 op/s
Dec 06 10:13:56 compute-0 nova_compute[254819]: 2025-12-06 10:13:56.226 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:13:56 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:13:56 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d4002fe0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:13:56 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:13:56 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d0001d20 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:13:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:13:57 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c0089d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:13:57 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v986: 337 pgs: 337 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 7.2 KiB/s rd, 1.7 KiB/s wr, 10 op/s
Dec 06 10:13:57 compute-0 nova_compute[254819]: 2025-12-06 10:13:57.359 254824 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765016022.3577473, b735e225-377d-4f50-aae2-4bf5dd4eb9fa => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 10:13:57 compute-0 nova_compute[254819]: 2025-12-06 10:13:57.360 254824 INFO nova.compute.manager [-] [instance: b735e225-377d-4f50-aae2-4bf5dd4eb9fa] VM Stopped (Lifecycle Event)
Dec 06 10:13:57 compute-0 nova_compute[254819]: 2025-12-06 10:13:57.387 254824 DEBUG nova.compute.manager [None req-68e9d75a-b2d2-4ca4-a44e-3032ba699fcd - - - - - -] [instance: b735e225-377d-4f50-aae2-4bf5dd4eb9fa] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 10:13:57 compute-0 nova_compute[254819]: 2025-12-06 10:13:57.436 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:13:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:13:57.654Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec 06 10:13:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:13:57.654Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec 06 10:13:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:13:57.655Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec 06 10:13:57 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:13:57 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:13:57 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:13:57.776 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:13:57 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:13:57 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:13:57 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:13:57.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:13:58 compute-0 ceph-mon[74327]: pgmap v986: 337 pgs: 337 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 7.2 KiB/s rd, 1.7 KiB/s wr, 10 op/s
Dec 06 10:13:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:13:58 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65e4003e50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:13:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:13:58 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d4002fe0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:13:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:13:59.031Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 06 10:13:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:13:59 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d00012a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:13:59 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v987: 337 pgs: 337 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 6.7 KiB/s rd, 1.6 KiB/s wr, 10 op/s
Dec 06 10:13:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[105048]: logger=cleanup t=2025-12-06T10:13:59.589508379Z level=info msg="Completed cleanup jobs" duration=25.458398ms
Dec 06 10:13:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[105048]: logger=grafana.update.checker t=2025-12-06T10:13:59.714327438Z level=info msg="Update check succeeded" duration=47.584395ms
Dec 06 10:13:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[105048]: logger=plugins.update.checker t=2025-12-06T10:13:59.721004899Z level=info msg="Update check succeeded" duration=93.902076ms
Dec 06 10:13:59 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:13:59 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:13:59 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:13:59.779 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:13:59 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:13:59 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:13:59 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:13:59.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:13:59 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:14:00 compute-0 ceph-mon[74327]: pgmap v987: 337 pgs: 337 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 6.7 KiB/s rd, 1.6 KiB/s wr, 10 op/s
Dec 06 10:14:00 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:14:00 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c0089d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:14:00 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:14:00 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65e4003e70 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:14:00 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:14:00] "GET /metrics HTTP/1.1" 200 48453 "" "Prometheus/2.51.0"
Dec 06 10:14:00 compute-0 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:14:00] "GET /metrics HTTP/1.1" 200 48453 "" "Prometheus/2.51.0"
Dec 06 10:14:01 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:14:01 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d4002fe0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:14:01 compute-0 nova_compute[254819]: 2025-12-06 10:14:01.228 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:14:01 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v988: 337 pgs: 337 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 255 B/s wr, 1 op/s
Dec 06 10:14:01 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:14:01 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:14:01 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:14:01.780 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:14:01 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:14:01 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:14:01 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:14:01.837 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:14:02 compute-0 ceph-mon[74327]: pgmap v988: 337 pgs: 337 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 255 B/s wr, 1 op/s
Dec 06 10:14:02 compute-0 nova_compute[254819]: 2025-12-06 10:14:02.440 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:14:02 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:14:02 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d0001440 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:14:02 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:14:02 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c0089d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:14:03 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:14:03 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65e4003e90 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:14:03 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v989: 337 pgs: 337 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 255 B/s wr, 1 op/s
Dec 06 10:14:03 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:14:03 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:14:03 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:14:03.783 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:14:03 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:14:03 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:14:03 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:14:03.841 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:14:04 compute-0 nova_compute[254819]: 2025-12-06 10:14:04.219 254824 DEBUG oslo_concurrency.lockutils [None req-fedaf260-6f2f-4acc-ab97-ad70fd6fafd1 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Acquiring lock "1a910dd4-6c75-4618-8b34-925e2d30f8b9" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:14:04 compute-0 nova_compute[254819]: 2025-12-06 10:14:04.219 254824 DEBUG oslo_concurrency.lockutils [None req-fedaf260-6f2f-4acc-ab97-ad70fd6fafd1 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lock "1a910dd4-6c75-4618-8b34-925e2d30f8b9" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:14:04 compute-0 nova_compute[254819]: 2025-12-06 10:14:04.241 254824 DEBUG nova.compute.manager [None req-fedaf260-6f2f-4acc-ab97-ad70fd6fafd1 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 1a910dd4-6c75-4618-8b34-925e2d30f8b9] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 06 10:14:04 compute-0 nova_compute[254819]: 2025-12-06 10:14:04.333 254824 DEBUG oslo_concurrency.lockutils [None req-fedaf260-6f2f-4acc-ab97-ad70fd6fafd1 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:14:04 compute-0 nova_compute[254819]: 2025-12-06 10:14:04.334 254824 DEBUG oslo_concurrency.lockutils [None req-fedaf260-6f2f-4acc-ab97-ad70fd6fafd1 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:14:04 compute-0 nova_compute[254819]: 2025-12-06 10:14:04.341 254824 DEBUG nova.virt.hardware [None req-fedaf260-6f2f-4acc-ab97-ad70fd6fafd1 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 06 10:14:04 compute-0 nova_compute[254819]: 2025-12-06 10:14:04.342 254824 INFO nova.compute.claims [None req-fedaf260-6f2f-4acc-ab97-ad70fd6fafd1 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 1a910dd4-6c75-4618-8b34-925e2d30f8b9] Claim successful on node compute-0.ctlplane.example.com
Dec 06 10:14:04 compute-0 ceph-mon[74327]: pgmap v989: 337 pgs: 337 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 255 B/s wr, 1 op/s
Dec 06 10:14:04 compute-0 nova_compute[254819]: 2025-12-06 10:14:04.449 254824 DEBUG oslo_concurrency.processutils [None req-fedaf260-6f2f-4acc-ab97-ad70fd6fafd1 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 10:14:04 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:14:04 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d4002fe0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:14:04 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:14:04 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d0002cb0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:14:04 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:14:04 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 06 10:14:04 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1452760283' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 10:14:04 compute-0 nova_compute[254819]: 2025-12-06 10:14:04.914 254824 DEBUG oslo_concurrency.processutils [None req-fedaf260-6f2f-4acc-ab97-ad70fd6fafd1 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 10:14:04 compute-0 nova_compute[254819]: 2025-12-06 10:14:04.923 254824 DEBUG nova.compute.provider_tree [None req-fedaf260-6f2f-4acc-ab97-ad70fd6fafd1 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Inventory has not changed in ProviderTree for provider: 06a9c7d1-c74c-47ea-9e97-16acfab6aa88 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 10:14:04 compute-0 nova_compute[254819]: 2025-12-06 10:14:04.942 254824 DEBUG nova.scheduler.client.report [None req-fedaf260-6f2f-4acc-ab97-ad70fd6fafd1 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Inventory has not changed for provider 06a9c7d1-c74c-47ea-9e97-16acfab6aa88 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 10:14:04 compute-0 nova_compute[254819]: 2025-12-06 10:14:04.971 254824 DEBUG oslo_concurrency.lockutils [None req-fedaf260-6f2f-4acc-ab97-ad70fd6fafd1 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.637s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:14:04 compute-0 nova_compute[254819]: 2025-12-06 10:14:04.972 254824 DEBUG nova.compute.manager [None req-fedaf260-6f2f-4acc-ab97-ad70fd6fafd1 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 1a910dd4-6c75-4618-8b34-925e2d30f8b9] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 06 10:14:05 compute-0 nova_compute[254819]: 2025-12-06 10:14:05.029 254824 DEBUG nova.compute.manager [None req-fedaf260-6f2f-4acc-ab97-ad70fd6fafd1 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 1a910dd4-6c75-4618-8b34-925e2d30f8b9] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 06 10:14:05 compute-0 nova_compute[254819]: 2025-12-06 10:14:05.029 254824 DEBUG nova.network.neutron [None req-fedaf260-6f2f-4acc-ab97-ad70fd6fafd1 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 1a910dd4-6c75-4618-8b34-925e2d30f8b9] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 06 10:14:05 compute-0 nova_compute[254819]: 2025-12-06 10:14:05.048 254824 INFO nova.virt.libvirt.driver [None req-fedaf260-6f2f-4acc-ab97-ad70fd6fafd1 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 1a910dd4-6c75-4618-8b34-925e2d30f8b9] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 06 10:14:05 compute-0 nova_compute[254819]: 2025-12-06 10:14:05.074 254824 DEBUG nova.compute.manager [None req-fedaf260-6f2f-4acc-ab97-ad70fd6fafd1 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 1a910dd4-6c75-4618-8b34-925e2d30f8b9] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 06 10:14:05 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:14:05 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c0089d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:14:05 compute-0 nova_compute[254819]: 2025-12-06 10:14:05.192 254824 DEBUG nova.compute.manager [None req-fedaf260-6f2f-4acc-ab97-ad70fd6fafd1 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 1a910dd4-6c75-4618-8b34-925e2d30f8b9] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 06 10:14:05 compute-0 nova_compute[254819]: 2025-12-06 10:14:05.194 254824 DEBUG nova.virt.libvirt.driver [None req-fedaf260-6f2f-4acc-ab97-ad70fd6fafd1 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 1a910dd4-6c75-4618-8b34-925e2d30f8b9] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 06 10:14:05 compute-0 nova_compute[254819]: 2025-12-06 10:14:05.195 254824 INFO nova.virt.libvirt.driver [None req-fedaf260-6f2f-4acc-ab97-ad70fd6fafd1 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 1a910dd4-6c75-4618-8b34-925e2d30f8b9] Creating image(s)
Dec 06 10:14:05 compute-0 nova_compute[254819]: 2025-12-06 10:14:05.230 254824 DEBUG nova.storage.rbd_utils [None req-fedaf260-6f2f-4acc-ab97-ad70fd6fafd1 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] rbd image 1a910dd4-6c75-4618-8b34-925e2d30f8b9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 10:14:05 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v990: 337 pgs: 337 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 B/s wr, 0 op/s
Dec 06 10:14:05 compute-0 nova_compute[254819]: 2025-12-06 10:14:05.265 254824 DEBUG nova.storage.rbd_utils [None req-fedaf260-6f2f-4acc-ab97-ad70fd6fafd1 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] rbd image 1a910dd4-6c75-4618-8b34-925e2d30f8b9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 10:14:05 compute-0 nova_compute[254819]: 2025-12-06 10:14:05.296 254824 DEBUG nova.storage.rbd_utils [None req-fedaf260-6f2f-4acc-ab97-ad70fd6fafd1 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] rbd image 1a910dd4-6c75-4618-8b34-925e2d30f8b9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 10:14:05 compute-0 nova_compute[254819]: 2025-12-06 10:14:05.300 254824 DEBUG oslo_concurrency.processutils [None req-fedaf260-6f2f-4acc-ab97-ad70fd6fafd1 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/1b7208203e670301d076a006cb3364d3eb842050 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 10:14:05 compute-0 nova_compute[254819]: 2025-12-06 10:14:05.362 254824 DEBUG nova.policy [None req-fedaf260-6f2f-4acc-ab97-ad70fd6fafd1 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '03615580775245e6ae335ee9d785611f', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '92b402c8d3e2476abc98be42a1e6d34e', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 06 10:14:05 compute-0 nova_compute[254819]: 2025-12-06 10:14:05.376 254824 DEBUG oslo_concurrency.processutils [None req-fedaf260-6f2f-4acc-ab97-ad70fd6fafd1 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/1b7208203e670301d076a006cb3364d3eb842050 --force-share --output=json" returned: 0 in 0.075s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 10:14:05 compute-0 nova_compute[254819]: 2025-12-06 10:14:05.376 254824 DEBUG oslo_concurrency.lockutils [None req-fedaf260-6f2f-4acc-ab97-ad70fd6fafd1 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Acquiring lock "1b7208203e670301d076a006cb3364d3eb842050" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:14:05 compute-0 nova_compute[254819]: 2025-12-06 10:14:05.377 254824 DEBUG oslo_concurrency.lockutils [None req-fedaf260-6f2f-4acc-ab97-ad70fd6fafd1 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lock "1b7208203e670301d076a006cb3364d3eb842050" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:14:05 compute-0 nova_compute[254819]: 2025-12-06 10:14:05.377 254824 DEBUG oslo_concurrency.lockutils [None req-fedaf260-6f2f-4acc-ab97-ad70fd6fafd1 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lock "1b7208203e670301d076a006cb3364d3eb842050" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:14:05 compute-0 ceph-mon[74327]: from='client.? 192.168.122.100:0/1452760283' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 10:14:05 compute-0 nova_compute[254819]: 2025-12-06 10:14:05.407 254824 DEBUG nova.storage.rbd_utils [None req-fedaf260-6f2f-4acc-ab97-ad70fd6fafd1 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] rbd image 1a910dd4-6c75-4618-8b34-925e2d30f8b9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 10:14:05 compute-0 nova_compute[254819]: 2025-12-06 10:14:05.413 254824 DEBUG oslo_concurrency.processutils [None req-fedaf260-6f2f-4acc-ab97-ad70fd6fafd1 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/1b7208203e670301d076a006cb3364d3eb842050 1a910dd4-6c75-4618-8b34-925e2d30f8b9_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 10:14:05 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:14:05 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:14:05 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:14:05.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:14:05 compute-0 nova_compute[254819]: 2025-12-06 10:14:05.793 254824 DEBUG oslo_concurrency.processutils [None req-fedaf260-6f2f-4acc-ab97-ad70fd6fafd1 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/1b7208203e670301d076a006cb3364d3eb842050 1a910dd4-6c75-4618-8b34-925e2d30f8b9_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.380s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 10:14:05 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:14:05 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:14:05 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:14:05.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:14:05 compute-0 nova_compute[254819]: 2025-12-06 10:14:05.886 254824 DEBUG nova.storage.rbd_utils [None req-fedaf260-6f2f-4acc-ab97-ad70fd6fafd1 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] resizing rbd image 1a910dd4-6c75-4618-8b34-925e2d30f8b9_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 06 10:14:05 compute-0 nova_compute[254819]: 2025-12-06 10:14:05.979 254824 DEBUG nova.objects.instance [None req-fedaf260-6f2f-4acc-ab97-ad70fd6fafd1 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lazy-loading 'migration_context' on Instance uuid 1a910dd4-6c75-4618-8b34-925e2d30f8b9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 10:14:05 compute-0 nova_compute[254819]: 2025-12-06 10:14:05.992 254824 DEBUG nova.virt.libvirt.driver [None req-fedaf260-6f2f-4acc-ab97-ad70fd6fafd1 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 1a910dd4-6c75-4618-8b34-925e2d30f8b9] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 06 10:14:05 compute-0 nova_compute[254819]: 2025-12-06 10:14:05.993 254824 DEBUG nova.virt.libvirt.driver [None req-fedaf260-6f2f-4acc-ab97-ad70fd6fafd1 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 1a910dd4-6c75-4618-8b34-925e2d30f8b9] Ensure instance console log exists: /var/lib/nova/instances/1a910dd4-6c75-4618-8b34-925e2d30f8b9/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 06 10:14:05 compute-0 nova_compute[254819]: 2025-12-06 10:14:05.993 254824 DEBUG oslo_concurrency.lockutils [None req-fedaf260-6f2f-4acc-ab97-ad70fd6fafd1 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:14:05 compute-0 nova_compute[254819]: 2025-12-06 10:14:05.993 254824 DEBUG oslo_concurrency.lockutils [None req-fedaf260-6f2f-4acc-ab97-ad70fd6fafd1 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:14:05 compute-0 nova_compute[254819]: 2025-12-06 10:14:05.994 254824 DEBUG oslo_concurrency.lockutils [None req-fedaf260-6f2f-4acc-ab97-ad70fd6fafd1 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:14:06 compute-0 nova_compute[254819]: 2025-12-06 10:14:06.230 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:14:06 compute-0 ceph-mon[74327]: pgmap v990: 337 pgs: 337 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 B/s wr, 0 op/s
Dec 06 10:14:06 compute-0 nova_compute[254819]: 2025-12-06 10:14:06.466 254824 DEBUG nova.network.neutron [None req-fedaf260-6f2f-4acc-ab97-ad70fd6fafd1 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 1a910dd4-6c75-4618-8b34-925e2d30f8b9] Successfully created port: 6848cb43-8472-434b-a796-f96c3ce423e2 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 06 10:14:06 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:14:06 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65e4003eb0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:14:06 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:14:06 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d4002fe0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:14:07 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:14:07 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d0002cb0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:14:07 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v991: 337 pgs: 337 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 B/s wr, 0 op/s
Dec 06 10:14:07 compute-0 nova_compute[254819]: 2025-12-06 10:14:07.442 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:14:07 compute-0 nova_compute[254819]: 2025-12-06 10:14:07.538 254824 DEBUG nova.network.neutron [None req-fedaf260-6f2f-4acc-ab97-ad70fd6fafd1 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 1a910dd4-6c75-4618-8b34-925e2d30f8b9] Successfully updated port: 6848cb43-8472-434b-a796-f96c3ce423e2 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 06 10:14:07 compute-0 nova_compute[254819]: 2025-12-06 10:14:07.567 254824 DEBUG oslo_concurrency.lockutils [None req-fedaf260-6f2f-4acc-ab97-ad70fd6fafd1 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Acquiring lock "refresh_cache-1a910dd4-6c75-4618-8b34-925e2d30f8b9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 10:14:07 compute-0 nova_compute[254819]: 2025-12-06 10:14:07.568 254824 DEBUG oslo_concurrency.lockutils [None req-fedaf260-6f2f-4acc-ab97-ad70fd6fafd1 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Acquired lock "refresh_cache-1a910dd4-6c75-4618-8b34-925e2d30f8b9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 10:14:07 compute-0 nova_compute[254819]: 2025-12-06 10:14:07.568 254824 DEBUG nova.network.neutron [None req-fedaf260-6f2f-4acc-ab97-ad70fd6fafd1 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 1a910dd4-6c75-4618-8b34-925e2d30f8b9] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 06 10:14:07 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:14:07.655Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 06 10:14:07 compute-0 nova_compute[254819]: 2025-12-06 10:14:07.678 254824 DEBUG nova.compute.manager [req-3f054586-1d4b-4acf-a6eb-52bc949cb625 req-a0967562-f7eb-4d81-a213-ccec718348e7 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 1a910dd4-6c75-4618-8b34-925e2d30f8b9] Received event network-changed-6848cb43-8472-434b-a796-f96c3ce423e2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 10:14:07 compute-0 nova_compute[254819]: 2025-12-06 10:14:07.679 254824 DEBUG nova.compute.manager [req-3f054586-1d4b-4acf-a6eb-52bc949cb625 req-a0967562-f7eb-4d81-a213-ccec718348e7 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 1a910dd4-6c75-4618-8b34-925e2d30f8b9] Refreshing instance network info cache due to event network-changed-6848cb43-8472-434b-a796-f96c3ce423e2. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 10:14:07 compute-0 nova_compute[254819]: 2025-12-06 10:14:07.679 254824 DEBUG oslo_concurrency.lockutils [req-3f054586-1d4b-4acf-a6eb-52bc949cb625 req-a0967562-f7eb-4d81-a213-ccec718348e7 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Acquiring lock "refresh_cache-1a910dd4-6c75-4618-8b34-925e2d30f8b9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 10:14:07 compute-0 nova_compute[254819]: 2025-12-06 10:14:07.717 254824 DEBUG nova.network.neutron [None req-fedaf260-6f2f-4acc-ab97-ad70fd6fafd1 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 1a910dd4-6c75-4618-8b34-925e2d30f8b9] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 06 10:14:07 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:14:07 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:14:07 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:14:07.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:14:07 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:14:07 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:14:07 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:14:07.846 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:14:08 compute-0 ceph-mon[74327]: pgmap v991: 337 pgs: 337 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 B/s wr, 0 op/s
Dec 06 10:14:08 compute-0 nova_compute[254819]: 2025-12-06 10:14:08.555 254824 DEBUG nova.network.neutron [None req-fedaf260-6f2f-4acc-ab97-ad70fd6fafd1 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 1a910dd4-6c75-4618-8b34-925e2d30f8b9] Updating instance_info_cache with network_info: [{"id": "6848cb43-8472-434b-a796-f96c3ce423e2", "address": "fa:16:3e:87:47:c3", "network": {"id": "ef8aaff1-03b0-4544-89c9-035c25f01e5c", "bridge": "br-int", "label": "tempest-network-smoke--1887948682", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6848cb43-84", "ovs_interfaceid": "6848cb43-8472-434b-a796-f96c3ce423e2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 10:14:08 compute-0 nova_compute[254819]: 2025-12-06 10:14:08.589 254824 DEBUG oslo_concurrency.lockutils [None req-fedaf260-6f2f-4acc-ab97-ad70fd6fafd1 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Releasing lock "refresh_cache-1a910dd4-6c75-4618-8b34-925e2d30f8b9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 10:14:08 compute-0 nova_compute[254819]: 2025-12-06 10:14:08.590 254824 DEBUG nova.compute.manager [None req-fedaf260-6f2f-4acc-ab97-ad70fd6fafd1 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 1a910dd4-6c75-4618-8b34-925e2d30f8b9] Instance network_info: |[{"id": "6848cb43-8472-434b-a796-f96c3ce423e2", "address": "fa:16:3e:87:47:c3", "network": {"id": "ef8aaff1-03b0-4544-89c9-035c25f01e5c", "bridge": "br-int", "label": "tempest-network-smoke--1887948682", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6848cb43-84", "ovs_interfaceid": "6848cb43-8472-434b-a796-f96c3ce423e2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 06 10:14:08 compute-0 nova_compute[254819]: 2025-12-06 10:14:08.591 254824 DEBUG oslo_concurrency.lockutils [req-3f054586-1d4b-4acf-a6eb-52bc949cb625 req-a0967562-f7eb-4d81-a213-ccec718348e7 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Acquired lock "refresh_cache-1a910dd4-6c75-4618-8b34-925e2d30f8b9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 10:14:08 compute-0 nova_compute[254819]: 2025-12-06 10:14:08.591 254824 DEBUG nova.network.neutron [req-3f054586-1d4b-4acf-a6eb-52bc949cb625 req-a0967562-f7eb-4d81-a213-ccec718348e7 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 1a910dd4-6c75-4618-8b34-925e2d30f8b9] Refreshing network info cache for port 6848cb43-8472-434b-a796-f96c3ce423e2 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 06 10:14:08 compute-0 nova_compute[254819]: 2025-12-06 10:14:08.594 254824 DEBUG nova.virt.libvirt.driver [None req-fedaf260-6f2f-4acc-ab97-ad70fd6fafd1 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 1a910dd4-6c75-4618-8b34-925e2d30f8b9] Start _get_guest_xml network_info=[{"id": "6848cb43-8472-434b-a796-f96c3ce423e2", "address": "fa:16:3e:87:47:c3", "network": {"id": "ef8aaff1-03b0-4544-89c9-035c25f01e5c", "bridge": "br-int", "label": "tempest-network-smoke--1887948682", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6848cb43-84", "ovs_interfaceid": "6848cb43-8472-434b-a796-f96c3ce423e2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T10:04:42Z,direct_url=<?>,disk_format='qcow2',id=9489b8a5-a798-4e26-87f9-59bb1eb2e6fd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3e0ab101ca7547d4a515169a0f2edef3',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T10:04:45Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'encrypted': False, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '9489b8a5-a798-4e26-87f9-59bb1eb2e6fd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 06 10:14:08 compute-0 nova_compute[254819]: 2025-12-06 10:14:08.599 254824 WARNING nova.virt.libvirt.driver [None req-fedaf260-6f2f-4acc-ab97-ad70fd6fafd1 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 10:14:08 compute-0 nova_compute[254819]: 2025-12-06 10:14:08.604 254824 DEBUG nova.virt.libvirt.host [None req-fedaf260-6f2f-4acc-ab97-ad70fd6fafd1 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 06 10:14:08 compute-0 nova_compute[254819]: 2025-12-06 10:14:08.605 254824 DEBUG nova.virt.libvirt.host [None req-fedaf260-6f2f-4acc-ab97-ad70fd6fafd1 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 06 10:14:08 compute-0 nova_compute[254819]: 2025-12-06 10:14:08.614 254824 DEBUG nova.virt.libvirt.host [None req-fedaf260-6f2f-4acc-ab97-ad70fd6fafd1 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 06 10:14:08 compute-0 nova_compute[254819]: 2025-12-06 10:14:08.615 254824 DEBUG nova.virt.libvirt.host [None req-fedaf260-6f2f-4acc-ab97-ad70fd6fafd1 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 06 10:14:08 compute-0 nova_compute[254819]: 2025-12-06 10:14:08.615 254824 DEBUG nova.virt.libvirt.driver [None req-fedaf260-6f2f-4acc-ab97-ad70fd6fafd1 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 06 10:14:08 compute-0 nova_compute[254819]: 2025-12-06 10:14:08.616 254824 DEBUG nova.virt.hardware [None req-fedaf260-6f2f-4acc-ab97-ad70fd6fafd1 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T10:04:41Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='0a252b9c-cc5f-41b2-a8b2-94fcf6e74d22',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T10:04:42Z,direct_url=<?>,disk_format='qcow2',id=9489b8a5-a798-4e26-87f9-59bb1eb2e6fd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3e0ab101ca7547d4a515169a0f2edef3',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T10:04:45Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 06 10:14:08 compute-0 nova_compute[254819]: 2025-12-06 10:14:08.617 254824 DEBUG nova.virt.hardware [None req-fedaf260-6f2f-4acc-ab97-ad70fd6fafd1 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 06 10:14:08 compute-0 nova_compute[254819]: 2025-12-06 10:14:08.617 254824 DEBUG nova.virt.hardware [None req-fedaf260-6f2f-4acc-ab97-ad70fd6fafd1 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 06 10:14:08 compute-0 nova_compute[254819]: 2025-12-06 10:14:08.617 254824 DEBUG nova.virt.hardware [None req-fedaf260-6f2f-4acc-ab97-ad70fd6fafd1 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 06 10:14:08 compute-0 nova_compute[254819]: 2025-12-06 10:14:08.618 254824 DEBUG nova.virt.hardware [None req-fedaf260-6f2f-4acc-ab97-ad70fd6fafd1 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 06 10:14:08 compute-0 nova_compute[254819]: 2025-12-06 10:14:08.618 254824 DEBUG nova.virt.hardware [None req-fedaf260-6f2f-4acc-ab97-ad70fd6fafd1 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 06 10:14:08 compute-0 nova_compute[254819]: 2025-12-06 10:14:08.618 254824 DEBUG nova.virt.hardware [None req-fedaf260-6f2f-4acc-ab97-ad70fd6fafd1 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 06 10:14:08 compute-0 nova_compute[254819]: 2025-12-06 10:14:08.619 254824 DEBUG nova.virt.hardware [None req-fedaf260-6f2f-4acc-ab97-ad70fd6fafd1 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 06 10:14:08 compute-0 nova_compute[254819]: 2025-12-06 10:14:08.619 254824 DEBUG nova.virt.hardware [None req-fedaf260-6f2f-4acc-ab97-ad70fd6fafd1 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 06 10:14:08 compute-0 nova_compute[254819]: 2025-12-06 10:14:08.619 254824 DEBUG nova.virt.hardware [None req-fedaf260-6f2f-4acc-ab97-ad70fd6fafd1 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 06 10:14:08 compute-0 nova_compute[254819]: 2025-12-06 10:14:08.619 254824 DEBUG nova.virt.hardware [None req-fedaf260-6f2f-4acc-ab97-ad70fd6fafd1 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 06 10:14:08 compute-0 nova_compute[254819]: 2025-12-06 10:14:08.624 254824 DEBUG oslo_concurrency.processutils [None req-fedaf260-6f2f-4acc-ab97-ad70fd6fafd1 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 10:14:08 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:14:08 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c009f60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:14:08 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:14:08 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65e4003ed0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:14:08 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 06 10:14:08 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 10:14:09 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:14:09.032Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec 06 10:14:09 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:14:09.033Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 06 10:14:09 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 06 10:14:09 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2607869369' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 10:14:09 compute-0 nova_compute[254819]: 2025-12-06 10:14:09.065 254824 DEBUG oslo_concurrency.processutils [None req-fedaf260-6f2f-4acc-ab97-ad70fd6fafd1 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 10:14:09 compute-0 nova_compute[254819]: 2025-12-06 10:14:09.097 254824 DEBUG nova.storage.rbd_utils [None req-fedaf260-6f2f-4acc-ab97-ad70fd6fafd1 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] rbd image 1a910dd4-6c75-4618-8b34-925e2d30f8b9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 10:14:09 compute-0 nova_compute[254819]: 2025-12-06 10:14:09.102 254824 DEBUG oslo_concurrency.processutils [None req-fedaf260-6f2f-4acc-ab97-ad70fd6fafd1 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 10:14:09 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:14:09 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d4002fe0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:14:09 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v992: 337 pgs: 337 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 06 10:14:09 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 10:14:09 compute-0 ceph-mon[74327]: from='client.? 192.168.122.100:0/2607869369' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 10:14:09 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 06 10:14:09 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/541364610' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 10:14:09 compute-0 nova_compute[254819]: 2025-12-06 10:14:09.582 254824 DEBUG oslo_concurrency.processutils [None req-fedaf260-6f2f-4acc-ab97-ad70fd6fafd1 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.479s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 10:14:09 compute-0 nova_compute[254819]: 2025-12-06 10:14:09.585 254824 DEBUG nova.virt.libvirt.vif [None req-fedaf260-6f2f-4acc-ab97-ad70fd6fafd1 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T10:14:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-697052485',display_name='tempest-TestNetworkBasicOps-server-697052485',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-697052485',id=11,image_ref='9489b8a5-a798-4e26-87f9-59bb1eb2e6fd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAuhYKdKN9EDS1I/XZyg4WhafMZhuRCMz5uAEJQd26Rxd5WVAmZGHQIQO5WPFhGxsnRcRB0qgDKQ8dvJeA5b8MtdKHCXg8WKkLdZila9zexViJRw9mwokE7iqisT3z+5Ig==',key_name='tempest-TestNetworkBasicOps-1780141244',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='92b402c8d3e2476abc98be42a1e6d34e',ramdisk_id='',reservation_id='r-9i00mr91',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='9489b8a5-a798-4e26-87f9-59bb1eb2e6fd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1971100882',owner_user_name='tempest-TestNetworkBasicOps-1971100882-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T10:14:05Z,user_data=None,user_id='03615580775245e6ae335ee9d785611f',uuid=1a910dd4-6c75-4618-8b34-925e2d30f8b9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6848cb43-8472-434b-a796-f96c3ce423e2", "address": "fa:16:3e:87:47:c3", "network": {"id": "ef8aaff1-03b0-4544-89c9-035c25f01e5c", "bridge": "br-int", "label": "tempest-network-smoke--1887948682", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6848cb43-84", "ovs_interfaceid": "6848cb43-8472-434b-a796-f96c3ce423e2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 06 10:14:09 compute-0 nova_compute[254819]: 2025-12-06 10:14:09.586 254824 DEBUG nova.network.os_vif_util [None req-fedaf260-6f2f-4acc-ab97-ad70fd6fafd1 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Converting VIF {"id": "6848cb43-8472-434b-a796-f96c3ce423e2", "address": "fa:16:3e:87:47:c3", "network": {"id": "ef8aaff1-03b0-4544-89c9-035c25f01e5c", "bridge": "br-int", "label": "tempest-network-smoke--1887948682", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6848cb43-84", "ovs_interfaceid": "6848cb43-8472-434b-a796-f96c3ce423e2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 10:14:09 compute-0 nova_compute[254819]: 2025-12-06 10:14:09.587 254824 DEBUG nova.network.os_vif_util [None req-fedaf260-6f2f-4acc-ab97-ad70fd6fafd1 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:87:47:c3,bridge_name='br-int',has_traffic_filtering=True,id=6848cb43-8472-434b-a796-f96c3ce423e2,network=Network(ef8aaff1-03b0-4544-89c9-035c25f01e5c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6848cb43-84') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 10:14:09 compute-0 nova_compute[254819]: 2025-12-06 10:14:09.589 254824 DEBUG nova.objects.instance [None req-fedaf260-6f2f-4acc-ab97-ad70fd6fafd1 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lazy-loading 'pci_devices' on Instance uuid 1a910dd4-6c75-4618-8b34-925e2d30f8b9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 10:14:09 compute-0 nova_compute[254819]: 2025-12-06 10:14:09.607 254824 DEBUG nova.virt.libvirt.driver [None req-fedaf260-6f2f-4acc-ab97-ad70fd6fafd1 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 1a910dd4-6c75-4618-8b34-925e2d30f8b9] End _get_guest_xml xml=<domain type="kvm">
Dec 06 10:14:09 compute-0 nova_compute[254819]:   <uuid>1a910dd4-6c75-4618-8b34-925e2d30f8b9</uuid>
Dec 06 10:14:09 compute-0 nova_compute[254819]:   <name>instance-0000000b</name>
Dec 06 10:14:09 compute-0 nova_compute[254819]:   <memory>131072</memory>
Dec 06 10:14:09 compute-0 nova_compute[254819]:   <vcpu>1</vcpu>
Dec 06 10:14:09 compute-0 nova_compute[254819]:   <metadata>
Dec 06 10:14:09 compute-0 nova_compute[254819]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 06 10:14:09 compute-0 nova_compute[254819]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 06 10:14:09 compute-0 nova_compute[254819]:       <nova:name>tempest-TestNetworkBasicOps-server-697052485</nova:name>
Dec 06 10:14:09 compute-0 nova_compute[254819]:       <nova:creationTime>2025-12-06 10:14:08</nova:creationTime>
Dec 06 10:14:09 compute-0 nova_compute[254819]:       <nova:flavor name="m1.nano">
Dec 06 10:14:09 compute-0 nova_compute[254819]:         <nova:memory>128</nova:memory>
Dec 06 10:14:09 compute-0 nova_compute[254819]:         <nova:disk>1</nova:disk>
Dec 06 10:14:09 compute-0 nova_compute[254819]:         <nova:swap>0</nova:swap>
Dec 06 10:14:09 compute-0 nova_compute[254819]:         <nova:ephemeral>0</nova:ephemeral>
Dec 06 10:14:09 compute-0 nova_compute[254819]:         <nova:vcpus>1</nova:vcpus>
Dec 06 10:14:09 compute-0 nova_compute[254819]:       </nova:flavor>
Dec 06 10:14:09 compute-0 nova_compute[254819]:       <nova:owner>
Dec 06 10:14:09 compute-0 nova_compute[254819]:         <nova:user uuid="03615580775245e6ae335ee9d785611f">tempest-TestNetworkBasicOps-1971100882-project-member</nova:user>
Dec 06 10:14:09 compute-0 nova_compute[254819]:         <nova:project uuid="92b402c8d3e2476abc98be42a1e6d34e">tempest-TestNetworkBasicOps-1971100882</nova:project>
Dec 06 10:14:09 compute-0 nova_compute[254819]:       </nova:owner>
Dec 06 10:14:09 compute-0 nova_compute[254819]:       <nova:root type="image" uuid="9489b8a5-a798-4e26-87f9-59bb1eb2e6fd"/>
Dec 06 10:14:09 compute-0 nova_compute[254819]:       <nova:ports>
Dec 06 10:14:09 compute-0 nova_compute[254819]:         <nova:port uuid="6848cb43-8472-434b-a796-f96c3ce423e2">
Dec 06 10:14:09 compute-0 nova_compute[254819]:           <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Dec 06 10:14:09 compute-0 nova_compute[254819]:         </nova:port>
Dec 06 10:14:09 compute-0 nova_compute[254819]:       </nova:ports>
Dec 06 10:14:09 compute-0 nova_compute[254819]:     </nova:instance>
Dec 06 10:14:09 compute-0 nova_compute[254819]:   </metadata>
Dec 06 10:14:09 compute-0 nova_compute[254819]:   <sysinfo type="smbios">
Dec 06 10:14:09 compute-0 nova_compute[254819]:     <system>
Dec 06 10:14:09 compute-0 nova_compute[254819]:       <entry name="manufacturer">RDO</entry>
Dec 06 10:14:09 compute-0 nova_compute[254819]:       <entry name="product">OpenStack Compute</entry>
Dec 06 10:14:09 compute-0 nova_compute[254819]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 06 10:14:09 compute-0 nova_compute[254819]:       <entry name="serial">1a910dd4-6c75-4618-8b34-925e2d30f8b9</entry>
Dec 06 10:14:09 compute-0 nova_compute[254819]:       <entry name="uuid">1a910dd4-6c75-4618-8b34-925e2d30f8b9</entry>
Dec 06 10:14:09 compute-0 nova_compute[254819]:       <entry name="family">Virtual Machine</entry>
Dec 06 10:14:09 compute-0 nova_compute[254819]:     </system>
Dec 06 10:14:09 compute-0 nova_compute[254819]:   </sysinfo>
Dec 06 10:14:09 compute-0 nova_compute[254819]:   <os>
Dec 06 10:14:09 compute-0 nova_compute[254819]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 06 10:14:09 compute-0 nova_compute[254819]:     <boot dev="hd"/>
Dec 06 10:14:09 compute-0 nova_compute[254819]:     <smbios mode="sysinfo"/>
Dec 06 10:14:09 compute-0 nova_compute[254819]:   </os>
Dec 06 10:14:09 compute-0 nova_compute[254819]:   <features>
Dec 06 10:14:09 compute-0 nova_compute[254819]:     <acpi/>
Dec 06 10:14:09 compute-0 nova_compute[254819]:     <apic/>
Dec 06 10:14:09 compute-0 nova_compute[254819]:     <vmcoreinfo/>
Dec 06 10:14:09 compute-0 nova_compute[254819]:   </features>
Dec 06 10:14:09 compute-0 nova_compute[254819]:   <clock offset="utc">
Dec 06 10:14:09 compute-0 nova_compute[254819]:     <timer name="pit" tickpolicy="delay"/>
Dec 06 10:14:09 compute-0 nova_compute[254819]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 06 10:14:09 compute-0 nova_compute[254819]:     <timer name="hpet" present="no"/>
Dec 06 10:14:09 compute-0 nova_compute[254819]:   </clock>
Dec 06 10:14:09 compute-0 nova_compute[254819]:   <cpu mode="host-model" match="exact">
Dec 06 10:14:09 compute-0 nova_compute[254819]:     <topology sockets="1" cores="1" threads="1"/>
Dec 06 10:14:09 compute-0 nova_compute[254819]:   </cpu>
Dec 06 10:14:09 compute-0 nova_compute[254819]:   <devices>
Dec 06 10:14:09 compute-0 nova_compute[254819]:     <disk type="network" device="disk">
Dec 06 10:14:09 compute-0 nova_compute[254819]:       <driver type="raw" cache="none"/>
Dec 06 10:14:09 compute-0 nova_compute[254819]:       <source protocol="rbd" name="vms/1a910dd4-6c75-4618-8b34-925e2d30f8b9_disk">
Dec 06 10:14:09 compute-0 nova_compute[254819]:         <host name="192.168.122.100" port="6789"/>
Dec 06 10:14:09 compute-0 nova_compute[254819]:         <host name="192.168.122.102" port="6789"/>
Dec 06 10:14:09 compute-0 nova_compute[254819]:         <host name="192.168.122.101" port="6789"/>
Dec 06 10:14:09 compute-0 nova_compute[254819]:       </source>
Dec 06 10:14:09 compute-0 nova_compute[254819]:       <auth username="openstack">
Dec 06 10:14:09 compute-0 nova_compute[254819]:         <secret type="ceph" uuid="5ecd3f74-dade-5fc4-92ce-8950ae424258"/>
Dec 06 10:14:09 compute-0 nova_compute[254819]:       </auth>
Dec 06 10:14:09 compute-0 nova_compute[254819]:       <target dev="vda" bus="virtio"/>
Dec 06 10:14:09 compute-0 nova_compute[254819]:     </disk>
Dec 06 10:14:09 compute-0 nova_compute[254819]:     <disk type="network" device="cdrom">
Dec 06 10:14:09 compute-0 nova_compute[254819]:       <driver type="raw" cache="none"/>
Dec 06 10:14:09 compute-0 nova_compute[254819]:       <source protocol="rbd" name="vms/1a910dd4-6c75-4618-8b34-925e2d30f8b9_disk.config">
Dec 06 10:14:09 compute-0 nova_compute[254819]:         <host name="192.168.122.100" port="6789"/>
Dec 06 10:14:09 compute-0 nova_compute[254819]:         <host name="192.168.122.102" port="6789"/>
Dec 06 10:14:09 compute-0 nova_compute[254819]:         <host name="192.168.122.101" port="6789"/>
Dec 06 10:14:09 compute-0 nova_compute[254819]:       </source>
Dec 06 10:14:09 compute-0 nova_compute[254819]:       <auth username="openstack">
Dec 06 10:14:09 compute-0 nova_compute[254819]:         <secret type="ceph" uuid="5ecd3f74-dade-5fc4-92ce-8950ae424258"/>
Dec 06 10:14:09 compute-0 nova_compute[254819]:       </auth>
Dec 06 10:14:09 compute-0 nova_compute[254819]:       <target dev="sda" bus="sata"/>
Dec 06 10:14:09 compute-0 nova_compute[254819]:     </disk>
Dec 06 10:14:09 compute-0 nova_compute[254819]:     <interface type="ethernet">
Dec 06 10:14:09 compute-0 nova_compute[254819]:       <mac address="fa:16:3e:87:47:c3"/>
Dec 06 10:14:09 compute-0 nova_compute[254819]:       <model type="virtio"/>
Dec 06 10:14:09 compute-0 nova_compute[254819]:       <driver name="vhost" rx_queue_size="512"/>
Dec 06 10:14:09 compute-0 nova_compute[254819]:       <mtu size="1442"/>
Dec 06 10:14:09 compute-0 nova_compute[254819]:       <target dev="tap6848cb43-84"/>
Dec 06 10:14:09 compute-0 nova_compute[254819]:     </interface>
Dec 06 10:14:09 compute-0 nova_compute[254819]:     <serial type="pty">
Dec 06 10:14:09 compute-0 nova_compute[254819]:       <log file="/var/lib/nova/instances/1a910dd4-6c75-4618-8b34-925e2d30f8b9/console.log" append="off"/>
Dec 06 10:14:09 compute-0 nova_compute[254819]:     </serial>
Dec 06 10:14:09 compute-0 nova_compute[254819]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 06 10:14:09 compute-0 nova_compute[254819]:     <video>
Dec 06 10:14:09 compute-0 nova_compute[254819]:       <model type="virtio"/>
Dec 06 10:14:09 compute-0 nova_compute[254819]:     </video>
Dec 06 10:14:09 compute-0 nova_compute[254819]:     <input type="tablet" bus="usb"/>
Dec 06 10:14:09 compute-0 nova_compute[254819]:     <rng model="virtio">
Dec 06 10:14:09 compute-0 nova_compute[254819]:       <backend model="random">/dev/urandom</backend>
Dec 06 10:14:09 compute-0 nova_compute[254819]:     </rng>
Dec 06 10:14:09 compute-0 nova_compute[254819]:     <controller type="pci" model="pcie-root"/>
Dec 06 10:14:09 compute-0 nova_compute[254819]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:14:09 compute-0 nova_compute[254819]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:14:09 compute-0 nova_compute[254819]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:14:09 compute-0 nova_compute[254819]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:14:09 compute-0 nova_compute[254819]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:14:09 compute-0 nova_compute[254819]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:14:09 compute-0 nova_compute[254819]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:14:09 compute-0 nova_compute[254819]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:14:09 compute-0 nova_compute[254819]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:14:09 compute-0 nova_compute[254819]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:14:09 compute-0 nova_compute[254819]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:14:09 compute-0 nova_compute[254819]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:14:09 compute-0 nova_compute[254819]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:14:09 compute-0 nova_compute[254819]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:14:09 compute-0 nova_compute[254819]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:14:09 compute-0 nova_compute[254819]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:14:09 compute-0 nova_compute[254819]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:14:09 compute-0 nova_compute[254819]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:14:09 compute-0 nova_compute[254819]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:14:09 compute-0 nova_compute[254819]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:14:09 compute-0 nova_compute[254819]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:14:09 compute-0 nova_compute[254819]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:14:09 compute-0 nova_compute[254819]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:14:09 compute-0 nova_compute[254819]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:14:09 compute-0 nova_compute[254819]:     <controller type="usb" index="0"/>
Dec 06 10:14:09 compute-0 nova_compute[254819]:     <memballoon model="virtio">
Dec 06 10:14:09 compute-0 nova_compute[254819]:       <stats period="10"/>
Dec 06 10:14:09 compute-0 nova_compute[254819]:     </memballoon>
Dec 06 10:14:09 compute-0 nova_compute[254819]:   </devices>
Dec 06 10:14:09 compute-0 nova_compute[254819]: </domain>
Dec 06 10:14:09 compute-0 nova_compute[254819]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
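
The domain XML logged above by _get_guest_xml is plain libvirt XML and can be inspected with the Python standard library when debugging a spawn. A minimal sketch; the trimmed XML literal below is a hypothetical excerpt of the logged guest definition, not the full document:

    import xml.etree.ElementTree as ET

    # Hypothetical excerpt of the <devices> section logged above.
    domain_xml = """<domain>
      <devices>
        <interface type="ethernet">
          <target dev="tap6848cb43-84"/>
        </interface>
        <controller type="pci" model="pcie-root-port"/>
        <controller type="pci" model="pcie-root-port"/>
      </devices>
    </domain>"""

    root = ET.fromstring(domain_xml)
    # Count hotplug-capable PCIe slots and list tap devices.
    ports = root.findall(".//controller[@model='pcie-root-port']")
    taps = [t.get("dev") for t in root.findall(".//interface/target")]
    print(len(ports), taps)
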
Dec 06 10:14:09 compute-0 nova_compute[254819]: 2025-12-06 10:14:09.609 254824 DEBUG nova.compute.manager [None req-fedaf260-6f2f-4acc-ab97-ad70fd6fafd1 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 1a910dd4-6c75-4618-8b34-925e2d30f8b9] Preparing to wait for external event network-vif-plugged-6848cb43-8472-434b-a796-f96c3ce423e2 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 06 10:14:09 compute-0 nova_compute[254819]: 2025-12-06 10:14:09.609 254824 DEBUG oslo_concurrency.lockutils [None req-fedaf260-6f2f-4acc-ab97-ad70fd6fafd1 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Acquiring lock "1a910dd4-6c75-4618-8b34-925e2d30f8b9-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:14:09 compute-0 nova_compute[254819]: 2025-12-06 10:14:09.610 254824 DEBUG oslo_concurrency.lockutils [None req-fedaf260-6f2f-4acc-ab97-ad70fd6fafd1 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lock "1a910dd4-6c75-4618-8b34-925e2d30f8b9-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:14:09 compute-0 nova_compute[254819]: 2025-12-06 10:14:09.610 254824 DEBUG oslo_concurrency.lockutils [None req-fedaf260-6f2f-4acc-ab97-ad70fd6fafd1 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lock "1a910dd4-6c75-4618-8b34-925e2d30f8b9-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:14:09 compute-0 nova_compute[254819]: 2025-12-06 10:14:09.611 254824 DEBUG nova.virt.libvirt.vif [None req-fedaf260-6f2f-4acc-ab97-ad70fd6fafd1 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T10:14:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-697052485',display_name='tempest-TestNetworkBasicOps-server-697052485',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-697052485',id=11,image_ref='9489b8a5-a798-4e26-87f9-59bb1eb2e6fd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAuhYKdKN9EDS1I/XZyg4WhafMZhuRCMz5uAEJQd26Rxd5WVAmZGHQIQO5WPFhGxsnRcRB0qgDKQ8dvJeA5b8MtdKHCXg8WKkLdZila9zexViJRw9mwokE7iqisT3z+5Ig==',key_name='tempest-TestNetworkBasicOps-1780141244',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='92b402c8d3e2476abc98be42a1e6d34e',ramdisk_id='',reservation_id='r-9i00mr91',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='9489b8a5-a798-4e26-87f9-59bb1eb2e6fd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1971100882',owner_user_name='tempest-TestNetworkBasicOps-1971100882-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T10:14:05Z,user_data=None,user_id='03615580775245e6ae335ee9d785611f',uuid=1a910dd4-6c75-4618-8b34-925e2d30f8b9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6848cb43-8472-434b-a796-f96c3ce423e2", "address": "fa:16:3e:87:47:c3", "network": {"id": "ef8aaff1-03b0-4544-89c9-035c25f01e5c", "bridge": "br-int", "label": "tempest-network-smoke--1887948682", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6848cb43-84", "ovs_interfaceid": "6848cb43-8472-434b-a796-f96c3ce423e2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 06 10:14:09 compute-0 nova_compute[254819]: 2025-12-06 10:14:09.611 254824 DEBUG nova.network.os_vif_util [None req-fedaf260-6f2f-4acc-ab97-ad70fd6fafd1 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Converting VIF {"id": "6848cb43-8472-434b-a796-f96c3ce423e2", "address": "fa:16:3e:87:47:c3", "network": {"id": "ef8aaff1-03b0-4544-89c9-035c25f01e5c", "bridge": "br-int", "label": "tempest-network-smoke--1887948682", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6848cb43-84", "ovs_interfaceid": "6848cb43-8472-434b-a796-f96c3ce423e2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 10:14:09 compute-0 nova_compute[254819]: 2025-12-06 10:14:09.612 254824 DEBUG nova.network.os_vif_util [None req-fedaf260-6f2f-4acc-ab97-ad70fd6fafd1 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:87:47:c3,bridge_name='br-int',has_traffic_filtering=True,id=6848cb43-8472-434b-a796-f96c3ce423e2,network=Network(ef8aaff1-03b0-4544-89c9-035c25f01e5c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6848cb43-84') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 10:14:09 compute-0 nova_compute[254819]: 2025-12-06 10:14:09.612 254824 DEBUG os_vif [None req-fedaf260-6f2f-4acc-ab97-ad70fd6fafd1 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:87:47:c3,bridge_name='br-int',has_traffic_filtering=True,id=6848cb43-8472-434b-a796-f96c3ce423e2,network=Network(ef8aaff1-03b0-4544-89c9-035c25f01e5c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6848cb43-84') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 06 10:14:09 compute-0 nova_compute[254819]: 2025-12-06 10:14:09.613 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:14:09 compute-0 nova_compute[254819]: 2025-12-06 10:14:09.613 254824 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 10:14:09 compute-0 nova_compute[254819]: 2025-12-06 10:14:09.614 254824 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 10:14:09 compute-0 nova_compute[254819]: 2025-12-06 10:14:09.618 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:14:09 compute-0 nova_compute[254819]: 2025-12-06 10:14:09.619 254824 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6848cb43-84, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 10:14:09 compute-0 nova_compute[254819]: 2025-12-06 10:14:09.619 254824 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap6848cb43-84, col_values=(('external_ids', {'iface-id': '6848cb43-8472-434b-a796-f96c3ce423e2', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:87:47:c3', 'vm-uuid': '1a910dd4-6c75-4618-8b34-925e2d30f8b9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 10:14:09 compute-0 nova_compute[254819]: 2025-12-06 10:14:09.621 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:14:09 compute-0 NetworkManager[48882]: <info>  [1765016049.6221] manager: (tap6848cb43-84): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/75)
Dec 06 10:14:09 compute-0 nova_compute[254819]: 2025-12-06 10:14:09.623 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 10:14:09 compute-0 nova_compute[254819]: 2025-12-06 10:14:09.632 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:14:09 compute-0 nova_compute[254819]: 2025-12-06 10:14:09.633 254824 INFO os_vif [None req-fedaf260-6f2f-4acc-ab97-ad70fd6fafd1 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:87:47:c3,bridge_name='br-int',has_traffic_filtering=True,id=6848cb43-8472-434b-a796-f96c3ce423e2,network=Network(ef8aaff1-03b0-4544-89c9-035c25f01e5c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6848cb43-84')
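
The transactions above (AddBridgeCommand, AddPortCommand, DbSetCommand) are what os-vif's ovs plugin runs to wire the tap into br-int. A minimal sketch of the same calls through ovsdbapp's public API, assuming a local OVSDB unix socket; the bridge, port, and external_ids values are copied from the log:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    # Assumed socket path; adjust to the local ovsdb-server.
    idl = connection.OvsdbIdl.from_server('unix:/run/openvswitch/db.sock',
                                          'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=10))
    with api.transaction(check_error=True) as txn:
        txn.add(api.add_port('br-int', 'tap6848cb43-84', may_exist=True))
        txn.add(api.db_set('Interface', 'tap6848cb43-84', ('external_ids', {
            'iface-id': '6848cb43-8472-434b-a796-f96c3ce423e2',
            'iface-status': 'active',
            'attached-mac': 'fa:16:3e:87:47:c3',
            'vm-uuid': '1a910dd4-6c75-4618-8b34-925e2d30f8b9'})))

Setting the iface-id external_id is the trigger: ovn-controller matches it against the logical port and claims it, which is exactly what the binding|INFO lines further down show.
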
Dec 06 10:14:09 compute-0 nova_compute[254819]: 2025-12-06 10:14:09.689 254824 DEBUG nova.virt.libvirt.driver [None req-fedaf260-6f2f-4acc-ab97-ad70fd6fafd1 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 10:14:09 compute-0 nova_compute[254819]: 2025-12-06 10:14:09.689 254824 DEBUG nova.virt.libvirt.driver [None req-fedaf260-6f2f-4acc-ab97-ad70fd6fafd1 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 10:14:09 compute-0 nova_compute[254819]: 2025-12-06 10:14:09.690 254824 DEBUG nova.virt.libvirt.driver [None req-fedaf260-6f2f-4acc-ab97-ad70fd6fafd1 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] No VIF found with MAC fa:16:3e:87:47:c3, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 06 10:14:09 compute-0 nova_compute[254819]: 2025-12-06 10:14:09.690 254824 INFO nova.virt.libvirt.driver [None req-fedaf260-6f2f-4acc-ab97-ad70fd6fafd1 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 1a910dd4-6c75-4618-8b34-925e2d30f8b9] Using config drive
Dec 06 10:14:09 compute-0 nova_compute[254819]: 2025-12-06 10:14:09.715 254824 DEBUG nova.storage.rbd_utils [None req-fedaf260-6f2f-4acc-ab97-ad70fd6fafd1 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] rbd image 1a910dd4-6c75-4618-8b34-925e2d30f8b9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 10:14:09 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:14:09 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:14:09 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:14:09.790 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:14:09 compute-0 nova_compute[254819]: 2025-12-06 10:14:09.836 254824 DEBUG nova.network.neutron [req-3f054586-1d4b-4acf-a6eb-52bc949cb625 req-a0967562-f7eb-4d81-a213-ccec718348e7 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 1a910dd4-6c75-4618-8b34-925e2d30f8b9] Updated VIF entry in instance network info cache for port 6848cb43-8472-434b-a796-f96c3ce423e2. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 06 10:14:09 compute-0 nova_compute[254819]: 2025-12-06 10:14:09.836 254824 DEBUG nova.network.neutron [req-3f054586-1d4b-4acf-a6eb-52bc949cb625 req-a0967562-f7eb-4d81-a213-ccec718348e7 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 1a910dd4-6c75-4618-8b34-925e2d30f8b9] Updating instance_info_cache with network_info: [{"id": "6848cb43-8472-434b-a796-f96c3ce423e2", "address": "fa:16:3e:87:47:c3", "network": {"id": "ef8aaff1-03b0-4544-89c9-035c25f01e5c", "bridge": "br-int", "label": "tempest-network-smoke--1887948682", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6848cb43-84", "ovs_interfaceid": "6848cb43-8472-434b-a796-f96c3ce423e2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 10:14:09 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:14:09 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:14:09 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:14:09 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:14:09.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:14:09 compute-0 nova_compute[254819]: 2025-12-06 10:14:09.853 254824 DEBUG oslo_concurrency.lockutils [req-3f054586-1d4b-4acf-a6eb-52bc949cb625 req-a0967562-f7eb-4d81-a213-ccec718348e7 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Releasing lock "refresh_cache-1a910dd4-6c75-4618-8b34-925e2d30f8b9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 10:14:10 compute-0 nova_compute[254819]: 2025-12-06 10:14:10.008 254824 INFO nova.virt.libvirt.driver [None req-fedaf260-6f2f-4acc-ab97-ad70fd6fafd1 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 1a910dd4-6c75-4618-8b34-925e2d30f8b9] Creating config drive at /var/lib/nova/instances/1a910dd4-6c75-4618-8b34-925e2d30f8b9/disk.config
Dec 06 10:14:10 compute-0 nova_compute[254819]: 2025-12-06 10:14:10.013 254824 DEBUG oslo_concurrency.processutils [None req-fedaf260-6f2f-4acc-ab97-ad70fd6fafd1 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/1a910dd4-6c75-4618-8b34-925e2d30f8b9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpas_8k0d_ execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 10:14:10 compute-0 nova_compute[254819]: 2025-12-06 10:14:10.141 254824 DEBUG oslo_concurrency.processutils [None req-fedaf260-6f2f-4acc-ab97-ad70fd6fafd1 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/1a910dd4-6c75-4618-8b34-925e2d30f8b9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpas_8k0d_" returned: 0 in 0.128s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 10:14:10 compute-0 nova_compute[254819]: 2025-12-06 10:14:10.167 254824 DEBUG nova.storage.rbd_utils [None req-fedaf260-6f2f-4acc-ab97-ad70fd6fafd1 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] rbd image 1a910dd4-6c75-4618-8b34-925e2d30f8b9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 10:14:10 compute-0 nova_compute[254819]: 2025-12-06 10:14:10.171 254824 DEBUG oslo_concurrency.processutils [None req-fedaf260-6f2f-4acc-ab97-ad70fd6fafd1 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/1a910dd4-6c75-4618-8b34-925e2d30f8b9/disk.config 1a910dd4-6c75-4618-8b34-925e2d30f8b9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 10:14:10 compute-0 nova_compute[254819]: 2025-12-06 10:14:10.343 254824 DEBUG oslo_concurrency.processutils [None req-fedaf260-6f2f-4acc-ab97-ad70fd6fafd1 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/1a910dd4-6c75-4618-8b34-925e2d30f8b9/disk.config 1a910dd4-6c75-4618-8b34-925e2d30f8b9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.172s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 10:14:10 compute-0 nova_compute[254819]: 2025-12-06 10:14:10.346 254824 INFO nova.virt.libvirt.driver [None req-fedaf260-6f2f-4acc-ab97-ad70fd6fafd1 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 1a910dd4-6c75-4618-8b34-925e2d30f8b9] Deleting local config drive /var/lib/nova/instances/1a910dd4-6c75-4618-8b34-925e2d30f8b9/disk.config because it was imported into RBD.
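
The config-drive sequence above reduces to two subprocess calls and a cleanup: build the ISO with mkisofs, import it into the vms RBD pool, delete the local copy. A minimal re-creation with subprocess, using the paths from the log (the staging directory is the throwaway tmp dir from the log; the -publisher flag is dropped for brevity):

    import os
    import subprocess

    inst = '1a910dd4-6c75-4618-8b34-925e2d30f8b9'
    iso = f'/var/lib/nova/instances/{inst}/disk.config'
    # ISO9660 with Joliet (-J) and Rock Ridge (-r), volume label config-2.
    subprocess.run(['/usr/bin/mkisofs', '-o', iso, '-ldots', '-allow-lowercase',
                    '-allow-multidot', '-l', '-quiet', '-J', '-r', '-V', 'config-2',
                    '/tmp/tmpas_8k0d_'], check=True)
    subprocess.run(['rbd', 'import', '--pool', 'vms', iso, f'{inst}_disk.config',
                    '--image-format=2', '--id', 'openstack',
                    '--conf', '/etc/ceph/ceph.conf'], check=True)
    os.unlink(iso)  # the RBD copy is now authoritative
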
Dec 06 10:14:10 compute-0 kernel: tap6848cb43-84: entered promiscuous mode
Dec 06 10:14:10 compute-0 NetworkManager[48882]: <info>  [1765016050.4227] manager: (tap6848cb43-84): new Tun device (/org/freedesktop/NetworkManager/Devices/76)
Dec 06 10:14:10 compute-0 ovn_controller[152417]: 2025-12-06T10:14:10Z|00119|binding|INFO|Claiming lport 6848cb43-8472-434b-a796-f96c3ce423e2 for this chassis.
Dec 06 10:14:10 compute-0 ovn_controller[152417]: 2025-12-06T10:14:10Z|00120|binding|INFO|6848cb43-8472-434b-a796-f96c3ce423e2: Claiming fa:16:3e:87:47:c3 10.100.0.10
Dec 06 10:14:10 compute-0 nova_compute[254819]: 2025-12-06 10:14:10.426 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:14:10 compute-0 nova_compute[254819]: 2025-12-06 10:14:10.433 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:14:10 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:14:10.449 162267 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:87:47:c3 10.100.0.10'], port_security=['fa:16:3e:87:47:c3 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '1a910dd4-6c75-4618-8b34-925e2d30f8b9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ef8aaff1-03b0-4544-89c9-035c25f01e5c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '92b402c8d3e2476abc98be42a1e6d34e', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'b1fd56fd-eb5a-422e-9da4-fb641a59e1a7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e1a37e6e-1014-49d4-9543-ee1567988851, chassis=[<ovs.db.idl.Row object at 0x7f70c28558b0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f70c28558b0>], logical_port=6848cb43-8472-434b-a796-f96c3ce423e2) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 10:14:10 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:14:10.450 162267 INFO neutron.agent.ovn.metadata.agent [-] Port 6848cb43-8472-434b-a796-f96c3ce423e2 in datapath ef8aaff1-03b0-4544-89c9-035c25f01e5c bound to our chassis
Dec 06 10:14:10 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:14:10.451 162267 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ef8aaff1-03b0-4544-89c9-035c25f01e5c
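
The PortBindingUpdatedEvent match above is an ovsdbapp IDL row event: the agent watches southbound Port_Binding updates and reacts when a port lands on its chassis. A rough sketch of such a handler class, simplified from what the agent logs (the chassis comparison is illustrative, not the agent's exact logic):

    from ovsdbapp.backend.ovs_idl import event

    class PortBindingUpdated(event.RowEvent):
        """Fires when a Port_Binding row gains a chassis, i.e. a port is plugged."""

        def __init__(self):
            # Watch unconditional UPDATEs on the SB Port_Binding table.
            super().__init__((self.ROW_UPDATE,), 'Port_Binding', None)

        def run(self, event_, row, old):
            # 'old' carries only the changed columns; empty-to-set chassis
            # means the port was just bound somewhere.
            if not getattr(old, 'chassis', None) and row.chassis:
                print(f'lport {row.logical_port} bound to {row.chassis[0].name}')
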
Dec 06 10:14:10 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:14:10.466 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[9493e419-661b-4d97-b540-4a09d35c4311]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:14:10 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:14:10.466 162267 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapef8aaff1-01 in ovnmeta-ef8aaff1-03b0-4544-89c9-035c25f01e5c namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 06 10:14:10 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:14:10.468 260126 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapef8aaff1-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 06 10:14:10 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:14:10.469 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[c8f40bac-9537-4fb8-8573-ae1ea852c9e7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
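
The privsep round-trips above are neutron's ip_lib building the tapef8aaff1-00/-01 veth pair and parking the -01 end inside the ovnmeta- namespace. A rough equivalent with pyroute2 (the library neutron's privileged ip_lib wraps); the peer-in-namespace form is an assumption about the installed pyroute2 version:

    from pyroute2 import IPRoute, netns

    ns = 'ovnmeta-ef8aaff1-03b0-4544-89c9-035c25f01e5c'
    netns.create(ns)  # raises if it already exists; guard as needed
    ipr = IPRoute()
    # Create the pair with the peer end born directly inside the namespace.
    ipr.link('add', ifname='tapef8aaff1-00', kind='veth',
             peer={'ifname': 'tapef8aaff1-01', 'net_ns_fd': ns})
    idx = ipr.link_lookup(ifname='tapef8aaff1-00')[0]
    ipr.link('set', index=idx, state='up')
    ipr.close()
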
Dec 06 10:14:10 compute-0 ceph-mon[74327]: pgmap v992: 337 pgs: 337 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 06 10:14:10 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:14:10.470 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[09875ad6-fcea-45af-b377-d84bb1fe2579]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:14:10 compute-0 ceph-mon[74327]: from='client.? 192.168.122.100:0/541364610' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 10:14:10 compute-0 systemd-machined[216202]: New machine qemu-8-instance-0000000b.
Dec 06 10:14:10 compute-0 systemd[1]: Started Virtual Machine qemu-8-instance-0000000b.
Dec 06 10:14:10 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:14:10.489 162385 DEBUG oslo.privsep.daemon [-] privsep: reply[acf86e69-4ba0-433b-9e83-beb95c085466]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:14:10 compute-0 systemd-udevd[275593]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 10:14:10 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:14:10.517 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[92136581-6474-4dd0-8b96-f0260e058950]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:14:10 compute-0 NetworkManager[48882]: <info>  [1765016050.5202] device (tap6848cb43-84): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 06 10:14:10 compute-0 NetworkManager[48882]: <info>  [1765016050.5209] device (tap6848cb43-84): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 06 10:14:10 compute-0 ovn_controller[152417]: 2025-12-06T10:14:10Z|00121|binding|INFO|Setting lport 6848cb43-8472-434b-a796-f96c3ce423e2 ovn-installed in OVS
Dec 06 10:14:10 compute-0 ovn_controller[152417]: 2025-12-06T10:14:10Z|00122|binding|INFO|Setting lport 6848cb43-8472-434b-a796-f96c3ce423e2 up in Southbound
Dec 06 10:14:10 compute-0 nova_compute[254819]: 2025-12-06 10:14:10.541 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:14:10 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:14:10.546 260145 DEBUG oslo.privsep.daemon [-] privsep: reply[a960cd7f-27b5-4e26-8e8b-d1e94f7b3954]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:14:10 compute-0 NetworkManager[48882]: <info>  [1765016050.5529] manager: (tapef8aaff1-00): new Veth device (/org/freedesktop/NetworkManager/Devices/77)
Dec 06 10:14:10 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:14:10.552 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[7cd098f1-7921-453b-bd26-b969af36c006]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:14:10 compute-0 systemd-udevd[275601]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 10:14:10 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:14:10.578 260145 DEBUG oslo.privsep.daemon [-] privsep: reply[5c896a6d-7829-4bf9-86f3-67ab8be74bfe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:14:10 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:14:10.581 260145 DEBUG oslo.privsep.daemon [-] privsep: reply[56b931bf-e0a2-4297-a6ab-14cd46980def]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:14:10 compute-0 podman[275584]: 2025-12-06 10:14:10.586178034 +0000 UTC m=+0.091274855 container health_status a9cf33c1bf7891b3fdd9db4717ad5f3587fe9e5acf9171920c6d6334b70a502a (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 10:14:10 compute-0 NetworkManager[48882]: <info>  [1765016050.6024] device (tapef8aaff1-00): carrier: link connected
Dec 06 10:14:10 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:14:10.607 260145 DEBUG oslo.privsep.daemon [-] privsep: reply[18e7d73f-319c-4ac0-b018-1cd0f405b7ab]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:14:10 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:14:10.623 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[72e1323f-5a1d-4c81-a5f3-04b1250d946c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapef8aaff1-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e6:e2:90'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 38], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 442108, 'reachable_time': 27672, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 275634, 'error': None, 'target': 'ovnmeta-ef8aaff1-03b0-4544-89c9-035c25f01e5c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:14:10 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:14:10.636 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[49a896bb-ffc9-466a-adca-0648f33a742e]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fee6:e290'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 442108, 'tstamp': 442108}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 275636, 'error': None, 'target': 'ovnmeta-ef8aaff1-03b0-4544-89c9-035c25f01e5c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:14:10 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:14:10.651 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[cd4b4d0b-6b7e-42ea-99bb-47c39ede224a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapef8aaff1-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e6:e2:90'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 38], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 442108, 'reachable_time': 27672, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 275637, 'error': None, 'target': 'ovnmeta-ef8aaff1-03b0-4544-89c9-035c25f01e5c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:14:10 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:14:10.679 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[a3d8890c-27bc-4234-89dc-eb2a385149ca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:14:10 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:14:10 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d0002cb0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:14:10 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:14:10.732 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[d562be83-7ace-41fd-80ab-1da7e4b2f093]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:14:10 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:14:10.734 162267 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapef8aaff1-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 10:14:10 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:14:10.734 162267 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 10:14:10 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:14:10.735 162267 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapef8aaff1-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 10:14:10 compute-0 NetworkManager[48882]: <info>  [1765016050.7371] manager: (tapef8aaff1-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/78)
Dec 06 10:14:10 compute-0 kernel: tapef8aaff1-00: entered promiscuous mode
Dec 06 10:14:10 compute-0 nova_compute[254819]: 2025-12-06 10:14:10.736 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:14:10 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:14:10.741 162267 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapef8aaff1-00, col_values=(('external_ids', {'iface-id': '6e1dcf71-e1ba-45b9-bb6f-63d6dce249f2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 10:14:10 compute-0 ovn_controller[152417]: 2025-12-06T10:14:10Z|00123|binding|INFO|Releasing lport 6e1dcf71-e1ba-45b9-bb6f-63d6dce249f2 from this chassis (sb_readonly=0)
Dec 06 10:14:10 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:14:10.745 162267 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/ef8aaff1-03b0-4544-89c9-035c25f01e5c.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/ef8aaff1-03b0-4544-89c9-035c25f01e5c.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 06 10:14:10 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:14:10.746 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[d993bfdb-e93d-4a3d-8c6d-58d6007c3d12]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:14:10 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:14:10.747 162267 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 06 10:14:10 compute-0 ovn_metadata_agent[162262]: global
Dec 06 10:14:10 compute-0 ovn_metadata_agent[162262]:     log         /dev/log local0 debug
Dec 06 10:14:10 compute-0 ovn_metadata_agent[162262]:     log-tag     haproxy-metadata-proxy-ef8aaff1-03b0-4544-89c9-035c25f01e5c
Dec 06 10:14:10 compute-0 ovn_metadata_agent[162262]:     user        root
Dec 06 10:14:10 compute-0 ovn_metadata_agent[162262]:     group       root
Dec 06 10:14:10 compute-0 ovn_metadata_agent[162262]:     maxconn     1024
Dec 06 10:14:10 compute-0 ovn_metadata_agent[162262]:     pidfile     /var/lib/neutron/external/pids/ef8aaff1-03b0-4544-89c9-035c25f01e5c.pid.haproxy
Dec 06 10:14:10 compute-0 ovn_metadata_agent[162262]:     daemon
Dec 06 10:14:10 compute-0 ovn_metadata_agent[162262]: 
Dec 06 10:14:10 compute-0 ovn_metadata_agent[162262]: defaults
Dec 06 10:14:10 compute-0 ovn_metadata_agent[162262]:     log global
Dec 06 10:14:10 compute-0 ovn_metadata_agent[162262]:     mode http
Dec 06 10:14:10 compute-0 ovn_metadata_agent[162262]:     option httplog
Dec 06 10:14:10 compute-0 ovn_metadata_agent[162262]:     option dontlognull
Dec 06 10:14:10 compute-0 ovn_metadata_agent[162262]:     option http-server-close
Dec 06 10:14:10 compute-0 ovn_metadata_agent[162262]:     option forwardfor
Dec 06 10:14:10 compute-0 ovn_metadata_agent[162262]:     retries                 3
Dec 06 10:14:10 compute-0 ovn_metadata_agent[162262]:     timeout http-request    30s
Dec 06 10:14:10 compute-0 ovn_metadata_agent[162262]:     timeout connect         30s
Dec 06 10:14:10 compute-0 ovn_metadata_agent[162262]:     timeout client          32s
Dec 06 10:14:10 compute-0 ovn_metadata_agent[162262]:     timeout server          32s
Dec 06 10:14:10 compute-0 ovn_metadata_agent[162262]:     timeout http-keep-alive 30s
Dec 06 10:14:10 compute-0 ovn_metadata_agent[162262]: 
Dec 06 10:14:10 compute-0 ovn_metadata_agent[162262]: 
Dec 06 10:14:10 compute-0 ovn_metadata_agent[162262]: listen listener
Dec 06 10:14:10 compute-0 ovn_metadata_agent[162262]:     bind 169.254.169.254:80
Dec 06 10:14:10 compute-0 ovn_metadata_agent[162262]:     server metadata /var/lib/neutron/metadata_proxy
Dec 06 10:14:10 compute-0 ovn_metadata_agent[162262]:     http-request add-header X-OVN-Network-ID ef8aaff1-03b0-4544-89c9-035c25f01e5c
Dec 06 10:14:10 compute-0 ovn_metadata_agent[162262]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
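
Once that haproxy is running inside the namespace, 169.254.169.254:80 is answered locally and every request is forwarded to the /var/lib/neutron/metadata_proxy socket with the X-OVN-Network-ID header added. From a guest on this network the exchange is plain HTTP; a guest-side sketch with the requests library (the JSON keys are standard OpenStack metadata fields):

    import requests

    # Runs inside the guest; the link-local address is intercepted by the
    # haproxy instance configured above.
    r = requests.get('http://169.254.169.254/openstack/latest/meta_data.json',
                     timeout=5)
    r.raise_for_status()
    print(r.json().get('uuid'))  # the instance UUID, 1a910dd4-... here
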
Dec 06 10:14:10 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:14:10.747 162267 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-ef8aaff1-03b0-4544-89c9-035c25f01e5c', 'env', 'PROCESS_TAG=haproxy-ef8aaff1-03b0-4544-89c9-035c25f01e5c', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/ef8aaff1-03b0-4544-89c9-035c25f01e5c.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Dec 06 10:14:10 compute-0 nova_compute[254819]: 2025-12-06 10:14:10.757 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:14:10 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:14:10 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c009f60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:14:10 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:14:10] "GET /metrics HTTP/1.1" 200 48453 "" "Prometheus/2.51.0"
Dec 06 10:14:10 compute-0 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:14:10] "GET /metrics HTTP/1.1" 200 48453 "" "Prometheus/2.51.0"
Dec 06 10:14:11 compute-0 nova_compute[254819]: 2025-12-06 10:14:11.048 254824 DEBUG nova.virt.driver [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] Emitting event <LifecycleEvent: 1765016051.048324, 1a910dd4-6c75-4618-8b34-925e2d30f8b9 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 10:14:11 compute-0 nova_compute[254819]: 2025-12-06 10:14:11.049 254824 INFO nova.compute.manager [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] [instance: 1a910dd4-6c75-4618-8b34-925e2d30f8b9] VM Started (Lifecycle Event)
Dec 06 10:14:11 compute-0 nova_compute[254819]: 2025-12-06 10:14:11.080 254824 DEBUG nova.compute.manager [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] [instance: 1a910dd4-6c75-4618-8b34-925e2d30f8b9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 10:14:11 compute-0 nova_compute[254819]: 2025-12-06 10:14:11.084 254824 DEBUG nova.virt.driver [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] Emitting event <LifecycleEvent: 1765016051.0491526, 1a910dd4-6c75-4618-8b34-925e2d30f8b9 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 10:14:11 compute-0 nova_compute[254819]: 2025-12-06 10:14:11.084 254824 INFO nova.compute.manager [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] [instance: 1a910dd4-6c75-4618-8b34-925e2d30f8b9] VM Paused (Lifecycle Event)
Dec 06 10:14:11 compute-0 podman[275711]: 2025-12-06 10:14:11.09045419 +0000 UTC m=+0.046762913 container create 33b0ce662b23b6f98eab1f6b3386675cb46da27404914ee64922339023b1534d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=neutron-haproxy-ovnmeta-ef8aaff1-03b0-4544-89c9-035c25f01e5c, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3)
Dec 06 10:14:11 compute-0 nova_compute[254819]: 2025-12-06 10:14:11.104 254824 DEBUG nova.compute.manager [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] [instance: 1a910dd4-6c75-4618-8b34-925e2d30f8b9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 10:14:11 compute-0 nova_compute[254819]: 2025-12-06 10:14:11.107 254824 DEBUG nova.compute.manager [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] [instance: 1a910dd4-6c75-4618-8b34-925e2d30f8b9] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 10:14:11 compute-0 nova_compute[254819]: 2025-12-06 10:14:11.128 254824 INFO nova.compute.manager [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] [instance: 1a910dd4-6c75-4618-8b34-925e2d30f8b9] During sync_power_state the instance has a pending task (spawning). Skip.
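
The sync line above compares DB power_state 0 against VM power_state 3; those integers are nova.compute.power_state constants, so the database still says NOSTATE while libvirt already reports the guest PAUSED (normal mid-spawn, hence the skip). A small lookup mirroring nova's values:

    # Values as defined in nova.compute.power_state.
    POWER_STATE = {0x00: 'NOSTATE', 0x01: 'RUNNING', 0x03: 'PAUSED',
                   0x04: 'SHUTDOWN', 0x06: 'CRASHED', 0x07: 'SUSPENDED'}
    print(POWER_STATE[0], '->', POWER_STATE[3])  # NOSTATE -> PAUSED
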
Dec 06 10:14:11 compute-0 systemd[1]: Started libpod-conmon-33b0ce662b23b6f98eab1f6b3386675cb46da27404914ee64922339023b1534d.scope.
Dec 06 10:14:11 compute-0 systemd[1]: Started libcrun container.
Dec 06 10:14:11 compute-0 podman[275711]: 2025-12-06 10:14:11.067635804 +0000 UTC m=+0.023944527 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3
Dec 06 10:14:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b93e3df8fb7a26445c0dd9f79f250dbd57ab6146ffb6d9a8c76505e995ddf4d/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 06 10:14:11 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:14:11 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65e4003ef0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:14:11 compute-0 podman[275711]: 2025-12-06 10:14:11.183702698 +0000 UTC m=+0.140011411 container init 33b0ce662b23b6f98eab1f6b3386675cb46da27404914ee64922339023b1534d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=neutron-haproxy-ovnmeta-ef8aaff1-03b0-4544-89c9-035c25f01e5c, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 10:14:11 compute-0 podman[275711]: 2025-12-06 10:14:11.188714823 +0000 UTC m=+0.145023526 container start 33b0ce662b23b6f98eab1f6b3386675cb46da27404914ee64922339023b1534d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=neutron-haproxy-ovnmeta-ef8aaff1-03b0-4544-89c9-035c25f01e5c, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.schema-version=1.0)
Dec 06 10:14:11 compute-0 neutron-haproxy-ovnmeta-ef8aaff1-03b0-4544-89c9-035c25f01e5c[275726]: [NOTICE]   (275730) : New worker (275732) forked
Dec 06 10:14:11 compute-0 neutron-haproxy-ovnmeta-ef8aaff1-03b0-4544-89c9-035c25f01e5c[275726]: [NOTICE]   (275730) : Loading success.
Dec 06 10:14:11 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v993: 337 pgs: 337 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 06 10:14:11 compute-0 nova_compute[254819]: 2025-12-06 10:14:11.271 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:14:11 compute-0 nova_compute[254819]: 2025-12-06 10:14:11.397 254824 DEBUG nova.compute.manager [req-9b78dd4a-169a-4ed6-95b1-e6a6ad3c4274 req-0952949f-991f-45b3-a341-258eb4dadc48 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 1a910dd4-6c75-4618-8b34-925e2d30f8b9] Received event network-vif-plugged-6848cb43-8472-434b-a796-f96c3ce423e2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 10:14:11 compute-0 nova_compute[254819]: 2025-12-06 10:14:11.397 254824 DEBUG oslo_concurrency.lockutils [req-9b78dd4a-169a-4ed6-95b1-e6a6ad3c4274 req-0952949f-991f-45b3-a341-258eb4dadc48 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Acquiring lock "1a910dd4-6c75-4618-8b34-925e2d30f8b9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:14:11 compute-0 nova_compute[254819]: 2025-12-06 10:14:11.397 254824 DEBUG oslo_concurrency.lockutils [req-9b78dd4a-169a-4ed6-95b1-e6a6ad3c4274 req-0952949f-991f-45b3-a341-258eb4dadc48 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Lock "1a910dd4-6c75-4618-8b34-925e2d30f8b9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:14:11 compute-0 nova_compute[254819]: 2025-12-06 10:14:11.398 254824 DEBUG oslo_concurrency.lockutils [req-9b78dd4a-169a-4ed6-95b1-e6a6ad3c4274 req-0952949f-991f-45b3-a341-258eb4dadc48 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Lock "1a910dd4-6c75-4618-8b34-925e2d30f8b9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:14:11 compute-0 nova_compute[254819]: 2025-12-06 10:14:11.398 254824 DEBUG nova.compute.manager [req-9b78dd4a-169a-4ed6-95b1-e6a6ad3c4274 req-0952949f-991f-45b3-a341-258eb4dadc48 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 1a910dd4-6c75-4618-8b34-925e2d30f8b9] Processing event network-vif-plugged-6848cb43-8472-434b-a796-f96c3ce423e2 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 06 10:14:11 compute-0 nova_compute[254819]: 2025-12-06 10:14:11.399 254824 DEBUG nova.compute.manager [None req-fedaf260-6f2f-4acc-ab97-ad70fd6fafd1 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 1a910dd4-6c75-4618-8b34-925e2d30f8b9] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
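The three lockutils lines above show how nova serializes access to its per-instance event list: the lock name is the instance UUID with an "-events" suffix, and pop_instance_event runs under it. A minimal sketch of the same pattern with oslo.concurrency (the store and function names are illustrative, not nova's actual internals):

    from oslo_concurrency import lockutils

    pending = {}  # hypothetical store: instance_uuid -> list of pending event names

    def pop_event(instance_uuid, event_name):
        # Mirrors the "Acquiring lock ... -events" / "released" pair in the log:
        # the lock name is derived from the instance UUID.
        with lockutils.lock('%s-events' % instance_uuid):
            events = pending.get(instance_uuid, [])
            if event_name in events:
                events.remove(event_name)
                return event_name
            # Otherwise we are on the "No waiting events found" / unexpected-event
            # warning path seen later in this log (10:14:13.515).
            return None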
Dec 06 10:14:11 compute-0 nova_compute[254819]: 2025-12-06 10:14:11.403 254824 DEBUG nova.virt.driver [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] Emitting event <LifecycleEvent: 1765016051.4029374, 1a910dd4-6c75-4618-8b34-925e2d30f8b9 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 10:14:11 compute-0 nova_compute[254819]: 2025-12-06 10:14:11.403 254824 INFO nova.compute.manager [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] [instance: 1a910dd4-6c75-4618-8b34-925e2d30f8b9] VM Resumed (Lifecycle Event)
Dec 06 10:14:11 compute-0 nova_compute[254819]: 2025-12-06 10:14:11.405 254824 DEBUG nova.virt.libvirt.driver [None req-fedaf260-6f2f-4acc-ab97-ad70fd6fafd1 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 1a910dd4-6c75-4618-8b34-925e2d30f8b9] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 06 10:14:11 compute-0 nova_compute[254819]: 2025-12-06 10:14:11.408 254824 INFO nova.virt.libvirt.driver [-] [instance: 1a910dd4-6c75-4618-8b34-925e2d30f8b9] Instance spawned successfully.
Dec 06 10:14:11 compute-0 nova_compute[254819]: 2025-12-06 10:14:11.408 254824 DEBUG nova.virt.libvirt.driver [None req-fedaf260-6f2f-4acc-ab97-ad70fd6fafd1 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 1a910dd4-6c75-4618-8b34-925e2d30f8b9] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 06 10:14:11 compute-0 nova_compute[254819]: 2025-12-06 10:14:11.433 254824 DEBUG nova.compute.manager [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] [instance: 1a910dd4-6c75-4618-8b34-925e2d30f8b9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 10:14:11 compute-0 nova_compute[254819]: 2025-12-06 10:14:11.437 254824 DEBUG nova.virt.libvirt.driver [None req-fedaf260-6f2f-4acc-ab97-ad70fd6fafd1 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 1a910dd4-6c75-4618-8b34-925e2d30f8b9] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 10:14:11 compute-0 nova_compute[254819]: 2025-12-06 10:14:11.437 254824 DEBUG nova.virt.libvirt.driver [None req-fedaf260-6f2f-4acc-ab97-ad70fd6fafd1 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 1a910dd4-6c75-4618-8b34-925e2d30f8b9] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 10:14:11 compute-0 nova_compute[254819]: 2025-12-06 10:14:11.437 254824 DEBUG nova.virt.libvirt.driver [None req-fedaf260-6f2f-4acc-ab97-ad70fd6fafd1 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 1a910dd4-6c75-4618-8b34-925e2d30f8b9] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 10:14:11 compute-0 nova_compute[254819]: 2025-12-06 10:14:11.438 254824 DEBUG nova.virt.libvirt.driver [None req-fedaf260-6f2f-4acc-ab97-ad70fd6fafd1 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 1a910dd4-6c75-4618-8b34-925e2d30f8b9] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 10:14:11 compute-0 nova_compute[254819]: 2025-12-06 10:14:11.438 254824 DEBUG nova.virt.libvirt.driver [None req-fedaf260-6f2f-4acc-ab97-ad70fd6fafd1 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 1a910dd4-6c75-4618-8b34-925e2d30f8b9] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 10:14:11 compute-0 nova_compute[254819]: 2025-12-06 10:14:11.439 254824 DEBUG nova.virt.libvirt.driver [None req-fedaf260-6f2f-4acc-ab97-ad70fd6fafd1 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 1a910dd4-6c75-4618-8b34-925e2d30f8b9] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 10:14:11 compute-0 nova_compute[254819]: 2025-12-06 10:14:11.443 254824 DEBUG nova.compute.manager [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] [instance: 1a910dd4-6c75-4618-8b34-925e2d30f8b9] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 10:14:11 compute-0 nova_compute[254819]: 2025-12-06 10:14:11.479 254824 INFO nova.compute.manager [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] [instance: 1a910dd4-6c75-4618-8b34-925e2d30f8b9] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 06 10:14:11 compute-0 nova_compute[254819]: 2025-12-06 10:14:11.505 254824 INFO nova.compute.manager [None req-fedaf260-6f2f-4acc-ab97-ad70fd6fafd1 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 1a910dd4-6c75-4618-8b34-925e2d30f8b9] Took 6.31 seconds to spawn the instance on the hypervisor.
Dec 06 10:14:11 compute-0 nova_compute[254819]: 2025-12-06 10:14:11.506 254824 DEBUG nova.compute.manager [None req-fedaf260-6f2f-4acc-ab97-ad70fd6fafd1 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 1a910dd4-6c75-4618-8b34-925e2d30f8b9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 10:14:11 compute-0 nova_compute[254819]: 2025-12-06 10:14:11.571 254824 INFO nova.compute.manager [None req-fedaf260-6f2f-4acc-ab97-ad70fd6fafd1 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 1a910dd4-6c75-4618-8b34-925e2d30f8b9] Took 7.27 seconds to build instance.
Dec 06 10:14:11 compute-0 nova_compute[254819]: 2025-12-06 10:14:11.587 254824 DEBUG oslo_concurrency.lockutils [None req-fedaf260-6f2f-4acc-ab97-ad70fd6fafd1 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lock "1a910dd4-6c75-4618-8b34-925e2d30f8b9" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.368s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:14:11 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:14:11 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:14:11 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:14:11.792 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:14:11 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:14:11 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:14:11 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:14:11.850 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
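The radosgw "beast" access lines that recur throughout this log share a fixed layout (request pointer, client address, user, timestamp, request line, HTTP status, byte count, latency), and the anonymous HEAD / probes arriving every ~2 seconds from 192.168.122.100 and .102 look like external health checks. A small parser fitted to exactly the sample lines shown here (the regex is an assumption based on this output, not a documented radosgw format):

    import re

    BEAST = re.compile(
        r'beast: \S+: (?P<ip>\S+) - (?P<user>\S+) '
        r'\[(?P<ts>[^\]]+)\] "(?P<req>[^"]+)" (?P<status>\d+) (?P<bytes>\d+) '
        r'.*latency=(?P<latency>[\d.]+)s'
    )

    line = ('beast: 0x7f53e66225d0: 192.168.122.100 - anonymous '
            '[06/Dec/2025:10:14:11.792 +0000] "HEAD / HTTP/1.0" 200 0 - - - '
            'latency=0.000000000s')
    m = BEAST.search(line)
    assert m is not None
    print(m['ip'], m['req'], m['status'], float(m['latency']))
    # -> 192.168.122.100 HEAD / HTTP/1.0 200 0.0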
Dec 06 10:14:12 compute-0 ceph-mon[74327]: pgmap v993: 337 pgs: 337 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 06 10:14:12 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:14:12 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d4002fe0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:14:12 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:14:12 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d0002cb0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:14:13 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:14:13 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c009f60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:14:13 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v994: 337 pgs: 337 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.6 MiB/s rd, 1.8 MiB/s wr, 90 op/s
Dec 06 10:14:13 compute-0 nova_compute[254819]: 2025-12-06 10:14:13.513 254824 DEBUG nova.compute.manager [req-a0f2fe5e-3c63-4d6e-bdb3-c61f698da463 req-afc37890-6cf7-4cab-bf90-726093e26326 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 1a910dd4-6c75-4618-8b34-925e2d30f8b9] Received event network-vif-plugged-6848cb43-8472-434b-a796-f96c3ce423e2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 10:14:13 compute-0 nova_compute[254819]: 2025-12-06 10:14:13.513 254824 DEBUG oslo_concurrency.lockutils [req-a0f2fe5e-3c63-4d6e-bdb3-c61f698da463 req-afc37890-6cf7-4cab-bf90-726093e26326 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Acquiring lock "1a910dd4-6c75-4618-8b34-925e2d30f8b9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:14:13 compute-0 nova_compute[254819]: 2025-12-06 10:14:13.514 254824 DEBUG oslo_concurrency.lockutils [req-a0f2fe5e-3c63-4d6e-bdb3-c61f698da463 req-afc37890-6cf7-4cab-bf90-726093e26326 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Lock "1a910dd4-6c75-4618-8b34-925e2d30f8b9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:14:13 compute-0 nova_compute[254819]: 2025-12-06 10:14:13.514 254824 DEBUG oslo_concurrency.lockutils [req-a0f2fe5e-3c63-4d6e-bdb3-c61f698da463 req-afc37890-6cf7-4cab-bf90-726093e26326 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Lock "1a910dd4-6c75-4618-8b34-925e2d30f8b9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:14:13 compute-0 nova_compute[254819]: 2025-12-06 10:14:13.514 254824 DEBUG nova.compute.manager [req-a0f2fe5e-3c63-4d6e-bdb3-c61f698da463 req-afc37890-6cf7-4cab-bf90-726093e26326 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 1a910dd4-6c75-4618-8b34-925e2d30f8b9] No waiting events found dispatching network-vif-plugged-6848cb43-8472-434b-a796-f96c3ce423e2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 10:14:13 compute-0 nova_compute[254819]: 2025-12-06 10:14:13.515 254824 WARNING nova.compute.manager [req-a0f2fe5e-3c63-4d6e-bdb3-c61f698da463 req-afc37890-6cf7-4cab-bf90-726093e26326 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 1a910dd4-6c75-4618-8b34-925e2d30f8b9] Received unexpected event network-vif-plugged-6848cb43-8472-434b-a796-f96c3ce423e2 for instance with vm_state active and task_state None.
Dec 06 10:14:13 compute-0 sudo[275745]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 10:14:13 compute-0 sudo[275745]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:14:13 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:14:13 compute-0 sudo[275745]: pam_unix(sudo:session): session closed for user root
Dec 06 10:14:13 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:14:13 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:14:13.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:14:13 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:14:13 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:14:13 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:14:13.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:14:14 compute-0 NetworkManager[48882]: <info>  [1765016054.3999] manager: (patch-provnet-c81e973e-7ff9-4cd2-9994-daf87649321f-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/79)
Dec 06 10:14:14 compute-0 ovn_controller[152417]: 2025-12-06T10:14:14Z|00124|binding|INFO|Releasing lport 6e1dcf71-e1ba-45b9-bb6f-63d6dce249f2 from this chassis (sb_readonly=0)
Dec 06 10:14:14 compute-0 nova_compute[254819]: 2025-12-06 10:14:14.397 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:14:14 compute-0 NetworkManager[48882]: <info>  [1765016054.4029] manager: (patch-br-int-to-provnet-c81e973e-7ff9-4cd2-9994-daf87649321f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/80)
Dec 06 10:14:14 compute-0 nova_compute[254819]: 2025-12-06 10:14:14.452 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:14:14 compute-0 ovn_controller[152417]: 2025-12-06T10:14:14Z|00125|binding|INFO|Releasing lport 6e1dcf71-e1ba-45b9-bb6f-63d6dce249f2 from this chassis (sb_readonly=0)
Dec 06 10:14:14 compute-0 nova_compute[254819]: 2025-12-06 10:14:14.460 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:14:14 compute-0 ceph-mon[74327]: pgmap v994: 337 pgs: 337 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.6 MiB/s rd, 1.8 MiB/s wr, 90 op/s
Dec 06 10:14:14 compute-0 nova_compute[254819]: 2025-12-06 10:14:14.622 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:14:14 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:14:14 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65e4003f10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:14:14 compute-0 nova_compute[254819]: 2025-12-06 10:14:14.749 254824 DEBUG nova.compute.manager [req-7321cbae-c57f-4422-a97d-760470f150c9 req-0b83b411-725f-4162-8395-e92685ecdacc d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 1a910dd4-6c75-4618-8b34-925e2d30f8b9] Received event network-changed-6848cb43-8472-434b-a796-f96c3ce423e2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 10:14:14 compute-0 nova_compute[254819]: 2025-12-06 10:14:14.749 254824 DEBUG nova.compute.manager [req-7321cbae-c57f-4422-a97d-760470f150c9 req-0b83b411-725f-4162-8395-e92685ecdacc d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 1a910dd4-6c75-4618-8b34-925e2d30f8b9] Refreshing instance network info cache due to event network-changed-6848cb43-8472-434b-a796-f96c3ce423e2. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 10:14:14 compute-0 nova_compute[254819]: 2025-12-06 10:14:14.750 254824 DEBUG oslo_concurrency.lockutils [req-7321cbae-c57f-4422-a97d-760470f150c9 req-0b83b411-725f-4162-8395-e92685ecdacc d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Acquiring lock "refresh_cache-1a910dd4-6c75-4618-8b34-925e2d30f8b9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 10:14:14 compute-0 nova_compute[254819]: 2025-12-06 10:14:14.750 254824 DEBUG oslo_concurrency.lockutils [req-7321cbae-c57f-4422-a97d-760470f150c9 req-0b83b411-725f-4162-8395-e92685ecdacc d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Acquired lock "refresh_cache-1a910dd4-6c75-4618-8b34-925e2d30f8b9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 10:14:14 compute-0 nova_compute[254819]: 2025-12-06 10:14:14.750 254824 DEBUG nova.network.neutron [req-7321cbae-c57f-4422-a97d-760470f150c9 req-0b83b411-725f-4162-8395-e92685ecdacc d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 1a910dd4-6c75-4618-8b34-925e2d30f8b9] Refreshing network info cache for port 6848cb43-8472-434b-a796-f96c3ce423e2 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 06 10:14:14 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:14:14 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d4002fe0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:14:14 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:14:15 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:14:15 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d0002cb0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:14:15 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v995: 337 pgs: 337 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.6 MiB/s rd, 1.8 MiB/s wr, 90 op/s
Dec 06 10:14:15 compute-0 podman[275771]: 2025-12-06 10:14:15.503051362 +0000 UTC m=+0.125780526 container health_status ed08b04f63d3695f0aa2f04fcc706897af4aa24f286fa3cd10a4323ee2c868ab (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Dec 06 10:14:15 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:14:15 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:14:15 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:14:15.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:14:15 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:14:15 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:14:15 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:14:15.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:14:16 compute-0 nova_compute[254819]: 2025-12-06 10:14:16.237 254824 DEBUG nova.network.neutron [req-7321cbae-c57f-4422-a97d-760470f150c9 req-0b83b411-725f-4162-8395-e92685ecdacc d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 1a910dd4-6c75-4618-8b34-925e2d30f8b9] Updated VIF entry in instance network info cache for port 6848cb43-8472-434b-a796-f96c3ce423e2. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 06 10:14:16 compute-0 nova_compute[254819]: 2025-12-06 10:14:16.237 254824 DEBUG nova.network.neutron [req-7321cbae-c57f-4422-a97d-760470f150c9 req-0b83b411-725f-4162-8395-e92685ecdacc d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 1a910dd4-6c75-4618-8b34-925e2d30f8b9] Updating instance_info_cache with network_info: [{"id": "6848cb43-8472-434b-a796-f96c3ce423e2", "address": "fa:16:3e:87:47:c3", "network": {"id": "ef8aaff1-03b0-4544-89c9-035c25f01e5c", "bridge": "br-int", "label": "tempest-network-smoke--1887948682", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.229", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6848cb43-84", "ovs_interfaceid": "6848cb43-8472-434b-a796-f96c3ce423e2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 10:14:16 compute-0 nova_compute[254819]: 2025-12-06 10:14:16.257 254824 DEBUG oslo_concurrency.lockutils [req-7321cbae-c57f-4422-a97d-760470f150c9 req-0b83b411-725f-4162-8395-e92685ecdacc d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Releasing lock "refresh_cache-1a910dd4-6c75-4618-8b34-925e2d30f8b9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
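The cache update at 10:14:16.237 embeds the full network_info structure as a dict literal: one OVS VIF with fixed IP 10.100.0.10 and floating IP 192.168.122.229. A short sketch of walking that structure, abbreviated to the fields quoted from the log (the traversal code is illustrative, not nova's):

    # network_info abbreviated from the "Updating instance_info_cache" line above.
    network_info = [{
        "id": "6848cb43-8472-434b-a796-f96c3ce423e2",
        "address": "fa:16:3e:87:47:c3",
        "network": {"subnets": [{
            "cidr": "10.100.0.0/28",
            "ips": [{"address": "10.100.0.10",
                     "floating_ips": [{"address": "192.168.122.229"}]}],
        }]},
    }]

    for vif in network_info:
        for subnet in vif["network"]["subnets"]:
            for ip in subnet["ips"]:
                floating = [f["address"] for f in ip.get("floating_ips", [])]
                print(vif["id"], ip["address"], floating)
    # -> 6848cb43-8472-434b-a796-f96c3ce423e2 10.100.0.10 ['192.168.122.229']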
Dec 06 10:14:16 compute-0 nova_compute[254819]: 2025-12-06 10:14:16.273 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:14:16 compute-0 ceph-mon[74327]: pgmap v995: 337 pgs: 337 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.6 MiB/s rd, 1.8 MiB/s wr, 90 op/s
Dec 06 10:14:16 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-haproxy-nfs-cephfs-compute-0-fzuvue[96127]: [WARNING] 339/101416 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Dec 06 10:14:16 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:14:16 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c009f60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:14:16 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:14:16 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65e4003f30 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:14:17 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:14:17 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d4002fe0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:14:17 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v996: 337 pgs: 337 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.6 MiB/s rd, 1.8 MiB/s wr, 90 op/s
Dec 06 10:14:17 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:14:17.656Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 06 10:14:17 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:14:17 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:14:17 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:14:17.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:14:17 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:14:17 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:14:17 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:14:17.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:14:18 compute-0 ceph-mon[74327]: pgmap v996: 337 pgs: 337 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.6 MiB/s rd, 1.8 MiB/s wr, 90 op/s
Dec 06 10:14:18 compute-0 ceph-mon[74327]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #63. Immutable memtables: 0.
Dec 06 10:14:18 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:14:18.555714) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 06 10:14:18 compute-0 ceph-mon[74327]: rocksdb: [db/flush_job.cc:856] [default] [JOB 33] Flushing memtable with next log file: 63
Dec 06 10:14:18 compute-0 ceph-mon[74327]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765016058555792, "job": 33, "event": "flush_started", "num_memtables": 1, "num_entries": 1326, "num_deletes": 503, "total_data_size": 1887795, "memory_usage": 1910304, "flush_reason": "Manual Compaction"}
Dec 06 10:14:18 compute-0 ceph-mon[74327]: rocksdb: [db/flush_job.cc:885] [default] [JOB 33] Level-0 flush table #64: started
Dec 06 10:14:18 compute-0 ceph-mon[74327]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765016058571623, "cf_name": "default", "job": 33, "event": "table_file_creation", "file_number": 64, "file_size": 1843544, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 28274, "largest_seqno": 29599, "table_properties": {"data_size": 1837641, "index_size": 2723, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2117, "raw_key_size": 16377, "raw_average_key_size": 19, "raw_value_size": 1823788, "raw_average_value_size": 2202, "num_data_blocks": 117, "num_entries": 828, "num_filter_entries": 828, "num_deletions": 503, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765015970, "oldest_key_time": 1765015970, "file_creation_time": 1765016058, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "423e8366-3852-4d2b-aa53-87abab31aff3", "db_session_id": "4WBX5WA2U4DRQ0QUUFCR", "orig_file_number": 64, "seqno_to_time_mapping": "N/A"}}
Dec 06 10:14:18 compute-0 ceph-mon[74327]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 33] Flush lasted 15985 microseconds, and 6072 cpu microseconds.
Dec 06 10:14:18 compute-0 ceph-mon[74327]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 10:14:18 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:14:18.571718) [db/flush_job.cc:967] [default] [JOB 33] Level-0 flush table #64: 1843544 bytes OK
Dec 06 10:14:18 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:14:18.571759) [db/memtable_list.cc:519] [default] Level-0 commit table #64 started
Dec 06 10:14:18 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:14:18.574857) [db/memtable_list.cc:722] [default] Level-0 commit table #64: memtable #1 done
Dec 06 10:14:18 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:14:18.574875) EVENT_LOG_v1 {"time_micros": 1765016058574869, "job": 33, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 06 10:14:18 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:14:18.574894) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 06 10:14:18 compute-0 ceph-mon[74327]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 33] Try to delete WAL files size 1880854, prev total WAL file size 1880854, number of live WAL files 2.
Dec 06 10:14:18 compute-0 ceph-mon[74327]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000060.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 10:14:18 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:14:18.575908) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032323539' seq:72057594037927935, type:22 .. '7061786F730032353131' seq:0, type:0; will stop at (end)
Dec 06 10:14:18 compute-0 ceph-mon[74327]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 34] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 06 10:14:18 compute-0 ceph-mon[74327]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 33 Base level 0, inputs: [64(1800KB)], [62(16MB)]
Dec 06 10:14:18 compute-0 ceph-mon[74327]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765016058575945, "job": 34, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [64], "files_L6": [62], "score": -1, "input_data_size": 19368432, "oldest_snapshot_seqno": -1}
Dec 06 10:14:18 compute-0 ceph-mon[74327]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 34] Generated table #65: 5830 keys, 13151138 bytes, temperature: kUnknown
Dec 06 10:14:18 compute-0 ceph-mon[74327]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765016058660535, "cf_name": "default", "job": 34, "event": "table_file_creation", "file_number": 65, "file_size": 13151138, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13113623, "index_size": 21853, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 14597, "raw_key_size": 150706, "raw_average_key_size": 25, "raw_value_size": 13009507, "raw_average_value_size": 2231, "num_data_blocks": 875, "num_entries": 5830, "num_filter_entries": 5830, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765013861, "oldest_key_time": 0, "file_creation_time": 1765016058, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "423e8366-3852-4d2b-aa53-87abab31aff3", "db_session_id": "4WBX5WA2U4DRQ0QUUFCR", "orig_file_number": 65, "seqno_to_time_mapping": "N/A"}}
Dec 06 10:14:18 compute-0 ceph-mon[74327]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 10:14:18 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:14:18.660792) [db/compaction/compaction_job.cc:1663] [default] [JOB 34] Compacted 1@0 + 1@6 files to L6 => 13151138 bytes
Dec 06 10:14:18 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:14:18.662197) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 228.7 rd, 155.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.8, 16.7 +0.0 blob) out(12.5 +0.0 blob), read-write-amplify(17.6) write-amplify(7.1) OK, records in: 6855, records dropped: 1025 output_compression: NoCompression
Dec 06 10:14:18 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:14:18.662213) EVENT_LOG_v1 {"time_micros": 1765016058662205, "job": 34, "event": "compaction_finished", "compaction_time_micros": 84703, "compaction_time_cpu_micros": 26381, "output_level": 6, "num_output_files": 1, "total_output_size": 13151138, "num_input_records": 6855, "num_output_records": 5830, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 06 10:14:18 compute-0 ceph-mon[74327]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000064.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 10:14:18 compute-0 ceph-mon[74327]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765016058662576, "job": 34, "event": "table_file_deletion", "file_number": 64}
Dec 06 10:14:18 compute-0 ceph-mon[74327]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000062.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 10:14:18 compute-0 ceph-mon[74327]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765016058665150, "job": 34, "event": "table_file_deletion", "file_number": 62}
Dec 06 10:14:18 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:14:18.575810) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 10:14:18 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:14:18.665228) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 10:14:18 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:14:18.665235) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 10:14:18 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:14:18.665238) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 10:14:18 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:14:18.665241) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 10:14:18 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:14:18.665244) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
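The compaction summary at 10:14:18.662197 reports write-amplify(7.1) and read-write-amplify(17.6); both follow directly from the byte counts in the surrounding EVENT_LOG_v1 records, as this quick check shows (the interpretation of the ratios is the standard RocksDB one, stated here as an assumption):

    # Byte counts copied verbatim from the rocksdb EVENT_LOG_v1 lines above.
    l0_input = 1_843_544        # table #64: the freshly flushed L0 file
    total_input = 19_368_432    # job 34 "input_data_size" (L0 file + L6 file #62)
    output = 13_151_138         # table #65 written back to L6

    write_amplify = output / l0_input                        # ~7.13 -> "7.1"
    read_write_amplify = (total_input + output) / l0_input   # ~17.64 -> "17.6"
    print(round(write_amplify, 1), round(read_write_amplify, 1))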
Dec 06 10:14:18 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:14:18 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d0002cb0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:14:18 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:14:18 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c009f60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:14:19 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:14:19.033Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec 06 10:14:19 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:14:19.035Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
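Alertmanager keeps failing to deliver the ceph-dashboard webhook to compute-1 and compute-2 on port 8443 (dial timeouts, then retries canceled). A stdlib-only reachability probe for those two endpoints, useful for separating a network problem from a dead receiver (hostnames and port taken from the log lines; the script is a diagnostic sketch, not shipped tooling):

    import socket

    for host in ('compute-1.ctlplane.example.com',
                 'compute-2.ctlplane.example.com'):
        try:
            with socket.create_connection((host, 8443), timeout=3):
                print(host, 'tcp/8443 reachable')
        except OSError as exc:
            # An "i/o timeout" here matches the dispatcher errors above.
            print(host, 'unreachable:', exc)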
Dec 06 10:14:19 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:14:19 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65e4003f50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:14:19 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v997: 337 pgs: 337 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Dec 06 10:14:19 compute-0 nova_compute[254819]: 2025-12-06 10:14:19.670 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:14:19 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:14:19 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:14:19 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:14:19.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:14:19 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:14:19 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:14:19 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:14:19 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:14:19.860 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:14:20 compute-0 ceph-mon[74327]: pgmap v997: 337 pgs: 337 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Dec 06 10:14:20 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:14:20 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d4002fe0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:14:20 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:14:20 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d0002cb0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:14:20 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:14:20] "GET /metrics HTTP/1.1" 200 48453 "" "Prometheus/2.51.0"
Dec 06 10:14:20 compute-0 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:14:20] "GET /metrics HTTP/1.1" 200 48453 "" "Prometheus/2.51.0"
Dec 06 10:14:21 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:14:21 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d0002cb0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:14:21 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v998: 337 pgs: 337 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Dec 06 10:14:21 compute-0 nova_compute[254819]: 2025-12-06 10:14:21.275 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:14:21 compute-0 podman[275804]: 2025-12-06 10:14:21.427621269 +0000 UTC m=+0.056445135 container health_status ec1e66d2175087ad7a28a9a6b5a784e30128df3f5e268959d009e364cd5120f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Dec 06 10:14:21 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:14:21 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:14:21 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:14:21.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:14:21 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:14:21 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:14:21 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:14:21.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:14:22 compute-0 ceph-mon[74327]: pgmap v998: 337 pgs: 337 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Dec 06 10:14:22 compute-0 ceph-mon[74327]: from='client.? 192.168.122.102:0/1807636740' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 10:14:22 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:14:22 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65e4003f70 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:14:22 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:14:22 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d4002fe0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:14:23 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:14:23 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d0002cb0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:14:23 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v999: 337 pgs: 337 active+clean; 134 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Dec 06 10:14:23 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:14:23 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:14:23 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:14:23.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:14:23 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:14:23 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:14:23 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:14:23.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:14:23 compute-0 ceph-mgr[74618]: [balancer INFO root] Optimize plan auto_2025-12-06_10:14:23
Dec 06 10:14:23 compute-0 ceph-mgr[74618]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 06 10:14:23 compute-0 ceph-mgr[74618]: [balancer INFO root] do_upmap
Dec 06 10:14:23 compute-0 ceph-mgr[74618]: [balancer INFO root] pools ['cephfs.cephfs.meta', '.rgw.root', 'default.rgw.meta', 'volumes', '.nfs', '.mgr', 'default.rgw.control', 'vms', 'images', 'default.rgw.log', 'cephfs.cephfs.data', 'backups']
Dec 06 10:14:23 compute-0 ceph-mgr[74618]: [balancer INFO root] prepared 0/10 upmap changes
Dec 06 10:14:23 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 06 10:14:23 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 10:14:24 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 10:14:24 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 10:14:24 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 10:14:24 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 10:14:24 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 10:14:24 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 10:14:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] _maybe_adjust
Dec 06 10:14:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:14:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 06 10:14:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:14:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.000694346938692453 of space, bias 1.0, pg target 0.2083040816077359 quantized to 32 (current 32)
Dec 06 10:14:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:14:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 10:14:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:14:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 10:14:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:14:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Dec 06 10:14:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:14:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Dec 06 10:14:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:14:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 10:14:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:14:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec 06 10:14:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:14:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 06 10:14:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:14:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec 06 10:14:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:14:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 10:14:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:14:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
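The pg_autoscaler targets above reproduce exactly as usage x bias x 300, where 300 is an assumption about this deployment (mon_target_pg_per_osd at its default of 100, times 3 OSDs); the quantized PG counts that follow apply separate rounding and minimum rules not modeled here:

    # Usage ratios and biases copied from the pg_autoscaler lines above.
    pools = {
        '.mgr':               (7.185749983720779e-06, 1.0),
        'vms':                (0.000694346938692453, 1.0),
        'cephfs.cephfs.meta': (5.087256625643029e-07, 4.0),
    }
    TARGET_PGS = 100 * 3  # assumed: mon_target_pg_per_osd (default 100) x 3 OSDs

    for name, (usage, bias) in pools.items():
        print(name, usage * bias * TARGET_PGS)
    # -> .mgr 0.0021557..., vms 0.2083040..., cephfs.cephfs.meta 0.0006104...
    # matching the "pg target" values logged by ceph-mgr.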
Dec 06 10:14:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 06 10:14:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 10:14:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 10:14:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 10:14:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 10:14:24 compute-0 ovn_controller[152417]: 2025-12-06T10:14:24Z|00018|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:87:47:c3 10.100.0.10
Dec 06 10:14:24 compute-0 ovn_controller[152417]: 2025-12-06T10:14:24Z|00019|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:87:47:c3 10.100.0.10
Dec 06 10:14:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 06 10:14:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 10:14:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 10:14:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 10:14:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 10:14:24 compute-0 ceph-mon[74327]: pgmap v999: 337 pgs: 337 active+clean; 134 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Dec 06 10:14:24 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 10:14:24 compute-0 nova_compute[254819]: 2025-12-06 10:14:24.672 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:14:24 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:14:24 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c009f60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:14:24 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:14:24 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65e4003f90 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:14:24 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:14:25 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:14:25 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d4002fe0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:14:25 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1000: 337 pgs: 337 active+clean; 134 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 358 KiB/s rd, 1.8 MiB/s wr, 38 op/s
Dec 06 10:14:25 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:14:25 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:14:25 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:14:25.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:14:25 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:14:25 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 06 10:14:25 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:14:25 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:14:25 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:14:25.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:14:26 compute-0 nova_compute[254819]: 2025-12-06 10:14:26.319 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:14:26 compute-0 ceph-mon[74327]: pgmap v1000: 337 pgs: 337 active+clean; 134 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 358 KiB/s rd, 1.8 MiB/s wr, 38 op/s
Dec 06 10:14:26 compute-0 ceph-mon[74327]: from='client.? 192.168.122.102:0/955708754' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 10:14:26 compute-0 ceph-mon[74327]: from='client.? 192.168.122.102:0/2572115795' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 10:14:26 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:14:26 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d0002cb0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:14:26 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:14:26 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c009f60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:14:27 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:14:27 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65e8002830 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:14:27 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1001: 337 pgs: 337 active+clean; 134 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 358 KiB/s rd, 1.8 MiB/s wr, 38 op/s
Dec 06 10:14:27 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:14:27.657Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 06 10:14:27 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:14:27 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:14:27 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:14:27.806 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:14:27 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:14:27 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 10:14:27 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:14:27.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
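
The paired anonymous "HEAD / HTTP/1.0" requests from 192.168.122.100 and 192.168.122.102, recurring every two seconds with 200 responses and near-zero latency, read as load-balancer health probes against radosgw rather than S3 traffic; that is an inference from the cadence and the anonymous HTTP/1.0 requests, consistent with the haproxy instances visible elsewhere in this log. An equivalent manual probe (the endpoint and port are assumptions; the log does not show where the beast frontend is bound):

    import http.client

    conn = http.client.HTTPConnection("compute-0.ctlplane.example.com",
                                      8080, timeout=5)
    conn.request("HEAD", "/")            # same anonymous probe as logged
    print(conn.getresponse().status)     # expect 200, matching the log
    conn.close()
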
Dec 06 10:14:28 compute-0 ceph-mon[74327]: pgmap v1001: 337 pgs: 337 active+clean; 134 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 358 KiB/s rd, 1.8 MiB/s wr, 38 op/s
Dec 06 10:14:28 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:14:28 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_33] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d400c090 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:14:28 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:14:28 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f4001670 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:14:28 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:14:28 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 06 10:14:28 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:14:28 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 06 10:14:29 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:14:29.035Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 06 10:14:29 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:14:29 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f40035d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:14:29 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1002: 337 pgs: 337 active+clean; 167 MiB data, 335 MiB used, 60 GiB / 60 GiB avail; 683 KiB/s rd, 3.9 MiB/s wr, 108 op/s
Dec 06 10:14:29 compute-0 nova_compute[254819]: 2025-12-06 10:14:29.675 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:14:29 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:14:29 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:14:29 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:14:29.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:14:29 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:14:29 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:14:29 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:14:29 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:14:29.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:14:30 compute-0 ceph-mon[74327]: pgmap v1002: 337 pgs: 337 active+clean; 167 MiB data, 335 MiB used, 60 GiB / 60 GiB avail; 683 KiB/s rd, 3.9 MiB/s wr, 108 op/s
Dec 06 10:14:30 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:14:30 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65e8002830 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:14:30 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:14:30 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_33] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d400c090 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:14:30 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:14:30] "GET /metrics HTTP/1.1" 200 48480 "" "Prometheus/2.51.0"
Dec 06 10:14:30 compute-0 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:14:30] "GET /metrics HTTP/1.1" 200 48480 "" "Prometheus/2.51.0"
Dec 06 10:14:31 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:14:31 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f40035d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:14:31 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1003: 337 pgs: 337 active+clean; 167 MiB data, 335 MiB used, 60 GiB / 60 GiB avail; 346 KiB/s rd, 3.9 MiB/s wr, 97 op/s
Dec 06 10:14:31 compute-0 nova_compute[254819]: 2025-12-06 10:14:31.376 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:14:31 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:14:31 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:14:31 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:14:31.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:14:31 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:14:31 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
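
Read with the earlier grace messages (NFS Server Now IN GRACE, duration 90 at 10:14:25; grace reload client info completed and reclaim complete(0) clid count(0) at 10:14:28), this line shows the grace period being lifted early: the 90-second window only needs to run until every client with reclaimable state has finished, and here the backend reported no such clients. A rough sketch of that decision, not ganesha's actual code:

    def can_lift_grace(clid_count, reclaim_complete):
        # reclaim complete(0) clid count(0) above: nothing left to wait for
        return reclaim_complete >= clid_count
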
Dec 06 10:14:31 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:14:31 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:14:31 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:14:31.876 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:14:32 compute-0 ceph-mon[74327]: pgmap v1003: 337 pgs: 337 active+clean; 167 MiB data, 335 MiB used, 60 GiB / 60 GiB avail; 346 KiB/s rd, 3.9 MiB/s wr, 97 op/s
Dec 06 10:14:32 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:14:32 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c009f60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:14:32 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:14:32 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65e8002830 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:14:33 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:14:33 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65e8002830 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:14:33 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1004: 337 pgs: 337 active+clean; 167 MiB data, 336 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 3.9 MiB/s wr, 168 op/s
Dec 06 10:14:33 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:14:33 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:14:33 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:14:33.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:14:33 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:14:33 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:14:33 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:14:33.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:14:33 compute-0 sudo[275839]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 10:14:33 compute-0 sudo[275839]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:14:33 compute-0 sudo[275839]: pam_unix(sudo:session): session closed for user root
Dec 06 10:14:34 compute-0 nova_compute[254819]: 2025-12-06 10:14:34.718 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:14:34 compute-0 ceph-mon[74327]: pgmap v1004: 337 pgs: 337 active+clean; 167 MiB data, 336 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 3.9 MiB/s wr, 168 op/s
Dec 06 10:14:34 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:14:34 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_33] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f40035d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:14:34 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:14:34 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c009f60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:14:34 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:14:35 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:14:35 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c009f60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:14:35 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1005: 337 pgs: 337 active+clean; 167 MiB data, 336 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.2 MiB/s wr, 140 op/s
Dec 06 10:14:35 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:14:35 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:14:35 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:14:35.814 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:14:35 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:14:35 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:14:35 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:14:35.884 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:14:36 compute-0 nova_compute[254819]: 2025-12-06 10:14:36.378 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:14:36 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:14:36 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65e8002830 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:14:36 compute-0 ceph-mon[74327]: pgmap v1005: 337 pgs: 337 active+clean; 167 MiB data, 336 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.2 MiB/s wr, 140 op/s
Dec 06 10:14:36 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:14:36 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_33] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f40035d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:14:37 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:14:37 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c009f60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:14:37 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1006: 337 pgs: 337 active+clean; 167 MiB data, 336 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.2 MiB/s wr, 140 op/s
Dec 06 10:14:37 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:14:37.658Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 06 10:14:37 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:14:37 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 10:14:37 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:14:37.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 10:14:37 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:14:37 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:14:37 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:14:37.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:14:38 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-haproxy-nfs-cephfs-compute-0-fzuvue[96127]: [WARNING] 339/101438 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Dec 06 10:14:38 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:14:38 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d400c090 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:14:38 compute-0 ceph-mon[74327]: pgmap v1006: 337 pgs: 337 active+clean; 167 MiB data, 336 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.2 MiB/s wr, 140 op/s
Dec 06 10:14:38 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:14:38 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65e8002830 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:14:38 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 06 10:14:38 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 10:14:39 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:14:39.036Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec 06 10:14:39 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:14:39.036Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
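
These dispatcher errors recur throughout this window: alertmanager on compute-0 cannot deliver to the Ceph dashboard webhook receivers on compute-1 and compute-2 (first a dial timeout to 192.168.122.101:8443, then context deadline exceeded once the retry budget is spent), so every notification group fails. A quick reachability check against the URL copied from the error (the empty JSON body and 5 s timeout are illustrative, not what alertmanager sends):

    import urllib.request

    req = urllib.request.Request(
        "http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver",
        data=b"{}", method="POST")
    try:
        print(urllib.request.urlopen(req, timeout=5).status)
    except OSError as exc:
        print("unreachable:", exc)   # expected while the timeouts persist
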
Dec 06 10:14:39 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:14:39 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_33] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f40035d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:14:39 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1007: 337 pgs: 337 active+clean; 167 MiB data, 336 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.2 MiB/s wr, 140 op/s
Dec 06 10:14:39 compute-0 nova_compute[254819]: 2025-12-06 10:14:39.720 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:14:39 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 10:14:39 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:14:39 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:14:39 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:14:39.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:14:39 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:14:39 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:14:39 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:14:39 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:14:39.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:14:40 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:14:40 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c009f60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:14:40 compute-0 ceph-mon[74327]: pgmap v1007: 337 pgs: 337 active+clean; 167 MiB data, 336 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.2 MiB/s wr, 140 op/s
Dec 06 10:14:40 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:14:40 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d400c090 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:14:40 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:14:40] "GET /metrics HTTP/1.1" 200 48475 "" "Prometheus/2.51.0"
Dec 06 10:14:40 compute-0 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:14:40] "GET /metrics HTTP/1.1" 200 48475 "" "Prometheus/2.51.0"
Dec 06 10:14:41 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:14:41 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65e8002830 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:14:41 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1008: 337 pgs: 337 active+clean; 167 MiB data, 336 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 70 op/s
Dec 06 10:14:41 compute-0 nova_compute[254819]: 2025-12-06 10:14:41.382 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:14:41 compute-0 podman[275870]: 2025-12-06 10:14:41.423923193 +0000 UTC m=+0.059784235 container health_status a9cf33c1bf7891b3fdd9db4717ad5f3587fe9e5acf9171920c6d6334b70a502a (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.vendor=CentOS)
Dec 06 10:14:41 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:14:41 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:14:41 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:14:41.822 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:14:41 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:14:41 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 10:14:41 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:14:41.893 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 10:14:42 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:14:42 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_33] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f40035d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:14:42 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:14:42 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c009f80 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:14:42 compute-0 ceph-mon[74327]: pgmap v1008: 337 pgs: 337 active+clean; 167 MiB data, 336 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 70 op/s
Dec 06 10:14:43 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:14:43 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d400c0b0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:14:43 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1009: 337 pgs: 337 active+clean; 200 MiB data, 365 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 136 op/s
Dec 06 10:14:43 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:14:43 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:14:43 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:14:43.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:14:43 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:14:43 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 10:14:43 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:14:43.895 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 10:14:44 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:14:44 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65e8002830 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:14:44 compute-0 nova_compute[254819]: 2025-12-06 10:14:44.749 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:14:44 compute-0 nova_compute[254819]: 2025-12-06 10:14:44.764 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:14:44 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:14:44 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_33] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f40035d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:14:44 compute-0 ceph-mon[74327]: pgmap v1009: 337 pgs: 337 active+clean; 200 MiB data, 365 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 136 op/s
Dec 06 10:14:44 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:14:45 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:14:45 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c009fa0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:14:45 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1010: 337 pgs: 337 active+clean; 200 MiB data, 365 MiB used, 60 GiB / 60 GiB avail; 340 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Dec 06 10:14:45 compute-0 nova_compute[254819]: 2025-12-06 10:14:45.749 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:14:45 compute-0 nova_compute[254819]: 2025-12-06 10:14:45.794 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:14:45 compute-0 nova_compute[254819]: 2025-12-06 10:14:45.795 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:14:45 compute-0 nova_compute[254819]: 2025-12-06 10:14:45.795 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:14:45 compute-0 nova_compute[254819]: 2025-12-06 10:14:45.796 254824 DEBUG nova.compute.resource_tracker [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 10:14:45 compute-0 nova_compute[254819]: 2025-12-06 10:14:45.796 254824 DEBUG oslo_concurrency.processutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 10:14:45 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:14:45 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:14:45 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:14:45.825 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:14:45 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:14:45 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:14:45 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:14:45.898 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:14:46 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 06 10:14:46 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2548017511' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 10:14:46 compute-0 ceph-mon[74327]: pgmap v1010: 337 pgs: 337 active+clean; 200 MiB data, 365 MiB used, 60 GiB / 60 GiB avail; 340 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Dec 06 10:14:46 compute-0 nova_compute[254819]: 2025-12-06 10:14:46.270 254824 DEBUG oslo_concurrency.processutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.473s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
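
The audit pass shells out to `ceph df` for Ceph-backed disk stats, and does so again at 10:14:46.700 below, each round trip costing roughly 0.45 s. The command is reproducible as logged; the JSON keys in this sketch assume the standard `ceph df --format=json` layout:

    import json, subprocess

    out = subprocess.run(
        ["ceph", "df", "--format=json", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"],
        capture_output=True, check=True, text=True).stdout
    print(json.loads(out)["stats"]["total_avail_bytes"])
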
Dec 06 10:14:46 compute-0 nova_compute[254819]: 2025-12-06 10:14:46.384 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:14:46 compute-0 podman[275920]: 2025-12-06 10:14:46.409753073 +0000 UTC m=+0.088055908 container health_status ed08b04f63d3695f0aa2f04fcc706897af4aa24f286fa3cd10a4323ee2c868ab (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec 06 10:14:46 compute-0 nova_compute[254819]: 2025-12-06 10:14:46.412 254824 DEBUG nova.virt.libvirt.driver [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] skipping disk for instance-0000000b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 10:14:46 compute-0 nova_compute[254819]: 2025-12-06 10:14:46.413 254824 DEBUG nova.virt.libvirt.driver [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] skipping disk for instance-0000000b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 10:14:46 compute-0 nova_compute[254819]: 2025-12-06 10:14:46.580 254824 WARNING nova.virt.libvirt.driver [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 10:14:46 compute-0 nova_compute[254819]: 2025-12-06 10:14:46.581 254824 DEBUG nova.compute.resource_tracker [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4306MB free_disk=59.89735412597656GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 06 10:14:46 compute-0 nova_compute[254819]: 2025-12-06 10:14:46.581 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:14:46 compute-0 nova_compute[254819]: 2025-12-06 10:14:46.581 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:14:46 compute-0 nova_compute[254819]: 2025-12-06 10:14:46.655 254824 DEBUG nova.compute.resource_tracker [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Instance 1a910dd4-6c75-4618-8b34-925e2d30f8b9 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 06 10:14:46 compute-0 nova_compute[254819]: 2025-12-06 10:14:46.655 254824 DEBUG nova.compute.resource_tracker [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 10:14:46 compute-0 nova_compute[254819]: 2025-12-06 10:14:46.655 254824 DEBUG nova.compute.resource_tracker [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 06 10:14:46 compute-0 nova_compute[254819]: 2025-12-06 10:14:46.700 254824 DEBUG oslo_concurrency.processutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 10:14:46 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:14:46 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d400c0d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:14:46 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:14:46 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65e8002830 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:14:47 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 06 10:14:47 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4025771904' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 10:14:47 compute-0 nova_compute[254819]: 2025-12-06 10:14:47.132 254824 DEBUG oslo_concurrency.processutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.433s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 10:14:47 compute-0 nova_compute[254819]: 2025-12-06 10:14:47.138 254824 DEBUG nova.compute.provider_tree [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Inventory has not changed in ProviderTree for provider: 06a9c7d1-c74c-47ea-9e97-16acfab6aa88 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 10:14:47 compute-0 nova_compute[254819]: 2025-12-06 10:14:47.155 254824 DEBUG nova.scheduler.client.report [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Inventory has not changed for provider 06a9c7d1-c74c-47ea-9e97-16acfab6aa88 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
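
The inventory dict logged here is what the resource tracker hands to placement; schedulable capacity per resource class is (total - reserved) * allocation_ratio, which is why this 7680 MB host with 512 MB reserved exposes 7168 MB, and 8 VCPUs at a 4.0 ratio expose 32. Worked directly from the values above:

    inv = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7680, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 59,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, v in inv.items():
        print(rc, (v['total'] - v['reserved']) * v['allocation_ratio'])
    # VCPU 32.0, MEMORY_MB 7168.0, DISK_GB 52.2
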
Dec 06 10:14:47 compute-0 nova_compute[254819]: 2025-12-06 10:14:47.176 254824 DEBUG nova.compute.resource_tracker [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 10:14:47 compute-0 nova_compute[254819]: 2025-12-06 10:14:47.177 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.595s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:14:47 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:14:47 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_33] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f4004530 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:14:47 compute-0 ceph-mon[74327]: from='client.? 192.168.122.10:0/3418864736' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 10:14:47 compute-0 ceph-mon[74327]: from='client.? 192.168.122.10:0/3418864736' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 10:14:47 compute-0 ceph-mon[74327]: from='client.? 192.168.122.100:0/2548017511' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 10:14:47 compute-0 ceph-mon[74327]: from='client.? 192.168.122.100:0/4025771904' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 10:14:47 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1011: 337 pgs: 337 active+clean; 200 MiB data, 365 MiB used, 60 GiB / 60 GiB avail; 340 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Dec 06 10:14:47 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:14:47.659Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec 06 10:14:47 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:14:47.659Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 06 10:14:47 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:14:47 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:14:47 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:14:47.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:14:47 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:14:47 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:14:47 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:14:47.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:14:48 compute-0 ceph-mon[74327]: pgmap v1011: 337 pgs: 337 active+clean; 200 MiB data, 365 MiB used, 60 GiB / 60 GiB avail; 340 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Dec 06 10:14:48 compute-0 ceph-mon[74327]: from='client.? 192.168.122.101:0/97620197' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 10:14:48 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:14:48 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c009fc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:14:48 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:14:48 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d400c0f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:14:49 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:14:49.036Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 06 10:14:49 compute-0 nova_compute[254819]: 2025-12-06 10:14:49.178 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:14:49 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:14:49 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65e8002830 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:14:49 compute-0 nova_compute[254819]: 2025-12-06 10:14:49.202 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:14:49 compute-0 nova_compute[254819]: 2025-12-06 10:14:49.203 254824 DEBUG nova.compute.manager [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 06 10:14:49 compute-0 nova_compute[254819]: 2025-12-06 10:14:49.203 254824 DEBUG nova.compute.manager [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 06 10:14:49 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1012: 337 pgs: 337 active+clean; 200 MiB data, 365 MiB used, 60 GiB / 60 GiB avail; 340 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Dec 06 10:14:49 compute-0 ceph-mon[74327]: from='client.? 192.168.122.101:0/3271139588' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 10:14:49 compute-0 nova_compute[254819]: 2025-12-06 10:14:49.403 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Acquiring lock "refresh_cache-1a910dd4-6c75-4618-8b34-925e2d30f8b9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 10:14:49 compute-0 nova_compute[254819]: 2025-12-06 10:14:49.404 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Acquired lock "refresh_cache-1a910dd4-6c75-4618-8b34-925e2d30f8b9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 10:14:49 compute-0 nova_compute[254819]: 2025-12-06 10:14:49.404 254824 DEBUG nova.network.neutron [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] [instance: 1a910dd4-6c75-4618-8b34-925e2d30f8b9] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec 06 10:14:49 compute-0 nova_compute[254819]: 2025-12-06 10:14:49.405 254824 DEBUG nova.objects.instance [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Lazy-loading 'info_cache' on Instance uuid 1a910dd4-6c75-4618-8b34-925e2d30f8b9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 10:14:49 compute-0 nova_compute[254819]: 2025-12-06 10:14:49.542 254824 INFO nova.compute.manager [None req-d2fa06a1-9829-407d-8f98-4e0b86cdd372 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 1a910dd4-6c75-4618-8b34-925e2d30f8b9] Get console output
Dec 06 10:14:49 compute-0 nova_compute[254819]: 2025-12-06 10:14:49.546 261881 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Dec 06 10:14:49 compute-0 nova_compute[254819]: 2025-12-06 10:14:49.766 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:14:49 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:14:49 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:14:49 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:14:49.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:14:49 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:14:49 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:14:49 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:14:49 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:14:49.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:14:50 compute-0 ceph-mon[74327]: pgmap v1012: 337 pgs: 337 active+clean; 200 MiB data, 365 MiB used, 60 GiB / 60 GiB avail; 340 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Dec 06 10:14:50 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:14:50 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_33] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f4004530 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:14:50 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:14:50 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c009fe0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:14:50 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:14:50] "GET /metrics HTTP/1.1" 200 48475 "" "Prometheus/2.51.0"
Dec 06 10:14:50 compute-0 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:14:50] "GET /metrics HTTP/1.1" 200 48475 "" "Prometheus/2.51.0"
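[annotation] The same Prometheus scrape of the ceph-mgr prometheus module is logged twice, once by the cephadm-managed mgr unit and once by ceph-mgr's cherrypy access log. A hedged way to inspect that endpoint by hand, assuming the module listens on its default port 9283 (the journal shows only the HTTP access line, not the port):

    # Hypothetical manual scrape of the ceph-mgr prometheus module.
    # Assumption: default prometheus module port 9283 on this host.
    import urllib.request

    with urllib.request.urlopen("http://192.168.122.100:9283/metrics") as resp:
        body = resp.read().decode()
    # Print only the ceph_health metrics to keep the output short.
    for metric_line in body.splitlines():
        if metric_line.startswith("ceph_health"):
            print(metric_line)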
Dec 06 10:14:51 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:14:51 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d400c110 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:14:51 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:14:51.230 162267 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=12, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '9a:dc:0d', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'b6:0a:c4:b8:be:39'}, ipsec=False) old=SB_Global(nb_cfg=11) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 10:14:51 compute-0 nova_compute[254819]: 2025-12-06 10:14:51.231 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:14:51 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:14:51.232 162267 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 06 10:14:51 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1013: 337 pgs: 337 active+clean; 200 MiB data, 365 MiB used, 60 GiB / 60 GiB avail; 340 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Dec 06 10:14:51 compute-0 nova_compute[254819]: 2025-12-06 10:14:51.387 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:14:51 compute-0 nova_compute[254819]: 2025-12-06 10:14:51.498 254824 DEBUG nova.compute.manager [req-bd14cf63-9bda-4eb8-8751-a3b0c7eda63f req-c3dab556-1552-4fe5-86df-80525c808aba d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 1a910dd4-6c75-4618-8b34-925e2d30f8b9] Received event network-changed-6848cb43-8472-434b-a796-f96c3ce423e2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 10:14:51 compute-0 nova_compute[254819]: 2025-12-06 10:14:51.498 254824 DEBUG nova.compute.manager [req-bd14cf63-9bda-4eb8-8751-a3b0c7eda63f req-c3dab556-1552-4fe5-86df-80525c808aba d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 1a910dd4-6c75-4618-8b34-925e2d30f8b9] Refreshing instance network info cache due to event network-changed-6848cb43-8472-434b-a796-f96c3ce423e2. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 10:14:51 compute-0 nova_compute[254819]: 2025-12-06 10:14:51.499 254824 DEBUG oslo_concurrency.lockutils [req-bd14cf63-9bda-4eb8-8751-a3b0c7eda63f req-c3dab556-1552-4fe5-86df-80525c808aba d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Acquiring lock "refresh_cache-1a910dd4-6c75-4618-8b34-925e2d30f8b9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 10:14:51 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:14:51 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:14:51 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:14:51.831 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:14:51 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:14:51 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:14:51 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:14:51.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:14:52 compute-0 ceph-mon[74327]: pgmap v1013: 337 pgs: 337 active+clean; 200 MiB data, 365 MiB used, 60 GiB / 60 GiB avail; 340 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Dec 06 10:14:52 compute-0 podman[275977]: 2025-12-06 10:14:52.425219495 +0000 UTC m=+0.052055626 container health_status ec1e66d2175087ad7a28a9a6b5a784e30128df3f5e268959d009e364cd5120f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Dec 06 10:14:52 compute-0 nova_compute[254819]: 2025-12-06 10:14:52.513 254824 INFO nova.compute.manager [None req-9069f500-368b-4a42-8213-b99b4f718ed7 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 1a910dd4-6c75-4618-8b34-925e2d30f8b9] Get console output
Dec 06 10:14:52 compute-0 nova_compute[254819]: 2025-12-06 10:14:52.517 261881 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Dec 06 10:14:52 compute-0 nova_compute[254819]: 2025-12-06 10:14:52.645 254824 DEBUG nova.compute.manager [req-4e694c51-2e5b-4861-9456-78e74d040c6f req-ea85552f-bf5f-422a-93d1-b732129a13e8 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 1a910dd4-6c75-4618-8b34-925e2d30f8b9] Received event network-vif-unplugged-6848cb43-8472-434b-a796-f96c3ce423e2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 10:14:52 compute-0 nova_compute[254819]: 2025-12-06 10:14:52.645 254824 DEBUG oslo_concurrency.lockutils [req-4e694c51-2e5b-4861-9456-78e74d040c6f req-ea85552f-bf5f-422a-93d1-b732129a13e8 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Acquiring lock "1a910dd4-6c75-4618-8b34-925e2d30f8b9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:14:52 compute-0 nova_compute[254819]: 2025-12-06 10:14:52.646 254824 DEBUG oslo_concurrency.lockutils [req-4e694c51-2e5b-4861-9456-78e74d040c6f req-ea85552f-bf5f-422a-93d1-b732129a13e8 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Lock "1a910dd4-6c75-4618-8b34-925e2d30f8b9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:14:52 compute-0 nova_compute[254819]: 2025-12-06 10:14:52.646 254824 DEBUG oslo_concurrency.lockutils [req-4e694c51-2e5b-4861-9456-78e74d040c6f req-ea85552f-bf5f-422a-93d1-b732129a13e8 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Lock "1a910dd4-6c75-4618-8b34-925e2d30f8b9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:14:52 compute-0 nova_compute[254819]: 2025-12-06 10:14:52.647 254824 DEBUG nova.compute.manager [req-4e694c51-2e5b-4861-9456-78e74d040c6f req-ea85552f-bf5f-422a-93d1-b732129a13e8 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 1a910dd4-6c75-4618-8b34-925e2d30f8b9] No waiting events found dispatching network-vif-unplugged-6848cb43-8472-434b-a796-f96c3ce423e2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 10:14:52 compute-0 nova_compute[254819]: 2025-12-06 10:14:52.647 254824 WARNING nova.compute.manager [req-4e694c51-2e5b-4861-9456-78e74d040c6f req-ea85552f-bf5f-422a-93d1-b732129a13e8 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 1a910dd4-6c75-4618-8b34-925e2d30f8b9] Received unexpected event network-vif-unplugged-6848cb43-8472-434b-a796-f96c3ce423e2 for instance with vm_state active and task_state None.
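[annotation] The network-vif-unplugged event above arrives while no operation is waiting on it (vm_state active, task_state None), so pop_instance_event finds no registered waiter and the manager logs the "unexpected event" warning instead of erroring. A toy model of that behaviour, under the assumption that the registry acts like a dict of threading.Events keyed by (instance, event name); all names below are hypothetical, only the semantics mirror the log:

    # Toy model of the "waiting events" behaviour seen above: popping an
    # event nobody registered for produces a warning, not an error.
    import threading

    _waiters = {}  # (instance_uuid, event_name) -> threading.Event

    def prepare_for_event(instance_uuid, event_name):
        ev = threading.Event()
        _waiters[(instance_uuid, event_name)] = ev
        return ev

    def pop_instance_event(instance_uuid, event_name):
        ev = _waiters.pop((instance_uuid, event_name), None)
        if ev is None:
            print("WARNING: received unexpected event %s for instance %s"
                  % (event_name, instance_uuid))
        else:
            ev.set()  # wake whoever called prepare_for_event(...).wait()

    pop_instance_event(
        "1a910dd4-6c75-4618-8b34-925e2d30f8b9",
        "network-vif-unplugged-6848cb43-8472-434b-a796-f96c3ce423e2")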
Dec 06 10:14:52 compute-0 sudo[275996]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 10:14:52 compute-0 sudo[275996]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:14:52 compute-0 sudo[275996]: pam_unix(sudo:session): session closed for user root
Dec 06 10:14:52 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:14:52 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65e8002830 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:14:52 compute-0 sudo[276021]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Dec 06 10:14:52 compute-0 sudo[276021]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:14:52 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:14:52 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_33] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f4004530 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:14:53 compute-0 nova_compute[254819]: 2025-12-06 10:14:53.100 254824 DEBUG nova.network.neutron [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] [instance: 1a910dd4-6c75-4618-8b34-925e2d30f8b9] Updating instance_info_cache with network_info: [{"id": "6848cb43-8472-434b-a796-f96c3ce423e2", "address": "fa:16:3e:87:47:c3", "network": {"id": "ef8aaff1-03b0-4544-89c9-035c25f01e5c", "bridge": "br-int", "label": "tempest-network-smoke--1887948682", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.229", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6848cb43-84", "ovs_interfaceid": "6848cb43-8472-434b-a796-f96c3ce423e2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 10:14:53 compute-0 nova_compute[254819]: 2025-12-06 10:14:53.120 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Releasing lock "refresh_cache-1a910dd4-6c75-4618-8b34-925e2d30f8b9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 10:14:53 compute-0 nova_compute[254819]: 2025-12-06 10:14:53.121 254824 DEBUG nova.compute.manager [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] [instance: 1a910dd4-6c75-4618-8b34-925e2d30f8b9] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
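[annotation] The info-cache payload logged in the update above is plain JSON once the surrounding log text is stripped, so the fixed and floating addresses can be extracted mechanically. A sketch assuming the list-of-VIFs structure shown in the log line (payload trimmed to the fields needed here):

    import json

    # Trimmed copy of the network_info payload from the log line above.
    network_info = json.loads('''
    [{"id": "6848cb43-8472-434b-a796-f96c3ce423e2",
      "network": {"subnets": [{"cidr": "10.100.0.0/28",
        "ips": [{"address": "10.100.0.10", "type": "fixed",
                 "floating_ips": [{"address": "192.168.122.229",
                                   "type": "floating"}]}]}]}}]
    ''')

    for vif in network_info:
        for subnet in vif["network"]["subnets"]:
            for ip in subnet["ips"]:
                floats = [f["address"] for f in ip.get("floating_ips", [])]
                print(vif["id"], ip["address"], "->", floats)
    # -> 6848cb43-8472-434b-a796-f96c3ce423e2 10.100.0.10 -> ['192.168.122.229']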
Dec 06 10:14:53 compute-0 nova_compute[254819]: 2025-12-06 10:14:53.121 254824 DEBUG oslo_concurrency.lockutils [req-bd14cf63-9bda-4eb8-8751-a3b0c7eda63f req-c3dab556-1552-4fe5-86df-80525c808aba d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Acquired lock "refresh_cache-1a910dd4-6c75-4618-8b34-925e2d30f8b9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 10:14:53 compute-0 nova_compute[254819]: 2025-12-06 10:14:53.121 254824 DEBUG nova.network.neutron [req-bd14cf63-9bda-4eb8-8751-a3b0c7eda63f req-c3dab556-1552-4fe5-86df-80525c808aba d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 1a910dd4-6c75-4618-8b34-925e2d30f8b9] Refreshing network info cache for port 6848cb43-8472-434b-a796-f96c3ce423e2 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 06 10:14:53 compute-0 nova_compute[254819]: 2025-12-06 10:14:53.123 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:14:53 compute-0 nova_compute[254819]: 2025-12-06 10:14:53.124 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:14:53 compute-0 nova_compute[254819]: 2025-12-06 10:14:53.124 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:14:53 compute-0 nova_compute[254819]: 2025-12-06 10:14:53.124 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:14:53 compute-0 nova_compute[254819]: 2025-12-06 10:14:53.125 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:14:53 compute-0 nova_compute[254819]: 2025-12-06 10:14:53.125 254824 DEBUG nova.compute.manager [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
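[annotation] The burst of "Running periodic task ComputeManager._*" lines is oslo.service's periodic-task runner iterating over decorated methods; _reclaim_queued_deletes then short-circuits because reclaim_instance_interval is <= 0. The decorator side of that machinery, sketched with stock oslo.service (class and method names here are illustrative):

    # Minimal oslo.service periodic-task sketch; only the decorator,
    # PeriodicTasks base class and run_periodic_tasks() are the real API.
    from oslo_config import cfg
    from oslo_service import periodic_task

    CONF = cfg.CONF

    class Manager(periodic_task.PeriodicTasks):
        def __init__(self):
            super().__init__(CONF)

        @periodic_task.periodic_task(spacing=60)
        def _poll_volume_usage(self, context):
            print("running _poll_volume_usage")

    mgr = Manager()
    # The service framework normally invokes this on a timer loop,
    # producing one "Running periodic task ..." line per method.
    mgr.run_periodic_tasks(context=None)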
Dec 06 10:14:53 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:14:53 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c00a000 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:14:53 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1014: 337 pgs: 337 active+clean; 200 MiB data, 365 MiB used, 60 GiB / 60 GiB avail; 347 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Dec 06 10:14:53 compute-0 sudo[276021]: pam_unix(sudo:session): session closed for user root
Dec 06 10:14:53 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Dec 06 10:14:53 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Dec 06 10:14:53 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
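[annotation] Each handle_command/dispatch pair above shows the mon receiving a JSON command from the mgr, here cephadm clearing a per-host osd_memory_target override. The same JSON can be sent from the rados Python binding; a hedged sketch assuming /etc/ceph/ceph.conf and an admin keyring are readable on the host:

    # Sends the mon command seen above via the rados Python binding;
    # the JSON payload is copied verbatim from the journal line.
    import json
    import rados

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
    cluster.connect()
    cmd = {"prefix": "config rm",
           "who": "osd/host:compute-0",
           "name": "osd_memory_target"}
    ret, outbuf, outs = cluster.mon_command(json.dumps(cmd), b"")
    print(ret, outs)
    cluster.shutdown()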
Dec 06 10:14:53 compute-0 nova_compute[254819]: 2025-12-06 10:14:53.689 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:14:53 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:14:53 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:14:53 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:14:53.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:14:53 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:14:53 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:14:53 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:14:53.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:14:53 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 06 10:14:53 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 10:14:54 compute-0 sudo[276079]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 10:14:54 compute-0 sudo[276079]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:14:54 compute-0 sudo[276079]: pam_unix(sudo:session): session closed for user root
Dec 06 10:14:54 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 10:14:54 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 10:14:54 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 10:14:54 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 10:14:54 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 10:14:54 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 10:14:54 compute-0 nova_compute[254819]: 2025-12-06 10:14:54.227 254824 DEBUG nova.compute.manager [req-d829279c-5cca-483a-899b-17f70313ef32 req-939840dc-daa3-448c-938f-8effd1980674 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 1a910dd4-6c75-4618-8b34-925e2d30f8b9] Received event network-changed-6848cb43-8472-434b-a796-f96c3ce423e2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 10:14:54 compute-0 nova_compute[254819]: 2025-12-06 10:14:54.227 254824 DEBUG nova.compute.manager [req-d829279c-5cca-483a-899b-17f70313ef32 req-939840dc-daa3-448c-938f-8effd1980674 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 1a910dd4-6c75-4618-8b34-925e2d30f8b9] Refreshing instance network info cache due to event network-changed-6848cb43-8472-434b-a796-f96c3ce423e2. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 10:14:54 compute-0 nova_compute[254819]: 2025-12-06 10:14:54.228 254824 DEBUG oslo_concurrency.lockutils [req-d829279c-5cca-483a-899b-17f70313ef32 req-939840dc-daa3-448c-938f-8effd1980674 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Acquiring lock "refresh_cache-1a910dd4-6c75-4618-8b34-925e2d30f8b9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 10:14:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:14:54.245 162267 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:14:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:14:54.245 162267 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:14:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:14:54.246 162267 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:14:54 compute-0 ceph-mon[74327]: pgmap v1014: 337 pgs: 337 active+clean; 200 MiB data, 365 MiB used, 60 GiB / 60 GiB avail; 347 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Dec 06 10:14:54 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 10:14:54 compute-0 nova_compute[254819]: 2025-12-06 10:14:54.492 254824 INFO nova.compute.manager [None req-2652a2b8-eb0a-4ac4-af44-f6929c0c85ed 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 1a910dd4-6c75-4618-8b34-925e2d30f8b9] Get console output
Dec 06 10:14:54 compute-0 nova_compute[254819]: 2025-12-06 10:14:54.499 261881 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Dec 06 10:14:54 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:14:54 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d400c130 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:14:54 compute-0 nova_compute[254819]: 2025-12-06 10:14:54.768 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:14:54 compute-0 nova_compute[254819]: 2025-12-06 10:14:54.791 254824 DEBUG nova.compute.manager [req-abfbfe25-76dd-46e5-b3e7-124d17094a17 req-6489f3ca-9f56-4e5b-88d5-d36e304eef58 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 1a910dd4-6c75-4618-8b34-925e2d30f8b9] Received event network-vif-plugged-6848cb43-8472-434b-a796-f96c3ce423e2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 10:14:54 compute-0 nova_compute[254819]: 2025-12-06 10:14:54.792 254824 DEBUG oslo_concurrency.lockutils [req-abfbfe25-76dd-46e5-b3e7-124d17094a17 req-6489f3ca-9f56-4e5b-88d5-d36e304eef58 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Acquiring lock "1a910dd4-6c75-4618-8b34-925e2d30f8b9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:14:54 compute-0 nova_compute[254819]: 2025-12-06 10:14:54.792 254824 DEBUG oslo_concurrency.lockutils [req-abfbfe25-76dd-46e5-b3e7-124d17094a17 req-6489f3ca-9f56-4e5b-88d5-d36e304eef58 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Lock "1a910dd4-6c75-4618-8b34-925e2d30f8b9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:14:54 compute-0 nova_compute[254819]: 2025-12-06 10:14:54.793 254824 DEBUG oslo_concurrency.lockutils [req-abfbfe25-76dd-46e5-b3e7-124d17094a17 req-6489f3ca-9f56-4e5b-88d5-d36e304eef58 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Lock "1a910dd4-6c75-4618-8b34-925e2d30f8b9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:14:54 compute-0 nova_compute[254819]: 2025-12-06 10:14:54.793 254824 DEBUG nova.compute.manager [req-abfbfe25-76dd-46e5-b3e7-124d17094a17 req-6489f3ca-9f56-4e5b-88d5-d36e304eef58 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 1a910dd4-6c75-4618-8b34-925e2d30f8b9] No waiting events found dispatching network-vif-plugged-6848cb43-8472-434b-a796-f96c3ce423e2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 10:14:54 compute-0 nova_compute[254819]: 2025-12-06 10:14:54.793 254824 WARNING nova.compute.manager [req-abfbfe25-76dd-46e5-b3e7-124d17094a17 req-6489f3ca-9f56-4e5b-88d5-d36e304eef58 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 1a910dd4-6c75-4618-8b34-925e2d30f8b9] Received unexpected event network-vif-plugged-6848cb43-8472-434b-a796-f96c3ce423e2 for instance with vm_state active and task_state None.
Dec 06 10:14:54 compute-0 nova_compute[254819]: 2025-12-06 10:14:54.793 254824 DEBUG nova.compute.manager [req-abfbfe25-76dd-46e5-b3e7-124d17094a17 req-6489f3ca-9f56-4e5b-88d5-d36e304eef58 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 1a910dd4-6c75-4618-8b34-925e2d30f8b9] Received event network-vif-plugged-6848cb43-8472-434b-a796-f96c3ce423e2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 10:14:54 compute-0 nova_compute[254819]: 2025-12-06 10:14:54.793 254824 DEBUG oslo_concurrency.lockutils [req-abfbfe25-76dd-46e5-b3e7-124d17094a17 req-6489f3ca-9f56-4e5b-88d5-d36e304eef58 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Acquiring lock "1a910dd4-6c75-4618-8b34-925e2d30f8b9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:14:54 compute-0 nova_compute[254819]: 2025-12-06 10:14:54.794 254824 DEBUG oslo_concurrency.lockutils [req-abfbfe25-76dd-46e5-b3e7-124d17094a17 req-6489f3ca-9f56-4e5b-88d5-d36e304eef58 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Lock "1a910dd4-6c75-4618-8b34-925e2d30f8b9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:14:54 compute-0 nova_compute[254819]: 2025-12-06 10:14:54.794 254824 DEBUG oslo_concurrency.lockutils [req-abfbfe25-76dd-46e5-b3e7-124d17094a17 req-6489f3ca-9f56-4e5b-88d5-d36e304eef58 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Lock "1a910dd4-6c75-4618-8b34-925e2d30f8b9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:14:54 compute-0 nova_compute[254819]: 2025-12-06 10:14:54.794 254824 DEBUG nova.compute.manager [req-abfbfe25-76dd-46e5-b3e7-124d17094a17 req-6489f3ca-9f56-4e5b-88d5-d36e304eef58 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 1a910dd4-6c75-4618-8b34-925e2d30f8b9] No waiting events found dispatching network-vif-plugged-6848cb43-8472-434b-a796-f96c3ce423e2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 10:14:54 compute-0 nova_compute[254819]: 2025-12-06 10:14:54.794 254824 WARNING nova.compute.manager [req-abfbfe25-76dd-46e5-b3e7-124d17094a17 req-6489f3ca-9f56-4e5b-88d5-d36e304eef58 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 1a910dd4-6c75-4618-8b34-925e2d30f8b9] Received unexpected event network-vif-plugged-6848cb43-8472-434b-a796-f96c3ce423e2 for instance with vm_state active and task_state None.
Dec 06 10:14:54 compute-0 nova_compute[254819]: 2025-12-06 10:14:54.794 254824 DEBUG nova.compute.manager [req-abfbfe25-76dd-46e5-b3e7-124d17094a17 req-6489f3ca-9f56-4e5b-88d5-d36e304eef58 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 1a910dd4-6c75-4618-8b34-925e2d30f8b9] Received event network-vif-plugged-6848cb43-8472-434b-a796-f96c3ce423e2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 10:14:54 compute-0 nova_compute[254819]: 2025-12-06 10:14:54.795 254824 DEBUG oslo_concurrency.lockutils [req-abfbfe25-76dd-46e5-b3e7-124d17094a17 req-6489f3ca-9f56-4e5b-88d5-d36e304eef58 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Acquiring lock "1a910dd4-6c75-4618-8b34-925e2d30f8b9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:14:54 compute-0 nova_compute[254819]: 2025-12-06 10:14:54.795 254824 DEBUG oslo_concurrency.lockutils [req-abfbfe25-76dd-46e5-b3e7-124d17094a17 req-6489f3ca-9f56-4e5b-88d5-d36e304eef58 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Lock "1a910dd4-6c75-4618-8b34-925e2d30f8b9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:14:54 compute-0 nova_compute[254819]: 2025-12-06 10:14:54.795 254824 DEBUG oslo_concurrency.lockutils [req-abfbfe25-76dd-46e5-b3e7-124d17094a17 req-6489f3ca-9f56-4e5b-88d5-d36e304eef58 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Lock "1a910dd4-6c75-4618-8b34-925e2d30f8b9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:14:54 compute-0 nova_compute[254819]: 2025-12-06 10:14:54.795 254824 DEBUG nova.compute.manager [req-abfbfe25-76dd-46e5-b3e7-124d17094a17 req-6489f3ca-9f56-4e5b-88d5-d36e304eef58 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 1a910dd4-6c75-4618-8b34-925e2d30f8b9] No waiting events found dispatching network-vif-plugged-6848cb43-8472-434b-a796-f96c3ce423e2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 10:14:54 compute-0 nova_compute[254819]: 2025-12-06 10:14:54.795 254824 WARNING nova.compute.manager [req-abfbfe25-76dd-46e5-b3e7-124d17094a17 req-6489f3ca-9f56-4e5b-88d5-d36e304eef58 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 1a910dd4-6c75-4618-8b34-925e2d30f8b9] Received unexpected event network-vif-plugged-6848cb43-8472-434b-a796-f96c3ce423e2 for instance with vm_state active and task_state None.
Dec 06 10:14:54 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:14:54 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65e8002830 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:14:54 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
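[annotation] The _set_new_cache_sizes line can be sanity-checked with arithmetic: the three allocations must fit inside cache_size. With the values logged here they sum to 1,010,827,264 bytes, just under the 1,020,054,731-byte (about 0.95 GiB) budget:

    # Sanity-check the mon cache tuning values from the line above.
    cache_size = 1020054731
    inc_alloc, full_alloc, kv_alloc = 343932928, 348127232, 318767104

    total = inc_alloc + full_alloc + kv_alloc
    print(total, total <= cache_size)    # 1010827264 True
    print(round(cache_size / 2**30, 2))  # ~0.95 (GiB)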
Dec 06 10:14:54 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec 06 10:14:54 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 10:14:54 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec 06 10:14:54 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 10:14:54 compute-0 nova_compute[254819]: 2025-12-06 10:14:54.928 254824 DEBUG nova.network.neutron [req-bd14cf63-9bda-4eb8-8751-a3b0c7eda63f req-c3dab556-1552-4fe5-86df-80525c808aba d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 1a910dd4-6c75-4618-8b34-925e2d30f8b9] Updated VIF entry in instance network info cache for port 6848cb43-8472-434b-a796-f96c3ce423e2. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 06 10:14:54 compute-0 nova_compute[254819]: 2025-12-06 10:14:54.929 254824 DEBUG nova.network.neutron [req-bd14cf63-9bda-4eb8-8751-a3b0c7eda63f req-c3dab556-1552-4fe5-86df-80525c808aba d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 1a910dd4-6c75-4618-8b34-925e2d30f8b9] Updating instance_info_cache with network_info: [{"id": "6848cb43-8472-434b-a796-f96c3ce423e2", "address": "fa:16:3e:87:47:c3", "network": {"id": "ef8aaff1-03b0-4544-89c9-035c25f01e5c", "bridge": "br-int", "label": "tempest-network-smoke--1887948682", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.229", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6848cb43-84", "ovs_interfaceid": "6848cb43-8472-434b-a796-f96c3ce423e2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 10:14:54 compute-0 nova_compute[254819]: 2025-12-06 10:14:54.947 254824 DEBUG oslo_concurrency.lockutils [req-bd14cf63-9bda-4eb8-8751-a3b0c7eda63f req-c3dab556-1552-4fe5-86df-80525c808aba d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Releasing lock "refresh_cache-1a910dd4-6c75-4618-8b34-925e2d30f8b9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 10:14:54 compute-0 nova_compute[254819]: 2025-12-06 10:14:54.948 254824 DEBUG oslo_concurrency.lockutils [req-d829279c-5cca-483a-899b-17f70313ef32 req-939840dc-daa3-448c-938f-8effd1980674 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Acquired lock "refresh_cache-1a910dd4-6c75-4618-8b34-925e2d30f8b9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 10:14:54 compute-0 nova_compute[254819]: 2025-12-06 10:14:54.948 254824 DEBUG nova.network.neutron [req-d829279c-5cca-483a-899b-17f70313ef32 req-939840dc-daa3-448c-938f-8effd1980674 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 1a910dd4-6c75-4618-8b34-925e2d30f8b9] Refreshing network info cache for port 6848cb43-8472-434b-a796-f96c3ce423e2 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 06 10:14:55 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:14:55 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_33] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f4004530 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:14:55 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec 06 10:14:55 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1015: 337 pgs: 337 active+clean; 200 MiB data, 365 MiB used, 60 GiB / 60 GiB avail; 6.9 KiB/s rd, 15 KiB/s wr, 1 op/s
Dec 06 10:14:55 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 10:14:55 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec 06 10:14:55 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 10:14:55 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0)
Dec 06 10:14:55 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Dec 06 10:14:55 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:14:55 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:14:55 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:14:55.835 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:14:55 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 10:14:55 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 10:14:55 compute-0 ceph-mon[74327]: from='client.? 192.168.122.102:0/3783245906' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 10:14:55 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 10:14:55 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 10:14:55 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Dec 06 10:14:55 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:14:55 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:14:55 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:14:55.912 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:14:56 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0)
Dec 06 10:14:56 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Dec 06 10:14:56 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 06 10:14:56 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 10:14:56 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 06 10:14:56 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 10:14:56 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1016: 337 pgs: 337 active+clean; 200 MiB data, 365 MiB used, 60 GiB / 60 GiB avail; 7.7 KiB/s rd, 17 KiB/s wr, 1 op/s
Dec 06 10:14:56 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 06 10:14:56 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 10:14:56 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Dec 06 10:14:56 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 10:14:56 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 06 10:14:56 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 10:14:56 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 06 10:14:56 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 10:14:56 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 06 10:14:56 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 10:14:56 compute-0 sudo[276106]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 10:14:56 compute-0 sudo[276106]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:14:56 compute-0 sudo[276106]: pam_unix(sudo:session): session closed for user root
Dec 06 10:14:56 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 06 10:14:56 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/844916991' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 10:14:56 compute-0 sudo[276131]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 5ecd3f74-dade-5fc4-92ce-8950ae424258 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Dec 06 10:14:56 compute-0 sudo[276131]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:14:56 compute-0 nova_compute[254819]: 2025-12-06 10:14:56.432 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:14:56 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:14:56 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c00a020 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:14:56 compute-0 podman[276198]: 2025-12-06 10:14:56.795621648 +0000 UTC m=+0.055177350 container create fd2dd6b25e02f815dc2b5122bc8ad2277d39fd0f9f8965b322609d5ddd64922b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_gagarin, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec 06 10:14:56 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:14:56 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d400c150 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:14:56 compute-0 systemd[1]: Started libpod-conmon-fd2dd6b25e02f815dc2b5122bc8ad2277d39fd0f9f8965b322609d5ddd64922b.scope.
Dec 06 10:14:56 compute-0 podman[276198]: 2025-12-06 10:14:56.771654872 +0000 UTC m=+0.031210594 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 06 10:14:56 compute-0 systemd[1]: Started libcrun container.
Dec 06 10:14:56 compute-0 podman[276198]: 2025-12-06 10:14:56.908544288 +0000 UTC m=+0.168100010 container init fd2dd6b25e02f815dc2b5122bc8ad2277d39fd0f9f8965b322609d5ddd64922b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_gagarin, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Dec 06 10:14:56 compute-0 podman[276198]: 2025-12-06 10:14:56.917845909 +0000 UTC m=+0.177401611 container start fd2dd6b25e02f815dc2b5122bc8ad2277d39fd0f9f8965b322609d5ddd64922b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_gagarin, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 10:14:56 compute-0 podman[276198]: 2025-12-06 10:14:56.922053993 +0000 UTC m=+0.181609715 container attach fd2dd6b25e02f815dc2b5122bc8ad2277d39fd0f9f8965b322609d5ddd64922b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_gagarin, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 10:14:56 compute-0 flamboyant_gagarin[276215]: 167 167
Dec 06 10:14:56 compute-0 systemd[1]: libpod-fd2dd6b25e02f815dc2b5122bc8ad2277d39fd0f9f8965b322609d5ddd64922b.scope: Deactivated successfully.
Dec 06 10:14:56 compute-0 podman[276198]: 2025-12-06 10:14:56.927069627 +0000 UTC m=+0.186625349 container died fd2dd6b25e02f815dc2b5122bc8ad2277d39fd0f9f8965b322609d5ddd64922b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_gagarin, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Dec 06 10:14:56 compute-0 ceph-mon[74327]: pgmap v1015: 337 pgs: 337 active+clean; 200 MiB data, 365 MiB used, 60 GiB / 60 GiB avail; 6.9 KiB/s rd, 15 KiB/s wr, 1 op/s
Dec 06 10:14:56 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Dec 06 10:14:56 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 10:14:56 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 10:14:56 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 10:14:56 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 10:14:56 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 10:14:56 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 10:14:56 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 10:14:56 compute-0 ceph-mon[74327]: from='client.? 192.168.122.102:0/844916991' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 10:14:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-c125087e2dbae22c157b6a8073fc68365cf51a4f07e308e6796ea093044b1c11-merged.mount: Deactivated successfully.
Dec 06 10:14:56 compute-0 podman[276198]: 2025-12-06 10:14:56.971460486 +0000 UTC m=+0.231016218 container remove fd2dd6b25e02f815dc2b5122bc8ad2277d39fd0f9f8965b322609d5ddd64922b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_gagarin, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec 06 10:14:56 compute-0 systemd[1]: libpod-conmon-fd2dd6b25e02f815dc2b5122bc8ad2277d39fd0f9f8965b322609d5ddd64922b.scope: Deactivated successfully.
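[annotation] The fd2dd6b2... container lives for well under a second: cephadm runs a throwaway helper in the ceph image, so the journal shows the full create, init, start, attach, died, remove lifecycle back to back. A sketch that groups such podman journal lines by container ID to spot short-lived containers; the regex is inferred from the lines above:

    import re
    from collections import defaultdict

    # Matches podman's journal events as they appear above:
    # "... container <verb> <64-hex-id> (image=...)"
    EVENT_RE = re.compile(r'container (?P<verb>\w+) (?P<cid>[0-9a-f]{64})')

    def lifecycle(journal_lines):
        events = defaultdict(list)
        for jl in journal_lines:
            m = EVENT_RE.search(jl)
            if m:
                events[m.group("cid")].append(m.group("verb"))
        return dict(events)

    cid = "fd2dd6b25e02f815dc2b5122bc8ad2277d39fd0f9f8965b322609d5ddd64922b"
    sample = ["container create %s (image=...)" % cid,
              "container died %s (image=...)" % cid]
    print(lifecycle(sample))  # -> {cid: ['create', 'died']}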
Dec 06 10:14:57 compute-0 podman[276238]: 2025-12-06 10:14:57.191938479 +0000 UTC m=+0.064871972 container create 87824ac45319a583a032745c33496efbb548e01610b3df1c2bd4e473813fa040 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_black, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec 06 10:14:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:14:57 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65e4001090 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:14:57 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:14:57.235 162267 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=d39b5be8-d4cf-41c7-9a64-1ee03801f4e1, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '12'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 10:14:57 compute-0 systemd[1]: Started libpod-conmon-87824ac45319a583a032745c33496efbb548e01610b3df1c2bd4e473813fa040.scope.
Dec 06 10:14:57 compute-0 podman[276238]: 2025-12-06 10:14:57.168673952 +0000 UTC m=+0.041607495 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 06 10:14:57 compute-0 systemd[1]: Started libcrun container.
Dec 06 10:14:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/924ae4ea2bd64efd85477eaad569fbf88450b1c9dffb3857926f4fb1336b84c2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 10:14:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/924ae4ea2bd64efd85477eaad569fbf88450b1c9dffb3857926f4fb1336b84c2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 10:14:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/924ae4ea2bd64efd85477eaad569fbf88450b1c9dffb3857926f4fb1336b84c2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 10:14:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/924ae4ea2bd64efd85477eaad569fbf88450b1c9dffb3857926f4fb1336b84c2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 10:14:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/924ae4ea2bd64efd85477eaad569fbf88450b1c9dffb3857926f4fb1336b84c2/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 06 10:14:57 compute-0 podman[276238]: 2025-12-06 10:14:57.298590409 +0000 UTC m=+0.171523992 container init 87824ac45319a583a032745c33496efbb548e01610b3df1c2bd4e473813fa040 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_black, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 06 10:14:57 compute-0 podman[276238]: 2025-12-06 10:14:57.312609008 +0000 UTC m=+0.185542541 container start 87824ac45319a583a032745c33496efbb548e01610b3df1c2bd4e473813fa040 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_black, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 10:14:57 compute-0 podman[276238]: 2025-12-06 10:14:57.317082489 +0000 UTC m=+0.190015992 container attach 87824ac45319a583a032745c33496efbb548e01610b3df1c2bd4e473813fa040 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_black, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 10:14:57 compute-0 nova_compute[254819]: 2025-12-06 10:14:57.393 254824 DEBUG nova.network.neutron [req-d829279c-5cca-483a-899b-17f70313ef32 req-939840dc-daa3-448c-938f-8effd1980674 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 1a910dd4-6c75-4618-8b34-925e2d30f8b9] Updated VIF entry in instance network info cache for port 6848cb43-8472-434b-a796-f96c3ce423e2. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 06 10:14:57 compute-0 nova_compute[254819]: 2025-12-06 10:14:57.393 254824 DEBUG nova.network.neutron [req-d829279c-5cca-483a-899b-17f70313ef32 req-939840dc-daa3-448c-938f-8effd1980674 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 1a910dd4-6c75-4618-8b34-925e2d30f8b9] Updating instance_info_cache with network_info: [{"id": "6848cb43-8472-434b-a796-f96c3ce423e2", "address": "fa:16:3e:87:47:c3", "network": {"id": "ef8aaff1-03b0-4544-89c9-035c25f01e5c", "bridge": "br-int", "label": "tempest-network-smoke--1887948682", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.229", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6848cb43-84", "ovs_interfaceid": "6848cb43-8472-434b-a796-f96c3ce423e2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 10:14:57 compute-0 nova_compute[254819]: 2025-12-06 10:14:57.414 254824 DEBUG oslo_concurrency.lockutils [req-d829279c-5cca-483a-899b-17f70313ef32 req-939840dc-daa3-448c-938f-8effd1980674 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Releasing lock "refresh_cache-1a910dd4-6c75-4618-8b34-925e2d30f8b9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 10:14:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:14:57.660Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 06 10:14:57 compute-0 boring_black[276254]: --> passed data devices: 0 physical, 1 LVM
Dec 06 10:14:57 compute-0 boring_black[276254]: --> All data devices are unavailable
Dec 06 10:14:57 compute-0 systemd[1]: libpod-87824ac45319a583a032745c33496efbb548e01610b3df1c2bd4e473813fa040.scope: Deactivated successfully.
Dec 06 10:14:57 compute-0 podman[276238]: 2025-12-06 10:14:57.75490179 +0000 UTC m=+0.627835303 container died 87824ac45319a583a032745c33496efbb548e01610b3df1c2bd4e473813fa040 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_black, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 10:14:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-924ae4ea2bd64efd85477eaad569fbf88450b1c9dffb3857926f4fb1336b84c2-merged.mount: Deactivated successfully.
Dec 06 10:14:57 compute-0 podman[276238]: 2025-12-06 10:14:57.79897896 +0000 UTC m=+0.671912463 container remove 87824ac45319a583a032745c33496efbb548e01610b3df1c2bd4e473813fa040 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_black, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec 06 10:14:57 compute-0 systemd[1]: libpod-conmon-87824ac45319a583a032745c33496efbb548e01610b3df1c2bd4e473813fa040.scope: Deactivated successfully.
Dec 06 10:14:57 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:14:57 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 10:14:57 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:14:57.837 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 10:14:57 compute-0 sudo[276131]: pam_unix(sudo:session): session closed for user root
Dec 06 10:14:57 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:14:57 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:14:57 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:14:57.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:14:57 compute-0 ceph-mon[74327]: pgmap v1016: 337 pgs: 337 active+clean; 200 MiB data, 365 MiB used, 60 GiB / 60 GiB avail; 7.7 KiB/s rd, 17 KiB/s wr, 1 op/s
Dec 06 10:14:57 compute-0 sudo[276283]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 10:14:57 compute-0 sudo[276283]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:14:57 compute-0 sudo[276283]: pam_unix(sudo:session): session closed for user root
Dec 06 10:14:58 compute-0 sudo[276308]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 5ecd3f74-dade-5fc4-92ce-8950ae424258 -- lvm list --format json
Dec 06 10:14:58 compute-0 sudo[276308]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:14:58 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1017: 337 pgs: 337 active+clean; 121 MiB data, 324 MiB used, 60 GiB / 60 GiB avail; 29 KiB/s rd, 23 KiB/s wr, 33 op/s
Dec 06 10:14:58 compute-0 podman[276372]: 2025-12-06 10:14:58.479144554 +0000 UTC m=+0.039742454 container create da5952e5c82e96d70c3935135c858a8c7e9856a3877ce2e6a30ea528f9cdad41 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_colden, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Dec 06 10:14:58 compute-0 systemd[1]: Started libpod-conmon-da5952e5c82e96d70c3935135c858a8c7e9856a3877ce2e6a30ea528f9cdad41.scope.
Dec 06 10:14:58 compute-0 systemd[1]: Started libcrun container.
Dec 06 10:14:58 compute-0 podman[276372]: 2025-12-06 10:14:58.544611122 +0000 UTC m=+0.105209022 container init da5952e5c82e96d70c3935135c858a8c7e9856a3877ce2e6a30ea528f9cdad41 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_colden, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 06 10:14:58 compute-0 podman[276372]: 2025-12-06 10:14:58.55342334 +0000 UTC m=+0.114021270 container start da5952e5c82e96d70c3935135c858a8c7e9856a3877ce2e6a30ea528f9cdad41 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_colden, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 10:14:58 compute-0 podman[276372]: 2025-12-06 10:14:58.557760767 +0000 UTC m=+0.118358667 container attach da5952e5c82e96d70c3935135c858a8c7e9856a3877ce2e6a30ea528f9cdad41 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_colden, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 10:14:58 compute-0 podman[276372]: 2025-12-06 10:14:58.462039872 +0000 UTC m=+0.022637792 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 06 10:14:58 compute-0 objective_colden[276388]: 167 167
Dec 06 10:14:58 compute-0 systemd[1]: libpod-da5952e5c82e96d70c3935135c858a8c7e9856a3877ce2e6a30ea528f9cdad41.scope: Deactivated successfully.
Dec 06 10:14:58 compute-0 podman[276372]: 2025-12-06 10:14:58.560900722 +0000 UTC m=+0.121498622 container died da5952e5c82e96d70c3935135c858a8c7e9856a3877ce2e6a30ea528f9cdad41 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_colden, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325, ceph=True, CEPH_REF=squid, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec 06 10:14:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-0e5c1f2a26ee874f0d6580fb7f0979ad20c13b2923b0b6db5ef3571ee1168769-merged.mount: Deactivated successfully.
Dec 06 10:14:58 compute-0 podman[276372]: 2025-12-06 10:14:58.602858585 +0000 UTC m=+0.163456485 container remove da5952e5c82e96d70c3935135c858a8c7e9856a3877ce2e6a30ea528f9cdad41 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_colden, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1)
Dec 06 10:14:58 compute-0 systemd[1]: libpod-conmon-da5952e5c82e96d70c3935135c858a8c7e9856a3877ce2e6a30ea528f9cdad41.scope: Deactivated successfully.
Dec 06 10:14:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:14:58 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_34] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f4004530 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:14:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:14:58 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c00a040 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:14:58 compute-0 podman[276412]: 2025-12-06 10:14:58.84010707 +0000 UTC m=+0.072133278 container create 83fe066468aa88b505bd334533ecfc8c18f970dbd077b86b5512c7b7065e86f1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_heisenberg, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 06 10:14:58 compute-0 systemd[1]: Started libpod-conmon-83fe066468aa88b505bd334533ecfc8c18f970dbd077b86b5512c7b7065e86f1.scope.
Dec 06 10:14:58 compute-0 podman[276412]: 2025-12-06 10:14:58.815893467 +0000 UTC m=+0.047919695 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 06 10:14:58 compute-0 systemd[1]: Started libcrun container.
Dec 06 10:14:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf29506dd50fac301147c3ccb1745b701d3c17e720261d898d7d7bdcb228d2ef/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 10:14:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf29506dd50fac301147c3ccb1745b701d3c17e720261d898d7d7bdcb228d2ef/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 10:14:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf29506dd50fac301147c3ccb1745b701d3c17e720261d898d7d7bdcb228d2ef/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 10:14:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf29506dd50fac301147c3ccb1745b701d3c17e720261d898d7d7bdcb228d2ef/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 10:14:58 compute-0 podman[276412]: 2025-12-06 10:14:58.946179484 +0000 UTC m=+0.178205712 container init 83fe066468aa88b505bd334533ecfc8c18f970dbd077b86b5512c7b7065e86f1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_heisenberg, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec 06 10:14:58 compute-0 podman[276412]: 2025-12-06 10:14:58.953426801 +0000 UTC m=+0.185453049 container start 83fe066468aa88b505bd334533ecfc8c18f970dbd077b86b5512c7b7065e86f1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_heisenberg, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325)
Dec 06 10:14:58 compute-0 podman[276412]: 2025-12-06 10:14:58.958087826 +0000 UTC m=+0.190114074 container attach 83fe066468aa88b505bd334533ecfc8c18f970dbd077b86b5512c7b7065e86f1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_heisenberg, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 10:14:58 compute-0 ceph-mon[74327]: from='client.? 192.168.122.102:0/2727823202' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 10:14:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:14:59.037Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec 06 10:14:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:14:59.039Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec 06 10:14:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:14:59 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d0001d20 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:14:59 compute-0 hungry_heisenberg[276428]: {
Dec 06 10:14:59 compute-0 hungry_heisenberg[276428]:     "1": [
Dec 06 10:14:59 compute-0 hungry_heisenberg[276428]:         {
Dec 06 10:14:59 compute-0 hungry_heisenberg[276428]:             "devices": [
Dec 06 10:14:59 compute-0 hungry_heisenberg[276428]:                 "/dev/loop3"
Dec 06 10:14:59 compute-0 hungry_heisenberg[276428]:             ],
Dec 06 10:14:59 compute-0 hungry_heisenberg[276428]:             "lv_name": "ceph_lv0",
Dec 06 10:14:59 compute-0 hungry_heisenberg[276428]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 10:14:59 compute-0 hungry_heisenberg[276428]:             "lv_size": "21470642176",
Dec 06 10:14:59 compute-0 hungry_heisenberg[276428]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=a55pcS-v3Vt-CJ4W-fqCV-Baze-9VlY-jG9prS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=5ecd3f74-dade-5fc4-92ce-8950ae424258,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=7899c4d8-edb4-4836-b838-c4aa702ad7af,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 06 10:14:59 compute-0 hungry_heisenberg[276428]:             "lv_uuid": "a55pcS-v3Vt-CJ4W-fqCV-Baze-9VlY-jG9prS",
Dec 06 10:14:59 compute-0 hungry_heisenberg[276428]:             "name": "ceph_lv0",
Dec 06 10:14:59 compute-0 hungry_heisenberg[276428]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 10:14:59 compute-0 hungry_heisenberg[276428]:             "tags": {
Dec 06 10:14:59 compute-0 hungry_heisenberg[276428]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 06 10:14:59 compute-0 hungry_heisenberg[276428]:                 "ceph.block_uuid": "a55pcS-v3Vt-CJ4W-fqCV-Baze-9VlY-jG9prS",
Dec 06 10:14:59 compute-0 hungry_heisenberg[276428]:                 "ceph.cephx_lockbox_secret": "",
Dec 06 10:14:59 compute-0 hungry_heisenberg[276428]:                 "ceph.cluster_fsid": "5ecd3f74-dade-5fc4-92ce-8950ae424258",
Dec 06 10:14:59 compute-0 hungry_heisenberg[276428]:                 "ceph.cluster_name": "ceph",
Dec 06 10:14:59 compute-0 hungry_heisenberg[276428]:                 "ceph.crush_device_class": "",
Dec 06 10:14:59 compute-0 hungry_heisenberg[276428]:                 "ceph.encrypted": "0",
Dec 06 10:14:59 compute-0 hungry_heisenberg[276428]:                 "ceph.osd_fsid": "7899c4d8-edb4-4836-b838-c4aa702ad7af",
Dec 06 10:14:59 compute-0 hungry_heisenberg[276428]:                 "ceph.osd_id": "1",
Dec 06 10:14:59 compute-0 hungry_heisenberg[276428]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 06 10:14:59 compute-0 hungry_heisenberg[276428]:                 "ceph.type": "block",
Dec 06 10:14:59 compute-0 hungry_heisenberg[276428]:                 "ceph.vdo": "0",
Dec 06 10:14:59 compute-0 hungry_heisenberg[276428]:                 "ceph.with_tpm": "0"
Dec 06 10:14:59 compute-0 hungry_heisenberg[276428]:             },
Dec 06 10:14:59 compute-0 hungry_heisenberg[276428]:             "type": "block",
Dec 06 10:14:59 compute-0 hungry_heisenberg[276428]:             "vg_name": "ceph_vg0"
Dec 06 10:14:59 compute-0 hungry_heisenberg[276428]:         }
Dec 06 10:14:59 compute-0 hungry_heisenberg[276428]:     ]
Dec 06 10:14:59 compute-0 hungry_heisenberg[276428]: }
Dec 06 10:14:59 compute-0 systemd[1]: libpod-83fe066468aa88b505bd334533ecfc8c18f970dbd077b86b5512c7b7065e86f1.scope: Deactivated successfully.
Dec 06 10:14:59 compute-0 podman[276412]: 2025-12-06 10:14:59.284381406 +0000 UTC m=+0.516407614 container died 83fe066468aa88b505bd334533ecfc8c18f970dbd077b86b5512c7b7065e86f1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_heisenberg, OSD_FLAVOR=default, org.label-schema.build-date=20250325, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1)
Dec 06 10:14:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-cf29506dd50fac301147c3ccb1745b701d3c17e720261d898d7d7bdcb228d2ef-merged.mount: Deactivated successfully.
Dec 06 10:14:59 compute-0 podman[276412]: 2025-12-06 10:14:59.323945394 +0000 UTC m=+0.555971602 container remove 83fe066468aa88b505bd334533ecfc8c18f970dbd077b86b5512c7b7065e86f1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_heisenberg, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Dec 06 10:14:59 compute-0 systemd[1]: libpod-conmon-83fe066468aa88b505bd334533ecfc8c18f970dbd077b86b5512c7b7065e86f1.scope: Deactivated successfully.
Dec 06 10:14:59 compute-0 sudo[276308]: pam_unix(sudo:session): session closed for user root
Dec 06 10:14:59 compute-0 sudo[276447]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 10:14:59 compute-0 sudo[276447]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:14:59 compute-0 sudo[276447]: pam_unix(sudo:session): session closed for user root
Dec 06 10:14:59 compute-0 sudo[276472]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 5ecd3f74-dade-5fc4-92ce-8950ae424258 -- raw list --format json
Dec 06 10:14:59 compute-0 sudo[276472]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:14:59 compute-0 nova_compute[254819]: 2025-12-06 10:14:59.772 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:14:59 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:14:59 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 10:14:59 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:14:59.838 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 10:14:59 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:14:59 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:14:59 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:14:59 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:14:59.917 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:14:59 compute-0 ceph-mon[74327]: pgmap v1017: 337 pgs: 337 active+clean; 121 MiB data, 324 MiB used, 60 GiB / 60 GiB avail; 29 KiB/s rd, 23 KiB/s wr, 33 op/s
Dec 06 10:15:00 compute-0 podman[276541]: 2025-12-06 10:15:00.02564098 +0000 UTC m=+0.051525132 container create b5336e073eb89594f7482efed0f2c2d36893257fea0dae630315dd6f53e8370f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_stonebraker, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Dec 06 10:15:00 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1018: 337 pgs: 337 active+clean; 121 MiB data, 324 MiB used, 60 GiB / 60 GiB avail; 29 KiB/s rd, 9.1 KiB/s wr, 32 op/s
Dec 06 10:15:00 compute-0 systemd[1]: Started libpod-conmon-b5336e073eb89594f7482efed0f2c2d36893257fea0dae630315dd6f53e8370f.scope.
Dec 06 10:15:00 compute-0 podman[276541]: 2025-12-06 10:15:00.003473562 +0000 UTC m=+0.029357724 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 06 10:15:00 compute-0 systemd[1]: Started libcrun container.
Dec 06 10:15:00 compute-0 podman[276541]: 2025-12-06 10:15:00.133892993 +0000 UTC m=+0.159777125 container init b5336e073eb89594f7482efed0f2c2d36893257fea0dae630315dd6f53e8370f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_stonebraker, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Dec 06 10:15:00 compute-0 podman[276541]: 2025-12-06 10:15:00.142088465 +0000 UTC m=+0.167972587 container start b5336e073eb89594f7482efed0f2c2d36893257fea0dae630315dd6f53e8370f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_stonebraker, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec 06 10:15:00 compute-0 magical_stonebraker[276555]: 167 167
Dec 06 10:15:00 compute-0 systemd[1]: libpod-b5336e073eb89594f7482efed0f2c2d36893257fea0dae630315dd6f53e8370f.scope: Deactivated successfully.
Dec 06 10:15:00 compute-0 podman[276541]: 2025-12-06 10:15:00.150774929 +0000 UTC m=+0.176659331 container attach b5336e073eb89594f7482efed0f2c2d36893257fea0dae630315dd6f53e8370f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_stonebraker, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Dec 06 10:15:00 compute-0 podman[276541]: 2025-12-06 10:15:00.151621162 +0000 UTC m=+0.177505274 container died b5336e073eb89594f7482efed0f2c2d36893257fea0dae630315dd6f53e8370f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_stonebraker, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325)
Dec 06 10:15:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-1063a60207d01401fadb09e49ed08c2dd1f07bc63cdbe5a48cdba28df22d358a-merged.mount: Deactivated successfully.
Dec 06 10:15:00 compute-0 podman[276541]: 2025-12-06 10:15:00.203045301 +0000 UTC m=+0.228929423 container remove b5336e073eb89594f7482efed0f2c2d36893257fea0dae630315dd6f53e8370f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_stonebraker, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 10:15:00 compute-0 systemd[1]: libpod-conmon-b5336e073eb89594f7482efed0f2c2d36893257fea0dae630315dd6f53e8370f.scope: Deactivated successfully.
Dec 06 10:15:00 compute-0 podman[276583]: 2025-12-06 10:15:00.399745882 +0000 UTC m=+0.049286192 container create 4cfc60a7b425fe8c733f3aad28b97bf130c278ed33bdec476182949260e427fe (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_lalande, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Dec 06 10:15:00 compute-0 systemd[1]: Started libpod-conmon-4cfc60a7b425fe8c733f3aad28b97bf130c278ed33bdec476182949260e427fe.scope.
Dec 06 10:15:00 compute-0 systemd[1]: Started libcrun container.
Dec 06 10:15:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a38575c2dc1c39ac28f5ba8880f9b63fa5f09b3cdf9a5f242593cc699a3f0ed5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 10:15:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a38575c2dc1c39ac28f5ba8880f9b63fa5f09b3cdf9a5f242593cc699a3f0ed5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 10:15:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a38575c2dc1c39ac28f5ba8880f9b63fa5f09b3cdf9a5f242593cc699a3f0ed5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 10:15:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a38575c2dc1c39ac28f5ba8880f9b63fa5f09b3cdf9a5f242593cc699a3f0ed5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 10:15:00 compute-0 podman[276583]: 2025-12-06 10:15:00.377710407 +0000 UTC m=+0.027250807 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 06 10:15:00 compute-0 podman[276583]: 2025-12-06 10:15:00.476669828 +0000 UTC m=+0.126210138 container init 4cfc60a7b425fe8c733f3aad28b97bf130c278ed33bdec476182949260e427fe (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_lalande, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 10:15:00 compute-0 podman[276583]: 2025-12-06 10:15:00.489354191 +0000 UTC m=+0.138894531 container start 4cfc60a7b425fe8c733f3aad28b97bf130c278ed33bdec476182949260e427fe (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_lalande, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 10:15:00 compute-0 podman[276583]: 2025-12-06 10:15:00.494803939 +0000 UTC m=+0.144344379 container attach 4cfc60a7b425fe8c733f3aad28b97bf130c278ed33bdec476182949260e427fe (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_lalande, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 06 10:15:00 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:15:00 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65e4001090 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:15:00 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:15:00 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_34] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f4004530 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:15:00 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:15:00] "GET /metrics HTTP/1.1" 200 48477 "" "Prometheus/2.51.0"
Dec 06 10:15:00 compute-0 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:15:00] "GET /metrics HTTP/1.1" 200 48477 "" "Prometheus/2.51.0"
Dec 06 10:15:01 compute-0 lvm[276674]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 06 10:15:01 compute-0 lvm[276674]: VG ceph_vg0 finished
Dec 06 10:15:01 compute-0 infallible_lalande[276599]: {}
Dec 06 10:15:01 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:15:01 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c00a060 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:15:01 compute-0 systemd[1]: libpod-4cfc60a7b425fe8c733f3aad28b97bf130c278ed33bdec476182949260e427fe.scope: Deactivated successfully.
Dec 06 10:15:01 compute-0 systemd[1]: libpod-4cfc60a7b425fe8c733f3aad28b97bf130c278ed33bdec476182949260e427fe.scope: Consumed 1.176s CPU time.
Dec 06 10:15:01 compute-0 podman[276583]: 2025-12-06 10:15:01.233648478 +0000 UTC m=+0.883188788 container died 4cfc60a7b425fe8c733f3aad28b97bf130c278ed33bdec476182949260e427fe (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_lalande, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 06 10:15:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-a38575c2dc1c39ac28f5ba8880f9b63fa5f09b3cdf9a5f242593cc699a3f0ed5-merged.mount: Deactivated successfully.
Dec 06 10:15:01 compute-0 podman[276583]: 2025-12-06 10:15:01.286500795 +0000 UTC m=+0.936041095 container remove 4cfc60a7b425fe8c733f3aad28b97bf130c278ed33bdec476182949260e427fe (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_lalande, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 10:15:01 compute-0 systemd[1]: libpod-conmon-4cfc60a7b425fe8c733f3aad28b97bf130c278ed33bdec476182949260e427fe.scope: Deactivated successfully.
Dec 06 10:15:01 compute-0 sudo[276472]: pam_unix(sudo:session): session closed for user root
Dec 06 10:15:01 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 06 10:15:01 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 10:15:01 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 06 10:15:01 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 10:15:01 compute-0 sudo[276690]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 10:15:01 compute-0 nova_compute[254819]: 2025-12-06 10:15:01.436 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:15:01 compute-0 sudo[276690]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:15:01 compute-0 sudo[276690]: pam_unix(sudo:session): session closed for user root
Dec 06 10:15:01 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:15:01 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:15:01 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:15:01.841 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:15:01 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:15:01 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:15:01 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:15:01.920 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:15:01 compute-0 ceph-mon[74327]: pgmap v1018: 337 pgs: 337 active+clean; 121 MiB data, 324 MiB used, 60 GiB / 60 GiB avail; 29 KiB/s rd, 9.1 KiB/s wr, 32 op/s
Dec 06 10:15:01 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 10:15:01 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 10:15:02 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1019: 337 pgs: 337 active+clean; 121 MiB data, 324 MiB used, 60 GiB / 60 GiB avail; 29 KiB/s rd, 9.1 KiB/s wr, 32 op/s
Dec 06 10:15:02 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:15:02 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d0001d20 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:15:02 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:15:02 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65e4001090 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:15:03 compute-0 nova_compute[254819]: 2025-12-06 10:15:03.065 254824 DEBUG nova.compute.manager [req-f088d230-5aa9-4f28-aac3-143a47f559f3 req-16db7389-3d46-4a30-afe5-5f55eeb97df8 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 1a910dd4-6c75-4618-8b34-925e2d30f8b9] Received event network-changed-6848cb43-8472-434b-a796-f96c3ce423e2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 10:15:03 compute-0 nova_compute[254819]: 2025-12-06 10:15:03.065 254824 DEBUG nova.compute.manager [req-f088d230-5aa9-4f28-aac3-143a47f559f3 req-16db7389-3d46-4a30-afe5-5f55eeb97df8 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 1a910dd4-6c75-4618-8b34-925e2d30f8b9] Refreshing instance network info cache due to event network-changed-6848cb43-8472-434b-a796-f96c3ce423e2. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 10:15:03 compute-0 nova_compute[254819]: 2025-12-06 10:15:03.065 254824 DEBUG oslo_concurrency.lockutils [req-f088d230-5aa9-4f28-aac3-143a47f559f3 req-16db7389-3d46-4a30-afe5-5f55eeb97df8 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Acquiring lock "refresh_cache-1a910dd4-6c75-4618-8b34-925e2d30f8b9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 10:15:03 compute-0 nova_compute[254819]: 2025-12-06 10:15:03.066 254824 DEBUG oslo_concurrency.lockutils [req-f088d230-5aa9-4f28-aac3-143a47f559f3 req-16db7389-3d46-4a30-afe5-5f55eeb97df8 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Acquired lock "refresh_cache-1a910dd4-6c75-4618-8b34-925e2d30f8b9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 10:15:03 compute-0 nova_compute[254819]: 2025-12-06 10:15:03.066 254824 DEBUG nova.network.neutron [req-f088d230-5aa9-4f28-aac3-143a47f559f3 req-16db7389-3d46-4a30-afe5-5f55eeb97df8 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 1a910dd4-6c75-4618-8b34-925e2d30f8b9] Refreshing network info cache for port 6848cb43-8472-434b-a796-f96c3ce423e2 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 06 10:15:03 compute-0 nova_compute[254819]: 2025-12-06 10:15:03.182 254824 DEBUG oslo_concurrency.lockutils [None req-49d04a54-3fa6-45b2-a769-0492e9b7d6a6 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Acquiring lock "1a910dd4-6c75-4618-8b34-925e2d30f8b9" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:15:03 compute-0 nova_compute[254819]: 2025-12-06 10:15:03.182 254824 DEBUG oslo_concurrency.lockutils [None req-49d04a54-3fa6-45b2-a769-0492e9b7d6a6 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lock "1a910dd4-6c75-4618-8b34-925e2d30f8b9" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:15:03 compute-0 nova_compute[254819]: 2025-12-06 10:15:03.183 254824 DEBUG oslo_concurrency.lockutils [None req-49d04a54-3fa6-45b2-a769-0492e9b7d6a6 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Acquiring lock "1a910dd4-6c75-4618-8b34-925e2d30f8b9-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:15:03 compute-0 nova_compute[254819]: 2025-12-06 10:15:03.183 254824 DEBUG oslo_concurrency.lockutils [None req-49d04a54-3fa6-45b2-a769-0492e9b7d6a6 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lock "1a910dd4-6c75-4618-8b34-925e2d30f8b9-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:15:03 compute-0 nova_compute[254819]: 2025-12-06 10:15:03.183 254824 DEBUG oslo_concurrency.lockutils [None req-49d04a54-3fa6-45b2-a769-0492e9b7d6a6 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lock "1a910dd4-6c75-4618-8b34-925e2d30f8b9-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:15:03 compute-0 nova_compute[254819]: 2025-12-06 10:15:03.184 254824 INFO nova.compute.manager [None req-49d04a54-3fa6-45b2-a769-0492e9b7d6a6 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 1a910dd4-6c75-4618-8b34-925e2d30f8b9] Terminating instance
Dec 06 10:15:03 compute-0 nova_compute[254819]: 2025-12-06 10:15:03.185 254824 DEBUG nova.compute.manager [None req-49d04a54-3fa6-45b2-a769-0492e9b7d6a6 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 1a910dd4-6c75-4618-8b34-925e2d30f8b9] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 06 10:15:03 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:15:03 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f4004530 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:15:03 compute-0 kernel: tap6848cb43-84 (unregistering): left promiscuous mode
Dec 06 10:15:03 compute-0 NetworkManager[48882]: <info>  [1765016103.2384] device (tap6848cb43-84): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 06 10:15:03 compute-0 nova_compute[254819]: 2025-12-06 10:15:03.252 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:15:03 compute-0 ovn_controller[152417]: 2025-12-06T10:15:03Z|00126|binding|INFO|Releasing lport 6848cb43-8472-434b-a796-f96c3ce423e2 from this chassis (sb_readonly=0)
Dec 06 10:15:03 compute-0 ovn_controller[152417]: 2025-12-06T10:15:03Z|00127|binding|INFO|Setting lport 6848cb43-8472-434b-a796-f96c3ce423e2 down in Southbound
Dec 06 10:15:03 compute-0 ovn_controller[152417]: 2025-12-06T10:15:03Z|00128|binding|INFO|Removing iface tap6848cb43-84 ovn-installed in OVS
Dec 06 10:15:03 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:15:03.260 162267 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:87:47:c3 10.100.0.10'], port_security=['fa:16:3e:87:47:c3 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '1a910dd4-6c75-4618-8b34-925e2d30f8b9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ef8aaff1-03b0-4544-89c9-035c25f01e5c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '92b402c8d3e2476abc98be42a1e6d34e', 'neutron:revision_number': '8', 'neutron:security_group_ids': 'b1fd56fd-eb5a-422e-9da4-fb641a59e1a7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e1a37e6e-1014-49d4-9543-ee1567988851, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f70c28558b0>], logical_port=6848cb43-8472-434b-a796-f96c3ce423e2) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f70c28558b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 10:15:03 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:15:03.261 162267 INFO neutron.agent.ovn.metadata.agent [-] Port 6848cb43-8472-434b-a796-f96c3ce423e2 in datapath ef8aaff1-03b0-4544-89c9-035c25f01e5c unbound from our chassis
Dec 06 10:15:03 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:15:03.262 162267 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network ef8aaff1-03b0-4544-89c9-035c25f01e5c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 06 10:15:03 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:15:03.263 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[e3b0f03a-4623-47d0-8f41-96c8efb03a27]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:15:03 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:15:03.264 162267 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-ef8aaff1-03b0-4544-89c9-035c25f01e5c namespace which is not needed anymore
Dec 06 10:15:03 compute-0 nova_compute[254819]: 2025-12-06 10:15:03.273 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:15:03 compute-0 systemd[1]: machine-qemu\x2d8\x2dinstance\x2d0000000b.scope: Deactivated successfully.
Dec 06 10:15:03 compute-0 systemd[1]: machine-qemu\x2d8\x2dinstance\x2d0000000b.scope: Consumed 15.641s CPU time.
Dec 06 10:15:03 compute-0 systemd-machined[216202]: Machine qemu-8-instance-0000000b terminated.
Dec 06 10:15:03 compute-0 NetworkManager[48882]: <info>  [1765016103.4064] manager: (tap6848cb43-84): new Tun device (/org/freedesktop/NetworkManager/Devices/81)
Dec 06 10:15:03 compute-0 nova_compute[254819]: 2025-12-06 10:15:03.408 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:15:03 compute-0 nova_compute[254819]: 2025-12-06 10:15:03.415 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:15:03 compute-0 neutron-haproxy-ovnmeta-ef8aaff1-03b0-4544-89c9-035c25f01e5c[275726]: [NOTICE]   (275730) : haproxy version is 2.8.14-c23fe91
Dec 06 10:15:03 compute-0 neutron-haproxy-ovnmeta-ef8aaff1-03b0-4544-89c9-035c25f01e5c[275726]: [NOTICE]   (275730) : path to executable is /usr/sbin/haproxy
Dec 06 10:15:03 compute-0 neutron-haproxy-ovnmeta-ef8aaff1-03b0-4544-89c9-035c25f01e5c[275726]: [WARNING]  (275730) : Exiting Master process...
Dec 06 10:15:03 compute-0 neutron-haproxy-ovnmeta-ef8aaff1-03b0-4544-89c9-035c25f01e5c[275726]: [ALERT]    (275730) : Current worker (275732) exited with code 143 (Terminated)
Dec 06 10:15:03 compute-0 neutron-haproxy-ovnmeta-ef8aaff1-03b0-4544-89c9-035c25f01e5c[275726]: [WARNING]  (275730) : All workers exited. Exiting... (0)
Dec 06 10:15:03 compute-0 systemd[1]: libpod-33b0ce662b23b6f98eab1f6b3386675cb46da27404914ee64922339023b1534d.scope: Deactivated successfully.
Dec 06 10:15:03 compute-0 podman[276739]: 2025-12-06 10:15:03.429248619 +0000 UTC m=+0.057210936 container died 33b0ce662b23b6f98eab1f6b3386675cb46da27404914ee64922339023b1534d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=neutron-haproxy-ovnmeta-ef8aaff1-03b0-4544-89c9-035c25f01e5c, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 10:15:03 compute-0 nova_compute[254819]: 2025-12-06 10:15:03.429 254824 INFO nova.virt.libvirt.driver [-] [instance: 1a910dd4-6c75-4618-8b34-925e2d30f8b9] Instance destroyed successfully.
Dec 06 10:15:03 compute-0 nova_compute[254819]: 2025-12-06 10:15:03.430 254824 DEBUG nova.objects.instance [None req-49d04a54-3fa6-45b2-a769-0492e9b7d6a6 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lazy-loading 'resources' on Instance uuid 1a910dd4-6c75-4618-8b34-925e2d30f8b9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 10:15:03 compute-0 nova_compute[254819]: 2025-12-06 10:15:03.450 254824 DEBUG nova.virt.libvirt.vif [None req-49d04a54-3fa6-45b2-a769-0492e9b7d6a6 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T10:14:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-697052485',display_name='tempest-TestNetworkBasicOps-server-697052485',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-697052485',id=11,image_ref='9489b8a5-a798-4e26-87f9-59bb1eb2e6fd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAuhYKdKN9EDS1I/XZyg4WhafMZhuRCMz5uAEJQd26Rxd5WVAmZGHQIQO5WPFhGxsnRcRB0qgDKQ8dvJeA5b8MtdKHCXg8WKkLdZila9zexViJRw9mwokE7iqisT3z+5Ig==',key_name='tempest-TestNetworkBasicOps-1780141244',keypairs=<?>,launch_index=0,launched_at=2025-12-06T10:14:11Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='92b402c8d3e2476abc98be42a1e6d34e',ramdisk_id='',reservation_id='r-9i00mr91',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='9489b8a5-a798-4e26-87f9-59bb1eb2e6fd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1971100882',owner_user_name='tempest-TestNetworkBasicOps-1971100882-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T10:14:11Z,user_data=None,user_id='03615580775245e6ae335ee9d785611f',uuid=1a910dd4-6c75-4618-8b34-925e2d30f8b9,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "6848cb43-8472-434b-a796-f96c3ce423e2", "address": "fa:16:3e:87:47:c3", "network": {"id": "ef8aaff1-03b0-4544-89c9-035c25f01e5c", "bridge": "br-int", "label": "tempest-network-smoke--1887948682", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.229", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6848cb43-84", "ovs_interfaceid": "6848cb43-8472-434b-a796-f96c3ce423e2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 06 10:15:03 compute-0 nova_compute[254819]: 2025-12-06 10:15:03.451 254824 DEBUG nova.network.os_vif_util [None req-49d04a54-3fa6-45b2-a769-0492e9b7d6a6 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Converting VIF {"id": "6848cb43-8472-434b-a796-f96c3ce423e2", "address": "fa:16:3e:87:47:c3", "network": {"id": "ef8aaff1-03b0-4544-89c9-035c25f01e5c", "bridge": "br-int", "label": "tempest-network-smoke--1887948682", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.229", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6848cb43-84", "ovs_interfaceid": "6848cb43-8472-434b-a796-f96c3ce423e2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 10:15:03 compute-0 nova_compute[254819]: 2025-12-06 10:15:03.451 254824 DEBUG nova.network.os_vif_util [None req-49d04a54-3fa6-45b2-a769-0492e9b7d6a6 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:87:47:c3,bridge_name='br-int',has_traffic_filtering=True,id=6848cb43-8472-434b-a796-f96c3ce423e2,network=Network(ef8aaff1-03b0-4544-89c9-035c25f01e5c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6848cb43-84') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 10:15:03 compute-0 nova_compute[254819]: 2025-12-06 10:15:03.451 254824 DEBUG os_vif [None req-49d04a54-3fa6-45b2-a769-0492e9b7d6a6 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:87:47:c3,bridge_name='br-int',has_traffic_filtering=True,id=6848cb43-8472-434b-a796-f96c3ce423e2,network=Network(ef8aaff1-03b0-4544-89c9-035c25f01e5c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6848cb43-84') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 06 10:15:03 compute-0 nova_compute[254819]: 2025-12-06 10:15:03.453 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:15:03 compute-0 nova_compute[254819]: 2025-12-06 10:15:03.453 254824 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6848cb43-84, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 10:15:03 compute-0 nova_compute[254819]: 2025-12-06 10:15:03.455 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:15:03 compute-0 nova_compute[254819]: 2025-12-06 10:15:03.457 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 10:15:03 compute-0 nova_compute[254819]: 2025-12-06 10:15:03.460 254824 INFO os_vif [None req-49d04a54-3fa6-45b2-a769-0492e9b7d6a6 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:87:47:c3,bridge_name='br-int',has_traffic_filtering=True,id=6848cb43-8472-434b-a796-f96c3ce423e2,network=Network(ef8aaff1-03b0-4544-89c9-035c25f01e5c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6848cb43-84')
Dec 06 10:15:03 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-33b0ce662b23b6f98eab1f6b3386675cb46da27404914ee64922339023b1534d-userdata-shm.mount: Deactivated successfully.
Dec 06 10:15:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-1b93e3df8fb7a26445c0dd9f79f250dbd57ab6146ffb6d9a8c76505e995ddf4d-merged.mount: Deactivated successfully.
Dec 06 10:15:03 compute-0 podman[276739]: 2025-12-06 10:15:03.476289679 +0000 UTC m=+0.104251996 container cleanup 33b0ce662b23b6f98eab1f6b3386675cb46da27404914ee64922339023b1534d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=neutron-haproxy-ovnmeta-ef8aaff1-03b0-4544-89c9-035c25f01e5c, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Dec 06 10:15:03 compute-0 systemd[1]: libpod-conmon-33b0ce662b23b6f98eab1f6b3386675cb46da27404914ee64922339023b1534d.scope: Deactivated successfully.
Dec 06 10:15:03 compute-0 podman[276795]: 2025-12-06 10:15:03.550790711 +0000 UTC m=+0.045854250 container remove 33b0ce662b23b6f98eab1f6b3386675cb46da27404914ee64922339023b1534d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=neutron-haproxy-ovnmeta-ef8aaff1-03b0-4544-89c9-035c25f01e5c, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Dec 06 10:15:03 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:15:03.559 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[5b6552a6-1729-4df3-9779-3b8fd01e528d]: (4, ('Sat Dec  6 10:15:03 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-ef8aaff1-03b0-4544-89c9-035c25f01e5c (33b0ce662b23b6f98eab1f6b3386675cb46da27404914ee64922339023b1534d)\n33b0ce662b23b6f98eab1f6b3386675cb46da27404914ee64922339023b1534d\nSat Dec  6 10:15:03 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-ef8aaff1-03b0-4544-89c9-035c25f01e5c (33b0ce662b23b6f98eab1f6b3386675cb46da27404914ee64922339023b1534d)\n33b0ce662b23b6f98eab1f6b3386675cb46da27404914ee64922339023b1534d\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:15:03 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:15:03.562 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[31c69eb4-c2eb-4b44-b505-3d7a74c441f5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:15:03 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:15:03.563 162267 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapef8aaff1-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 10:15:03 compute-0 nova_compute[254819]: 2025-12-06 10:15:03.566 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:15:03 compute-0 kernel: tapef8aaff1-00: left promiscuous mode
Dec 06 10:15:03 compute-0 nova_compute[254819]: 2025-12-06 10:15:03.580 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:15:03 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:15:03.584 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[44939c2c-a223-4707-861b-6323e43da863]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:15:03 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:15:03.604 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[168107c5-1feb-40d4-8902-117d78b0e3ca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:15:03 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:15:03.605 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[dd3f08da-4125-4931-aebb-03960386b98e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:15:03 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:15:03.623 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[4d97f407-eff2-4a5d-b497-db4ff09bf242]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 442102, 'reachable_time': 33883, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 276813, 'error': None, 'target': 'ovnmeta-ef8aaff1-03b0-4544-89c9-035c25f01e5c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:15:03 compute-0 systemd[1]: run-netns-ovnmeta\x2def8aaff1\x2d03b0\x2d4544\x2d89c9\x2d035c25f01e5c.mount: Deactivated successfully.
Dec 06 10:15:03 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:15:03.627 162385 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-ef8aaff1-03b0-4544-89c9-035c25f01e5c deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 06 10:15:03 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:15:03.627 162385 DEBUG oslo.privsep.daemon [-] privsep: reply[9075aaa6-c613-42ae-bcbb-bc1ff3a37079]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:15:03 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:15:03 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:15:03 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:15:03.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:15:03 compute-0 nova_compute[254819]: 2025-12-06 10:15:03.868 254824 INFO nova.virt.libvirt.driver [None req-49d04a54-3fa6-45b2-a769-0492e9b7d6a6 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 1a910dd4-6c75-4618-8b34-925e2d30f8b9] Deleting instance files /var/lib/nova/instances/1a910dd4-6c75-4618-8b34-925e2d30f8b9_del
Dec 06 10:15:03 compute-0 nova_compute[254819]: 2025-12-06 10:15:03.869 254824 INFO nova.virt.libvirt.driver [None req-49d04a54-3fa6-45b2-a769-0492e9b7d6a6 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 1a910dd4-6c75-4618-8b34-925e2d30f8b9] Deletion of /var/lib/nova/instances/1a910dd4-6c75-4618-8b34-925e2d30f8b9_del complete
Dec 06 10:15:03 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:15:03 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:15:03 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:15:03.924 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:15:03 compute-0 nova_compute[254819]: 2025-12-06 10:15:03.956 254824 INFO nova.compute.manager [None req-49d04a54-3fa6-45b2-a769-0492e9b7d6a6 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 1a910dd4-6c75-4618-8b34-925e2d30f8b9] Took 0.77 seconds to destroy the instance on the hypervisor.
Dec 06 10:15:03 compute-0 nova_compute[254819]: 2025-12-06 10:15:03.957 254824 DEBUG oslo.service.loopingcall [None req-49d04a54-3fa6-45b2-a769-0492e9b7d6a6 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 06 10:15:03 compute-0 nova_compute[254819]: 2025-12-06 10:15:03.957 254824 DEBUG nova.compute.manager [-] [instance: 1a910dd4-6c75-4618-8b34-925e2d30f8b9] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 06 10:15:03 compute-0 nova_compute[254819]: 2025-12-06 10:15:03.957 254824 DEBUG nova.network.neutron [-] [instance: 1a910dd4-6c75-4618-8b34-925e2d30f8b9] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 06 10:15:04 compute-0 ceph-mon[74327]: pgmap v1019: 337 pgs: 337 active+clean; 121 MiB data, 324 MiB used, 60 GiB / 60 GiB avail; 29 KiB/s rd, 9.1 KiB/s wr, 32 op/s
Dec 06 10:15:04 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1020: 337 pgs: 337 active+clean; 121 MiB data, 322 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 7.2 KiB/s wr, 32 op/s
Dec 06 10:15:04 compute-0 nova_compute[254819]: 2025-12-06 10:15:04.476 254824 DEBUG nova.network.neutron [req-f088d230-5aa9-4f28-aac3-143a47f559f3 req-16db7389-3d46-4a30-afe5-5f55eeb97df8 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 1a910dd4-6c75-4618-8b34-925e2d30f8b9] Updated VIF entry in instance network info cache for port 6848cb43-8472-434b-a796-f96c3ce423e2. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 06 10:15:04 compute-0 nova_compute[254819]: 2025-12-06 10:15:04.477 254824 DEBUG nova.network.neutron [req-f088d230-5aa9-4f28-aac3-143a47f559f3 req-16db7389-3d46-4a30-afe5-5f55eeb97df8 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 1a910dd4-6c75-4618-8b34-925e2d30f8b9] Updating instance_info_cache with network_info: [{"id": "6848cb43-8472-434b-a796-f96c3ce423e2", "address": "fa:16:3e:87:47:c3", "network": {"id": "ef8aaff1-03b0-4544-89c9-035c25f01e5c", "bridge": "br-int", "label": "tempest-network-smoke--1887948682", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6848cb43-84", "ovs_interfaceid": "6848cb43-8472-434b-a796-f96c3ce423e2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 10:15:04 compute-0 nova_compute[254819]: 2025-12-06 10:15:04.494 254824 DEBUG oslo_concurrency.lockutils [req-f088d230-5aa9-4f28-aac3-143a47f559f3 req-16db7389-3d46-4a30-afe5-5f55eeb97df8 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Releasing lock "refresh_cache-1a910dd4-6c75-4618-8b34-925e2d30f8b9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 10:15:04 compute-0 nova_compute[254819]: 2025-12-06 10:15:04.540 254824 DEBUG nova.network.neutron [-] [instance: 1a910dd4-6c75-4618-8b34-925e2d30f8b9] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 10:15:04 compute-0 nova_compute[254819]: 2025-12-06 10:15:04.558 254824 INFO nova.compute.manager [-] [instance: 1a910dd4-6c75-4618-8b34-925e2d30f8b9] Took 0.60 seconds to deallocate network for instance.
Dec 06 10:15:04 compute-0 nova_compute[254819]: 2025-12-06 10:15:04.630 254824 DEBUG oslo_concurrency.lockutils [None req-49d04a54-3fa6-45b2-a769-0492e9b7d6a6 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:15:04 compute-0 nova_compute[254819]: 2025-12-06 10:15:04.631 254824 DEBUG oslo_concurrency.lockutils [None req-49d04a54-3fa6-45b2-a769-0492e9b7d6a6 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:15:04 compute-0 nova_compute[254819]: 2025-12-06 10:15:04.670 254824 DEBUG oslo_concurrency.processutils [None req-49d04a54-3fa6-45b2-a769-0492e9b7d6a6 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 10:15:04 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:15:04 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65e4001090 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:15:04 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:15:04 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_34] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c00a080 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:15:04 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:15:05 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 06 10:15:05 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3815512636' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 10:15:05 compute-0 nova_compute[254819]: 2025-12-06 10:15:05.118 254824 DEBUG oslo_concurrency.processutils [None req-49d04a54-3fa6-45b2-a769-0492e9b7d6a6 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 10:15:05 compute-0 nova_compute[254819]: 2025-12-06 10:15:05.124 254824 DEBUG nova.compute.provider_tree [None req-49d04a54-3fa6-45b2-a769-0492e9b7d6a6 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Inventory has not changed in ProviderTree for provider: 06a9c7d1-c74c-47ea-9e97-16acfab6aa88 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 10:15:05 compute-0 nova_compute[254819]: 2025-12-06 10:15:05.139 254824 DEBUG nova.compute.manager [req-083e97f9-c49a-4fc1-bd5e-de355e739a62 req-3d2979b9-f084-4fc2-902f-dffa8553c607 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 1a910dd4-6c75-4618-8b34-925e2d30f8b9] Received event network-vif-unplugged-6848cb43-8472-434b-a796-f96c3ce423e2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 10:15:05 compute-0 nova_compute[254819]: 2025-12-06 10:15:05.140 254824 DEBUG oslo_concurrency.lockutils [req-083e97f9-c49a-4fc1-bd5e-de355e739a62 req-3d2979b9-f084-4fc2-902f-dffa8553c607 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Acquiring lock "1a910dd4-6c75-4618-8b34-925e2d30f8b9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:15:05 compute-0 nova_compute[254819]: 2025-12-06 10:15:05.140 254824 DEBUG oslo_concurrency.lockutils [req-083e97f9-c49a-4fc1-bd5e-de355e739a62 req-3d2979b9-f084-4fc2-902f-dffa8553c607 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Lock "1a910dd4-6c75-4618-8b34-925e2d30f8b9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:15:05 compute-0 nova_compute[254819]: 2025-12-06 10:15:05.141 254824 DEBUG oslo_concurrency.lockutils [req-083e97f9-c49a-4fc1-bd5e-de355e739a62 req-3d2979b9-f084-4fc2-902f-dffa8553c607 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Lock "1a910dd4-6c75-4618-8b34-925e2d30f8b9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:15:05 compute-0 nova_compute[254819]: 2025-12-06 10:15:05.141 254824 DEBUG nova.compute.manager [req-083e97f9-c49a-4fc1-bd5e-de355e739a62 req-3d2979b9-f084-4fc2-902f-dffa8553c607 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 1a910dd4-6c75-4618-8b34-925e2d30f8b9] No waiting events found dispatching network-vif-unplugged-6848cb43-8472-434b-a796-f96c3ce423e2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 10:15:05 compute-0 nova_compute[254819]: 2025-12-06 10:15:05.141 254824 WARNING nova.compute.manager [req-083e97f9-c49a-4fc1-bd5e-de355e739a62 req-3d2979b9-f084-4fc2-902f-dffa8553c607 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 1a910dd4-6c75-4618-8b34-925e2d30f8b9] Received unexpected event network-vif-unplugged-6848cb43-8472-434b-a796-f96c3ce423e2 for instance with vm_state deleted and task_state None.
Dec 06 10:15:05 compute-0 nova_compute[254819]: 2025-12-06 10:15:05.141 254824 DEBUG nova.compute.manager [req-083e97f9-c49a-4fc1-bd5e-de355e739a62 req-3d2979b9-f084-4fc2-902f-dffa8553c607 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 1a910dd4-6c75-4618-8b34-925e2d30f8b9] Received event network-vif-plugged-6848cb43-8472-434b-a796-f96c3ce423e2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 10:15:05 compute-0 nova_compute[254819]: 2025-12-06 10:15:05.142 254824 DEBUG oslo_concurrency.lockutils [req-083e97f9-c49a-4fc1-bd5e-de355e739a62 req-3d2979b9-f084-4fc2-902f-dffa8553c607 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Acquiring lock "1a910dd4-6c75-4618-8b34-925e2d30f8b9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:15:05 compute-0 nova_compute[254819]: 2025-12-06 10:15:05.142 254824 DEBUG oslo_concurrency.lockutils [req-083e97f9-c49a-4fc1-bd5e-de355e739a62 req-3d2979b9-f084-4fc2-902f-dffa8553c607 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Lock "1a910dd4-6c75-4618-8b34-925e2d30f8b9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:15:05 compute-0 nova_compute[254819]: 2025-12-06 10:15:05.142 254824 DEBUG oslo_concurrency.lockutils [req-083e97f9-c49a-4fc1-bd5e-de355e739a62 req-3d2979b9-f084-4fc2-902f-dffa8553c607 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Lock "1a910dd4-6c75-4618-8b34-925e2d30f8b9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:15:05 compute-0 nova_compute[254819]: 2025-12-06 10:15:05.143 254824 DEBUG nova.compute.manager [req-083e97f9-c49a-4fc1-bd5e-de355e739a62 req-3d2979b9-f084-4fc2-902f-dffa8553c607 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 1a910dd4-6c75-4618-8b34-925e2d30f8b9] No waiting events found dispatching network-vif-plugged-6848cb43-8472-434b-a796-f96c3ce423e2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 10:15:05 compute-0 nova_compute[254819]: 2025-12-06 10:15:05.143 254824 WARNING nova.compute.manager [req-083e97f9-c49a-4fc1-bd5e-de355e739a62 req-3d2979b9-f084-4fc2-902f-dffa8553c607 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 1a910dd4-6c75-4618-8b34-925e2d30f8b9] Received unexpected event network-vif-plugged-6848cb43-8472-434b-a796-f96c3ce423e2 for instance with vm_state deleted and task_state None.
Dec 06 10:15:05 compute-0 nova_compute[254819]: 2025-12-06 10:15:05.143 254824 DEBUG nova.compute.manager [req-083e97f9-c49a-4fc1-bd5e-de355e739a62 req-3d2979b9-f084-4fc2-902f-dffa8553c607 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 1a910dd4-6c75-4618-8b34-925e2d30f8b9] Received event network-vif-deleted-6848cb43-8472-434b-a796-f96c3ce423e2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 10:15:05 compute-0 nova_compute[254819]: 2025-12-06 10:15:05.145 254824 DEBUG nova.scheduler.client.report [None req-49d04a54-3fa6-45b2-a769-0492e9b7d6a6 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Inventory has not changed for provider 06a9c7d1-c74c-47ea-9e97-16acfab6aa88 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 10:15:05 compute-0 nova_compute[254819]: 2025-12-06 10:15:05.163 254824 DEBUG oslo_concurrency.lockutils [None req-49d04a54-3fa6-45b2-a769-0492e9b7d6a6 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.532s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:15:05 compute-0 nova_compute[254819]: 2025-12-06 10:15:05.190 254824 INFO nova.scheduler.client.report [None req-49d04a54-3fa6-45b2-a769-0492e9b7d6a6 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Deleted allocations for instance 1a910dd4-6c75-4618-8b34-925e2d30f8b9
Dec 06 10:15:05 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:15:05 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d0001d20 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:15:05 compute-0 nova_compute[254819]: 2025-12-06 10:15:05.242 254824 DEBUG oslo_concurrency.lockutils [None req-49d04a54-3fa6-45b2-a769-0492e9b7d6a6 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lock "1a910dd4-6c75-4618-8b34-925e2d30f8b9" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.059s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:15:05 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:15:05 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:15:05 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:15:05.845 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:15:05 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:15:05 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:15:05 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:15:05.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:15:06 compute-0 ceph-mon[74327]: pgmap v1020: 337 pgs: 337 active+clean; 121 MiB data, 322 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 7.2 KiB/s wr, 32 op/s
Dec 06 10:15:06 compute-0 ceph-mon[74327]: from='client.? 192.168.122.100:0/3815512636' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 10:15:06 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1021: 337 pgs: 337 active+clean; 121 MiB data, 322 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 7.2 KiB/s wr, 32 op/s
Dec 06 10:15:06 compute-0 nova_compute[254819]: 2025-12-06 10:15:06.492 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:15:06 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:15:06 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f4004530 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:15:06 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:15:06 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65e4001090 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:15:07 compute-0 ceph-mon[74327]: pgmap v1021: 337 pgs: 337 active+clean; 121 MiB data, 322 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 7.2 KiB/s wr, 32 op/s
Dec 06 10:15:07 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:15:07 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_34] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c00a0a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:15:07 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:15:07.661Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec 06 10:15:07 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:15:07.661Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec 06 10:15:07 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:15:07 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:15:07 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:15:07.847 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:15:07 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:15:07 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:15:07 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:15:07.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:15:08 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1022: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 38 KiB/s rd, 7.7 KiB/s wr, 56 op/s
Dec 06 10:15:08 compute-0 nova_compute[254819]: 2025-12-06 10:15:08.457 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:15:08 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[105048]: logger=infra.usagestats t=2025-12-06T10:15:08.588248076Z level=info msg="Usage stats are ready to report"
Dec 06 10:15:08 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:15:08 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d0001d40 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:15:08 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:15:08 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f4004530 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:15:08 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 06 10:15:08 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 10:15:09 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:15:09.040Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 06 10:15:09 compute-0 ceph-mon[74327]: pgmap v1022: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 38 KiB/s rd, 7.7 KiB/s wr, 56 op/s
Dec 06 10:15:09 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 10:15:09 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:15:09 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65e4002900 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:15:09 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:15:09 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:15:09 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:15:09.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:15:09 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:15:09 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:15:09 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:15:09 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:15:09.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:15:10 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1023: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 2.2 KiB/s wr, 28 op/s
Dec 06 10:15:10 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:15:10 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_34] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c00a0c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:15:10 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:15:10 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d0001d40 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:15:10 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:15:10] "GET /metrics HTTP/1.1" 200 48460 "" "Prometheus/2.51.0"
Dec 06 10:15:10 compute-0 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:15:10] "GET /metrics HTTP/1.1" 200 48460 "" "Prometheus/2.51.0"
Dec 06 10:15:11 compute-0 ceph-mon[74327]: pgmap v1023: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 2.2 KiB/s wr, 28 op/s
Dec 06 10:15:11 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:15:11 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f4004530 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:15:11 compute-0 nova_compute[254819]: 2025-12-06 10:15:11.540 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:15:11 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:15:11 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:15:11 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:15:11.851 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:15:11 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:15:11 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:15:11 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:15:11.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:15:12 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1024: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 2.2 KiB/s wr, 28 op/s
Dec 06 10:15:12 compute-0 nova_compute[254819]: 2025-12-06 10:15:12.436 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:15:12 compute-0 podman[276847]: 2025-12-06 10:15:12.439694616 +0000 UTC m=+0.071607745 container health_status a9cf33c1bf7891b3fdd9db4717ad5f3587fe9e5acf9171920c6d6334b70a502a (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Dec 06 10:15:12 compute-0 nova_compute[254819]: 2025-12-06 10:15:12.519 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:15:12 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:15:12 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65e4002900 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:15:12 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:15:12 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_34] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c00a0e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:15:13 compute-0 ceph-mon[74327]: pgmap v1024: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 2.2 KiB/s wr, 28 op/s
Dec 06 10:15:13 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:15:13 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d0001d40 fd 39 proxy header rest len failed header rlen = % (will set dead)
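These ganesha.nfsd svc_vc_recv EVENT lines repeat on the same two-second cadence, always on fd 39, which points at a TCP health checker rather than a real NFS client: each connection delivers bytes that fail ntirpc's PROXY/RPC header parsing ("proxy header rest len failed"), so the transport is marked dead and the prober reconnects. The dangling "%" where the rlen value should appear is emitted by the daemon itself (it repeats identically on every occurrence), i.e. a defect in its log format string, not corruption in this journal, so the lines are reproduced verbatim. A sketch to confirm the pattern by counting events per worker thread and fd, reading journal text from stdin:

    # Count ganesha svc_vc_recv events per worker thread and fd.
    import re
    import sys
    from collections import Counter

    EVT = re.compile(
        r'ganesha\.nfsd-\d+\[(?P<thread>svc_\d+)\] rpc :TIRPC :EVENT '
        r':svc_vc_recv: \S+ fd (?P<fd>\d+)'
    )

    hits = Counter()
    for line in sys.stdin:
        m = EVT.search(line)
        if m:
            hits[(m['thread'], m['fd'])] += 1

    for (thread, fd), n in hits.most_common():
        print(f'{n:6d}  {thread}  fd={fd}')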
Dec 06 10:15:13 compute-0 nova_compute[254819]: 2025-12-06 10:15:13.460 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:15:13 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:15:13 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:15:13 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:15:13.854 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:15:13 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:15:13 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:15:13 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:15:13.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:15:14 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1025: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 2.2 KiB/s wr, 28 op/s
Dec 06 10:15:14 compute-0 sudo[276870]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 10:15:14 compute-0 sudo[276870]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:15:14 compute-0 sudo[276870]: pam_unix(sudo:session): session closed for user root
Dec 06 10:15:14 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:15:14 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f4004530 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:15:14 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:15:14 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65e4002900 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:15:14 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
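_set_new_cache_sizes is the monitor's periodic cache autotuning (driven by the mon memory target) splitting roughly 0.95 GiB between the incremental osdmap cache, the full osdmap cache and the RocksDB cache; the three allocations sum to no more than cache_size. Checking the figures from the line above:

    # Split of the mon cache target from the _set_new_cache_sizes line.
    cache_size = 1020054731
    allocs = {'inc_alloc': 343932928, 'full_alloc': 348127232, 'kv_alloc': 318767104}

    for name, byte_count in allocs.items():
        print(f'{name:10s} {byte_count / 2**20:7.1f} MiB  ({byte_count / cache_size:6.1%})')
    print('sum within target:', sum(allocs.values()) <= cache_size)   # True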
Dec 06 10:15:15 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:15:15 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_34] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c00a100 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:15:15 compute-0 ceph-mon[74327]: pgmap v1025: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 2.2 KiB/s wr, 28 op/s
Dec 06 10:15:15 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:15:15 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:15:15 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:15:15.856 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:15:15 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:15:15 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:15:15 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:15:15.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:15:16 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1026: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Dec 06 10:15:16 compute-0 nova_compute[254819]: 2025-12-06 10:15:16.541 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:15:16 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:15:16 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d0001d40 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:15:16 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:15:16 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f4004530 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:15:17 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:15:17 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65e40043a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:15:17 compute-0 ceph-mon[74327]: pgmap v1026: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Dec 06 10:15:17 compute-0 podman[276897]: 2025-12-06 10:15:17.511497539 +0000 UTC m=+0.142429266 container health_status ed08b04f63d3695f0aa2f04fcc706897af4aa24f286fa3cd10a4323ee2c868ab (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 06 10:15:17 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:15:17.662Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec 06 10:15:17 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:15:17.665Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
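Both ceph-dashboard webhook receivers are unreachable here (compute-1 and compute-2 on port 8443, one dial timing out, the retries for both hitting the context deadline), so the dispatcher exhausts its retries and drops the notification; alert evaluation itself is unaffected, only delivery fails, and these errors will repeat until the receivers come back. A sketch that pulls the failing endpoints and error kinds out of such dispatcher lines (the regex is built from the escaped quoting in the messages above):

    # Extract failing webhook URLs and reasons from Alertmanager notify errors.
    import re
    import sys
    from collections import Counter

    POST = re.compile(r'Post \\?"(?P<url>http[^"\\]+)\\?": (?P<reason>[^;"]+)')

    failures = Counter()
    for line in sys.stdin:
        if 'Notify for alerts failed' not in line and 'Notify attempt failed' not in line:
            continue
        for m in POST.finditer(line):
            failures[(m['url'], m['reason'].strip())] += 1

    for (url, reason), n in failures.most_common():
        print(f'{n:4d}  {url}  ({reason})')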
Dec 06 10:15:17 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:15:17 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:15:17 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:15:17.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:15:17 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:15:17 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:15:17 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:15:17.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:15:18 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1027: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Dec 06 10:15:18 compute-0 nova_compute[254819]: 2025-12-06 10:15:18.424 254824 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765016103.4228678, 1a910dd4-6c75-4618-8b34-925e2d30f8b9 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 10:15:18 compute-0 nova_compute[254819]: 2025-12-06 10:15:18.424 254824 INFO nova.compute.manager [-] [instance: 1a910dd4-6c75-4618-8b34-925e2d30f8b9] VM Stopped (Lifecycle Event)
Dec 06 10:15:18 compute-0 nova_compute[254819]: 2025-12-06 10:15:18.447 254824 DEBUG nova.compute.manager [None req-42acda88-9785-4123-b4c2-f84a2d40a264 - - - - - -] [instance: 1a910dd4-6c75-4618-8b34-925e2d30f8b9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 10:15:18 compute-0 nova_compute[254819]: 2025-12-06 10:15:18.464 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:15:18 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:15:18 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_34] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c00a120 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:15:18 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:15:18 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d0001d40 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:15:19 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:15:19.040Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 06 10:15:19 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:15:19 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f4004530 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:15:19 compute-0 ceph-mon[74327]: pgmap v1027: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Dec 06 10:15:19 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:15:19 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:15:19 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:15:19.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:15:19 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:15:19 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:15:19 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:15:19 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:15:19.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:15:20 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1028: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 10:15:20 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:15:20 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65e40043a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:15:20 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:15:20 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_34] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c00a140 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:15:20 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:15:20] "GET /metrics HTTP/1.1" 200 48460 "" "Prometheus/2.51.0"
Dec 06 10:15:20 compute-0 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:15:20] "GET /metrics HTTP/1.1" 200 48460 "" "Prometheus/2.51.0"
Dec 06 10:15:21 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:15:21 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d0001d40 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:15:21 compute-0 ceph-mon[74327]: pgmap v1028: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 10:15:21 compute-0 nova_compute[254819]: 2025-12-06 10:15:21.542 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:15:21 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:15:21 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:15:21 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:15:21.862 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:15:21 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:15:21 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:15:21 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:15:21.953 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:15:22 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1029: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 10:15:22 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:15:22 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f4004530 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:15:22 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:15:22 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_34] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f4004530 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:15:23 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:15:23 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c00a160 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:15:23 compute-0 podman[276930]: 2025-12-06 10:15:23.435555511 +0000 UTC m=+0.067896804 container health_status ec1e66d2175087ad7a28a9a6b5a784e30128df3f5e268959d009e364cd5120f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3)
Dec 06 10:15:23 compute-0 nova_compute[254819]: 2025-12-06 10:15:23.466 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:15:23 compute-0 ceph-mon[74327]: pgmap v1029: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 10:15:23 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:15:23 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:15:23 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:15:23.865 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:15:23 compute-0 ceph-mgr[74618]: [balancer INFO root] Optimize plan auto_2025-12-06_10:15:23
Dec 06 10:15:23 compute-0 ceph-mgr[74618]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 06 10:15:23 compute-0 ceph-mgr[74618]: [balancer INFO root] do_upmap
Dec 06 10:15:23 compute-0 ceph-mgr[74618]: [balancer INFO root] pools ['.rgw.root', 'images', '.mgr', 'cephfs.cephfs.meta', 'default.rgw.meta', 'vms', 'default.rgw.log', 'cephfs.cephfs.data', 'volumes', '.nfs', 'default.rgw.control', 'backups']
Dec 06 10:15:23 compute-0 ceph-mgr[74618]: [balancer INFO root] prepared 0/10 upmap changes
Dec 06 10:15:23 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:15:23 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:15:23 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:15:23.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:15:23 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 06 10:15:23 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 10:15:24 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 10:15:24 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 10:15:24 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 10:15:24 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 10:15:24 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 10:15:24 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 10:15:24 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1030: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Dec 06 10:15:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] _maybe_adjust
Dec 06 10:15:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:15:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 06 10:15:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:15:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec 06 10:15:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:15:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 10:15:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:15:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 10:15:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:15:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Dec 06 10:15:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:15:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Dec 06 10:15:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:15:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 10:15:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:15:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec 06 10:15:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:15:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 06 10:15:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:15:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec 06 10:15:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:15:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 10:15:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:15:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
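Every pg_autoscaler pair above fits one relation: pg target = usage_ratio x bias x (OSD count x mon_target_pg_per_osd), then quantized to a power of two and floored at the pool minimum, which is why all twelve pools keep their current pg_num. A multiplier of 300 (3 OSDs x the default 100 PGs per OSD; both inferred, neither is printed in the log) reproduces the logged targets exactly:

    # Reproduce the pg_autoscaler targets logged above.
    # Assumption: 3 OSDs x mon_target_pg_per_osd=100 -> multiplier 300.
    import math

    PG_MULT = 3 * 100
    pools = [
        # (pool, usage_ratio, bias, logged_pg_target)
        ('.mgr',               7.185749983720779e-06, 1.0, 0.0021557249951162337),
        ('vms',                6.359070782053786e-08, 1.0, 1.907721234616136e-05),
        ('images',             0.000665858301588852,  1.0, 0.19975749047665559),
        ('cephfs.cephfs.meta', 5.087256625643029e-07, 4.0, 0.0006104707950771635),
    ]

    for name, ratio, bias, logged in pools:
        computed = ratio * bias * PG_MULT
        print(f'{name:20s} {computed:.16g}  matches_log={math.isclose(computed, logged, rel_tol=1e-12)}')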
Dec 06 10:15:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 06 10:15:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 10:15:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 10:15:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 10:15:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 10:15:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 06 10:15:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 10:15:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 10:15:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 10:15:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 10:15:24 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:15:24 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d0001d60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:15:24 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 10:15:24 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:15:24 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_34] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d0001d60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:15:24 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:15:25 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:15:25 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_34] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f4004530 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:15:25 compute-0 ceph-mon[74327]: pgmap v1030: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Dec 06 10:15:25 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:15:25 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:15:25 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:15:25.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:15:25 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:15:25 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:15:25 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:15:25.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:15:26 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1031: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 10:15:26 compute-0 nova_compute[254819]: 2025-12-06 10:15:26.544 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:15:26 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:15:26 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c00a180 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:15:26 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:15:26 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d0001d60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:15:27 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:15:27 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_34] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d0001d60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:15:27 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:15:27.666Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 06 10:15:27 compute-0 ceph-mon[74327]: pgmap v1031: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 10:15:27 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:15:27 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:15:27 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:15:27.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:15:27 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:15:27 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:15:27 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:15:27.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:15:28 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1032: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Dec 06 10:15:28 compute-0 nova_compute[254819]: 2025-12-06 10:15:28.469 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:15:28 compute-0 nova_compute[254819]: 2025-12-06 10:15:28.513 254824 DEBUG oslo_concurrency.lockutils [None req-1e2bd210-71f8-4097-9d67-2e1f353e5119 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Acquiring lock "7ebb0f0e-b16a-451f-b85a-623f5bcf704f" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:15:28 compute-0 nova_compute[254819]: 2025-12-06 10:15:28.513 254824 DEBUG oslo_concurrency.lockutils [None req-1e2bd210-71f8-4097-9d67-2e1f353e5119 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lock "7ebb0f0e-b16a-451f-b85a-623f5bcf704f" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:15:28 compute-0 nova_compute[254819]: 2025-12-06 10:15:28.532 254824 DEBUG nova.compute.manager [None req-1e2bd210-71f8-4097-9d67-2e1f353e5119 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 7ebb0f0e-b16a-451f-b85a-623f5bcf704f] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 06 10:15:28 compute-0 nova_compute[254819]: 2025-12-06 10:15:28.625 254824 DEBUG oslo_concurrency.lockutils [None req-1e2bd210-71f8-4097-9d67-2e1f353e5119 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:15:28 compute-0 nova_compute[254819]: 2025-12-06 10:15:28.626 254824 DEBUG oslo_concurrency.lockutils [None req-1e2bd210-71f8-4097-9d67-2e1f353e5119 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:15:28 compute-0 nova_compute[254819]: 2025-12-06 10:15:28.631 254824 DEBUG nova.virt.hardware [None req-1e2bd210-71f8-4097-9d67-2e1f353e5119 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 06 10:15:28 compute-0 nova_compute[254819]: 2025-12-06 10:15:28.632 254824 INFO nova.compute.claims [None req-1e2bd210-71f8-4097-9d67-2e1f353e5119 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 7ebb0f0e-b16a-451f-b85a-623f5bcf704f] Claim successful on node compute-0.ctlplane.example.com
Dec 06 10:15:28 compute-0 nova_compute[254819]: 2025-12-06 10:15:28.739 254824 DEBUG oslo_concurrency.processutils [None req-1e2bd210-71f8-4097-9d67-2e1f353e5119 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 10:15:28 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:15:28 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_34] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f4004530 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:15:28 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:15:28 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c00a1a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:15:29 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:15:29.042Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 06 10:15:29 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 06 10:15:29 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3798072465' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 10:15:29 compute-0 nova_compute[254819]: 2025-12-06 10:15:29.174 254824 DEBUG oslo_concurrency.processutils [None req-1e2bd210-71f8-4097-9d67-2e1f353e5119 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.435s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 10:15:29 compute-0 nova_compute[254819]: 2025-12-06 10:15:29.179 254824 DEBUG nova.compute.provider_tree [None req-1e2bd210-71f8-4097-9d67-2e1f353e5119 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Inventory has not changed in ProviderTree for provider: 06a9c7d1-c74c-47ea-9e97-16acfab6aa88 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 10:15:29 compute-0 nova_compute[254819]: 2025-12-06 10:15:29.195 254824 DEBUG nova.scheduler.client.report [None req-1e2bd210-71f8-4097-9d67-2e1f353e5119 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Inventory has not changed for provider 06a9c7d1-c74c-47ea-9e97-16acfab6aa88 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
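That inventory dict is what Placement turns into schedulable capacity, per resource class: capacity = (total - reserved) x allocation_ratio. For the values logged above, this node can accept claims up to 32 vCPUs, 7168 MB of RAM and 52.2 GB of disk:

    # Effective Placement capacity from the inventory logged above:
    # capacity = (total - reserved) * allocation_ratio
    inventory = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7680, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 59,   'reserved': 1,   'allocation_ratio': 0.9},
    }

    for rc, inv in inventory.items():
        cap = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
        print(f'{rc:10s} schedulable = {cap}')
    # VCPU 32.0, MEMORY_MB 7168.0, DISK_GB 52.2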
Dec 06 10:15:29 compute-0 nova_compute[254819]: 2025-12-06 10:15:29.215 254824 DEBUG oslo_concurrency.lockutils [None req-1e2bd210-71f8-4097-9d67-2e1f353e5119 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.589s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:15:29 compute-0 nova_compute[254819]: 2025-12-06 10:15:29.215 254824 DEBUG nova.compute.manager [None req-1e2bd210-71f8-4097-9d67-2e1f353e5119 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 7ebb0f0e-b16a-451f-b85a-623f5bcf704f] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 06 10:15:29 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:15:29 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65e8003430 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:15:29 compute-0 nova_compute[254819]: 2025-12-06 10:15:29.257 254824 DEBUG nova.compute.manager [None req-1e2bd210-71f8-4097-9d67-2e1f353e5119 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 7ebb0f0e-b16a-451f-b85a-623f5bcf704f] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 06 10:15:29 compute-0 nova_compute[254819]: 2025-12-06 10:15:29.257 254824 DEBUG nova.network.neutron [None req-1e2bd210-71f8-4097-9d67-2e1f353e5119 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 7ebb0f0e-b16a-451f-b85a-623f5bcf704f] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 06 10:15:29 compute-0 nova_compute[254819]: 2025-12-06 10:15:29.277 254824 INFO nova.virt.libvirt.driver [None req-1e2bd210-71f8-4097-9d67-2e1f353e5119 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 7ebb0f0e-b16a-451f-b85a-623f5bcf704f] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 06 10:15:29 compute-0 nova_compute[254819]: 2025-12-06 10:15:29.293 254824 DEBUG nova.compute.manager [None req-1e2bd210-71f8-4097-9d67-2e1f353e5119 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 7ebb0f0e-b16a-451f-b85a-623f5bcf704f] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 06 10:15:29 compute-0 nova_compute[254819]: 2025-12-06 10:15:29.386 254824 DEBUG nova.compute.manager [None req-1e2bd210-71f8-4097-9d67-2e1f353e5119 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 7ebb0f0e-b16a-451f-b85a-623f5bcf704f] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 06 10:15:29 compute-0 nova_compute[254819]: 2025-12-06 10:15:29.388 254824 DEBUG nova.virt.libvirt.driver [None req-1e2bd210-71f8-4097-9d67-2e1f353e5119 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 7ebb0f0e-b16a-451f-b85a-623f5bcf704f] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 06 10:15:29 compute-0 nova_compute[254819]: 2025-12-06 10:15:29.388 254824 INFO nova.virt.libvirt.driver [None req-1e2bd210-71f8-4097-9d67-2e1f353e5119 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 7ebb0f0e-b16a-451f-b85a-623f5bcf704f] Creating image(s)
Dec 06 10:15:29 compute-0 nova_compute[254819]: 2025-12-06 10:15:29.414 254824 DEBUG nova.storage.rbd_utils [None req-1e2bd210-71f8-4097-9d67-2e1f353e5119 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] rbd image 7ebb0f0e-b16a-451f-b85a-623f5bcf704f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 10:15:29 compute-0 nova_compute[254819]: 2025-12-06 10:15:29.439 254824 DEBUG nova.storage.rbd_utils [None req-1e2bd210-71f8-4097-9d67-2e1f353e5119 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] rbd image 7ebb0f0e-b16a-451f-b85a-623f5bcf704f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 10:15:29 compute-0 nova_compute[254819]: 2025-12-06 10:15:29.465 254824 DEBUG nova.storage.rbd_utils [None req-1e2bd210-71f8-4097-9d67-2e1f353e5119 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] rbd image 7ebb0f0e-b16a-451f-b85a-623f5bcf704f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 10:15:29 compute-0 nova_compute[254819]: 2025-12-06 10:15:29.469 254824 DEBUG oslo_concurrency.processutils [None req-1e2bd210-71f8-4097-9d67-2e1f353e5119 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/1b7208203e670301d076a006cb3364d3eb842050 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 10:15:29 compute-0 nova_compute[254819]: 2025-12-06 10:15:29.498 254824 DEBUG nova.policy [None req-1e2bd210-71f8-4097-9d67-2e1f353e5119 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '03615580775245e6ae335ee9d785611f', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '92b402c8d3e2476abc98be42a1e6d34e', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 06 10:15:29 compute-0 nova_compute[254819]: 2025-12-06 10:15:29.523 254824 DEBUG oslo_concurrency.processutils [None req-1e2bd210-71f8-4097-9d67-2e1f353e5119 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/1b7208203e670301d076a006cb3364d3eb842050 --force-share --output=json" returned: 0 in 0.054s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 10:15:29 compute-0 nova_compute[254819]: 2025-12-06 10:15:29.524 254824 DEBUG oslo_concurrency.lockutils [None req-1e2bd210-71f8-4097-9d67-2e1f353e5119 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Acquiring lock "1b7208203e670301d076a006cb3364d3eb842050" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:15:29 compute-0 nova_compute[254819]: 2025-12-06 10:15:29.524 254824 DEBUG oslo_concurrency.lockutils [None req-1e2bd210-71f8-4097-9d67-2e1f353e5119 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lock "1b7208203e670301d076a006cb3364d3eb842050" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:15:29 compute-0 nova_compute[254819]: 2025-12-06 10:15:29.525 254824 DEBUG oslo_concurrency.lockutils [None req-1e2bd210-71f8-4097-9d67-2e1f353e5119 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lock "1b7208203e670301d076a006cb3364d3eb842050" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:15:29 compute-0 nova_compute[254819]: 2025-12-06 10:15:29.557 254824 DEBUG nova.storage.rbd_utils [None req-1e2bd210-71f8-4097-9d67-2e1f353e5119 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] rbd image 7ebb0f0e-b16a-451f-b85a-623f5bcf704f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 10:15:29 compute-0 nova_compute[254819]: 2025-12-06 10:15:29.560 254824 DEBUG oslo_concurrency.processutils [None req-1e2bd210-71f8-4097-9d67-2e1f353e5119 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/1b7208203e670301d076a006cb3364d3eb842050 7ebb0f0e-b16a-451f-b85a-623f5bcf704f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 10:15:29 compute-0 nova_compute[254819]: 2025-12-06 10:15:29.811 254824 DEBUG oslo_concurrency.processutils [None req-1e2bd210-71f8-4097-9d67-2e1f353e5119 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/1b7208203e670301d076a006cb3364d3eb842050 7ebb0f0e-b16a-451f-b85a-623f5bcf704f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.251s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
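The image path for instance 7ebb0f0e is visible end to end: three probes confirm vms/7ebb0f0e-b16a-451f-b85a-623f5bcf704f_disk does not exist yet, qemu-img inspects the cached base image (0.054s), and rbd import pushes it into the vms pool (0.251s), with the resize to 1 GiB following below. oslo.concurrency logs every subprocess as a Running cmd / returned pair carrying the exit code and wall time, so per-command latency can be read straight off the journal; a sketch:

    # Pull exit codes and wall times from oslo processutils 'returned' lines.
    import re
    import sys

    DONE = re.compile(r'CMD "(?P<cmd>[^"]+)" returned: (?P<rc>\d+) in (?P<secs>[\d.]+)s')

    for line in sys.stdin:
        m = DONE.search(line)
        if m:
            binary = m['cmd'].split()[0]   # first token identifies the tool
            print(f"{float(m['secs']):7.3f}s rc={m['rc']} {binary}")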
Dec 06 10:15:29 compute-0 ceph-mon[74327]: pgmap v1032: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Dec 06 10:15:29 compute-0 ceph-mon[74327]: from='client.? 192.168.122.100:0/3798072465' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 10:15:29 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:15:29 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:15:29 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:15:29.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:15:29 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:15:29 compute-0 nova_compute[254819]: 2025-12-06 10:15:29.882 254824 DEBUG nova.storage.rbd_utils [None req-1e2bd210-71f8-4097-9d67-2e1f353e5119 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] resizing rbd image 7ebb0f0e-b16a-451f-b85a-623f5bcf704f_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 06 10:15:29 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:15:29 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:15:29 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:15:29.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:15:30 compute-0 nova_compute[254819]: 2025-12-06 10:15:29.999 254824 DEBUG nova.objects.instance [None req-1e2bd210-71f8-4097-9d67-2e1f353e5119 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lazy-loading 'migration_context' on Instance uuid 7ebb0f0e-b16a-451f-b85a-623f5bcf704f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 10:15:30 compute-0 nova_compute[254819]: 2025-12-06 10:15:30.023 254824 DEBUG nova.virt.libvirt.driver [None req-1e2bd210-71f8-4097-9d67-2e1f353e5119 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 7ebb0f0e-b16a-451f-b85a-623f5bcf704f] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 06 10:15:30 compute-0 nova_compute[254819]: 2025-12-06 10:15:30.023 254824 DEBUG nova.virt.libvirt.driver [None req-1e2bd210-71f8-4097-9d67-2e1f353e5119 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 7ebb0f0e-b16a-451f-b85a-623f5bcf704f] Ensure instance console log exists: /var/lib/nova/instances/7ebb0f0e-b16a-451f-b85a-623f5bcf704f/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 06 10:15:30 compute-0 nova_compute[254819]: 2025-12-06 10:15:30.024 254824 DEBUG oslo_concurrency.lockutils [None req-1e2bd210-71f8-4097-9d67-2e1f353e5119 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:15:30 compute-0 nova_compute[254819]: 2025-12-06 10:15:30.024 254824 DEBUG oslo_concurrency.lockutils [None req-1e2bd210-71f8-4097-9d67-2e1f353e5119 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:15:30 compute-0 nova_compute[254819]: 2025-12-06 10:15:30.024 254824 DEBUG oslo_concurrency.lockutils [None req-1e2bd210-71f8-4097-9d67-2e1f353e5119 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:15:30 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1033: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 10:15:30 compute-0 nova_compute[254819]: 2025-12-06 10:15:30.507 254824 DEBUG nova.network.neutron [None req-1e2bd210-71f8-4097-9d67-2e1f353e5119 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 7ebb0f0e-b16a-451f-b85a-623f5bcf704f] Successfully created port: ea0f2c61-7dc3-454a-9b4d-2adf5a0a86bd _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 06 10:15:30 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:15:30 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_36] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d0001d60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:15:30 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:15:30 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f4004530 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:15:30 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:15:30] "GET /metrics HTTP/1.1" 200 48457 "" "Prometheus/2.51.0"
Dec 06 10:15:30 compute-0 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:15:30] "GET /metrics HTTP/1.1" 200 48457 "" "Prometheus/2.51.0"
Dec 06 10:15:31 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:15:31 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d40019e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:15:31 compute-0 nova_compute[254819]: 2025-12-06 10:15:31.272 254824 DEBUG nova.network.neutron [None req-1e2bd210-71f8-4097-9d67-2e1f353e5119 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 7ebb0f0e-b16a-451f-b85a-623f5bcf704f] Successfully updated port: ea0f2c61-7dc3-454a-9b4d-2adf5a0a86bd _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 06 10:15:31 compute-0 nova_compute[254819]: 2025-12-06 10:15:31.289 254824 DEBUG oslo_concurrency.lockutils [None req-1e2bd210-71f8-4097-9d67-2e1f353e5119 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Acquiring lock "refresh_cache-7ebb0f0e-b16a-451f-b85a-623f5bcf704f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 10:15:31 compute-0 nova_compute[254819]: 2025-12-06 10:15:31.289 254824 DEBUG oslo_concurrency.lockutils [None req-1e2bd210-71f8-4097-9d67-2e1f353e5119 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Acquired lock "refresh_cache-7ebb0f0e-b16a-451f-b85a-623f5bcf704f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 10:15:31 compute-0 nova_compute[254819]: 2025-12-06 10:15:31.290 254824 DEBUG nova.network.neutron [None req-1e2bd210-71f8-4097-9d67-2e1f353e5119 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 7ebb0f0e-b16a-451f-b85a-623f5bcf704f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 06 10:15:31 compute-0 nova_compute[254819]: 2025-12-06 10:15:31.397 254824 DEBUG nova.compute.manager [req-c46b9601-ac02-4b8f-986a-ed6084fe11c2 req-54db482d-dd4d-4536-ab4e-3605a18d79a8 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 7ebb0f0e-b16a-451f-b85a-623f5bcf704f] Received event network-changed-ea0f2c61-7dc3-454a-9b4d-2adf5a0a86bd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 10:15:31 compute-0 nova_compute[254819]: 2025-12-06 10:15:31.398 254824 DEBUG nova.compute.manager [req-c46b9601-ac02-4b8f-986a-ed6084fe11c2 req-54db482d-dd4d-4536-ab4e-3605a18d79a8 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 7ebb0f0e-b16a-451f-b85a-623f5bcf704f] Refreshing instance network info cache due to event network-changed-ea0f2c61-7dc3-454a-9b4d-2adf5a0a86bd. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 10:15:31 compute-0 nova_compute[254819]: 2025-12-06 10:15:31.398 254824 DEBUG oslo_concurrency.lockutils [req-c46b9601-ac02-4b8f-986a-ed6084fe11c2 req-54db482d-dd4d-4536-ab4e-3605a18d79a8 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Acquiring lock "refresh_cache-7ebb0f0e-b16a-451f-b85a-623f5bcf704f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 10:15:31 compute-0 nova_compute[254819]: 2025-12-06 10:15:31.545 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:15:31 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:15:31 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 10:15:31 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:15:31.872 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 10:15:31 compute-0 ceph-mon[74327]: pgmap v1033: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 10:15:31 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:15:31 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:15:31 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:15:31.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:15:32 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1034: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 10:15:32 compute-0 nova_compute[254819]: 2025-12-06 10:15:32.405 254824 DEBUG nova.network.neutron [None req-1e2bd210-71f8-4097-9d67-2e1f353e5119 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 7ebb0f0e-b16a-451f-b85a-623f5bcf704f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 06 10:15:32 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:15:32 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65e8003430 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:15:32 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:15:32 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_36] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d0004430 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:15:33 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:15:33 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f4004530 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:15:33 compute-0 nova_compute[254819]: 2025-12-06 10:15:33.341 254824 DEBUG nova.network.neutron [None req-1e2bd210-71f8-4097-9d67-2e1f353e5119 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 7ebb0f0e-b16a-451f-b85a-623f5bcf704f] Updating instance_info_cache with network_info: [{"id": "ea0f2c61-7dc3-454a-9b4d-2adf5a0a86bd", "address": "fa:16:3e:21:72:5e", "network": {"id": "f19a420c-d088-44ba-92a5-ba4d8025ce6c", "bridge": "br-int", "label": "tempest-network-smoke--1988472625", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapea0f2c61-7d", "ovs_interfaceid": "ea0f2c61-7dc3-454a-9b4d-2adf5a0a86bd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
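The network_info payload in the cache update above is plain JSON, so the interesting fields are easy to pull out; a short sketch against a trimmed copy of that exact structure:

    # Extract the fixed IP and MTU from a trimmed copy of the
    # network_info JSON logged above.
    import json

    network_info = json.loads('''[{"id": "ea0f2c61-7dc3-454a-9b4d-2adf5a0a86bd",
      "address": "fa:16:3e:21:72:5e",
      "network": {"subnets": [{"cidr": "10.100.0.0/28",
          "ips": [{"address": "10.100.0.7", "type": "fixed"}]}],
        "meta": {"mtu": 1442}}}]''')

    vif = network_info[0]
    ip = vif['network']['subnets'][0]['ips'][0]['address']
    mtu = vif['network']['meta']['mtu']
    print(ip, mtu)  # 10.100.0.7 1442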
Dec 06 10:15:33 compute-0 nova_compute[254819]: 2025-12-06 10:15:33.370 254824 DEBUG oslo_concurrency.lockutils [None req-1e2bd210-71f8-4097-9d67-2e1f353e5119 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Releasing lock "refresh_cache-7ebb0f0e-b16a-451f-b85a-623f5bcf704f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 10:15:33 compute-0 nova_compute[254819]: 2025-12-06 10:15:33.371 254824 DEBUG nova.compute.manager [None req-1e2bd210-71f8-4097-9d67-2e1f353e5119 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 7ebb0f0e-b16a-451f-b85a-623f5bcf704f] Instance network_info: |[{"id": "ea0f2c61-7dc3-454a-9b4d-2adf5a0a86bd", "address": "fa:16:3e:21:72:5e", "network": {"id": "f19a420c-d088-44ba-92a5-ba4d8025ce6c", "bridge": "br-int", "label": "tempest-network-smoke--1988472625", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapea0f2c61-7d", "ovs_interfaceid": "ea0f2c61-7dc3-454a-9b4d-2adf5a0a86bd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 06 10:15:33 compute-0 nova_compute[254819]: 2025-12-06 10:15:33.372 254824 DEBUG oslo_concurrency.lockutils [req-c46b9601-ac02-4b8f-986a-ed6084fe11c2 req-54db482d-dd4d-4536-ab4e-3605a18d79a8 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Acquired lock "refresh_cache-7ebb0f0e-b16a-451f-b85a-623f5bcf704f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 10:15:33 compute-0 nova_compute[254819]: 2025-12-06 10:15:33.372 254824 DEBUG nova.network.neutron [req-c46b9601-ac02-4b8f-986a-ed6084fe11c2 req-54db482d-dd4d-4536-ab4e-3605a18d79a8 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 7ebb0f0e-b16a-451f-b85a-623f5bcf704f] Refreshing network info cache for port ea0f2c61-7dc3-454a-9b4d-2adf5a0a86bd _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 06 10:15:33 compute-0 nova_compute[254819]: 2025-12-06 10:15:33.377 254824 DEBUG nova.virt.libvirt.driver [None req-1e2bd210-71f8-4097-9d67-2e1f353e5119 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 7ebb0f0e-b16a-451f-b85a-623f5bcf704f] Start _get_guest_xml network_info=[{"id": "ea0f2c61-7dc3-454a-9b4d-2adf5a0a86bd", "address": "fa:16:3e:21:72:5e", "network": {"id": "f19a420c-d088-44ba-92a5-ba4d8025ce6c", "bridge": "br-int", "label": "tempest-network-smoke--1988472625", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapea0f2c61-7d", "ovs_interfaceid": "ea0f2c61-7dc3-454a-9b4d-2adf5a0a86bd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T10:04:42Z,direct_url=<?>,disk_format='qcow2',id=9489b8a5-a798-4e26-87f9-59bb1eb2e6fd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3e0ab101ca7547d4a515169a0f2edef3',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T10:04:45Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'disk_bus': 'virtio', 'encryption_options': None, 'size': 0, 'encrypted': False, 'guest_format': None, 'device_type': 'disk', 'boot_index': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '9489b8a5-a798-4e26-87f9-59bb1eb2e6fd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 06 10:15:33 compute-0 nova_compute[254819]: 2025-12-06 10:15:33.383 254824 WARNING nova.virt.libvirt.driver [None req-1e2bd210-71f8-4097-9d67-2e1f353e5119 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 10:15:33 compute-0 nova_compute[254819]: 2025-12-06 10:15:33.387 254824 DEBUG nova.virt.libvirt.host [None req-1e2bd210-71f8-4097-9d67-2e1f353e5119 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 06 10:15:33 compute-0 nova_compute[254819]: 2025-12-06 10:15:33.387 254824 DEBUG nova.virt.libvirt.host [None req-1e2bd210-71f8-4097-9d67-2e1f353e5119 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 06 10:15:33 compute-0 nova_compute[254819]: 2025-12-06 10:15:33.390 254824 DEBUG nova.virt.libvirt.host [None req-1e2bd210-71f8-4097-9d67-2e1f353e5119 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 06 10:15:33 compute-0 nova_compute[254819]: 2025-12-06 10:15:33.392 254824 DEBUG nova.virt.libvirt.host [None req-1e2bd210-71f8-4097-9d67-2e1f353e5119 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
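The two probes above look for a CPU controller first under cgroups v1, then under v2, where the check reduces to reading one file on the unified hierarchy; a minimal sketch of the v2 half:

    # Minimal cgroups-v2 CPU controller probe: on a unified-hierarchy
    # host the available controllers are listed in one file.
    from pathlib import Path

    controllers = Path('/sys/fs/cgroup/cgroup.controllers')
    has_cpu = controllers.exists() and 'cpu' in controllers.read_text().split()
    print('CPU controller found' if has_cpu else 'CPU controller missing')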
Dec 06 10:15:33 compute-0 nova_compute[254819]: 2025-12-06 10:15:33.392 254824 DEBUG nova.virt.libvirt.driver [None req-1e2bd210-71f8-4097-9d67-2e1f353e5119 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 06 10:15:33 compute-0 nova_compute[254819]: 2025-12-06 10:15:33.393 254824 DEBUG nova.virt.hardware [None req-1e2bd210-71f8-4097-9d67-2e1f353e5119 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T10:04:41Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='0a252b9c-cc5f-41b2-a8b2-94fcf6e74d22',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T10:04:42Z,direct_url=<?>,disk_format='qcow2',id=9489b8a5-a798-4e26-87f9-59bb1eb2e6fd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3e0ab101ca7547d4a515169a0f2edef3',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T10:04:45Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 06 10:15:33 compute-0 nova_compute[254819]: 2025-12-06 10:15:33.394 254824 DEBUG nova.virt.hardware [None req-1e2bd210-71f8-4097-9d67-2e1f353e5119 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 06 10:15:33 compute-0 nova_compute[254819]: 2025-12-06 10:15:33.394 254824 DEBUG nova.virt.hardware [None req-1e2bd210-71f8-4097-9d67-2e1f353e5119 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 06 10:15:33 compute-0 nova_compute[254819]: 2025-12-06 10:15:33.395 254824 DEBUG nova.virt.hardware [None req-1e2bd210-71f8-4097-9d67-2e1f353e5119 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 06 10:15:33 compute-0 nova_compute[254819]: 2025-12-06 10:15:33.395 254824 DEBUG nova.virt.hardware [None req-1e2bd210-71f8-4097-9d67-2e1f353e5119 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 06 10:15:33 compute-0 nova_compute[254819]: 2025-12-06 10:15:33.395 254824 DEBUG nova.virt.hardware [None req-1e2bd210-71f8-4097-9d67-2e1f353e5119 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 06 10:15:33 compute-0 nova_compute[254819]: 2025-12-06 10:15:33.396 254824 DEBUG nova.virt.hardware [None req-1e2bd210-71f8-4097-9d67-2e1f353e5119 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 06 10:15:33 compute-0 nova_compute[254819]: 2025-12-06 10:15:33.396 254824 DEBUG nova.virt.hardware [None req-1e2bd210-71f8-4097-9d67-2e1f353e5119 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 06 10:15:33 compute-0 nova_compute[254819]: 2025-12-06 10:15:33.397 254824 DEBUG nova.virt.hardware [None req-1e2bd210-71f8-4097-9d67-2e1f353e5119 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 06 10:15:33 compute-0 nova_compute[254819]: 2025-12-06 10:15:33.397 254824 DEBUG nova.virt.hardware [None req-1e2bd210-71f8-4097-9d67-2e1f353e5119 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 06 10:15:33 compute-0 nova_compute[254819]: 2025-12-06 10:15:33.398 254824 DEBUG nova.virt.hardware [None req-1e2bd210-71f8-4097-9d67-2e1f353e5119 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
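The topology lines above enumerate sockets/cores/threads triples whose product equals the vCPU count, within the 65536-per-dimension limits the log reports. A toy reimplementation of that enumeration (not nova's actual code):

    # Toy version of the topology search logged above: yield
    # (sockets, cores, threads) triples whose product equals vcpus.
    def possible_topologies(vcpus, max_each=65536):
        for sockets in range(1, min(vcpus, max_each) + 1):
            for cores in range(1, min(vcpus, max_each) + 1):
                threads, rem = divmod(vcpus, sockets * cores)
                if rem == 0 and 1 <= threads <= max_each:
                    yield (sockets, cores, threads)

    print(list(possible_topologies(1)))  # [(1, 1, 1)], as in the log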
Dec 06 10:15:33 compute-0 nova_compute[254819]: 2025-12-06 10:15:33.402 254824 DEBUG oslo_concurrency.processutils [None req-1e2bd210-71f8-4097-9d67-2e1f353e5119 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 10:15:33 compute-0 nova_compute[254819]: 2025-12-06 10:15:33.473 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:15:33 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:15:33 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:15:33 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:15:33.874 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:15:33 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 06 10:15:33 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/125323094' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 10:15:33 compute-0 ceph-mon[74327]: pgmap v1034: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 10:15:33 compute-0 ceph-mon[74327]: from='client.? 192.168.122.100:0/125323094' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 10:15:33 compute-0 nova_compute[254819]: 2025-12-06 10:15:33.905 254824 DEBUG oslo_concurrency.processutils [None req-1e2bd210-71f8-4097-9d67-2e1f353e5119 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.503s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
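The mon-map lookup is an ordinary subprocess call, and the command is logged verbatim; rerunning it by hand and listing the monitors (assumes the client.openstack keyring and /etc/ceph/ceph.conf are readable):

    # Re-run the "ceph mon dump" command logged above and list the
    # monitor addresses from its JSON output.
    import json
    import subprocess

    out = subprocess.run(
        ['ceph', 'mon', 'dump', '--format=json',
         '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf'],
        check=True, capture_output=True, text=True).stdout
    for mon in json.loads(out)['mons']:
        print(mon['name'], mon['addr'])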
Dec 06 10:15:33 compute-0 nova_compute[254819]: 2025-12-06 10:15:33.932 254824 DEBUG nova.storage.rbd_utils [None req-1e2bd210-71f8-4097-9d67-2e1f353e5119 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] rbd image 7ebb0f0e-b16a-451f-b85a-623f5bcf704f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 10:15:33 compute-0 nova_compute[254819]: 2025-12-06 10:15:33.936 254824 DEBUG oslo_concurrency.processutils [None req-1e2bd210-71f8-4097-9d67-2e1f353e5119 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 10:15:33 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:15:33 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:15:33 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:15:33.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:15:34 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1035: 337 pgs: 337 active+clean; 88 MiB data, 293 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 06 10:15:34 compute-0 sudo[277212]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 10:15:34 compute-0 sudo[277212]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:15:34 compute-0 sudo[277212]: pam_unix(sudo:session): session closed for user root
Dec 06 10:15:34 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 06 10:15:34 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/174551433' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 10:15:34 compute-0 nova_compute[254819]: 2025-12-06 10:15:34.381 254824 DEBUG oslo_concurrency.processutils [None req-1e2bd210-71f8-4097-9d67-2e1f353e5119 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 10:15:34 compute-0 nova_compute[254819]: 2025-12-06 10:15:34.384 254824 DEBUG nova.virt.libvirt.vif [None req-1e2bd210-71f8-4097-9d67-2e1f353e5119 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T10:15:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1823850228',display_name='tempest-TestNetworkBasicOps-server-1823850228',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1823850228',id=13,image_ref='9489b8a5-a798-4e26-87f9-59bb1eb2e6fd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBZ1JbYqKoCUxIiM8hDMgdZSsRHQUcoBjRF2DOasdBtdUJsR/+RRaag7cOntBUu6Pnxm7ZLVxvld0ACRX3Mi2/RpeAQ5OWV7PuIX+IEnS95lS5yg27/v0AunJEPN78t9BQ==',key_name='tempest-TestNetworkBasicOps-539349224',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='92b402c8d3e2476abc98be42a1e6d34e',ramdisk_id='',reservation_id='r-tzfybp9r',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='9489b8a5-a798-4e26-87f9-59bb1eb2e6fd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1971100882',owner_user_name='tempest-TestNetworkBasicOps-1971100882-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T10:15:29Z,user_data=None,user_id='03615580775245e6ae335ee9d785611f',uuid=7ebb0f0e-b16a-451f-b85a-623f5bcf704f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ea0f2c61-7dc3-454a-9b4d-2adf5a0a86bd", "address": "fa:16:3e:21:72:5e", "network": {"id": "f19a420c-d088-44ba-92a5-ba4d8025ce6c", "bridge": "br-int", "label": "tempest-network-smoke--1988472625", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapea0f2c61-7d", "ovs_interfaceid": "ea0f2c61-7dc3-454a-9b4d-2adf5a0a86bd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 06 10:15:34 compute-0 nova_compute[254819]: 2025-12-06 10:15:34.385 254824 DEBUG nova.network.os_vif_util [None req-1e2bd210-71f8-4097-9d67-2e1f353e5119 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Converting VIF {"id": "ea0f2c61-7dc3-454a-9b4d-2adf5a0a86bd", "address": "fa:16:3e:21:72:5e", "network": {"id": "f19a420c-d088-44ba-92a5-ba4d8025ce6c", "bridge": "br-int", "label": "tempest-network-smoke--1988472625", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapea0f2c61-7d", "ovs_interfaceid": "ea0f2c61-7dc3-454a-9b4d-2adf5a0a86bd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 10:15:34 compute-0 nova_compute[254819]: 2025-12-06 10:15:34.385 254824 DEBUG nova.network.os_vif_util [None req-1e2bd210-71f8-4097-9d67-2e1f353e5119 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:21:72:5e,bridge_name='br-int',has_traffic_filtering=True,id=ea0f2c61-7dc3-454a-9b4d-2adf5a0a86bd,network=Network(f19a420c-d088-44ba-92a5-ba4d8025ce6c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapea0f2c61-7d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 10:15:34 compute-0 nova_compute[254819]: 2025-12-06 10:15:34.387 254824 DEBUG nova.objects.instance [None req-1e2bd210-71f8-4097-9d67-2e1f353e5119 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lazy-loading 'pci_devices' on Instance uuid 7ebb0f0e-b16a-451f-b85a-623f5bcf704f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 10:15:34 compute-0 nova_compute[254819]: 2025-12-06 10:15:34.405 254824 DEBUG nova.virt.libvirt.driver [None req-1e2bd210-71f8-4097-9d67-2e1f353e5119 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 7ebb0f0e-b16a-451f-b85a-623f5bcf704f] End _get_guest_xml xml=<domain type="kvm">
Dec 06 10:15:34 compute-0 nova_compute[254819]:   <uuid>7ebb0f0e-b16a-451f-b85a-623f5bcf704f</uuid>
Dec 06 10:15:34 compute-0 nova_compute[254819]:   <name>instance-0000000d</name>
Dec 06 10:15:34 compute-0 nova_compute[254819]:   <memory>131072</memory>
Dec 06 10:15:34 compute-0 nova_compute[254819]:   <vcpu>1</vcpu>
Dec 06 10:15:34 compute-0 nova_compute[254819]:   <metadata>
Dec 06 10:15:34 compute-0 nova_compute[254819]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 06 10:15:34 compute-0 nova_compute[254819]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 06 10:15:34 compute-0 nova_compute[254819]:       <nova:name>tempest-TestNetworkBasicOps-server-1823850228</nova:name>
Dec 06 10:15:34 compute-0 nova_compute[254819]:       <nova:creationTime>2025-12-06 10:15:33</nova:creationTime>
Dec 06 10:15:34 compute-0 nova_compute[254819]:       <nova:flavor name="m1.nano">
Dec 06 10:15:34 compute-0 nova_compute[254819]:         <nova:memory>128</nova:memory>
Dec 06 10:15:34 compute-0 nova_compute[254819]:         <nova:disk>1</nova:disk>
Dec 06 10:15:34 compute-0 nova_compute[254819]:         <nova:swap>0</nova:swap>
Dec 06 10:15:34 compute-0 nova_compute[254819]:         <nova:ephemeral>0</nova:ephemeral>
Dec 06 10:15:34 compute-0 nova_compute[254819]:         <nova:vcpus>1</nova:vcpus>
Dec 06 10:15:34 compute-0 nova_compute[254819]:       </nova:flavor>
Dec 06 10:15:34 compute-0 nova_compute[254819]:       <nova:owner>
Dec 06 10:15:34 compute-0 nova_compute[254819]:         <nova:user uuid="03615580775245e6ae335ee9d785611f">tempest-TestNetworkBasicOps-1971100882-project-member</nova:user>
Dec 06 10:15:34 compute-0 nova_compute[254819]:         <nova:project uuid="92b402c8d3e2476abc98be42a1e6d34e">tempest-TestNetworkBasicOps-1971100882</nova:project>
Dec 06 10:15:34 compute-0 nova_compute[254819]:       </nova:owner>
Dec 06 10:15:34 compute-0 nova_compute[254819]:       <nova:root type="image" uuid="9489b8a5-a798-4e26-87f9-59bb1eb2e6fd"/>
Dec 06 10:15:34 compute-0 nova_compute[254819]:       <nova:ports>
Dec 06 10:15:34 compute-0 nova_compute[254819]:         <nova:port uuid="ea0f2c61-7dc3-454a-9b4d-2adf5a0a86bd">
Dec 06 10:15:34 compute-0 nova_compute[254819]:           <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Dec 06 10:15:34 compute-0 nova_compute[254819]:         </nova:port>
Dec 06 10:15:34 compute-0 nova_compute[254819]:       </nova:ports>
Dec 06 10:15:34 compute-0 nova_compute[254819]:     </nova:instance>
Dec 06 10:15:34 compute-0 nova_compute[254819]:   </metadata>
Dec 06 10:15:34 compute-0 nova_compute[254819]:   <sysinfo type="smbios">
Dec 06 10:15:34 compute-0 nova_compute[254819]:     <system>
Dec 06 10:15:34 compute-0 nova_compute[254819]:       <entry name="manufacturer">RDO</entry>
Dec 06 10:15:34 compute-0 nova_compute[254819]:       <entry name="product">OpenStack Compute</entry>
Dec 06 10:15:34 compute-0 nova_compute[254819]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 06 10:15:34 compute-0 nova_compute[254819]:       <entry name="serial">7ebb0f0e-b16a-451f-b85a-623f5bcf704f</entry>
Dec 06 10:15:34 compute-0 nova_compute[254819]:       <entry name="uuid">7ebb0f0e-b16a-451f-b85a-623f5bcf704f</entry>
Dec 06 10:15:34 compute-0 nova_compute[254819]:       <entry name="family">Virtual Machine</entry>
Dec 06 10:15:34 compute-0 nova_compute[254819]:     </system>
Dec 06 10:15:34 compute-0 nova_compute[254819]:   </sysinfo>
Dec 06 10:15:34 compute-0 nova_compute[254819]:   <os>
Dec 06 10:15:34 compute-0 nova_compute[254819]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 06 10:15:34 compute-0 nova_compute[254819]:     <boot dev="hd"/>
Dec 06 10:15:34 compute-0 nova_compute[254819]:     <smbios mode="sysinfo"/>
Dec 06 10:15:34 compute-0 nova_compute[254819]:   </os>
Dec 06 10:15:34 compute-0 nova_compute[254819]:   <features>
Dec 06 10:15:34 compute-0 nova_compute[254819]:     <acpi/>
Dec 06 10:15:34 compute-0 nova_compute[254819]:     <apic/>
Dec 06 10:15:34 compute-0 nova_compute[254819]:     <vmcoreinfo/>
Dec 06 10:15:34 compute-0 nova_compute[254819]:   </features>
Dec 06 10:15:34 compute-0 nova_compute[254819]:   <clock offset="utc">
Dec 06 10:15:34 compute-0 nova_compute[254819]:     <timer name="pit" tickpolicy="delay"/>
Dec 06 10:15:34 compute-0 nova_compute[254819]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 06 10:15:34 compute-0 nova_compute[254819]:     <timer name="hpet" present="no"/>
Dec 06 10:15:34 compute-0 nova_compute[254819]:   </clock>
Dec 06 10:15:34 compute-0 nova_compute[254819]:   <cpu mode="host-model" match="exact">
Dec 06 10:15:34 compute-0 nova_compute[254819]:     <topology sockets="1" cores="1" threads="1"/>
Dec 06 10:15:34 compute-0 nova_compute[254819]:   </cpu>
Dec 06 10:15:34 compute-0 nova_compute[254819]:   <devices>
Dec 06 10:15:34 compute-0 nova_compute[254819]:     <disk type="network" device="disk">
Dec 06 10:15:34 compute-0 nova_compute[254819]:       <driver type="raw" cache="none"/>
Dec 06 10:15:34 compute-0 nova_compute[254819]:       <source protocol="rbd" name="vms/7ebb0f0e-b16a-451f-b85a-623f5bcf704f_disk">
Dec 06 10:15:34 compute-0 nova_compute[254819]:         <host name="192.168.122.100" port="6789"/>
Dec 06 10:15:34 compute-0 nova_compute[254819]:         <host name="192.168.122.102" port="6789"/>
Dec 06 10:15:34 compute-0 nova_compute[254819]:         <host name="192.168.122.101" port="6789"/>
Dec 06 10:15:34 compute-0 nova_compute[254819]:       </source>
Dec 06 10:15:34 compute-0 nova_compute[254819]:       <auth username="openstack">
Dec 06 10:15:34 compute-0 nova_compute[254819]:         <secret type="ceph" uuid="5ecd3f74-dade-5fc4-92ce-8950ae424258"/>
Dec 06 10:15:34 compute-0 nova_compute[254819]:       </auth>
Dec 06 10:15:34 compute-0 nova_compute[254819]:       <target dev="vda" bus="virtio"/>
Dec 06 10:15:34 compute-0 nova_compute[254819]:     </disk>
Dec 06 10:15:34 compute-0 nova_compute[254819]:     <disk type="network" device="cdrom">
Dec 06 10:15:34 compute-0 nova_compute[254819]:       <driver type="raw" cache="none"/>
Dec 06 10:15:34 compute-0 nova_compute[254819]:       <source protocol="rbd" name="vms/7ebb0f0e-b16a-451f-b85a-623f5bcf704f_disk.config">
Dec 06 10:15:34 compute-0 nova_compute[254819]:         <host name="192.168.122.100" port="6789"/>
Dec 06 10:15:34 compute-0 nova_compute[254819]:         <host name="192.168.122.102" port="6789"/>
Dec 06 10:15:34 compute-0 nova_compute[254819]:         <host name="192.168.122.101" port="6789"/>
Dec 06 10:15:34 compute-0 nova_compute[254819]:       </source>
Dec 06 10:15:34 compute-0 nova_compute[254819]:       <auth username="openstack">
Dec 06 10:15:34 compute-0 nova_compute[254819]:         <secret type="ceph" uuid="5ecd3f74-dade-5fc4-92ce-8950ae424258"/>
Dec 06 10:15:34 compute-0 nova_compute[254819]:       </auth>
Dec 06 10:15:34 compute-0 nova_compute[254819]:       <target dev="sda" bus="sata"/>
Dec 06 10:15:34 compute-0 nova_compute[254819]:     </disk>
Dec 06 10:15:34 compute-0 nova_compute[254819]:     <interface type="ethernet">
Dec 06 10:15:34 compute-0 nova_compute[254819]:       <mac address="fa:16:3e:21:72:5e"/>
Dec 06 10:15:34 compute-0 nova_compute[254819]:       <model type="virtio"/>
Dec 06 10:15:34 compute-0 nova_compute[254819]:       <driver name="vhost" rx_queue_size="512"/>
Dec 06 10:15:34 compute-0 nova_compute[254819]:       <mtu size="1442"/>
Dec 06 10:15:34 compute-0 nova_compute[254819]:       <target dev="tapea0f2c61-7d"/>
Dec 06 10:15:34 compute-0 nova_compute[254819]:     </interface>
Dec 06 10:15:34 compute-0 nova_compute[254819]:     <serial type="pty">
Dec 06 10:15:34 compute-0 nova_compute[254819]:       <log file="/var/lib/nova/instances/7ebb0f0e-b16a-451f-b85a-623f5bcf704f/console.log" append="off"/>
Dec 06 10:15:34 compute-0 nova_compute[254819]:     </serial>
Dec 06 10:15:34 compute-0 nova_compute[254819]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 06 10:15:34 compute-0 nova_compute[254819]:     <video>
Dec 06 10:15:34 compute-0 nova_compute[254819]:       <model type="virtio"/>
Dec 06 10:15:34 compute-0 nova_compute[254819]:     </video>
Dec 06 10:15:34 compute-0 nova_compute[254819]:     <input type="tablet" bus="usb"/>
Dec 06 10:15:34 compute-0 nova_compute[254819]:     <rng model="virtio">
Dec 06 10:15:34 compute-0 nova_compute[254819]:       <backend model="random">/dev/urandom</backend>
Dec 06 10:15:34 compute-0 nova_compute[254819]:     </rng>
Dec 06 10:15:34 compute-0 nova_compute[254819]:     <controller type="pci" model="pcie-root"/>
Dec 06 10:15:34 compute-0 nova_compute[254819]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:15:34 compute-0 nova_compute[254819]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:15:34 compute-0 nova_compute[254819]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:15:34 compute-0 nova_compute[254819]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:15:34 compute-0 nova_compute[254819]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:15:34 compute-0 nova_compute[254819]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:15:34 compute-0 nova_compute[254819]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:15:34 compute-0 nova_compute[254819]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:15:34 compute-0 nova_compute[254819]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:15:34 compute-0 nova_compute[254819]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:15:34 compute-0 nova_compute[254819]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:15:34 compute-0 nova_compute[254819]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:15:34 compute-0 nova_compute[254819]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:15:34 compute-0 nova_compute[254819]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:15:34 compute-0 nova_compute[254819]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:15:34 compute-0 nova_compute[254819]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:15:34 compute-0 nova_compute[254819]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:15:34 compute-0 nova_compute[254819]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:15:34 compute-0 nova_compute[254819]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:15:34 compute-0 nova_compute[254819]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:15:34 compute-0 nova_compute[254819]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:15:34 compute-0 nova_compute[254819]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:15:34 compute-0 nova_compute[254819]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:15:34 compute-0 nova_compute[254819]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:15:34 compute-0 nova_compute[254819]:     <controller type="usb" index="0"/>
Dec 06 10:15:34 compute-0 nova_compute[254819]:     <memballoon model="virtio">
Dec 06 10:15:34 compute-0 nova_compute[254819]:       <stats period="10"/>
Dec 06 10:15:34 compute-0 nova_compute[254819]:     </memballoon>
Dec 06 10:15:34 compute-0 nova_compute[254819]:   </devices>
Dec 06 10:15:34 compute-0 nova_compute[254819]: </domain>
Dec 06 10:15:34 compute-0 nova_compute[254819]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
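The domain XML printed above is a plain XML document; a short standard-library sketch that walks a trimmed copy of it and lists the guest's disks and interfaces:

    # Walk a trimmed copy of the domain XML logged above and list
    # the guest's devices, using only the standard library.
    import xml.etree.ElementTree as ET

    xml = '''<domain type="kvm">
      <name>instance-0000000d</name>
      <devices>
        <disk type="network" device="disk"><target dev="vda" bus="virtio"/></disk>
        <disk type="network" device="cdrom"><target dev="sda" bus="sata"/></disk>
        <interface type="ethernet"><mac address="fa:16:3e:21:72:5e"/></interface>
      </devices>
    </domain>'''

    root = ET.fromstring(xml)
    print(root.findtext('name'))
    for disk in root.iter('disk'):
        print('disk ->', disk.find('target').get('dev'))
    for iface in root.iter('interface'):
        print('vif  ->', iface.find('mac').get('address'))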
Dec 06 10:15:34 compute-0 nova_compute[254819]: 2025-12-06 10:15:34.407 254824 DEBUG nova.compute.manager [None req-1e2bd210-71f8-4097-9d67-2e1f353e5119 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 7ebb0f0e-b16a-451f-b85a-623f5bcf704f] Preparing to wait for external event network-vif-plugged-ea0f2c61-7dc3-454a-9b4d-2adf5a0a86bd prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 06 10:15:34 compute-0 nova_compute[254819]: 2025-12-06 10:15:34.407 254824 DEBUG oslo_concurrency.lockutils [None req-1e2bd210-71f8-4097-9d67-2e1f353e5119 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Acquiring lock "7ebb0f0e-b16a-451f-b85a-623f5bcf704f-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:15:34 compute-0 nova_compute[254819]: 2025-12-06 10:15:34.407 254824 DEBUG oslo_concurrency.lockutils [None req-1e2bd210-71f8-4097-9d67-2e1f353e5119 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lock "7ebb0f0e-b16a-451f-b85a-623f5bcf704f-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:15:34 compute-0 nova_compute[254819]: 2025-12-06 10:15:34.408 254824 DEBUG oslo_concurrency.lockutils [None req-1e2bd210-71f8-4097-9d67-2e1f353e5119 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lock "7ebb0f0e-b16a-451f-b85a-623f5bcf704f-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:15:34 compute-0 nova_compute[254819]: 2025-12-06 10:15:34.408 254824 DEBUG nova.virt.libvirt.vif [None req-1e2bd210-71f8-4097-9d67-2e1f353e5119 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T10:15:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1823850228',display_name='tempest-TestNetworkBasicOps-server-1823850228',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1823850228',id=13,image_ref='9489b8a5-a798-4e26-87f9-59bb1eb2e6fd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBZ1JbYqKoCUxIiM8hDMgdZSsRHQUcoBjRF2DOasdBtdUJsR/+RRaag7cOntBUu6Pnxm7ZLVxvld0ACRX3Mi2/RpeAQ5OWV7PuIX+IEnS95lS5yg27/v0AunJEPN78t9BQ==',key_name='tempest-TestNetworkBasicOps-539349224',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='92b402c8d3e2476abc98be42a1e6d34e',ramdisk_id='',reservation_id='r-tzfybp9r',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='9489b8a5-a798-4e26-87f9-59bb1eb2e6fd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1971100882',owner_user_name='tempest-TestNetworkBasicOps-1971100882-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T10:15:29Z,user_data=None,user_id='03615580775245e6ae335ee9d785611f',uuid=7ebb0f0e-b16a-451f-b85a-623f5bcf704f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ea0f2c61-7dc3-454a-9b4d-2adf5a0a86bd", "address": "fa:16:3e:21:72:5e", "network": {"id": "f19a420c-d088-44ba-92a5-ba4d8025ce6c", "bridge": "br-int", "label": "tempest-network-smoke--1988472625", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapea0f2c61-7d", "ovs_interfaceid": "ea0f2c61-7dc3-454a-9b4d-2adf5a0a86bd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 06 10:15:34 compute-0 nova_compute[254819]: 2025-12-06 10:15:34.409 254824 DEBUG nova.network.os_vif_util [None req-1e2bd210-71f8-4097-9d67-2e1f353e5119 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Converting VIF {"id": "ea0f2c61-7dc3-454a-9b4d-2adf5a0a86bd", "address": "fa:16:3e:21:72:5e", "network": {"id": "f19a420c-d088-44ba-92a5-ba4d8025ce6c", "bridge": "br-int", "label": "tempest-network-smoke--1988472625", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapea0f2c61-7d", "ovs_interfaceid": "ea0f2c61-7dc3-454a-9b4d-2adf5a0a86bd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 10:15:34 compute-0 nova_compute[254819]: 2025-12-06 10:15:34.409 254824 DEBUG nova.network.os_vif_util [None req-1e2bd210-71f8-4097-9d67-2e1f353e5119 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:21:72:5e,bridge_name='br-int',has_traffic_filtering=True,id=ea0f2c61-7dc3-454a-9b4d-2adf5a0a86bd,network=Network(f19a420c-d088-44ba-92a5-ba4d8025ce6c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapea0f2c61-7d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 10:15:34 compute-0 nova_compute[254819]: 2025-12-06 10:15:34.409 254824 DEBUG os_vif [None req-1e2bd210-71f8-4097-9d67-2e1f353e5119 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:21:72:5e,bridge_name='br-int',has_traffic_filtering=True,id=ea0f2c61-7dc3-454a-9b4d-2adf5a0a86bd,network=Network(f19a420c-d088-44ba-92a5-ba4d8025ce6c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapea0f2c61-7d') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 06 10:15:34 compute-0 nova_compute[254819]: 2025-12-06 10:15:34.410 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:15:34 compute-0 nova_compute[254819]: 2025-12-06 10:15:34.410 254824 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 10:15:34 compute-0 nova_compute[254819]: 2025-12-06 10:15:34.411 254824 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 10:15:34 compute-0 nova_compute[254819]: 2025-12-06 10:15:34.413 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:15:34 compute-0 nova_compute[254819]: 2025-12-06 10:15:34.413 254824 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapea0f2c61-7d, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 10:15:34 compute-0 nova_compute[254819]: 2025-12-06 10:15:34.414 254824 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapea0f2c61-7d, col_values=(('external_ids', {'iface-id': 'ea0f2c61-7dc3-454a-9b4d-2adf5a0a86bd', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:21:72:5e', 'vm-uuid': '7ebb0f0e-b16a-451f-b85a-623f5bcf704f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 10:15:34 compute-0 nova_compute[254819]: 2025-12-06 10:15:34.415 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:15:34 compute-0 NetworkManager[48882]: <info>  [1765016134.4161] manager: (tapea0f2c61-7d): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/82)
Dec 06 10:15:34 compute-0 nova_compute[254819]: 2025-12-06 10:15:34.417 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 10:15:34 compute-0 nova_compute[254819]: 2025-12-06 10:15:34.421 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:15:34 compute-0 nova_compute[254819]: 2025-12-06 10:15:34.422 254824 INFO os_vif [None req-1e2bd210-71f8-4097-9d67-2e1f353e5119 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:21:72:5e,bridge_name='br-int',has_traffic_filtering=True,id=ea0f2c61-7dc3-454a-9b4d-2adf5a0a86bd,network=Network(f19a420c-d088-44ba-92a5-ba4d8025ce6c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapea0f2c61-7d')
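The AddPortCommand and DbSetCommand transactions above are the ovsdbapp form of a single ovs-vsctl invocation; the CLI equivalent, reconstructed from the logged column values:

    # CLI equivalent of the two-step OVSDB transaction logged above:
    # add the tap port to br-int, then set the Interface external_ids.
    import subprocess

    subprocess.run(
        ['ovs-vsctl', '--may-exist', 'add-port', 'br-int', 'tapea0f2c61-7d',
         '--', 'set', 'Interface', 'tapea0f2c61-7d',
         'external_ids:iface-id=ea0f2c61-7dc3-454a-9b4d-2adf5a0a86bd',
         'external_ids:iface-status=active',
         'external_ids:attached-mac=fa:16:3e:21:72:5e',
         'external_ids:vm-uuid=7ebb0f0e-b16a-451f-b85a-623f5bcf704f'],
        check=True)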
Dec 06 10:15:34 compute-0 nova_compute[254819]: 2025-12-06 10:15:34.480 254824 DEBUG nova.virt.libvirt.driver [None req-1e2bd210-71f8-4097-9d67-2e1f353e5119 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 10:15:34 compute-0 nova_compute[254819]: 2025-12-06 10:15:34.481 254824 DEBUG nova.virt.libvirt.driver [None req-1e2bd210-71f8-4097-9d67-2e1f353e5119 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 10:15:34 compute-0 nova_compute[254819]: 2025-12-06 10:15:34.481 254824 DEBUG nova.virt.libvirt.driver [None req-1e2bd210-71f8-4097-9d67-2e1f353e5119 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] No VIF found with MAC fa:16:3e:21:72:5e, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 06 10:15:34 compute-0 nova_compute[254819]: 2025-12-06 10:15:34.481 254824 INFO nova.virt.libvirt.driver [None req-1e2bd210-71f8-4097-9d67-2e1f353e5119 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 7ebb0f0e-b16a-451f-b85a-623f5bcf704f] Using config drive
Dec 06 10:15:34 compute-0 nova_compute[254819]: 2025-12-06 10:15:34.508 254824 DEBUG nova.storage.rbd_utils [None req-1e2bd210-71f8-4097-9d67-2e1f353e5119 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] rbd image 7ebb0f0e-b16a-451f-b85a-623f5bcf704f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 10:15:34 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:15:34 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d40019e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:15:34 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:15:34 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65e8003430 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:15:34 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:15:34 compute-0 ceph-mon[74327]: from='client.? 192.168.122.100:0/174551433' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 10:15:35 compute-0 nova_compute[254819]: 2025-12-06 10:15:35.096 254824 INFO nova.virt.libvirt.driver [None req-1e2bd210-71f8-4097-9d67-2e1f353e5119 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 7ebb0f0e-b16a-451f-b85a-623f5bcf704f] Creating config drive at /var/lib/nova/instances/7ebb0f0e-b16a-451f-b85a-623f5bcf704f/disk.config
Dec 06 10:15:35 compute-0 nova_compute[254819]: 2025-12-06 10:15:35.101 254824 DEBUG oslo_concurrency.processutils [None req-1e2bd210-71f8-4097-9d67-2e1f353e5119 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/7ebb0f0e-b16a-451f-b85a-623f5bcf704f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmppda7q39t execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 10:15:35 compute-0 nova_compute[254819]: 2025-12-06 10:15:35.126 254824 DEBUG nova.network.neutron [req-c46b9601-ac02-4b8f-986a-ed6084fe11c2 req-54db482d-dd4d-4536-ab4e-3605a18d79a8 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 7ebb0f0e-b16a-451f-b85a-623f5bcf704f] Updated VIF entry in instance network info cache for port ea0f2c61-7dc3-454a-9b4d-2adf5a0a86bd. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 06 10:15:35 compute-0 nova_compute[254819]: 2025-12-06 10:15:35.127 254824 DEBUG nova.network.neutron [req-c46b9601-ac02-4b8f-986a-ed6084fe11c2 req-54db482d-dd4d-4536-ab4e-3605a18d79a8 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 7ebb0f0e-b16a-451f-b85a-623f5bcf704f] Updating instance_info_cache with network_info: [{"id": "ea0f2c61-7dc3-454a-9b4d-2adf5a0a86bd", "address": "fa:16:3e:21:72:5e", "network": {"id": "f19a420c-d088-44ba-92a5-ba4d8025ce6c", "bridge": "br-int", "label": "tempest-network-smoke--1988472625", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapea0f2c61-7d", "ovs_interfaceid": "ea0f2c61-7dc3-454a-9b4d-2adf5a0a86bd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 10:15:35 compute-0 nova_compute[254819]: 2025-12-06 10:15:35.155 254824 DEBUG oslo_concurrency.lockutils [req-c46b9601-ac02-4b8f-986a-ed6084fe11c2 req-54db482d-dd4d-4536-ab4e-3605a18d79a8 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Releasing lock "refresh_cache-7ebb0f0e-b16a-451f-b85a-623f5bcf704f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 10:15:35 compute-0 nova_compute[254819]: 2025-12-06 10:15:35.227 254824 DEBUG oslo_concurrency.processutils [None req-1e2bd210-71f8-4097-9d67-2e1f353e5119 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/7ebb0f0e-b16a-451f-b85a-623f5bcf704f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmppda7q39t" returned: 0 in 0.126s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 10:15:35 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:15:35 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_36] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d0004430 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:15:35 compute-0 nova_compute[254819]: 2025-12-06 10:15:35.256 254824 DEBUG nova.storage.rbd_utils [None req-1e2bd210-71f8-4097-9d67-2e1f353e5119 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] rbd image 7ebb0f0e-b16a-451f-b85a-623f5bcf704f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 10:15:35 compute-0 nova_compute[254819]: 2025-12-06 10:15:35.260 254824 DEBUG oslo_concurrency.processutils [None req-1e2bd210-71f8-4097-9d67-2e1f353e5119 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/7ebb0f0e-b16a-451f-b85a-623f5bcf704f/disk.config 7ebb0f0e-b16a-451f-b85a-623f5bcf704f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 10:15:35 compute-0 nova_compute[254819]: 2025-12-06 10:15:35.404 254824 DEBUG oslo_concurrency.processutils [None req-1e2bd210-71f8-4097-9d67-2e1f353e5119 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/7ebb0f0e-b16a-451f-b85a-623f5bcf704f/disk.config 7ebb0f0e-b16a-451f-b85a-623f5bcf704f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.144s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 10:15:35 compute-0 nova_compute[254819]: 2025-12-06 10:15:35.405 254824 INFO nova.virt.libvirt.driver [None req-1e2bd210-71f8-4097-9d67-2e1f353e5119 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 7ebb0f0e-b16a-451f-b85a-623f5bcf704f] Deleting local config drive /var/lib/nova/instances/7ebb0f0e-b16a-451f-b85a-623f5bcf704f/disk.config because it was imported into RBD.
Dec 06 10:15:35 compute-0 kernel: tapea0f2c61-7d: entered promiscuous mode
Dec 06 10:15:35 compute-0 NetworkManager[48882]: <info>  [1765016135.4576] manager: (tapea0f2c61-7d): new Tun device (/org/freedesktop/NetworkManager/Devices/83)
Dec 06 10:15:35 compute-0 nova_compute[254819]: 2025-12-06 10:15:35.458 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:15:35 compute-0 ovn_controller[152417]: 2025-12-06T10:15:35Z|00129|binding|INFO|Claiming lport ea0f2c61-7dc3-454a-9b4d-2adf5a0a86bd for this chassis.
Dec 06 10:15:35 compute-0 ovn_controller[152417]: 2025-12-06T10:15:35Z|00130|binding|INFO|ea0f2c61-7dc3-454a-9b4d-2adf5a0a86bd: Claiming fa:16:3e:21:72:5e 10.100.0.7
Dec 06 10:15:35 compute-0 nova_compute[254819]: 2025-12-06 10:15:35.462 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:15:35 compute-0 nova_compute[254819]: 2025-12-06 10:15:35.465 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:15:35 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:15:35.472 162267 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:21:72:5e 10.100.0.7'], port_security=['fa:16:3e:21:72:5e 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '7ebb0f0e-b16a-451f-b85a-623f5bcf704f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f19a420c-d088-44ba-92a5-ba4d8025ce6c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '92b402c8d3e2476abc98be42a1e6d34e', 'neutron:revision_number': '2', 'neutron:security_group_ids': '4d5ca921-3bfd-449d-8b5d-30ae22ce26cc', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=81c5896f-af1e-41c2-8dce-fe719e73d950, chassis=[<ovs.db.idl.Row object at 0x7f70c28558b0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f70c28558b0>], logical_port=ea0f2c61-7dc3-454a-9b4d-2adf5a0a86bd) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 10:15:35 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:15:35.474 162267 INFO neutron.agent.ovn.metadata.agent [-] Port ea0f2c61-7dc3-454a-9b4d-2adf5a0a86bd in datapath f19a420c-d088-44ba-92a5-ba4d8025ce6c bound to our chassis
Dec 06 10:15:35 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:15:35.475 162267 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f19a420c-d088-44ba-92a5-ba4d8025ce6c
Dec 06 10:15:35 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:15:35.487 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[35f13e4c-5465-475d-92be-ab3ea9b13796]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:15:35 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:15:35.488 162267 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapf19a420c-d1 in ovnmeta-f19a420c-d088-44ba-92a5-ba4d8025ce6c namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 06 10:15:35 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:15:35.490 260126 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapf19a420c-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 06 10:15:35 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:15:35.490 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[0b2e1bb9-af8d-4e6a-8a47-10668dc3fec7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:15:35 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:15:35.491 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[36a8d2a9-2d79-4ea8-9c19-cdc1ca499874]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:15:35 compute-0 systemd-machined[216202]: New machine qemu-9-instance-0000000d.
Dec 06 10:15:35 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:15:35.500 162385 DEBUG oslo.privsep.daemon [-] privsep: reply[5d886b1d-8e9a-4851-a572-a13f7c7a0795]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:15:35 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:15:35.525 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[dce3337d-531c-43de-8b1f-8e13eb9c5307]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:15:35 compute-0 systemd[1]: Started Virtual Machine qemu-9-instance-0000000d.
Dec 06 10:15:35 compute-0 nova_compute[254819]: 2025-12-06 10:15:35.530 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:15:35 compute-0 ovn_controller[152417]: 2025-12-06T10:15:35Z|00131|binding|INFO|Setting lport ea0f2c61-7dc3-454a-9b4d-2adf5a0a86bd ovn-installed in OVS
Dec 06 10:15:35 compute-0 ovn_controller[152417]: 2025-12-06T10:15:35Z|00132|binding|INFO|Setting lport ea0f2c61-7dc3-454a-9b4d-2adf5a0a86bd up in Southbound
Dec 06 10:15:35 compute-0 nova_compute[254819]: 2025-12-06 10:15:35.536 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:15:35 compute-0 systemd-udevd[277317]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 10:15:35 compute-0 NetworkManager[48882]: <info>  [1765016135.5562] device (tapea0f2c61-7d): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 06 10:15:35 compute-0 NetworkManager[48882]: <info>  [1765016135.5573] device (tapea0f2c61-7d): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 06 10:15:35 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:15:35.557 260145 DEBUG oslo.privsep.daemon [-] privsep: reply[2c2080d0-e5d5-4214-8945-1e1ce5e51fe7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:15:35 compute-0 NetworkManager[48882]: <info>  [1765016135.5649] manager: (tapf19a420c-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/84)
Dec 06 10:15:35 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:15:35.563 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[1f45c264-4908-46e4-856a-f264f8c0f18d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:15:35 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:15:35.590 260145 DEBUG oslo.privsep.daemon [-] privsep: reply[7315ab38-26b7-493a-af90-b429b829aaef]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:15:35 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:15:35.594 260145 DEBUG oslo.privsep.daemon [-] privsep: reply[85bb3ea4-0944-447a-b0ba-7feeb568e582]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:15:35 compute-0 NetworkManager[48882]: <info>  [1765016135.6191] device (tapf19a420c-d0): carrier: link connected
Dec 06 10:15:35 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:15:35.623 260145 DEBUG oslo.privsep.daemon [-] privsep: reply[3dded76e-1cfc-44aa-a06e-e5fc0a04147d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:15:35 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:15:35.638 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[752976eb-84c3-453a-9e78-c77a50c6e40d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf19a420c-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:02:ae:99'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 41], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 450610, 'reachable_time': 17981, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 277348, 'error': None, 'target': 'ovnmeta-f19a420c-d088-44ba-92a5-ba4d8025ce6c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:15:35 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:15:35.663 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[0ada4269-0c9e-41a4-92c2-5b6ff2c0aa51]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe02:ae99'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 450610, 'tstamp': 450610}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 277349, 'error': None, 'target': 'ovnmeta-f19a420c-d088-44ba-92a5-ba4d8025ce6c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:15:35 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:15:35.683 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[ce3932cc-64e9-4301-970c-8afbee4c60fd]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf19a420c-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:02:ae:99'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 41], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 450610, 'reachable_time': 17981, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 148, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 148, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 277352, 'error': None, 'target': 'ovnmeta-f19a420c-d088-44ba-92a5-ba4d8025ce6c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:15:35 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:15:35.710 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[0ac0a40e-3178-4864-9a19-1127f02e32af]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:15:35 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:15:35.776 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[bae67d47-9bf7-42d8-97cd-5441d5d7b98a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:15:35 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:15:35.777 162267 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf19a420c-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 10:15:35 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:15:35.777 162267 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 10:15:35 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:15:35.778 162267 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf19a420c-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 10:15:35 compute-0 kernel: tapf19a420c-d0: entered promiscuous mode
Dec 06 10:15:35 compute-0 nova_compute[254819]: 2025-12-06 10:15:35.779 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:15:35 compute-0 NetworkManager[48882]: <info>  [1765016135.7815] manager: (tapf19a420c-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/85)
Dec 06 10:15:35 compute-0 nova_compute[254819]: 2025-12-06 10:15:35.782 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:15:35 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:15:35.782 162267 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf19a420c-d0, col_values=(('external_ids', {'iface-id': 'e6dea8f3-ba9b-4ce4-acbb-0df65f10749a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 10:15:35 compute-0 nova_compute[254819]: 2025-12-06 10:15:35.783 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:15:35 compute-0 ovn_controller[152417]: 2025-12-06T10:15:35Z|00133|binding|INFO|Releasing lport e6dea8f3-ba9b-4ce4-acbb-0df65f10749a from this chassis (sb_readonly=0)
Dec 06 10:15:35 compute-0 nova_compute[254819]: 2025-12-06 10:15:35.801 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:15:35 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:15:35.803 162267 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/f19a420c-d088-44ba-92a5-ba4d8025ce6c.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/f19a420c-d088-44ba-92a5-ba4d8025ce6c.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 06 10:15:35 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:15:35.804 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[6e17d88c-a0b4-4ba5-8cbc-00761249211e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:15:35 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:15:35.805 162267 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 06 10:15:35 compute-0 ovn_metadata_agent[162262]: global
Dec 06 10:15:35 compute-0 ovn_metadata_agent[162262]:     log         /dev/log local0 debug
Dec 06 10:15:35 compute-0 ovn_metadata_agent[162262]:     log-tag     haproxy-metadata-proxy-f19a420c-d088-44ba-92a5-ba4d8025ce6c
Dec 06 10:15:35 compute-0 ovn_metadata_agent[162262]:     user        root
Dec 06 10:15:35 compute-0 ovn_metadata_agent[162262]:     group       root
Dec 06 10:15:35 compute-0 ovn_metadata_agent[162262]:     maxconn     1024
Dec 06 10:15:35 compute-0 ovn_metadata_agent[162262]:     pidfile     /var/lib/neutron/external/pids/f19a420c-d088-44ba-92a5-ba4d8025ce6c.pid.haproxy
Dec 06 10:15:35 compute-0 ovn_metadata_agent[162262]:     daemon
Dec 06 10:15:35 compute-0 ovn_metadata_agent[162262]: 
Dec 06 10:15:35 compute-0 ovn_metadata_agent[162262]: defaults
Dec 06 10:15:35 compute-0 ovn_metadata_agent[162262]:     log global
Dec 06 10:15:35 compute-0 ovn_metadata_agent[162262]:     mode http
Dec 06 10:15:35 compute-0 ovn_metadata_agent[162262]:     option httplog
Dec 06 10:15:35 compute-0 ovn_metadata_agent[162262]:     option dontlognull
Dec 06 10:15:35 compute-0 ovn_metadata_agent[162262]:     option http-server-close
Dec 06 10:15:35 compute-0 ovn_metadata_agent[162262]:     option forwardfor
Dec 06 10:15:35 compute-0 ovn_metadata_agent[162262]:     retries                 3
Dec 06 10:15:35 compute-0 ovn_metadata_agent[162262]:     timeout http-request    30s
Dec 06 10:15:35 compute-0 ovn_metadata_agent[162262]:     timeout connect         30s
Dec 06 10:15:35 compute-0 ovn_metadata_agent[162262]:     timeout client          32s
Dec 06 10:15:35 compute-0 ovn_metadata_agent[162262]:     timeout server          32s
Dec 06 10:15:35 compute-0 ovn_metadata_agent[162262]:     timeout http-keep-alive 30s
Dec 06 10:15:35 compute-0 ovn_metadata_agent[162262]: 
Dec 06 10:15:35 compute-0 ovn_metadata_agent[162262]: 
Dec 06 10:15:35 compute-0 ovn_metadata_agent[162262]: listen listener
Dec 06 10:15:35 compute-0 ovn_metadata_agent[162262]:     bind 169.254.169.254:80
Dec 06 10:15:35 compute-0 ovn_metadata_agent[162262]:     server metadata /var/lib/neutron/metadata_proxy
Dec 06 10:15:35 compute-0 ovn_metadata_agent[162262]:     http-request add-header X-OVN-Network-ID f19a420c-d088-44ba-92a5-ba4d8025ce6c
Dec 06 10:15:35 compute-0 ovn_metadata_agent[162262]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 06 10:15:35 compute-0 nova_compute[254819]: 2025-12-06 10:15:35.805 254824 DEBUG nova.compute.manager [req-901abc2e-e182-468e-b312-a661b754f46b req-e55ea63c-63d2-478b-bf14-09e6f36b5efb d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 7ebb0f0e-b16a-451f-b85a-623f5bcf704f] Received event network-vif-plugged-ea0f2c61-7dc3-454a-9b4d-2adf5a0a86bd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 10:15:35 compute-0 nova_compute[254819]: 2025-12-06 10:15:35.806 254824 DEBUG oslo_concurrency.lockutils [req-901abc2e-e182-468e-b312-a661b754f46b req-e55ea63c-63d2-478b-bf14-09e6f36b5efb d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Acquiring lock "7ebb0f0e-b16a-451f-b85a-623f5bcf704f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:15:35 compute-0 nova_compute[254819]: 2025-12-06 10:15:35.806 254824 DEBUG oslo_concurrency.lockutils [req-901abc2e-e182-468e-b312-a661b754f46b req-e55ea63c-63d2-478b-bf14-09e6f36b5efb d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Lock "7ebb0f0e-b16a-451f-b85a-623f5bcf704f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:15:35 compute-0 nova_compute[254819]: 2025-12-06 10:15:35.806 254824 DEBUG oslo_concurrency.lockutils [req-901abc2e-e182-468e-b312-a661b754f46b req-e55ea63c-63d2-478b-bf14-09e6f36b5efb d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Lock "7ebb0f0e-b16a-451f-b85a-623f5bcf704f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:15:35 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:15:35.806 162267 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-f19a420c-d088-44ba-92a5-ba4d8025ce6c', 'env', 'PROCESS_TAG=haproxy-f19a420c-d088-44ba-92a5-ba4d8025ce6c', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/f19a420c-d088-44ba-92a5-ba4d8025ce6c.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Dec 06 10:15:35 compute-0 nova_compute[254819]: 2025-12-06 10:15:35.806 254824 DEBUG nova.compute.manager [req-901abc2e-e182-468e-b312-a661b754f46b req-e55ea63c-63d2-478b-bf14-09e6f36b5efb d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 7ebb0f0e-b16a-451f-b85a-623f5bcf704f] Processing event network-vif-plugged-ea0f2c61-7dc3-454a-9b4d-2adf5a0a86bd _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 06 10:15:35 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:15:35 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:15:35 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:15:35.876 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:15:35 compute-0 ceph-mon[74327]: pgmap v1035: 337 pgs: 337 active+clean; 88 MiB data, 293 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 06 10:15:35 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:15:35 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:15:35 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:15:35.972 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:15:36 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1036: 337 pgs: 337 active+clean; 88 MiB data, 293 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 06 10:15:36 compute-0 podman[277384]: 2025-12-06 10:15:36.191866761 +0000 UTC m=+0.054688068 container create 54de96a34926ba613f37681a8919578b77b753a4af207fe955eb7d5eee80bc92 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=neutron-haproxy-ovnmeta-f19a420c-d088-44ba-92a5-ba4d8025ce6c, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec 06 10:15:36 compute-0 systemd[1]: Started libpod-conmon-54de96a34926ba613f37681a8919578b77b753a4af207fe955eb7d5eee80bc92.scope.
Dec 06 10:15:36 compute-0 podman[277384]: 2025-12-06 10:15:36.161802049 +0000 UTC m=+0.024623366 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3
Dec 06 10:15:36 compute-0 systemd[1]: Started libcrun container.
Dec 06 10:15:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7bbfc0c4f48e5f7f64a74644fc0e70facc48d03f5e9acb49b25622ca059f294b/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 06 10:15:36 compute-0 podman[277384]: 2025-12-06 10:15:36.279903318 +0000 UTC m=+0.142724675 container init 54de96a34926ba613f37681a8919578b77b753a4af207fe955eb7d5eee80bc92 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=neutron-haproxy-ovnmeta-f19a420c-d088-44ba-92a5-ba4d8025ce6c, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS)
Dec 06 10:15:36 compute-0 podman[277384]: 2025-12-06 10:15:36.284665917 +0000 UTC m=+0.147487224 container start 54de96a34926ba613f37681a8919578b77b753a4af207fe955eb7d5eee80bc92 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=neutron-haproxy-ovnmeta-f19a420c-d088-44ba-92a5-ba4d8025ce6c, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 06 10:15:36 compute-0 neutron-haproxy-ovnmeta-f19a420c-d088-44ba-92a5-ba4d8025ce6c[277399]: [NOTICE]   (277403) : New worker (277405) forked
Dec 06 10:15:36 compute-0 neutron-haproxy-ovnmeta-f19a420c-d088-44ba-92a5-ba4d8025ce6c[277399]: [NOTICE]   (277403) : Loading success.
Dec 06 10:15:36 compute-0 nova_compute[254819]: 2025-12-06 10:15:36.598 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:15:36 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:15:36 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f4004530 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:15:36 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:15:36 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d40019e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:15:36 compute-0 nova_compute[254819]: 2025-12-06 10:15:36.872 254824 DEBUG nova.virt.driver [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] Emitting event <LifecycleEvent: 1765016136.8716395, 7ebb0f0e-b16a-451f-b85a-623f5bcf704f => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 10:15:36 compute-0 nova_compute[254819]: 2025-12-06 10:15:36.873 254824 INFO nova.compute.manager [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] [instance: 7ebb0f0e-b16a-451f-b85a-623f5bcf704f] VM Started (Lifecycle Event)
Dec 06 10:15:36 compute-0 nova_compute[254819]: 2025-12-06 10:15:36.876 254824 DEBUG nova.compute.manager [None req-1e2bd210-71f8-4097-9d67-2e1f353e5119 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 7ebb0f0e-b16a-451f-b85a-623f5bcf704f] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 06 10:15:36 compute-0 nova_compute[254819]: 2025-12-06 10:15:36.881 254824 DEBUG nova.virt.libvirt.driver [None req-1e2bd210-71f8-4097-9d67-2e1f353e5119 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 7ebb0f0e-b16a-451f-b85a-623f5bcf704f] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 06 10:15:36 compute-0 nova_compute[254819]: 2025-12-06 10:15:36.886 254824 INFO nova.virt.libvirt.driver [-] [instance: 7ebb0f0e-b16a-451f-b85a-623f5bcf704f] Instance spawned successfully.
Dec 06 10:15:36 compute-0 nova_compute[254819]: 2025-12-06 10:15:36.886 254824 DEBUG nova.virt.libvirt.driver [None req-1e2bd210-71f8-4097-9d67-2e1f353e5119 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 7ebb0f0e-b16a-451f-b85a-623f5bcf704f] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 06 10:15:36 compute-0 nova_compute[254819]: 2025-12-06 10:15:36.892 254824 DEBUG nova.compute.manager [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] [instance: 7ebb0f0e-b16a-451f-b85a-623f5bcf704f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 10:15:36 compute-0 nova_compute[254819]: 2025-12-06 10:15:36.895 254824 DEBUG nova.compute.manager [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] [instance: 7ebb0f0e-b16a-451f-b85a-623f5bcf704f] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 10:15:36 compute-0 nova_compute[254819]: 2025-12-06 10:15:36.907 254824 DEBUG nova.virt.libvirt.driver [None req-1e2bd210-71f8-4097-9d67-2e1f353e5119 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 7ebb0f0e-b16a-451f-b85a-623f5bcf704f] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 10:15:36 compute-0 nova_compute[254819]: 2025-12-06 10:15:36.908 254824 DEBUG nova.virt.libvirt.driver [None req-1e2bd210-71f8-4097-9d67-2e1f353e5119 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 7ebb0f0e-b16a-451f-b85a-623f5bcf704f] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 10:15:36 compute-0 nova_compute[254819]: 2025-12-06 10:15:36.908 254824 DEBUG nova.virt.libvirt.driver [None req-1e2bd210-71f8-4097-9d67-2e1f353e5119 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 7ebb0f0e-b16a-451f-b85a-623f5bcf704f] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 10:15:36 compute-0 nova_compute[254819]: 2025-12-06 10:15:36.908 254824 DEBUG nova.virt.libvirt.driver [None req-1e2bd210-71f8-4097-9d67-2e1f353e5119 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 7ebb0f0e-b16a-451f-b85a-623f5bcf704f] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 10:15:36 compute-0 nova_compute[254819]: 2025-12-06 10:15:36.909 254824 DEBUG nova.virt.libvirt.driver [None req-1e2bd210-71f8-4097-9d67-2e1f353e5119 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 7ebb0f0e-b16a-451f-b85a-623f5bcf704f] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 10:15:36 compute-0 nova_compute[254819]: 2025-12-06 10:15:36.909 254824 DEBUG nova.virt.libvirt.driver [None req-1e2bd210-71f8-4097-9d67-2e1f353e5119 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 7ebb0f0e-b16a-451f-b85a-623f5bcf704f] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 10:15:36 compute-0 nova_compute[254819]: 2025-12-06 10:15:36.917 254824 INFO nova.compute.manager [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] [instance: 7ebb0f0e-b16a-451f-b85a-623f5bcf704f] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 06 10:15:36 compute-0 nova_compute[254819]: 2025-12-06 10:15:36.917 254824 DEBUG nova.virt.driver [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] Emitting event <LifecycleEvent: 1765016136.8770893, 7ebb0f0e-b16a-451f-b85a-623f5bcf704f => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 10:15:36 compute-0 nova_compute[254819]: 2025-12-06 10:15:36.918 254824 INFO nova.compute.manager [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] [instance: 7ebb0f0e-b16a-451f-b85a-623f5bcf704f] VM Paused (Lifecycle Event)
Dec 06 10:15:36 compute-0 nova_compute[254819]: 2025-12-06 10:15:36.944 254824 DEBUG nova.compute.manager [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] [instance: 7ebb0f0e-b16a-451f-b85a-623f5bcf704f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 10:15:36 compute-0 nova_compute[254819]: 2025-12-06 10:15:36.947 254824 DEBUG nova.virt.driver [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] Emitting event <LifecycleEvent: 1765016136.879893, 7ebb0f0e-b16a-451f-b85a-623f5bcf704f => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 10:15:36 compute-0 nova_compute[254819]: 2025-12-06 10:15:36.947 254824 INFO nova.compute.manager [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] [instance: 7ebb0f0e-b16a-451f-b85a-623f5bcf704f] VM Resumed (Lifecycle Event)
Dec 06 10:15:36 compute-0 nova_compute[254819]: 2025-12-06 10:15:36.972 254824 DEBUG nova.compute.manager [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] [instance: 7ebb0f0e-b16a-451f-b85a-623f5bcf704f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 10:15:36 compute-0 nova_compute[254819]: 2025-12-06 10:15:36.974 254824 DEBUG nova.compute.manager [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] [instance: 7ebb0f0e-b16a-451f-b85a-623f5bcf704f] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 10:15:36 compute-0 nova_compute[254819]: 2025-12-06 10:15:36.978 254824 INFO nova.compute.manager [None req-1e2bd210-71f8-4097-9d67-2e1f353e5119 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 7ebb0f0e-b16a-451f-b85a-623f5bcf704f] Took 7.59 seconds to spawn the instance on the hypervisor.
Dec 06 10:15:36 compute-0 nova_compute[254819]: 2025-12-06 10:15:36.978 254824 DEBUG nova.compute.manager [None req-1e2bd210-71f8-4097-9d67-2e1f353e5119 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 7ebb0f0e-b16a-451f-b85a-623f5bcf704f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 10:15:37 compute-0 nova_compute[254819]: 2025-12-06 10:15:37.001 254824 INFO nova.compute.manager [None req-2a977775-2e93-45db-91c9-5b391669a5ca - - - - - -] [instance: 7ebb0f0e-b16a-451f-b85a-623f5bcf704f] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 06 10:15:37 compute-0 nova_compute[254819]: 2025-12-06 10:15:37.034 254824 INFO nova.compute.manager [None req-1e2bd210-71f8-4097-9d67-2e1f353e5119 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 7ebb0f0e-b16a-451f-b85a-623f5bcf704f] Took 8.44 seconds to build instance.
Dec 06 10:15:37 compute-0 nova_compute[254819]: 2025-12-06 10:15:37.052 254824 DEBUG oslo_concurrency.lockutils [None req-1e2bd210-71f8-4097-9d67-2e1f353e5119 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lock "7ebb0f0e-b16a-451f-b85a-623f5bcf704f" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.539s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:15:37 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:15:37 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65e8003430 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:15:37 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:15:37.668Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 06 10:15:37 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:15:37 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:15:37 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:15:37.878 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:15:37 compute-0 nova_compute[254819]: 2025-12-06 10:15:37.881 254824 DEBUG nova.compute.manager [req-425f5244-0b2f-4ebf-a7b4-e515c64ec14d req-a2368387-2617-4dfa-b41b-1907992055fd d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 7ebb0f0e-b16a-451f-b85a-623f5bcf704f] Received event network-vif-plugged-ea0f2c61-7dc3-454a-9b4d-2adf5a0a86bd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 10:15:37 compute-0 nova_compute[254819]: 2025-12-06 10:15:37.882 254824 DEBUG oslo_concurrency.lockutils [req-425f5244-0b2f-4ebf-a7b4-e515c64ec14d req-a2368387-2617-4dfa-b41b-1907992055fd d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Acquiring lock "7ebb0f0e-b16a-451f-b85a-623f5bcf704f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:15:37 compute-0 nova_compute[254819]: 2025-12-06 10:15:37.882 254824 DEBUG oslo_concurrency.lockutils [req-425f5244-0b2f-4ebf-a7b4-e515c64ec14d req-a2368387-2617-4dfa-b41b-1907992055fd d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Lock "7ebb0f0e-b16a-451f-b85a-623f5bcf704f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:15:37 compute-0 nova_compute[254819]: 2025-12-06 10:15:37.882 254824 DEBUG oslo_concurrency.lockutils [req-425f5244-0b2f-4ebf-a7b4-e515c64ec14d req-a2368387-2617-4dfa-b41b-1907992055fd d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Lock "7ebb0f0e-b16a-451f-b85a-623f5bcf704f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:15:37 compute-0 nova_compute[254819]: 2025-12-06 10:15:37.882 254824 DEBUG nova.compute.manager [req-425f5244-0b2f-4ebf-a7b4-e515c64ec14d req-a2368387-2617-4dfa-b41b-1907992055fd d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 7ebb0f0e-b16a-451f-b85a-623f5bcf704f] No waiting events found dispatching network-vif-plugged-ea0f2c61-7dc3-454a-9b4d-2adf5a0a86bd pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 10:15:37 compute-0 nova_compute[254819]: 2025-12-06 10:15:37.883 254824 WARNING nova.compute.manager [req-425f5244-0b2f-4ebf-a7b4-e515c64ec14d req-a2368387-2617-4dfa-b41b-1907992055fd d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 7ebb0f0e-b16a-451f-b85a-623f5bcf704f] Received unexpected event network-vif-plugged-ea0f2c61-7dc3-454a-9b4d-2adf5a0a86bd for instance with vm_state active and task_state None.
Dec 06 10:15:37 compute-0 ceph-mon[74327]: pgmap v1036: 337 pgs: 337 active+clean; 88 MiB data, 293 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 06 10:15:37 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:15:37 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:15:37 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:15:37.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:15:38 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1037: 337 pgs: 337 active+clean; 88 MiB data, 293 MiB used, 60 GiB / 60 GiB avail; 638 KiB/s rd, 1.8 MiB/s wr, 58 op/s
Dec 06 10:15:38 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:15:38 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_36] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d0004430 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:15:38 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:15:38 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f4004530 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:15:38 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 06 10:15:38 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 10:15:39 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:15:39.042Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 06 10:15:39 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:15:39 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d40019e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:15:39 compute-0 nova_compute[254819]: 2025-12-06 10:15:39.415 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:15:39 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:15:39 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:15:39 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:15:39 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:15:39.881 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:15:39 compute-0 ceph-mon[74327]: pgmap v1037: 337 pgs: 337 active+clean; 88 MiB data, 293 MiB used, 60 GiB / 60 GiB avail; 638 KiB/s rd, 1.8 MiB/s wr, 58 op/s
Dec 06 10:15:39 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 10:15:39 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:15:39 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:15:39 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:15:39.978 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:15:40 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1038: 337 pgs: 337 active+clean; 88 MiB data, 293 MiB used, 60 GiB / 60 GiB avail; 638 KiB/s rd, 1.8 MiB/s wr, 58 op/s
Dec 06 10:15:40 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:15:40 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65e8003430 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:15:40 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:15:40 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_36] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d0004430 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:15:40 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:15:40] "GET /metrics HTTP/1.1" 200 48472 "" "Prometheus/2.51.0"
Dec 06 10:15:40 compute-0 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:15:40] "GET /metrics HTTP/1.1" 200 48472 "" "Prometheus/2.51.0"
Dec 06 10:15:41 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:15:41 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f4004530 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:15:41 compute-0 ovn_controller[152417]: 2025-12-06T10:15:41Z|00134|binding|INFO|Releasing lport e6dea8f3-ba9b-4ce4-acbb-0df65f10749a from this chassis (sb_readonly=0)
Dec 06 10:15:41 compute-0 nova_compute[254819]: 2025-12-06 10:15:41.531 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:15:41 compute-0 NetworkManager[48882]: <info>  [1765016141.5325] manager: (patch-provnet-c81e973e-7ff9-4cd2-9994-daf87649321f-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/86)
Dec 06 10:15:41 compute-0 NetworkManager[48882]: <info>  [1765016141.5333] manager: (patch-br-int-to-provnet-c81e973e-7ff9-4cd2-9994-daf87649321f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/87)
Dec 06 10:15:41 compute-0 ovn_controller[152417]: 2025-12-06T10:15:41Z|00135|binding|INFO|Releasing lport e6dea8f3-ba9b-4ce4-acbb-0df65f10749a from this chassis (sb_readonly=0)
Dec 06 10:15:41 compute-0 nova_compute[254819]: 2025-12-06 10:15:41.569 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:15:41 compute-0 nova_compute[254819]: 2025-12-06 10:15:41.574 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:15:41 compute-0 nova_compute[254819]: 2025-12-06 10:15:41.599 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:15:41 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:15:41 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:15:41 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:15:41.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:15:41 compute-0 nova_compute[254819]: 2025-12-06 10:15:41.964 254824 DEBUG nova.compute.manager [req-b2886523-f334-4415-a724-d7032080f967 req-dae2d84f-0d74-4d26-8ee2-53ed829c1587 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 7ebb0f0e-b16a-451f-b85a-623f5bcf704f] Received event network-changed-ea0f2c61-7dc3-454a-9b4d-2adf5a0a86bd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 10:15:41 compute-0 nova_compute[254819]: 2025-12-06 10:15:41.965 254824 DEBUG nova.compute.manager [req-b2886523-f334-4415-a724-d7032080f967 req-dae2d84f-0d74-4d26-8ee2-53ed829c1587 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 7ebb0f0e-b16a-451f-b85a-623f5bcf704f] Refreshing instance network info cache due to event network-changed-ea0f2c61-7dc3-454a-9b4d-2adf5a0a86bd. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 10:15:41 compute-0 nova_compute[254819]: 2025-12-06 10:15:41.965 254824 DEBUG oslo_concurrency.lockutils [req-b2886523-f334-4415-a724-d7032080f967 req-dae2d84f-0d74-4d26-8ee2-53ed829c1587 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Acquiring lock "refresh_cache-7ebb0f0e-b16a-451f-b85a-623f5bcf704f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 10:15:41 compute-0 nova_compute[254819]: 2025-12-06 10:15:41.965 254824 DEBUG oslo_concurrency.lockutils [req-b2886523-f334-4415-a724-d7032080f967 req-dae2d84f-0d74-4d26-8ee2-53ed829c1587 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Acquired lock "refresh_cache-7ebb0f0e-b16a-451f-b85a-623f5bcf704f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 10:15:41 compute-0 nova_compute[254819]: 2025-12-06 10:15:41.965 254824 DEBUG nova.network.neutron [req-b2886523-f334-4415-a724-d7032080f967 req-dae2d84f-0d74-4d26-8ee2-53ed829c1587 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 7ebb0f0e-b16a-451f-b85a-623f5bcf704f] Refreshing network info cache for port ea0f2c61-7dc3-454a-9b4d-2adf5a0a86bd _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 06 10:15:41 compute-0 ceph-mon[74327]: pgmap v1038: 337 pgs: 337 active+clean; 88 MiB data, 293 MiB used, 60 GiB / 60 GiB avail; 638 KiB/s rd, 1.8 MiB/s wr, 58 op/s
Dec 06 10:15:41 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:15:41 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:15:41 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:15:41.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:15:42 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1039: 337 pgs: 337 active+clean; 88 MiB data, 293 MiB used, 60 GiB / 60 GiB avail; 638 KiB/s rd, 1.8 MiB/s wr, 58 op/s
Dec 06 10:15:42 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:15:42 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d40019e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:15:42 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:15:42 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65e8003430 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:15:43 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:15:43 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_36] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d0004430 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:15:43 compute-0 podman[277463]: 2025-12-06 10:15:43.422973155 +0000 UTC m=+0.052873719 container health_status a9cf33c1bf7891b3fdd9db4717ad5f3587fe9e5acf9171920c6d6334b70a502a (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 10:15:43 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:15:43 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:15:43 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:15:43.886 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:15:43 compute-0 ceph-mon[74327]: pgmap v1039: 337 pgs: 337 active+clean; 88 MiB data, 293 MiB used, 60 GiB / 60 GiB avail; 638 KiB/s rd, 1.8 MiB/s wr, 58 op/s
Dec 06 10:15:43 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:15:43 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:15:43 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:15:43.984 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:15:44 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1040: 337 pgs: 337 active+clean; 88 MiB data, 293 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Dec 06 10:15:44 compute-0 nova_compute[254819]: 2025-12-06 10:15:44.416 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:15:44 compute-0 nova_compute[254819]: 2025-12-06 10:15:44.430 254824 DEBUG nova.network.neutron [req-b2886523-f334-4415-a724-d7032080f967 req-dae2d84f-0d74-4d26-8ee2-53ed829c1587 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 7ebb0f0e-b16a-451f-b85a-623f5bcf704f] Updated VIF entry in instance network info cache for port ea0f2c61-7dc3-454a-9b4d-2adf5a0a86bd. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 06 10:15:44 compute-0 nova_compute[254819]: 2025-12-06 10:15:44.431 254824 DEBUG nova.network.neutron [req-b2886523-f334-4415-a724-d7032080f967 req-dae2d84f-0d74-4d26-8ee2-53ed829c1587 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 7ebb0f0e-b16a-451f-b85a-623f5bcf704f] Updating instance_info_cache with network_info: [{"id": "ea0f2c61-7dc3-454a-9b4d-2adf5a0a86bd", "address": "fa:16:3e:21:72:5e", "network": {"id": "f19a420c-d088-44ba-92a5-ba4d8025ce6c", "bridge": "br-int", "label": "tempest-network-smoke--1988472625", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.187", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapea0f2c61-7d", "ovs_interfaceid": "ea0f2c61-7dc3-454a-9b4d-2adf5a0a86bd", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 10:15:44 compute-0 nova_compute[254819]: 2025-12-06 10:15:44.451 254824 DEBUG oslo_concurrency.lockutils [req-b2886523-f334-4415-a724-d7032080f967 req-dae2d84f-0d74-4d26-8ee2-53ed829c1587 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Releasing lock "refresh_cache-7ebb0f0e-b16a-451f-b85a-623f5bcf704f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 10:15:44 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:15:44 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f4004530 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:15:44 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:15:44 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d40019e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:15:44 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:15:45 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:15:45 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65e8003430 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:15:45 compute-0 nova_compute[254819]: 2025-12-06 10:15:45.748 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:15:45 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:15:45 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:15:45 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:15:45.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:15:45 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:15:45 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:15:45 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:15:45.986 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:15:46 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1041: 337 pgs: 337 active+clean; 88 MiB data, 293 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Dec 06 10:15:46 compute-0 nova_compute[254819]: 2025-12-06 10:15:46.749 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:15:46 compute-0 nova_compute[254819]: 2025-12-06 10:15:46.773 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:15:46 compute-0 nova_compute[254819]: 2025-12-06 10:15:46.774 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:15:46 compute-0 nova_compute[254819]: 2025-12-06 10:15:46.774 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:15:46 compute-0 nova_compute[254819]: 2025-12-06 10:15:46.774 254824 DEBUG nova.compute.resource_tracker [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 10:15:46 compute-0 nova_compute[254819]: 2025-12-06 10:15:46.775 254824 DEBUG oslo_concurrency.processutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 10:15:46 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:15:46 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_36] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d0004430 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:15:46 compute-0 nova_compute[254819]: 2025-12-06 10:15:46.805 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:15:46 compute-0 ceph-mon[74327]: pgmap v1040: 337 pgs: 337 active+clean; 88 MiB data, 293 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Dec 06 10:15:46 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:15:46 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d0004430 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:15:47 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:15:47 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f4004530 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:15:47 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 06 10:15:47 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3803708120' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 10:15:47 compute-0 nova_compute[254819]: 2025-12-06 10:15:47.266 254824 DEBUG oslo_concurrency.processutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.490s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 10:15:47 compute-0 nova_compute[254819]: 2025-12-06 10:15:47.344 254824 DEBUG nova.virt.libvirt.driver [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] skipping disk for instance-0000000d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 10:15:47 compute-0 nova_compute[254819]: 2025-12-06 10:15:47.345 254824 DEBUG nova.virt.libvirt.driver [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] skipping disk for instance-0000000d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 10:15:47 compute-0 nova_compute[254819]: 2025-12-06 10:15:47.499 254824 WARNING nova.virt.libvirt.driver [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 10:15:47 compute-0 nova_compute[254819]: 2025-12-06 10:15:47.500 254824 DEBUG nova.compute.resource_tracker [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4308MB free_disk=59.96738052368164GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 06 10:15:47 compute-0 nova_compute[254819]: 2025-12-06 10:15:47.500 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:15:47 compute-0 nova_compute[254819]: 2025-12-06 10:15:47.501 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:15:47 compute-0 nova_compute[254819]: 2025-12-06 10:15:47.584 254824 DEBUG nova.compute.resource_tracker [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Instance 7ebb0f0e-b16a-451f-b85a-623f5bcf704f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 06 10:15:47 compute-0 nova_compute[254819]: 2025-12-06 10:15:47.585 254824 DEBUG nova.compute.resource_tracker [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 10:15:47 compute-0 nova_compute[254819]: 2025-12-06 10:15:47.585 254824 DEBUG nova.compute.resource_tracker [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 06 10:15:47 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:15:47.669Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 06 10:15:47 compute-0 nova_compute[254819]: 2025-12-06 10:15:47.705 254824 DEBUG nova.scheduler.client.report [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Refreshing inventories for resource provider 06a9c7d1-c74c-47ea-9e97-16acfab6aa88 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Dec 06 10:15:47 compute-0 nova_compute[254819]: 2025-12-06 10:15:47.764 254824 DEBUG nova.scheduler.client.report [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Updating ProviderTree inventory for provider 06a9c7d1-c74c-47ea-9e97-16acfab6aa88 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Dec 06 10:15:47 compute-0 nova_compute[254819]: 2025-12-06 10:15:47.764 254824 DEBUG nova.compute.provider_tree [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Updating inventory in ProviderTree for provider 06a9c7d1-c74c-47ea-9e97-16acfab6aa88 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Dec 06 10:15:47 compute-0 nova_compute[254819]: 2025-12-06 10:15:47.780 254824 DEBUG nova.scheduler.client.report [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Refreshing aggregate associations for resource provider 06a9c7d1-c74c-47ea-9e97-16acfab6aa88, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Dec 06 10:15:47 compute-0 nova_compute[254819]: 2025-12-06 10:15:47.800 254824 DEBUG nova.scheduler.client.report [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Refreshing trait associations for resource provider 06a9c7d1-c74c-47ea-9e97-16acfab6aa88, traits: HW_CPU_X86_AVX,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSE4A,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_SSSE3,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_SECURITY_TPM_1_2,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_SSE41,COMPUTE_STORAGE_BUS_IDE,COMPUTE_STORAGE_BUS_FDC,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_SSE42,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_SSE2,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_BMI2,COMPUTE_TRUSTED_CERTS,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_RESCUE_BFV,COMPUTE_DEVICE_TAGGING,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_CLMUL,HW_CPU_X86_BMI,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_MMX,HW_CPU_X86_SHA,HW_CPU_X86_AMD_SVM,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_AVX2,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_FMA3,HW_CPU_X86_AESNI,HW_CPU_X86_SVM,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_F16C,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_ABM,COMPUTE_ACCELERATORS,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_NODE,HW_CPU_X86_SSE,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_GRAPHICS_MODEL_VGA _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Dec 06 10:15:47 compute-0 ceph-mon[74327]: from='client.? 192.168.122.10:0/3121417538' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 10:15:47 compute-0 ceph-mon[74327]: from='client.? 192.168.122.10:0/3121417538' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 10:15:47 compute-0 ceph-mon[74327]: pgmap v1041: 337 pgs: 337 active+clean; 88 MiB data, 293 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Dec 06 10:15:47 compute-0 ceph-mon[74327]: from='client.? 192.168.122.100:0/3803708120' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 10:15:47 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:15:47 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 10:15:47 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:15:47.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 10:15:47 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:15:47 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:15:47 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:15:47.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:15:48 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1042: 337 pgs: 337 active+clean; 88 MiB data, 293 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Dec 06 10:15:48 compute-0 nova_compute[254819]: 2025-12-06 10:15:48.142 254824 DEBUG oslo_concurrency.processutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 10:15:48 compute-0 podman[277535]: 2025-12-06 10:15:48.441911158 +0000 UTC m=+0.075150709 container health_status ed08b04f63d3695f0aa2f04fcc706897af4aa24f286fa3cd10a4323ee2c868ab (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 10:15:48 compute-0 nova_compute[254819]: 2025-12-06 10:15:48.648 254824 DEBUG oslo_concurrency.processutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.505s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 10:15:48 compute-0 nova_compute[254819]: 2025-12-06 10:15:48.652 254824 DEBUG nova.compute.provider_tree [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Inventory has not changed in ProviderTree for provider: 06a9c7d1-c74c-47ea-9e97-16acfab6aa88 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 10:15:48 compute-0 nova_compute[254819]: 2025-12-06 10:15:48.669 254824 DEBUG nova.scheduler.client.report [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Inventory has not changed for provider 06a9c7d1-c74c-47ea-9e97-16acfab6aa88 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 10:15:48 compute-0 nova_compute[254819]: 2025-12-06 10:15:48.697 254824 DEBUG nova.compute.resource_tracker [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 10:15:48 compute-0 nova_compute[254819]: 2025-12-06 10:15:48.698 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.197s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:15:48 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:15:48 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c00a1c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:15:48 compute-0 ceph-mon[74327]: from='client.? 192.168.122.100:0/3505482668' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 10:15:48 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:15:48 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d40019e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:15:49 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:15:49.043Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 06 10:15:49 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:15:49 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d0004430 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:15:49 compute-0 nova_compute[254819]: 2025-12-06 10:15:49.418 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:15:49 compute-0 nova_compute[254819]: 2025-12-06 10:15:49.748 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:15:49 compute-0 nova_compute[254819]: 2025-12-06 10:15:49.749 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:15:49 compute-0 nova_compute[254819]: 2025-12-06 10:15:49.749 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:15:49 compute-0 nova_compute[254819]: 2025-12-06 10:15:49.749 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:15:49 compute-0 nova_compute[254819]: 2025-12-06 10:15:49.750 254824 DEBUG nova.compute.manager [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Dec 06 10:15:49 compute-0 ceph-mon[74327]: pgmap v1042: 337 pgs: 337 active+clean; 88 MiB data, 293 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Dec 06 10:15:49 compute-0 ceph-mon[74327]: from='client.? 192.168.122.101:0/2620383535' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 10:15:49 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:15:49 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:15:49 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:15:49 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:15:49.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:15:49 compute-0 ovn_controller[152417]: 2025-12-06T10:15:49Z|00020|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:21:72:5e 10.100.0.7
Dec 06 10:15:49 compute-0 ovn_controller[152417]: 2025-12-06T10:15:49Z|00021|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:21:72:5e 10.100.0.7
Dec 06 10:15:49 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:15:49 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:15:49 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:15:49.992 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:15:50 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1043: 337 pgs: 337 active+clean; 88 MiB data, 293 MiB used, 60 GiB / 60 GiB avail; 1.3 MiB/s rd, 43 op/s
Dec 06 10:15:50 compute-0 nova_compute[254819]: 2025-12-06 10:15:50.749 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:15:50 compute-0 nova_compute[254819]: 2025-12-06 10:15:50.749 254824 DEBUG nova.compute.manager [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 06 10:15:50 compute-0 nova_compute[254819]: 2025-12-06 10:15:50.749 254824 DEBUG nova.compute.manager [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 06 10:15:50 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:15:50 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f4004530 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:15:50 compute-0 ceph-mon[74327]: from='client.? 192.168.122.101:0/4091665223' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 10:15:50 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:15:50 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c00a1e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:15:50 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:15:50] "GET /metrics HTTP/1.1" 200 48472 "" "Prometheus/2.51.0"
Dec 06 10:15:50 compute-0 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:15:50] "GET /metrics HTTP/1.1" 200 48472 "" "Prometheus/2.51.0"
Dec 06 10:15:51 compute-0 nova_compute[254819]: 2025-12-06 10:15:51.169 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Acquiring lock "refresh_cache-7ebb0f0e-b16a-451f-b85a-623f5bcf704f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 10:15:51 compute-0 nova_compute[254819]: 2025-12-06 10:15:51.169 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Acquired lock "refresh_cache-7ebb0f0e-b16a-451f-b85a-623f5bcf704f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 10:15:51 compute-0 nova_compute[254819]: 2025-12-06 10:15:51.169 254824 DEBUG nova.network.neutron [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] [instance: 7ebb0f0e-b16a-451f-b85a-623f5bcf704f] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec 06 10:15:51 compute-0 nova_compute[254819]: 2025-12-06 10:15:51.170 254824 DEBUG nova.objects.instance [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Lazy-loading 'info_cache' on Instance uuid 7ebb0f0e-b16a-451f-b85a-623f5bcf704f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 10:15:51 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:15:51 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d40019e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:15:51 compute-0 nova_compute[254819]: 2025-12-06 10:15:51.808 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:15:51 compute-0 ceph-mon[74327]: pgmap v1043: 337 pgs: 337 active+clean; 88 MiB data, 293 MiB used, 60 GiB / 60 GiB avail; 1.3 MiB/s rd, 43 op/s
Dec 06 10:15:51 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:15:51 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:15:51 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:15:51.893 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:15:51 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:15:51 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:15:51 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:15:51.995 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:15:52 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1044: 337 pgs: 337 active+clean; 88 MiB data, 293 MiB used, 60 GiB / 60 GiB avail; 1.3 MiB/s rd, 43 op/s
Dec 06 10:15:52 compute-0 nova_compute[254819]: 2025-12-06 10:15:52.773 254824 DEBUG nova.network.neutron [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] [instance: 7ebb0f0e-b16a-451f-b85a-623f5bcf704f] Updating instance_info_cache with network_info: [{"id": "ea0f2c61-7dc3-454a-9b4d-2adf5a0a86bd", "address": "fa:16:3e:21:72:5e", "network": {"id": "f19a420c-d088-44ba-92a5-ba4d8025ce6c", "bridge": "br-int", "label": "tempest-network-smoke--1988472625", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.187", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapea0f2c61-7d", "ovs_interfaceid": "ea0f2c61-7dc3-454a-9b4d-2adf5a0a86bd", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 10:15:52 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:15:52 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d0004430 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:15:52 compute-0 nova_compute[254819]: 2025-12-06 10:15:52.797 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Releasing lock "refresh_cache-7ebb0f0e-b16a-451f-b85a-623f5bcf704f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 10:15:52 compute-0 nova_compute[254819]: 2025-12-06 10:15:52.798 254824 DEBUG nova.compute.manager [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] [instance: 7ebb0f0e-b16a-451f-b85a-623f5bcf704f] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Dec 06 10:15:52 compute-0 nova_compute[254819]: 2025-12-06 10:15:52.799 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:15:52 compute-0 nova_compute[254819]: 2025-12-06 10:15:52.799 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:15:52 compute-0 nova_compute[254819]: 2025-12-06 10:15:52.800 254824 DEBUG nova.compute.manager [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 06 10:15:52 compute-0 nova_compute[254819]: 2025-12-06 10:15:52.800 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:15:52 compute-0 nova_compute[254819]: 2025-12-06 10:15:52.800 254824 DEBUG nova.compute.manager [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Dec 06 10:15:52 compute-0 nova_compute[254819]: 2025-12-06 10:15:52.817 254824 DEBUG nova.compute.manager [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Dec 06 10:15:52 compute-0 nova_compute[254819]: 2025-12-06 10:15:52.817 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:15:52 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:15:52 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f4004530 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:15:53 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:15:53 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c00a200 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:15:53 compute-0 nova_compute[254819]: 2025-12-06 10:15:53.820 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:15:53 compute-0 ceph-mon[74327]: pgmap v1044: 337 pgs: 337 active+clean; 88 MiB data, 293 MiB used, 60 GiB / 60 GiB avail; 1.3 MiB/s rd, 43 op/s
Dec 06 10:15:53 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:15:53 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:15:53 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:15:53.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:15:53 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 06 10:15:53 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 10:15:53 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:15:54 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:15:54 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:15:53.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:15:54 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 10:15:54 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 10:15:54 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 10:15:54 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 10:15:54 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 10:15:54 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 10:15:54 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1045: 337 pgs: 337 active+clean; 121 MiB data, 336 MiB used, 60 GiB / 60 GiB avail; 1.6 MiB/s rd, 2.1 MiB/s wr, 107 op/s
Dec 06 10:15:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:15:54.246 162267 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:15:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:15:54.247 162267 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:15:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:15:54.247 162267 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:15:54 compute-0 sudo[277570]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 10:15:54 compute-0 sudo[277570]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:15:54 compute-0 sudo[277570]: pam_unix(sudo:session): session closed for user root
Dec 06 10:15:54 compute-0 podman[277594]: 2025-12-06 10:15:54.41852119 +0000 UTC m=+0.068873941 container health_status ec1e66d2175087ad7a28a9a6b5a784e30128df3f5e268959d009e364cd5120f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible)
Dec 06 10:15:54 compute-0 nova_compute[254819]: 2025-12-06 10:15:54.420 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:15:54 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:15:54 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d40019e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:15:54 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:15:54 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d0004430 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:15:54 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:15:54 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 10:15:55 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:15:55 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f4004530 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:15:55 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:15:55 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 10:15:55 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:15:55.898 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 10:15:55 compute-0 ceph-mon[74327]: pgmap v1045: 337 pgs: 337 active+clean; 121 MiB data, 336 MiB used, 60 GiB / 60 GiB avail; 1.6 MiB/s rd, 2.1 MiB/s wr, 107 op/s
Dec 06 10:15:55 compute-0 ceph-mon[74327]: from='client.? 192.168.122.102:0/3624299820' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 10:15:56 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:15:56 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:15:56 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:15:56.002 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:15:56 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1046: 337 pgs: 337 active+clean; 121 MiB data, 336 MiB used, 60 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Dec 06 10:15:56 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:15:56 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c00a220 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:15:56 compute-0 nova_compute[254819]: 2025-12-06 10:15:56.811 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:15:56 compute-0 nova_compute[254819]: 2025-12-06 10:15:56.853 254824 INFO nova.compute.manager [None req-65ace017-84b7-41ed-9c05-5fd6ce5a20dd 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 7ebb0f0e-b16a-451f-b85a-623f5bcf704f] Get console output
Dec 06 10:15:56 compute-0 nova_compute[254819]: 2025-12-06 10:15:56.861 261881 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Dec 06 10:15:56 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:15:56 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d40019e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:15:56 compute-0 ceph-mon[74327]: from='client.? 192.168.122.102:0/1552888335' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 10:15:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:15:57 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d40019e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:15:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:15:57.670Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec 06 10:15:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:15:57.670Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec 06 10:15:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:15:57.670Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 06 10:15:57 compute-0 ovn_controller[152417]: 2025-12-06T10:15:57Z|00136|binding|INFO|Releasing lport e6dea8f3-ba9b-4ce4-acbb-0df65f10749a from this chassis (sb_readonly=0)
Dec 06 10:15:57 compute-0 nova_compute[254819]: 2025-12-06 10:15:57.752 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:15:57 compute-0 ovn_controller[152417]: 2025-12-06T10:15:57Z|00137|binding|INFO|Releasing lport e6dea8f3-ba9b-4ce4-acbb-0df65f10749a from this chassis (sb_readonly=0)
Dec 06 10:15:57 compute-0 nova_compute[254819]: 2025-12-06 10:15:57.836 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:15:57 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:15:57 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 10:15:57 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:15:57.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 10:15:57 compute-0 ceph-mon[74327]: pgmap v1046: 337 pgs: 337 active+clean; 121 MiB data, 336 MiB used, 60 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Dec 06 10:15:58 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:15:58 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:15:58 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:15:58.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:15:58 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1047: 337 pgs: 337 active+clean; 121 MiB data, 336 MiB used, 60 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Dec 06 10:15:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:15:58 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d0004430 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:15:58 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:15:58 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d0004430 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:15:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:15:59.044Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec 06 10:15:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:15:59.044Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 06 10:15:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:15:59 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f4004570 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:15:59 compute-0 nova_compute[254819]: 2025-12-06 10:15:59.423 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:15:59 compute-0 nova_compute[254819]: 2025-12-06 10:15:59.619 254824 INFO nova.compute.manager [None req-d61050ab-a206-4dcc-80cf-59e60c0415e8 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 7ebb0f0e-b16a-451f-b85a-623f5bcf704f] Get console output
Dec 06 10:15:59 compute-0 nova_compute[254819]: 2025-12-06 10:15:59.625 261881 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Dec 06 10:15:59 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:15:59 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:15:59 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:15:59 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:15:59.902 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:16:00 compute-0 ceph-mon[74327]: pgmap v1047: 337 pgs: 337 active+clean; 121 MiB data, 336 MiB used, 60 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Dec 06 10:16:00 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:16:00 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:16:00 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:16:00.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:16:00 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1048: 337 pgs: 337 active+clean; 121 MiB data, 336 MiB used, 60 GiB / 60 GiB avail; 323 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Dec 06 10:16:00 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:16:00 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d40019e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:16:00 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:16:00 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d0004430 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:16:00 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:16:00] "GET /metrics HTTP/1.1" 200 48476 "" "Prometheus/2.51.0"
Dec 06 10:16:00 compute-0 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:16:00] "GET /metrics HTTP/1.1" 200 48476 "" "Prometheus/2.51.0"
Dec 06 10:16:01 compute-0 nova_compute[254819]: 2025-12-06 10:16:01.199 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:16:01 compute-0 NetworkManager[48882]: <info>  [1765016161.2015] manager: (patch-br-int-to-provnet-c81e973e-7ff9-4cd2-9994-daf87649321f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/88)
Dec 06 10:16:01 compute-0 NetworkManager[48882]: <info>  [1765016161.2028] manager: (patch-provnet-c81e973e-7ff9-4cd2-9994-daf87649321f-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/89)
Dec 06 10:16:01 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:16:01 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d0004430 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:16:01 compute-0 nova_compute[254819]: 2025-12-06 10:16:01.293 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:16:01 compute-0 ovn_controller[152417]: 2025-12-06T10:16:01Z|00138|binding|INFO|Releasing lport e6dea8f3-ba9b-4ce4-acbb-0df65f10749a from this chassis (sb_readonly=0)
Dec 06 10:16:01 compute-0 nova_compute[254819]: 2025-12-06 10:16:01.302 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:16:01 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:16:01.400 162267 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=13, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '9a:dc:0d', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'b6:0a:c4:b8:be:39'}, ipsec=False) old=SB_Global(nb_cfg=12) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 10:16:01 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:16:01.401 162267 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 06 10:16:01 compute-0 nova_compute[254819]: 2025-12-06 10:16:01.401 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:16:01 compute-0 nova_compute[254819]: 2025-12-06 10:16:01.538 254824 INFO nova.compute.manager [None req-9476a095-577b-45af-b4a7-0c52a7c4c673 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 7ebb0f0e-b16a-451f-b85a-623f5bcf704f] Get console output
Dec 06 10:16:01 compute-0 nova_compute[254819]: 2025-12-06 10:16:01.543 261881 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Dec 06 10:16:01 compute-0 sudo[277625]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 10:16:01 compute-0 sudo[277625]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:16:01 compute-0 sudo[277625]: pam_unix(sudo:session): session closed for user root
Dec 06 10:16:01 compute-0 nova_compute[254819]: 2025-12-06 10:16:01.813 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:16:01 compute-0 sudo[277650]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ls
Dec 06 10:16:01 compute-0 sudo[277650]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:16:01 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:16:01 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 10:16:01 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:16:01.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 10:16:02 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:16:02 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:16:02 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:16:02.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:16:02 compute-0 ceph-mon[74327]: pgmap v1048: 337 pgs: 337 active+clean; 121 MiB data, 336 MiB used, 60 GiB / 60 GiB avail; 323 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Dec 06 10:16:02 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1049: 337 pgs: 337 active+clean; 121 MiB data, 336 MiB used, 60 GiB / 60 GiB avail; 323 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Dec 06 10:16:02 compute-0 podman[277747]: 2025-12-06 10:16:02.428325806 +0000 UTC m=+0.066238470 container exec 484d6ed1039c50317cf4b6067525b7ed0f8de7c568c9445500e62194ab25d04d (image=quay.io/ceph/ceph:v19, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mon-compute-0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1)
Dec 06 10:16:02 compute-0 podman[277747]: 2025-12-06 10:16:02.546863426 +0000 UTC m=+0.184776040 container exec_died 484d6ed1039c50317cf4b6067525b7ed0f8de7c568c9445500e62194ab25d04d (image=quay.io/ceph/ceph:v19, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mon-compute-0, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 10:16:02 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:16:02 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65e4003690 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:16:02 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:16:02 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_39] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d40019e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:16:03 compute-0 podman[277864]: 2025-12-06 10:16:03.12327672 +0000 UTC m=+0.060866885 container exec 43e1f8986e07f4e6b99d6750812eff4d21013fd9f773d9f6d6eef82549df3333 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 06 10:16:03 compute-0 podman[277864]: 2025-12-06 10:16:03.13475105 +0000 UTC m=+0.072341215 container exec_died 43e1f8986e07f4e6b99d6750812eff4d21013fd9f773d9f6d6eef82549df3333 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 06 10:16:03 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:16:03 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d0004430 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:16:03 compute-0 podman[277956]: 2025-12-06 10:16:03.455212153 +0000 UTC m=+0.059588941 container exec c075298cf4218136c3d2292ce2beb5212b60757ab32882219e2a8e8be2cdcf16 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, OSD_FLAVOR=default)
Dec 06 10:16:03 compute-0 podman[277956]: 2025-12-06 10:16:03.476770144 +0000 UTC m=+0.081146932 container exec_died c075298cf4218136c3d2292ce2beb5212b60757ab32882219e2a8e8be2cdcf16 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 10:16:03 compute-0 nova_compute[254819]: 2025-12-06 10:16:03.624 254824 DEBUG nova.compute.manager [req-bb1b49bd-99dc-4bd5-9257-f7605e50d76b req-9bcc8d37-358e-44d6-ab68-1dcea132aa2f d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 7ebb0f0e-b16a-451f-b85a-623f5bcf704f] Received event network-changed-ea0f2c61-7dc3-454a-9b4d-2adf5a0a86bd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 10:16:03 compute-0 nova_compute[254819]: 2025-12-06 10:16:03.624 254824 DEBUG nova.compute.manager [req-bb1b49bd-99dc-4bd5-9257-f7605e50d76b req-9bcc8d37-358e-44d6-ab68-1dcea132aa2f d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 7ebb0f0e-b16a-451f-b85a-623f5bcf704f] Refreshing instance network info cache due to event network-changed-ea0f2c61-7dc3-454a-9b4d-2adf5a0a86bd. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 10:16:03 compute-0 nova_compute[254819]: 2025-12-06 10:16:03.624 254824 DEBUG oslo_concurrency.lockutils [req-bb1b49bd-99dc-4bd5-9257-f7605e50d76b req-9bcc8d37-358e-44d6-ab68-1dcea132aa2f d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Acquiring lock "refresh_cache-7ebb0f0e-b16a-451f-b85a-623f5bcf704f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 10:16:03 compute-0 nova_compute[254819]: 2025-12-06 10:16:03.625 254824 DEBUG oslo_concurrency.lockutils [req-bb1b49bd-99dc-4bd5-9257-f7605e50d76b req-9bcc8d37-358e-44d6-ab68-1dcea132aa2f d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Acquired lock "refresh_cache-7ebb0f0e-b16a-451f-b85a-623f5bcf704f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 10:16:03 compute-0 nova_compute[254819]: 2025-12-06 10:16:03.625 254824 DEBUG nova.network.neutron [req-bb1b49bd-99dc-4bd5-9257-f7605e50d76b req-9bcc8d37-358e-44d6-ab68-1dcea132aa2f d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 7ebb0f0e-b16a-451f-b85a-623f5bcf704f] Refreshing network info cache for port ea0f2c61-7dc3-454a-9b4d-2adf5a0a86bd _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 06 10:16:03 compute-0 nova_compute[254819]: 2025-12-06 10:16:03.683 254824 DEBUG oslo_concurrency.lockutils [None req-f955a611-7403-4053-8f61-3587eec272f5 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Acquiring lock "7ebb0f0e-b16a-451f-b85a-623f5bcf704f" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:16:03 compute-0 nova_compute[254819]: 2025-12-06 10:16:03.684 254824 DEBUG oslo_concurrency.lockutils [None req-f955a611-7403-4053-8f61-3587eec272f5 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lock "7ebb0f0e-b16a-451f-b85a-623f5bcf704f" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:16:03 compute-0 nova_compute[254819]: 2025-12-06 10:16:03.684 254824 DEBUG oslo_concurrency.lockutils [None req-f955a611-7403-4053-8f61-3587eec272f5 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Acquiring lock "7ebb0f0e-b16a-451f-b85a-623f5bcf704f-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:16:03 compute-0 nova_compute[254819]: 2025-12-06 10:16:03.685 254824 DEBUG oslo_concurrency.lockutils [None req-f955a611-7403-4053-8f61-3587eec272f5 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lock "7ebb0f0e-b16a-451f-b85a-623f5bcf704f-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:16:03 compute-0 nova_compute[254819]: 2025-12-06 10:16:03.685 254824 DEBUG oslo_concurrency.lockutils [None req-f955a611-7403-4053-8f61-3587eec272f5 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lock "7ebb0f0e-b16a-451f-b85a-623f5bcf704f-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:16:03 compute-0 nova_compute[254819]: 2025-12-06 10:16:03.686 254824 INFO nova.compute.manager [None req-f955a611-7403-4053-8f61-3587eec272f5 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 7ebb0f0e-b16a-451f-b85a-623f5bcf704f] Terminating instance
Dec 06 10:16:03 compute-0 nova_compute[254819]: 2025-12-06 10:16:03.687 254824 DEBUG nova.compute.manager [None req-f955a611-7403-4053-8f61-3587eec272f5 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 7ebb0f0e-b16a-451f-b85a-623f5bcf704f] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 06 10:16:03 compute-0 podman[278018]: 2025-12-06 10:16:03.704246977 +0000 UTC m=+0.056004113 container exec 0300cb0bc272de309f3d242ba0627369d0948f1b63b3476dccdba4375a8e539d (image=quay.io/ceph/haproxy:2.3, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-haproxy-nfs-cephfs-compute-0-fzuvue)
Dec 06 10:16:03 compute-0 podman[278018]: 2025-12-06 10:16:03.717815893 +0000 UTC m=+0.069573019 container exec_died 0300cb0bc272de309f3d242ba0627369d0948f1b63b3476dccdba4375a8e539d (image=quay.io/ceph/haproxy:2.3, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-haproxy-nfs-cephfs-compute-0-fzuvue)
Dec 06 10:16:03 compute-0 kernel: tapea0f2c61-7d (unregistering): left promiscuous mode
Dec 06 10:16:03 compute-0 NetworkManager[48882]: <info>  [1765016163.7541] device (tapea0f2c61-7d): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 06 10:16:03 compute-0 ovn_controller[152417]: 2025-12-06T10:16:03Z|00139|binding|INFO|Releasing lport ea0f2c61-7dc3-454a-9b4d-2adf5a0a86bd from this chassis (sb_readonly=0)
Dec 06 10:16:03 compute-0 ovn_controller[152417]: 2025-12-06T10:16:03Z|00140|binding|INFO|Setting lport ea0f2c61-7dc3-454a-9b4d-2adf5a0a86bd down in Southbound
Dec 06 10:16:03 compute-0 ovn_controller[152417]: 2025-12-06T10:16:03Z|00141|binding|INFO|Removing iface tapea0f2c61-7d ovn-installed in OVS
Dec 06 10:16:03 compute-0 nova_compute[254819]: 2025-12-06 10:16:03.768 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:16:03 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:16:03.775 162267 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:21:72:5e 10.100.0.7'], port_security=['fa:16:3e:21:72:5e 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '7ebb0f0e-b16a-451f-b85a-623f5bcf704f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f19a420c-d088-44ba-92a5-ba4d8025ce6c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '92b402c8d3e2476abc98be42a1e6d34e', 'neutron:revision_number': '4', 'neutron:security_group_ids': '4d5ca921-3bfd-449d-8b5d-30ae22ce26cc', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=81c5896f-af1e-41c2-8dce-fe719e73d950, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f70c28558b0>], logical_port=ea0f2c61-7dc3-454a-9b4d-2adf5a0a86bd) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f70c28558b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 10:16:03 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:16:03.776 162267 INFO neutron.agent.ovn.metadata.agent [-] Port ea0f2c61-7dc3-454a-9b4d-2adf5a0a86bd in datapath f19a420c-d088-44ba-92a5-ba4d8025ce6c unbound from our chassis
Dec 06 10:16:03 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:16:03.778 162267 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network f19a420c-d088-44ba-92a5-ba4d8025ce6c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 06 10:16:03 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:16:03.779 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[b3a910af-f461-47cb-93cf-ff33bfd96e6d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:16:03 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:16:03.779 162267 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-f19a420c-d088-44ba-92a5-ba4d8025ce6c namespace which is not needed anymore
Dec 06 10:16:03 compute-0 nova_compute[254819]: 2025-12-06 10:16:03.794 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:16:03 compute-0 systemd[1]: machine-qemu\x2d9\x2dinstance\x2d0000000d.scope: Deactivated successfully.
Dec 06 10:16:03 compute-0 systemd[1]: machine-qemu\x2d9\x2dinstance\x2d0000000d.scope: Consumed 14.207s CPU time.
Dec 06 10:16:03 compute-0 systemd-machined[216202]: Machine qemu-9-instance-0000000d terminated.
Dec 06 10:16:03 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:16:03 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:16:03 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:16:03.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:16:03 compute-0 neutron-haproxy-ovnmeta-f19a420c-d088-44ba-92a5-ba4d8025ce6c[277399]: [NOTICE]   (277403) : haproxy version is 2.8.14-c23fe91
Dec 06 10:16:03 compute-0 neutron-haproxy-ovnmeta-f19a420c-d088-44ba-92a5-ba4d8025ce6c[277399]: [NOTICE]   (277403) : path to executable is /usr/sbin/haproxy
Dec 06 10:16:03 compute-0 neutron-haproxy-ovnmeta-f19a420c-d088-44ba-92a5-ba4d8025ce6c[277399]: [WARNING]  (277403) : Exiting Master process...
Dec 06 10:16:03 compute-0 neutron-haproxy-ovnmeta-f19a420c-d088-44ba-92a5-ba4d8025ce6c[277399]: [WARNING]  (277403) : Exiting Master process...
Dec 06 10:16:03 compute-0 neutron-haproxy-ovnmeta-f19a420c-d088-44ba-92a5-ba4d8025ce6c[277399]: [ALERT]    (277403) : Current worker (277405) exited with code 143 (Terminated)
Dec 06 10:16:03 compute-0 neutron-haproxy-ovnmeta-f19a420c-d088-44ba-92a5-ba4d8025ce6c[277399]: [WARNING]  (277403) : All workers exited. Exiting... (0)
Dec 06 10:16:03 compute-0 nova_compute[254819]: 2025-12-06 10:16:03.913 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:16:03 compute-0 systemd[1]: libpod-54de96a34926ba613f37681a8919578b77b753a4af207fe955eb7d5eee80bc92.scope: Deactivated successfully.
Dec 06 10:16:03 compute-0 podman[278104]: 2025-12-06 10:16:03.921323268 +0000 UTC m=+0.052377766 container died 54de96a34926ba613f37681a8919578b77b753a4af207fe955eb7d5eee80bc92 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=neutron-haproxy-ovnmeta-f19a420c-d088-44ba-92a5-ba4d8025ce6c, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Dec 06 10:16:03 compute-0 nova_compute[254819]: 2025-12-06 10:16:03.924 254824 INFO nova.virt.libvirt.driver [-] [instance: 7ebb0f0e-b16a-451f-b85a-623f5bcf704f] Instance destroyed successfully.
Dec 06 10:16:03 compute-0 nova_compute[254819]: 2025-12-06 10:16:03.925 254824 DEBUG nova.objects.instance [None req-f955a611-7403-4053-8f61-3587eec272f5 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lazy-loading 'resources' on Instance uuid 7ebb0f0e-b16a-451f-b85a-623f5bcf704f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 10:16:03 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-54de96a34926ba613f37681a8919578b77b753a4af207fe955eb7d5eee80bc92-userdata-shm.mount: Deactivated successfully.
Dec 06 10:16:03 compute-0 podman[278115]: 2025-12-06 10:16:03.950706462 +0000 UTC m=+0.070035872 container exec d7d5239f75d84aa9a07cad1cdfa31e3b4f3983263aaaa27687e6c7454ab8fe3f (image=quay.io/ceph/keepalived:2.2.4, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-keepalived-nfs-cephfs-compute-0-ylrrzf, io.openshift.expose-services=, release=1793, description=keepalived for Ceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=keepalived, summary=Provides keepalived on RHEL 9 for Ceph., vcs-type=git, version=2.2.4, com.redhat.component=keepalived-container, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Keepalived on RHEL 9, io.openshift.tags=Ceph keepalived, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., architecture=x86_64, build-date=2023-02-22T09:23:20, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.buildah.version=1.28.2)
Dec 06 10:16:03 compute-0 nova_compute[254819]: 2025-12-06 10:16:03.951 254824 DEBUG nova.virt.libvirt.vif [None req-f955a611-7403-4053-8f61-3587eec272f5 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T10:15:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1823850228',display_name='tempest-TestNetworkBasicOps-server-1823850228',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1823850228',id=13,image_ref='9489b8a5-a798-4e26-87f9-59bb1eb2e6fd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBZ1JbYqKoCUxIiM8hDMgdZSsRHQUcoBjRF2DOasdBtdUJsR/+RRaag7cOntBUu6Pnxm7ZLVxvld0ACRX3Mi2/RpeAQ5OWV7PuIX+IEnS95lS5yg27/v0AunJEPN78t9BQ==',key_name='tempest-TestNetworkBasicOps-539349224',keypairs=<?>,launch_index=0,launched_at=2025-12-06T10:15:36Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='92b402c8d3e2476abc98be42a1e6d34e',ramdisk_id='',reservation_id='r-tzfybp9r',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='9489b8a5-a798-4e26-87f9-59bb1eb2e6fd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1971100882',owner_user_name='tempest-TestNetworkBasicOps-1971100882-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T10:15:37Z,user_data=None,user_id='03615580775245e6ae335ee9d785611f',uuid=7ebb0f0e-b16a-451f-b85a-623f5bcf704f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ea0f2c61-7dc3-454a-9b4d-2adf5a0a86bd", "address": "fa:16:3e:21:72:5e", "network": {"id": "f19a420c-d088-44ba-92a5-ba4d8025ce6c", "bridge": "br-int", "label": "tempest-network-smoke--1988472625", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.187", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapea0f2c61-7d", "ovs_interfaceid": "ea0f2c61-7dc3-454a-9b4d-2adf5a0a86bd", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 06 10:16:03 compute-0 nova_compute[254819]: 2025-12-06 10:16:03.952 254824 DEBUG nova.network.os_vif_util [None req-f955a611-7403-4053-8f61-3587eec272f5 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Converting VIF {"id": "ea0f2c61-7dc3-454a-9b4d-2adf5a0a86bd", "address": "fa:16:3e:21:72:5e", "network": {"id": "f19a420c-d088-44ba-92a5-ba4d8025ce6c", "bridge": "br-int", "label": "tempest-network-smoke--1988472625", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.187", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapea0f2c61-7d", "ovs_interfaceid": "ea0f2c61-7dc3-454a-9b4d-2adf5a0a86bd", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 10:16:03 compute-0 nova_compute[254819]: 2025-12-06 10:16:03.953 254824 DEBUG nova.network.os_vif_util [None req-f955a611-7403-4053-8f61-3587eec272f5 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:21:72:5e,bridge_name='br-int',has_traffic_filtering=True,id=ea0f2c61-7dc3-454a-9b4d-2adf5a0a86bd,network=Network(f19a420c-d088-44ba-92a5-ba4d8025ce6c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapea0f2c61-7d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 10:16:03 compute-0 nova_compute[254819]: 2025-12-06 10:16:03.953 254824 DEBUG os_vif [None req-f955a611-7403-4053-8f61-3587eec272f5 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:21:72:5e,bridge_name='br-int',has_traffic_filtering=True,id=ea0f2c61-7dc3-454a-9b4d-2adf5a0a86bd,network=Network(f19a420c-d088-44ba-92a5-ba4d8025ce6c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapea0f2c61-7d') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 06 10:16:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-7bbfc0c4f48e5f7f64a74644fc0e70facc48d03f5e9acb49b25622ca059f294b-merged.mount: Deactivated successfully.
Dec 06 10:16:03 compute-0 nova_compute[254819]: 2025-12-06 10:16:03.955 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:16:03 compute-0 nova_compute[254819]: 2025-12-06 10:16:03.955 254824 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapea0f2c61-7d, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 10:16:03 compute-0 nova_compute[254819]: 2025-12-06 10:16:03.956 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:16:03 compute-0 nova_compute[254819]: 2025-12-06 10:16:03.958 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:16:03 compute-0 nova_compute[254819]: 2025-12-06 10:16:03.960 254824 INFO os_vif [None req-f955a611-7403-4053-8f61-3587eec272f5 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:21:72:5e,bridge_name='br-int',has_traffic_filtering=True,id=ea0f2c61-7dc3-454a-9b4d-2adf5a0a86bd,network=Network(f19a420c-d088-44ba-92a5-ba4d8025ce6c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapea0f2c61-7d')
Dec 06 10:16:03 compute-0 podman[278104]: 2025-12-06 10:16:03.963394424 +0000 UTC m=+0.094448932 container cleanup 54de96a34926ba613f37681a8919578b77b753a4af207fe955eb7d5eee80bc92 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=neutron-haproxy-ovnmeta-f19a420c-d088-44ba-92a5-ba4d8025ce6c, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 10:16:03 compute-0 podman[278115]: 2025-12-06 10:16:03.964043131 +0000 UTC m=+0.083372511 container exec_died d7d5239f75d84aa9a07cad1cdfa31e3b4f3983263aaaa27687e6c7454ab8fe3f (image=quay.io/ceph/keepalived:2.2.4, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-keepalived-nfs-cephfs-compute-0-ylrrzf, vcs-type=git, version=2.2.4, com.redhat.component=keepalived-container, release=1793, io.buildah.version=1.28.2, name=keepalived, io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Keepalived on RHEL 9, build-date=2023-02-22T09:23:20, summary=Provides keepalived on RHEL 9 for Ceph., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vendor=Red Hat, Inc., distribution-scope=public, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, description=keepalived for Ceph, architecture=x86_64, io.openshift.tags=Ceph keepalived)
Dec 06 10:16:03 compute-0 systemd[1]: libpod-conmon-54de96a34926ba613f37681a8919578b77b753a4af207fe955eb7d5eee80bc92.scope: Deactivated successfully.
Dec 06 10:16:04 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:16:04 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:16:04 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:16:04.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:16:04 compute-0 ceph-mon[74327]: pgmap v1049: 337 pgs: 337 active+clean; 121 MiB data, 336 MiB used, 60 GiB / 60 GiB avail; 323 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Dec 06 10:16:04 compute-0 nova_compute[254819]: 2025-12-06 10:16:04.053 254824 DEBUG nova.compute.manager [req-e0092066-0c91-41af-ac43-4c2dd58fe130 req-6aba86a9-e1da-4a7d-b05d-0252ffc48e3a d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 7ebb0f0e-b16a-451f-b85a-623f5bcf704f] Received event network-vif-unplugged-ea0f2c61-7dc3-454a-9b4d-2adf5a0a86bd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 10:16:04 compute-0 nova_compute[254819]: 2025-12-06 10:16:04.053 254824 DEBUG oslo_concurrency.lockutils [req-e0092066-0c91-41af-ac43-4c2dd58fe130 req-6aba86a9-e1da-4a7d-b05d-0252ffc48e3a d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Acquiring lock "7ebb0f0e-b16a-451f-b85a-623f5bcf704f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:16:04 compute-0 nova_compute[254819]: 2025-12-06 10:16:04.053 254824 DEBUG oslo_concurrency.lockutils [req-e0092066-0c91-41af-ac43-4c2dd58fe130 req-6aba86a9-e1da-4a7d-b05d-0252ffc48e3a d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Lock "7ebb0f0e-b16a-451f-b85a-623f5bcf704f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:16:04 compute-0 nova_compute[254819]: 2025-12-06 10:16:04.053 254824 DEBUG oslo_concurrency.lockutils [req-e0092066-0c91-41af-ac43-4c2dd58fe130 req-6aba86a9-e1da-4a7d-b05d-0252ffc48e3a d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Lock "7ebb0f0e-b16a-451f-b85a-623f5bcf704f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:16:04 compute-0 nova_compute[254819]: 2025-12-06 10:16:04.054 254824 DEBUG nova.compute.manager [req-e0092066-0c91-41af-ac43-4c2dd58fe130 req-6aba86a9-e1da-4a7d-b05d-0252ffc48e3a d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 7ebb0f0e-b16a-451f-b85a-623f5bcf704f] No waiting events found dispatching network-vif-unplugged-ea0f2c61-7dc3-454a-9b4d-2adf5a0a86bd pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 10:16:04 compute-0 nova_compute[254819]: 2025-12-06 10:16:04.054 254824 DEBUG nova.compute.manager [req-e0092066-0c91-41af-ac43-4c2dd58fe130 req-6aba86a9-e1da-4a7d-b05d-0252ffc48e3a d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 7ebb0f0e-b16a-451f-b85a-623f5bcf704f] Received event network-vif-unplugged-ea0f2c61-7dc3-454a-9b4d-2adf5a0a86bd for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 06 10:16:04 compute-0 podman[278188]: 2025-12-06 10:16:04.063434655 +0000 UTC m=+0.060795603 container remove 54de96a34926ba613f37681a8919578b77b753a4af207fe955eb7d5eee80bc92 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=neutron-haproxy-ovnmeta-f19a420c-d088-44ba-92a5-ba4d8025ce6c, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 06 10:16:04 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:16:04.070 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[0a1f3163-2ab9-4d84-a5a0-b98c4444a711]: (4, ('Sat Dec  6 10:16:03 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-f19a420c-d088-44ba-92a5-ba4d8025ce6c (54de96a34926ba613f37681a8919578b77b753a4af207fe955eb7d5eee80bc92)\n54de96a34926ba613f37681a8919578b77b753a4af207fe955eb7d5eee80bc92\nSat Dec  6 10:16:03 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-f19a420c-d088-44ba-92a5-ba4d8025ce6c (54de96a34926ba613f37681a8919578b77b753a4af207fe955eb7d5eee80bc92)\n54de96a34926ba613f37681a8919578b77b753a4af207fe955eb7d5eee80bc92\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:16:04 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:16:04.072 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[bd0fb779-6a5d-48be-95fd-a4a2244bafcb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:16:04 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:16:04.073 162267 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf19a420c-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 10:16:04 compute-0 nova_compute[254819]: 2025-12-06 10:16:04.075 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:16:04 compute-0 kernel: tapf19a420c-d0: left promiscuous mode
Dec 06 10:16:04 compute-0 nova_compute[254819]: 2025-12-06 10:16:04.096 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:16:04 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:16:04.100 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[27757000-a321-4b3f-8fbf-f625bce64864]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:16:04 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1050: 337 pgs: 337 active+clean; 121 MiB data, 336 MiB used, 60 GiB / 60 GiB avail; 323 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Dec 06 10:16:04 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:16:04.117 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[7b4cbad1-8d28-40e4-9f99-5ad15e715470]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:16:04 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:16:04.119 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[5b5fdabb-a24c-4599-8c73-c0ba90f86519]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:16:04 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:16:04.141 260126 DEBUG oslo.privsep.daemon [-] privsep: reply[fd9a154c-158f-4938-b050-7e1a5e8db29e]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 450603, 'reachable_time': 34911, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 278243, 'error': None, 'target': 'ovnmeta-f19a420c-d088-44ba-92a5-ba4d8025ce6c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:16:04 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:16:04.144 162385 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-f19a420c-d088-44ba-92a5-ba4d8025ce6c deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 06 10:16:04 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:16:04.144 162385 DEBUG oslo.privsep.daemon [-] privsep: reply[ef2f0259-a801-4609-a1ca-164ac4ba1076]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:16:04 compute-0 systemd[1]: run-netns-ovnmeta\x2df19a420c\x2dd088\x2d44ba\x2d92a5\x2dba4d8025ce6c.mount: Deactivated successfully.
Dec 06 10:16:04 compute-0 podman[278246]: 2025-12-06 10:16:04.211301368 +0000 UTC m=+0.050525325 container exec b0127b2874845862d1ff8231029cda7f8d9811cefe028a677c06060e923a3641 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 06 10:16:04 compute-0 podman[278246]: 2025-12-06 10:16:04.241819442 +0000 UTC m=+0.081043359 container exec_died b0127b2874845862d1ff8231029cda7f8d9811cefe028a677c06060e923a3641 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 06 10:16:04 compute-0 nova_compute[254819]: 2025-12-06 10:16:04.363 254824 INFO nova.virt.libvirt.driver [None req-f955a611-7403-4053-8f61-3587eec272f5 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 7ebb0f0e-b16a-451f-b85a-623f5bcf704f] Deleting instance files /var/lib/nova/instances/7ebb0f0e-b16a-451f-b85a-623f5bcf704f_del
Dec 06 10:16:04 compute-0 nova_compute[254819]: 2025-12-06 10:16:04.363 254824 INFO nova.virt.libvirt.driver [None req-f955a611-7403-4053-8f61-3587eec272f5 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 7ebb0f0e-b16a-451f-b85a-623f5bcf704f] Deletion of /var/lib/nova/instances/7ebb0f0e-b16a-451f-b85a-623f5bcf704f_del complete
Dec 06 10:16:04 compute-0 nova_compute[254819]: 2025-12-06 10:16:04.416 254824 INFO nova.compute.manager [None req-f955a611-7403-4053-8f61-3587eec272f5 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] [instance: 7ebb0f0e-b16a-451f-b85a-623f5bcf704f] Took 0.73 seconds to destroy the instance on the hypervisor.
Dec 06 10:16:04 compute-0 nova_compute[254819]: 2025-12-06 10:16:04.417 254824 DEBUG oslo.service.loopingcall [None req-f955a611-7403-4053-8f61-3587eec272f5 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 06 10:16:04 compute-0 nova_compute[254819]: 2025-12-06 10:16:04.417 254824 DEBUG nova.compute.manager [-] [instance: 7ebb0f0e-b16a-451f-b85a-623f5bcf704f] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 06 10:16:04 compute-0 nova_compute[254819]: 2025-12-06 10:16:04.417 254824 DEBUG nova.network.neutron [-] [instance: 7ebb0f0e-b16a-451f-b85a-623f5bcf704f] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 06 10:16:04 compute-0 podman[278321]: 2025-12-06 10:16:04.493857036 +0000 UTC m=+0.073068204 container exec fc223e2a5fd06c66f839f6f48305e72a1403c44b345b53752763fbbf064c41b3 (image=quay.io/ceph/grafana:10.4.0, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Dec 06 10:16:04 compute-0 podman[278321]: 2025-12-06 10:16:04.690963169 +0000 UTC m=+0.270174317 container exec_died fc223e2a5fd06c66f839f6f48305e72a1403c44b345b53752763fbbf064c41b3 (image=quay.io/ceph/grafana:10.4.0, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Dec 06 10:16:04 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:16:04 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f4004570 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:16:04 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:16:04 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:16:04 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f4004570 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:16:04 compute-0 nova_compute[254819]: 2025-12-06 10:16:04.918 254824 DEBUG nova.network.neutron [req-bb1b49bd-99dc-4bd5-9257-f7605e50d76b req-9bcc8d37-358e-44d6-ab68-1dcea132aa2f d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 7ebb0f0e-b16a-451f-b85a-623f5bcf704f] Updated VIF entry in instance network info cache for port ea0f2c61-7dc3-454a-9b4d-2adf5a0a86bd. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 06 10:16:04 compute-0 nova_compute[254819]: 2025-12-06 10:16:04.918 254824 DEBUG nova.network.neutron [req-bb1b49bd-99dc-4bd5-9257-f7605e50d76b req-9bcc8d37-358e-44d6-ab68-1dcea132aa2f d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 7ebb0f0e-b16a-451f-b85a-623f5bcf704f] Updating instance_info_cache with network_info: [{"id": "ea0f2c61-7dc3-454a-9b4d-2adf5a0a86bd", "address": "fa:16:3e:21:72:5e", "network": {"id": "f19a420c-d088-44ba-92a5-ba4d8025ce6c", "bridge": "br-int", "label": "tempest-network-smoke--1988472625", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b402c8d3e2476abc98be42a1e6d34e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapea0f2c61-7d", "ovs_interfaceid": "ea0f2c61-7dc3-454a-9b4d-2adf5a0a86bd", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 10:16:04 compute-0 nova_compute[254819]: 2025-12-06 10:16:04.939 254824 DEBUG oslo_concurrency.lockutils [req-bb1b49bd-99dc-4bd5-9257-f7605e50d76b req-9bcc8d37-358e-44d6-ab68-1dcea132aa2f d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Releasing lock "refresh_cache-7ebb0f0e-b16a-451f-b85a-623f5bcf704f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 10:16:05 compute-0 podman[278432]: 2025-12-06 10:16:05.076810887 +0000 UTC m=+0.049854557 container exec cfe4d69091434e5154fa760292bba767b8875965fa71cf21268b9ec1632f0d9e (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 06 10:16:05 compute-0 podman[278432]: 2025-12-06 10:16:05.108592195 +0000 UTC m=+0.081635875 container exec_died cfe4d69091434e5154fa760292bba767b8875965fa71cf21268b9ec1632f0d9e (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 06 10:16:05 compute-0 sudo[277650]: pam_unix(sudo:session): session closed for user root
Dec 06 10:16:05 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 06 10:16:05 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 10:16:05 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 06 10:16:05 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 10:16:05 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:16:05 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f4004570 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:16:05 compute-0 sudo[278473]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 10:16:05 compute-0 sudo[278473]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:16:05 compute-0 sudo[278473]: pam_unix(sudo:session): session closed for user root
Dec 06 10:16:05 compute-0 sudo[278498]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Dec 06 10:16:05 compute-0 sudo[278498]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:16:05 compute-0 nova_compute[254819]: 2025-12-06 10:16:05.464 254824 DEBUG nova.network.neutron [-] [instance: 7ebb0f0e-b16a-451f-b85a-623f5bcf704f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 10:16:05 compute-0 nova_compute[254819]: 2025-12-06 10:16:05.487 254824 INFO nova.compute.manager [-] [instance: 7ebb0f0e-b16a-451f-b85a-623f5bcf704f] Took 1.07 seconds to deallocate network for instance.
Dec 06 10:16:05 compute-0 nova_compute[254819]: 2025-12-06 10:16:05.551 254824 DEBUG oslo_concurrency.lockutils [None req-f955a611-7403-4053-8f61-3587eec272f5 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:16:05 compute-0 nova_compute[254819]: 2025-12-06 10:16:05.552 254824 DEBUG oslo_concurrency.lockutils [None req-f955a611-7403-4053-8f61-3587eec272f5 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:16:05 compute-0 nova_compute[254819]: 2025-12-06 10:16:05.624 254824 DEBUG oslo_concurrency.processutils [None req-f955a611-7403-4053-8f61-3587eec272f5 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 10:16:05 compute-0 sudo[278498]: pam_unix(sudo:session): session closed for user root
Dec 06 10:16:05 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 06 10:16:05 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 10:16:05 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 06 10:16:05 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 10:16:05 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1051: 337 pgs: 337 active+clean; 121 MiB data, 336 MiB used, 60 GiB / 60 GiB avail; 955 B/s rd, 15 KiB/s wr, 1 op/s
Dec 06 10:16:05 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 06 10:16:05 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 10:16:05 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Dec 06 10:16:05 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:16:05 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:16:05 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:16:05.908 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:16:05 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 10:16:05 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 06 10:16:05 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 10:16:05 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 06 10:16:05 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 10:16:05 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 06 10:16:05 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 10:16:05 compute-0 sudo[278576]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 10:16:05 compute-0 sudo[278576]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:16:06 compute-0 sudo[278576]: pam_unix(sudo:session): session closed for user root
Dec 06 10:16:06 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:16:06 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 10:16:06 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:16:06.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 10:16:06 compute-0 ceph-mon[74327]: pgmap v1050: 337 pgs: 337 active+clean; 121 MiB data, 336 MiB used, 60 GiB / 60 GiB avail; 323 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Dec 06 10:16:06 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 10:16:06 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 10:16:06 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 10:16:06 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 10:16:06 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 10:16:06 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 10:16:06 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 10:16:06 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 10:16:06 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 10:16:06 compute-0 sudo[278601]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 5ecd3f74-dade-5fc4-92ce-8950ae424258 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Dec 06 10:16:06 compute-0 sudo[278601]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:16:06 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 06 10:16:06 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2500206167' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 10:16:06 compute-0 nova_compute[254819]: 2025-12-06 10:16:06.141 254824 DEBUG oslo_concurrency.processutils [None req-f955a611-7403-4053-8f61-3587eec272f5 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.517s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 10:16:06 compute-0 nova_compute[254819]: 2025-12-06 10:16:06.149 254824 DEBUG nova.compute.manager [req-3f90431e-1af9-41eb-ab96-b3765e36b5bb req-9a22847b-31cd-4a3b-bf9d-d975cc2661c9 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 7ebb0f0e-b16a-451f-b85a-623f5bcf704f] Received event network-vif-plugged-ea0f2c61-7dc3-454a-9b4d-2adf5a0a86bd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 10:16:06 compute-0 nova_compute[254819]: 2025-12-06 10:16:06.149 254824 DEBUG oslo_concurrency.lockutils [req-3f90431e-1af9-41eb-ab96-b3765e36b5bb req-9a22847b-31cd-4a3b-bf9d-d975cc2661c9 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Acquiring lock "7ebb0f0e-b16a-451f-b85a-623f5bcf704f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:16:06 compute-0 nova_compute[254819]: 2025-12-06 10:16:06.150 254824 DEBUG oslo_concurrency.lockutils [req-3f90431e-1af9-41eb-ab96-b3765e36b5bb req-9a22847b-31cd-4a3b-bf9d-d975cc2661c9 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Lock "7ebb0f0e-b16a-451f-b85a-623f5bcf704f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:16:06 compute-0 nova_compute[254819]: 2025-12-06 10:16:06.150 254824 DEBUG oslo_concurrency.lockutils [req-3f90431e-1af9-41eb-ab96-b3765e36b5bb req-9a22847b-31cd-4a3b-bf9d-d975cc2661c9 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] Lock "7ebb0f0e-b16a-451f-b85a-623f5bcf704f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:16:06 compute-0 nova_compute[254819]: 2025-12-06 10:16:06.150 254824 DEBUG nova.compute.manager [req-3f90431e-1af9-41eb-ab96-b3765e36b5bb req-9a22847b-31cd-4a3b-bf9d-d975cc2661c9 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 7ebb0f0e-b16a-451f-b85a-623f5bcf704f] No waiting events found dispatching network-vif-plugged-ea0f2c61-7dc3-454a-9b4d-2adf5a0a86bd pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 10:16:06 compute-0 nova_compute[254819]: 2025-12-06 10:16:06.151 254824 WARNING nova.compute.manager [req-3f90431e-1af9-41eb-ab96-b3765e36b5bb req-9a22847b-31cd-4a3b-bf9d-d975cc2661c9 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 7ebb0f0e-b16a-451f-b85a-623f5bcf704f] Received unexpected event network-vif-plugged-ea0f2c61-7dc3-454a-9b4d-2adf5a0a86bd for instance with vm_state deleted and task_state None.
Dec 06 10:16:06 compute-0 nova_compute[254819]: 2025-12-06 10:16:06.151 254824 DEBUG nova.compute.manager [req-3f90431e-1af9-41eb-ab96-b3765e36b5bb req-9a22847b-31cd-4a3b-bf9d-d975cc2661c9 d115944fbcd7470eae10054ca89c839d a4625a082db94534a44dd9543f68be02 - - default default] [instance: 7ebb0f0e-b16a-451f-b85a-623f5bcf704f] Received event network-vif-deleted-ea0f2c61-7dc3-454a-9b4d-2adf5a0a86bd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 10:16:06 compute-0 nova_compute[254819]: 2025-12-06 10:16:06.155 254824 DEBUG nova.compute.provider_tree [None req-f955a611-7403-4053-8f61-3587eec272f5 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Inventory has not changed in ProviderTree for provider: 06a9c7d1-c74c-47ea-9e97-16acfab6aa88 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 10:16:06 compute-0 nova_compute[254819]: 2025-12-06 10:16:06.170 254824 DEBUG nova.scheduler.client.report [None req-f955a611-7403-4053-8f61-3587eec272f5 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Inventory has not changed for provider 06a9c7d1-c74c-47ea-9e97-16acfab6aa88 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 10:16:06 compute-0 nova_compute[254819]: 2025-12-06 10:16:06.192 254824 DEBUG oslo_concurrency.lockutils [None req-f955a611-7403-4053-8f61-3587eec272f5 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.640s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:16:06 compute-0 ceph-mon[74327]: log_channel(cluster) log [INF] : Health check cleared: CEPHADM_FAILED_DAEMON (was: 1 failed cephadm daemon(s))
Dec 06 10:16:06 compute-0 ceph-mon[74327]: log_channel(cluster) log [INF] : Cluster is now healthy
Dec 06 10:16:06 compute-0 nova_compute[254819]: 2025-12-06 10:16:06.216 254824 INFO nova.scheduler.client.report [None req-f955a611-7403-4053-8f61-3587eec272f5 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Deleted allocations for instance 7ebb0f0e-b16a-451f-b85a-623f5bcf704f
Dec 06 10:16:06 compute-0 nova_compute[254819]: 2025-12-06 10:16:06.287 254824 DEBUG oslo_concurrency.lockutils [None req-f955a611-7403-4053-8f61-3587eec272f5 03615580775245e6ae335ee9d785611f 92b402c8d3e2476abc98be42a1e6d34e - - default default] Lock "7ebb0f0e-b16a-451f-b85a-623f5bcf704f" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.603s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:16:06 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:16:06.403 162267 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=d39b5be8-d4cf-41c7-9a64-1ee03801f4e1, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '13'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 10:16:06 compute-0 podman[278667]: 2025-12-06 10:16:06.469562852 +0000 UTC m=+0.048152711 container create e0c73b0217ba6354e8c4a759e856a125dac6dc06ec16f3d2ba75580652977b73 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_goldberg, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec 06 10:16:06 compute-0 systemd[1]: Started libpod-conmon-e0c73b0217ba6354e8c4a759e856a125dac6dc06ec16f3d2ba75580652977b73.scope.
Dec 06 10:16:06 compute-0 podman[278667]: 2025-12-06 10:16:06.44801763 +0000 UTC m=+0.026607499 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 06 10:16:06 compute-0 systemd[1]: Started libcrun container.
Dec 06 10:16:06 compute-0 podman[278667]: 2025-12-06 10:16:06.579948782 +0000 UTC m=+0.158538621 container init e0c73b0217ba6354e8c4a759e856a125dac6dc06ec16f3d2ba75580652977b73 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_goldberg, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0)
Dec 06 10:16:06 compute-0 podman[278667]: 2025-12-06 10:16:06.586462378 +0000 UTC m=+0.165052197 container start e0c73b0217ba6354e8c4a759e856a125dac6dc06ec16f3d2ba75580652977b73 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_goldberg, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 10:16:06 compute-0 podman[278667]: 2025-12-06 10:16:06.590823596 +0000 UTC m=+0.169413425 container attach e0c73b0217ba6354e8c4a759e856a125dac6dc06ec16f3d2ba75580652977b73 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_goldberg, org.label-schema.build-date=20250325, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 10:16:06 compute-0 dazzling_goldberg[278683]: 167 167
Dec 06 10:16:06 compute-0 systemd[1]: libpod-e0c73b0217ba6354e8c4a759e856a125dac6dc06ec16f3d2ba75580652977b73.scope: Deactivated successfully.
Dec 06 10:16:06 compute-0 conmon[278683]: conmon e0c73b0217ba6354e8c4 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e0c73b0217ba6354e8c4a759e856a125dac6dc06ec16f3d2ba75580652977b73.scope/container/memory.events
Dec 06 10:16:06 compute-0 podman[278667]: 2025-12-06 10:16:06.592785809 +0000 UTC m=+0.171375638 container died e0c73b0217ba6354e8c4a759e856a125dac6dc06ec16f3d2ba75580652977b73 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_goldberg, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Dec 06 10:16:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-131f442556f504e84c52a494bb34c8569a6eba2b00f95abf9030fc9221635666-merged.mount: Deactivated successfully.
Dec 06 10:16:06 compute-0 podman[278667]: 2025-12-06 10:16:06.626794247 +0000 UTC m=+0.205384076 container remove e0c73b0217ba6354e8c4a759e856a125dac6dc06ec16f3d2ba75580652977b73 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_goldberg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Dec 06 10:16:06 compute-0 systemd[1]: libpod-conmon-e0c73b0217ba6354e8c4a759e856a125dac6dc06ec16f3d2ba75580652977b73.scope: Deactivated successfully.
Dec 06 10:16:06 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:16:06 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c00a280 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:16:06 compute-0 nova_compute[254819]: 2025-12-06 10:16:06.815 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:16:06 compute-0 podman[278707]: 2025-12-06 10:16:06.843648572 +0000 UTC m=+0.064735429 container create a88d26a21d876146f725042ef2a9af24d29fbf9a66e90b089015bceb1ae37ca6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_thompson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, io.buildah.version=1.40.1)
Dec 06 10:16:06 compute-0 systemd[1]: Started libpod-conmon-a88d26a21d876146f725042ef2a9af24d29fbf9a66e90b089015bceb1ae37ca6.scope.
Dec 06 10:16:06 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:16:06 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d40019e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:16:06 compute-0 podman[278707]: 2025-12-06 10:16:06.823026696 +0000 UTC m=+0.044113603 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 06 10:16:06 compute-0 systemd[1]: Started libcrun container.
Dec 06 10:16:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3fcef08b60babb8746615fc316bac9c8c1fdad8abf1e7366e4f967e0b750745/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 10:16:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3fcef08b60babb8746615fc316bac9c8c1fdad8abf1e7366e4f967e0b750745/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 10:16:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3fcef08b60babb8746615fc316bac9c8c1fdad8abf1e7366e4f967e0b750745/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 10:16:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3fcef08b60babb8746615fc316bac9c8c1fdad8abf1e7366e4f967e0b750745/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 10:16:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3fcef08b60babb8746615fc316bac9c8c1fdad8abf1e7366e4f967e0b750745/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 06 10:16:06 compute-0 podman[278707]: 2025-12-06 10:16:06.950187999 +0000 UTC m=+0.171274876 container init a88d26a21d876146f725042ef2a9af24d29fbf9a66e90b089015bceb1ae37ca6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_thompson, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec 06 10:16:06 compute-0 podman[278707]: 2025-12-06 10:16:06.958279227 +0000 UTC m=+0.179366084 container start a88d26a21d876146f725042ef2a9af24d29fbf9a66e90b089015bceb1ae37ca6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_thompson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 10:16:06 compute-0 podman[278707]: 2025-12-06 10:16:06.961314669 +0000 UTC m=+0.182401526 container attach a88d26a21d876146f725042ef2a9af24d29fbf9a66e90b089015bceb1ae37ca6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_thompson, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 06 10:16:07 compute-0 ceph-mon[74327]: pgmap v1051: 337 pgs: 337 active+clean; 121 MiB data, 336 MiB used, 60 GiB / 60 GiB avail; 955 B/s rd, 15 KiB/s wr, 1 op/s
Dec 06 10:16:07 compute-0 ceph-mon[74327]: from='client.? 192.168.122.100:0/2500206167' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 10:16:07 compute-0 ceph-mon[74327]: Health check cleared: CEPHADM_FAILED_DAEMON (was: 1 failed cephadm daemon(s))
Dec 06 10:16:07 compute-0 ceph-mon[74327]: Cluster is now healthy
Dec 06 10:16:07 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:16:07 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d00045b0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:16:07 compute-0 sharp_thompson[278723]: --> passed data devices: 0 physical, 1 LVM
Dec 06 10:16:07 compute-0 sharp_thompson[278723]: --> All data devices are unavailable
Dec 06 10:16:07 compute-0 systemd[1]: libpod-a88d26a21d876146f725042ef2a9af24d29fbf9a66e90b089015bceb1ae37ca6.scope: Deactivated successfully.
Dec 06 10:16:07 compute-0 podman[278707]: 2025-12-06 10:16:07.378193775 +0000 UTC m=+0.599280672 container died a88d26a21d876146f725042ef2a9af24d29fbf9a66e90b089015bceb1ae37ca6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_thompson, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec 06 10:16:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-d3fcef08b60babb8746615fc316bac9c8c1fdad8abf1e7366e4f967e0b750745-merged.mount: Deactivated successfully.
Dec 06 10:16:07 compute-0 podman[278707]: 2025-12-06 10:16:07.446382216 +0000 UTC m=+0.667469113 container remove a88d26a21d876146f725042ef2a9af24d29fbf9a66e90b089015bceb1ae37ca6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_thompson, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 06 10:16:07 compute-0 systemd[1]: libpod-conmon-a88d26a21d876146f725042ef2a9af24d29fbf9a66e90b089015bceb1ae37ca6.scope: Deactivated successfully.
Dec 06 10:16:07 compute-0 sudo[278601]: pam_unix(sudo:session): session closed for user root
Dec 06 10:16:07 compute-0 sudo[278750]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 10:16:07 compute-0 sudo[278750]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:16:07 compute-0 sudo[278750]: pam_unix(sudo:session): session closed for user root
Dec 06 10:16:07 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:16:07.670Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec 06 10:16:07 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:16:07.670Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec 06 10:16:07 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:16:07.671Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 06 10:16:07 compute-0 sudo[278775]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 5ecd3f74-dade-5fc4-92ce-8950ae424258 -- lvm list --format json
Dec 06 10:16:07 compute-0 sudo[278775]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:16:07 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1052: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 16 KiB/s wr, 29 op/s
Dec 06 10:16:07 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:16:07 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 10:16:07 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:16:07.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 10:16:08 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:16:08 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:16:08 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:16:08.022 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:16:08 compute-0 ceph-mon[74327]: pgmap v1052: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 16 KiB/s wr, 29 op/s
Dec 06 10:16:08 compute-0 podman[278843]: 2025-12-06 10:16:08.123805977 +0000 UTC m=+0.037595586 container create f65fa67d5a9b237e64feda5e4cf00fb045729324a18c67acd16db3180470b90f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_ritchie, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec 06 10:16:08 compute-0 systemd[1]: Started libpod-conmon-f65fa67d5a9b237e64feda5e4cf00fb045729324a18c67acd16db3180470b90f.scope.
Dec 06 10:16:08 compute-0 systemd[1]: Started libcrun container.
Dec 06 10:16:08 compute-0 podman[278843]: 2025-12-06 10:16:08.19390516 +0000 UTC m=+0.107694789 container init f65fa67d5a9b237e64feda5e4cf00fb045729324a18c67acd16db3180470b90f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_ritchie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Dec 06 10:16:08 compute-0 podman[278843]: 2025-12-06 10:16:08.199400298 +0000 UTC m=+0.113189947 container start f65fa67d5a9b237e64feda5e4cf00fb045729324a18c67acd16db3180470b90f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_ritchie, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 10:16:08 compute-0 podman[278843]: 2025-12-06 10:16:08.107359183 +0000 UTC m=+0.021148812 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 06 10:16:08 compute-0 cranky_ritchie[278859]: 167 167
Dec 06 10:16:08 compute-0 systemd[1]: libpod-f65fa67d5a9b237e64feda5e4cf00fb045729324a18c67acd16db3180470b90f.scope: Deactivated successfully.
Dec 06 10:16:08 compute-0 podman[278843]: 2025-12-06 10:16:08.203919661 +0000 UTC m=+0.117709300 container attach f65fa67d5a9b237e64feda5e4cf00fb045729324a18c67acd16db3180470b90f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_ritchie, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 10:16:08 compute-0 podman[278843]: 2025-12-06 10:16:08.204944428 +0000 UTC m=+0.118734067 container died f65fa67d5a9b237e64feda5e4cf00fb045729324a18c67acd16db3180470b90f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_ritchie, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 10:16:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-cb4bab2923bf159cbe9f772573096ccdc599d3b88aa2cd8d2aca0544bf937eb2-merged.mount: Deactivated successfully.
Dec 06 10:16:08 compute-0 podman[278843]: 2025-12-06 10:16:08.253078257 +0000 UTC m=+0.166867896 container remove f65fa67d5a9b237e64feda5e4cf00fb045729324a18c67acd16db3180470b90f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_ritchie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 10:16:08 compute-0 systemd[1]: libpod-conmon-f65fa67d5a9b237e64feda5e4cf00fb045729324a18c67acd16db3180470b90f.scope: Deactivated successfully.
Dec 06 10:16:08 compute-0 podman[278885]: 2025-12-06 10:16:08.443940682 +0000 UTC m=+0.049253182 container create 363c7836c90c110c5eb7c91aeed1bed78b396508d90cfe1fabdd18818ee1bc3c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_ritchie, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_REF=squid, OSD_FLAVOR=default)
Dec 06 10:16:08 compute-0 systemd[1]: Started libpod-conmon-363c7836c90c110c5eb7c91aeed1bed78b396508d90cfe1fabdd18818ee1bc3c.scope.
Dec 06 10:16:08 compute-0 systemd[1]: Started libcrun container.
Dec 06 10:16:08 compute-0 podman[278885]: 2025-12-06 10:16:08.421162286 +0000 UTC m=+0.026474786 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 06 10:16:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26641ed579d39eead0609cb8293b3ac108b8718717087e943b337e9160f05de0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 10:16:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26641ed579d39eead0609cb8293b3ac108b8718717087e943b337e9160f05de0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 10:16:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26641ed579d39eead0609cb8293b3ac108b8718717087e943b337e9160f05de0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 10:16:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26641ed579d39eead0609cb8293b3ac108b8718717087e943b337e9160f05de0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 10:16:08 compute-0 podman[278885]: 2025-12-06 10:16:08.541156186 +0000 UTC m=+0.146468736 container init 363c7836c90c110c5eb7c91aeed1bed78b396508d90cfe1fabdd18818ee1bc3c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_ritchie, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 10:16:08 compute-0 podman[278885]: 2025-12-06 10:16:08.554681091 +0000 UTC m=+0.159993561 container start 363c7836c90c110c5eb7c91aeed1bed78b396508d90cfe1fabdd18818ee1bc3c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_ritchie, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid)
Dec 06 10:16:08 compute-0 podman[278885]: 2025-12-06 10:16:08.558683159 +0000 UTC m=+0.163995649 container attach 363c7836c90c110c5eb7c91aeed1bed78b396508d90cfe1fabdd18818ee1bc3c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_ritchie, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec 06 10:16:08 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:16:08 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f4004570 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:16:08 compute-0 condescending_ritchie[278901]: {
Dec 06 10:16:08 compute-0 condescending_ritchie[278901]:     "1": [
Dec 06 10:16:08 compute-0 condescending_ritchie[278901]:         {
Dec 06 10:16:08 compute-0 condescending_ritchie[278901]:             "devices": [
Dec 06 10:16:08 compute-0 condescending_ritchie[278901]:                 "/dev/loop3"
Dec 06 10:16:08 compute-0 condescending_ritchie[278901]:             ],
Dec 06 10:16:08 compute-0 condescending_ritchie[278901]:             "lv_name": "ceph_lv0",
Dec 06 10:16:08 compute-0 condescending_ritchie[278901]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 10:16:08 compute-0 condescending_ritchie[278901]:             "lv_size": "21470642176",
Dec 06 10:16:08 compute-0 condescending_ritchie[278901]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=a55pcS-v3Vt-CJ4W-fqCV-Baze-9VlY-jG9prS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=5ecd3f74-dade-5fc4-92ce-8950ae424258,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=7899c4d8-edb4-4836-b838-c4aa702ad7af,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 06 10:16:08 compute-0 condescending_ritchie[278901]:             "lv_uuid": "a55pcS-v3Vt-CJ4W-fqCV-Baze-9VlY-jG9prS",
Dec 06 10:16:08 compute-0 condescending_ritchie[278901]:             "name": "ceph_lv0",
Dec 06 10:16:08 compute-0 condescending_ritchie[278901]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 10:16:08 compute-0 condescending_ritchie[278901]:             "tags": {
Dec 06 10:16:08 compute-0 condescending_ritchie[278901]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 06 10:16:08 compute-0 condescending_ritchie[278901]:                 "ceph.block_uuid": "a55pcS-v3Vt-CJ4W-fqCV-Baze-9VlY-jG9prS",
Dec 06 10:16:08 compute-0 condescending_ritchie[278901]:                 "ceph.cephx_lockbox_secret": "",
Dec 06 10:16:08 compute-0 condescending_ritchie[278901]:                 "ceph.cluster_fsid": "5ecd3f74-dade-5fc4-92ce-8950ae424258",
Dec 06 10:16:08 compute-0 condescending_ritchie[278901]:                 "ceph.cluster_name": "ceph",
Dec 06 10:16:08 compute-0 condescending_ritchie[278901]:                 "ceph.crush_device_class": "",
Dec 06 10:16:08 compute-0 condescending_ritchie[278901]:                 "ceph.encrypted": "0",
Dec 06 10:16:08 compute-0 condescending_ritchie[278901]:                 "ceph.osd_fsid": "7899c4d8-edb4-4836-b838-c4aa702ad7af",
Dec 06 10:16:08 compute-0 condescending_ritchie[278901]:                 "ceph.osd_id": "1",
Dec 06 10:16:08 compute-0 condescending_ritchie[278901]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 06 10:16:08 compute-0 condescending_ritchie[278901]:                 "ceph.type": "block",
Dec 06 10:16:08 compute-0 condescending_ritchie[278901]:                 "ceph.vdo": "0",
Dec 06 10:16:08 compute-0 condescending_ritchie[278901]:                 "ceph.with_tpm": "0"
Dec 06 10:16:08 compute-0 condescending_ritchie[278901]:             },
Dec 06 10:16:08 compute-0 condescending_ritchie[278901]:             "type": "block",
Dec 06 10:16:08 compute-0 condescending_ritchie[278901]:             "vg_name": "ceph_vg0"
Dec 06 10:16:08 compute-0 condescending_ritchie[278901]:         }
Dec 06 10:16:08 compute-0 condescending_ritchie[278901]:     ]
Dec 06 10:16:08 compute-0 condescending_ritchie[278901]: }
Dec 06 10:16:08 compute-0 systemd[1]: libpod-363c7836c90c110c5eb7c91aeed1bed78b396508d90cfe1fabdd18818ee1bc3c.scope: Deactivated successfully.
Dec 06 10:16:08 compute-0 podman[278885]: 2025-12-06 10:16:08.859693127 +0000 UTC m=+0.465005587 container died 363c7836c90c110c5eb7c91aeed1bed78b396508d90cfe1fabdd18818ee1bc3c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_ritchie, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec 06 10:16:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-26641ed579d39eead0609cb8293b3ac108b8718717087e943b337e9160f05de0-merged.mount: Deactivated successfully.
Dec 06 10:16:08 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:16:08 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c00a2a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:16:08 compute-0 podman[278885]: 2025-12-06 10:16:08.908454573 +0000 UTC m=+0.513767033 container remove 363c7836c90c110c5eb7c91aeed1bed78b396508d90cfe1fabdd18818ee1bc3c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_ritchie, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Dec 06 10:16:08 compute-0 systemd[1]: libpod-conmon-363c7836c90c110c5eb7c91aeed1bed78b396508d90cfe1fabdd18818ee1bc3c.scope: Deactivated successfully.
Dec 06 10:16:08 compute-0 sudo[278775]: pam_unix(sudo:session): session closed for user root
Dec 06 10:16:08 compute-0 nova_compute[254819]: 2025-12-06 10:16:08.959 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:16:08 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/prometheus/health_history}] v 0)
Dec 06 10:16:08 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 10:16:08 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 06 10:16:08 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 10:16:09 compute-0 sudo[278923]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 10:16:09 compute-0 sudo[278923]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:16:09 compute-0 sudo[278923]: pam_unix(sudo:session): session closed for user root
Dec 06 10:16:09 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:16:09.046Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 06 10:16:09 compute-0 sudo[278948]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 5ecd3f74-dade-5fc4-92ce-8950ae424258 -- raw list --format json
Dec 06 10:16:09 compute-0 sudo[278948]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:16:09 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:16:09 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d40019e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:16:09 compute-0 podman[279015]: 2025-12-06 10:16:09.605956956 +0000 UTC m=+0.055983993 container create bd994f3b0e30fdcbd0b24d9ba7033a6dc24f3d1771bf19c541f43dd7f92f311d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_clarke, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Dec 06 10:16:09 compute-0 systemd[1]: Started libpod-conmon-bd994f3b0e30fdcbd0b24d9ba7033a6dc24f3d1771bf19c541f43dd7f92f311d.scope.
Dec 06 10:16:09 compute-0 podman[279015]: 2025-12-06 10:16:09.581837385 +0000 UTC m=+0.031864462 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 06 10:16:09 compute-0 systemd[1]: Started libcrun container.
Dec 06 10:16:09 compute-0 podman[279015]: 2025-12-06 10:16:09.713615413 +0000 UTC m=+0.163642460 container init bd994f3b0e30fdcbd0b24d9ba7033a6dc24f3d1771bf19c541f43dd7f92f311d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_clarke, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.build-date=20250325, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec 06 10:16:09 compute-0 podman[279015]: 2025-12-06 10:16:09.722045891 +0000 UTC m=+0.172072918 container start bd994f3b0e30fdcbd0b24d9ba7033a6dc24f3d1771bf19c541f43dd7f92f311d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_clarke, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 10:16:09 compute-0 podman[279015]: 2025-12-06 10:16:09.726274295 +0000 UTC m=+0.176301362 container attach bd994f3b0e30fdcbd0b24d9ba7033a6dc24f3d1771bf19c541f43dd7f92f311d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_clarke, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 10:16:09 compute-0 laughing_clarke[279032]: 167 167
Dec 06 10:16:09 compute-0 systemd[1]: libpod-bd994f3b0e30fdcbd0b24d9ba7033a6dc24f3d1771bf19c541f43dd7f92f311d.scope: Deactivated successfully.
Dec 06 10:16:09 compute-0 podman[279015]: 2025-12-06 10:16:09.727838997 +0000 UTC m=+0.177866024 container died bd994f3b0e30fdcbd0b24d9ba7033a6dc24f3d1771bf19c541f43dd7f92f311d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_clarke, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Dec 06 10:16:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-1845a5fc4d02dca4b6ffe195de996eef876f722785f7f6b7e63fb7b7958bf7ab-merged.mount: Deactivated successfully.
Dec 06 10:16:09 compute-0 podman[279015]: 2025-12-06 10:16:09.76385594 +0000 UTC m=+0.213882967 container remove bd994f3b0e30fdcbd0b24d9ba7033a6dc24f3d1771bf19c541f43dd7f92f311d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_clarke, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 10:16:09 compute-0 systemd[1]: libpod-conmon-bd994f3b0e30fdcbd0b24d9ba7033a6dc24f3d1771bf19c541f43dd7f92f311d.scope: Deactivated successfully.
Dec 06 10:16:09 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1053: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 3.2 KiB/s wr, 28 op/s
Dec 06 10:16:09 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:16:09 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:16:09 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:16:09 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:16:09.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:16:09 compute-0 podman[279056]: 2025-12-06 10:16:09.97311772 +0000 UTC m=+0.065756076 container create 1577f8eeb4ffc06bf4bb1d770f9c213f80fe1f041f2fa883845e341acdf35c50 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_johnson, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec 06 10:16:09 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 10:16:09 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 10:16:10 compute-0 systemd[1]: Started libpod-conmon-1577f8eeb4ffc06bf4bb1d770f9c213f80fe1f041f2fa883845e341acdf35c50.scope.
Dec 06 10:16:10 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:16:10 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:16:10 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:16:10.026 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:16:10 compute-0 podman[279056]: 2025-12-06 10:16:09.946228844 +0000 UTC m=+0.038867240 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 06 10:16:10 compute-0 systemd[1]: Started libcrun container.
Dec 06 10:16:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e6327b19b91d20332492492173a0337df1307e013919741a4225c764ba454d3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 10:16:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e6327b19b91d20332492492173a0337df1307e013919741a4225c764ba454d3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 10:16:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e6327b19b91d20332492492173a0337df1307e013919741a4225c764ba454d3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 10:16:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e6327b19b91d20332492492173a0337df1307e013919741a4225c764ba454d3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 10:16:10 compute-0 podman[279056]: 2025-12-06 10:16:10.082566215 +0000 UTC m=+0.175204581 container init 1577f8eeb4ffc06bf4bb1d770f9c213f80fe1f041f2fa883845e341acdf35c50 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_johnson, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid)
Dec 06 10:16:10 compute-0 podman[279056]: 2025-12-06 10:16:10.094448886 +0000 UTC m=+0.187087242 container start 1577f8eeb4ffc06bf4bb1d770f9c213f80fe1f041f2fa883845e341acdf35c50 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_johnson, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default)
Dec 06 10:16:10 compute-0 podman[279056]: 2025-12-06 10:16:10.106586424 +0000 UTC m=+0.199224790 container attach 1577f8eeb4ffc06bf4bb1d770f9c213f80fe1f041f2fa883845e341acdf35c50 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_johnson, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec 06 10:16:10 compute-0 lvm[279146]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 06 10:16:10 compute-0 lvm[279146]: VG ceph_vg0 finished
Dec 06 10:16:10 compute-0 lvm[279150]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 06 10:16:10 compute-0 lvm[279150]: VG ceph_vg0 finished
Dec 06 10:16:10 compute-0 competent_johnson[279072]: {}
Dec 06 10:16:10 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:16:10 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d00045b0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:16:10 compute-0 systemd[1]: libpod-1577f8eeb4ffc06bf4bb1d770f9c213f80fe1f041f2fa883845e341acdf35c50.scope: Deactivated successfully.
Dec 06 10:16:10 compute-0 systemd[1]: libpod-1577f8eeb4ffc06bf4bb1d770f9c213f80fe1f041f2fa883845e341acdf35c50.scope: Consumed 1.083s CPU time.
Dec 06 10:16:10 compute-0 podman[279056]: 2025-12-06 10:16:10.813337716 +0000 UTC m=+0.905976062 container died 1577f8eeb4ffc06bf4bb1d770f9c213f80fe1f041f2fa883845e341acdf35c50 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_johnson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec 06 10:16:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-2e6327b19b91d20332492492173a0337df1307e013919741a4225c764ba454d3-merged.mount: Deactivated successfully.
Dec 06 10:16:10 compute-0 podman[279056]: 2025-12-06 10:16:10.851429945 +0000 UTC m=+0.944068291 container remove 1577f8eeb4ffc06bf4bb1d770f9c213f80fe1f041f2fa883845e341acdf35c50 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_johnson, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 10:16:10 compute-0 systemd[1]: libpod-conmon-1577f8eeb4ffc06bf4bb1d770f9c213f80fe1f041f2fa883845e341acdf35c50.scope: Deactivated successfully.
Dec 06 10:16:10 compute-0 sudo[278948]: pam_unix(sudo:session): session closed for user root
Dec 06 10:16:10 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 06 10:16:10 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:16:10] "GET /metrics HTTP/1.1" 200 48459 "" "Prometheus/2.51.0"
Dec 06 10:16:10 compute-0 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:16:10] "GET /metrics HTTP/1.1" 200 48459 "" "Prometheus/2.51.0"
Dec 06 10:16:10 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:16:10 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f4004570 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:16:10 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 10:16:10 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 06 10:16:10 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 10:16:11 compute-0 ceph-mon[74327]: pgmap v1053: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 3.2 KiB/s wr, 28 op/s
Dec 06 10:16:11 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 10:16:11 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 10:16:11 compute-0 sudo[279162]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 10:16:11 compute-0 sudo[279162]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:16:11 compute-0 sudo[279162]: pam_unix(sudo:session): session closed for user root
Dec 06 10:16:11 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:16:11 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d40019e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:16:11 compute-0 nova_compute[254819]: 2025-12-06 10:16:11.816 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:16:11 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1054: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 3.2 KiB/s wr, 28 op/s
Dec 06 10:16:11 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:16:11 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:16:11 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:16:11.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:16:12 compute-0 nova_compute[254819]: 2025-12-06 10:16:12.006 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:16:12 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:16:12 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:16:12 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:16:12.029 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:16:12 compute-0 nova_compute[254819]: 2025-12-06 10:16:12.113 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:16:12 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:16:12 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c00b630 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:16:12 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:16:12 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d00045b0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:16:13 compute-0 ceph-mon[74327]: pgmap v1054: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 3.2 KiB/s wr, 28 op/s
Dec 06 10:16:13 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:16:13 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d00045b0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:16:13 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1055: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 3.2 KiB/s wr, 28 op/s
Dec 06 10:16:13 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:16:13 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:16:13 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:16:13.916 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:16:13 compute-0 nova_compute[254819]: 2025-12-06 10:16:13.963 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:16:14 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:16:14 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:16:14 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:16:14.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:16:14 compute-0 sudo[279193]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 10:16:14 compute-0 sudo[279193]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:16:14 compute-0 sudo[279193]: pam_unix(sudo:session): session closed for user root
Dec 06 10:16:14 compute-0 podman[279192]: 2025-12-06 10:16:14.482944178 +0000 UTC m=+0.104545633 container health_status a9cf33c1bf7891b3fdd9db4717ad5f3587fe9e5acf9171920c6d6334b70a502a (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec 06 10:16:14 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:16:14 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d4001a00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:16:14 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:16:14 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:16:14 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c00b630 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:16:15 compute-0 ceph-mon[74327]: pgmap v1055: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 3.2 KiB/s wr, 28 op/s
Dec 06 10:16:15 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:16:15 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d00045b0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:16:15 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1056: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Dec 06 10:16:15 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:16:15 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:16:15 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:16:15.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:16:16 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:16:16 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:16:16 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:16:16.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:16:16 compute-0 ceph-mon[74327]: pgmap v1056: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Dec 06 10:16:16 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:16:16 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f40045b0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:16:16 compute-0 nova_compute[254819]: 2025-12-06 10:16:16.819 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:16:16 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:16:16 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d4001a20 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:16:17 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:16:17 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c00b630 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:16:17 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-haproxy-nfs-cephfs-compute-0-fzuvue[96127]: [WARNING] 339/101617 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Dec 06 10:16:17 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:16:17.672Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 06 10:16:17 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1057: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Dec 06 10:16:17 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:16:17 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:16:17 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:16:17.920 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:16:18 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:16:18 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:16:18 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:16:18.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:16:18 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:16:18 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d00045b0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:16:18 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:16:18 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f40045d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:16:18 compute-0 nova_compute[254819]: 2025-12-06 10:16:18.923 254824 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765016163.9219046, 7ebb0f0e-b16a-451f-b85a-623f5bcf704f => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 10:16:18 compute-0 nova_compute[254819]: 2025-12-06 10:16:18.923 254824 INFO nova.compute.manager [-] [instance: 7ebb0f0e-b16a-451f-b85a-623f5bcf704f] VM Stopped (Lifecycle Event)
Dec 06 10:16:18 compute-0 nova_compute[254819]: 2025-12-06 10:16:18.952 254824 DEBUG nova.compute.manager [None req-375c1389-ab3f-4408-b044-77d18aba20c6 - - - - - -] [instance: 7ebb0f0e-b16a-451f-b85a-623f5bcf704f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 10:16:18 compute-0 ceph-mon[74327]: pgmap v1057: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Dec 06 10:16:18 compute-0 nova_compute[254819]: 2025-12-06 10:16:18.966 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:16:19 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:16:19.047Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 06 10:16:19 compute-0 ceph-mgr[74618]: [dashboard INFO request] [192.168.122.100:51930] [POST] [200] [0.003s] [4.0B] [acb5433b-d0b7-408c-abb1-d799d504c557] /api/prometheus_receiver
Dec 06 10:16:19 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:16:19 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d400c560 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:16:19 compute-0 podman[279241]: 2025-12-06 10:16:19.517881156 +0000 UTC m=+0.134666928 container health_status ed08b04f63d3695f0aa2f04fcc706897af4aa24f286fa3cd10a4323ee2c868ab (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_controller, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 06 10:16:19 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1058: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 10:16:19 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:16:19 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:16:19 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:16:19 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:16:19.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:16:20 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:16:20 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:16:20 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:16:20.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:16:20 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:16:20 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c00b630 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:16:20 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:16:20] "GET /metrics HTTP/1.1" 200 48459 "" "Prometheus/2.51.0"
Dec 06 10:16:20 compute-0 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:16:20] "GET /metrics HTTP/1.1" 200 48459 "" "Prometheus/2.51.0"
Dec 06 10:16:20 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:16:20 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d00045b0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:16:20 compute-0 ceph-mon[74327]: pgmap v1058: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 10:16:21 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:16:21 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f40045f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:16:21 compute-0 nova_compute[254819]: 2025-12-06 10:16:21.821 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:16:21 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1059: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 10:16:21 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:16:21 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:16:21 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:16:21.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:16:22 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:16:22 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:16:22 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:16:22.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:16:22 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:16:22 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d400c560 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:16:22 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:16:22 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c00b630 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:16:22 compute-0 ceph-mon[74327]: pgmap v1059: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 10:16:23 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:16:23 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d00045b0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:16:23 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1060: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Dec 06 10:16:23 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:16:23 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:16:23 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:16:23.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:16:23 compute-0 ceph-mgr[74618]: [balancer INFO root] Optimize plan auto_2025-12-06_10:16:23
Dec 06 10:16:23 compute-0 ceph-mgr[74618]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 06 10:16:23 compute-0 ceph-mgr[74618]: [balancer INFO root] do_upmap
Dec 06 10:16:23 compute-0 ceph-mgr[74618]: [balancer INFO root] pools ['volumes', '.rgw.root', '.nfs', 'images', 'default.rgw.meta', 'vms', '.mgr', 'default.rgw.log', 'cephfs.cephfs.meta', 'default.rgw.control', 'cephfs.cephfs.data', 'backups']
Dec 06 10:16:23 compute-0 ceph-mgr[74618]: [balancer INFO root] prepared 0/10 upmap changes
Dec 06 10:16:23 compute-0 nova_compute[254819]: 2025-12-06 10:16:23.970 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:16:23 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 06 10:16:23 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 10:16:24 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 10:16:24 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 10:16:24 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 10:16:24 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 10:16:24 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 10:16:24 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 10:16:24 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 10:16:24 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:16:24 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:16:24 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:16:24.047 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:16:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] _maybe_adjust
Dec 06 10:16:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:16:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 06 10:16:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:16:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec 06 10:16:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:16:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 10:16:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:16:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 10:16:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:16:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Dec 06 10:16:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:16:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Dec 06 10:16:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:16:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 10:16:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:16:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec 06 10:16:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:16:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 06 10:16:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:16:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec 06 10:16:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:16:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 10:16:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:16:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
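Each _maybe_adjust pass above prices a pool's PG count as capacity_ratio × bias × (OSD count × mon_target_pg_per_osd), rounds up to a power of two, clamps to the pool's pg_num_min, and leaves pg_num alone unless the ideal is roughly 3x away from the current value; the trailing 64411926528 in the effective_target_ratio lines is the cluster capacity in bytes (~60 GiB, matching the pgmap lines below). A minimal sketch of that arithmetic, assuming 3 OSDs, the default mon_target_pg_per_osd of 100, and the module's default pg_num_min of 32 — none of which are stated in the log itself:

    import math

    # Hypothetical re-derivation of the pg_autoscaler lines above; the real
    # module also handles CRUSH roots, target ratios/bytes and profiles,
    # which this sketch omits.
    def pg_target(capacity_ratio, bias, num_osds=3, target_pg_per_osd=100):
        return capacity_ratio * bias * num_osds * target_pg_per_osd

    def quantize(raw, current, pg_num_min=32, threshold=3.0):
        # Round up to the nearest power of two, respect pg_num_min, and only
        # move pg_num when the ideal is more than `threshold` times away.
        n = max(math.ceil(raw), 1)
        ideal = max(1 << (n - 1).bit_length(), pg_num_min)
        if ideal >= current * threshold or current >= ideal * threshold:
            return ideal
        return current

    # 'cephfs.cephfs.meta' from the log: bias 4.0, pg_num_min 16, 16 PGs now.
    raw = pg_target(5.087256625643029e-07, bias=4.0)
    print(raw)                                       # ~0.00061, as logged
    print(quantize(raw, current=16, pg_num_min=16))  # 16: no change

With the assumed default pg_num_min of 32, the same arithmetic reproduces the "quantized to 32 (current 32)" lines for the near-empty pools above.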
Dec 06 10:16:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 06 10:16:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 10:16:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 10:16:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 10:16:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 10:16:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 06 10:16:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 10:16:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 10:16:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 10:16:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 10:16:24 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:16:24 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f4004610 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:16:24 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:16:24 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:16:24 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d400c560 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:16:25 compute-0 ceph-mon[74327]: pgmap v1060: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Dec 06 10:16:25 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:16:25 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c00b630 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:16:25 compute-0 podman[279273]: 2025-12-06 10:16:25.479022399 +0000 UTC m=+0.101349947 container health_status ec1e66d2175087ad7a28a9a6b5a784e30128df3f5e268959d009e364cd5120f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 10:16:25 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1061: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec 06 10:16:25 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:16:25 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:16:25 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:16:25.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:16:26 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:16:26 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:16:26 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:16:26.050 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:16:26 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:16:26 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d00045b0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:16:26 compute-0 nova_compute[254819]: 2025-12-06 10:16:26.882 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:16:26 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:16:26 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f4004630 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:16:27 compute-0 ceph-mon[74327]: pgmap v1061: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec 06 10:16:27 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:16:27 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d400c560 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:16:27 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:16:27.673Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
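The dispatcher error above recurs every ten seconds: Alertmanager cannot deliver to the Ceph dashboard's prometheus_receiver on compute-1 and compute-2 before its deadline ("context deadline exceeded", i.e. a timeout rather than a refusal). A throwaway reachability probe against the same endpoints, as a diagnostic sketch — the URLs are copied verbatim from the log line, and the empty-alert payload is an assumption, not the dashboard's required schema:

    import json
    import urllib.request

    # Probe the receiver endpoints Alertmanager is timing out against.
    for host in ("compute-1", "compute-2"):
        url = f"http://{host}.ctlplane.example.com:8443/api/prometheus_receiver"
        req = urllib.request.Request(
            url,
            data=json.dumps({"alerts": []}).encode(),
            headers={"Content-Type": "application/json"},
        )
        try:
            with urllib.request.urlopen(req, timeout=5) as resp:
                print(host, resp.status)
        except Exception as exc:  # timeout, refused, TLS mismatch, ...
            print(host, "FAILED:", exc)

A hang here points at the network path or a dashboard that is not listening on 8443, rather than at Alertmanager itself.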
Dec 06 10:16:27 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1062: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 10:16:27 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:16:27 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 10:16:27 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:16:27.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 10:16:28 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:16:28 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:16:28 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:16:28.053 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:16:28 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:16:28 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c00b630 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:16:28 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:16:28 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d0004750 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:16:28 compute-0 nova_compute[254819]: 2025-12-06 10:16:28.973 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:16:29 compute-0 ceph-mon[74327]: pgmap v1062: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 10:16:29 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:16:29 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f4004650 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:16:29 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1063: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec 06 10:16:29 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:16:29 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:16:29 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:16:29 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:16:29.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:16:30 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:16:30 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:16:30 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:16:30.055 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:16:30 compute-0 ceph-mon[74327]: pgmap v1063: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec 06 10:16:30 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:16:30 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f4004650 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:16:30 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:16:30] "GET /metrics HTTP/1.1" 200 48461 "" "Prometheus/2.51.0"
Dec 06 10:16:30 compute-0 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:16:30] "GET /metrics HTTP/1.1" 200 48461 "" "Prometheus/2.51.0"
Dec 06 10:16:30 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:16:30 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f660c00b630 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:16:31 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:16:31 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d0004770 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:16:31 compute-0 nova_compute[254819]: 2025-12-06 10:16:31.885 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:16:31 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1064: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec 06 10:16:31 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:16:31 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:16:31 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:16:31.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:16:32 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:16:32 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:16:32 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:16:32.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:16:32 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:16:32 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d400c560 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:16:32 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:16:32 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65f4004670 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:16:32 compute-0 ceph-mon[74327]: pgmap v1064: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec 06 10:16:33 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:16:33 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65e8002830 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:16:33 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1065: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 10:16:33 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:16:33 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:16:33 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:16:33.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:16:33 compute-0 nova_compute[254819]: 2025-12-06 10:16:33.976 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:16:34 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:16:34 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:16:34 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:16:34.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:16:34 compute-0 sudo[279303]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 10:16:34 compute-0 sudo[279303]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:16:34 compute-0 sudo[279303]: pam_unix(sudo:session): session closed for user root
Dec 06 10:16:34 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:16:34 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_40] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d0004820 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:16:34 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:16:34 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:16:34 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d400c560 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 06 10:16:34 compute-0 ceph-mon[74327]: pgmap v1065: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 10:16:35 compute-0 kernel: ganesha.nfsd[277147]: segfault at 50 ip 00007f66bab6c32e sp 00007f66727fb210 error 4 in libntirpc.so.5.8[7f66bab51000+2c000] likely on CPU 1 (core 0, socket 1)
Dec 06 10:16:35 compute-0 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
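In the kernel's Code: dump, the byte in angle brackets is the first byte of the instruction at the faulting ip. Decoding from that marker shows a 4-byte load at offset 0x50 from r13, so r13 held NULL and the "segfault at 50" address is just the dereferenced member offset inside a libntirpc structure; the matching core dump appears a second later and can be opened with coredumpctl gdb for the full backtrace. A decoding sketch, assuming the third-party capstone package:

    # Disassemble the marked bytes from the Code: line above; the base
    # address is the logged faulting ip.
    from capstone import Cs, CS_ARCH_X86, CS_MODE_64

    code = bytes.fromhex(
        "458b6550"        # <45> 8b 65 50
        "498b7568"        # 49 8b 75 68
        "418bbe28020000"  # 41 8b be 28 02 00 00
        "b940000000"      # b9 40 00 00 00
    )
    md = Cs(CS_ARCH_X86, CS_MODE_64)
    for insn in md.disasm(code, 0x7F66BAB6C32E):
        print(f"{insn.address:#x}  {insn.mnemonic} {insn.op_str}")
    # First line: mov r12d, dword ptr [r13 + 0x50]  -> NULL r13, offset 0x50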
Dec 06 10:16:35 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck[267043]: 06/12/2025 10:16:35 : epoch 693400e9 : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f65d400c560 fd 39 proxy ignored for local
Dec 06 10:16:35 compute-0 systemd[1]: Started Process Core Dump (PID 279328/UID 0).
Dec 06 10:16:35 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1066: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec 06 10:16:35 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:16:35 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 10:16:35 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:16:35.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 10:16:36 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:16:36 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:16:36 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:16:36.065 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:16:36 compute-0 systemd-coredump[279329]: Process 267051 (ganesha.nfsd) of user 0 dumped core.
                                                    
                                                    Stack trace of thread 81:
                                                    #0  0x00007f66bab6c32e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)
                                                    ELF object binary architecture: AMD x86-64
Dec 06 10:16:36 compute-0 systemd[1]: systemd-coredump@11-279328-0.service: Deactivated successfully.
Dec 06 10:16:36 compute-0 systemd[1]: systemd-coredump@11-279328-0.service: Consumed 1.148s CPU time.
Dec 06 10:16:36 compute-0 podman[279336]: 2025-12-06 10:16:36.696240423 +0000 UTC m=+0.022492868 container died c075298cf4218136c3d2292ce2beb5212b60757ab32882219e2a8e8be2cdcf16 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1)
Dec 06 10:16:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-1734ccd679f2dc6c6c68ccfec5ec524b9e349d18b823990645a69f0aafaa48d8-merged.mount: Deactivated successfully.
Dec 06 10:16:36 compute-0 podman[279336]: 2025-12-06 10:16:36.735427611 +0000 UTC m=+0.061680036 container remove c075298cf4218136c3d2292ce2beb5212b60757ab32882219e2a8e8be2cdcf16 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-nfs-cephfs-2-0-compute-0-dfwxck, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Dec 06 10:16:36 compute-0 systemd[1]: ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258@nfs.cephfs.2.0.compute-0.dfwxck.service: Main process exited, code=exited, status=139/n/a
Dec 06 10:16:36 compute-0 nova_compute[254819]: 2025-12-06 10:16:36.887 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:16:36 compute-0 systemd[1]: ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258@nfs.cephfs.2.0.compute-0.dfwxck.service: Failed with result 'exit-code'.
Dec 06 10:16:36 compute-0 systemd[1]: ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258@nfs.cephfs.2.0.compute-0.dfwxck.service: Consumed 2.397s CPU time.
Dec 06 10:16:37 compute-0 ceph-mon[74327]: pgmap v1066: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec 06 10:16:37 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:16:37.674Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 06 10:16:37 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1067: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 10:16:37 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:16:37 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:16:37 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:16:37.941 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:16:38 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:16:38 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:16:38 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:16:38.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:16:38 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 06 10:16:38 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
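The handle_command/audit pair above is the mgr's periodic blocklist poll arriving at the mon as a plain JSON mon_command. The same command can be issued from the python-rados bindings; a sketch assuming a readable /etc/ceph/ceph.conf and a usable client keyring on this host:

    import json
    import rados

    # Send the same JSON command the mgr dispatches in the audit line above.
    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
    cluster.connect()
    try:
        ret, out, errs = cluster.mon_command(
            json.dumps({"prefix": "osd blocklist ls", "format": "json"}), b"")
        print(ret, out.decode() if out else errs)
    finally:
        cluster.shutdown()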
Dec 06 10:16:38 compute-0 nova_compute[254819]: 2025-12-06 10:16:38.980 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:16:39 compute-0 ceph-mon[74327]: pgmap v1067: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 06 10:16:39 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 10:16:39 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1068: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec 06 10:16:39 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:16:39 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:16:39 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:16:39 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:16:39.944 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:16:40 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:16:40 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:16:40 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:16:40.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:16:40 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:16:40] "GET /metrics HTTP/1.1" 200 48462 "" "Prometheus/2.51.0"
Dec 06 10:16:40 compute-0 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:16:40] "GET /metrics HTTP/1.1" 200 48462 "" "Prometheus/2.51.0"
Dec 06 10:16:41 compute-0 ceph-mon[74327]: pgmap v1068: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec 06 10:16:41 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-haproxy-nfs-cephfs-compute-0-fzuvue[96127]: [WARNING] 339/101641 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Dec 06 10:16:41 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1069: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec 06 10:16:41 compute-0 nova_compute[254819]: 2025-12-06 10:16:41.935 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:16:41 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:16:41 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:16:41 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:16:41.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:16:42 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:16:42 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:16:42 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:16:42.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:16:42 compute-0 ovn_controller[152417]: 2025-12-06T10:16:42Z|00142|memory_trim|INFO|Detected inactivity (last active 30003 ms ago): trimming memory
Dec 06 10:16:43 compute-0 ceph-mon[74327]: pgmap v1069: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec 06 10:16:43 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1070: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec 06 10:16:43 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:16:43 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 10:16:43 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:16:43.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 10:16:43 compute-0 nova_compute[254819]: 2025-12-06 10:16:43.984 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:16:44 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:16:44 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:16:44 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:16:44.077 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:16:44 compute-0 ceph-mon[74327]: pgmap v1070: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec 06 10:16:44 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:16:45 compute-0 podman[279389]: 2025-12-06 10:16:45.656963597 +0000 UTC m=+0.273866805 container health_status a9cf33c1bf7891b3fdd9db4717ad5f3587fe9e5acf9171920c6d6334b70a502a (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, managed_by=edpm_ansible)
Dec 06 10:16:45 compute-0 nova_compute[254819]: 2025-12-06 10:16:45.748 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:16:45 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1071: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 op/s
Dec 06 10:16:45 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:16:45 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.002000054s ======
Dec 06 10:16:45 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:16:45.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000054s
Dec 06 10:16:45 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 06 10:16:45 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3526346481' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 10:16:45 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 06 10:16:45 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3526346481' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 10:16:46 compute-0 ceph-mon[74327]: from='client.? 192.168.122.10:0/3526346481' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 10:16:46 compute-0 ceph-mon[74327]: from='client.? 192.168.122.10:0/3526346481' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 10:16:46 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:16:46 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:16:46 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:16:46.080 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:16:46 compute-0 nova_compute[254819]: 2025-12-06 10:16:46.748 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:16:46 compute-0 nova_compute[254819]: 2025-12-06 10:16:46.781 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:16:46 compute-0 nova_compute[254819]: 2025-12-06 10:16:46.782 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:16:46 compute-0 nova_compute[254819]: 2025-12-06 10:16:46.782 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:16:46 compute-0 nova_compute[254819]: 2025-12-06 10:16:46.782 254824 DEBUG nova.compute.resource_tracker [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 10:16:46 compute-0 nova_compute[254819]: 2025-12-06 10:16:46.783 254824 DEBUG oslo_concurrency.processutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 10:16:46 compute-0 nova_compute[254819]: 2025-12-06 10:16:46.976 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:16:47 compute-0 ceph-mon[74327]: pgmap v1071: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 op/s
Dec 06 10:16:47 compute-0 systemd[1]: ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258@nfs.cephfs.2.0.compute-0.dfwxck.service: Scheduled restart job, restart counter is at 12.
Dec 06 10:16:47 compute-0 systemd[1]: Stopped Ceph nfs.cephfs.2.0.compute-0.dfwxck for 5ecd3f74-dade-5fc4-92ce-8950ae424258.
Dec 06 10:16:47 compute-0 systemd[1]: ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258@nfs.cephfs.2.0.compute-0.dfwxck.service: Consumed 2.397s CPU time.
Dec 06 10:16:47 compute-0 systemd[1]: ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258@nfs.cephfs.2.0.compute-0.dfwxck.service: Start request repeated too quickly.
Dec 06 10:16:47 compute-0 systemd[1]: ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258@nfs.cephfs.2.0.compute-0.dfwxck.service: Failed with result 'exit-code'.
Dec 06 10:16:47 compute-0 systemd[1]: Failed to start Ceph nfs.cephfs.2.0.compute-0.dfwxck for 5ecd3f74-dade-5fc4-92ce-8950ae424258.
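Restart counter 12 means systemd has hit the unit's start rate limit, so "Start request repeated too quickly" leaves the NFS service in the failed state rather than crash-looping further; haproxy accordingly marked the nfs.cephfs.2 backend DOWN above. Until the libntirpc segfault itself is fixed, the unit can only be brought back by clearing the failed state and starting it again; a sketch using the two standard systemctl calls, with the unit name copied from the log:

    import subprocess

    UNIT = ("ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258"
            "@nfs.cephfs.2.0.compute-0.dfwxck.service")

    # reset-failed clears the failed state and the start-limit counter;
    # the unit will simply crash again until the segfault is addressed.
    subprocess.run(["systemctl", "reset-failed", UNIT], check=True)
    subprocess.run(["systemctl", "start", UNIT], check=True)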
Dec 06 10:16:47 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 06 10:16:47 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3098041809' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 10:16:47 compute-0 nova_compute[254819]: 2025-12-06 10:16:47.339 254824 DEBUG oslo_concurrency.processutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.556s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 10:16:47 compute-0 nova_compute[254819]: 2025-12-06 10:16:47.538 254824 WARNING nova.virt.libvirt.driver [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 10:16:47 compute-0 nova_compute[254819]: 2025-12-06 10:16:47.539 254824 DEBUG nova.compute.resource_tracker [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4516MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 06 10:16:47 compute-0 nova_compute[254819]: 2025-12-06 10:16:47.540 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:16:47 compute-0 nova_compute[254819]: 2025-12-06 10:16:47.540 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:16:47 compute-0 nova_compute[254819]: 2025-12-06 10:16:47.606 254824 DEBUG nova.compute.resource_tracker [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 10:16:47 compute-0 nova_compute[254819]: 2025-12-06 10:16:47.606 254824 DEBUG nova.compute.resource_tracker [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 06 10:16:47 compute-0 nova_compute[254819]: 2025-12-06 10:16:47.621 254824 DEBUG oslo_concurrency.processutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 10:16:47 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:16:47.675Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 06 10:16:47 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1072: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec 06 10:16:47 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:16:47 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 10:16:47 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:16:47.953 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 10:16:48 compute-0 ceph-mon[74327]: from='client.? 192.168.122.100:0/3098041809' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 10:16:48 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 06 10:16:48 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3494624022' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 10:16:48 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:16:48 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:16:48 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:16:48.083 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:16:48 compute-0 nova_compute[254819]: 2025-12-06 10:16:48.092 254824 DEBUG oslo_concurrency.processutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 10:16:48 compute-0 nova_compute[254819]: 2025-12-06 10:16:48.099 254824 DEBUG nova.compute.provider_tree [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Inventory has not changed in ProviderTree for provider: 06a9c7d1-c74c-47ea-9e97-16acfab6aa88 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 10:16:48 compute-0 nova_compute[254819]: 2025-12-06 10:16:48.114 254824 DEBUG nova.scheduler.client.report [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Inventory has not changed for provider 06a9c7d1-c74c-47ea-9e97-16acfab6aa88 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 10:16:48 compute-0 nova_compute[254819]: 2025-12-06 10:16:48.139 254824 DEBUG nova.compute.resource_tracker [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 10:16:48 compute-0 nova_compute[254819]: 2025-12-06 10:16:48.139 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.599s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
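The update_available_resource pass above prices the RBD-backed disk by shelling out to ceph df (twice in this audit, ~0.5 s each, as logged by oslo_concurrency.processutils) and folds the result into the Hypervisor/Node resource view. Reproducing the probe and reading the cluster totals, as a sketch; the key names follow ceph df --format=json output:

    import json
    import subprocess

    # The exact command nova_compute logs above.
    out = subprocess.run(
        ["ceph", "df", "--format=json",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
        check=True, capture_output=True, text=True,
    ).stdout

    stats = json.loads(out)["stats"]
    print("avail GiB:", stats["total_avail_bytes"] / 2**30)  # ~60 GiB here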
Dec 06 10:16:48 compute-0 nova_compute[254819]: 2025-12-06 10:16:48.987 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:16:49 compute-0 ceph-mon[74327]: pgmap v1072: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec 06 10:16:49 compute-0 ceph-mon[74327]: from='client.? 192.168.122.100:0/3494624022' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 10:16:49 compute-0 ceph-mon[74327]: from='client.? 192.168.122.101:0/2009192655' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 10:16:49 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1073: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 op/s
Dec 06 10:16:49 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:16:49 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:16:49 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:16:49 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:16:49.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:16:50 compute-0 ceph-mon[74327]: from='client.? 192.168.122.101:0/4057191728' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 10:16:50 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:16:50 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:16:50 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:16:50.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:16:50 compute-0 podman[279461]: 2025-12-06 10:16:50.51070904 +0000 UTC m=+0.128521831 container health_status ed08b04f63d3695f0aa2f04fcc706897af4aa24f286fa3cd10a4323ee2c868ab (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, org.label-schema.schema-version=1.0)
Dec 06 10:16:50 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:16:50] "GET /metrics HTTP/1.1" 200 48462 "" "Prometheus/2.51.0"
Dec 06 10:16:50 compute-0 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:16:50] "GET /metrics HTTP/1.1" 200 48462 "" "Prometheus/2.51.0"
Dec 06 10:16:51 compute-0 ceph-mon[74327]: pgmap v1073: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 op/s
Dec 06 10:16:51 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1074: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 op/s
Dec 06 10:16:51 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:16:51 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:16:51 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:16:51.957 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:16:51 compute-0 nova_compute[254819]: 2025-12-06 10:16:51.977 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:16:52 compute-0 ceph-mon[74327]: pgmap v1074: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 op/s
Dec 06 10:16:52 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:16:52 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 10:16:52 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:16:52.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 10:16:52 compute-0 nova_compute[254819]: 2025-12-06 10:16:52.132 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:16:52 compute-0 nova_compute[254819]: 2025-12-06 10:16:52.162 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:16:52 compute-0 nova_compute[254819]: 2025-12-06 10:16:52.162 254824 DEBUG nova.compute.manager [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 06 10:16:52 compute-0 nova_compute[254819]: 2025-12-06 10:16:52.162 254824 DEBUG nova.compute.manager [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 06 10:16:52 compute-0 nova_compute[254819]: 2025-12-06 10:16:52.182 254824 DEBUG nova.compute.manager [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 06 10:16:52 compute-0 nova_compute[254819]: 2025-12-06 10:16:52.182 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:16:52 compute-0 nova_compute[254819]: 2025-12-06 10:16:52.182 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:16:52 compute-0 nova_compute[254819]: 2025-12-06 10:16:52.183 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:16:52 compute-0 nova_compute[254819]: 2025-12-06 10:16:52.748 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:16:52 compute-0 nova_compute[254819]: 2025-12-06 10:16:52.748 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:16:52 compute-0 nova_compute[254819]: 2025-12-06 10:16:52.749 254824 DEBUG nova.compute.manager [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 06 10:16:53 compute-0 nova_compute[254819]: 2025-12-06 10:16:53.749 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:16:53 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1075: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 op/s
Dec 06 10:16:53 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:16:53 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:16:53 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:16:53.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:16:53 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 06 10:16:53 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 10:16:53 compute-0 nova_compute[254819]: 2025-12-06 10:16:53.990 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:16:54 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 10:16:54 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 10:16:54 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 10:16:54 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 10:16:54 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 10:16:54 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 10:16:54 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:16:54 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:16:54 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:16:54.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:16:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:16:54.247 162267 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:16:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:16:54.247 162267 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:16:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:16:54.247 162267 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:16:54 compute-0 sudo[279492]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 10:16:54 compute-0 sudo[279492]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:16:54 compute-0 sudo[279492]: pam_unix(sudo:session): session closed for user root
Dec 06 10:16:54 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:16:54 compute-0 ceph-mon[74327]: pgmap v1075: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 op/s
Dec 06 10:16:54 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 10:16:55 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1076: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 op/s
Dec 06 10:16:55 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:16:55 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:16:55 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:16:55.961 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:16:55 compute-0 ceph-mon[74327]: from='client.? 192.168.122.102:0/1188976781' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 10:16:56 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:16:56 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:16:56 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:16:56.094 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:16:56 compute-0 podman[279519]: 2025-12-06 10:16:56.45496373 +0000 UTC m=+0.077372341 container health_status ec1e66d2175087ad7a28a9a6b5a784e30128df3f5e268959d009e364cd5120f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 06 10:16:56 compute-0 nova_compute[254819]: 2025-12-06 10:16:56.979 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:16:56 compute-0 ceph-mon[74327]: pgmap v1076: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 op/s
Dec 06 10:16:57 compute-0 ceph-mon[74327]: from='client.? 192.168.122.102:0/3022049998' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 10:16:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:16:57.676Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 06 10:16:57 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1077: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec 06 10:16:57 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:16:57 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:16:57 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:16:57.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:16:58 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:16:58 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:16:58 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:16:58.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:16:58 compute-0 nova_compute[254819]: 2025-12-06 10:16:58.994 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:16:59 compute-0 ceph-mon[74327]: pgmap v1077: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec 06 10:16:59 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1078: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 op/s
Dec 06 10:16:59 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:16:59 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:16:59 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:16:59 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:16:59.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:17:00 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:17:00 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:17:00 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:17:00.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:17:00 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:17:00] "GET /metrics HTTP/1.1" 200 48459 "" "Prometheus/2.51.0"
Dec 06 10:17:00 compute-0 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:17:00] "GET /metrics HTTP/1.1" 200 48459 "" "Prometheus/2.51.0"
Dec 06 10:17:01 compute-0 ceph-mon[74327]: pgmap v1078: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 op/s
Dec 06 10:17:01 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1079: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 op/s
Dec 06 10:17:01 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:17:01 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:17:01 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:17:01.967 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:17:01 compute-0 nova_compute[254819]: 2025-12-06 10:17:01.984 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:17:02 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:17:02 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 10:17:02 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:17:02.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 10:17:03 compute-0 ceph-mon[74327]: pgmap v1079: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 op/s
Dec 06 10:17:03 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1080: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 op/s
Dec 06 10:17:03 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:17:03 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:17:03 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:17:03.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:17:03 compute-0 nova_compute[254819]: 2025-12-06 10:17:03.997 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:17:04 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:17:04 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:17:04 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:17:04.106 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:17:04 compute-0 ceph-mon[74327]: pgmap v1080: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 op/s
Dec 06 10:17:04 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-haproxy-nfs-cephfs-compute-0-fzuvue[96127]: [WARNING] 339/101704 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 0 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Dec 06 10:17:04 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-haproxy-nfs-cephfs-compute-0-fzuvue[96127]: [NOTICE] 339/101704 (4) : haproxy version is 2.3.17-d1c9119
Dec 06 10:17:04 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-haproxy-nfs-cephfs-compute-0-fzuvue[96127]: [NOTICE] 339/101704 (4) : path to executable is /usr/local/sbin/haproxy
Dec 06 10:17:04 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-haproxy-nfs-cephfs-compute-0-fzuvue[96127]: [ALERT] 339/101704 (4) : backend 'backend' has no server available!
Dec 06 10:17:04 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:17:05 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1081: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 op/s
Dec 06 10:17:05 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:17:05 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:17:05 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:17:05.972 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:17:06 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:17:06 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:17:06 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:17:06.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:17:06 compute-0 nova_compute[254819]: 2025-12-06 10:17:06.985 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:17:07 compute-0 ceph-mon[74327]: pgmap v1081: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 op/s
Dec 06 10:17:07 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:17:07.677Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 06 10:17:07 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1082: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 op/s
Dec 06 10:17:07 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:17:07 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:17:07 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:17:07.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:17:08 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:17:08 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:17:08 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:17:08.112 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:17:08 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 06 10:17:08 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 10:17:09 compute-0 nova_compute[254819]: 2025-12-06 10:17:09.001 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:17:09 compute-0 ceph-mon[74327]: pgmap v1082: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 op/s
Dec 06 10:17:09 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 10:17:09 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:17:09 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1083: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail
Dec 06 10:17:09 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:17:09 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:17:09 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:17:09.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:17:10 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:17:10 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:17:10 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:17:10.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:17:10 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:17:10] "GET /metrics HTTP/1.1" 200 48458 "" "Prometheus/2.51.0"
Dec 06 10:17:10 compute-0 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:17:10] "GET /metrics HTTP/1.1" 200 48458 "" "Prometheus/2.51.0"
Dec 06 10:17:11 compute-0 ceph-mon[74327]: pgmap v1083: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail
Dec 06 10:17:11 compute-0 sudo[279552]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 10:17:11 compute-0 sudo[279552]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:17:11 compute-0 sudo[279552]: pam_unix(sudo:session): session closed for user root
Dec 06 10:17:11 compute-0 sudo[279577]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Dec 06 10:17:11 compute-0 sudo[279577]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:17:11 compute-0 sudo[279577]: pam_unix(sudo:session): session closed for user root
Dec 06 10:17:11 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1084: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail
Dec 06 10:17:11 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:17:11 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:17:11 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:17:11.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:17:11 compute-0 nova_compute[254819]: 2025-12-06 10:17:11.988 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:17:12 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 06 10:17:12 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 10:17:12 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 06 10:17:12 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 10:17:12 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 06 10:17:12 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 10:17:12 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Dec 06 10:17:12 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 10:17:12 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 10:17:12 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 10:17:12 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 06 10:17:12 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 10:17:12 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 06 10:17:12 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 10:17:12 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 06 10:17:12 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 10:17:12 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:17:12 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:17:12 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:17:12.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:17:12 compute-0 sudo[279635]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 10:17:12 compute-0 sudo[279635]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:17:12 compute-0 sudo[279635]: pam_unix(sudo:session): session closed for user root
Dec 06 10:17:12 compute-0 sudo[279660]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 5ecd3f74-dade-5fc4-92ce-8950ae424258 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Dec 06 10:17:12 compute-0 sudo[279660]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:17:12 compute-0 podman[279725]: 2025-12-06 10:17:12.742918744 +0000 UTC m=+0.113861816 container create f25fb23215458ba5cf5eb8b1e7c22bcba892fec339caf143d178304b9c577d08 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_driscoll, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 10:17:12 compute-0 podman[279725]: 2025-12-06 10:17:12.672171944 +0000 UTC m=+0.043115096 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 06 10:17:12 compute-0 systemd[1]: Started libpod-conmon-f25fb23215458ba5cf5eb8b1e7c22bcba892fec339caf143d178304b9c577d08.scope.
Dec 06 10:17:12 compute-0 systemd[1]: Started libcrun container.
Dec 06 10:17:12 compute-0 podman[279725]: 2025-12-06 10:17:12.854900578 +0000 UTC m=+0.225843730 container init f25fb23215458ba5cf5eb8b1e7c22bcba892fec339caf143d178304b9c577d08 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_driscoll, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325)
Dec 06 10:17:12 compute-0 sshd-session[279739]: Accepted publickey for zuul from 192.168.122.10 port 44722 ssh2: ECDSA SHA256:r1j7aLsKAM+XxDNbzEU5vWGpGNCOaIBwc7FZdATPttA
Dec 06 10:17:12 compute-0 podman[279725]: 2025-12-06 10:17:12.865162145 +0000 UTC m=+0.236105247 container start f25fb23215458ba5cf5eb8b1e7c22bcba892fec339caf143d178304b9c577d08 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_driscoll, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325)
Dec 06 10:17:12 compute-0 podman[279725]: 2025-12-06 10:17:12.870653844 +0000 UTC m=+0.241597016 container attach f25fb23215458ba5cf5eb8b1e7c22bcba892fec339caf143d178304b9c577d08 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_driscoll, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec 06 10:17:12 compute-0 adoring_driscoll[279743]: 167 167
Dec 06 10:17:12 compute-0 podman[279725]: 2025-12-06 10:17:12.874155598 +0000 UTC m=+0.245098680 container died f25fb23215458ba5cf5eb8b1e7c22bcba892fec339caf143d178304b9c577d08 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_driscoll, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 10:17:12 compute-0 systemd-logind[795]: New session 56 of user zuul.
Dec 06 10:17:12 compute-0 systemd[1]: libpod-f25fb23215458ba5cf5eb8b1e7c22bcba892fec339caf143d178304b9c577d08.scope: Deactivated successfully.
Dec 06 10:17:12 compute-0 systemd[1]: Started Session 56 of User zuul.
Dec 06 10:17:12 compute-0 sshd-session[279739]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 06 10:17:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-9c55d6728a188dbcd5ea44547e53b3f3c67d8e275c197f3f5ac1c85367e52ab9-merged.mount: Deactivated successfully.
Dec 06 10:17:12 compute-0 podman[279725]: 2025-12-06 10:17:12.928172036 +0000 UTC m=+0.299115108 container remove f25fb23215458ba5cf5eb8b1e7c22bcba892fec339caf143d178304b9c577d08 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_driscoll, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 10:17:12 compute-0 systemd[1]: libpod-conmon-f25fb23215458ba5cf5eb8b1e7c22bcba892fec339caf143d178304b9c577d08.scope: Deactivated successfully.
Dec 06 10:17:13 compute-0 sudo[279764]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/bash -c 'rm -rf /var/tmp/sos-osp && mkdir /var/tmp/sos-osp && sos report --batch --all-logs --tmp-dir=/var/tmp/sos-osp  -p container,openstack_edpm,system,storage,virt'
Dec 06 10:17:13 compute-0 sudo[279764]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 10:17:13 compute-0 ceph-mon[74327]: pgmap v1084: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail
Dec 06 10:17:13 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 10:17:13 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 10:17:13 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 10:17:13 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 10:17:13 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 10:17:13 compute-0 podman[279796]: 2025-12-06 10:17:13.100865949 +0000 UTC m=+0.050173926 container create 3245e3b6601e60fe877ac5144bbf92e8a9ffd0b3af0624971334e459143ac711 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_lamport, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 10:17:13 compute-0 systemd[1]: Started libpod-conmon-3245e3b6601e60fe877ac5144bbf92e8a9ffd0b3af0624971334e459143ac711.scope.
Dec 06 10:17:13 compute-0 podman[279796]: 2025-12-06 10:17:13.0808941 +0000 UTC m=+0.030202057 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 06 10:17:13 compute-0 systemd[1]: Started libcrun container.
Dec 06 10:17:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db6acb06397ef6f4f17be03a34a67d3a8a01dc8854a255c124f70025d68d0574/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 10:17:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db6acb06397ef6f4f17be03a34a67d3a8a01dc8854a255c124f70025d68d0574/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 10:17:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db6acb06397ef6f4f17be03a34a67d3a8a01dc8854a255c124f70025d68d0574/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 10:17:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db6acb06397ef6f4f17be03a34a67d3a8a01dc8854a255c124f70025d68d0574/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 10:17:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db6acb06397ef6f4f17be03a34a67d3a8a01dc8854a255c124f70025d68d0574/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 06 10:17:13 compute-0 podman[279796]: 2025-12-06 10:17:13.220700695 +0000 UTC m=+0.170008652 container init 3245e3b6601e60fe877ac5144bbf92e8a9ffd0b3af0624971334e459143ac711 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_lamport, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 06 10:17:13 compute-0 podman[279796]: 2025-12-06 10:17:13.237049206 +0000 UTC m=+0.186357143 container start 3245e3b6601e60fe877ac5144bbf92e8a9ffd0b3af0624971334e459143ac711 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_lamport, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 10:17:13 compute-0 podman[279796]: 2025-12-06 10:17:13.24012285 +0000 UTC m=+0.189430787 container attach 3245e3b6601e60fe877ac5144bbf92e8a9ffd0b3af0624971334e459143ac711 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_lamport, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Dec 06 10:17:13 compute-0 nifty_lamport[279814]: --> passed data devices: 0 physical, 1 LVM
Dec 06 10:17:13 compute-0 nifty_lamport[279814]: --> All data devices are unavailable
Dec 06 10:17:13 compute-0 podman[279796]: 2025-12-06 10:17:13.620712625 +0000 UTC m=+0.570020562 container died 3245e3b6601e60fe877ac5144bbf92e8a9ffd0b3af0624971334e459143ac711 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_lamport, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Dec 06 10:17:13 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1085: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Dec 06 10:17:13 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:17:13 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:17:13 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:17:13.983 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:17:14 compute-0 systemd[1]: libpod-3245e3b6601e60fe877ac5144bbf92e8a9ffd0b3af0624971334e459143ac711.scope: Deactivated successfully.
Dec 06 10:17:14 compute-0 nova_compute[254819]: 2025-12-06 10:17:14.046 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:17:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-db6acb06397ef6f4f17be03a34a67d3a8a01dc8854a255c124f70025d68d0574-merged.mount: Deactivated successfully.
Dec 06 10:17:14 compute-0 podman[279796]: 2025-12-06 10:17:14.084851847 +0000 UTC m=+1.034159774 container remove 3245e3b6601e60fe877ac5144bbf92e8a9ffd0b3af0624971334e459143ac711 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_lamport, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Dec 06 10:17:14 compute-0 systemd[1]: libpod-conmon-3245e3b6601e60fe877ac5144bbf92e8a9ffd0b3af0624971334e459143ac711.scope: Deactivated successfully.
Dec 06 10:17:14 compute-0 ceph-mon[74327]: pgmap v1085: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Dec 06 10:17:14 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:17:14 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:17:14 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:17:14.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:17:14 compute-0 sudo[279660]: pam_unix(sudo:session): session closed for user root
Dec 06 10:17:14 compute-0 sudo[279866]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 10:17:14 compute-0 sudo[279866]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:17:14 compute-0 sudo[279866]: pam_unix(sudo:session): session closed for user root
Dec 06 10:17:14 compute-0 sudo[279901]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 5ecd3f74-dade-5fc4-92ce-8950ae424258 -- lvm list --format json
Dec 06 10:17:14 compute-0 sudo[279901]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:17:14 compute-0 podman[279995]: 2025-12-06 10:17:14.675532596 +0000 UTC m=+0.045796607 container create 78be7de6960486fb7baa48ad1e481a61cf9d920d89c5430c19fc6645679a17f9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_bhaskara, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 06 10:17:14 compute-0 systemd[1]: Started libpod-conmon-78be7de6960486fb7baa48ad1e481a61cf9d920d89c5430c19fc6645679a17f9.scope.
Dec 06 10:17:14 compute-0 systemd[1]: Started libcrun container.
Dec 06 10:17:14 compute-0 sudo[280019]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 10:17:14 compute-0 podman[279995]: 2025-12-06 10:17:14.65865806 +0000 UTC m=+0.028922071 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 06 10:17:14 compute-0 sudo[280019]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:17:14 compute-0 sudo[280019]: pam_unix(sudo:session): session closed for user root
Dec 06 10:17:14 compute-0 podman[279995]: 2025-12-06 10:17:14.762324369 +0000 UTC m=+0.132588400 container init 78be7de6960486fb7baa48ad1e481a61cf9d920d89c5430c19fc6645679a17f9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_bhaskara, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Dec 06 10:17:14 compute-0 podman[279995]: 2025-12-06 10:17:14.768893716 +0000 UTC m=+0.139157727 container start 78be7de6960486fb7baa48ad1e481a61cf9d920d89c5430c19fc6645679a17f9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_bhaskara, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Dec 06 10:17:14 compute-0 podman[279995]: 2025-12-06 10:17:14.772619838 +0000 UTC m=+0.142883849 container attach 78be7de6960486fb7baa48ad1e481a61cf9d920d89c5430c19fc6645679a17f9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_bhaskara, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 10:17:14 compute-0 crazy_bhaskara[280045]: 167 167
Dec 06 10:17:14 compute-0 systemd[1]: libpod-78be7de6960486fb7baa48ad1e481a61cf9d920d89c5430c19fc6645679a17f9.scope: Deactivated successfully.
Dec 06 10:17:14 compute-0 podman[279995]: 2025-12-06 10:17:14.773907462 +0000 UTC m=+0.144171473 container died 78be7de6960486fb7baa48ad1e481a61cf9d920d89c5430c19fc6645679a17f9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_bhaskara, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 10:17:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-a6493d543af632e5cf14a873aeb0887ce30f41e86e90be03efadd39b544a044a-merged.mount: Deactivated successfully.
Dec 06 10:17:14 compute-0 podman[279995]: 2025-12-06 10:17:14.810948162 +0000 UTC m=+0.181212183 container remove 78be7de6960486fb7baa48ad1e481a61cf9d920d89c5430c19fc6645679a17f9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_bhaskara, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 10:17:14 compute-0 systemd[1]: libpod-conmon-78be7de6960486fb7baa48ad1e481a61cf9d920d89c5430c19fc6645679a17f9.scope: Deactivated successfully.
Dec 06 10:17:14 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:17:15 compute-0 podman[280081]: 2025-12-06 10:17:15.008515527 +0000 UTC m=+0.059285282 container create 1cc3959496c5e7799cc22d6e30cf0438e2664ae3e81f9b706195b10bd48d4153 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_shaw, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec 06 10:17:15 compute-0 podman[280081]: 2025-12-06 10:17:14.971537739 +0000 UTC m=+0.022307484 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 06 10:17:15 compute-0 systemd[1]: Started libpod-conmon-1cc3959496c5e7799cc22d6e30cf0438e2664ae3e81f9b706195b10bd48d4153.scope.
Dec 06 10:17:15 compute-0 systemd[1]: Started libcrun container.
Dec 06 10:17:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee41b0d2f710803bfcf68d3d2d1a6b5207a36b2c9db0fbba0e1f6ac6c41ce7a5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 10:17:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee41b0d2f710803bfcf68d3d2d1a6b5207a36b2c9db0fbba0e1f6ac6c41ce7a5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 10:17:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee41b0d2f710803bfcf68d3d2d1a6b5207a36b2c9db0fbba0e1f6ac6c41ce7a5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 10:17:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee41b0d2f710803bfcf68d3d2d1a6b5207a36b2c9db0fbba0e1f6ac6c41ce7a5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 10:17:15 compute-0 podman[280081]: 2025-12-06 10:17:15.114599761 +0000 UTC m=+0.165369556 container init 1cc3959496c5e7799cc22d6e30cf0438e2664ae3e81f9b706195b10bd48d4153 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_shaw, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 10:17:15 compute-0 podman[280081]: 2025-12-06 10:17:15.121575029 +0000 UTC m=+0.172344774 container start 1cc3959496c5e7799cc22d6e30cf0438e2664ae3e81f9b706195b10bd48d4153 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_shaw, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 10:17:15 compute-0 podman[280081]: 2025-12-06 10:17:15.128197858 +0000 UTC m=+0.178967594 container attach 1cc3959496c5e7799cc22d6e30cf0438e2664ae3e81f9b706195b10bd48d4153 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_shaw, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Dec 06 10:17:15 compute-0 magical_shaw[280111]: {
Dec 06 10:17:15 compute-0 magical_shaw[280111]:     "1": [
Dec 06 10:17:15 compute-0 magical_shaw[280111]:         {
Dec 06 10:17:15 compute-0 magical_shaw[280111]:             "devices": [
Dec 06 10:17:15 compute-0 magical_shaw[280111]:                 "/dev/loop3"
Dec 06 10:17:15 compute-0 magical_shaw[280111]:             ],
Dec 06 10:17:15 compute-0 magical_shaw[280111]:             "lv_name": "ceph_lv0",
Dec 06 10:17:15 compute-0 magical_shaw[280111]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 10:17:15 compute-0 magical_shaw[280111]:             "lv_size": "21470642176",
Dec 06 10:17:15 compute-0 magical_shaw[280111]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=a55pcS-v3Vt-CJ4W-fqCV-Baze-9VlY-jG9prS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=5ecd3f74-dade-5fc4-92ce-8950ae424258,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=7899c4d8-edb4-4836-b838-c4aa702ad7af,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 06 10:17:15 compute-0 magical_shaw[280111]:             "lv_uuid": "a55pcS-v3Vt-CJ4W-fqCV-Baze-9VlY-jG9prS",
Dec 06 10:17:15 compute-0 magical_shaw[280111]:             "name": "ceph_lv0",
Dec 06 10:17:15 compute-0 magical_shaw[280111]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 10:17:15 compute-0 magical_shaw[280111]:             "tags": {
Dec 06 10:17:15 compute-0 magical_shaw[280111]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 06 10:17:15 compute-0 magical_shaw[280111]:                 "ceph.block_uuid": "a55pcS-v3Vt-CJ4W-fqCV-Baze-9VlY-jG9prS",
Dec 06 10:17:15 compute-0 magical_shaw[280111]:                 "ceph.cephx_lockbox_secret": "",
Dec 06 10:17:15 compute-0 magical_shaw[280111]:                 "ceph.cluster_fsid": "5ecd3f74-dade-5fc4-92ce-8950ae424258",
Dec 06 10:17:15 compute-0 magical_shaw[280111]:                 "ceph.cluster_name": "ceph",
Dec 06 10:17:15 compute-0 magical_shaw[280111]:                 "ceph.crush_device_class": "",
Dec 06 10:17:15 compute-0 magical_shaw[280111]:                 "ceph.encrypted": "0",
Dec 06 10:17:15 compute-0 magical_shaw[280111]:                 "ceph.osd_fsid": "7899c4d8-edb4-4836-b838-c4aa702ad7af",
Dec 06 10:17:15 compute-0 magical_shaw[280111]:                 "ceph.osd_id": "1",
Dec 06 10:17:15 compute-0 magical_shaw[280111]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 06 10:17:15 compute-0 magical_shaw[280111]:                 "ceph.type": "block",
Dec 06 10:17:15 compute-0 magical_shaw[280111]:                 "ceph.vdo": "0",
Dec 06 10:17:15 compute-0 magical_shaw[280111]:                 "ceph.with_tpm": "0"
Dec 06 10:17:15 compute-0 magical_shaw[280111]:             },
Dec 06 10:17:15 compute-0 magical_shaw[280111]:             "type": "block",
Dec 06 10:17:15 compute-0 magical_shaw[280111]:             "vg_name": "ceph_vg0"
Dec 06 10:17:15 compute-0 magical_shaw[280111]:         }
Dec 06 10:17:15 compute-0 magical_shaw[280111]:     ]
Dec 06 10:17:15 compute-0 magical_shaw[280111]: }
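The JSON block above is the stdout of `ceph-volume lvm list --format json`, run by cephadm inside the short-lived magical_shaw container: a map from OSD id ("1") to the logical volumes backing it, carrying the LVM tags cephadm uses to reassemble the OSD. A minimal sketch of consuming that report (the field names are taken from the log; the function itself is illustrative, not cephadm's own code):

    import json

    def osds_from_lvm_list(raw: str) -> dict[int, dict]:
        """Map OSD id -> block LV details from `ceph-volume lvm list --format json`."""
        osds = {}
        for osd_id, lvs in json.loads(raw).items():
            for lv in lvs:
                tags = lv.get("tags", {})
                if tags.get("ceph.type") == "block":    # the block-device LV, as tagged above
                    osds[int(osd_id)] = {
                        "lv_path": lv["lv_path"],          # /dev/ceph_vg0/ceph_lv0
                        "devices": lv.get("devices", []),  # ["/dev/loop3"]
                        "osd_fsid": tags.get("ceph.osd_fsid", ""),
                    }
        return osds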
Dec 06 10:17:15 compute-0 systemd[1]: libpod-1cc3959496c5e7799cc22d6e30cf0438e2664ae3e81f9b706195b10bd48d4153.scope: Deactivated successfully.
Dec 06 10:17:15 compute-0 podman[280081]: 2025-12-06 10:17:15.428799875 +0000 UTC m=+0.479569600 container died 1cc3959496c5e7799cc22d6e30cf0438e2664ae3e81f9b706195b10bd48d4153 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_shaw, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 10:17:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-ee41b0d2f710803bfcf68d3d2d1a6b5207a36b2c9db0fbba0e1f6ac6c41ce7a5-merged.mount: Deactivated successfully.
Dec 06 10:17:15 compute-0 podman[280081]: 2025-12-06 10:17:15.46714373 +0000 UTC m=+0.517913445 container remove 1cc3959496c5e7799cc22d6e30cf0438e2664ae3e81f9b706195b10bd48d4153 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_shaw, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 10:17:15 compute-0 systemd[1]: libpod-conmon-1cc3959496c5e7799cc22d6e30cf0438e2664ae3e81f9b706195b10bd48d4153.scope: Deactivated successfully.
Dec 06 10:17:15 compute-0 sudo[279901]: pam_unix(sudo:session): session closed for user root
Dec 06 10:17:15 compute-0 sudo[280178]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 10:17:15 compute-0 sudo[280178]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:17:15 compute-0 sudo[280178]: pam_unix(sudo:session): session closed for user root
Dec 06 10:17:15 compute-0 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.25475 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:17:15 compute-0 sudo[280203]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 5ecd3f74-dade-5fc4-92ce-8950ae424258 -- raw list --format json
Dec 06 10:17:15 compute-0 sudo[280203]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
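The sudo entry above records the exact command the cephadm mgr module ships to this host: the copied cephadm script run under python3 with a ~15-minute timeout, delegating to `ceph-volume raw list`. A sketch of issuing the same call from Python (the command line is copied verbatim from the audit entry; the wrapper function and its use of subprocess are assumptions):

    import json
    import subprocess

    def cephadm_raw_list(cephadm_path: str, image: str, fsid: str) -> dict:
        cmd = [
            "sudo", "/bin/python3", cephadm_path,
            "--image", image, "--timeout", "895",
            "ceph-volume", "--fsid", fsid, "--",
            "raw", "list", "--format", "json",
        ]
        out = subprocess.run(cmd, check=True, capture_output=True, text=True).stdout
        return json.loads(out)   # prints "{}" on this host, as logged below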
Dec 06 10:17:15 compute-0 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.17010 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:17:15 compute-0 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.17016 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:17:15 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1086: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Dec 06 10:17:15 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:17:15 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:17:15 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:17:15.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
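The recurring anonymous "HEAD / HTTP/1.0" requests answered with 200 are external health probes against the RGW beast frontend, arriving roughly every 2 s from 192.168.122.100 and .102. A sketch of the same probe (the target host and port are assumptions; the log records only the probing clients and the path):

    import http.client

    # Anonymous HEAD probe, mirroring the beast access-log entries above.
    conn = http.client.HTTPConnection("compute-0", 8080, timeout=5)
    conn.request("HEAD", "/")
    print(conn.getresponse().status)   # 200 while RGW is serving
    conn.close()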
Dec 06 10:17:16 compute-0 podman[280275]: 2025-12-06 10:17:16.008775194 +0000 UTC m=+0.052379015 container create 95700acd1d986409e7fc0af29862d9d9c024ff7fb9dc25f5ce520a5b9f97cddc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_poincare, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Dec 06 10:17:16 compute-0 systemd[1]: Started libpod-conmon-95700acd1d986409e7fc0af29862d9d9c024ff7fb9dc25f5ce520a5b9f97cddc.scope.
Dec 06 10:17:16 compute-0 systemd[1]: Started libcrun container.
Dec 06 10:17:16 compute-0 podman[280275]: 2025-12-06 10:17:15.99156127 +0000 UTC m=+0.035165121 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 06 10:17:16 compute-0 podman[280275]: 2025-12-06 10:17:16.099181875 +0000 UTC m=+0.142785746 container init 95700acd1d986409e7fc0af29862d9d9c024ff7fb9dc25f5ce520a5b9f97cddc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_poincare, io.buildah.version=1.40.1, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Dec 06 10:17:16 compute-0 podman[280275]: 2025-12-06 10:17:16.10639291 +0000 UTC m=+0.149996751 container start 95700acd1d986409e7fc0af29862d9d9c024ff7fb9dc25f5ce520a5b9f97cddc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_poincare, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Dec 06 10:17:16 compute-0 podman[280275]: 2025-12-06 10:17:16.110733157 +0000 UTC m=+0.154336988 container attach 95700acd1d986409e7fc0af29862d9d9c024ff7fb9dc25f5ce520a5b9f97cddc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_poincare, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 10:17:16 compute-0 elated_poincare[280311]: 167 167
Dec 06 10:17:16 compute-0 systemd[1]: libpod-95700acd1d986409e7fc0af29862d9d9c024ff7fb9dc25f5ce520a5b9f97cddc.scope: Deactivated successfully.
Dec 06 10:17:16 compute-0 podman[280275]: 2025-12-06 10:17:16.112883155 +0000 UTC m=+0.156486986 container died 95700acd1d986409e7fc0af29862d9d9c024ff7fb9dc25f5ce520a5b9f97cddc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_poincare, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec 06 10:17:16 compute-0 podman[280308]: 2025-12-06 10:17:16.119829553 +0000 UTC m=+0.070704821 container health_status a9cf33c1bf7891b3fdd9db4717ad5f3587fe9e5acf9171920c6d6334b70a502a (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
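Note that the config_data field in the multipathd health_status event above is a Python dict literal (single quotes, bare True), not JSON, so ast.literal_eval parses it where json.loads would fail. A sketch, with the payload abbreviated to fields that appear verbatim in the log:

    import ast

    event_config_data = (
        "{'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, "
        "'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', "
        "'test': '/openstack/healthcheck'}, 'net': 'host', 'privileged': True}"
    )
    config = ast.literal_eval(event_config_data)
    print(config["healthcheck"]["test"])   # /openstack/healthcheck
    print(config["privileged"])            # True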
Dec 06 10:17:16 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:17:16 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:17:16 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:17:16.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:17:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-e1221786c2bf1df9f41a96794a794db18d5744c68accfdc65422246a993177e6-merged.mount: Deactivated successfully.
Dec 06 10:17:16 compute-0 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.25484 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:17:16 compute-0 podman[280275]: 2025-12-06 10:17:16.1560451 +0000 UTC m=+0.199648931 container remove 95700acd1d986409e7fc0af29862d9d9c024ff7fb9dc25f5ce520a5b9f97cddc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_poincare, org.label-schema.build-date=20250325, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec 06 10:17:16 compute-0 systemd[1]: libpod-conmon-95700acd1d986409e7fc0af29862d9d9c024ff7fb9dc25f5ce520a5b9f97cddc.scope: Deactivated successfully.
Dec 06 10:17:16 compute-0 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.26341 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:17:16 compute-0 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.17028 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:17:16 compute-0 podman[280357]: 2025-12-06 10:17:16.367713206 +0000 UTC m=+0.047571186 container create 171552d28f1d7ba5a0ba146794e397e97bbc0162793eb53b5c121dd3d475b76d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_mayer, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 10:17:16 compute-0 systemd[1]: Started libpod-conmon-171552d28f1d7ba5a0ba146794e397e97bbc0162793eb53b5c121dd3d475b76d.scope.
Dec 06 10:17:16 compute-0 podman[280357]: 2025-12-06 10:17:16.349970797 +0000 UTC m=+0.029828827 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 06 10:17:16 compute-0 systemd[1]: Started libcrun container.
Dec 06 10:17:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e755c6c6bea820afc6d081b53f5851a9b2d868b748635dec23b51762f43aed17/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 10:17:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e755c6c6bea820afc6d081b53f5851a9b2d868b748635dec23b51762f43aed17/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 10:17:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e755c6c6bea820afc6d081b53f5851a9b2d868b748635dec23b51762f43aed17/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 10:17:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e755c6c6bea820afc6d081b53f5851a9b2d868b748635dec23b51762f43aed17/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 10:17:16 compute-0 podman[280357]: 2025-12-06 10:17:16.478720623 +0000 UTC m=+0.158578633 container init 171552d28f1d7ba5a0ba146794e397e97bbc0162793eb53b5c121dd3d475b76d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_mayer, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec 06 10:17:16 compute-0 podman[280357]: 2025-12-06 10:17:16.48528795 +0000 UTC m=+0.165145940 container start 171552d28f1d7ba5a0ba146794e397e97bbc0162793eb53b5c121dd3d475b76d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_mayer, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, ceph=True, org.label-schema.vendor=CentOS)
Dec 06 10:17:16 compute-0 podman[280357]: 2025-12-06 10:17:16.505588808 +0000 UTC m=+0.185446818 container attach 171552d28f1d7ba5a0ba146794e397e97bbc0162793eb53b5c121dd3d475b76d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_mayer, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325)
Dec 06 10:17:16 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "status"} v 0)
Dec 06 10:17:16 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3306672627' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Dec 06 10:17:16 compute-0 ceph-mon[74327]: from='client.25475 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:17:16 compute-0 ceph-mon[74327]: from='client.17010 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:17:16 compute-0 ceph-mon[74327]: from='client.17016 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:17:16 compute-0 ceph-mon[74327]: pgmap v1086: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Dec 06 10:17:16 compute-0 ceph-mon[74327]: from='client.? 192.168.122.101:0/3986376339' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Dec 06 10:17:16 compute-0 ceph-mon[74327]: from='client.? 192.168.122.100:0/3306672627' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Dec 06 10:17:16 compute-0 ceph-mon[74327]: from='client.? 192.168.122.102:0/1630414090' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Dec 06 10:17:16 compute-0 nova_compute[254819]: 2025-12-06 10:17:16.991 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:17:17 compute-0 lvm[280491]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 06 10:17:17 compute-0 lvm[280491]: VG ceph_vg0 finished
Dec 06 10:17:17 compute-0 gracious_mayer[280392]: {}
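gracious_mayer's entire output is the empty object above: `ceph-volume raw list` found no raw-mode (non-LVM) OSDs on this host, consistent with the LVM-backed OSD reported at 10:17:15. A trivial guard over that result (the fallback message is illustrative):

    import json

    raw_report = json.loads("{}")   # the container's whole stdout, as logged
    if not raw_report:
        print("no raw-mode OSDs here; use the `lvm list` report instead")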
Dec 06 10:17:17 compute-0 systemd[1]: libpod-171552d28f1d7ba5a0ba146794e397e97bbc0162793eb53b5c121dd3d475b76d.scope: Deactivated successfully.
Dec 06 10:17:17 compute-0 podman[280357]: 2025-12-06 10:17:17.178691933 +0000 UTC m=+0.858549923 container died 171552d28f1d7ba5a0ba146794e397e97bbc0162793eb53b5c121dd3d475b76d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_mayer, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 10:17:17 compute-0 systemd[1]: libpod-171552d28f1d7ba5a0ba146794e397e97bbc0162793eb53b5c121dd3d475b76d.scope: Consumed 1.146s CPU time.
Dec 06 10:17:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-e755c6c6bea820afc6d081b53f5851a9b2d868b748635dec23b51762f43aed17-merged.mount: Deactivated successfully.
Dec 06 10:17:17 compute-0 podman[280357]: 2025-12-06 10:17:17.407455079 +0000 UTC m=+1.087313069 container remove 171552d28f1d7ba5a0ba146794e397e97bbc0162793eb53b5c121dd3d475b76d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_mayer, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Dec 06 10:17:17 compute-0 systemd[1]: libpod-conmon-171552d28f1d7ba5a0ba146794e397e97bbc0162793eb53b5c121dd3d475b76d.scope: Deactivated successfully.
Dec 06 10:17:17 compute-0 sudo[280203]: pam_unix(sudo:session): session closed for user root
Dec 06 10:17:17 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 06 10:17:17 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 10:17:17 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 06 10:17:17 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 10:17:17 compute-0 sudo[280526]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 10:17:17 compute-0 sudo[280526]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:17:17 compute-0 sudo[280526]: pam_unix(sudo:session): session closed for user root
Dec 06 10:17:17 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:17:17.678Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
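The alertmanager dispatcher error shows both dashboard webhook receivers timing out ("context deadline exceeded") when POSTing to compute-1/compute-2 on port 8443. A sketch that probes one of the logged URLs the same way, to separate a down endpoint from a slow one (the 5 s timeout is an illustrative stand-in for Alertmanager's notification deadline):

    import urllib.request

    url = "http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver"
    req = urllib.request.Request(url, data=b"{}", method="POST",
                                 headers={"Content-Type": "application/json"})
    try:
        with urllib.request.urlopen(req, timeout=5) as resp:
            print("reachable:", resp.status)
    except OSError as exc:   # URLError and socket timeouts both derive from OSError
        print("unreachable:", exc)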
Dec 06 10:17:17 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1087: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 255 B/s wr, 1 op/s
Dec 06 10:17:17 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:17:17 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 10:17:17 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:17:17.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 10:17:18 compute-0 ceph-mon[74327]: from='client.25484 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:17:18 compute-0 ceph-mon[74327]: from='client.26341 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:17:18 compute-0 ceph-mon[74327]: from='client.17028 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:17:18 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 10:17:18 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 10:17:18 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:17:18 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:17:18 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:17:18.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:17:19 compute-0 ceph-mon[74327]: pgmap v1087: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 255 B/s wr, 1 op/s
Dec 06 10:17:19 compute-0 nova_compute[254819]: 2025-12-06 10:17:19.051 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:17:19 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:17:19 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1088: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 255 B/s wr, 1 op/s
Dec 06 10:17:19 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:17:19 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:17:19 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:17:19.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:17:20 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:17:20 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:17:20 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:17:20.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:17:20 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:17:20] "GET /metrics HTTP/1.1" 200 48458 "" "Prometheus/2.51.0"
Dec 06 10:17:20 compute-0 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:17:20] "GET /metrics HTTP/1.1" 200 48458 "" "Prometheus/2.51.0"
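Both lines record the same event from two vantage points: Prometheus 2.51.0 scraping the active mgr's prometheus module (48458 bytes of metrics, HTTP 200), once via the container's stdout and once via the mgr's cherrypy access log. A minimal scrape of the same endpoint (port 9283 is the module's default and an assumption here; the log shows only the path):

    import urllib.request

    with urllib.request.urlopen("http://192.168.122.100:9283/metrics", timeout=5) as r:
        first = r.read().decode().splitlines()[0]
    print(first)   # e.g. a "# HELP ..." exposition line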
Dec 06 10:17:21 compute-0 ceph-mon[74327]: pgmap v1088: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 255 B/s wr, 1 op/s
Dec 06 10:17:21 compute-0 podman[280625]: 2025-12-06 10:17:21.216622678 +0000 UTC m=+0.106052835 container health_status ed08b04f63d3695f0aa2f04fcc706897af4aa24f286fa3cd10a4323ee2c868ab (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec 06 10:17:21 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1089: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 255 B/s wr, 1 op/s
Dec 06 10:17:21 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:17:21 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:17:21 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:17:21.992 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:17:21 compute-0 nova_compute[254819]: 2025-12-06 10:17:21.993 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:17:22 compute-0 ovs-vsctl[280679]: ovs|00001|db_ctl_base|ERR|no key "dpdk-init" in Open_vSwitch record "." column other_config
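The db_ctl_base ERR above is a benign side effect of a periodic check reading other_config:dpdk-init on a host where DPDK was never enabled; a bare `ovs-vsctl get` on a missing key logs this error. Passing --if-exists returns an empty result instead, as in this sketch:

    import subprocess

    # `--if-exists` suppresses the "no key ... in Open_vSwitch record" error
    # when the column key is absent.
    out = subprocess.run(
        ["ovs-vsctl", "--if-exists", "get", "Open_vSwitch", ".",
         "other_config:dpdk-init"],
        capture_output=True, text=True, check=True,
    ).stdout.strip()
    print("dpdk-init =", out or "<unset>")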
Dec 06 10:17:22 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:17:22 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 10:17:22 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:17:22.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 10:17:23 compute-0 virtqemud[254445]: Failed to connect socket to '/var/run/libvirt/virtnetworkd-sock-ro': No such file or directory
Dec 06 10:17:23 compute-0 virtqemud[254445]: Failed to connect socket to '/var/run/libvirt/virtnwfilterd-sock-ro': No such file or directory
Dec 06 10:17:23 compute-0 ceph-mon[74327]: pgmap v1089: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 255 B/s wr, 1 op/s
Dec 06 10:17:23 compute-0 virtqemud[254445]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Dec 06 10:17:23 compute-0 ceph-mds[95272]: mds.cephfs.compute-0.ujokui asok_command: cache status {prefix=cache status} (starting...)
Dec 06 10:17:23 compute-0 ceph-mds[95272]: mds.cephfs.compute-0.ujokui Can't run that command on an inactive MDS!
Dec 06 10:17:23 compute-0 ceph-mds[95272]: mds.cephfs.compute-0.ujokui asok_command: client ls {prefix=client ls} (starting...)
Dec 06 10:17:23 compute-0 ceph-mds[95272]: mds.cephfs.compute-0.ujokui Can't run that command on an inactive MDS!
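These rejections (and the identical ones through 10:17:26) come from a telemetry sweep issuing rank-level admin-socket commands to an MDS that is standby for cephfs; only daemon-level commands such as `status` succeed on an inactive MDS, which is why the `status` call at 10:17:26 below draws no complaint. A sketch that checks the state first (the socket path is an assumption built from the daemon name in the log):

    import json
    import subprocess

    SOCK = "/var/run/ceph/ceph-mds.cephfs.compute-0.ujokui.asok"  # assumed path

    def asok(*cmd: str):
        out = subprocess.run(["ceph", "--admin-daemon", SOCK, *cmd],
                             capture_output=True, text=True, check=True).stdout
        return json.loads(out)

    state = asok("status").get("state", "")
    if state == "up:active":
        print(asok("client", "ls"))
    else:
        print(f"MDS is {state or 'inactive'}; skipping rank-only commands")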
Dec 06 10:17:23 compute-0 lvm[281019]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 06 10:17:23 compute-0 lvm[281019]: VG ceph_vg0 finished
Dec 06 10:17:23 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1090: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 255 B/s wr, 1 op/s
Dec 06 10:17:23 compute-0 ceph-mgr[74618]: [balancer INFO root] Optimize plan auto_2025-12-06_10:17:23
Dec 06 10:17:23 compute-0 ceph-mgr[74618]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 06 10:17:23 compute-0 ceph-mgr[74618]: [balancer INFO root] do_upmap
Dec 06 10:17:23 compute-0 ceph-mgr[74618]: [balancer INFO root] pools ['.nfs', 'default.rgw.control', '.rgw.root', 'default.rgw.log', 'default.rgw.meta', 'vms', 'cephfs.cephfs.meta', 'volumes', 'images', '.mgr', 'backups', 'cephfs.cephfs.data']
Dec 06 10:17:23 compute-0 ceph-mgr[74618]: [balancer INFO root] prepared 0/10 upmap changes
Dec 06 10:17:23 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 06 10:17:23 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 10:17:23 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:17:23 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:17:23 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:17:23.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:17:24 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 10:17:24 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 10:17:24 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 10:17:24 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 10:17:24 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 10:17:24 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 10:17:24 compute-0 nova_compute[254819]: 2025-12-06 10:17:24.054 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:17:24 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:17:24 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:17:24 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:17:24.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:17:24 compute-0 ceph-mon[74327]: pgmap v1090: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 255 B/s wr, 1 op/s
Dec 06 10:17:24 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 10:17:24 compute-0 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.26362 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:17:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] _maybe_adjust
Dec 06 10:17:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:17:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 06 10:17:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:17:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec 06 10:17:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:17:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 10:17:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:17:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 10:17:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:17:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Dec 06 10:17:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:17:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Dec 06 10:17:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:17:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 10:17:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:17:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec 06 10:17:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:17:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 06 10:17:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:17:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec 06 10:17:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:17:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 10:17:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:17:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
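The pg_autoscaler lines above encode a computation the logged numbers fully determine: pg target = capacity ratio * bias * budget, with budget of about 300 on this cluster (7.185749983720779e-06 * 1.0 * 300 gives exactly the logged 0.0021557..., and 5.087256625643029e-07 * 4.0 * 300 gives the logged 0.0006104...), then rounding to a power of two, with tiny targets left at the pool's current pg_num. A sketch with budget inferred from the log rather than read from configuration (Ceph derives it from mon_target_pg_per_osd and the OSD/replica counts):

    def pg_target(usage_ratio: float, bias: float, current_pg: int,
                  budget: float = 300.0) -> tuple[float, int]:
        """Reproduce the 'pg target ... quantized to ... (current ...)' lines."""
        raw = usage_ratio * bias * budget
        q = 1
        while q < raw:          # round up to a power of two
            q *= 2
        # the autoscaler only moves pg_num on a large change, so tiny raw
        # targets stay at the current value, e.g. "quantized to 32 (current 32)"
        final = q if q >= current_pg else current_pg
        return raw, final

    print(pg_target(7.185749983720779e-06, 1.0, 1))    # (0.0021557..., 1)
    print(pg_target(0.000665858301588852, 1.0, 32))    # (0.19975..., 32)
    print(pg_target(5.087256625643029e-07, 4.0, 16))   # (0.00061..., 16)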
Dec 06 10:17:24 compute-0 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.17049 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:17:24 compute-0 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.25508 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:17:24 compute-0 ceph-mds[95272]: mds.cephfs.compute-0.ujokui asok_command: damage ls {prefix=damage ls} (starting...)
Dec 06 10:17:24 compute-0 ceph-mds[95272]: mds.cephfs.compute-0.ujokui Can't run that command on an inactive MDS!
Dec 06 10:17:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 06 10:17:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 10:17:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 10:17:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 10:17:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 10:17:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 06 10:17:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 10:17:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 10:17:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 10:17:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 10:17:24 compute-0 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.26380 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:17:24 compute-0 ceph-mds[95272]: mds.cephfs.compute-0.ujokui asok_command: dump loads {prefix=dump loads} (starting...)
Dec 06 10:17:24 compute-0 ceph-mds[95272]: mds.cephfs.compute-0.ujokui Can't run that command on an inactive MDS!
Dec 06 10:17:24 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "report"} v 0)
Dec 06 10:17:24 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? ' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Dec 06 10:17:24 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "report"} v 0)
Dec 06 10:17:24 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2295855214' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Dec 06 10:17:24 compute-0 ceph-mds[95272]: mds.cephfs.compute-0.ujokui asok_command: dump tree {prefix=dump tree,root=/} (starting...)
Dec 06 10:17:24 compute-0 ceph-mds[95272]: mds.cephfs.compute-0.ujokui Can't run that command on an inactive MDS!
Dec 06 10:17:24 compute-0 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.17073 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:17:24 compute-0 ceph-mds[95272]: mds.cephfs.compute-0.ujokui asok_command: dump_blocked_ops {prefix=dump_blocked_ops} (starting...)
Dec 06 10:17:24 compute-0 ceph-mds[95272]: mds.cephfs.compute-0.ujokui Can't run that command on an inactive MDS!
Dec 06 10:17:24 compute-0 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.25523 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:17:24 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "report"} v 0)
Dec 06 10:17:24 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/439237568' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Dec 06 10:17:24 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:17:24 compute-0 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.26395 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:17:24 compute-0 ceph-mds[95272]: mds.cephfs.compute-0.ujokui asok_command: dump_historic_ops {prefix=dump_historic_ops} (starting...)
Dec 06 10:17:24 compute-0 ceph-mds[95272]: mds.cephfs.compute-0.ujokui Can't run that command on an inactive MDS!
Dec 06 10:17:25 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 06 10:17:25 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1092890167' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 10:17:25 compute-0 ceph-mds[95272]: mds.cephfs.compute-0.ujokui asok_command: dump_historic_ops_by_duration {prefix=dump_historic_ops_by_duration} (starting...)
Dec 06 10:17:25 compute-0 ceph-mds[95272]: mds.cephfs.compute-0.ujokui Can't run that command on an inactive MDS!
Dec 06 10:17:25 compute-0 ceph-mon[74327]: from='client.26362 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:17:25 compute-0 ceph-mon[74327]: from='client.17049 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:17:25 compute-0 ceph-mon[74327]: from='client.25508 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:17:25 compute-0 ceph-mon[74327]: from='client.26380 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:17:25 compute-0 ceph-mon[74327]: from='client.? ' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Dec 06 10:17:25 compute-0 ceph-mon[74327]: from='client.? 192.168.122.102:0/1668178014' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Dec 06 10:17:25 compute-0 ceph-mon[74327]: from='client.? 192.168.122.100:0/2295855214' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Dec 06 10:17:25 compute-0 ceph-mon[74327]: from='client.17073 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:17:25 compute-0 ceph-mon[74327]: from='client.25523 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:17:25 compute-0 ceph-mon[74327]: from='client.? 192.168.122.101:0/439237568' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Dec 06 10:17:25 compute-0 ceph-mon[74327]: from='client.26395 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:17:25 compute-0 ceph-mon[74327]: from='client.? 192.168.122.102:0/644308478' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 10:17:25 compute-0 ceph-mon[74327]: from='client.? 192.168.122.100:0/1092890167' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 10:17:25 compute-0 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.17091 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:17:25 compute-0 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.25532 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:17:25 compute-0 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.26410 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:17:25 compute-0 ceph-mds[95272]: mds.cephfs.compute-0.ujokui asok_command: dump_ops_in_flight {prefix=dump_ops_in_flight} (starting...)
Dec 06 10:17:25 compute-0 ceph-mds[95272]: mds.cephfs.compute-0.ujokui Can't run that command on an inactive MDS!
Dec 06 10:17:25 compute-0 ceph-mds[95272]: mds.cephfs.compute-0.ujokui asok_command: get subtrees {prefix=get subtrees} (starting...)
Dec 06 10:17:25 compute-0 ceph-mds[95272]: mds.cephfs.compute-0.ujokui Can't run that command on an inactive MDS!
Dec 06 10:17:25 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config log"} v 0)
Dec 06 10:17:25 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2707223938' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Dec 06 10:17:25 compute-0 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.17106 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:17:25 compute-0 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.25556 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:17:25 compute-0 ceph-mds[95272]: mds.cephfs.compute-0.ujokui asok_command: ops {prefix=ops} (starting...)
Dec 06 10:17:25 compute-0 ceph-mds[95272]: mds.cephfs.compute-0.ujokui Can't run that command on an inactive MDS!
Dec 06 10:17:25 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1091: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 170 B/s wr, 0 op/s
Dec 06 10:17:25 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:17:25 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:17:25 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:17:25.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:17:26 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config-key dump"} v 0)
Dec 06 10:17:26 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1289716866' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Dec 06 10:17:26 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:17:26 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:17:26 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:17:26.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:17:26 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "log last", "channel": "cephadm"} v 0)
Dec 06 10:17:26 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3469772445' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Dec 06 10:17:26 compute-0 ceph-mon[74327]: from='client.17091 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:17:26 compute-0 ceph-mon[74327]: from='client.25532 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:17:26 compute-0 ceph-mon[74327]: from='client.? 192.168.122.101:0/1011187621' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 10:17:26 compute-0 ceph-mon[74327]: from='client.26410 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:17:26 compute-0 ceph-mon[74327]: from='client.? 192.168.122.102:0/3220109428' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Dec 06 10:17:26 compute-0 ceph-mon[74327]: from='client.? 192.168.122.100:0/2707223938' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Dec 06 10:17:26 compute-0 ceph-mon[74327]: from='client.? 192.168.122.101:0/901373131' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Dec 06 10:17:26 compute-0 ceph-mon[74327]: from='client.17106 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:17:26 compute-0 ceph-mon[74327]: from='client.25556 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:17:26 compute-0 ceph-mon[74327]: from='client.? 192.168.122.102:0/3753402528' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Dec 06 10:17:26 compute-0 ceph-mon[74327]: from='client.? 192.168.122.102:0/3340143316' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Dec 06 10:17:26 compute-0 ceph-mon[74327]: pgmap v1091: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 170 B/s wr, 0 op/s
Dec 06 10:17:26 compute-0 ceph-mon[74327]: from='client.? 192.168.122.100:0/1289716866' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Dec 06 10:17:26 compute-0 ceph-mon[74327]: from='client.? 192.168.122.100:0/3469772445' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
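Each audit line above embeds the dispatched command as a JSON array after cmd=. When mining these journals, the payload can be recovered mechanically; a small parsing sketch over one of the lines above:

    import json, re

    line = ("Dec 06 10:17:26 compute-0 ceph-mon[74327]: from='client.? "
            "192.168.122.100:0/1289716866' entity='client.admin' "
            'cmd=[{"prefix": "config-key dump"}]: dispatch')

    # The command payload sits between "cmd=" and the trailing ": dispatch".
    m = re.search(r"cmd=(\[.*\]): dispatch$", line)
    if m:
        cmd = json.loads(m.group(1))
        print(cmd[0]["prefix"])  # -> config-key dump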
Dec 06 10:17:26 compute-0 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.26443 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:17:26 compute-0 ceph-mds[95272]: mds.cephfs.compute-0.ujokui asok_command: session ls {prefix=session ls} (starting...)
Dec 06 10:17:26 compute-0 ceph-mds[95272]: mds.cephfs.compute-0.ujokui Can't run that command on an inactive MDS!
Dec 06 10:17:26 compute-0 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.17148 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:17:26 compute-0 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.25586 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:17:26 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr dump"} v 0)
Dec 06 10:17:26 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3669643766' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Dec 06 10:17:26 compute-0 ceph-mds[95272]: mds.cephfs.compute-0.ujokui asok_command: status {prefix=status} (starting...)
Dec 06 10:17:26 compute-0 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.26467 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:17:26 compute-0 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.17172 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:17:26 compute-0 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.25592 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:17:26 compute-0 nova_compute[254819]: 2025-12-06 10:17:26.995 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:17:27 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata"} v 0)
Dec 06 10:17:27 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/187974181' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Dec 06 10:17:27 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "features"} v 0)
Dec 06 10:17:27 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? ' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Dec 06 10:17:27 compute-0 ceph-mon[74327]: from='client.? 192.168.122.101:0/675051402' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Dec 06 10:17:27 compute-0 ceph-mon[74327]: from='client.? 192.168.122.101:0/2227707575' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Dec 06 10:17:27 compute-0 ceph-mon[74327]: from='client.? 192.168.122.102:0/741915433' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Dec 06 10:17:27 compute-0 ceph-mon[74327]: from='client.26443 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:17:27 compute-0 ceph-mon[74327]: from='client.17148 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:17:27 compute-0 ceph-mon[74327]: from='client.25586 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:17:27 compute-0 ceph-mon[74327]: from='client.? 192.168.122.100:0/3669643766' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Dec 06 10:17:27 compute-0 ceph-mon[74327]: from='client.? 192.168.122.101:0/430822803' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Dec 06 10:17:27 compute-0 ceph-mon[74327]: from='client.26467 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:17:27 compute-0 ceph-mon[74327]: from='client.? 192.168.122.102:0/4128122182' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Dec 06 10:17:27 compute-0 ceph-mon[74327]: from='client.17172 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:17:27 compute-0 ceph-mon[74327]: from='client.25592 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:17:27 compute-0 ceph-mon[74327]: from='client.? 192.168.122.101:0/3824211357' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Dec 06 10:17:27 compute-0 ceph-mon[74327]: from='client.? 192.168.122.100:0/187974181' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Dec 06 10:17:27 compute-0 ceph-mon[74327]: from='client.? ' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Dec 06 10:17:27 compute-0 ceph-mon[74327]: from='client.? 192.168.122.102:0/2688023797' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Dec 06 10:17:27 compute-0 ceph-mon[74327]: from='client.? 192.168.122.102:0/3176248467' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Dec 06 10:17:27 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "features"} v 0)
Dec 06 10:17:27 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/957817502' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Dec 06 10:17:27 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "features"} v 0)
Dec 06 10:17:27 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? ' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Dec 06 10:17:27 compute-0 podman[281530]: 2025-12-06 10:17:27.443447846 +0000 UTC m=+0.064488602 container health_status ec1e66d2175087ad7a28a9a6b5a784e30128df3f5e268959d009e364cd5120f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
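The podman event above is a periodic health check of the ovn_metadata_agent container; per its config_data, the test just runs /openstack/healthcheck inside the container. The same check can be triggered on demand; a sketch, assuming podman is invoked with sufficient privileges:

    import subprocess

    # Trigger the container's configured healthcheck by hand; exit code 0
    # corresponds to the health_status=healthy event in the journal.
    rc = subprocess.run(["podman", "healthcheck", "run", "ovn_metadata_agent"]).returncode
    print("healthy" if rc == 0 else f"unhealthy (rc={rc})")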
Dec 06 10:17:27 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module ls"} v 0)
Dec 06 10:17:27 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2900583648' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Dec 06 10:17:27 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:17:27.679Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
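This alertmanager error is the one real failure in this window: both dashboard webhook receivers (on compute-1 and compute-2) hit their deadline, so the notification is dropped after two retries. A quick reachability sketch against one of the URLs copied from the log:

    import urllib.request

    url = "http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver"
    try:
        # An empty POST is enough to see whether the endpoint answers at all.
        urllib.request.urlopen(url, data=b"{}", timeout=5)
        print("receiver reachable")
    except Exception as exc:  # timeout/refused mirrors the logged deadline error
        print(f"receiver unreachable: {exc}")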
Dec 06 10:17:27 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1092: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 170 B/s wr, 1 op/s
Dec 06 10:17:27 compute-0 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.26524 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:17:27 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T10:17:27.929+0000 7f35ec3cf640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Dec 06 10:17:27 compute-0 ceph-mgr[74618]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
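The mgr rejects the insights command with error 95 (Operation not supported) because the module is not loaded, and the error text itself names the remedy. A sketch that applies it, assuming admin credentials on this node:

    import subprocess

    # Exactly the command the mgr error message suggests; once the module is
    # loaded, the repeated "Operation not supported" replies below should stop.
    subprocess.run(["ceph", "mgr", "module", "enable", "insights"], check=True)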
Dec 06 10:17:27 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:17:27 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:17:27 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:17:27.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:17:28 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:17:28 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:17:28 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:17:28.142 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:17:28 compute-0 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.17232 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:17:28 compute-0 ceph-mgr[74618]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Dec 06 10:17:28 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T10:17:28.181+0000 7f35ec3cf640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Dec 06 10:17:28 compute-0 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.25631 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:17:28 compute-0 ceph-mgr[74618]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Dec 06 10:17:28 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T10:17:28.249+0000 7f35ec3cf640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Dec 06 10:17:28 compute-0 ceph-mon[74327]: from='client.? 192.168.122.100:0/957817502' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Dec 06 10:17:28 compute-0 ceph-mon[74327]: from='client.? 192.168.122.101:0/3343059054' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Dec 06 10:17:28 compute-0 ceph-mon[74327]: from='client.? ' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Dec 06 10:17:28 compute-0 ceph-mon[74327]: from='client.? 192.168.122.101:0/1402062969' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Dec 06 10:17:28 compute-0 ceph-mon[74327]: from='client.? 192.168.122.102:0/3246663567' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Dec 06 10:17:28 compute-0 ceph-mon[74327]: from='client.? 192.168.122.100:0/2900583648' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Dec 06 10:17:28 compute-0 ceph-mon[74327]: from='client.? 192.168.122.102:0/3581068770' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Dec 06 10:17:28 compute-0 ceph-mon[74327]: from='client.? 192.168.122.100:0/2175235751' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Dec 06 10:17:28 compute-0 ceph-mon[74327]: from='client.? 192.168.122.101:0/3965562810' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Dec 06 10:17:28 compute-0 ceph-mon[74327]: from='client.? 192.168.122.101:0/1821647303' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Dec 06 10:17:28 compute-0 ceph-mon[74327]: from='client.? 192.168.122.102:0/2898722618' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Dec 06 10:17:28 compute-0 ceph-mon[74327]: pgmap v1092: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 170 B/s wr, 1 op/s
Dec 06 10:17:28 compute-0 ceph-mon[74327]: from='client.26524 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:17:28 compute-0 ceph-mon[74327]: from='client.? 192.168.122.100:0/4240707576' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Dec 06 10:17:28 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr stat"} v 0)
Dec 06 10:17:28 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3536510514' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Dec 06 10:17:28 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} v 0)
Dec 06 10:17:28 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1180736574' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Dec 06 10:17:28 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr versions"} v 0)
Dec 06 10:17:28 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3936739377' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Dec 06 10:17:28 compute-0 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.26560 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:17:29 compute-0 nova_compute[254819]: 2025-12-06 10:17:29.058 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:17:29 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} v 0)
Dec 06 10:17:29 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2248478302' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
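The two mon_commands above pull the last 10000 debug-level entries from the audit and cluster channels, a snapshot of recent cluster history. The CLI equivalents, sketched via subprocess:

    import subprocess

    # CLI form of the two mon_commands dispatched above.
    subprocess.run(["ceph", "log", "last", "10000", "debug", "audit"], check=True)
    subprocess.run(["ceph", "log", "last", "10000", "debug", "cluster"], check=True)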
Dec 06 10:17:29 compute-0 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.17295 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:17:29 compute-0 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.26587 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:17:29 compute-0 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.25670 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:17:29 compute-0 ceph-mon[74327]: from='client.17232 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:17:29 compute-0 ceph-mon[74327]: from='client.25631 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:17:29 compute-0 ceph-mon[74327]: from='client.? 192.168.122.100:0/3536510514' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Dec 06 10:17:29 compute-0 ceph-mon[74327]: from='client.? 192.168.122.102:0/3919289705' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Dec 06 10:17:29 compute-0 ceph-mon[74327]: from='client.? 192.168.122.102:0/2579086106' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Dec 06 10:17:29 compute-0 ceph-mon[74327]: from='client.? 192.168.122.101:0/912603139' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Dec 06 10:17:29 compute-0 ceph-mon[74327]: from='client.? 192.168.122.100:0/1180736574' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Dec 06 10:17:29 compute-0 ceph-mon[74327]: from='client.? 192.168.122.101:0/1283122563' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Dec 06 10:17:29 compute-0 ceph-mon[74327]: from='client.? 192.168.122.100:0/3936739377' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Dec 06 10:17:29 compute-0 ceph-mon[74327]: from='client.26560 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:17:29 compute-0 ceph-mon[74327]: from='client.? 192.168.122.101:0/357451755' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Dec 06 10:17:29 compute-0 ceph-mon[74327]: from='client.? 192.168.122.102:0/98512507' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Dec 06 10:17:29 compute-0 ceph-mon[74327]: from='client.? 192.168.122.100:0/2248478302' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Dec 06 10:17:29 compute-0 ceph-mon[74327]: from='client.? 192.168.122.101:0/447826151' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Dec 06 10:17:29 compute-0 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.26605 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:17:29 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr dump"} v 0)
Dec 06 10:17:29 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2884993703' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Dec 06 10:17:29 compute-0 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.25679 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:17:29 compute-0 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.17319 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:17:29 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1093: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:17:29 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:17:29 compute-0 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.26626 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:17:29 compute-0 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.25694 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:17:30 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:17:30 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:17:30 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:17:30.000 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:17:30 compute-0 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.17337 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:44:52.787944+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 82935808 unmapped: 4055040 heap: 86990848 old mem: 2845415832 new mem: 2845415832
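These prioritycache lines repeat because the OSD re-tunes its caches on a short interval: target is the 4 GiB osd_memory_target, mapped and unmapped describe the current heap, and since the process sits far below target the cache budget (old mem, new mem) stays unchanged. The numbers decode as follows:

    GiB, MiB = 1024 ** 3, 1024 ** 2

    # Values copied from the tune_memory line above.
    target, mapped, cache = 4294967296, 82935808, 2845415832
    print(f"target {target / GiB:.1f} GiB")   # 4.0 GiB
    print(f"mapped {mapped / MiB:.1f} MiB")   # ~79 MiB actually resident
    print(f"cache budget {cache / GiB:.2f} GiB (old mem == new mem, so unchanged)")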
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:44:53.788126+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 981173 data_alloc: 218103808 data_used: 270336
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 82944000 unmapped: 4046848 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fcdf1d9800
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 27.767833710s of 27.774271011s, submitted: 1
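The _kv_sync_thread utilization line quantifies how busy BlueStore's RocksDB sync thread was over its sampling window; on this quiet cluster it committed a single transaction. Worked out:

    # Values copied from the utilization line above.
    idle, window, submitted = 27.767833710, 27.774271011, 1
    print(f"{100 * idle / window:.3f}% idle, {submitted} txn in {window:.1f}s")
    # -> 99.977% idle, 1 txn in 27.8s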
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:44:54.788284+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 82944000 unmapped: 4046848 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:44:55.788449+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc9f0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
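The heartbeat's store_statfs tuple is all hex. Reading it as (available / internally reserved / total, then data stored / allocated), which matches the BlueStore statfs print order but is stated here as an assumption, recovers the pool sizing reported by the pgmap lines:

    GiB, KiB = 1024 ** 3, 1024

    # Fields copied from the osd_stat heartbeat above.
    avail, reserved, total = 0x4fc9f0000, 0x0, 0x4ffc00000
    stored, allocated = 0x16f4db, 0x22c000

    print(f"total {total / GiB:.1f} GiB, avail {avail / GiB:.1f} GiB")
    # -> total 20.0 GiB, avail 19.9 GiB; three OSDs of this size give the
    #    "60 GiB / 60 GiB avail" reported by the mgr pgmap entries.
    print(f"stored {stored / KiB:.0f} KiB, allocated {allocated / KiB:.0f} KiB")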
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 82952192 unmapped: 4038656 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:44:56.788662+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 82952192 unmapped: 4038656 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:44:57.788969+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 82952192 unmapped: 4038656 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc9f0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc9f0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:44:58.789111+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 978769 data_alloc: 218103808 data_used: 270336
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 82960384 unmapped: 4030464 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:44:59.789341+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc9f0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 82968576 unmapped: 4022272 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fcdf3a9000
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:45:00.789458+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 82993152 unmapped: 3997696 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 ms_handle_reset con 0x55fce2116000 session 0x55fce19c1680
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:45:01.789808+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 82993152 unmapped: 3997696 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:45:02.790101+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83001344 unmapped: 3989504 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:45:03.790436+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979690 data_alloc: 218103808 data_used: 270336
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83001344 unmapped: 3989504 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:45:04.790635+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc9f0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83001344 unmapped: 3989504 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:45:05.790805+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83009536 unmapped: 3981312 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:45:06.791054+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc9f0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83009536 unmapped: 3981312 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:45:07.791205+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.826157570s of 13.835227013s, submitted: 3
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83017728 unmapped: 3973120 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:45:08.791440+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979558 data_alloc: 218103808 data_used: 270336
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83017728 unmapped: 3973120 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:45:09.791574+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83017728 unmapped: 3973120 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc9f0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:45:10.791727+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83025920 unmapped: 3964928 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:45:11.791886+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83025920 unmapped: 3964928 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:45:12.792046+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fcdf8c6000
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83042304 unmapped: 3948544 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:45:13.792448+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979690 data_alloc: 218103808 data_used: 270336
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83042304 unmapped: 3948544 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc9f0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:45:14.792720+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83050496 unmapped: 3940352 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:45:15.793050+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83050496 unmapped: 3940352 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:45:16.793250+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83058688 unmapped: 3932160 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:45:17.793389+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83058688 unmapped: 3932160 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:45:18.793667+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 981202 data_alloc: 218103808 data_used: 270336
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83058688 unmapped: 3932160 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:45:19.793865+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83066880 unmapped: 3923968 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc9f0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:45:20.794122+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 82690048 unmapped: 4300800 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:45:21.794365+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 82690048 unmapped: 4300800 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:45:22.794526+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.431484222s of 14.688600540s, submitted: 3
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 82698240 unmapped: 4292608 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:45:23.794686+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 980611 data_alloc: 218103808 data_used: 270336
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 82698240 unmapped: 4292608 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc9f0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:45:24.794870+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 82706432 unmapped: 4284416 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:45:25.795061+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc9f0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 82706432 unmapped: 4284416 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:45:26.795195+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 82714624 unmapped: 4276224 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:45:27.795390+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 82714624 unmapped: 4276224 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:45:28.795557+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 980479 data_alloc: 218103808 data_used: 270336
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 82714624 unmapped: 4276224 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:45:29.795699+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc9f0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 82722816 unmapped: 4268032 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:45:30.795843+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 82722816 unmapped: 4268032 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:45:31.796109+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 82722816 unmapped: 4268032 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:45:32.796273+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 82731008 unmapped: 4259840 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:45:33.796451+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 980479 data_alloc: 218103808 data_used: 270336
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc9f0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 82731008 unmapped: 4259840 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:45:34.796600+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 82739200 unmapped: 4251648 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:45:35.796768+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 82739200 unmapped: 4251648 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:45:36.796912+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 82747392 unmapped: 4243456 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:45:37.797054+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc9f0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc9f0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 82747392 unmapped: 4243456 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:45:38.797191+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 980479 data_alloc: 218103808 data_used: 270336
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 82747392 unmapped: 4243456 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:45:39.797398+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 82755584 unmapped: 4235264 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:45:40.797573+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 82755584 unmapped: 4235264 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:45:41.797740+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc9f0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 82763776 unmapped: 4227072 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:45:42.797873+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 82763776 unmapped: 4227072 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc9f0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:45:43.798027+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 980479 data_alloc: 218103808 data_used: 270336
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 82763776 unmapped: 4227072 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:45:44.798168+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 82771968 unmapped: 4218880 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:45:45.798350+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 82771968 unmapped: 4218880 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc9f0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:45:46.798515+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 82788352 unmapped: 4202496 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc9f0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:45:47.798686+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 82788352 unmapped: 4202496 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:45:48.798919+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 980479 data_alloc: 218103808 data_used: 270336
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 82788352 unmapped: 4202496 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:45:49.799094+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 82796544 unmapped: 4194304 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:45:50.799262+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 82796544 unmapped: 4194304 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:45:51.799427+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 82804736 unmapped: 4186112 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:45:52.799576+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc9f0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 82804736 unmapped: 4186112 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:45:53.799782+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 980479 data_alloc: 218103808 data_used: 270336
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 82812928 unmapped: 4177920 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:45:54.799944+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc9f0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 82812928 unmapped: 4177920 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:45:55.800112+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc9f0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 82821120 unmapped: 4169728 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:45:56.800397+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 82821120 unmapped: 4169728 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:45:57.800556+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc9f0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 82829312 unmapped: 4161536 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:45:58.803282+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 980479 data_alloc: 218103808 data_used: 270336
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 82837504 unmapped: 4153344 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:45:59.803433+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 82837504 unmapped: 4153344 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:46:00.803609+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 82845696 unmapped: 4145152 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:46:01.803859+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 82845696 unmapped: 4145152 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc9f0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:46:02.804014+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 82853888 unmapped: 4136960 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc9f0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:46:03.804158+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 980479 data_alloc: 218103808 data_used: 270336
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 82862080 unmapped: 4128768 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:46:04.804304+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 82862080 unmapped: 4128768 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:46:05.804450+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 82878464 unmapped: 4112384 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:46:06.804528+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 82878464 unmapped: 4112384 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:46:07.804727+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 ms_handle_reset con 0x55fcdf8c6000 session 0x55fce0e885a0
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 82878464 unmapped: 4112384 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:46:08.804899+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc9f0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 980479 data_alloc: 218103808 data_used: 270336
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc9f0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 82886656 unmapped: 4104192 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:46:09.805113+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 82886656 unmapped: 4104192 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:46:10.805278+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 82894848 unmapped: 4096000 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:46:11.805519+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 82894848 unmapped: 4096000 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:46:12.805678+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 82894848 unmapped: 4096000 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc9f0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:46:13.805835+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 980479 data_alloc: 218103808 data_used: 270336
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 82903040 unmapped: 4087808 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:46:14.806084+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 82911232 unmapped: 4079616 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:46:15.806275+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc9f0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 82911232 unmapped: 4079616 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:46:16.806609+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 82911232 unmapped: 4079616 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:46:17.806798+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 82919424 unmapped: 4071424 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:46:18.806946+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 980479 data_alloc: 218103808 data_used: 270336
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 82919424 unmapped: 4071424 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:46:19.807221+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc9f0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 82919424 unmapped: 4071424 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce1f87400
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 57.066238403s of 57.868534088s, submitted: 2
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:46:20.807400+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 82927616 unmapped: 4063232 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:46:21.807720+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 82927616 unmapped: 4063232 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:46:22.808027+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 82944000 unmapped: 4046848 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:46:23.808383+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc9f0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 982123 data_alloc: 218103808 data_used: 270336
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 82944000 unmapped: 4046848 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:46:24.808504+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc9f0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 ms_handle_reset con 0x55fce1f89000 session 0x55fce11163c0
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 ms_handle_reset con 0x55fce2068800 session 0x55fce0e86b40
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 82952192 unmapped: 4038656 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:46:25.808670+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc9f0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 82952192 unmapped: 4038656 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:46:26.808909+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce2134800
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 82952192 unmapped: 4038656 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:46:27.809086+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 82960384 unmapped: 4030464 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:46:28.809252+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 983635 data_alloc: 218103808 data_used: 270336
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 82960384 unmapped: 4030464 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:46:29.809415+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc9f0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 82968576 unmapped: 4022272 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:46:30.809663+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 82968576 unmapped: 4022272 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:46:31.809987+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 82976768 unmapped: 4014080 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc9f0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:46:32.810185+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.951421738s of 12.081957817s, submitted: 3
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 82976768 unmapped: 4014080 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:46:33.810590+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 983503 data_alloc: 218103808 data_used: 270336
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 82984960 unmapped: 4005888 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:46:34.810824+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 82984960 unmapped: 4005888 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:46:35.811178+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce1f8a000
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 82984960 unmapped: 4005888 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:46:36.811342+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 82993152 unmapped: 3997696 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:46:37.811512+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 82993152 unmapped: 3997696 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:46:38.811640+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 983635 data_alloc: 218103808 data_used: 270336
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83001344 unmapped: 3989504 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:46:39.811801+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83001344 unmapped: 3989504 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:46:40.811948+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83009536 unmapped: 3981312 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:46:41.812208+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83017728 unmapped: 3973120 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:46:42.812375+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83017728 unmapped: 3973120 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:46:43.812548+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 983635 data_alloc: 218103808 data_used: 270336
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83025920 unmapped: 3964928 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:46:44.812757+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83025920 unmapped: 3964928 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:46:45.812968+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83034112 unmapped: 3956736 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:46:46.813129+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83034112 unmapped: 3956736 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:46:47.813304+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 15.536516190s of 15.642930984s, submitted: 2
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83042304 unmapped: 3948544 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:46:48.813542+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 982453 data_alloc: 218103808 data_used: 270336
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83042304 unmapped: 3948544 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:46:49.813720+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83042304 unmapped: 3948544 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:46:50.813925+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83050496 unmapped: 3940352 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:46:51.814227+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83050496 unmapped: 3940352 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:46:52.814424+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83050496 unmapped: 3940352 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:46:53.814652+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 982321 data_alloc: 218103808 data_used: 270336
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83058688 unmapped: 3932160 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:46:54.814821+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83058688 unmapped: 3932160 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:46:55.815003+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83058688 unmapped: 3932160 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:46:56.815220+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83066880 unmapped: 3923968 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:46:57.815406+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 ms_handle_reset con 0x55fce2134800 session 0x55fce23c01e0
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 ms_handle_reset con 0x55fce1f87400 session 0x55fcdfbacd20
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83066880 unmapped: 3923968 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:46:58.815562+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 982321 data_alloc: 218103808 data_used: 270336
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83075072 unmapped: 3915776 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:46:59.816250+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83075072 unmapped: 3915776 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:47:00.816375+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83075072 unmapped: 3915776 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:47:01.816547+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:47:02.816719+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83083264 unmapped: 3907584 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:47:03.816857+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83083264 unmapped: 3907584 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 982321 data_alloc: 218103808 data_used: 270336
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:47:04.817068+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83091456 unmapped: 3899392 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:47:05.817201+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83091456 unmapped: 3899392 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:47:06.817347+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83099648 unmapped: 3891200 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:47:07.817640+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83107840 unmapped: 3883008 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:47:08.817946+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83107840 unmapped: 3883008 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fcdf8c6000
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 19.969430923s of 20.490276337s, submitted: 3
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [0,0,0,0,0,0,0,1])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 982453 data_alloc: 218103808 data_used: 270336
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:47:09.818090+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83116032 unmapped: 3874816 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:47:10.818322+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83116032 unmapped: 3874816 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:47:11.818590+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83124224 unmapped: 3866624 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:47:12.818834+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83124224 unmapped: 3866624 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:17:30 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:17:30 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:17:30.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:47:13.819011+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83124224 unmapped: 3866624 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 982453 data_alloc: 218103808 data_used: 270336
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:47:14.819151+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83132416 unmapped: 3858432 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce1f89000
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:47:15.819301+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83132416 unmapped: 3858432 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:47:16.819461+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83140608 unmapped: 3850240 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:47:17.819642+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83140608 unmapped: 3850240 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:47:18.819852+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83140608 unmapped: 3850240 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 983965 data_alloc: 218103808 data_used: 270336
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:47:19.819976+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83148800 unmapped: 3842048 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:47:20.820106+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83148800 unmapped: 3842048 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:47:21.820280+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83156992 unmapped: 3833856 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:47:22.820421+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83156992 unmapped: 3833856 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:47:23.820573+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83165184 unmapped: 3825664 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:47:24.820720+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 983965 data_alloc: 218103808 data_used: 270336
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83165184 unmapped: 3825664 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 15.678708076s of 16.780500412s, submitted: 2
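The _kv_sync_thread line above reports the thread as idle for 15.678708076 s out of a 16.780500412 s window with 2 submitted transactions, i.e. the KV sync thread was busy only a few percent of the time on this mostly idle OSD. The arithmetic, as a short sketch using the figures copied from that line:

# Figures from the _kv_sync_thread utilization line above.
idle, total, submitted = 15.678708076, 16.780500412, 2
busy = total - idle
print(f"busy {busy:.3f}s of {total:.3f}s "
      f"({100 * busy / total:.1f}% utilized, {submitted} txns)")
# busy 1.102s of 16.781s (6.6% utilized, 2 txns)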
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:47:25.820871+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83165184 unmapped: 3825664 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:47:26.821017+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83165184 unmapped: 3825664 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:47:27.821206+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83181568 unmapped: 3809280 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:47:28.821374+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83181568 unmapped: 3809280 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:47:29.821554+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 983833 data_alloc: 218103808 data_used: 270336
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83189760 unmapped: 3801088 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:47:30.821694+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83189760 unmapped: 3801088 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:47:31.821872+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83189760 unmapped: 3801088 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:47:32.822111+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83197952 unmapped: 3792896 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:47:33.822336+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83197952 unmapped: 3792896 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:47:34.822554+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 983833 data_alloc: 218103808 data_used: 270336
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83206144 unmapped: 3784704 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:47:35.822795+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83206144 unmapped: 3784704 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:47:36.823039+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83214336 unmapped: 3776512 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:47:37.823280+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83214336 unmapped: 3776512 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:47:38.823562+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83214336 unmapped: 3776512 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:47:39.823737+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 983833 data_alloc: 218103808 data_used: 270336
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83214336 unmapped: 3776512 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:47:40.823906+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83222528 unmapped: 3768320 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:47:41.824207+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83222528 unmapped: 3768320 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:47:42.824567+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83230720 unmapped: 3760128 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:47:43.824837+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83230720 unmapped: 3760128 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:47:44.825031+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 983833 data_alloc: 218103808 data_used: 270336
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83238912 unmapped: 3751936 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:47:45.825271+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83238912 unmapped: 3751936 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:47:46.825567+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83247104 unmapped: 3743744 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:47:47.825901+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83247104 unmapped: 3743744 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:47:48.826052+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83247104 unmapped: 3743744 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:47:49.826358+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 983833 data_alloc: 218103808 data_used: 270336
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83255296 unmapped: 3735552 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:47:50.826548+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83255296 unmapped: 3735552 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:47:51.826751+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83263488 unmapped: 3727360 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:47:52.826895+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83271680 unmapped: 3719168 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:47:53.827070+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83271680 unmapped: 3719168 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:47:54.827368+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 983833 data_alloc: 218103808 data_used: 270336
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83279872 unmapped: 3710976 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:47:55.827618+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83279872 unmapped: 3710976 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:47:56.827802+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83288064 unmapped: 3702784 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:47:57.828257+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83288064 unmapped: 3702784 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:47:58.828464+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83288064 unmapped: 3702784 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:47:59.828761+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 983833 data_alloc: 218103808 data_used: 270336
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83296256 unmapped: 3694592 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:48:00.828916+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83296256 unmapped: 3694592 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:48:01.829116+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83296256 unmapped: 3694592 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:48:02.829286+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83304448 unmapped: 3686400 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:48:03.829456+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83304448 unmapped: 3686400 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:48:04.829719+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 983833 data_alloc: 218103808 data_used: 270336
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83312640 unmapped: 3678208 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:48:05.829974+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83312640 unmapped: 3678208 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:48:06.830177+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83320832 unmapped: 3670016 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:48:07.830359+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83320832 unmapped: 3670016 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:48:08.830639+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83329024 unmapped: 3661824 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:48:09.830907+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 983833 data_alloc: 218103808 data_used: 270336
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83329024 unmapped: 3661824 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:48:10.831169+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83329024 unmapped: 3661824 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:48:11.831449+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83337216 unmapped: 3653632 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:48:12.831797+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83345408 unmapped: 3645440 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:48:13.832023+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83345408 unmapped: 3645440 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:48:14.832245+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 983833 data_alloc: 218103808 data_used: 270336
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83353600 unmapped: 3637248 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:48:15.832544+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83353600 unmapped: 3637248 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:48:16.832741+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83369984 unmapped: 3620864 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:48:17.832913+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83369984 unmapped: 3620864 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:48:18.833045+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83378176 unmapped: 3612672 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:48:19.833190+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 983833 data_alloc: 218103808 data_used: 270336
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83378176 unmapped: 3612672 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:48:20.833359+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83378176 unmapped: 3612672 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:48:21.833604+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 3604480 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:48:22.833715+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 3604480 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:48:23.833880+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83394560 unmapped: 3596288 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:48:24.834012+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 983833 data_alloc: 218103808 data_used: 270336
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83394560 unmapped: 3596288 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:48:25.834146+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83402752 unmapped: 3588096 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:48:26.834267+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83402752 unmapped: 3588096 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:48:27.834393+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83402752 unmapped: 3588096 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:48:28.834538+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83410944 unmapped: 3579904 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:48:29.834666+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 983833 data_alloc: 218103808 data_used: 270336
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83410944 unmapped: 3579904 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:48:30.834958+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83419136 unmapped: 3571712 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:48:31.835229+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83419136 unmapped: 3571712 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:48:32.835564+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83427328 unmapped: 3563520 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:48:33.835728+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83427328 unmapped: 3563520 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:48:34.835879+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 983833 data_alloc: 218103808 data_used: 270336
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83427328 unmapped: 3563520 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:48:35.836033+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83435520 unmapped: 3555328 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:48:36.836154+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83435520 unmapped: 3555328 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:48:37.836403+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83443712 unmapped: 3547136 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:48:38.836625+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83451904 unmapped: 3538944 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:48:39.836771+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 983833 data_alloc: 218103808 data_used: 270336
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83451904 unmapped: 3538944 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:48:40.836908+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83460096 unmapped: 3530752 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:48:41.837115+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83460096 unmapped: 3530752 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:48:42.837261+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83468288 unmapped: 3522560 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:48:43.837466+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 ms_handle_reset con 0x55fce1f8a000 session 0x55fce23c0960
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83468288 unmapped: 3522560 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:48:44.837783+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 983833 data_alloc: 218103808 data_used: 270336
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83468288 unmapped: 3522560 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:48:45.838040+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83476480 unmapped: 3514368 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:48:46.838327+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83476480 unmapped: 3514368 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:48:47.838576+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 3506176 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:48:48.838765+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 3506176 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:48:49.838961+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 983833 data_alloc: 218103808 data_used: 270336
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83492864 unmapped: 3497984 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:48:50.839255+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83492864 unmapped: 3497984 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:48:51.839568+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83492864 unmapped: 3497984 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:48:52.839752+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83501056 unmapped: 3489792 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:48:53.840027+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83501056 unmapped: 3489792 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce2068800
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 88.595787048s of 88.759666443s, submitted: 1
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:48:54.840217+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 983965 data_alloc: 218103808 data_used: 270336
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83509248 unmapped: 3481600 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:48:55.840551+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83509248 unmapped: 3481600 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:48:56.840768+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83517440 unmapped: 3473408 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:48:57.841088+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83525632 unmapped: 3465216 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:48:58.841252+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83525632 unmapped: 3465216 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:48:59.841445+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 985477 data_alloc: 218103808 data_used: 270336
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83542016 unmapped: 3448832 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:49:00.841830+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83542016 unmapped: 3448832 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:49:01.842054+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83550208 unmapped: 3440640 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:49:02.842190+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83550208 unmapped: 3440640 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:49:03.842355+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83550208 unmapped: 3440640 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:49:04.842582+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 984886 data_alloc: 218103808 data_used: 270336
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83558400 unmapped: 3432448 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:49:05.842718+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83558400 unmapped: 3432448 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:49:06.842886+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83558400 unmapped: 3432448 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Cumulative writes: 8411 writes, 34K keys, 8411 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.04 MB/s
                                           Cumulative WAL: 8411 writes, 1732 syncs, 4.86 writes per sync, written: 0.02 GB, 0.04 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 8411 writes, 34K keys, 8411 commit groups, 1.0 writes per commit group, ingest: 21.58 MB, 0.04 MB/s
                                           Interval WAL: 8411 writes, 1732 syncs, 4.86 writes per sync, written: 0.02 GB, 0.04 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55fcdd7db350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55fcdd7db350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55fcdd7db350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55fcdd7db350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55fcdd7db350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55fcdd7db350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55fcdd7db350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55fcdd7da9b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 9e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55fcdd7da9b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 9e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.021       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.021       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.02              0.00         1    0.021       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55fcdd7da9b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 9e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55fcdd7db350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55fcdd7db350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:49:07.843055+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83632128 unmapped: 3358720 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:49:08.843227+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83632128 unmapped: 3358720 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:49:09.843392+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 984886 data_alloc: 218103808 data_used: 270336
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83640320 unmapped: 3350528 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:49:10.843586+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 16.758087158s of 16.814682007s, submitted: 3
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83640320 unmapped: 3350528 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:49:11.843763+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83648512 unmapped: 3342336 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:49:12.844007+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83648512 unmapped: 3342336 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:49:13.844240+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83656704 unmapped: 3334144 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:49:14.844401+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 984754 data_alloc: 218103808 data_used: 270336
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83656704 unmapped: 3334144 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:49:15.844559+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83656704 unmapped: 3334144 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:49:16.844762+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83664896 unmapped: 3325952 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:49:17.844916+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83664896 unmapped: 3325952 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:49:18.845045+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83664896 unmapped: 3325952 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:49:19.845196+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 984754 data_alloc: 218103808 data_used: 270336
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83673088 unmapped: 3317760 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:49:20.845338+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83673088 unmapped: 3317760 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:49:21.845518+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83681280 unmapped: 3309568 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:49:22.845647+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83689472 unmapped: 3301376 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:49:23.845874+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83697664 unmapped: 3293184 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:49:24.846083+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 984754 data_alloc: 218103808 data_used: 270336
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83697664 unmapped: 3293184 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:49:25.846287+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83697664 unmapped: 3293184 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:49:26.846564+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83705856 unmapped: 3284992 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:49:27.846689+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83705856 unmapped: 3284992 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:49:28.846861+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83714048 unmapped: 3276800 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:49:29.847037+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 984754 data_alloc: 218103808 data_used: 270336
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83714048 unmapped: 3276800 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:49:30.847177+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83714048 unmapped: 3276800 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:49:31.847362+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83722240 unmapped: 3268608 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:49:32.847509+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83722240 unmapped: 3268608 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:49:33.847626+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83738624 unmapped: 3252224 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:49:34.847796+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 984754 data_alloc: 218103808 data_used: 270336
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83738624 unmapped: 3252224 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:49:35.847943+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83746816 unmapped: 3244032 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:49:36.848109+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83746816 unmapped: 3244032 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:49:37.848222+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83746816 unmapped: 3244032 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:49:38.848361+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83755008 unmapped: 3235840 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:49:39.848717+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 984754 data_alloc: 218103808 data_used: 270336
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83755008 unmapped: 3235840 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:49:40.848913+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83755008 unmapped: 3235840 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:49:41.849144+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83763200 unmapped: 3227648 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:49:42.849335+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 ms_handle_reset con 0x55fce2068800 session 0x55fce1fae960
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83763200 unmapped: 3227648 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:49:43.849592+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83771392 unmapped: 3219456 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:49:44.849816+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 984754 data_alloc: 218103808 data_used: 270336
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83771392 unmapped: 3219456 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:49:45.849945+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83779584 unmapped: 3211264 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:49:46.850120+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83779584 unmapped: 3211264 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:49:47.850246+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83779584 unmapped: 3211264 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:49:48.850399+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83787776 unmapped: 3203072 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:49:49.850628+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 984754 data_alloc: 218103808 data_used: 270336
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83787776 unmapped: 3203072 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:49:50.850795+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83795968 unmapped: 3194880 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:49:51.850969+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83795968 unmapped: 3194880 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:49:52.851166+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83804160 unmapped: 3186688 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:49:53.851550+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce2116000
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 42.575794220s of 42.580799103s, submitted: 1
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83804160 unmapped: 3186688 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:49:54.851717+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 984886 data_alloc: 218103808 data_used: 270336
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83804160 unmapped: 3186688 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:49:55.851891+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 3178496 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:49:56.852134+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 3178496 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:49:57.852368+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83820544 unmapped: 3170304 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:49:58.852522+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83820544 unmapped: 3170304 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:49:59.852780+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986398 data_alloc: 218103808 data_used: 270336
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83820544 unmapped: 3170304 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:50:00.852922+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83828736 unmapped: 3162112 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:50:01.853118+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83828736 unmapped: 3162112 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:50:02.853242+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83836928 unmapped: 3153920 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:50:03.853400+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83836928 unmapped: 3153920 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:50:04.853583+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 985807 data_alloc: 218103808 data_used: 270336
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83845120 unmapped: 3145728 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:50:05.853766+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83845120 unmapped: 3145728 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:50:06.854013+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83845120 unmapped: 3145728 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:50:07.854135+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83853312 unmapped: 3137536 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:50:08.854291+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 15.588759422s of 15.598855019s, submitted: 3
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83853312 unmapped: 3137536 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:50:09.854465+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 985675 data_alloc: 218103808 data_used: 270336
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83861504 unmapped: 3129344 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:50:10.854770+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83861504 unmapped: 3129344 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:50:11.855067+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83861504 unmapped: 3129344 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:50:12.855217+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83869696 unmapped: 3121152 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:50:13.855372+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83869696 unmapped: 3121152 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:50:14.855555+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 985675 data_alloc: 218103808 data_used: 270336
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83877888 unmapped: 3112960 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:50:15.855690+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83877888 unmapped: 3112960 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:50:16.855837+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83877888 unmapped: 3112960 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:50:17.856054+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83886080 unmapped: 3104768 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:50:18.856248+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83886080 unmapped: 3104768 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:50:19.856462+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 985675 data_alloc: 218103808 data_used: 270336
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83894272 unmapped: 3096576 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:50:20.856663+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83894272 unmapped: 3096576 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:50:21.856856+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83894272 unmapped: 3096576 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:50:22.856997+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83902464 unmapped: 3088384 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:50:23.857123+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83902464 unmapped: 3088384 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:50:24.857262+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 985675 data_alloc: 218103808 data_used: 270336
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83910656 unmapped: 3080192 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:50:25.857386+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83910656 unmapped: 3080192 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:50:26.857553+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83910656 unmapped: 3080192 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:50:27.857681+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83918848 unmapped: 3072000 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:50:28.857835+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83918848 unmapped: 3072000 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:50:29.857970+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 985675 data_alloc: 218103808 data_used: 270336
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83935232 unmapped: 3055616 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:50:30.858116+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83935232 unmapped: 3055616 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:50:31.858393+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83943424 unmapped: 3047424 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:50:32.858535+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83943424 unmapped: 3047424 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:50:33.858873+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83943424 unmapped: 3047424 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:50:34.859019+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 985675 data_alloc: 218103808 data_used: 270336
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83951616 unmapped: 3039232 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:50:35.859220+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83951616 unmapped: 3039232 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:50:36.859368+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83951616 unmapped: 3039232 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:50:37.859599+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83959808 unmapped: 3031040 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:50:38.859727+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83968000 unmapped: 3022848 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:50:39.859864+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 985675 data_alloc: 218103808 data_used: 270336
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83968000 unmapped: 3022848 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:50:40.860057+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83968000 unmapped: 3022848 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:50:41.860261+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83976192 unmapped: 3014656 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:50:42.860428+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83976192 unmapped: 3014656 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:50:43.860589+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83984384 unmapped: 3006464 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:50:44.860879+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 985675 data_alloc: 218103808 data_used: 270336
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83984384 unmapped: 3006464 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:50:45.861205+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83984384 unmapped: 3006464 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:50:46.861773+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83992576 unmapped: 2998272 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:50:47.861984+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 83992576 unmapped: 2998272 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:50:48.862228+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84000768 unmapped: 2990080 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:50:49.862514+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 985675 data_alloc: 218103808 data_used: 270336
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84000768 unmapped: 2990080 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:50:50.862718+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84008960 unmapped: 2981888 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:50:51.862961+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84008960 unmapped: 2981888 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:50:52.863117+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84008960 unmapped: 2981888 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:50:53.863256+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84017152 unmapped: 2973696 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:50:54.863430+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 985675 data_alloc: 218103808 data_used: 270336
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84017152 unmapped: 2973696 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:50:55.863569+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84017152 unmapped: 2973696 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:50:56.863695+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84025344 unmapped: 2965504 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:50:57.863963+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84025344 unmapped: 2965504 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:50:58.864101+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84033536 unmapped: 2957312 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:50:59.864250+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 985675 data_alloc: 218103808 data_used: 270336
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84033536 unmapped: 2957312 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:51:00.864418+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 2949120 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:51:01.864572+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 2949120 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:51:02.864745+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84049920 unmapped: 2940928 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:51:03.864976+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84049920 unmapped: 2940928 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:51:04.865246+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 985675 data_alloc: 218103808 data_used: 270336
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84049920 unmapped: 2940928 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:51:05.865556+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84058112 unmapped: 2932736 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:51:06.865699+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84058112 unmapped: 2932736 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:51:07.865861+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84058112 unmapped: 2932736 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:51:08.866062+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 2924544 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:51:09.866220+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 985675 data_alloc: 218103808 data_used: 270336
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 2924544 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:51:10.866406+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 2924544 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:51:11.866613+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 2924544 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:51:12.866783+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 2924544 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:51:13.866953+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 2924544 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:51:14.867115+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 985675 data_alloc: 218103808 data_used: 270336
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 2924544 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:51:15.867275+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 2924544 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:51:16.867405+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 2924544 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:51:17.867539+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 2924544 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:51:18.867672+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 2924544 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:51:19.867803+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 985675 data_alloc: 218103808 data_used: 270336
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 2924544 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:51:20.867957+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 2924544 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:51:21.868223+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 2924544 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:51:22.868331+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 2924544 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:51:23.868452+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:51:24.868635+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84074496 unmapped: 2916352 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 985675 data_alloc: 218103808 data_used: 270336
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:51:25.868834+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84074496 unmapped: 2916352 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:51:26.869005+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84074496 unmapped: 2916352 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:51:27.869266+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84074496 unmapped: 2916352 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:51:28.869384+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84074496 unmapped: 2916352 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:51:29.869691+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84074496 unmapped: 2916352 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 80.777481079s of 80.781913757s, submitted: 1
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 985675 data_alloc: 218103808 data_used: 270336
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:51:30.869842+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84205568 unmapped: 3833856 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:51:31.870170+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84271104 unmapped: 3768320 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,3])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:51:32.870282+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84271104 unmapped: 3768320 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 ms_handle_reset con 0x55fce1f89000 session 0x55fce23c10e0
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 ms_handle_reset con 0x55fcdf8c6000 session 0x55fce1e9a780
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:51:33.870415+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84369408 unmapped: 3670016 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:51:34.870531+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84385792 unmapped: 3653632 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 985675 data_alloc: 218103808 data_used: 270336
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:51:35.870676+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84385792 unmapped: 3653632 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:51:36.870856+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84385792 unmapped: 3653632 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:51:37.871031+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84385792 unmapped: 3653632 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:51:38.871219+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84385792 unmapped: 3653632 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:51:39.871347+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84385792 unmapped: 3653632 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 985675 data_alloc: 218103808 data_used: 270336
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:51:40.871452+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84385792 unmapped: 3653632 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:51:41.871640+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84385792 unmapped: 3653632 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:51:42.871976+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84385792 unmapped: 3653632 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:51:43.872109+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84385792 unmapped: 3653632 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce1f87400
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.180529594s of 14.258139610s, submitted: 375
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:51:44.872289+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84393984 unmapped: 3645440 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 985807 data_alloc: 218103808 data_used: 270336
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:51:45.881250+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84393984 unmapped: 3645440 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:51:46.881381+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84393984 unmapped: 3645440 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce1f8a000
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:51:47.881534+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84402176 unmapped: 3637248 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:51:48.881662+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84402176 unmapped: 3637248 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:51:49.881819+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84402176 unmapped: 3637248 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:51:50.881955+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987319 data_alloc: 218103808 data_used: 270336
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84402176 unmapped: 3637248 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:51:51.882117+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84402176 unmapped: 3637248 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:51:52.882304+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84402176 unmapped: 3637248 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:51:53.882595+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84402176 unmapped: 3637248 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:51:54.882740+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84402176 unmapped: 3637248 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:51:55.882890+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986728 data_alloc: 218103808 data_used: 270336
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84402176 unmapped: 3637248 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:51:56.883087+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84402176 unmapped: 3637248 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:51:57.883288+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84402176 unmapped: 3637248 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:51:58.883441+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.019550323s of 14.427960396s, submitted: 3
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84402176 unmapped: 3637248 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:51:59.883547+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84402176 unmapped: 3637248 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:52:00.883701+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986596 data_alloc: 218103808 data_used: 270336
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84402176 unmapped: 3637248 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:52:01.883867+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84402176 unmapped: 3637248 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:52:02.884068+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84402176 unmapped: 3637248 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:52:03.884242+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84402176 unmapped: 3637248 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:52:04.884394+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84402176 unmapped: 3637248 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:52:05.884523+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986596 data_alloc: 218103808 data_used: 270336
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84402176 unmapped: 3637248 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:52:06.884685+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84402176 unmapped: 3637248 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:52:07.884837+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84402176 unmapped: 3637248 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:52:08.884988+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84402176 unmapped: 3637248 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:52:09.885241+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84402176 unmapped: 3637248 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:52:10.885388+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986596 data_alloc: 218103808 data_used: 270336
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84402176 unmapped: 3637248 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:52:11.885577+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84402176 unmapped: 3637248 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:52:12.885736+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84402176 unmapped: 3637248 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:52:13.885920+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84402176 unmapped: 3637248 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:52:14.886163+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84402176 unmapped: 3637248 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:52:15.886345+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986596 data_alloc: 218103808 data_used: 270336
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84402176 unmapped: 3637248 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:52:16.886550+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84402176 unmapped: 3637248 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:52:17.886715+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84402176 unmapped: 3637248 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:52:18.886848+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84402176 unmapped: 3637248 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:52:19.887013+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84402176 unmapped: 3637248 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:52:20.887206+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986596 data_alloc: 218103808 data_used: 270336
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84402176 unmapped: 3637248 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:52:21.887382+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84402176 unmapped: 3637248 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:52:22.887548+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84402176 unmapped: 3637248 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:52:23.887660+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84402176 unmapped: 3637248 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:52:24.887829+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84402176 unmapped: 3637248 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:52:25.888001+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986596 data_alloc: 218103808 data_used: 270336
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84402176 unmapped: 3637248 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:52:26.888203+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84402176 unmapped: 3637248 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:52:27.888468+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84402176 unmapped: 3637248 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:52:28.888738+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84402176 unmapped: 3637248 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:52:29.888937+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84402176 unmapped: 3637248 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:52:30.889096+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986596 data_alloc: 218103808 data_used: 270336
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84402176 unmapped: 3637248 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:52:31.889279+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84402176 unmapped: 3637248 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:52:32.889841+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84402176 unmapped: 3637248 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:52:33.890073+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84402176 unmapped: 3637248 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:52:34.890218+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84402176 unmapped: 3637248 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:52:35.890391+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986596 data_alloc: 218103808 data_used: 270336
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84402176 unmapped: 3637248 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:52:36.890553+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84402176 unmapped: 3637248 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:52:37.890764+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84402176 unmapped: 3637248 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:52:38.890889+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84402176 unmapped: 3637248 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:52:39.891031+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84402176 unmapped: 3637248 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:52:40.891161+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986596 data_alloc: 218103808 data_used: 270336
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84402176 unmapped: 3637248 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:52:41.891354+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84402176 unmapped: 3637248 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:52:42.891522+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84402176 unmapped: 3637248 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:52:43.891656+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84402176 unmapped: 3637248 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:52:44.891783+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84402176 unmapped: 3637248 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:52:45.891905+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986596 data_alloc: 218103808 data_used: 270336
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84402176 unmapped: 3637248 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:52:46.892030+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84402176 unmapped: 3637248 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:52:47.892368+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84402176 unmapped: 3637248 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:52:48.892514+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84402176 unmapped: 3637248 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:52:49.892656+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84402176 unmapped: 3637248 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:52:50.892802+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986596 data_alloc: 218103808 data_used: 270336
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84402176 unmapped: 3637248 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:52:51.892962+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84402176 unmapped: 3637248 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 ms_handle_reset con 0x55fce1f87400 session 0x55fcdf94c5a0
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:52:52.893081+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84402176 unmapped: 3637248 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:52:53.893309+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84402176 unmapped: 3637248 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:52:54.893467+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84402176 unmapped: 3637248 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:52:55.893645+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986596 data_alloc: 218103808 data_used: 270336
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84402176 unmapped: 3637248 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:52:56.893791+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84402176 unmapped: 3637248 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:52:57.893929+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84402176 unmapped: 3637248 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:52:58.894060+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84402176 unmapped: 3637248 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:52:59.894236+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84402176 unmapped: 3637248 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:53:00.894394+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986596 data_alloc: 218103808 data_used: 270336
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84402176 unmapped: 3637248 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:53:01.894631+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84402176 unmapped: 3637248 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:53:02.894757+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce2068800
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 64.312339783s of 64.318359375s, submitted: 1
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84402176 unmapped: 3637248 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:53:03.895018+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84402176 unmapped: 3637248 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:53:04.895198+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84402176 unmapped: 3637248 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:53:05.895367+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986728 data_alloc: 218103808 data_used: 270336
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84402176 unmapped: 3637248 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:53:06.895615+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84402176 unmapped: 3637248 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:53:07.895855+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84402176 unmapped: 3637248 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:53:08.896066+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84402176 unmapped: 3637248 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:53:09.896261+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84402176 unmapped: 3637248 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:53:10.896525+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986728 data_alloc: 218103808 data_used: 270336
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84402176 unmapped: 3637248 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:53:11.896731+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84402176 unmapped: 3637248 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:53:12.896867+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 3629056 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:53:13.897007+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 3629056 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.242930412s of 11.255201340s, submitted: 2
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:53:14.897180+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 3629056 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:53:15.897341+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986005 data_alloc: 218103808 data_used: 270336
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 3629056 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:53:16.897603+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 3629056 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:53:17.897753+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 3629056 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:53:18.897954+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 3629056 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:53:19.898123+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 3629056 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:53:20.898268+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986005 data_alloc: 218103808 data_used: 270336
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 3629056 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:53:21.898519+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 3629056 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:53:22.898668+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 3629056 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:53:23.898797+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 3629056 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:53:24.898918+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 3629056 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:53:25.899057+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986005 data_alloc: 218103808 data_used: 270336
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 3629056 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:53:26.899241+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 3629056 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:53:27.899440+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 3629056 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:53:28.899547+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 3629056 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:53:29.899699+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 3629056 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:53:30.899854+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986005 data_alloc: 218103808 data_used: 270336
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 3629056 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:53:31.900042+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 3629056 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:53:32.900187+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 3629056 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:53:33.900375+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 3629056 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:53:34.900523+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 3629056 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:53:35.900729+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986005 data_alloc: 218103808 data_used: 270336
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 3629056 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:53:36.900917+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 3629056 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:53:37.901276+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 3629056 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:53:38.901659+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 3629056 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:53:39.901946+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 3629056 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:53:40.902229+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986005 data_alloc: 218103808 data_used: 270336
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 3629056 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:53:41.902533+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 3629056 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:53:42.902802+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 3629056 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:53:43.903022+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 3629056 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:53:44.903227+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 3629056 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:53:45.903468+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986005 data_alloc: 218103808 data_used: 270336
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 3629056 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:53:46.903740+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 3629056 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:53:47.903913+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 3629056 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:53:48.904047+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 3629056 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:53:49.904188+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 ms_handle_reset con 0x55fce1f8a000 session 0x55fce1faed20
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 ms_handle_reset con 0x55fce2116000 session 0x55fce1faf0e0
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 3629056 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:53:50.904335+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986005 data_alloc: 218103808 data_used: 270336
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 3629056 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:53:51.904542+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 3629056 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:53:52.904706+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 3629056 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:53:53.904873+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 3629056 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:53:54.905057+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 3629056 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:53:55.905241+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986005 data_alloc: 218103808 data_used: 270336
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 3629056 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:53:56.905426+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 3629056 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:53:57.905628+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 3629056 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:53:58.905960+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 3629056 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:53:59.906230+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 3629056 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce2128400
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 46.314544678s of 46.319801331s, submitted: 1
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:54:00.906408+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986137 data_alloc: 218103808 data_used: 270336
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84418560 unmapped: 3620864 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:54:01.906752+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84418560 unmapped: 3620864 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:54:02.906921+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84418560 unmapped: 3620864 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:54:03.907147+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84418560 unmapped: 3620864 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:54:04.907378+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84418560 unmapped: 3620864 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:54:05.907567+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987649 data_alloc: 218103808 data_used: 270336
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84418560 unmapped: 3620864 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:54:06.907704+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce212a400
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84418560 unmapped: 3620864 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:54:07.907864+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84418560 unmapped: 3620864 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:54:08.908029+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84418560 unmapped: 3620864 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:54:09.908167+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84418560 unmapped: 3620864 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:54:10.908404+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 989161 data_alloc: 218103808 data_used: 270336
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84418560 unmapped: 3620864 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:54:11.908657+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84418560 unmapped: 3620864 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:54:12.908822+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84418560 unmapped: 3620864 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:54:13.908980+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.012979507s of 13.046799660s, submitted: 3
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84426752 unmapped: 3612672 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:54:14.909129+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84426752 unmapped: 3612672 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:54:15.909267+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 989029 data_alloc: 218103808 data_used: 270336
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84426752 unmapped: 3612672 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:54:16.909545+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84426752 unmapped: 3612672 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:54:17.909696+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84426752 unmapped: 3612672 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:54:18.909831+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84426752 unmapped: 3612672 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:54:19.910038+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84426752 unmapped: 3612672 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:54:20.910182+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 989029 data_alloc: 218103808 data_used: 270336
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84426752 unmapped: 3612672 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:54:21.910376+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84426752 unmapped: 3612672 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:54:22.910589+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84426752 unmapped: 3612672 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:54:23.910801+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84426752 unmapped: 3612672 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:54:24.911011+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84426752 unmapped: 3612672 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:54:25.911243+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 989029 data_alloc: 218103808 data_used: 270336
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84426752 unmapped: 3612672 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:54:26.911434+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84426752 unmapped: 3612672 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:54:27.911613+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84426752 unmapped: 3612672 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:54:28.911760+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84426752 unmapped: 3612672 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:54:29.911966+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 ms_handle_reset con 0x55fcdf3a9000 session 0x55fce0e881e0
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 ms_handle_reset con 0x55fcdf1d9800 session 0x55fce1117a40
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84426752 unmapped: 3612672 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:54:30.912126+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 989029 data_alloc: 218103808 data_used: 270336
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84426752 unmapped: 3612672 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:54:31.912316+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84426752 unmapped: 3612672 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:54:32.912456+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84426752 unmapped: 3612672 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:54:33.912618+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84426752 unmapped: 3612672 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:54:34.912754+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84426752 unmapped: 3612672 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:54:35.912931+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 989029 data_alloc: 218103808 data_used: 270336
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84426752 unmapped: 3612672 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:54:36.913140+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84426752 unmapped: 3612672 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:54:37.913328+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84426752 unmapped: 3612672 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:54:38.913535+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84426752 unmapped: 3612672 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:54:39.913699+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84426752 unmapped: 3612672 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:54:40.913855+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fcdf1d9800
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 27.331491470s of 27.335552216s, submitted: 1
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 989161 data_alloc: 218103808 data_used: 270336
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84426752 unmapped: 3612672 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:54:41.914038+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84426752 unmapped: 3612672 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:54:42.914232+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84426752 unmapped: 3612672 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:54:43.914393+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fcdf3a9000
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84434944 unmapped: 3604480 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:54:44.914539+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84434944 unmapped: 3604480 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:54:45.914704+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990673 data_alloc: 218103808 data_used: 270336
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84434944 unmapped: 3604480 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:54:46.914889+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce1f87400
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84434944 unmapped: 3604480 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:54:47.915112+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84434944 unmapped: 3604480 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:54:48.915287+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84434944 unmapped: 3604480 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:54:49.915513+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 3596288 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:54:50.915731+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990082 data_alloc: 218103808 data_used: 270336
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 3596288 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:54:51.915947+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 3596288 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:54:52.916093+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.205703735s of 12.221550941s, submitted: 3
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 3596288 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:54:53.916329+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 3596288 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:54:54.916533+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 3596288 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:54:55.916682+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 989359 data_alloc: 218103808 data_used: 270336
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 3596288 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:54:56.916827+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 3596288 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:54:57.916966+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 3596288 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:54:58.917097+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 3596288 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:54:59.917245+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 3596288 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:55:00.917411+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 989359 data_alloc: 218103808 data_used: 270336
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 3596288 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:55:01.917618+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 3596288 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:55:02.917774+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 3596288 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:55:03.917908+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 3596288 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:55:04.918073+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 3596288 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:55:05.918215+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 989359 data_alloc: 218103808 data_used: 270336
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 3596288 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:55:06.918363+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 3596288 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:55:07.918525+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 3596288 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:55:08.918755+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 3596288 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:55:09.918940+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:55:10.919104+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 3596288 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 989359 data_alloc: 218103808 data_used: 270336
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:55:11.919411+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 3596288 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:55:12.919582+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 3596288 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:55:13.919762+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 3596288 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:55:14.919958+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 3596288 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:55:15.920108+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 3596288 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 989359 data_alloc: 218103808 data_used: 270336
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:55:16.920276+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 3596288 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:55:17.920406+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 3596288 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:55:18.920544+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 3596288 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:55:19.920703+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 3596288 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:55:20.920856+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 3596288 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 989359 data_alloc: 218103808 data_used: 270336
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:55:21.922420+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 3596288 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:55:22.922595+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 3596288 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:55:23.922759+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 3596288 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:55:24.922878+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 3596288 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:55:25.923082+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 3596288 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 989359 data_alloc: 218103808 data_used: 270336
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:55:26.923257+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 3596288 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:55:27.923561+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 3596288 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:55:28.923743+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 3596288 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:55:29.923881+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 3596288 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:55:30.924022+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 3596288 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 989359 data_alloc: 218103808 data_used: 270336
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:55:31.924207+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 3596288 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 ms_handle_reset con 0x55fcdf3a9000 session 0x55fce0e892c0
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 ms_handle_reset con 0x55fce2068800 session 0x55fce245f4a0
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:55:32.924538+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 3596288 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:55:33.924790+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 3596288 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:55:34.924958+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 3596288 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:55:35.925157+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 3596288 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 989359 data_alloc: 218103808 data_used: 270336
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:55:36.925394+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 3596288 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:55:37.925564+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 3596288 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:55:38.925739+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 3596288 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:55:39.925881+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 3596288 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:55:40.926043+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 3596288 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 989359 data_alloc: 218103808 data_used: 270336
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:55:41.926255+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 3596288 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce1f8a000
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 49.344425201s of 49.353523254s, submitted: 2
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:55:42.926422+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 3596288 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:55:43.926623+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 3596288 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:55:44.926798+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 3596288 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:55:45.926982+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 3596288 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 ms_handle_reset con 0x55fce1f87400 session 0x55fce22ff860
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 ms_handle_reset con 0x55fcdf1d9800 session 0x55fce1ed5680
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:55:46.927119+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 989491 data_alloc: 218103808 data_used: 270336
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 3596288 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:55:47.927263+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 3596288 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:55:48.927402+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce2116000
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 3596288 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:55:49.927548+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 3596288 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:55:50.927723+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 3596288 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:55:51.927882+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 989491 data_alloc: 218103808 data_used: 270336
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 3596288 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:55:52.928031+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 3596288 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:55:53.928172+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 3596288 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:55:54.928416+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 3596288 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:55:55.928655+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 3596288 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:55:56.928839+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce2135800
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.083337784s of 14.100935936s, submitted: 1
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 989623 data_alloc: 218103808 data_used: 270336
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:55:57.928965+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:55:58.929111+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:55:59.929253+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:56:00.929534+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:56:01.929793+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 989491 data_alloc: 218103808 data_used: 270336
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:56:02.929959+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:56:03.930120+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:56:04.930294+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:56:05.930446+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:56:06.930844+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 988900 data_alloc: 218103808 data_used: 270336
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:56:07.930987+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:56:08.931117+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:56:09.931215+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:56:10.931392+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.310416222s of 14.321680069s, submitted: 3
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:56:11.931565+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 988768 data_alloc: 218103808 data_used: 270336
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:56:12.931703+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:56:13.931932+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:56:14.932122+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:56:15.932299+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:56:16.932470+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 988768 data_alloc: 218103808 data_used: 270336
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:56:17.932637+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:56:18.932778+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:56:19.932923+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:56:20.933087+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:56:21.933306+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 988768 data_alloc: 218103808 data_used: 270336
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:56:22.935367+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:56:23.935521+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:56:24.935712+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:56:25.935862+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:56:26.936020+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 988768 data_alloc: 218103808 data_used: 270336
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:56:27.936182+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:56:28.936337+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:56:29.936461+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:56:30.936682+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:56:31.936876+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 988768 data_alloc: 218103808 data_used: 270336
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:56:32.937009+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:56:33.937103+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:56:34.937242+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:56:35.937390+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:56:36.937536+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 988768 data_alloc: 218103808 data_used: 270336
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:56:37.937757+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:56:38.937960+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:56:39.938112+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:56:40.938230+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:56:41.938392+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 988768 data_alloc: 218103808 data_used: 270336
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:56:42.938542+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:56:43.938669+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:56:44.938831+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:56:45.938982+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:56:46.939123+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 988768 data_alloc: 218103808 data_used: 270336
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:56:47.939233+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:56:48.939371+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:56:49.939473+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:56:50.939617+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:56:51.939815+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 988768 data_alloc: 218103808 data_used: 270336
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:56:52.939989+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:56:53.940108+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:56:54.940233+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:56:55.940365+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:56:56.940529+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 988768 data_alloc: 218103808 data_used: 270336
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:56:57.940677+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:56:58.940835+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:56:59.940974+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:57:00.941145+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:57:01.941332+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 988768 data_alloc: 218103808 data_used: 270336
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:57:02.941562+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:57:03.941700+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 ms_handle_reset con 0x55fce2135800 session 0x55fce1f0ab40
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:57:04.941866+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:57:05.942005+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:57:06.942165+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 988768 data_alloc: 218103808 data_used: 270336
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:57:07.942291+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:57:08.942551+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:57:09.942691+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:57:10.942863+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:57:11.943077+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 988768 data_alloc: 218103808 data_used: 270336
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:57:12.943244+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:57:13.943399+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:57:14.943597+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fcdf1d9800
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 63.857799530s of 63.862625122s, submitted: 1
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:57:15.943835+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:57:16.943974+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 988900 data_alloc: 218103808 data_used: 270336
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:57:17.944147+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:57:18.944303+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:57:19.944547+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:57:20.944691+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:57:21.944902+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990412 data_alloc: 218103808 data_used: 270336
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:57:22.945068+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:57:23.945247+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:57:24.945409+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:57:25.945549+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:57:26.945752+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990412 data_alloc: 218103808 data_used: 270336
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:57:27.945964+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:57:28.946129+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:57:29.946316+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:57:30.946513+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 16.674135208s of 16.811328888s, submitted: 2
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:57:31.946677+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990280 data_alloc: 218103808 data_used: 270336
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:57:32.946858+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:57:33.946997+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:57:34.947136+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84475904 unmapped: 3563520 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:57:35.947343+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84475904 unmapped: 3563520 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:57:36.947527+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990280 data_alloc: 218103808 data_used: 270336
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84475904 unmapped: 3563520 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:57:37.947668+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84475904 unmapped: 3563520 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:57:38.947817+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84475904 unmapped: 3563520 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:57:39.948032+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84475904 unmapped: 3563520 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:57:40.948207+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84475904 unmapped: 3563520 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:57:41.948390+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990280 data_alloc: 218103808 data_used: 270336
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84475904 unmapped: 3563520 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:57:42.948598+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84475904 unmapped: 3563520 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:57:43.948781+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84475904 unmapped: 3563520 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:57:44.948964+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84475904 unmapped: 3563520 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:57:45.949112+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84475904 unmapped: 3563520 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:57:46.949290+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990280 data_alloc: 218103808 data_used: 270336
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84475904 unmapped: 3563520 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:57:47.949515+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84475904 unmapped: 3563520 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:57:48.949666+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84475904 unmapped: 3563520 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:57:49.949809+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84475904 unmapped: 3563520 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:57:50.949977+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84475904 unmapped: 3563520 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:57:51.950129+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990280 data_alloc: 218103808 data_used: 270336
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84475904 unmapped: 3563520 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:57:52.950280+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84475904 unmapped: 3563520 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:57:53.950428+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84475904 unmapped: 3563520 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:57:54.950586+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84475904 unmapped: 3563520 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:57:55.950734+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84475904 unmapped: 3563520 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:57:56.950905+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990280 data_alloc: 218103808 data_used: 270336
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84475904 unmapped: 3563520 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 ms_handle_reset con 0x55fce212f800 session 0x55fce1e64f00
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce1f6f400
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:57:57.951047+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84484096 unmapped: 3555328 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:57:58.951197+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84484096 unmapped: 3555328 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:57:59.951346+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84484096 unmapped: 3555328 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:58:00.951548+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84484096 unmapped: 3555328 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:58:01.951718+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990280 data_alloc: 218103808 data_used: 270336
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84484096 unmapped: 3555328 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:58:02.951941+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84484096 unmapped: 3555328 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:58:03.952130+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84484096 unmapped: 3555328 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:58:04.952320+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84484096 unmapped: 3555328 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:58:05.952457+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84484096 unmapped: 3555328 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:58:06.952611+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990280 data_alloc: 218103808 data_used: 270336
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84484096 unmapped: 3555328 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:58:07.952781+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84484096 unmapped: 3555328 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:58:08.952941+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84484096 unmapped: 3555328 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:58:09.953134+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84484096 unmapped: 3555328 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:58:10.953328+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84484096 unmapped: 3555328 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:58:11.953597+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990280 data_alloc: 218103808 data_used: 270336
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84484096 unmapped: 3555328 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:58:12.953769+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84484096 unmapped: 3555328 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:58:13.953944+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84484096 unmapped: 3555328 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:58:14.954175+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84484096 unmapped: 3555328 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:58:15.954313+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84484096 unmapped: 3555328 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:58:16.954688+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990280 data_alloc: 218103808 data_used: 270336
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84484096 unmapped: 3555328 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:58:17.954847+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84484096 unmapped: 3555328 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:58:18.955009+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84484096 unmapped: 3555328 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:58:19.955225+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84484096 unmapped: 3555328 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:58:20.955732+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84484096 unmapped: 3555328 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:58:21.955960+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990280 data_alloc: 218103808 data_used: 270336
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 3547136 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:58:22.956172+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 3547136 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:58:23.956335+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 3547136 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:58:24.956592+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 3547136 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:58:25.956749+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 3547136 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:58:26.956915+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990280 data_alloc: 218103808 data_used: 270336
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 3547136 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:58:27.957130+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 3547136 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:58:28.957317+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 3547136 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:58:29.957542+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 3547136 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:58:30.957755+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 3547136 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:58:31.958040+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990280 data_alloc: 218103808 data_used: 270336
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 3547136 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:58:32.958213+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 3547136 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:58:33.958352+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 3547136 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:58:34.958545+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 3547136 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:58:35.958698+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 3547136 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:58:36.958841+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990280 data_alloc: 218103808 data_used: 270336
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 3547136 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:58:37.958984+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 3547136 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:58:38.959116+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 3547136 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:58:39.959241+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 3547136 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:58:40.959379+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 3547136 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:58:41.959575+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990280 data_alloc: 218103808 data_used: 270336
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 3547136 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:58:42.959819+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 3547136 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:58:43.959980+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 3547136 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:58:44.960137+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 3547136 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:58:45.960298+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 3547136 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:58:46.960576+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990280 data_alloc: 218103808 data_used: 270336
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 3547136 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:58:47.960734+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 3547136 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:58:48.960923+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84500480 unmapped: 3538944 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:58:49.961194+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84500480 unmapped: 3538944 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:58:50.961404+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84500480 unmapped: 3538944 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:58:51.961690+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990280 data_alloc: 218103808 data_used: 270336
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84500480 unmapped: 3538944 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:58:52.961883+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84500480 unmapped: 3538944 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:58:53.962021+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84500480 unmapped: 3538944 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:58:54.962172+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84500480 unmapped: 3538944 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:58:55.962331+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84500480 unmapped: 3538944 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:58:56.962509+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990280 data_alloc: 218103808 data_used: 270336
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84500480 unmapped: 3538944 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:58:57.962681+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84500480 unmapped: 3538944 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:58:58.962827+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84500480 unmapped: 3538944 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:58:59.962993+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84500480 unmapped: 3538944 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:59:00.963163+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84500480 unmapped: 3538944 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:59:01.963397+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990280 data_alloc: 218103808 data_used: 270336
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84500480 unmapped: 3538944 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:59:02.963602+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84500480 unmapped: 3538944 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:59:03.963753+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84500480 unmapped: 3538944 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:59:04.963872+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84500480 unmapped: 3538944 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:59:05.964006+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84500480 unmapped: 3538944 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:59:06.964128+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Cumulative writes: 9187 writes, 35K keys, 9187 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
                                           Cumulative WAL: 9187 writes, 2104 syncs, 4.37 writes per sync, written: 0.02 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 776 writes, 1212 keys, 776 commit groups, 1.0 writes per commit group, ingest: 0.40 MB, 0.00 MB/s
                                           Interval WAL: 776 writes, 372 syncs, 2.09 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55fcdd7db350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 7.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55fcdd7db350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 7.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55fcdd7db350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 7.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55fcdd7db350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 7.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55fcdd7db350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 7.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55fcdd7db350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 7.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55fcdd7db350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 7.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55fcdd7da9b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 9e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55fcdd7da9b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 9e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.021       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.021       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.02              0.00         1    0.021       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55fcdd7da9b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 9e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55fcdd7db350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 7.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55fcdd7db350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 7.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990280 data_alloc: 218103808 data_used: 270336
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84500480 unmapped: 3538944 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:59:07.964303+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84500480 unmapped: 3538944 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:59:08.964447+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84500480 unmapped: 3538944 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:59:09.964610+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84500480 unmapped: 3538944 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:59:10.964775+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84500480 unmapped: 3538944 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:59:11.965036+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990280 data_alloc: 218103808 data_used: 270336
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84500480 unmapped: 3538944 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:59:12.965175+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84500480 unmapped: 3538944 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:59:13.965310+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84500480 unmapped: 3538944 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:59:14.965450+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 ms_handle_reset con 0x55fce212a400 session 0x55fce2304000
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 ms_handle_reset con 0x55fce2128400 session 0x55fce232eb40
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84500480 unmapped: 3538944 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:59:15.965585+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84508672 unmapped: 3530752 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:59:16.965701+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990280 data_alloc: 218103808 data_used: 270336
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84508672 unmapped: 3530752 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:59:17.965803+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84508672 unmapped: 3530752 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:59:18.965924+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:59:19.966062+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84508672 unmapped: 3530752 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:59:20.966192+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84508672 unmapped: 3530752 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:59:21.966375+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84508672 unmapped: 3530752 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990280 data_alloc: 218103808 data_used: 270336
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:59:22.966538+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84508672 unmapped: 3530752 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:59:23.966733+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84508672 unmapped: 3530752 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:59:24.966904+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84508672 unmapped: 3530752 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:59:25.967079+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84508672 unmapped: 3530752 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce212f800
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 114.850975037s of 114.855049133s, submitted: 1
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:59:26.967245+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84508672 unmapped: 3530752 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990412 data_alloc: 218103808 data_used: 270336
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:59:27.967454+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84508672 unmapped: 3530752 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:59:28.967676+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84508672 unmapped: 3530752 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:59:29.967863+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84508672 unmapped: 3530752 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:59:30.968079+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84508672 unmapped: 3530752 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:59:31.968297+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84508672 unmapped: 3530752 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990412 data_alloc: 218103808 data_used: 270336
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce1f87400
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:59:32.968538+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84508672 unmapped: 3530752 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:59:33.968722+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84508672 unmapped: 3530752 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:59:34.968893+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84508672 unmapped: 3530752 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:59:35.969026+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84508672 unmapped: 3530752 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:59:36.969172+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84508672 unmapped: 3530752 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991924 data_alloc: 218103808 data_used: 270336
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:59:37.969306+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84516864 unmapped: 3522560 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.075698853s of 12.083848000s, submitted: 2
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:59:38.969555+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84516864 unmapped: 3522560 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:59:39.969693+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84516864 unmapped: 3522560 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:59:40.969819+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84516864 unmapped: 3522560 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:59:41.969984+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84516864 unmapped: 3522560 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991201 data_alloc: 218103808 data_used: 270336
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:59:42.970164+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84516864 unmapped: 3522560 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:59:43.970343+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84516864 unmapped: 3522560 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:59:44.970495+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84516864 unmapped: 3522560 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:59:45.970682+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84516864 unmapped: 3522560 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:59:46.970833+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84516864 unmapped: 3522560 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991201 data_alloc: 218103808 data_used: 270336
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:59:47.970988+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84516864 unmapped: 3522560 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:59:48.971174+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84516864 unmapped: 3522560 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:59:49.971330+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84516864 unmapped: 3522560 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:59:50.971462+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84516864 unmapped: 3522560 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:59:51.971674+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84516864 unmapped: 3522560 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991201 data_alloc: 218103808 data_used: 270336
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:59:52.971845+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84516864 unmapped: 3522560 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:59:53.972016+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84516864 unmapped: 3522560 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:59:54.972162+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84516864 unmapped: 3522560 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:59:55.972350+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84516864 unmapped: 3522560 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:59:56.972563+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84516864 unmapped: 3522560 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991201 data_alloc: 218103808 data_used: 270336
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:59:57.972699+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84516864 unmapped: 3522560 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:59:58.972848+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84516864 unmapped: 3522560 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread fragmentation_score=0.000032 took=0.000044s
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:59:59.972991+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84516864 unmapped: 3522560 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:00:00.973118+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84516864 unmapped: 3522560 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:00:01.973344+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84516864 unmapped: 3522560 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991201 data_alloc: 218103808 data_used: 270336
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:00:02.973575+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84516864 unmapped: 3522560 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:00:03.973720+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84516864 unmapped: 3522560 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:00:04.973846+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84516864 unmapped: 3522560 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:00:05.973991+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84516864 unmapped: 3522560 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:00:06.974115+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84525056 unmapped: 3514368 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991201 data_alloc: 218103808 data_used: 270336
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:00:07.974256+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84525056 unmapped: 3514368 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:00:08.974393+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84525056 unmapped: 3514368 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:00:09.974537+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84525056 unmapped: 3514368 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:00:10.974691+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84525056 unmapped: 3514368 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:00:11.974924+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84525056 unmapped: 3514368 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:00:12.975060+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991201 data_alloc: 218103808 data_used: 270336
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84525056 unmapped: 3514368 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:00:13.975206+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84525056 unmapped: 3514368 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:00:14.975369+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84525056 unmapped: 3514368 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:00:15.975552+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84525056 unmapped: 3514368 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:00:16.975681+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84525056 unmapped: 3514368 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:00:17.975871+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991201 data_alloc: 218103808 data_used: 270336
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84525056 unmapped: 3514368 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:00:18.976034+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84525056 unmapped: 3514368 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:00:19.976235+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84525056 unmapped: 3514368 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:00:20.976420+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84525056 unmapped: 3514368 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:00:21.976686+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84525056 unmapped: 3514368 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:00:22.976843+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991201 data_alloc: 218103808 data_used: 270336
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84525056 unmapped: 3514368 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:00:23.977039+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84525056 unmapped: 3514368 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:00:24.977204+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84525056 unmapped: 3514368 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:00:25.977368+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84525056 unmapped: 3514368 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:00:26.977565+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84525056 unmapped: 3514368 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:00:27.977720+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991201 data_alloc: 218103808 data_used: 270336
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84525056 unmapped: 3514368 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:00:28.977851+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84525056 unmapped: 3514368 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:00:29.978035+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84525056 unmapped: 3514368 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:00:30.978228+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84525056 unmapped: 3514368 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:00:31.978424+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84525056 unmapped: 3514368 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:00:32.978724+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991201 data_alloc: 218103808 data_used: 270336
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84525056 unmapped: 3514368 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:00:33.978868+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84525056 unmapped: 3514368 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:00:34.979049+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84525056 unmapped: 3514368 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:00:35.979220+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84525056 unmapped: 3514368 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:00:36.979403+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84525056 unmapped: 3514368 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:00:37.979550+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991201 data_alloc: 218103808 data_used: 270336
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84525056 unmapped: 3514368 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:00:38.979702+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84533248 unmapped: 3506176 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:00:39.979862+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84533248 unmapped: 3506176 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:00:40.980061+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84533248 unmapped: 3506176 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:00:41.980255+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84533248 unmapped: 3506176 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:00:42.980433+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991201 data_alloc: 218103808 data_used: 270336
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84533248 unmapped: 3506176 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:00:43.980664+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84533248 unmapped: 3506176 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:00:44.980812+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84533248 unmapped: 3506176 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:00:45.981042+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84533248 unmapped: 3506176 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:00:46.981235+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84533248 unmapped: 3506176 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:00:47.981380+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991201 data_alloc: 218103808 data_used: 270336
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84533248 unmapped: 3506176 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:00:48.981583+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84533248 unmapped: 3506176 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:00:49.981756+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84533248 unmapped: 3506176 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:00:50.981896+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84533248 unmapped: 3506176 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:00:51.982087+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84533248 unmapped: 3506176 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:00:52.982259+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991201 data_alloc: 218103808 data_used: 270336
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84533248 unmapped: 3506176 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:00:53.982413+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84533248 unmapped: 3506176 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:00:54.982558+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84533248 unmapped: 3506176 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:00:55.982729+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84533248 unmapped: 3506176 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:00:56.982871+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84533248 unmapped: 3506176 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:00:57.983013+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991201 data_alloc: 218103808 data_used: 270336
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84541440 unmapped: 3497984 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:00:58.983187+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84541440 unmapped: 3497984 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:00:59.983354+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84541440 unmapped: 3497984 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:01:00.983513+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84541440 unmapped: 3497984 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 ms_handle_reset con 0x55fce1f87400 session 0x55fce23ebe00
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 ms_handle_reset con 0x55fce212f800 session 0x55fce0f85680
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:01:01.983689+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84541440 unmapped: 3497984 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:01:02.983843+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991201 data_alloc: 218103808 data_used: 270336
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84541440 unmapped: 3497984 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:01:03.984018+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84541440 unmapped: 3497984 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:01:04.984213+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84541440 unmapped: 3497984 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:01:05.984418+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84541440 unmapped: 3497984 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:01:06.984583+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84541440 unmapped: 3497984 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:01:07.984774+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991201 data_alloc: 218103808 data_used: 270336
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84541440 unmapped: 3497984 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:01:08.984920+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84541440 unmapped: 3497984 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:01:09.985101+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84541440 unmapped: 3497984 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:01:10.985255+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84541440 unmapped: 3497984 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:01:11.985447+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84541440 unmapped: 3497984 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce2068800
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 88.896926880s of 93.672317505s, submitted: 2
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:01:12.985602+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991333 data_alloc: 218103808 data_used: 270336
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84541440 unmapped: 3497984 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:01:13.985749+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84541440 unmapped: 3497984 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:01:14.985909+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84541440 unmapped: 3497984 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:01:15.986050+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84541440 unmapped: 3497984 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:01:16.986196+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84549632 unmapped: 3489792 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:01:17.986333+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992845 data_alloc: 218103808 data_used: 270336
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84549632 unmapped: 3489792 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce2135800
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:01:18.986545+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84549632 unmapped: 3489792 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:01:19.986737+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84549632 unmapped: 3489792 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:01:20.986904+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84549632 unmapped: 3489792 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:01:21.987093+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84549632 unmapped: 3489792 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:01:22.987261+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992845 data_alloc: 218103808 data_used: 270336
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84549632 unmapped: 3489792 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:01:23.987452+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84549632 unmapped: 3489792 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.079085350s of 12.086176872s, submitted: 2
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:01:24.987653+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84549632 unmapped: 3489792 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:01:25.987791+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84549632 unmapped: 3489792 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:01:26.987994+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84549632 unmapped: 3489792 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:01:27.988187+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992122 data_alloc: 218103808 data_used: 270336
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84549632 unmapped: 3489792 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:01:28.988438+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84549632 unmapped: 3489792 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:01:29.988598+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84549632 unmapped: 3489792 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:01:30.989057+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85827584 unmapped: 2211840 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:01:31.989226+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:01:32.989415+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992122 data_alloc: 218103808 data_used: 270336
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:01:33.989668+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:01:34.989851+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:01:35.989981+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:01:36.990179+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:01:37.990251+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992122 data_alloc: 218103808 data_used: 270336
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:01:38.990382+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:01:39.990570+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:01:40.990730+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:01:41.990936+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:01:42.991063+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992122 data_alloc: 218103808 data_used: 270336
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:01:43.991264+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:01:44.991407+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:01:45.991551+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:01:46.991729+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:01:47.991987+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992122 data_alloc: 218103808 data_used: 270336
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:01:48.992121+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:01:49.992302+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:01:50.992525+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:01:51.992707+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:01:52.992839+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992122 data_alloc: 218103808 data_used: 270336
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:01:53.992975+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:01:54.993122+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:01:55.993269+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:01:56.993421+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:01:57.993564+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992122 data_alloc: 218103808 data_used: 270336
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:01:58.993725+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:01:59.993868+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:02:00.994014+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:02:01.994257+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:02:02.994441+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992122 data_alloc: 218103808 data_used: 270336
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:02:03.994557+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:02:04.994696+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:02:05.994869+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:02:06.995087+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:02:07.995264+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992122 data_alloc: 218103808 data_used: 270336
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:02:08.995420+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:02:09.995573+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:02:10.995742+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:02:11.995983+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:02:12.996164+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992122 data_alloc: 218103808 data_used: 270336
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:02:13.996342+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 ms_handle_reset con 0x55fcdf1d9800 session 0x55fce0e87c20
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:02:14.996539+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:02:15.996707+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:02:16.996891+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:02:17.997057+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992122 data_alloc: 218103808 data_used: 270336
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:02:18.997165+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:02:19.997295+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:02:20.997451+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:02:21.997695+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:02:22.997843+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992122 data_alloc: 218103808 data_used: 270336
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:02:23.998012+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:02:24.998154+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce1f87400
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 59.565917969s of 60.611633301s, submitted: 340
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:02:25.998293+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:02:26.998422+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:02:27.998573+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993766 data_alloc: 218103808 data_used: 270336
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:02:28.998765+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:02:29.998955+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:02:30.999072+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:02:31.999254+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:02:32.999360+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993766 data_alloc: 218103808 data_used: 270336
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:02:33.999548+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:02:34.999667+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:02:35.999797+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:02:36.999923+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:02:38.000099+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993175 data_alloc: 218103808 data_used: 270336
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:02:39.000292+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:02:40.000440+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.864642143s of 14.892098427s, submitted: 3
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:02:41.000631+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:02:42.000801+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:02:43.000934+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993043 data_alloc: 218103808 data_used: 270336
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:02:44.001098+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:02:45.001262+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:02:46.001448+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:02:47.001612+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:02:48.001798+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993043 data_alloc: 218103808 data_used: 270336
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:02:49.002008+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:02:50.002165+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:02:51.002296+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:02:52.002452+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:02:53.002664+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993043 data_alloc: 218103808 data_used: 270336
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:02:54.002871+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:02:55.003052+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:02:56.003253+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:02:57.003397+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:02:58.003534+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993043 data_alloc: 218103808 data_used: 270336
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:02:59.003684+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:03:00.003853+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:03:01.004000+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:03:02.004149+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:03:03.004320+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993043 data_alloc: 218103808 data_used: 270336
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:03:04.004539+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:03:05.004701+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:03:06.004839+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:03:07.004974+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:03:08.005159+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993043 data_alloc: 218103808 data_used: 270336
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:03:09.005346+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:03:10.005537+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:03:11.005690+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:03:12.005862+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:03:13.005991+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993043 data_alloc: 218103808 data_used: 270336
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:03:14.006146+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:03:15.006300+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:03:16.006430+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:03:17.006549+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:03:18.006697+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993043 data_alloc: 218103808 data_used: 270336
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:03:19.006896+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:03:20.007037+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:03:21.007315+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:03:22.007622+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:03:23.007819+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993043 data_alloc: 218103808 data_used: 270336
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:03:24.008002+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:03:25.008182+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:03:26.008445+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:03:27.008705+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:03:28.008910+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993043 data_alloc: 218103808 data_used: 270336
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:03:29.009122+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:03:30.009393+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:03:31.009617+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:03:32.010001+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:03:33.010215+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993043 data_alloc: 218103808 data_used: 270336
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:03:34.010409+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:03:35.010588+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:03:36.010740+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:03:37.010901+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:03:38.011090+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993043 data_alloc: 218103808 data_used: 270336
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:03:39.011316+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:03:40.011499+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:03:41.011774+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:03:42.012090+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:03:43.012332+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993043 data_alloc: 218103808 data_used: 270336
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:03:44.012646+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:03:45.012921+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:03:46.013186+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:03:47.013460+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:03:48.013703+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993043 data_alloc: 218103808 data_used: 270336
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:03:49.013968+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:03:50.014245+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:03:51.014639+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:03:52.014936+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:03:53.015171+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993043 data_alloc: 218103808 data_used: 270336
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:03:54.015396+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:03:55.015645+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:03:56.015907+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:03:57.016234+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:03:58.016602+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993043 data_alloc: 218103808 data_used: 270336
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:03:59.016887+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:04:00.017247+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:04:01.017510+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:04:02.017733+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:04:03.017945+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993043 data_alloc: 218103808 data_used: 270336
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:04:04.018161+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:04:05.018369+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:04:06.018616+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:04:07.018858+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:04:08.019155+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993043 data_alloc: 218103808 data_used: 270336
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:04:09.019387+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:04:10.019606+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:04:11.019828+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:04:12.020080+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:04:13.020301+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 146 handle_osd_map epochs [146,147], i have 146, src has [1,147]
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 93.336196899s of 93.339126587s, submitted: 1
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996809 data_alloc: 218103808 data_used: 270336
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:04:14.020542+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 147 handle_osd_map epochs [147,148], i have 147, src has [1,148]
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce2128400
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:04:15.020820+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 87080960 unmapped: 18792448 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fbdd8000/0x0/0x4ffc00000, data 0x973707/0xa32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 148 ms_handle_reset con 0x55fce2128400 session 0x55fcdf1c2b40
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:04:16.021151+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 87089152 unmapped: 18784256 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce212a400
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _renew_subs
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 148 handle_osd_map epochs [149,149], i have 148, src has [1,149]
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 149 heartbeat osd_stat(store_statfs(0x4fbdd8000/0x0/0x4ffc00000, data 0x973707/0xa32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [1])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:04:17.021331+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 87105536 unmapped: 27164672 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:04:18.021457+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _renew_subs
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 149 handle_osd_map epochs [150,150], i have 149, src has [1,150]
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 87113728 unmapped: 27156480 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fb5d5000/0x0/0x4ffc00000, data 0x1175832/0x1236000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 150 ms_handle_reset con 0x55fce212a400 session 0x55fcdfbad860
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1115338 data_alloc: 218103808 data_used: 270336
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:04:19.021579+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 87146496 unmapped: 27123712 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:04:20.021725+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 87146496 unmapped: 27123712 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:04:21.021863+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fb5d1000/0x0/0x4ffc00000, data 0x117793a/0x1239000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 87146496 unmapped: 27123712 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:04:22.022035+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 87146496 unmapped: 27123712 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 150 handle_osd_map epochs [150,151], i have 150, src has [1,151]
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:04:23.022159+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 88195072 unmapped: 26075136 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1117172 data_alloc: 218103808 data_used: 270336
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:04:24.022297+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 88195072 unmapped: 26075136 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:04:25.022570+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 88195072 unmapped: 26075136 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb5cf000/0x0/0x4ffc00000, data 0x117990c/0x123c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:04:26.022729+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 88195072 unmapped: 26075136 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb5cf000/0x0/0x4ffc00000, data 0x117990c/0x123c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:04:27.022878+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 88195072 unmapped: 26075136 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:04:28.023010+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 88195072 unmapped: 26075136 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1117172 data_alloc: 218103808 data_used: 270336
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:04:29.023202+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 88195072 unmapped: 26075136 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb5cf000/0x0/0x4ffc00000, data 0x117990c/0x123c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:04:30.023372+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 88195072 unmapped: 26075136 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:04:31.023550+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 88195072 unmapped: 26075136 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb5cf000/0x0/0x4ffc00000, data 0x117990c/0x123c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:04:32.023818+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 88195072 unmapped: 26075136 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb5cf000/0x0/0x4ffc00000, data 0x117990c/0x123c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:04:33.023974+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 88195072 unmapped: 26075136 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1117172 data_alloc: 218103808 data_used: 270336
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:04:34.024165+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 88195072 unmapped: 26075136 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:04:35.024335+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 88195072 unmapped: 26075136 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:04:36.024494+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 151 ms_handle_reset con 0x55fce1f8a000 session 0x55fcdfbb9e00
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 151 ms_handle_reset con 0x55fce2116000 session 0x55fce1c6cb40
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 88195072 unmapped: 26075136 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:04:37.024646+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 88195072 unmapped: 26075136 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb5cf000/0x0/0x4ffc00000, data 0x117990c/0x123c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:04:38.024782+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 88195072 unmapped: 26075136 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:04:39.024966+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1117172 data_alloc: 218103808 data_used: 270336
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 88195072 unmapped: 26075136 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:04:40.025177+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 88195072 unmapped: 26075136 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 151 ms_handle_reset con 0x55fce1f87400 session 0x55fcdfbac1e0
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:04:41.025383+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 88195072 unmapped: 26075136 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:04:42.025596+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 88195072 unmapped: 26075136 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:04:43.025750+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 88195072 unmapped: 26075136 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb5cf000/0x0/0x4ffc00000, data 0x117990c/0x123c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:04:44.025903+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1117172 data_alloc: 218103808 data_used: 270336
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 88195072 unmapped: 26075136 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:04:45.026138+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 88195072 unmapped: 26075136 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:04:46.026319+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 88195072 unmapped: 26075136 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce1f8a000
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 33.056060791s of 33.534233093s, submitted: 52
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:04:47.026527+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 88211456 unmapped: 26058752 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb5cf000/0x0/0x4ffc00000, data 0x117990c/0x123c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:04:48.026688+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 88211456 unmapped: 26058752 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:04:49.026853+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1117304 data_alloc: 218103808 data_used: 270336
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 88211456 unmapped: 26058752 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:04:50.027026+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 88211456 unmapped: 26058752 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb5cf000/0x0/0x4ffc00000, data 0x117990c/0x123c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:04:51.027205+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 88211456 unmapped: 26058752 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce2116000
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb5d0000/0x0/0x4ffc00000, data 0x117990c/0x123c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:04:52.027434+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 88211456 unmapped: 26058752 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:04:53.027671+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 88211456 unmapped: 26058752 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce2128400
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:04:54.027825+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1118108 data_alloc: 218103808 data_used: 270336
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 88211456 unmapped: 26058752 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:04:55.028020+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 88211456 unmapped: 26058752 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:04:56.028277+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 88211456 unmapped: 26058752 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:04:57.028468+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 88211456 unmapped: 26058752 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce212a400
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb5d0000/0x0/0x4ffc00000, data 0x117990c/0x123c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:04:58.028676+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb5d0000/0x0/0x4ffc00000, data 0x117990c/0x123c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 88211456 unmapped: 26058752 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb5d0000/0x0/0x4ffc00000, data 0x117990c/0x123c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:04:59.028832+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1118108 data_alloc: 218103808 data_used: 270336
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 88211456 unmapped: 26058752 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.519258499s of 12.529978752s, submitted: 3
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:05:00.028978+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 88211456 unmapped: 26058752 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:05:01.029112+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 88211456 unmapped: 26058752 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:05:02.029271+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 88211456 unmapped: 26058752 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:05:03.029422+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce212f800
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 151 ms_handle_reset con 0x55fce212f800 session 0x55fce2305a40
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce2136000
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 151 ms_handle_reset con 0x55fce2136000 session 0x55fce0f843c0
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce1f8b400
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 151 ms_handle_reset con 0x55fce1f8b400 session 0x55fce236e000
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 88227840 unmapped: 26042368 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:05:04.029654+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb5d0000/0x0/0x4ffc00000, data 0x117990c/0x123c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1120409 data_alloc: 218103808 data_used: 270336
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 88227840 unmapped: 26042368 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce1f88c00
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 151 ms_handle_reset con 0x55fce1f88c00 session 0x55fce1e9ba40
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:05:05.029851+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 96247808 unmapped: 18022400 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce2069400
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 151 ms_handle_reset con 0x55fce2069400 session 0x55fce23052c0
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce1f88c00
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _renew_subs
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 151 handle_osd_map epochs [152,152], i have 151, src has [1,152]
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:05:06.030062+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 96231424 unmapped: 18038784 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 152 handle_osd_map epochs [152,153], i have 152, src has [1,153]
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 153 ms_handle_reset con 0x55fce1f88c00 session 0x55fce23c0b40
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce1f8b400
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 153 ms_handle_reset con 0x55fce1f8b400 session 0x55fcdf19e5a0
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce212f800
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 153 ms_handle_reset con 0x55fce212f800 session 0x55fce23c1c20
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce2136000
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:05:07.030200+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 153 ms_handle_reset con 0x55fce2136000 session 0x55fce0f87860
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce1f88400
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 153 ms_handle_reset con 0x55fce1f88400 session 0x55fcdfbb63c0
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 96567296 unmapped: 17702912 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce1f88c00
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 153 ms_handle_reset con 0x55fce1f88c00 session 0x55fce23bda40
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:05:08.030330+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 96567296 unmapped: 17702912 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:05:09.030500+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1177895 data_alloc: 218103808 data_used: 7086080
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce1f8b400
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 153 ms_handle_reset con 0x55fce1f8b400 session 0x55fce112d680
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 96567296 unmapped: 17702912 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 153 heartbeat osd_stat(store_statfs(0x4fb1fc000/0x0/0x4ffc00000, data 0x1547bbd/0x160f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:05:10.030645+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce212f800
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 153 ms_handle_reset con 0x55fce212f800 session 0x55fce23c01e0
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 96567296 unmapped: 17702912 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce2136000
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.694509506s of 10.920597076s, submitted: 65
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 153 ms_handle_reset con 0x55fce2136000 session 0x55fce1e9a960
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:05:11.030810+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 153 heartbeat osd_stat(store_statfs(0x4fb1d8000/0x0/0x4ffc00000, data 0x156bbcd/0x1634000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce2136400
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce1f86c00
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 96911360 unmapped: 17358848 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:05:12.031001+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 153 heartbeat osd_stat(store_statfs(0x4fb1d8000/0x0/0x4ffc00000, data 0x156bbcd/0x1634000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 96911360 unmapped: 17358848 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 153 handle_osd_map epochs [153,154], i have 153, src has [1,154]
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:05:13.031144+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 100425728 unmapped: 13844480 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:05:14.031285+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1211407 data_alloc: 234881024 data_used: 11067392
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 100425728 unmapped: 13844480 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:05:15.031427+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 100425728 unmapped: 13844480 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:05:16.031564+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fb1d4000/0x0/0x4ffc00000, data 0x156db9f/0x1637000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 100458496 unmapped: 13811712 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:05:17.032250+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce2128400 session 0x55fcdf1e0b40
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce1f8a000 session 0x55fce22ff4a0
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 100458496 unmapped: 13811712 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:05:18.032526+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 100458496 unmapped: 13811712 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:05:19.032728+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1211407 data_alloc: 234881024 data_used: 11067392
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 100458496 unmapped: 13811712 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:05:20.033505+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 100458496 unmapped: 13811712 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:05:21.033666+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 100458496 unmapped: 13811712 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fb1d4000/0x0/0x4ffc00000, data 0x156db9f/0x1637000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:05:22.033864+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 100458496 unmapped: 13811712 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:05:23.034014+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.842867851s of 12.866385460s, submitted: 19
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 101531648 unmapped: 12738560 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:05:24.034186+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1239575 data_alloc: 234881024 data_used: 11247616
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 102121472 unmapped: 12148736 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:05:25.034315+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 101711872 unmapped: 12558336 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:05:26.034532+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 101711872 unmapped: 12558336 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fae89000/0x0/0x4ffc00000, data 0x18b9b9f/0x1983000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:05:27.034685+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 101711872 unmapped: 12558336 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce1f88c00
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:05:28.034887+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 101711872 unmapped: 12558336 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:05:29.035042+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1243807 data_alloc: 234881024 data_used: 11247616
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 101711872 unmapped: 12558336 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:05:30.035189+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 101711872 unmapped: 12558336 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:05:31.035324+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 101720064 unmapped: 12550144 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:05:32.035523+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 101720064 unmapped: 12550144 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fae89000/0x0/0x4ffc00000, data 0x18b9b9f/0x1983000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:05:33.035711+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 101720064 unmapped: 12550144 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce1f8b400
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.411628723s of 10.562047958s, submitted: 38
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:05:34.035992+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1247415 data_alloc: 234881024 data_used: 11251712
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 101720064 unmapped: 12550144 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:05:35.036202+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 101720064 unmapped: 12550144 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:05:36.036398+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 101720064 unmapped: 12550144 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:05:37.036556+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 101720064 unmapped: 12550144 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:05:38.036686+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 101720064 unmapped: 12550144 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fae89000/0x0/0x4ffc00000, data 0x18b9b9f/0x1983000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:05:39.036836+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1246824 data_alloc: 234881024 data_used: 11251712
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 101720064 unmapped: 12550144 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fae89000/0x0/0x4ffc00000, data 0x18b9b9f/0x1983000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:05:40.036988+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 101720064 unmapped: 12550144 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:05:41.037222+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 101720064 unmapped: 12550144 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:05:42.037445+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 101728256 unmapped: 12541952 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:05:43.037608+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 101728256 unmapped: 12541952 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:05:44.037755+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1246101 data_alloc: 234881024 data_used: 11251712
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 101728256 unmapped: 12541952 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce212f800
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce212f800 session 0x55fce112c3c0
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:05:45.037948+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 102498304 unmapped: 11771904 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce2131800
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.780336380s of 11.795572281s, submitted: 4
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fae89000/0x0/0x4ffc00000, data 0x18b9b9f/0x1983000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce2131800 session 0x55fce0f852c0
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce2136c00
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce2136c00 session 0x55fce1e9be00
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce1f86400
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce1f86400 session 0x55fce0f803c0
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:05:46.038108+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce1f8a000
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce1f8a000 session 0x55fce1a01a40
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce212d800
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce212d800 session 0x55fcdeddcf00
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9b53000/0x0/0x4ffc00000, data 0x1a4fb9f/0x1b19000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 104767488 unmapped: 9502720 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9b53000/0x0/0x4ffc00000, data 0x1a4fb9f/0x1b19000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:05:47.038299+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 104767488 unmapped: 9502720 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:05:48.038437+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 104775680 unmapped: 9494528 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:05:49.038630+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1312002 data_alloc: 234881024 data_used: 11780096
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 104775680 unmapped: 9494528 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:05:50.038797+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 104775680 unmapped: 9494528 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f94d0000/0x0/0x4ffc00000, data 0x20d2b9f/0x219c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce1128000
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce1128000 session 0x55fce1f0b2c0
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:05:51.038981+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 104775680 unmapped: 9494528 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:05:52.039187+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 104775680 unmapped: 9494528 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fcdf8c8400
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fcdf8c8400 session 0x55fce245d4a0
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:05:53.039346+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f94d0000/0x0/0x4ffc00000, data 0x20d2b9f/0x219c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 104775680 unmapped: 9494528 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:05:54.039525+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce1efd800
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce1efd800 session 0x55fce0f841e0
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fcdf8c8400
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fcdf8c8400 session 0x55fcdf1e0f00
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1314613 data_alloc: 234881024 data_used: 11780096
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 104800256 unmapped: 9469952 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce1128000
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce1f8a000
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:05:55.039676+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f94ce000/0x0/0x4ffc00000, data 0x20d2bd2/0x219e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 104800256 unmapped: 9469952 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:05:56.039803+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 111697920 unmapped: 2572288 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:05:57.039971+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 111706112 unmapped: 2564096 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:05:58.040104+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 111706112 unmapped: 2564096 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:05:59.040341+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1364145 data_alloc: 234881024 data_used: 19132416
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 111706112 unmapped: 2564096 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:06:00.040526+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f94ce000/0x0/0x4ffc00000, data 0x20d2bd2/0x219e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 111706112 unmapped: 2564096 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:06:01.040696+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 111706112 unmapped: 2564096 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:06:02.040849+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 111706112 unmapped: 2564096 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:06:03.040995+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 111706112 unmapped: 2564096 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f94ce000/0x0/0x4ffc00000, data 0x20d2bd2/0x219e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:06:04.041136+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1364145 data_alloc: 234881024 data_used: 19132416
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 111738880 unmapped: 2531328 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:06:05.041281+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 111738880 unmapped: 2531328 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce1f8b400 session 0x55fce1c6de00
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce1f88c00 session 0x55fce1e64000
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:06:06.041421+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 20.690454483s of 20.819118500s, submitted: 30
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 113311744 unmapped: 958464 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:06:07.041591+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 114335744 unmapped: 3080192 heap: 117415936 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:06:08.041711+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 114704384 unmapped: 2711552 heap: 117415936 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:06:09.041895+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1418941 data_alloc: 234881024 data_used: 19755008
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f8eb6000/0x0/0x4ffc00000, data 0x26e9bd2/0x27b5000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 114737152 unmapped: 2678784 heap: 117415936 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:06:10.042069+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 114737152 unmapped: 2678784 heap: 117415936 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:06:11.042245+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f8eb6000/0x0/0x4ffc00000, data 0x26e9bd2/0x27b5000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 114769920 unmapped: 2646016 heap: 117415936 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:06:12.042457+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 114769920 unmapped: 2646016 heap: 117415936 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:06:13.042639+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 114417664 unmapped: 2998272 heap: 117415936 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:06:14.042833+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1417293 data_alloc: 234881024 data_used: 19755008
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 114417664 unmapped: 2998272 heap: 117415936 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:06:15.042966+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f8eb4000/0x0/0x4ffc00000, data 0x26ecbd2/0x27b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 114417664 unmapped: 2998272 heap: 117415936 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:06:16.043107+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f8eb4000/0x0/0x4ffc00000, data 0x26ecbd2/0x27b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,1])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 114417664 unmapped: 2998272 heap: 117415936 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:06:17.043271+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.662245750s of 10.842704773s, submitted: 64
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce1128000 session 0x55fce0f80f00
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce1f8a000 session 0x55fce19c1a40
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 114417664 unmapped: 2998272 heap: 117415936 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fcdf8c8400
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce1128000
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:06:18.043387+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce1128000 session 0x55fce23bc3c0
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 109379584 unmapped: 8036352 heap: 117415936 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:06:19.043635+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9ce8000/0x0/0x4ffc00000, data 0x18b9b9f/0x1983000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1258778 data_alloc: 234881024 data_used: 11780096
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 109379584 unmapped: 8036352 heap: 117415936 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce212a400 session 0x55fcdfbad680
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce2116000 session 0x55fce1e65680
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:06:20.043792+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9ce8000/0x0/0x4ffc00000, data 0x18b9b9f/0x1983000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 109379584 unmapped: 8036352 heap: 117415936 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:06:21.043876+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 109379584 unmapped: 8036352 heap: 117415936 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:06:22.044109+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 109379584 unmapped: 8036352 heap: 117415936 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:06:23.044268+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce2136400 session 0x55fcdf19fe00
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce1f86c00 session 0x55fce1c6d2c0
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 109387776 unmapped: 8028160 heap: 117415936 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce1128000
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce2116000
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9ce8000/0x0/0x4ffc00000, data 0x18b9b9f/0x1983000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [0,0,0,0,1])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:06:24.044436+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce1128000 session 0x55fce2305860
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1173167 data_alloc: 218103808 data_used: 7614464
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 105832448 unmapped: 11583488 heap: 117415936 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:06:25.044565+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 105832448 unmapped: 11583488 heap: 117415936 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:06:26.044794+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 105832448 unmapped: 11583488 heap: 117415936 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:06:27.044979+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 105832448 unmapped: 11583488 heap: 117415936 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fa425000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:06:28.045132+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 105832448 unmapped: 11583488 heap: 117415936 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:06:29.045347+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1173167 data_alloc: 218103808 data_used: 7614464
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 105832448 unmapped: 11583488 heap: 117415936 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:06:30.045576+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce212a400
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.095330238s of 13.319671631s, submitted: 68
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 105832448 unmapped: 11583488 heap: 117415936 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:06:31.045704+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fa425000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 105832448 unmapped: 11583488 heap: 117415936 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:06:32.045890+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 105832448 unmapped: 11583488 heap: 117415936 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:06:33.046040+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 105832448 unmapped: 11583488 heap: 117415936 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:06:34.046187+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fa426000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1172284 data_alloc: 218103808 data_used: 7614464
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 105832448 unmapped: 11583488 heap: 117415936 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:06:35.046367+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fa426000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 105832448 unmapped: 11583488 heap: 117415936 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:06:36.046507+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 105832448 unmapped: 11583488 heap: 117415936 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:06:37.046619+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 105832448 unmapped: 11583488 heap: 117415936 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:06:38.046793+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 105832448 unmapped: 11583488 heap: 117415936 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:06:39.046942+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1171693 data_alloc: 218103808 data_used: 7614464
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 105832448 unmapped: 11583488 heap: 117415936 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:06:40.047101+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fa426000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 105832448 unmapped: 11583488 heap: 117415936 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:06:41.047261+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 105832448 unmapped: 11583488 heap: 117415936 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:06:42.047473+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 105832448 unmapped: 11583488 heap: 117415936 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:06:43.047707+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 105832448 unmapped: 11583488 heap: 117415936 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:06:44.047878+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fa426000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1171693 data_alloc: 218103808 data_used: 7614464
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 105832448 unmapped: 11583488 heap: 117415936 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:06:45.048066+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 105832448 unmapped: 11583488 heap: 117415936 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:06:46.048233+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 15.733273506s of 15.748806000s, submitted: 4
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 105832448 unmapped: 11583488 heap: 117415936 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:06:47.048361+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 105832448 unmapped: 11583488 heap: 117415936 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:06:48.048615+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fa426000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 105832448 unmapped: 11583488 heap: 117415936 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:06:49.048763+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1171561 data_alloc: 218103808 data_used: 7614464
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 105832448 unmapped: 11583488 heap: 117415936 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:06:50.048939+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce2136400
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce2136400 session 0x55fce1f0a1e0
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce1f88c00
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce1f88c00 session 0x55fce1f0a5a0
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce1f8a000
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce1f8a000 session 0x55fce1f0b0e0
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce1f8b400
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce1f8b400 session 0x55fcdeddd860
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce1f8b400
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce1f8b400 session 0x55fce19ff680
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 106274816 unmapped: 27549696 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:06:51.049090+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f98e3000/0x0/0x4ffc00000, data 0x1cc2b1d/0x1d89000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 106274816 unmapped: 27549696 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:06:52.049333+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 106274816 unmapped: 27549696 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:06:53.049470+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 106274816 unmapped: 27549696 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce1128000
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce1128000 session 0x55fcdfbb7680
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:06:54.049643+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1260916 data_alloc: 218103808 data_used: 7614464
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce1f88c00
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce1f88c00 session 0x55fcdf1c2f00
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 106274816 unmapped: 27549696 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:06:55.049793+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 106274816 unmapped: 27549696 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f98e3000/0x0/0x4ffc00000, data 0x1cc2b1d/0x1d89000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:06:56.049938+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce1f8a000
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce1f8a000 session 0x55fcdfbb6d20
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce2136400
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce2136400 session 0x55fcdf1dad20
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce2136400
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.256204605s of 10.393723488s, submitted: 26
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 106504192 unmapped: 27320320 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce1128000
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:06:57.050099+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 106504192 unmapped: 27320320 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:06:58.050222+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 112394240 unmapped: 21430272 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:06:59.050377+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1340884 data_alloc: 234881024 data_used: 19124224
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 112394240 unmapped: 21430272 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:07:00.050521+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f98bf000/0x0/0x4ffc00000, data 0x1ce6b1d/0x1dad000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 112394240 unmapped: 21430272 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:07:01.050713+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f98bf000/0x0/0x4ffc00000, data 0x1ce6b1d/0x1dad000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 112427008 unmapped: 21397504 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:07:02.050877+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 112427008 unmapped: 21397504 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:07:03.051030+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 112427008 unmapped: 21397504 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:07:04.051168+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1340884 data_alloc: 234881024 data_used: 19124224
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 112427008 unmapped: 21397504 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:07:05.051315+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 112427008 unmapped: 21397504 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f98bf000/0x0/0x4ffc00000, data 0x1ce6b1d/0x1dad000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:07:06.051461+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 112427008 unmapped: 21397504 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:07:07.051655+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 112427008 unmapped: 21397504 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:07:08.051831+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 112427008 unmapped: 21397504 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.100857735s of 12.104346275s, submitted: 1
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:07:09.051953+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1407304 data_alloc: 234881024 data_used: 19488768
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 113336320 unmapped: 20488192 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:07:10.052262+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9033000/0x0/0x4ffc00000, data 0x2572b1d/0x2639000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 114958336 unmapped: 18866176 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:07:11.052395+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 114958336 unmapped: 18866176 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:07:12.052575+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 114958336 unmapped: 18866176 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:07:13.052762+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 114958336 unmapped: 18866176 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:07:14.052956+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1415702 data_alloc: 234881024 data_used: 19476480
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 114966528 unmapped: 18857984 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:07:15.053122+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 115261440 unmapped: 18563072 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:07:16.053271+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f8f94000/0x0/0x4ffc00000, data 0x2611b1d/0x26d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 115261440 unmapped: 18563072 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:07:17.053409+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 115261440 unmapped: 18563072 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:07:18.053525+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 115261440 unmapped: 18563072 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:07:19.053663+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1414798 data_alloc: 234881024 data_used: 19476480
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 115261440 unmapped: 18563072 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:07:20.053882+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 115261440 unmapped: 18563072 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:07:21.054072+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f8f94000/0x0/0x4ffc00000, data 0x2611b1d/0x26d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.115612030s of 12.360255241s, submitted: 80
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 115318784 unmapped: 18505728 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:07:22.054260+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 115318784 unmapped: 18505728 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:07:23.054418+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 115318784 unmapped: 18505728 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:07:24.054606+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1414878 data_alloc: 234881024 data_used: 19476480
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 115318784 unmapped: 18505728 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:07:25.054796+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 115318784 unmapped: 18505728 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:07:26.054985+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f8f8e000/0x0/0x4ffc00000, data 0x2617b1d/0x26de000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 115318784 unmapped: 18505728 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f8f8e000/0x0/0x4ffc00000, data 0x2617b1d/0x26de000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:07:27.055152+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 115179520 unmapped: 18644992 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:07:28.055260+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 115179520 unmapped: 18644992 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:07:29.055454+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1414966 data_alloc: 234881024 data_used: 19476480
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 115179520 unmapped: 18644992 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:07:30.055571+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 115179520 unmapped: 18644992 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:07:31.055744+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f8f8b000/0x0/0x4ffc00000, data 0x261ab1d/0x26e1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 115195904 unmapped: 18628608 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:07:32.056143+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f8f8b000/0x0/0x4ffc00000, data 0x261ab1d/0x26e1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 115236864 unmapped: 18587648 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:07:33.056327+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 115236864 unmapped: 18587648 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:07:34.056537+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.287870407s of 13.304501534s, submitted: 4
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1415814 data_alloc: 234881024 data_used: 19484672
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce2136400 session 0x55fcdf1d6960
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce1128000 session 0x55fcdfe7ed20
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 115417088 unmapped: 18407424 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:07:35.056778+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce19c6000
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce19c6000 session 0x55fce19fe000
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 108109824 unmapped: 25714688 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:07:36.059286+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 108109824 unmapped: 25714688 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:07:37.059424+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 108109824 unmapped: 25714688 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:07:38.059578+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9833000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 108109824 unmapped: 25714688 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:07:39.059713+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1184202 data_alloc: 218103808 data_used: 7614464
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9833000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 108109824 unmapped: 25714688 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:07:40.059843+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 108109824 unmapped: 25714688 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:07:41.059972+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 108109824 unmapped: 25714688 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:07:42.060139+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9833000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 108109824 unmapped: 25714688 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:07:43.060286+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 108109824 unmapped: 25714688 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:07:44.060654+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1184202 data_alloc: 218103808 data_used: 7614464
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 108109824 unmapped: 25714688 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:07:45.060796+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 108109824 unmapped: 25714688 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:07:46.060936+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 108109824 unmapped: 25714688 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:07:47.061077+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9833000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 108109824 unmapped: 25714688 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:07:48.061244+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 108109824 unmapped: 25714688 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:07:49.061411+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1184202 data_alloc: 218103808 data_used: 7614464
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 108109824 unmapped: 25714688 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:07:50.061588+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 108109824 unmapped: 25714688 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:07:51.061722+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 108109824 unmapped: 25714688 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:07:52.061932+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 108109824 unmapped: 25714688 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:07:53.062130+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9833000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 108109824 unmapped: 25714688 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:07:54.062304+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1184202 data_alloc: 218103808 data_used: 7614464
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 108109824 unmapped: 25714688 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:07:55.062545+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:07:56.062775+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 108109824 unmapped: 25714688 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9833000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:07:57.062974+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 108109824 unmapped: 25714688 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9833000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:07:58.063135+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 108109824 unmapped: 25714688 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:07:59.063356+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 108109824 unmapped: 25714688 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1184202 data_alloc: 218103808 data_used: 7614464
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:08:00.063547+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 108109824 unmapped: 25714688 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:08:01.063704+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 108109824 unmapped: 25714688 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9833000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:08:02.063908+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 108109824 unmapped: 25714688 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:08:03.064061+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 108109824 unmapped: 25714688 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce212d400
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 28.696674347s of 28.839307785s, submitted: 37
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce212d400 session 0x55fce232f0e0
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce1e2ac00
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce1e2ac00 session 0x55fcdf1c2000
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce1e2ac00
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce1e2ac00 session 0x55fcdfe7f4a0
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce1128000
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce1128000 session 0x55fce2101c20
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce19c6000
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce19c6000 session 0x55fce20f7680
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:08:04.064353+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 107831296 unmapped: 25993216 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1191346 data_alloc: 218103808 data_used: 7614464
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:08:05.064509+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 107831296 unmapped: 25993216 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:08:06.064786+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 107831296 unmapped: 25993216 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce212d400
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce212d400 session 0x55fce20f70e0
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9f90000/0x0/0x4ffc00000, data 0x1205b1d/0x12cc000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:08:07.065259+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 107831296 unmapped: 25993216 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce2136400
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce2136400 session 0x55fce20f6780
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce2136400
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce2136400 session 0x55fce20f6960
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce1128000
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:08:08.065443+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 107831296 unmapped: 25993216 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:08:09.065636+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 107831296 unmapped: 25993216 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1193160 data_alloc: 218103808 data_used: 7614464
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce1128000 session 0x55fce20f74a0
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:08:10.065791+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce2116000 session 0x55fce210ad20
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fcdf8c8400 session 0x55fce23ea960
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 107446272 unmapped: 26378240 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce19c6000
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce1e2ac00
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9f6b000/0x0/0x4ffc00000, data 0x1229b2d/0x12f1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:08:11.065966+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 107446272 unmapped: 26378240 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:08:12.066134+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 107528192 unmapped: 26296320 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:08:13.066326+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 107528192 unmapped: 26296320 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:08:14.066591+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 107528192 unmapped: 26296320 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1199796 data_alloc: 218103808 data_used: 8167424
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:08:15.066725+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 107528192 unmapped: 26296320 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9f6b000/0x0/0x4ffc00000, data 0x1229b2d/0x12f1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:08:16.066892+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 107528192 unmapped: 26296320 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:08:17.067022+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 107528192 unmapped: 26296320 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9f6b000/0x0/0x4ffc00000, data 0x1229b2d/0x12f1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:08:18.067211+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 107528192 unmapped: 26296320 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:08:19.067614+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 107528192 unmapped: 26296320 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1199796 data_alloc: 218103808 data_used: 8167424
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:08:20.067764+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9f6b000/0x0/0x4ffc00000, data 0x1229b2d/0x12f1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 107528192 unmapped: 26296320 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce212d400
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 15.855402946s of 17.599184036s, submitted: 5
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:08:21.067894+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 107528192 unmapped: 26296320 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:08:22.068104+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 107528192 unmapped: 26296320 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9f6b000/0x0/0x4ffc00000, data 0x1229b2d/0x12f1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:08:23.068292+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 108036096 unmapped: 25788416 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:08:24.068452+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce212cc00
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 108249088 unmapped: 25575424 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1226656 data_alloc: 218103808 data_used: 8298496
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:08:25.068759+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 108249088 unmapped: 25575424 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:08:26.069006+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 108249088 unmapped: 25575424 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:08:27.069195+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 108249088 unmapped: 25575424 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9ce7000/0x0/0x4ffc00000, data 0x149fb2d/0x1567000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:08:28.069390+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 108314624 unmapped: 25509888 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:08:29.069586+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 108314624 unmapped: 25509888 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1226656 data_alloc: 218103808 data_used: 8298496
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:08:30.069856+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 108314624 unmapped: 25509888 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:08:31.070042+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 108314624 unmapped: 25509888 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:08:32.070428+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 108314624 unmapped: 25509888 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.634870529s of 11.777306557s, submitted: 33
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9ce7000/0x0/0x4ffc00000, data 0x149fb2d/0x1567000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:08:33.070611+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 108314624 unmapped: 25509888 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:08:34.070797+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 108314624 unmapped: 25509888 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1226524 data_alloc: 218103808 data_used: 8298496
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:08:35.070957+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 108314624 unmapped: 25509888 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9ce7000/0x0/0x4ffc00000, data 0x149fb2d/0x1567000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:08:36.071144+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 108314624 unmapped: 25509888 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-mon[74327]: from='client.17295 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:08:37.071305+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 108314624 unmapped: 25509888 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:08:38.071456+0000)
Dec 06 10:17:30 compute-0 ceph-mon[74327]: from='client.26587 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 108314624 unmapped: 25509888 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9ce7000/0x0/0x4ffc00000, data 0x149fb2d/0x1567000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-mon[74327]: from='client.25670 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-mon[74327]: from='client.? 192.168.122.102:0/2166791389' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:08:39.071638+0000)
Dec 06 10:17:30 compute-0 ceph-mon[74327]: from='client.26605 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 108314624 unmapped: 25509888 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-mon[74327]: from='client.? 192.168.122.100:0/2884993703' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1226524 data_alloc: 218103808 data_used: 8298496
Dec 06 10:17:30 compute-0 ceph-mon[74327]: from='client.25679 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-mon[74327]: from='client.17319 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:08:40.071785+0000)
Dec 06 10:17:30 compute-0 ceph-mon[74327]: from='client.? 192.168.122.101:0/4036683984' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 108314624 unmapped: 25509888 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-mon[74327]: from='client.? 192.168.122.102:0/2906548466' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce1048400
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce1048400 session 0x55fcdfeab4a0
Dec 06 10:17:30 compute-0 ceph-mon[74327]: pgmap v1093: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fcdf8c8400
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fcdf8c8400 session 0x55fcdf1d6000
Dec 06 10:17:30 compute-0 ceph-mon[74327]: from='client.26626 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce1128000
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce1128000 session 0x55fcdf1d1860
Dec 06 10:17:30 compute-0 ceph-mon[74327]: from='client.25694 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-mon[74327]: from='client.? 192.168.122.100:0/1575270102' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:08:41.071929+0000)
Dec 06 10:17:30 compute-0 ceph-mon[74327]: from='client.17337 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce2116000
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce2116000 session 0x55fce19fc780
Dec 06 10:17:30 compute-0 ceph-mon[74327]: from='client.? 192.168.122.101:0/1483867891' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce2136400
Dec 06 10:17:30 compute-0 ceph-mon[74327]: from='client.? 192.168.122.102:0/3684677759' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9ce7000/0x0/0x4ffc00000, data 0x149fb2d/0x1567000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce2136400 session 0x55fcdff170e0
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 108945408 unmapped: 24879104 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f99cd000/0x0/0x4ffc00000, data 0x17c6b8f/0x188f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:08:42.072100+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 108945408 unmapped: 24879104 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:08:43.072270+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 108945408 unmapped: 24879104 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f99cd000/0x0/0x4ffc00000, data 0x17c6b8f/0x188f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:08:44.072517+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 108945408 unmapped: 24879104 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1256818 data_alloc: 218103808 data_used: 8298496
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:08:45.072704+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 108953600 unmapped: 24870912 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f99cd000/0x0/0x4ffc00000, data 0x17c6b8f/0x188f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce2117800
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce2117800 session 0x55fcdf1e0d20
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:08:46.072859+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fcdf8c8400
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fcdf8c8400 session 0x55fcdf1d63c0
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 108953600 unmapped: 24870912 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce1128000
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce1128000 session 0x55fcdf1d7680
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce2116000
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.944304466s of 14.069879532s, submitted: 42
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce2116000 session 0x55fcdff16b40
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:08:47.073027+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 108953600 unmapped: 24870912 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce2117800
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce2136400
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f99cd000/0x0/0x4ffc00000, data 0x17c6b8f/0x188f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:08:48.073227+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 108978176 unmapped: 24846336 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f99cb000/0x0/0x4ffc00000, data 0x17c6bc2/0x1891000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:08:49.073395+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 108978176 unmapped: 24846336 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1282190 data_alloc: 234881024 data_used: 11317248
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:08:50.073589+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 108978176 unmapped: 24846336 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:08:51.073718+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f99cb000/0x0/0x4ffc00000, data 0x17c6bc2/0x1891000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 108978176 unmapped: 24846336 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f99cb000/0x0/0x4ffc00000, data 0x17c6bc2/0x1891000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:08:52.073887+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 108978176 unmapped: 24846336 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f99cb000/0x0/0x4ffc00000, data 0x17c6bc2/0x1891000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:08:53.074059+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 108978176 unmapped: 24846336 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:08:54.074249+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 108978176 unmapped: 24846336 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1282190 data_alloc: 234881024 data_used: 11317248
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:08:55.074467+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f99cb000/0x0/0x4ffc00000, data 0x17c6bc2/0x1891000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 108978176 unmapped: 24846336 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:08:56.074722+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 108978176 unmapped: 24846336 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:08:57.074863+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 108978176 unmapped: 24846336 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:08:58.075055+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 108978176 unmapped: 24846336 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.114115715s of 12.152852058s, submitted: 12
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:08:59.075241+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 115056640 unmapped: 18767872 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:09:00.075421+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1387114 data_alloc: 234881024 data_used: 12939264
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 116105216 unmapped: 17719296 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f8c60000/0x0/0x4ffc00000, data 0x2531bc2/0x25fc000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:09:01.075636+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 116588544 unmapped: 17235968 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:09:02.075820+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 116588544 unmapped: 17235968 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f8c60000/0x0/0x4ffc00000, data 0x2531bc2/0x25fc000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:09:03.076043+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 116596736 unmapped: 17227776 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:09:04.076245+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 116596736 unmapped: 17227776 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce212d400 session 0x55fcdf1d05a0
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:09:05.076607+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1401348 data_alloc: 234881024 data_used: 13160448
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 116604928 unmapped: 17219584 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f8c60000/0x0/0x4ffc00000, data 0x2531bc2/0x25fc000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:09:06.076799+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 115605504 unmapped: 18219008 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f8c3f000/0x0/0x4ffc00000, data 0x2552bc2/0x261d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:09:07.076967+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 115605504 unmapped: 18219008 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1800.1 total, 600.0 interval
                                           Cumulative writes: 11K writes, 41K keys, 11K commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.02 MB/s
                                           Cumulative WAL: 11K writes, 2905 syncs, 3.80 writes per sync, written: 0.03 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1859 writes, 5432 keys, 1859 commit groups, 1.0 writes per commit group, ingest: 5.24 MB, 0.01 MB/s
                                           Interval WAL: 1859 writes, 801 syncs, 2.32 writes per sync, written: 0.01 GB, 0.01 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:09:08.077131+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 115605504 unmapped: 18219008 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce2117800 session 0x55fcdff17860
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce2136400 session 0x55fcdfbb7680
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:09:09.077292+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fcdf8c8400
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.462458611s of 10.115522385s, submitted: 125
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 115605504 unmapped: 18219008 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:09:10.077419+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1239349 data_alloc: 218103808 data_used: 8298496
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fcdf8c8400 session 0x55fce0f87e00
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 112353280 unmapped: 21471232 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:09:11.077565+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 112353280 unmapped: 21471232 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9cf3000/0x0/0x4ffc00000, data 0x149fb2d/0x1567000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:09:12.077735+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 112353280 unmapped: 21471232 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:09:13.077935+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9cf3000/0x0/0x4ffc00000, data 0x149fb2d/0x1567000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 112353280 unmapped: 21471232 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:09:14.078163+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9cf3000/0x0/0x4ffc00000, data 0x149fb2d/0x1567000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 112353280 unmapped: 21471232 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:09:15.078321+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1238137 data_alloc: 218103808 data_used: 8298496
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 112353280 unmapped: 21471232 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce19c6000 session 0x55fce1c6d2c0
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce1e2ac00 session 0x55fcdfbb9860
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce1128000
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce2116000
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9cf3000/0x0/0x4ffc00000, data 0x149fb2d/0x1567000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:09:16.078473+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce2116000 session 0x55fcdfbb9e00
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 111427584 unmapped: 22396928 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:09:17.078652+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 111427584 unmapped: 22396928 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:09:18.078835+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fa016000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 111427584 unmapped: 22396928 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:09:19.079014+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 111427584 unmapped: 22396928 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:09:20.079254+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1206500 data_alloc: 218103808 data_used: 7614464
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 111427584 unmapped: 22396928 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:09:21.079436+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 111427584 unmapped: 22396928 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fcdf8c8400
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.026257515s of 12.886064529s, submitted: 81
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:09:22.079696+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 111427584 unmapped: 22396928 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:09:23.079853+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 111427584 unmapped: 22396928 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:09:24.080030+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fa016000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 111427584 unmapped: 22396928 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:09:25.080205+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1207421 data_alloc: 218103808 data_used: 7614464
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fa016000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 111427584 unmapped: 22396928 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:09:26.080362+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 111427584 unmapped: 22396928 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fa016000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:09:27.080659+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 111427584 unmapped: 22396928 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:09:28.080830+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 111427584 unmapped: 22396928 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:09:29.081060+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 111435776 unmapped: 22388736 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:09:30.081268+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1207289 data_alloc: 218103808 data_used: 7614464
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 111435776 unmapped: 22388736 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:09:31.081452+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 111435776 unmapped: 22388736 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:09:32.081714+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fa016000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 111435776 unmapped: 22388736 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:09:33.081884+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 111435776 unmapped: 22388736 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:09:34.082090+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 111443968 unmapped: 22380544 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:09:35.082309+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fa016000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1207289 data_alloc: 218103808 data_used: 7614464
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 111443968 unmapped: 22380544 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:09:36.082573+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 111443968 unmapped: 22380544 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:09:37.082898+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 111443968 unmapped: 22380544 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:09:38.083054+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 111443968 unmapped: 22380544 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:09:39.083248+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 111443968 unmapped: 22380544 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:09:40.083549+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1207289 data_alloc: 218103808 data_used: 7614464
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fa016000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 111443968 unmapped: 22380544 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:09:41.083705+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 111443968 unmapped: 22380544 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:09:42.083988+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 111443968 unmapped: 22380544 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:09:43.084156+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 111443968 unmapped: 22380544 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fa016000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:09:44.084413+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 111443968 unmapped: 22380544 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce19c6000
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce19c6000 session 0x55fce19c05a0
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce1e2ac00
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce1e2ac00 session 0x55fce19c14a0
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce2116000
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce2116000 session 0x55fcdedddc20
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce2136400
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce2136400 session 0x55fce19fc3c0
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:09:45.084560+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce212d400
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 23.002244949s of 23.143316269s, submitted: 3
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1215687 data_alloc: 218103808 data_used: 7614464
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce212d400 session 0x55fce19fdc20
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce212d400
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce212d400 session 0x55fcdf1dab40
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce19c6000
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce19c6000 session 0x55fce19c03c0
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce1e2ac00
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce1e2ac00 session 0x55fcdf1e0960
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce2116000
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce2116000 session 0x55fcdff5c1e0
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 111443968 unmapped: 22380544 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:09:46.084783+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9f93000/0x0/0x4ffc00000, data 0x1201b2d/0x12c9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 111443968 unmapped: 22380544 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:09:47.084980+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 111443968 unmapped: 22380544 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:09:48.085161+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 111443968 unmapped: 22380544 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:09:49.085350+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 111443968 unmapped: 22380544 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce2136400
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce2136400 session 0x55fce1a005a0
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:09:50.085587+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce19c6000
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce1e2ac00
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1218937 data_alloc: 218103808 data_used: 7618560
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9f93000/0x0/0x4ffc00000, data 0x1201b2d/0x12c9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 111460352 unmapped: 22364160 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:09:51.085767+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 111460352 unmapped: 22364160 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:09:52.086033+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 111460352 unmapped: 22364160 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:09:53.086188+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 111468544 unmapped: 22355968 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:09:54.086371+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 111468544 unmapped: 22355968 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:09:55.086631+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1222889 data_alloc: 218103808 data_used: 8151040
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 111468544 unmapped: 22355968 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:09:56.086796+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9f92000/0x0/0x4ffc00000, data 0x1201b50/0x12ca000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 111468544 unmapped: 22355968 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:09:57.086960+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 111468544 unmapped: 22355968 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:09:58.087123+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 111468544 unmapped: 22355968 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:09:59.087266+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 111468544 unmapped: 22355968 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:10:00.087423+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1222889 data_alloc: 218103808 data_used: 8151040
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 111468544 unmapped: 22355968 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:10:01.087557+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9f92000/0x0/0x4ffc00000, data 0x1201b50/0x12ca000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 111476736 unmapped: 22347776 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9f92000/0x0/0x4ffc00000, data 0x1201b50/0x12ca000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:10:02.087771+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 17.405853271s of 17.451101303s, submitted: 13
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 115630080 unmapped: 18194432 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:10:03.087970+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9f92000/0x0/0x4ffc00000, data 0x1201b50/0x12ca000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 114024448 unmapped: 19800064 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:10:04.088127+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 113565696 unmapped: 20258816 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9476000/0x0/0x4ffc00000, data 0x1d1db50/0x1de6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:10:05.088341+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1308431 data_alloc: 218103808 data_used: 8380416
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 113565696 unmapped: 20258816 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:10:06.088526+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 113565696 unmapped: 20258816 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:10:07.088650+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 113573888 unmapped: 20250624 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:10:08.088796+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9476000/0x0/0x4ffc00000, data 0x1d1db50/0x1de6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 113573888 unmapped: 20250624 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:10:09.089006+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 113573888 unmapped: 20250624 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:10:10.089250+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1307631 data_alloc: 218103808 data_used: 8380416
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9474000/0x0/0x4ffc00000, data 0x1d1fb50/0x1de8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 113573888 unmapped: 20250624 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:10:11.089374+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 113573888 unmapped: 20250624 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:10:12.089560+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 113573888 unmapped: 20250624 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9474000/0x0/0x4ffc00000, data 0x1d1fb50/0x1de8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:10:13.089756+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 113573888 unmapped: 20250624 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:10:14.089933+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 113573888 unmapped: 20250624 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:10:15.090140+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1307631 data_alloc: 218103808 data_used: 8380416
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 113573888 unmapped: 20250624 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.168425560s of 13.361434937s, submitted: 78
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:10:16.090339+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 113573888 unmapped: 20250624 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9473000/0x0/0x4ffc00000, data 0x1d20b50/0x1de9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:10:17.090572+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 113573888 unmapped: 20250624 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:10:18.090701+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 113573888 unmapped: 20250624 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:10:19.090887+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 113573888 unmapped: 20250624 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:10:20.091115+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1307855 data_alloc: 218103808 data_used: 8380416
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 113573888 unmapped: 20250624 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:10:21.091272+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9473000/0x0/0x4ffc00000, data 0x1d20b50/0x1de9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 113573888 unmapped: 20250624 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:10:22.091535+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 113582080 unmapped: 20242432 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:10:23.091662+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 113582080 unmapped: 20242432 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9472000/0x0/0x4ffc00000, data 0x1d21b50/0x1dea000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:10:24.091740+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 113582080 unmapped: 20242432 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:10:25.091992+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1307695 data_alloc: 218103808 data_used: 8380416
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 113590272 unmapped: 20234240 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:10:26.092179+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 113590272 unmapped: 20234240 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:10:27.092398+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 113590272 unmapped: 20234240 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.022357941s of 12.031913757s, submitted: 2
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:10:28.092553+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9471000/0x0/0x4ffc00000, data 0x1d22b50/0x1deb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 113606656 unmapped: 20217856 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:10:29.092746+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9471000/0x0/0x4ffc00000, data 0x1d22b50/0x1deb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 113606656 unmapped: 20217856 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:10:30.092902+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1307703 data_alloc: 218103808 data_used: 8380416
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 113606656 unmapped: 20217856 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:10:31.093023+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce2116000
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce2116000 session 0x55fce11130e0
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce212d400
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce212d400 session 0x55fce19fd4a0
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce1f1f400
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce1f1f400 session 0x55fcdfea7e00
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce1129400
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce1129400 session 0x55fcdfea6960
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fcdfeeac00
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fcdfeeac00 session 0x55fce19c05a0
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 112902144 unmapped: 24076288 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:10:32.093239+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f8cd7000/0x0/0x4ffc00000, data 0x24bbb79/0x2585000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 112902144 unmapped: 24076288 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:10:33.093655+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f8cd7000/0x0/0x4ffc00000, data 0x24bbbb2/0x2585000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 112902144 unmapped: 24076288 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:10:34.093858+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 113950720 unmapped: 23027712 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:10:35.094078+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1372406 data_alloc: 218103808 data_used: 8380416
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 113950720 unmapped: 23027712 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:10:36.094293+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce1129400
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce1129400 session 0x55fce19c03c0
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 113950720 unmapped: 23027712 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:10:37.094451+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce1f1f400
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce2116000
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 113950720 unmapped: 23027712 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f8cd4000/0x0/0x4ffc00000, data 0x24bcbb2/0x2586000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:10:38.094584+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 117358592 unmapped: 19619840 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:10:39.094769+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 117358592 unmapped: 19619840 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:10:40.095127+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1418158 data_alloc: 234881024 data_used: 15044608
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 117358592 unmapped: 19619840 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:10:41.095341+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 117358592 unmapped: 19619840 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:10:42.095560+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f8cd4000/0x0/0x4ffc00000, data 0x24bcbb2/0x2586000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 117358592 unmapped: 19619840 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:10:43.095767+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 117358592 unmapped: 19619840 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:10:44.095994+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 117358592 unmapped: 19619840 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:10:45.096153+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1418158 data_alloc: 234881024 data_used: 15044608
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 17.528182983s of 17.646516800s, submitted: 39
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 117358592 unmapped: 19619840 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:10:46.096322+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 117358592 unmapped: 19619840 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:10:47.096465+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 117358592 unmapped: 19619840 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:10:48.096668+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f8cd3000/0x0/0x4ffc00000, data 0x24bdbb2/0x2587000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 117415936 unmapped: 19562496 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:10:49.096846+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 119169024 unmapped: 17809408 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:10:50.097754+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1484588 data_alloc: 234881024 data_used: 15458304
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 120528896 unmapped: 16449536 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:10:51.097931+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 120528896 unmapped: 16449536 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:10:52.098135+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 120528896 unmapped: 16449536 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:10:53.098269+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 120528896 unmapped: 16449536 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:10:54.098420+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f85da000/0x0/0x4ffc00000, data 0x2bb5bb2/0x2c7f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 120528896 unmapped: 16449536 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:10:55.098551+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1485044 data_alloc: 234881024 data_used: 15536128
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 120528896 unmapped: 16449536 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.188508034s of 10.414656639s, submitted: 108
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:10:56.098974+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 120528896 unmapped: 16449536 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:10:57.099363+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 119906304 unmapped: 17072128 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:10:58.099507+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 119906304 unmapped: 17072128 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:10:59.099731+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f85dc000/0x0/0x4ffc00000, data 0x2bb6bb2/0x2c80000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 119906304 unmapped: 17072128 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:11:00.099923+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1479996 data_alloc: 234881024 data_used: 15540224
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 119906304 unmapped: 17072128 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:11:01.100114+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 119914496 unmapped: 17063936 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:11:02.100336+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce1f1f400 session 0x55fce1a012c0
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce2116000 session 0x55fce19c1a40
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f85da000/0x0/0x4ffc00000, data 0x2bb6bb2/0x2c80000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce212d400
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 119922688 unmapped: 17055744 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:11:03.100458+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce212d400 session 0x55fce210ad20
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 116588544 unmapped: 20389888 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:11:04.100600+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 116588544 unmapped: 20389888 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:11:05.100729+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f946d000/0x0/0x4ffc00000, data 0x1d25b50/0x1dee000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1320544 data_alloc: 218103808 data_used: 8380416
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 116588544 unmapped: 20389888 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:11:06.100864+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 116588544 unmapped: 20389888 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:11:07.100988+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.276282310s of 11.366744995s, submitted: 33
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 116596736 unmapped: 20381696 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:11:08.101114+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 116596736 unmapped: 20381696 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:11:09.101239+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 116596736 unmapped: 20381696 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:11:10.101369+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1320712 data_alloc: 218103808 data_used: 8380416
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 116596736 unmapped: 20381696 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:11:11.101586+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f946d000/0x0/0x4ffc00000, data 0x1d25b50/0x1dee000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 116596736 unmapped: 20381696 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:11:12.101841+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 116596736 unmapped: 20381696 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:11:13.101967+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce19c6000 session 0x55fcdfb2d0e0
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce1e2ac00 session 0x55fce21010e0
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce19c6000
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 116596736 unmapped: 20381696 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:11:14.102092+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce19c6000 session 0x55fce1c6c3c0
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fa015000/0x0/0x4ffc00000, data 0x117fb40/0x1247000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 115515392 unmapped: 21463040 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:11:15.102281+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1228930 data_alloc: 218103808 data_used: 7614464
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 115515392 unmapped: 21463040 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:11:16.102448+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 115515392 unmapped: 21463040 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:11:17.103323+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:11:18.103460+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 115515392 unmapped: 21463040 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:11:19.103648+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 115515392 unmapped: 21463040 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:11:20.103797+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 115515392 unmapped: 21463040 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fa015000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1228930 data_alloc: 218103808 data_used: 7614464
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:11:21.103920+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 115515392 unmapped: 21463040 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:11:22.104072+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 115515392 unmapped: 21463040 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:11:23.104615+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 115515392 unmapped: 21463040 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:11:24.104746+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 115515392 unmapped: 21463040 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fa015000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:11:25.104927+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 115515392 unmapped: 21463040 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1228930 data_alloc: 218103808 data_used: 7614464
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fa015000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:11:26.126024+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 115515392 unmapped: 21463040 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:11:27.126213+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 115515392 unmapped: 21463040 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:11:28.126392+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 115515392 unmapped: 21463040 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:11:29.126563+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 115515392 unmapped: 21463040 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:11:30.126832+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 115515392 unmapped: 21463040 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 23.033533096s of 23.158624649s, submitted: 40
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1228638 data_alloc: 218103808 data_used: 7614464
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fa015000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:11:31.127206+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 115580928 unmapped: 21397504 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:11:32.127384+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 115703808 unmapped: 21274624 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fa016000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [0,1,0,1])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce212cc00 session 0x55fce1a010e0
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce212a400 session 0x55fce1116000
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:11:33.127673+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 115851264 unmapped: 21127168 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:11:34.127865+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 115851264 unmapped: 21127168 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:11:35.128011+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 115859456 unmapped: 21118976 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fa016000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1228638 data_alloc: 218103808 data_used: 7614464
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:11:36.128228+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 115859456 unmapped: 21118976 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:11:37.128545+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 115859456 unmapped: 21118976 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fa016000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:11:38.128723+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 115859456 unmapped: 21118976 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce1129400
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.17340 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:11:39.128835+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 119185408 unmapped: 17793024 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce1129400 session 0x55fcdf1d6b40
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce1129400
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce1129400 session 0x55fce19fc780
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce19c6000
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce19c6000 session 0x55fce23043c0
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce1e2ac00
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce1e2ac00 session 0x55fce1e64780
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce212a400
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce212a400 session 0x55fce1e65a40
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9838000/0x0/0x4ffc00000, data 0x195db1d/0x1a24000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:11:40.129034+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 115900416 unmapped: 21078016 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1286739 data_alloc: 218103808 data_used: 7614464
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:11:41.129201+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 115900416 unmapped: 21078016 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:11:42.129646+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 115900416 unmapped: 21078016 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9838000/0x0/0x4ffc00000, data 0x195db1d/0x1a24000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:11:43.129810+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 115908608 unmapped: 21069824 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce212cc00
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.803172112s of 13.002218246s, submitted: 386
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:11:44.129966+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 115908608 unmapped: 21069824 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:11:45.130133+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 115908608 unmapped: 21069824 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce1f1f400
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce1f1f400 session 0x55fcdeddda40
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1287696 data_alloc: 218103808 data_used: 7614464
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce1f1f400
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce1129400
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:11:46.130271+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 115933184 unmapped: 21045248 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9838000/0x0/0x4ffc00000, data 0x195db1d/0x1a24000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:11:47.130433+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 117055488 unmapped: 19922944 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:11:48.130612+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 119332864 unmapped: 17645568 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:11:49.130800+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 119332864 unmapped: 17645568 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9838000/0x0/0x4ffc00000, data 0x195db1d/0x1a24000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:11:50.130998+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 119332864 unmapped: 17645568 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1335992 data_alloc: 234881024 data_used: 14737408
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:11:51.131137+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 119332864 unmapped: 17645568 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:11:52.131292+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 119332864 unmapped: 17645568 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce1f1f400 session 0x55fce19fef00
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce1129400 session 0x55fce23bc1e0
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:11:53.131416+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce2128c00
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 119332864 unmapped: 17645568 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce2128c00 session 0x55fce210a5a0
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fa016000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:11:54.131948+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 116039680 unmapped: 20938752 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fa016000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:11:55.132106+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 116039680 unmapped: 20938752 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1232510 data_alloc: 218103808 data_used: 7614464
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:11:56.132308+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 116039680 unmapped: 20938752 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:11:57.132436+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 116039680 unmapped: 20938752 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:11:58.132652+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 116039680 unmapped: 20938752 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:11:59.132823+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 116039680 unmapped: 20938752 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:12:00.132980+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 116039680 unmapped: 20938752 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 16.927818298s of 17.092643738s, submitted: 51
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1232378 data_alloc: 218103808 data_used: 7614464
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fa016000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:12:01.133113+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 116039680 unmapped: 20938752 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fa016000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:12:02.133278+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 116039680 unmapped: 20938752 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:12:03.133408+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 116039680 unmapped: 20938752 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:12:04.133631+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 116039680 unmapped: 20938752 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:12:05.133827+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 116039680 unmapped: 20938752 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1232378 data_alloc: 218103808 data_used: 7614464
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce2117c00
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce2117c00 session 0x55fce20f72c0
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:12:06.134025+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce0c60c00
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce0c60c00 session 0x55fce23050e0
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce0c60c00
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce0c60c00 session 0x55fce20f6000
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce1129400
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 116539392 unmapped: 24641536 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce1129400 session 0x55fcdfb2dc20
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce1f1f400
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce1f1f400 session 0x55fce0f863c0
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:12:07.134222+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 116547584 unmapped: 24633344 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f94ca000/0x0/0x4ffc00000, data 0x1ccab7f/0x1d92000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:12:08.134378+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 116555776 unmapped: 24625152 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:12:09.134546+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 116555776 unmapped: 24625152 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:12:10.134698+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 116555776 unmapped: 24625152 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce2117c00
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce2117c00 session 0x55fce23c14a0
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f94ca000/0x0/0x4ffc00000, data 0x1ccab7f/0x1d92000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1325714 data_alloc: 218103808 data_used: 7614464
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:12:11.134821+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce2128c00
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.299762726s of 10.467995644s, submitted: 46
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce19bc800
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 116555776 unmapped: 24625152 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:12:12.134980+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 117448704 unmapped: 23732224 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce2128c00 session 0x55fce23bc000
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce19bc800 session 0x55fcdf1dab40
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:12:13.135150+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 120152064 unmapped: 21028864 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce2128c00
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce2128c00 session 0x55fce19fde00
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:12:14.135268+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 117047296 unmapped: 24133632 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9f92000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:12:15.135371+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 117047296 unmapped: 24133632 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1242420 data_alloc: 218103808 data_used: 7614464
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:12:16.135470+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 117047296 unmapped: 24133632 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:12:17.135632+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 117047296 unmapped: 24133632 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:12:18.135870+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 117047296 unmapped: 24133632 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:12:19.136200+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 117047296 unmapped: 24133632 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:12:20.136540+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9f92000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 117047296 unmapped: 24133632 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1242420 data_alloc: 218103808 data_used: 7614464
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:12:21.136796+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 117047296 unmapped: 24133632 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:12:22.137089+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 117047296 unmapped: 24133632 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:12:23.137345+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 117055488 unmapped: 24125440 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:12:24.137602+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 117055488 unmapped: 24125440 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:12:25.137864+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 117055488 unmapped: 24125440 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1242420 data_alloc: 218103808 data_used: 7614464
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:12:26.138092+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9f92000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 117055488 unmapped: 24125440 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:12:27.138305+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9f92000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 117055488 unmapped: 24125440 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:12:28.138453+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9f92000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 24117248 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:12:29.138692+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 24117248 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:12:30.138911+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 24117248 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1242420 data_alloc: 218103808 data_used: 7614464
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:12:31.139138+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 24117248 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:12:32.139396+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 24117248 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9f92000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:12:33.139554+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 24109056 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:12:34.139755+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 24109056 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:12:35.139927+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 24109056 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1242420 data_alloc: 218103808 data_used: 7614464
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:12:36.140097+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 24109056 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:12:37.140256+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 24109056 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9f92000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:12:38.140403+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 24109056 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9f92000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:12:39.140539+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 24109056 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce1f1e000
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce1f1e000 session 0x55fce112c000
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce19e3000
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce19e3000 session 0x55fce23bd0e0
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce1129000
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce1129000 session 0x55fce112d680
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce1129000
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce1129000 session 0x55fce0f87680
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce19bc800
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 28.559396744s of 28.672395706s, submitted: 41
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce19bc800 session 0x55fce19c1c20
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce19e3000
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce19e3000 session 0x55fcdeddc5a0
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce1f1e000
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce1f1e000 session 0x55fce0f86000
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce2128c00
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce2128c00 session 0x55fce1fae960
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:12:40.140687+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce2128c00
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce2128c00 session 0x55fce0e89a40
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 24117248 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1291217 data_alloc: 218103808 data_used: 7614464
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:12:41.140839+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 24117248 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:12:42.141040+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9b35000/0x0/0x4ffc00000, data 0x1660b1d/0x1727000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 24117248 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:12:43.141304+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 24117248 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:12:44.141529+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 24117248 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce1129000
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce1129000 session 0x55fce2101c20
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:12:45.141675+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce19bc800
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce19bc800 session 0x55fce0f865a0
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 24117248 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce19e3000
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce19e3000 session 0x55fce23eaf00
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce1f1e000
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce1f1e000 session 0x55fce112dc20
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1293031 data_alloc: 218103808 data_used: 7614464
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:12:46.141801+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 24109056 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce1129000
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce19bc800
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:12:47.141975+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 116162560 unmapped: 25018368 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9b34000/0x0/0x4ffc00000, data 0x1660b2d/0x1728000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:12:48.142158+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 116097024 unmapped: 25083904 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:12:49.142293+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 116097024 unmapped: 25083904 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:12:50.142497+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 116097024 unmapped: 25083904 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1317675 data_alloc: 234881024 data_used: 11227136
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:12:51.142677+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 116097024 unmapped: 25083904 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:12:52.142955+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9b34000/0x0/0x4ffc00000, data 0x1660b2d/0x1728000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 116097024 unmapped: 25083904 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:12:53.143112+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 116097024 unmapped: 25083904 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce2135800 session 0x55fce1e650e0
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce2068800 session 0x55fce1e652c0
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9b34000/0x0/0x4ffc00000, data 0x1660b2d/0x1728000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:12:54.143362+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 116097024 unmapped: 25083904 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:12:55.143537+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 116097024 unmapped: 25083904 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce0c60800 session 0x55fcdf1c3c20
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce19bdc00
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1317675 data_alloc: 234881024 data_used: 11227136
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:12:56.143702+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 116097024 unmapped: 25083904 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:12:57.143963+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 116097024 unmapped: 25083904 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: mgrc ms_handle_reset ms_handle_reset con 0x55fcdfeeb800
Dec 06 10:17:30 compute-0 ceph-osd[82803]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/3885409716
Dec 06 10:17:30 compute-0 ceph-osd[82803]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/3885409716,v1:192.168.122.100:6801/3885409716]
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: get_auth_request con 0x55fce1f1e000 auth_method 0
Dec 06 10:17:30 compute-0 ceph-osd[82803]: mgrc handle_mgr_configure stats_period=5
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce1f6f400 session 0x55fce245f680
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fcdf3a9000
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:12:58.144207+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 18.341133118s of 18.417297363s, submitted: 30
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124485632 unmapped: 16695296 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9b34000/0x0/0x4ffc00000, data 0x1660b2d/0x1728000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce212cc00 session 0x55fcdf1e10e0
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:12:59.144357+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 122494976 unmapped: 18685952 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:13:00.144542+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 121569280 unmapped: 19611648 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:13:01.144669+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1423799 data_alloc: 234881024 data_used: 11702272
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 121569280 unmapped: 19611648 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f8dd8000/0x0/0x4ffc00000, data 0x23bbb2d/0x2483000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:13:02.144868+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f8dd8000/0x0/0x4ffc00000, data 0x23bbb2d/0x2483000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 121569280 unmapped: 19611648 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:13:03.144989+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 121569280 unmapped: 19611648 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:13:04.145149+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 121569280 unmapped: 19611648 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce2128c00
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:13:05.145291+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 121831424 unmapped: 19349504 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f8dd8000/0x0/0x4ffc00000, data 0x23bbb2d/0x2483000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:13:06.145449+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1420867 data_alloc: 234881024 data_used: 11702272
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 121831424 unmapped: 19349504 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:13:07.145562+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 121831424 unmapped: 19349504 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:13:08.145747+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 121831424 unmapped: 19349504 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:13:09.145920+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 121831424 unmapped: 19349504 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce2137800
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.104361534s of 11.437482834s, submitted: 145
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f8db8000/0x0/0x4ffc00000, data 0x23dcb2d/0x24a4000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:13:10.146134+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 121831424 unmapped: 19349504 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fcdf8c9c00
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:13:11.146347+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1423207 data_alloc: 234881024 data_used: 11714560
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 121831424 unmapped: 19349504 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:13:12.146567+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce1129000 session 0x55fcdfbb8960
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce19bc800 session 0x55fcdf1e1c20
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce1129000
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 119209984 unmapped: 21970944 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce1129000 session 0x55fce232ed20
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:13:13.146757+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 119209984 unmapped: 21970944 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:13:14.146965+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 119209984 unmapped: 21970944 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:13:15.147140+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 119209984 unmapped: 21970944 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f98f2000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:13:16.147326+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1256266 data_alloc: 218103808 data_used: 7614464
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 119209984 unmapped: 21970944 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:13:17.147526+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 119209984 unmapped: 21970944 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:13:18.147655+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 119209984 unmapped: 21970944 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:13:19.147816+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 119209984 unmapped: 21970944 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:13:20.147943+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 119209984 unmapped: 21970944 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:13:21.148081+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1257062 data_alloc: 218103808 data_used: 7614464
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fa016000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 119209984 unmapped: 21970944 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:13:22.148237+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 119209984 unmapped: 21970944 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.118231773s of 13.227775574s, submitted: 42
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:13:23.148449+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fa016000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 119218176 unmapped: 21962752 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:13:24.148643+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 119218176 unmapped: 21962752 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:13:25.148826+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 119218176 unmapped: 21962752 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:13:26.148982+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1256339 data_alloc: 218103808 data_used: 7614464
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 119226368 unmapped: 21954560 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fa016000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:13:27.149178+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 119226368 unmapped: 21954560 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:13:28.149331+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 119226368 unmapped: 21954560 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fa016000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:13:29.149500+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 119226368 unmapped: 21954560 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:13:30.149707+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 119226368 unmapped: 21954560 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:13:31.149936+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1256339 data_alloc: 218103808 data_used: 7614464
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 119226368 unmapped: 21954560 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:13:32.150173+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 119234560 unmapped: 21946368 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:13:33.150337+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 119234560 unmapped: 21946368 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:13:34.150605+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 119234560 unmapped: 21946368 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fa016000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:13:35.150840+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce19bc800
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce19bc800 session 0x55fce19c05a0
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce2068800
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce2068800 session 0x55fce0f863c0
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce212cc00
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce212cc00 session 0x55fce0f872c0
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce2135800
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce2135800 session 0x55fce0f87e00
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce2135800
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.562622070s of 12.571432114s, submitted: 2
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 120324096 unmapped: 20856832 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce2135800 session 0x55fce0f86d20
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce1129000
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce1129000 session 0x55fce20f72c0
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce19bc800
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce19bc800 session 0x55fce19fef00
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce2068800
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce2068800 session 0x55fce236ef00
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce212cc00
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce212cc00 session 0x55fce19c14a0
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:13:36.151048+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1317200 data_alloc: 218103808 data_used: 7614464
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 120332288 unmapped: 20848640 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:13:37.151253+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f988c000/0x0/0x4ffc00000, data 0x1907b8f/0x19d0000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 120332288 unmapped: 20848640 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:13:38.151452+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 120332288 unmapped: 20848640 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:13:39.151675+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 120332288 unmapped: 20848640 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:13:40.151842+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce212cc00
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce212cc00 session 0x55fcdf1e05a0
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 120365056 unmapped: 20815872 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce1129000
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce19bc800
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:13:41.151990+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1319610 data_alloc: 218103808 data_used: 7618560
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 120365056 unmapped: 20815872 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fcdf8c9c00 session 0x55fcdff17c20
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce2128c00 session 0x55fce19fe000
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:13:42.152154+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 122667008 unmapped: 18513920 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:13:43.152367+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f988b000/0x0/0x4ffc00000, data 0x1907bb2/0x19d1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 122667008 unmapped: 18513920 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:13:44.152545+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 122667008 unmapped: 18513920 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:13:45.152689+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 122667008 unmapped: 18513920 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:13:46.152839+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1373114 data_alloc: 234881024 data_used: 15515648
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f988b000/0x0/0x4ffc00000, data 0x1907bb2/0x19d1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 122667008 unmapped: 18513920 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:13:47.152989+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 122667008 unmapped: 18513920 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:13:48.153118+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 122667008 unmapped: 18513920 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:13:49.153250+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 122667008 unmapped: 18513920 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:13:50.153418+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 122667008 unmapped: 18513920 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f988b000/0x0/0x4ffc00000, data 0x1907bb2/0x19d1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:13:51.153554+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1373114 data_alloc: 234881024 data_used: 15515648
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 122667008 unmapped: 18513920 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce2068800
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce2068800 session 0x55fce1112d20
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce2135800
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce2135800 session 0x55fce210a960
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fcdf8c9c00
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fcdf8c9c00 session 0x55fce2101860
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:13:52.153713+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce2068800
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce2068800 session 0x55fce0e86d20
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce2128c00
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 16.591188431s of 16.790163040s, submitted: 37
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce2128c00 session 0x55fcdfbb9e00
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce212cc00
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce212cc00 session 0x55fce20f6f00
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce1efcc00
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce1efcc00 session 0x55fce1e64960
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fcdf8c9c00
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fcdf8c9c00 session 0x55fcdf19fe00
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce2068800
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce2068800 session 0x55fcdf19f680
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce2128c00
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 122880000 unmapped: 18300928 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:13:53.153903+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 127025152 unmapped: 14155776 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:13:54.154033+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 127025152 unmapped: 14155776 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:13:55.154169+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 128925696 unmapped: 12255232 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:13:56.154339+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1515640 data_alloc: 234881024 data_used: 15814656
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f8312000/0x0/0x4ffc00000, data 0x2a61bc1/0x2b2c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 128925696 unmapped: 12255232 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:13:57.154598+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce212cc00
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce212cc00 session 0x55fce0f86f00
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce212e800
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce0c56000
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 129261568 unmapped: 11919360 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:13:58.154761+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 129269760 unmapped: 11911168 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:13:59.154877+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 132112384 unmapped: 9068544 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:14:00.155056+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 132243456 unmapped: 8937472 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:14:01.155219+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1543820 data_alloc: 234881024 data_used: 20639744
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f82fc000/0x0/0x4ffc00000, data 0x2a85bc1/0x2b50000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 132276224 unmapped: 8904704 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:14:02.155595+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 132276224 unmapped: 8904704 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:14:03.155758+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 132276224 unmapped: 8904704 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:14:04.155936+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 132276224 unmapped: 8904704 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:14:05.156144+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f82fc000/0x0/0x4ffc00000, data 0x2a85bc1/0x2b50000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 132276224 unmapped: 8904704 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:14:06.156446+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1543229 data_alloc: 234881024 data_used: 20639744
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 132284416 unmapped: 8896512 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:14:07.156695+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 132284416 unmapped: 8896512 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:14:08.156847+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 132284416 unmapped: 8896512 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:14:09.157069+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f82fc000/0x0/0x4ffc00000, data 0x2a85bc1/0x2b50000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 17.123311996s of 17.423311234s, submitted: 113
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 133603328 unmapped: 7577600 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:14:10.157208+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 133292032 unmapped: 7888896 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:14:11.157388+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1607863 data_alloc: 234881024 data_used: 20910080
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 132939776 unmapped: 8241152 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f7c23000/0x0/0x4ffc00000, data 0x315ebc1/0x3229000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:14:12.157595+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 132972544 unmapped: 8208384 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:14:13.157759+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 132972544 unmapped: 8208384 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:14:14.157967+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 132980736 unmapped: 8200192 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:14:15.158150+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 132980736 unmapped: 8200192 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f7c23000/0x0/0x4ffc00000, data 0x315ebc1/0x3229000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:14:16.158352+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1607863 data_alloc: 234881024 data_used: 20910080
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 133013504 unmapped: 8167424 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:14:17.158532+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 133013504 unmapped: 8167424 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:14:18.158712+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 133013504 unmapped: 8167424 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:14:19.158894+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f7c02000/0x0/0x4ffc00000, data 0x317fbc1/0x324a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 133013504 unmapped: 8167424 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:14:20.159034+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 133013504 unmapped: 8167424 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:14:21.159197+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1606903 data_alloc: 234881024 data_used: 20910080
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 133021696 unmapped: 8159232 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:14:22.159407+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.647055626s of 12.847999573s, submitted: 62
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 133087232 unmapped: 8093696 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f7c02000/0x0/0x4ffc00000, data 0x317fbc1/0x324a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:14:23.159514+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 133087232 unmapped: 8093696 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:14:24.159679+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 133087232 unmapped: 8093696 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:14:25.159838+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce212e800 session 0x55fcdfea7e00
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce0c56000 session 0x55fce19fc3c0
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 133095424 unmapped: 8085504 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fcdf8c9c00
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:14:26.159984+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fcdf8c9c00 session 0x55fcdf1d63c0
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1480191 data_alloc: 234881024 data_used: 15818752
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 130269184 unmapped: 10911744 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:14:27.160170+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 130269184 unmapped: 10911744 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:14:28.160318+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f87e9000/0x0/0x4ffc00000, data 0x2599bb2/0x2663000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 130269184 unmapped: 10911744 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:14:29.160569+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 130269184 unmapped: 10911744 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f87e9000/0x0/0x4ffc00000, data 0x2599bb2/0x2663000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:14:30.160768+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 130269184 unmapped: 10911744 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:14:31.160907+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1480191 data_alloc: 234881024 data_used: 15818752
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 130269184 unmapped: 10911744 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:14:32.161081+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 130269184 unmapped: 10911744 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:14:33.161237+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.591730118s of 10.648483276s, submitted: 22
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce1129000 session 0x55fcdfbb7a40
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce19bc800 session 0x55fce1c6d2c0
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce2068800
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f87e9000/0x0/0x4ffc00000, data 0x2599bb2/0x2663000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce2068800 session 0x55fcdeddd860
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124895232 unmapped: 16285696 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:14:34.161373+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9c04000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124895232 unmapped: 16285696 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:14:35.161547+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124895232 unmapped: 16285696 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:14:36.161678+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1281921 data_alloc: 218103808 data_used: 7614464
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124895232 unmapped: 16285696 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:14:37.161818+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124895232 unmapped: 16285696 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:14:38.161947+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124895232 unmapped: 16285696 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:14:39.162077+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124895232 unmapped: 16285696 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:14:40.162227+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9c04000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124895232 unmapped: 16285696 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:14:41.162362+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1281921 data_alloc: 218103808 data_used: 7614464
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124895232 unmapped: 16285696 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:14:42.162573+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124895232 unmapped: 16285696 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:14:43.162723+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9c04000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124895232 unmapped: 16285696 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:14:44.162876+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124895232 unmapped: 16285696 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:14:45.163002+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124895232 unmapped: 16285696 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:14:46.163117+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1281921 data_alloc: 218103808 data_used: 7614464
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124895232 unmapped: 16285696 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:14:47.163230+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124895232 unmapped: 16285696 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:14:48.163364+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9c04000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124895232 unmapped: 16285696 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:14:49.163547+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124903424 unmapped: 16277504 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:14:50.163665+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9c04000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124903424 unmapped: 16277504 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:14:51.163823+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1281921 data_alloc: 218103808 data_used: 7614464
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124903424 unmapped: 16277504 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9c04000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:14:52.164029+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124903424 unmapped: 16277504 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:14:53.164205+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124903424 unmapped: 16277504 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:14:54.164375+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9c04000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124903424 unmapped: 16277504 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:14:55.164554+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124903424 unmapped: 16277504 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:14:56.164704+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1281921 data_alloc: 218103808 data_used: 7614464
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124903424 unmapped: 16277504 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:14:57.164799+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9c04000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124911616 unmapped: 16269312 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:14:58.164986+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124911616 unmapped: 16269312 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:14:59.165130+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce212cc00
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 26.378582001s of 26.526098251s, submitted: 53
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce212cc00 session 0x55fce19c05a0
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fcdf8c9c00
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fcdf8c9c00 session 0x55fce1a01c20
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce1129000
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce1129000 session 0x55fcde5783c0
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce19bc800
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce19bc800 session 0x55fce112dc20
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce2068800
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce2068800 session 0x55fce20f7680
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124321792 unmapped: 22183936 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:15:00.165262+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124321792 unmapped: 22183936 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:15:01.165388+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1323411 data_alloc: 218103808 data_used: 7614464
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124321792 unmapped: 22183936 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:15:02.165573+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124329984 unmapped: 22175744 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:15:03.165706+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9673000/0x0/0x4ffc00000, data 0x1712b1d/0x17d9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce212cc00
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce212cc00 session 0x55fce1e652c0
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124329984 unmapped: 22175744 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:15:04.165862+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fcdf8c9c00
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fcdf8c9c00 session 0x55fce1e650e0
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9673000/0x0/0x4ffc00000, data 0x1712b1d/0x17d9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124329984 unmapped: 22175744 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:15:05.166005+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce1129000
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce1129000 session 0x55fce19c03c0
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce19bc800
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce19bc800 session 0x55fcdf1d6000
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124354560 unmapped: 22151168 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:15:06.166268+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1326386 data_alloc: 218103808 data_used: 7614464
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce2068800
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce212e800
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124354560 unmapped: 22151168 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:15:07.166419+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 125796352 unmapped: 20709376 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:15:08.166618+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9672000/0x0/0x4ffc00000, data 0x1712b2d/0x17da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 125804544 unmapped: 20701184 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:15:09.166778+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 125804544 unmapped: 20701184 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:15:10.166927+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 125804544 unmapped: 20701184 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:15:11.167089+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1365278 data_alloc: 234881024 data_used: 13447168
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 125804544 unmapped: 20701184 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:15:12.167311+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 125804544 unmapped: 20701184 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:15:13.167539+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9672000/0x0/0x4ffc00000, data 0x1712b2d/0x17da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 125812736 unmapped: 20692992 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:15:14.167670+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 125812736 unmapped: 20692992 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:15:15.167801+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 125812736 unmapped: 20692992 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1365278 data_alloc: 234881024 data_used: 13447168
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:15:16.803934+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 125812736 unmapped: 20692992 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9672000/0x0/0x4ffc00000, data 0x1712b2d/0x17da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9672000/0x0/0x4ffc00000, data 0x1712b2d/0x17da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
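Identical heartbeat lines occasionally land back to back, as just above. When skimming a burst like this, collapsing adjacent duplicates syslog-style ("last message repeated N times") keeps the signal visible; a small sketch keyed on the message body after the ceph-osd[pid] prefix:

```python
from itertools import groupby

# Collapse adjacent duplicate journal lines, keyed on the message body.
def body(line):
    return line.split("]: ", 1)[-1]

def collapse(lines):
    for _, run in groupby(lines, key=body):
        run = list(run)
        suffix = f"  [repeated {len(run)} times]" if len(run) > 1 else ""
        yield run[0] + suffix

sample = [
    "Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat ...",
    "Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat ...",
    "Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick",
]
print("\n".join(collapse(sample)))
```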
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:15:17.804106+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 125812736 unmapped: 20692992 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 18.411964417s of 18.474147797s, submitted: 12
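The _kv_sync_thread utilization line summarizes a reporting window for the RocksDB commit thread: idle time out of the window, plus how many transactions it submitted. For the line above that works out to roughly 0.34% busy across 12 commits, about 5 ms of commit work each. The arithmetic as a sketch:

```python
import re

# Turn one _kv_sync_thread utilization line into a busy fraction and an
# average per-commit cost.
LINE = ("bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: "
        "idle 18.411964417s of 18.474147797s, submitted: 12")

idle, window, submitted = (float(x) for x in re.search(
    r"idle ([\d.]+)s of ([\d.]+)s, submitted: (\d+)", LINE).groups())
busy = window - idle
per_commit = f", {busy / submitted * 1000:.2f} ms/commit" if submitted else ""
print(f"busy {busy:.3f}s of {window:.3f}s ({busy / window:.2%})"
      f" over {submitted:.0f} commits{per_commit}")
```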
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:15:18.804421+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 127860736 unmapped: 18644992 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:15:19.804580+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f91f5000/0x0/0x4ffc00000, data 0x1b8fb2d/0x1c57000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 127901696 unmapped: 18604032 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:15:20.804715+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 127901696 unmapped: 18604032 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1404590 data_alloc: 234881024 data_used: 13590528
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:15:21.804903+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 127901696 unmapped: 18604032 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:15:22.805046+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f91ef000/0x0/0x4ffc00000, data 0x1b95b2d/0x1c5d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 127909888 unmapped: 18595840 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:15:23.805449+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 127909888 unmapped: 18595840 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:15:24.805584+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 127844352 unmapped: 18661376 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f91ef000/0x0/0x4ffc00000, data 0x1b95b2d/0x1c5d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:15:25.805741+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 127844352 unmapped: 18661376 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1403294 data_alloc: 234881024 data_used: 13590528
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:15:26.805880+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f91ec000/0x0/0x4ffc00000, data 0x1b98b2d/0x1c60000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 127852544 unmapped: 18653184 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:15:27.805999+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f91ec000/0x0/0x4ffc00000, data 0x1b98b2d/0x1c60000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 127852544 unmapped: 18653184 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f91ec000/0x0/0x4ffc00000, data 0x1b98b2d/0x1c60000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:15:28.806195+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 127852544 unmapped: 18653184 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:15:29.806331+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 127852544 unmapped: 18653184 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:15:30.806508+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f91ec000/0x0/0x4ffc00000, data 0x1b98b2d/0x1c60000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 127852544 unmapped: 18653184 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1403294 data_alloc: 234881024 data_used: 13590528
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:15:31.806695+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 127852544 unmapped: 18653184 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:15:32.806915+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 127860736 unmapped: 18644992 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce2068800 session 0x55fcdfea65a0
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 15.284329414s of 15.530404091s, submitted: 41
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce212e800 session 0x55fcdeddcb40
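The connections reset here are the same ones that were issued auth challenges at the top of the burst: 0x55fce2068800 and 0x55fce212e800 each got a handle_auth_request challenge and now a ms_handle_reset, and the pattern repeats with 0x55fcdf8c9c00 a few lines further down. Pairing the two events by connection address makes that lifecycle explicit; a sketch:

```python
import re

# Pair "added challenge on <con>" with a later "ms_handle_reset con <con>"
# by connection address; both events appear verbatim in this burst.
CHALLENGE = re.compile(r"added challenge on (0x[0-9a-f]+)")
RESET = re.compile(r"ms_handle_reset con (0x[0-9a-f]+)")

def challenge_resets(lines):
    pending = {}
    for n, line in enumerate(lines, 1):
        if (m := CHALLENGE.search(line)):
            pending[m.group(1)] = n
        elif (m := RESET.search(line)) and m.group(1) in pending:
            yield m.group(1), pending.pop(m.group(1)), n

sample = [
    "monclient: handle_auth_request added challenge on 0x55fce2068800",
    "monclient: tick",
    "osd.1 154 ms_handle_reset con 0x55fce2068800 session 0x55fcdfea65a0",
]
for con, challenged, reset in challenge_resets(sample):
    print(f"{con}: challenged at line {challenged}, reset at line {reset}")
```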
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:15:33.807061+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fcdf8c9c00
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123281408 unmapped: 23224320 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fcdf8c9c00 session 0x55fcdf1d1680
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:15:34.807208+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123305984 unmapped: 23199744 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:15:35.807385+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123305984 unmapped: 23199744 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1288387 data_alloc: 218103808 data_used: 7614464
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:15:36.807555+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123305984 unmapped: 23199744 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:15:37.807756+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123305984 unmapped: 23199744 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:15:38.807934+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123305984 unmapped: 23199744 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:15:39.808186+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123305984 unmapped: 23199744 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:15:40.808322+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123305984 unmapped: 23199744 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1288387 data_alloc: 218103808 data_used: 7614464
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:15:41.808474+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123305984 unmapped: 23199744 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:15:42.808686+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce2137800 session 0x55fce23bd2c0
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123305984 unmapped: 23199744 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:15:43.808822+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123305984 unmapped: 23199744 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:15:44.809098+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123305984 unmapped: 23199744 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:15:45.809295+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123305984 unmapped: 23199744 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1288387 data_alloc: 218103808 data_used: 7614464
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:15:46.809442+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123305984 unmapped: 23199744 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:15:47.809621+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123305984 unmapped: 23199744 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:15:48.809819+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123305984 unmapped: 23199744 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:15:49.810056+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123305984 unmapped: 23199744 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:15:50.810276+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123314176 unmapped: 23191552 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1288387 data_alloc: 218103808 data_used: 7614464
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:15:51.810465+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123314176 unmapped: 23191552 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:15:52.810713+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123314176 unmapped: 23191552 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:15:53.810932+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123314176 unmapped: 23191552 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:15:54.811086+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123314176 unmapped: 23191552 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:15:55.811248+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123314176 unmapped: 23191552 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1288387 data_alloc: 218103808 data_used: 7614464
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:15:56.811378+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123314176 unmapped: 23191552 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:15:57.811544+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123314176 unmapped: 23191552 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:15:58.811675+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123322368 unmapped: 23183360 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:15:59.811874+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123322368 unmapped: 23183360 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:16:00.812036+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123322368 unmapped: 23183360 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:16:01.812195+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1288387 data_alloc: 218103808 data_used: 7614464
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123322368 unmapped: 23183360 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:16:02.812360+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123322368 unmapped: 23183360 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:16:03.812573+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123322368 unmapped: 23183360 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:16:04.812740+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123322368 unmapped: 23183360 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:16:05.812891+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123322368 unmapped: 23183360 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fcdf8c8400 session 0x55fce112cf00
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce1128000 session 0x55fce20f70e0
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:16:06.813055+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1288387 data_alloc: 218103808 data_used: 7614464
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123322368 unmapped: 23183360 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:16:07.813226+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123322368 unmapped: 23183360 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:16:08.813365+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123322368 unmapped: 23183360 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 35.873546600s of 35.962779999s, submitted: 29
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:16:09.813523+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123322368 unmapped: 23183360 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:16:10.813658+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123322368 unmapped: 23183360 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:16:11.813819+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1288255 data_alloc: 218103808 data_used: 7614464
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123322368 unmapped: 23183360 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:16:12.813984+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123322368 unmapped: 23183360 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:16:13.814162+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123322368 unmapped: 23183360 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:16:14.814370+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123322368 unmapped: 23183360 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:16:15.814530+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123322368 unmapped: 23183360 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:16:16.814698+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1288255 data_alloc: 218103808 data_used: 7614464
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123322368 unmapped: 23183360 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:16:17.814875+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123322368 unmapped: 23183360 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:16:18.815026+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123322368 unmapped: 23183360 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:16:19.815192+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123322368 unmapped: 23183360 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:16:20.815322+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123322368 unmapped: 23183360 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:16:21.815462+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1288255 data_alloc: 218103808 data_used: 7614464
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123322368 unmapped: 23183360 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:16:22.815677+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123322368 unmapped: 23183360 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:16:23.815865+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123330560 unmapped: 23175168 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:16:24.816022+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123330560 unmapped: 23175168 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:16:25.816164+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123330560 unmapped: 23175168 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:16:26.816335+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1288255 data_alloc: 218103808 data_used: 7614464
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123330560 unmapped: 23175168 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:16:27.816570+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123330560 unmapped: 23175168 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:16:28.816715+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123330560 unmapped: 23175168 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:16:29.816929+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce2128c00 session 0x55fce19fe780
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123330560 unmapped: 23175168 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:16:30.817278+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 21.201023102s of 21.206556320s, submitted: 1
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123330560 unmapped: 23175168 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:16:31.817587+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1288123 data_alloc: 218103808 data_used: 7614464
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123330560 unmapped: 23175168 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:16:32.817770+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123330560 unmapped: 23175168 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:16:33.817912+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123330560 unmapped: 23175168 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:16:34.818056+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123330560 unmapped: 23175168 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:16:35.818173+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123330560 unmapped: 23175168 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:16:36.818381+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1288123 data_alloc: 218103808 data_used: 7614464
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123330560 unmapped: 23175168 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:16:37.818553+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123330560 unmapped: 23175168 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:16:38.818730+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123338752 unmapped: 23166976 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:16:39.818923+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123338752 unmapped: 23166976 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:16:40.819103+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce2128c00
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.126986504s of 10.132149696s, submitted: 1
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123338752 unmapped: 23166976 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:16:41.819278+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1288255 data_alloc: 218103808 data_used: 7614464
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123338752 unmapped: 23166976 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:16:42.819452+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123338752 unmapped: 23166976 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:16:43.819604+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123338752 unmapped: 23166976 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:16:44.819726+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123338752 unmapped: 23166976 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:16:45.819870+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123338752 unmapped: 23166976 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:16:46.820028+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1288255 data_alloc: 218103808 data_used: 7614464
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fcdf8c8400
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123346944 unmapped: 23158784 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:16:47.820179+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123346944 unmapped: 23158784 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:16:48.820382+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123346944 unmapped: 23158784 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:16:49.820584+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123346944 unmapped: 23158784 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:16:50.820790+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123346944 unmapped: 23158784 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:16:51.821010+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1289767 data_alloc: 218103808 data_used: 7614464
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123346944 unmapped: 23158784 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:16:52.821146+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123346944 unmapped: 23158784 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:16:53.821288+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123346944 unmapped: 23158784 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:16:54.821447+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123346944 unmapped: 23158784 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:16:55.821545+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123346944 unmapped: 23158784 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:16:56.821670+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:17:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1289767 data_alloc: 218103808 data_used: 7614464
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123346944 unmapped: 23158784 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: do_command 'config diff' '{prefix=config diff}'
Dec 06 10:17:30 compute-0 ceph-osd[82803]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Dec 06 10:17:30 compute-0 ceph-osd[82803]: do_command 'config show' '{prefix=config show}'
Dec 06 10:17:30 compute-0 ceph-osd[82803]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Dec 06 10:17:30 compute-0 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 16.665910721s of 16.765491486s, submitted: 2
Dec 06 10:17:30 compute-0 ceph-osd[82803]: do_command 'counter dump' '{prefix=counter dump}'
Dec 06 10:17:30 compute-0 ceph-osd[82803]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:16:57.821770+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: do_command 'counter schema' '{prefix=counter schema}'
Dec 06 10:17:30 compute-0 ceph-osd[82803]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123387904 unmapped: 23117824 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:16:58.821887+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123453440 unmapped: 23052288 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:17:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:16:59.822033+0000)
Dec 06 10:17:30 compute-0 ceph-osd[82803]: do_command 'log dump' '{prefix=log dump}'
Dec 06 10:17:30 compute-0 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.25712 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:17:30 compute-0 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.17346 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:17:30 compute-0 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.26671 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:17:30 compute-0 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.17364 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:17:30 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:17:30] "GET /metrics HTTP/1.1" 200 48459 "" "Prometheus/2.51.0"
Dec 06 10:17:30 compute-0 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:17:30] "GET /metrics HTTP/1.1" 200 48459 "" "Prometheus/2.51.0"
Dec 06 10:17:30 compute-0 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.25727 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:17:30 compute-0 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 06 10:17:30 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services"} v 0)
Dec 06 10:17:30 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1474324905' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Dec 06 10:17:31 compute-0 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.26698 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:17:31 compute-0 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.17385 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:17:31 compute-0 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.25742 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:17:31 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr versions"} v 0)
Dec 06 10:17:31 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/154680080' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Dec 06 10:17:31 compute-0 ceph-mon[74327]: from='client.17340 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:17:31 compute-0 ceph-mon[74327]: from='client.25712 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:17:31 compute-0 ceph-mon[74327]: from='client.17346 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:17:31 compute-0 ceph-mon[74327]: from='client.? 192.168.122.100:0/96602646' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Dec 06 10:17:31 compute-0 ceph-mon[74327]: from='client.? 192.168.122.101:0/2197448062' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Dec 06 10:17:31 compute-0 ceph-mon[74327]: from='client.? 192.168.122.102:0/1967292708' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Dec 06 10:17:31 compute-0 ceph-mon[74327]: from='client.26671 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:17:31 compute-0 ceph-mon[74327]: from='client.17364 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:17:31 compute-0 ceph-mon[74327]: from='client.25727 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:17:31 compute-0 ceph-mon[74327]: from='client.? 192.168.122.100:0/1474324905' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Dec 06 10:17:31 compute-0 ceph-mon[74327]: from='client.? 192.168.122.101:0/4105181392' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Dec 06 10:17:31 compute-0 ceph-mon[74327]: from='client.? 192.168.122.102:0/2642598236' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Dec 06 10:17:31 compute-0 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.26722 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 10:17:31 compute-0 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.17412 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:17:31 compute-0 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.25751 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:17:31 compute-0 crontab[282369]: (root) LIST (root)
Dec 06 10:17:31 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon stat"} v 0)
Dec 06 10:17:31 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1911995297' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Dec 06 10:17:31 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1094: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:17:31 compute-0 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.26737 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 10:17:32 compute-0 nova_compute[254819]: 2025-12-06 10:17:31.998 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:17:32 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:17:32 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:17:32 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:17:32.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:17:32 compute-0 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.17439 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 10:17:32 compute-0 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.25763 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 10:17:32 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:17:32 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:17:32 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:17:32.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:17:32 compute-0 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.26752 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 10:17:32 compute-0 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.17454 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 10:17:32 compute-0 ceph-mon[74327]: from='client.26698 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:17:32 compute-0 ceph-mon[74327]: from='client.17385 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:17:32 compute-0 ceph-mon[74327]: from='client.25742 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:17:32 compute-0 ceph-mon[74327]: from='client.? 192.168.122.100:0/154680080' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Dec 06 10:17:32 compute-0 ceph-mon[74327]: from='client.? 192.168.122.101:0/519140837' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Dec 06 10:17:32 compute-0 ceph-mon[74327]: from='client.? 192.168.122.102:0/1553814133' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Dec 06 10:17:32 compute-0 ceph-mon[74327]: from='client.26722 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 10:17:32 compute-0 ceph-mon[74327]: from='client.17412 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:17:32 compute-0 ceph-mon[74327]: from='client.25751 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:17:32 compute-0 ceph-mon[74327]: from='client.? 192.168.122.100:0/1911995297' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Dec 06 10:17:32 compute-0 ceph-mon[74327]: pgmap v1094: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:17:32 compute-0 ceph-mon[74327]: from='client.? 192.168.122.101:0/2058821366' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Dec 06 10:17:32 compute-0 ceph-mon[74327]: from='client.26737 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 10:17:32 compute-0 ceph-mon[74327]: from='client.? 192.168.122.102:0/4055719220' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Dec 06 10:17:32 compute-0 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.25775 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 10:17:32 compute-0 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.26767 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 10:17:32 compute-0 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.17475 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 10:17:32 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "node ls"} v 0)
Dec 06 10:17:32 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/547471679' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Dec 06 10:17:32 compute-0 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.25787 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 10:17:33 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush class ls"} v 0)
Dec 06 10:17:33 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4209183984' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Dec 06 10:17:33 compute-0 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.17487 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 10:17:33 compute-0 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.25808 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 10:17:33 compute-0 ceph-mon[74327]: from='client.17439 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 10:17:33 compute-0 ceph-mon[74327]: from='client.25763 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 10:17:33 compute-0 ceph-mon[74327]: from='client.26752 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 10:17:33 compute-0 ceph-mon[74327]: from='client.17454 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 10:17:33 compute-0 ceph-mon[74327]: from='client.25775 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 10:17:33 compute-0 ceph-mon[74327]: from='client.26767 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 10:17:33 compute-0 ceph-mon[74327]: from='client.17475 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 10:17:33 compute-0 ceph-mon[74327]: from='client.? 192.168.122.102:0/1749799299' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Dec 06 10:17:33 compute-0 ceph-mon[74327]: from='client.? 192.168.122.100:0/547471679' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Dec 06 10:17:33 compute-0 ceph-mon[74327]: from='client.? 192.168.122.101:0/1815850061' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Dec 06 10:17:33 compute-0 ceph-mon[74327]: from='client.25787 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 10:17:33 compute-0 ceph-mon[74327]: from='client.? 192.168.122.100:0/4209183984' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Dec 06 10:17:33 compute-0 ceph-mon[74327]: from='client.? 192.168.122.102:0/4191974410' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Dec 06 10:17:33 compute-0 ceph-mon[74327]: from='client.? 192.168.122.102:0/2079500196' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Dec 06 10:17:33 compute-0 ceph-mon[74327]: from='client.? 192.168.122.101:0/3149206455' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Dec 06 10:17:33 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush dump"} v 0)
Dec 06 10:17:33 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3062865665' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Dec 06 10:17:33 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush rule ls"} v 0)
Dec 06 10:17:33 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3270305602' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Dec 06 10:17:33 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} v 0)
Dec 06 10:17:33 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/416495288' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Dec 06 10:17:33 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1095: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec 06 10:17:33 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush rule ls"} v 0)
Dec 06 10:17:33 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/117394827' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Dec 06 10:17:34 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:17:34 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:17:34 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:17:34.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:17:34 compute-0 nova_compute[254819]: 2025-12-06 10:17:34.061 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:17:34 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:17:34 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:17:34 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:17:34.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:17:34 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr dump", "format": "json-pretty"} v 0)
Dec 06 10:17:34 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/619943755' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Dec 06 10:17:34 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush show-tunables"} v 0)
Dec 06 10:17:34 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4152843420' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Dec 06 10:17:34 compute-0 ceph-mon[74327]: from='client.17487 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 10:17:34 compute-0 ceph-mon[74327]: from='client.25808 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 10:17:34 compute-0 ceph-mon[74327]: from='client.? 192.168.122.100:0/3062865665' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Dec 06 10:17:34 compute-0 ceph-mon[74327]: from='client.? 192.168.122.102:0/1400424366' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Dec 06 10:17:34 compute-0 ceph-mon[74327]: from='client.? 192.168.122.102:0/3270305602' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Dec 06 10:17:34 compute-0 ceph-mon[74327]: from='client.? 192.168.122.101:0/2924573047' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Dec 06 10:17:34 compute-0 ceph-mon[74327]: from='client.? 192.168.122.100:0/416495288' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Dec 06 10:17:34 compute-0 ceph-mon[74327]: from='client.? 192.168.122.101:0/2910934850' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Dec 06 10:17:34 compute-0 ceph-mon[74327]: pgmap v1095: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec 06 10:17:34 compute-0 ceph-mon[74327]: from='client.? 192.168.122.100:0/117394827' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Dec 06 10:17:34 compute-0 ceph-mon[74327]: from='client.? 192.168.122.102:0/916825396' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Dec 06 10:17:34 compute-0 ceph-mon[74327]: from='client.? 192.168.122.102:0/3574515276' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Dec 06 10:17:34 compute-0 ceph-mon[74327]: from='client.? 192.168.122.101:0/1996211133' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Dec 06 10:17:34 compute-0 ceph-mon[74327]: from='client.? 192.168.122.100:0/619943755' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Dec 06 10:17:34 compute-0 ceph-mon[74327]: from='client.? 192.168.122.101:0/745328308' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Dec 06 10:17:34 compute-0 ceph-mon[74327]: from='client.? 192.168.122.100:0/4152843420' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Dec 06 10:17:34 compute-0 ceph-mon[74327]: from='client.? 192.168.122.102:0/3824839772' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Dec 06 10:17:34 compute-0 ceph-mon[74327]: from='client.? 192.168.122.102:0/2022830260' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Dec 06 10:17:34 compute-0 ceph-mon[74327]: from='client.? 192.168.122.101:0/3333722021' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Dec 06 10:17:34 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush tree", "show_shadow": true} v 0)
Dec 06 10:17:34 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2882127279' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Dec 06 10:17:34 compute-0 sudo[282749]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 10:17:34 compute-0 sudo[282749]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:17:34 compute-0 sudo[282749]: pam_unix(sudo:session): session closed for user root
Dec 06 10:17:34 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:17:35 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module ls", "format": "json-pretty"} v 0)
Dec 06 10:17:35 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1564210752' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Dec 06 10:17:35 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd erasure-code-profile ls"} v 0)
Dec 06 10:17:35 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4265932852' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Dec 06 10:17:35 compute-0 systemd[1]: virtsecretd.service: Deactivated successfully.
Dec 06 10:17:35 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services", "format": "json-pretty"} v 0)
Dec 06 10:17:35 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/963798927' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Dec 06 10:17:35 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata"} v 0)
Dec 06 10:17:35 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3486939175' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Dec 06 10:17:35 compute-0 ceph-mon[74327]: from='client.? 192.168.122.100:0/2982586928' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Dec 06 10:17:35 compute-0 ceph-mon[74327]: from='client.? 192.168.122.101:0/3551154037' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Dec 06 10:17:35 compute-0 ceph-mon[74327]: from='client.? 192.168.122.102:0/4038371396' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Dec 06 10:17:35 compute-0 ceph-mon[74327]: from='client.? 192.168.122.100:0/2882127279' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Dec 06 10:17:35 compute-0 ceph-mon[74327]: from='client.? 192.168.122.102:0/508111638' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Dec 06 10:17:35 compute-0 ceph-mon[74327]: from='client.? 192.168.122.101:0/3789003523' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Dec 06 10:17:35 compute-0 ceph-mon[74327]: from='client.? 192.168.122.100:0/1564210752' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Dec 06 10:17:35 compute-0 ceph-mon[74327]: from='client.? 192.168.122.102:0/2990078285' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Dec 06 10:17:35 compute-0 ceph-mon[74327]: from='client.? 192.168.122.100:0/4265932852' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Dec 06 10:17:35 compute-0 ceph-mon[74327]: from='client.? 192.168.122.102:0/487764021' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Dec 06 10:17:35 compute-0 ceph-mon[74327]: from='client.? 192.168.122.101:0/2063758053' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Dec 06 10:17:35 compute-0 ceph-mon[74327]: from='client.? 192.168.122.101:0/1968791473' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Dec 06 10:17:35 compute-0 ceph-mon[74327]: from='client.? 192.168.122.100:0/963798927' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Dec 06 10:17:35 compute-0 ceph-mon[74327]: from='client.? 192.168.122.100:0/3486939175' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Dec 06 10:17:35 compute-0 ceph-mon[74327]: from='client.? 192.168.122.102:0/4113509356' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Dec 06 10:17:35 compute-0 systemd[1]: Starting Hostname Service...
Dec 06 10:17:35 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1096: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:17:35 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd utilization"} v 0)
Dec 06 10:17:35 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1474186761' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Dec 06 10:17:35 compute-0 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.26941 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:17:36 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:17:36 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:17:36 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:17:36.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:17:36 compute-0 systemd[1]: Started Hostname Service.
Dec 06 10:17:36 compute-0 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.26947 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 10:17:36 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:17:36 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 10:17:36 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:17:36.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 10:17:36 compute-0 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.17619 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:17:36 compute-0 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.26959 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:17:36 compute-0 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.26971 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 10:17:36 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr versions", "format": "json-pretty"} v 0)
Dec 06 10:17:36 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1260105720' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Dec 06 10:17:36 compute-0 ceph-mon[74327]: from='client.? 192.168.122.102:0/394227616' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Dec 06 10:17:36 compute-0 ceph-mon[74327]: from='client.? 192.168.122.101:0/1457285215' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Dec 06 10:17:36 compute-0 ceph-mon[74327]: from='client.? 192.168.122.101:0/894360360' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Dec 06 10:17:36 compute-0 ceph-mon[74327]: pgmap v1096: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:17:36 compute-0 ceph-mon[74327]: from='client.? 192.168.122.100:0/1474186761' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Dec 06 10:17:36 compute-0 ceph-mon[74327]: from='client.26941 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:17:36 compute-0 ceph-mon[74327]: from='client.? 192.168.122.100:0/2789134809' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Dec 06 10:17:36 compute-0 ceph-mon[74327]: from='client.? 192.168.122.101:0/1794527057' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Dec 06 10:17:36 compute-0 ceph-mon[74327]: from='client.? 192.168.122.101:0/987939842' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Dec 06 10:17:36 compute-0 ceph-mon[74327]: from='client.? 192.168.122.100:0/1260105720' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Dec 06 10:17:36 compute-0 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.25940 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:17:36 compute-0 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.26989 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 10:17:36 compute-0 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.17634 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 10:17:36 compute-0 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.26986 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:17:37 compute-0 nova_compute[254819]: 2025-12-06 10:17:36.999 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:17:37 compute-0 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.25955 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 10:17:37 compute-0 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.25961 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:17:37 compute-0 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.27001 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 10:17:37 compute-0 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.17643 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 10:17:37 compute-0 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.17655 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 10:17:37 compute-0 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.25979 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 10:17:37 compute-0 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.27025 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 10:17:37 compute-0 ceph-mon[74327]: from='client.26947 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 10:17:37 compute-0 ceph-mon[74327]: from='client.17619 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:17:37 compute-0 ceph-mon[74327]: from='client.26959 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:17:37 compute-0 ceph-mon[74327]: from='client.26971 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 10:17:37 compute-0 ceph-mon[74327]: from='client.? 192.168.122.101:0/558961422' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Dec 06 10:17:37 compute-0 ceph-mon[74327]: from='client.25940 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:17:37 compute-0 ceph-mon[74327]: from='client.26989 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 10:17:37 compute-0 ceph-mon[74327]: from='client.17634 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 10:17:37 compute-0 ceph-mon[74327]: from='client.26986 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:17:37 compute-0 ceph-mon[74327]: from='client.? 192.168.122.102:0/2804408455' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Dec 06 10:17:37 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:17:37.680Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 06 10:17:37 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "quorum_status"} v 0)
Dec 06 10:17:37 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2917337549' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Dec 06 10:17:37 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1097: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec 06 10:17:37 compute-0 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.17670 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 10:17:37 compute-0 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.27043 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 10:17:38 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:17:38 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:17:38 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:17:38.009 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:17:38 compute-0 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.25994 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 10:17:38 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:17:38 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:17:38 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:17:38.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:17:38 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "versions"} v 0)
Dec 06 10:17:38 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/979905628' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Dec 06 10:17:38 compute-0 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.17694 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 10:17:38 compute-0 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.17700 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 10:17:38 compute-0 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.26015 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 10:17:38 compute-0 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.17721 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 10:17:38 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "health", "detail": "detail", "format": "json-pretty"} v 0)
Dec 06 10:17:38 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4083141842' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Dec 06 10:17:38 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Dec 06 10:17:38 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Dec 06 10:17:38 compute-0 ceph-mon[74327]: from='client.25955 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 10:17:38 compute-0 ceph-mon[74327]: from='client.25961 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:17:38 compute-0 ceph-mon[74327]: from='client.27001 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 10:17:38 compute-0 ceph-mon[74327]: from='client.17643 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 10:17:38 compute-0 ceph-mon[74327]: from='client.17655 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 10:17:38 compute-0 ceph-mon[74327]: from='client.25979 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 10:17:38 compute-0 ceph-mon[74327]: from='client.27025 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 10:17:38 compute-0 ceph-mon[74327]: from='client.? 192.168.122.102:0/2453137415' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Dec 06 10:17:38 compute-0 ceph-mon[74327]: from='client.? 192.168.122.100:0/2917337549' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Dec 06 10:17:38 compute-0 ceph-mon[74327]: pgmap v1097: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec 06 10:17:38 compute-0 ceph-mon[74327]: from='client.17670 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 10:17:38 compute-0 ceph-mon[74327]: from='client.27043 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 10:17:38 compute-0 ceph-mon[74327]: from='client.25994 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 10:17:38 compute-0 ceph-mon[74327]: from='client.? 192.168.122.101:0/2629627461' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Dec 06 10:17:38 compute-0 ceph-mon[74327]: from='client.? 192.168.122.102:0/2392546310' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Dec 06 10:17:38 compute-0 ceph-mon[74327]: from='client.? 192.168.122.100:0/979905628' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Dec 06 10:17:38 compute-0 ceph-mon[74327]: from='client.? 192.168.122.102:0/3607230830' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Dec 06 10:17:38 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Dec 06 10:17:38 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Dec 06 10:17:38 compute-0 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.26027 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 10:17:38 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 06 10:17:38 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 10:17:39 compute-0 nova_compute[254819]: 2025-12-06 10:17:39.066 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:17:39 compute-0 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.17745 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 10:17:39 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "format": "json-pretty"} v 0)
Dec 06 10:17:39 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/966773222' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Dec 06 10:17:39 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Dec 06 10:17:39 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Dec 06 10:17:39 compute-0 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.26045 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 10:17:39 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Dec 06 10:17:39 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Dec 06 10:17:39 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Dec 06 10:17:39 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Dec 06 10:17:39 compute-0 ceph-mon[74327]: from='client.17694 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 10:17:39 compute-0 ceph-mon[74327]: from='client.17700 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 10:17:39 compute-0 ceph-mon[74327]: from='client.26015 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 10:17:39 compute-0 ceph-mon[74327]: from='client.? 192.168.122.101:0/2453809493' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Dec 06 10:17:39 compute-0 ceph-mon[74327]: from='client.17721 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 10:17:39 compute-0 ceph-mon[74327]: from='client.? 192.168.122.100:0/4083141842' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Dec 06 10:17:39 compute-0 ceph-mon[74327]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Dec 06 10:17:39 compute-0 ceph-mon[74327]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Dec 06 10:17:39 compute-0 ceph-mon[74327]: from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Dec 06 10:17:39 compute-0 ceph-mon[74327]: from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Dec 06 10:17:39 compute-0 ceph-mon[74327]: from='client.26027 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 10:17:39 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 10:17:39 compute-0 ceph-mon[74327]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Dec 06 10:17:39 compute-0 ceph-mon[74327]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Dec 06 10:17:39 compute-0 ceph-mon[74327]: from='client.? 192.168.122.101:0/1616168784' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Dec 06 10:17:39 compute-0 ceph-mon[74327]: from='client.? 192.168.122.100:0/966773222' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Dec 06 10:17:39 compute-0 ceph-mon[74327]: from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Dec 06 10:17:39 compute-0 ceph-mon[74327]: from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Dec 06 10:17:39 compute-0 ceph-mon[74327]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Dec 06 10:17:39 compute-0 ceph-mon[74327]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Dec 06 10:17:39 compute-0 ceph-mon[74327]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Dec 06 10:17:39 compute-0 ceph-mon[74327]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Dec 06 10:17:39 compute-0 ceph-mon[74327]: from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Dec 06 10:17:39 compute-0 ceph-mon[74327]: from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Dec 06 10:17:39 compute-0 ceph-mon[74327]: from='client.? 192.168.122.101:0/2688295501' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Dec 06 10:17:39 compute-0 ceph-mon[74327]: from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Dec 06 10:17:39 compute-0 ceph-mon[74327]: from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Dec 06 10:17:39 compute-0 ceph-mon[74327]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Dec 06 10:17:39 compute-0 ceph-mon[74327]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Dec 06 10:17:39 compute-0 ceph-mon[74327]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Dec 06 10:17:39 compute-0 ceph-mon[74327]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Dec 06 10:17:39 compute-0 ceph-mon[74327]: from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Dec 06 10:17:39 compute-0 ceph-mon[74327]: from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Dec 06 10:17:39 compute-0 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.26072 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 10:17:39 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1098: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:17:39 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:17:39 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Dec 06 10:17:39 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Dec 06 10:17:40 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:17:40 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:17:40 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:17:40.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:17:40 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:17:40 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:17:40 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:17:40.155 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:17:40 compute-0 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.27187 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:17:40 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config dump"} v 0)
Dec 06 10:17:40 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4131632034' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Dec 06 10:17:40 compute-0 ceph-mon[74327]: from='client.17745 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 10:17:40 compute-0 ceph-mon[74327]: from='client.26045 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 10:17:40 compute-0 ceph-mon[74327]: from='client.26072 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 10:17:40 compute-0 ceph-mon[74327]: pgmap v1098: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:17:40 compute-0 ceph-mon[74327]: from='client.? 192.168.122.102:0/1975698630' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Dec 06 10:17:40 compute-0 ceph-mon[74327]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Dec 06 10:17:40 compute-0 ceph-mon[74327]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Dec 06 10:17:40 compute-0 ceph-mon[74327]: from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Dec 06 10:17:40 compute-0 ceph-mon[74327]: from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Dec 06 10:17:40 compute-0 ceph-mon[74327]: from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Dec 06 10:17:40 compute-0 ceph-mon[74327]: from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Dec 06 10:17:40 compute-0 ceph-mon[74327]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Dec 06 10:17:40 compute-0 ceph-mon[74327]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Dec 06 10:17:40 compute-0 ceph-mon[74327]: from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Dec 06 10:17:40 compute-0 ceph-mon[74327]: from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Dec 06 10:17:40 compute-0 ceph-mon[74327]: from='client.? 192.168.122.100:0/4131632034' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Dec 06 10:17:40 compute-0 ceph-mon[74327]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Dec 06 10:17:40 compute-0 ceph-mon[74327]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Dec 06 10:17:40 compute-0 ceph-mon[74327]: from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Dec 06 10:17:40 compute-0 ceph-mon[74327]: from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Dec 06 10:17:40 compute-0 ceph-mon[74327]: from='client.? 192.168.122.102:0/2870637964' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Dec 06 10:17:40 compute-0 ceph-mon[74327]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #66. Immutable memtables: 0.
Dec 06 10:17:40 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:17:40.832853) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 06 10:17:40 compute-0 ceph-mon[74327]: rocksdb: [db/flush_job.cc:856] [default] [JOB 35] Flushing memtable with next log file: 66
Dec 06 10:17:40 compute-0 ceph-mon[74327]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765016260832933, "job": 35, "event": "flush_started", "num_memtables": 1, "num_entries": 2358, "num_deletes": 251, "total_data_size": 4310560, "memory_usage": 4364080, "flush_reason": "Manual Compaction"}
Dec 06 10:17:40 compute-0 ceph-mon[74327]: rocksdb: [db/flush_job.cc:885] [default] [JOB 35] Level-0 flush table #67: started
Dec 06 10:17:40 compute-0 ceph-mon[74327]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765016260864989, "cf_name": "default", "job": 35, "event": "table_file_creation", "file_number": 67, "file_size": 4202135, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 29600, "largest_seqno": 31957, "table_properties": {"data_size": 4191058, "index_size": 6931, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3077, "raw_key_size": 26531, "raw_average_key_size": 21, "raw_value_size": 4167766, "raw_average_value_size": 3421, "num_data_blocks": 296, "num_entries": 1218, "num_filter_entries": 1218, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765016059, "oldest_key_time": 1765016059, "file_creation_time": 1765016260, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "423e8366-3852-4d2b-aa53-87abab31aff3", "db_session_id": "4WBX5WA2U4DRQ0QUUFCR", "orig_file_number": 67, "seqno_to_time_mapping": "N/A"}}
Dec 06 10:17:40 compute-0 ceph-mon[74327]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 35] Flush lasted 32321 microseconds, and 9439 cpu microseconds.
Dec 06 10:17:40 compute-0 ceph-mon[74327]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 10:17:40 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:17:40.865178) [db/flush_job.cc:967] [default] [JOB 35] Level-0 flush table #67: 4202135 bytes OK
Dec 06 10:17:40 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:17:40.865249) [db/memtable_list.cc:519] [default] Level-0 commit table #67 started
Dec 06 10:17:40 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:17:40.867646) [db/memtable_list.cc:722] [default] Level-0 commit table #67: memtable #1 done
Dec 06 10:17:40 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:17:40.867663) EVENT_LOG_v1 {"time_micros": 1765016260867658, "job": 35, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 06 10:17:40 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:17:40.867690) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 06 10:17:40 compute-0 ceph-mon[74327]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 35] Try to delete WAL files size 4300187, prev total WAL file size 4300187, number of live WAL files 2.
Dec 06 10:17:40 compute-0 ceph-mon[74327]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000063.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 10:17:40 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:17:40.869294) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032353130' seq:72057594037927935, type:22 .. '7061786F730032373632' seq:0, type:0; will stop at (end)
Dec 06 10:17:40 compute-0 ceph-mon[74327]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 36] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 06 10:17:40 compute-0 ceph-mon[74327]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 35 Base level 0, inputs: [67(4103KB)], [65(12MB)]
Dec 06 10:17:40 compute-0 ceph-mon[74327]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765016260869364, "job": 36, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [67], "files_L6": [65], "score": -1, "input_data_size": 17353273, "oldest_snapshot_seqno": -1}
Dec 06 10:17:40 compute-0 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.17835 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:17:40 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:17:40] "GET /metrics HTTP/1.1" 200 48461 "" "Prometheus/2.51.0"
Dec 06 10:17:40 compute-0 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:17:40] "GET /metrics HTTP/1.1" 200 48461 "" "Prometheus/2.51.0"
Dec 06 10:17:41 compute-0 ceph-mon[74327]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 36] Generated table #68: 6527 keys, 15136219 bytes, temperature: kUnknown
Dec 06 10:17:41 compute-0 ceph-mon[74327]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765016261007341, "cf_name": "default", "job": 36, "event": "table_file_creation", "file_number": 68, "file_size": 15136219, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 15092775, "index_size": 26054, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16325, "raw_key_size": 167457, "raw_average_key_size": 25, "raw_value_size": 14975382, "raw_average_value_size": 2294, "num_data_blocks": 1047, "num_entries": 6527, "num_filter_entries": 6527, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765013861, "oldest_key_time": 0, "file_creation_time": 1765016260, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "423e8366-3852-4d2b-aa53-87abab31aff3", "db_session_id": "4WBX5WA2U4DRQ0QUUFCR", "orig_file_number": 68, "seqno_to_time_mapping": "N/A"}}
Dec 06 10:17:41 compute-0 ceph-mon[74327]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 10:17:41 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:17:41.007581) [db/compaction/compaction_job.cc:1663] [default] [JOB 36] Compacted 1@0 + 1@6 files to L6 => 15136219 bytes
Dec 06 10:17:41 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:17:41.010307) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 125.7 rd, 109.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(4.0, 12.5 +0.0 blob) out(14.4 +0.0 blob), read-write-amplify(7.7) write-amplify(3.6) OK, records in: 7048, records dropped: 521 output_compression: NoCompression
Dec 06 10:17:41 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:17:41.010329) EVENT_LOG_v1 {"time_micros": 1765016261010320, "job": 36, "event": "compaction_finished", "compaction_time_micros": 138029, "compaction_time_cpu_micros": 29891, "output_level": 6, "num_output_files": 1, "total_output_size": 15136219, "num_input_records": 7048, "num_output_records": 6527, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 06 10:17:41 compute-0 ceph-mon[74327]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000067.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 10:17:41 compute-0 ceph-mon[74327]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765016261011220, "job": 36, "event": "table_file_deletion", "file_number": 67}
Dec 06 10:17:41 compute-0 ceph-mon[74327]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000065.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 10:17:41 compute-0 ceph-mon[74327]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765016261016417, "job": 36, "event": "table_file_deletion", "file_number": 65}
Dec 06 10:17:41 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:17:40.869160) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 10:17:41 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:17:41.016534) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 10:17:41 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:17:41.016541) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 10:17:41 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:17:41.016544) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 10:17:41 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:17:41.016547) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 10:17:41 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:17:41.016549) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 10:17:41 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "detail": "detail"} v 0)
Dec 06 10:17:41 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2595427889' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Dec 06 10:17:41 compute-0 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.26150 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:17:41 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df"} v 0)
Dec 06 10:17:41 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1895847428' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Dec 06 10:17:41 compute-0 ceph-mon[74327]: from='client.27187 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:17:41 compute-0 ceph-mon[74327]: from='client.17835 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:17:41 compute-0 ceph-mon[74327]: from='client.? 192.168.122.101:0/1892473375' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Dec 06 10:17:41 compute-0 ceph-mon[74327]: from='client.? 192.168.122.102:0/2109662307' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Dec 06 10:17:41 compute-0 ceph-mon[74327]: from='client.? 192.168.122.100:0/2595427889' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Dec 06 10:17:41 compute-0 ceph-mon[74327]: from='client.? 192.168.122.102:0/3040169991' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Dec 06 10:17:41 compute-0 ceph-mon[74327]: from='client.? 192.168.122.100:0/1895847428' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Dec 06 10:17:41 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1099: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:17:42 compute-0 nova_compute[254819]: 2025-12-06 10:17:42.002 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:17:42 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:17:42 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:17:42 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:17:42.013 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:17:42 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:17:42 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:17:42 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:17:42.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:17:42 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "fs dump"} v 0)
Dec 06 10:17:42 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1756315161' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Dec 06 10:17:42 compute-0 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.27268 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:17:42 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "fs ls"} v 0)
Dec 06 10:17:42 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1991843052' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Dec 06 10:17:42 compute-0 ceph-mon[74327]: from='client.26150 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:17:42 compute-0 ceph-mon[74327]: pgmap v1099: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:17:42 compute-0 ceph-mon[74327]: from='client.? 192.168.122.101:0/1044876254' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Dec 06 10:17:42 compute-0 ceph-mon[74327]: from='client.? 192.168.122.102:0/2211086827' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Dec 06 10:17:42 compute-0 ceph-mon[74327]: from='client.? 192.168.122.100:0/1756315161' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Dec 06 10:17:42 compute-0 ceph-mon[74327]: from='client.? 192.168.122.101:0/3138446401' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Dec 06 10:17:42 compute-0 ceph-mon[74327]: from='client.? 192.168.122.100:0/1991843052' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Dec 06 10:17:43 compute-0 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.17889 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:17:43 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds stat"} v 0)
Dec 06 10:17:43 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2121329675' entity='client.admin' cmd=[{"prefix": "mds stat"}]: dispatch
Dec 06 10:17:43 compute-0 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.27307 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:17:43 compute-0 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.26192 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:17:43 compute-0 ceph-mon[74327]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 06 10:17:43 compute-0 ceph-mon[74327]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 2400.0 total, 600.0 interval
                                           Cumulative writes: 7075 writes, 31K keys, 7074 commit groups, 1.0 writes per commit group, ingest: 0.06 GB, 0.02 MB/s
                                           Cumulative WAL: 7075 writes, 7074 syncs, 1.00 writes per sync, written: 0.06 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1558 writes, 6972 keys, 1558 commit groups, 1.0 writes per commit group, ingest: 11.87 MB, 0.02 MB/s
                                           Interval WAL: 1558 writes, 1558 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     90.2      0.56              0.14        18    0.031       0      0       0.0       0.0
                                             L6      1/0   14.44 MB   0.0      0.3     0.0      0.2       0.2      0.0       0.0   4.5    103.8     89.7      2.54              0.63        17    0.150     94K   9354       0.0       0.0
                                            Sum      1/0   14.44 MB   0.0      0.3     0.0      0.2       0.3      0.1       0.0   5.5     85.1     89.8      3.10              0.77        35    0.089     94K   9354       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   5.9    111.8    114.6      0.61              0.19         8    0.077     26K   2592       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.3     0.0      0.2       0.2      0.0       0.0   0.0    103.8     89.7      2.54              0.63        17    0.150     94K   9354       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     91.2      0.55              0.14        17    0.032       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      7.9      0.01              0.00         1    0.007       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 2400.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.049, interval 0.012
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.27 GB write, 0.12 MB/s write, 0.26 GB read, 0.11 MB/s read, 3.1 seconds
                                           Interval compaction: 0.07 GB write, 0.12 MB/s write, 0.07 GB read, 0.11 MB/s read, 0.6 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55fd9a571350#2 capacity: 304.00 MB usage: 22.98 MB table_size: 0 occupancy: 18446744073709551615 collections: 5 last_copies: 0 last_secs: 0.000167 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1400,22.24 MB,7.31473%) FilterBlock(36,275.30 KB,0.0884357%) IndexBlock(36,484.64 KB,0.155685%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Dec 06 10:17:43 compute-0 ceph-mon[74327]: from='client.27268 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:17:43 compute-0 ceph-mon[74327]: from='client.? 192.168.122.102:0/1694569205' entity='client.admin' cmd=[{"prefix": "mds stat"}]: dispatch
Dec 06 10:17:43 compute-0 ceph-mon[74327]: from='client.? 192.168.122.101:0/1645793006' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Dec 06 10:17:43 compute-0 ceph-mon[74327]: from='client.? 192.168.122.102:0/1443159653' entity='client.admin' cmd=[{"prefix": "mon dump"}]: dispatch
Dec 06 10:17:43 compute-0 ceph-mon[74327]: from='client.? 192.168.122.101:0/539913026' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Dec 06 10:17:43 compute-0 ceph-mon[74327]: from='client.? 192.168.122.100:0/2121329675' entity='client.admin' cmd=[{"prefix": "mds stat"}]: dispatch
Dec 06 10:17:43 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1100: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec 06 10:17:44 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:17:44 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:17:44 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:17:44.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:17:44 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump"} v 0)
Dec 06 10:17:44 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2968187861' entity='client.admin' cmd=[{"prefix": "mon dump"}]: dispatch
Dec 06 10:17:44 compute-0 nova_compute[254819]: 2025-12-06 10:17:44.069 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:17:44 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:17:44 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 10:17:44 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:17:44.160 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 10:17:44 compute-0 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.17922 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:17:44 compute-0 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.27334 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:17:44 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls"} v 0)
Dec 06 10:17:44 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/259075150' entity='client.admin' cmd=[{"prefix": "osd blocklist ls"}]: dispatch
Dec 06 10:17:44 compute-0 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.27340 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:17:44 compute-0 ceph-mon[74327]: from='client.17889 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:17:44 compute-0 ceph-mon[74327]: from='client.27307 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:17:44 compute-0 ceph-mon[74327]: from='client.26192 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:17:44 compute-0 ceph-mon[74327]: pgmap v1100: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec 06 10:17:44 compute-0 ceph-mon[74327]: from='client.? 192.168.122.100:0/2968187861' entity='client.admin' cmd=[{"prefix": "mon dump"}]: dispatch
Dec 06 10:17:44 compute-0 ceph-mon[74327]: from='client.? 192.168.122.102:0/1515055382' entity='client.admin' cmd=[{"prefix": "osd blocklist ls"}]: dispatch
Dec 06 10:17:44 compute-0 ceph-mon[74327]: from='client.? 192.168.122.101:0/1000968456' entity='client.admin' cmd=[{"prefix": "mds stat"}]: dispatch
Dec 06 10:17:44 compute-0 ceph-mon[74327]: from='client.? 192.168.122.101:0/22305521' entity='client.admin' cmd=[{"prefix": "mon dump"}]: dispatch
Dec 06 10:17:44 compute-0 ceph-mon[74327]: from='client.? 192.168.122.100:0/259075150' entity='client.admin' cmd=[{"prefix": "osd blocklist ls"}]: dispatch
Dec 06 10:17:44 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:17:45 compute-0 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.26210 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:17:45 compute-0 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.17943 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:17:45 compute-0 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.17961 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:17:45 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd numa-status"} v 0)
Dec 06 10:17:45 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3502764016' entity='client.admin' cmd=[{"prefix": "osd numa-status"}]: dispatch
Dec 06 10:17:45 compute-0 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.26228 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:17:45 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1101: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:17:46 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:17:46 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:17:46 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:17:46.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:17:46 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd dump"} v 0)
Dec 06 10:17:46 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1531041810' entity='client.admin' cmd=[{"prefix": "osd dump"}]: dispatch
Dec 06 10:17:46 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:17:46 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:17:46 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:17:46.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:17:46 compute-0 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.26240 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:17:46 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd numa-status"} v 0)
Dec 06 10:17:46 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2349497416' entity='client.admin' cmd=[{"prefix": "osd numa-status"}]: dispatch
Dec 06 10:17:46 compute-0 podman[284673]: 2025-12-06 10:17:46.459315711 +0000 UTC m=+0.088309246 container health_status a9cf33c1bf7891b3fdd9db4717ad5f3587fe9e5acf9171920c6d6334b70a502a (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec 06 10:17:46 compute-0 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.17988 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:17:47 compute-0 nova_compute[254819]: 2025-12-06 10:17:47.004 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:17:47 compute-0 ceph-mon[74327]: from='client.17922 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:17:47 compute-0 ceph-mon[74327]: from='client.27334 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:17:47 compute-0 ceph-mon[74327]: from='client.27340 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:17:47 compute-0 ceph-mon[74327]: from='client.? 192.168.122.102:0/2038668301' entity='client.admin' cmd=[{"prefix": "osd dump"}]: dispatch
Dec 06 10:17:47 compute-0 ceph-mon[74327]: from='client.? 192.168.122.101:0/917006680' entity='client.admin' cmd=[{"prefix": "osd blocklist ls"}]: dispatch
Dec 06 10:17:47 compute-0 ceph-mon[74327]: from='client.? 192.168.122.102:0/3502764016' entity='client.admin' cmd=[{"prefix": "osd numa-status"}]: dispatch
Dec 06 10:17:47 compute-0 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.17994 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:17:47 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:17:47 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 06 10:17:47 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:17:47 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec 06 10:17:47 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:17:47 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 10:17:47 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:17:47 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 10:17:47 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:17:47 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Dec 06 10:17:47 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:17:47 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Dec 06 10:17:47 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:17:47 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 10:17:47 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:17:47 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec 06 10:17:47 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:17:47 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 06 10:17:47 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:17:47 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec 06 10:17:47 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:17:47 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 10:17:47 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:17:47 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 06 10:17:47 compute-0 ovs-appctl[285054]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
Dec 06 10:17:47 compute-0 ovs-appctl[285071]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
Dec 06 10:17:47 compute-0 ovs-appctl[285077]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
Dec 06 10:17:47 compute-0 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.27397 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:17:47 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool ls", "detail": "detail"} v 0)
Dec 06 10:17:47 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3610447336' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail"}]: dispatch
Dec 06 10:17:47 compute-0 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.27409 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:17:47 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:17:47 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 06 10:17:47 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:17:47 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec 06 10:17:47 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:17:47 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 10:17:47 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:17:47 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 10:17:47 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:17:47 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Dec 06 10:17:47 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:17:47 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Dec 06 10:17:47 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:17:47 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 10:17:47 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:17:47 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec 06 10:17:47 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:17:47 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 06 10:17:47 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:17:47 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec 06 10:17:47 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:17:47 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 10:17:47 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:17:47 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 06 10:17:47 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:17:47.683Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec 06 10:17:47 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:17:47.684Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 06 10:17:47 compute-0 nova_compute[254819]: 2025-12-06 10:17:47.748 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:17:47 compute-0 nova_compute[254819]: 2025-12-06 10:17:47.748 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:17:47 compute-0 nova_compute[254819]: 2025-12-06 10:17:47.874 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:17:47 compute-0 nova_compute[254819]: 2025-12-06 10:17:47.874 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:17:47 compute-0 nova_compute[254819]: 2025-12-06 10:17:47.874 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:17:47 compute-0 nova_compute[254819]: 2025-12-06 10:17:47.875 254824 DEBUG nova.compute.resource_tracker [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 10:17:47 compute-0 nova_compute[254819]: 2025-12-06 10:17:47.875 254824 DEBUG oslo_concurrency.processutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
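[editor's note] Here the resource tracker shells out to the Ceph CLI to size the RBD backend; the same command returns 0 in 0.472s a few lines below. A sketch of that probe and the cluster-wide fields nova cares about, assuming only the command line visible in the log and the standard `ceph df` JSON schema:

```python
import json
import subprocess

# Exactly the command the resource tracker logs above.
CMD = ["ceph", "df", "--format=json", "--id", "openstack",
      "--conf", "/etc/ceph/ceph.conf"]


def ceph_capacity_bytes() -> tuple[int, int]:
    """Return (total_bytes, total_avail_bytes) from the `ceph df` JSON."""
    out = subprocess.run(CMD, capture_output=True, check=True, text=True).stdout
    stats = json.loads(out)["stats"]  # cluster-wide section of the ceph df output
    return stats["total_bytes"], stats["total_avail_bytes"]
```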
Dec 06 10:17:47 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1102: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec 06 10:17:47 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd stat"} v 0)
Dec 06 10:17:47 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3586252455' entity='client.admin' cmd=[{"prefix": "osd stat"}]: dispatch
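[editor's note] Every handle_command/audit pair in these mon lines is a JSON mon command being dispatched (here {"prefix": "osd stat"} from client.admin). The same call can be issued from Python through librados' mon_command; a sketch assuming an admin keyring readable on this node:

```python
import json

import rados  # python3-rados, part of the Ceph client stack on this host

cluster = rados.Rados(conffile="/etc/ceph/ceph.conf", name="client.admin")  # assumed keyring
cluster.connect()
try:
    # Same JSON shape the mon logs as mon_command({"prefix": "osd stat"} v 0)
    ret, outbuf, outs = cluster.mon_command(
        json.dumps({"prefix": "osd stat", "format": "json"}), b"")
    print(ret, json.loads(outbuf))
finally:
    cluster.shutdown()
```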
Dec 06 10:17:48 compute-0 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.26261 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:17:48 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:17:48 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:17:48 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:17:48.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:17:48 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:17:48 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:17:48 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:17:48.166 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
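[editor's note] The beast lines are radosgw's access log; the anonymous HEAD / probes arriving every two seconds from 192.168.122.100 and .102 are load-balancer health checks. The fixed field layout parses with a single regex; a sketch using one of the lines above:

```python
import re

# Field order per the beast lines above: request pointer, client IP, user,
# timestamp, request line, status, body bytes, trailing fields, latency.
BEAST_RE = re.compile(
    r'beast: (?P<req>0x[0-9a-f]+): (?P<ip>\S+) - (?P<user>\S+) '
    r'\[(?P<ts>[^\]]+)\] "(?P<request>[^"]+)" (?P<status>\d+) (?P<bytes>\d+) '
    r'.*latency=(?P<latency>[\d.]+)s')

line = ('beast: 0x7f53e66225d0: 192.168.122.100 - anonymous '
        '[06/Dec/2025:10:17:48.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - '
        'latency=0.000000000s')
m = BEAST_RE.search(line)
assert m is not None and m["status"] == "200"
print(m["ip"], m["request"], float(m["latency"]))  # 192.168.122.100 HEAD / HTTP/1.0 0.0
```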
Dec 06 10:17:48 compute-0 ceph-mon[74327]: from='client.26210 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:17:48 compute-0 ceph-mon[74327]: from='client.17943 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:17:48 compute-0 ceph-mon[74327]: from='client.17961 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:17:48 compute-0 ceph-mon[74327]: from='client.26228 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:17:48 compute-0 ceph-mon[74327]: pgmap v1101: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:17:48 compute-0 ceph-mon[74327]: from='client.? 192.168.122.100:0/1531041810' entity='client.admin' cmd=[{"prefix": "osd dump"}]: dispatch
Dec 06 10:17:48 compute-0 ceph-mon[74327]: from='client.26240 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:17:48 compute-0 ceph-mon[74327]: from='client.? 192.168.122.100:0/2349497416' entity='client.admin' cmd=[{"prefix": "osd numa-status"}]: dispatch
Dec 06 10:17:48 compute-0 ceph-mon[74327]: from='client.? 192.168.122.101:0/1990391975' entity='client.admin' cmd=[{"prefix": "osd dump"}]: dispatch
Dec 06 10:17:48 compute-0 ceph-mon[74327]: from='client.17988 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:17:48 compute-0 ceph-mon[74327]: from='client.? 192.168.122.10:0/2700179710' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 10:17:48 compute-0 ceph-mon[74327]: from='client.? 192.168.122.10:0/2700179710' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 10:17:48 compute-0 ceph-mon[74327]: from='client.17994 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:17:48 compute-0 ceph-mon[74327]: from='client.27397 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:17:48 compute-0 ceph-mon[74327]: from='client.? 192.168.122.100:0/3610447336' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail"}]: dispatch
Dec 06 10:17:48 compute-0 ceph-mon[74327]: from='client.? 192.168.122.101:0/2607835880' entity='client.admin' cmd=[{"prefix": "osd numa-status"}]: dispatch
Dec 06 10:17:48 compute-0 ceph-mon[74327]: from='client.27409 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:17:48 compute-0 ceph-mon[74327]: pgmap v1102: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec 06 10:17:48 compute-0 ceph-mon[74327]: from='client.? 192.168.122.100:0/3586252455' entity='client.admin' cmd=[{"prefix": "osd stat"}]: dispatch
Dec 06 10:17:48 compute-0 ceph-mon[74327]: from='client.26261 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:17:48 compute-0 ceph-mon[74327]: from='client.? 192.168.122.102:0/950858382' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail"}]: dispatch
Dec 06 10:17:48 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 06 10:17:48 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1740259575' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 10:17:48 compute-0 nova_compute[254819]: 2025-12-06 10:17:48.347 254824 DEBUG oslo_concurrency.processutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 10:17:48 compute-0 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.18036 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:17:48 compute-0 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.26267 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:17:48 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:17:48 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 06 10:17:48 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:17:48 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec 06 10:17:48 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:17:48 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 10:17:48 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:17:48 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 10:17:48 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:17:48 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Dec 06 10:17:48 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:17:48 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Dec 06 10:17:48 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:17:48 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 10:17:48 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:17:48 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec 06 10:17:48 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:17:48 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 06 10:17:48 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:17:48 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec 06 10:17:48 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:17:48 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 10:17:48 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:17:48 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
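[editor's note] Each effective_target_ratio/Pool pair above is one pool's pass through the pg_autoscaler: the pool's share of the root's 64411926528 bytes, times its bias, gives a raw PG target that is then rounded to a power of two. The logged numbers fit target = ratio × bias × (OSD count × mon_target_pg_per_osd) with 3 OSDs at the default of 100 (for 'images': 0.000665858… × 1.0 × 300 ≈ 0.1998, as printed). A worked sketch under those assumptions; the module's additional rules (per-pool pg_num_min, a 3x no-change threshold) are noted in comments but not modeled, which is why tiny pools still report 32:

```python
# Assumptions not printed in the log: 3 OSDs at the default
# mon_target_pg_per_osd=100, i.e. a PG budget of 300 under root_id -1.
PG_BUDGET = 3 * 100


def raw_pg_target(usage_ratio: float, bias: float) -> float:
    # "using R of space, bias B, pg target T" with T = R * B * budget
    return usage_ratio * bias * PG_BUDGET


def nearest_power_of_two(n: float) -> int:
    p = 1
    while p < n:
        p *= 2
    return p


t = raw_pg_target(0.000665858301588852, 1.0)   # pool 'images' from the log
print(t)                        # ~0.1998, matching "pg target 0.19975749..."
print(nearest_power_of_two(t))  # 1; pg_num_min and the 3x no-change threshold
                                # are what keep the pool at "32 (current 32)"
```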
Dec 06 10:17:48 compute-0 nova_compute[254819]: 2025-12-06 10:17:48.516 254824 WARNING nova.virt.libvirt.driver [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 10:17:48 compute-0 nova_compute[254819]: 2025-12-06 10:17:48.517 254824 DEBUG nova.compute.resource_tracker [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4333MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 06 10:17:48 compute-0 nova_compute[254819]: 2025-12-06 10:17:48.518 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:17:48 compute-0 nova_compute[254819]: 2025-12-06 10:17:48.518 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:17:48 compute-0 nova_compute[254819]: 2025-12-06 10:17:48.635 254824 DEBUG nova.compute.resource_tracker [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 10:17:48 compute-0 nova_compute[254819]: 2025-12-06 10:17:48.635 254824 DEBUG nova.compute.resource_tracker [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 06 10:17:48 compute-0 nova_compute[254819]: 2025-12-06 10:17:48.657 254824 DEBUG oslo_concurrency.processutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 10:17:48 compute-0 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.27448 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:17:48 compute-0 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.18045 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:17:49 compute-0 nova_compute[254819]: 2025-12-06 10:17:49.072 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:17:49 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 06 10:17:49 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4160220492' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 10:17:49 compute-0 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.27469 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:17:49 compute-0 nova_compute[254819]: 2025-12-06 10:17:49.169 254824 DEBUG oslo_concurrency.processutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.512s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 10:17:49 compute-0 nova_compute[254819]: 2025-12-06 10:17:49.175 254824 DEBUG nova.compute.provider_tree [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Inventory has not changed in ProviderTree for provider: 06a9c7d1-c74c-47ea-9e97-16acfab6aa88 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 10:17:49 compute-0 nova_compute[254819]: 2025-12-06 10:17:49.207 254824 DEBUG nova.scheduler.client.report [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Inventory has not changed for provider 06a9c7d1-c74c-47ea-9e97-16acfab6aa88 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 10:17:49 compute-0 nova_compute[254819]: 2025-12-06 10:17:49.208 254824 DEBUG nova.compute.resource_tracker [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 10:17:49 compute-0 nova_compute[254819]: 2025-12-06 10:17:49.208 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.691s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
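[editor's note] The inventory dict reported to placement above encodes schedulable capacity as (total - reserved) × allocation_ratio per resource class, so those values allow 32 VCPUs, 7168 MB of RAM, and 52.2 GB of disk to be allocated on this node. A quick check of that arithmetic:

```python
# Capacity implied by the placement inventory logged above.
inventory = {
    "VCPU": {"total": 8, "reserved": 0, "allocation_ratio": 4.0},
    "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
    "DISK_GB": {"total": 59, "reserved": 1, "allocation_ratio": 0.9},
}
for rc, inv in inventory.items():
    cap = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
    print(rc, cap)  # VCPU 32.0, MEMORY_MB 7168.0, DISK_GB 52.2
```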
Dec 06 10:17:49 compute-0 ceph-mon[74327]: from='client.? 192.168.122.100:0/1740259575' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 10:17:49 compute-0 ceph-mon[74327]: from='client.18036 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:17:49 compute-0 ceph-mon[74327]: from='client.26267 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:17:49 compute-0 ceph-mon[74327]: from='client.? 192.168.122.102:0/820439724' entity='client.admin' cmd=[{"prefix": "osd stat"}]: dispatch
Dec 06 10:17:49 compute-0 ceph-mon[74327]: from='client.27448 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:17:49 compute-0 ceph-mon[74327]: from='client.18045 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:17:49 compute-0 ceph-mon[74327]: from='client.? 192.168.122.101:0/1455478023' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail"}]: dispatch
Dec 06 10:17:49 compute-0 ceph-mon[74327]: from='client.? 192.168.122.101:0/1693304027' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 10:17:49 compute-0 ceph-mon[74327]: from='client.? 192.168.122.100:0/4160220492' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 10:17:49 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "status"} v 0)
Dec 06 10:17:49 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/242421932' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Dec 06 10:17:49 compute-0 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.26303 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:17:49 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "time-sync-status"} v 0)
Dec 06 10:17:49 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2876669399' entity='client.admin' cmd=[{"prefix": "time-sync-status"}]: dispatch
Dec 06 10:17:49 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1103: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:17:49 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:17:50 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:17:50 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:17:50 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:17:50.022 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:17:50 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:17:50 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:17:50 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:17:50.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:17:50 compute-0 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.26315 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:17:50 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config dump", "format": "json-pretty"} v 0)
Dec 06 10:17:50 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/739045502' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json-pretty"}]: dispatch
Dec 06 10:17:50 compute-0 ceph-mon[74327]: from='client.27469 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:17:50 compute-0 ceph-mon[74327]: from='client.? 192.168.122.101:0/3508033125' entity='client.admin' cmd=[{"prefix": "osd stat"}]: dispatch
Dec 06 10:17:50 compute-0 ceph-mon[74327]: from='client.? 192.168.122.100:0/242421932' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Dec 06 10:17:50 compute-0 ceph-mon[74327]: from='client.? 192.168.122.102:0/1985334892' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Dec 06 10:17:50 compute-0 ceph-mon[74327]: from='client.26303 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:17:50 compute-0 ceph-mon[74327]: from='client.? 192.168.122.101:0/3766028525' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 10:17:50 compute-0 ceph-mon[74327]: from='client.? 192.168.122.100:0/2876669399' entity='client.admin' cmd=[{"prefix": "time-sync-status"}]: dispatch
Dec 06 10:17:50 compute-0 ceph-mon[74327]: pgmap v1103: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:17:50 compute-0 ceph-mon[74327]: from='client.? 192.168.122.102:0/1751496017' entity='client.admin' cmd=[{"prefix": "time-sync-status"}]: dispatch
Dec 06 10:17:50 compute-0 ceph-mon[74327]: from='client.? 192.168.122.100:0/739045502' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json-pretty"}]: dispatch
Dec 06 10:17:50 compute-0 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.18099 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 10:17:50 compute-0 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.27514 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 10:17:50 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:17:50] "GET /metrics HTTP/1.1" 200 48461 "" "Prometheus/2.51.0"
Dec 06 10:17:50 compute-0 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:17:50] "GET /metrics HTTP/1.1" 200 48461 "" "Prometheus/2.51.0"
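[editor's note] These two lines are the same event logged twice, once by the container and once by the mgr's cherrypy access logger: Prometheus pulling 48461 bytes of metrics from the mgr prometheus module. A minimal fetch-and-filter sketch; port 9283 is the module default and an assumption here, since the log does not show the listening port:

```python
import requests  # assumption: a plain HTTP client in place of Prometheus

URL = "http://192.168.122.100:9283/metrics"  # 9283 = module default, assumed

text = requests.get(URL, timeout=5).text
# Exposition format: one "name{labels} value" sample per line.
for sample in text.splitlines():
    if sample.startswith("ceph_health_status"):
        print(sample)  # 0.0 means HEALTH_OK
```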
Dec 06 10:17:51 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "detail": "detail", "format": "json-pretty"} v 0)
Dec 06 10:17:51 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3494036148' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail", "format": "json-pretty"}]: dispatch
Dec 06 10:17:51 compute-0 podman[286523]: 2025-12-06 10:17:51.364385175 +0000 UTC m=+0.099199062 container health_status ed08b04f63d3695f0aa2f04fcc706897af4aa24f286fa3cd10a4323ee2c868ab (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
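[editor's note] This podman line is a periodic health_status event for ovn_controller: the configured test /openstack/healthcheck ran, the container is healthy, and the failing streak is zero. The same check can be invoked on demand with `podman healthcheck run`, which exits 0 when the test passes; a sketch:

```python
import subprocess


def is_healthy(name: str = "ovn_controller") -> bool:
    """Run the container's configured healthcheck test; exit code 0 == healthy."""
    return subprocess.run(["podman", "healthcheck", "run", name],
                          capture_output=True).returncode == 0


print(is_healthy())
```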
Dec 06 10:17:51 compute-0 ceph-mon[74327]: from='client.26315 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:17:51 compute-0 ceph-mon[74327]: from='client.? 192.168.122.102:0/2380510369' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json-pretty"}]: dispatch
Dec 06 10:17:51 compute-0 ceph-mon[74327]: from='client.18099 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 10:17:51 compute-0 ceph-mon[74327]: from='client.? 192.168.122.101:0/2127390853' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Dec 06 10:17:51 compute-0 ceph-mon[74327]: from='client.27514 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 10:17:51 compute-0 ceph-mon[74327]: from='client.? 192.168.122.100:0/3494036148' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail", "format": "json-pretty"}]: dispatch
Dec 06 10:17:51 compute-0 ceph-mon[74327]: from='client.? 192.168.122.101:0/3232134178' entity='client.admin' cmd=[{"prefix": "time-sync-status"}]: dispatch
Dec 06 10:17:51 compute-0 ceph-mon[74327]: from='client.? 192.168.122.102:0/2132475846' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail", "format": "json-pretty"}]: dispatch
Dec 06 10:17:51 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json-pretty"} v 0)
Dec 06 10:17:51 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3484183676' entity='client.admin' cmd=[{"prefix": "df", "format": "json-pretty"}]: dispatch
Dec 06 10:17:51 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json-pretty"} v 0)
Dec 06 10:17:51 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1987338407' entity='client.admin' cmd=[{"prefix": "df", "format": "json-pretty"}]: dispatch
Dec 06 10:17:51 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1104: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:17:51 compute-0 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.26348 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 10:17:52 compute-0 nova_compute[254819]: 2025-12-06 10:17:52.006 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:17:52 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:17:52 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:17:52 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:17:52.023 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:17:52 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "fs dump", "format": "json-pretty"} v 0)
Dec 06 10:17:52 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/130199360' entity='client.admin' cmd=[{"prefix": "fs dump", "format": "json-pretty"}]: dispatch
Dec 06 10:17:52 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:17:52 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:17:52 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:17:52.170 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:17:52 compute-0 ceph-mon[74327]: from='client.? 192.168.122.101:0/3949318305' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json-pretty"}]: dispatch
Dec 06 10:17:52 compute-0 ceph-mon[74327]: from='client.? 192.168.122.100:0/3484183676' entity='client.admin' cmd=[{"prefix": "df", "format": "json-pretty"}]: dispatch
Dec 06 10:17:52 compute-0 ceph-mon[74327]: from='client.? 192.168.122.102:0/1987338407' entity='client.admin' cmd=[{"prefix": "df", "format": "json-pretty"}]: dispatch
Dec 06 10:17:52 compute-0 ceph-mon[74327]: pgmap v1104: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:17:52 compute-0 ceph-mon[74327]: from='client.26348 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 10:17:52 compute-0 ceph-mon[74327]: from='client.? 192.168.122.100:0/130199360' entity='client.admin' cmd=[{"prefix": "fs dump", "format": "json-pretty"}]: dispatch
Dec 06 10:17:52 compute-0 ceph-mon[74327]: from='client.? 192.168.122.102:0/2505701622' entity='client.admin' cmd=[{"prefix": "fs dump", "format": "json-pretty"}]: dispatch
Dec 06 10:17:52 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "fs ls", "format": "json-pretty"} v 0)
Dec 06 10:17:52 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/860231069' entity='client.admin' cmd=[{"prefix": "fs ls", "format": "json-pretty"}]: dispatch
Dec 06 10:17:52 compute-0 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.18141 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 10:17:52 compute-0 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.27574 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 10:17:53 compute-0 nova_compute[254819]: 2025-12-06 10:17:53.209 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:17:53 compute-0 nova_compute[254819]: 2025-12-06 10:17:53.210 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:17:53 compute-0 nova_compute[254819]: 2025-12-06 10:17:53.210 254824 DEBUG nova.compute.manager [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 06 10:17:53 compute-0 nova_compute[254819]: 2025-12-06 10:17:53.211 254824 DEBUG nova.compute.manager [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 06 10:17:53 compute-0 nova_compute[254819]: 2025-12-06 10:17:53.228 254824 DEBUG nova.compute.manager [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 06 10:17:53 compute-0 nova_compute[254819]: 2025-12-06 10:17:53.229 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:17:53 compute-0 nova_compute[254819]: 2025-12-06 10:17:53.229 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:17:53 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds stat", "format": "json-pretty"} v 0)
Dec 06 10:17:53 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1626240640' entity='client.admin' cmd=[{"prefix": "mds stat", "format": "json-pretty"}]: dispatch
Dec 06 10:17:53 compute-0 ceph-mon[74327]: from='client.? 192.168.122.101:0/2596899974' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail", "format": "json-pretty"}]: dispatch
Dec 06 10:17:53 compute-0 ceph-mon[74327]: from='client.? 192.168.122.100:0/860231069' entity='client.admin' cmd=[{"prefix": "fs ls", "format": "json-pretty"}]: dispatch
Dec 06 10:17:53 compute-0 ceph-mon[74327]: from='client.? 192.168.122.102:0/2093660645' entity='client.admin' cmd=[{"prefix": "fs ls", "format": "json-pretty"}]: dispatch
Dec 06 10:17:53 compute-0 ceph-mon[74327]: from='client.18141 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 10:17:53 compute-0 ceph-mon[74327]: from='client.? 192.168.122.101:0/562050907' entity='client.admin' cmd=[{"prefix": "df", "format": "json-pretty"}]: dispatch
Dec 06 10:17:53 compute-0 ceph-mon[74327]: from='client.27574 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 10:17:53 compute-0 ceph-mon[74327]: from='client.? 192.168.122.100:0/1626240640' entity='client.admin' cmd=[{"prefix": "mds stat", "format": "json-pretty"}]: dispatch
Dec 06 10:17:53 compute-0 ceph-mon[74327]: from='client.? 192.168.122.101:0/1958371454' entity='client.admin' cmd=[{"prefix": "fs dump", "format": "json-pretty"}]: dispatch
Dec 06 10:17:53 compute-0 nova_compute[254819]: 2025-12-06 10:17:53.748 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:17:53 compute-0 nova_compute[254819]: 2025-12-06 10:17:53.748 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:17:53 compute-0 nova_compute[254819]: 2025-12-06 10:17:53.748 254824 DEBUG nova.compute.manager [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 06 10:17:53 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1105: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec 06 10:17:53 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json-pretty"} v 0)
Dec 06 10:17:53 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/48299577' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json-pretty"}]: dispatch
Dec 06 10:17:53 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 06 10:17:53 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 10:17:54 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:17:54 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:17:54 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:17:54.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:17:54 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 10:17:54 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 10:17:54 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 10:17:54 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 10:17:54 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 10:17:54 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 10:17:54 compute-0 nova_compute[254819]: 2025-12-06 10:17:54.076 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:17:54 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:17:54 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:17:54 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:17:54.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:17:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:17:54.247 162267 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:17:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:17:54.248 162267 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:17:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:17:54.248 162267 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
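[editor's note] This acquire/acquired/released trio is oslo.concurrency's standard lock tracing, here around the metadata agent's child-process check; the waited/held timings come from the lockutils wrapper itself. The same instrumentation falls out of decorating a function with lockutils.synchronized; a sketch (with oslo DEBUG logging configured, it emits the same three lines):

```python
from oslo_concurrency import lockutils  # same library as in the paths above


@lockutils.synchronized("_check_child_processes")
def check_child_processes():
    """Placeholder body; the real agent respawns dead child processes here."""


check_child_processes()
```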
Dec 06 10:17:54 compute-0 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.27604 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 10:17:54 compute-0 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.18180 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 10:17:54 compute-0 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.26393 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 10:17:54 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json-pretty"} v 0)
Dec 06 10:17:54 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1211427813' entity='client.admin' cmd=[{"prefix": "osd blocklist ls", "format": "json-pretty"}]: dispatch
Dec 06 10:17:54 compute-0 sudo[286855]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 10:17:54 compute-0 sudo[286855]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:17:54 compute-0 sudo[286855]: pam_unix(sudo:session): session closed for user root
Dec 06 10:17:54 compute-0 ceph-mon[74327]: from='client.? 192.168.122.102:0/2941709570' entity='client.admin' cmd=[{"prefix": "mds stat", "format": "json-pretty"}]: dispatch
Dec 06 10:17:54 compute-0 ceph-mon[74327]: from='client.? 192.168.122.101:0/762392159' entity='client.admin' cmd=[{"prefix": "fs ls", "format": "json-pretty"}]: dispatch
Dec 06 10:17:54 compute-0 ceph-mon[74327]: pgmap v1105: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec 06 10:17:54 compute-0 ceph-mon[74327]: from='client.? 192.168.122.102:0/48299577' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json-pretty"}]: dispatch
Dec 06 10:17:54 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 10:17:54 compute-0 ceph-mon[74327]: from='client.? 192.168.122.100:0/1855919859' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json-pretty"}]: dispatch
Dec 06 10:17:54 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:17:55 compute-0 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.18198 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 10:17:55 compute-0 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.27631 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 10:17:55 compute-0 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.18210 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 10:17:55 compute-0 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.27643 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 10:17:55 compute-0 nova_compute[254819]: 2025-12-06 10:17:55.749 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:17:55 compute-0 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.26423 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 10:17:55 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd dump", "format": "json-pretty"} v 0)
Dec 06 10:17:55 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2626797575' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json-pretty"}]: dispatch
Dec 06 10:17:55 compute-0 ceph-mon[74327]: from='client.27604 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 10:17:55 compute-0 ceph-mon[74327]: from='client.18180 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 10:17:55 compute-0 ceph-mon[74327]: from='client.26393 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 10:17:55 compute-0 ceph-mon[74327]: from='client.? 192.168.122.100:0/1211427813' entity='client.admin' cmd=[{"prefix": "osd blocklist ls", "format": "json-pretty"}]: dispatch
Dec 06 10:17:55 compute-0 ceph-mon[74327]: from='client.? 192.168.122.101:0/2059299300' entity='client.admin' cmd=[{"prefix": "mds stat", "format": "json-pretty"}]: dispatch
Dec 06 10:17:55 compute-0 ceph-mon[74327]: from='client.? 192.168.122.102:0/1663530059' entity='client.admin' cmd=[{"prefix": "osd blocklist ls", "format": "json-pretty"}]: dispatch
Dec 06 10:17:55 compute-0 ceph-mon[74327]: from='client.? 192.168.122.101:0/2704565734' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json-pretty"}]: dispatch
Dec 06 10:17:55 compute-0 ceph-mon[74327]: from='client.? 192.168.122.100:0/2626797575' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json-pretty"}]: dispatch
Dec 06 10:17:55 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1106: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:17:56 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:17:56 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:17:56 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:17:56.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:17:56 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:17:56 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:17:56 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:17:56.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:17:56 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd numa-status", "format": "json-pretty"} v 0)
Dec 06 10:17:56 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/812042877' entity='client.admin' cmd=[{"prefix": "osd numa-status", "format": "json-pretty"}]: dispatch
Dec 06 10:17:56 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd numa-status", "format": "json-pretty"} v 0)
Dec 06 10:17:56 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/443912516' entity='client.admin' cmd=[{"prefix": "osd numa-status", "format": "json-pretty"}]: dispatch
Dec 06 10:17:56 compute-0 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.26441 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 10:17:56 compute-0 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.18255 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 10:17:56 compute-0 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.27694 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 10:17:56 compute-0 ceph-mon[74327]: from='client.18198 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 10:17:56 compute-0 ceph-mon[74327]: from='client.27631 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 10:17:56 compute-0 ceph-mon[74327]: from='client.18210 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 10:17:56 compute-0 ceph-mon[74327]: from='client.27643 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 10:17:56 compute-0 ceph-mon[74327]: from='client.26423 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 10:17:56 compute-0 ceph-mon[74327]: pgmap v1106: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:17:56 compute-0 ceph-mon[74327]: from='client.? 192.168.122.102:0/34457584' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 10:17:56 compute-0 ceph-mon[74327]: from='client.? 192.168.122.102:0/4020160728' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json-pretty"}]: dispatch
Dec 06 10:17:56 compute-0 ceph-mon[74327]: from='client.? 192.168.122.101:0/897653303' entity='client.admin' cmd=[{"prefix": "osd blocklist ls", "format": "json-pretty"}]: dispatch
Dec 06 10:17:56 compute-0 ceph-mon[74327]: from='client.? 192.168.122.100:0/812042877' entity='client.admin' cmd=[{"prefix": "osd numa-status", "format": "json-pretty"}]: dispatch
Dec 06 10:17:56 compute-0 ceph-mon[74327]: from='client.? 192.168.122.102:0/443912516' entity='client.admin' cmd=[{"prefix": "osd numa-status", "format": "json-pretty"}]: dispatch
Dec 06 10:17:56 compute-0 ceph-mon[74327]: from='client.? 192.168.122.102:0/3394423495' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 10:17:57 compute-0 nova_compute[254819]: 2025-12-06 10:17:57.010 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:17:57 compute-0 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.18267 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 10:17:57 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:17:57 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 06 10:17:57 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:17:57 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec 06 10:17:57 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:17:57 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 10:17:57 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:17:57 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 10:17:57 compute-0 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.26453 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 10:17:57 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:17:57 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Dec 06 10:17:57 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:17:57 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Dec 06 10:17:57 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:17:57 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 10:17:57 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:17:57 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec 06 10:17:57 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:17:57 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 06 10:17:57 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:17:57 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec 06 10:17:57 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:17:57 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 10:17:57 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:17:57 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 06 10:17:57 compute-0 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.27703 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 10:17:57 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:17:57 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 06 10:17:57 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:17:57 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec 06 10:17:57 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:17:57 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 10:17:57 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:17:57 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 10:17:57 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:17:57 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Dec 06 10:17:57 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:17:57 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Dec 06 10:17:57 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:17:57 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 10:17:57 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:17:57 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec 06 10:17:57 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:17:57 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 06 10:17:57 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:17:57 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec 06 10:17:57 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:17:57 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 10:17:57 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:17:57 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
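The pg_autoscaler block above is emitted once per `osd pool autoscale-status` caller, which is why the same twelve pools appear twice within one second. The arithmetic behind each line is simple: the logged "pg target" is the pool's capacity ratio times its bias times the cluster PG budget. A minimal check of that relationship, assuming the default mon_target_pg_per_osd = 100 and the 3 OSDs implied by the 60 GiB raw capacity (64411926528 bytes) repeated in the effective_target_ratio lines:

    # Sketch: reproduce the pg_autoscaler arithmetic from the lines above.
    # mon_target_pg_per_osd = 100 and num_osds = 3 are assumptions; neither
    # value is logged here, but a budget of 100 * 3 = 300 fits every pool.
    budget = 100 * 3

    pools = {
        # name: (capacity ratio from the log, bias from the log)
        ".mgr":               (7.185749983720779e-06, 1.0),
        "cephfs.cephfs.meta": (5.087256625643029e-07, 4.0),
        "images":             (0.000665858301588852,  1.0),
    }
    for name, (usage, bias) in pools.items():
        print(name, usage * bias * budget)
    # .mgr               -> 0.0021557249951162337  (logged pg target)
    # cephfs.cephfs.meta -> 0.0006104707950771635  (logged pg target)
    # images             -> 0.19975749047665559    (logged pg target)

Each raw target is then quantized in power-of-two steps and only applied when it differs from the current pg_num by a large enough factor, which is why every pool stays at its current value here. Those current values also reconcile with the pgmap lines throughout this log: 1 (.mgr) + 16 (cephfs.cephfs.meta) + 10 pools at 32 each = 337 PGs, all active+clean.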
Dec 06 10:17:57 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool ls", "detail": "detail", "format": "json-pretty"} v 0)
Dec 06 10:17:57 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1210231069' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail", "format": "json-pretty"}]: dispatch
Dec 06 10:17:57 compute-0 podman[287094]: 2025-12-06 10:17:57.572218669 +0000 UTC m=+0.065699460 container health_status ec1e66d2175087ad7a28a9a6b5a784e30128df3f5e268959d009e364cd5120f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent)
Dec 06 10:17:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:17:57.685Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec 06 10:17:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:17:57.685Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
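The two Alertmanager entries above record the ceph-dashboard webhook receivers failing: the POST to compute-2 dies on a TCP i/o timeout, and the dispatcher then gives up on both receivers once the notification's context deadline expires. Before digging into Alertmanager itself, a plain TCP reachability probe against the receiver endpoints is a reasonable first check; a minimal sketch using only the hosts and port from the log (the 5 s timeout is arbitrary):

    # Probe the prometheus_receiver endpoints Alertmanager cannot reach.
    import socket

    targets = [
        ("compute-1.ctlplane.example.com", 8443),
        ("compute-2.ctlplane.example.com", 8443),
    ]
    for host, port in targets:
        try:
            # The same TCP connect Alertmanager must complete before its POST.
            with socket.create_connection((host, port), timeout=5):
                print(f"{host}:{port} reachable")
        except OSError as exc:
            print(f"{host}:{port} unreachable: {exc}")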
Dec 06 10:17:57 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1107: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec 06 10:17:57 compute-0 ceph-mon[74327]: from='client.26441 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 10:17:57 compute-0 ceph-mon[74327]: from='client.18255 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 10:17:57 compute-0 ceph-mon[74327]: from='client.27694 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 10:17:57 compute-0 ceph-mon[74327]: from='client.18267 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 10:17:57 compute-0 ceph-mon[74327]: from='client.26453 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 10:17:57 compute-0 ceph-mon[74327]: from='client.? 192.168.122.100:0/1210231069' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail", "format": "json-pretty"}]: dispatch
Dec 06 10:17:57 compute-0 ceph-mon[74327]: from='client.? 192.168.122.101:0/3855571438' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json-pretty"}]: dispatch
Dec 06 10:17:57 compute-0 ceph-mon[74327]: from='client.? 192.168.122.102:0/3263841297' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail", "format": "json-pretty"}]: dispatch
Dec 06 10:17:57 compute-0 ceph-mon[74327]: from='client.? 192.168.122.101:0/3069230919' entity='client.admin' cmd=[{"prefix": "osd numa-status", "format": "json-pretty"}]: dispatch
Dec 06 10:17:57 compute-0 ceph-mon[74327]: from='client.? 192.168.122.100:0/2354413920' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json-pretty"}]: dispatch
Dec 06 10:17:57 compute-0 ceph-mon[74327]: from='client.? 192.168.122.102:0/3483261157' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json-pretty"}]: dispatch
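Each `from='client...' cmd=[{...}]: dispatch` entry above is the monitor's audit trail for a JSON-framed mon_command; the CLI, the dashboard, and the exporters polling this cluster all speak the same format. The same call can be made directly; a minimal sketch with the python-rados binding (the conffile path is an assumption):

    # Issue the same "osd pool ls" command the audit log shows dispatching.
    import json
    import rados

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")  # path assumed
    cluster.connect()
    cmd = json.dumps({"prefix": "osd pool ls", "detail": "detail",
                      "format": "json-pretty"})
    ret, outbuf, outs = cluster.mon_command(cmd, b"")
    print(ret, outbuf.decode())
    cluster.shutdown()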
Dec 06 10:17:58 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:17:58 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:17:58 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:17:58.029 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:17:58 compute-0 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.26480 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 10:17:58 compute-0 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.18303 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 10:17:58 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:17:58 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:17:58 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:17:58.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
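The radosgw triplets (starting new request / req done / beast access line) repeat every two seconds from 192.168.122.100 and 192.168.122.102: an anonymous `HEAD / HTTP/1.0` answered 200 in under a millisecond is the signature of load-balancer health probes rather than user traffic. Reproducing one probe is trivial; a sketch (the RGW port is a placeholder, since the beast access line does not record it):

    # Send the same anonymous HEAD / probe the beast lines show arriving.
    import http.client

    # Port 8080 is only an assumption; the listening port is not in the log.
    conn = http.client.HTTPConnection("compute-0", 8080, timeout=5)
    conn.request("HEAD", "/")
    resp = conn.getresponse()
    print(resp.status, resp.reason)  # the log shows 200 for these probes
    conn.close()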
Dec 06 10:17:58 compute-0 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.27736 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 10:17:58 compute-0 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.26486 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 10:17:58 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:17:58 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 06 10:17:58 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:17:58 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec 06 10:17:58 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:17:58 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 10:17:58 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:17:58 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 10:17:58 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:17:58 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Dec 06 10:17:58 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:17:58 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Dec 06 10:17:58 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:17:58 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 10:17:58 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:17:58 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec 06 10:17:58 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:17:58 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 06 10:17:58 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:17:58 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec 06 10:17:58 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:17:58 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 10:17:58 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:17:58 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 06 10:17:58 compute-0 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.18315 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 10:17:58 compute-0 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.27748 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 10:17:58 compute-0 virtqemud[254445]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
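The virtqemud error above means the modular libvirt storage daemon is not reachable: virtqemud tried to forward a storage query over /var/run/libvirt/virtstoraged-sock-ro and the socket does not exist, which on a systemd host usually means virtstoraged.socket was never activated. A stdlib-only sketch of the same check, using the path from the log line:

    # Verify the read-only virtstoraged socket virtqemud failed to open.
    import os
    import socket

    path = "/var/run/libvirt/virtstoraged-sock-ro"
    if not os.path.exists(path):
        # Matches the logged 'No such file or directory'.
        print("missing:", path)
    else:
        s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        s.connect(path)  # raises ConnectionRefusedError if nothing listens
        s.close()
        print("listener present on", path)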
Dec 06 10:17:59 compute-0 ceph-mon[74327]: from='client.27703 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 10:17:59 compute-0 ceph-mon[74327]: pgmap v1107: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec 06 10:17:59 compute-0 ceph-mon[74327]: from='client.? 192.168.122.101:0/2207639809' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail", "format": "json-pretty"}]: dispatch
Dec 06 10:17:59 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0)
Dec 06 10:17:59 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4242664383' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Dec 06 10:17:59 compute-0 nova_compute[254819]: 2025-12-06 10:17:59.080 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:17:59 compute-0 systemd[1]: Starting Time & Date Service...
Dec 06 10:17:59 compute-0 systemd[1]: Started Time & Date Service.
Dec 06 10:17:59 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "time-sync-status", "format": "json-pretty"} v 0)
Dec 06 10:17:59 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/346786115' entity='client.admin' cmd=[{"prefix": "time-sync-status", "format": "json-pretty"}]: dispatch
Dec 06 10:17:59 compute-0 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.26507 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 10:17:59 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1108: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:17:59 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:18:00 compute-0 ceph-mon[74327]: from='client.26480 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 10:18:00 compute-0 ceph-mon[74327]: from='client.18303 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 10:18:00 compute-0 ceph-mon[74327]: from='client.27736 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 10:18:00 compute-0 ceph-mon[74327]: from='client.26486 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 10:18:00 compute-0 ceph-mon[74327]: from='client.18315 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 10:18:00 compute-0 ceph-mon[74327]: from='client.27748 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 10:18:00 compute-0 ceph-mon[74327]: from='client.? 192.168.122.100:0/4242664383' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Dec 06 10:18:00 compute-0 ceph-mon[74327]: from='client.? 192.168.122.102:0/2060160573' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Dec 06 10:18:00 compute-0 ceph-mon[74327]: from='client.? 192.168.122.101:0/1008317242' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json-pretty"}]: dispatch
Dec 06 10:18:00 compute-0 ceph-mon[74327]: from='client.? 192.168.122.100:0/346786115' entity='client.admin' cmd=[{"prefix": "time-sync-status", "format": "json-pretty"}]: dispatch
Dec 06 10:18:00 compute-0 ceph-mon[74327]: from='client.? 192.168.122.102:0/2859090236' entity='client.admin' cmd=[{"prefix": "time-sync-status", "format": "json-pretty"}]: dispatch
Dec 06 10:18:00 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:18:00 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:18:00 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:18:00.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:18:00 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:18:00 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:18:00 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:18:00.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:18:00 compute-0 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.26513 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 10:18:00 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:18:00] "GET /metrics HTTP/1.1" 200 48458 "" "Prometheus/2.51.0"
Dec 06 10:18:00 compute-0 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:18:00] "GET /metrics HTTP/1.1" 200 48458 "" "Prometheus/2.51.0"
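The paired lines above are one Prometheus scrape logged twice, once by the container wrapper and once by the mgr's cherrypy access logger; the prometheus module answers GET /metrics with roughly 48 kB of text every 10 seconds. Fetching it by hand looks like this; a sketch (port 9283 is the module's usual default and an assumption here, since the access line records only the path):

    # Pull the ceph-mgr prometheus module metrics the way Prometheus does.
    import urllib.request

    url = "http://192.168.122.100:9283/metrics"  # port assumed, not logged
    with urllib.request.urlopen(url, timeout=5) as resp:
        body = resp.read()
        print(resp.status, len(body))  # log shows 200 and ~48458 bytes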
Dec 06 10:18:01 compute-0 ceph-mon[74327]: from='client.26507 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 10:18:01 compute-0 ceph-mon[74327]: pgmap v1108: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:18:01 compute-0 ceph-mon[74327]: from='client.? 192.168.122.101:0/2167113296' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Dec 06 10:18:01 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1109: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:18:02 compute-0 nova_compute[254819]: 2025-12-06 10:18:02.011 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:18:02 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:18:02 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:18:02 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:18:02.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:18:02 compute-0 ceph-mon[74327]: from='client.26513 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 10:18:02 compute-0 ceph-mon[74327]: from='client.? 192.168.122.101:0/2123486085' entity='client.admin' cmd=[{"prefix": "time-sync-status", "format": "json-pretty"}]: dispatch
Dec 06 10:18:02 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:18:02 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:18:02 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:18:02.181 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:18:03 compute-0 ceph-mon[74327]: pgmap v1109: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:18:03 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1110: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec 06 10:18:04 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:18:04 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:18:04 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:18:04.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:18:04 compute-0 nova_compute[254819]: 2025-12-06 10:18:04.084 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:18:04 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:18:04 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:18:04 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:18:04.184 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:18:04 compute-0 ceph-mon[74327]: pgmap v1110: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec 06 10:18:04 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:18:05 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1111: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:18:06 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:18:06 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:18:06 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:18:06.035 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:18:06 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:18:06 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:18:06 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:18:06.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:18:07 compute-0 nova_compute[254819]: 2025-12-06 10:18:07.012 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:18:07 compute-0 ceph-mon[74327]: pgmap v1111: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:18:07 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:18:07.685Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec 06 10:18:07 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:18:07.686Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 06 10:18:07 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1112: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec 06 10:18:08 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:18:08 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:18:08 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:18:08.038 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:18:08 compute-0 ceph-mon[74327]: pgmap v1112: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec 06 10:18:08 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:18:08 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000028s ======
Dec 06 10:18:08 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:18:08.189 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 06 10:18:08 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 06 10:18:08 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 10:18:09 compute-0 nova_compute[254819]: 2025-12-06 10:18:09.087 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:18:09 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 10:18:09 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1113: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:18:09 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:18:10 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:18:10 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:18:10 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:18:10.040 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:18:10 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:18:10 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:18:10 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:18:10.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:18:10 compute-0 ceph-mon[74327]: pgmap v1113: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:18:10 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:18:10] "GET /metrics HTTP/1.1" 200 48459 "" "Prometheus/2.51.0"
Dec 06 10:18:10 compute-0 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:18:10] "GET /metrics HTTP/1.1" 200 48459 "" "Prometheus/2.51.0"
Dec 06 10:18:11 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1114: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:18:12 compute-0 nova_compute[254819]: 2025-12-06 10:18:12.016 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:18:12 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:18:12 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:18:12 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:18:12.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:18:12 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:18:12 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:18:12 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:18:12.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:18:13 compute-0 ceph-mon[74327]: pgmap v1114: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:18:13 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1115: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec 06 10:18:14 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:18:14 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:18:14 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:18:14.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:18:14 compute-0 nova_compute[254819]: 2025-12-06 10:18:14.090 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:18:14 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:18:14 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:18:14 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:18:14.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:18:14 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:18:15 compute-0 sudo[287664]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 10:18:15 compute-0 sudo[287664]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:18:15 compute-0 sudo[287664]: pam_unix(sudo:session): session closed for user root
Dec 06 10:18:15 compute-0 ceph-mon[74327]: pgmap v1115: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec 06 10:18:15 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1116: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:18:16 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:18:16 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:18:16 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:18:16.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:18:16 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:18:16 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:18:16 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:18:16.202 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:18:16 compute-0 ceph-mon[74327]: pgmap v1116: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:18:17 compute-0 nova_compute[254819]: 2025-12-06 10:18:17.017 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:18:17 compute-0 podman[287689]: 2025-12-06 10:18:17.432547248 +0000 UTC m=+0.065882234 container health_status a9cf33c1bf7891b3fdd9db4717ad5f3587fe9e5acf9171920c6d6334b70a502a (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_id=multipathd)
Dec 06 10:18:17 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:18:17.687Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 06 10:18:17 compute-0 sudo[287712]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 10:18:17 compute-0 sudo[287712]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:18:17 compute-0 sudo[287712]: pam_unix(sudo:session): session closed for user root
Dec 06 10:18:17 compute-0 sudo[287737]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Dec 06 10:18:17 compute-0 sudo[287737]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:18:17 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1117: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec 06 10:18:18 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:18:18 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:18:18 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:18:18.050 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:18:18 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:18:18 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:18:18 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:18:18.205 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:18:18 compute-0 sudo[287737]: pam_unix(sudo:session): session closed for user root
Dec 06 10:18:18 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 06 10:18:18 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 10:18:18 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 06 10:18:18 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 10:18:18 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 06 10:18:18 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 10:18:18 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Dec 06 10:18:18 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 10:18:18 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 06 10:18:18 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 10:18:18 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 06 10:18:18 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 10:18:18 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 06 10:18:18 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 10:18:18 compute-0 sudo[287795]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 10:18:18 compute-0 sudo[287795]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:18:18 compute-0 sudo[287795]: pam_unix(sudo:session): session closed for user root
Dec 06 10:18:18 compute-0 sudo[287820]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 5ecd3f74-dade-5fc4-92ce-8950ae424258 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Dec 06 10:18:18 compute-0 sudo[287820]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
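The sudo trail above shows how the cephadm mgr module drives this host: it ships a content-addressed copy of the cephadm binary into /var/lib/ceph/<fsid>/, locates python3, runs `gather-facts`, and then wraps ceph-volume in the quay.io/ceph/ceph container to attempt OSD creation on /dev/ceph_vg0/ceph_lv0 (`lvm batch --no-auto ... --yes --no-systemd`). The same entry point can be driven by hand; a sketch reproducing the read-only `lvm list` call that appears further down in this log (the cephadm file name is abbreviated here for readability):

    # Run the copied cephadm binary the way the sudo COMMAND lines show.
    import subprocess

    fsid = "5ecd3f74-dade-5fc4-92ce-8950ae424258"
    cephadm = f"/var/lib/ceph/{fsid}/cephadm.1a8853661a9c..."  # abbreviated
    image = ("quay.io/ceph/ceph@sha256:"
             "7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec")
    subprocess.run(
        ["sudo", "python3", cephadm, "--image", image, "--timeout", "895",
         "ceph-volume", "--fsid", fsid, "--",
         "lvm", "list", "--format", "json"],
        check=True,
    )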
Dec 06 10:18:19 compute-0 ceph-mon[74327]: pgmap v1117: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec 06 10:18:19 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 10:18:19 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 10:18:19 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 10:18:19 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 10:18:19 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 10:18:19 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 10:18:19 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 10:18:19 compute-0 podman[287886]: 2025-12-06 10:18:19.059262707 +0000 UTC m=+0.039247600 container create 57fda08169247040546bb04005be122fe76bb36a91f2e1951d545b3ae354976b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_zhukovsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Dec 06 10:18:19 compute-0 nova_compute[254819]: 2025-12-06 10:18:19.094 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:18:19 compute-0 systemd[1]: Started libpod-conmon-57fda08169247040546bb04005be122fe76bb36a91f2e1951d545b3ae354976b.scope.
Dec 06 10:18:19 compute-0 podman[287886]: 2025-12-06 10:18:19.041913045 +0000 UTC m=+0.021897958 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 06 10:18:19 compute-0 systemd[1]: Started libcrun container.
Dec 06 10:18:19 compute-0 podman[287886]: 2025-12-06 10:18:19.162543868 +0000 UTC m=+0.142528781 container init 57fda08169247040546bb04005be122fe76bb36a91f2e1951d545b3ae354976b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_zhukovsky, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Dec 06 10:18:19 compute-0 podman[287886]: 2025-12-06 10:18:19.169370223 +0000 UTC m=+0.149355116 container start 57fda08169247040546bb04005be122fe76bb36a91f2e1951d545b3ae354976b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_zhukovsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Dec 06 10:18:19 compute-0 podman[287886]: 2025-12-06 10:18:19.172756145 +0000 UTC m=+0.152741038 container attach 57fda08169247040546bb04005be122fe76bb36a91f2e1951d545b3ae354976b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_zhukovsky, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1)
Dec 06 10:18:19 compute-0 sharp_zhukovsky[287902]: 167 167
Dec 06 10:18:19 compute-0 systemd[1]: libpod-57fda08169247040546bb04005be122fe76bb36a91f2e1951d545b3ae354976b.scope: Deactivated successfully.
Dec 06 10:18:19 compute-0 conmon[287902]: conmon 57fda08169247040546b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-57fda08169247040546bb04005be122fe76bb36a91f2e1951d545b3ae354976b.scope/container/memory.events
Dec 06 10:18:19 compute-0 podman[287886]: 2025-12-06 10:18:19.177783993 +0000 UTC m=+0.157768886 container died 57fda08169247040546bb04005be122fe76bb36a91f2e1951d545b3ae354976b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_zhukovsky, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Dec 06 10:18:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-c17dfd92ea626f2d2f207ec32f83474227cb28b77ecec0838914bc4fd29e7c5e-merged.mount: Deactivated successfully.
Dec 06 10:18:19 compute-0 podman[287886]: 2025-12-06 10:18:19.21184577 +0000 UTC m=+0.191830663 container remove 57fda08169247040546bb04005be122fe76bb36a91f2e1951d545b3ae354976b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_zhukovsky, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 10:18:19 compute-0 systemd[1]: libpod-conmon-57fda08169247040546bb04005be122fe76bb36a91f2e1951d545b3ae354976b.scope: Deactivated successfully.
Dec 06 10:18:19 compute-0 rsyslogd[1004]: imjournal from <np0005548915:podman>: begin to drop messages due to rate-limiting
Dec 06 10:18:19 compute-0 podman[287927]: 2025-12-06 10:18:19.403914408 +0000 UTC m=+0.054787313 container create d92be5a799bdc4d3021465192e45358a9565a52ba7389956a0e2a5b399ad71c4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_mcclintock, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec 06 10:18:19 compute-0 systemd[1]: Started libpod-conmon-d92be5a799bdc4d3021465192e45358a9565a52ba7389956a0e2a5b399ad71c4.scope.
Dec 06 10:18:19 compute-0 podman[287927]: 2025-12-06 10:18:19.375228037 +0000 UTC m=+0.026100962 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 06 10:18:19 compute-0 systemd[1]: Started libcrun container.
Dec 06 10:18:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/69dae2a1601b32265f8695a61f0f337c3db44da9210ea94582c637d914912b8a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 10:18:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/69dae2a1601b32265f8695a61f0f337c3db44da9210ea94582c637d914912b8a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 10:18:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/69dae2a1601b32265f8695a61f0f337c3db44da9210ea94582c637d914912b8a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 10:18:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/69dae2a1601b32265f8695a61f0f337c3db44da9210ea94582c637d914912b8a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 10:18:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/69dae2a1601b32265f8695a61f0f337c3db44da9210ea94582c637d914912b8a/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 06 10:18:19 compute-0 podman[287927]: 2025-12-06 10:18:19.517013096 +0000 UTC m=+0.167885991 container init d92be5a799bdc4d3021465192e45358a9565a52ba7389956a0e2a5b399ad71c4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_mcclintock, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 10:18:19 compute-0 podman[287927]: 2025-12-06 10:18:19.526428333 +0000 UTC m=+0.177301218 container start d92be5a799bdc4d3021465192e45358a9565a52ba7389956a0e2a5b399ad71c4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_mcclintock, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec 06 10:18:19 compute-0 podman[287927]: 2025-12-06 10:18:19.530149394 +0000 UTC m=+0.181022269 container attach d92be5a799bdc4d3021465192e45358a9565a52ba7389956a0e2a5b399ad71c4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_mcclintock, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 10:18:19 compute-0 funny_mcclintock[287943]: --> passed data devices: 0 physical, 1 LVM
Dec 06 10:18:19 compute-0 funny_mcclintock[287943]: --> All data devices are unavailable
Dec 06 10:18:19 compute-0 systemd[1]: libpod-d92be5a799bdc4d3021465192e45358a9565a52ba7389956a0e2a5b399ad71c4.scope: Deactivated successfully.
Dec 06 10:18:19 compute-0 podman[287927]: 2025-12-06 10:18:19.873177281 +0000 UTC m=+0.524050186 container died d92be5a799bdc4d3021465192e45358a9565a52ba7389956a0e2a5b399ad71c4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_mcclintock, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=squid)
Dec 06 10:18:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-69dae2a1601b32265f8695a61f0f337c3db44da9210ea94582c637d914912b8a-merged.mount: Deactivated successfully.
Dec 06 10:18:19 compute-0 podman[287927]: 2025-12-06 10:18:19.923766278 +0000 UTC m=+0.574639153 container remove d92be5a799bdc4d3021465192e45358a9565a52ba7389956a0e2a5b399ad71c4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_mcclintock, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec 06 10:18:19 compute-0 systemd[1]: libpod-conmon-d92be5a799bdc4d3021465192e45358a9565a52ba7389956a0e2a5b399ad71c4.scope: Deactivated successfully.
Dec 06 10:18:19 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1118: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:18:19 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:18:20 compute-0 sudo[287820]: pam_unix(sudo:session): session closed for user root
Dec 06 10:18:20 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:18:20 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 10:18:20 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:18:20.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 10:18:20 compute-0 sudo[287970]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 10:18:20 compute-0 sudo[287970]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:18:20 compute-0 sudo[287970]: pam_unix(sudo:session): session closed for user root
Dec 06 10:18:20 compute-0 ceph-mon[74327]: pgmap v1118: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:18:20 compute-0 sudo[287995]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 5ecd3f74-dade-5fc4-92ce-8950ae424258 -- lvm list --format json
Dec 06 10:18:20 compute-0 sudo[287995]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:18:20 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:18:20 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:18:20 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:18:20.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:18:20 compute-0 podman[288060]: 2025-12-06 10:18:20.586553999 +0000 UTC m=+0.043638770 container create 3974cce4574f608203d1356dedf8acc05331ac8998742d8f8340fc13fca5794e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_hodgkin, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 10:18:20 compute-0 systemd[1]: Started libpod-conmon-3974cce4574f608203d1356dedf8acc05331ac8998742d8f8340fc13fca5794e.scope.
Dec 06 10:18:20 compute-0 podman[288060]: 2025-12-06 10:18:20.57045281 +0000 UTC m=+0.027537601 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 06 10:18:20 compute-0 systemd[1]: Started libcrun container.
Dec 06 10:18:20 compute-0 podman[288060]: 2025-12-06 10:18:20.686980142 +0000 UTC m=+0.144064913 container init 3974cce4574f608203d1356dedf8acc05331ac8998742d8f8340fc13fca5794e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_hodgkin, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, io.buildah.version=1.40.1)
Dec 06 10:18:20 compute-0 podman[288060]: 2025-12-06 10:18:20.69681689 +0000 UTC m=+0.153901661 container start 3974cce4574f608203d1356dedf8acc05331ac8998742d8f8340fc13fca5794e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_hodgkin, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS)
Dec 06 10:18:20 compute-0 podman[288060]: 2025-12-06 10:18:20.700047227 +0000 UTC m=+0.157132018 container attach 3974cce4574f608203d1356dedf8acc05331ac8998742d8f8340fc13fca5794e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_hodgkin, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 10:18:20 compute-0 youthful_hodgkin[288076]: 167 167
Dec 06 10:18:20 compute-0 systemd[1]: libpod-3974cce4574f608203d1356dedf8acc05331ac8998742d8f8340fc13fca5794e.scope: Deactivated successfully.
Dec 06 10:18:20 compute-0 podman[288060]: 2025-12-06 10:18:20.703606925 +0000 UTC m=+0.160691736 container died 3974cce4574f608203d1356dedf8acc05331ac8998742d8f8340fc13fca5794e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_hodgkin, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Dec 06 10:18:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-5da255715490b723850726ff8625e1f6bfc8098edd50011c7009796758420be7-merged.mount: Deactivated successfully.
Dec 06 10:18:20 compute-0 podman[288060]: 2025-12-06 10:18:20.761197622 +0000 UTC m=+0.218282393 container remove 3974cce4574f608203d1356dedf8acc05331ac8998742d8f8340fc13fca5794e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_hodgkin, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 10:18:20 compute-0 systemd[1]: libpod-conmon-3974cce4574f608203d1356dedf8acc05331ac8998742d8f8340fc13fca5794e.scope: Deactivated successfully.
Dec 06 10:18:20 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:18:20] "GET /metrics HTTP/1.1" 200 48459 "" "Prometheus/2.51.0"
Dec 06 10:18:20 compute-0 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:18:20] "GET /metrics HTTP/1.1" 200 48459 "" "Prometheus/2.51.0"
Dec 06 10:18:20 compute-0 podman[288101]: 2025-12-06 10:18:20.903316591 +0000 UTC m=+0.039074265 container create 88eb143389eb58fe16bba326ee79c10338b7245b47a0ce316a93125b0eba940e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_benz, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 06 10:18:20 compute-0 systemd[1]: Started libpod-conmon-88eb143389eb58fe16bba326ee79c10338b7245b47a0ce316a93125b0eba940e.scope.
Dec 06 10:18:20 compute-0 systemd[1]: Started libcrun container.
Dec 06 10:18:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23fd4d5173f543533afe2aa1face20c82360960de62ddd90d490e2a5df044071/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 10:18:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23fd4d5173f543533afe2aa1face20c82360960de62ddd90d490e2a5df044071/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 10:18:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23fd4d5173f543533afe2aa1face20c82360960de62ddd90d490e2a5df044071/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 10:18:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23fd4d5173f543533afe2aa1face20c82360960de62ddd90d490e2a5df044071/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 10:18:20 compute-0 podman[288101]: 2025-12-06 10:18:20.887048308 +0000 UTC m=+0.022806012 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 06 10:18:20 compute-0 podman[288101]: 2025-12-06 10:18:20.983821232 +0000 UTC m=+0.119578936 container init 88eb143389eb58fe16bba326ee79c10338b7245b47a0ce316a93125b0eba940e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_benz, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1)
Dec 06 10:18:21 compute-0 podman[288101]: 2025-12-06 10:18:21.002640075 +0000 UTC m=+0.138397749 container start 88eb143389eb58fe16bba326ee79c10338b7245b47a0ce316a93125b0eba940e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_benz, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0)
Dec 06 10:18:21 compute-0 podman[288101]: 2025-12-06 10:18:21.018200818 +0000 UTC m=+0.153958522 container attach 88eb143389eb58fe16bba326ee79c10338b7245b47a0ce316a93125b0eba940e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_benz, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Dec 06 10:18:21 compute-0 competent_benz[288117]: {
Dec 06 10:18:21 compute-0 competent_benz[288117]:     "1": [
Dec 06 10:18:21 compute-0 competent_benz[288117]:         {
Dec 06 10:18:21 compute-0 competent_benz[288117]:             "devices": [
Dec 06 10:18:21 compute-0 competent_benz[288117]:                 "/dev/loop3"
Dec 06 10:18:21 compute-0 competent_benz[288117]:             ],
Dec 06 10:18:21 compute-0 competent_benz[288117]:             "lv_name": "ceph_lv0",
Dec 06 10:18:21 compute-0 competent_benz[288117]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 10:18:21 compute-0 competent_benz[288117]:             "lv_size": "21470642176",
Dec 06 10:18:21 compute-0 competent_benz[288117]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=a55pcS-v3Vt-CJ4W-fqCV-Baze-9VlY-jG9prS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=5ecd3f74-dade-5fc4-92ce-8950ae424258,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=7899c4d8-edb4-4836-b838-c4aa702ad7af,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 06 10:18:21 compute-0 competent_benz[288117]:             "lv_uuid": "a55pcS-v3Vt-CJ4W-fqCV-Baze-9VlY-jG9prS",
Dec 06 10:18:21 compute-0 competent_benz[288117]:             "name": "ceph_lv0",
Dec 06 10:18:21 compute-0 competent_benz[288117]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 10:18:21 compute-0 competent_benz[288117]:             "tags": {
Dec 06 10:18:21 compute-0 competent_benz[288117]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 06 10:18:21 compute-0 competent_benz[288117]:                 "ceph.block_uuid": "a55pcS-v3Vt-CJ4W-fqCV-Baze-9VlY-jG9prS",
Dec 06 10:18:21 compute-0 competent_benz[288117]:                 "ceph.cephx_lockbox_secret": "",
Dec 06 10:18:21 compute-0 competent_benz[288117]:                 "ceph.cluster_fsid": "5ecd3f74-dade-5fc4-92ce-8950ae424258",
Dec 06 10:18:21 compute-0 competent_benz[288117]:                 "ceph.cluster_name": "ceph",
Dec 06 10:18:21 compute-0 competent_benz[288117]:                 "ceph.crush_device_class": "",
Dec 06 10:18:21 compute-0 competent_benz[288117]:                 "ceph.encrypted": "0",
Dec 06 10:18:21 compute-0 competent_benz[288117]:                 "ceph.osd_fsid": "7899c4d8-edb4-4836-b838-c4aa702ad7af",
Dec 06 10:18:21 compute-0 competent_benz[288117]:                 "ceph.osd_id": "1",
Dec 06 10:18:21 compute-0 competent_benz[288117]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 06 10:18:21 compute-0 competent_benz[288117]:                 "ceph.type": "block",
Dec 06 10:18:21 compute-0 competent_benz[288117]:                 "ceph.vdo": "0",
Dec 06 10:18:21 compute-0 competent_benz[288117]:                 "ceph.with_tpm": "0"
Dec 06 10:18:21 compute-0 competent_benz[288117]:             },
Dec 06 10:18:21 compute-0 competent_benz[288117]:             "type": "block",
Dec 06 10:18:21 compute-0 competent_benz[288117]:             "vg_name": "ceph_vg0"
Dec 06 10:18:21 compute-0 competent_benz[288117]:         }
Dec 06 10:18:21 compute-0 competent_benz[288117]:     ]
Dec 06 10:18:21 compute-0 competent_benz[288117]: }
Dec 06 10:18:21 compute-0 systemd[1]: libpod-88eb143389eb58fe16bba326ee79c10338b7245b47a0ce316a93125b0eba940e.scope: Deactivated successfully.
Dec 06 10:18:21 compute-0 podman[288101]: 2025-12-06 10:18:21.249971717 +0000 UTC m=+0.385729391 container died 88eb143389eb58fe16bba326ee79c10338b7245b47a0ce316a93125b0eba940e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_benz, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid)
Dec 06 10:18:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-23fd4d5173f543533afe2aa1face20c82360960de62ddd90d490e2a5df044071-merged.mount: Deactivated successfully.
Dec 06 10:18:21 compute-0 podman[288101]: 2025-12-06 10:18:21.295472735 +0000 UTC m=+0.431230409 container remove 88eb143389eb58fe16bba326ee79c10338b7245b47a0ce316a93125b0eba940e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_benz, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Dec 06 10:18:21 compute-0 systemd[1]: libpod-conmon-88eb143389eb58fe16bba326ee79c10338b7245b47a0ce316a93125b0eba940e.scope: Deactivated successfully.
Dec 06 10:18:21 compute-0 sudo[287995]: pam_unix(sudo:session): session closed for user root
Dec 06 10:18:21 compute-0 sudo[288138]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 10:18:21 compute-0 sudo[288138]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:18:21 compute-0 sudo[288138]: pam_unix(sudo:session): session closed for user root
Dec 06 10:18:21 compute-0 sudo[288169]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 5ecd3f74-dade-5fc4-92ce-8950ae424258 -- raw list --format json
Dec 06 10:18:21 compute-0 sudo[288169]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:18:21 compute-0 podman[288162]: 2025-12-06 10:18:21.535452008 +0000 UTC m=+0.106346376 container health_status ed08b04f63d3695f0aa2f04fcc706897af4aa24f286fa3cd10a4323ee2c868ab (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true)
Dec 06 10:18:21 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1119: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:18:21 compute-0 podman[288257]: 2025-12-06 10:18:21.973364047 +0000 UTC m=+0.051836211 container create f12eeeeef2475603f61be5baf49044542cdc8df0d7d3990c7adb52bdad0c4187 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_raman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 10:18:22 compute-0 nova_compute[254819]: 2025-12-06 10:18:22.019 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:18:22 compute-0 systemd[1]: Started libpod-conmon-f12eeeeef2475603f61be5baf49044542cdc8df0d7d3990c7adb52bdad0c4187.scope.
Dec 06 10:18:22 compute-0 podman[288257]: 2025-12-06 10:18:21.956081977 +0000 UTC m=+0.034554141 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 06 10:18:22 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:18:22 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:18:22 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:18:22.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:18:22 compute-0 systemd[1]: Started libcrun container.
Dec 06 10:18:22 compute-0 podman[288257]: 2025-12-06 10:18:22.074004857 +0000 UTC m=+0.152477021 container init f12eeeeef2475603f61be5baf49044542cdc8df0d7d3990c7adb52bdad0c4187 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_raman, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 10:18:22 compute-0 podman[288257]: 2025-12-06 10:18:22.081725897 +0000 UTC m=+0.160198071 container start f12eeeeef2475603f61be5baf49044542cdc8df0d7d3990c7adb52bdad0c4187 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_raman, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Dec 06 10:18:22 compute-0 podman[288257]: 2025-12-06 10:18:22.085313114 +0000 UTC m=+0.163785348 container attach f12eeeeef2475603f61be5baf49044542cdc8df0d7d3990c7adb52bdad0c4187 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_raman, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS)
Dec 06 10:18:22 compute-0 charming_raman[288273]: 167 167
Dec 06 10:18:22 compute-0 systemd[1]: libpod-f12eeeeef2475603f61be5baf49044542cdc8df0d7d3990c7adb52bdad0c4187.scope: Deactivated successfully.
Dec 06 10:18:22 compute-0 podman[288257]: 2025-12-06 10:18:22.089803057 +0000 UTC m=+0.168275211 container died f12eeeeef2475603f61be5baf49044542cdc8df0d7d3990c7adb52bdad0c4187 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_raman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 10:18:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-bc0118a1d73e5419e3882c8d3722787ced6ca8f7002de009c6bafb8becda8165-merged.mount: Deactivated successfully.
Dec 06 10:18:22 compute-0 podman[288257]: 2025-12-06 10:18:22.133084105 +0000 UTC m=+0.211556279 container remove f12eeeeef2475603f61be5baf49044542cdc8df0d7d3990c7adb52bdad0c4187 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_raman, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 06 10:18:22 compute-0 systemd[1]: libpod-conmon-f12eeeeef2475603f61be5baf49044542cdc8df0d7d3990c7adb52bdad0c4187.scope: Deactivated successfully.
Dec 06 10:18:22 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:18:22 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:18:22 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:18:22.211 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:18:22 compute-0 podman[288296]: 2025-12-06 10:18:22.366730664 +0000 UTC m=+0.052350165 container create 07ad62c7b5d0ab7f4e3f4d0aaebfd18857e67d069050b76a10f9c5476f6139f1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_brown, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 06 10:18:22 compute-0 systemd[1]: Started libpod-conmon-07ad62c7b5d0ab7f4e3f4d0aaebfd18857e67d069050b76a10f9c5476f6139f1.scope.
Dec 06 10:18:22 compute-0 systemd[1]: Started libcrun container.
Dec 06 10:18:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2534fb13911d264ce932eda0ccffe27b6e0d8488ddeddbebe3e955db2f3a36d1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 10:18:22 compute-0 podman[288296]: 2025-12-06 10:18:22.347555473 +0000 UTC m=+0.033175014 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 06 10:18:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2534fb13911d264ce932eda0ccffe27b6e0d8488ddeddbebe3e955db2f3a36d1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 10:18:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2534fb13911d264ce932eda0ccffe27b6e0d8488ddeddbebe3e955db2f3a36d1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 10:18:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2534fb13911d264ce932eda0ccffe27b6e0d8488ddeddbebe3e955db2f3a36d1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 10:18:22 compute-0 podman[288296]: 2025-12-06 10:18:22.463511759 +0000 UTC m=+0.149131280 container init 07ad62c7b5d0ab7f4e3f4d0aaebfd18857e67d069050b76a10f9c5476f6139f1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_brown, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, ceph=True)
Dec 06 10:18:22 compute-0 podman[288296]: 2025-12-06 10:18:22.471608249 +0000 UTC m=+0.157227740 container start 07ad62c7b5d0ab7f4e3f4d0aaebfd18857e67d069050b76a10f9c5476f6139f1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_brown, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 10:18:22 compute-0 podman[288296]: 2025-12-06 10:18:22.474766275 +0000 UTC m=+0.160385776 container attach 07ad62c7b5d0ab7f4e3f4d0aaebfd18857e67d069050b76a10f9c5476f6139f1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_brown, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS)
Dec 06 10:18:23 compute-0 ceph-mon[74327]: pgmap v1119: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:18:23 compute-0 lvm[288386]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 06 10:18:23 compute-0 lvm[288386]: VG ceph_vg0 finished
Dec 06 10:18:23 compute-0 kind_brown[288312]: {}
Dec 06 10:18:23 compute-0 systemd[1]: libpod-07ad62c7b5d0ab7f4e3f4d0aaebfd18857e67d069050b76a10f9c5476f6139f1.scope: Deactivated successfully.
Dec 06 10:18:23 compute-0 systemd[1]: libpod-07ad62c7b5d0ab7f4e3f4d0aaebfd18857e67d069050b76a10f9c5476f6139f1.scope: Consumed 1.169s CPU time.
Dec 06 10:18:23 compute-0 podman[288296]: 2025-12-06 10:18:23.21562003 +0000 UTC m=+0.901239531 container died 07ad62c7b5d0ab7f4e3f4d0aaebfd18857e67d069050b76a10f9c5476f6139f1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_brown, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 10:18:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-2534fb13911d264ce932eda0ccffe27b6e0d8488ddeddbebe3e955db2f3a36d1-merged.mount: Deactivated successfully.
Dec 06 10:18:23 compute-0 podman[288296]: 2025-12-06 10:18:23.269142838 +0000 UTC m=+0.954762369 container remove 07ad62c7b5d0ab7f4e3f4d0aaebfd18857e67d069050b76a10f9c5476f6139f1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_brown, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1)
Dec 06 10:18:23 compute-0 systemd[1]: libpod-conmon-07ad62c7b5d0ab7f4e3f4d0aaebfd18857e67d069050b76a10f9c5476f6139f1.scope: Deactivated successfully.
Dec 06 10:18:23 compute-0 sudo[288169]: pam_unix(sudo:session): session closed for user root
Dec 06 10:18:23 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 06 10:18:23 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 10:18:23 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 06 10:18:23 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 10:18:23 compute-0 sudo[288402]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 10:18:23 compute-0 sudo[288402]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:18:23 compute-0 sudo[288402]: pam_unix(sudo:session): session closed for user root
Dec 06 10:18:23 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1120: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec 06 10:18:23 compute-0 ceph-mgr[74618]: [balancer INFO root] Optimize plan auto_2025-12-06_10:18:23
Dec 06 10:18:23 compute-0 ceph-mgr[74618]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 06 10:18:23 compute-0 ceph-mgr[74618]: [balancer INFO root] do_upmap
Dec 06 10:18:23 compute-0 ceph-mgr[74618]: [balancer INFO root] pools ['default.rgw.control', 'backups', '.mgr', '.rgw.root', 'images', '.nfs', 'default.rgw.log', 'cephfs.cephfs.meta', 'default.rgw.meta', 'volumes', 'cephfs.cephfs.data', 'vms']
Dec 06 10:18:23 compute-0 ceph-mgr[74618]: [balancer INFO root] prepared 0/10 upmap changes
Dec 06 10:18:23 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 06 10:18:23 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 10:18:24 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 10:18:24 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 10:18:24 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 10:18:24 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 10:18:24 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 10:18:24 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 10:18:24 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:18:24 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:18:24 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:18:24.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:18:24 compute-0 nova_compute[254819]: 2025-12-06 10:18:24.097 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:18:24 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:18:24 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:18:24 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:18:24.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:18:24 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 10:18:24 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 10:18:24 compute-0 ceph-mon[74327]: pgmap v1120: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec 06 10:18:24 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 10:18:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] _maybe_adjust
Dec 06 10:18:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:18:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 06 10:18:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:18:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec 06 10:18:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:18:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 10:18:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:18:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 10:18:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:18:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Dec 06 10:18:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:18:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Dec 06 10:18:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:18:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 10:18:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:18:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec 06 10:18:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:18:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 06 10:18:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:18:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec 06 10:18:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:18:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 10:18:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:18:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 06 10:18:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 06 10:18:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 10:18:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 10:18:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 10:18:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 10:18:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 06 10:18:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 10:18:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 10:18:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 10:18:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 10:18:24 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:18:25 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1121: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:18:26 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:18:26 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:18:26 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:18:26.059 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:18:26 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:18:26 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:18:26 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:18:26.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:18:26 compute-0 ceph-mon[74327]: pgmap v1121: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:18:26 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-crash-compute-0[79850]: ERROR:ceph-crash:Error scraping /var/lib/ceph/crash: [Errno 13] Permission denied: '/var/lib/ceph/crash'
Dec 06 10:18:27 compute-0 nova_compute[254819]: 2025-12-06 10:18:27.024 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:18:27 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:18:27.688Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 06 10:18:27 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1122: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec 06 10:18:28 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:18:28 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:18:28 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:18:28.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:18:28 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:18:28 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:18:28 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:18:28.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:18:28 compute-0 podman[288433]: 2025-12-06 10:18:28.444675843 +0000 UTC m=+0.069105702 container health_status ec1e66d2175087ad7a28a9a6b5a784e30128df3f5e268959d009e364cd5120f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Dec 06 10:18:29 compute-0 ceph-mon[74327]: pgmap v1122: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec 06 10:18:29 compute-0 nova_compute[254819]: 2025-12-06 10:18:29.101 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:18:29 compute-0 systemd[1]: systemd-timedated.service: Deactivated successfully.
Dec 06 10:18:29 compute-0 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Dec 06 10:18:29 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1123: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:18:29 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:18:30 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:18:30 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:18:30 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:18:30.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:18:30 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:18:30 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:18:30 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:18:30.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:18:30 compute-0 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:18:30] "GET /metrics HTTP/1.1" 200 48460 "" "Prometheus/2.51.0"
Dec 06 10:18:30 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:18:30] "GET /metrics HTTP/1.1" 200 48460 "" "Prometheus/2.51.0"
Dec 06 10:18:31 compute-0 ceph-mon[74327]: pgmap v1123: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:18:31 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1124: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:18:32 compute-0 nova_compute[254819]: 2025-12-06 10:18:32.026 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:18:32 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:18:32 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 10:18:32 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:18:32.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 10:18:32 compute-0 ceph-mon[74327]: pgmap v1124: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:18:32 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:18:32 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:18:32 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:18:32.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:18:33 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1125: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec 06 10:18:34 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:18:34 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:18:34 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:18:34.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:18:34 compute-0 nova_compute[254819]: 2025-12-06 10:18:34.105 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:18:34 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:18:34 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:18:34 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:18:34.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:18:34 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:18:35 compute-0 ceph-mon[74327]: pgmap v1125: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec 06 10:18:35 compute-0 sudo[288464]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 10:18:35 compute-0 sudo[288464]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:18:35 compute-0 sudo[288464]: pam_unix(sudo:session): session closed for user root
Dec 06 10:18:35 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1126: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:18:36 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:18:36 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:18:36 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:18:36.070 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:18:36 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:18:36 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:18:36 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:18:36.234 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:18:37 compute-0 nova_compute[254819]: 2025-12-06 10:18:37.027 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:18:37 compute-0 ceph-mon[74327]: pgmap v1126: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:18:37 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:18:37.689Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 06 10:18:37 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1127: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec 06 10:18:38 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:18:38 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 10:18:38 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:18:38.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 10:18:38 compute-0 ceph-mon[74327]: pgmap v1127: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec 06 10:18:38 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:18:38 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:18:38 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:18:38.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:18:38 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 06 10:18:38 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 10:18:39 compute-0 nova_compute[254819]: 2025-12-06 10:18:39.109 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:18:39 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 10:18:39 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1128: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:18:39 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:18:40 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:18:40 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:18:40 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:18:40.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:18:40 compute-0 ceph-mon[74327]: pgmap v1128: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:18:40 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:18:40 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:18:40 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:18:40.240 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:18:40 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:18:40] "GET /metrics HTTP/1.1" 200 48460 "" "Prometheus/2.51.0"
Dec 06 10:18:40 compute-0 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:18:40] "GET /metrics HTTP/1.1" 200 48460 "" "Prometheus/2.51.0"
Dec 06 10:18:41 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1129: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:18:42 compute-0 nova_compute[254819]: 2025-12-06 10:18:42.027 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:18:42 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:18:42 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:18:42 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:18:42.077 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:18:42 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:18:42 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:18:42 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:18:42.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:18:43 compute-0 ceph-mon[74327]: pgmap v1129: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:18:43 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1130: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec 06 10:18:44 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:18:44 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:18:44 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:18:44.079 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:18:44 compute-0 nova_compute[254819]: 2025-12-06 10:18:44.112 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:18:44 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:18:44 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:18:44 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:18:44.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:18:44 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:18:45 compute-0 ceph-mon[74327]: pgmap v1130: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec 06 10:18:45 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1131: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:18:45 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 06 10:18:45 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3435933179' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 10:18:45 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 06 10:18:45 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3435933179' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 10:18:46 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:18:46 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:18:46 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:18:46.080 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:18:46 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:18:46 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:18:46 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:18:46.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:18:46 compute-0 ceph-mon[74327]: pgmap v1131: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:18:46 compute-0 ceph-mon[74327]: from='client.? 192.168.122.10:0/3435933179' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 10:18:46 compute-0 ceph-mon[74327]: from='client.? 192.168.122.10:0/3435933179' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 10:18:47 compute-0 nova_compute[254819]: 2025-12-06 10:18:47.031 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:18:47 compute-0 podman[288499]: 2025-12-06 10:18:47.622609249 +0000 UTC m=+0.065804953 container health_status a9cf33c1bf7891b3fdd9db4717ad5f3587fe9e5acf9171920c6d6334b70a502a (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, tcib_managed=true)
Dec 06 10:18:47 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:18:47.690Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 06 10:18:47 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1132: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec 06 10:18:48 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:18:48 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:18:48 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:18:48.083 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:18:48 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:18:48 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:18:48 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:18:48.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:18:48 compute-0 ceph-mon[74327]: pgmap v1132: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec 06 10:18:48 compute-0 nova_compute[254819]: 2025-12-06 10:18:48.748 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:18:48 compute-0 nova_compute[254819]: 2025-12-06 10:18:48.771 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:18:48 compute-0 nova_compute[254819]: 2025-12-06 10:18:48.772 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:18:48 compute-0 nova_compute[254819]: 2025-12-06 10:18:48.772 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:18:48 compute-0 nova_compute[254819]: 2025-12-06 10:18:48.772 254824 DEBUG nova.compute.resource_tracker [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 10:18:48 compute-0 nova_compute[254819]: 2025-12-06 10:18:48.772 254824 DEBUG oslo_concurrency.processutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 10:18:49 compute-0 nova_compute[254819]: 2025-12-06 10:18:49.150 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:18:49 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 06 10:18:49 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/792511093' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 10:18:49 compute-0 nova_compute[254819]: 2025-12-06 10:18:49.211 254824 DEBUG oslo_concurrency.processutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.439s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 10:18:49 compute-0 ceph-mon[74327]: from='client.? 192.168.122.100:0/792511093' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 10:18:49 compute-0 nova_compute[254819]: 2025-12-06 10:18:49.432 254824 WARNING nova.virt.libvirt.driver [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 10:18:49 compute-0 nova_compute[254819]: 2025-12-06 10:18:49.434 254824 DEBUG nova.compute.resource_tracker [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4352MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 06 10:18:49 compute-0 nova_compute[254819]: 2025-12-06 10:18:49.435 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:18:49 compute-0 nova_compute[254819]: 2025-12-06 10:18:49.435 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:18:49 compute-0 nova_compute[254819]: 2025-12-06 10:18:49.502 254824 DEBUG nova.compute.resource_tracker [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 10:18:49 compute-0 nova_compute[254819]: 2025-12-06 10:18:49.503 254824 DEBUG nova.compute.resource_tracker [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 06 10:18:49 compute-0 nova_compute[254819]: 2025-12-06 10:18:49.522 254824 DEBUG oslo_concurrency.processutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 10:18:49 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1133: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:18:49 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:18:49 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 06 10:18:49 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1909379044' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 10:18:49 compute-0 nova_compute[254819]: 2025-12-06 10:18:49.996 254824 DEBUG oslo_concurrency.processutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.474s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 10:18:50 compute-0 nova_compute[254819]: 2025-12-06 10:18:50.002 254824 DEBUG nova.compute.provider_tree [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Inventory has not changed in ProviderTree for provider: 06a9c7d1-c74c-47ea-9e97-16acfab6aa88 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 10:18:50 compute-0 nova_compute[254819]: 2025-12-06 10:18:50.018 254824 DEBUG nova.scheduler.client.report [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Inventory has not changed for provider 06a9c7d1-c74c-47ea-9e97-16acfab6aa88 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 10:18:50 compute-0 nova_compute[254819]: 2025-12-06 10:18:50.020 254824 DEBUG nova.compute.resource_tracker [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 10:18:50 compute-0 nova_compute[254819]: 2025-12-06 10:18:50.020 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.585s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:18:50 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:18:50 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:18:50 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:18:50.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:18:50 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:18:50 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:18:50 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:18:50.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:18:50 compute-0 ceph-mon[74327]: from='client.? 192.168.122.101:0/2553381759' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 10:18:50 compute-0 ceph-mon[74327]: pgmap v1133: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:18:50 compute-0 ceph-mon[74327]: from='client.? 192.168.122.100:0/1909379044' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 10:18:50 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:18:50] "GET /metrics HTTP/1.1" 200 48460 "" "Prometheus/2.51.0"
Dec 06 10:18:50 compute-0 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:18:50] "GET /metrics HTTP/1.1" 200 48460 "" "Prometheus/2.51.0"
Dec 06 10:18:51 compute-0 nova_compute[254819]: 2025-12-06 10:18:51.021 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:18:51 compute-0 ceph-mon[74327]: from='client.? 192.168.122.101:0/748120430' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 10:18:51 compute-0 nova_compute[254819]: 2025-12-06 10:18:51.748 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:18:51 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1134: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:18:52 compute-0 nova_compute[254819]: 2025-12-06 10:18:52.031 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:18:52 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:18:52 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:18:52 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:18:52.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:18:52 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:18:52 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:18:52 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:18:52.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:18:52 compute-0 ceph-mon[74327]: pgmap v1134: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:18:52 compute-0 podman[288570]: 2025-12-06 10:18:52.499243919 +0000 UTC m=+0.122006992 container health_status ed08b04f63d3695f0aa2f04fcc706897af4aa24f286fa3cd10a4323ee2c868ab (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 06 10:18:52 compute-0 nova_compute[254819]: 2025-12-06 10:18:52.749 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:18:52 compute-0 nova_compute[254819]: 2025-12-06 10:18:52.749 254824 DEBUG nova.compute.manager [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 06 10:18:52 compute-0 nova_compute[254819]: 2025-12-06 10:18:52.750 254824 DEBUG nova.compute.manager [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 06 10:18:52 compute-0 nova_compute[254819]: 2025-12-06 10:18:52.779 254824 DEBUG nova.compute.manager [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 06 10:18:53 compute-0 nova_compute[254819]: 2025-12-06 10:18:53.749 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:18:53 compute-0 nova_compute[254819]: 2025-12-06 10:18:53.749 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:18:53 compute-0 nova_compute[254819]: 2025-12-06 10:18:53.749 254824 DEBUG nova.compute.manager [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 06 10:18:53 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1135: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec 06 10:18:53 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 06 10:18:53 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 10:18:54 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 10:18:54 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 10:18:54 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 10:18:54 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 10:18:54 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 10:18:54 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 10:18:54 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 10:18:54 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:18:54 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:18:54 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:18:54.088 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:18:54 compute-0 nova_compute[254819]: 2025-12-06 10:18:54.152 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:18:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:18:54.248 162267 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:18:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:18:54.248 162267 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:18:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:18:54.248 162267 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:18:54 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:18:54 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:18:54 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:18:54.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:18:54 compute-0 nova_compute[254819]: 2025-12-06 10:18:54.749 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:18:54 compute-0 nova_compute[254819]: 2025-12-06 10:18:54.750 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:18:54 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:18:55 compute-0 ceph-mon[74327]: pgmap v1135: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec 06 10:18:55 compute-0 nova_compute[254819]: 2025-12-06 10:18:55.743 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:18:55 compute-0 nova_compute[254819]: 2025-12-06 10:18:55.770 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:18:55 compute-0 sudo[288601]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 10:18:55 compute-0 sudo[288601]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:18:55 compute-0 sudo[288601]: pam_unix(sudo:session): session closed for user root
Dec 06 10:18:55 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1136: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:18:56 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:18:56 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:18:56 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:18:56.090 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:18:56 compute-0 sudo[279764]: pam_unix(sudo:session): session closed for user root
Dec 06 10:18:56 compute-0 sshd-session[279758]: Received disconnect from 192.168.122.10 port 44722:11: disconnected by user
Dec 06 10:18:56 compute-0 sshd-session[279758]: Disconnected from user zuul 192.168.122.10 port 44722
Dec 06 10:18:56 compute-0 sshd-session[279739]: pam_unix(sshd:session): session closed for user zuul
Dec 06 10:18:56 compute-0 systemd[1]: session-56.scope: Deactivated successfully.
Dec 06 10:18:56 compute-0 systemd[1]: session-56.scope: Consumed 3min 1.487s CPU time, 884.8M memory peak, read 366.2M from disk, written 81.6M to disk.
Dec 06 10:18:56 compute-0 systemd-logind[795]: Session 56 logged out. Waiting for processes to exit.
Dec 06 10:18:56 compute-0 systemd-logind[795]: Removed session 56.
Dec 06 10:18:56 compute-0 ceph-mon[74327]: pgmap v1136: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:18:56 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:18:56 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:18:56 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:18:56.263 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:18:56 compute-0 sshd-session[288626]: Accepted publickey for zuul from 192.168.122.10 port 46802 ssh2: ECDSA SHA256:r1j7aLsKAM+XxDNbzEU5vWGpGNCOaIBwc7FZdATPttA
Dec 06 10:18:56 compute-0 systemd-logind[795]: New session 57 of user zuul.
Dec 06 10:18:56 compute-0 systemd[1]: Started Session 57 of User zuul.
Dec 06 10:18:56 compute-0 sshd-session[288626]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 06 10:18:56 compute-0 sudo[288630]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/cat /var/tmp/sos-osp/sosreport-compute-0-2025-12-06-azixeac.tar.xz
Dec 06 10:18:56 compute-0 sudo[288630]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 10:18:56 compute-0 sudo[288630]: pam_unix(sudo:session): session closed for user root
Dec 06 10:18:56 compute-0 sshd-session[288629]: Received disconnect from 192.168.122.10 port 46802:11: disconnected by user
Dec 06 10:18:56 compute-0 sshd-session[288629]: Disconnected from user zuul 192.168.122.10 port 46802
Dec 06 10:18:56 compute-0 sshd-session[288626]: pam_unix(sshd:session): session closed for user zuul
Dec 06 10:18:56 compute-0 systemd[1]: session-57.scope: Deactivated successfully.
Dec 06 10:18:56 compute-0 systemd-logind[795]: Session 57 logged out. Waiting for processes to exit.
Dec 06 10:18:56 compute-0 systemd-logind[795]: Removed session 57.
Dec 06 10:18:56 compute-0 sshd-session[288655]: Accepted publickey for zuul from 192.168.122.10 port 46804 ssh2: ECDSA SHA256:r1j7aLsKAM+XxDNbzEU5vWGpGNCOaIBwc7FZdATPttA
Dec 06 10:18:56 compute-0 systemd-logind[795]: New session 58 of user zuul.
Dec 06 10:18:56 compute-0 systemd[1]: Started Session 58 of User zuul.
Dec 06 10:18:56 compute-0 sshd-session[288655]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 06 10:18:56 compute-0 sudo[288659]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/rm -rf /var/tmp/sos-osp
Dec 06 10:18:56 compute-0 sudo[288659]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 10:18:56 compute-0 sudo[288659]: pam_unix(sudo:session): session closed for user root
Dec 06 10:18:56 compute-0 sshd-session[288658]: Received disconnect from 192.168.122.10 port 46804:11: disconnected by user
Dec 06 10:18:56 compute-0 sshd-session[288658]: Disconnected from user zuul 192.168.122.10 port 46804
Dec 06 10:18:56 compute-0 sshd-session[288655]: pam_unix(sshd:session): session closed for user zuul
Dec 06 10:18:56 compute-0 systemd[1]: session-58.scope: Deactivated successfully.
Dec 06 10:18:56 compute-0 systemd-logind[795]: Session 58 logged out. Waiting for processes to exit.
Dec 06 10:18:56 compute-0 systemd-logind[795]: Removed session 58.
Dec 06 10:18:57 compute-0 nova_compute[254819]: 2025-12-06 10:18:57.034 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:18:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:18:57.691Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 06 10:18:57 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1137: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec 06 10:18:58 compute-0 ceph-mon[74327]: from='client.? 192.168.122.102:0/3649759905' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 10:18:58 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:18:58 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:18:58 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:18:58.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:18:58 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:18:58 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:18:58 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:18:58.267 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:18:59 compute-0 ceph-mon[74327]: pgmap v1137: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec 06 10:18:59 compute-0 ceph-mon[74327]: from='client.? 192.168.122.102:0/3542070725' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 10:18:59 compute-0 nova_compute[254819]: 2025-12-06 10:18:59.156 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:18:59 compute-0 podman[288686]: 2025-12-06 10:18:59.47955284 +0000 UTC m=+0.097155585 container health_status ec1e66d2175087ad7a28a9a6b5a784e30128df3f5e268959d009e364cd5120f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 06 10:18:59 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:18:59 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1138: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:19:00 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:19:00 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:19:00 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:19:00.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:19:00 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:19:00 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:19:00 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:19:00.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:19:00 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:19:00] "GET /metrics HTTP/1.1" 200 48459 "" "Prometheus/2.51.0"
Dec 06 10:19:00 compute-0 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:19:00] "GET /metrics HTTP/1.1" 200 48459 "" "Prometheus/2.51.0"
Dec 06 10:19:01 compute-0 ceph-mon[74327]: pgmap v1138: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:19:01 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1139: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:19:02 compute-0 nova_compute[254819]: 2025-12-06 10:19:02.036 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:19:02 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:19:02 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:19:02 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:19:02.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:19:02 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:19:02 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:19:02 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:19:02.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:19:03 compute-0 ceph-mon[74327]: pgmap v1139: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:19:03 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1140: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec 06 10:19:04 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:19:04 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:19:04 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:19:04.101 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:19:04 compute-0 ceph-mon[74327]: pgmap v1140: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec 06 10:19:04 compute-0 nova_compute[254819]: 2025-12-06 10:19:04.159 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:19:04 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:19:04 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:19:04 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:19:04.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:19:04 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:19:05 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1141: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:19:06 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:19:06 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:19:06 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:19:06.104 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:19:06 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:19:06 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:19:06 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:19:06.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:19:07 compute-0 nova_compute[254819]: 2025-12-06 10:19:07.040 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:19:07 compute-0 ceph-mon[74327]: pgmap v1141: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:19:07 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:19:07.692Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 06 10:19:07 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1142: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec 06 10:19:08 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:19:08 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:19:08 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:19:08.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:19:08 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:19:08 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:19:08 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:19:08.282 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:19:08 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 06 10:19:08 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 10:19:09 compute-0 ceph-mon[74327]: pgmap v1142: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec 06 10:19:09 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 10:19:09 compute-0 nova_compute[254819]: 2025-12-06 10:19:09.201 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:19:09 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:19:09 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1143: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:19:10 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:19:10 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:19:10 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:19:10.108 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:19:10 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:19:10 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:19:10 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:19:10.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:19:10 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:19:10] "GET /metrics HTTP/1.1" 200 48460 "" "Prometheus/2.51.0"
Dec 06 10:19:10 compute-0 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:19:10] "GET /metrics HTTP/1.1" 200 48460 "" "Prometheus/2.51.0"
Dec 06 10:19:11 compute-0 ceph-mon[74327]: pgmap v1143: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:19:11 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1144: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:19:12 compute-0 nova_compute[254819]: 2025-12-06 10:19:12.041 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:19:12 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:19:12 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:19:12 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:19:12.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:19:12 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:19:12 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:19:12 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:19:12.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:19:13 compute-0 ceph-mon[74327]: pgmap v1144: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:19:13 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1145: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec 06 10:19:14 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:19:14 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:19:14 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:19:14.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:19:14 compute-0 ceph-mon[74327]: pgmap v1145: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec 06 10:19:14 compute-0 nova_compute[254819]: 2025-12-06 10:19:14.202 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:19:14 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:19:14 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:19:14 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:19:14.291 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:19:14 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:19:15 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1146: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:19:16 compute-0 sudo[288721]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 10:19:16 compute-0 sudo[288721]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:19:16 compute-0 sudo[288721]: pam_unix(sudo:session): session closed for user root
Dec 06 10:19:16 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:19:16 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:19:16 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:19:16.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:19:16 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:19:16 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:19:16 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:19:16.294 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:19:17 compute-0 ceph-mon[74327]: pgmap v1146: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:19:17 compute-0 nova_compute[254819]: 2025-12-06 10:19:17.043 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:19:17 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:19:17.693Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec 06 10:19:17 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:19:17.693Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec 06 10:19:17 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1147: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec 06 10:19:18 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:19:18 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:19:18 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:19:18.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:19:18 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:19:18 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:19:18 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:19:18.297 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:19:18 compute-0 podman[288748]: 2025-12-06 10:19:18.458372078 +0000 UTC m=+0.074952421 container health_status a9cf33c1bf7891b3fdd9db4717ad5f3587fe9e5acf9171920c6d6334b70a502a (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, org.label-schema.build-date=20251125)
Dec 06 10:19:19 compute-0 ceph-mon[74327]: pgmap v1147: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec 06 10:19:19 compute-0 nova_compute[254819]: 2025-12-06 10:19:19.206 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:19:19 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:19:19 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1148: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:19:20 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:19:20 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:19:20 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:19:20.119 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:19:20 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:19:20 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:19:20 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:19:20.301 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:19:20 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:19:20] "GET /metrics HTTP/1.1" 200 48460 "" "Prometheus/2.51.0"
Dec 06 10:19:20 compute-0 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:19:20] "GET /metrics HTTP/1.1" 200 48460 "" "Prometheus/2.51.0"
Dec 06 10:19:21 compute-0 ceph-mon[74327]: pgmap v1148: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:19:21 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1149: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:19:22 compute-0 nova_compute[254819]: 2025-12-06 10:19:22.046 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:19:22 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:19:22 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:19:22 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:19:22.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:19:22 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:19:22 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:19:22 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:19:22.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:19:23 compute-0 ceph-mon[74327]: pgmap v1149: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:19:23 compute-0 podman[288775]: 2025-12-06 10:19:23.491590088 +0000 UTC m=+0.117208142 container health_status ed08b04f63d3695f0aa2f04fcc706897af4aa24f286fa3cd10a4323ee2c868ab (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3)
Dec 06 10:19:23 compute-0 sudo[288802]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 10:19:23 compute-0 sudo[288802]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:19:23 compute-0 sudo[288802]: pam_unix(sudo:session): session closed for user root
Dec 06 10:19:23 compute-0 sudo[288829]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 check-host
Dec 06 10:19:23 compute-0 sudo[288829]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:19:23 compute-0 ceph-mgr[74618]: [balancer INFO root] Optimize plan auto_2025-12-06_10:19:23
Dec 06 10:19:23 compute-0 ceph-mgr[74618]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 06 10:19:23 compute-0 ceph-mgr[74618]: [balancer INFO root] do_upmap
Dec 06 10:19:23 compute-0 ceph-mgr[74618]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'default.rgw.log', '.mgr', '.rgw.root', 'default.rgw.meta', '.nfs', 'default.rgw.control', 'backups', 'images', 'vms', 'volumes', 'cephfs.cephfs.data']
Dec 06 10:19:23 compute-0 ceph-mgr[74618]: [balancer INFO root] prepared 0/10 upmap changes
Dec 06 10:19:23 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1150: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec 06 10:19:23 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 06 10:19:23 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 10:19:24 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 10:19:24 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 10:19:24 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 10:19:24 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 10:19:24 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 10:19:24 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 10:19:24 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 10:19:24 compute-0 sudo[288829]: pam_unix(sudo:session): session closed for user root
Dec 06 10:19:24 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:19:24 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:19:24 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:19:24.122 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:19:24 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 06 10:19:24 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec 06 10:19:24 compute-0 nova_compute[254819]: 2025-12-06 10:19:24.207 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:19:24 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:19:24 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:19:24 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:19:24.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:19:24 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 10:19:24 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 06 10:19:24 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 10:19:24 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec 06 10:19:24 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 10:19:24 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 10:19:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] _maybe_adjust
Dec 06 10:19:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:19:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 06 10:19:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:19:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec 06 10:19:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:19:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 10:19:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:19:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 10:19:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:19:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Dec 06 10:19:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:19:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Dec 06 10:19:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:19:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 10:19:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:19:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec 06 10:19:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:19:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 06 10:19:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:19:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec 06 10:19:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:19:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 10:19:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:19:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 06 10:19:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 06 10:19:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 10:19:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 10:19:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 10:19:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 10:19:24 compute-0 sudo[288875]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 10:19:24 compute-0 sudo[288875]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:19:24 compute-0 sudo[288875]: pam_unix(sudo:session): session closed for user root
Dec 06 10:19:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 06 10:19:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 10:19:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 10:19:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 10:19:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 10:19:24 compute-0 sudo[288900]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Dec 06 10:19:24 compute-0 sudo[288900]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:19:24 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:19:25 compute-0 sudo[288900]: pam_unix(sudo:session): session closed for user root
Dec 06 10:19:25 compute-0 ceph-mon[74327]: pgmap v1150: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec 06 10:19:25 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 10:19:25 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 10:19:25 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 10:19:25 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 10:19:25 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 06 10:19:25 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 10:19:25 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 06 10:19:25 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 10:19:25 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 06 10:19:25 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 10:19:25 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Dec 06 10:19:25 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 10:19:25 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 06 10:19:25 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 10:19:25 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 06 10:19:25 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 10:19:25 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 06 10:19:25 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 10:19:25 compute-0 sudo[288958]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 10:19:25 compute-0 sudo[288958]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:19:25 compute-0 sudo[288958]: pam_unix(sudo:session): session closed for user root
Dec 06 10:19:25 compute-0 sudo[288983]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 5ecd3f74-dade-5fc4-92ce-8950ae424258 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Dec 06 10:19:25 compute-0 sudo[288983]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:19:25 compute-0 podman[289049]: 2025-12-06 10:19:25.709077437 +0000 UTC m=+0.046732973 container create 38de222821ded8b327e0b60a691ed76ee09d2e08b41564baf3876b7ab2b06925 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_wozniak, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0)
Dec 06 10:19:25 compute-0 systemd[1]: Started libpod-conmon-38de222821ded8b327e0b60a691ed76ee09d2e08b41564baf3876b7ab2b06925.scope.
Dec 06 10:19:25 compute-0 podman[289049]: 2025-12-06 10:19:25.686809191 +0000 UTC m=+0.024464717 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 06 10:19:25 compute-0 systemd[1]: Started libcrun container.
Dec 06 10:19:25 compute-0 podman[289049]: 2025-12-06 10:19:25.811089173 +0000 UTC m=+0.148744769 container init 38de222821ded8b327e0b60a691ed76ee09d2e08b41564baf3876b7ab2b06925 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_wozniak, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 10:19:25 compute-0 podman[289049]: 2025-12-06 10:19:25.819611605 +0000 UTC m=+0.157267121 container start 38de222821ded8b327e0b60a691ed76ee09d2e08b41564baf3876b7ab2b06925 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_wozniak, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 10:19:25 compute-0 podman[289049]: 2025-12-06 10:19:25.823152102 +0000 UTC m=+0.160807698 container attach 38de222821ded8b327e0b60a691ed76ee09d2e08b41564baf3876b7ab2b06925 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_wozniak, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default)
Dec 06 10:19:25 compute-0 heuristic_wozniak[289069]: 167 167
Dec 06 10:19:25 compute-0 systemd[1]: libpod-38de222821ded8b327e0b60a691ed76ee09d2e08b41564baf3876b7ab2b06925.scope: Deactivated successfully.
Dec 06 10:19:25 compute-0 podman[289049]: 2025-12-06 10:19:25.82674263 +0000 UTC m=+0.164398146 container died 38de222821ded8b327e0b60a691ed76ee09d2e08b41564baf3876b7ab2b06925 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_wozniak, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 10:19:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-708f2543786af9eaec1df215ddabbdf57f75f0487c8ca23a6e4f630c0c0499e7-merged.mount: Deactivated successfully.
Dec 06 10:19:25 compute-0 podman[289049]: 2025-12-06 10:19:25.879567757 +0000 UTC m=+0.217223263 container remove 38de222821ded8b327e0b60a691ed76ee09d2e08b41564baf3876b7ab2b06925 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_wozniak, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec 06 10:19:25 compute-0 systemd[1]: libpod-conmon-38de222821ded8b327e0b60a691ed76ee09d2e08b41564baf3876b7ab2b06925.scope: Deactivated successfully.
Dec 06 10:19:25 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1151: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:19:26 compute-0 podman[289096]: 2025-12-06 10:19:26.08577516 +0000 UTC m=+0.059770217 container create 195053a8ec4994243028cbe88710b0daf50bf29fc1d1276b6c123b85680572e8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_gauss, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325)
Dec 06 10:19:26 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 10:19:26 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 10:19:26 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 10:19:26 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 10:19:26 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 10:19:26 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 10:19:26 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 10:19:26 compute-0 ceph-mon[74327]: pgmap v1151: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:19:26 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:19:26 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:19:26 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:19:26.124 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:19:26 compute-0 systemd[1]: Started libpod-conmon-195053a8ec4994243028cbe88710b0daf50bf29fc1d1276b6c123b85680572e8.scope.
Dec 06 10:19:26 compute-0 systemd[1]: Started libcrun container.
Dec 06 10:19:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b9a90d04bc75a96c0b4a2f827cb6327819001ea22ca8b95b60dd4e8966b6e493/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 10:19:26 compute-0 podman[289096]: 2025-12-06 10:19:26.064920153 +0000 UTC m=+0.038915220 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 06 10:19:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b9a90d04bc75a96c0b4a2f827cb6327819001ea22ca8b95b60dd4e8966b6e493/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 10:19:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b9a90d04bc75a96c0b4a2f827cb6327819001ea22ca8b95b60dd4e8966b6e493/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 10:19:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b9a90d04bc75a96c0b4a2f827cb6327819001ea22ca8b95b60dd4e8966b6e493/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 10:19:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b9a90d04bc75a96c0b4a2f827cb6327819001ea22ca8b95b60dd4e8966b6e493/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 06 10:19:26 compute-0 podman[289096]: 2025-12-06 10:19:26.177725053 +0000 UTC m=+0.151720120 container init 195053a8ec4994243028cbe88710b0daf50bf29fc1d1276b6c123b85680572e8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_gauss, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Dec 06 10:19:26 compute-0 podman[289096]: 2025-12-06 10:19:26.184626951 +0000 UTC m=+0.158621998 container start 195053a8ec4994243028cbe88710b0daf50bf29fc1d1276b6c123b85680572e8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_gauss, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec 06 10:19:26 compute-0 podman[289096]: 2025-12-06 10:19:26.189800472 +0000 UTC m=+0.163795519 container attach 195053a8ec4994243028cbe88710b0daf50bf29fc1d1276b6c123b85680572e8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_gauss, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 10:19:26 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:19:26 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:19:26 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:19:26.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:19:26 compute-0 exciting_gauss[289113]: --> passed data devices: 0 physical, 1 LVM
Dec 06 10:19:26 compute-0 exciting_gauss[289113]: --> All data devices are unavailable
Dec 06 10:19:26 compute-0 systemd[1]: libpod-195053a8ec4994243028cbe88710b0daf50bf29fc1d1276b6c123b85680572e8.scope: Deactivated successfully.
Dec 06 10:19:26 compute-0 podman[289096]: 2025-12-06 10:19:26.540566629 +0000 UTC m=+0.514561696 container died 195053a8ec4994243028cbe88710b0daf50bf29fc1d1276b6c123b85680572e8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_gauss, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 10:19:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-b9a90d04bc75a96c0b4a2f827cb6327819001ea22ca8b95b60dd4e8966b6e493-merged.mount: Deactivated successfully.
Dec 06 10:19:26 compute-0 podman[289096]: 2025-12-06 10:19:26.590823937 +0000 UTC m=+0.564818994 container remove 195053a8ec4994243028cbe88710b0daf50bf29fc1d1276b6c123b85680572e8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_gauss, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 10:19:26 compute-0 systemd[1]: libpod-conmon-195053a8ec4994243028cbe88710b0daf50bf29fc1d1276b6c123b85680572e8.scope: Deactivated successfully.
Dec 06 10:19:26 compute-0 sudo[288983]: pam_unix(sudo:session): session closed for user root
Dec 06 10:19:26 compute-0 sudo[289143]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 10:19:26 compute-0 sudo[289143]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:19:26 compute-0 sudo[289143]: pam_unix(sudo:session): session closed for user root
Dec 06 10:19:26 compute-0 sudo[289168]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 5ecd3f74-dade-5fc4-92ce-8950ae424258 -- lvm list --format json
Dec 06 10:19:26 compute-0 sudo[289168]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:19:27 compute-0 nova_compute[254819]: 2025-12-06 10:19:27.049 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:19:27 compute-0 podman[289235]: 2025-12-06 10:19:27.264168766 +0000 UTC m=+0.064116626 container create 5e8c2b5f2d4734f6ee28b57c8e0811e82a414450656b3ef03b34e87002c8c0bb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_gagarin, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1)
Dec 06 10:19:27 compute-0 systemd[1]: Started libpod-conmon-5e8c2b5f2d4734f6ee28b57c8e0811e82a414450656b3ef03b34e87002c8c0bb.scope.
Dec 06 10:19:27 compute-0 podman[289235]: 2025-12-06 10:19:27.24227882 +0000 UTC m=+0.042226720 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 06 10:19:27 compute-0 systemd[1]: Started libcrun container.
Dec 06 10:19:27 compute-0 podman[289235]: 2025-12-06 10:19:27.363623883 +0000 UTC m=+0.163571833 container init 5e8c2b5f2d4734f6ee28b57c8e0811e82a414450656b3ef03b34e87002c8c0bb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_gagarin, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 10:19:27 compute-0 podman[289235]: 2025-12-06 10:19:27.373852422 +0000 UTC m=+0.173800282 container start 5e8c2b5f2d4734f6ee28b57c8e0811e82a414450656b3ef03b34e87002c8c0bb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_gagarin, org.label-schema.build-date=20250325, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 10:19:27 compute-0 podman[289235]: 2025-12-06 10:19:27.377925792 +0000 UTC m=+0.177873692 container attach 5e8c2b5f2d4734f6ee28b57c8e0811e82a414450656b3ef03b34e87002c8c0bb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_gagarin, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec 06 10:19:27 compute-0 inspiring_gagarin[289251]: 167 167
Dec 06 10:19:27 compute-0 systemd[1]: libpod-5e8c2b5f2d4734f6ee28b57c8e0811e82a414450656b3ef03b34e87002c8c0bb.scope: Deactivated successfully.
Dec 06 10:19:27 compute-0 podman[289235]: 2025-12-06 10:19:27.380073421 +0000 UTC m=+0.180021281 container died 5e8c2b5f2d4734f6ee28b57c8e0811e82a414450656b3ef03b34e87002c8c0bb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_gagarin, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Dec 06 10:19:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-7a0e5a292b5b6b0356ffea6f4997ea9ad726f4522161cb3915ab502cba15f013-merged.mount: Deactivated successfully.
Dec 06 10:19:27 compute-0 podman[289235]: 2025-12-06 10:19:27.430874373 +0000 UTC m=+0.230822243 container remove 5e8c2b5f2d4734f6ee28b57c8e0811e82a414450656b3ef03b34e87002c8c0bb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_gagarin, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Dec 06 10:19:27 compute-0 systemd[1]: libpod-conmon-5e8c2b5f2d4734f6ee28b57c8e0811e82a414450656b3ef03b34e87002c8c0bb.scope: Deactivated successfully.
Dec 06 10:19:27 compute-0 podman[289277]: 2025-12-06 10:19:27.612197189 +0000 UTC m=+0.041801059 container create c529d7f6d4968c0b90a81424a6ae0bd96d296c796a19ed27721a9a1b4cd55bf8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_dhawan, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 10:19:27 compute-0 systemd[1]: Started libpod-conmon-c529d7f6d4968c0b90a81424a6ae0bd96d296c796a19ed27721a9a1b4cd55bf8.scope.
Dec 06 10:19:27 compute-0 podman[289277]: 2025-12-06 10:19:27.593902171 +0000 UTC m=+0.023506061 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 06 10:19:27 compute-0 systemd[1]: Started libcrun container.
Dec 06 10:19:27 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:19:27.694Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec 06 10:19:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad1b268f1fb53a9c8ae8358fecb0063cf212216c6778e956b175671ff85adb8f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 10:19:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad1b268f1fb53a9c8ae8358fecb0063cf212216c6778e956b175671ff85adb8f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 10:19:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad1b268f1fb53a9c8ae8358fecb0063cf212216c6778e956b175671ff85adb8f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 10:19:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad1b268f1fb53a9c8ae8358fecb0063cf212216c6778e956b175671ff85adb8f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 10:19:27 compute-0 podman[289277]: 2025-12-06 10:19:27.711095141 +0000 UTC m=+0.140699041 container init c529d7f6d4968c0b90a81424a6ae0bd96d296c796a19ed27721a9a1b4cd55bf8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_dhawan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.schema-version=1.0, ceph=True)
Dec 06 10:19:27 compute-0 podman[289277]: 2025-12-06 10:19:27.718485562 +0000 UTC m=+0.148089442 container start c529d7f6d4968c0b90a81424a6ae0bd96d296c796a19ed27721a9a1b4cd55bf8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_dhawan, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 10:19:27 compute-0 podman[289277]: 2025-12-06 10:19:27.721765891 +0000 UTC m=+0.151369801 container attach c529d7f6d4968c0b90a81424a6ae0bd96d296c796a19ed27721a9a1b4cd55bf8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_dhawan, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 10:19:27 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1152: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec 06 10:19:28 compute-0 wizardly_dhawan[289293]: {
Dec 06 10:19:28 compute-0 wizardly_dhawan[289293]:     "1": [
Dec 06 10:19:28 compute-0 wizardly_dhawan[289293]:         {
Dec 06 10:19:28 compute-0 wizardly_dhawan[289293]:             "devices": [
Dec 06 10:19:28 compute-0 wizardly_dhawan[289293]:                 "/dev/loop3"
Dec 06 10:19:28 compute-0 wizardly_dhawan[289293]:             ],
Dec 06 10:19:28 compute-0 wizardly_dhawan[289293]:             "lv_name": "ceph_lv0",
Dec 06 10:19:28 compute-0 wizardly_dhawan[289293]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 10:19:28 compute-0 wizardly_dhawan[289293]:             "lv_size": "21470642176",
Dec 06 10:19:28 compute-0 wizardly_dhawan[289293]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=a55pcS-v3Vt-CJ4W-fqCV-Baze-9VlY-jG9prS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=5ecd3f74-dade-5fc4-92ce-8950ae424258,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=7899c4d8-edb4-4836-b838-c4aa702ad7af,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 06 10:19:28 compute-0 wizardly_dhawan[289293]:             "lv_uuid": "a55pcS-v3Vt-CJ4W-fqCV-Baze-9VlY-jG9prS",
Dec 06 10:19:28 compute-0 wizardly_dhawan[289293]:             "name": "ceph_lv0",
Dec 06 10:19:28 compute-0 wizardly_dhawan[289293]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 10:19:28 compute-0 wizardly_dhawan[289293]:             "tags": {
Dec 06 10:19:28 compute-0 wizardly_dhawan[289293]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 06 10:19:28 compute-0 wizardly_dhawan[289293]:                 "ceph.block_uuid": "a55pcS-v3Vt-CJ4W-fqCV-Baze-9VlY-jG9prS",
Dec 06 10:19:28 compute-0 wizardly_dhawan[289293]:                 "ceph.cephx_lockbox_secret": "",
Dec 06 10:19:28 compute-0 wizardly_dhawan[289293]:                 "ceph.cluster_fsid": "5ecd3f74-dade-5fc4-92ce-8950ae424258",
Dec 06 10:19:28 compute-0 wizardly_dhawan[289293]:                 "ceph.cluster_name": "ceph",
Dec 06 10:19:28 compute-0 wizardly_dhawan[289293]:                 "ceph.crush_device_class": "",
Dec 06 10:19:28 compute-0 wizardly_dhawan[289293]:                 "ceph.encrypted": "0",
Dec 06 10:19:28 compute-0 wizardly_dhawan[289293]:                 "ceph.osd_fsid": "7899c4d8-edb4-4836-b838-c4aa702ad7af",
Dec 06 10:19:28 compute-0 wizardly_dhawan[289293]:                 "ceph.osd_id": "1",
Dec 06 10:19:28 compute-0 wizardly_dhawan[289293]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 06 10:19:28 compute-0 wizardly_dhawan[289293]:                 "ceph.type": "block",
Dec 06 10:19:28 compute-0 wizardly_dhawan[289293]:                 "ceph.vdo": "0",
Dec 06 10:19:28 compute-0 wizardly_dhawan[289293]:                 "ceph.with_tpm": "0"
Dec 06 10:19:28 compute-0 wizardly_dhawan[289293]:             },
Dec 06 10:19:28 compute-0 wizardly_dhawan[289293]:             "type": "block",
Dec 06 10:19:28 compute-0 wizardly_dhawan[289293]:             "vg_name": "ceph_vg0"
Dec 06 10:19:28 compute-0 wizardly_dhawan[289293]:         }
Dec 06 10:19:28 compute-0 wizardly_dhawan[289293]:     ]
Dec 06 10:19:28 compute-0 wizardly_dhawan[289293]: }
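The JSON block above, keyed by OSD id, is consistent with the output of `ceph-volume lvm list --format json` run inside the wizardly_dhawan container. A minimal parsing sketch (assumption: the report has been captured to lvm_list.json; the script and filename are illustrative, not part of the deployment):

    import json

    # Parse a captured `ceph-volume lvm list --format json` report and print
    # one summary line per OSD, using the same fields shown in the log above.
    with open("lvm_list.json") as fh:
        report = json.load(fh)

    for osd_id, lvs in report.items():
        for lv in lvs:
            tags = lv.get("tags", {})
            print(
                f"osd.{osd_id}: lv={lv['lv_path']} "
                f"devices={','.join(lv['devices'])} "
                f"osd_fsid={tags.get('ceph.osd_fsid', '?')} "
                f"encrypted={tags.get('ceph.encrypted', '?')}"
            )

For the report above this prints a single line for osd.1, backed by /dev/ceph_vg0/ceph_lv0 on /dev/loop3.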
Dec 06 10:19:28 compute-0 systemd[1]: libpod-c529d7f6d4968c0b90a81424a6ae0bd96d296c796a19ed27721a9a1b4cd55bf8.scope: Deactivated successfully.
Dec 06 10:19:28 compute-0 podman[289277]: 2025-12-06 10:19:28.032365796 +0000 UTC m=+0.461969696 container died c529d7f6d4968c0b90a81424a6ae0bd96d296c796a19ed27721a9a1b4cd55bf8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_dhawan, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Dec 06 10:19:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-ad1b268f1fb53a9c8ae8358fecb0063cf212216c6778e956b175671ff85adb8f-merged.mount: Deactivated successfully.
Dec 06 10:19:28 compute-0 podman[289277]: 2025-12-06 10:19:28.078930893 +0000 UTC m=+0.508534773 container remove c529d7f6d4968c0b90a81424a6ae0bd96d296c796a19ed27721a9a1b4cd55bf8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_dhawan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 10:19:28 compute-0 systemd[1]: libpod-conmon-c529d7f6d4968c0b90a81424a6ae0bd96d296c796a19ed27721a9a1b4cd55bf8.scope: Deactivated successfully.
Dec 06 10:19:28 compute-0 sudo[289168]: pam_unix(sudo:session): session closed for user root
Dec 06 10:19:28 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:19:28 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:19:28 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:19:28.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
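The beast lines here and throughout this window are anonymous "HEAD / HTTP/1.0" probes from 192.168.122.100 and 192.168.122.102 answering 200, i.e. periodic health checks against radosgw. A minimal reproduction sketch (assumptions: run from the same network, and radosgw listening on compute-0 port 8080; the endpoint is illustrative, since the log does not show the bound port):

    import http.client

    # Issue the same anonymous HEAD / probe the beast access lines record.
    # Host and port are assumptions; substitute the real radosgw endpoint.
    conn = http.client.HTTPConnection("compute-0.ctlplane.example.com", 8080, timeout=5)
    conn.request("HEAD", "/")
    print(conn.getresponse().status)  # a healthy gateway answers 200
    conn.close()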
Dec 06 10:19:28 compute-0 sudo[289316]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 10:19:28 compute-0 sudo[289316]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:19:28 compute-0 sudo[289316]: pam_unix(sudo:session): session closed for user root
Dec 06 10:19:28 compute-0 sudo[289341]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 5ecd3f74-dade-5fc4-92ce-8950ae424258 -- raw list --format json
Dec 06 10:19:28 compute-0 sudo[289341]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:19:28 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:19:28 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:19:28 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:19:28.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:19:28 compute-0 podman[289407]: 2025-12-06 10:19:28.777440287 +0000 UTC m=+0.069369860 container create 37ff5f9b7bdf139ab493c1b95268bf90a8f3e5e38a744f057ce7e5fb2d57ae33 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_goldstine, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1)
Dec 06 10:19:28 compute-0 systemd[1]: Started libpod-conmon-37ff5f9b7bdf139ab493c1b95268bf90a8f3e5e38a744f057ce7e5fb2d57ae33.scope.
Dec 06 10:19:28 compute-0 podman[289407]: 2025-12-06 10:19:28.752357424 +0000 UTC m=+0.044286977 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 06 10:19:28 compute-0 systemd[1]: Started libcrun container.
Dec 06 10:19:28 compute-0 podman[289407]: 2025-12-06 10:19:28.887379109 +0000 UTC m=+0.179308662 container init 37ff5f9b7bdf139ab493c1b95268bf90a8f3e5e38a744f057ce7e5fb2d57ae33 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_goldstine, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1)
Dec 06 10:19:28 compute-0 podman[289407]: 2025-12-06 10:19:28.898405649 +0000 UTC m=+0.190335182 container start 37ff5f9b7bdf139ab493c1b95268bf90a8f3e5e38a744f057ce7e5fb2d57ae33 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_goldstine, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 10:19:28 compute-0 podman[289407]: 2025-12-06 10:19:28.903036125 +0000 UTC m=+0.194965708 container attach 37ff5f9b7bdf139ab493c1b95268bf90a8f3e5e38a744f057ce7e5fb2d57ae33 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_goldstine, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Dec 06 10:19:28 compute-0 naughty_goldstine[289423]: 167 167
Dec 06 10:19:28 compute-0 systemd[1]: libpod-37ff5f9b7bdf139ab493c1b95268bf90a8f3e5e38a744f057ce7e5fb2d57ae33.scope: Deactivated successfully.
Dec 06 10:19:28 compute-0 podman[289407]: 2025-12-06 10:19:28.905144012 +0000 UTC m=+0.197073545 container died 37ff5f9b7bdf139ab493c1b95268bf90a8f3e5e38a744f057ce7e5fb2d57ae33 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_goldstine, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 06 10:19:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-aea2770be7913719fba8e3028c6e20a5fb2302c7e8f38d0d5e43cd56f50918bb-merged.mount: Deactivated successfully.
Dec 06 10:19:28 compute-0 podman[289407]: 2025-12-06 10:19:28.951753101 +0000 UTC m=+0.243682624 container remove 37ff5f9b7bdf139ab493c1b95268bf90a8f3e5e38a744f057ce7e5fb2d57ae33 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_goldstine, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325)
Dec 06 10:19:28 compute-0 systemd[1]: libpod-conmon-37ff5f9b7bdf139ab493c1b95268bf90a8f3e5e38a744f057ce7e5fb2d57ae33.scope: Deactivated successfully.
Dec 06 10:19:29 compute-0 ceph-mon[74327]: pgmap v1152: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec 06 10:19:29 compute-0 podman[289447]: 2025-12-06 10:19:29.137100536 +0000 UTC m=+0.052754927 container create 6105d6cf21fc86863b97b5526c08c6adc887e8398a53f9af1ec285c909d30345 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_babbage, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Dec 06 10:19:29 compute-0 systemd[1]: Started libpod-conmon-6105d6cf21fc86863b97b5526c08c6adc887e8398a53f9af1ec285c909d30345.scope.
Dec 06 10:19:29 compute-0 podman[289447]: 2025-12-06 10:19:29.115341553 +0000 UTC m=+0.030995924 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 06 10:19:29 compute-0 nova_compute[254819]: 2025-12-06 10:19:29.210 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:19:29 compute-0 systemd[1]: Started libcrun container.
Dec 06 10:19:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ffa945642e043a77cdcc905fd533a995cf40051c863005e74178ed9e444cf5e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 10:19:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ffa945642e043a77cdcc905fd533a995cf40051c863005e74178ed9e444cf5e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 10:19:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ffa945642e043a77cdcc905fd533a995cf40051c863005e74178ed9e444cf5e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 10:19:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ffa945642e043a77cdcc905fd533a995cf40051c863005e74178ed9e444cf5e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 10:19:29 compute-0 podman[289447]: 2025-12-06 10:19:29.264103474 +0000 UTC m=+0.179757825 container init 6105d6cf21fc86863b97b5526c08c6adc887e8398a53f9af1ec285c909d30345 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_babbage, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 10:19:29 compute-0 podman[289447]: 2025-12-06 10:19:29.282824843 +0000 UTC m=+0.198479234 container start 6105d6cf21fc86863b97b5526c08c6adc887e8398a53f9af1ec285c909d30345 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_babbage, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 06 10:19:29 compute-0 podman[289447]: 2025-12-06 10:19:29.287464579 +0000 UTC m=+0.203118950 container attach 6105d6cf21fc86863b97b5526c08c6adc887e8398a53f9af1ec285c909d30345 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_babbage, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec 06 10:19:29 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:19:29 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1153: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:19:30 compute-0 lvm[289549]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 06 10:19:30 compute-0 lvm[289549]: VG ceph_vg0 finished
Dec 06 10:19:30 compute-0 sleepy_babbage[289463]: {}
Dec 06 10:19:30 compute-0 podman[289538]: 2025-12-06 10:19:30.094428824 +0000 UTC m=+0.075823464 container health_status ec1e66d2175087ad7a28a9a6b5a784e30128df3f5e268959d009e364cd5120f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125)
Dec 06 10:19:30 compute-0 systemd[1]: libpod-6105d6cf21fc86863b97b5526c08c6adc887e8398a53f9af1ec285c909d30345.scope: Deactivated successfully.
Dec 06 10:19:30 compute-0 systemd[1]: libpod-6105d6cf21fc86863b97b5526c08c6adc887e8398a53f9af1ec285c909d30345.scope: Consumed 1.373s CPU time.
Dec 06 10:19:30 compute-0 podman[289447]: 2025-12-06 10:19:30.111006836 +0000 UTC m=+1.026661187 container died 6105d6cf21fc86863b97b5526c08c6adc887e8398a53f9af1ec285c909d30345 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_babbage, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 10:19:30 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:19:30 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:19:30 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:19:30.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:19:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-2ffa945642e043a77cdcc905fd533a995cf40051c863005e74178ed9e444cf5e-merged.mount: Deactivated successfully.
Dec 06 10:19:30 compute-0 podman[289447]: 2025-12-06 10:19:30.162670882 +0000 UTC m=+1.078325233 container remove 6105d6cf21fc86863b97b5526c08c6adc887e8398a53f9af1ec285c909d30345 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_babbage, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 10:19:30 compute-0 systemd[1]: libpod-conmon-6105d6cf21fc86863b97b5526c08c6adc887e8398a53f9af1ec285c909d30345.scope: Deactivated successfully.
Dec 06 10:19:30 compute-0 sudo[289341]: pam_unix(sudo:session): session closed for user root
Dec 06 10:19:30 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 06 10:19:30 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 10:19:30 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 06 10:19:30 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 10:19:30 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:19:30 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:19:30 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:19:30.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:19:30 compute-0 sudo[289574]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 10:19:30 compute-0 sudo[289574]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:19:30 compute-0 sudo[289574]: pam_unix(sudo:session): session closed for user root
Dec 06 10:19:30 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:19:30] "GET /metrics HTTP/1.1" 200 48459 "" "Prometheus/2.51.0"
Dec 06 10:19:30 compute-0 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:19:30] "GET /metrics HTTP/1.1" 200 48459 "" "Prometheus/2.51.0"
Dec 06 10:19:31 compute-0 ceph-mon[74327]: pgmap v1153: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:19:31 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 10:19:31 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 10:19:31 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1154: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:19:32 compute-0 nova_compute[254819]: 2025-12-06 10:19:32.054 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:19:32 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:19:32 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:19:32 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:19:32.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:19:32 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:19:32 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:19:32 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:19:32.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:19:33 compute-0 ceph-mon[74327]: pgmap v1154: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:19:33 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1155: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec 06 10:19:34 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:19:34 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:19:34 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:19:34.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:19:34 compute-0 nova_compute[254819]: 2025-12-06 10:19:34.214 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:19:34 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:19:34 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:19:34 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:19:34.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:19:34 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:19:35 compute-0 ceph-mon[74327]: pgmap v1155: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec 06 10:19:35 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1156: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:19:36 compute-0 sudo[289605]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 10:19:36 compute-0 sudo[289605]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:19:36 compute-0 sudo[289605]: pam_unix(sudo:session): session closed for user root
Dec 06 10:19:36 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:19:36 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:19:36 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:19:36.135 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:19:36 compute-0 ceph-mon[74327]: pgmap v1156: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:19:36 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:19:36 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:19:36 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:19:36.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:19:37 compute-0 nova_compute[254819]: 2025-12-06 10:19:37.055 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:19:37 compute-0 ceph-osd[82803]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 06 10:19:37 compute-0 ceph-osd[82803]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 2400.1 total, 600.0 interval
                                           Cumulative writes: 13K writes, 49K keys, 13K commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s
                                           Cumulative WAL: 13K writes, 4027 syncs, 3.40 writes per sync, written: 0.03 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 2640 writes, 8560 keys, 2640 commit groups, 1.0 writes per commit group, ingest: 7.62 MB, 0.01 MB/s
                                           Interval WAL: 2640 writes, 1122 syncs, 2.35 writes per sync, written: 0.01 GB, 0.01 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
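The derived figures in the interval lines above follow directly from the raw counters; a quick arithmetic check (values copied from the dump):

    # Recompute rocksdb's derived interval figures from its raw counters.
    interval_writes = 2640        # "Interval WAL: 2640 writes"
    interval_syncs = 1122         # "1122 syncs"
    interval_ingest_mb = 7.62     # "ingest: 7.62 MB"
    interval_secs = 600.0         # "600.0 interval"

    print(round(interval_writes / interval_syncs, 2))    # 2.35 writes per sync
    print(round(interval_ingest_mb / interval_secs, 2))  # 0.01 MB/s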
Dec 06 10:19:37 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:19:37.696Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 06 10:19:37 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1157: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec 06 10:19:38 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:19:38 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:19:38 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:19:38.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:19:38 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:19:38 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:19:38 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:19:38.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:19:38 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 06 10:19:38 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 10:19:39 compute-0 ceph-mon[74327]: pgmap v1157: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec 06 10:19:39 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 10:19:39 compute-0 nova_compute[254819]: 2025-12-06 10:19:39.219 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:19:39 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:19:39 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1158: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:19:40 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:19:40 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:19:40 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:19:40.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:19:40 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:19:40 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 10:19:40 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:19:40.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 10:19:40 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:19:40] "GET /metrics HTTP/1.1" 200 48456 "" "Prometheus/2.51.0"
Dec 06 10:19:40 compute-0 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:19:40] "GET /metrics HTTP/1.1" 200 48456 "" "Prometheus/2.51.0"
Dec 06 10:19:41 compute-0 ceph-mon[74327]: pgmap v1158: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:19:41 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1159: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:19:42 compute-0 nova_compute[254819]: 2025-12-06 10:19:42.057 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:19:42 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:19:42 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 10:19:42 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:19:42.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 10:19:42 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:19:42 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:19:42 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:19:42.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:19:43 compute-0 ceph-mon[74327]: pgmap v1159: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:19:43 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1160: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec 06 10:19:44 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:19:44 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:19:44 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:19:44.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:19:44 compute-0 ceph-mon[74327]: pgmap v1160: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec 06 10:19:44 compute-0 nova_compute[254819]: 2025-12-06 10:19:44.223 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:19:44 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:19:44 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:19:44 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:19:44.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:19:44 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:19:45 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1161: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:19:46 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:19:46 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:19:46 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:19:46.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:19:46 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:19:46 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:19:46 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:19:46.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:19:47 compute-0 ceph-mon[74327]: pgmap v1161: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:19:47 compute-0 ceph-mon[74327]: from='client.? 192.168.122.10:0/1961422206' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 10:19:47 compute-0 ceph-mon[74327]: from='client.? 192.168.122.10:0/1961422206' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 10:19:47 compute-0 nova_compute[254819]: 2025-12-06 10:19:47.058 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:19:47 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:19:47.697Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec 06 10:19:47 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:19:47.698Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
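Both dashboard webhook receivers fail the same way across this window: compute-2 with context deadline exceeded and compute-1 with a TCP i/o timeout to 192.168.122.101:8443, so the alert never leaves this host. A minimal reachability probe, assuming it is run from compute-0 (the script is illustrative; the hosts and port are taken from the error messages above):

    import socket

    # Probe the two receivers Alertmanager reports as unreachable above.
    targets = [
        ("compute-1.ctlplane.example.com", 8443),
        ("compute-2.ctlplane.example.com", 8443),
    ]
    for host, port in targets:
        try:
            with socket.create_connection((host, port), timeout=5):
                print(f"{host}:{port} reachable")
        except OSError as exc:
            print(f"{host}:{port} unreachable: {exc}")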
Dec 06 10:19:47 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1162: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec 06 10:19:48 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:19:48 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 10:19:48 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:19:48.152 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 10:19:48 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:19:48 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:19:48 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:19:48.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:19:49 compute-0 ceph-mon[74327]: pgmap v1162: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec 06 10:19:49 compute-0 nova_compute[254819]: 2025-12-06 10:19:49.226 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:19:49 compute-0 podman[289642]: 2025-12-06 10:19:49.449319789 +0000 UTC m=+0.077428588 container health_status a9cf33c1bf7891b3fdd9db4717ad5f3587fe9e5acf9171920c6d6334b70a502a (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 06 10:19:49 compute-0 nova_compute[254819]: 2025-12-06 10:19:49.749 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:19:49 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:19:49 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1163: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:19:50 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:19:50 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:19:50 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:19:50.155 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:19:50 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:19:50 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:19:50 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:19:50.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:19:50 compute-0 nova_compute[254819]: 2025-12-06 10:19:50.748 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:19:50 compute-0 nova_compute[254819]: 2025-12-06 10:19:50.845 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:19:50 compute-0 nova_compute[254819]: 2025-12-06 10:19:50.845 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:19:50 compute-0 nova_compute[254819]: 2025-12-06 10:19:50.845 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:19:50 compute-0 nova_compute[254819]: 2025-12-06 10:19:50.845 254824 DEBUG nova.compute.resource_tracker [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 10:19:50 compute-0 nova_compute[254819]: 2025-12-06 10:19:50.846 254824 DEBUG oslo_concurrency.processutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 10:19:50 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:19:50] "GET /metrics HTTP/1.1" 200 48456 "" "Prometheus/2.51.0"
Dec 06 10:19:50 compute-0 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:19:50] "GET /metrics HTTP/1.1" 200 48456 "" "Prometheus/2.51.0"
Dec 06 10:19:51 compute-0 ceph-mon[74327]: pgmap v1163: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:19:51 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 06 10:19:51 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2562345917' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 10:19:51 compute-0 nova_compute[254819]: 2025-12-06 10:19:51.298 254824 DEBUG oslo_concurrency.processutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 10:19:51 compute-0 nova_compute[254819]: 2025-12-06 10:19:51.463 254824 WARNING nova.virt.libvirt.driver [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 10:19:51 compute-0 nova_compute[254819]: 2025-12-06 10:19:51.464 254824 DEBUG nova.compute.resource_tracker [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4451MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 06 10:19:51 compute-0 nova_compute[254819]: 2025-12-06 10:19:51.464 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:19:51 compute-0 nova_compute[254819]: 2025-12-06 10:19:51.464 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:19:51 compute-0 nova_compute[254819]: 2025-12-06 10:19:51.541 254824 DEBUG nova.compute.resource_tracker [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 10:19:51 compute-0 nova_compute[254819]: 2025-12-06 10:19:51.542 254824 DEBUG nova.compute.resource_tracker [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 06 10:19:51 compute-0 nova_compute[254819]: 2025-12-06 10:19:51.561 254824 DEBUG oslo_concurrency.processutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 10:19:51 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1164: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:19:51 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 06 10:19:51 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1289647203' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 10:19:52 compute-0 nova_compute[254819]: 2025-12-06 10:19:52.012 254824 DEBUG oslo_concurrency.processutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 10:19:52 compute-0 nova_compute[254819]: 2025-12-06 10:19:52.022 254824 DEBUG nova.compute.provider_tree [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Inventory has not changed in ProviderTree for provider: 06a9c7d1-c74c-47ea-9e97-16acfab6aa88 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 10:19:52 compute-0 nova_compute[254819]: 2025-12-06 10:19:52.060 254824 DEBUG nova.scheduler.client.report [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Inventory has not changed for provider 06a9c7d1-c74c-47ea-9e97-16acfab6aa88 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 10:19:52 compute-0 nova_compute[254819]: 2025-12-06 10:19:52.063 254824 DEBUG nova.compute.resource_tracker [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 10:19:52 compute-0 nova_compute[254819]: 2025-12-06 10:19:52.064 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.600s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:19:52 compute-0 nova_compute[254819]: 2025-12-06 10:19:52.065 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:19:52 compute-0 ceph-mon[74327]: from='client.? 192.168.122.100:0/2562345917' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 10:19:52 compute-0 ceph-mon[74327]: from='client.? 192.168.122.101:0/3584782970' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 10:19:52 compute-0 ceph-mon[74327]: from='client.? 192.168.122.100:0/1289647203' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 10:19:52 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:19:52 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:19:52 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:19:52.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:19:52 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:19:52 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:19:52 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:19:52.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:19:53 compute-0 nova_compute[254819]: 2025-12-06 10:19:53.069 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:19:53 compute-0 ceph-mon[74327]: pgmap v1164: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:19:53 compute-0 ceph-mon[74327]: from='client.? 192.168.122.101:0/3001520887' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 10:19:53 compute-0 nova_compute[254819]: 2025-12-06 10:19:53.741 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:19:53 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 06 10:19:53 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 10:19:53 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1165: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec 06 10:19:54 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 10:19:54 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 10:19:54 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 10:19:54 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 10:19:54 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 10:19:54 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 10:19:54 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 10:19:54 compute-0 ceph-mon[74327]: pgmap v1165: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec 06 10:19:54 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:19:54 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:19:54 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:19:54.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:19:54 compute-0 nova_compute[254819]: 2025-12-06 10:19:54.230 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:19:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:19:54.248 162267 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:19:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:19:54.248 162267 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:19:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:19:54.249 162267 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:19:54 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:19:54 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:19:54 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:19:54.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:19:54 compute-0 podman[289712]: 2025-12-06 10:19:54.471686326 +0000 UTC m=+0.098126302 container health_status ed08b04f63d3695f0aa2f04fcc706897af4aa24f286fa3cd10a4323ee2c868ab (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Dec 06 10:19:54 compute-0 nova_compute[254819]: 2025-12-06 10:19:54.748 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:19:54 compute-0 nova_compute[254819]: 2025-12-06 10:19:54.749 254824 DEBUG nova.compute.manager [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 06 10:19:54 compute-0 nova_compute[254819]: 2025-12-06 10:19:54.749 254824 DEBUG nova.compute.manager [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 06 10:19:54 compute-0 nova_compute[254819]: 2025-12-06 10:19:54.783 254824 DEBUG nova.compute.manager [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 06 10:19:54 compute-0 nova_compute[254819]: 2025-12-06 10:19:54.783 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:19:54 compute-0 nova_compute[254819]: 2025-12-06 10:19:54.784 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:19:54 compute-0 nova_compute[254819]: 2025-12-06 10:19:54.785 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:19:54 compute-0 nova_compute[254819]: 2025-12-06 10:19:54.785 254824 DEBUG nova.compute.manager [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 06 10:19:54 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:19:55 compute-0 nova_compute[254819]: 2025-12-06 10:19:55.751 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:19:55 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1166: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:19:56 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:19:56 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:19:56 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:19:56.161 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:19:56 compute-0 sudo[289742]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 10:19:56 compute-0 sudo[289742]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:19:56 compute-0 sudo[289742]: pam_unix(sudo:session): session closed for user root
Dec 06 10:19:56 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:19:56 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:19:56 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:19:56.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:19:57 compute-0 nova_compute[254819]: 2025-12-06 10:19:57.061 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:19:57 compute-0 ceph-mon[74327]: pgmap v1166: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:19:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:19:57.699Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 06 10:19:57 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1167: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec 06 10:19:58 compute-0 ceph-mon[74327]: from='client.? 192.168.122.102:0/502386387' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 10:19:58 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:19:58 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:19:58 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:19:58.163 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:19:58 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:19:58 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:19:58 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:19:58.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:19:59 compute-0 ceph-mon[74327]: pgmap v1167: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec 06 10:19:59 compute-0 ceph-mon[74327]: from='client.? 192.168.122.102:0/4279224093' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 10:19:59 compute-0 nova_compute[254819]: 2025-12-06 10:19:59.233 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:19:59 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:19:59 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1168: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:20:00 compute-0 ceph-mon[74327]: log_channel(cluster) log [INF] : overall HEALTH_OK
Dec 06 10:20:00 compute-0 ceph-mon[74327]: overall HEALTH_OK
Dec 06 10:20:00 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:20:00 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:20:00 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:20:00.166 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:20:00 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:20:00 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 10:20:00 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:20:00.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 10:20:00 compute-0 podman[289771]: 2025-12-06 10:20:00.456103348 +0000 UTC m=+0.079317130 container health_status ec1e66d2175087ad7a28a9a6b5a784e30128df3f5e268959d009e364cd5120f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent)
Dec 06 10:20:00 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:20:00] "GET /metrics HTTP/1.1" 200 48461 "" "Prometheus/2.51.0"
Dec 06 10:20:00 compute-0 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:20:00] "GET /metrics HTTP/1.1" 200 48461 "" "Prometheus/2.51.0"
Dec 06 10:20:01 compute-0 ceph-mon[74327]: pgmap v1168: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:20:01 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1169: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:20:02 compute-0 nova_compute[254819]: 2025-12-06 10:20:02.063 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:20:02 compute-0 ceph-mon[74327]: pgmap v1169: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:20:02 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:20:02 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:20:02 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:20:02.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:20:02 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:20:02 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:20:02 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:20:02.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:20:03 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1170: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec 06 10:20:04 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:20:04 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:20:04 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:20:04.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:20:04 compute-0 nova_compute[254819]: 2025-12-06 10:20:04.236 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:20:04 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:20:04 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:20:04 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:20:04.367 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:20:04 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:20:05 compute-0 ceph-mon[74327]: pgmap v1170: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec 06 10:20:05 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1171: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:20:06 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:20:06 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:20:06 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:20:06.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:20:06 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:20:06 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:20:06 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:20:06.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:20:07 compute-0 nova_compute[254819]: 2025-12-06 10:20:07.064 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:20:07 compute-0 ceph-mon[74327]: pgmap v1171: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:20:07 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:20:07.700Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 06 10:20:07 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1172: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec 06 10:20:08 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:20:08 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:20:08 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:20:08.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:20:08 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:20:08 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:20:08 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:20:08.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:20:08 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 06 10:20:08 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 10:20:09 compute-0 ceph-mon[74327]: pgmap v1172: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec 06 10:20:09 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 10:20:09 compute-0 nova_compute[254819]: 2025-12-06 10:20:09.240 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:20:09 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:20:09 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1173: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:20:10 compute-0 ceph-mon[74327]: pgmap v1173: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:20:10 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:20:10 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:20:10 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:20:10.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:20:10 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:20:10 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:20:10 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:20:10.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:20:10 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:20:10] "GET /metrics HTTP/1.1" 200 48460 "" "Prometheus/2.51.0"
Dec 06 10:20:10 compute-0 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:20:10] "GET /metrics HTTP/1.1" 200 48460 "" "Prometheus/2.51.0"
Dec 06 10:20:11 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1174: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:20:12 compute-0 nova_compute[254819]: 2025-12-06 10:20:12.066 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:20:12 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:20:12 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:20:12 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:20:12.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:20:12 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:20:12 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:20:12 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:20:12.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:20:13 compute-0 ceph-mon[74327]: pgmap v1174: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:20:14 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1175: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec 06 10:20:14 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:20:14 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:20:14 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:20:14.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:20:14 compute-0 nova_compute[254819]: 2025-12-06 10:20:14.243 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:20:14 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:20:14 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:20:14 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:20:14.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:20:14 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:20:15 compute-0 ceph-mon[74327]: pgmap v1175: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec 06 10:20:16 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1176: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:20:16 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:20:16 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:20:16 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:20:16.184 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:20:16 compute-0 sudo[289806]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 10:20:16 compute-0 sudo[289806]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:20:16 compute-0 sudo[289806]: pam_unix(sudo:session): session closed for user root
Dec 06 10:20:16 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:20:16 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:20:16 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:20:16.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:20:17 compute-0 nova_compute[254819]: 2025-12-06 10:20:17.069 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:20:17 compute-0 ceph-mon[74327]: pgmap v1176: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:20:17 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:20:17.702Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 06 10:20:18 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1177: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec 06 10:20:18 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:20:18 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:20:18 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:20:18.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:20:18 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:20:18 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:20:18 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:20:18.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:20:19 compute-0 ceph-mon[74327]: pgmap v1177: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec 06 10:20:19 compute-0 nova_compute[254819]: 2025-12-06 10:20:19.288 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:20:19 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:20:20 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1178: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:20:20 compute-0 ceph-mon[74327]: pgmap v1178: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:20:20 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:20:20 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:20:20 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:20:20.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:20:20 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:20:20 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:20:20 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:20:20.392 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:20:20 compute-0 podman[289835]: 2025-12-06 10:20:20.459488075 +0000 UTC m=+0.078044566 container health_status a9cf33c1bf7891b3fdd9db4717ad5f3587fe9e5acf9171920c6d6334b70a502a (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team)
Dec 06 10:20:20 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:20:20] "GET /metrics HTTP/1.1" 200 48460 "" "Prometheus/2.51.0"
Dec 06 10:20:20 compute-0 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:20:20] "GET /metrics HTTP/1.1" 200 48460 "" "Prometheus/2.51.0"
Dec 06 10:20:22 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1179: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:20:22 compute-0 nova_compute[254819]: 2025-12-06 10:20:22.069 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:20:22 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:20:22 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:20:22 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:20:22.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:20:22 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:20:22 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:20:22 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:20:22.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:20:23 compute-0 ceph-mon[74327]: pgmap v1179: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:20:23 compute-0 ceph-mgr[74618]: [balancer INFO root] Optimize plan auto_2025-12-06_10:20:23
Dec 06 10:20:23 compute-0 ceph-mgr[74618]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 06 10:20:23 compute-0 ceph-mgr[74618]: [balancer INFO root] do_upmap
Dec 06 10:20:23 compute-0 ceph-mgr[74618]: [balancer INFO root] pools ['.rgw.root', '.nfs', 'images', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'default.rgw.meta', 'vms', 'default.rgw.control', 'volumes', 'backups', '.mgr', 'default.rgw.log']
Dec 06 10:20:23 compute-0 ceph-mgr[74618]: [balancer INFO root] prepared 0/10 upmap changes
Dec 06 10:20:23 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 06 10:20:23 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 10:20:24 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1180: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec 06 10:20:24 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 10:20:24 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 10:20:24 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 10:20:24 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 10:20:24 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 10:20:24 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 10:20:24 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 10:20:24 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:20:24 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:20:24 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:20:24.194 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:20:24 compute-0 nova_compute[254819]: 2025-12-06 10:20:24.292 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:20:24 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:20:24 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:20:24 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:20:24.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:20:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] _maybe_adjust
Dec 06 10:20:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:20:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 06 10:20:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:20:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec 06 10:20:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:20:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 10:20:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:20:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 10:20:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:20:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Dec 06 10:20:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:20:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Dec 06 10:20:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:20:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 10:20:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:20:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec 06 10:20:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:20:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 06 10:20:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:20:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec 06 10:20:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:20:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 10:20:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:20:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 06 10:20:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 06 10:20:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 10:20:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 10:20:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 10:20:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 10:20:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 06 10:20:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 10:20:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 10:20:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 10:20:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 10:20:24 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:20:25 compute-0 ceph-mon[74327]: pgmap v1180: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec 06 10:20:25 compute-0 podman[289859]: 2025-12-06 10:20:25.5072127 +0000 UTC m=+0.131774078 container health_status ed08b04f63d3695f0aa2f04fcc706897af4aa24f286fa3cd10a4323ee2c868ab (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller)
Dec 06 10:20:26 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1181: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:20:26 compute-0 ceph-mon[74327]: pgmap v1181: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:20:26 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:20:26 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:20:26 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:20:26.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:20:26 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:20:26 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:20:26 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:20:26.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:20:27 compute-0 nova_compute[254819]: 2025-12-06 10:20:27.071 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:20:27 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:20:27.702Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 06 10:20:28 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1182: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec 06 10:20:28 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:20:28 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:20:28 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:20:28.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:20:28 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:20:28 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:20:28 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:20:28.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:20:29 compute-0 ceph-mon[74327]: pgmap v1182: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec 06 10:20:29 compute-0 nova_compute[254819]: 2025-12-06 10:20:29.295 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:20:29 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:20:30 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1183: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:20:30 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:20:30 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:20:30 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:20:30.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:20:30 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:20:30 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:20:30 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:20:30.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:20:30 compute-0 sudo[289893]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 10:20:30 compute-0 sudo[289893]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:20:30 compute-0 sudo[289893]: pam_unix(sudo:session): session closed for user root
Dec 06 10:20:30 compute-0 sudo[289924]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Dec 06 10:20:30 compute-0 sudo[289924]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:20:30 compute-0 podman[289917]: 2025-12-06 10:20:30.695183783 +0000 UTC m=+0.047272968 container health_status ec1e66d2175087ad7a28a9a6b5a784e30128df3f5e268959d009e364cd5120f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent)
Dec 06 10:20:30 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:20:30] "GET /metrics HTTP/1.1" 200 48461 "" "Prometheus/2.51.0"
Dec 06 10:20:30 compute-0 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:20:30] "GET /metrics HTTP/1.1" 200 48461 "" "Prometheus/2.51.0"
Dec 06 10:20:30 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec 06 10:20:30 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 10:20:30 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec 06 10:20:30 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 10:20:31 compute-0 ceph-mon[74327]: pgmap v1183: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:20:31 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 10:20:31 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 10:20:31 compute-0 sudo[289924]: pam_unix(sudo:session): session closed for user root
Dec 06 10:20:31 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 06 10:20:31 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 10:20:31 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 06 10:20:31 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 10:20:31 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 06 10:20:31 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 10:20:31 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Dec 06 10:20:31 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 10:20:31 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 06 10:20:31 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 10:20:31 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 06 10:20:31 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 10:20:31 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 06 10:20:31 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 10:20:31 compute-0 sudo[289995]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 10:20:31 compute-0 sudo[289995]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:20:31 compute-0 sudo[289995]: pam_unix(sudo:session): session closed for user root
Dec 06 10:20:31 compute-0 sudo[290020]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 5ecd3f74-dade-5fc4-92ce-8950ae424258 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Dec 06 10:20:31 compute-0 sudo[290020]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:20:32 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1184: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:20:32 compute-0 nova_compute[254819]: 2025-12-06 10:20:32.073 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:20:32 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 10:20:32 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 10:20:32 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 10:20:32 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 10:20:32 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 10:20:32 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 10:20:32 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 10:20:32 compute-0 podman[290086]: 2025-12-06 10:20:32.152190042 +0000 UTC m=+0.040233676 container create d94e7653fc691db31c671e86da95a940d9d002c57c5b43c47c82b3c4e4a8fc8e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_hoover, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 10:20:32 compute-0 systemd[1]: Started libpod-conmon-d94e7653fc691db31c671e86da95a940d9d002c57c5b43c47c82b3c4e4a8fc8e.scope.
Dec 06 10:20:32 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:20:32 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:20:32 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:20:32.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:20:32 compute-0 systemd[1]: Started libcrun container.
Dec 06 10:20:32 compute-0 podman[290086]: 2025-12-06 10:20:32.134028448 +0000 UTC m=+0.022072112 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 06 10:20:32 compute-0 podman[290086]: 2025-12-06 10:20:32.236294711 +0000 UTC m=+0.124338355 container init d94e7653fc691db31c671e86da95a940d9d002c57c5b43c47c82b3c4e4a8fc8e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_hoover, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=squid, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 10:20:32 compute-0 podman[290086]: 2025-12-06 10:20:32.247801945 +0000 UTC m=+0.135845599 container start d94e7653fc691db31c671e86da95a940d9d002c57c5b43c47c82b3c4e4a8fc8e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_hoover, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Dec 06 10:20:32 compute-0 podman[290086]: 2025-12-06 10:20:32.252105852 +0000 UTC m=+0.140149506 container attach d94e7653fc691db31c671e86da95a940d9d002c57c5b43c47c82b3c4e4a8fc8e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_hoover, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec 06 10:20:32 compute-0 ecstatic_hoover[290102]: 167 167
Dec 06 10:20:32 compute-0 systemd[1]: libpod-d94e7653fc691db31c671e86da95a940d9d002c57c5b43c47c82b3c4e4a8fc8e.scope: Deactivated successfully.
Dec 06 10:20:32 compute-0 conmon[290102]: conmon d94e7653fc691db31c67 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d94e7653fc691db31c671e86da95a940d9d002c57c5b43c47c82b3c4e4a8fc8e.scope/container/memory.events
Dec 06 10:20:32 compute-0 podman[290086]: 2025-12-06 10:20:32.255576366 +0000 UTC m=+0.143620000 container died d94e7653fc691db31c671e86da95a940d9d002c57c5b43c47c82b3c4e4a8fc8e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_hoover, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 10:20:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-c317c70e47d5fbc959ee91b19b24962a3158be225ec7f98fb14680022be7ee8a-merged.mount: Deactivated successfully.
Dec 06 10:20:32 compute-0 podman[290086]: 2025-12-06 10:20:32.297405945 +0000 UTC m=+0.185449589 container remove d94e7653fc691db31c671e86da95a940d9d002c57c5b43c47c82b3c4e4a8fc8e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_hoover, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Dec 06 10:20:32 compute-0 systemd[1]: libpod-conmon-d94e7653fc691db31c671e86da95a940d9d002c57c5b43c47c82b3c4e4a8fc8e.scope: Deactivated successfully.
Dec 06 10:20:32 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:20:32 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 10:20:32 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:20:32.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 10:20:32 compute-0 podman[290127]: 2025-12-06 10:20:32.479106621 +0000 UTC m=+0.065042572 container create 5de8a200cc2f37edb39bae6aa8dcaf144308e7bed62f86efeebdc0792e8f4c13 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_curran, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 10:20:32 compute-0 systemd[1]: Started libpod-conmon-5de8a200cc2f37edb39bae6aa8dcaf144308e7bed62f86efeebdc0792e8f4c13.scope.
Dec 06 10:20:32 compute-0 podman[290127]: 2025-12-06 10:20:32.446213175 +0000 UTC m=+0.032149216 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 06 10:20:32 compute-0 systemd[1]: Started libcrun container.
Dec 06 10:20:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fbbcfa2688e5149d220425b69192244b606a5d2e4c84fd834f2fd8857e21420d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 10:20:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fbbcfa2688e5149d220425b69192244b606a5d2e4c84fd834f2fd8857e21420d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 10:20:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fbbcfa2688e5149d220425b69192244b606a5d2e4c84fd834f2fd8857e21420d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 10:20:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fbbcfa2688e5149d220425b69192244b606a5d2e4c84fd834f2fd8857e21420d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 10:20:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fbbcfa2688e5149d220425b69192244b606a5d2e4c84fd834f2fd8857e21420d/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 06 10:20:32 compute-0 podman[290127]: 2025-12-06 10:20:32.572387039 +0000 UTC m=+0.158323020 container init 5de8a200cc2f37edb39bae6aa8dcaf144308e7bed62f86efeebdc0792e8f4c13 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_curran, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 06 10:20:32 compute-0 podman[290127]: 2025-12-06 10:20:32.581868328 +0000 UTC m=+0.167804279 container start 5de8a200cc2f37edb39bae6aa8dcaf144308e7bed62f86efeebdc0792e8f4c13 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_curran, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.build-date=20250325)
Dec 06 10:20:32 compute-0 podman[290127]: 2025-12-06 10:20:32.585924188 +0000 UTC m=+0.171860159 container attach 5de8a200cc2f37edb39bae6aa8dcaf144308e7bed62f86efeebdc0792e8f4c13 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_curran, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True)
Dec 06 10:20:32 compute-0 modest_curran[290144]: --> passed data devices: 0 physical, 1 LVM
Dec 06 10:20:32 compute-0 modest_curran[290144]: --> All data devices are unavailable
Dec 06 10:20:32 compute-0 systemd[1]: libpod-5de8a200cc2f37edb39bae6aa8dcaf144308e7bed62f86efeebdc0792e8f4c13.scope: Deactivated successfully.
Dec 06 10:20:33 compute-0 podman[290159]: 2025-12-06 10:20:33.010994889 +0000 UTC m=+0.041351247 container died 5de8a200cc2f37edb39bae6aa8dcaf144308e7bed62f86efeebdc0792e8f4c13 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_curran, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 10:20:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-fbbcfa2688e5149d220425b69192244b606a5d2e4c84fd834f2fd8857e21420d-merged.mount: Deactivated successfully.
Dec 06 10:20:33 compute-0 podman[290159]: 2025-12-06 10:20:33.055087489 +0000 UTC m=+0.085443837 container remove 5de8a200cc2f37edb39bae6aa8dcaf144308e7bed62f86efeebdc0792e8f4c13 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_curran, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 10:20:33 compute-0 systemd[1]: libpod-conmon-5de8a200cc2f37edb39bae6aa8dcaf144308e7bed62f86efeebdc0792e8f4c13.scope: Deactivated successfully.
Dec 06 10:20:33 compute-0 sudo[290020]: pam_unix(sudo:session): session closed for user root
Dec 06 10:20:33 compute-0 ceph-mon[74327]: pgmap v1184: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:20:33 compute-0 sudo[290174]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 10:20:33 compute-0 sudo[290174]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:20:33 compute-0 sudo[290174]: pam_unix(sudo:session): session closed for user root
Dec 06 10:20:33 compute-0 sudo[290199]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 5ecd3f74-dade-5fc4-92ce-8950ae424258 -- lvm list --format json
Dec 06 10:20:33 compute-0 sudo[290199]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:20:33 compute-0 podman[290263]: 2025-12-06 10:20:33.614873546 +0000 UTC m=+0.041725417 container create abe15ca7d75375bdda7ffd484b8d5985c0ea23eab341a381eb6d2b23fc1df55a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_poincare, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec 06 10:20:33 compute-0 systemd[1]: Started libpod-conmon-abe15ca7d75375bdda7ffd484b8d5985c0ea23eab341a381eb6d2b23fc1df55a.scope.
Dec 06 10:20:33 compute-0 systemd[1]: Started libcrun container.
Dec 06 10:20:33 compute-0 podman[290263]: 2025-12-06 10:20:33.598344486 +0000 UTC m=+0.025196377 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 06 10:20:33 compute-0 podman[290263]: 2025-12-06 10:20:33.705159544 +0000 UTC m=+0.132011435 container init abe15ca7d75375bdda7ffd484b8d5985c0ea23eab341a381eb6d2b23fc1df55a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_poincare, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325)
Dec 06 10:20:33 compute-0 podman[290263]: 2025-12-06 10:20:33.711357172 +0000 UTC m=+0.138209043 container start abe15ca7d75375bdda7ffd484b8d5985c0ea23eab341a381eb6d2b23fc1df55a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_poincare, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 06 10:20:33 compute-0 podman[290263]: 2025-12-06 10:20:33.714914179 +0000 UTC m=+0.141766050 container attach abe15ca7d75375bdda7ffd484b8d5985c0ea23eab341a381eb6d2b23fc1df55a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_poincare, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 10:20:33 compute-0 suspicious_poincare[290279]: 167 167
Dec 06 10:20:33 compute-0 systemd[1]: libpod-abe15ca7d75375bdda7ffd484b8d5985c0ea23eab341a381eb6d2b23fc1df55a.scope: Deactivated successfully.
Dec 06 10:20:33 compute-0 podman[290263]: 2025-12-06 10:20:33.717393786 +0000 UTC m=+0.144245657 container died abe15ca7d75375bdda7ffd484b8d5985c0ea23eab341a381eb6d2b23fc1df55a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_poincare, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 06 10:20:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-5372618bc1d163ecd5f92d8a7df07c885928540e94ad3a549032e7fd5f3ae640-merged.mount: Deactivated successfully.
Dec 06 10:20:33 compute-0 podman[290263]: 2025-12-06 10:20:33.762054442 +0000 UTC m=+0.188906313 container remove abe15ca7d75375bdda7ffd484b8d5985c0ea23eab341a381eb6d2b23fc1df55a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_poincare, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325)
Dec 06 10:20:33 compute-0 systemd[1]: libpod-conmon-abe15ca7d75375bdda7ffd484b8d5985c0ea23eab341a381eb6d2b23fc1df55a.scope: Deactivated successfully.
Dec 06 10:20:33 compute-0 podman[290306]: 2025-12-06 10:20:33.950204543 +0000 UTC m=+0.061294979 container create e1c500fc3a3f0b003ce9cbe02deb945330688e56e8eab9a196f157ec48decd8f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_margulis, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 10:20:33 compute-0 systemd[1]: Started libpod-conmon-e1c500fc3a3f0b003ce9cbe02deb945330688e56e8eab9a196f157ec48decd8f.scope.
Dec 06 10:20:34 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1185: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec 06 10:20:34 compute-0 podman[290306]: 2025-12-06 10:20:33.92106599 +0000 UTC m=+0.032156466 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 06 10:20:34 compute-0 systemd[1]: Started libcrun container.
Dec 06 10:20:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a65c2c39741b68ec2a313cf08f00020cdac2ae5ca7a6c5b7bdb1aeb2d3b1025/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 10:20:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a65c2c39741b68ec2a313cf08f00020cdac2ae5ca7a6c5b7bdb1aeb2d3b1025/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 10:20:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a65c2c39741b68ec2a313cf08f00020cdac2ae5ca7a6c5b7bdb1aeb2d3b1025/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 10:20:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a65c2c39741b68ec2a313cf08f00020cdac2ae5ca7a6c5b7bdb1aeb2d3b1025/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 10:20:34 compute-0 podman[290306]: 2025-12-06 10:20:34.057214346 +0000 UTC m=+0.168304752 container init e1c500fc3a3f0b003ce9cbe02deb945330688e56e8eab9a196f157ec48decd8f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_margulis, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 06 10:20:34 compute-0 podman[290306]: 2025-12-06 10:20:34.067941738 +0000 UTC m=+0.179032144 container start e1c500fc3a3f0b003ce9cbe02deb945330688e56e8eab9a196f157ec48decd8f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_margulis, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 06 10:20:34 compute-0 podman[290306]: 2025-12-06 10:20:34.07165772 +0000 UTC m=+0.182748136 container attach e1c500fc3a3f0b003ce9cbe02deb945330688e56e8eab9a196f157ec48decd8f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_margulis, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 10:20:34 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:20:34 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:20:34 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:20:34.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:20:34 compute-0 nova_compute[254819]: 2025-12-06 10:20:34.298 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:20:34 compute-0 angry_margulis[290322]: {
Dec 06 10:20:34 compute-0 angry_margulis[290322]:     "1": [
Dec 06 10:20:34 compute-0 angry_margulis[290322]:         {
Dec 06 10:20:34 compute-0 angry_margulis[290322]:             "devices": [
Dec 06 10:20:34 compute-0 angry_margulis[290322]:                 "/dev/loop3"
Dec 06 10:20:34 compute-0 angry_margulis[290322]:             ],
Dec 06 10:20:34 compute-0 angry_margulis[290322]:             "lv_name": "ceph_lv0",
Dec 06 10:20:34 compute-0 angry_margulis[290322]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 10:20:34 compute-0 angry_margulis[290322]:             "lv_size": "21470642176",
Dec 06 10:20:34 compute-0 angry_margulis[290322]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=a55pcS-v3Vt-CJ4W-fqCV-Baze-9VlY-jG9prS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=5ecd3f74-dade-5fc4-92ce-8950ae424258,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=7899c4d8-edb4-4836-b838-c4aa702ad7af,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 06 10:20:34 compute-0 angry_margulis[290322]:             "lv_uuid": "a55pcS-v3Vt-CJ4W-fqCV-Baze-9VlY-jG9prS",
Dec 06 10:20:34 compute-0 angry_margulis[290322]:             "name": "ceph_lv0",
Dec 06 10:20:34 compute-0 angry_margulis[290322]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 10:20:34 compute-0 angry_margulis[290322]:             "tags": {
Dec 06 10:20:34 compute-0 angry_margulis[290322]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 06 10:20:34 compute-0 angry_margulis[290322]:                 "ceph.block_uuid": "a55pcS-v3Vt-CJ4W-fqCV-Baze-9VlY-jG9prS",
Dec 06 10:20:34 compute-0 angry_margulis[290322]:                 "ceph.cephx_lockbox_secret": "",
Dec 06 10:20:34 compute-0 angry_margulis[290322]:                 "ceph.cluster_fsid": "5ecd3f74-dade-5fc4-92ce-8950ae424258",
Dec 06 10:20:34 compute-0 angry_margulis[290322]:                 "ceph.cluster_name": "ceph",
Dec 06 10:20:34 compute-0 angry_margulis[290322]:                 "ceph.crush_device_class": "",
Dec 06 10:20:34 compute-0 angry_margulis[290322]:                 "ceph.encrypted": "0",
Dec 06 10:20:34 compute-0 angry_margulis[290322]:                 "ceph.osd_fsid": "7899c4d8-edb4-4836-b838-c4aa702ad7af",
Dec 06 10:20:34 compute-0 angry_margulis[290322]:                 "ceph.osd_id": "1",
Dec 06 10:20:34 compute-0 angry_margulis[290322]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 06 10:20:34 compute-0 angry_margulis[290322]:                 "ceph.type": "block",
Dec 06 10:20:34 compute-0 angry_margulis[290322]:                 "ceph.vdo": "0",
Dec 06 10:20:34 compute-0 angry_margulis[290322]:                 "ceph.with_tpm": "0"
Dec 06 10:20:34 compute-0 angry_margulis[290322]:             },
Dec 06 10:20:34 compute-0 angry_margulis[290322]:             "type": "block",
Dec 06 10:20:34 compute-0 angry_margulis[290322]:             "vg_name": "ceph_vg0"
Dec 06 10:20:34 compute-0 angry_margulis[290322]:         }
Dec 06 10:20:34 compute-0 angry_margulis[290322]:     ]
Dec 06 10:20:34 compute-0 angry_margulis[290322]: }
Dec 06 10:20:34 compute-0 systemd[1]: libpod-e1c500fc3a3f0b003ce9cbe02deb945330688e56e8eab9a196f157ec48decd8f.scope: Deactivated successfully.
Dec 06 10:20:34 compute-0 podman[290306]: 2025-12-06 10:20:34.370965406 +0000 UTC m=+0.482055802 container died e1c500fc3a3f0b003ce9cbe02deb945330688e56e8eab9a196f157ec48decd8f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_margulis, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 10:20:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-2a65c2c39741b68ec2a313cf08f00020cdac2ae5ca7a6c5b7bdb1aeb2d3b1025-merged.mount: Deactivated successfully.
Dec 06 10:20:34 compute-0 podman[290306]: 2025-12-06 10:20:34.407516792 +0000 UTC m=+0.518607188 container remove e1c500fc3a3f0b003ce9cbe02deb945330688e56e8eab9a196f157ec48decd8f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_margulis, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 06 10:20:34 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:20:34 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:20:34 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:20:34.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:20:34 compute-0 systemd[1]: libpod-conmon-e1c500fc3a3f0b003ce9cbe02deb945330688e56e8eab9a196f157ec48decd8f.scope: Deactivated successfully.
Dec 06 10:20:34 compute-0 sudo[290199]: pam_unix(sudo:session): session closed for user root
Dec 06 10:20:34 compute-0 sudo[290341]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 10:20:34 compute-0 sudo[290341]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:20:34 compute-0 sudo[290341]: pam_unix(sudo:session): session closed for user root
Dec 06 10:20:34 compute-0 sudo[290366]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 5ecd3f74-dade-5fc4-92ce-8950ae424258 -- raw list --format json
Dec 06 10:20:34 compute-0 sudo[290366]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:20:34 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:20:35 compute-0 podman[290433]: 2025-12-06 10:20:35.095318302 +0000 UTC m=+0.053310642 container create c9643e6481836bd61e67de9157ea9352c5ad2703af3764877943eb6d364b2423 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_cori, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 06 10:20:35 compute-0 ceph-mon[74327]: pgmap v1185: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec 06 10:20:35 compute-0 systemd[1]: Started libpod-conmon-c9643e6481836bd61e67de9157ea9352c5ad2703af3764877943eb6d364b2423.scope.
Dec 06 10:20:35 compute-0 podman[290433]: 2025-12-06 10:20:35.069956572 +0000 UTC m=+0.027948952 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 06 10:20:35 compute-0 systemd[1]: Started libcrun container.
Dec 06 10:20:35 compute-0 podman[290433]: 2025-12-06 10:20:35.199847167 +0000 UTC m=+0.157839507 container init c9643e6481836bd61e67de9157ea9352c5ad2703af3764877943eb6d364b2423 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_cori, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, ceph=True)
Dec 06 10:20:35 compute-0 podman[290433]: 2025-12-06 10:20:35.210086197 +0000 UTC m=+0.168078527 container start c9643e6481836bd61e67de9157ea9352c5ad2703af3764877943eb6d364b2423 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_cori, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec 06 10:20:35 compute-0 podman[290433]: 2025-12-06 10:20:35.212875193 +0000 UTC m=+0.170867613 container attach c9643e6481836bd61e67de9157ea9352c5ad2703af3764877943eb6d364b2423 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_cori, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325)
Dec 06 10:20:35 compute-0 amazing_cori[290450]: 167 167
Dec 06 10:20:35 compute-0 systemd[1]: libpod-c9643e6481836bd61e67de9157ea9352c5ad2703af3764877943eb6d364b2423.scope: Deactivated successfully.
Dec 06 10:20:35 compute-0 podman[290433]: 2025-12-06 10:20:35.21607951 +0000 UTC m=+0.174071840 container died c9643e6481836bd61e67de9157ea9352c5ad2703af3764877943eb6d364b2423 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_cori, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 06 10:20:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-51f9c40c23fafc492a8b5c64fae6a0b0ef95def9b4856463abc2738970e39853-merged.mount: Deactivated successfully.
Dec 06 10:20:35 compute-0 podman[290433]: 2025-12-06 10:20:35.248643606 +0000 UTC m=+0.206635936 container remove c9643e6481836bd61e67de9157ea9352c5ad2703af3764877943eb6d364b2423 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_cori, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Dec 06 10:20:35 compute-0 systemd[1]: libpod-conmon-c9643e6481836bd61e67de9157ea9352c5ad2703af3764877943eb6d364b2423.scope: Deactivated successfully.
Dec 06 10:20:35 compute-0 podman[290474]: 2025-12-06 10:20:35.400062028 +0000 UTC m=+0.040228557 container create ab3240cc45575b45049be4e7bb1f961fb6ebaccfdec5e360de355ef65a722098 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_saha, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Dec 06 10:20:35 compute-0 systemd[1]: Started libpod-conmon-ab3240cc45575b45049be4e7bb1f961fb6ebaccfdec5e360de355ef65a722098.scope.
Dec 06 10:20:35 compute-0 systemd[1]: Started libcrun container.
Dec 06 10:20:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fcfe8844f320310a0453b6981c94356900008b0ac31986db9c1a3a215778ef6f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 10:20:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fcfe8844f320310a0453b6981c94356900008b0ac31986db9c1a3a215778ef6f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 10:20:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fcfe8844f320310a0453b6981c94356900008b0ac31986db9c1a3a215778ef6f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 10:20:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fcfe8844f320310a0453b6981c94356900008b0ac31986db9c1a3a215778ef6f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 10:20:35 compute-0 podman[290474]: 2025-12-06 10:20:35.383695502 +0000 UTC m=+0.023862051 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 06 10:20:35 compute-0 podman[290474]: 2025-12-06 10:20:35.485857423 +0000 UTC m=+0.126023972 container init ab3240cc45575b45049be4e7bb1f961fb6ebaccfdec5e360de355ef65a722098 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_saha, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 06 10:20:35 compute-0 podman[290474]: 2025-12-06 10:20:35.493032349 +0000 UTC m=+0.133198878 container start ab3240cc45575b45049be4e7bb1f961fb6ebaccfdec5e360de355ef65a722098 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_saha, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Dec 06 10:20:35 compute-0 podman[290474]: 2025-12-06 10:20:35.49640791 +0000 UTC m=+0.136574459 container attach ab3240cc45575b45049be4e7bb1f961fb6ebaccfdec5e360de355ef65a722098 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_saha, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 10:20:36 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1186: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:20:36 compute-0 ceph-mon[74327]: pgmap v1186: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:20:36 compute-0 lvm[290567]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 06 10:20:36 compute-0 lvm[290567]: VG ceph_vg0 finished
Dec 06 10:20:36 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:20:36 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:20:36 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:20:36.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:20:36 compute-0 peaceful_saha[290491]: {}
Dec 06 10:20:36 compute-0 systemd[1]: libpod-ab3240cc45575b45049be4e7bb1f961fb6ebaccfdec5e360de355ef65a722098.scope: Deactivated successfully.
Dec 06 10:20:36 compute-0 systemd[1]: libpod-ab3240cc45575b45049be4e7bb1f961fb6ebaccfdec5e360de355ef65a722098.scope: Consumed 1.107s CPU time.
Dec 06 10:20:36 compute-0 podman[290474]: 2025-12-06 10:20:36.238365396 +0000 UTC m=+0.878531935 container died ab3240cc45575b45049be4e7bb1f961fb6ebaccfdec5e360de355ef65a722098 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_saha, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 10:20:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-fcfe8844f320310a0453b6981c94356900008b0ac31986db9c1a3a215778ef6f-merged.mount: Deactivated successfully.
Dec 06 10:20:36 compute-0 podman[290474]: 2025-12-06 10:20:36.279526687 +0000 UTC m=+0.919693216 container remove ab3240cc45575b45049be4e7bb1f961fb6ebaccfdec5e360de355ef65a722098 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_saha, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 10:20:36 compute-0 systemd[1]: libpod-conmon-ab3240cc45575b45049be4e7bb1f961fb6ebaccfdec5e360de355ef65a722098.scope: Deactivated successfully.
Dec 06 10:20:36 compute-0 sudo[290366]: pam_unix(sudo:session): session closed for user root
Dec 06 10:20:36 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 06 10:20:36 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 10:20:36 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 06 10:20:36 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 10:20:36 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:20:36 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:20:36 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:20:36.415 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:20:36 compute-0 sudo[290583]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 10:20:36 compute-0 sudo[290584]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 10:20:36 compute-0 sudo[290583]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:20:36 compute-0 sudo[290584]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:20:36 compute-0 sudo[290583]: pam_unix(sudo:session): session closed for user root
Dec 06 10:20:36 compute-0 sudo[290584]: pam_unix(sudo:session): session closed for user root
Dec 06 10:20:37 compute-0 nova_compute[254819]: 2025-12-06 10:20:37.075 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:20:37 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 10:20:37 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 10:20:37 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:20:37.703Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec 06 10:20:37 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:20:37.703Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec 06 10:20:37 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:20:37.703Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec 06 10:20:38 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1187: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec 06 10:20:38 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:20:38 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:20:38 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:20:38.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:20:38 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:20:38 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:20:38 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:20:38.418 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:20:38 compute-0 ceph-mon[74327]: pgmap v1187: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec 06 10:20:38 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 06 10:20:38 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 10:20:39 compute-0 nova_compute[254819]: 2025-12-06 10:20:39.302 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:20:39 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 10:20:40 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:20:40 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1188: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:20:40 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:20:40 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:20:40 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:20:40.211 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:20:40 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:20:40 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:20:40 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:20:40.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:20:40 compute-0 ceph-mon[74327]: pgmap v1188: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:20:40 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:20:40] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Dec 06 10:20:40 compute-0 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:20:40] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Dec 06 10:20:42 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1189: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:20:42 compute-0 nova_compute[254819]: 2025-12-06 10:20:42.078 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:20:42 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:20:42 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:20:42 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:20:42.213 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:20:42 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:20:42 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:20:42 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:20:42.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:20:43 compute-0 ceph-mon[74327]: pgmap v1189: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:20:44 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1190: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec 06 10:20:44 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:20:44 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:20:44 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:20:44.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:20:44 compute-0 nova_compute[254819]: 2025-12-06 10:20:44.349 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:20:44 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:20:44 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:20:44 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:20:44.427 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:20:45 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:20:45 compute-0 ceph-mon[74327]: pgmap v1190: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec 06 10:20:46 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1191: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:20:46 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 06 10:20:46 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4190015806' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 10:20:46 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 06 10:20:46 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4190015806' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 10:20:46 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:20:46 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:20:46 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:20:46.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:20:46 compute-0 ceph-mon[74327]: pgmap v1191: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:20:46 compute-0 ceph-mon[74327]: from='client.? 192.168.122.10:0/4190015806' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 10:20:46 compute-0 ceph-mon[74327]: from='client.? 192.168.122.10:0/4190015806' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 10:20:46 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:20:46 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:20:46 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:20:46.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:20:47 compute-0 nova_compute[254819]: 2025-12-06 10:20:47.080 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:20:47 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:20:47.703Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 06 10:20:48 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1192: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec 06 10:20:48 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:20:48 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:20:48 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:20:48.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:20:48 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:20:48 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:20:48 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:20:48.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:20:49 compute-0 ceph-mon[74327]: pgmap v1192: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec 06 10:20:49 compute-0 nova_compute[254819]: 2025-12-06 10:20:49.352 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:20:49 compute-0 nova_compute[254819]: 2025-12-06 10:20:49.749 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:20:50 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:20:50 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1193: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:20:50 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:20:50 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:20:50 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:20:50.223 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:20:50 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:20:50 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:20:50 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:20:50.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:20:50 compute-0 nova_compute[254819]: 2025-12-06 10:20:50.748 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:20:50 compute-0 nova_compute[254819]: 2025-12-06 10:20:50.777 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:20:50 compute-0 nova_compute[254819]: 2025-12-06 10:20:50.778 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:20:50 compute-0 nova_compute[254819]: 2025-12-06 10:20:50.779 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:20:50 compute-0 nova_compute[254819]: 2025-12-06 10:20:50.779 254824 DEBUG nova.compute.resource_tracker [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 10:20:50 compute-0 nova_compute[254819]: 2025-12-06 10:20:50.779 254824 DEBUG oslo_concurrency.processutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 10:20:50 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:20:50] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Dec 06 10:20:50 compute-0 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:20:50] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Dec 06 10:20:51 compute-0 ceph-mon[74327]: pgmap v1193: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:20:51 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 06 10:20:51 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1912669683' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 10:20:51 compute-0 nova_compute[254819]: 2025-12-06 10:20:51.235 254824 DEBUG oslo_concurrency.processutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 10:20:51 compute-0 nova_compute[254819]: 2025-12-06 10:20:51.429 254824 WARNING nova.virt.libvirt.driver [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 10:20:51 compute-0 nova_compute[254819]: 2025-12-06 10:20:51.430 254824 DEBUG nova.compute.resource_tracker [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4452MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 06 10:20:51 compute-0 nova_compute[254819]: 2025-12-06 10:20:51.431 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:20:51 compute-0 nova_compute[254819]: 2025-12-06 10:20:51.431 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:20:51 compute-0 podman[290669]: 2025-12-06 10:20:51.462338497 +0000 UTC m=+0.080466132 container health_status a9cf33c1bf7891b3fdd9db4717ad5f3587fe9e5acf9171920c6d6334b70a502a (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, container_name=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3)
Dec 06 10:20:51 compute-0 nova_compute[254819]: 2025-12-06 10:20:51.486 254824 DEBUG nova.compute.resource_tracker [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 10:20:51 compute-0 nova_compute[254819]: 2025-12-06 10:20:51.486 254824 DEBUG nova.compute.resource_tracker [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 06 10:20:51 compute-0 nova_compute[254819]: 2025-12-06 10:20:51.676 254824 DEBUG nova.scheduler.client.report [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Refreshing inventories for resource provider 06a9c7d1-c74c-47ea-9e97-16acfab6aa88 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Dec 06 10:20:51 compute-0 nova_compute[254819]: 2025-12-06 10:20:51.843 254824 DEBUG nova.scheduler.client.report [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Updating ProviderTree inventory for provider 06a9c7d1-c74c-47ea-9e97-16acfab6aa88 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Dec 06 10:20:51 compute-0 nova_compute[254819]: 2025-12-06 10:20:51.844 254824 DEBUG nova.compute.provider_tree [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Updating inventory in ProviderTree for provider 06a9c7d1-c74c-47ea-9e97-16acfab6aa88 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Dec 06 10:20:51 compute-0 nova_compute[254819]: 2025-12-06 10:20:51.858 254824 DEBUG nova.scheduler.client.report [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Refreshing aggregate associations for resource provider 06a9c7d1-c74c-47ea-9e97-16acfab6aa88, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Dec 06 10:20:51 compute-0 nova_compute[254819]: 2025-12-06 10:20:51.876 254824 DEBUG nova.scheduler.client.report [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Refreshing trait associations for resource provider 06a9c7d1-c74c-47ea-9e97-16acfab6aa88, traits: HW_CPU_X86_AVX,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSE4A,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_SSSE3,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_SECURITY_TPM_1_2,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_SSE41,COMPUTE_STORAGE_BUS_IDE,COMPUTE_STORAGE_BUS_FDC,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_SSE42,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_SSE2,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_BMI2,COMPUTE_TRUSTED_CERTS,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_RESCUE_BFV,COMPUTE_DEVICE_TAGGING,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_CLMUL,HW_CPU_X86_BMI,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_MMX,HW_CPU_X86_SHA,HW_CPU_X86_AMD_SVM,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_AVX2,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_FMA3,HW_CPU_X86_AESNI,HW_CPU_X86_SVM,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_F16C,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_ABM,COMPUTE_ACCELERATORS,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_NODE,HW_CPU_X86_SSE,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_GRAPHICS_MODEL_VGA _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Dec 06 10:20:51 compute-0 nova_compute[254819]: 2025-12-06 10:20:51.899 254824 DEBUG oslo_concurrency.processutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 10:20:52 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1194: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:20:52 compute-0 nova_compute[254819]: 2025-12-06 10:20:52.080 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:20:52 compute-0 ceph-mon[74327]: from='client.? 192.168.122.100:0/1912669683' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 10:20:52 compute-0 ceph-mon[74327]: from='client.? 192.168.122.101:0/3996973664' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 10:20:52 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:20:52 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:20:52 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:20:52.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:20:52 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 06 10:20:52 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2232808873' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 10:20:52 compute-0 nova_compute[254819]: 2025-12-06 10:20:52.308 254824 DEBUG oslo_concurrency.processutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.409s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 10:20:52 compute-0 nova_compute[254819]: 2025-12-06 10:20:52.314 254824 DEBUG nova.compute.provider_tree [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Inventory has not changed in ProviderTree for provider: 06a9c7d1-c74c-47ea-9e97-16acfab6aa88 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 10:20:52 compute-0 nova_compute[254819]: 2025-12-06 10:20:52.335 254824 DEBUG nova.scheduler.client.report [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Inventory has not changed for provider 06a9c7d1-c74c-47ea-9e97-16acfab6aa88 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 10:20:52 compute-0 nova_compute[254819]: 2025-12-06 10:20:52.336 254824 DEBUG nova.compute.resource_tracker [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 10:20:52 compute-0 nova_compute[254819]: 2025-12-06 10:20:52.336 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.905s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:20:52 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:20:52 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:20:52 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:20:52.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:20:53 compute-0 ceph-mon[74327]: pgmap v1194: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:20:53 compute-0 ceph-mon[74327]: from='client.? 192.168.122.100:0/2232808873' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 10:20:53 compute-0 ceph-mon[74327]: from='client.? 192.168.122.101:0/225600293' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 10:20:53 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 06 10:20:53 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 10:20:54 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1195: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec 06 10:20:54 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 10:20:54 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 10:20:54 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 10:20:54 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 10:20:54 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 10:20:54 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 10:20:54 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 10:20:54 compute-0 ceph-mon[74327]: pgmap v1195: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec 06 10:20:54 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:20:54 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:20:54 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:20:54.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:20:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:20:54.248 162267 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:20:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:20:54.249 162267 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:20:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:20:54.249 162267 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:20:54 compute-0 nova_compute[254819]: 2025-12-06 10:20:54.336 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:20:54 compute-0 nova_compute[254819]: 2025-12-06 10:20:54.397 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:20:54 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:20:54 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:20:54 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:20:54.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:20:55 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:20:55 compute-0 nova_compute[254819]: 2025-12-06 10:20:55.742 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:20:55 compute-0 nova_compute[254819]: 2025-12-06 10:20:55.749 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:20:55 compute-0 nova_compute[254819]: 2025-12-06 10:20:55.749 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:20:55 compute-0 nova_compute[254819]: 2025-12-06 10:20:55.750 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:20:55 compute-0 nova_compute[254819]: 2025-12-06 10:20:55.750 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:20:55 compute-0 nova_compute[254819]: 2025-12-06 10:20:55.751 254824 DEBUG nova.compute.manager [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 06 10:20:56 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1196: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:20:56 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:20:56 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:20:56 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:20:56.230 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:20:56 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:20:56 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 10:20:56 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:20:56.446 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 10:20:56 compute-0 podman[290718]: 2025-12-06 10:20:56.473632482 +0000 UTC m=+0.104642499 container health_status ed08b04f63d3695f0aa2f04fcc706897af4aa24f286fa3cd10a4323ee2c868ab (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ovn_controller)
Dec 06 10:20:56 compute-0 sudo[290743]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 10:20:56 compute-0 sudo[290743]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:20:56 compute-0 sudo[290743]: pam_unix(sudo:session): session closed for user root
Dec 06 10:20:56 compute-0 nova_compute[254819]: 2025-12-06 10:20:56.749 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:20:56 compute-0 nova_compute[254819]: 2025-12-06 10:20:56.750 254824 DEBUG nova.compute.manager [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 06 10:20:56 compute-0 nova_compute[254819]: 2025-12-06 10:20:56.750 254824 DEBUG nova.compute.manager [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 06 10:20:56 compute-0 nova_compute[254819]: 2025-12-06 10:20:56.770 254824 DEBUG nova.compute.manager [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 06 10:20:56 compute-0 nova_compute[254819]: 2025-12-06 10:20:56.771 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:20:56 compute-0 nova_compute[254819]: 2025-12-06 10:20:56.772 254824 DEBUG nova.compute.manager [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Dec 06 10:20:57 compute-0 nova_compute[254819]: 2025-12-06 10:20:57.084 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:20:57 compute-0 ceph-mon[74327]: pgmap v1196: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:20:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:20:57.705Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec 06 10:20:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:20:57.705Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec 06 10:20:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:20:57.705Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 06 10:20:58 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1197: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec 06 10:20:58 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:20:58 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:20:58 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:20:58.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:20:58 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:20:58 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:20:58 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:20:58.450 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:20:58 compute-0 nova_compute[254819]: 2025-12-06 10:20:58.749 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:20:59 compute-0 ceph-mon[74327]: pgmap v1197: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec 06 10:20:59 compute-0 nova_compute[254819]: 2025-12-06 10:20:59.399 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:20:59 compute-0 nova_compute[254819]: 2025-12-06 10:20:59.924 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:21:00 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:21:00 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1198: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:21:00 compute-0 ceph-mon[74327]: from='client.? 192.168.122.102:0/4133591868' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 10:21:00 compute-0 ceph-mon[74327]: from='client.? 192.168.122.102:0/2168834215' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 10:21:00 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:21:00 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:21:00 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:21:00.234 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:21:00 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:21:00 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:21:00 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:21:00.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:21:00 compute-0 nova_compute[254819]: 2025-12-06 10:21:00.749 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:21:00 compute-0 nova_compute[254819]: 2025-12-06 10:21:00.749 254824 DEBUG nova.compute.manager [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Dec 06 10:21:00 compute-0 nova_compute[254819]: 2025-12-06 10:21:00.767 254824 DEBUG nova.compute.manager [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Dec 06 10:21:00 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:21:00] "GET /metrics HTTP/1.1" 200 48456 "" "Prometheus/2.51.0"
Dec 06 10:21:00 compute-0 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:21:00] "GET /metrics HTTP/1.1" 200 48456 "" "Prometheus/2.51.0"
Dec 06 10:21:01 compute-0 ceph-mon[74327]: pgmap v1198: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:21:01 compute-0 podman[290774]: 2025-12-06 10:21:01.471716418 +0000 UTC m=+0.080654227 container health_status ec1e66d2175087ad7a28a9a6b5a784e30128df3f5e268959d009e364cd5120f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Dec 06 10:21:02 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1199: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:21:02 compute-0 nova_compute[254819]: 2025-12-06 10:21:02.085 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:21:02 compute-0 ceph-mon[74327]: pgmap v1199: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:21:02 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:21:02 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:21:02 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:21:02.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:21:02 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:21:02 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:21:02 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:21:02.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:21:04 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1200: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec 06 10:21:04 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:21:04 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:21:04 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:21:04.240 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:21:04 compute-0 nova_compute[254819]: 2025-12-06 10:21:04.437 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:21:04 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:21:04 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:21:04 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:21:04.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:21:05 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:21:05 compute-0 ceph-mon[74327]: pgmap v1200: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec 06 10:21:06 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1201: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:21:06 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:21:06 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:21:06 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:21:06.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:21:06 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:21:06 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:21:06 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:21:06.463 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:21:07 compute-0 nova_compute[254819]: 2025-12-06 10:21:07.087 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:21:07 compute-0 ceph-mon[74327]: pgmap v1201: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:21:07 compute-0 ceph-mon[74327]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #69. Immutable memtables: 0.
Dec 06 10:21:07 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:21:07.127007) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 06 10:21:07 compute-0 ceph-mon[74327]: rocksdb: [db/flush_job.cc:856] [default] [JOB 37] Flushing memtable with next log file: 69
Dec 06 10:21:07 compute-0 ceph-mon[74327]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765016467127067, "job": 37, "event": "flush_started", "num_memtables": 1, "num_entries": 2485, "num_deletes": 508, "total_data_size": 4205243, "memory_usage": 4298768, "flush_reason": "Manual Compaction"}
Dec 06 10:21:07 compute-0 ceph-mon[74327]: rocksdb: [db/flush_job.cc:885] [default] [JOB 37] Level-0 flush table #70: started
Dec 06 10:21:07 compute-0 ceph-mon[74327]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765016467166861, "cf_name": "default", "job": 37, "event": "table_file_creation", "file_number": 70, "file_size": 4104756, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 31958, "largest_seqno": 34442, "table_properties": {"data_size": 4093704, "index_size": 6586, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3397, "raw_key_size": 27198, "raw_average_key_size": 20, "raw_value_size": 4069181, "raw_average_value_size": 3023, "num_data_blocks": 283, "num_entries": 1346, "num_filter_entries": 1346, "num_deletions": 508, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765016261, "oldest_key_time": 1765016261, "file_creation_time": 1765016467, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "423e8366-3852-4d2b-aa53-87abab31aff3", "db_session_id": "4WBX5WA2U4DRQ0QUUFCR", "orig_file_number": 70, "seqno_to_time_mapping": "N/A"}}
Dec 06 10:21:07 compute-0 ceph-mon[74327]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 37] Flush lasted 39910 microseconds, and 13799 cpu microseconds.
Dec 06 10:21:07 compute-0 ceph-mon[74327]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 10:21:07 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:21:07.166922) [db/flush_job.cc:967] [default] [JOB 37] Level-0 flush table #70: 4104756 bytes OK
Dec 06 10:21:07 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:21:07.166950) [db/memtable_list.cc:519] [default] Level-0 commit table #70 started
Dec 06 10:21:07 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:21:07.168319) [db/memtable_list.cc:722] [default] Level-0 commit table #70: memtable #1 done
Dec 06 10:21:07 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:21:07.168339) EVENT_LOG_v1 {"time_micros": 1765016467168332, "job": 37, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 06 10:21:07 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:21:07.168364) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 06 10:21:07 compute-0 ceph-mon[74327]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 37] Try to delete WAL files size 4193736, prev total WAL file size 4193736, number of live WAL files 2.
Dec 06 10:21:07 compute-0 ceph-mon[74327]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000066.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 10:21:07 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:21:07.170269) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032373631' seq:72057594037927935, type:22 .. '7061786F730033303133' seq:0, type:0; will stop at (end)
Dec 06 10:21:07 compute-0 ceph-mon[74327]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 38] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 06 10:21:07 compute-0 ceph-mon[74327]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 37 Base level 0, inputs: [70(4008KB)], [68(14MB)]
Dec 06 10:21:07 compute-0 ceph-mon[74327]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765016467170386, "job": 38, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [70], "files_L6": [68], "score": -1, "input_data_size": 19240975, "oldest_snapshot_seqno": -1}
Dec 06 10:21:07 compute-0 ceph-mon[74327]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 38] Generated table #71: 6838 keys, 17003661 bytes, temperature: kUnknown
Dec 06 10:21:07 compute-0 ceph-mon[74327]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765016467319557, "cf_name": "default", "job": 38, "event": "table_file_creation", "file_number": 71, "file_size": 17003661, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 16955980, "index_size": 29457, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 17157, "raw_key_size": 176613, "raw_average_key_size": 25, "raw_value_size": 16831185, "raw_average_value_size": 2461, "num_data_blocks": 1183, "num_entries": 6838, "num_filter_entries": 6838, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765013861, "oldest_key_time": 0, "file_creation_time": 1765016467, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "423e8366-3852-4d2b-aa53-87abab31aff3", "db_session_id": "4WBX5WA2U4DRQ0QUUFCR", "orig_file_number": 71, "seqno_to_time_mapping": "N/A"}}
Dec 06 10:21:07 compute-0 ceph-mon[74327]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 10:21:07 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:21:07.320008) [db/compaction/compaction_job.cc:1663] [default] [JOB 38] Compacted 1@0 + 1@6 files to L6 => 17003661 bytes
Dec 06 10:21:07 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:21:07.321814) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 129.1 rd, 114.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.9, 14.4 +0.0 blob) out(16.2 +0.0 blob), read-write-amplify(8.8) write-amplify(4.1) OK, records in: 7873, records dropped: 1035 output_compression: NoCompression
Dec 06 10:21:07 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:21:07.321866) EVENT_LOG_v1 {"time_micros": 1765016467321836, "job": 38, "event": "compaction_finished", "compaction_time_micros": 149096, "compaction_time_cpu_micros": 35696, "output_level": 6, "num_output_files": 1, "total_output_size": 17003661, "num_input_records": 7873, "num_output_records": 6838, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 06 10:21:07 compute-0 ceph-mon[74327]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000070.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 10:21:07 compute-0 ceph-mon[74327]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765016467323551, "job": 38, "event": "table_file_deletion", "file_number": 70}
Dec 06 10:21:07 compute-0 ceph-mon[74327]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000068.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 10:21:07 compute-0 ceph-mon[74327]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765016467329310, "job": 38, "event": "table_file_deletion", "file_number": 68}
Dec 06 10:21:07 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:21:07.170053) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 10:21:07 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:21:07.329363) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 10:21:07 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:21:07.329370) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 10:21:07 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:21:07.329373) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 10:21:07 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:21:07.329376) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 10:21:07 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:21:07.329379) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
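The JOB 38 summary above reports read-write-amplify(8.8) and write-amplify(4.1); both figures follow directly from the byte counts logged in the same block. A short sketch reproducing them, assuming RocksDB's convention of normalizing by the newly flushed L0 bytes:

```python
# Reproduce the amplification figures RocksDB printed for JOB 38 above.
# Inputs are taken verbatim from the flush/compaction events in this log:
#   L0 input  (file 70):  4,104,756 bytes (the newly flushed data)
#   total in  (70 + 68): 19,240,975 bytes (input_data_size)
#   output    (file 71): 17,003,661 bytes (total_output_size)
new_data = 4_104_756
total_read = 19_240_975
total_written = 17_003_661

write_amplify = total_written / new_data                      # ~4.1
read_write_amplify = (total_read + total_written) / new_data  # ~8.8
print(f"write-amplify      {write_amplify:.1f}")
print(f"read-write-amplify {read_write_amplify:.1f}")
```

Both results match the logged 4.1 and 8.8, consistent with RocksDB expressing compaction cost relative to the fresh bytes that triggered it.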
Dec 06 10:21:07 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:21:07.706Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 06 10:21:08 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1202: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec 06 10:21:08 compute-0 ceph-mon[74327]: pgmap v1202: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec 06 10:21:08 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:21:08 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:21:08 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:21:08.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:21:08 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:21:08 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:21:08 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:21:08.467 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:21:08 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 06 10:21:08 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 10:21:09 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 10:21:09 compute-0 nova_compute[254819]: 2025-12-06 10:21:09.440 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:21:10 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:21:10 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1203: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:21:10 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:21:10 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:21:10 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:21:10.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:21:10 compute-0 ceph-mon[74327]: pgmap v1203: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:21:10 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:21:10 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:21:10 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:21:10.470 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:21:10 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:21:10] "GET /metrics HTTP/1.1" 200 48460 "" "Prometheus/2.51.0"
Dec 06 10:21:10 compute-0 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:21:10] "GET /metrics HTTP/1.1" 200 48460 "" "Prometheus/2.51.0"
Dec 06 10:21:12 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1204: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:21:12 compute-0 nova_compute[254819]: 2025-12-06 10:21:12.089 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:21:12 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:21:12 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:21:12 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:21:12.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:21:12 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:21:12 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:21:12 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:21:12.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:21:13 compute-0 ceph-mon[74327]: pgmap v1204: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:21:14 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1205: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec 06 10:21:14 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:21:14 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:21:14 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:21:14.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:21:14 compute-0 nova_compute[254819]: 2025-12-06 10:21:14.444 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:21:14 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:21:14 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:21:14 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:21:14.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:21:15 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:21:15 compute-0 ceph-mon[74327]: pgmap v1205: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec 06 10:21:16 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1206: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:21:16 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:21:16 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:21:16 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:21:16.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:21:16 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:21:16 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:21:16 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:21:16.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:21:16 compute-0 sudo[290809]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 10:21:16 compute-0 sudo[290809]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:21:16 compute-0 sudo[290809]: pam_unix(sudo:session): session closed for user root
Dec 06 10:21:17 compute-0 nova_compute[254819]: 2025-12-06 10:21:17.092 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:21:17 compute-0 ceph-mon[74327]: pgmap v1206: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:21:17 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:21:17.707Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec 06 10:21:17 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:21:17.707Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec 06 10:21:18 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1207: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec 06 10:21:18 compute-0 ceph-mon[74327]: pgmap v1207: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec 06 10:21:18 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:21:18 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:21:18 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:21:18.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:21:18 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:21:18 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:21:18 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:21:18.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:21:19 compute-0 nova_compute[254819]: 2025-12-06 10:21:19.446 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:21:20 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:21:20 compute-0 ceph-mon[74327]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #72. Immutable memtables: 0.
Dec 06 10:21:20 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:21:20.026214) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 06 10:21:20 compute-0 ceph-mon[74327]: rocksdb: [db/flush_job.cc:856] [default] [JOB 39] Flushing memtable with next log file: 72
Dec 06 10:21:20 compute-0 ceph-mon[74327]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765016480026714, "job": 39, "event": "flush_started", "num_memtables": 1, "num_entries": 349, "num_deletes": 251, "total_data_size": 220126, "memory_usage": 226224, "flush_reason": "Manual Compaction"}
Dec 06 10:21:20 compute-0 ceph-mon[74327]: rocksdb: [db/flush_job.cc:885] [default] [JOB 39] Level-0 flush table #73: started
Dec 06 10:21:20 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1208: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:21:20 compute-0 ceph-mon[74327]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765016480031371, "cf_name": "default", "job": 39, "event": "table_file_creation", "file_number": 73, "file_size": 217819, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 34443, "largest_seqno": 34791, "table_properties": {"data_size": 215646, "index_size": 337, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 773, "raw_key_size": 5932, "raw_average_key_size": 20, "raw_value_size": 211357, "raw_average_value_size": 721, "num_data_blocks": 15, "num_entries": 293, "num_filter_entries": 293, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765016468, "oldest_key_time": 1765016468, "file_creation_time": 1765016480, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "423e8366-3852-4d2b-aa53-87abab31aff3", "db_session_id": "4WBX5WA2U4DRQ0QUUFCR", "orig_file_number": 73, "seqno_to_time_mapping": "N/A"}}
Dec 06 10:21:20 compute-0 ceph-mon[74327]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 39] Flush lasted 5225 microseconds, and 1990 cpu microseconds.
Dec 06 10:21:20 compute-0 ceph-mon[74327]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 10:21:20 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:21:20.031435) [db/flush_job.cc:967] [default] [JOB 39] Level-0 flush table #73: 217819 bytes OK
Dec 06 10:21:20 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:21:20.031469) [db/memtable_list.cc:519] [default] Level-0 commit table #73 started
Dec 06 10:21:20 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:21:20.034620) [db/memtable_list.cc:722] [default] Level-0 commit table #73: memtable #1 done
Dec 06 10:21:20 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:21:20.034649) EVENT_LOG_v1 {"time_micros": 1765016480034641, "job": 39, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 06 10:21:20 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:21:20.034671) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 06 10:21:20 compute-0 ceph-mon[74327]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 39] Try to delete WAL files size 217792, prev total WAL file size 217792, number of live WAL files 2.
Dec 06 10:21:20 compute-0 ceph-mon[74327]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000069.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 10:21:20 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:21:20.035109) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031303033' seq:72057594037927935, type:22 .. '6D6772737461740031323535' seq:0, type:0; will stop at (end)
Dec 06 10:21:20 compute-0 ceph-mon[74327]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 40] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 06 10:21:20 compute-0 ceph-mon[74327]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 39 Base level 0, inputs: [73(212KB)], [71(16MB)]
Dec 06 10:21:20 compute-0 ceph-mon[74327]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765016480035137, "job": 40, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [73], "files_L6": [71], "score": -1, "input_data_size": 17221480, "oldest_snapshot_seqno": -1}
Dec 06 10:21:20 compute-0 ceph-mon[74327]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 40] Generated table #74: 6621 keys, 13144940 bytes, temperature: kUnknown
Dec 06 10:21:20 compute-0 ceph-mon[74327]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765016480154055, "cf_name": "default", "job": 40, "event": "table_file_creation", "file_number": 74, "file_size": 13144940, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13103526, "index_size": 23766, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16581, "raw_key_size": 172285, "raw_average_key_size": 26, "raw_value_size": 12987107, "raw_average_value_size": 1961, "num_data_blocks": 945, "num_entries": 6621, "num_filter_entries": 6621, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765013861, "oldest_key_time": 0, "file_creation_time": 1765016480, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "423e8366-3852-4d2b-aa53-87abab31aff3", "db_session_id": "4WBX5WA2U4DRQ0QUUFCR", "orig_file_number": 74, "seqno_to_time_mapping": "N/A"}}
Dec 06 10:21:20 compute-0 ceph-mon[74327]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 10:21:20 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:21:20.154471) [db/compaction/compaction_job.cc:1663] [default] [JOB 40] Compacted 1@0 + 1@6 files to L6 => 13144940 bytes
Dec 06 10:21:20 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:21:20.156109) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 144.7 rd, 110.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.2, 16.2 +0.0 blob) out(12.5 +0.0 blob), read-write-amplify(139.4) write-amplify(60.3) OK, records in: 7131, records dropped: 510 output_compression: NoCompression
Dec 06 10:21:20 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:21:20.156145) EVENT_LOG_v1 {"time_micros": 1765016480156129, "job": 40, "event": "compaction_finished", "compaction_time_micros": 119053, "compaction_time_cpu_micros": 28121, "output_level": 6, "num_output_files": 1, "total_output_size": 13144940, "num_input_records": 7131, "num_output_records": 6621, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 06 10:21:20 compute-0 ceph-mon[74327]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000073.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 10:21:20 compute-0 ceph-mon[74327]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765016480156467, "job": 40, "event": "table_file_deletion", "file_number": 73}
Dec 06 10:21:20 compute-0 ceph-mon[74327]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000071.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 10:21:20 compute-0 ceph-mon[74327]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765016480162592, "job": 40, "event": "table_file_deletion", "file_number": 71}
Dec 06 10:21:20 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:21:20.035062) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 10:21:20 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:21:20.162727) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 10:21:20 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:21:20.162734) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 10:21:20 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:21:20.162737) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 10:21:20 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:21:20.162739) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 10:21:20 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:21:20.162741) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
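The same arithmetic applied to JOB 40 above (see the sketch after JOB 38) explains its far worse ratios: the flush contributed only 217,819 bytes of new data, but compacting it into the ~16 MB L6 file read 17,221,480 and wrote 13,144,940 bytes, giving 13,144,940 / 217,819 ≈ 60.3 write amplification and (17,221,480 + 13,144,940) / 217,819 ≈ 139.4 read-write amplification, matching the logged figures. Tiny manual-compaction flushes into a single large bottom-level file force that file to be rewritten wholesale.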
Dec 06 10:21:20 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:21:20 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:21:20 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:21:20.259 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:21:20 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:21:20 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:21:20 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:21:20.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:21:20 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:21:20] "GET /metrics HTTP/1.1" 200 48460 "" "Prometheus/2.51.0"
Dec 06 10:21:20 compute-0 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:21:20] "GET /metrics HTTP/1.1" 200 48460 "" "Prometheus/2.51.0"
Dec 06 10:21:21 compute-0 ceph-mon[74327]: pgmap v1208: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:21:22 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1209: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:21:22 compute-0 nova_compute[254819]: 2025-12-06 10:21:22.094 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:21:22 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:21:22 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:21:22 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:21:22.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:21:22 compute-0 podman[290840]: 2025-12-06 10:21:22.448849445 +0000 UTC m=+0.087098632 container health_status a9cf33c1bf7891b3fdd9db4717ad5f3587fe9e5acf9171920c6d6334b70a502a (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Dec 06 10:21:22 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:21:22 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:21:22 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:21:22.491 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:21:23 compute-0 ceph-mon[74327]: pgmap v1209: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:21:23 compute-0 ceph-mgr[74618]: [balancer INFO root] Optimize plan auto_2025-12-06_10:21:23
Dec 06 10:21:23 compute-0 ceph-mgr[74618]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 06 10:21:23 compute-0 ceph-mgr[74618]: [balancer INFO root] do_upmap
Dec 06 10:21:23 compute-0 ceph-mgr[74618]: [balancer INFO root] pools ['default.rgw.meta', '.nfs', 'cephfs.cephfs.meta', 'vms', 'images', 'cephfs.cephfs.data', '.rgw.root', 'volumes', 'default.rgw.log', '.mgr', 'backups', 'default.rgw.control']
Dec 06 10:21:23 compute-0 ceph-mgr[74618]: [balancer INFO root] prepared 0/10 upmap changes
Dec 06 10:21:23 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 06 10:21:23 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 10:21:24 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1210: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec 06 10:21:24 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 10:21:24 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 10:21:24 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 10:21:24 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 10:21:24 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 10:21:24 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 10:21:24 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:21:24 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:21:24 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:21:24.262 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:21:24 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 10:21:24 compute-0 nova_compute[254819]: 2025-12-06 10:21:24.450 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:21:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] _maybe_adjust
Dec 06 10:21:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:21:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 06 10:21:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:21:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec 06 10:21:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:21:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 10:21:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:21:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 10:21:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:21:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Dec 06 10:21:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:21:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Dec 06 10:21:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:21:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 10:21:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:21:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec 06 10:21:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:21:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 06 10:21:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:21:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec 06 10:21:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:21:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 10:21:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:21:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
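Every pg_autoscaler line above satisfies pg_target = ratio × bias × K for a single constant K; solving any line (e.g. '.mgr') gives K = 300, which is plausibly mon_target_pg_per_osd (default 100) times this cluster's 3 OSDs. That factorization is inferred from the arithmetic, not stated anywhere in the log. A sketch checking a few pools against the logged targets:

```python
# Verify the pg_autoscaler arithmetic from the log lines above.
# K = 300 is solved from the '.mgr' line; interpreting it as
# mon_target_pg_per_osd (100) x 3 OSDs is an inference, not logged.
K = 300
pools = {  # name: (usage ratio, bias), copied verbatim from the log
    ".mgr":               (7.185749983720779e-06, 1.0),
    "vms":                (6.359070782053786e-08, 1.0),
    "images":             (0.000665858301588852,  1.0),
    "cephfs.cephfs.meta": (5.087256625643029e-07, 4.0),
}
for name, (ratio, bias) in pools.items():
    print(f"{name}: pg target = {ratio * bias * K}")
# -> 0.00215572..., 1.90772e-05, 0.199757..., 0.000610470...
#    matching the "pg target" values logged for each pool.
```

The ideal targets are all far below the pools' current pg_num, yet each line ends "quantized to N (current N)": the autoscaler leaves pg_num alone because by default it only intervenes when ideal and actual differ materially (roughly a 3x threshold), so nothing shrinks here.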
Dec 06 10:21:24 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:21:24 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:21:24 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:21:24.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:21:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 06 10:21:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 10:21:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 10:21:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 10:21:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 10:21:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 06 10:21:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 10:21:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 10:21:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 10:21:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 10:21:25 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:21:25 compute-0 ceph-mon[74327]: pgmap v1210: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec 06 10:21:26 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1211: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:21:26 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:21:26 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:21:26 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:21:26.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:21:26 compute-0 ceph-mon[74327]: pgmap v1211: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:21:26 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:21:26 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:21:26 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:21:26.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:21:27 compute-0 nova_compute[254819]: 2025-12-06 10:21:27.096 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:21:27 compute-0 podman[290864]: 2025-12-06 10:21:27.487257279 +0000 UTC m=+0.116376338 container health_status ed08b04f63d3695f0aa2f04fcc706897af4aa24f286fa3cd10a4323ee2c868ab (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 10:21:27 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:21:27.708Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 06 10:21:28 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1212: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec 06 10:21:28 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:21:28 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:21:28 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:21:28.267 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:21:28 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:21:28 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:21:28 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:21:28.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:21:29 compute-0 ceph-mon[74327]: pgmap v1212: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec 06 10:21:29 compute-0 nova_compute[254819]: 2025-12-06 10:21:29.453 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:21:30 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:21:30 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1213: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:21:30 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:21:30 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:21:30 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:21:30.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:21:30 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:21:30 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:21:30 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:21:30.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:21:30 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:21:30] "GET /metrics HTTP/1.1" 200 48459 "" "Prometheus/2.51.0"
Dec 06 10:21:30 compute-0 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:21:30] "GET /metrics HTTP/1.1" 200 48459 "" "Prometheus/2.51.0"
Dec 06 10:21:31 compute-0 ceph-mon[74327]: pgmap v1213: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:21:32 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1214: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:21:32 compute-0 nova_compute[254819]: 2025-12-06 10:21:32.100 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:21:32 compute-0 ceph-mon[74327]: pgmap v1214: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:21:32 compute-0 rsyslogd[1004]: imjournal: 2040 messages lost due to rate-limiting (20000 allowed within 600 seconds)
Dec 06 10:21:32 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:21:32 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:21:32 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:21:32.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:21:32 compute-0 podman[290897]: 2025-12-06 10:21:32.442987943 +0000 UTC m=+0.065994967 container health_status ec1e66d2175087ad7a28a9a6b5a784e30128df3f5e268959d009e364cd5120f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Dec 06 10:21:32 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:21:32 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:21:32 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:21:32.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:21:34 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1215: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec 06 10:21:34 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:21:34 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:21:34 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:21:34.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:21:34 compute-0 nova_compute[254819]: 2025-12-06 10:21:34.457 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:21:34 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:21:34 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:21:34 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:21:34.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:21:35 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:21:35 compute-0 ceph-mon[74327]: pgmap v1215: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec 06 10:21:35 compute-0 nova_compute[254819]: 2025-12-06 10:21:35.339 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:21:36 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1216: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:21:36 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:21:36 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:21:36 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:21:36.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:21:36 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:21:36 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:21:36 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:21:36.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:21:36 compute-0 sudo[290920]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 10:21:36 compute-0 sudo[290920]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:21:36 compute-0 sudo[290920]: pam_unix(sudo:session): session closed for user root
Dec 06 10:21:36 compute-0 sudo[290929]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 10:21:36 compute-0 sudo[290929]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:21:36 compute-0 sudo[290929]: pam_unix(sudo:session): session closed for user root
Dec 06 10:21:36 compute-0 sudo[290969]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Dec 06 10:21:36 compute-0 sudo[290969]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:21:37 compute-0 nova_compute[254819]: 2025-12-06 10:21:37.101 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:21:37 compute-0 ceph-mon[74327]: pgmap v1216: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:21:37 compute-0 sudo[290969]: pam_unix(sudo:session): session closed for user root
Dec 06 10:21:37 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 06 10:21:37 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 10:21:37 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 06 10:21:37 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 10:21:37 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 06 10:21:37 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 10:21:37 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Dec 06 10:21:37 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 10:21:37 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 06 10:21:37 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 10:21:37 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 06 10:21:37 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 10:21:37 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 06 10:21:37 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 10:21:37 compute-0 sudo[291028]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 10:21:37 compute-0 sudo[291028]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:21:37 compute-0 sudo[291028]: pam_unix(sudo:session): session closed for user root
Dec 06 10:21:37 compute-0 sudo[291053]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 5ecd3f74-dade-5fc4-92ce-8950ae424258 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Dec 06 10:21:37 compute-0 sudo[291053]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:21:37 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:21:37.710Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 06 10:21:37 compute-0 podman[291120]: 2025-12-06 10:21:37.870064956 +0000 UTC m=+0.057538678 container create 78f7dc750d1ece00c243202615ec068aa2c9e7c7671d7d9fbff9c3c8c8f7c4f5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_liskov, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec 06 10:21:37 compute-0 systemd[1]: Started libpod-conmon-78f7dc750d1ece00c243202615ec068aa2c9e7c7671d7d9fbff9c3c8c8f7c4f5.scope.
Dec 06 10:21:37 compute-0 podman[291120]: 2025-12-06 10:21:37.839569156 +0000 UTC m=+0.027042938 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 06 10:21:37 compute-0 systemd[1]: Started libcrun container.
Dec 06 10:21:37 compute-0 podman[291120]: 2025-12-06 10:21:37.957314991 +0000 UTC m=+0.144788723 container init 78f7dc750d1ece00c243202615ec068aa2c9e7c7671d7d9fbff9c3c8c8f7c4f5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_liskov, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 06 10:21:37 compute-0 podman[291120]: 2025-12-06 10:21:37.965893674 +0000 UTC m=+0.153367346 container start 78f7dc750d1ece00c243202615ec068aa2c9e7c7671d7d9fbff9c3c8c8f7c4f5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_liskov, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 10:21:37 compute-0 podman[291120]: 2025-12-06 10:21:37.969129032 +0000 UTC m=+0.156602744 container attach 78f7dc750d1ece00c243202615ec068aa2c9e7c7671d7d9fbff9c3c8c8f7c4f5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_liskov, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 10:21:37 compute-0 elastic_liskov[291137]: 167 167
Dec 06 10:21:37 compute-0 systemd[1]: libpod-78f7dc750d1ece00c243202615ec068aa2c9e7c7671d7d9fbff9c3c8c8f7c4f5.scope: Deactivated successfully.
Dec 06 10:21:37 compute-0 conmon[291137]: conmon 78f7dc750d1ece00c243 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-78f7dc750d1ece00c243202615ec068aa2c9e7c7671d7d9fbff9c3c8c8f7c4f5.scope/container/memory.events
Dec 06 10:21:37 compute-0 podman[291120]: 2025-12-06 10:21:37.97642572 +0000 UTC m=+0.163899402 container died 78f7dc750d1ece00c243202615ec068aa2c9e7c7671d7d9fbff9c3c8c8f7c4f5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_liskov, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Dec 06 10:21:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-9578f9aa551dad06e43a90725efa4304bc1bdd8a9c5ab7752440751f45ba22b1-merged.mount: Deactivated successfully.
Dec 06 10:21:38 compute-0 podman[291120]: 2025-12-06 10:21:38.030543294 +0000 UTC m=+0.218016996 container remove 78f7dc750d1ece00c243202615ec068aa2c9e7c7671d7d9fbff9c3c8c8f7c4f5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_liskov, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 10:21:38 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1217: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec 06 10:21:38 compute-0 systemd[1]: libpod-conmon-78f7dc750d1ece00c243202615ec068aa2c9e7c7671d7d9fbff9c3c8c8f7c4f5.scope: Deactivated successfully.
Dec 06 10:21:38 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 10:21:38 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 10:21:38 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 10:21:38 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 10:21:38 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 10:21:38 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 10:21:38 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 10:21:38 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:21:38 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:21:38 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:21:38.282 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:21:38 compute-0 podman[291160]: 2025-12-06 10:21:38.289265946 +0000 UTC m=+0.063977363 container create a834bc48ef3a7bb7632d96047ee13909fb479719c5515815da6aff0fa9145612 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_mirzakhani, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 10:21:38 compute-0 systemd[1]: Started libpod-conmon-a834bc48ef3a7bb7632d96047ee13909fb479719c5515815da6aff0fa9145612.scope.
Dec 06 10:21:38 compute-0 systemd[1]: Started libcrun container.
Dec 06 10:21:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d344afdf5bcdb51426d7bd2772343a9f37649641ce2cab8f7ac2b3d5cfcc37c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 10:21:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d344afdf5bcdb51426d7bd2772343a9f37649641ce2cab8f7ac2b3d5cfcc37c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 10:21:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d344afdf5bcdb51426d7bd2772343a9f37649641ce2cab8f7ac2b3d5cfcc37c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 10:21:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d344afdf5bcdb51426d7bd2772343a9f37649641ce2cab8f7ac2b3d5cfcc37c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 10:21:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d344afdf5bcdb51426d7bd2772343a9f37649641ce2cab8f7ac2b3d5cfcc37c/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 06 10:21:38 compute-0 podman[291160]: 2025-12-06 10:21:38.268417749 +0000 UTC m=+0.043129196 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 06 10:21:38 compute-0 podman[291160]: 2025-12-06 10:21:38.376436859 +0000 UTC m=+0.151148366 container init a834bc48ef3a7bb7632d96047ee13909fb479719c5515815da6aff0fa9145612 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_mirzakhani, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 10:21:38 compute-0 podman[291160]: 2025-12-06 10:21:38.392187738 +0000 UTC m=+0.166899155 container start a834bc48ef3a7bb7632d96047ee13909fb479719c5515815da6aff0fa9145612 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_mirzakhani, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0)
Dec 06 10:21:38 compute-0 podman[291160]: 2025-12-06 10:21:38.396298359 +0000 UTC m=+0.171009776 container attach a834bc48ef3a7bb7632d96047ee13909fb479719c5515815da6aff0fa9145612 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_mirzakhani, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 10:21:38 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:21:38 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:21:38 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:21:38.515 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:21:38 compute-0 naughty_mirzakhani[291176]: --> passed data devices: 0 physical, 1 LVM
Dec 06 10:21:38 compute-0 naughty_mirzakhani[291176]: --> All data devices are unavailable
Dec 06 10:21:38 compute-0 systemd[1]: libpod-a834bc48ef3a7bb7632d96047ee13909fb479719c5515815da6aff0fa9145612.scope: Deactivated successfully.
Dec 06 10:21:38 compute-0 conmon[291176]: conmon a834bc48ef3a7bb7632d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a834bc48ef3a7bb7632d96047ee13909fb479719c5515815da6aff0fa9145612.scope/container/memory.events
Dec 06 10:21:38 compute-0 podman[291160]: 2025-12-06 10:21:38.793325836 +0000 UTC m=+0.568037293 container died a834bc48ef3a7bb7632d96047ee13909fb479719c5515815da6aff0fa9145612 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_mirzakhani, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 10:21:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-6d344afdf5bcdb51426d7bd2772343a9f37649641ce2cab8f7ac2b3d5cfcc37c-merged.mount: Deactivated successfully.
Dec 06 10:21:38 compute-0 podman[291160]: 2025-12-06 10:21:38.840345786 +0000 UTC m=+0.615057193 container remove a834bc48ef3a7bb7632d96047ee13909fb479719c5515815da6aff0fa9145612 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_mirzakhani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325)
Dec 06 10:21:38 compute-0 systemd[1]: libpod-conmon-a834bc48ef3a7bb7632d96047ee13909fb479719c5515815da6aff0fa9145612.scope: Deactivated successfully.
Dec 06 10:21:38 compute-0 sudo[291053]: pam_unix(sudo:session): session closed for user root
Dec 06 10:21:38 compute-0 sudo[291205]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 10:21:38 compute-0 sudo[291205]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:21:38 compute-0 sudo[291205]: pam_unix(sudo:session): session closed for user root
Dec 06 10:21:38 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 06 10:21:38 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 10:21:38 compute-0 sudo[291230]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 5ecd3f74-dade-5fc4-92ce-8950ae424258 -- lvm list --format json
Dec 06 10:21:38 compute-0 sudo[291230]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:21:39 compute-0 ceph-mon[74327]: pgmap v1217: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec 06 10:21:39 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 10:21:39 compute-0 podman[291296]: 2025-12-06 10:21:39.364710129 +0000 UTC m=+0.042673892 container create dd83952e8d273c71bf6683c2c070cf23db910410d83dd20faf5eb5b893065616 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_lichterman, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 10:21:39 compute-0 systemd[1]: Started libpod-conmon-dd83952e8d273c71bf6683c2c070cf23db910410d83dd20faf5eb5b893065616.scope.
Dec 06 10:21:39 compute-0 systemd[1]: Started libcrun container.
Dec 06 10:21:39 compute-0 podman[291296]: 2025-12-06 10:21:39.345532067 +0000 UTC m=+0.023495830 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 06 10:21:39 compute-0 podman[291296]: 2025-12-06 10:21:39.444338496 +0000 UTC m=+0.122302269 container init dd83952e8d273c71bf6683c2c070cf23db910410d83dd20faf5eb5b893065616 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_lichterman, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 10:21:39 compute-0 podman[291296]: 2025-12-06 10:21:39.45033481 +0000 UTC m=+0.128298553 container start dd83952e8d273c71bf6683c2c070cf23db910410d83dd20faf5eb5b893065616 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_lichterman, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, ceph=True)
Dec 06 10:21:39 compute-0 podman[291296]: 2025-12-06 10:21:39.454442572 +0000 UTC m=+0.132406645 container attach dd83952e8d273c71bf6683c2c070cf23db910410d83dd20faf5eb5b893065616 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_lichterman, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 10:21:39 compute-0 gallant_lichterman[291313]: 167 167
Dec 06 10:21:39 compute-0 systemd[1]: libpod-dd83952e8d273c71bf6683c2c070cf23db910410d83dd20faf5eb5b893065616.scope: Deactivated successfully.
Dec 06 10:21:39 compute-0 podman[291296]: 2025-12-06 10:21:39.456828457 +0000 UTC m=+0.134792220 container died dd83952e8d273c71bf6683c2c070cf23db910410d83dd20faf5eb5b893065616 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_lichterman, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid)
Dec 06 10:21:39 compute-0 nova_compute[254819]: 2025-12-06 10:21:39.461 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:21:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-15c3a16704ee358001658fc16cdc7ee50faf400914ce2a38f3a086fb7b9b980e-merged.mount: Deactivated successfully.
Dec 06 10:21:39 compute-0 podman[291296]: 2025-12-06 10:21:39.493986898 +0000 UTC m=+0.171950641 container remove dd83952e8d273c71bf6683c2c070cf23db910410d83dd20faf5eb5b893065616 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_lichterman, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True)
Dec 06 10:21:39 compute-0 systemd[1]: libpod-conmon-dd83952e8d273c71bf6683c2c070cf23db910410d83dd20faf5eb5b893065616.scope: Deactivated successfully.
Dec 06 10:21:39 compute-0 podman[291336]: 2025-12-06 10:21:39.66707973 +0000 UTC m=+0.048152982 container create 5ea816e6eadb89588ec2743a25b04bad2bb2e892b9ce8b443960911b1314024e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_ritchie, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Dec 06 10:21:39 compute-0 systemd[1]: Started libpod-conmon-5ea816e6eadb89588ec2743a25b04bad2bb2e892b9ce8b443960911b1314024e.scope.
Dec 06 10:21:39 compute-0 podman[291336]: 2025-12-06 10:21:39.644718561 +0000 UTC m=+0.025791823 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 06 10:21:39 compute-0 systemd[1]: Started libcrun container.
Dec 06 10:21:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d7e8a15da444f9409b64cc59c3fa3e8b41633563fb9fc508475ca376bdeb3f28/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 10:21:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d7e8a15da444f9409b64cc59c3fa3e8b41633563fb9fc508475ca376bdeb3f28/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 10:21:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d7e8a15da444f9409b64cc59c3fa3e8b41633563fb9fc508475ca376bdeb3f28/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 10:21:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d7e8a15da444f9409b64cc59c3fa3e8b41633563fb9fc508475ca376bdeb3f28/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 10:21:39 compute-0 podman[291336]: 2025-12-06 10:21:39.76481866 +0000 UTC m=+0.145891962 container init 5ea816e6eadb89588ec2743a25b04bad2bb2e892b9ce8b443960911b1314024e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_ritchie, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_REF=squid)
Dec 06 10:21:39 compute-0 podman[291336]: 2025-12-06 10:21:39.77585329 +0000 UTC m=+0.156926512 container start 5ea816e6eadb89588ec2743a25b04bad2bb2e892b9ce8b443960911b1314024e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_ritchie, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 10:21:39 compute-0 podman[291336]: 2025-12-06 10:21:39.779761246 +0000 UTC m=+0.160834548 container attach 5ea816e6eadb89588ec2743a25b04bad2bb2e892b9ce8b443960911b1314024e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_ritchie, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Dec 06 10:21:40 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:21:40 compute-0 strange_ritchie[291354]: {
Dec 06 10:21:40 compute-0 strange_ritchie[291354]:     "1": [
Dec 06 10:21:40 compute-0 strange_ritchie[291354]:         {
Dec 06 10:21:40 compute-0 strange_ritchie[291354]:             "devices": [
Dec 06 10:21:40 compute-0 strange_ritchie[291354]:                 "/dev/loop3"
Dec 06 10:21:40 compute-0 strange_ritchie[291354]:             ],
Dec 06 10:21:40 compute-0 strange_ritchie[291354]:             "lv_name": "ceph_lv0",
Dec 06 10:21:40 compute-0 strange_ritchie[291354]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 10:21:40 compute-0 strange_ritchie[291354]:             "lv_size": "21470642176",
Dec 06 10:21:40 compute-0 strange_ritchie[291354]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=a55pcS-v3Vt-CJ4W-fqCV-Baze-9VlY-jG9prS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=5ecd3f74-dade-5fc4-92ce-8950ae424258,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=7899c4d8-edb4-4836-b838-c4aa702ad7af,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 06 10:21:40 compute-0 strange_ritchie[291354]:             "lv_uuid": "a55pcS-v3Vt-CJ4W-fqCV-Baze-9VlY-jG9prS",
Dec 06 10:21:40 compute-0 strange_ritchie[291354]:             "name": "ceph_lv0",
Dec 06 10:21:40 compute-0 strange_ritchie[291354]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 10:21:40 compute-0 strange_ritchie[291354]:             "tags": {
Dec 06 10:21:40 compute-0 strange_ritchie[291354]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 06 10:21:40 compute-0 strange_ritchie[291354]:                 "ceph.block_uuid": "a55pcS-v3Vt-CJ4W-fqCV-Baze-9VlY-jG9prS",
Dec 06 10:21:40 compute-0 strange_ritchie[291354]:                 "ceph.cephx_lockbox_secret": "",
Dec 06 10:21:40 compute-0 strange_ritchie[291354]:                 "ceph.cluster_fsid": "5ecd3f74-dade-5fc4-92ce-8950ae424258",
Dec 06 10:21:40 compute-0 strange_ritchie[291354]:                 "ceph.cluster_name": "ceph",
Dec 06 10:21:40 compute-0 strange_ritchie[291354]:                 "ceph.crush_device_class": "",
Dec 06 10:21:40 compute-0 strange_ritchie[291354]:                 "ceph.encrypted": "0",
Dec 06 10:21:40 compute-0 strange_ritchie[291354]:                 "ceph.osd_fsid": "7899c4d8-edb4-4836-b838-c4aa702ad7af",
Dec 06 10:21:40 compute-0 strange_ritchie[291354]:                 "ceph.osd_id": "1",
Dec 06 10:21:40 compute-0 strange_ritchie[291354]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 06 10:21:40 compute-0 strange_ritchie[291354]:                 "ceph.type": "block",
Dec 06 10:21:40 compute-0 strange_ritchie[291354]:                 "ceph.vdo": "0",
Dec 06 10:21:40 compute-0 strange_ritchie[291354]:                 "ceph.with_tpm": "0"
Dec 06 10:21:40 compute-0 strange_ritchie[291354]:             },
Dec 06 10:21:40 compute-0 strange_ritchie[291354]:             "type": "block",
Dec 06 10:21:40 compute-0 strange_ritchie[291354]:             "vg_name": "ceph_vg0"
Dec 06 10:21:40 compute-0 strange_ritchie[291354]:         }
Dec 06 10:21:40 compute-0 strange_ritchie[291354]:     ]
Dec 06 10:21:40 compute-0 strange_ritchie[291354]: }
Dec 06 10:21:40 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1218: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:21:40 compute-0 systemd[1]: libpod-5ea816e6eadb89588ec2743a25b04bad2bb2e892b9ce8b443960911b1314024e.scope: Deactivated successfully.
Dec 06 10:21:40 compute-0 podman[291336]: 2025-12-06 10:21:40.067888929 +0000 UTC m=+0.448962141 container died 5ea816e6eadb89588ec2743a25b04bad2bb2e892b9ce8b443960911b1314024e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_ritchie, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 10:21:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-d7e8a15da444f9409b64cc59c3fa3e8b41633563fb9fc508475ca376bdeb3f28-merged.mount: Deactivated successfully.
Dec 06 10:21:40 compute-0 podman[291336]: 2025-12-06 10:21:40.128553011 +0000 UTC m=+0.509626223 container remove 5ea816e6eadb89588ec2743a25b04bad2bb2e892b9ce8b443960911b1314024e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_ritchie, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec 06 10:21:40 compute-0 systemd[1]: libpod-conmon-5ea816e6eadb89588ec2743a25b04bad2bb2e892b9ce8b443960911b1314024e.scope: Deactivated successfully.
Dec 06 10:21:40 compute-0 ceph-mon[74327]: pgmap v1218: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:21:40 compute-0 sudo[291230]: pam_unix(sudo:session): session closed for user root
Dec 06 10:21:40 compute-0 sudo[291375]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 10:21:40 compute-0 sudo[291375]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:21:40 compute-0 sudo[291375]: pam_unix(sudo:session): session closed for user root
Dec 06 10:21:40 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:21:40 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:21:40 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:21:40.284 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:21:40 compute-0 sudo[291400]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 5ecd3f74-dade-5fc4-92ce-8950ae424258 -- raw list --format json
Dec 06 10:21:40 compute-0 sudo[291400]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:21:40 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:21:40 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:21:40 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:21:40.518 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:21:40 compute-0 podman[291467]: 2025-12-06 10:21:40.723989568 +0000 UTC m=+0.042955610 container create 4d303a67b5a01721ecc5fb23199045b702c77e93a6e75e3070fc1ef3454bb2ab (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_feynman, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1)
Dec 06 10:21:40 compute-0 systemd[1]: Started libpod-conmon-4d303a67b5a01721ecc5fb23199045b702c77e93a6e75e3070fc1ef3454bb2ab.scope.
Dec 06 10:21:40 compute-0 podman[291467]: 2025-12-06 10:21:40.704322133 +0000 UTC m=+0.023288215 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 06 10:21:40 compute-0 systemd[1]: Started libcrun container.
Dec 06 10:21:40 compute-0 podman[291467]: 2025-12-06 10:21:40.827915507 +0000 UTC m=+0.146881649 container init 4d303a67b5a01721ecc5fb23199045b702c77e93a6e75e3070fc1ef3454bb2ab (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_feynman, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Dec 06 10:21:40 compute-0 podman[291467]: 2025-12-06 10:21:40.836603173 +0000 UTC m=+0.155569215 container start 4d303a67b5a01721ecc5fb23199045b702c77e93a6e75e3070fc1ef3454bb2ab (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_feynman, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 10:21:40 compute-0 podman[291467]: 2025-12-06 10:21:40.839987776 +0000 UTC m=+0.158953928 container attach 4d303a67b5a01721ecc5fb23199045b702c77e93a6e75e3070fc1ef3454bb2ab (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_feynman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 10:21:40 compute-0 pedantic_feynman[291483]: 167 167
Dec 06 10:21:40 compute-0 systemd[1]: libpod-4d303a67b5a01721ecc5fb23199045b702c77e93a6e75e3070fc1ef3454bb2ab.scope: Deactivated successfully.
Dec 06 10:21:40 compute-0 conmon[291483]: conmon 4d303a67b5a01721ecc5 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-4d303a67b5a01721ecc5fb23199045b702c77e93a6e75e3070fc1ef3454bb2ab.scope/container/memory.events
Dec 06 10:21:40 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:21:40] "GET /metrics HTTP/1.1" 200 48461 "" "Prometheus/2.51.0"
Dec 06 10:21:40 compute-0 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:21:40] "GET /metrics HTTP/1.1" 200 48461 "" "Prometheus/2.51.0"
Dec 06 10:21:40 compute-0 podman[291488]: 2025-12-06 10:21:40.906971529 +0000 UTC m=+0.042872808 container died 4d303a67b5a01721ecc5fb23199045b702c77e93a6e75e3070fc1ef3454bb2ab (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_feynman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 10:21:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-570760ce12b8d998561e5c4629b2712bfa8dfb55a6763b545e981e73579a2ff0-merged.mount: Deactivated successfully.
Dec 06 10:21:40 compute-0 podman[291488]: 2025-12-06 10:21:40.950068832 +0000 UTC m=+0.085970091 container remove 4d303a67b5a01721ecc5fb23199045b702c77e93a6e75e3070fc1ef3454bb2ab (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_feynman, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 06 10:21:40 compute-0 systemd[1]: libpod-conmon-4d303a67b5a01721ecc5fb23199045b702c77e93a6e75e3070fc1ef3454bb2ab.scope: Deactivated successfully.
Dec 06 10:21:41 compute-0 podman[291510]: 2025-12-06 10:21:41.167792499 +0000 UTC m=+0.049851219 container create 9a0ed3b11949b6fedd06b9aca0b70360192397831cb85cf0c6ba7575abcdcf10 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_wilbur, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 10:21:41 compute-0 systemd[1]: Started libpod-conmon-9a0ed3b11949b6fedd06b9aca0b70360192397831cb85cf0c6ba7575abcdcf10.scope.
Dec 06 10:21:41 compute-0 systemd[1]: Started libcrun container.
Dec 06 10:21:41 compute-0 podman[291510]: 2025-12-06 10:21:41.14836755 +0000 UTC m=+0.030426260 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 06 10:21:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49a2f3c9586531edc962846228337e11e163769b5fb26187b6afc4be00d195d9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 10:21:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49a2f3c9586531edc962846228337e11e163769b5fb26187b6afc4be00d195d9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 10:21:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49a2f3c9586531edc962846228337e11e163769b5fb26187b6afc4be00d195d9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 10:21:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49a2f3c9586531edc962846228337e11e163769b5fb26187b6afc4be00d195d9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 10:21:41 compute-0 podman[291510]: 2025-12-06 10:21:41.26080155 +0000 UTC m=+0.142860330 container init 9a0ed3b11949b6fedd06b9aca0b70360192397831cb85cf0c6ba7575abcdcf10 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_wilbur, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 10:21:41 compute-0 podman[291510]: 2025-12-06 10:21:41.274891494 +0000 UTC m=+0.156950194 container start 9a0ed3b11949b6fedd06b9aca0b70360192397831cb85cf0c6ba7575abcdcf10 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_wilbur, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True)
Dec 06 10:21:41 compute-0 podman[291510]: 2025-12-06 10:21:41.278860981 +0000 UTC m=+0.160919701 container attach 9a0ed3b11949b6fedd06b9aca0b70360192397831cb85cf0c6ba7575abcdcf10 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_wilbur, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec 06 10:21:42 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1219: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:21:42 compute-0 lvm[291602]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 06 10:21:42 compute-0 lvm[291602]: VG ceph_vg0 finished
Dec 06 10:21:42 compute-0 friendly_wilbur[291526]: {}
Dec 06 10:21:42 compute-0 nova_compute[254819]: 2025-12-06 10:21:42.105 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:21:42 compute-0 systemd[1]: libpod-9a0ed3b11949b6fedd06b9aca0b70360192397831cb85cf0c6ba7575abcdcf10.scope: Deactivated successfully.
Dec 06 10:21:42 compute-0 systemd[1]: libpod-9a0ed3b11949b6fedd06b9aca0b70360192397831cb85cf0c6ba7575abcdcf10.scope: Consumed 1.435s CPU time.
Dec 06 10:21:42 compute-0 podman[291510]: 2025-12-06 10:21:42.138178402 +0000 UTC m=+1.020237082 container died 9a0ed3b11949b6fedd06b9aca0b70360192397831cb85cf0c6ba7575abcdcf10 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_wilbur, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, OSD_FLAVOR=default, ceph=True)
Dec 06 10:21:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-49a2f3c9586531edc962846228337e11e163769b5fb26187b6afc4be00d195d9-merged.mount: Deactivated successfully.
Dec 06 10:21:42 compute-0 podman[291510]: 2025-12-06 10:21:42.185554121 +0000 UTC m=+1.067612801 container remove 9a0ed3b11949b6fedd06b9aca0b70360192397831cb85cf0c6ba7575abcdcf10 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_wilbur, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 10:21:42 compute-0 systemd[1]: libpod-conmon-9a0ed3b11949b6fedd06b9aca0b70360192397831cb85cf0c6ba7575abcdcf10.scope: Deactivated successfully.
Dec 06 10:21:42 compute-0 sudo[291400]: pam_unix(sudo:session): session closed for user root
Dec 06 10:21:42 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 06 10:21:42 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 10:21:42 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 06 10:21:42 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 10:21:42 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:21:42 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 10:21:42 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:21:42.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 10:21:42 compute-0 sudo[291620]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 10:21:42 compute-0 sudo[291620]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:21:42 compute-0 sudo[291620]: pam_unix(sudo:session): session closed for user root
Dec 06 10:21:42 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:21:42 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:21:42 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:21:42.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:21:43 compute-0 ceph-mon[74327]: pgmap v1219: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:21:43 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 10:21:43 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 10:21:44 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1220: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec 06 10:21:44 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:21:44 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:21:44 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:21:44.288 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:21:44 compute-0 nova_compute[254819]: 2025-12-06 10:21:44.464 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:21:44 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:21:44 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:21:44 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:21:44.524 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:21:45 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:21:45 compute-0 ceph-mon[74327]: pgmap v1220: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec 06 10:21:46 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1221: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:21:46 compute-0 ceph-mon[74327]: from='client.? 192.168.122.10:0/2229648088' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 10:21:46 compute-0 ceph-mon[74327]: from='client.? 192.168.122.10:0/2229648088' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 10:21:46 compute-0 ceph-mon[74327]: pgmap v1221: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:21:46 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:21:46 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:21:46 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:21:46.291 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:21:46 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:21:46 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:21:46 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:21:46.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:21:47 compute-0 nova_compute[254819]: 2025-12-06 10:21:47.109 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:21:47 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:21:47.711Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec 06 10:21:47 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:21:47.711Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec 06 10:21:47 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:21:47.711Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 06 10:21:48 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1222: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec 06 10:21:48 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:21:48 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:21:48 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:21:48.294 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:21:48 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:21:48 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:21:48 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:21:48.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:21:49 compute-0 ceph-mon[74327]: pgmap v1222: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec 06 10:21:49 compute-0 nova_compute[254819]: 2025-12-06 10:21:49.468 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:21:50 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:21:50 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1223: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:21:50 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:21:50 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:21:50 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:21:50.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:21:50 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:21:50 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:21:50 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:21:50.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:21:50 compute-0 nova_compute[254819]: 2025-12-06 10:21:50.748 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:21:50 compute-0 nova_compute[254819]: 2025-12-06 10:21:50.749 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:21:50 compute-0 nova_compute[254819]: 2025-12-06 10:21:50.801 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:21:50 compute-0 nova_compute[254819]: 2025-12-06 10:21:50.801 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:21:50 compute-0 nova_compute[254819]: 2025-12-06 10:21:50.802 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:21:50 compute-0 nova_compute[254819]: 2025-12-06 10:21:50.802 254824 DEBUG nova.compute.resource_tracker [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 10:21:50 compute-0 nova_compute[254819]: 2025-12-06 10:21:50.803 254824 DEBUG oslo_concurrency.processutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 10:21:50 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:21:50] "GET /metrics HTTP/1.1" 200 48461 "" "Prometheus/2.51.0"
Dec 06 10:21:50 compute-0 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:21:50] "GET /metrics HTTP/1.1" 200 48461 "" "Prometheus/2.51.0"
Dec 06 10:21:51 compute-0 ceph-mon[74327]: pgmap v1223: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:21:51 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 06 10:21:51 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3012262960' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 10:21:51 compute-0 nova_compute[254819]: 2025-12-06 10:21:51.316 254824 DEBUG oslo_concurrency.processutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.513s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 10:21:51 compute-0 nova_compute[254819]: 2025-12-06 10:21:51.489 254824 WARNING nova.virt.libvirt.driver [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 10:21:51 compute-0 nova_compute[254819]: 2025-12-06 10:21:51.491 254824 DEBUG nova.compute.resource_tracker [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4444MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 06 10:21:51 compute-0 nova_compute[254819]: 2025-12-06 10:21:51.491 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:21:51 compute-0 nova_compute[254819]: 2025-12-06 10:21:51.492 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:21:51 compute-0 nova_compute[254819]: 2025-12-06 10:21:51.578 254824 DEBUG nova.compute.resource_tracker [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 10:21:51 compute-0 nova_compute[254819]: 2025-12-06 10:21:51.579 254824 DEBUG nova.compute.resource_tracker [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 06 10:21:51 compute-0 nova_compute[254819]: 2025-12-06 10:21:51.596 254824 DEBUG oslo_concurrency.processutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 10:21:52 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 06 10:21:52 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/514334911' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 10:21:52 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1224: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:21:52 compute-0 nova_compute[254819]: 2025-12-06 10:21:52.058 254824 DEBUG oslo_concurrency.processutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 10:21:52 compute-0 nova_compute[254819]: 2025-12-06 10:21:52.064 254824 DEBUG nova.compute.provider_tree [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Inventory has not changed in ProviderTree for provider: 06a9c7d1-c74c-47ea-9e97-16acfab6aa88 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 10:21:52 compute-0 nova_compute[254819]: 2025-12-06 10:21:52.082 254824 DEBUG nova.scheduler.client.report [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Inventory has not changed for provider 06a9c7d1-c74c-47ea-9e97-16acfab6aa88 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 10:21:52 compute-0 nova_compute[254819]: 2025-12-06 10:21:52.083 254824 DEBUG nova.compute.resource_tracker [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 10:21:52 compute-0 nova_compute[254819]: 2025-12-06 10:21:52.083 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.592s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:21:52 compute-0 nova_compute[254819]: 2025-12-06 10:21:52.111 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:21:52 compute-0 ceph-mon[74327]: from='client.? 192.168.122.100:0/3012262960' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 10:21:52 compute-0 ceph-mon[74327]: from='client.? 192.168.122.100:0/514334911' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 10:21:52 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:21:52 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 10:21:52 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:21:52.297 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 10:21:52 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:21:52 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:21:52 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:21:52.536 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:21:53 compute-0 ceph-mon[74327]: pgmap v1224: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:21:53 compute-0 ceph-mon[74327]: from='client.? 192.168.122.101:0/3576677501' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 10:21:53 compute-0 podman[291699]: 2025-12-06 10:21:53.429264462 +0000 UTC m=+0.053333903 container health_status a9cf33c1bf7891b3fdd9db4717ad5f3587fe9e5acf9171920c6d6334b70a502a (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Dec 06 10:21:53 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 06 10:21:53 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 10:21:54 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 10:21:54 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 10:21:54 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 10:21:54 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 10:21:54 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 10:21:54 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 10:21:54 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1225: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec 06 10:21:54 compute-0 ceph-mon[74327]: from='client.? 192.168.122.101:0/1265861855' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 10:21:54 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 10:21:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:21:54.249 162267 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:21:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:21:54.250 162267 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:21:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:21:54.250 162267 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:21:54 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:21:54 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:21:54 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:21:54.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:21:54 compute-0 nova_compute[254819]: 2025-12-06 10:21:54.473 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:21:54 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:21:54 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:21:54 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:21:54.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:21:55 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:21:55 compute-0 ceph-mon[74327]: pgmap v1225: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec 06 10:21:56 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1226: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:21:56 compute-0 nova_compute[254819]: 2025-12-06 10:21:56.084 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:21:56 compute-0 nova_compute[254819]: 2025-12-06 10:21:56.085 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:21:56 compute-0 nova_compute[254819]: 2025-12-06 10:21:56.085 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:21:56 compute-0 nova_compute[254819]: 2025-12-06 10:21:56.085 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:21:56 compute-0 ceph-mon[74327]: pgmap v1226: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:21:56 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:21:56 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:21:56 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:21:56.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:21:56 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:21:56 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:21:56 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:21:56.542 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:21:56 compute-0 nova_compute[254819]: 2025-12-06 10:21:56.749 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:21:56 compute-0 nova_compute[254819]: 2025-12-06 10:21:56.750 254824 DEBUG nova.compute.manager [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 06 10:21:56 compute-0 nova_compute[254819]: 2025-12-06 10:21:56.750 254824 DEBUG nova.compute.manager [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 06 10:21:56 compute-0 nova_compute[254819]: 2025-12-06 10:21:56.773 254824 DEBUG nova.compute.manager [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 06 10:21:56 compute-0 sudo[291723]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 10:21:56 compute-0 sudo[291723]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:21:56 compute-0 sudo[291723]: pam_unix(sudo:session): session closed for user root
Dec 06 10:21:57 compute-0 nova_compute[254819]: 2025-12-06 10:21:57.113 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:21:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:21:57.713Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 06 10:21:57 compute-0 nova_compute[254819]: 2025-12-06 10:21:57.749 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:21:57 compute-0 nova_compute[254819]: 2025-12-06 10:21:57.749 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:21:57 compute-0 nova_compute[254819]: 2025-12-06 10:21:57.750 254824 DEBUG nova.compute.manager [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 06 10:21:58 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1227: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec 06 10:21:58 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:21:58 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:21:58 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:21:58.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:21:58 compute-0 podman[291750]: 2025-12-06 10:21:58.478644955 +0000 UTC m=+0.106085439 container health_status ed08b04f63d3695f0aa2f04fcc706897af4aa24f286fa3cd10a4323ee2c868ab (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.build-date=20251125)
Dec 06 10:21:58 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:21:58 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:21:58 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:21:58.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:21:59 compute-0 nova_compute[254819]: 2025-12-06 10:21:59.475 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:21:59 compute-0 ceph-mon[74327]: pgmap v1227: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec 06 10:22:00 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:22:00 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1228: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:22:00 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:22:00 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:22:00 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:22:00.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:22:00 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:22:00 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:22:00 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:22:00.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:22:00 compute-0 radosgw[94308]: INFO: RGWReshardLock::lock found lock on reshard.0000000001 to be held by another RGW process; skipping for now
Dec 06 10:22:00 compute-0 ceph-mon[74327]: pgmap v1228: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:22:00 compute-0 radosgw[94308]: INFO: RGWReshardLock::lock found lock on reshard.0000000003 to be held by another RGW process; skipping for now
Dec 06 10:22:00 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:22:00] "GET /metrics HTTP/1.1" 200 48461 "" "Prometheus/2.51.0"
Dec 06 10:22:00 compute-0 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:22:00] "GET /metrics HTTP/1.1" 200 48461 "" "Prometheus/2.51.0"
Dec 06 10:22:02 compute-0 ceph-mon[74327]: from='client.? 192.168.122.102:0/3408085492' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 10:22:02 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1229: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:22:02 compute-0 nova_compute[254819]: 2025-12-06 10:22:02.114 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:22:02 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:22:02 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:22:02 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:22:02.308 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:22:02 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:22:02 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:22:02 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:22:02.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:22:03 compute-0 ceph-mon[74327]: pgmap v1229: 337 pgs: 337 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:22:03 compute-0 ceph-mon[74327]: from='client.? 192.168.122.102:0/3417508859' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 10:22:03 compute-0 podman[291780]: 2025-12-06 10:22:03.4539136 +0000 UTC m=+0.082613739 container health_status ec1e66d2175087ad7a28a9a6b5a784e30128df3f5e268959d009e364cd5120f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Dec 06 10:22:04 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1230: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 29 KiB/s rd, 0 B/s wr, 47 op/s
Dec 06 10:22:04 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:22:04 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:22:04 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:22:04.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:22:04 compute-0 ceph-mon[74327]: pgmap v1230: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 29 KiB/s rd, 0 B/s wr, 47 op/s
Dec 06 10:22:04 compute-0 nova_compute[254819]: 2025-12-06 10:22:04.479 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:22:04 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:22:04 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:22:04 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:22:04.554 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
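
The "starting new request / req done / beast" triplets above recur every two seconds from 192.168.122.100 and 192.168.122.102: external monitors issuing anonymous `HEAD / HTTP/1.0` probes against radosgw, which answers 200 in under a millisecond. A minimal Python probe in the same spirit (the RGW listening port is not recorded in these lines, so 8080 below is an assumption):

    import http.client

    def rgw_alive(host: str, port: int = 8080, timeout: float = 2.0) -> bool:
        # Anonymous HEAD / probe, mirroring the beast access-log entries above.
        # Port 8080 is a placeholder; the log does not show the listening port.
        conn = http.client.HTTPConnection(host, port, timeout=timeout)
        try:
            conn.request("HEAD", "/")
            return conn.getresponse().status == 200
        finally:
            conn.close()
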
Dec 06 10:22:05 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:22:06 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1231: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 0 B/s wr, 46 op/s
Dec 06 10:22:06 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:22:06 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:22:06 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:22:06.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:22:06 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:22:06 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:22:06 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:22:06.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:22:07 compute-0 nova_compute[254819]: 2025-12-06 10:22:07.116 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:22:07 compute-0 ceph-mon[74327]: pgmap v1231: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 0 B/s wr, 46 op/s
Dec 06 10:22:07 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:22:07.714Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
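
This dispatcher error recurs every ten seconds through the section: the local Alertmanager cannot deliver a notification to the ceph-dashboard webhook receivers on compute-1 and compute-2 within its deadline, so both retries are cancelled. For reference, a stub that would accept these POSTs looks roughly like the sketch below (a test stand-in, not the ceph-dashboard implementation; it speaks plain HTTP, matching the http:// URLs in the message):

    from http.server import BaseHTTPRequestHandler, HTTPServer

    class PrometheusReceiver(BaseHTTPRequestHandler):
        def do_POST(self):
            if self.path != "/api/prometheus_receiver":
                self.send_response(404)
                self.end_headers()
                return
            # Drain the Alertmanager JSON payload and acknowledge it.
            self.rfile.read(int(self.headers.get("Content-Length", 0)))
            self.send_response(200)
            self.end_headers()

    if __name__ == "__main__":
        HTTPServer(("0.0.0.0", 8443), PrometheusReceiver).serve_forever()
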
Dec 06 10:22:08 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1232: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 83 KiB/s rd, 0 B/s wr, 138 op/s
Dec 06 10:22:08 compute-0 ceph-mon[74327]: pgmap v1232: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 83 KiB/s rd, 0 B/s wr, 138 op/s
Dec 06 10:22:08 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:22:08 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:22:08 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:22:08.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:22:08 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:22:08 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:22:08 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:22:08.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:22:08 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 06 10:22:08 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 10:22:09 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
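
The mon_command/audit pair above shows the mgr's cephadm module polling the OSD blocklist; the same query lands again at 10:22:23 and 10:22:38, i.e. on a roughly 15-second cycle. The equivalent query from a shell, as a sketch:

    import json
    import subprocess

    def osd_blocklist() -> list:
        # Same {"prefix": "osd blocklist ls", "format": "json"} command the mgr
        # dispatches above, issued through the ceph CLI instead of the mon API.
        out = subprocess.run(
            ["ceph", "osd", "blocklist", "ls", "--format", "json"],
            capture_output=True, text=True, check=True,
        ).stdout
        return json.loads(out)
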
Dec 06 10:22:09 compute-0 nova_compute[254819]: 2025-12-06 10:22:09.483 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:22:10 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:22:10 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1233: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 83 KiB/s rd, 0 B/s wr, 137 op/s
Dec 06 10:22:10 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:22:10 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:22:10 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:22:10.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:22:10 compute-0 ceph-mon[74327]: pgmap v1233: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 83 KiB/s rd, 0 B/s wr, 137 op/s
Dec 06 10:22:10 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:22:10 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:22:10 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:22:10.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:22:10 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:22:10] "GET /metrics HTTP/1.1" 200 48461 "" "Prometheus/2.51.0"
Dec 06 10:22:10 compute-0 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:22:10] "GET /metrics HTTP/1.1" 200 48461 "" "Prometheus/2.51.0"
Dec 06 10:22:12 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1234: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 83 KiB/s rd, 0 B/s wr, 137 op/s
Dec 06 10:22:12 compute-0 nova_compute[254819]: 2025-12-06 10:22:12.117 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:22:12 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:22:12 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:22:12 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:22:12.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:22:12 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:22:12 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:22:12 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:22:12.569 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:22:12 compute-0 nova_compute[254819]: 2025-12-06 10:22:12.806 254824 DEBUG oslo_concurrency.processutils [None req-1b326720-1719-4a67-9e7f-ab0eb7cb97ad bcb29c3303b24519a22c267aaed79458 3e0ab101ca7547d4a515169a0f2edef3 - - default default] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 10:22:12 compute-0 nova_compute[254819]: 2025-12-06 10:22:12.838 254824 DEBUG oslo_concurrency.processutils [None req-1b326720-1719-4a67-9e7f-ab0eb7cb97ad bcb29c3303b24519a22c267aaed79458 3e0ab101ca7547d4a515169a0f2edef3 - - default default] CMD "env LANG=C uptime" returned: 0 in 0.031s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
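
The two oslo.concurrency lines show how nova-compute samples host load for its resource reports: it shells out to `env LANG=C uptime` (LANG=C pins the output format), and the call returned exit code 0 in 0.031s. A sketch of the same probe with the load averages parsed out:

    import re
    import subprocess

    def host_load_averages():
        # Runs the exact command from the log; LANG=C keeps `uptime` parseable.
        out = subprocess.run(
            ["env", "LANG=C", "uptime"],
            capture_output=True, text=True, check=True,
        ).stdout
        m = re.search(r"load averages?: ([\d.]+),? ([\d.]+),? ([\d.]+)", out)
        if m is None:
            raise ValueError("unexpected uptime output: %r" % out)
        return tuple(float(v) for v in m.groups())  # (1 min, 5 min, 15 min)
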
Dec 06 10:22:13 compute-0 ceph-mon[74327]: pgmap v1234: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 83 KiB/s rd, 0 B/s wr, 137 op/s
Dec 06 10:22:14 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1235: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 83 KiB/s rd, 0 B/s wr, 138 op/s
Dec 06 10:22:14 compute-0 ceph-mon[74327]: pgmap v1235: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 83 KiB/s rd, 0 B/s wr, 138 op/s
Dec 06 10:22:14 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:22:14 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:22:14 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:22:14.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:22:14 compute-0 nova_compute[254819]: 2025-12-06 10:22:14.485 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:22:14 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:22:14 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:22:14 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:22:14.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:22:15 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:22:16 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1236: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 55 KiB/s rd, 0 B/s wr, 91 op/s
Dec 06 10:22:16 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:22:16 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:22:16 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:22:16.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:22:16 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:22:16 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:22:16 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:22:16.575 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:22:16 compute-0 sudo[291817]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 10:22:16 compute-0 sudo[291817]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:22:16 compute-0 sudo[291817]: pam_unix(sudo:session): session closed for user root
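
The sudo/pam_unix triplets from ceph-admin running /bin/true (here and again at 10:22:36) are cephadm's periodic reachability probes: the mgr connects to each host over SSH and confirms it can escalate before doing real work. A stand-in for that check using the OpenSSH client (cephadm itself drives SSH from the mgr process; this is only an illustration):

    import subprocess

    def can_escalate(host: str, user: str = "ceph-admin") -> bool:
        # Mirrors the `sudo /bin/true` probe in the log; -n makes sudo fail
        # rather than prompt if passwordless escalation is not configured.
        result = subprocess.run(
            ["ssh", "-o", "BatchMode=yes", f"{user}@{host}",
             "sudo", "-n", "/bin/true"],
            capture_output=True,
        )
        return result.returncode == 0
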
Dec 06 10:22:17 compute-0 nova_compute[254819]: 2025-12-06 10:22:17.118 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:22:17 compute-0 ceph-mon[74327]: pgmap v1236: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 55 KiB/s rd, 0 B/s wr, 91 op/s
Dec 06 10:22:17 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:22:17.715Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 06 10:22:18 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1237: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 55 KiB/s rd, 0 B/s wr, 91 op/s
Dec 06 10:22:18 compute-0 ceph-mon[74327]: pgmap v1237: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 55 KiB/s rd, 0 B/s wr, 91 op/s
Dec 06 10:22:18 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:22:18 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:22:18 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:22:18.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:22:18 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:22:18 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:22:18 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:22:18.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:22:19 compute-0 nova_compute[254819]: 2025-12-06 10:22:19.538 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:22:19 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:22:19.799 162267 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=14, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '9a:dc:0d', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'b6:0a:c4:b8:be:39'}, ipsec=False) old=SB_Global(nb_cfg=13) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 10:22:19 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:22:19.800 162267 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 06 10:22:19 compute-0 nova_compute[254819]: 2025-12-06 10:22:19.801 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:22:20 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:22:20 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1238: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:22:20 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:22:20 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:22:20 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:22:20.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:22:20 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:22:20 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:22:20 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:22:20.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:22:20 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:22:20] "GET /metrics HTTP/1.1" 200 48461 "" "Prometheus/2.51.0"
Dec 06 10:22:20 compute-0 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:22:20] "GET /metrics HTTP/1.1" 200 48461 "" "Prometheus/2.51.0"
Dec 06 10:22:21 compute-0 ceph-mon[74327]: pgmap v1238: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:22:22 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1239: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:22:22 compute-0 nova_compute[254819]: 2025-12-06 10:22:22.155 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:22:22 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:22:22 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:22:22 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:22:22.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:22:22 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:22:22 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:22:22 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:22:22.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:22:23 compute-0 ceph-mon[74327]: pgmap v1239: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:22:23 compute-0 ceph-mgr[74618]: [balancer INFO root] Optimize plan auto_2025-12-06_10:22:23
Dec 06 10:22:23 compute-0 ceph-mgr[74618]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 06 10:22:23 compute-0 ceph-mgr[74618]: [balancer INFO root] do_upmap
Dec 06 10:22:23 compute-0 ceph-mgr[74618]: [balancer INFO root] pools ['vms', 'cephfs.cephfs.meta', '.nfs', 'volumes', 'backups', 'default.rgw.log', 'cephfs.cephfs.data', '.mgr', 'images', 'default.rgw.meta', '.rgw.root', 'default.rgw.control']
Dec 06 10:22:23 compute-0 ceph-mgr[74618]: [balancer INFO root] prepared 0/10 upmap changes
Dec 06 10:22:23 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 06 10:22:23 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 10:22:24 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 10:22:24 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 10:22:24 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 10:22:24 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 10:22:24 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 10:22:24 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 10:22:24 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1240: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec 06 10:22:24 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 10:22:24 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:22:24 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:22:24 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:22:24.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:22:24 compute-0 podman[291850]: 2025-12-06 10:22:24.451954257 +0000 UTC m=+0.076336161 container health_status a9cf33c1bf7891b3fdd9db4717ad5f3587fe9e5acf9171920c6d6334b70a502a (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3)
Dec 06 10:22:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] _maybe_adjust
Dec 06 10:22:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:22:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 06 10:22:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:22:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec 06 10:22:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:22:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 10:22:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:22:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 10:22:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:22:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Dec 06 10:22:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:22:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Dec 06 10:22:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:22:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 10:22:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:22:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec 06 10:22:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:22:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 06 10:22:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:22:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec 06 10:22:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:22:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 10:22:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:22:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
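
The pg_autoscaler pass above is self-consistent arithmetic: each pool's "pg target" is its share of raw capacity times its bias times a cluster PG budget of 300 (plausibly mon_target_pg_per_osd = 100 across 3 OSDs, though the log does not say), and the result is then quantized, never dropping below the pool's current pg_num here. The multiplier can be checked directly against the logged figures:

    # Sketch reproducing the 'pg target' numbers logged above; the 300 PG budget
    # is inferred from the data (100 PGs/OSD x 3 OSDs is an assumption).
    def pg_target(capacity_ratio: float, bias: float, pg_budget: int = 300) -> float:
        return capacity_ratio * bias * pg_budget

    # '.mgr': bias 1.0
    assert abs(pg_target(7.185749983720779e-06, 1.0) - 0.0021557249951162337) < 1e-12
    # 'cephfs.cephfs.meta': bias 4.0
    assert abs(pg_target(5.087256625643029e-07, 4.0) - 0.0006104707950771635) < 1e-12
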
Dec 06 10:22:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 06 10:22:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 10:22:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 10:22:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 10:22:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 10:22:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 06 10:22:24 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:22:24 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:22:24 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:22:24.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:22:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 10:22:24 compute-0 nova_compute[254819]: 2025-12-06 10:22:24.590 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:22:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 10:22:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 10:22:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 10:22:25 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:22:25 compute-0 ceph-mon[74327]: pgmap v1240: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec 06 10:22:26 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1241: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:22:26 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:22:26 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:22:26 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:22:26.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:22:26 compute-0 ceph-mon[74327]: pgmap v1241: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:22:26 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:22:26 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:22:26 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:22:26.589 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:22:27 compute-0 nova_compute[254819]: 2025-12-06 10:22:27.156 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:22:27 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:22:27.716Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 06 10:22:28 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1242: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec 06 10:22:28 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:22:28 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:22:28 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:22:28.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:22:28 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:22:28 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:22:28 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:22:28.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:22:29 compute-0 ceph-mon[74327]: pgmap v1242: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec 06 10:22:29 compute-0 podman[291874]: 2025-12-06 10:22:29.508068341 +0000 UTC m=+0.137856426 container health_status ed08b04f63d3695f0aa2f04fcc706897af4aa24f286fa3cd10a4323ee2c868ab (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=ovn_controller, org.label-schema.license=GPLv2)
Dec 06 10:22:29 compute-0 nova_compute[254819]: 2025-12-06 10:22:29.591 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:22:29 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:22:29.801 162267 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=d39b5be8-d4cf-41c7-9a64-1ee03801f4e1, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '14'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
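
The lines from 10:22:19 to 10:22:29 above are one round-trip of OVN's nb_cfg acknowledgement: the metadata agent sees SB_Global.nb_cfg bump from 13 to 14, deliberately waits 10 seconds to batch further updates, then records the value it has processed in its Chassis_Private row. The DbSetCommand repr in the log corresponds to an ovsdbapp call of roughly this shape (sb_api stands for an already-connected southbound API object, which is assumed here; the record UUID is the one from the log):

    # Hedged sketch; `sb_api` (an ovsdbapp OVN SB API/connection) is assumed.
    sb_api.db_set(
        "Chassis_Private",
        "d39b5be8-d4cf-41c7-9a64-1ee03801f4e1",
        ("external_ids", {"neutron:ovn-metadata-sb-cfg": "14"}),
    ).execute(check_error=True)
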
Dec 06 10:22:30 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:22:30 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1243: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:22:30 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:22:30 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:22:30 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:22:30.339 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:22:30 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:22:30 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:22:30 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:22:30.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:22:30 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:22:30] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Dec 06 10:22:30 compute-0 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:22:30] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Dec 06 10:22:31 compute-0 ceph-mon[74327]: pgmap v1243: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:22:32 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1244: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:22:32 compute-0 nova_compute[254819]: 2025-12-06 10:22:32.158 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:22:32 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:22:32 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:22:32 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:22:32.341 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:22:32 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:22:32 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:22:32 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:22:32.598 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:22:33 compute-0 ceph-mon[74327]: pgmap v1244: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:22:34 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1245: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec 06 10:22:34 compute-0 ceph-mon[74327]: pgmap v1245: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec 06 10:22:34 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:22:34 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:22:34 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:22:34.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:22:34 compute-0 podman[291908]: 2025-12-06 10:22:34.457149141 +0000 UTC m=+0.081342567 container health_status ec1e66d2175087ad7a28a9a6b5a784e30128df3f5e268959d009e364cd5120f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec 06 10:22:34 compute-0 nova_compute[254819]: 2025-12-06 10:22:34.594 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:22:34 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:22:34 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:22:34 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:22:34.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:22:35 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:22:36 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1246: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:22:36 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:22:36 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 10:22:36 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:22:36.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 10:22:36 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:22:36 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:22:36 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:22:36.604 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:22:36 compute-0 sudo[291930]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 10:22:36 compute-0 sudo[291930]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:22:36 compute-0 sudo[291930]: pam_unix(sudo:session): session closed for user root
Dec 06 10:22:37 compute-0 ceph-mon[74327]: pgmap v1246: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:22:37 compute-0 nova_compute[254819]: 2025-12-06 10:22:37.161 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:22:37 compute-0 sshd-session[291929]: Received disconnect from 193.46.255.7 port 47262:11:  [preauth]
Dec 06 10:22:37 compute-0 sshd-session[291929]: Disconnected from authenticating user root 193.46.255.7 port 47262 [preauth]
Dec 06 10:22:37 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:22:37.717Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 06 10:22:38 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1247: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec 06 10:22:38 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:22:38 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:22:38 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:22:38.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:22:38 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:22:38 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:22:38 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:22:38.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:22:38 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 06 10:22:38 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 10:22:39 compute-0 ceph-mon[74327]: pgmap v1247: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec 06 10:22:39 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 10:22:39 compute-0 nova_compute[254819]: 2025-12-06 10:22:39.598 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:22:40 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:22:40 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1248: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:22:40 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:22:40 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:22:40 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:22:40.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:22:40 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:22:40 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:22:40 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:22:40.608 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:22:40 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:22:40] "GET /metrics HTTP/1.1" 200 48456 "" "Prometheus/2.51.0"
Dec 06 10:22:40 compute-0 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:22:40] "GET /metrics HTTP/1.1" 200 48456 "" "Prometheus/2.51.0"
Dec 06 10:22:41 compute-0 ceph-mon[74327]: pgmap v1248: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:22:42 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1249: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:22:42 compute-0 nova_compute[254819]: 2025-12-06 10:22:42.163 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:22:42 compute-0 ceph-mon[74327]: pgmap v1249: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:22:42 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:22:42 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:22:42 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:22:42.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:22:42 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:22:42 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:22:42 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:22:42.611 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:22:42 compute-0 sudo[291962]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 10:22:42 compute-0 sudo[291962]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:22:42 compute-0 sudo[291962]: pam_unix(sudo:session): session closed for user root
Dec 06 10:22:42 compute-0 sudo[291987]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Dec 06 10:22:42 compute-0 sudo[291987]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:22:43 compute-0 sudo[291987]: pam_unix(sudo:session): session closed for user root
Dec 06 10:22:43 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 06 10:22:43 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 10:22:43 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 06 10:22:43 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 10:22:43 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 06 10:22:43 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 10:22:43 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Dec 06 10:22:43 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 10:22:43 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 06 10:22:43 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 10:22:43 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 06 10:22:43 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 10:22:43 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 06 10:22:43 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 10:22:43 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 10:22:43 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 10:22:43 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 10:22:43 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 10:22:43 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 10:22:43 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 10:22:43 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 10:22:43 compute-0 sudo[292043]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 10:22:43 compute-0 sudo[292043]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:22:43 compute-0 sudo[292043]: pam_unix(sudo:session): session closed for user root
Dec 06 10:22:43 compute-0 sudo[292068]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 5ecd3f74-dade-5fc4-92ce-8950ae424258 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Dec 06 10:22:43 compute-0 sudo[292068]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:22:44 compute-0 podman[292135]: 2025-12-06 10:22:44.032406828 +0000 UTC m=+0.059522292 container create 84e53e0ac656ea769960ad8c59d16f64f101dde766da6b088b43096a68999b3c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_grothendieck, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 10:22:44 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1250: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec 06 10:22:44 compute-0 systemd[1]: Started libpod-conmon-84e53e0ac656ea769960ad8c59d16f64f101dde766da6b088b43096a68999b3c.scope.
Dec 06 10:22:44 compute-0 systemd[1]: Started libcrun container.
Dec 06 10:22:44 compute-0 podman[292135]: 2025-12-06 10:22:44.015670242 +0000 UTC m=+0.042785726 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 06 10:22:44 compute-0 podman[292135]: 2025-12-06 10:22:44.128468065 +0000 UTC m=+0.155583619 container init 84e53e0ac656ea769960ad8c59d16f64f101dde766da6b088b43096a68999b3c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_grothendieck, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 10:22:44 compute-0 podman[292135]: 2025-12-06 10:22:44.135368332 +0000 UTC m=+0.162483836 container start 84e53e0ac656ea769960ad8c59d16f64f101dde766da6b088b43096a68999b3c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_grothendieck, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Dec 06 10:22:44 compute-0 podman[292135]: 2025-12-06 10:22:44.139893076 +0000 UTC m=+0.167008580 container attach 84e53e0ac656ea769960ad8c59d16f64f101dde766da6b088b43096a68999b3c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_grothendieck, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Dec 06 10:22:44 compute-0 blissful_grothendieck[292151]: 167 167
Dec 06 10:22:44 compute-0 systemd[1]: libpod-84e53e0ac656ea769960ad8c59d16f64f101dde766da6b088b43096a68999b3c.scope: Deactivated successfully.
Dec 06 10:22:44 compute-0 podman[292135]: 2025-12-06 10:22:44.143610247 +0000 UTC m=+0.170725711 container died 84e53e0ac656ea769960ad8c59d16f64f101dde766da6b088b43096a68999b3c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_grothendieck, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Dec 06 10:22:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-6c978dee7ec5b5dea5d0b51b0cd77d49c2e7ae6d3bce5163c24dbb4f85ac72fd-merged.mount: Deactivated successfully.
Dec 06 10:22:44 compute-0 podman[292135]: 2025-12-06 10:22:44.194862673 +0000 UTC m=+0.221978167 container remove 84e53e0ac656ea769960ad8c59d16f64f101dde766da6b088b43096a68999b3c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_grothendieck, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Dec 06 10:22:44 compute-0 systemd[1]: libpod-conmon-84e53e0ac656ea769960ad8c59d16f64f101dde766da6b088b43096a68999b3c.scope: Deactivated successfully.
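[editor's note] The create → init → start → attach → died → remove sequence above completes in roughly 160 ms, the signature of a cephadm helper container: podman runs one short command inside the Ceph image and discards the container. The "167 167" printed by blissful_grothendieck is the uid and gid of the ceph user inside the image, which cephadm checks before touching /var/lib/ceph on the host. A minimal sketch of that one-shot pattern in Python; the `stat -c '%u %g'` probe is an inference from the output (not shown in the log), and `--rm` is an assumption about how the immediate "container remove" event is produced:

    import subprocess

    # Image digest copied from the log entries above.
    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec")

    # One-shot helper container: run `stat -c '%u %g' /var/lib/ceph` inside the
    # image, then remove the container as soon as it exits (hypothetical probe,
    # consistent with the "167 167" line logged by the container).
    result = subprocess.run(
        ["podman", "run", "--rm", "--entrypoint", "stat",
         IMAGE, "-c", "%u %g", "/var/lib/ceph"],
        capture_output=True, text=True, check=True,
    )
    print(result.stdout.strip())  # expected: "167 167"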
Dec 06 10:22:44 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:22:44 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:22:44 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:22:44.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
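[editor's note] The radosgw "beast" access lines record anonymous HEAD / requests on a steady ~2 s cadence from 192.168.122.100 and 192.168.122.102, consistent with load-balancer or monitoring health checks rather than user traffic. A sketch of such a probe against this host's RGW; the frontend port is not visible in these entries, so 8080 below is a placeholder:

    import http.client

    # Hypothetical health probe against the RGW beast frontend on this node;
    # the hostname appears later in the log, the port is an assumption.
    conn = http.client.HTTPConnection("compute-0.ctlplane.example.com",
                                      8080, timeout=2)
    conn.request("HEAD", "/")
    print(conn.getresponse().status)  # the probes in the log all return 200
    conn.close()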
Dec 06 10:22:44 compute-0 podman[292175]: 2025-12-06 10:22:44.398274923 +0000 UTC m=+0.059690246 container create 795c6d2497d40cded2dc7ce647fd192b7bd2e7102339fa1af015b134612a4aec (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_boyd, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Dec 06 10:22:44 compute-0 systemd[1]: Started libpod-conmon-795c6d2497d40cded2dc7ce647fd192b7bd2e7102339fa1af015b134612a4aec.scope.
Dec 06 10:22:44 compute-0 podman[292175]: 2025-12-06 10:22:44.373969261 +0000 UTC m=+0.035384624 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 06 10:22:44 compute-0 systemd[1]: Started libcrun container.
Dec 06 10:22:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/567acb5db928477ec85263cc50d2a3039e377eac459439137e4c5ed70d549283/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 10:22:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/567acb5db928477ec85263cc50d2a3039e377eac459439137e4c5ed70d549283/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 10:22:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/567acb5db928477ec85263cc50d2a3039e377eac459439137e4c5ed70d549283/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 10:22:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/567acb5db928477ec85263cc50d2a3039e377eac459439137e4c5ed70d549283/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 10:22:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/567acb5db928477ec85263cc50d2a3039e377eac459439137e4c5ed70d549283/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 06 10:22:44 compute-0 ceph-mon[74327]: pgmap v1250: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec 06 10:22:44 compute-0 podman[292175]: 2025-12-06 10:22:44.492438497 +0000 UTC m=+0.153853820 container init 795c6d2497d40cded2dc7ce647fd192b7bd2e7102339fa1af015b134612a4aec (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_boyd, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 10:22:44 compute-0 podman[292175]: 2025-12-06 10:22:44.505246236 +0000 UTC m=+0.166661549 container start 795c6d2497d40cded2dc7ce647fd192b7bd2e7102339fa1af015b134612a4aec (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_boyd, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Dec 06 10:22:44 compute-0 podman[292175]: 2025-12-06 10:22:44.508762692 +0000 UTC m=+0.170178005 container attach 795c6d2497d40cded2dc7ce647fd192b7bd2e7102339fa1af015b134612a4aec (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_boyd, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid)
Dec 06 10:22:44 compute-0 nova_compute[254819]: 2025-12-06 10:22:44.601 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:22:44 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:22:44 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:22:44 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:22:44.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:22:44 compute-0 eloquent_boyd[292191]: --> passed data devices: 0 physical, 1 LVM
Dec 06 10:22:44 compute-0 eloquent_boyd[292191]: --> All data devices are unavailable
Dec 06 10:22:44 compute-0 systemd[1]: libpod-795c6d2497d40cded2dc7ce647fd192b7bd2e7102339fa1af015b134612a4aec.scope: Deactivated successfully.
Dec 06 10:22:44 compute-0 podman[292175]: 2025-12-06 10:22:44.825405366 +0000 UTC m=+0.486820719 container died 795c6d2497d40cded2dc7ce647fd192b7bd2e7102339fa1af015b134612a4aec (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_boyd, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 10:22:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-567acb5db928477ec85263cc50d2a3039e377eac459439137e4c5ed70d549283-merged.mount: Deactivated successfully.
Dec 06 10:22:44 compute-0 podman[292175]: 2025-12-06 10:22:44.87291342 +0000 UTC m=+0.534328753 container remove 795c6d2497d40cded2dc7ce647fd192b7bd2e7102339fa1af015b134612a4aec (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_boyd, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Dec 06 10:22:44 compute-0 systemd[1]: libpod-conmon-795c6d2497d40cded2dc7ce647fd192b7bd2e7102339fa1af015b134612a4aec.scope: Deactivated successfully.
Dec 06 10:22:44 compute-0 sudo[292068]: pam_unix(sudo:session): session closed for user root
Dec 06 10:22:44 compute-0 sudo[292219]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 10:22:44 compute-0 sudo[292219]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:22:44 compute-0 sudo[292219]: pam_unix(sudo:session): session closed for user root
Dec 06 10:22:45 compute-0 sudo[292244]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 5ecd3f74-dade-5fc4-92ce-8950ae424258 -- lvm list --format json
Dec 06 10:22:45 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:22:45 compute-0 sudo[292244]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:22:45 compute-0 podman[292312]: 2025-12-06 10:22:45.447275813 +0000 UTC m=+0.057686772 container create 66e069bc455b34ee23594bb9d3589e7320e18e34ecb9c7cfa7f0375adbfc6b6e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_chatterjee, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 10:22:45 compute-0 systemd[1]: Started libpod-conmon-66e069bc455b34ee23594bb9d3589e7320e18e34ecb9c7cfa7f0375adbfc6b6e.scope.
Dec 06 10:22:45 compute-0 podman[292312]: 2025-12-06 10:22:45.418309144 +0000 UTC m=+0.028720153 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 06 10:22:45 compute-0 systemd[1]: Started libcrun container.
Dec 06 10:22:45 compute-0 podman[292312]: 2025-12-06 10:22:45.557221058 +0000 UTC m=+0.167631997 container init 66e069bc455b34ee23594bb9d3589e7320e18e34ecb9c7cfa7f0375adbfc6b6e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_chatterjee, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Dec 06 10:22:45 compute-0 podman[292312]: 2025-12-06 10:22:45.565101232 +0000 UTC m=+0.175512151 container start 66e069bc455b34ee23594bb9d3589e7320e18e34ecb9c7cfa7f0375adbfc6b6e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_chatterjee, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 10:22:45 compute-0 podman[292312]: 2025-12-06 10:22:45.56831698 +0000 UTC m=+0.178727899 container attach 66e069bc455b34ee23594bb9d3589e7320e18e34ecb9c7cfa7f0375adbfc6b6e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_chatterjee, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 10:22:45 compute-0 suspicious_chatterjee[292328]: 167 167
Dec 06 10:22:45 compute-0 systemd[1]: libpod-66e069bc455b34ee23594bb9d3589e7320e18e34ecb9c7cfa7f0375adbfc6b6e.scope: Deactivated successfully.
Dec 06 10:22:45 compute-0 podman[292312]: 2025-12-06 10:22:45.573542842 +0000 UTC m=+0.183953771 container died 66e069bc455b34ee23594bb9d3589e7320e18e34ecb9c7cfa7f0375adbfc6b6e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_chatterjee, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 10:22:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-ac49f563a00066ac8ba994b762ae954377197e937cf2375760c977d2ec60990d-merged.mount: Deactivated successfully.
Dec 06 10:22:45 compute-0 podman[292312]: 2025-12-06 10:22:45.614282552 +0000 UTC m=+0.224693471 container remove 66e069bc455b34ee23594bb9d3589e7320e18e34ecb9c7cfa7f0375adbfc6b6e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_chatterjee, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2)
Dec 06 10:22:45 compute-0 systemd[1]: libpod-conmon-66e069bc455b34ee23594bb9d3589e7320e18e34ecb9c7cfa7f0375adbfc6b6e.scope: Deactivated successfully.
Dec 06 10:22:45 compute-0 podman[292356]: 2025-12-06 10:22:45.816600402 +0000 UTC m=+0.052397419 container create 2475192e9b4ed289b9ae31845aafe1befe449995079f8aecc9f55c8f0306da27 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_yonath, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Dec 06 10:22:45 compute-0 systemd[1]: Started libpod-conmon-2475192e9b4ed289b9ae31845aafe1befe449995079f8aecc9f55c8f0306da27.scope.
Dec 06 10:22:45 compute-0 systemd[1]: Started libcrun container.
Dec 06 10:22:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/22e3da0447bdc094d58d4ea8eb6d1124e231aea96f1ca869915c530df83cc7d6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 10:22:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/22e3da0447bdc094d58d4ea8eb6d1124e231aea96f1ca869915c530df83cc7d6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 10:22:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/22e3da0447bdc094d58d4ea8eb6d1124e231aea96f1ca869915c530df83cc7d6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 10:22:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/22e3da0447bdc094d58d4ea8eb6d1124e231aea96f1ca869915c530df83cc7d6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 10:22:45 compute-0 podman[292356]: 2025-12-06 10:22:45.797569603 +0000 UTC m=+0.033366660 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 06 10:22:45 compute-0 podman[292356]: 2025-12-06 10:22:45.894167714 +0000 UTC m=+0.129964761 container init 2475192e9b4ed289b9ae31845aafe1befe449995079f8aecc9f55c8f0306da27 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_yonath, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec 06 10:22:45 compute-0 podman[292356]: 2025-12-06 10:22:45.902114751 +0000 UTC m=+0.137911778 container start 2475192e9b4ed289b9ae31845aafe1befe449995079f8aecc9f55c8f0306da27 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_yonath, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_REF=squid)
Dec 06 10:22:45 compute-0 podman[292356]: 2025-12-06 10:22:45.905442662 +0000 UTC m=+0.141239839 container attach 2475192e9b4ed289b9ae31845aafe1befe449995079f8aecc9f55c8f0306da27 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_yonath, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Dec 06 10:22:46 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 06 10:22:46 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/142882542' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 10:22:46 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 06 10:22:46 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/142882542' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 10:22:46 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1251: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:22:46 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:22:46 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:22:46 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:22:46.356 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:22:46 compute-0 ceph-mon[74327]: from='client.? 192.168.122.10:0/142882542' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 10:22:46 compute-0 ceph-mon[74327]: from='client.? 192.168.122.10:0/142882542' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 10:22:46 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:22:46 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:22:46 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:22:46.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:22:46 compute-0 sweet_yonath[292373]: {
Dec 06 10:22:46 compute-0 sweet_yonath[292373]:     "1": [
Dec 06 10:22:46 compute-0 sweet_yonath[292373]:         {
Dec 06 10:22:46 compute-0 sweet_yonath[292373]:             "devices": [
Dec 06 10:22:46 compute-0 sweet_yonath[292373]:                 "/dev/loop3"
Dec 06 10:22:46 compute-0 sweet_yonath[292373]:             ],
Dec 06 10:22:46 compute-0 sweet_yonath[292373]:             "lv_name": "ceph_lv0",
Dec 06 10:22:46 compute-0 sweet_yonath[292373]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 10:22:46 compute-0 sweet_yonath[292373]:             "lv_size": "21470642176",
Dec 06 10:22:46 compute-0 sweet_yonath[292373]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=a55pcS-v3Vt-CJ4W-fqCV-Baze-9VlY-jG9prS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=5ecd3f74-dade-5fc4-92ce-8950ae424258,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=7899c4d8-edb4-4836-b838-c4aa702ad7af,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 06 10:22:46 compute-0 sweet_yonath[292373]:             "lv_uuid": "a55pcS-v3Vt-CJ4W-fqCV-Baze-9VlY-jG9prS",
Dec 06 10:22:46 compute-0 sweet_yonath[292373]:             "name": "ceph_lv0",
Dec 06 10:22:46 compute-0 sweet_yonath[292373]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 10:22:46 compute-0 sweet_yonath[292373]:             "tags": {
Dec 06 10:22:46 compute-0 sweet_yonath[292373]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 06 10:22:46 compute-0 sweet_yonath[292373]:                 "ceph.block_uuid": "a55pcS-v3Vt-CJ4W-fqCV-Baze-9VlY-jG9prS",
Dec 06 10:22:46 compute-0 sweet_yonath[292373]:                 "ceph.cephx_lockbox_secret": "",
Dec 06 10:22:46 compute-0 sweet_yonath[292373]:                 "ceph.cluster_fsid": "5ecd3f74-dade-5fc4-92ce-8950ae424258",
Dec 06 10:22:46 compute-0 sweet_yonath[292373]:                 "ceph.cluster_name": "ceph",
Dec 06 10:22:46 compute-0 sweet_yonath[292373]:                 "ceph.crush_device_class": "",
Dec 06 10:22:46 compute-0 sweet_yonath[292373]:                 "ceph.encrypted": "0",
Dec 06 10:22:46 compute-0 sweet_yonath[292373]:                 "ceph.osd_fsid": "7899c4d8-edb4-4836-b838-c4aa702ad7af",
Dec 06 10:22:46 compute-0 sweet_yonath[292373]:                 "ceph.osd_id": "1",
Dec 06 10:22:46 compute-0 sweet_yonath[292373]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 06 10:22:46 compute-0 sweet_yonath[292373]:                 "ceph.type": "block",
Dec 06 10:22:46 compute-0 sweet_yonath[292373]:                 "ceph.vdo": "0",
Dec 06 10:22:46 compute-0 sweet_yonath[292373]:                 "ceph.with_tpm": "0"
Dec 06 10:22:46 compute-0 sweet_yonath[292373]:             },
Dec 06 10:22:46 compute-0 sweet_yonath[292373]:             "type": "block",
Dec 06 10:22:46 compute-0 sweet_yonath[292373]:             "vg_name": "ceph_vg0"
Dec 06 10:22:46 compute-0 sweet_yonath[292373]:         }
Dec 06 10:22:46 compute-0 sweet_yonath[292373]:     ]
Dec 06 10:22:46 compute-0 sweet_yonath[292373]: }
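[editor's note] The JSON block emitted by sweet_yonath is the answer to the `ceph-volume lvm list --format json` call issued at 10:22:45: one logical volume, ceph_vg0/ceph_lv0 on /dev/loop3, carries osd.1, and the ceph.* LV tags hold everything cephadm needs to re-activate it. A sketch of extracting the useful fields from that payload; the sample below is trimmed to the keys the code touches:

    import json

    # Abbreviated copy of the `ceph-volume lvm list --format json` output above.
    sample = json.loads("""
    {
      "1": [
        {
          "devices": ["/dev/loop3"],
          "lv_path": "/dev/ceph_vg0/ceph_lv0",
          "tags": {
            "ceph.osd_fsid": "7899c4d8-edb4-4836-b838-c4aa702ad7af",
            "ceph.type": "block"
          }
        }
      ]
    }
    """)

    for osd_id, lvs in sample.items():
        for lv in lvs:
            print(f"osd.{osd_id}: {lv['lv_path']} "
                  f"on {','.join(lv['devices'])} "
                  f"(osd_fsid {lv['tags']['ceph.osd_fsid']})")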
Dec 06 10:22:46 compute-0 systemd[1]: libpod-2475192e9b4ed289b9ae31845aafe1befe449995079f8aecc9f55c8f0306da27.scope: Deactivated successfully.
Dec 06 10:22:46 compute-0 conmon[292373]: conmon 2475192e9b4ed289b9ae <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-2475192e9b4ed289b9ae31845aafe1befe449995079f8aecc9f55c8f0306da27.scope/container/memory.events
Dec 06 10:22:46 compute-0 podman[292356]: 2025-12-06 10:22:46.656957699 +0000 UTC m=+0.892754706 container died 2475192e9b4ed289b9ae31845aafe1befe449995079f8aecc9f55c8f0306da27 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_yonath, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Dec 06 10:22:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-22e3da0447bdc094d58d4ea8eb6d1124e231aea96f1ca869915c530df83cc7d6-merged.mount: Deactivated successfully.
Dec 06 10:22:46 compute-0 podman[292356]: 2025-12-06 10:22:46.691929722 +0000 UTC m=+0.927726749 container remove 2475192e9b4ed289b9ae31845aafe1befe449995079f8aecc9f55c8f0306da27 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_yonath, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec 06 10:22:46 compute-0 systemd[1]: libpod-conmon-2475192e9b4ed289b9ae31845aafe1befe449995079f8aecc9f55c8f0306da27.scope: Deactivated successfully.
Dec 06 10:22:46 compute-0 sudo[292244]: pam_unix(sudo:session): session closed for user root
Dec 06 10:22:46 compute-0 sudo[292394]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 10:22:46 compute-0 sudo[292394]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:22:46 compute-0 sudo[292394]: pam_unix(sudo:session): session closed for user root
Dec 06 10:22:46 compute-0 sudo[292419]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 5ecd3f74-dade-5fc4-92ce-8950ae424258 -- raw list --format json
Dec 06 10:22:46 compute-0 sudo[292419]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:22:47 compute-0 nova_compute[254819]: 2025-12-06 10:22:47.164 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:22:47 compute-0 podman[292484]: 2025-12-06 10:22:47.262619114 +0000 UTC m=+0.035321322 container create 2f6b72e480557b70947c111eb2758adf2f60866fc54561d0a774f24553e916f0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_yonath, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 10:22:47 compute-0 systemd[1]: Started libpod-conmon-2f6b72e480557b70947c111eb2758adf2f60866fc54561d0a774f24553e916f0.scope.
Dec 06 10:22:47 compute-0 systemd[1]: Started libcrun container.
Dec 06 10:22:47 compute-0 podman[292484]: 2025-12-06 10:22:47.335697985 +0000 UTC m=+0.108400193 container init 2f6b72e480557b70947c111eb2758adf2f60866fc54561d0a774f24553e916f0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_yonath, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 10:22:47 compute-0 podman[292484]: 2025-12-06 10:22:47.247349619 +0000 UTC m=+0.020051847 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 06 10:22:47 compute-0 podman[292484]: 2025-12-06 10:22:47.343412615 +0000 UTC m=+0.116114813 container start 2f6b72e480557b70947c111eb2758adf2f60866fc54561d0a774f24553e916f0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_yonath, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 10:22:47 compute-0 podman[292484]: 2025-12-06 10:22:47.347319342 +0000 UTC m=+0.120021560 container attach 2f6b72e480557b70947c111eb2758adf2f60866fc54561d0a774f24553e916f0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_yonath, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 10:22:47 compute-0 sleepy_yonath[292500]: 167 167
Dec 06 10:22:47 compute-0 systemd[1]: libpod-2f6b72e480557b70947c111eb2758adf2f60866fc54561d0a774f24553e916f0.scope: Deactivated successfully.
Dec 06 10:22:47 compute-0 podman[292484]: 2025-12-06 10:22:47.349662525 +0000 UTC m=+0.122364723 container died 2f6b72e480557b70947c111eb2758adf2f60866fc54561d0a774f24553e916f0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_yonath, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0)
Dec 06 10:22:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-debdef475b0e690a29c01b9b2594a9f2ae63902259fec544a5b852540cbdceed-merged.mount: Deactivated successfully.
Dec 06 10:22:47 compute-0 podman[292484]: 2025-12-06 10:22:47.382183241 +0000 UTC m=+0.154885429 container remove 2f6b72e480557b70947c111eb2758adf2f60866fc54561d0a774f24553e916f0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_yonath, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec 06 10:22:47 compute-0 systemd[1]: libpod-conmon-2f6b72e480557b70947c111eb2758adf2f60866fc54561d0a774f24553e916f0.scope: Deactivated successfully.
Dec 06 10:22:47 compute-0 podman[292524]: 2025-12-06 10:22:47.52972379 +0000 UTC m=+0.035556140 container create 806b83a3b5f4ca871133f0df1ce2fb08524150fa07ec3c17608f42a4595c2cc6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_mahavira, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 10:22:47 compute-0 ceph-mon[74327]: pgmap v1251: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:22:47 compute-0 systemd[1]: Started libpod-conmon-806b83a3b5f4ca871133f0df1ce2fb08524150fa07ec3c17608f42a4595c2cc6.scope.
Dec 06 10:22:47 compute-0 systemd[1]: Started libcrun container.
Dec 06 10:22:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b37fa194333a646538adb5def8f0391b2f4b32df7024cfed355743dfcf4b4d92/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 10:22:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b37fa194333a646538adb5def8f0391b2f4b32df7024cfed355743dfcf4b4d92/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 10:22:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b37fa194333a646538adb5def8f0391b2f4b32df7024cfed355743dfcf4b4d92/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 10:22:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b37fa194333a646538adb5def8f0391b2f4b32df7024cfed355743dfcf4b4d92/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 10:22:47 compute-0 podman[292524]: 2025-12-06 10:22:47.597615199 +0000 UTC m=+0.103447549 container init 806b83a3b5f4ca871133f0df1ce2fb08524150fa07ec3c17608f42a4595c2cc6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_mahavira, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Dec 06 10:22:47 compute-0 podman[292524]: 2025-12-06 10:22:47.608895816 +0000 UTC m=+0.114728166 container start 806b83a3b5f4ca871133f0df1ce2fb08524150fa07ec3c17608f42a4595c2cc6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_mahavira, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 10:22:47 compute-0 podman[292524]: 2025-12-06 10:22:47.514362901 +0000 UTC m=+0.020195271 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 06 10:22:47 compute-0 podman[292524]: 2025-12-06 10:22:47.611732913 +0000 UTC m=+0.117565263 container attach 806b83a3b5f4ca871133f0df1ce2fb08524150fa07ec3c17608f42a4595c2cc6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_mahavira, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, io.buildah.version=1.40.1)
Dec 06 10:22:47 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:22:47.718Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
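[editor's note] The Alertmanager dispatcher error shows the ceph-dashboard webhook receiver failing to POST to the dashboard endpoints on compute-1 and compute-2 within its deadline; both targets timed out rather than refusing, which usually points at a stopped dashboard or a firewall rather than a DNS problem. A quick reachability sketch for the two targets, using the hostnames and port taken from the error text:

    import socket

    # Probe the webhook targets that produced "context deadline exceeded".
    for host in ("compute-1.ctlplane.example.com",
                 "compute-2.ctlplane.example.com"):
        try:
            with socket.create_connection((host, 8443), timeout=2):
                print(host, "port 8443 reachable")
        except OSError as exc:
            print(host, "port 8443 unreachable:", exc)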
Dec 06 10:22:48 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1252: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec 06 10:22:48 compute-0 lvm[292617]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 06 10:22:48 compute-0 lvm[292617]: VG ceph_vg0 finished
Dec 06 10:22:48 compute-0 beautiful_mahavira[292541]: {}
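[editor's note] The bare "{}" from beautiful_mahavira is the result of the `ceph-volume ... raw list --format json` call issued at 10:22:46: an empty object, meaning no raw-mode (non-LVM) OSDs exist on this host, so the LVM inventory above is the complete picture. Illustrative handling of that result:

    import json

    # `ceph-volume raw list --format json` returned "{}" above: nothing to do.
    raw_osds = json.loads("{}")
    if not raw_osds:
        print("no raw-mode OSDs on this host; all OSDs are LVM-backed")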
Dec 06 10:22:48 compute-0 systemd[1]: libpod-806b83a3b5f4ca871133f0df1ce2fb08524150fa07ec3c17608f42a4595c2cc6.scope: Deactivated successfully.
Dec 06 10:22:48 compute-0 systemd[1]: libpod-806b83a3b5f4ca871133f0df1ce2fb08524150fa07ec3c17608f42a4595c2cc6.scope: Consumed 1.131s CPU time.
Dec 06 10:22:48 compute-0 conmon[292541]: conmon 806b83a3b5f4ca871133 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-806b83a3b5f4ca871133f0df1ce2fb08524150fa07ec3c17608f42a4595c2cc6.scope/container/memory.events
Dec 06 10:22:48 compute-0 podman[292524]: 2025-12-06 10:22:48.299734321 +0000 UTC m=+0.805566671 container died 806b83a3b5f4ca871133f0df1ce2fb08524150fa07ec3c17608f42a4595c2cc6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_mahavira, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Dec 06 10:22:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-b37fa194333a646538adb5def8f0391b2f4b32df7024cfed355743dfcf4b4d92-merged.mount: Deactivated successfully.
Dec 06 10:22:48 compute-0 podman[292524]: 2025-12-06 10:22:48.334222801 +0000 UTC m=+0.840055151 container remove 806b83a3b5f4ca871133f0df1ce2fb08524150fa07ec3c17608f42a4595c2cc6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_mahavira, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 10:22:48 compute-0 systemd[1]: libpod-conmon-806b83a3b5f4ca871133f0df1ce2fb08524150fa07ec3c17608f42a4595c2cc6.scope: Deactivated successfully.
Dec 06 10:22:48 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:22:48 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:22:48 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:22:48.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:22:48 compute-0 sudo[292419]: pam_unix(sudo:session): session closed for user root
Dec 06 10:22:48 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 06 10:22:48 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 10:22:48 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 06 10:22:48 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 10:22:48 compute-0 sudo[292635]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 10:22:48 compute-0 sudo[292635]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:22:48 compute-0 sudo[292635]: pam_unix(sudo:session): session closed for user root
Dec 06 10:22:48 compute-0 ceph-mon[74327]: pgmap v1252: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec 06 10:22:48 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 10:22:48 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 10:22:48 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:22:48 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:22:48 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:22:48.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:22:49 compute-0 nova_compute[254819]: 2025-12-06 10:22:49.647 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:22:50 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:22:50 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1253: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:22:50 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:22:50 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:22:50 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:22:50.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:22:50 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:22:50 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 10:22:50 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:22:50.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 10:22:50 compute-0 nova_compute[254819]: 2025-12-06 10:22:50.749 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:22:50 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:22:50] "GET /metrics HTTP/1.1" 200 48456 "" "Prometheus/2.51.0"
Dec 06 10:22:50 compute-0 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:22:50] "GET /metrics HTTP/1.1" 200 48456 "" "Prometheus/2.51.0"
Dec 06 10:22:50 compute-0 sshd-session[292660]: Invalid user ubuntu from 43.163.93.82 port 60272
Dec 06 10:22:51 compute-0 ceph-mon[74327]: pgmap v1253: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:22:51 compute-0 sshd-session[292660]: Received disconnect from 43.163.93.82 port 60272:11:  [preauth]
Dec 06 10:22:51 compute-0 sshd-session[292660]: Disconnected from invalid user ubuntu 43.163.93.82 port 60272 [preauth]
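[editor's note] The three sshd-session lines are an internet-facing brute-force probe: 43.163.93.82 tried the nonexistent user "ubuntu" and disconnected before authenticating. Entries like this are routine noise on exposed hosts; a sketch of pulling the relevant fields out when scanning journal text for such probes:

    import re

    # Example journal line copied from above.
    line = ("Dec 06 10:22:50 compute-0 sshd-session[292660]: "
            "Invalid user ubuntu from 43.163.93.82 port 60272")
    m = re.search(r"Invalid user (\S+) from (\S+) port (\d+)", line)
    if m:
        user, src, port = m.groups()
        print(f"pre-auth probe: user={user} src={src} port={port}")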
Dec 06 10:22:51 compute-0 nova_compute[254819]: 2025-12-06 10:22:51.748 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:22:51 compute-0 nova_compute[254819]: 2025-12-06 10:22:51.781 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:22:51 compute-0 nova_compute[254819]: 2025-12-06 10:22:51.782 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:22:51 compute-0 nova_compute[254819]: 2025-12-06 10:22:51.782 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:22:51 compute-0 nova_compute[254819]: 2025-12-06 10:22:51.782 254824 DEBUG nova.compute.resource_tracker [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 10:22:51 compute-0 nova_compute[254819]: 2025-12-06 10:22:51.782 254824 DEBUG oslo_concurrency.processutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
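[editor's note] Here nova-compute's resource tracker shells out to `ceph df --format=json --id openstack` to measure free capacity on the RBD backend (the log confirms a moment later that the command returned 0 in 0.454 s). The same probe, sketched directly; assumes the openstack keyring and /etc/ceph/ceph.conf are readable by the caller:

    import json
    import subprocess

    out = subprocess.check_output(
        ["ceph", "df", "--format=json",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
        text=True,
    )
    stats = json.loads(out)["stats"]
    # Cluster-wide totals, in bytes, as consumed by nova's Ceph driver.
    print("total:", stats["total_bytes"], "avail:", stats["total_avail_bytes"])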
Dec 06 10:22:52 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1254: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:22:52 compute-0 nova_compute[254819]: 2025-12-06 10:22:52.166 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:22:52 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 06 10:22:52 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2784602875' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 10:22:52 compute-0 nova_compute[254819]: 2025-12-06 10:22:52.236 254824 DEBUG oslo_concurrency.processutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
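The audit shells out to the ceph CLI through oslo.concurrency's processutils, which produces the "Running cmd (subprocess)" / "returned: 0 in 0.454s" pair above. A sketch of the same call; the JSON keys read at the end are an assumption about `ceph df --format=json` output, since the journal shows only the command and its exit status:

    # Hedged sketch of the processutils call pair logged above.
    import json

    from oslo_concurrency import processutils

    out, _err = processutils.execute(
        'ceph', 'df', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')
    df = json.loads(out)
    # 'stats' / 'total_avail_bytes' is an assumed layout of the df JSON.
    print(df['stats']['total_avail_bytes'])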
Dec 06 10:22:52 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:22:52 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:22:52 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:22:52.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:22:52 compute-0 nova_compute[254819]: 2025-12-06 10:22:52.452 254824 WARNING nova.virt.libvirt.driver [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 10:22:52 compute-0 nova_compute[254819]: 2025-12-06 10:22:52.453 254824 DEBUG nova.compute.resource_tracker [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4425MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 06 10:22:52 compute-0 nova_compute[254819]: 2025-12-06 10:22:52.453 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:22:52 compute-0 nova_compute[254819]: 2025-12-06 10:22:52.454 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:22:52 compute-0 nova_compute[254819]: 2025-12-06 10:22:52.535 254824 DEBUG nova.compute.resource_tracker [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 10:22:52 compute-0 nova_compute[254819]: 2025-12-06 10:22:52.536 254824 DEBUG nova.compute.resource_tracker [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 06 10:22:52 compute-0 nova_compute[254819]: 2025-12-06 10:22:52.558 254824 DEBUG oslo_concurrency.processutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 10:22:52 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:22:52 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:22:52 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:22:52.627 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:22:53 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 06 10:22:53 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/605113535' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 10:22:53 compute-0 nova_compute[254819]: 2025-12-06 10:22:53.026 254824 DEBUG oslo_concurrency.processutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.468s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 10:22:53 compute-0 nova_compute[254819]: 2025-12-06 10:22:53.034 254824 DEBUG nova.compute.provider_tree [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Inventory has not changed in ProviderTree for provider: 06a9c7d1-c74c-47ea-9e97-16acfab6aa88 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 10:22:53 compute-0 nova_compute[254819]: 2025-12-06 10:22:53.056 254824 DEBUG nova.scheduler.client.report [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Inventory has not changed for provider 06a9c7d1-c74c-47ea-9e97-16acfab6aa88 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
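The inventory line pins down three capacities, and placement's usable capacity is, to a first approximation, (total - reserved) * allocation_ratio per resource class, which checks out against the logged numbers:

    # Worked check of the inventory line above,
    # capacity ~= (total - reserved) * allocation_ratio.
    vcpu = (8 - 0) * 4.0        # 32.0 schedulable VCPUs
    mem = (7680 - 512) * 1.0    # 7168 MB
    disk = (59 - 1) * 0.9       # 52.2 GB
    print(vcpu, mem, disk)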
Dec 06 10:22:53 compute-0 nova_compute[254819]: 2025-12-06 10:22:53.059 254824 DEBUG nova.compute.resource_tracker [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 10:22:53 compute-0 nova_compute[254819]: 2025-12-06 10:22:53.060 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.606s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:22:53 compute-0 ceph-mon[74327]: pgmap v1254: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:22:53 compute-0 ceph-mon[74327]: from='client.? 192.168.122.100:0/2784602875' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 10:22:53 compute-0 ceph-mon[74327]: from='client.? 192.168.122.100:0/605113535' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 10:22:53 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 06 10:22:53 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 10:22:54 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 10:22:54 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 10:22:54 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 10:22:54 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 10:22:54 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 10:22:54 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 10:22:54 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1255: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec 06 10:22:54 compute-0 ceph-mon[74327]: from='client.? 192.168.122.101:0/1314528910' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 10:22:54 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 10:22:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:22:54.250 162267 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:22:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:22:54.251 162267 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:22:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:22:54.251 162267 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:22:54 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:22:54 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:22:54 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:22:54.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:22:54 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:22:54 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:22:54 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:22:54.631 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:22:54 compute-0 nova_compute[254819]: 2025-12-06 10:22:54.649 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:22:55 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:22:55 compute-0 ceph-mon[74327]: pgmap v1255: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec 06 10:22:55 compute-0 ceph-mon[74327]: from='client.? 192.168.122.101:0/3977870626' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 10:22:55 compute-0 podman[292712]: 2025-12-06 10:22:55.448652206 +0000 UTC m=+0.083971299 container health_status a9cf33c1bf7891b3fdd9db4717ad5f3587fe9e5acf9171920c6d6334b70a502a (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, container_name=multipathd)
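Each podman health_status=healthy event is the outcome of the healthcheck configured in config_data above ('test': '/openstack/healthcheck', mounted from /var/lib/openstack/healthchecks/multipathd). The same check can be triggered on demand; a sketch:

    # Hedged sketch: run the container's configured healthcheck by hand.
    import subprocess

    rc = subprocess.run(['podman', 'healthcheck', 'run', 'multipathd'],
                        check=False).returncode
    print('healthy' if rc == 0 else f'unhealthy (rc={rc})')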
Dec 06 10:22:56 compute-0 nova_compute[254819]: 2025-12-06 10:22:56.061 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:22:56 compute-0 nova_compute[254819]: 2025-12-06 10:22:56.062 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:22:56 compute-0 nova_compute[254819]: 2025-12-06 10:22:56.062 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:22:56 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1256: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:22:56 compute-0 ceph-mon[74327]: pgmap v1256: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:22:56 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:22:56 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:22:56 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:22:56.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:22:56 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:22:56 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:22:56 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:22:56.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:22:56 compute-0 nova_compute[254819]: 2025-12-06 10:22:56.749 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:22:56 compute-0 nova_compute[254819]: 2025-12-06 10:22:56.750 254824 DEBUG nova.compute.manager [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 06 10:22:56 compute-0 nova_compute[254819]: 2025-12-06 10:22:56.750 254824 DEBUG nova.compute.manager [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 06 10:22:56 compute-0 nova_compute[254819]: 2025-12-06 10:22:56.771 254824 DEBUG nova.compute.manager [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 06 10:22:57 compute-0 sudo[292735]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 10:22:57 compute-0 sudo[292735]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:22:57 compute-0 sudo[292735]: pam_unix(sudo:session): session closed for user root
Dec 06 10:22:57 compute-0 nova_compute[254819]: 2025-12-06 10:22:57.168 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:22:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:22:57.720Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
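Alertmanager is giving up on both ceph-dashboard webhook receivers after two attempts each; "context deadline exceeded" is Go's request-timeout error, so the receivers on compute-1/compute-2 port 8443 resolve but are not answering in time. A quick probe of one receiver with an explicit timeout reproduces the failure mode (the empty JSON body is a placeholder, not a real alert payload):

    # Hedged probe of the webhook receiver named in the error above.
    import urllib.request

    req = urllib.request.Request(
        'http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver',
        data=b'{}', headers={'Content-Type': 'application/json'},
        method='POST')
    try:
        with urllib.request.urlopen(req, timeout=5) as resp:
            print(resp.status)
    except Exception as exc:   # a timeout here mirrors the dispatcher error
        print(type(exc).__name__, exc)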
Dec 06 10:22:57 compute-0 nova_compute[254819]: 2025-12-06 10:22:57.748 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:22:58 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1257: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec 06 10:22:58 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:22:58 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:22:58 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:22:58.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:22:58 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:22:58 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 10:22:58 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:22:58.637 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 10:22:58 compute-0 nova_compute[254819]: 2025-12-06 10:22:58.742 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:22:58 compute-0 nova_compute[254819]: 2025-12-06 10:22:58.748 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:22:58 compute-0 nova_compute[254819]: 2025-12-06 10:22:58.748 254824 DEBUG nova.compute.manager [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 06 10:22:59 compute-0 ceph-mon[74327]: pgmap v1257: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec 06 10:22:59 compute-0 nova_compute[254819]: 2025-12-06 10:22:59.651 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:22:59 compute-0 nova_compute[254819]: 2025-12-06 10:22:59.742 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:23:00 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:23:00 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1258: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:23:00 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:23:00 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:23:00 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:23:00.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:23:00 compute-0 podman[292764]: 2025-12-06 10:23:00.507555748 +0000 UTC m=+0.126044314 container health_status ed08b04f63d3695f0aa2f04fcc706897af4aa24f286fa3cd10a4323ee2c868ab (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Dec 06 10:23:00 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:23:00 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:23:00 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:23:00.641 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:23:00 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:23:00] "GET /metrics HTTP/1.1" 200 48460 "" "Prometheus/2.51.0"
Dec 06 10:23:00 compute-0 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:23:00] "GET /metrics HTTP/1.1" 200 48460 "" "Prometheus/2.51.0"
Dec 06 10:23:01 compute-0 ceph-mon[74327]: pgmap v1258: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:23:02 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1259: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:23:02 compute-0 nova_compute[254819]: 2025-12-06 10:23:02.180 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:23:02 compute-0 ceph-mon[74327]: pgmap v1259: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:23:02 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:23:02 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:23:02 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:23:02.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:23:02 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:23:02 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:23:02 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:23:02.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:23:03 compute-0 ceph-mon[74327]: from='client.? 192.168.122.102:0/2126492916' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 10:23:03 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 06 10:23:03 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1293102009' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 10:23:04 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1260: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec 06 10:23:04 compute-0 ceph-mon[74327]: from='client.? 192.168.122.102:0/1293102009' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 10:23:04 compute-0 ceph-mon[74327]: pgmap v1260: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec 06 10:23:04 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:23:04 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:23:04 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:23:04.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:23:04 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:23:04 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:23:04 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:23:04.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:23:04 compute-0 nova_compute[254819]: 2025-12-06 10:23:04.700 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:23:05 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:23:05 compute-0 podman[292794]: 2025-12-06 10:23:05.430724333 +0000 UTC m=+0.064858057 container health_status ec1e66d2175087ad7a28a9a6b5a784e30128df3f5e268959d009e364cd5120f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec 06 10:23:06 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1261: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:23:06 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:23:06 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:23:06 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:23:06.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:23:06 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:23:06 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:23:06 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:23:06.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:23:07 compute-0 ceph-mon[74327]: pgmap v1261: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:23:07 compute-0 nova_compute[254819]: 2025-12-06 10:23:07.182 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:23:07 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:23:07.722Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 06 10:23:08 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1262: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec 06 10:23:08 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:23:08 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:23:08 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:23:08.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:23:08 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:23:08 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:23:08 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:23:08.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:23:08 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 06 10:23:08 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 10:23:09 compute-0 ceph-mon[74327]: pgmap v1262: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec 06 10:23:09 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 10:23:09 compute-0 nova_compute[254819]: 2025-12-06 10:23:09.703 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:23:10 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:23:10 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1263: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:23:10 compute-0 ceph-mon[74327]: pgmap v1263: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:23:10 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:23:10 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:23:10 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:23:10.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:23:10 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:23:10 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:23:10 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:23:10.655 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:23:10 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:23:10] "GET /metrics HTTP/1.1" 200 48459 "" "Prometheus/2.51.0"
Dec 06 10:23:10 compute-0 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:23:10] "GET /metrics HTTP/1.1" 200 48459 "" "Prometheus/2.51.0"
Dec 06 10:23:12 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1264: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:23:12 compute-0 nova_compute[254819]: 2025-12-06 10:23:12.210 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:23:12 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:23:12 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:23:12 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:23:12.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:23:12 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:23:12 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:23:12 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:23:12.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:23:13 compute-0 ceph-mon[74327]: pgmap v1264: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:23:14 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1265: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec 06 10:23:14 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:23:14 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:23:14 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:23:14.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:23:14 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:23:14 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:23:14 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:23:14.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:23:14 compute-0 nova_compute[254819]: 2025-12-06 10:23:14.751 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:23:15 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:23:15 compute-0 ceph-mon[74327]: pgmap v1265: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec 06 10:23:16 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1266: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:23:16 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:23:16 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:23:16 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:23:16.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:23:16 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:23:16 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:23:16 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:23:16.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:23:17 compute-0 sudo[292825]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 10:23:17 compute-0 sudo[292825]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:23:17 compute-0 sudo[292825]: pam_unix(sudo:session): session closed for user root
Dec 06 10:23:17 compute-0 ceph-mon[74327]: pgmap v1266: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:23:17 compute-0 nova_compute[254819]: 2025-12-06 10:23:17.250 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:23:17 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:23:17.723Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 06 10:23:18 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1267: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec 06 10:23:18 compute-0 ceph-mon[74327]: pgmap v1267: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec 06 10:23:18 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:23:18 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:23:18 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:23:18.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:23:18 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:23:18 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:23:18 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:23:18.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:23:19 compute-0 nova_compute[254819]: 2025-12-06 10:23:19.803 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:23:20 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:23:20 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1268: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:23:20 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:23:20 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:23:20 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:23:20.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:23:20 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:23:20 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:23:20 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:23:20.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:23:20 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:23:20] "GET /metrics HTTP/1.1" 200 48459 "" "Prometheus/2.51.0"
Dec 06 10:23:20 compute-0 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:23:20] "GET /metrics HTTP/1.1" 200 48459 "" "Prometheus/2.51.0"
Dec 06 10:23:21 compute-0 ceph-mon[74327]: pgmap v1268: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:23:22 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1269: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:23:22 compute-0 nova_compute[254819]: 2025-12-06 10:23:22.315 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:23:22 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:23:22 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 10:23:22 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:23:22.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 10:23:22 compute-0 ceph-mon[74327]: pgmap v1269: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:23:22 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:23:22 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:23:22 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:23:22.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:23:23 compute-0 ceph-mgr[74618]: [balancer INFO root] Optimize plan auto_2025-12-06_10:23:23
Dec 06 10:23:23 compute-0 ceph-mgr[74618]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 06 10:23:23 compute-0 ceph-mgr[74618]: [balancer INFO root] do_upmap
Dec 06 10:23:23 compute-0 ceph-mgr[74618]: [balancer INFO root] pools ['.nfs', '.mgr', 'default.rgw.log', 'backups', '.rgw.root', 'images', 'default.rgw.meta', 'volumes', 'cephfs.cephfs.meta', 'default.rgw.control', 'cephfs.cephfs.data', 'vms']
Dec 06 10:23:23 compute-0 ceph-mgr[74618]: [balancer INFO root] prepared 0/10 upmap changes
Dec 06 10:23:23 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 06 10:23:23 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 10:23:24 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 10:23:24 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 10:23:24 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 10:23:24 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 10:23:24 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 10:23:24 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 10:23:24 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 10:23:24 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1270: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec 06 10:23:24 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:23:24 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:23:24 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:23:24.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:23:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] _maybe_adjust
Dec 06 10:23:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:23:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 06 10:23:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:23:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec 06 10:23:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:23:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 10:23:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:23:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 10:23:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:23:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Dec 06 10:23:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:23:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Dec 06 10:23:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:23:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 10:23:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:23:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec 06 10:23:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:23:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 06 10:23:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:23:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec 06 10:23:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:23:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 10:23:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:23:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
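Dividing each logged pg target by its pool's capacity ratio yields the same constant, 300, for every pool above (the bias-4.0 pools fit once the bias is multiplied in). So the autoscaler here is computing pg_target = capacity_ratio * bias * 300 raw PGs and then quantizing to the pool's current pg_num; 300 is plausibly mon_target_pg_per_osd = 100 across 3 OSDs, though the journal does not state that:

    # Worked check of two pg_autoscaler lines above:
    # pg_target = capacity_ratio * bias * raw_target, raw_target inferred
    # as 300 from the logged values (100 PGs/OSD * 3 OSDs is an assumption).
    for ratio, bias, logged in [
        (0.000665858301588852, 1.0, 0.19975749047665559),     # 'images'
        (5.087256625643029e-07, 4.0, 0.0006104707950771635),  # 'cephfs.cephfs.meta'
    ]:
        assert abs(ratio * bias * 300 - logged) < 1e-12
    print('pg targets consistent with raw_target = 300')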
Dec 06 10:23:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 06 10:23:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 10:23:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 10:23:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 10:23:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 10:23:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 06 10:23:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 10:23:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 10:23:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 10:23:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 10:23:24 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:23:24 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:23:24 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:23:24.679 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:23:24 compute-0 nova_compute[254819]: 2025-12-06 10:23:24.806 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:23:25 compute-0 ceph-mon[74327]: pgmap v1270: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec 06 10:23:25 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:23:26 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1271: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:23:26 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:23:26 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:23:26 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:23:26.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:23:26 compute-0 podman[292860]: 2025-12-06 10:23:26.432319831 +0000 UTC m=+0.062987307 container health_status a9cf33c1bf7891b3fdd9db4717ad5f3587fe9e5acf9171920c6d6334b70a502a (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec 06 10:23:26 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:23:26 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:23:26 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:23:26.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:23:27 compute-0 ceph-mon[74327]: pgmap v1271: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:23:27 compute-0 nova_compute[254819]: 2025-12-06 10:23:27.318 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:23:27 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:23:27.725Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 06 10:23:27 compute-0 ceph-mgr[74618]: [devicehealth INFO root] Check health
Dec 06 10:23:28 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1272: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec 06 10:23:28 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:23:28 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:23:28 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:23:28.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:23:28 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:23:28 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:23:28 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:23:28.685 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:23:29 compute-0 ceph-mon[74327]: pgmap v1272: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec 06 10:23:29 compute-0 nova_compute[254819]: 2025-12-06 10:23:29.841 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:23:30 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:23:30 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1273: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:23:30 compute-0 ceph-mon[74327]: pgmap v1273: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:23:30 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:23:30 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:23:30 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:23:30.409 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:23:30 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:23:30 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:23:30 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:23:30.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:23:30 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:23:30] "GET /metrics HTTP/1.1" 200 48457 "" "Prometheus/2.51.0"
Dec 06 10:23:30 compute-0 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:23:30] "GET /metrics HTTP/1.1" 200 48457 "" "Prometheus/2.51.0"
Dec 06 10:23:31 compute-0 podman[292884]: 2025-12-06 10:23:31.490296597 +0000 UTC m=+0.113856842 container health_status ed08b04f63d3695f0aa2f04fcc706897af4aa24f286fa3cd10a4323ee2c868ab (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Dec 06 10:23:32 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1274: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:23:32 compute-0 nova_compute[254819]: 2025-12-06 10:23:32.320 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:23:32 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:23:32 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:23:32 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:23:32.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:23:32 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:23:32 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:23:32 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:23:32.691 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:23:33 compute-0 ceph-mon[74327]: pgmap v1274: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:23:34 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1275: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec 06 10:23:34 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:23:34 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:23:34 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:23:34.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:23:34 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:23:34 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:23:34 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:23:34.694 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:23:34 compute-0 nova_compute[254819]: 2025-12-06 10:23:34.878 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:23:35 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:23:35 compute-0 ceph-mon[74327]: pgmap v1275: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec 06 10:23:36 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1276: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:23:36 compute-0 ceph-mon[74327]: pgmap v1276: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:23:36 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:23:36 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:23:36 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:23:36.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:23:36 compute-0 podman[292916]: 2025-12-06 10:23:36.423371624 +0000 UTC m=+0.055605636 container health_status ec1e66d2175087ad7a28a9a6b5a784e30128df3f5e268959d009e364cd5120f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 10:23:36 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:23:36 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:23:36 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:23:36.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:23:37 compute-0 sudo[292935]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 10:23:37 compute-0 sudo[292935]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:23:37 compute-0 sudo[292935]: pam_unix(sudo:session): session closed for user root
Dec 06 10:23:37 compute-0 nova_compute[254819]: 2025-12-06 10:23:37.375 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:23:37 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:23:37.726Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 06 10:23:38 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1277: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec 06 10:23:38 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:23:38 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:23:38 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:23:38.418 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:23:38 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:23:38 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:23:38 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:23:38.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:23:38 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 06 10:23:38 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 10:23:39 compute-0 ceph-mon[74327]: pgmap v1277: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec 06 10:23:39 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 10:23:39 compute-0 nova_compute[254819]: 2025-12-06 10:23:39.881 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:23:40 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:23:40 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1278: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:23:40 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:23:40 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:23:40 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:23:40.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:23:40 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:23:40 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:23:40 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:23:40.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:23:40 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:23:40] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Dec 06 10:23:40 compute-0 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:23:40] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Dec 06 10:23:41 compute-0 ceph-mon[74327]: pgmap v1278: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:23:42 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1279: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:23:42 compute-0 ceph-mon[74327]: pgmap v1279: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:23:42 compute-0 nova_compute[254819]: 2025-12-06 10:23:42.409 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:23:42 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:23:42 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:23:42 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:23:42.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:23:42 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:23:42 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:23:42 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:23:42.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:23:44 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1280: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec 06 10:23:44 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:23:44 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:23:44 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:23:44.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:23:44 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:23:44 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:23:44 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:23:44.710 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:23:44 compute-0 nova_compute[254819]: 2025-12-06 10:23:44.921 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:23:45 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:23:45 compute-0 ceph-mon[74327]: pgmap v1280: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec 06 10:23:46 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 06 10:23:46 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1770852714' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 10:23:46 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 06 10:23:46 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1770852714' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 10:23:46 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1281: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:23:46 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:23:46 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:23:46 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:23:46.427 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:23:46 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:23:46 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:23:46 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:23:46.713 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:23:46 compute-0 ceph-mon[74327]: from='client.? 192.168.122.10:0/1770852714' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 10:23:46 compute-0 ceph-mon[74327]: from='client.? 192.168.122.10:0/1770852714' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 10:23:47 compute-0 nova_compute[254819]: 2025-12-06 10:23:47.452 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:23:47 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:23:47.727Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 06 10:23:47 compute-0 ceph-mon[74327]: pgmap v1281: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:23:48 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1282: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec 06 10:23:48 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:23:48 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:23:48 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:23:48.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:23:48 compute-0 sudo[292972]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 10:23:48 compute-0 sudo[292972]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:23:48 compute-0 sudo[292972]: pam_unix(sudo:session): session closed for user root
Dec 06 10:23:48 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:23:48 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:23:48 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:23:48.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:23:48 compute-0 sudo[292997]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Dec 06 10:23:48 compute-0 sudo[292997]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:23:48 compute-0 ceph-mon[74327]: pgmap v1282: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec 06 10:23:49 compute-0 sudo[292997]: pam_unix(sudo:session): session closed for user root
Dec 06 10:23:49 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 06 10:23:49 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 10:23:49 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 06 10:23:49 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 10:23:49 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 06 10:23:49 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 10:23:49 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Dec 06 10:23:49 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 10:23:49 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 06 10:23:49 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 10:23:49 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 06 10:23:49 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 10:23:49 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 06 10:23:49 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 10:23:49 compute-0 sudo[293052]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 10:23:49 compute-0 sudo[293052]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:23:49 compute-0 sudo[293052]: pam_unix(sudo:session): session closed for user root
Dec 06 10:23:49 compute-0 sudo[293077]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 5ecd3f74-dade-5fc4-92ce-8950ae424258 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Dec 06 10:23:49 compute-0 sudo[293077]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:23:49 compute-0 nova_compute[254819]: 2025-12-06 10:23:49.925 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:23:49 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 10:23:49 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 10:23:49 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 10:23:49 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 10:23:49 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 10:23:49 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 10:23:49 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 10:23:50 compute-0 podman[293145]: 2025-12-06 10:23:50.058972195 +0000 UTC m=+0.039440545 container create e343b25fba4fdd2c73413a5d577be410d1586ebff4e61d0e599e085da54a91bf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_davinci, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 10:23:50 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:23:50 compute-0 ceph-mon[74327]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #75. Immutable memtables: 0.
Dec 06 10:23:50 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:23:50.079657) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 06 10:23:50 compute-0 ceph-mon[74327]: rocksdb: [db/flush_job.cc:856] [default] [JOB 41] Flushing memtable with next log file: 75
Dec 06 10:23:50 compute-0 ceph-mon[74327]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765016630079694, "job": 41, "event": "flush_started", "num_memtables": 1, "num_entries": 1539, "num_deletes": 250, "total_data_size": 2795645, "memory_usage": 2853456, "flush_reason": "Manual Compaction"}
Dec 06 10:23:50 compute-0 ceph-mon[74327]: rocksdb: [db/flush_job.cc:885] [default] [JOB 41] Level-0 flush table #76: started
Dec 06 10:23:50 compute-0 ceph-mon[74327]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765016630095998, "cf_name": "default", "job": 41, "event": "table_file_creation", "file_number": 76, "file_size": 2747624, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 34792, "largest_seqno": 36330, "table_properties": {"data_size": 2740571, "index_size": 4060, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1925, "raw_key_size": 13926, "raw_average_key_size": 18, "raw_value_size": 2726508, "raw_average_value_size": 3669, "num_data_blocks": 178, "num_entries": 743, "num_filter_entries": 743, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765016481, "oldest_key_time": 1765016481, "file_creation_time": 1765016630, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "423e8366-3852-4d2b-aa53-87abab31aff3", "db_session_id": "4WBX5WA2U4DRQ0QUUFCR", "orig_file_number": 76, "seqno_to_time_mapping": "N/A"}}
Dec 06 10:23:50 compute-0 ceph-mon[74327]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 41] Flush lasted 16380 microseconds, and 5194 cpu microseconds.
Dec 06 10:23:50 compute-0 ceph-mon[74327]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 10:23:50 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1283: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:23:50 compute-0 systemd[1]: Started libpod-conmon-e343b25fba4fdd2c73413a5d577be410d1586ebff4e61d0e599e085da54a91bf.scope.
Dec 06 10:23:50 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:23:50.096038) [db/flush_job.cc:967] [default] [JOB 41] Level-0 flush table #76: 2747624 bytes OK
Dec 06 10:23:50 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:23:50.096056) [db/memtable_list.cc:519] [default] Level-0 commit table #76 started
Dec 06 10:23:50 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:23:50.103668) [db/memtable_list.cc:722] [default] Level-0 commit table #76: memtable #1 done
Dec 06 10:23:50 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:23:50.103683) EVENT_LOG_v1 {"time_micros": 1765016630103678, "job": 41, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 06 10:23:50 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:23:50.103698) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 06 10:23:50 compute-0 ceph-mon[74327]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 41] Try to delete WAL files size 2789098, prev total WAL file size 2789098, number of live WAL files 2.
Dec 06 10:23:50 compute-0 ceph-mon[74327]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000072.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 10:23:50 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:23:50.104674) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6B7600323533' seq:72057594037927935, type:22 .. '6B7600353034' seq:0, type:0; will stop at (end)
Dec 06 10:23:50 compute-0 ceph-mon[74327]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 42] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 06 10:23:50 compute-0 ceph-mon[74327]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 41 Base level 0, inputs: [76(2683KB)], [74(12MB)]
Dec 06 10:23:50 compute-0 ceph-mon[74327]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765016630104705, "job": 42, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [76], "files_L6": [74], "score": -1, "input_data_size": 15892564, "oldest_snapshot_seqno": -1}
Dec 06 10:23:50 compute-0 systemd[1]: Started libcrun container.
Dec 06 10:23:50 compute-0 podman[293145]: 2025-12-06 10:23:50.042249181 +0000 UTC m=+0.022717541 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 06 10:23:50 compute-0 ceph-mon[74327]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 42] Generated table #77: 6850 keys, 14489121 bytes, temperature: kUnknown
Dec 06 10:23:50 compute-0 ceph-mon[74327]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765016630227653, "cf_name": "default", "job": 42, "event": "table_file_creation", "file_number": 77, "file_size": 14489121, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 14444812, "index_size": 26085, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 17157, "raw_key_size": 178738, "raw_average_key_size": 26, "raw_value_size": 14322951, "raw_average_value_size": 2090, "num_data_blocks": 1032, "num_entries": 6850, "num_filter_entries": 6850, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765013861, "oldest_key_time": 0, "file_creation_time": 1765016630, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "423e8366-3852-4d2b-aa53-87abab31aff3", "db_session_id": "4WBX5WA2U4DRQ0QUUFCR", "orig_file_number": 77, "seqno_to_time_mapping": "N/A"}}
Dec 06 10:23:50 compute-0 ceph-mon[74327]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 10:23:50 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:23:50.228126) [db/compaction/compaction_job.cc:1663] [default] [JOB 42] Compacted 1@0 + 1@6 files to L6 => 14489121 bytes
Dec 06 10:23:50 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:23:50.230166) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 129.1 rd, 117.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.6, 12.5 +0.0 blob) out(13.8 +0.0 blob), read-write-amplify(11.1) write-amplify(5.3) OK, records in: 7364, records dropped: 514 output_compression: NoCompression
Dec 06 10:23:50 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:23:50.230197) EVENT_LOG_v1 {"time_micros": 1765016630230183, "job": 42, "event": "compaction_finished", "compaction_time_micros": 123061, "compaction_time_cpu_micros": 29297, "output_level": 6, "num_output_files": 1, "total_output_size": 14489121, "num_input_records": 7364, "num_output_records": 6850, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 06 10:23:50 compute-0 ceph-mon[74327]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000076.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 10:23:50 compute-0 ceph-mon[74327]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765016630231195, "job": 42, "event": "table_file_deletion", "file_number": 76}
Dec 06 10:23:50 compute-0 podman[293145]: 2025-12-06 10:23:50.231518165 +0000 UTC m=+0.211986605 container init e343b25fba4fdd2c73413a5d577be410d1586ebff4e61d0e599e085da54a91bf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_davinci, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default)
Dec 06 10:23:50 compute-0 ceph-mon[74327]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000074.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 10:23:50 compute-0 ceph-mon[74327]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765016630235913, "job": 42, "event": "table_file_deletion", "file_number": 74}
Dec 06 10:23:50 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:23:50.104599) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 10:23:50 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:23:50.236000) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 10:23:50 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:23:50.236009) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 10:23:50 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:23:50.236012) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 10:23:50 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:23:50.236015) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 10:23:50 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:23:50.236020) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 10:23:50 compute-0 podman[293145]: 2025-12-06 10:23:50.238213747 +0000 UTC m=+0.218682097 container start e343b25fba4fdd2c73413a5d577be410d1586ebff4e61d0e599e085da54a91bf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_davinci, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec 06 10:23:50 compute-0 podman[293145]: 2025-12-06 10:23:50.241076616 +0000 UTC m=+0.221545056 container attach e343b25fba4fdd2c73413a5d577be410d1586ebff4e61d0e599e085da54a91bf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_davinci, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Dec 06 10:23:50 compute-0 adoring_davinci[293161]: 167 167
Dec 06 10:23:50 compute-0 systemd[1]: libpod-e343b25fba4fdd2c73413a5d577be410d1586ebff4e61d0e599e085da54a91bf.scope: Deactivated successfully.
Dec 06 10:23:50 compute-0 podman[293145]: 2025-12-06 10:23:50.246967216 +0000 UTC m=+0.227435606 container died e343b25fba4fdd2c73413a5d577be410d1586ebff4e61d0e599e085da54a91bf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_davinci, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Dec 06 10:23:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-b5d419014c9c7b5aa1e3b96c6211e17983c8f09231652b463e8943b3c8431531-merged.mount: Deactivated successfully.
Dec 06 10:23:50 compute-0 podman[293145]: 2025-12-06 10:23:50.299892207 +0000 UTC m=+0.280360567 container remove e343b25fba4fdd2c73413a5d577be410d1586ebff4e61d0e599e085da54a91bf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_davinci, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 10:23:50 compute-0 systemd[1]: libpod-conmon-e343b25fba4fdd2c73413a5d577be410d1586ebff4e61d0e599e085da54a91bf.scope: Deactivated successfully.
Dec 06 10:23:50 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:23:50 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:23:50 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:23:50.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:23:50 compute-0 podman[293187]: 2025-12-06 10:23:50.479433688 +0000 UTC m=+0.050809886 container create 55593bf8c79440a821f12b970f00b532da7b1c110e77d2c36618a677f3441342 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_panini, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 10:23:50 compute-0 systemd[1]: Started libpod-conmon-55593bf8c79440a821f12b970f00b532da7b1c110e77d2c36618a677f3441342.scope.
Dec 06 10:23:50 compute-0 podman[293187]: 2025-12-06 10:23:50.463962446 +0000 UTC m=+0.035338664 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 06 10:23:50 compute-0 systemd[1]: Started libcrun container.
Dec 06 10:23:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/584bc8de0e3dcc1fc56fe09be1c06172292c914286c9db79ffc8cd90ca66aa43/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 10:23:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/584bc8de0e3dcc1fc56fe09be1c06172292c914286c9db79ffc8cd90ca66aa43/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 10:23:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/584bc8de0e3dcc1fc56fe09be1c06172292c914286c9db79ffc8cd90ca66aa43/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 10:23:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/584bc8de0e3dcc1fc56fe09be1c06172292c914286c9db79ffc8cd90ca66aa43/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 10:23:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/584bc8de0e3dcc1fc56fe09be1c06172292c914286c9db79ffc8cd90ca66aa43/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 06 10:23:50 compute-0 podman[293187]: 2025-12-06 10:23:50.591576562 +0000 UTC m=+0.162952780 container init 55593bf8c79440a821f12b970f00b532da7b1c110e77d2c36618a677f3441342 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_panini, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec 06 10:23:50 compute-0 podman[293187]: 2025-12-06 10:23:50.604711429 +0000 UTC m=+0.176087667 container start 55593bf8c79440a821f12b970f00b532da7b1c110e77d2c36618a677f3441342 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_panini, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default)
Dec 06 10:23:50 compute-0 podman[293187]: 2025-12-06 10:23:50.611754891 +0000 UTC m=+0.183131099 container attach 55593bf8c79440a821f12b970f00b532da7b1c110e77d2c36618a677f3441342 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_panini, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec 06 10:23:50 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:23:50 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:23:50 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:23:50.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:23:50 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:23:50] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Dec 06 10:23:50 compute-0 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:23:50] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Dec 06 10:23:50 compute-0 inspiring_panini[293203]: --> passed data devices: 0 physical, 1 LVM
Dec 06 10:23:50 compute-0 inspiring_panini[293203]: --> All data devices are unavailable
Dec 06 10:23:50 compute-0 systemd[1]: libpod-55593bf8c79440a821f12b970f00b532da7b1c110e77d2c36618a677f3441342.scope: Deactivated successfully.
Dec 06 10:23:50 compute-0 podman[293187]: 2025-12-06 10:23:50.971344314 +0000 UTC m=+0.542720552 container died 55593bf8c79440a821f12b970f00b532da7b1c110e77d2c36618a677f3441342 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_panini, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Dec 06 10:23:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-584bc8de0e3dcc1fc56fe09be1c06172292c914286c9db79ffc8cd90ca66aa43-merged.mount: Deactivated successfully.
Dec 06 10:23:51 compute-0 podman[293187]: 2025-12-06 10:23:51.036361425 +0000 UTC m=+0.607737623 container remove 55593bf8c79440a821f12b970f00b532da7b1c110e77d2c36618a677f3441342 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_panini, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 10:23:51 compute-0 systemd[1]: libpod-conmon-55593bf8c79440a821f12b970f00b532da7b1c110e77d2c36618a677f3441342.scope: Deactivated successfully.
Dec 06 10:23:51 compute-0 sudo[293077]: pam_unix(sudo:session): session closed for user root
Dec 06 10:23:51 compute-0 ceph-mon[74327]: pgmap v1283: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:23:51 compute-0 sudo[293229]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 10:23:51 compute-0 sudo[293229]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:23:51 compute-0 sudo[293229]: pam_unix(sudo:session): session closed for user root
Dec 06 10:23:51 compute-0 sudo[293254]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 5ecd3f74-dade-5fc4-92ce-8950ae424258 -- lvm list --format json
Dec 06 10:23:51 compute-0 sudo[293254]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:23:51 compute-0 podman[293319]: 2025-12-06 10:23:51.607784858 +0000 UTC m=+0.039326402 container create b3e25128a71f96d42ff2f0863233ea2f63537def444cf2f5d876a25fa800e451 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_northcutt, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec 06 10:23:51 compute-0 systemd[1]: Started libpod-conmon-b3e25128a71f96d42ff2f0863233ea2f63537def444cf2f5d876a25fa800e451.scope.
Dec 06 10:23:51 compute-0 systemd[1]: Started libcrun container.
Dec 06 10:23:51 compute-0 podman[293319]: 2025-12-06 10:23:51.590862767 +0000 UTC m=+0.022404341 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 06 10:23:51 compute-0 podman[293319]: 2025-12-06 10:23:51.686649337 +0000 UTC m=+0.118190901 container init b3e25128a71f96d42ff2f0863233ea2f63537def444cf2f5d876a25fa800e451 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_northcutt, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1)
Dec 06 10:23:51 compute-0 podman[293319]: 2025-12-06 10:23:51.694271484 +0000 UTC m=+0.125813028 container start b3e25128a71f96d42ff2f0863233ea2f63537def444cf2f5d876a25fa800e451 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_northcutt, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 10:23:51 compute-0 podman[293319]: 2025-12-06 10:23:51.697672736 +0000 UTC m=+0.129214280 container attach b3e25128a71f96d42ff2f0863233ea2f63537def444cf2f5d876a25fa800e451 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_northcutt, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 10:23:51 compute-0 exciting_northcutt[293335]: 167 167
Dec 06 10:23:51 compute-0 systemd[1]: libpod-b3e25128a71f96d42ff2f0863233ea2f63537def444cf2f5d876a25fa800e451.scope: Deactivated successfully.
Dec 06 10:23:51 compute-0 nova_compute[254819]: 2025-12-06 10:23:51.748 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:23:51 compute-0 podman[293342]: 2025-12-06 10:23:51.754962347 +0000 UTC m=+0.036416893 container died b3e25128a71f96d42ff2f0863233ea2f63537def444cf2f5d876a25fa800e451 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_northcutt, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Dec 06 10:23:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-a707ff5d741dd0bd73e723b37d97bd3b82f62d64fe69bbc22d43ead87f476c9a-merged.mount: Deactivated successfully.
Dec 06 10:23:51 compute-0 podman[293342]: 2025-12-06 10:23:51.790836634 +0000 UTC m=+0.072291170 container remove b3e25128a71f96d42ff2f0863233ea2f63537def444cf2f5d876a25fa800e451 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_northcutt, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 10:23:51 compute-0 systemd[1]: libpod-conmon-b3e25128a71f96d42ff2f0863233ea2f63537def444cf2f5d876a25fa800e451.scope: Deactivated successfully.
Dec 06 10:23:51 compute-0 podman[293364]: 2025-12-06 10:23:51.98054763 +0000 UTC m=+0.041310305 container create 16ed3fa523ab72ab24ef8211863776b2a9214f6654003e9803348ea63503942e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_euclid, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1)
Dec 06 10:23:52 compute-0 systemd[1]: Started libpod-conmon-16ed3fa523ab72ab24ef8211863776b2a9214f6654003e9803348ea63503942e.scope.
Dec 06 10:23:52 compute-0 systemd[1]: Started libcrun container.
Dec 06 10:23:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f4f27b8828d6561b5a3aba794ecefe2473c19c22c55f8b80f596cef6722791d9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 10:23:52 compute-0 podman[293364]: 2025-12-06 10:23:51.962932501 +0000 UTC m=+0.023695186 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 06 10:23:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f4f27b8828d6561b5a3aba794ecefe2473c19c22c55f8b80f596cef6722791d9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 10:23:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f4f27b8828d6561b5a3aba794ecefe2473c19c22c55f8b80f596cef6722791d9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 10:23:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f4f27b8828d6561b5a3aba794ecefe2473c19c22c55f8b80f596cef6722791d9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 10:23:52 compute-0 podman[293364]: 2025-12-06 10:23:52.068035394 +0000 UTC m=+0.128798109 container init 16ed3fa523ab72ab24ef8211863776b2a9214f6654003e9803348ea63503942e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_euclid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, OSD_FLAVOR=default)
Dec 06 10:23:52 compute-0 podman[293364]: 2025-12-06 10:23:52.079912447 +0000 UTC m=+0.140675132 container start 16ed3fa523ab72ab24ef8211863776b2a9214f6654003e9803348ea63503942e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_euclid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 10:23:52 compute-0 podman[293364]: 2025-12-06 10:23:52.083644648 +0000 UTC m=+0.144407323 container attach 16ed3fa523ab72ab24ef8211863776b2a9214f6654003e9803348ea63503942e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_euclid, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 10:23:52 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1284: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:23:52 compute-0 boring_euclid[293380]: {
Dec 06 10:23:52 compute-0 boring_euclid[293380]:     "1": [
Dec 06 10:23:52 compute-0 boring_euclid[293380]:         {
Dec 06 10:23:52 compute-0 boring_euclid[293380]:             "devices": [
Dec 06 10:23:52 compute-0 boring_euclid[293380]:                 "/dev/loop3"
Dec 06 10:23:52 compute-0 boring_euclid[293380]:             ],
Dec 06 10:23:52 compute-0 boring_euclid[293380]:             "lv_name": "ceph_lv0",
Dec 06 10:23:52 compute-0 boring_euclid[293380]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 10:23:52 compute-0 boring_euclid[293380]:             "lv_size": "21470642176",
Dec 06 10:23:52 compute-0 boring_euclid[293380]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=a55pcS-v3Vt-CJ4W-fqCV-Baze-9VlY-jG9prS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=5ecd3f74-dade-5fc4-92ce-8950ae424258,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=7899c4d8-edb4-4836-b838-c4aa702ad7af,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 06 10:23:52 compute-0 boring_euclid[293380]:             "lv_uuid": "a55pcS-v3Vt-CJ4W-fqCV-Baze-9VlY-jG9prS",
Dec 06 10:23:52 compute-0 boring_euclid[293380]:             "name": "ceph_lv0",
Dec 06 10:23:52 compute-0 boring_euclid[293380]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 10:23:52 compute-0 boring_euclid[293380]:             "tags": {
Dec 06 10:23:52 compute-0 boring_euclid[293380]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 06 10:23:52 compute-0 boring_euclid[293380]:                 "ceph.block_uuid": "a55pcS-v3Vt-CJ4W-fqCV-Baze-9VlY-jG9prS",
Dec 06 10:23:52 compute-0 boring_euclid[293380]:                 "ceph.cephx_lockbox_secret": "",
Dec 06 10:23:52 compute-0 boring_euclid[293380]:                 "ceph.cluster_fsid": "5ecd3f74-dade-5fc4-92ce-8950ae424258",
Dec 06 10:23:52 compute-0 boring_euclid[293380]:                 "ceph.cluster_name": "ceph",
Dec 06 10:23:52 compute-0 boring_euclid[293380]:                 "ceph.crush_device_class": "",
Dec 06 10:23:52 compute-0 boring_euclid[293380]:                 "ceph.encrypted": "0",
Dec 06 10:23:52 compute-0 boring_euclid[293380]:                 "ceph.osd_fsid": "7899c4d8-edb4-4836-b838-c4aa702ad7af",
Dec 06 10:23:52 compute-0 boring_euclid[293380]:                 "ceph.osd_id": "1",
Dec 06 10:23:52 compute-0 boring_euclid[293380]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 06 10:23:52 compute-0 boring_euclid[293380]:                 "ceph.type": "block",
Dec 06 10:23:52 compute-0 boring_euclid[293380]:                 "ceph.vdo": "0",
Dec 06 10:23:52 compute-0 boring_euclid[293380]:                 "ceph.with_tpm": "0"
Dec 06 10:23:52 compute-0 boring_euclid[293380]:             },
Dec 06 10:23:52 compute-0 boring_euclid[293380]:             "type": "block",
Dec 06 10:23:52 compute-0 boring_euclid[293380]:             "vg_name": "ceph_vg0"
Dec 06 10:23:52 compute-0 boring_euclid[293380]:         }
Dec 06 10:23:52 compute-0 boring_euclid[293380]:     ]
Dec 06 10:23:52 compute-0 boring_euclid[293380]: }
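Editor's note: boring_euclid is the `ceph-volume lvm list --format json` run dispatched at 10:23:51; its payload maps OSD 1 to /dev/ceph_vg0/ceph_lv0 backed by /dev/loop3. A short sketch parsing exactly that structure (field names taken from the JSON above):

    # Parse the `ceph-volume lvm list --format json` payload printed above
    # into (osd_id, lv_path, devices) tuples.
    import json

    def osds_from_lvm_list(payload: str):
        for osd_id, lvs in json.loads(payload).items():
            for lv in lvs:
                if lv.get("type") == "block":
                    yield int(osd_id), lv["lv_path"], lv["devices"]

    # With the output above this yields:
    #   (1, "/dev/ceph_vg0/ceph_lv0", ["/dev/loop3"])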
Dec 06 10:23:52 compute-0 systemd[1]: libpod-16ed3fa523ab72ab24ef8211863776b2a9214f6654003e9803348ea63503942e.scope: Deactivated successfully.
Dec 06 10:23:52 compute-0 podman[293364]: 2025-12-06 10:23:52.371099648 +0000 UTC m=+0.431862313 container died 16ed3fa523ab72ab24ef8211863776b2a9214f6654003e9803348ea63503942e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_euclid, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Dec 06 10:23:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-f4f27b8828d6561b5a3aba794ecefe2473c19c22c55f8b80f596cef6722791d9-merged.mount: Deactivated successfully.
Dec 06 10:23:52 compute-0 podman[293364]: 2025-12-06 10:23:52.41893307 +0000 UTC m=+0.479695735 container remove 16ed3fa523ab72ab24ef8211863776b2a9214f6654003e9803348ea63503942e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_euclid, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec 06 10:23:52 compute-0 systemd[1]: libpod-conmon-16ed3fa523ab72ab24ef8211863776b2a9214f6654003e9803348ea63503942e.scope: Deactivated successfully.
Dec 06 10:23:52 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:23:52 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:23:52 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:23:52.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:23:52 compute-0 sudo[293254]: pam_unix(sudo:session): session closed for user root
Dec 06 10:23:52 compute-0 nova_compute[254819]: 2025-12-06 10:23:52.456 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:23:52 compute-0 sudo[293401]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 10:23:52 compute-0 sudo[293401]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:23:52 compute-0 sudo[293401]: pam_unix(sudo:session): session closed for user root
Dec 06 10:23:52 compute-0 sudo[293426]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 5ecd3f74-dade-5fc4-92ce-8950ae424258 -- raw list --format json
Dec 06 10:23:52 compute-0 sudo[293426]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:23:52 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:23:52 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:23:52 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:23:52.721 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:23:52 compute-0 nova_compute[254819]: 2025-12-06 10:23:52.748 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:23:52 compute-0 nova_compute[254819]: 2025-12-06 10:23:52.779 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:23:52 compute-0 nova_compute[254819]: 2025-12-06 10:23:52.779 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:23:52 compute-0 nova_compute[254819]: 2025-12-06 10:23:52.779 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:23:52 compute-0 nova_compute[254819]: 2025-12-06 10:23:52.779 254824 DEBUG nova.compute.resource_tracker [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 10:23:52 compute-0 nova_compute[254819]: 2025-12-06 10:23:52.780 254824 DEBUG oslo_concurrency.processutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 10:23:53 compute-0 podman[293512]: 2025-12-06 10:23:53.022411636 +0000 UTC m=+0.053970300 container create 9ac7fd11fd123aa5a98ffb28a7c5fc6b9495328d00a6707e8bbfa9f04a4ced26 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_ardinghelli, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 10:23:53 compute-0 systemd[1]: Started libpod-conmon-9ac7fd11fd123aa5a98ffb28a7c5fc6b9495328d00a6707e8bbfa9f04a4ced26.scope.
Dec 06 10:23:53 compute-0 podman[293512]: 2025-12-06 10:23:52.99241057 +0000 UTC m=+0.023969274 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 06 10:23:53 compute-0 systemd[1]: Started libcrun container.
Dec 06 10:23:53 compute-0 podman[293512]: 2025-12-06 10:23:53.10771122 +0000 UTC m=+0.139269884 container init 9ac7fd11fd123aa5a98ffb28a7c5fc6b9495328d00a6707e8bbfa9f04a4ced26 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_ardinghelli, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default)
Dec 06 10:23:53 compute-0 podman[293512]: 2025-12-06 10:23:53.114514535 +0000 UTC m=+0.146073149 container start 9ac7fd11fd123aa5a98ffb28a7c5fc6b9495328d00a6707e8bbfa9f04a4ced26 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_ardinghelli, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 10:23:53 compute-0 podman[293512]: 2025-12-06 10:23:53.118786572 +0000 UTC m=+0.150345286 container attach 9ac7fd11fd123aa5a98ffb28a7c5fc6b9495328d00a6707e8bbfa9f04a4ced26 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_ardinghelli, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Dec 06 10:23:53 compute-0 systemd[1]: libpod-9ac7fd11fd123aa5a98ffb28a7c5fc6b9495328d00a6707e8bbfa9f04a4ced26.scope: Deactivated successfully.
Dec 06 10:23:53 compute-0 condescending_ardinghelli[293528]: 167 167
Dec 06 10:23:53 compute-0 conmon[293528]: conmon 9ac7fd11fd123aa5a98f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-9ac7fd11fd123aa5a98ffb28a7c5fc6b9495328d00a6707e8bbfa9f04a4ced26.scope/container/memory.events
Dec 06 10:23:53 compute-0 podman[293512]: 2025-12-06 10:23:53.122123282 +0000 UTC m=+0.153681926 container died 9ac7fd11fd123aa5a98ffb28a7c5fc6b9495328d00a6707e8bbfa9f04a4ced26 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_ardinghelli, ceph=True, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 10:23:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-580e7b07db2c4681058c9940fa9c8dfbb2e02452fcfc5ca3a316ca7fbfa3e689-merged.mount: Deactivated successfully.
Dec 06 10:23:53 compute-0 podman[293512]: 2025-12-06 10:23:53.161158335 +0000 UTC m=+0.192716959 container remove 9ac7fd11fd123aa5a98ffb28a7c5fc6b9495328d00a6707e8bbfa9f04a4ced26 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_ardinghelli, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Dec 06 10:23:53 compute-0 ceph-mon[74327]: pgmap v1284: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:23:53 compute-0 systemd[1]: libpod-conmon-9ac7fd11fd123aa5a98ffb28a7c5fc6b9495328d00a6707e8bbfa9f04a4ced26.scope: Deactivated successfully.
Dec 06 10:23:53 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 06 10:23:53 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1252620075' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 10:23:53 compute-0 nova_compute[254819]: 2025-12-06 10:23:53.237 254824 DEBUG oslo_concurrency.processutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
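Editor's note: the resource tracker sizes its RBD-backed disk pool by shelling out to `ceph df` (a 0.457 s round trip here). A hedged sketch of the same call; the `stats` keys follow the usual `ceph df --format=json` layout, which is an assumption about this cluster's output rather than something visible in the log:

    # Sketch of the `ceph df` call logged above, assuming the standard
    # JSON layout {"stats": {"total_bytes": ..., "total_avail_bytes": ...}}.
    import json
    import subprocess

    def ceph_free_gib(conf="/etc/ceph/ceph.conf", user="openstack") -> float:
        out = subprocess.run(
            ["ceph", "df", "--format=json", "--id", user, "--conf", conf],
            check=True, capture_output=True, text=True,
        ).stdout
        return json.loads(out)["stats"]["total_avail_bytes"] / 2**30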
Dec 06 10:23:53 compute-0 podman[293552]: 2025-12-06 10:23:53.412736298 +0000 UTC m=+0.057947930 container create 111c2187096c434c949e9983f1f83179c1244d5eaf995c2ed5044d0b9b808d7e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_solomon, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 10:23:53 compute-0 nova_compute[254819]: 2025-12-06 10:23:53.435 254824 WARNING nova.virt.libvirt.driver [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 10:23:53 compute-0 nova_compute[254819]: 2025-12-06 10:23:53.437 254824 DEBUG nova.compute.resource_tracker [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4445MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 06 10:23:53 compute-0 nova_compute[254819]: 2025-12-06 10:23:53.438 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:23:53 compute-0 nova_compute[254819]: 2025-12-06 10:23:53.438 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:23:53 compute-0 systemd[1]: Started libpod-conmon-111c2187096c434c949e9983f1f83179c1244d5eaf995c2ed5044d0b9b808d7e.scope.
Dec 06 10:23:53 compute-0 podman[293552]: 2025-12-06 10:23:53.387086559 +0000 UTC m=+0.032298231 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 06 10:23:53 compute-0 systemd[1]: Started libcrun container.
Dec 06 10:23:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9637a56520efda796f0016ab1fffa37434efd649b203d37dadb9013ce382367/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 10:23:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9637a56520efda796f0016ab1fffa37434efd649b203d37dadb9013ce382367/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 10:23:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9637a56520efda796f0016ab1fffa37434efd649b203d37dadb9013ce382367/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 10:23:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9637a56520efda796f0016ab1fffa37434efd649b203d37dadb9013ce382367/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 10:23:53 compute-0 podman[293552]: 2025-12-06 10:23:53.507702044 +0000 UTC m=+0.152913706 container init 111c2187096c434c949e9983f1f83179c1244d5eaf995c2ed5044d0b9b808d7e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_solomon, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Dec 06 10:23:53 compute-0 nova_compute[254819]: 2025-12-06 10:23:53.511 254824 DEBUG nova.compute.resource_tracker [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 10:23:53 compute-0 nova_compute[254819]: 2025-12-06 10:23:53.512 254824 DEBUG nova.compute.resource_tracker [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 06 10:23:53 compute-0 podman[293552]: 2025-12-06 10:23:53.516304088 +0000 UTC m=+0.161515720 container start 111c2187096c434c949e9983f1f83179c1244d5eaf995c2ed5044d0b9b808d7e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_solomon, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250325)
Dec 06 10:23:53 compute-0 podman[293552]: 2025-12-06 10:23:53.519752422 +0000 UTC m=+0.164964044 container attach 111c2187096c434c949e9983f1f83179c1244d5eaf995c2ed5044d0b9b808d7e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_solomon, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 10:23:53 compute-0 nova_compute[254819]: 2025-12-06 10:23:53.529 254824 DEBUG oslo_concurrency.processutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 10:23:53 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 06 10:23:53 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3964484987' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 10:23:53 compute-0 nova_compute[254819]: 2025-12-06 10:23:53.972 254824 DEBUG oslo_concurrency.processutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 10:23:53 compute-0 nova_compute[254819]: 2025-12-06 10:23:53.978 254824 DEBUG nova.compute.provider_tree [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Inventory has not changed in ProviderTree for provider: 06a9c7d1-c74c-47ea-9e97-16acfab6aa88 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 10:23:53 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 06 10:23:53 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 10:23:53 compute-0 nova_compute[254819]: 2025-12-06 10:23:53.998 254824 DEBUG nova.scheduler.client.report [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Inventory has not changed for provider 06a9c7d1-c74c-47ea-9e97-16acfab6aa88 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 10:23:53 compute-0 nova_compute[254819]: 2025-12-06 10:23:53.999 254824 DEBUG nova.compute.resource_tracker [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 10:23:53 compute-0 nova_compute[254819]: 2025-12-06 10:23:53.999 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.562s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
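Editor's note: the inventory reported to placement above already folds reservations and allocation ratios into effective capacity as (total - reserved) * allocation_ratio, so this host schedules as 32 VCPUs, 7168 MB of RAM, and 52.2 GB of disk. A worked check using only the numbers from the log line:

    # Effective capacity placement derives from the inventory logged above:
    # capacity = (total - reserved) * allocation_ratio.
    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        print(rc, (inv["total"] - inv["reserved"]) * inv["allocation_ratio"])
    # VCPU 32.0, MEMORY_MB 7168.0, DISK_GB 52.2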
Dec 06 10:23:54 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 10:23:54 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 10:23:54 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 10:23:54 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 10:23:54 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 10:23:54 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 10:23:54 compute-0 lvm[293666]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 06 10:23:54 compute-0 lvm[293666]: VG ceph_vg0 finished
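Editor's note: the lvm[293666] pair is LVM event-based activation (udev-triggered pvscan) noting that /dev/loop3 completes VG ceph_vg0. One way to confirm from a script that a VG has no missing PVs; `vgs --reportformat json` is standard on EL9, though the exact report field used here is an assumption:

    # Sketch: verify VG completeness, matching the "VG ceph_vg0 is
    # complete" message above. The vg_missing_pv_count field name is
    # assumed from lvm2's report fields; JSON mode returns string values.
    import json
    import subprocess

    def vg_is_complete(vg: str = "ceph_vg0") -> bool:
        out = subprocess.run(
            ["vgs", "--reportformat", "json",
             "-o", "vg_name,vg_missing_pv_count", vg],
            check=True, capture_output=True, text=True,
        ).stdout
        row = json.loads(out)["report"][0]["vg"][0]
        return row["vg_missing_pv_count"] == "0"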
Dec 06 10:23:54 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1285: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec 06 10:23:54 compute-0 determined_solomon[293568]: {}
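Editor's note: determined_solomon is the `ceph-volume raw list` run from 10:23:52; it prints an empty object while `lvm list` reported OSD 1, consistent with the OSD living on an LVM LV rather than a raw device (again an inference from the two outputs). A sketch folding both listings into one map; the raw-list field names are assumed from ceph-volume's typical schema, since this log only shows `{}`:

    # Fold the two ceph-volume listings logged above into {osd_id: path}.
    # Raw-list entries are assumed keyed by OSD UUID with "osd_id" and
    # "device" fields; that schema is not visible in this log.
    import json

    def osd_paths(lvm_json: str, raw_json: str) -> dict:
        paths = {}
        for osd_id, lvs in json.loads(lvm_json).items():
            for lv in lvs:
                paths[int(osd_id)] = lv["lv_path"]
        for dev in json.loads(raw_json).values():
            paths[int(dev["osd_id"])] = dev["device"]
        return paths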
Dec 06 10:23:54 compute-0 ceph-mon[74327]: from='client.? 192.168.122.100:0/1252620075' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 10:23:54 compute-0 ceph-mon[74327]: from='client.? 192.168.122.100:0/3964484987' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 10:23:54 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 10:23:54 compute-0 systemd[1]: libpod-111c2187096c434c949e9983f1f83179c1244d5eaf995c2ed5044d0b9b808d7e.scope: Deactivated successfully.
Dec 06 10:23:54 compute-0 podman[293669]: 2025-12-06 10:23:54.234361935 +0000 UTC m=+0.027063949 container died 111c2187096c434c949e9983f1f83179c1244d5eaf995c2ed5044d0b9b808d7e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_solomon, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_REF=squid)
Dec 06 10:23:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:23:54.251 162267 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:23:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:23:54.252 162267 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:23:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:23:54.253 162267 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:23:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-d9637a56520efda796f0016ab1fffa37434efd649b203d37dadb9013ce382367-merged.mount: Deactivated successfully.
Dec 06 10:23:54 compute-0 podman[293669]: 2025-12-06 10:23:54.276075711 +0000 UTC m=+0.068777705 container remove 111c2187096c434c949e9983f1f83179c1244d5eaf995c2ed5044d0b9b808d7e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_solomon, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325)
Dec 06 10:23:54 compute-0 systemd[1]: libpod-conmon-111c2187096c434c949e9983f1f83179c1244d5eaf995c2ed5044d0b9b808d7e.scope: Deactivated successfully.
Dec 06 10:23:54 compute-0 sudo[293426]: pam_unix(sudo:session): session closed for user root
Dec 06 10:23:54 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 06 10:23:54 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 10:23:54 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 06 10:23:54 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 10:23:54 compute-0 sudo[293684]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 10:23:54 compute-0 sudo[293684]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:23:54 compute-0 sudo[293684]: pam_unix(sudo:session): session closed for user root
Dec 06 10:23:54 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:23:54 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:23:54 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:23:54.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:23:54 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:23:54 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:23:54 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:23:54.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:23:54 compute-0 nova_compute[254819]: 2025-12-06 10:23:54.929 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:23:55 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:23:55 compute-0 ceph-mon[74327]: pgmap v1285: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec 06 10:23:55 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 10:23:55 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 10:23:55 compute-0 ceph-mon[74327]: from='client.? 192.168.122.101:0/3457887194' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 10:23:56 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1286: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:23:56 compute-0 ceph-mon[74327]: from='client.? 192.168.122.101:0/1175837351' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 10:23:56 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:23:56 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 10:23:56 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:23:56.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 10:23:56 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:23:56 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:23:56 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:23:56.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:23:57 compute-0 nova_compute[254819]: 2025-12-06 10:23:57.000 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:23:57 compute-0 nova_compute[254819]: 2025-12-06 10:23:57.001 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:23:57 compute-0 ceph-mon[74327]: pgmap v1286: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:23:57 compute-0 sudo[293711]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 10:23:57 compute-0 sudo[293711]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:23:57 compute-0 sudo[293711]: pam_unix(sudo:session): session closed for user root
Dec 06 10:23:57 compute-0 nova_compute[254819]: 2025-12-06 10:23:57.494 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:23:57 compute-0 podman[293734]: 2025-12-06 10:23:57.498257268 +0000 UTC m=+0.111950140 container health_status a9cf33c1bf7891b3fdd9db4717ad5f3587fe9e5acf9171920c6d6334b70a502a (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
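Editor's note: the multipathd health_status event embeds the container's kolla config, including its healthcheck ('test': '/openstack/healthcheck'). The same check can be re-run by hand to reproduce the healthy/unhealthy verdict; `podman exec` is standard CLI and the container name comes from the log:

    # Re-run the healthcheck command the podman event above reports for
    # the `multipathd` container.
    import subprocess

    result = subprocess.run(
        ["podman", "exec", "multipathd", "/openstack/healthcheck"],
        capture_output=True, text=True,
    )
    print("healthy" if result.returncode == 0 else
          f"unhealthy rc={result.returncode}: {result.stderr.strip()}")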
Dec 06 10:23:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:23:57.728Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
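Editor's note: Alertmanager gives up notifying the ceph-dashboard webhook on compute-1 and compute-2 after two attempts ("context deadline exceeded", i.e. the POSTs timed out). A quick reachability probe against the exact URLs from the log line; the 5-second timeout is an arbitrary choice:

    # Probe the receiver endpoints Alertmanager reports as unreachable
    # above. URLs are copied verbatim from the log line.
    import urllib.error
    import urllib.request

    URLS = [
        "http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver",
        "http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver",
    ]
    for url in URLS:
        try:
            with urllib.request.urlopen(url, timeout=5) as resp:
                print(url, "->", resp.status)
        except urllib.error.HTTPError as exc:
            print(url, "-> reachable, HTTP", exc.code)  # server answered
        except OSError as exc:
            print(url, "-> unreachable:", exc)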
Dec 06 10:23:57 compute-0 nova_compute[254819]: 2025-12-06 10:23:57.749 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:23:58 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1287: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec 06 10:23:58 compute-0 ceph-mon[74327]: pgmap v1287: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec 06 10:23:58 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:23:58 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:23:58 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:23:58.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:23:58 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:23:58 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:23:58 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:23:58.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:23:58 compute-0 nova_compute[254819]: 2025-12-06 10:23:58.748 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:23:58 compute-0 nova_compute[254819]: 2025-12-06 10:23:58.749 254824 DEBUG nova.compute.manager [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 06 10:23:58 compute-0 nova_compute[254819]: 2025-12-06 10:23:58.749 254824 DEBUG nova.compute.manager [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 06 10:23:58 compute-0 nova_compute[254819]: 2025-12-06 10:23:58.766 254824 DEBUG nova.compute.manager [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 06 10:23:58 compute-0 nova_compute[254819]: 2025-12-06 10:23:58.767 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:23:58 compute-0 nova_compute[254819]: 2025-12-06 10:23:58.767 254824 DEBUG nova.compute.manager [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 06 10:23:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[105048]: logger=cleanup t=2025-12-06T10:23:59.60479229Z level=info msg="Completed cleanup jobs" duration=41.225113ms
Dec 06 10:23:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[105048]: logger=grafana.update.checker t=2025-12-06T10:23:59.724261084Z level=info msg="Update check succeeded" duration=58.671018ms
Dec 06 10:23:59 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0[105048]: logger=plugins.update.checker t=2025-12-06T10:23:59.726778553Z level=info msg="Update check succeeded" duration=99.954433ms
Dec 06 10:23:59 compute-0 nova_compute[254819]: 2025-12-06 10:23:59.749 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:23:59 compute-0 nova_compute[254819]: 2025-12-06 10:23:59.749 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:23:59 compute-0 nova_compute[254819]: 2025-12-06 10:23:59.933 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:24:00 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:24:00 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1288: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:24:00 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:24:00 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:24:00 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:24:00.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:24:00 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:24:00 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:24:00 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:24:00.734 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:24:00 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:24:00] "GET /metrics HTTP/1.1" 200 48456 "" "Prometheus/2.51.0"
Dec 06 10:24:00 compute-0 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:24:00] "GET /metrics HTTP/1.1" 200 48456 "" "Prometheus/2.51.0"
Dec 06 10:24:01 compute-0 ceph-mon[74327]: pgmap v1288: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:24:02 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1289: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:24:02 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:24:02 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:24:02 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:24:02.446 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:24:02 compute-0 podman[293762]: 2025-12-06 10:24:02.451627256 +0000 UTC m=+0.086099816 container health_status ed08b04f63d3695f0aa2f04fcc706897af4aa24f286fa3cd10a4323ee2c868ab (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller)
Dec 06 10:24:02 compute-0 nova_compute[254819]: 2025-12-06 10:24:02.497 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:24:02 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:24:02 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:24:02 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:24:02.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:24:03 compute-0 ceph-mon[74327]: pgmap v1289: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:24:04 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1290: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec 06 10:24:04 compute-0 ceph-mon[74327]: from='client.? 192.168.122.102:0/747380932' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 10:24:04 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:24:04 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:24:04 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:24:04.448 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:24:04 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:24:04 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:24:04 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:24:04.741 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:24:04 compute-0 nova_compute[254819]: 2025-12-06 10:24:04.974 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:24:05 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:24:05 compute-0 ceph-mon[74327]: pgmap v1290: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec 06 10:24:05 compute-0 ceph-mon[74327]: from='client.? 192.168.122.102:0/962126688' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 10:24:06 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1291: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:24:06 compute-0 ceph-mon[74327]: pgmap v1291: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:24:06 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:24:06 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:24:06 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:24:06.450 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:24:06 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:24:06 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:24:06 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:24:06.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:24:07 compute-0 podman[293792]: 2025-12-06 10:24:07.456090056 +0000 UTC m=+0.085054158 container health_status ec1e66d2175087ad7a28a9a6b5a784e30128df3f5e268959d009e364cd5120f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 10:24:07 compute-0 nova_compute[254819]: 2025-12-06 10:24:07.500 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:24:07 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:24:07.731Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 06 10:24:08 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1292: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec 06 10:24:08 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:24:08 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:24:08 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:24:08.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:24:08 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:24:08 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:24:08 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:24:08.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:24:08 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 06 10:24:08 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 10:24:09 compute-0 ceph-mon[74327]: pgmap v1292: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec 06 10:24:09 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 10:24:09 compute-0 nova_compute[254819]: 2025-12-06 10:24:09.979 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:24:10 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:24:10 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1293: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:24:10 compute-0 ceph-mon[74327]: pgmap v1293: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:24:10 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:24:10 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:24:10 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:24:10.455 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:24:10 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:24:10 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:24:10 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:24:10.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:24:10 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:24:10] "GET /metrics HTTP/1.1" 200 48456 "" "Prometheus/2.51.0"
Dec 06 10:24:10 compute-0 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:24:10] "GET /metrics HTTP/1.1" 200 48456 "" "Prometheus/2.51.0"
Dec 06 10:24:12 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1294: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:24:12 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:24:12 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:24:12 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:24:12.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:24:12 compute-0 nova_compute[254819]: 2025-12-06 10:24:12.500 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:24:12 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:24:12 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:24:12 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:24:12.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:24:13 compute-0 ceph-mon[74327]: pgmap v1294: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:24:14 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1295: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec 06 10:24:14 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:24:14 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:24:14 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:24:14.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:24:14 compute-0 ceph-mon[74327]: pgmap v1295: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec 06 10:24:14 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:24:14 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:24:14 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:24:14.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:24:14 compute-0 nova_compute[254819]: 2025-12-06 10:24:14.983 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:24:15 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:24:16 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1296: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:24:16 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:24:16 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 10:24:16 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:24:16.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 10:24:16 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:24:16 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:24:16 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:24:16.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:24:17 compute-0 ceph-mon[74327]: pgmap v1296: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:24:17 compute-0 sudo[293822]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 10:24:17 compute-0 sudo[293822]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:24:17 compute-0 sudo[293822]: pam_unix(sudo:session): session closed for user root
Dec 06 10:24:17 compute-0 nova_compute[254819]: 2025-12-06 10:24:17.549 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:24:17 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:24:17.732Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec 06 10:24:17 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:24:17.733Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 06 10:24:18 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1297: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec 06 10:24:18 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:24:18 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:24:18 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:24:18.463 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:24:18 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:24:18 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:24:18 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:24:18.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:24:19 compute-0 ceph-mon[74327]: pgmap v1297: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec 06 10:24:19 compute-0 nova_compute[254819]: 2025-12-06 10:24:19.987 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:24:20 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:24:20 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1298: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:24:20 compute-0 ceph-mon[74327]: pgmap v1298: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:24:20 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:24:20 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:24:20 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:24:20.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:24:20 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:24:20 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:24:20 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:24:20.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:24:20 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:24:20] "GET /metrics HTTP/1.1" 200 48456 "" "Prometheus/2.51.0"
Dec 06 10:24:20 compute-0 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:24:20] "GET /metrics HTTP/1.1" 200 48456 "" "Prometheus/2.51.0"
Dec 06 10:24:22 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1299: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:24:22 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:24:22 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:24:22 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:24:22.467 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:24:22 compute-0 nova_compute[254819]: 2025-12-06 10:24:22.552 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:24:22 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:24:22 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:24:22 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:24:22.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:24:23 compute-0 ceph-mon[74327]: pgmap v1299: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:24:23 compute-0 ceph-mgr[74618]: [balancer INFO root] Optimize plan auto_2025-12-06_10:24:23
Dec 06 10:24:23 compute-0 ceph-mgr[74618]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 06 10:24:23 compute-0 ceph-mgr[74618]: [balancer INFO root] do_upmap
Dec 06 10:24:23 compute-0 ceph-mgr[74618]: [balancer INFO root] pools ['.rgw.root', 'vms', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'default.rgw.log', '.mgr', 'default.rgw.meta', 'backups', '.nfs', 'volumes', 'default.rgw.control', 'images']
Dec 06 10:24:23 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 06 10:24:23 compute-0 ceph-mgr[74618]: [balancer INFO root] prepared 0/10 upmap changes
Dec 06 10:24:23 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 10:24:24 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 10:24:24 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 10:24:24 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 10:24:24 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 10:24:24 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 10:24:24 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 10:24:24 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1300: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec 06 10:24:24 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 10:24:24 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:24:24 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:24:24 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:24:24.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:24:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] _maybe_adjust
Dec 06 10:24:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:24:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 06 10:24:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:24:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec 06 10:24:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:24:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 10:24:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:24:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 10:24:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:24:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Dec 06 10:24:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:24:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Dec 06 10:24:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:24:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 10:24:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:24:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec 06 10:24:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:24:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 06 10:24:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:24:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec 06 10:24:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:24:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 10:24:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:24:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 06 10:24:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 06 10:24:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 10:24:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 10:24:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 10:24:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 10:24:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 06 10:24:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 10:24:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 10:24:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 10:24:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 10:24:24 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:24:24 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:24:24 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:24:24.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:24:24 compute-0 nova_compute[254819]: 2025-12-06 10:24:24.991 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:24:25 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:24:25 compute-0 ceph-mon[74327]: pgmap v1300: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec 06 10:24:26 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1301: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:24:26 compute-0 ceph-mon[74327]: pgmap v1301: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:24:26 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:24:26 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:24:26 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:24:26.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:24:26 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:24:26 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:24:26 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:24:26.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:24:27 compute-0 nova_compute[254819]: 2025-12-06 10:24:27.553 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:24:27 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:24:27.733Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec 06 10:24:27 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:24:27.733Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec 06 10:24:27 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:24:27.734Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 06 10:24:28 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1302: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec 06 10:24:28 compute-0 podman[293859]: 2025-12-06 10:24:28.461228853 +0000 UTC m=+0.078995232 container health_status a9cf33c1bf7891b3fdd9db4717ad5f3587fe9e5acf9171920c6d6334b70a502a (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 10:24:28 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:24:28 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:24:28 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:24:28.472 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:24:28 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:24:28 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:24:28 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:24:28.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:24:29 compute-0 ceph-mon[74327]: pgmap v1302: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec 06 10:24:29 compute-0 nova_compute[254819]: 2025-12-06 10:24:29.993 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:24:30 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:24:30 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1303: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:24:30 compute-0 ceph-mon[74327]: pgmap v1303: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:24:30 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:24:30 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:24:30 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:24:30.475 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:24:30 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:24:30 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:24:30 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:24:30.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:24:30 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:24:30] "GET /metrics HTTP/1.1" 200 48454 "" "Prometheus/2.51.0"
Dec 06 10:24:30 compute-0 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:24:30] "GET /metrics HTTP/1.1" 200 48454 "" "Prometheus/2.51.0"
Dec 06 10:24:32 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1304: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:24:32 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:24:32 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:24:32 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:24:32.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:24:32 compute-0 nova_compute[254819]: 2025-12-06 10:24:32.555 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:24:32 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:24:32 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:24:32 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:24:32.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:24:33 compute-0 ceph-mon[74327]: pgmap v1304: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:24:33 compute-0 podman[293883]: 2025-12-06 10:24:33.52613076 +0000 UTC m=+0.147335244 container health_status ed08b04f63d3695f0aa2f04fcc706897af4aa24f286fa3cd10a4323ee2c868ab (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 06 10:24:34 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1305: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec 06 10:24:34 compute-0 ceph-mon[74327]: pgmap v1305: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec 06 10:24:34 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:24:34 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:24:34 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:24:34.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:24:34 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:24:34 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:24:34 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:24:34.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:24:34 compute-0 nova_compute[254819]: 2025-12-06 10:24:34.995 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:24:35 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:24:36 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1306: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:24:36 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:24:36 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:24:36 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:24:36.482 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:24:36 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:24:36 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:24:36 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:24:36.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:24:37 compute-0 ceph-mon[74327]: pgmap v1306: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:24:37 compute-0 sudo[293914]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 10:24:37 compute-0 sudo[293914]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:24:37 compute-0 sudo[293914]: pam_unix(sudo:session): session closed for user root
Dec 06 10:24:37 compute-0 nova_compute[254819]: 2025-12-06 10:24:37.557 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:24:37 compute-0 podman[293938]: 2025-12-06 10:24:37.619818092 +0000 UTC m=+0.063909141 container health_status ec1e66d2175087ad7a28a9a6b5a784e30128df3f5e268959d009e364cd5120f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Dec 06 10:24:37 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:24:37.735Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 06 10:24:38 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1307: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec 06 10:24:38 compute-0 ceph-mon[74327]: pgmap v1307: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec 06 10:24:38 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:24:38 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:24:38 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:24:38.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:24:38 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:24:38 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:24:38 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:24:38.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:24:38 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 06 10:24:38 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 10:24:39 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 10:24:39 compute-0 nova_compute[254819]: 2025-12-06 10:24:39.997 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:24:40 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:24:40 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1308: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:24:40 compute-0 ceph-mon[74327]: pgmap v1308: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:24:40 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:24:40 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:24:40 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:24:40.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:24:40 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:24:40 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:24:40 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:24:40.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:24:40 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:24:40] "GET /metrics HTTP/1.1" 200 48458 "" "Prometheus/2.51.0"
Dec 06 10:24:40 compute-0 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:24:40] "GET /metrics HTTP/1.1" 200 48458 "" "Prometheus/2.51.0"
Dec 06 10:24:42 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1309: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:24:42 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:24:42 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 10:24:42 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:24:42.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 10:24:42 compute-0 nova_compute[254819]: 2025-12-06 10:24:42.560 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:24:42 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:24:42 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:24:42 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:24:42.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:24:43 compute-0 ceph-mon[74327]: pgmap v1309: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:24:43 compute-0 ceph-mon[74327]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #78. Immutable memtables: 0.
Dec 06 10:24:43 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:24:43.204461) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 06 10:24:43 compute-0 ceph-mon[74327]: rocksdb: [db/flush_job.cc:856] [default] [JOB 43] Flushing memtable with next log file: 78
Dec 06 10:24:43 compute-0 ceph-mon[74327]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765016683204540, "job": 43, "event": "flush_started", "num_memtables": 1, "num_entries": 700, "num_deletes": 251, "total_data_size": 1026848, "memory_usage": 1041304, "flush_reason": "Manual Compaction"}
Dec 06 10:24:43 compute-0 ceph-mon[74327]: rocksdb: [db/flush_job.cc:885] [default] [JOB 43] Level-0 flush table #79: started
Dec 06 10:24:43 compute-0 ceph-mon[74327]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765016683216878, "cf_name": "default", "job": 43, "event": "table_file_creation", "file_number": 79, "file_size": 1017152, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 36331, "largest_seqno": 37030, "table_properties": {"data_size": 1013510, "index_size": 1486, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1093, "raw_key_size": 8250, "raw_average_key_size": 19, "raw_value_size": 1006253, "raw_average_value_size": 2362, "num_data_blocks": 65, "num_entries": 426, "num_filter_entries": 426, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765016631, "oldest_key_time": 1765016631, "file_creation_time": 1765016683, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "423e8366-3852-4d2b-aa53-87abab31aff3", "db_session_id": "4WBX5WA2U4DRQ0QUUFCR", "orig_file_number": 79, "seqno_to_time_mapping": "N/A"}}
Dec 06 10:24:43 compute-0 ceph-mon[74327]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 43] Flush lasted 12458 microseconds, and 5661 cpu microseconds.
Dec 06 10:24:43 compute-0 ceph-mon[74327]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 10:24:43 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:24:43.216942) [db/flush_job.cc:967] [default] [JOB 43] Level-0 flush table #79: 1017152 bytes OK
Dec 06 10:24:43 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:24:43.216974) [db/memtable_list.cc:519] [default] Level-0 commit table #79 started
Dec 06 10:24:43 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:24:43.218369) [db/memtable_list.cc:722] [default] Level-0 commit table #79: memtable #1 done
Dec 06 10:24:43 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:24:43.218393) EVENT_LOG_v1 {"time_micros": 1765016683218385, "job": 43, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 06 10:24:43 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:24:43.218416) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 06 10:24:43 compute-0 ceph-mon[74327]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 43] Try to delete WAL files size 1023281, prev total WAL file size 1023281, number of live WAL files 2.
Dec 06 10:24:43 compute-0 ceph-mon[74327]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000075.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 10:24:43 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:24:43.219429) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033303132' seq:72057594037927935, type:22 .. '7061786F730033323634' seq:0, type:0; will stop at (end)
Dec 06 10:24:43 compute-0 ceph-mon[74327]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 44] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 06 10:24:43 compute-0 ceph-mon[74327]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 43 Base level 0, inputs: [79(993KB)], [77(13MB)]
Dec 06 10:24:43 compute-0 ceph-mon[74327]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765016683219567, "job": 44, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [79], "files_L6": [77], "score": -1, "input_data_size": 15506273, "oldest_snapshot_seqno": -1}
Dec 06 10:24:43 compute-0 ceph-mon[74327]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 44] Generated table #80: 6762 keys, 13325396 bytes, temperature: kUnknown
Dec 06 10:24:43 compute-0 ceph-mon[74327]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765016683326336, "cf_name": "default", "job": 44, "event": "table_file_creation", "file_number": 80, "file_size": 13325396, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13282783, "index_size": 24581, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16965, "raw_key_size": 177577, "raw_average_key_size": 26, "raw_value_size": 13163509, "raw_average_value_size": 1946, "num_data_blocks": 963, "num_entries": 6762, "num_filter_entries": 6762, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765013861, "oldest_key_time": 0, "file_creation_time": 1765016683, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "423e8366-3852-4d2b-aa53-87abab31aff3", "db_session_id": "4WBX5WA2U4DRQ0QUUFCR", "orig_file_number": 80, "seqno_to_time_mapping": "N/A"}}
Dec 06 10:24:43 compute-0 ceph-mon[74327]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 10:24:43 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:24:43.326666) [db/compaction/compaction_job.cc:1663] [default] [JOB 44] Compacted 1@0 + 1@6 files to L6 => 13325396 bytes
Dec 06 10:24:43 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:24:43.327643) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 145.1 rd, 124.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.0, 13.8 +0.0 blob) out(12.7 +0.0 blob), read-write-amplify(28.3) write-amplify(13.1) OK, records in: 7276, records dropped: 514 output_compression: NoCompression
Dec 06 10:24:43 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:24:43.327665) EVENT_LOG_v1 {"time_micros": 1765016683327655, "job": 44, "event": "compaction_finished", "compaction_time_micros": 106847, "compaction_time_cpu_micros": 47748, "output_level": 6, "num_output_files": 1, "total_output_size": 13325396, "num_input_records": 7276, "num_output_records": 6762, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 06 10:24:43 compute-0 ceph-mon[74327]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000079.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 10:24:43 compute-0 ceph-mon[74327]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765016683328020, "job": 44, "event": "table_file_deletion", "file_number": 79}
Dec 06 10:24:43 compute-0 ceph-mon[74327]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000077.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 10:24:43 compute-0 ceph-mon[74327]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765016683331639, "job": 44, "event": "table_file_deletion", "file_number": 77}
Dec 06 10:24:43 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:24:43.219244) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 10:24:43 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:24:43.331759) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 10:24:43 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:24:43.331765) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 10:24:43 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:24:43.331766) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 10:24:43 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:24:43.331768) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 10:24:43 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:24:43.331769) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
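The rocksdb lines above trace one manual compaction cycle on the mon store: a memtable flush (job 43) creates L0 table #79, then job 44 compacts it with L6 table #77 into table #80 and deletes both inputs. The EVENT_LOG_v1 payloads are plain JSON after the marker, so the interesting numbers can be pulled straight out of the journal; a sketch that reads lines like the ones above from stdin:

    import json
    import sys

    MARKER = "EVENT_LOG_v1 "

    for line in sys.stdin:
        # Everything after the marker is a JSON object emitted by RocksDB.
        _, _, payload = line.partition(MARKER)
        if not payload:
            continue
        ev = json.loads(payload)
        if ev.get("event") == "compaction_finished":
            mb = ev["total_output_size"] / 1e6
            ms = ev["compaction_time_micros"] / 1e3
            dropped = ev["num_input_records"] - ev["num_output_records"]
            # For job 44 above: L6, 13.3 MB out in ~107 ms, 514 records dropped
            print(f"job {ev['job']}: L{ev['output_level']} "
                  f"{mb:.1f} MB out in {ms:.0f} ms, {dropped} records dropped")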
Dec 06 10:24:44 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1310: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec 06 10:24:44 compute-0 ceph-mon[74327]: pgmap v1310: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec 06 10:24:44 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:24:44 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:24:44 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:24:44.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:24:44 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:24:44 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:24:44 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:24:44.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:24:45 compute-0 nova_compute[254819]: 2025-12-06 10:24:45.001 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:24:45 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:24:46 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1311: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:24:46 compute-0 ceph-mon[74327]: from='client.? 192.168.122.10:0/120415646' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 10:24:46 compute-0 ceph-mon[74327]: from='client.? 192.168.122.10:0/120415646' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 10:24:46 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:24:46 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:24:46 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:24:46.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:24:46 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:24:46 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:24:46 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:24:46.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:24:47 compute-0 ceph-mon[74327]: pgmap v1311: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:24:47 compute-0 nova_compute[254819]: 2025-12-06 10:24:47.563 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:24:47 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:24:47.736Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec 06 10:24:47 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:24:47.736Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
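Alertmanager keeps failing to deliver the ceph-dashboard webhook to compute-1 and compute-2 on port 8443: connect timeouts, then retries canceled once the notification deadline expires. A quick reachability probe for the same receiver endpoints using only the standard library; the two-second timeout and empty JSON body are arbitrary choices for illustration, not what alertmanager sends:

    import socket
    import urllib.error
    import urllib.request

    RECEIVERS = [
        "http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver",
        "http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver",
    ]

    for url in RECEIVERS:
        req = urllib.request.Request(
            url, data=b"{}", headers={"Content-Type": "application/json"})
        try:
            with urllib.request.urlopen(req, timeout=2) as resp:
                print(url, "->", resp.status)
        except (urllib.error.URLError, socket.timeout) as exc:
            # Matches the journal symptom: dial timeout / deadline exceeded.
            print(url, "-> unreachable:", exc)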
Dec 06 10:24:48 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1312: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec 06 10:24:48 compute-0 ceph-mon[74327]: pgmap v1312: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec 06 10:24:48 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:24:48 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:24:48 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:24:48.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:24:48 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:24:48 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:24:48 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:24:48.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:24:50 compute-0 nova_compute[254819]: 2025-12-06 10:24:50.063 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:24:50 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:24:50 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1313: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:24:50 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:24:50 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:24:50 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:24:50.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:24:50 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:24:50 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:24:50 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:24:50.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:24:50 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:24:50] "GET /metrics HTTP/1.1" 200 48458 "" "Prometheus/2.51.0"
Dec 06 10:24:50 compute-0 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:24:50] "GET /metrics HTTP/1.1" 200 48458 "" "Prometheus/2.51.0"
Dec 06 10:24:51 compute-0 ceph-mon[74327]: pgmap v1313: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:24:51 compute-0 nova_compute[254819]: 2025-12-06 10:24:51.748 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:24:52 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1314: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:24:52 compute-0 ceph-mon[74327]: pgmap v1314: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:24:52 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:24:52 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:24:52 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:24:52.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:24:52 compute-0 nova_compute[254819]: 2025-12-06 10:24:52.565 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:24:52 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:24:52 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:24:52 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:24:52.819 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:24:53 compute-0 nova_compute[254819]: 2025-12-06 10:24:53.749 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:24:53 compute-0 nova_compute[254819]: 2025-12-06 10:24:53.784 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:24:53 compute-0 nova_compute[254819]: 2025-12-06 10:24:53.784 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:24:53 compute-0 nova_compute[254819]: 2025-12-06 10:24:53.785 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
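The acquiring/acquired/released trio above is oslo.concurrency's standard lock tracing around the resource tracker's "compute_resources" semaphore: every periodic audit serializes on the same named lock. A minimal sketch of that pattern; lockutils.synchronized and its external flag are real oslo.concurrency API, while the function body is illustrative:

    from oslo_concurrency import lockutils

    # Process-local semaphore named like the tracker's lock; pass
    # external=True to back it with a file lock shared across processes.
    @lockutils.synchronized("compute_resources")
    def clean_compute_node_cache():
        # ... mutate tracker state while no other thread holds the lock ...
        pass

    clean_compute_node_cache()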
Dec 06 10:24:53 compute-0 nova_compute[254819]: 2025-12-06 10:24:53.785 254824 DEBUG nova.compute.resource_tracker [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 10:24:53 compute-0 nova_compute[254819]: 2025-12-06 10:24:53.785 254824 DEBUG oslo_concurrency.processutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 10:24:53 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 06 10:24:53 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 10:24:54 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 10:24:54 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 10:24:54 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 10:24:54 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 10:24:54 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 10:24:54 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 10:24:54 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 10:24:54 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1315: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec 06 10:24:54 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 06 10:24:54 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3597878080' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 10:24:54 compute-0 nova_compute[254819]: 2025-12-06 10:24:54.243 254824 DEBUG oslo_concurrency.processutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
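nova-compute's update_available_resource periodic task shells out to ceph df (as logged above, 0.458s round trip) to size the RBD-backed disk capacity. A sketch of the same call, assuming the client.openstack keyring and /etc/ceph/ceph.conf are in place; the stats.total_bytes/total_avail_bytes keys follow the ceph df --format=json output:

    import json
    import subprocess

    cmd = ["ceph", "df", "--format=json",
           "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"]
    out = subprocess.run(cmd, check=True, capture_output=True, text=True).stdout
    stats = json.loads(out)["stats"]

    GiB = 1024 ** 3
    # The cluster above would report roughly: total 60 GiB, avail 60 GiB
    print(f"total {stats['total_bytes'] / GiB:.0f} GiB, "
          f"avail {stats['total_avail_bytes'] / GiB:.0f} GiB")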
Dec 06 10:24:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:24:54.251 162267 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:24:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:24:54.252 162267 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:24:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:24:54.252 162267 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:24:54 compute-0 nova_compute[254819]: 2025-12-06 10:24:54.435 254824 WARNING nova.virt.libvirt.driver [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 10:24:54 compute-0 nova_compute[254819]: 2025-12-06 10:24:54.437 254824 DEBUG nova.compute.resource_tracker [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4497MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 06 10:24:54 compute-0 nova_compute[254819]: 2025-12-06 10:24:54.437 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:24:54 compute-0 nova_compute[254819]: 2025-12-06 10:24:54.438 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:24:54 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:24:54 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:24:54 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:24:54.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:24:54 compute-0 nova_compute[254819]: 2025-12-06 10:24:54.525 254824 DEBUG nova.compute.resource_tracker [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 10:24:54 compute-0 nova_compute[254819]: 2025-12-06 10:24:54.525 254824 DEBUG nova.compute.resource_tracker [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 06 10:24:54 compute-0 sudo[293998]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 10:24:54 compute-0 sudo[293998]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:24:54 compute-0 sudo[293998]: pam_unix(sudo:session): session closed for user root
Dec 06 10:24:54 compute-0 nova_compute[254819]: 2025-12-06 10:24:54.697 254824 DEBUG oslo_concurrency.processutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 10:24:54 compute-0 sudo[294023]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Dec 06 10:24:54 compute-0 sudo[294023]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:24:54 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:24:54 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:24:54 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:24:54.822 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:24:55 compute-0 ceph-mon[74327]: pgmap v1315: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec 06 10:24:55 compute-0 ceph-mon[74327]: from='client.? 192.168.122.100:0/3597878080' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 10:24:55 compute-0 nova_compute[254819]: 2025-12-06 10:24:55.066 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:24:55 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:24:55 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 06 10:24:55 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2258271433' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 10:24:55 compute-0 nova_compute[254819]: 2025-12-06 10:24:55.226 254824 DEBUG oslo_concurrency.processutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.530s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 10:24:55 compute-0 nova_compute[254819]: 2025-12-06 10:24:55.233 254824 DEBUG nova.compute.provider_tree [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Inventory has not changed in ProviderTree for provider: 06a9c7d1-c74c-47ea-9e97-16acfab6aa88 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 10:24:55 compute-0 nova_compute[254819]: 2025-12-06 10:24:55.249 254824 DEBUG nova.scheduler.client.report [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Inventory has not changed for provider 06a9c7d1-c74c-47ea-9e97-16acfab6aa88 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
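The inventory payload above is what the tracker reports to placement; schedulable capacity per resource class is (total - reserved) scaled by allocation_ratio. A worked check against the exact values logged:

    inventory = {
        "VCPU": {"total": 8, "reserved": 0, "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB": {"total": 59, "reserved": 1, "allocation_ratio": 0.9},
    }

    for rc, inv in inventory.items():
        capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(f"{rc}: {capacity:g} schedulable")
        # VCPU: 32, MEMORY_MB: 7168, DISK_GB: 52.2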
Dec 06 10:24:55 compute-0 nova_compute[254819]: 2025-12-06 10:24:55.251 254824 DEBUG nova.compute.resource_tracker [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 10:24:55 compute-0 nova_compute[254819]: 2025-12-06 10:24:55.251 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.814s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:24:55 compute-0 sudo[294023]: pam_unix(sudo:session): session closed for user root
Dec 06 10:24:55 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Dec 06 10:24:55 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Dec 06 10:24:55 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 06 10:24:55 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 10:24:55 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 06 10:24:55 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 10:24:55 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 06 10:24:55 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 10:24:55 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Dec 06 10:24:55 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 10:24:55 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 06 10:24:55 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 10:24:55 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 06 10:24:55 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 10:24:55 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 06 10:24:55 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 10:24:55 compute-0 sudo[294100]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 10:24:55 compute-0 sudo[294100]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:24:55 compute-0 sudo[294100]: pam_unix(sudo:session): session closed for user root
Dec 06 10:24:55 compute-0 sudo[294125]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 5ecd3f74-dade-5fc4-92ce-8950ae424258 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Dec 06 10:24:55 compute-0 sudo[294125]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:24:55 compute-0 podman[294191]: 2025-12-06 10:24:55.937628568 +0000 UTC m=+0.053603791 container create 8e978c7043effcd60b9b4afff7d0f9e1d64b73ba2b60d0cbd6bcf7b8236f2727 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_austin, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Dec 06 10:24:55 compute-0 systemd[1]: Started libpod-conmon-8e978c7043effcd60b9b4afff7d0f9e1d64b73ba2b60d0cbd6bcf7b8236f2727.scope.
Dec 06 10:24:56 compute-0 systemd[1]: Started libcrun container.
Dec 06 10:24:56 compute-0 podman[294191]: 2025-12-06 10:24:55.920201533 +0000 UTC m=+0.036176776 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 06 10:24:56 compute-0 podman[294191]: 2025-12-06 10:24:56.021751809 +0000 UTC m=+0.137727072 container init 8e978c7043effcd60b9b4afff7d0f9e1d64b73ba2b60d0cbd6bcf7b8236f2727 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_austin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Dec 06 10:24:56 compute-0 podman[294191]: 2025-12-06 10:24:56.032349038 +0000 UTC m=+0.148324301 container start 8e978c7043effcd60b9b4afff7d0f9e1d64b73ba2b60d0cbd6bcf7b8236f2727 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_austin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 06 10:24:56 compute-0 podman[294191]: 2025-12-06 10:24:56.036424288 +0000 UTC m=+0.152399541 container attach 8e978c7043effcd60b9b4afff7d0f9e1d64b73ba2b60d0cbd6bcf7b8236f2727 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_austin, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 06 10:24:56 compute-0 bold_austin[294207]: 167 167
Dec 06 10:24:56 compute-0 systemd[1]: libpod-8e978c7043effcd60b9b4afff7d0f9e1d64b73ba2b60d0cbd6bcf7b8236f2727.scope: Deactivated successfully.
Dec 06 10:24:56 compute-0 podman[294191]: 2025-12-06 10:24:56.042623507 +0000 UTC m=+0.158598730 container died 8e978c7043effcd60b9b4afff7d0f9e1d64b73ba2b60d0cbd6bcf7b8236f2727 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_austin, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid)
Dec 06 10:24:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-7df519bce77be089ab7fca6d7690e505db477ad4562bb7d76608657c34d9785f-merged.mount: Deactivated successfully.
Dec 06 10:24:56 compute-0 ceph-mon[74327]: from='client.? 192.168.122.100:0/2258271433' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 10:24:56 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Dec 06 10:24:56 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 10:24:56 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 10:24:56 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 10:24:56 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 10:24:56 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 10:24:56 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 10:24:56 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 10:24:56 compute-0 podman[294191]: 2025-12-06 10:24:56.092575778 +0000 UTC m=+0.208551001 container remove 8e978c7043effcd60b9b4afff7d0f9e1d64b73ba2b60d0cbd6bcf7b8236f2727 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_austin, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS)
Dec 06 10:24:56 compute-0 systemd[1]: libpod-conmon-8e978c7043effcd60b9b4afff7d0f9e1d64b73ba2b60d0cbd6bcf7b8236f2727.scope: Deactivated successfully.
Dec 06 10:24:56 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1316: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:24:56 compute-0 podman[294232]: 2025-12-06 10:24:56.324974377 +0000 UTC m=+0.063944522 container create 24639ad1796cf687c26f824cab1a512c910c55558a1cb92a6dc0665965025f58 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_cannon, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1)
Dec 06 10:24:56 compute-0 systemd[1]: Started libpod-conmon-24639ad1796cf687c26f824cab1a512c910c55558a1cb92a6dc0665965025f58.scope.
Dec 06 10:24:56 compute-0 podman[294232]: 2025-12-06 10:24:56.28687398 +0000 UTC m=+0.025844175 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 06 10:24:56 compute-0 systemd[1]: Started libcrun container.
Dec 06 10:24:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e97329a1c674852b364c716cf0c507e21b7edc1a24ed66abb1399b85a651e9a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 10:24:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e97329a1c674852b364c716cf0c507e21b7edc1a24ed66abb1399b85a651e9a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 10:24:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e97329a1c674852b364c716cf0c507e21b7edc1a24ed66abb1399b85a651e9a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 10:24:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e97329a1c674852b364c716cf0c507e21b7edc1a24ed66abb1399b85a651e9a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 10:24:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e97329a1c674852b364c716cf0c507e21b7edc1a24ed66abb1399b85a651e9a/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 06 10:24:56 compute-0 podman[294232]: 2025-12-06 10:24:56.439091826 +0000 UTC m=+0.178061961 container init 24639ad1796cf687c26f824cab1a512c910c55558a1cb92a6dc0665965025f58 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_cannon, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec 06 10:24:56 compute-0 podman[294232]: 2025-12-06 10:24:56.451835003 +0000 UTC m=+0.190805108 container start 24639ad1796cf687c26f824cab1a512c910c55558a1cb92a6dc0665965025f58 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_cannon, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True)
Dec 06 10:24:56 compute-0 podman[294232]: 2025-12-06 10:24:56.455205725 +0000 UTC m=+0.194175930 container attach 24639ad1796cf687c26f824cab1a512c910c55558a1cb92a6dc0665965025f58 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_cannon, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 10:24:56 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:24:56 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:24:56 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:24:56.505 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:24:56 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:24:56 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:24:56 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:24:56.824 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:24:56 compute-0 kind_cannon[294249]: --> passed data devices: 0 physical, 1 LVM
Dec 06 10:24:56 compute-0 kind_cannon[294249]: --> All data devices are unavailable
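"All data devices are unavailable" means ceph-volume rejected the single LVM device it was passed (/dev/ceph_vg0/ceph_lv0), typically because the LV already carries an OSD, so the lvm batch above is a no-op rather than a failure. A sketch that asks ceph-volume why a device is rejected, run on the host or wrapped by cephadm ceph-volume as in the sudo lines above; the inventory subcommand and its path/available/rejected_reasons fields are real, though the exact reason strings vary by release:

    import json
    import subprocess

    out = subprocess.run(
        ["ceph-volume", "inventory", "--format", "json"],
        check=True, capture_output=True, text=True,
    ).stdout

    for dev in json.loads(out):
        if not dev.get("available", False):
            reasons = ", ".join(dev.get("rejected_reasons", []))
            print(dev["path"], "rejected:", reasons)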
Dec 06 10:24:56 compute-0 systemd[1]: libpod-24639ad1796cf687c26f824cab1a512c910c55558a1cb92a6dc0665965025f58.scope: Deactivated successfully.
Dec 06 10:24:56 compute-0 podman[294232]: 2025-12-06 10:24:56.903601697 +0000 UTC m=+0.642571882 container died 24639ad1796cf687c26f824cab1a512c910c55558a1cb92a6dc0665965025f58 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_cannon, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Dec 06 10:24:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-6e97329a1c674852b364c716cf0c507e21b7edc1a24ed66abb1399b85a651e9a-merged.mount: Deactivated successfully.
Dec 06 10:24:56 compute-0 podman[294232]: 2025-12-06 10:24:56.957084273 +0000 UTC m=+0.696054388 container remove 24639ad1796cf687c26f824cab1a512c910c55558a1cb92a6dc0665965025f58 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_cannon, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 10:24:56 compute-0 systemd[1]: libpod-conmon-24639ad1796cf687c26f824cab1a512c910c55558a1cb92a6dc0665965025f58.scope: Deactivated successfully.
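The sequence above (container create, attach, died, overlay unmount, remove, conmon scope deactivated, all within about half a second) is the footprint of a transient container: cephadm launches ceph-volume in a throwaway container that is deleted as soon as it exits. A sketch of the same pattern, assuming `podman run --rm` semantics; the image digest is taken from the log, the command and wrapper are illustrative:

    import subprocess

    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec")

    def run_transient(cmd):
        # --rm makes podman delete the container on exit, which is why the
        # journal shows "container died" immediately followed by
        # "container remove".
        return subprocess.run(
            ["podman", "run", "--rm", IMAGE, *cmd],
            capture_output=True, text=True, check=False,
        )

    result = run_transient(["ceph-volume", "lvm", "list", "--format", "json"])
    print(result.returncode, result.stdout[:200])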
Dec 06 10:24:57 compute-0 sudo[294125]: pam_unix(sudo:session): session closed for user root
Dec 06 10:24:57 compute-0 ceph-mon[74327]: pgmap v1316: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:24:57 compute-0 ceph-mon[74327]: from='client.? 192.168.122.101:0/2650253251' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 10:24:57 compute-0 sudo[294275]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 10:24:57 compute-0 sudo[294275]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:24:57 compute-0 sudo[294275]: pam_unix(sudo:session): session closed for user root
Dec 06 10:24:57 compute-0 sudo[294300]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 5ecd3f74-dade-5fc4-92ce-8950ae424258 -- lvm list --format json
Dec 06 10:24:57 compute-0 sudo[294300]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:24:57 compute-0 nova_compute[254819]: 2025-12-06 10:24:57.566 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:24:57 compute-0 sudo[294365]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 10:24:57 compute-0 sudo[294365]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:24:57 compute-0 sudo[294365]: pam_unix(sudo:session): session closed for user root
Dec 06 10:24:57 compute-0 podman[294368]: 2025-12-06 10:24:57.657326094 +0000 UTC m=+0.072498575 container create c2b0d1d0906aef800811627f027b573eb8ff10d660d779223fde48d4501074e8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_robinson, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1)
Dec 06 10:24:57 compute-0 systemd[1]: Started libpod-conmon-c2b0d1d0906aef800811627f027b573eb8ff10d660d779223fde48d4501074e8.scope.
Dec 06 10:24:57 compute-0 podman[294368]: 2025-12-06 10:24:57.628662304 +0000 UTC m=+0.043834835 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 06 10:24:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:24:57.737Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec 06 10:24:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:24:57.738Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
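The two alertmanager lines above show the ceph-dashboard webhook on compute-1 and compute-2 (port 8443, path /api/prometheus_receiver) timing out, so the notification is dropped after two attempts. For reference, a minimal sketch of a receiver that would accept these POSTs; it speaks plain HTTP on an arbitrary port, whereas the real dashboard endpoint is served by ceph-mgr, typically over TLS:

    import json
    from http.server import BaseHTTPRequestHandler, HTTPServer

    class Receiver(BaseHTTPRequestHandler):
        def do_POST(self):
            if self.path != "/api/prometheus_receiver":
                self.send_error(404)
                return
            length = int(self.headers.get("Content-Length", 0))
            payload = json.loads(self.rfile.read(length) or b"{}")
            # Alertmanager webhook payloads carry an "alerts" list.
            print("received", len(payload.get("alerts", [])), "alert(s)")
            self.send_response(200)
            self.end_headers()

    HTTPServer(("0.0.0.0", 8443), Receiver).serve_forever()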
Dec 06 10:24:57 compute-0 systemd[1]: Started libcrun container.
Dec 06 10:24:57 compute-0 podman[294368]: 2025-12-06 10:24:57.766090467 +0000 UTC m=+0.181262938 container init c2b0d1d0906aef800811627f027b573eb8ff10d660d779223fde48d4501074e8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_robinson, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True)
Dec 06 10:24:57 compute-0 podman[294368]: 2025-12-06 10:24:57.779683457 +0000 UTC m=+0.194855898 container start c2b0d1d0906aef800811627f027b573eb8ff10d660d779223fde48d4501074e8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_robinson, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, ceph=True)
Dec 06 10:24:57 compute-0 blissful_robinson[294407]: 167 167
Dec 06 10:24:57 compute-0 systemd[1]: libpod-c2b0d1d0906aef800811627f027b573eb8ff10d660d779223fde48d4501074e8.scope: Deactivated successfully.
Dec 06 10:24:57 compute-0 podman[294368]: 2025-12-06 10:24:57.934185125 +0000 UTC m=+0.349357616 container attach c2b0d1d0906aef800811627f027b573eb8ff10d660d779223fde48d4501074e8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_robinson, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 10:24:57 compute-0 podman[294368]: 2025-12-06 10:24:57.935370977 +0000 UTC m=+0.350543468 container died c2b0d1d0906aef800811627f027b573eb8ff10d660d779223fde48d4501074e8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_robinson, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 10:24:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-4e05bbeb31b67b3050e67cbbd47e5fcc88faf365476d6d76aabc0959b2479c63-merged.mount: Deactivated successfully.
Dec 06 10:24:57 compute-0 podman[294368]: 2025-12-06 10:24:57.98835237 +0000 UTC m=+0.403524851 container remove c2b0d1d0906aef800811627f027b573eb8ff10d660d779223fde48d4501074e8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_robinson, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 10:24:57 compute-0 systemd[1]: libpod-conmon-c2b0d1d0906aef800811627f027b573eb8ff10d660d779223fde48d4501074e8.scope: Deactivated successfully.
Dec 06 10:24:58 compute-0 ceph-mon[74327]: from='client.? 192.168.122.101:0/796350809' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 10:24:58 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1317: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec 06 10:24:58 compute-0 podman[294432]: 2025-12-06 10:24:58.18036411 +0000 UTC m=+0.057824966 container create 5b5469f23b54a89be8dc0836d210e361da08f411e06a3a07a5b9968485581801 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_agnesi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec 06 10:24:58 compute-0 systemd[1]: Started libpod-conmon-5b5469f23b54a89be8dc0836d210e361da08f411e06a3a07a5b9968485581801.scope.
Dec 06 10:24:58 compute-0 podman[294432]: 2025-12-06 10:24:58.153247181 +0000 UTC m=+0.030708117 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 06 10:24:58 compute-0 nova_compute[254819]: 2025-12-06 10:24:58.252 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:24:58 compute-0 nova_compute[254819]: 2025-12-06 10:24:58.253 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:24:58 compute-0 systemd[1]: Started libcrun container.
Dec 06 10:24:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/836bc34eebdf9759a7937a64b8401c180d97caf725b94f3e535998cf0eaf9d23/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 10:24:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/836bc34eebdf9759a7937a64b8401c180d97caf725b94f3e535998cf0eaf9d23/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 10:24:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/836bc34eebdf9759a7937a64b8401c180d97caf725b94f3e535998cf0eaf9d23/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 10:24:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/836bc34eebdf9759a7937a64b8401c180d97caf725b94f3e535998cf0eaf9d23/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
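The 2038 warnings above refer to the classic 32-bit signed time_t limit: an XFS filesystem formatted without the bigtime feature can only represent timestamps up to 0x7fffffff seconds after the Unix epoch. The cutoff quoted by the kernel is easy to verify:

    from datetime import datetime, timezone

    limit = 0x7FFFFFFF   # 2147483647, the value quoted by the kernel
    print(limit, datetime.fromtimestamp(limit, tz=timezone.utc))
    # -> 2147483647 2038-01-19 03:14:07+00:00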
Dec 06 10:24:58 compute-0 podman[294432]: 2025-12-06 10:24:58.307139382 +0000 UTC m=+0.184600308 container init 5b5469f23b54a89be8dc0836d210e361da08f411e06a3a07a5b9968485581801 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_agnesi, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 10:24:58 compute-0 podman[294432]: 2025-12-06 10:24:58.314958275 +0000 UTC m=+0.192419131 container start 5b5469f23b54a89be8dc0836d210e361da08f411e06a3a07a5b9968485581801 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_agnesi, io.buildah.version=1.40.1, CEPH_REF=squid, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Dec 06 10:24:58 compute-0 podman[294432]: 2025-12-06 10:24:58.319039096 +0000 UTC m=+0.196499992 container attach 5b5469f23b54a89be8dc0836d210e361da08f411e06a3a07a5b9968485581801 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_agnesi, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Dec 06 10:24:58 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:24:58 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:24:58 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:24:58.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:24:58 compute-0 blissful_agnesi[294448]: {
Dec 06 10:24:58 compute-0 blissful_agnesi[294448]:     "1": [
Dec 06 10:24:58 compute-0 blissful_agnesi[294448]:         {
Dec 06 10:24:58 compute-0 blissful_agnesi[294448]:             "devices": [
Dec 06 10:24:58 compute-0 blissful_agnesi[294448]:                 "/dev/loop3"
Dec 06 10:24:58 compute-0 blissful_agnesi[294448]:             ],
Dec 06 10:24:58 compute-0 blissful_agnesi[294448]:             "lv_name": "ceph_lv0",
Dec 06 10:24:58 compute-0 blissful_agnesi[294448]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 10:24:58 compute-0 blissful_agnesi[294448]:             "lv_size": "21470642176",
Dec 06 10:24:58 compute-0 blissful_agnesi[294448]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=a55pcS-v3Vt-CJ4W-fqCV-Baze-9VlY-jG9prS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=5ecd3f74-dade-5fc4-92ce-8950ae424258,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=7899c4d8-edb4-4836-b838-c4aa702ad7af,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 06 10:24:58 compute-0 blissful_agnesi[294448]:             "lv_uuid": "a55pcS-v3Vt-CJ4W-fqCV-Baze-9VlY-jG9prS",
Dec 06 10:24:58 compute-0 blissful_agnesi[294448]:             "name": "ceph_lv0",
Dec 06 10:24:58 compute-0 blissful_agnesi[294448]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 10:24:58 compute-0 blissful_agnesi[294448]:             "tags": {
Dec 06 10:24:58 compute-0 blissful_agnesi[294448]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 06 10:24:58 compute-0 blissful_agnesi[294448]:                 "ceph.block_uuid": "a55pcS-v3Vt-CJ4W-fqCV-Baze-9VlY-jG9prS",
Dec 06 10:24:58 compute-0 blissful_agnesi[294448]:                 "ceph.cephx_lockbox_secret": "",
Dec 06 10:24:58 compute-0 blissful_agnesi[294448]:                 "ceph.cluster_fsid": "5ecd3f74-dade-5fc4-92ce-8950ae424258",
Dec 06 10:24:58 compute-0 blissful_agnesi[294448]:                 "ceph.cluster_name": "ceph",
Dec 06 10:24:58 compute-0 blissful_agnesi[294448]:                 "ceph.crush_device_class": "",
Dec 06 10:24:58 compute-0 blissful_agnesi[294448]:                 "ceph.encrypted": "0",
Dec 06 10:24:58 compute-0 blissful_agnesi[294448]:                 "ceph.osd_fsid": "7899c4d8-edb4-4836-b838-c4aa702ad7af",
Dec 06 10:24:58 compute-0 blissful_agnesi[294448]:                 "ceph.osd_id": "1",
Dec 06 10:24:58 compute-0 blissful_agnesi[294448]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 06 10:24:58 compute-0 blissful_agnesi[294448]:                 "ceph.type": "block",
Dec 06 10:24:58 compute-0 blissful_agnesi[294448]:                 "ceph.vdo": "0",
Dec 06 10:24:58 compute-0 blissful_agnesi[294448]:                 "ceph.with_tpm": "0"
Dec 06 10:24:58 compute-0 blissful_agnesi[294448]:             },
Dec 06 10:24:58 compute-0 blissful_agnesi[294448]:             "type": "block",
Dec 06 10:24:58 compute-0 blissful_agnesi[294448]:             "vg_name": "ceph_vg0"
Dec 06 10:24:58 compute-0 blissful_agnesi[294448]:         }
Dec 06 10:24:58 compute-0 blissful_agnesi[294448]:     ]
Dec 06 10:24:58 compute-0 blissful_agnesi[294448]: }
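The JSON block above is the output of `ceph-volume lvm list --format json` captured from the blissful_agnesi container: the top-level keys are OSD ids, each mapping to a list of LV records whose tags carry the cluster fsid, OSD fsid, and device role. A short sketch of walking that structure, trimmed to the fields shown above:

    import json

    raw = """
    {"1": [{"lv_path": "/dev/ceph_vg0/ceph_lv0",
            "devices": ["/dev/loop3"],
            "tags": {"ceph.osd_id": "1",
                     "ceph.osd_fsid": "7899c4d8-edb4-4836-b838-c4aa702ad7af",
                     "ceph.type": "block"}}]}
    """

    for osd_id, lvs in json.loads(raw).items():
        for lv in lvs:
            print(f"osd.{osd_id}: {lv['tags']['ceph.type']} on {lv['lv_path']}"
                  f" (devices: {', '.join(lv['devices'])})")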
Dec 06 10:24:58 compute-0 systemd[1]: libpod-5b5469f23b54a89be8dc0836d210e361da08f411e06a3a07a5b9968485581801.scope: Deactivated successfully.
Dec 06 10:24:58 compute-0 conmon[294448]: conmon 5b5469f23b54a89be8dc <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-5b5469f23b54a89be8dc0836d210e361da08f411e06a3a07a5b9968485581801.scope/container/memory.events
Dec 06 10:24:58 compute-0 podman[294432]: 2025-12-06 10:24:58.684026696 +0000 UTC m=+0.561487592 container died 5b5469f23b54a89be8dc0836d210e361da08f411e06a3a07a5b9968485581801 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_agnesi, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec 06 10:24:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-836bc34eebdf9759a7937a64b8401c180d97caf725b94f3e535998cf0eaf9d23-merged.mount: Deactivated successfully.
Dec 06 10:24:58 compute-0 podman[294432]: 2025-12-06 10:24:58.73334047 +0000 UTC m=+0.610801326 container remove 5b5469f23b54a89be8dc0836d210e361da08f411e06a3a07a5b9968485581801 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_agnesi, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Dec 06 10:24:58 compute-0 systemd[1]: libpod-conmon-5b5469f23b54a89be8dc0836d210e361da08f411e06a3a07a5b9968485581801.scope: Deactivated successfully.
Dec 06 10:24:58 compute-0 nova_compute[254819]: 2025-12-06 10:24:58.749 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:24:58 compute-0 sudo[294300]: pam_unix(sudo:session): session closed for user root
Dec 06 10:24:58 compute-0 podman[294459]: 2025-12-06 10:24:58.8064114 +0000 UTC m=+0.076242038 container health_status a9cf33c1bf7891b3fdd9db4717ad5f3587fe9e5acf9171920c6d6334b70a502a (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Dec 06 10:24:58 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:24:58 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:24:58 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:24:58.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:24:58 compute-0 sudo[294489]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 10:24:58 compute-0 sudo[294489]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:24:58 compute-0 sudo[294489]: pam_unix(sudo:session): session closed for user root
Dec 06 10:24:58 compute-0 sudo[294517]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 5ecd3f74-dade-5fc4-92ce-8950ae424258 -- raw list --format json
Dec 06 10:24:58 compute-0 sudo[294517]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:24:59 compute-0 ceph-mon[74327]: pgmap v1317: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec 06 10:24:59 compute-0 podman[294582]: 2025-12-06 10:24:59.381990826 +0000 UTC m=+0.050152156 container create d43c1f871e0e37b5a980d1302087291eca4f7bd93c18db2b42e5737047d772f1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_jennings, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 10:24:59 compute-0 systemd[1]: Started libpod-conmon-d43c1f871e0e37b5a980d1302087291eca4f7bd93c18db2b42e5737047d772f1.scope.
Dec 06 10:24:59 compute-0 podman[294582]: 2025-12-06 10:24:59.361702843 +0000 UTC m=+0.029864213 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 06 10:24:59 compute-0 systemd[1]: Started libcrun container.
Dec 06 10:24:59 compute-0 podman[294582]: 2025-12-06 10:24:59.477080715 +0000 UTC m=+0.145242075 container init d43c1f871e0e37b5a980d1302087291eca4f7bd93c18db2b42e5737047d772f1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_jennings, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 10:24:59 compute-0 podman[294582]: 2025-12-06 10:24:59.485872516 +0000 UTC m=+0.154033846 container start d43c1f871e0e37b5a980d1302087291eca4f7bd93c18db2b42e5737047d772f1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_jennings, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_REF=squid)
Dec 06 10:24:59 compute-0 podman[294582]: 2025-12-06 10:24:59.489128924 +0000 UTC m=+0.157290444 container attach d43c1f871e0e37b5a980d1302087291eca4f7bd93c18db2b42e5737047d772f1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_jennings, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, ceph=True, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 10:24:59 compute-0 busy_jennings[294598]: 167 167
Dec 06 10:24:59 compute-0 systemd[1]: libpod-d43c1f871e0e37b5a980d1302087291eca4f7bd93c18db2b42e5737047d772f1.scope: Deactivated successfully.
Dec 06 10:24:59 compute-0 podman[294582]: 2025-12-06 10:24:59.493226656 +0000 UTC m=+0.161387986 container died d43c1f871e0e37b5a980d1302087291eca4f7bd93c18db2b42e5737047d772f1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_jennings, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Dec 06 10:24:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-b0e9b617b46a06c85415f6d931d952aba099e235072375fa63f06ad1c2ca3c18-merged.mount: Deactivated successfully.
Dec 06 10:24:59 compute-0 podman[294582]: 2025-12-06 10:24:59.5275412 +0000 UTC m=+0.195702530 container remove d43c1f871e0e37b5a980d1302087291eca4f7bd93c18db2b42e5737047d772f1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_jennings, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.license=GPLv2)
Dec 06 10:24:59 compute-0 systemd[1]: libpod-conmon-d43c1f871e0e37b5a980d1302087291eca4f7bd93c18db2b42e5737047d772f1.scope: Deactivated successfully.
Dec 06 10:24:59 compute-0 nova_compute[254819]: 2025-12-06 10:24:59.743 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:24:59 compute-0 podman[294622]: 2025-12-06 10:24:59.708677944 +0000 UTC m=+0.040256007 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 06 10:24:59 compute-0 podman[294622]: 2025-12-06 10:24:59.899804479 +0000 UTC m=+0.231382522 container create 1db0beed854202f4f6c618dea50f325384f8f60bc0c7d958148dee2a80a6c7a7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_neumann, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 10:24:59 compute-0 systemd[1]: Started libpod-conmon-1db0beed854202f4f6c618dea50f325384f8f60bc0c7d958148dee2a80a6c7a7.scope.
Dec 06 10:24:59 compute-0 systemd[1]: Started libcrun container.
Dec 06 10:24:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd86e88c7f7be620c8d397747c97d66ad4fcadb97ae1763d2444d55419a6198b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 10:24:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd86e88c7f7be620c8d397747c97d66ad4fcadb97ae1763d2444d55419a6198b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 10:24:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd86e88c7f7be620c8d397747c97d66ad4fcadb97ae1763d2444d55419a6198b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 10:24:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd86e88c7f7be620c8d397747c97d66ad4fcadb97ae1763d2444d55419a6198b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 10:24:59 compute-0 podman[294622]: 2025-12-06 10:24:59.994078547 +0000 UTC m=+0.325656610 container init 1db0beed854202f4f6c618dea50f325384f8f60bc0c7d958148dee2a80a6c7a7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_neumann, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, ceph=True)
Dec 06 10:25:00 compute-0 podman[294622]: 2025-12-06 10:25:00.000672676 +0000 UTC m=+0.332250729 container start 1db0beed854202f4f6c618dea50f325384f8f60bc0c7d958148dee2a80a6c7a7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_neumann, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Dec 06 10:25:00 compute-0 podman[294622]: 2025-12-06 10:25:00.005400646 +0000 UTC m=+0.336978709 container attach 1db0beed854202f4f6c618dea50f325384f8f60bc0c7d958148dee2a80a6c7a7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_neumann, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True)
Dec 06 10:25:00 compute-0 nova_compute[254819]: 2025-12-06 10:25:00.068 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:25:00 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:25:00 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1318: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:25:00 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:25:00 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 10:25:00 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:25:00.508 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 10:25:00 compute-0 lvm[294716]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 06 10:25:00 compute-0 lvm[294716]: VG ceph_vg0 finished
Dec 06 10:25:00 compute-0 wizardly_neumann[294640]: {}
Dec 06 10:25:00 compute-0 nova_compute[254819]: 2025-12-06 10:25:00.748 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:25:00 compute-0 nova_compute[254819]: 2025-12-06 10:25:00.750 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:25:00 compute-0 nova_compute[254819]: 2025-12-06 10:25:00.750 254824 DEBUG nova.compute.manager [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 06 10:25:00 compute-0 nova_compute[254819]: 2025-12-06 10:25:00.750 254824 DEBUG nova.compute.manager [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 06 10:25:00 compute-0 systemd[1]: libpod-1db0beed854202f4f6c618dea50f325384f8f60bc0c7d958148dee2a80a6c7a7.scope: Deactivated successfully.
Dec 06 10:25:00 compute-0 podman[294622]: 2025-12-06 10:25:00.761601471 +0000 UTC m=+1.093179534 container died 1db0beed854202f4f6c618dea50f325384f8f60bc0c7d958148dee2a80a6c7a7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_neumann, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 10:25:00 compute-0 systemd[1]: libpod-1db0beed854202f4f6c618dea50f325384f8f60bc0c7d958148dee2a80a6c7a7.scope: Consumed 1.261s CPU time.
Dec 06 10:25:00 compute-0 nova_compute[254819]: 2025-12-06 10:25:00.770 254824 DEBUG nova.compute.manager [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 06 10:25:00 compute-0 nova_compute[254819]: 2025-12-06 10:25:00.770 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:25:00 compute-0 nova_compute[254819]: 2025-12-06 10:25:00.771 254824 DEBUG nova.compute.manager [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
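The run_periodic_tasks lines above come from oslo.service, which schedules the ComputeManager methods (_heal_instance_info_cache, _reclaim_queued_deletes, and so on) at fixed intervals and logs each dispatch. A much-simplified analogue of that pattern; the names and spacing here are illustrative, not the oslo API:

    import time

    REGISTRY = []   # (spacing_seconds, function, [last_run]) triples

    def periodic_task(spacing):
        def wrap(fn):
            REGISTRY.append((spacing, fn, [0.0]))
            return fn
        return wrap

    @periodic_task(spacing=60)
    def heal_instance_info_cache():
        print("Rebuilding the list of instances to heal")

    def run_periodic_tasks():
        # The real implementation also catches and logs exceptions so one
        # failing task cannot stop the loop.
        while True:
            now = time.monotonic()
            for spacing, fn, last in REGISTRY:
                if now - last[0] >= spacing:
                    last[0] = now
                    fn()
            time.sleep(1)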
Dec 06 10:25:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-dd86e88c7f7be620c8d397747c97d66ad4fcadb97ae1763d2444d55419a6198b-merged.mount: Deactivated successfully.
Dec 06 10:25:00 compute-0 podman[294622]: 2025-12-06 10:25:00.809410203 +0000 UTC m=+1.140988246 container remove 1db0beed854202f4f6c618dea50f325384f8f60bc0c7d958148dee2a80a6c7a7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_neumann, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 06 10:25:00 compute-0 systemd[1]: libpod-conmon-1db0beed854202f4f6c618dea50f325384f8f60bc0c7d958148dee2a80a6c7a7.scope: Deactivated successfully.
Dec 06 10:25:00 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:25:00 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:25:00 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:25:00.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:25:00 compute-0 sudo[294517]: pam_unix(sudo:session): session closed for user root
Dec 06 10:25:00 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 06 10:25:00 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 10:25:00 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 06 10:25:00 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:25:00] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Dec 06 10:25:00 compute-0 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:25:00] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Dec 06 10:25:00 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 10:25:00 compute-0 sudo[294732]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 10:25:00 compute-0 sudo[294732]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:25:01 compute-0 sudo[294732]: pam_unix(sudo:session): session closed for user root
Dec 06 10:25:01 compute-0 ceph-mon[74327]: pgmap v1318: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:25:01 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 10:25:01 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 10:25:01 compute-0 nova_compute[254819]: 2025-12-06 10:25:01.749 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:25:02 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1319: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:25:02 compute-0 ceph-mon[74327]: pgmap v1319: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:25:02 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:25:02 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:25:02 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:25:02.511 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:25:02 compute-0 nova_compute[254819]: 2025-12-06 10:25:02.568 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:25:02 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:25:02 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:25:02 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:25:02.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:25:04 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1320: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec 06 10:25:04 compute-0 podman[294762]: 2025-12-06 10:25:04.495544488 +0000 UTC m=+0.107781617 container health_status ed08b04f63d3695f0aa2f04fcc706897af4aa24f286fa3cd10a4323ee2c868ab (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Dec 06 10:25:04 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:25:04 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:25:04 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:25:04.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:25:04 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:25:04 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:25:04 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:25:04.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:25:05 compute-0 nova_compute[254819]: 2025-12-06 10:25:05.072 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:25:05 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:25:05 compute-0 ceph-mon[74327]: pgmap v1320: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec 06 10:25:05 compute-0 ceph-mon[74327]: from='client.? 192.168.122.102:0/3124201791' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 10:25:06 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1321: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:25:06 compute-0 ceph-mon[74327]: from='client.? 192.168.122.102:0/2298643697' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 10:25:06 compute-0 ceph-mon[74327]: pgmap v1321: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:25:06 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:25:06 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:25:06 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:25:06.515 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:25:06 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:25:06 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:25:06 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:25:06.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:25:07 compute-0 nova_compute[254819]: 2025-12-06 10:25:07.569 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:25:07 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:25:07.738Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 06 10:25:08 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1322: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec 06 10:25:08 compute-0 podman[294793]: 2025-12-06 10:25:08.445268492 +0000 UTC m=+0.067938391 container health_status ec1e66d2175087ad7a28a9a6b5a784e30128df3f5e268959d009e364cd5120f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125)
Dec 06 10:25:08 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:25:08 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:25:08 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:25:08.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:25:08 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:25:08 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:25:08 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:25:08.841 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:25:08 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 06 10:25:08 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
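The mon audit pair above shows the mgr (entity mgr.compute-0.qhdjwa) polling the OSD blocklist on a roughly fifteen-second interval (10:25:08, :23, :38, :53 below). The same query can be issued manually; a hedged sketch assuming a usable keyring on this host:

    import json, subprocess

    out = subprocess.check_output(["ceph", "osd", "blocklist", "ls", "--format", "json"])
    print(json.loads(out))  # [] when nothing is blocklisted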
Dec 06 10:25:09 compute-0 ceph-mon[74327]: pgmap v1322: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec 06 10:25:09 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 10:25:10 compute-0 nova_compute[254819]: 2025-12-06 10:25:10.075 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:25:10 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:25:10 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1323: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:25:10 compute-0 ceph-mon[74327]: pgmap v1323: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:25:10 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:25:10 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:25:10 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:25:10.518 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:25:10 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:25:10 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:25:10 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:25:10.845 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:25:10 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:25:10] "GET /metrics HTTP/1.1" 200 48456 "" "Prometheus/2.51.0"
Dec 06 10:25:10 compute-0 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:25:10] "GET /metrics HTTP/1.1" 200 48456 "" "Prometheus/2.51.0"
Dec 06 10:25:12 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1324: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:25:12 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:25:12 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:25:12 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:25:12.521 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:25:12 compute-0 nova_compute[254819]: 2025-12-06 10:25:12.572 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:25:12 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:25:12 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:25:12 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:25:12.847 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:25:13 compute-0 ceph-mon[74327]: pgmap v1324: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:25:14 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1325: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec 06 10:25:14 compute-0 ceph-mon[74327]: pgmap v1325: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec 06 10:25:14 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:25:14 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:25:14 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:25:14.524 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:25:14 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:25:14 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:25:14 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:25:14.852 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:25:15 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:25:15 compute-0 nova_compute[254819]: 2025-12-06 10:25:15.105 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:25:16 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1326: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:25:16 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:25:16 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:25:16 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:25:16.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:25:16 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:25:16 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:25:16 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:25:16.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:25:17 compute-0 ceph-mon[74327]: pgmap v1326: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:25:17 compute-0 nova_compute[254819]: 2025-12-06 10:25:17.573 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:25:17 compute-0 sudo[294821]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 10:25:17 compute-0 sudo[294821]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:25:17 compute-0 sudo[294821]: pam_unix(sudo:session): session closed for user root
Dec 06 10:25:17 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:25:17.740Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 06 10:25:18 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1327: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec 06 10:25:18 compute-0 ceph-mon[74327]: pgmap v1327: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec 06 10:25:18 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:25:18 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:25:18 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:25:18.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:25:18 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:25:18 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:25:18 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:25:18.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:25:20 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:25:20 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1328: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:25:20 compute-0 nova_compute[254819]: 2025-12-06 10:25:20.153 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:25:20 compute-0 ceph-mon[74327]: pgmap v1328: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:25:20 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:25:20 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:25:20 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:25:20.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:25:20 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:25:20 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:25:20 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:25:20.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:25:20 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:25:20] "GET /metrics HTTP/1.1" 200 48456 "" "Prometheus/2.51.0"
Dec 06 10:25:20 compute-0 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:25:20] "GET /metrics HTTP/1.1" 200 48456 "" "Prometheus/2.51.0"
Dec 06 10:25:22 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1329: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:25:22 compute-0 ceph-mon[74327]: pgmap v1329: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:25:22 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:25:22 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:25:22 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:25:22.533 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:25:22 compute-0 nova_compute[254819]: 2025-12-06 10:25:22.574 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:25:22 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:25:22 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:25:22 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:25:22.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:25:23 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 06 10:25:23 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 10:25:23 compute-0 ceph-mgr[74618]: [balancer INFO root] Optimize plan auto_2025-12-06_10:25:23
Dec 06 10:25:23 compute-0 ceph-mgr[74618]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 06 10:25:23 compute-0 ceph-mgr[74618]: [balancer INFO root] do_upmap
Dec 06 10:25:23 compute-0 ceph-mgr[74618]: [balancer INFO root] pools ['vms', '.nfs', 'default.rgw.meta', '.mgr', 'backups', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'default.rgw.log', 'default.rgw.control', 'images', 'volumes', '.rgw.root']
Dec 06 10:25:24 compute-0 ceph-mgr[74618]: [balancer INFO root] prepared 0/10 upmap changes
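The balancer pass above ran in upmap mode with a 5% max-misplaced budget, scanned the twelve pools listed, and prepared 0 of its 10 permitted upmap changes — the PG distribution is already even, consistent with the steady 337 active+clean pgmap lines. The equivalent manual check, as a hedged sketch:

    import subprocess

    # current balancer state (active flag, mode, plans); plain-text output
    print(subprocess.check_output(["ceph", "balancer", "status"]).decode())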
Dec 06 10:25:24 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 10:25:24 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 10:25:24 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 10:25:24 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 10:25:24 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 10:25:24 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 10:25:24 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 10:25:24 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1330: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 766 B/s rd, 0 op/s
Dec 06 10:25:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] _maybe_adjust
Dec 06 10:25:24 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:25:24 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:25:24 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:25:24.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:25:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:25:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 06 10:25:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:25:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec 06 10:25:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:25:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 10:25:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:25:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 10:25:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:25:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Dec 06 10:25:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:25:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Dec 06 10:25:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:25:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 10:25:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:25:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec 06 10:25:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:25:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 06 10:25:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:25:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec 06 10:25:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:25:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 10:25:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:25:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
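The pg_autoscaler lines above follow a fixed arithmetic: every logged "pg target" equals usage_ratio * bias * 300, where the 300 is presumably mon_target_pg_per_osd (default 100) times this cluster's 3 OSDs — only the product itself is confirmed by the logged numbers. The raw target is then quantized to a power of two and compared against the pool's current pg_num; here every pool stays where it is. A sketch that reproduces the logged values:

    def pg_target(usage_ratio: float, bias: float, pg_budget: float = 300.0) -> float:
        """Raw PG target before quantization, as in the autoscaler lines above."""
        return usage_ratio * bias * pg_budget

    # spot-checks against the log
    assert abs(pg_target(7.185749983720779e-06, 1.0) - 0.0021557249951162337) < 1e-12   # '.mgr'
    assert abs(pg_target(5.087256625643029e-07, 4.0) - 0.0006104707950771635) < 1e-12   # 'cephfs.cephfs.meta'
    print(pg_target(0.000665858301588852, 1.0))  # 0.19975749047665559 -> 'images', stays at 32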
Dec 06 10:25:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 06 10:25:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 10:25:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 10:25:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 10:25:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 10:25:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 06 10:25:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 10:25:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 10:25:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 10:25:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 10:25:24 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:25:24 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:25:24 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:25:24.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:25:25 compute-0 ceph-mon[74327]: pgmap v1330: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 766 B/s rd, 0 op/s
Dec 06 10:25:25 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:25:25 compute-0 nova_compute[254819]: 2025-12-06 10:25:25.158 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:25:26 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1331: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:25:26 compute-0 ceph-mon[74327]: pgmap v1331: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:25:26 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:25:26 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:25:26 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:25:26.538 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:25:26 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:25:26 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:25:26 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:25:26.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:25:27 compute-0 nova_compute[254819]: 2025-12-06 10:25:27.576 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:25:27 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:25:27.742Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 06 10:25:28 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1332: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 766 B/s rd, 0 op/s
Dec 06 10:25:28 compute-0 ceph-mon[74327]: pgmap v1332: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 766 B/s rd, 0 op/s
Dec 06 10:25:28 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:25:28 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:25:28 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:25:28.540 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:25:28 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:25:28 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:25:28 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:25:28.874 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:25:29 compute-0 podman[294858]: 2025-12-06 10:25:29.4563517 +0000 UTC m=+0.079980510 container health_status a9cf33c1bf7891b3fdd9db4717ad5f3587fe9e5acf9171920c6d6334b70a502a (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, tcib_managed=true)
Dec 06 10:25:30 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:25:30 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1333: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:25:30 compute-0 nova_compute[254819]: 2025-12-06 10:25:30.163 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:25:30 compute-0 ceph-mon[74327]: pgmap v1333: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:25:30 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:25:30 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:25:30 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:25:30.542 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:25:30 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:25:30 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:25:30 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:25:30.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:25:30 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:25:30] "GET /metrics HTTP/1.1" 200 48458 "" "Prometheus/2.51.0"
Dec 06 10:25:30 compute-0 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:25:30] "GET /metrics HTTP/1.1" 200 48458 "" "Prometheus/2.51.0"
Dec 06 10:25:32 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1334: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:25:32 compute-0 ceph-mon[74327]: pgmap v1334: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:25:32 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:25:32 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:25:32 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:25:32.544 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:25:32 compute-0 nova_compute[254819]: 2025-12-06 10:25:32.577 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:25:32 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:25:32 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:25:32 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:25:32.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:25:34 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1335: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec 06 10:25:34 compute-0 ceph-mon[74327]: pgmap v1335: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec 06 10:25:34 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:25:34 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:25:34 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:25:34.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:25:34 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:25:34 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:25:34 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:25:34.884 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:25:35 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:25:35 compute-0 nova_compute[254819]: 2025-12-06 10:25:35.166 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:25:35 compute-0 podman[294884]: 2025-12-06 10:25:35.496785892 +0000 UTC m=+0.123692649 container health_status ed08b04f63d3695f0aa2f04fcc706897af4aa24f286fa3cd10a4323ee2c868ab (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Dec 06 10:25:36 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1336: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:25:36 compute-0 ceph-mon[74327]: pgmap v1336: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:25:36 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:25:36 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:25:36 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:25:36.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:25:36 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:25:36 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:25:36 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:25:36.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:25:37 compute-0 nova_compute[254819]: 2025-12-06 10:25:37.578 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:25:37 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:25:37.744Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 06 10:25:37 compute-0 sudo[294914]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 10:25:37 compute-0 sudo[294914]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:25:37 compute-0 sudo[294914]: pam_unix(sudo:session): session closed for user root
Dec 06 10:25:38 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1337: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec 06 10:25:38 compute-0 ceph-mon[74327]: pgmap v1337: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec 06 10:25:38 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:25:38 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:25:38 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:25:38.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:25:38 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:25:38 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:25:38 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:25:38.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:25:38 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 06 10:25:38 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 10:25:39 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 10:25:39 compute-0 podman[294940]: 2025-12-06 10:25:39.430080778 +0000 UTC m=+0.064058415 container health_status ec1e66d2175087ad7a28a9a6b5a784e30128df3f5e268959d009e364cd5120f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Dec 06 10:25:40 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:25:40 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1338: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:25:40 compute-0 nova_compute[254819]: 2025-12-06 10:25:40.171 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:25:40 compute-0 ceph-mon[74327]: pgmap v1338: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:25:40 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:25:40 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:25:40 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:25:40.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:25:40 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:25:40 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:25:40 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:25:40.893 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:25:40 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:25:40] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Dec 06 10:25:40 compute-0 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:25:40] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Dec 06 10:25:42 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1339: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:25:42 compute-0 ceph-mon[74327]: pgmap v1339: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:25:42 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:25:42 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:25:42 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:25:42.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:25:42 compute-0 nova_compute[254819]: 2025-12-06 10:25:42.583 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:25:42 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:25:42 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:25:42 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:25:42.895 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:25:44 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1340: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec 06 10:25:44 compute-0 ceph-mon[74327]: pgmap v1340: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec 06 10:25:44 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:25:44 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:25:44 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:25:44.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:25:44 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:25:44 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:25:44 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:25:44.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:25:45 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:25:45 compute-0 nova_compute[254819]: 2025-12-06 10:25:45.175 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:25:46 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 06 10:25:46 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2814903871' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 10:25:46 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 06 10:25:46 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2814903871' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 10:25:46 compute-0 ceph-mon[74327]: from='client.? 192.168.122.10:0/2814903871' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 10:25:46 compute-0 ceph-mon[74327]: from='client.? 192.168.122.10:0/2814903871' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 10:25:46 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1341: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:25:46 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:25:46 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:25:46 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:25:46.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:25:46 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:25:46 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:25:46 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:25:46.902 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:25:47 compute-0 ceph-mon[74327]: pgmap v1341: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:25:47 compute-0 nova_compute[254819]: 2025-12-06 10:25:47.582 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:25:47 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:25:47.744Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 06 10:25:48 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1342: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec 06 10:25:48 compute-0 ceph-mon[74327]: pgmap v1342: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec 06 10:25:48 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:25:48 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:25:48 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:25:48.562 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:25:48 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:25:48 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:25:48 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:25:48.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:25:50 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:25:50 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1343: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:25:50 compute-0 nova_compute[254819]: 2025-12-06 10:25:50.178 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:25:50 compute-0 ceph-mon[74327]: pgmap v1343: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:25:50 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:25:50 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:25:50 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:25:50.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:25:50 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:25:50] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Dec 06 10:25:50 compute-0 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:25:50] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Dec 06 10:25:50 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:25:50 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:25:50 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:25:50.908 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:25:52 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1344: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:25:52 compute-0 ceph-mon[74327]: pgmap v1344: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:25:52 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:25:52 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:25:52 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:25:52.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:25:52 compute-0 nova_compute[254819]: 2025-12-06 10:25:52.583 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:25:52 compute-0 nova_compute[254819]: 2025-12-06 10:25:52.750 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:25:52 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:25:52 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:25:52 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:25:52.912 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:25:53 compute-0 nova_compute[254819]: 2025-12-06 10:25:53.749 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:25:53 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 06 10:25:53 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 10:25:54 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 10:25:54 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 10:25:54 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 10:25:54 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 10:25:54 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 10:25:54 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 10:25:54 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 10:25:54 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1345: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec 06 10:25:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:25:54.252 162267 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:25:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:25:54.252 162267 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:25:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:25:54.252 162267 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
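The Acquiring/acquired/released triple above is oslo_concurrency's standard debug trace around a function guarded by a named lock; the waited/held timings come from its `inner` wrapper at lockutils.py. A minimal reproduction of the pattern, as a hedged sketch:

    from oslo_concurrency import lockutils

    @lockutils.synchronized('_check_child_processes')
    def check_child_processes():
        pass  # body runs with the named lock held; debug logs show waited/held times

    check_child_processes()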
Dec 06 10:25:54 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:25:54 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:25:54 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:25:54.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:25:54 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:25:54 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:25:54 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:25:54.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:25:55 compute-0 ceph-mon[74327]: pgmap v1345: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec 06 10:25:55 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:25:55 compute-0 nova_compute[254819]: 2025-12-06 10:25:55.120 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:25:55 compute-0 nova_compute[254819]: 2025-12-06 10:25:55.120 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:25:55 compute-0 nova_compute[254819]: 2025-12-06 10:25:55.121 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:25:55 compute-0 nova_compute[254819]: 2025-12-06 10:25:55.121 254824 DEBUG nova.compute.resource_tracker [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 10:25:55 compute-0 nova_compute[254819]: 2025-12-06 10:25:55.122 254824 DEBUG oslo_concurrency.processutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 10:25:55 compute-0 nova_compute[254819]: 2025-12-06 10:25:55.182 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:25:55 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 06 10:25:55 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3470316468' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 10:25:55 compute-0 nova_compute[254819]: 2025-12-06 10:25:55.602 254824 DEBUG oslo_concurrency.processutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.481s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 10:25:55 compute-0 nova_compute[254819]: 2025-12-06 10:25:55.760 254824 WARNING nova.virt.libvirt.driver [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 10:25:55 compute-0 nova_compute[254819]: 2025-12-06 10:25:55.761 254824 DEBUG nova.compute.resource_tracker [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4505MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 06 10:25:55 compute-0 nova_compute[254819]: 2025-12-06 10:25:55.761 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:25:55 compute-0 nova_compute[254819]: 2025-12-06 10:25:55.761 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:25:55 compute-0 nova_compute[254819]: 2025-12-06 10:25:55.904 254824 DEBUG nova.compute.resource_tracker [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 10:25:55 compute-0 nova_compute[254819]: 2025-12-06 10:25:55.904 254824 DEBUG nova.compute.resource_tracker [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 06 10:25:55 compute-0 nova_compute[254819]: 2025-12-06 10:25:55.919 254824 DEBUG nova.scheduler.client.report [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Refreshing inventories for resource provider 06a9c7d1-c74c-47ea-9e97-16acfab6aa88 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Dec 06 10:25:56 compute-0 nova_compute[254819]: 2025-12-06 10:25:56.068 254824 DEBUG nova.scheduler.client.report [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Updating ProviderTree inventory for provider 06a9c7d1-c74c-47ea-9e97-16acfab6aa88 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Dec 06 10:25:56 compute-0 nova_compute[254819]: 2025-12-06 10:25:56.068 254824 DEBUG nova.compute.provider_tree [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Updating inventory in ProviderTree for provider 06a9c7d1-c74c-47ea-9e97-16acfab6aa88 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Dec 06 10:25:56 compute-0 nova_compute[254819]: 2025-12-06 10:25:56.099 254824 DEBUG nova.scheduler.client.report [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Refreshing aggregate associations for resource provider 06a9c7d1-c74c-47ea-9e97-16acfab6aa88, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Dec 06 10:25:56 compute-0 ceph-mon[74327]: from='client.? 192.168.122.100:0/3470316468' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 10:25:56 compute-0 nova_compute[254819]: 2025-12-06 10:25:56.122 254824 DEBUG nova.scheduler.client.report [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Refreshing trait associations for resource provider 06a9c7d1-c74c-47ea-9e97-16acfab6aa88, traits: HW_CPU_X86_AVX,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSE4A,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_SSSE3,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_SECURITY_TPM_1_2,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_SSE41,COMPUTE_STORAGE_BUS_IDE,COMPUTE_STORAGE_BUS_FDC,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_SSE42,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_SSE2,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_BMI2,COMPUTE_TRUSTED_CERTS,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_RESCUE_BFV,COMPUTE_DEVICE_TAGGING,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_CLMUL,HW_CPU_X86_BMI,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_MMX,HW_CPU_X86_SHA,HW_CPU_X86_AMD_SVM,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_AVX2,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_FMA3,HW_CPU_X86_AESNI,HW_CPU_X86_SVM,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_F16C,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_ABM,COMPUTE_ACCELERATORS,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_NODE,HW_CPU_X86_SSE,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_GRAPHICS_MODEL_VGA _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Dec 06 10:25:56 compute-0 nova_compute[254819]: 2025-12-06 10:25:56.137 254824 DEBUG oslo_concurrency.processutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 10:25:56 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1346: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:25:56 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 06 10:25:56 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/261979015' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 10:25:56 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:25:56 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:25:56 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:25:56.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:25:56 compute-0 nova_compute[254819]: 2025-12-06 10:25:56.578 254824 DEBUG oslo_concurrency.processutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 10:25:56 compute-0 nova_compute[254819]: 2025-12-06 10:25:56.583 254824 DEBUG nova.compute.provider_tree [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Inventory has not changed in ProviderTree for provider: 06a9c7d1-c74c-47ea-9e97-16acfab6aa88 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 10:25:56 compute-0 nova_compute[254819]: 2025-12-06 10:25:56.599 254824 DEBUG nova.scheduler.client.report [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Inventory has not changed for provider 06a9c7d1-c74c-47ea-9e97-16acfab6aa88 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 10:25:56 compute-0 nova_compute[254819]: 2025-12-06 10:25:56.600 254824 DEBUG nova.compute.resource_tracker [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 10:25:56 compute-0 nova_compute[254819]: 2025-12-06 10:25:56.600 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.839s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:25:56 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:25:56 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:25:56 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:25:56.917 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:25:57 compute-0 ceph-mon[74327]: pgmap v1346: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:25:57 compute-0 ceph-mon[74327]: from='client.? 192.168.122.100:0/261979015' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 10:25:57 compute-0 nova_compute[254819]: 2025-12-06 10:25:57.585 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:25:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:25:57.745Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 06 10:25:57 compute-0 sudo[295025]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 10:25:57 compute-0 sudo[295025]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:25:57 compute-0 sudo[295025]: pam_unix(sudo:session): session closed for user root
Dec 06 10:25:58 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1347: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec 06 10:25:58 compute-0 ceph-mon[74327]: pgmap v1347: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec 06 10:25:58 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:25:58 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:25:58 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:25:58.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:25:58 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:25:58 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:25:58 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:25:58.920 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:25:59 compute-0 ceph-mon[74327]: from='client.? 192.168.122.101:0/2411554739' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 10:25:59 compute-0 nova_compute[254819]: 2025-12-06 10:25:59.601 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:25:59 compute-0 nova_compute[254819]: 2025-12-06 10:25:59.602 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:25:59 compute-0 nova_compute[254819]: 2025-12-06 10:25:59.749 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:26:00 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:26:00 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1348: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:26:00 compute-0 nova_compute[254819]: 2025-12-06 10:26:00.185 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:26:00 compute-0 ceph-mon[74327]: from='client.? 192.168.122.101:0/3387854361' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 10:26:00 compute-0 ceph-mon[74327]: pgmap v1348: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:26:00 compute-0 podman[295053]: 2025-12-06 10:26:00.447866426 +0000 UTC m=+0.076195687 container health_status a9cf33c1bf7891b3fdd9db4717ad5f3587fe9e5acf9171920c6d6334b70a502a (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Dec 06 10:26:00 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:26:00 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:26:00 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:26:00.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:26:00 compute-0 nova_compute[254819]: 2025-12-06 10:26:00.748 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:26:00 compute-0 nova_compute[254819]: 2025-12-06 10:26:00.749 254824 DEBUG nova.compute.manager [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 06 10:26:00 compute-0 nova_compute[254819]: 2025-12-06 10:26:00.749 254824 DEBUG nova.compute.manager [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 06 10:26:00 compute-0 nova_compute[254819]: 2025-12-06 10:26:00.768 254824 DEBUG nova.compute.manager [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 06 10:26:00 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:26:00] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Dec 06 10:26:00 compute-0 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:26:00] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Dec 06 10:26:00 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:26:00 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:26:00 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:26:00.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:26:01 compute-0 sudo[295073]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 10:26:01 compute-0 sudo[295073]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:26:01 compute-0 sudo[295073]: pam_unix(sudo:session): session closed for user root
Dec 06 10:26:01 compute-0 sudo[295098]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Dec 06 10:26:01 compute-0 sudo[295098]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:26:01 compute-0 nova_compute[254819]: 2025-12-06 10:26:01.748 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:26:01 compute-0 nova_compute[254819]: 2025-12-06 10:26:01.749 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:26:01 compute-0 nova_compute[254819]: 2025-12-06 10:26:01.749 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:26:01 compute-0 sudo[295098]: pam_unix(sudo:session): session closed for user root
Dec 06 10:26:02 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1349: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:26:02 compute-0 ceph-mon[74327]: pgmap v1349: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:26:02 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:26:02 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:26:02 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:26:02.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:26:02 compute-0 nova_compute[254819]: 2025-12-06 10:26:02.587 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:26:02 compute-0 nova_compute[254819]: 2025-12-06 10:26:02.761 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:26:02 compute-0 nova_compute[254819]: 2025-12-06 10:26:02.761 254824 DEBUG nova.compute.manager [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 06 10:26:02 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:26:02 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:26:02 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:26:02.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:26:03 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec 06 10:26:03 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 10:26:03 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec 06 10:26:03 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 10:26:03 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec 06 10:26:03 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 10:26:03 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec 06 10:26:03 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 10:26:03 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0)
Dec 06 10:26:03 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Dec 06 10:26:04 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1350: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec 06 10:26:04 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 10:26:04 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 10:26:04 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 10:26:04 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 10:26:04 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Dec 06 10:26:04 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0)
Dec 06 10:26:04 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Dec 06 10:26:04 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 06 10:26:04 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 10:26:04 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 06 10:26:04 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 10:26:04 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 06 10:26:04 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1351: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 602 B/s rd, 0 op/s
Dec 06 10:26:04 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 10:26:04 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Dec 06 10:26:04 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 10:26:04 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 06 10:26:04 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 10:26:04 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 06 10:26:04 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 10:26:04 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 06 10:26:04 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 10:26:04 compute-0 sudo[295159]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 10:26:04 compute-0 sudo[295159]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:26:04 compute-0 sudo[295159]: pam_unix(sudo:session): session closed for user root
Dec 06 10:26:04 compute-0 sudo[295184]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 5ecd3f74-dade-5fc4-92ce-8950ae424258 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Dec 06 10:26:04 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:26:04 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:26:04 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:26:04.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:26:04 compute-0 sudo[295184]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:26:04 compute-0 ceph-mon[74327]: log_channel(cluster) log [WRN] : Health check failed: 1 failed cephadm daemon(s) (CEPHADM_FAILED_DAEMON)
Dec 06 10:26:04 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:26:04 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:26:04 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:26:04.928 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:26:05 compute-0 podman[295251]: 2025-12-06 10:26:05.062631843 +0000 UTC m=+0.055447801 container create f804bfa09677fed5084234c99591862c464af78b5bef749db9f7255ee9afd9c6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_mendel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 10:26:05 compute-0 systemd[1]: Started libpod-conmon-f804bfa09677fed5084234c99591862c464af78b5bef749db9f7255ee9afd9c6.scope.
Dec 06 10:26:05 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:26:05 compute-0 podman[295251]: 2025-12-06 10:26:05.035038151 +0000 UTC m=+0.027854119 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 06 10:26:05 compute-0 systemd[1]: Started libcrun container.
Dec 06 10:26:05 compute-0 podman[295251]: 2025-12-06 10:26:05.168192668 +0000 UTC m=+0.161008606 container init f804bfa09677fed5084234c99591862c464af78b5bef749db9f7255ee9afd9c6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_mendel, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 10:26:05 compute-0 podman[295251]: 2025-12-06 10:26:05.179117876 +0000 UTC m=+0.171933814 container start f804bfa09677fed5084234c99591862c464af78b5bef749db9f7255ee9afd9c6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_mendel, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 10:26:05 compute-0 podman[295251]: 2025-12-06 10:26:05.183507915 +0000 UTC m=+0.176323853 container attach f804bfa09677fed5084234c99591862c464af78b5bef749db9f7255ee9afd9c6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_mendel, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Dec 06 10:26:05 compute-0 confident_mendel[295267]: 167 167
Dec 06 10:26:05 compute-0 nova_compute[254819]: 2025-12-06 10:26:05.188 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:26:05 compute-0 systemd[1]: libpod-f804bfa09677fed5084234c99591862c464af78b5bef749db9f7255ee9afd9c6.scope: Deactivated successfully.
Dec 06 10:26:05 compute-0 podman[295251]: 2025-12-06 10:26:05.190385623 +0000 UTC m=+0.183201551 container died f804bfa09677fed5084234c99591862c464af78b5bef749db9f7255ee9afd9c6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_mendel, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 06 10:26:05 compute-0 ceph-mon[74327]: pgmap v1350: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec 06 10:26:05 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Dec 06 10:26:05 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 10:26:05 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 10:26:05 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 10:26:05 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 10:26:05 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 10:26:05 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 10:26:05 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 10:26:05 compute-0 ceph-mon[74327]: Health check failed: 1 failed cephadm daemon(s) (CEPHADM_FAILED_DAEMON)
Dec 06 10:26:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-84c2c8a7d002f95aad81858764b60ed7acd0af1f4cba04b2bad552965cc18309-merged.mount: Deactivated successfully.
Dec 06 10:26:05 compute-0 podman[295251]: 2025-12-06 10:26:05.252286749 +0000 UTC m=+0.245102707 container remove f804bfa09677fed5084234c99591862c464af78b5bef749db9f7255ee9afd9c6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_mendel, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 06 10:26:05 compute-0 systemd[1]: libpod-conmon-f804bfa09677fed5084234c99591862c464af78b5bef749db9f7255ee9afd9c6.scope: Deactivated successfully.
Dec 06 10:26:05 compute-0 podman[295293]: 2025-12-06 10:26:05.414090366 +0000 UTC m=+0.047530206 container create 3822d0f19c0eff45f652da2fc11b6e5754d23f6b6ac89be98bb8f38af41743a5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_lichterman, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 10:26:05 compute-0 systemd[1]: Started libpod-conmon-3822d0f19c0eff45f652da2fc11b6e5754d23f6b6ac89be98bb8f38af41743a5.scope.
Dec 06 10:26:05 compute-0 podman[295293]: 2025-12-06 10:26:05.394847491 +0000 UTC m=+0.028287351 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 06 10:26:05 compute-0 systemd[1]: Started libcrun container.
Dec 06 10:26:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/41c0b4438dac3664ccbb87c21f9f2be8c6f8dc5e334b29f336421cc721e68dd4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 10:26:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/41c0b4438dac3664ccbb87c21f9f2be8c6f8dc5e334b29f336421cc721e68dd4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 10:26:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/41c0b4438dac3664ccbb87c21f9f2be8c6f8dc5e334b29f336421cc721e68dd4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 10:26:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/41c0b4438dac3664ccbb87c21f9f2be8c6f8dc5e334b29f336421cc721e68dd4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 10:26:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/41c0b4438dac3664ccbb87c21f9f2be8c6f8dc5e334b29f336421cc721e68dd4/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 06 10:26:05 compute-0 podman[295293]: 2025-12-06 10:26:05.534741882 +0000 UTC m=+0.168181722 container init 3822d0f19c0eff45f652da2fc11b6e5754d23f6b6ac89be98bb8f38af41743a5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_lichterman, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 06 10:26:05 compute-0 podman[295293]: 2025-12-06 10:26:05.547374405 +0000 UTC m=+0.180814245 container start 3822d0f19c0eff45f652da2fc11b6e5754d23f6b6ac89be98bb8f38af41743a5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_lichterman, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Dec 06 10:26:05 compute-0 podman[295293]: 2025-12-06 10:26:05.551065616 +0000 UTC m=+0.184505456 container attach 3822d0f19c0eff45f652da2fc11b6e5754d23f6b6ac89be98bb8f38af41743a5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_lichterman, io.buildah.version=1.40.1, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Dec 06 10:26:05 compute-0 nova_compute[254819]: 2025-12-06 10:26:05.748 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:26:05 compute-0 nova_compute[254819]: 2025-12-06 10:26:05.749 254824 DEBUG nova.compute.manager [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Dec 06 10:26:05 compute-0 nova_compute[254819]: 2025-12-06 10:26:05.772 254824 DEBUG nova.compute.manager [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Dec 06 10:26:05 compute-0 nova_compute[254819]: 2025-12-06 10:26:05.772 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:26:05 compute-0 nova_compute[254819]: 2025-12-06 10:26:05.772 254824 DEBUG nova.compute.manager [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Dec 06 10:26:05 compute-0 compassionate_lichterman[295310]: --> passed data devices: 0 physical, 1 LVM
Dec 06 10:26:05 compute-0 compassionate_lichterman[295310]: --> All data devices are unavailable
Dec 06 10:26:05 compute-0 systemd[1]: libpod-3822d0f19c0eff45f652da2fc11b6e5754d23f6b6ac89be98bb8f38af41743a5.scope: Deactivated successfully.
Dec 06 10:26:05 compute-0 podman[295327]: 2025-12-06 10:26:05.939866885 +0000 UTC m=+0.038337255 container died 3822d0f19c0eff45f652da2fc11b6e5754d23f6b6ac89be98bb8f38af41743a5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_lichterman, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0)
Dec 06 10:26:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-41c0b4438dac3664ccbb87c21f9f2be8c6f8dc5e334b29f336421cc721e68dd4-merged.mount: Deactivated successfully.
Dec 06 10:26:05 compute-0 podman[295327]: 2025-12-06 10:26:05.984677405 +0000 UTC m=+0.083147735 container remove 3822d0f19c0eff45f652da2fc11b6e5754d23f6b6ac89be98bb8f38af41743a5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_lichterman, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 10:26:05 compute-0 systemd[1]: libpod-conmon-3822d0f19c0eff45f652da2fc11b6e5754d23f6b6ac89be98bb8f38af41743a5.scope: Deactivated successfully.
Dec 06 10:26:06 compute-0 podman[295326]: 2025-12-06 10:26:06.046416437 +0000 UTC m=+0.139664584 container health_status ed08b04f63d3695f0aa2f04fcc706897af4aa24f286fa3cd10a4323ee2c868ab (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true)
Dec 06 10:26:06 compute-0 sudo[295184]: pam_unix(sudo:session): session closed for user root
Dec 06 10:26:06 compute-0 sudo[295367]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 10:26:06 compute-0 sudo[295367]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:26:06 compute-0 sudo[295367]: pam_unix(sudo:session): session closed for user root
Dec 06 10:26:06 compute-0 sudo[295392]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 5ecd3f74-dade-5fc4-92ce-8950ae424258 -- lvm list --format json
Dec 06 10:26:06 compute-0 sudo[295392]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:26:06 compute-0 ceph-mon[74327]: pgmap v1351: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 602 B/s rd, 0 op/s
Dec 06 10:26:06 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1352: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 602 B/s rd, 0 op/s
Dec 06 10:26:06 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:26:06 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:26:06 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:26:06.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:26:06 compute-0 podman[295458]: 2025-12-06 10:26:06.675739277 +0000 UTC m=+0.043066284 container create c536aec5d71a2e01d55f035d99532c1585fba3c68caf5cee4e660058b08c9789 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_margulis, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 10:26:06 compute-0 systemd[1]: Started libpod-conmon-c536aec5d71a2e01d55f035d99532c1585fba3c68caf5cee4e660058b08c9789.scope.
Dec 06 10:26:06 compute-0 podman[295458]: 2025-12-06 10:26:06.656503793 +0000 UTC m=+0.023830850 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 06 10:26:06 compute-0 systemd[1]: Started libcrun container.
Dec 06 10:26:06 compute-0 podman[295458]: 2025-12-06 10:26:06.773111219 +0000 UTC m=+0.140438256 container init c536aec5d71a2e01d55f035d99532c1585fba3c68caf5cee4e660058b08c9789 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_margulis, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2)
Dec 06 10:26:06 compute-0 podman[295458]: 2025-12-06 10:26:06.787302026 +0000 UTC m=+0.154629043 container start c536aec5d71a2e01d55f035d99532c1585fba3c68caf5cee4e660058b08c9789 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_margulis, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 10:26:06 compute-0 podman[295458]: 2025-12-06 10:26:06.791759636 +0000 UTC m=+0.159086663 container attach c536aec5d71a2e01d55f035d99532c1585fba3c68caf5cee4e660058b08c9789 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_margulis, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS)
Dec 06 10:26:06 compute-0 xenodochial_margulis[295475]: 167 167
Dec 06 10:26:06 compute-0 systemd[1]: libpod-c536aec5d71a2e01d55f035d99532c1585fba3c68caf5cee4e660058b08c9789.scope: Deactivated successfully.
Dec 06 10:26:06 compute-0 podman[295458]: 2025-12-06 10:26:06.796757593 +0000 UTC m=+0.164084610 container died c536aec5d71a2e01d55f035d99532c1585fba3c68caf5cee4e660058b08c9789 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_margulis, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 10:26:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-99927a5d9aa45770db80b61c6268ace7b13e3a4c24c94894c48f99e6ad3a785c-merged.mount: Deactivated successfully.
Dec 06 10:26:06 compute-0 podman[295458]: 2025-12-06 10:26:06.838620833 +0000 UTC m=+0.205947850 container remove c536aec5d71a2e01d55f035d99532c1585fba3c68caf5cee4e660058b08c9789 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_margulis, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec 06 10:26:06 compute-0 systemd[1]: libpod-conmon-c536aec5d71a2e01d55f035d99532c1585fba3c68caf5cee4e660058b08c9789.scope: Deactivated successfully.
Dec 06 10:26:06 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:26:06 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:26:06 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:26:06.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:26:07 compute-0 podman[295498]: 2025-12-06 10:26:07.058945344 +0000 UTC m=+0.068912618 container create 561271bbab13675bb42ae9070b36e862b430c9ac7ea20b7a2e3fef22a732cf65 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_wu, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Dec 06 10:26:07 compute-0 systemd[1]: Started libpod-conmon-561271bbab13675bb42ae9070b36e862b430c9ac7ea20b7a2e3fef22a732cf65.scope.
Dec 06 10:26:07 compute-0 podman[295498]: 2025-12-06 10:26:07.026851659 +0000 UTC m=+0.036819003 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 06 10:26:07 compute-0 systemd[1]: Started libcrun container.
Dec 06 10:26:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/40c84f5fa51beb3c7314b14ded408460416d4e0eace3654ed6bf6b24adf8f13c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 10:26:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/40c84f5fa51beb3c7314b14ded408460416d4e0eace3654ed6bf6b24adf8f13c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 10:26:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/40c84f5fa51beb3c7314b14ded408460416d4e0eace3654ed6bf6b24adf8f13c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 10:26:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/40c84f5fa51beb3c7314b14ded408460416d4e0eace3654ed6bf6b24adf8f13c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 10:26:07 compute-0 podman[295498]: 2025-12-06 10:26:07.169453683 +0000 UTC m=+0.179420967 container init 561271bbab13675bb42ae9070b36e862b430c9ac7ea20b7a2e3fef22a732cf65 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_wu, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec 06 10:26:07 compute-0 podman[295498]: 2025-12-06 10:26:07.183118336 +0000 UTC m=+0.193085610 container start 561271bbab13675bb42ae9070b36e862b430c9ac7ea20b7a2e3fef22a732cf65 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_wu, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 10:26:07 compute-0 podman[295498]: 2025-12-06 10:26:07.187891286 +0000 UTC m=+0.197858610 container attach 561271bbab13675bb42ae9070b36e862b430c9ac7ea20b7a2e3fef22a732cf65 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_wu, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 06 10:26:07 compute-0 ceph-mon[74327]: pgmap v1352: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 602 B/s rd, 0 op/s
Dec 06 10:26:07 compute-0 ceph-mon[74327]: from='client.? 192.168.122.102:0/3490516856' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 10:26:07 compute-0 bold_wu[295515]: {
Dec 06 10:26:07 compute-0 bold_wu[295515]:     "1": [
Dec 06 10:26:07 compute-0 bold_wu[295515]:         {
Dec 06 10:26:07 compute-0 bold_wu[295515]:             "devices": [
Dec 06 10:26:07 compute-0 bold_wu[295515]:                 "/dev/loop3"
Dec 06 10:26:07 compute-0 bold_wu[295515]:             ],
Dec 06 10:26:07 compute-0 bold_wu[295515]:             "lv_name": "ceph_lv0",
Dec 06 10:26:07 compute-0 bold_wu[295515]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 10:26:07 compute-0 bold_wu[295515]:             "lv_size": "21470642176",
Dec 06 10:26:07 compute-0 bold_wu[295515]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=a55pcS-v3Vt-CJ4W-fqCV-Baze-9VlY-jG9prS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=5ecd3f74-dade-5fc4-92ce-8950ae424258,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=7899c4d8-edb4-4836-b838-c4aa702ad7af,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 06 10:26:07 compute-0 bold_wu[295515]:             "lv_uuid": "a55pcS-v3Vt-CJ4W-fqCV-Baze-9VlY-jG9prS",
Dec 06 10:26:07 compute-0 bold_wu[295515]:             "name": "ceph_lv0",
Dec 06 10:26:07 compute-0 bold_wu[295515]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 10:26:07 compute-0 bold_wu[295515]:             "tags": {
Dec 06 10:26:07 compute-0 bold_wu[295515]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 06 10:26:07 compute-0 bold_wu[295515]:                 "ceph.block_uuid": "a55pcS-v3Vt-CJ4W-fqCV-Baze-9VlY-jG9prS",
Dec 06 10:26:07 compute-0 bold_wu[295515]:                 "ceph.cephx_lockbox_secret": "",
Dec 06 10:26:07 compute-0 bold_wu[295515]:                 "ceph.cluster_fsid": "5ecd3f74-dade-5fc4-92ce-8950ae424258",
Dec 06 10:26:07 compute-0 bold_wu[295515]:                 "ceph.cluster_name": "ceph",
Dec 06 10:26:07 compute-0 bold_wu[295515]:                 "ceph.crush_device_class": "",
Dec 06 10:26:07 compute-0 bold_wu[295515]:                 "ceph.encrypted": "0",
Dec 06 10:26:07 compute-0 bold_wu[295515]:                 "ceph.osd_fsid": "7899c4d8-edb4-4836-b838-c4aa702ad7af",
Dec 06 10:26:07 compute-0 bold_wu[295515]:                 "ceph.osd_id": "1",
Dec 06 10:26:07 compute-0 bold_wu[295515]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 06 10:26:07 compute-0 bold_wu[295515]:                 "ceph.type": "block",
Dec 06 10:26:07 compute-0 bold_wu[295515]:                 "ceph.vdo": "0",
Dec 06 10:26:07 compute-0 bold_wu[295515]:                 "ceph.with_tpm": "0"
Dec 06 10:26:07 compute-0 bold_wu[295515]:             },
Dec 06 10:26:07 compute-0 bold_wu[295515]:             "type": "block",
Dec 06 10:26:07 compute-0 bold_wu[295515]:             "vg_name": "ceph_vg0"
Dec 06 10:26:07 compute-0 bold_wu[295515]:         }
Dec 06 10:26:07 compute-0 bold_wu[295515]:     ]
Dec 06 10:26:07 compute-0 bold_wu[295515]: }
Dec 06 10:26:07 compute-0 systemd[1]: libpod-561271bbab13675bb42ae9070b36e862b430c9ac7ea20b7a2e3fef22a732cf65.scope: Deactivated successfully.
Dec 06 10:26:07 compute-0 podman[295498]: 2025-12-06 10:26:07.497533199 +0000 UTC m=+0.507500463 container died 561271bbab13675bb42ae9070b36e862b430c9ac7ea20b7a2e3fef22a732cf65 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_wu, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Dec 06 10:26:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-40c84f5fa51beb3c7314b14ded408460416d4e0eace3654ed6bf6b24adf8f13c-merged.mount: Deactivated successfully.
Dec 06 10:26:07 compute-0 podman[295498]: 2025-12-06 10:26:07.542583257 +0000 UTC m=+0.552550491 container remove 561271bbab13675bb42ae9070b36e862b430c9ac7ea20b7a2e3fef22a732cf65 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_wu, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, ceph=True, org.label-schema.license=GPLv2)
Dec 06 10:26:07 compute-0 systemd[1]: libpod-conmon-561271bbab13675bb42ae9070b36e862b430c9ac7ea20b7a2e3fef22a732cf65.scope: Deactivated successfully.
Dec 06 10:26:07 compute-0 sudo[295392]: pam_unix(sudo:session): session closed for user root
Dec 06 10:26:07 compute-0 nova_compute[254819]: 2025-12-06 10:26:07.589 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:26:07 compute-0 sudo[295536]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 10:26:07 compute-0 sudo[295536]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:26:07 compute-0 sudo[295536]: pam_unix(sudo:session): session closed for user root
Dec 06 10:26:07 compute-0 sudo[295561]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 5ecd3f74-dade-5fc4-92ce-8950ae424258 -- raw list --format json
Dec 06 10:26:07 compute-0 sudo[295561]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:26:07 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:26:07.746Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 06 10:26:08 compute-0 podman[295629]: 2025-12-06 10:26:08.204452523 +0000 UTC m=+0.045828020 container create 2e9202f0ba0220c2a804e7312f53ba59b089423f0828ad8eaea079f1a011a98a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_haibt, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Dec 06 10:26:08 compute-0 systemd[1]: Started libpod-conmon-2e9202f0ba0220c2a804e7312f53ba59b089423f0828ad8eaea079f1a011a98a.scope.
Dec 06 10:26:08 compute-0 ceph-mon[74327]: from='client.? 192.168.122.102:0/239905449' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 10:26:08 compute-0 podman[295629]: 2025-12-06 10:26:08.182647509 +0000 UTC m=+0.024023026 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 06 10:26:08 compute-0 systemd[1]: Started libcrun container.
Dec 06 10:26:08 compute-0 podman[295629]: 2025-12-06 10:26:08.294040083 +0000 UTC m=+0.135415600 container init 2e9202f0ba0220c2a804e7312f53ba59b089423f0828ad8eaea079f1a011a98a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_haibt, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 10:26:08 compute-0 podman[295629]: 2025-12-06 10:26:08.299634265 +0000 UTC m=+0.141009762 container start 2e9202f0ba0220c2a804e7312f53ba59b089423f0828ad8eaea079f1a011a98a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_haibt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec 06 10:26:08 compute-0 podman[295629]: 2025-12-06 10:26:08.302845072 +0000 UTC m=+0.144220589 container attach 2e9202f0ba0220c2a804e7312f53ba59b089423f0828ad8eaea079f1a011a98a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_haibt, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Dec 06 10:26:08 compute-0 inspiring_haibt[295645]: 167 167
Dec 06 10:26:08 compute-0 systemd[1]: libpod-2e9202f0ba0220c2a804e7312f53ba59b089423f0828ad8eaea079f1a011a98a.scope: Deactivated successfully.
Dec 06 10:26:08 compute-0 podman[295629]: 2025-12-06 10:26:08.30531371 +0000 UTC m=+0.146689217 container died 2e9202f0ba0220c2a804e7312f53ba59b089423f0828ad8eaea079f1a011a98a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_haibt, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 10:26:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-e18611d23e644d3f0126a08d9f17ce709d74f48246331ebf00c2a7f3841f1832-merged.mount: Deactivated successfully.
Dec 06 10:26:08 compute-0 podman[295629]: 2025-12-06 10:26:08.344440685 +0000 UTC m=+0.185816192 container remove 2e9202f0ba0220c2a804e7312f53ba59b089423f0828ad8eaea079f1a011a98a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_haibt, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec 06 10:26:08 compute-0 systemd[1]: libpod-conmon-2e9202f0ba0220c2a804e7312f53ba59b089423f0828ad8eaea079f1a011a98a.scope: Deactivated successfully.
Dec 06 10:26:08 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1353: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 602 B/s rd, 0 op/s
Dec 06 10:26:08 compute-0 podman[295670]: 2025-12-06 10:26:08.537100183 +0000 UTC m=+0.049774437 container create da24901174e0007b07fcc5ae2d263865a2675408e50f1d4c34ab8df4976167ce (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_rubin, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid)
Dec 06 10:26:08 compute-0 systemd[1]: Started libpod-conmon-da24901174e0007b07fcc5ae2d263865a2675408e50f1d4c34ab8df4976167ce.scope.
Dec 06 10:26:08 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:26:08 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:26:08 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:26:08.582 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:26:08 compute-0 systemd[1]: Started libcrun container.
Dec 06 10:26:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11cb8a667a50facf5c70eb077f72610352c08025e6f454db51ab74db3a4a6816/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 10:26:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11cb8a667a50facf5c70eb077f72610352c08025e6f454db51ab74db3a4a6816/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 10:26:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11cb8a667a50facf5c70eb077f72610352c08025e6f454db51ab74db3a4a6816/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 10:26:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11cb8a667a50facf5c70eb077f72610352c08025e6f454db51ab74db3a4a6816/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 10:26:08 compute-0 podman[295670]: 2025-12-06 10:26:08.517787657 +0000 UTC m=+0.030461961 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 06 10:26:08 compute-0 podman[295670]: 2025-12-06 10:26:08.621071599 +0000 UTC m=+0.133745873 container init da24901174e0007b07fcc5ae2d263865a2675408e50f1d4c34ab8df4976167ce (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_rubin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True)
Dec 06 10:26:08 compute-0 podman[295670]: 2025-12-06 10:26:08.628688816 +0000 UTC m=+0.141363070 container start da24901174e0007b07fcc5ae2d263865a2675408e50f1d4c34ab8df4976167ce (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_rubin, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1)
Dec 06 10:26:08 compute-0 podman[295670]: 2025-12-06 10:26:08.631619176 +0000 UTC m=+0.144293460 container attach da24901174e0007b07fcc5ae2d263865a2675408e50f1d4c34ab8df4976167ce (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_rubin, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Dec 06 10:26:08 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:26:08 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 10:26:08 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:26:08.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 10:26:08 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/prometheus/health_history}] v 0)
Dec 06 10:26:09 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 10:26:09 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 06 10:26:09 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 10:26:09 compute-0 lvm[295760]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 06 10:26:09 compute-0 lvm[295760]: VG ceph_vg0 finished
Dec 06 10:26:09 compute-0 romantic_rubin[295685]: {}
Dec 06 10:26:09 compute-0 systemd[1]: libpod-da24901174e0007b07fcc5ae2d263865a2675408e50f1d4c34ab8df4976167ce.scope: Deactivated successfully.
Dec 06 10:26:09 compute-0 podman[295670]: 2025-12-06 10:26:09.430021151 +0000 UTC m=+0.942695405 container died da24901174e0007b07fcc5ae2d263865a2675408e50f1d4c34ab8df4976167ce (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_rubin, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 06 10:26:09 compute-0 systemd[1]: libpod-da24901174e0007b07fcc5ae2d263865a2675408e50f1d4c34ab8df4976167ce.scope: Consumed 1.252s CPU time.
Dec 06 10:26:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-11cb8a667a50facf5c70eb077f72610352c08025e6f454db51ab74db3a4a6816-merged.mount: Deactivated successfully.
Dec 06 10:26:09 compute-0 podman[295670]: 2025-12-06 10:26:09.484808874 +0000 UTC m=+0.997483138 container remove da24901174e0007b07fcc5ae2d263865a2675408e50f1d4c34ab8df4976167ce (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_rubin, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Dec 06 10:26:09 compute-0 systemd[1]: libpod-conmon-da24901174e0007b07fcc5ae2d263865a2675408e50f1d4c34ab8df4976167ce.scope: Deactivated successfully.
Dec 06 10:26:09 compute-0 sudo[295561]: pam_unix(sudo:session): session closed for user root
Dec 06 10:26:09 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 06 10:26:09 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 10:26:09 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 06 10:26:09 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 10:26:09 compute-0 podman[295775]: 2025-12-06 10:26:09.589022732 +0000 UTC m=+0.085149480 container health_status ec1e66d2175087ad7a28a9a6b5a784e30128df3f5e268959d009e364cd5120f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Dec 06 10:26:09 compute-0 sudo[295794]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 10:26:09 compute-0 sudo[295794]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:26:09 compute-0 sudo[295794]: pam_unix(sudo:session): session closed for user root
Dec 06 10:26:10 compute-0 ceph-mon[74327]: pgmap v1353: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 602 B/s rd, 0 op/s
Dec 06 10:26:10 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 10:26:10 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 10:26:10 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 10:26:10 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 10:26:10 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:26:10 compute-0 nova_compute[254819]: 2025-12-06 10:26:10.191 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:26:10 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1354: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 602 B/s rd, 0 op/s
Dec 06 10:26:10 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:26:10 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:26:10 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:26:10.583 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:26:10 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:26:10] "GET /metrics HTTP/1.1" 200 48454 "" "Prometheus/2.51.0"
Dec 06 10:26:10 compute-0 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:26:10] "GET /metrics HTTP/1.1" 200 48454 "" "Prometheus/2.51.0"
Dec 06 10:26:10 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:26:10 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:26:10 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:26:10.935 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:26:12 compute-0 ceph-mon[74327]: pgmap v1354: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 602 B/s rd, 0 op/s
Dec 06 10:26:12 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1355: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 602 B/s rd, 0 op/s
Dec 06 10:26:12 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:26:12 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:26:12 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:26:12.585 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:26:12 compute-0 nova_compute[254819]: 2025-12-06 10:26:12.593 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:26:12 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:26:12 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:26:12 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:26:12.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:26:14 compute-0 ceph-mon[74327]: pgmap v1355: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 602 B/s rd, 0 op/s
Dec 06 10:26:14 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1356: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 602 B/s rd, 0 op/s
Dec 06 10:26:14 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:26:14 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:26:14 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:26:14.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:26:14 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:26:14 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:26:14 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:26:14.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:26:15 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:26:15 compute-0 nova_compute[254819]: 2025-12-06 10:26:15.195 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:26:16 compute-0 ceph-mon[74327]: pgmap v1356: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 602 B/s rd, 0 op/s
Dec 06 10:26:16 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1357: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:26:16 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:26:16 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:26:16 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:26:16.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:26:16 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:26:16 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:26:16 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:26:16.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:26:17 compute-0 nova_compute[254819]: 2025-12-06 10:26:17.596 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:26:17 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:26:17.747Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 06 10:26:17 compute-0 sudo[295829]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 10:26:17 compute-0 sudo[295829]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:26:17 compute-0 sudo[295829]: pam_unix(sudo:session): session closed for user root
Dec 06 10:26:18 compute-0 ceph-mon[74327]: pgmap v1357: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:26:18 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1358: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec 06 10:26:18 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:26:18 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:26:18 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:26:18.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:26:18 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:26:18 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:26:18 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:26:18.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:26:20 compute-0 ceph-mon[74327]: pgmap v1358: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec 06 10:26:20 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:26:20 compute-0 ceph-mon[74327]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #81. Immutable memtables: 0.
Dec 06 10:26:20 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:26:20.127368) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 06 10:26:20 compute-0 ceph-mon[74327]: rocksdb: [db/flush_job.cc:856] [default] [JOB 45] Flushing memtable with next log file: 81
Dec 06 10:26:20 compute-0 ceph-mon[74327]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765016780127438, "job": 45, "event": "flush_started", "num_memtables": 1, "num_entries": 1167, "num_deletes": 255, "total_data_size": 1982910, "memory_usage": 2005696, "flush_reason": "Manual Compaction"}
Dec 06 10:26:20 compute-0 ceph-mon[74327]: rocksdb: [db/flush_job.cc:885] [default] [JOB 45] Level-0 flush table #82: started
Dec 06 10:26:20 compute-0 ceph-mon[74327]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765016780142881, "cf_name": "default", "job": 45, "event": "table_file_creation", "file_number": 82, "file_size": 1942990, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 37031, "largest_seqno": 38197, "table_properties": {"data_size": 1937422, "index_size": 2900, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1605, "raw_key_size": 12264, "raw_average_key_size": 19, "raw_value_size": 1926054, "raw_average_value_size": 3116, "num_data_blocks": 125, "num_entries": 618, "num_filter_entries": 618, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765016684, "oldest_key_time": 1765016684, "file_creation_time": 1765016780, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "423e8366-3852-4d2b-aa53-87abab31aff3", "db_session_id": "4WBX5WA2U4DRQ0QUUFCR", "orig_file_number": 82, "seqno_to_time_mapping": "N/A"}}
Dec 06 10:26:20 compute-0 ceph-mon[74327]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 45] Flush lasted 15561 microseconds, and 6165 cpu microseconds.
Dec 06 10:26:20 compute-0 ceph-mon[74327]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 10:26:20 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:26:20.142934) [db/flush_job.cc:967] [default] [JOB 45] Level-0 flush table #82: 1942990 bytes OK
Dec 06 10:26:20 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:26:20.142956) [db/memtable_list.cc:519] [default] Level-0 commit table #82 started
Dec 06 10:26:20 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:26:20.144838) [db/memtable_list.cc:722] [default] Level-0 commit table #82: memtable #1 done
Dec 06 10:26:20 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:26:20.144852) EVENT_LOG_v1 {"time_micros": 1765016780144847, "job": 45, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 06 10:26:20 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:26:20.144868) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 06 10:26:20 compute-0 ceph-mon[74327]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 45] Try to delete WAL files size 1977606, prev total WAL file size 1977606, number of live WAL files 2.
Dec 06 10:26:20 compute-0 ceph-mon[74327]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000078.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 10:26:20 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:26:20.145520) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0031303035' seq:72057594037927935, type:22 .. '6C6F676D0031323536' seq:0, type:0; will stop at (end)
Dec 06 10:26:20 compute-0 ceph-mon[74327]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 46] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 06 10:26:20 compute-0 ceph-mon[74327]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 45 Base level 0, inputs: [82(1897KB)], [80(12MB)]
Dec 06 10:26:20 compute-0 ceph-mon[74327]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765016780145548, "job": 46, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [82], "files_L6": [80], "score": -1, "input_data_size": 15268386, "oldest_snapshot_seqno": -1}
Dec 06 10:26:20 compute-0 nova_compute[254819]: 2025-12-06 10:26:20.199 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:26:20 compute-0 ceph-mon[74327]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 46] Generated table #83: 6851 keys, 15099811 bytes, temperature: kUnknown
Dec 06 10:26:20 compute-0 ceph-mon[74327]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765016780246004, "cf_name": "default", "job": 46, "event": "table_file_creation", "file_number": 83, "file_size": 15099811, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 15054753, "index_size": 26834, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 17157, "raw_key_size": 180378, "raw_average_key_size": 26, "raw_value_size": 14932111, "raw_average_value_size": 2179, "num_data_blocks": 1056, "num_entries": 6851, "num_filter_entries": 6851, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765013861, "oldest_key_time": 0, "file_creation_time": 1765016780, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "423e8366-3852-4d2b-aa53-87abab31aff3", "db_session_id": "4WBX5WA2U4DRQ0QUUFCR", "orig_file_number": 83, "seqno_to_time_mapping": "N/A"}}
Dec 06 10:26:20 compute-0 ceph-mon[74327]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 10:26:20 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:26:20.246303) [db/compaction/compaction_job.cc:1663] [default] [JOB 46] Compacted 1@0 + 1@6 files to L6 => 15099811 bytes
Dec 06 10:26:20 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:26:20.247596) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 151.8 rd, 150.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.9, 12.7 +0.0 blob) out(14.4 +0.0 blob), read-write-amplify(15.6) write-amplify(7.8) OK, records in: 7380, records dropped: 529 output_compression: NoCompression
Dec 06 10:26:20 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:26:20.247617) EVENT_LOG_v1 {"time_micros": 1765016780247608, "job": 46, "event": "compaction_finished", "compaction_time_micros": 100572, "compaction_time_cpu_micros": 27885, "output_level": 6, "num_output_files": 1, "total_output_size": 15099811, "num_input_records": 7380, "num_output_records": 6851, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 06 10:26:20 compute-0 ceph-mon[74327]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000082.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 10:26:20 compute-0 ceph-mon[74327]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765016780248299, "job": 46, "event": "table_file_deletion", "file_number": 82}
Dec 06 10:26:20 compute-0 ceph-mon[74327]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000080.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 10:26:20 compute-0 ceph-mon[74327]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765016780251602, "job": 46, "event": "table_file_deletion", "file_number": 80}
Dec 06 10:26:20 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:26:20.145430) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 10:26:20 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:26:20.251669) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 10:26:20 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:26:20.251674) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 10:26:20 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:26:20.251676) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 10:26:20 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:26:20.251678) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 10:26:20 compute-0 ceph-mon[74327]: rocksdb: (Original Log Time 2025/12/06-10:26:20.251680) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 10:26:20 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1359: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:26:20 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:26:20 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:26:20 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:26:20.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:26:20 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:26:20] "GET /metrics HTTP/1.1" 200 48454 "" "Prometheus/2.51.0"
Dec 06 10:26:20 compute-0 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:26:20] "GET /metrics HTTP/1.1" 200 48454 "" "Prometheus/2.51.0"
Dec 06 10:26:20 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:26:20 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:26:20 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:26:20.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
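[Editor's note] The pairs of anonymous "HEAD / HTTP/1.0" requests repeating every ~2 s from 192.168.122.100 and 192.168.122.102 throughout this window are consistent with load-balancer health probes rather than user traffic. A small parser for the beast access line, with the field layout inferred from these samples rather than from radosgw documentation:

    import re

    # Field layout inferred from the sample beast lines in this log.
    BEAST = re.compile(
        r'beast: 0x[0-9a-f]+: (?P<ip>\S+) - (?P<user>\S+) '
        r'\[(?P<ts>[^\]]+)\] "(?P<req>[^"]+)" (?P<status>\d+) (?P<size>\d+) '
        r'.*latency=(?P<latency>[\d.]+)s')

    line = ('beast: 0x7f53e66225d0: 192.168.122.100 - anonymous '
            '[06/Dec/2025:10:26:20.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - '
            'latency=0.000000000s')
    m = BEAST.search(line)
    print(m.group('ip'), m.group('req'), m.group('status'), m.group('latency'))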
Dec 06 10:26:22 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1360: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:26:22 compute-0 nova_compute[254819]: 2025-12-06 10:26:22.596 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:26:22 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:26:22 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:26:22 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:26:22.595 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:26:22 compute-0 ceph-mon[74327]: pgmap v1359: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:26:22 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:26:22 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:26:22 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:26:22.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:26:23 compute-0 ceph-mon[74327]: pgmap v1360: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:26:23 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 06 10:26:23 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
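[Editor's note] The handle_command/audit pair above is the mgr polling the monitor for the OSD blocklist; the same dispatch recurs roughly every 15 s in this log. A sketch of issuing that command through the python3-rados binding, assuming a reachable cluster and a usable keyring; mon_command takes a JSON command string plus an input buffer and returns (ret, outbuf, outs):

    import json
    import rados

    # Issue the same mon command the audit line shows the mgr dispatching.
    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    cmd = json.dumps({"prefix": "osd blocklist ls", "format": "json"})
    ret, outbuf, errs = cluster.mon_command(cmd, b'')
    print(ret, json.loads(outbuf or b'[]'))
    cluster.shutdown()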
Dec 06 10:26:24 compute-0 ceph-mgr[74618]: [balancer INFO root] Optimize plan auto_2025-12-06_10:26:24
Dec 06 10:26:24 compute-0 ceph-mgr[74618]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 06 10:26:24 compute-0 ceph-mgr[74618]: [balancer INFO root] do_upmap
Dec 06 10:26:24 compute-0 ceph-mgr[74618]: [balancer INFO root] pools ['.nfs', 'vms', 'volumes', 'default.rgw.log', 'default.rgw.control', 'images', 'cephfs.cephfs.data', '.rgw.root', '.mgr', 'default.rgw.meta', 'cephfs.cephfs.meta', 'backups']
Dec 06 10:26:24 compute-0 ceph-mgr[74618]: [balancer INFO root] prepared 0/10 upmap changes
Dec 06 10:26:24 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 10:26:24 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 10:26:24 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 10:26:24 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 10:26:24 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 10:26:24 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 10:26:24 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1361: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec 06 10:26:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] _maybe_adjust
Dec 06 10:26:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:26:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 06 10:26:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:26:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec 06 10:26:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:26:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 10:26:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:26:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 10:26:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:26:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Dec 06 10:26:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:26:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Dec 06 10:26:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:26:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 10:26:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:26:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec 06 10:26:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:26:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 06 10:26:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:26:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec 06 10:26:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:26:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 10:26:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:26:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
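[Editor's note] Each pg_autoscaler line above is one multiplication: pg target = (fraction of space used) x bias x a cluster-wide PG budget. Every logged value backs out a budget of 300, which matches the default mon_target_pg_per_osd of 100 across 3 OSDs; that split is inferred from the numbers, not stated in this excerpt. A sketch reproducing the logged targets:

    # Reproduce the pg_autoscaler targets from the logged ratios.
    # TARGET_PGS = 300 is inferred; 100 PGs/OSD x 3 OSDs is an assumption
    # consistent with the defaults, not something this log states.
    TARGET_PGS = 300
    pools = {
        '.mgr':               (7.185749983720779e-06, 1.0),
        'vms':                (6.359070782053786e-08, 1.0),
        'images':             (0.000665858301588852, 1.0),
        'cephfs.cephfs.meta': (5.087256625643029e-07, 4.0),  # bias 4
    }
    for name, (ratio, bias) in pools.items():
        print(f"{name}: pg target {ratio * bias * TARGET_PGS:.6g}")

The "quantized to" figure then rounds the raw target into a working PG count; targets this small leave every pool at its current value, hence "(current N)" with no change.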
Dec 06 10:26:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 06 10:26:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 10:26:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 10:26:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 10:26:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 10:26:24 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:26:24 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:26:24 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:26:24.598 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:26:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 06 10:26:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 10:26:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 10:26:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 10:26:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 10:26:24 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 10:26:24 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:26:24 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:26:24 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:26:24.954 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:26:25 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:26:25 compute-0 nova_compute[254819]: 2025-12-06 10:26:25.203 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:26:25 compute-0 ceph-mon[74327]: pgmap v1361: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec 06 10:26:26 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1362: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:26:26 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:26:26 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:26:26 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:26:26.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:26:26 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:26:26 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:26:26 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:26:26.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:26:27 compute-0 nova_compute[254819]: 2025-12-06 10:26:27.597 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:26:27 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:26:27.748Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 06 10:26:27 compute-0 ceph-mon[74327]: pgmap v1362: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:26:28 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1363: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec 06 10:26:28 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:26:28 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:26:28 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:26:28.603 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:26:28 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:26:28 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:26:28 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:26:28.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:26:29 compute-0 ceph-mon[74327]: pgmap v1363: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec 06 10:26:30 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:26:30 compute-0 nova_compute[254819]: 2025-12-06 10:26:30.207 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:26:30 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1364: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:26:30 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:26:30 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:26:30 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:26:30.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:26:30 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:26:30] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Dec 06 10:26:30 compute-0 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:26:30] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Dec 06 10:26:30 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:26:30 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:26:30 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:26:30.961 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:26:31 compute-0 podman[295867]: 2025-12-06 10:26:31.46264141 +0000 UTC m=+0.091311138 container health_status a9cf33c1bf7891b3fdd9db4717ad5f3587fe9e5acf9171920c6d6334b70a502a (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
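[Editor's note] The podman event above is a periodic container healthcheck result (the embedded config runs '/openstack/healthcheck') reporting healthy with a zero failing streak. The same state can be read back on demand; a sketch assuming podman on PATH and the container name from the log, noting that the State.Health layout mirrors Docker's and can vary across podman versions:

    import json
    import subprocess

    # Read back the health state the podman event reports, via inspect.
    out = subprocess.run(['podman', 'inspect', 'multipathd'],
                         capture_output=True, text=True, check=True).stdout
    state = json.loads(out)[0]['State']
    print(state.get('Health', {}).get('Status'))  # e.g. "healthy"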
Dec 06 10:26:31 compute-0 ceph-mon[74327]: pgmap v1364: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:26:32 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1365: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:26:32 compute-0 nova_compute[254819]: 2025-12-06 10:26:32.599 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:26:32 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:26:32 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:26:32 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:26:32.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:26:32 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:26:32 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:26:32 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:26:32.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:26:33 compute-0 ceph-mon[74327]: pgmap v1365: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:26:34 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1366: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec 06 10:26:34 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:26:34 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:26:34 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:26:34.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:26:34 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:26:34 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:26:34 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:26:34.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:26:35 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:26:35 compute-0 nova_compute[254819]: 2025-12-06 10:26:35.209 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:26:35 compute-0 ceph-mon[74327]: pgmap v1366: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec 06 10:26:36 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1367: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:26:36 compute-0 podman[295893]: 2025-12-06 10:26:36.451777591 +0000 UTC m=+0.085090888 container health_status ed08b04f63d3695f0aa2f04fcc706897af4aa24f286fa3cd10a4323ee2c868ab (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, org.label-schema.build-date=20251125, tcib_managed=true)
Dec 06 10:26:36 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:26:36 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:26:36 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:26:36.611 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:26:36 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:26:36 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:26:36 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:26:36.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:26:37 compute-0 nova_compute[254819]: 2025-12-06 10:26:37.602 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:26:37 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:26:37.748Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec 06 10:26:37 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:26:37.749Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
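[Editor's note] Alertmanager on compute-0 keeps failing to deliver alerts to the Ceph dashboard receivers on compute-1 and compute-2: dial timeouts to 192.168.122.101:8443 and context deadlines on both peers, repeating about every 10 s in this window, while compute-0's own receiver does answer (a POST to /api/prometheus_receiver returns 200 further down). That pattern points at reachability of the peer hosts' port 8443 rather than at alertmanager itself. A hypothetical stdlib probe of one failing receiver URL, with an arbitrary 10 s timeout:

    import urllib.request

    # Probe the receiver URL alertmanager cannot reach; the URL comes
    # from the log, the 10 s timeout is an arbitrary choice.
    url = 'http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver'
    req = urllib.request.Request(url, data=b'{}', method='POST')
    try:
        with urllib.request.urlopen(req, timeout=10) as resp:
            print('reachable:', resp.status)
    except OSError as exc:  # URLError subclasses OSError
        print('unreachable:', exc)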
Dec 06 10:26:37 compute-0 ceph-mon[74327]: pgmap v1367: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:26:38 compute-0 sudo[295921]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 10:26:38 compute-0 sudo[295921]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:26:38 compute-0 sudo[295921]: pam_unix(sudo:session): session closed for user root
Dec 06 10:26:38 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1368: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec 06 10:26:38 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:26:38 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:26:38 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:26:38.614 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:26:38 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:26:38 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:26:38 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:26:38.972 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:26:38 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 06 10:26:38 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 10:26:39 compute-0 ceph-mon[74327]: pgmap v1368: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec 06 10:26:39 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 10:26:40 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:26:40 compute-0 nova_compute[254819]: 2025-12-06 10:26:40.235 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:26:40 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1369: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:26:40 compute-0 podman[295949]: 2025-12-06 10:26:40.412331379 +0000 UTC m=+0.044628507 container health_status ec1e66d2175087ad7a28a9a6b5a784e30128df3f5e268959d009e364cd5120f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS)
Dec 06 10:26:40 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:26:40 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:26:40 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:26:40.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:26:40 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:26:40] "GET /metrics HTTP/1.1" 200 48453 "" "Prometheus/2.51.0"
Dec 06 10:26:40 compute-0 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:26:40] "GET /metrics HTTP/1.1" 200 48453 "" "Prometheus/2.51.0"
Dec 06 10:26:40 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:26:40 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:26:40 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:26:40.974 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:26:41 compute-0 ceph-mon[74327]: pgmap v1369: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:26:42 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1370: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:26:42 compute-0 nova_compute[254819]: 2025-12-06 10:26:42.601 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:26:42 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:26:42 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 10:26:42 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:26:42.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 10:26:42 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:26:42 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:26:42 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:26:42.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:26:43 compute-0 ceph-mon[74327]: pgmap v1370: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:26:44 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1371: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec 06 10:26:44 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:26:44 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:26:44 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:26:44.621 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:26:44 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:26:44 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:26:44 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:26:44.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:26:45 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:26:45 compute-0 nova_compute[254819]: 2025-12-06 10:26:45.236 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:26:46 compute-0 ceph-mon[74327]: pgmap v1371: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec 06 10:26:46 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1372: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:26:46 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:26:46 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:26:46 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:26:46.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:26:46 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:26:46 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:26:46 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:26:46.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:26:47 compute-0 ceph-mon[74327]: from='client.? 192.168.122.10:0/430995373' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 10:26:47 compute-0 ceph-mon[74327]: from='client.? 192.168.122.10:0/430995373' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 10:26:47 compute-0 ceph-mon[74327]: pgmap v1372: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:26:47 compute-0 nova_compute[254819]: 2025-12-06 10:26:47.603 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:26:47 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:26:47.750Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 06 10:26:48 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1373: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec 06 10:26:48 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:26:48 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:26:48 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:26:48.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:26:48 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:26:48 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:26:48 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:26:48.984 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:26:49 compute-0 ceph-mon[74327]: pgmap v1373: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec 06 10:26:50 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:26:50 compute-0 nova_compute[254819]: 2025-12-06 10:26:50.239 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:26:50 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1374: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:26:50 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:26:50 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:26:50 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:26:50.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:26:50 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:26:50] "GET /metrics HTTP/1.1" 200 48453 "" "Prometheus/2.51.0"
Dec 06 10:26:50 compute-0 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:26:50] "GET /metrics HTTP/1.1" 200 48453 "" "Prometheus/2.51.0"
Dec 06 10:26:50 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:26:50 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:26:50 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:26:50.986 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:26:51 compute-0 ceph-mon[74327]: pgmap v1374: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:26:52 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1375: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:26:52 compute-0 nova_compute[254819]: 2025-12-06 10:26:52.604 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:26:52 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:26:52 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:26:52 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:26:52.628 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:26:52 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:26:52 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:26:52 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:26:52.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:26:53 compute-0 ceph-mon[74327]: pgmap v1375: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:26:53 compute-0 nova_compute[254819]: 2025-12-06 10:26:53.788 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:26:53 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 06 10:26:53 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 10:26:54 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 10:26:54 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 10:26:54 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 10:26:54 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 10:26:54 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 10:26:54 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 10:26:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:26:54.253 162267 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:26:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:26:54.254 162267 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:26:54 compute-0 ovn_metadata_agent[162262]: 2025-12-06 10:26:54.254 162267 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:26:54 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1376: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec 06 10:26:54 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 10:26:54 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:26:54 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.002000054s ======
Dec 06 10:26:54 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:26:54.630 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000054s
Dec 06 10:26:54 compute-0 nova_compute[254819]: 2025-12-06 10:26:54.749 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:26:54 compute-0 nova_compute[254819]: 2025-12-06 10:26:54.772 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:26:54 compute-0 nova_compute[254819]: 2025-12-06 10:26:54.772 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:26:54 compute-0 nova_compute[254819]: 2025-12-06 10:26:54.773 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:26:54 compute-0 nova_compute[254819]: 2025-12-06 10:26:54.773 254824 DEBUG nova.compute.resource_tracker [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 10:26:54 compute-0 nova_compute[254819]: 2025-12-06 10:26:54.773 254824 DEBUG oslo_concurrency.processutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 10:26:54 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:26:54 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 10:26:54 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:26:54.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 10:26:55 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:26:55 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 06 10:26:55 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3570750853' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 10:26:55 compute-0 nova_compute[254819]: 2025-12-06 10:26:55.195 254824 DEBUG oslo_concurrency.processutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.421s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 10:26:55 compute-0 nova_compute[254819]: 2025-12-06 10:26:55.264 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:26:55 compute-0 nova_compute[254819]: 2025-12-06 10:26:55.421 254824 WARNING nova.virt.libvirt.driver [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 10:26:55 compute-0 nova_compute[254819]: 2025-12-06 10:26:55.422 254824 DEBUG nova.compute.resource_tracker [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4482MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 06 10:26:55 compute-0 nova_compute[254819]: 2025-12-06 10:26:55.423 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:26:55 compute-0 nova_compute[254819]: 2025-12-06 10:26:55.423 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:26:55 compute-0 nova_compute[254819]: 2025-12-06 10:26:55.487 254824 DEBUG nova.compute.resource_tracker [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 10:26:55 compute-0 nova_compute[254819]: 2025-12-06 10:26:55.487 254824 DEBUG nova.compute.resource_tracker [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 06 10:26:55 compute-0 nova_compute[254819]: 2025-12-06 10:26:55.508 254824 DEBUG oslo_concurrency.processutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 10:26:55 compute-0 ceph-mon[74327]: pgmap v1376: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec 06 10:26:55 compute-0 ceph-mon[74327]: from='client.? 192.168.122.100:0/3570750853' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 10:26:55 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 06 10:26:55 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3927824637' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 10:26:55 compute-0 nova_compute[254819]: 2025-12-06 10:26:55.978 254824 DEBUG oslo_concurrency.processutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 10:26:55 compute-0 nova_compute[254819]: 2025-12-06 10:26:55.983 254824 DEBUG nova.compute.provider_tree [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Inventory has not changed in ProviderTree for provider: 06a9c7d1-c74c-47ea-9e97-16acfab6aa88 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 10:26:55 compute-0 nova_compute[254819]: 2025-12-06 10:26:55.998 254824 DEBUG nova.scheduler.client.report [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Inventory has not changed for provider 06a9c7d1-c74c-47ea-9e97-16acfab6aa88 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 10:26:56 compute-0 nova_compute[254819]: 2025-12-06 10:26:56.000 254824 DEBUG nova.compute.resource_tracker [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 10:26:56 compute-0 nova_compute[254819]: 2025-12-06 10:26:56.000 254824 DEBUG oslo_concurrency.lockutils [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.577s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
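[Editor's note] The block above is one pass of nova's update_available_resource periodic task: it shells out to `ceph df` twice (0.421 s and 0.470 s, each visible as a client.openstack "df" dispatch in the mon audit log), audits 8 vCPUs and 7680 MB against placement, and releases the compute_resources lock after 0.577 s. The reported free_disk of ~59.99 GB lines up with the 60 GiB the pgmap lines show available. A sketch of the same ceph df call and the totals such a view derives from; the JSON key names follow current `ceph df --format=json` output and are an assumption here:

    import json
    import subprocess

    # The exact command the resource tracker logs, parsed for totals.
    cmd = ['ceph', 'df', '--format=json', '--id', 'openstack',
           '--conf', '/etc/ceph/ceph.conf']
    stats = json.loads(subprocess.check_output(cmd))['stats']
    total_gib = stats['total_bytes'] / 2**30
    avail_gib = stats['total_avail_bytes'] / 2**30
    print(f"free: {avail_gib:.1f} GiB of {total_gib:.1f} GiB")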
Dec 06 10:26:56 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1377: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:26:56 compute-0 ceph-mon[74327]: from='client.? 192.168.122.100:0/3927824637' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 10:26:56 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:26:56 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:26:56 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:26:56.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:26:56 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:26:56 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:26:56 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:26:56.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:26:57 compute-0 ceph-mon[74327]: pgmap v1377: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:26:57 compute-0 nova_compute[254819]: 2025-12-06 10:26:57.608 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:26:57 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:26:57.752Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec 06 10:26:58 compute-0 nova_compute[254819]: 2025-12-06 10:26:58.000 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:26:58 compute-0 sudo[296030]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 10:26:58 compute-0 sudo[296030]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:26:58 compute-0 sudo[296030]: pam_unix(sudo:session): session closed for user root
Dec 06 10:26:58 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1378: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec 06 10:26:58 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:26:58 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:26:58 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:26:58.637 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:26:58 compute-0 nova_compute[254819]: 2025-12-06 10:26:58.748 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:26:58 compute-0 ceph-mgr[74618]: [dashboard INFO request] [192.168.122.100:50986] [POST] [200] [0.002s] [4.0B] [f7bdf29d-61cb-4152-afc5-f0cad61d43d8] /api/prometheus_receiver
Dec 06 10:26:58 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:26:59 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:26:59 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:26:58.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:26:59 compute-0 ceph-mon[74327]: pgmap v1378: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec 06 10:26:59 compute-0 nova_compute[254819]: 2025-12-06 10:26:59.742 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:27:00 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:27:00 compute-0 nova_compute[254819]: 2025-12-06 10:27:00.268 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:27:00 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1379: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:27:00 compute-0 ceph-mon[74327]: from='client.? 192.168.122.101:0/2022403802' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 10:27:00 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:27:00 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:27:00 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:27:00.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:27:00 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:27:00] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Dec 06 10:27:00 compute-0 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:27:00] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Dec 06 10:27:01 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:27:01 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:27:01 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:27:01.000 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:27:01 compute-0 ceph-mon[74327]: pgmap v1379: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:27:01 compute-0 ceph-mon[74327]: from='client.? 192.168.122.101:0/3629784107' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 10:27:01 compute-0 nova_compute[254819]: 2025-12-06 10:27:01.748 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:27:01 compute-0 nova_compute[254819]: 2025-12-06 10:27:01.748 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:27:01 compute-0 nova_compute[254819]: 2025-12-06 10:27:01.748 254824 DEBUG nova.compute.manager [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 06 10:27:01 compute-0 nova_compute[254819]: 2025-12-06 10:27:01.748 254824 DEBUG nova.compute.manager [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 06 10:27:01 compute-0 nova_compute[254819]: 2025-12-06 10:27:01.768 254824 DEBUG nova.compute.manager [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 06 10:27:01 compute-0 nova_compute[254819]: 2025-12-06 10:27:01.768 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:27:02 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1380: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:27:02 compute-0 podman[296060]: 2025-12-06 10:27:02.473173707 +0000 UTC m=+0.095382929 container health_status a9cf33c1bf7891b3fdd9db4717ad5f3587fe9e5acf9171920c6d6334b70a502a (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, container_name=multipathd, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
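(Annotation.) The health_status=healthy event above comes from podman's periodic healthcheck; per the config_data in the same line, the check is the '/openstack/healthcheck' script bind-mounted from /var/lib/openstack/healthchecks/multipathd. A sketch, assuming podman's standard healthcheck subcommand, that re-runs the same check on demand; 'multipathd' is the container_name from the log:

```python
import subprocess

# 'podman healthcheck run <container>' executes the configured check once;
# exit status 0 corresponds to the health_status=healthy seen in the journal.
result = subprocess.run(
    ["podman", "healthcheck", "run", "multipathd"],
    capture_output=True, text=True,
)
print("healthy" if result.returncode == 0
      else f"unhealthy: {result.stderr.strip()}")
```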
Dec 06 10:27:02 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:27:02 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:27:02 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:27:02.642 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:27:02 compute-0 nova_compute[254819]: 2025-12-06 10:27:02.646 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:27:02 compute-0 nova_compute[254819]: 2025-12-06 10:27:02.749 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:27:02 compute-0 nova_compute[254819]: 2025-12-06 10:27:02.749 254824 DEBUG nova.compute.manager [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 06 10:27:03 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:27:03 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:27:03 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:27:03.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:27:03 compute-0 ceph-mon[74327]: pgmap v1380: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:27:03 compute-0 nova_compute[254819]: 2025-12-06 10:27:03.750 254824 DEBUG oslo_service.periodic_task [None req-1ce4d3ef-eae6-46f2-bb1d-b5579ab5d78c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:27:04 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1381: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec 06 10:27:04 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:27:04 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:27:04 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:27:04.645 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:27:05 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:27:05 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:27:05 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:27:05.006 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:27:05 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:27:05 compute-0 nova_compute[254819]: 2025-12-06 10:27:05.271 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:27:05 compute-0 ceph-mon[74327]: pgmap v1381: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec 06 10:27:06 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1382: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:27:06 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:27:06 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:27:06 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:27:06.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:27:07 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:27:07 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:27:07 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:27:07.010 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:27:07 compute-0 podman[296085]: 2025-12-06 10:27:07.490398624 +0000 UTC m=+0.119871116 container health_status ed08b04f63d3695f0aa2f04fcc706897af4aa24f286fa3cd10a4323ee2c868ab (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 10:27:07 compute-0 ceph-mon[74327]: pgmap v1382: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:27:07 compute-0 nova_compute[254819]: 2025-12-06 10:27:07.647 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:27:07 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:27:07.753Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec 06 10:27:07 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:27:07.753Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec 06 10:27:08 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1383: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec 06 10:27:08 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:27:08 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:27:08 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:27:08.651 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:27:08 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:27:08.845Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 06 10:27:08 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 06 10:27:08 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 10:27:09 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:27:09 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:27:09 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:27:09.013 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:27:09 compute-0 ceph-mon[74327]: pgmap v1383: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec 06 10:27:09 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 10:27:09 compute-0 ceph-mon[74327]: from='client.? 192.168.122.102:0/1908876995' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 10:27:09 compute-0 sudo[296114]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 10:27:09 compute-0 sudo[296114]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:27:09 compute-0 sudo[296114]: pam_unix(sudo:session): session closed for user root
Dec 06 10:27:09 compute-0 sudo[296139]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ls
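(Annotation.) This sudo invocation is the cephadm manager polling the host: 'cephadm ... ls' emits a JSON array describing every Ceph daemon deployed locally. A sketch of consuming that census, assuming cephadm is on PATH and that entries carry "name" and "state" keys as cephadm ls emits them; filtering for non-running daemons is relevant here because the cluster raises CEPHADM_FAILED_DAEMON ("2 failed cephadm daemon(s)") a few seconds later:

```python
import json, subprocess

out = subprocess.run(["cephadm", "ls"],
                     capture_output=True, text=True, check=True)
for daemon in json.loads(out.stdout):
    state = daemon.get("state", "unknown")
    if state != "running":
        print(daemon.get("name"), "->", state)
```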
Dec 06 10:27:09 compute-0 sudo[296139]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:27:10 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:27:10 compute-0 nova_compute[254819]: 2025-12-06 10:27:10.274 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:27:10 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1384: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:27:10 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:27:10 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:27:10 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:27:10.654 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:27:10 compute-0 podman[296236]: 2025-12-06 10:27:10.818725763 +0000 UTC m=+0.370272625 container exec 484d6ed1039c50317cf4b6067525b7ed0f8de7c568c9445500e62194ab25d04d (image=quay.io/ceph/ceph:v19, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mon-compute-0, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid)
Dec 06 10:27:10 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:27:10] "GET /metrics HTTP/1.1" 200 48456 "" "Prometheus/2.51.0"
Dec 06 10:27:10 compute-0 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:27:10] "GET /metrics HTTP/1.1" 200 48456 "" "Prometheus/2.51.0"
Dec 06 10:27:10 compute-0 ceph-mon[74327]: from='client.? 192.168.122.102:0/3374521652' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 10:27:10 compute-0 podman[296236]: 2025-12-06 10:27:10.942910235 +0000 UTC m=+0.494457077 container exec_died 484d6ed1039c50317cf4b6067525b7ed0f8de7c568c9445500e62194ab25d04d (image=quay.io/ceph/ceph:v19, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mon-compute-0, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325)
Dec 06 10:27:11 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:27:11 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:27:11 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:27:11.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:27:11 compute-0 podman[296271]: 2025-12-06 10:27:11.21736282 +0000 UTC m=+0.061301251 container health_status ec1e66d2175087ad7a28a9a6b5a784e30128df3f5e268959d009e364cd5120f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Dec 06 10:27:11 compute-0 podman[296373]: 2025-12-06 10:27:11.611511075 +0000 UTC m=+0.070076439 container exec 43e1f8986e07f4e6b99d6750812eff4d21013fd9f773d9f6d6eef82549df3333 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 06 10:27:11 compute-0 podman[296373]: 2025-12-06 10:27:11.649047957 +0000 UTC m=+0.107613281 container exec_died 43e1f8986e07f4e6b99d6750812eff4d21013fd9f773d9f6d6eef82549df3333 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 06 10:27:11 compute-0 ceph-mon[74327]: pgmap v1384: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:27:12 compute-0 podman[296508]: 2025-12-06 10:27:12.280848305 +0000 UTC m=+0.071318424 container exec 0300cb0bc272de309f3d242ba0627369d0948f1b63b3476dccdba4375a8e539d (image=quay.io/ceph/haproxy:2.3, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-haproxy-nfs-cephfs-compute-0-fzuvue)
Dec 06 10:27:12 compute-0 podman[296508]: 2025-12-06 10:27:12.324872164 +0000 UTC m=+0.115342273 container exec_died 0300cb0bc272de309f3d242ba0627369d0948f1b63b3476dccdba4375a8e539d (image=quay.io/ceph/haproxy:2.3, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-haproxy-nfs-cephfs-compute-0-fzuvue)
Dec 06 10:27:12 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1385: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:27:12 compute-0 podman[296575]: 2025-12-06 10:27:12.609612829 +0000 UTC m=+0.078964772 container exec d7d5239f75d84aa9a07cad1cdfa31e3b4f3983263aaaa27687e6c7454ab8fe3f (image=quay.io/ceph/keepalived:2.2.4, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-keepalived-nfs-cephfs-compute-0-ylrrzf, com.redhat.component=keepalived-container, io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, architecture=x86_64, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.k8s.display-name=Keepalived on RHEL 9, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, version=2.2.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=keepalived for Ceph, name=keepalived, release=1793, build-date=2023-02-22T09:23:20, distribution-scope=public, io.openshift.tags=Ceph keepalived, summary=Provides keepalived on RHEL 9 for Ceph., io.buildah.version=1.28.2, vcs-type=git)
Dec 06 10:27:12 compute-0 podman[296575]: 2025-12-06 10:27:12.646948496 +0000 UTC m=+0.116300419 container exec_died d7d5239f75d84aa9a07cad1cdfa31e3b4f3983263aaaa27687e6c7454ab8fe3f (image=quay.io/ceph/keepalived:2.2.4, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-keepalived-nfs-cephfs-compute-0-ylrrzf, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=keepalived for Ceph, name=keepalived, com.redhat.component=keepalived-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.k8s.display-name=Keepalived on RHEL 9, vendor=Red Hat, Inc., io.openshift.tags=Ceph keepalived, summary=Provides keepalived on RHEL 9 for Ceph., version=2.2.4, io.buildah.version=1.28.2, build-date=2023-02-22T09:23:20, distribution-scope=public, release=1793, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vcs-type=git)
Dec 06 10:27:12 compute-0 nova_compute[254819]: 2025-12-06 10:27:12.651 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:27:12 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:27:12 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:27:12 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:27:12.655 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:27:12 compute-0 podman[296639]: 2025-12-06 10:27:12.93055874 +0000 UTC m=+0.068850366 container exec b0127b2874845862d1ff8231029cda7f8d9811cefe028a677c06060e923a3641 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 06 10:27:12 compute-0 podman[296639]: 2025-12-06 10:27:12.963913999 +0000 UTC m=+0.102205585 container exec_died b0127b2874845862d1ff8231029cda7f8d9811cefe028a677c06060e923a3641 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 06 10:27:13 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:27:13 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 10:27:13 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:27:13.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 10:27:13 compute-0 podman[296715]: 2025-12-06 10:27:13.213152337 +0000 UTC m=+0.071453048 container exec fc223e2a5fd06c66f839f6f48305e72a1403c44b345b53752763fbbf064c41b3 (image=quay.io/ceph/grafana:10.4.0, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Dec 06 10:27:13 compute-0 podman[296715]: 2025-12-06 10:27:13.412732182 +0000 UTC m=+0.271032823 container exec_died fc223e2a5fd06c66f839f6f48305e72a1403c44b345b53752763fbbf064c41b3 (image=quay.io/ceph/grafana:10.4.0, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Dec 06 10:27:13 compute-0 podman[296827]: 2025-12-06 10:27:13.826011598 +0000 UTC m=+0.055178153 container exec cfe4d69091434e5154fa760292bba767b8875965fa71cf21268b9ec1632f0d9e (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 06 10:27:13 compute-0 podman[296827]: 2025-12-06 10:27:13.870614712 +0000 UTC m=+0.099781267 container exec_died cfe4d69091434e5154fa760292bba767b8875965fa71cf21268b9ec1632f0d9e (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 06 10:27:13 compute-0 sudo[296139]: pam_unix(sudo:session): session closed for user root
Dec 06 10:27:13 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 06 10:27:13 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 10:27:13 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 06 10:27:13 compute-0 ceph-mon[74327]: pgmap v1385: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:27:13 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 10:27:14 compute-0 sudo[296870]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 10:27:14 compute-0 sudo[296870]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:27:14 compute-0 sudo[296870]: pam_unix(sudo:session): session closed for user root
Dec 06 10:27:14 compute-0 sudo[296895]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Dec 06 10:27:14 compute-0 sudo[296895]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:27:14 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1386: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec 06 10:27:14 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:27:14 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:27:14 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:27:14.657 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:27:14 compute-0 sudo[296895]: pam_unix(sudo:session): session closed for user root
Dec 06 10:27:14 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 06 10:27:14 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 10:27:14 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 06 10:27:14 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 10:27:14 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1387: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 590 B/s rd, 0 op/s
Dec 06 10:27:14 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 06 10:27:14 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 10:27:14 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Dec 06 10:27:14 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 10:27:14 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 06 10:27:14 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 10:27:14 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 06 10:27:14 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 10:27:14 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 06 10:27:14 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 10:27:14 compute-0 sudo[296952]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 10:27:14 compute-0 sudo[296952]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:27:14 compute-0 sudo[296952]: pam_unix(sudo:session): session closed for user root
Dec 06 10:27:14 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 10:27:14 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 10:27:14 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 10:27:14 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 10:27:14 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 10:27:14 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 10:27:14 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 10:27:14 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 10:27:14 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 10:27:14 compute-0 ceph-mon[74327]: log_channel(cluster) log [WRN] : Health check update: 2 failed cephadm daemon(s) (CEPHADM_FAILED_DAEMON)
Dec 06 10:27:14 compute-0 sudo[296977]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 5ecd3f74-dade-5fc4-92ce-8950ae424258 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Dec 06 10:27:14 compute-0 sudo[296977]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:27:15 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:27:15 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:27:15 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:27:15.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:27:15 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:27:15 compute-0 nova_compute[254819]: 2025-12-06 10:27:15.276 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:27:15 compute-0 podman[297041]: 2025-12-06 10:27:15.494779518 +0000 UTC m=+0.053542799 container create 8149d085496cb59286c7a611a2a827b058914e0c98b5905f9fa8e617767f3633 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_rhodes, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 06 10:27:15 compute-0 systemd[1]: Started libpod-conmon-8149d085496cb59286c7a611a2a827b058914e0c98b5905f9fa8e617767f3633.scope.
Dec 06 10:27:15 compute-0 podman[297041]: 2025-12-06 10:27:15.468005499 +0000 UTC m=+0.026768830 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 06 10:27:15 compute-0 systemd[1]: Started libcrun container.
Dec 06 10:27:15 compute-0 podman[297041]: 2025-12-06 10:27:15.593843976 +0000 UTC m=+0.152607287 container init 8149d085496cb59286c7a611a2a827b058914e0c98b5905f9fa8e617767f3633 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_rhodes, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec 06 10:27:15 compute-0 podman[297041]: 2025-12-06 10:27:15.60573158 +0000 UTC m=+0.164494861 container start 8149d085496cb59286c7a611a2a827b058914e0c98b5905f9fa8e617767f3633 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_rhodes, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Dec 06 10:27:15 compute-0 podman[297041]: 2025-12-06 10:27:15.609100641 +0000 UTC m=+0.167863952 container attach 8149d085496cb59286c7a611a2a827b058914e0c98b5905f9fa8e617767f3633 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_rhodes, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 10:27:15 compute-0 unruffled_rhodes[297057]: 167 167
Dec 06 10:27:15 compute-0 systemd[1]: libpod-8149d085496cb59286c7a611a2a827b058914e0c98b5905f9fa8e617767f3633.scope: Deactivated successfully.
Dec 06 10:27:15 compute-0 podman[297041]: 2025-12-06 10:27:15.616905624 +0000 UTC m=+0.175668915 container died 8149d085496cb59286c7a611a2a827b058914e0c98b5905f9fa8e617767f3633 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_rhodes, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 06 10:27:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-a10555b416866b8dd8b3d599abb24c5163f6d3d5949a6cd89daca29d0a3cd467-merged.mount: Deactivated successfully.
Dec 06 10:27:15 compute-0 podman[297041]: 2025-12-06 10:27:15.663370509 +0000 UTC m=+0.222133820 container remove 8149d085496cb59286c7a611a2a827b058914e0c98b5905f9fa8e617767f3633 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_rhodes, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=squid)
Dec 06 10:27:15 compute-0 systemd[1]: libpod-conmon-8149d085496cb59286c7a611a2a827b058914e0c98b5905f9fa8e617767f3633.scope: Deactivated successfully.
Dec 06 10:27:15 compute-0 podman[297081]: 2025-12-06 10:27:15.869972236 +0000 UTC m=+0.059089160 container create 213556536d1b3495c82423edc642c1d40e43d2ac698e8a3499f6f1bb64e6a76e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_hellman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 10:27:15 compute-0 systemd[1]: Started libpod-conmon-213556536d1b3495c82423edc642c1d40e43d2ac698e8a3499f6f1bb64e6a76e.scope.
Dec 06 10:27:15 compute-0 podman[297081]: 2025-12-06 10:27:15.843203058 +0000 UTC m=+0.032319972 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 06 10:27:15 compute-0 ceph-mon[74327]: pgmap v1386: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec 06 10:27:15 compute-0 ceph-mon[74327]: pgmap v1387: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 590 B/s rd, 0 op/s
Dec 06 10:27:15 compute-0 ceph-mon[74327]: Health check update: 2 failed cephadm daemon(s) (CEPHADM_FAILED_DAEMON)
Dec 06 10:27:15 compute-0 systemd[1]: Started libcrun container.
Dec 06 10:27:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6da287891f7c633b1c462bfc8c7e3a8347e298ee93da98fd106adffb1dcbf357/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 10:27:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6da287891f7c633b1c462bfc8c7e3a8347e298ee93da98fd106adffb1dcbf357/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 10:27:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6da287891f7c633b1c462bfc8c7e3a8347e298ee93da98fd106adffb1dcbf357/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 10:27:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6da287891f7c633b1c462bfc8c7e3a8347e298ee93da98fd106adffb1dcbf357/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 10:27:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6da287891f7c633b1c462bfc8c7e3a8347e298ee93da98fd106adffb1dcbf357/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 06 10:27:15 compute-0 podman[297081]: 2025-12-06 10:27:15.999080483 +0000 UTC m=+0.188197437 container init 213556536d1b3495c82423edc642c1d40e43d2ac698e8a3499f6f1bb64e6a76e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_hellman, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Dec 06 10:27:16 compute-0 sshd-session[297096]: Accepted publickey for zuul from 192.168.122.10 port 56000 ssh2: ECDSA SHA256:r1j7aLsKAM+XxDNbzEU5vWGpGNCOaIBwc7FZdATPttA
Dec 06 10:27:16 compute-0 podman[297081]: 2025-12-06 10:27:16.0121824 +0000 UTC m=+0.201299294 container start 213556536d1b3495c82423edc642c1d40e43d2ac698e8a3499f6f1bb64e6a76e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_hellman, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec 06 10:27:16 compute-0 podman[297081]: 2025-12-06 10:27:16.016045305 +0000 UTC m=+0.205162239 container attach 213556536d1b3495c82423edc642c1d40e43d2ac698e8a3499f6f1bb64e6a76e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_hellman, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Dec 06 10:27:16 compute-0 systemd-logind[795]: New session 59 of user zuul.
Dec 06 10:27:16 compute-0 systemd[1]: Started Session 59 of User zuul.
Dec 06 10:27:16 compute-0 sshd-session[297096]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 06 10:27:16 compute-0 sudo[297107]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/bash -c 'rm -rf /var/tmp/sos-osp && mkdir /var/tmp/sos-osp && sos report --batch --all-logs --tmp-dir=/var/tmp/sos-osp  -p container,openstack_edpm,system,storage,virt'
Dec 06 10:27:16 compute-0 sudo[297107]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 10:27:16 compute-0 awesome_hellman[297100]: --> passed data devices: 0 physical, 1 LVM
Dec 06 10:27:16 compute-0 awesome_hellman[297100]: --> All data devices are unavailable
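(Annotation.) The two container lines above are the payload of the 'ceph-volume ... lvm batch --no-auto /dev/ceph_vg0/ceph_lv0' run launched via sudo at 10:27:14: ceph-volume saw exactly one LVM data device and rejected it as unavailable, which usually means the LV already carries an OSD rather than that it is missing. The deployment itself follows up with 'lvm list' at 10:27:17; a sketch of that same check, assuming the usual 'ceph-volume lvm list --format json' output shape (a dict keyed by OSD id, each value a list of device dicts with "lv_path" and "type"):

```python
import json, subprocess

out = subprocess.run(
    ["ceph-volume", "lvm", "list", "--format", "json"],
    capture_output=True, text=True, check=True,
)
for osd_id, devices in json.loads(out.stdout).items():
    for dev in devices:
        # An entry for /dev/ceph_vg0/ceph_lv0 here would explain
        # '--> All data devices are unavailable': the LV is taken, not absent.
        print(f"osd.{osd_id}: {dev.get('lv_path')} (type={dev.get('type')})")
```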
Dec 06 10:27:16 compute-0 podman[297081]: 2025-12-06 10:27:16.411731001 +0000 UTC m=+0.600847925 container died 213556536d1b3495c82423edc642c1d40e43d2ac698e8a3499f6f1bb64e6a76e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_hellman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 10:27:16 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:27:16 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:27:16 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:27:16.659 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:27:16 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1388: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 590 B/s rd, 0 op/s
Dec 06 10:27:17 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:27:17 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:27:17 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:27:17.022 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:27:17 compute-0 systemd[1]: libpod-213556536d1b3495c82423edc642c1d40e43d2ac698e8a3499f6f1bb64e6a76e.scope: Deactivated successfully.
Dec 06 10:27:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-6da287891f7c633b1c462bfc8c7e3a8347e298ee93da98fd106adffb1dcbf357-merged.mount: Deactivated successfully.
Dec 06 10:27:17 compute-0 podman[297081]: 2025-12-06 10:27:17.237417469 +0000 UTC m=+1.426534353 container remove 213556536d1b3495c82423edc642c1d40e43d2ac698e8a3499f6f1bb64e6a76e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_hellman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec 06 10:27:17 compute-0 systemd[1]: libpod-conmon-213556536d1b3495c82423edc642c1d40e43d2ac698e8a3499f6f1bb64e6a76e.scope: Deactivated successfully.
Dec 06 10:27:17 compute-0 sudo[296977]: pam_unix(sudo:session): session closed for user root
Dec 06 10:27:17 compute-0 sudo[297178]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 10:27:17 compute-0 sudo[297178]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:27:17 compute-0 sudo[297178]: pam_unix(sudo:session): session closed for user root
Dec 06 10:27:17 compute-0 sudo[297221]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 5ecd3f74-dade-5fc4-92ce-8950ae424258 -- lvm list --format json
Dec 06 10:27:17 compute-0 sudo[297221]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:27:17 compute-0 nova_compute[254819]: 2025-12-06 10:27:17.688 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:27:17 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:27:17.754Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec 06 10:27:17 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:27:17.756Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 06 10:27:17 compute-0 podman[297318]: 2025-12-06 10:27:17.935508462 +0000 UTC m=+0.050117665 container create e18a40b6e9a29ac816f89ab0addcb5fb6c0d86a563a239f68d02d9b73a78e87e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_wu, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 10:27:17 compute-0 ceph-mon[74327]: pgmap v1388: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 590 B/s rd, 0 op/s
Dec 06 10:27:17 compute-0 systemd[1]: Started libpod-conmon-e18a40b6e9a29ac816f89ab0addcb5fb6c0d86a563a239f68d02d9b73a78e87e.scope.
Dec 06 10:27:18 compute-0 systemd[1]: Started libcrun container.
Dec 06 10:27:18 compute-0 podman[297318]: 2025-12-06 10:27:17.917151123 +0000 UTC m=+0.031760356 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 06 10:27:18 compute-0 podman[297318]: 2025-12-06 10:27:18.022557743 +0000 UTC m=+0.137166936 container init e18a40b6e9a29ac816f89ab0addcb5fb6c0d86a563a239f68d02d9b73a78e87e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_wu, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Dec 06 10:27:18 compute-0 podman[297318]: 2025-12-06 10:27:18.030853499 +0000 UTC m=+0.145462692 container start e18a40b6e9a29ac816f89ab0addcb5fb6c0d86a563a239f68d02d9b73a78e87e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_wu, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec 06 10:27:18 compute-0 podman[297318]: 2025-12-06 10:27:18.034127219 +0000 UTC m=+0.148736452 container attach e18a40b6e9a29ac816f89ab0addcb5fb6c0d86a563a239f68d02d9b73a78e87e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_wu, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, OSD_FLAVOR=default)
Dec 06 10:27:18 compute-0 heuristic_wu[297343]: 167 167
Dec 06 10:27:18 compute-0 systemd[1]: libpod-e18a40b6e9a29ac816f89ab0addcb5fb6c0d86a563a239f68d02d9b73a78e87e.scope: Deactivated successfully.
Dec 06 10:27:18 compute-0 conmon[297343]: conmon e18a40b6e9a29ac816f8 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e18a40b6e9a29ac816f89ab0addcb5fb6c0d86a563a239f68d02d9b73a78e87e.scope/container/memory.events
Dec 06 10:27:18 compute-0 podman[297318]: 2025-12-06 10:27:18.037883001 +0000 UTC m=+0.152492194 container died e18a40b6e9a29ac816f89ab0addcb5fb6c0d86a563a239f68d02d9b73a78e87e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_wu, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 10:27:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-86e6cb91fc3e87dbc66fca98a719e3adf8a2a57eceb010f84846b5609890824f-merged.mount: Deactivated successfully.
Dec 06 10:27:18 compute-0 podman[297318]: 2025-12-06 10:27:18.074329263 +0000 UTC m=+0.188938456 container remove e18a40b6e9a29ac816f89ab0addcb5fb6c0d86a563a239f68d02d9b73a78e87e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_wu, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 10:27:18 compute-0 systemd[1]: libpod-conmon-e18a40b6e9a29ac816f89ab0addcb5fb6c0d86a563a239f68d02d9b73a78e87e.scope: Deactivated successfully.
Dec 06 10:27:18 compute-0 sudo[297371]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 10:27:18 compute-0 sudo[297371]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:27:18 compute-0 sudo[297371]: pam_unix(sudo:session): session closed for user root
Dec 06 10:27:18 compute-0 podman[297381]: 2025-12-06 10:27:18.248346113 +0000 UTC m=+0.048559424 container create fe599612256301cc7e525e09607da9a179e33c618bf66ad2e6497bc0312377ac (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_shirley, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 10:27:18 compute-0 systemd[1]: Started libpod-conmon-fe599612256301cc7e525e09607da9a179e33c618bf66ad2e6497bc0312377ac.scope.
Dec 06 10:27:18 compute-0 podman[297381]: 2025-12-06 10:27:18.226057266 +0000 UTC m=+0.026270677 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 06 10:27:18 compute-0 systemd[1]: Started libcrun container.
Dec 06 10:27:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f042f814e3b5905d6f94ad0a89229d5af24b40bb82a5d42a7649aeb1f5856888/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 10:27:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f042f814e3b5905d6f94ad0a89229d5af24b40bb82a5d42a7649aeb1f5856888/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 10:27:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f042f814e3b5905d6f94ad0a89229d5af24b40bb82a5d42a7649aeb1f5856888/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 10:27:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f042f814e3b5905d6f94ad0a89229d5af24b40bb82a5d42a7649aeb1f5856888/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 10:27:18 compute-0 podman[297381]: 2025-12-06 10:27:18.345092498 +0000 UTC m=+0.145305839 container init fe599612256301cc7e525e09607da9a179e33c618bf66ad2e6497bc0312377ac (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_shirley, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 10:27:18 compute-0 podman[297381]: 2025-12-06 10:27:18.352590132 +0000 UTC m=+0.152803443 container start fe599612256301cc7e525e09607da9a179e33c618bf66ad2e6497bc0312377ac (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_shirley, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1)
Dec 06 10:27:18 compute-0 podman[297381]: 2025-12-06 10:27:18.357548097 +0000 UTC m=+0.157761408 container attach fe599612256301cc7e525e09607da9a179e33c618bf66ad2e6497bc0312377ac (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_shirley, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 10:27:18 compute-0 agitated_shirley[297438]: {
Dec 06 10:27:18 compute-0 agitated_shirley[297438]:     "1": [
Dec 06 10:27:18 compute-0 agitated_shirley[297438]:         {
Dec 06 10:27:18 compute-0 agitated_shirley[297438]:             "devices": [
Dec 06 10:27:18 compute-0 agitated_shirley[297438]:                 "/dev/loop3"
Dec 06 10:27:18 compute-0 agitated_shirley[297438]:             ],
Dec 06 10:27:18 compute-0 agitated_shirley[297438]:             "lv_name": "ceph_lv0",
Dec 06 10:27:18 compute-0 agitated_shirley[297438]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 10:27:18 compute-0 agitated_shirley[297438]:             "lv_size": "21470642176",
Dec 06 10:27:18 compute-0 agitated_shirley[297438]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=a55pcS-v3Vt-CJ4W-fqCV-Baze-9VlY-jG9prS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=5ecd3f74-dade-5fc4-92ce-8950ae424258,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=7899c4d8-edb4-4836-b838-c4aa702ad7af,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 06 10:27:18 compute-0 agitated_shirley[297438]:             "lv_uuid": "a55pcS-v3Vt-CJ4W-fqCV-Baze-9VlY-jG9prS",
Dec 06 10:27:18 compute-0 agitated_shirley[297438]:             "name": "ceph_lv0",
Dec 06 10:27:18 compute-0 agitated_shirley[297438]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 10:27:18 compute-0 agitated_shirley[297438]:             "tags": {
Dec 06 10:27:18 compute-0 agitated_shirley[297438]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 06 10:27:18 compute-0 agitated_shirley[297438]:                 "ceph.block_uuid": "a55pcS-v3Vt-CJ4W-fqCV-Baze-9VlY-jG9prS",
Dec 06 10:27:18 compute-0 agitated_shirley[297438]:                 "ceph.cephx_lockbox_secret": "",
Dec 06 10:27:18 compute-0 agitated_shirley[297438]:                 "ceph.cluster_fsid": "5ecd3f74-dade-5fc4-92ce-8950ae424258",
Dec 06 10:27:18 compute-0 agitated_shirley[297438]:                 "ceph.cluster_name": "ceph",
Dec 06 10:27:18 compute-0 agitated_shirley[297438]:                 "ceph.crush_device_class": "",
Dec 06 10:27:18 compute-0 agitated_shirley[297438]:                 "ceph.encrypted": "0",
Dec 06 10:27:18 compute-0 agitated_shirley[297438]:                 "ceph.osd_fsid": "7899c4d8-edb4-4836-b838-c4aa702ad7af",
Dec 06 10:27:18 compute-0 agitated_shirley[297438]:                 "ceph.osd_id": "1",
Dec 06 10:27:18 compute-0 agitated_shirley[297438]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 06 10:27:18 compute-0 agitated_shirley[297438]:                 "ceph.type": "block",
Dec 06 10:27:18 compute-0 agitated_shirley[297438]:                 "ceph.vdo": "0",
Dec 06 10:27:18 compute-0 agitated_shirley[297438]:                 "ceph.with_tpm": "0"
Dec 06 10:27:18 compute-0 agitated_shirley[297438]:             },
Dec 06 10:27:18 compute-0 agitated_shirley[297438]:             "type": "block",
Dec 06 10:27:18 compute-0 agitated_shirley[297438]:             "vg_name": "ceph_vg0"
Dec 06 10:27:18 compute-0 agitated_shirley[297438]:         }
Dec 06 10:27:18 compute-0 agitated_shirley[297438]:     ]
Dec 06 10:27:18 compute-0 agitated_shirley[297438]: }
Dec 06 10:27:18 compute-0 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.26756 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:27:18 compute-0 systemd[1]: libpod-fe599612256301cc7e525e09607da9a179e33c618bf66ad2e6497bc0312377ac.scope: Deactivated successfully.
Dec 06 10:27:18 compute-0 conmon[297438]: conmon fe599612256301cc7e52 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-fe599612256301cc7e525e09607da9a179e33c618bf66ad2e6497bc0312377ac.scope/container/memory.events
Dec 06 10:27:18 compute-0 podman[297381]: 2025-12-06 10:27:18.661454784 +0000 UTC m=+0.461668125 container died fe599612256301cc7e525e09607da9a179e33c618bf66ad2e6497bc0312377ac (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_shirley, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Dec 06 10:27:18 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:27:18 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:27:18 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:27:18.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:27:18 compute-0 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.28039 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:27:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-f042f814e3b5905d6f94ad0a89229d5af24b40bb82a5d42a7649aeb1f5856888-merged.mount: Deactivated successfully.
Dec 06 10:27:18 compute-0 podman[297381]: 2025-12-06 10:27:18.730798192 +0000 UTC m=+0.531011543 container remove fe599612256301cc7e525e09607da9a179e33c618bf66ad2e6497bc0312377ac (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_shirley, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 10:27:18 compute-0 systemd[1]: libpod-conmon-fe599612256301cc7e525e09607da9a179e33c618bf66ad2e6497bc0312377ac.scope: Deactivated successfully.
Dec 06 10:27:18 compute-0 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.18615 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:27:18 compute-0 sudo[297221]: pam_unix(sudo:session): session closed for user root
Dec 06 10:27:18 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1389: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 590 B/s rd, 0 op/s
Dec 06 10:27:18 compute-0 sudo[297509]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 10:27:18 compute-0 sudo[297509]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:27:18 compute-0 sudo[297509]: pam_unix(sudo:session): session closed for user root
Dec 06 10:27:18 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:27:18.846Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 06 10:27:18 compute-0 sudo[297537]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5ecd3f74-dade-5fc4-92ce-8950ae424258/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 5ecd3f74-dade-5fc4-92ce-8950ae424258 -- raw list --format json
Dec 06 10:27:18 compute-0 sudo[297537]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:27:19 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:27:19 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:27:19 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:27:19.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:27:19 compute-0 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.26765 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:27:19 compute-0 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.28045 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:27:19 compute-0 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.18624 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:27:19 compute-0 podman[297625]: 2025-12-06 10:27:19.341631018 +0000 UTC m=+0.043611838 container create 76549c2f6f1aeb3bbe5e9bf596751a6e888e79a8508488132b3063bc49771734 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_euler, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec 06 10:27:19 compute-0 systemd[1]: Started libpod-conmon-76549c2f6f1aeb3bbe5e9bf596751a6e888e79a8508488132b3063bc49771734.scope.
Dec 06 10:27:19 compute-0 podman[297625]: 2025-12-06 10:27:19.323918646 +0000 UTC m=+0.025899486 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 06 10:27:19 compute-0 systemd[1]: Started libcrun container.
Dec 06 10:27:19 compute-0 podman[297625]: 2025-12-06 10:27:19.441320204 +0000 UTC m=+0.143301054 container init 76549c2f6f1aeb3bbe5e9bf596751a6e888e79a8508488132b3063bc49771734 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_euler, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Dec 06 10:27:19 compute-0 podman[297625]: 2025-12-06 10:27:19.44923826 +0000 UTC m=+0.151219070 container start 76549c2f6f1aeb3bbe5e9bf596751a6e888e79a8508488132b3063bc49771734 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_euler, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Dec 06 10:27:19 compute-0 podman[297625]: 2025-12-06 10:27:19.452799557 +0000 UTC m=+0.154780407 container attach 76549c2f6f1aeb3bbe5e9bf596751a6e888e79a8508488132b3063bc49771734 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_euler, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec 06 10:27:19 compute-0 epic_euler[297648]: 167 167
Dec 06 10:27:19 compute-0 systemd[1]: libpod-76549c2f6f1aeb3bbe5e9bf596751a6e888e79a8508488132b3063bc49771734.scope: Deactivated successfully.
Dec 06 10:27:19 compute-0 podman[297625]: 2025-12-06 10:27:19.454978666 +0000 UTC m=+0.156959536 container died 76549c2f6f1aeb3bbe5e9bf596751a6e888e79a8508488132b3063bc49771734 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_euler, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default)
Dec 06 10:27:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-7178f928b1232a6bbfbf05efb44b66b6e99e266f44851b7d889e100752a48bca-merged.mount: Deactivated successfully.
Dec 06 10:27:19 compute-0 podman[297625]: 2025-12-06 10:27:19.502627444 +0000 UTC m=+0.204608284 container remove 76549c2f6f1aeb3bbe5e9bf596751a6e888e79a8508488132b3063bc49771734 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_euler, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True)
Dec 06 10:27:19 compute-0 systemd[1]: libpod-conmon-76549c2f6f1aeb3bbe5e9bf596751a6e888e79a8508488132b3063bc49771734.scope: Deactivated successfully.
Dec 06 10:27:19 compute-0 podman[297688]: 2025-12-06 10:27:19.678690649 +0000 UTC m=+0.046763965 container create b2f4f85131e2f5c997ab3572f36059ea6e90640aa7cf4fa62dc61b6695c15eb5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_boyd, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 10:27:19 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "status"} v 0)
Dec 06 10:27:19 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/753713519' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Dec 06 10:27:19 compute-0 systemd[1]: Started libpod-conmon-b2f4f85131e2f5c997ab3572f36059ea6e90640aa7cf4fa62dc61b6695c15eb5.scope.
Dec 06 10:27:19 compute-0 systemd[1]: Started libcrun container.
Dec 06 10:27:19 compute-0 podman[297688]: 2025-12-06 10:27:19.656886285 +0000 UTC m=+0.024959631 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 06 10:27:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd4b00c0e1b16b77cfa4c3275499bc340068323dca82e5e016565a82e100b652/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 10:27:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd4b00c0e1b16b77cfa4c3275499bc340068323dca82e5e016565a82e100b652/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 10:27:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd4b00c0e1b16b77cfa4c3275499bc340068323dca82e5e016565a82e100b652/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 10:27:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd4b00c0e1b16b77cfa4c3275499bc340068323dca82e5e016565a82e100b652/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 10:27:19 compute-0 podman[297688]: 2025-12-06 10:27:19.767781265 +0000 UTC m=+0.135854581 container init b2f4f85131e2f5c997ab3572f36059ea6e90640aa7cf4fa62dc61b6695c15eb5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_boyd, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 06 10:27:19 compute-0 podman[297688]: 2025-12-06 10:27:19.774065397 +0000 UTC m=+0.142138693 container start b2f4f85131e2f5c997ab3572f36059ea6e90640aa7cf4fa62dc61b6695c15eb5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_boyd, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Dec 06 10:27:19 compute-0 podman[297688]: 2025-12-06 10:27:19.776856053 +0000 UTC m=+0.144929349 container attach b2f4f85131e2f5c997ab3572f36059ea6e90640aa7cf4fa62dc61b6695c15eb5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_boyd, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 10:27:19 compute-0 ceph-mon[74327]: from='client.26756 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:27:19 compute-0 ceph-mon[74327]: from='client.28039 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:27:19 compute-0 ceph-mon[74327]: from='client.18615 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:27:19 compute-0 ceph-mon[74327]: pgmap v1389: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 590 B/s rd, 0 op/s
Dec 06 10:27:19 compute-0 ceph-mon[74327]: from='client.26765 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:27:19 compute-0 ceph-mon[74327]: from='client.? 192.168.122.101:0/725242139' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Dec 06 10:27:19 compute-0 ceph-mon[74327]: from='client.? 192.168.122.100:0/753713519' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Dec 06 10:27:19 compute-0 ceph-mon[74327]: from='client.? 192.168.122.102:0/3613403269' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Dec 06 10:27:20 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:27:20 compute-0 nova_compute[254819]: 2025-12-06 10:27:20.279 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:27:20 compute-0 lvm[297834]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 06 10:27:20 compute-0 lvm[297834]: VG ceph_vg0 finished
Dec 06 10:27:20 compute-0 lucid_boyd[297707]: {}
Dec 06 10:27:20 compute-0 systemd[1]: libpod-b2f4f85131e2f5c997ab3572f36059ea6e90640aa7cf4fa62dc61b6695c15eb5.scope: Deactivated successfully.
Dec 06 10:27:20 compute-0 podman[297688]: 2025-12-06 10:27:20.585143746 +0000 UTC m=+0.953217062 container died b2f4f85131e2f5c997ab3572f36059ea6e90640aa7cf4fa62dc61b6695c15eb5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_boyd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 10:27:20 compute-0 systemd[1]: libpod-b2f4f85131e2f5c997ab3572f36059ea6e90640aa7cf4fa62dc61b6695c15eb5.scope: Consumed 1.173s CPU time.
Dec 06 10:27:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-bd4b00c0e1b16b77cfa4c3275499bc340068323dca82e5e016565a82e100b652-merged.mount: Deactivated successfully.
Dec 06 10:27:20 compute-0 podman[297688]: 2025-12-06 10:27:20.63778714 +0000 UTC m=+1.005860456 container remove b2f4f85131e2f5c997ab3572f36059ea6e90640aa7cf4fa62dc61b6695c15eb5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_boyd, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 06 10:27:20 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:27:20 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:27:20 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:27:20.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:27:20 compute-0 systemd[1]: libpod-conmon-b2f4f85131e2f5c997ab3572f36059ea6e90640aa7cf4fa62dc61b6695c15eb5.scope: Deactivated successfully.
Dec 06 10:27:20 compute-0 sudo[297537]: pam_unix(sudo:session): session closed for user root
Dec 06 10:27:20 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 06 10:27:20 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 10:27:20 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 06 10:27:20 compute-0 ceph-mon[74327]: log_channel(audit) log [INF] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 10:27:20 compute-0 sudo[297863]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 10:27:20 compute-0 sudo[297863]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:27:20 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1390: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 590 B/s rd, 0 op/s
Dec 06 10:27:20 compute-0 sudo[297863]: pam_unix(sudo:session): session closed for user root
Dec 06 10:27:20 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:27:20] "GET /metrics HTTP/1.1" 200 48456 "" "Prometheus/2.51.0"
Dec 06 10:27:20 compute-0 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:27:20] "GET /metrics HTTP/1.1" 200 48456 "" "Prometheus/2.51.0"
Dec 06 10:27:21 compute-0 ceph-mon[74327]: from='client.28045 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:27:21 compute-0 ceph-mon[74327]: from='client.18624 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:27:21 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 10:27:21 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' 
Dec 06 10:27:21 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:27:21 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:27:21 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:27:21.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:27:22 compute-0 ceph-mon[74327]: pgmap v1390: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 590 B/s rd, 0 op/s
Dec 06 10:27:22 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:27:22 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:27:22 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:27:22.668 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:27:22 compute-0 nova_compute[254819]: 2025-12-06 10:27:22.689 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:27:22 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1391: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 590 B/s rd, 0 op/s
Dec 06 10:27:22 compute-0 ovs-vsctl[297919]: ovs|00001|db_ctl_base|ERR|no key "dpdk-init" in Open_vSwitch record "." column other_config
Dec 06 10:27:23 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:27:23 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:27:23 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:27:23.029 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:27:23 compute-0 virtqemud[254445]: Failed to connect socket to '/var/run/libvirt/virtnetworkd-sock-ro': No such file or directory
Dec 06 10:27:23 compute-0 virtqemud[254445]: Failed to connect socket to '/var/run/libvirt/virtnwfilterd-sock-ro': No such file or directory
Dec 06 10:27:23 compute-0 virtqemud[254445]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Dec 06 10:27:23 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 06 10:27:23 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 10:27:24 compute-0 ceph-mon[74327]: pgmap v1391: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 590 B/s rd, 0 op/s
Dec 06 10:27:24 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 10:27:24 compute-0 ceph-mgr[74618]: [balancer INFO root] Optimize plan auto_2025-12-06_10:27:24
Dec 06 10:27:24 compute-0 ceph-mgr[74618]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 06 10:27:24 compute-0 ceph-mgr[74618]: [balancer INFO root] do_upmap
Dec 06 10:27:24 compute-0 ceph-mgr[74618]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'volumes', 'default.rgw.log', 'default.rgw.control', 'cephfs.cephfs.data', 'vms', '.mgr', 'images', 'backups', 'default.rgw.meta', '.nfs', '.rgw.root']
Dec 06 10:27:24 compute-0 ceph-mgr[74618]: [balancer INFO root] prepared 0/10 upmap changes
Dec 06 10:27:24 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 10:27:24 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 10:27:24 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 10:27:24 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 10:27:24 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 10:27:24 compute-0 ceph-mgr[74618]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 10:27:24 compute-0 ceph-mds[95272]: mds.cephfs.compute-0.ujokui asok_command: cache status {prefix=cache status} (starting...)
Dec 06 10:27:24 compute-0 ceph-mds[95272]: mds.cephfs.compute-0.ujokui Can't run that command on an inactive MDS!
Dec 06 10:27:24 compute-0 ceph-mds[95272]: mds.cephfs.compute-0.ujokui asok_command: client ls {prefix=client ls} (starting...)
Dec 06 10:27:24 compute-0 ceph-mds[95272]: mds.cephfs.compute-0.ujokui Can't run that command on an inactive MDS!
Dec 06 10:27:24 compute-0 lvm[298256]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 06 10:27:24 compute-0 lvm[298256]: VG ceph_vg0 finished
Dec 06 10:27:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] _maybe_adjust
Dec 06 10:27:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:27:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 06 10:27:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:27:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec 06 10:27:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:27:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 10:27:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:27:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 10:27:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:27:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Dec 06 10:27:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:27:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Dec 06 10:27:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:27:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 10:27:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:27:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec 06 10:27:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:27:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 06 10:27:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:27:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec 06 10:27:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:27:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 10:27:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 06 10:27:24 compute-0 ceph-mgr[74618]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 06 10:27:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 06 10:27:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 10:27:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 10:27:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 10:27:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 10:27:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 06 10:27:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 10:27:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 10:27:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 10:27:24 compute-0 ceph-mgr[74618]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 10:27:24 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:27:24 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:27:24 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:27:24.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:27:24 compute-0 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.28060 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:27:24 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1392: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 590 B/s rd, 0 op/s
Dec 06 10:27:25 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:27:25 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:27:25 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:27:25.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:27:25 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "report"} v 0)
Dec 06 10:27:25 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? ' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Dec 06 10:27:25 compute-0 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.26783 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:27:25 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:27:25 compute-0 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.28072 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:27:25 compute-0 ceph-mds[95272]: mds.cephfs.compute-0.ujokui asok_command: damage ls {prefix=damage ls} (starting...)
Dec 06 10:27:25 compute-0 ceph-mds[95272]: mds.cephfs.compute-0.ujokui Can't run that command on an inactive MDS!
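[Annotation] mds.cephfs.compute-0.ujokui rejects this and every following admin-socket command (dump loads, dump tree, dump_blocked_ops, and so on below) with "Can't run that command on an inactive MDS!". That is expected for a standby daemon: these asok commands are only served by an active rank, so the repeated failures indicate a diagnostics sweep hitting a standby, not a fault. Which daemon holds the active rank can be confirmed directly; a minimal check, assuming the filesystem is named cephfs as the daemon name suggests:

    $ ceph fs status cephfs    # lists each rank's STATE (active/standby) and the daemon serving it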
Dec 06 10:27:25 compute-0 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.28078 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:27:25 compute-0 ceph-mds[95272]: mds.cephfs.compute-0.ujokui asok_command: dump loads {prefix=dump loads} (starting...)
Dec 06 10:27:25 compute-0 ceph-mds[95272]: mds.cephfs.compute-0.ujokui Can't run that command on an inactive MDS!
Dec 06 10:27:25 compute-0 nova_compute[254819]: 2025-12-06 10:27:25.281 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:27:25 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "report"} v 0)
Dec 06 10:27:25 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2493359472' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Dec 06 10:27:25 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "report"} v 0)
Dec 06 10:27:25 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? ' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Dec 06 10:27:25 compute-0 ceph-mds[95272]: mds.cephfs.compute-0.ujokui asok_command: dump tree {prefix=dump tree,root=/} (starting...)
Dec 06 10:27:25 compute-0 ceph-mds[95272]: mds.cephfs.compute-0.ujokui Can't run that command on an inactive MDS!
Dec 06 10:27:25 compute-0 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.26801 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:27:25 compute-0 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.18669 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:27:25 compute-0 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.28105 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:27:25 compute-0 ceph-mds[95272]: mds.cephfs.compute-0.ujokui asok_command: dump_blocked_ops {prefix=dump_blocked_ops} (starting...)
Dec 06 10:27:25 compute-0 ceph-mds[95272]: mds.cephfs.compute-0.ujokui Can't run that command on an inactive MDS!
Dec 06 10:27:25 compute-0 ceph-mds[95272]: mds.cephfs.compute-0.ujokui asok_command: dump_historic_ops {prefix=dump_historic_ops} (starting...)
Dec 06 10:27:25 compute-0 ceph-mds[95272]: mds.cephfs.compute-0.ujokui Can't run that command on an inactive MDS!
Dec 06 10:27:25 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 06 10:27:25 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1789204808' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 10:27:25 compute-0 ceph-mds[95272]: mds.cephfs.compute-0.ujokui asok_command: dump_historic_ops_by_duration {prefix=dump_historic_ops_by_duration} (starting...)
Dec 06 10:27:25 compute-0 ceph-mds[95272]: mds.cephfs.compute-0.ujokui Can't run that command on an inactive MDS!
Dec 06 10:27:25 compute-0 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.26813 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:27:25 compute-0 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.18687 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:27:25 compute-0 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.28132 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:27:26 compute-0 ceph-mon[74327]: from='client.28060 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:27:26 compute-0 ceph-mon[74327]: pgmap v1392: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 590 B/s rd, 0 op/s
Dec 06 10:27:26 compute-0 ceph-mon[74327]: from='client.? ' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Dec 06 10:27:26 compute-0 ceph-mon[74327]: from='client.26783 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:27:26 compute-0 ceph-mon[74327]: from='client.? 192.168.122.102:0/2556019190' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Dec 06 10:27:26 compute-0 ceph-mon[74327]: from='client.28072 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:27:26 compute-0 ceph-mon[74327]: from='client.28078 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:27:26 compute-0 ceph-mon[74327]: from='client.? 192.168.122.100:0/2493359472' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Dec 06 10:27:26 compute-0 ceph-mon[74327]: from='client.? 192.168.122.101:0/1297629055' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Dec 06 10:27:26 compute-0 ceph-mon[74327]: from='client.? ' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Dec 06 10:27:26 compute-0 ceph-mon[74327]: from='client.? 192.168.122.102:0/2975038871' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 10:27:26 compute-0 ceph-mon[74327]: from='client.? 192.168.122.100:0/1789204808' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 10:27:26 compute-0 ceph-mon[74327]: from='client.? 192.168.122.102:0/2998337513' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Dec 06 10:27:26 compute-0 ceph-mon[74327]: from='client.? 192.168.122.101:0/1843947048' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 10:27:26 compute-0 ceph-mds[95272]: mds.cephfs.compute-0.ujokui asok_command: dump_ops_in_flight {prefix=dump_ops_in_flight} (starting...)
Dec 06 10:27:26 compute-0 ceph-mds[95272]: mds.cephfs.compute-0.ujokui Can't run that command on an inactive MDS!
Dec 06 10:27:26 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config log"} v 0)
Dec 06 10:27:26 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3645085107' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Dec 06 10:27:26 compute-0 ceph-mds[95272]: mds.cephfs.compute-0.ujokui asok_command: get subtrees {prefix=get subtrees} (starting...)
Dec 06 10:27:26 compute-0 ceph-mds[95272]: mds.cephfs.compute-0.ujokui Can't run that command on an inactive MDS!
Dec 06 10:27:26 compute-0 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.26822 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:27:26 compute-0 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.18699 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:27:26 compute-0 ceph-mds[95272]: mds.cephfs.compute-0.ujokui asok_command: ops {prefix=ops} (starting...)
Dec 06 10:27:26 compute-0 ceph-mds[95272]: mds.cephfs.compute-0.ujokui Can't run that command on an inactive MDS!
Dec 06 10:27:26 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config-key dump"} v 0)
Dec 06 10:27:26 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4011143574' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Dec 06 10:27:26 compute-0 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.28162 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:27:26 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:27:26 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 10:27:26 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:27:26.670 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 10:27:26 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1393: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:27:26 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "log last", "channel": "cephadm"} v 0)
Dec 06 10:27:26 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3308889397' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Dec 06 10:27:26 compute-0 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.28174 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:27:27 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:27:27 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:27:27 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:27:27.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:27:27 compute-0 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.28180 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:27:27 compute-0 ceph-mon[74327]: from='client.26801 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:27:27 compute-0 ceph-mon[74327]: from='client.18669 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:27:27 compute-0 ceph-mon[74327]: from='client.28105 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:27:27 compute-0 ceph-mon[74327]: from='client.26813 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:27:27 compute-0 ceph-mon[74327]: from='client.18687 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:27:27 compute-0 ceph-mon[74327]: from='client.28132 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:27:27 compute-0 ceph-mon[74327]: from='client.? 192.168.122.100:0/3645085107' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Dec 06 10:27:27 compute-0 ceph-mon[74327]: from='client.? 192.168.122.101:0/878022195' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Dec 06 10:27:27 compute-0 ceph-mon[74327]: from='client.? 192.168.122.102:0/3661082603' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Dec 06 10:27:27 compute-0 ceph-mon[74327]: from='client.? 192.168.122.102:0/2198453763' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Dec 06 10:27:27 compute-0 ceph-mon[74327]: from='client.? 192.168.122.100:0/4011143574' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Dec 06 10:27:27 compute-0 ceph-mon[74327]: from='client.? 192.168.122.101:0/2966237147' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Dec 06 10:27:27 compute-0 ceph-mon[74327]: from='client.? 192.168.122.101:0/118631599' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Dec 06 10:27:27 compute-0 ceph-mon[74327]: from='client.? 192.168.122.100:0/3308889397' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Dec 06 10:27:27 compute-0 ceph-mon[74327]: from='client.? 192.168.122.102:0/3347068796' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Dec 06 10:27:27 compute-0 ceph-mds[95272]: mds.cephfs.compute-0.ujokui asok_command: session ls {prefix=session ls} (starting...)
Dec 06 10:27:27 compute-0 ceph-mds[95272]: mds.cephfs.compute-0.ujokui Can't run that command on an inactive MDS!
Dec 06 10:27:27 compute-0 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.18738 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:27:27 compute-0 ceph-mds[95272]: mds.cephfs.compute-0.ujokui asok_command: status {prefix=status} (starting...)
Dec 06 10:27:27 compute-0 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.18756 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:27:27 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "features"} v 0)
Dec 06 10:27:27 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? ' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Dec 06 10:27:27 compute-0 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.26870 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:27:27 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata"} v 0)
Dec 06 10:27:27 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3505130251' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Dec 06 10:27:27 compute-0 nova_compute[254819]: 2025-12-06 10:27:27.721 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:27:27 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:27:27.756Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Dec 06 10:27:27 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "features"} v 0)
Dec 06 10:27:27 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/837080414' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Dec 06 10:27:28 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "features"} v 0)
Dec 06 10:27:28 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? ' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Dec 06 10:27:28 compute-0 ceph-mon[74327]: from='client.26822 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:27:28 compute-0 ceph-mon[74327]: from='client.18699 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:27:28 compute-0 ceph-mon[74327]: from='client.28162 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:27:28 compute-0 ceph-mon[74327]: pgmap v1393: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:27:28 compute-0 ceph-mon[74327]: from='client.28174 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:27:28 compute-0 ceph-mon[74327]: from='client.28180 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:27:28 compute-0 ceph-mon[74327]: from='client.18738 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:27:28 compute-0 ceph-mon[74327]: from='client.? 192.168.122.101:0/780400855' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Dec 06 10:27:28 compute-0 ceph-mon[74327]: from='client.? 192.168.122.100:0/1082146717' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Dec 06 10:27:28 compute-0 ceph-mon[74327]: from='client.? 192.168.122.102:0/3948499551' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Dec 06 10:27:28 compute-0 ceph-mon[74327]: from='client.? ' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Dec 06 10:27:28 compute-0 ceph-mon[74327]: from='client.? 192.168.122.102:0/98991412' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Dec 06 10:27:28 compute-0 ceph-mon[74327]: from='client.? 192.168.122.101:0/2396899628' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Dec 06 10:27:28 compute-0 ceph-mon[74327]: from='client.? 192.168.122.100:0/3505130251' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Dec 06 10:27:28 compute-0 ceph-mon[74327]: from='client.? 192.168.122.102:0/2140146032' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Dec 06 10:27:28 compute-0 ceph-mon[74327]: from='client.? 192.168.122.100:0/837080414' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Dec 06 10:27:28 compute-0 ceph-mon[74327]: from='client.? 192.168.122.102:0/2585617435' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Dec 06 10:27:28 compute-0 ceph-mon[74327]: from='client.? 192.168.122.101:0/2579997392' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Dec 06 10:27:28 compute-0 ceph-mon[74327]: from='client.? ' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Dec 06 10:27:28 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module ls"} v 0)
Dec 06 10:27:28 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2329383087' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Dec 06 10:27:28 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services"} v 0)
Dec 06 10:27:28 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2397306980' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Dec 06 10:27:28 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "health", "detail": "detail"} v 0)
Dec 06 10:27:28 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3859338692' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Dec 06 10:27:28 compute-0 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.28252 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:27:28 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T10:27:28.416+0000 7f35ec3cf640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Dec 06 10:27:28 compute-0 ceph-mgr[74618]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Dec 06 10:27:28 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services"} v 0)
Dec 06 10:27:28 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3683697799' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Dec 06 10:27:28 compute-0 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.18834 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:27:28 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T10:27:28.623+0000 7f35ec3cf640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Dec 06 10:27:28 compute-0 ceph-mgr[74618]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Dec 06 10:27:28 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr stat"} v 0)
Dec 06 10:27:28 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1751130465' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Dec 06 10:27:28 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:27:28 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:27:28 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:27:28.673 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:27:28 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1394: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec 06 10:27:28 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:27:28.849Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec 06 10:27:28 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:27:28.849Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec 06 10:27:28 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:27:28.850Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
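[Annotation] Alertmanager is failing to deliver the ceph-dashboard webhook to compute-1 and compute-2 on port 8443, earlier with "context deadline exceeded" and here with "dial tcp ... i/o timeout", so the receivers are unreachable at the TCP level rather than rejecting the payload. Reachability can be tested from this node before digging into the dashboard itself; a sketch, assuming the URLs in the log are still the configured receivers:

    $ curl -m 5 -o /dev/null -s -w '%{http_code}\n' http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver
    # a timeout here reproduces the dial error; any HTTP status code (even 405 for the wrong method) means TCP connectivity is back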
Dec 06 10:27:28 compute-0 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.26921 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:27:28 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: 2025-12-06T10:27:28.911+0000 7f35ec3cf640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Dec 06 10:27:28 compute-0 ceph-mgr[74618]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
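[Annotation] The three "insights" dispatches above each fail with "(95) Operation not supported" because the mgr module is not loaded; the error text itself names the remediation. If insights data is actually wanted, enabling the module is a one-liner (otherwise the caller issuing "insights" every second should be silenced instead):

    $ ceph mgr module enable insights
    $ ceph mgr module ls    # verify insights now appears among the enabled modules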
Dec 06 10:27:28 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} v 0)
Dec 06 10:27:28 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3275467326' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Dec 06 10:27:29 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:27:29 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:27:29 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:27:29.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:27:29 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} v 0)
Dec 06 10:27:29 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/980335214' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Dec 06 10:27:29 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr stat"} v 0)
Dec 06 10:27:29 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/472944739' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Dec 06 10:27:29 compute-0 ceph-mon[74327]: from='client.18756 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:27:29 compute-0 ceph-mon[74327]: from='client.26870 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:27:29 compute-0 ceph-mon[74327]: from='client.? 192.168.122.100:0/2329383087' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Dec 06 10:27:29 compute-0 ceph-mon[74327]: from='client.? 192.168.122.101:0/3061321017' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Dec 06 10:27:29 compute-0 ceph-mon[74327]: from='client.? 192.168.122.102:0/2397306980' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Dec 06 10:27:29 compute-0 ceph-mon[74327]: from='client.? 192.168.122.100:0/3859338692' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Dec 06 10:27:29 compute-0 ceph-mon[74327]: from='client.? 192.168.122.101:0/2768020805' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Dec 06 10:27:29 compute-0 ceph-mon[74327]: from='client.? 192.168.122.101:0/389558928' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Dec 06 10:27:29 compute-0 ceph-mon[74327]: from='client.? 192.168.122.100:0/3683697799' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Dec 06 10:27:29 compute-0 ceph-mon[74327]: from='client.? 192.168.122.102:0/1751130465' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Dec 06 10:27:29 compute-0 ceph-mon[74327]: from='client.? 192.168.122.102:0/3275467326' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Dec 06 10:27:29 compute-0 ceph-mon[74327]: from='client.? 192.168.122.101:0/4226561449' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Dec 06 10:27:29 compute-0 ceph-mon[74327]: from='client.? 192.168.122.100:0/980335214' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Dec 06 10:27:29 compute-0 ceph-mon[74327]: from='client.? 192.168.122.100:0/472944739' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Dec 06 10:27:29 compute-0 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.18879 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:27:29 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr versions"} v 0)
Dec 06 10:27:29 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2837912887' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Dec 06 10:27:29 compute-0 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.26963 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:27:29 compute-0 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.28321 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:27:29 compute-0 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.18894 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:27:30 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr dump"} v 0)
Dec 06 10:27:30 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4240077914' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Dec 06 10:27:30 compute-0 ceph-mon[74327]: from='client.28252 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:27:30 compute-0 ceph-mon[74327]: from='client.18834 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:27:30 compute-0 ceph-mon[74327]: pgmap v1394: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec 06 10:27:30 compute-0 ceph-mon[74327]: from='client.26921 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:27:30 compute-0 ceph-mon[74327]: from='client.? 192.168.122.102:0/933451133' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Dec 06 10:27:30 compute-0 ceph-mon[74327]: from='client.? 192.168.122.102:0/409247307' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Dec 06 10:27:30 compute-0 ceph-mon[74327]: from='client.? 192.168.122.101:0/4135499701' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Dec 06 10:27:30 compute-0 ceph-mon[74327]: from='client.? 192.168.122.101:0/1023966935' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Dec 06 10:27:30 compute-0 ceph-mon[74327]: from='client.? 192.168.122.100:0/2837912887' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Dec 06 10:27:30 compute-0 ceph-mon[74327]: from='client.? 192.168.122.100:0/2687712743' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Dec 06 10:27:30 compute-0 ceph-mon[74327]: from='client.? 192.168.122.102:0/2133887337' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Dec 06 10:27:30 compute-0 ceph-mon[74327]: from='client.? 192.168.122.101:0/2104238276' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Dec 06 10:27:30 compute-0 ceph-mon[74327]: from='client.? 192.168.122.100:0/4240077914' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Dec 06 10:27:30 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:27:30 compute-0 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.26969 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:27:30 compute-0 nova_compute[254819]: 2025-12-06 10:27:30.284 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:27:30 compute-0 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.28342 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:27:30 compute-0 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.18924 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:27:30 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata"} v 0)
Dec 06 10:27:30 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/713374450' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Dec 06 10:27:30 compute-0 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.26987 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:27:30 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:27:30 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:27:30 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:27:30.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:27:30 compute-0 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.28369 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84434944 unmapped: 3604480 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:54:47.915112+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84434944 unmapped: 3604480 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:54:48.915287+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84434944 unmapped: 3604480 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:54:49.915513+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 3596288 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:54:50.915731+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990082 data_alloc: 218103808 data_used: 270336
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 3596288 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:54:51.915947+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 3596288 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:54:52.916093+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.205703735s of 12.221550941s, submitted: 3
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 3596288 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:54:53.916329+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 3596288 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:54:54.916533+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 3596288 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:54:55.916682+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 989359 data_alloc: 218103808 data_used: 270336
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 3596288 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:54:56.916827+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 3596288 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:54:57.916966+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 3596288 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:54:58.917097+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 3596288 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:54:59.917245+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 3596288 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:55:00.917411+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 989359 data_alloc: 218103808 data_used: 270336
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 3596288 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:55:01.917618+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 3596288 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:55:02.917774+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 3596288 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:55:03.917908+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 3596288 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:55:04.918073+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 3596288 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:55:05.918215+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 989359 data_alloc: 218103808 data_used: 270336
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 3596288 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:55:06.918363+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 3596288 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:55:07.918525+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 3596288 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:55:08.918755+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 3596288 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:55:09.918940+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:55:10.919104+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 3596288 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 989359 data_alloc: 218103808 data_used: 270336
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:55:11.919411+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 3596288 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:55:12.919582+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 3596288 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:55:13.919762+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 3596288 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:55:14.919958+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 3596288 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:55:15.920108+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 3596288 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 989359 data_alloc: 218103808 data_used: 270336
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:55:16.920276+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 3596288 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:55:17.920406+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 3596288 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:55:18.920544+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 3596288 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:55:19.920703+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 3596288 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:55:20.920856+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 3596288 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 989359 data_alloc: 218103808 data_used: 270336
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:55:21.922420+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 3596288 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:55:22.922595+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 3596288 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:55:23.922759+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 3596288 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:55:24.922878+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 3596288 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:55:25.923082+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 3596288 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 989359 data_alloc: 218103808 data_used: 270336
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:55:26.923257+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 3596288 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:55:27.923561+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 3596288 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:55:28.923743+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 3596288 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:55:29.923881+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 3596288 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:55:30.924022+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 3596288 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 989359 data_alloc: 218103808 data_used: 270336
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:55:31.924207+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 3596288 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 146 ms_handle_reset con 0x55fcdf3a9000 session 0x55fce0e892c0
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 146 ms_handle_reset con 0x55fce2068800 session 0x55fce245f4a0
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:55:32.924538+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 3596288 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:55:33.924790+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 3596288 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:55:34.924958+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 3596288 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:55:35.925157+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 3596288 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 989359 data_alloc: 218103808 data_used: 270336
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:55:36.925394+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 3596288 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:55:37.925564+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 3596288 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:55:38.925739+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 3596288 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:55:39.925881+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 3596288 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:55:40.926043+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 3596288 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 989359 data_alloc: 218103808 data_used: 270336
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:55:41.926255+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 3596288 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce1f8a000
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 49.344425201s of 49.353523254s, submitted: 2
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:55:42.926422+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 3596288 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:55:43.926623+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 3596288 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:55:44.926798+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 3596288 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:55:45.926982+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 3596288 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 146 ms_handle_reset con 0x55fce1f87400 session 0x55fce22ff860
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 146 ms_handle_reset con 0x55fcdf1d9800 session 0x55fce1ed5680
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:55:46.927119+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 989491 data_alloc: 218103808 data_used: 270336
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 3596288 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:55:47.927263+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 3596288 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:55:48.927402+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce2116000
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 3596288 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:55:49.927548+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 3596288 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:55:50.927723+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 3596288 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:55:51.927882+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 989491 data_alloc: 218103808 data_used: 270336
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 3596288 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:55:52.928031+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 3596288 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:55:53.928172+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 3596288 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:55:54.928416+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 3596288 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:55:55.928655+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 3596288 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:55:56.928839+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce2135800
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.083337784s of 14.100935936s, submitted: 1
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 989623 data_alloc: 218103808 data_used: 270336
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:55:57.928965+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:55:58.929111+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:55:59.929253+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:56:00.929534+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:56:01.929793+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 989491 data_alloc: 218103808 data_used: 270336
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:56:02.929959+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:56:03.930120+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:56:04.930294+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:56:05.930446+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:56:06.930844+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 988900 data_alloc: 218103808 data_used: 270336
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:56:07.930987+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:56:08.931117+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:56:09.931215+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:56:10.931392+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.310416222s of 14.321680069s, submitted: 3
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:56:11.931565+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 988768 data_alloc: 218103808 data_used: 270336
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:56:12.931703+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:56:13.931932+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:56:14.932122+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:56:15.932299+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:56:16.932470+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 988768 data_alloc: 218103808 data_used: 270336
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:56:17.932637+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:56:18.932778+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:56:19.932923+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:56:20.933087+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:56:21.933306+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 988768 data_alloc: 218103808 data_used: 270336
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:56:22.935367+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:56:23.935521+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:56:24.935712+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:56:25.935862+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:56:26.936020+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 988768 data_alloc: 218103808 data_used: 270336
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:56:27.936182+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:56:28.936337+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:56:29.936461+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:56:30.936682+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:56:31.936876+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 988768 data_alloc: 218103808 data_used: 270336
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:56:32.937009+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:56:33.937103+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:56:34.937242+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:56:35.937390+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:56:36.937536+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 988768 data_alloc: 218103808 data_used: 270336
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:56:37.937757+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:56:38.937960+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:56:39.938112+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:56:40.938230+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:56:41.938392+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 988768 data_alloc: 218103808 data_used: 270336
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:56:42.938542+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:56:43.938669+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:56:44.938831+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:56:45.938982+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:56:46.939123+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 988768 data_alloc: 218103808 data_used: 270336
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:56:47.939233+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:56:48.939371+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:56:49.939473+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:56:50.939617+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:56:51.939815+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 988768 data_alloc: 218103808 data_used: 270336
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:56:52.939989+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:56:53.940108+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:56:54.940233+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:56:55.940365+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:56:56.940529+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 988768 data_alloc: 218103808 data_used: 270336
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:56:57.940677+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:56:58.940835+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:56:59.940974+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:57:00.941145+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:57:01.941332+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 988768 data_alloc: 218103808 data_used: 270336
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:57:02.941562+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:57:03.941700+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 146 ms_handle_reset con 0x55fce2135800 session 0x55fce1f0ab40
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:57:04.941866+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:57:05.942005+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:57:06.942165+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 988768 data_alloc: 218103808 data_used: 270336
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:57:07.942291+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:57:08.942551+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:57:09.942691+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:57:10.942863+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:57:11.943077+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 988768 data_alloc: 218103808 data_used: 270336
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:57:12.943244+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:57:13.943399+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:57:14.943597+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fcdf1d9800
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 63.857799530s of 63.862625122s, submitted: 1
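The _kv_sync_thread utilization line above is the clearest load signal in this window: the BlueStore kv-sync thread was idle for 63.858 s of a 63.863 s interval, about 99.99%, and submitted a single transaction. A tiny sketch to turn such lines into an idle percentage (kv_sync_idle_pct is a made-up name; the regex assumes this exact phrasing):

    import re

    UTIL = re.compile(r"_kv_sync_thread utilization: idle ([\d.]+)s of ([\d.]+)s")

    def kv_sync_idle_pct(line):
        """Idle percentage of the BlueStore kv-sync thread for the window."""
        m = UTIL.search(line)
        if not m:
            return None
        idle, total = float(m.group(1)), float(m.group(2))
        return 100.0 * idle / total

Values near 100 here mean the OSD's write path is essentially unused; a busy OSD would report a much lower idle share over the same window.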
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:57:15.943835+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:57:16.943974+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 988900 data_alloc: 218103808 data_used: 270336
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:57:17.944147+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:57:18.944303+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:57:19.944547+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:57:20.944691+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:57:21.944902+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990412 data_alloc: 218103808 data_used: 270336
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:57:22.945068+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.18942 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
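The single ceph-mgr line interleaved above is an audit-channel record of client.admin dispatching an 'orch ls' command to the mon-mgr target; the cmd= payload is plain JSON and can be lifted out directly. A sketch of that extraction (audit_command is a hypothetical helper; the trailing ': dispatch' anchor assumes this exact audit format):

    import json
    import re

    # Capture the JSON array between "cmd=" and the ": dispatch" suffix.
    CMD = re.compile(r"cmd=(\[.*\]): dispatch")

    def audit_command(line):
        """Return the command prefixes from a ceph-mgr audit dispatch line."""
        m = CMD.search(line)
        if not m:
            return None
        return [c.get("prefix") for c in json.loads(m.group(1))]

For the line above this returns ['orch ls'], which is why the mgr chatter appears once amid the otherwise uniform OSD output.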
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:57:23.945247+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:57:24.945409+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:57:25.945549+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:57:26.945752+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990412 data_alloc: 218103808 data_used: 270336
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:57:27.945964+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:57:28.946129+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:57:29.946316+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:57:30.946513+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 16.674135208s of 16.811328888s, submitted: 2
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:57:31.946677+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990280 data_alloc: 218103808 data_used: 270336
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:57:32.946858+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:57:33.946997+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 3571712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:57:34.947136+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84475904 unmapped: 3563520 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:57:35.947343+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84475904 unmapped: 3563520 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:57:36.947527+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990280 data_alloc: 218103808 data_used: 270336
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84475904 unmapped: 3563520 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:57:37.947668+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84475904 unmapped: 3563520 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:57:38.947817+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84475904 unmapped: 3563520 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:57:39.948032+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84475904 unmapped: 3563520 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:57:40.948207+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84475904 unmapped: 3563520 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:57:41.948390+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990280 data_alloc: 218103808 data_used: 270336
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84475904 unmapped: 3563520 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:57:42.948598+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84475904 unmapped: 3563520 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:57:43.948781+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84475904 unmapped: 3563520 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:57:44.948964+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84475904 unmapped: 3563520 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:57:45.949112+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84475904 unmapped: 3563520 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:57:46.949290+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990280 data_alloc: 218103808 data_used: 270336
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84475904 unmapped: 3563520 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:57:47.949515+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84475904 unmapped: 3563520 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:57:48.949666+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84475904 unmapped: 3563520 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:57:49.949809+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84475904 unmapped: 3563520 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:57:50.949977+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84475904 unmapped: 3563520 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:57:51.950129+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990280 data_alloc: 218103808 data_used: 270336
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84475904 unmapped: 3563520 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:57:52.950280+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84475904 unmapped: 3563520 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:57:53.950428+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84475904 unmapped: 3563520 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:57:54.950586+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84475904 unmapped: 3563520 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:57:55.950734+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84475904 unmapped: 3563520 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:57:56.950905+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990280 data_alloc: 218103808 data_used: 270336
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84475904 unmapped: 3563520 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 146 ms_handle_reset con 0x55fce212f800 session 0x55fce1e64f00
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce1f6f400
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:57:57.951047+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84484096 unmapped: 3555328 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:57:58.951197+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84484096 unmapped: 3555328 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:57:59.951346+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84484096 unmapped: 3555328 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:58:00.951548+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84484096 unmapped: 3555328 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:58:01.951718+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990280 data_alloc: 218103808 data_used: 270336
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84484096 unmapped: 3555328 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:58:02.951941+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84484096 unmapped: 3555328 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:58:03.952130+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84484096 unmapped: 3555328 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:58:04.952320+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84484096 unmapped: 3555328 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:58:05.952457+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84484096 unmapped: 3555328 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:58:06.952611+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990280 data_alloc: 218103808 data_used: 270336
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84484096 unmapped: 3555328 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:58:07.952781+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84484096 unmapped: 3555328 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:58:08.952941+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84484096 unmapped: 3555328 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:58:09.953134+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84484096 unmapped: 3555328 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:58:10.953328+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84484096 unmapped: 3555328 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:58:11.953597+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990280 data_alloc: 218103808 data_used: 270336
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84484096 unmapped: 3555328 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:58:12.953769+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84484096 unmapped: 3555328 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:58:13.953944+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84484096 unmapped: 3555328 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:58:14.954175+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84484096 unmapped: 3555328 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:58:15.954313+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84484096 unmapped: 3555328 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:58:16.954688+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990280 data_alloc: 218103808 data_used: 270336
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84484096 unmapped: 3555328 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:58:17.954847+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84484096 unmapped: 3555328 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:58:18.955009+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84484096 unmapped: 3555328 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:58:19.955225+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84484096 unmapped: 3555328 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:58:20.955732+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84484096 unmapped: 3555328 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:58:21.955960+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990280 data_alloc: 218103808 data_used: 270336
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 3547136 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:58:22.956172+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 3547136 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:58:23.956335+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 3547136 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:58:24.956592+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 3547136 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:58:25.956749+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 3547136 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:58:26.956915+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990280 data_alloc: 218103808 data_used: 270336
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 3547136 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:58:27.957130+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 3547136 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:58:28.957317+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 3547136 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:58:29.957542+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 3547136 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:58:30.957755+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 3547136 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:58:31.958040+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990280 data_alloc: 218103808 data_used: 270336
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 3547136 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:58:32.958213+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 3547136 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:58:33.958352+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 3547136 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:58:34.958545+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 3547136 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:58:35.958698+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 3547136 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:58:36.958841+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990280 data_alloc: 218103808 data_used: 270336
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 3547136 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:58:37.958984+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 3547136 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:58:38.959116+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 3547136 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:58:39.959241+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 3547136 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:58:40.959379+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 3547136 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:58:41.959575+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990280 data_alloc: 218103808 data_used: 270336
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 3547136 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:58:42.959819+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 3547136 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:58:43.959980+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 3547136 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:58:44.960137+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 3547136 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:58:45.960298+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 3547136 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:58:46.960576+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990280 data_alloc: 218103808 data_used: 270336
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 3547136 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:58:47.960734+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 3547136 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:58:48.960923+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84500480 unmapped: 3538944 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:58:49.961194+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84500480 unmapped: 3538944 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:58:50.961404+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84500480 unmapped: 3538944 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:58:51.961690+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990280 data_alloc: 218103808 data_used: 270336
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84500480 unmapped: 3538944 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:58:52.961883+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84500480 unmapped: 3538944 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:58:53.962021+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84500480 unmapped: 3538944 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:58:54.962172+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84500480 unmapped: 3538944 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:58:55.962331+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84500480 unmapped: 3538944 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:58:56.962509+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990280 data_alloc: 218103808 data_used: 270336
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84500480 unmapped: 3538944 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:58:57.962681+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84500480 unmapped: 3538944 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:58:58.962827+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84500480 unmapped: 3538944 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:58:59.962993+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84500480 unmapped: 3538944 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:59:00.963163+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84500480 unmapped: 3538944 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:59:01.963397+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990280 data_alloc: 218103808 data_used: 270336
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84500480 unmapped: 3538944 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:59:02.963602+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84500480 unmapped: 3538944 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:59:03.963753+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84500480 unmapped: 3538944 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:59:04.963872+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84500480 unmapped: 3538944 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:59:05.964006+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84500480 unmapped: 3538944 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:59:06.964128+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Cumulative writes: 9187 writes, 35K keys, 9187 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
                                           Cumulative WAL: 9187 writes, 2104 syncs, 4.37 writes per sync, written: 0.02 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 776 writes, 1212 keys, 776 commit groups, 1.0 writes per commit group, ingest: 0.40 MB, 0.00 MB/s
                                           Interval WAL: 776 writes, 372 syncs, 2.09 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55fcdd7db350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 7.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55fcdd7db350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 7.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55fcdd7db350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 7.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55fcdd7db350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 7.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55fcdd7db350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 7.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55fcdd7db350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 7.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55fcdd7db350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 7.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55fcdd7da9b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 9e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55fcdd7da9b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 9e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.021       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.021       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.02              0.00         1    0.021       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55fcdd7da9b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 9e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55fcdd7db350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 7.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55fcdd7db350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 7.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990280 data_alloc: 218103808 data_used: 270336
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84500480 unmapped: 3538944 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:59:07.964303+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84500480 unmapped: 3538944 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:59:08.964447+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84500480 unmapped: 3538944 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:59:09.964610+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84500480 unmapped: 3538944 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:59:10.964775+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84500480 unmapped: 3538944 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:59:11.965036+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990280 data_alloc: 218103808 data_used: 270336
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84500480 unmapped: 3538944 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:59:12.965175+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84500480 unmapped: 3538944 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:59:13.965310+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84500480 unmapped: 3538944 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:59:14.965450+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 146 ms_handle_reset con 0x55fce212a400 session 0x55fce2304000
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 146 ms_handle_reset con 0x55fce2128400 session 0x55fce232eb40
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84500480 unmapped: 3538944 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:59:15.965585+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84508672 unmapped: 3530752 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:59:16.965701+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990280 data_alloc: 218103808 data_used: 270336
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84508672 unmapped: 3530752 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:59:17.965803+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84508672 unmapped: 3530752 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:59:18.965924+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:59:19.966062+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84508672 unmapped: 3530752 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:59:20.966192+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84508672 unmapped: 3530752 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:59:21.966375+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84508672 unmapped: 3530752 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990280 data_alloc: 218103808 data_used: 270336
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:59:22.966538+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84508672 unmapped: 3530752 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:59:23.966733+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84508672 unmapped: 3530752 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:59:24.966904+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84508672 unmapped: 3530752 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:59:25.967079+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84508672 unmapped: 3530752 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce212f800
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 114.850975037s of 114.855049133s, submitted: 1
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:59:26.967245+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84508672 unmapped: 3530752 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990412 data_alloc: 218103808 data_used: 270336
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:59:27.967454+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84508672 unmapped: 3530752 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:59:28.967676+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84508672 unmapped: 3530752 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:59:29.967863+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84508672 unmapped: 3530752 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:59:30.968079+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84508672 unmapped: 3530752 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:59:31.968297+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84508672 unmapped: 3530752 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990412 data_alloc: 218103808 data_used: 270336
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce1f87400
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:59:32.968538+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84508672 unmapped: 3530752 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:59:33.968722+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84508672 unmapped: 3530752 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:59:34.968893+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84508672 unmapped: 3530752 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:59:35.969026+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84508672 unmapped: 3530752 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:59:36.969172+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84508672 unmapped: 3530752 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991924 data_alloc: 218103808 data_used: 270336
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:59:37.969306+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84516864 unmapped: 3522560 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.075698853s of 12.083848000s, submitted: 2
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:59:38.969555+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84516864 unmapped: 3522560 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:59:39.969693+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84516864 unmapped: 3522560 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:59:40.969819+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84516864 unmapped: 3522560 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:59:41.969984+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84516864 unmapped: 3522560 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991201 data_alloc: 218103808 data_used: 270336
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:59:42.970164+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84516864 unmapped: 3522560 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:59:43.970343+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84516864 unmapped: 3522560 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:59:44.970495+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84516864 unmapped: 3522560 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:59:45.970682+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84516864 unmapped: 3522560 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:59:46.970833+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84516864 unmapped: 3522560 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991201 data_alloc: 218103808 data_used: 270336
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:59:47.970988+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84516864 unmapped: 3522560 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:59:48.971174+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84516864 unmapped: 3522560 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:59:49.971330+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84516864 unmapped: 3522560 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:59:50.971462+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84516864 unmapped: 3522560 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:59:51.971674+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84516864 unmapped: 3522560 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991201 data_alloc: 218103808 data_used: 270336
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:59:52.971845+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84516864 unmapped: 3522560 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:59:53.972016+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84516864 unmapped: 3522560 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:59:54.972162+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84516864 unmapped: 3522560 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:59:55.972350+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84516864 unmapped: 3522560 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:59:56.972563+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84516864 unmapped: 3522560 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991201 data_alloc: 218103808 data_used: 270336
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:59:57.972699+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84516864 unmapped: 3522560 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:59:58.972848+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84516864 unmapped: 3522560 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread fragmentation_score=0.000032 took=0.000044s
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T09:59:59.972991+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84516864 unmapped: 3522560 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:00:00.973118+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84516864 unmapped: 3522560 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:00:01.973344+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84516864 unmapped: 3522560 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991201 data_alloc: 218103808 data_used: 270336
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:00:02.973575+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84516864 unmapped: 3522560 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:00:03.973720+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84516864 unmapped: 3522560 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:00:04.973846+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84516864 unmapped: 3522560 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:00:05.973991+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84516864 unmapped: 3522560 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:00:06.974115+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84525056 unmapped: 3514368 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991201 data_alloc: 218103808 data_used: 270336
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:00:07.974256+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84525056 unmapped: 3514368 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:00:08.974393+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84525056 unmapped: 3514368 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:00:09.974537+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84525056 unmapped: 3514368 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:00:10.974691+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84525056 unmapped: 3514368 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:00:11.974924+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84525056 unmapped: 3514368 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:00:12.975060+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991201 data_alloc: 218103808 data_used: 270336
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84525056 unmapped: 3514368 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:00:13.975206+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84525056 unmapped: 3514368 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:00:14.975369+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84525056 unmapped: 3514368 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:00:15.975552+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84525056 unmapped: 3514368 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:00:16.975681+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84525056 unmapped: 3514368 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:00:17.975871+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991201 data_alloc: 218103808 data_used: 270336
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84525056 unmapped: 3514368 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:00:18.976034+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84525056 unmapped: 3514368 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:00:19.976235+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84525056 unmapped: 3514368 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:00:20.976420+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84525056 unmapped: 3514368 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:00:21.976686+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84525056 unmapped: 3514368 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:00:22.976843+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991201 data_alloc: 218103808 data_used: 270336
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84525056 unmapped: 3514368 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:00:23.977039+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84525056 unmapped: 3514368 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:00:24.977204+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84525056 unmapped: 3514368 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:00:25.977368+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84525056 unmapped: 3514368 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:00:26.977565+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84525056 unmapped: 3514368 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:00:27.977720+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991201 data_alloc: 218103808 data_used: 270336
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84525056 unmapped: 3514368 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:00:28.977851+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84525056 unmapped: 3514368 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:00:29.978035+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84525056 unmapped: 3514368 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:00:30.978228+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84525056 unmapped: 3514368 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:00:31.978424+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84525056 unmapped: 3514368 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:00:32.978724+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991201 data_alloc: 218103808 data_used: 270336
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84525056 unmapped: 3514368 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:00:33.978868+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84525056 unmapped: 3514368 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:00:34.979049+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84525056 unmapped: 3514368 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:00:35.979220+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84525056 unmapped: 3514368 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:00:36.979403+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84525056 unmapped: 3514368 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:00:37.979550+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991201 data_alloc: 218103808 data_used: 270336
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84525056 unmapped: 3514368 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:00:38.979702+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84533248 unmapped: 3506176 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:00:39.979862+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84533248 unmapped: 3506176 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:00:40.980061+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84533248 unmapped: 3506176 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:00:41.980255+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84533248 unmapped: 3506176 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:00:42.980433+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991201 data_alloc: 218103808 data_used: 270336
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84533248 unmapped: 3506176 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:00:43.980664+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84533248 unmapped: 3506176 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:00:44.980812+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84533248 unmapped: 3506176 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:00:45.981042+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84533248 unmapped: 3506176 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:00:46.981235+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84533248 unmapped: 3506176 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:00:47.981380+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991201 data_alloc: 218103808 data_used: 270336
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84533248 unmapped: 3506176 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:00:48.981583+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84533248 unmapped: 3506176 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:00:49.981756+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84533248 unmapped: 3506176 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:00:50.981896+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84533248 unmapped: 3506176 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:00:51.982087+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84533248 unmapped: 3506176 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:00:52.982259+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991201 data_alloc: 218103808 data_used: 270336
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84533248 unmapped: 3506176 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:00:53.982413+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84533248 unmapped: 3506176 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:00:54.982558+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84533248 unmapped: 3506176 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:00:55.982729+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84533248 unmapped: 3506176 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:00:56.982871+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84533248 unmapped: 3506176 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:00:57.983013+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991201 data_alloc: 218103808 data_used: 270336
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84541440 unmapped: 3497984 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:00:58.983187+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84541440 unmapped: 3497984 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:00:59.983354+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84541440 unmapped: 3497984 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:01:00.983513+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84541440 unmapped: 3497984 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 146 ms_handle_reset con 0x55fce1f87400 session 0x55fce23ebe00
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 146 ms_handle_reset con 0x55fce212f800 session 0x55fce0f85680
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:01:01.983689+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84541440 unmapped: 3497984 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:01:02.983843+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991201 data_alloc: 218103808 data_used: 270336
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84541440 unmapped: 3497984 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:01:03.984018+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84541440 unmapped: 3497984 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:01:04.984213+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84541440 unmapped: 3497984 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:01:05.984418+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84541440 unmapped: 3497984 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:01:06.984583+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84541440 unmapped: 3497984 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:01:07.984774+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991201 data_alloc: 218103808 data_used: 270336
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84541440 unmapped: 3497984 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:01:08.984920+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84541440 unmapped: 3497984 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:01:09.985101+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84541440 unmapped: 3497984 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:01:10.985255+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84541440 unmapped: 3497984 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:01:11.985447+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84541440 unmapped: 3497984 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce2068800
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 88.896926880s of 93.672317505s, submitted: 2
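
The _kv_sync_thread utilization line gives a duty-cycle figure for the BlueStore kv commit thread (a second one appears further down with a ~12 s window); "submitted" I read as the number of kv batches committed in the window. Converting idle time into a busy percentage, a trivial sketch:

    import re

    line = ("bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: "
            "idle 88.896926880s of 93.672317505s, submitted: 2")
    idle, window = (float(x) for x in
                    re.search(r"idle ([\d.]+)s of ([\d.]+)s", line).groups())
    print(f"kv sync thread busy {1 - idle / window:.2%} of the last {window:.0f}s")
    # ~5.10% busy with only 2 submissions, again consistent with a near-idle OSD
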
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:01:12.985602+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991333 data_alloc: 218103808 data_used: 270336
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84541440 unmapped: 3497984 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:01:13.985749+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84541440 unmapped: 3497984 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:01:14.985909+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84541440 unmapped: 3497984 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:01:15.986050+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84541440 unmapped: 3497984 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:01:16.986196+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84549632 unmapped: 3489792 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:01:17.986333+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992845 data_alloc: 218103808 data_used: 270336
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84549632 unmapped: 3489792 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce2135800
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:01:18.986545+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84549632 unmapped: 3489792 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:01:19.986737+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84549632 unmapped: 3489792 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:01:20.986904+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84549632 unmapped: 3489792 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:01:21.987093+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84549632 unmapped: 3489792 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:01:22.987261+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992845 data_alloc: 218103808 data_used: 270336
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84549632 unmapped: 3489792 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:01:23.987452+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84549632 unmapped: 3489792 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.079085350s of 12.086176872s, submitted: 2
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:01:24.987653+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84549632 unmapped: 3489792 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:01:25.987791+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84549632 unmapped: 3489792 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:01:26.987994+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84549632 unmapped: 3489792 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:01:27.988187+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992122 data_alloc: 218103808 data_used: 270336
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84549632 unmapped: 3489792 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:01:28.988438+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84549632 unmapped: 3489792 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:01:29.988598+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 84549632 unmapped: 3489792 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:01:30.989057+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85827584 unmapped: 2211840 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:01:31.989226+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:01:32.989415+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992122 data_alloc: 218103808 data_used: 270336
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:01:33.989668+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:01:34.989851+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:01:35.989981+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:01:36.990179+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:01:37.990251+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992122 data_alloc: 218103808 data_used: 270336
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:01:38.990382+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:01:39.990570+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:01:40.990730+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:01:41.990936+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:01:42.991063+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992122 data_alloc: 218103808 data_used: 270336
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:01:43.991264+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:01:44.991407+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:01:45.991551+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:01:46.991729+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:01:47.991987+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992122 data_alloc: 218103808 data_used: 270336
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:01:48.992121+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:01:49.992302+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:01:50.992525+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:01:51.992707+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:01:52.992839+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992122 data_alloc: 218103808 data_used: 270336
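[annotation] The rocksdb/mempool trio above recurs on each mempool-thread pass. The two logged high-priority pool ratios are clean fractions once un-rounded (0.285714 is 2/7, 0.0555556 is 1/18 at six significant digits), and the _resize_shards allocations account for almost all of the 2845415832-byte cache. A quick breakdown, values copied from the lines above:

```python
from fractions import Fraction

# The two logged high-pri pool ratios are clean fractions once un-rounded.
for r in (0.285714, 0.0555556):
    print(r, "~", Fraction(r).limit_denominator(100))  # 2/7 and 1/18

# Per-shard allocations from the _resize_shards line above.
cache_size = 2845415832
alloc = {
    "kv":       1207959552,
    "kv_onode":  234881024,
    "meta":     1140850688,
    "data":      218103808,
}
for name, nbytes in alloc.items():
    print(f"{name:9s} {nbytes / 2**20:7.0f} MiB  ({nbytes / cache_size:5.1%})")
print(f"allocated {sum(alloc.values()) / cache_size:.1%} of cache_size")
```

The split comes out to roughly 42% kv, 40% metadata, 8% onode and 8% data, about 98.5% of the cache budget; the actual *_used figures (a few hundred bytes to ~1 MB) show the shards are nearly empty.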
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:01:53.992975+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:01:54.993122+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:01:55.993269+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:01:56.993421+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:01:57.993564+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992122 data_alloc: 218103808 data_used: 270336
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:01:58.993725+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:01:59.993868+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:02:00.994014+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:02:01.994257+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:02:02.994441+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992122 data_alloc: 218103808 data_used: 270336
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:02:03.994557+0000)
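[annotation] The monclient triple (tick, _check_auth_tickets, _check_auth_rotating) fires once per second: the rotating-secrets expiry stamp advances by almost exactly one second per occurrence, even though every journal line carries the same 10:27:30 wall-clock stamp, which is consistent with the journal flushing this debug burst in one batch. The cadence can be read straight off two consecutive expiry stamps from the lines above:

```python
from datetime import datetime

# Expiry stamps from two consecutive _check_auth_rotating lines above
# ("+0000" normalized to "+00:00" for fromisoformat).
t1 = datetime.fromisoformat("2025-12-06T10:02:02.994441+00:00")
t2 = datetime.fromisoformat("2025-12-06T10:02:03.994557+00:00")

# The expiry slides forward by one tick interval each time, so the
# delta between stamps is the monclient tick period.
print(t2 - t1)  # 0:00:01.000116
```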
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:02:04.994696+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:02:05.994869+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:02:06.995087+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:02:07.995264+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992122 data_alloc: 218103808 data_used: 270336
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:02:08.995420+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:02:09.995573+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:02:10.995742+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:02:11.995983+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:02:12.996164+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992122 data_alloc: 218103808 data_used: 270336
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:02:13.996342+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 146 ms_handle_reset con 0x55fcdf1d9800 session 0x55fce0e87c20
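[annotation] The ms_handle_reset line above is one of the few non-periodic events in this stretch: a messenger connection for one session was reset and will be re-established, which is routine on its own. Given how much once-per-second chatter surrounds events like this, a small filter that mutes the recurring message types makes the rare lines easy to spot. A sketch under the assumption that the prefixes below (taken from this log) cover all the periodic traffic; any other deployment would need its own list:

```python
import sys

# Message prefixes that recur every second in this log and can be muted.
PERIODIC = (
    "prioritycache tune_memory",
    "monclient: tick",
    "monclient: _check_auth_tickets",
    "monclient: _check_auth_rotating",
    "osd.1 146 heartbeat",
    "rocksdb: commit_cache_size",
    "bluestore.MempoolThread _resize_shards",
)

def interesting(line: str) -> bool:
    """Keep only lines whose payload is not one of the periodic messages."""
    # Journal format: "... ceph-osd[82803]: <payload>"
    payload = line.split("]: ", 1)[-1]
    return not payload.startswith(PERIODIC)

for line in sys.stdin:
    if interesting(line):
        sys.stdout.write(line)
```

Piped through this section, only the ms_handle_reset, handle_auth_request, and _kv_sync_thread lines survive.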
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:02:14.996539+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:02:15.996707+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:02:16.996891+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:02:17.997057+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992122 data_alloc: 218103808 data_used: 270336
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:02:18.997165+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:02:19.997295+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:02:20.997451+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:02:21.997695+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:02:22.997843+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992122 data_alloc: 218103808 data_used: 270336
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:02:23.998012+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:02:24.998154+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce1f87400
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 59.565917969s of 60.611633301s, submitted: 340
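[annotation] The _kv_sync_thread summary above is the only throughput signal in this window: the kv sync thread was idle 59.57 s of a 60.61 s interval while committing 340 transactions, i.e. under 2% busy at roughly 5-6 commits per second. The arithmetic, using the figures from the line above:

```python
idle, interval, submitted = 59.565917969, 60.611633301, 340

busy = interval - idle
print(f"busy: {busy:.3f} s ({busy / interval:.1%} of the interval)")
print(f"commit rate: {submitted / interval:.1f} tx/s")
# busy: 1.046 s (1.7% of the interval); commit rate: 5.6 tx/s
```

The shorter summary further down (idle 14.86 s of 14.89 s, 3 commits) reads the same way: a near-idle OSD.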
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:02:25.998293+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:02:26.998422+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:02:27.998573+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993766 data_alloc: 218103808 data_used: 270336
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:02:28.998765+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:02:29.998955+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:02:30.999072+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:02:31.999254+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:02:32.999360+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993766 data_alloc: 218103808 data_used: 270336
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:02:33.999548+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:02:34.999667+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:02:35.999797+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:02:36.999923+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:02:38.000099+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993175 data_alloc: 218103808 data_used: 270336
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:02:39.000292+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:02:40.000440+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.864642143s of 14.892098427s, submitted: 3
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:02:41.000631+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:02:42.000801+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:02:43.000934+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993043 data_alloc: 218103808 data_used: 270336
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:02:44.001098+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:02:45.001262+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:02:46.001448+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:02:47.001612+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:02:48.001798+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993043 data_alloc: 218103808 data_used: 270336
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:02:49.002008+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:02:50.002165+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:02:51.002296+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:02:52.002452+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:02:53.002664+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993043 data_alloc: 218103808 data_used: 270336
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:02:54.002871+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:02:55.003052+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:02:56.003253+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:02:57.003397+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:02:58.003534+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993043 data_alloc: 218103808 data_used: 270336
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:02:59.003684+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:03:00.003853+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:03:01.004000+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:03:02.004149+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:03:03.004320+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993043 data_alloc: 218103808 data_used: 270336
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:03:04.004539+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:03:05.004701+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:03:06.004839+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:03:07.004974+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:03:08.005159+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993043 data_alloc: 218103808 data_used: 270336
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:03:09.005346+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:03:10.005537+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:03:11.005690+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:03:12.005862+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:03:13.005991+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993043 data_alloc: 218103808 data_used: 270336
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:03:14.006146+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:03:15.006300+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:03:16.006430+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:03:17.006549+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:03:18.006697+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993043 data_alloc: 218103808 data_used: 270336
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:03:19.006896+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:03:20.007037+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:03:21.007315+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:03:22.007622+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:03:23.007819+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993043 data_alloc: 218103808 data_used: 270336
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:03:24.008002+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:03:25.008182+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:03:26.008445+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:03:27.008705+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:03:28.008910+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993043 data_alloc: 218103808 data_used: 270336
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:03:29.009122+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:03:30.009393+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:03:31.009617+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:03:32.010001+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:03:33.010215+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993043 data_alloc: 218103808 data_used: 270336
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:03:34.010409+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:03:35.010588+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:03:36.010740+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:03:37.010901+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:03:38.011090+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993043 data_alloc: 218103808 data_used: 270336
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:03:39.011316+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:03:40.011499+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:03:41.011774+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:03:42.012090+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:03:43.012332+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993043 data_alloc: 218103808 data_used: 270336
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:03:44.012646+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:03:45.012921+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:03:46.013186+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:03:47.013460+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:03:48.013703+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993043 data_alloc: 218103808 data_used: 270336
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:03:49.013968+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:03:50.014245+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:03:51.014639+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:03:52.014936+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:03:53.015171+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993043 data_alloc: 218103808 data_used: 270336
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:03:54.015396+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:03:55.015645+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:03:56.015907+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:03:57.016234+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:03:58.016602+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993043 data_alloc: 218103808 data_used: 270336
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:03:59.016887+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:04:00.017247+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:04:01.017510+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:04:02.017733+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:04:03.017945+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993043 data_alloc: 218103808 data_used: 270336
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:04:04.018161+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:04:05.018369+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:04:06.018616+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:04:07.018858+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1395: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:04:08.019155+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993043 data_alloc: 218103808 data_used: 270336
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:04:09.019387+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:04:10.019606+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc5e0000/0x0/0x4ffc00000, data 0x16f4db/0x22c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:04:11.019828+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:04:12.020080+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:04:13.020301+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 146 handle_osd_map epochs [146,147], i have 146, src has [1,147]
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 93.336196899s of 93.339126587s, submitted: 1
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996809 data_alloc: 218103808 data_used: 270336
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:04:14.020542+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 2121728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 147 handle_osd_map epochs [147,148], i have 147, src has [1,148]
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce2128400
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:04:15.020820+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 87080960 unmapped: 18792448 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fbdd8000/0x0/0x4ffc00000, data 0x973707/0xa32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 148 ms_handle_reset con 0x55fce2128400 session 0x55fcdf1c2b40
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:04:16.021151+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 87089152 unmapped: 18784256 heap: 105873408 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce212a400
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _renew_subs
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 148 handle_osd_map epochs [149,149], i have 148, src has [1,149]
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 149 heartbeat osd_stat(store_statfs(0x4fbdd8000/0x0/0x4ffc00000, data 0x973707/0xa32000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [1])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:04:17.021331+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 87105536 unmapped: 27164672 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:04:18.021457+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _renew_subs
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 149 handle_osd_map epochs [150,150], i have 149, src has [1,150]
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 87113728 unmapped: 27156480 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fb5d5000/0x0/0x4ffc00000, data 0x1175832/0x1236000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 150 ms_handle_reset con 0x55fce212a400 session 0x55fcdfbad860
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1115338 data_alloc: 218103808 data_used: 270336
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:04:19.021579+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 87146496 unmapped: 27123712 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:04:20.021725+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 87146496 unmapped: 27123712 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:04:21.021863+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fb5d1000/0x0/0x4ffc00000, data 0x117793a/0x1239000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 87146496 unmapped: 27123712 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:04:22.022035+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 87146496 unmapped: 27123712 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 150 handle_osd_map epochs [150,151], i have 150, src has [1,151]
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:04:23.022159+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 88195072 unmapped: 26075136 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1117172 data_alloc: 218103808 data_used: 270336
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:04:24.022297+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 88195072 unmapped: 26075136 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:04:25.022570+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 88195072 unmapped: 26075136 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb5cf000/0x0/0x4ffc00000, data 0x117990c/0x123c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:04:26.022729+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 88195072 unmapped: 26075136 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb5cf000/0x0/0x4ffc00000, data 0x117990c/0x123c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:04:27.022878+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 88195072 unmapped: 26075136 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:04:28.023010+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 88195072 unmapped: 26075136 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1117172 data_alloc: 218103808 data_used: 270336
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:04:29.023202+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 88195072 unmapped: 26075136 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb5cf000/0x0/0x4ffc00000, data 0x117990c/0x123c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:04:30.023372+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 88195072 unmapped: 26075136 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:04:31.023550+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 88195072 unmapped: 26075136 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb5cf000/0x0/0x4ffc00000, data 0x117990c/0x123c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:04:32.023818+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 88195072 unmapped: 26075136 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb5cf000/0x0/0x4ffc00000, data 0x117990c/0x123c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:04:33.023974+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 88195072 unmapped: 26075136 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1117172 data_alloc: 218103808 data_used: 270336
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:04:34.024165+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 88195072 unmapped: 26075136 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:04:35.024335+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 88195072 unmapped: 26075136 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:04:36.024494+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 151 ms_handle_reset con 0x55fce1f8a000 session 0x55fcdfbb9e00
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 151 ms_handle_reset con 0x55fce2116000 session 0x55fce1c6cb40
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 88195072 unmapped: 26075136 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:04:37.024646+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 88195072 unmapped: 26075136 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb5cf000/0x0/0x4ffc00000, data 0x117990c/0x123c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:04:38.024782+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 88195072 unmapped: 26075136 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:04:39.024966+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1117172 data_alloc: 218103808 data_used: 270336
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 88195072 unmapped: 26075136 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:04:40.025177+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 88195072 unmapped: 26075136 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 151 ms_handle_reset con 0x55fce1f87400 session 0x55fcdfbac1e0
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:04:41.025383+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 88195072 unmapped: 26075136 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:04:42.025596+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 88195072 unmapped: 26075136 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:04:43.025750+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 88195072 unmapped: 26075136 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb5cf000/0x0/0x4ffc00000, data 0x117990c/0x123c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:04:44.025903+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1117172 data_alloc: 218103808 data_used: 270336
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 88195072 unmapped: 26075136 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:04:45.026138+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 88195072 unmapped: 26075136 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:04:46.026319+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 88195072 unmapped: 26075136 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce1f8a000
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 33.056060791s of 33.534233093s, submitted: 52
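_kv_sync_thread is the BlueStore thread that commits transactions to RocksDB; it logs its idle time over each sampling window, so the busy fraction is just 1 - idle/window. With the values from the line above:

```python
# Busy fraction of the kv-sync thread over its sampling window; the
# numbers are copied from the utilization line above.
idle, window, submitted = 33.056060791, 33.534233093, 52

busy = 1 - idle / window
print(f"busy {busy:.2%} of the window, "
      f"{submitted / window:.1f} txns/s submitted")
```

This instance works out to about 1.4% busy at roughly 1.6 transactions per second, and the later utilization lines in this burst all stay below about 2%.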
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:04:47.026527+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 88211456 unmapped: 26058752 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb5cf000/0x0/0x4ffc00000, data 0x117990c/0x123c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:04:48.026688+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 88211456 unmapped: 26058752 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:04:49.026853+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1117304 data_alloc: 218103808 data_used: 270336
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 88211456 unmapped: 26058752 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:04:50.027026+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 88211456 unmapped: 26058752 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb5cf000/0x0/0x4ffc00000, data 0x117990c/0x123c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:04:51.027205+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 88211456 unmapped: 26058752 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce2116000
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb5d0000/0x0/0x4ffc00000, data 0x117990c/0x123c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:04:52.027434+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 88211456 unmapped: 26058752 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:04:53.027671+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 88211456 unmapped: 26058752 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce2128400
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:04:54.027825+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1118108 data_alloc: 218103808 data_used: 270336
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 88211456 unmapped: 26058752 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:04:55.028020+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 88211456 unmapped: 26058752 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:04:56.028277+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 88211456 unmapped: 26058752 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:04:57.028468+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 88211456 unmapped: 26058752 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce212a400
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb5d0000/0x0/0x4ffc00000, data 0x117990c/0x123c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:04:58.028676+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb5d0000/0x0/0x4ffc00000, data 0x117990c/0x123c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 88211456 unmapped: 26058752 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb5d0000/0x0/0x4ffc00000, data 0x117990c/0x123c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:04:59.028832+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1118108 data_alloc: 218103808 data_used: 270336
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 88211456 unmapped: 26058752 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.519258499s of 12.529978752s, submitted: 3
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:05:00.028978+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 88211456 unmapped: 26058752 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:05:01.029112+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 88211456 unmapped: 26058752 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:05:02.029271+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 88211456 unmapped: 26058752 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:05:03.029422+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce212f800
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 151 ms_handle_reset con 0x55fce212f800 session 0x55fce2305a40
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce2136000
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 151 ms_handle_reset con 0x55fce2136000 session 0x55fce0f843c0
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce1f8b400
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 151 ms_handle_reset con 0x55fce1f8b400 session 0x55fce236e000
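The handle_auth_request / ms_handle_reset pairs above are the messenger side of cluster chatter: each inbound connection is issued a cephx challenge, and ms_handle_reset fires when the peer tears its session down. The same connection pointers (0x55fce212f800, 0x55fce2136000, ...) recur throughout this burst as connection objects are reused. A small tally of resets per pointer makes the churn visible; a sketch, with the regex copied from the line format:

```python
# Sketch: tally ms_handle_reset events per connection pointer to spot
# peers that churn sessions. The regex mirrors the log lines above.
import re
import sys
from collections import Counter

PAT = re.compile(r"ms_handle_reset con (0x[0-9a-f]+)")

resets = Counter(m.group(1) for line in sys.stdin
                 if (m := PAT.search(line)))
for con, n in resets.most_common():
    print(f"{con}: {n} resets")
```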
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 88227840 unmapped: 26042368 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:05:04.029654+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb5d0000/0x0/0x4ffc00000, data 0x117990c/0x123c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1120409 data_alloc: 218103808 data_used: 270336
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 88227840 unmapped: 26042368 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce1f88c00
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 151 ms_handle_reset con 0x55fce1f88c00 session 0x55fce1e9ba40
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:05:05.029851+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 96247808 unmapped: 18022400 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce2069400
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 151 ms_handle_reset con 0x55fce2069400 session 0x55fce23052c0
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce1f88c00
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _renew_subs
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 151 handle_osd_map epochs [152,152], i have 151, src has [1,152]
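This cluster shows a map update arriving: the monclient renews its subscriptions and messages mon.compute-0 on its msgr2 address (v2:...:3300, the standard msgr2 port), and handle_osd_map then delivers epoch 152 to an OSD that currently holds 151. The number after "osd.1" in each prefix is the OSD's current map epoch, and it can be seen stepping 151 -> 152 -> 153 -> 154 across the remainder of this burst. A sketch that reads the lag straight off these lines:

```python
# Sketch: parse handle_osd_map lines and report how far the OSD trails
# the newest epoch its peer advertises (format copied from the log).
import re
import sys

PAT = re.compile(r"handle_osd_map epochs \[(\d+),(\d+)\], i have (\d+), "
                 r"src has \[(\d+),(\d+)\]")

for line in sys.stdin:
    if m := PAT.search(line):
        first, last, have, src_lo, src_hi = map(int, m.groups())
        print(f"received [{first},{last}], "
              f"lag before apply: {src_hi - have}")
```

For the line above it reports a lag of 1, i.e. the OSD is only one epoch behind and catches up immediately.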
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:05:06.030062+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 96231424 unmapped: 18038784 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 152 handle_osd_map epochs [152,153], i have 152, src has [1,153]
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 153 ms_handle_reset con 0x55fce1f88c00 session 0x55fce23c0b40
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce1f8b400
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 153 ms_handle_reset con 0x55fce1f8b400 session 0x55fcdf19e5a0
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce212f800
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 153 ms_handle_reset con 0x55fce212f800 session 0x55fce23c1c20
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce2136000
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:05:07.030200+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 153 ms_handle_reset con 0x55fce2136000 session 0x55fce0f87860
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce1f88400
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 153 ms_handle_reset con 0x55fce1f88400 session 0x55fcdfbb63c0
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 96567296 unmapped: 17702912 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce1f88c00
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 153 ms_handle_reset con 0x55fce1f88c00 session 0x55fce23bda40
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:05:08.030330+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 96567296 unmapped: 17702912 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:05:09.030500+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1177895 data_alloc: 218103808 data_used: 7086080
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce1f8b400
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 153 ms_handle_reset con 0x55fce1f8b400 session 0x55fce112d680
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 96567296 unmapped: 17702912 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 153 heartbeat osd_stat(store_statfs(0x4fb1fc000/0x0/0x4ffc00000, data 0x1547bbd/0x160f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:05:10.030645+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce212f800
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 153 ms_handle_reset con 0x55fce212f800 session 0x55fce23c01e0
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 96567296 unmapped: 17702912 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce2136000
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.694509506s of 10.920597076s, submitted: 65
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 153 ms_handle_reset con 0x55fce2136000 session 0x55fce1e9a960
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:05:11.030810+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 153 heartbeat osd_stat(store_statfs(0x4fb1d8000/0x0/0x4ffc00000, data 0x156bbcd/0x1634000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce2136400
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce1f86c00
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 96911360 unmapped: 17358848 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:05:12.031001+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 153 heartbeat osd_stat(store_statfs(0x4fb1d8000/0x0/0x4ffc00000, data 0x156bbcd/0x1634000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 96911360 unmapped: 17358848 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 153 handle_osd_map epochs [153,154], i have 153, src has [1,154]
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:05:13.031144+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 100425728 unmapped: 13844480 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:05:14.031285+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1211407 data_alloc: 234881024 data_used: 11067392
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 100425728 unmapped: 13844480 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:05:15.031427+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 100425728 unmapped: 13844480 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:05:16.031564+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fb1d4000/0x0/0x4ffc00000, data 0x156db9f/0x1637000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 100458496 unmapped: 13811712 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:05:17.032250+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce2128400 session 0x55fcdf1e0b40
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce1f8a000 session 0x55fce22ff4a0
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 100458496 unmapped: 13811712 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:05:18.032526+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 100458496 unmapped: 13811712 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:05:19.032728+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1211407 data_alloc: 234881024 data_used: 11067392
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 100458496 unmapped: 13811712 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:05:20.033505+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 100458496 unmapped: 13811712 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:05:21.033666+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 100458496 unmapped: 13811712 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fb1d4000/0x0/0x4ffc00000, data 0x156db9f/0x1637000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:05:22.033864+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 100458496 unmapped: 13811712 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:05:23.034014+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.842867851s of 12.866385460s, submitted: 19
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 101531648 unmapped: 12738560 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:05:24.034186+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1239575 data_alloc: 234881024 data_used: 11247616
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 102121472 unmapped: 12148736 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:05:25.034315+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 101711872 unmapped: 12558336 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:05:26.034532+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 101711872 unmapped: 12558336 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fae89000/0x0/0x4ffc00000, data 0x18b9b9f/0x1983000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:05:27.034685+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 101711872 unmapped: 12558336 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce1f88c00
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:05:28.034887+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 101711872 unmapped: 12558336 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:05:29.035042+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1243807 data_alloc: 234881024 data_used: 11247616
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 101711872 unmapped: 12558336 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:05:30.035189+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 101711872 unmapped: 12558336 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:05:31.035324+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 101720064 unmapped: 12550144 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:05:32.035523+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 101720064 unmapped: 12550144 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fae89000/0x0/0x4ffc00000, data 0x18b9b9f/0x1983000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:05:33.035711+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 101720064 unmapped: 12550144 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce1f8b400
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.411628723s of 10.562047958s, submitted: 38
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:05:34.035992+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1247415 data_alloc: 234881024 data_used: 11251712
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 101720064 unmapped: 12550144 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:05:35.036202+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 101720064 unmapped: 12550144 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:05:36.036398+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 101720064 unmapped: 12550144 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:05:37.036556+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 101720064 unmapped: 12550144 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:05:38.036686+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 101720064 unmapped: 12550144 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fae89000/0x0/0x4ffc00000, data 0x18b9b9f/0x1983000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:05:39.036836+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1246824 data_alloc: 234881024 data_used: 11251712
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 101720064 unmapped: 12550144 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fae89000/0x0/0x4ffc00000, data 0x18b9b9f/0x1983000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:05:40.036988+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 101720064 unmapped: 12550144 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:05:41.037222+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 101720064 unmapped: 12550144 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:05:42.037445+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 101728256 unmapped: 12541952 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:05:43.037608+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 101728256 unmapped: 12541952 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:05:44.037755+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1246101 data_alloc: 234881024 data_used: 11251712
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 101728256 unmapped: 12541952 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce212f800
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce212f800 session 0x55fce112c3c0
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:05:45.037948+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 102498304 unmapped: 11771904 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce2131800
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.780336380s of 11.795572281s, submitted: 4
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fae89000/0x0/0x4ffc00000, data 0x18b9b9f/0x1983000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
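Amid the write burst, RocksDB seals the active memtable and starts a fresh one backed by WAL file #43; "Immutable memtables: 0" says no sealed memtables are still waiting to flush. Tracking the WAL numbers over time gives a rough sense of rotation cadence; a sketch matching this line format:

```python
# Sketch: track memtable rotations by WAL log number; matches the
# "New memtable created with log file: #N. Immutable memtables: M"
# lines emitted by RocksDB.
import re
import sys

PAT = re.compile(r"New memtable created with log file: #(\d+)\. "
                 r"Immutable memtables: (\d+)")

for line in sys.stdin:
    if m := PAT.search(line):
        wal, imm = map(int, m.groups())
        print(f"rotated to WAL #{wal}, "
              f"{imm} sealed memtable(s) pending flush")
```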
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce2131800 session 0x55fce0f852c0
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce2136c00
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce2136c00 session 0x55fce1e9be00
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce1f86400
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce1f86400 session 0x55fce0f803c0
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:05:46.038108+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce1f8a000
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce1f8a000 session 0x55fce1a01a40
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce212d800
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce212d800 session 0x55fcdeddcf00
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9b53000/0x0/0x4ffc00000, data 0x1a4fb9f/0x1b19000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 104767488 unmapped: 9502720 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9b53000/0x0/0x4ffc00000, data 0x1a4fb9f/0x1b19000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:05:47.038299+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 104767488 unmapped: 9502720 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:05:48.038437+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 104775680 unmapped: 9494528 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:05:49.038630+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1312002 data_alloc: 234881024 data_used: 11780096
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 104775680 unmapped: 9494528 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:05:50.038797+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 104775680 unmapped: 9494528 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f94d0000/0x0/0x4ffc00000, data 0x20d2b9f/0x219c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce1128000
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce1128000 session 0x55fce1f0b2c0
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:05:51.038981+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 104775680 unmapped: 9494528 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:05:52.039187+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 104775680 unmapped: 9494528 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fcdf8c8400
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fcdf8c8400 session 0x55fce245d4a0
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:05:53.039346+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f94d0000/0x0/0x4ffc00000, data 0x20d2b9f/0x219c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 104775680 unmapped: 9494528 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:05:54.039525+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce1efd800
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce1efd800 session 0x55fce0f841e0
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fcdf8c8400
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fcdf8c8400 session 0x55fcdf1e0f00
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1314613 data_alloc: 234881024 data_used: 11780096
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 104800256 unmapped: 9469952 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce1128000
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce1f8a000
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:05:55.039676+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f94ce000/0x0/0x4ffc00000, data 0x20d2bd2/0x219e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 104800256 unmapped: 9469952 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:05:56.039803+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 111697920 unmapped: 2572288 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:05:57.039971+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 111706112 unmapped: 2564096 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:05:58.040104+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 111706112 unmapped: 2564096 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:05:59.040341+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1364145 data_alloc: 234881024 data_used: 19132416
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 111706112 unmapped: 2564096 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:06:00.040526+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f94ce000/0x0/0x4ffc00000, data 0x20d2bd2/0x219e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 111706112 unmapped: 2564096 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:06:01.040696+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 111706112 unmapped: 2564096 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:06:02.040849+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 111706112 unmapped: 2564096 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:06:03.040995+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 111706112 unmapped: 2564096 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f94ce000/0x0/0x4ffc00000, data 0x20d2bd2/0x219e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:06:04.041136+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1364145 data_alloc: 234881024 data_used: 19132416
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 111738880 unmapped: 2531328 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:06:05.041281+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 111738880 unmapped: 2531328 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce1f8b400 session 0x55fce1c6de00
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce1f88c00 session 0x55fce1e64000
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:06:06.041421+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 20.690454483s of 20.819118500s, submitted: 30
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 113311744 unmapped: 958464 heap: 114270208 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:06:07.041591+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 114335744 unmapped: 3080192 heap: 117415936 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:06:08.041711+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 114704384 unmapped: 2711552 heap: 117415936 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:06:09.041895+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1418941 data_alloc: 234881024 data_used: 19755008
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f8eb6000/0x0/0x4ffc00000, data 0x26e9bd2/0x27b5000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 114737152 unmapped: 2678784 heap: 117415936 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:06:10.042069+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 114737152 unmapped: 2678784 heap: 117415936 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:06:11.042245+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f8eb6000/0x0/0x4ffc00000, data 0x26e9bd2/0x27b5000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 114769920 unmapped: 2646016 heap: 117415936 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:06:12.042457+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 114769920 unmapped: 2646016 heap: 117415936 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:06:13.042639+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 114417664 unmapped: 2998272 heap: 117415936 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:06:14.042833+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1417293 data_alloc: 234881024 data_used: 19755008
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 114417664 unmapped: 2998272 heap: 117415936 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:06:15.042966+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f8eb4000/0x0/0x4ffc00000, data 0x26ecbd2/0x27b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 114417664 unmapped: 2998272 heap: 117415936 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:06:16.043107+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f8eb4000/0x0/0x4ffc00000, data 0x26ecbd2/0x27b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,1])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 114417664 unmapped: 2998272 heap: 117415936 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:06:17.043271+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.662245750s of 10.842704773s, submitted: 64
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce1128000 session 0x55fce0f80f00
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce1f8a000 session 0x55fce19c1a40
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 114417664 unmapped: 2998272 heap: 117415936 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fcdf8c8400
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce1128000
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:06:18.043387+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce1128000 session 0x55fce23bc3c0
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 109379584 unmapped: 8036352 heap: 117415936 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:06:19.043635+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9ce8000/0x0/0x4ffc00000, data 0x18b9b9f/0x1983000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1258778 data_alloc: 234881024 data_used: 11780096
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 109379584 unmapped: 8036352 heap: 117415936 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce212a400 session 0x55fcdfbad680
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce2116000 session 0x55fce1e65680
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:06:20.043792+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9ce8000/0x0/0x4ffc00000, data 0x18b9b9f/0x1983000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 109379584 unmapped: 8036352 heap: 117415936 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:06:21.043876+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 109379584 unmapped: 8036352 heap: 117415936 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:06:22.044109+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 109379584 unmapped: 8036352 heap: 117415936 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:06:23.044268+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce2136400 session 0x55fcdf19fe00
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce1f86c00 session 0x55fce1c6d2c0
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 109387776 unmapped: 8028160 heap: 117415936 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce1128000
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce2116000
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9ce8000/0x0/0x4ffc00000, data 0x18b9b9f/0x1983000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [0,0,0,0,1])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:06:24.044436+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce1128000 session 0x55fce2305860
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1173167 data_alloc: 218103808 data_used: 7614464
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 105832448 unmapped: 11583488 heap: 117415936 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:06:25.044565+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 105832448 unmapped: 11583488 heap: 117415936 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:06:26.044794+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 105832448 unmapped: 11583488 heap: 117415936 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:06:27.044979+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 105832448 unmapped: 11583488 heap: 117415936 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fa425000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:06:28.045132+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 105832448 unmapped: 11583488 heap: 117415936 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:06:29.045347+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1173167 data_alloc: 218103808 data_used: 7614464
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 105832448 unmapped: 11583488 heap: 117415936 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:06:30.045576+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce212a400
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.095330238s of 13.319671631s, submitted: 68
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 105832448 unmapped: 11583488 heap: 117415936 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:06:31.045704+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fa425000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 105832448 unmapped: 11583488 heap: 117415936 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:06:32.045890+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 105832448 unmapped: 11583488 heap: 117415936 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:06:33.046040+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 105832448 unmapped: 11583488 heap: 117415936 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:06:34.046187+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fa426000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1172284 data_alloc: 218103808 data_used: 7614464
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 105832448 unmapped: 11583488 heap: 117415936 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:06:35.046367+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fa426000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 105832448 unmapped: 11583488 heap: 117415936 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:06:36.046507+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 105832448 unmapped: 11583488 heap: 117415936 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:06:37.046619+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 105832448 unmapped: 11583488 heap: 117415936 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:06:38.046793+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 105832448 unmapped: 11583488 heap: 117415936 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:06:39.046942+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1171693 data_alloc: 218103808 data_used: 7614464
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 105832448 unmapped: 11583488 heap: 117415936 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:06:40.047101+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fa426000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 105832448 unmapped: 11583488 heap: 117415936 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:06:41.047261+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 105832448 unmapped: 11583488 heap: 117415936 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:06:42.047473+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 105832448 unmapped: 11583488 heap: 117415936 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:06:43.047707+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 105832448 unmapped: 11583488 heap: 117415936 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:06:44.047878+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fa426000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1171693 data_alloc: 218103808 data_used: 7614464
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 105832448 unmapped: 11583488 heap: 117415936 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:06:45.048066+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 105832448 unmapped: 11583488 heap: 117415936 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:06:46.048233+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 15.733273506s of 15.748806000s, submitted: 4
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 105832448 unmapped: 11583488 heap: 117415936 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:06:47.048361+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 105832448 unmapped: 11583488 heap: 117415936 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:06:48.048615+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fa426000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 105832448 unmapped: 11583488 heap: 117415936 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:06:49.048763+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1171561 data_alloc: 218103808 data_used: 7614464
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 105832448 unmapped: 11583488 heap: 117415936 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:06:50.048939+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce2136400
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce2136400 session 0x55fce1f0a1e0
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce1f88c00
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce1f88c00 session 0x55fce1f0a5a0
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce1f8a000
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce1f8a000 session 0x55fce1f0b0e0
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce1f8b400
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce1f8b400 session 0x55fcdeddd860
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce1f8b400
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce1f8b400 session 0x55fce19ff680
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 106274816 unmapped: 27549696 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:06:51.049090+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f98e3000/0x0/0x4ffc00000, data 0x1cc2b1d/0x1d89000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 106274816 unmapped: 27549696 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:06:52.049333+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 106274816 unmapped: 27549696 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:06:53.049470+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 106274816 unmapped: 27549696 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce1128000
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce1128000 session 0x55fcdfbb7680
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:06:54.049643+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1260916 data_alloc: 218103808 data_used: 7614464
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce1f88c00
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce1f88c00 session 0x55fcdf1c2f00
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 106274816 unmapped: 27549696 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:06:55.049793+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 106274816 unmapped: 27549696 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f98e3000/0x0/0x4ffc00000, data 0x1cc2b1d/0x1d89000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:06:56.049938+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce1f8a000
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce1f8a000 session 0x55fcdfbb6d20
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce2136400
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce2136400 session 0x55fcdf1dad20
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce2136400
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.256204605s of 10.393723488s, submitted: 26
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 106504192 unmapped: 27320320 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce1128000
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:06:57.050099+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 106504192 unmapped: 27320320 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:06:58.050222+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 112394240 unmapped: 21430272 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:06:59.050377+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1340884 data_alloc: 234881024 data_used: 19124224
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 112394240 unmapped: 21430272 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:07:00.050521+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f98bf000/0x0/0x4ffc00000, data 0x1ce6b1d/0x1dad000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 112394240 unmapped: 21430272 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:07:01.050713+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f98bf000/0x0/0x4ffc00000, data 0x1ce6b1d/0x1dad000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 112427008 unmapped: 21397504 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:07:02.050877+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 112427008 unmapped: 21397504 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:07:03.051030+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 112427008 unmapped: 21397504 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:07:04.051168+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1340884 data_alloc: 234881024 data_used: 19124224
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 112427008 unmapped: 21397504 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:07:05.051315+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 112427008 unmapped: 21397504 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f98bf000/0x0/0x4ffc00000, data 0x1ce6b1d/0x1dad000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:07:06.051461+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 112427008 unmapped: 21397504 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:07:07.051655+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 112427008 unmapped: 21397504 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:07:08.051831+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 112427008 unmapped: 21397504 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.100857735s of 12.104346275s, submitted: 1
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:07:09.051953+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1407304 data_alloc: 234881024 data_used: 19488768
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 113336320 unmapped: 20488192 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:07:10.052262+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9033000/0x0/0x4ffc00000, data 0x2572b1d/0x2639000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 114958336 unmapped: 18866176 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:07:11.052395+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 114958336 unmapped: 18866176 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:07:12.052575+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 114958336 unmapped: 18866176 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:07:13.052762+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 114958336 unmapped: 18866176 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:07:14.052956+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1415702 data_alloc: 234881024 data_used: 19476480
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 114966528 unmapped: 18857984 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:07:15.053122+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 115261440 unmapped: 18563072 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:07:16.053271+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f8f94000/0x0/0x4ffc00000, data 0x2611b1d/0x26d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 115261440 unmapped: 18563072 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:07:17.053409+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 115261440 unmapped: 18563072 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:07:18.053525+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 115261440 unmapped: 18563072 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:07:19.053663+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1414798 data_alloc: 234881024 data_used: 19476480
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 115261440 unmapped: 18563072 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:07:20.053882+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 115261440 unmapped: 18563072 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:07:21.054072+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f8f94000/0x0/0x4ffc00000, data 0x2611b1d/0x26d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.115612030s of 12.360255241s, submitted: 80
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 115318784 unmapped: 18505728 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:07:22.054260+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 115318784 unmapped: 18505728 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:07:23.054418+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 115318784 unmapped: 18505728 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:07:24.054606+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1414878 data_alloc: 234881024 data_used: 19476480
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 115318784 unmapped: 18505728 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:07:25.054796+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 115318784 unmapped: 18505728 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:07:26.054985+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f8f8e000/0x0/0x4ffc00000, data 0x2617b1d/0x26de000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 115318784 unmapped: 18505728 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f8f8e000/0x0/0x4ffc00000, data 0x2617b1d/0x26de000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:07:27.055152+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 115179520 unmapped: 18644992 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:07:28.055260+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 115179520 unmapped: 18644992 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:07:29.055454+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1414966 data_alloc: 234881024 data_used: 19476480
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 115179520 unmapped: 18644992 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:07:30.055571+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 115179520 unmapped: 18644992 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:07:31.055744+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f8f8b000/0x0/0x4ffc00000, data 0x261ab1d/0x26e1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 115195904 unmapped: 18628608 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:07:32.056143+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f8f8b000/0x0/0x4ffc00000, data 0x261ab1d/0x26e1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 115236864 unmapped: 18587648 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:07:33.056327+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 115236864 unmapped: 18587648 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:07:34.056537+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.287870407s of 13.304501534s, submitted: 4
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1415814 data_alloc: 234881024 data_used: 19484672
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce2136400 session 0x55fcdf1d6960
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce1128000 session 0x55fcdfe7ed20
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 115417088 unmapped: 18407424 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:07:35.056778+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce19c6000
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce19c6000 session 0x55fce19fe000
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 108109824 unmapped: 25714688 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:07:36.059286+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 108109824 unmapped: 25714688 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:07:37.059424+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 108109824 unmapped: 25714688 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:07:38.059578+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9833000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 108109824 unmapped: 25714688 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:07:39.059713+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1184202 data_alloc: 218103808 data_used: 7614464
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9833000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 108109824 unmapped: 25714688 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:07:40.059843+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 108109824 unmapped: 25714688 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:07:41.059972+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 108109824 unmapped: 25714688 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:07:42.060139+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9833000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 108109824 unmapped: 25714688 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:07:43.060286+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 108109824 unmapped: 25714688 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:07:44.060654+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1184202 data_alloc: 218103808 data_used: 7614464
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 108109824 unmapped: 25714688 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:07:45.060796+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 108109824 unmapped: 25714688 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:07:46.060936+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 108109824 unmapped: 25714688 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:07:47.061077+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9833000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 108109824 unmapped: 25714688 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:07:48.061244+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 108109824 unmapped: 25714688 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:07:49.061411+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1184202 data_alloc: 218103808 data_used: 7614464
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 108109824 unmapped: 25714688 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:07:50.061588+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 108109824 unmapped: 25714688 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:07:51.061722+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 108109824 unmapped: 25714688 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:07:52.061932+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 108109824 unmapped: 25714688 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:07:53.062130+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9833000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 108109824 unmapped: 25714688 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:07:54.062304+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1184202 data_alloc: 218103808 data_used: 7614464
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 108109824 unmapped: 25714688 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:07:55.062545+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:07:56.062775+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 108109824 unmapped: 25714688 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9833000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:07:57.062974+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 108109824 unmapped: 25714688 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9833000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:07:58.063135+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 108109824 unmapped: 25714688 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:07:59.063356+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 108109824 unmapped: 25714688 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1184202 data_alloc: 218103808 data_used: 7614464
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:08:00.063547+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 108109824 unmapped: 25714688 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:08:01.063704+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 108109824 unmapped: 25714688 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9833000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:08:02.063908+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 108109824 unmapped: 25714688 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:08:03.064061+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 108109824 unmapped: 25714688 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce212d400
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 28.696674347s of 28.839307785s, submitted: 37
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce212d400 session 0x55fce232f0e0
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce1e2ac00
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce1e2ac00 session 0x55fcdf1c2000
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce1e2ac00
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce1e2ac00 session 0x55fcdfe7f4a0
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce1128000
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce1128000 session 0x55fce2101c20
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce19c6000
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce19c6000 session 0x55fce20f7680
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:08:04.064353+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 107831296 unmapped: 25993216 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1191346 data_alloc: 218103808 data_used: 7614464
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:08:05.064509+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 107831296 unmapped: 25993216 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:08:06.064786+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 107831296 unmapped: 25993216 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce212d400
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce212d400 session 0x55fce20f70e0
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9f90000/0x0/0x4ffc00000, data 0x1205b1d/0x12cc000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:08:07.065259+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 107831296 unmapped: 25993216 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce2136400
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce2136400 session 0x55fce20f6780
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce2136400
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce2136400 session 0x55fce20f6960
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce1128000
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:08:08.065443+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 107831296 unmapped: 25993216 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:08:09.065636+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 107831296 unmapped: 25993216 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1193160 data_alloc: 218103808 data_used: 7614464
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce1128000 session 0x55fce20f74a0
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:08:10.065791+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce2116000 session 0x55fce210ad20
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fcdf8c8400 session 0x55fce23ea960
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 107446272 unmapped: 26378240 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce19c6000
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce1e2ac00
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9f6b000/0x0/0x4ffc00000, data 0x1229b2d/0x12f1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:08:11.065966+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 107446272 unmapped: 26378240 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:08:12.066134+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 107528192 unmapped: 26296320 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:08:13.066326+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 107528192 unmapped: 26296320 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:08:14.066591+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 107528192 unmapped: 26296320 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1199796 data_alloc: 218103808 data_used: 8167424
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:08:15.066725+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 107528192 unmapped: 26296320 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9f6b000/0x0/0x4ffc00000, data 0x1229b2d/0x12f1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:08:16.066892+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 107528192 unmapped: 26296320 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:08:17.067022+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 107528192 unmapped: 26296320 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9f6b000/0x0/0x4ffc00000, data 0x1229b2d/0x12f1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:08:18.067211+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 107528192 unmapped: 26296320 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:08:19.067614+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 107528192 unmapped: 26296320 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1199796 data_alloc: 218103808 data_used: 8167424
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:08:20.067764+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9f6b000/0x0/0x4ffc00000, data 0x1229b2d/0x12f1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 107528192 unmapped: 26296320 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce212d400
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 15.855402946s of 17.599184036s, submitted: 5
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:08:21.067894+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 107528192 unmapped: 26296320 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:08:22.068104+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 107528192 unmapped: 26296320 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9f6b000/0x0/0x4ffc00000, data 0x1229b2d/0x12f1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:08:23.068292+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 108036096 unmapped: 25788416 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:08:24.068452+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce212cc00
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 108249088 unmapped: 25575424 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1226656 data_alloc: 218103808 data_used: 8298496
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:08:25.068759+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 108249088 unmapped: 25575424 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:08:26.069006+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 108249088 unmapped: 25575424 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:08:27.069195+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 108249088 unmapped: 25575424 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9ce7000/0x0/0x4ffc00000, data 0x149fb2d/0x1567000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:08:28.069390+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 108314624 unmapped: 25509888 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:08:29.069586+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 108314624 unmapped: 25509888 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1226656 data_alloc: 218103808 data_used: 8298496
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:08:30.069856+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 108314624 unmapped: 25509888 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:08:31.070042+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 108314624 unmapped: 25509888 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:08:32.070428+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 108314624 unmapped: 25509888 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.634870529s of 11.777306557s, submitted: 33
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9ce7000/0x0/0x4ffc00000, data 0x149fb2d/0x1567000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:08:33.070611+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 108314624 unmapped: 25509888 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:08:34.070797+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 108314624 unmapped: 25509888 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1226524 data_alloc: 218103808 data_used: 8298496
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:08:35.070957+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 108314624 unmapped: 25509888 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9ce7000/0x0/0x4ffc00000, data 0x149fb2d/0x1567000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:08:36.071144+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 108314624 unmapped: 25509888 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:08:37.071305+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 108314624 unmapped: 25509888 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:08:38.071456+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 108314624 unmapped: 25509888 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9ce7000/0x0/0x4ffc00000, data 0x149fb2d/0x1567000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:08:39.071638+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 108314624 unmapped: 25509888 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1226524 data_alloc: 218103808 data_used: 8298496
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:08:40.071785+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 108314624 unmapped: 25509888 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce1048400
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce1048400 session 0x55fcdfeab4a0
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fcdf8c8400
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fcdf8c8400 session 0x55fcdf1d6000
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce1128000
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce1128000 session 0x55fcdf1d1860
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:08:41.071929+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce2116000
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce2116000 session 0x55fce19fc780
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce2136400
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9ce7000/0x0/0x4ffc00000, data 0x149fb2d/0x1567000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce2136400 session 0x55fcdff170e0
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 108945408 unmapped: 24879104 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f99cd000/0x0/0x4ffc00000, data 0x17c6b8f/0x188f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:08:42.072100+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 108945408 unmapped: 24879104 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:08:43.072270+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 108945408 unmapped: 24879104 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f99cd000/0x0/0x4ffc00000, data 0x17c6b8f/0x188f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:08:44.072517+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 108945408 unmapped: 24879104 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1256818 data_alloc: 218103808 data_used: 8298496
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:08:45.072704+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 108953600 unmapped: 24870912 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f99cd000/0x0/0x4ffc00000, data 0x17c6b8f/0x188f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce2117800
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce2117800 session 0x55fcdf1e0d20
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:08:46.072859+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fcdf8c8400
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fcdf8c8400 session 0x55fcdf1d63c0
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 108953600 unmapped: 24870912 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce1128000
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce1128000 session 0x55fcdf1d7680
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce2116000
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.944304466s of 14.069879532s, submitted: 42
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce2116000 session 0x55fcdff16b40
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:08:47.073027+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 108953600 unmapped: 24870912 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce2117800
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce2136400
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f99cd000/0x0/0x4ffc00000, data 0x17c6b8f/0x188f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:08:48.073227+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 108978176 unmapped: 24846336 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f99cb000/0x0/0x4ffc00000, data 0x17c6bc2/0x1891000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:08:49.073395+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 108978176 unmapped: 24846336 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1282190 data_alloc: 234881024 data_used: 11317248
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:08:50.073589+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 108978176 unmapped: 24846336 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:08:51.073718+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f99cb000/0x0/0x4ffc00000, data 0x17c6bc2/0x1891000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 108978176 unmapped: 24846336 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f99cb000/0x0/0x4ffc00000, data 0x17c6bc2/0x1891000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:08:52.073887+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 108978176 unmapped: 24846336 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f99cb000/0x0/0x4ffc00000, data 0x17c6bc2/0x1891000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:08:53.074059+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 108978176 unmapped: 24846336 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:08:54.074249+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 108978176 unmapped: 24846336 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1282190 data_alloc: 234881024 data_used: 11317248
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:08:55.074467+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f99cb000/0x0/0x4ffc00000, data 0x17c6bc2/0x1891000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 108978176 unmapped: 24846336 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:08:56.074722+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 108978176 unmapped: 24846336 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:08:57.074863+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 108978176 unmapped: 24846336 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:08:58.075055+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 108978176 unmapped: 24846336 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.114115715s of 12.152852058s, submitted: 12
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:08:59.075241+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 115056640 unmapped: 18767872 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:09:00.075421+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1387114 data_alloc: 234881024 data_used: 12939264
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 116105216 unmapped: 17719296 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f8c60000/0x0/0x4ffc00000, data 0x2531bc2/0x25fc000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:09:01.075636+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 116588544 unmapped: 17235968 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:09:02.075820+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 116588544 unmapped: 17235968 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f8c60000/0x0/0x4ffc00000, data 0x2531bc2/0x25fc000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:09:03.076043+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 116596736 unmapped: 17227776 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:09:04.076245+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 116596736 unmapped: 17227776 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce212d400 session 0x55fcdf1d05a0
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:09:05.076607+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1401348 data_alloc: 234881024 data_used: 13160448
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 116604928 unmapped: 17219584 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f8c60000/0x0/0x4ffc00000, data 0x2531bc2/0x25fc000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:09:06.076799+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 115605504 unmapped: 18219008 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f8c3f000/0x0/0x4ffc00000, data 0x2552bc2/0x261d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:09:07.076967+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 115605504 unmapped: 18219008 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1800.1 total, 600.0 interval
                                           Cumulative writes: 11K writes, 41K keys, 11K commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.02 MB/s
                                           Cumulative WAL: 11K writes, 2905 syncs, 3.80 writes per sync, written: 0.03 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1859 writes, 5432 keys, 1859 commit groups, 1.0 writes per commit group, ingest: 5.24 MB, 0.01 MB/s
                                           Interval WAL: 1859 writes, 801 syncs, 2.32 writes per sync, written: 0.01 GB, 0.01 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:09:08.077131+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 115605504 unmapped: 18219008 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce2117800 session 0x55fcdff17860
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce2136400 session 0x55fcdfbb7680
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:09:09.077292+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fcdf8c8400
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.462458611s of 10.115522385s, submitted: 125
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 115605504 unmapped: 18219008 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:09:10.077419+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1239349 data_alloc: 218103808 data_used: 8298496
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fcdf8c8400 session 0x55fce0f87e00
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 112353280 unmapped: 21471232 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:09:11.077565+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 112353280 unmapped: 21471232 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9cf3000/0x0/0x4ffc00000, data 0x149fb2d/0x1567000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:09:12.077735+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 112353280 unmapped: 21471232 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:09:13.077935+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9cf3000/0x0/0x4ffc00000, data 0x149fb2d/0x1567000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 112353280 unmapped: 21471232 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:09:14.078163+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9cf3000/0x0/0x4ffc00000, data 0x149fb2d/0x1567000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 112353280 unmapped: 21471232 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:09:15.078321+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1238137 data_alloc: 218103808 data_used: 8298496
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 112353280 unmapped: 21471232 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce19c6000 session 0x55fce1c6d2c0
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce1e2ac00 session 0x55fcdfbb9860
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce1128000
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce2116000
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9cf3000/0x0/0x4ffc00000, data 0x149fb2d/0x1567000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:09:16.078473+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce2116000 session 0x55fcdfbb9e00
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 111427584 unmapped: 22396928 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:09:17.078652+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 111427584 unmapped: 22396928 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:09:18.078835+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fa016000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 111427584 unmapped: 22396928 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:09:19.079014+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 111427584 unmapped: 22396928 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:09:20.079254+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1206500 data_alloc: 218103808 data_used: 7614464
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 111427584 unmapped: 22396928 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:09:21.079436+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 111427584 unmapped: 22396928 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fcdf8c8400
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.026257515s of 12.886064529s, submitted: 81
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:09:22.079696+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 111427584 unmapped: 22396928 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:09:23.079853+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 111427584 unmapped: 22396928 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:09:24.080030+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fa016000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 111427584 unmapped: 22396928 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:09:25.080205+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1207421 data_alloc: 218103808 data_used: 7614464
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fa016000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 111427584 unmapped: 22396928 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:09:26.080362+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 111427584 unmapped: 22396928 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fa016000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:09:27.080659+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 111427584 unmapped: 22396928 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:09:28.080830+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 111427584 unmapped: 22396928 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:09:29.081060+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 111435776 unmapped: 22388736 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:09:30.081268+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1207289 data_alloc: 218103808 data_used: 7614464
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 111435776 unmapped: 22388736 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:09:31.081452+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 111435776 unmapped: 22388736 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:09:32.081714+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fa016000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 111435776 unmapped: 22388736 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:09:33.081884+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 111435776 unmapped: 22388736 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:09:34.082090+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 111443968 unmapped: 22380544 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:09:35.082309+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fa016000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1207289 data_alloc: 218103808 data_used: 7614464
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 111443968 unmapped: 22380544 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:09:36.082573+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 111443968 unmapped: 22380544 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:09:37.082898+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 111443968 unmapped: 22380544 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:09:38.083054+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 111443968 unmapped: 22380544 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:09:39.083248+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 111443968 unmapped: 22380544 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:09:40.083549+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1207289 data_alloc: 218103808 data_used: 7614464
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fa016000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 111443968 unmapped: 22380544 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:09:41.083705+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 111443968 unmapped: 22380544 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:09:42.083988+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 111443968 unmapped: 22380544 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:09:43.084156+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 111443968 unmapped: 22380544 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fa016000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:09:44.084413+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 111443968 unmapped: 22380544 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce19c6000
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce19c6000 session 0x55fce19c05a0
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce1e2ac00
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce1e2ac00 session 0x55fce19c14a0
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce2116000
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce2116000 session 0x55fcdedddc20
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce2136400
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce2136400 session 0x55fce19fc3c0
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:09:45.084560+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce212d400
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 23.002244949s of 23.143316269s, submitted: 3
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1215687 data_alloc: 218103808 data_used: 7614464
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce212d400 session 0x55fce19fdc20
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce212d400
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce212d400 session 0x55fcdf1dab40
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce19c6000
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce19c6000 session 0x55fce19c03c0
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce1e2ac00
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce1e2ac00 session 0x55fcdf1e0960
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce2116000
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce2116000 session 0x55fcdff5c1e0
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 111443968 unmapped: 22380544 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:09:46.084783+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9f93000/0x0/0x4ffc00000, data 0x1201b2d/0x12c9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 111443968 unmapped: 22380544 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:09:47.084980+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 111443968 unmapped: 22380544 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:09:48.085161+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 111443968 unmapped: 22380544 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:09:49.085350+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 111443968 unmapped: 22380544 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce2136400
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce2136400 session 0x55fce1a005a0
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:09:50.085587+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce19c6000
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce1e2ac00
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1218937 data_alloc: 218103808 data_used: 7618560
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9f93000/0x0/0x4ffc00000, data 0x1201b2d/0x12c9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 111460352 unmapped: 22364160 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:09:51.085767+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 111460352 unmapped: 22364160 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:09:52.086033+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 111460352 unmapped: 22364160 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:09:53.086188+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 111468544 unmapped: 22355968 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:09:54.086371+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 111468544 unmapped: 22355968 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:09:55.086631+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1222889 data_alloc: 218103808 data_used: 8151040
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 111468544 unmapped: 22355968 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:09:56.086796+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9f92000/0x0/0x4ffc00000, data 0x1201b50/0x12ca000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 111468544 unmapped: 22355968 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:09:57.086960+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 111468544 unmapped: 22355968 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:09:58.087123+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 111468544 unmapped: 22355968 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:09:59.087266+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 111468544 unmapped: 22355968 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:10:00.087423+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1222889 data_alloc: 218103808 data_used: 8151040
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 111468544 unmapped: 22355968 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:10:01.087557+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9f92000/0x0/0x4ffc00000, data 0x1201b50/0x12ca000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 111476736 unmapped: 22347776 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9f92000/0x0/0x4ffc00000, data 0x1201b50/0x12ca000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:10:02.087771+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 17.405853271s of 17.451101303s, submitted: 13
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 115630080 unmapped: 18194432 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:10:03.087970+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9f92000/0x0/0x4ffc00000, data 0x1201b50/0x12ca000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 114024448 unmapped: 19800064 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:10:04.088127+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 113565696 unmapped: 20258816 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9476000/0x0/0x4ffc00000, data 0x1d1db50/0x1de6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:10:05.088341+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1308431 data_alloc: 218103808 data_used: 8380416
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 113565696 unmapped: 20258816 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:10:06.088526+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 113565696 unmapped: 20258816 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:10:07.088650+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 113573888 unmapped: 20250624 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:10:08.088796+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9476000/0x0/0x4ffc00000, data 0x1d1db50/0x1de6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 113573888 unmapped: 20250624 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:10:09.089006+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 113573888 unmapped: 20250624 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:10:10.089250+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1307631 data_alloc: 218103808 data_used: 8380416
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9474000/0x0/0x4ffc00000, data 0x1d1fb50/0x1de8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 113573888 unmapped: 20250624 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:10:11.089374+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 113573888 unmapped: 20250624 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:10:12.089560+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 113573888 unmapped: 20250624 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9474000/0x0/0x4ffc00000, data 0x1d1fb50/0x1de8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:10:13.089756+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 113573888 unmapped: 20250624 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:10:14.089933+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 113573888 unmapped: 20250624 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:10:15.090140+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1307631 data_alloc: 218103808 data_used: 8380416
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 113573888 unmapped: 20250624 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.168425560s of 13.361434937s, submitted: 78
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:10:16.090339+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 113573888 unmapped: 20250624 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9473000/0x0/0x4ffc00000, data 0x1d20b50/0x1de9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:10:17.090572+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 113573888 unmapped: 20250624 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:10:18.090701+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 113573888 unmapped: 20250624 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:10:19.090887+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 113573888 unmapped: 20250624 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:10:20.091115+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1307855 data_alloc: 218103808 data_used: 8380416
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 113573888 unmapped: 20250624 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:10:21.091272+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9473000/0x0/0x4ffc00000, data 0x1d20b50/0x1de9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 113573888 unmapped: 20250624 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:10:22.091535+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 113582080 unmapped: 20242432 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:10:23.091662+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 113582080 unmapped: 20242432 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9472000/0x0/0x4ffc00000, data 0x1d21b50/0x1dea000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:10:24.091740+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 113582080 unmapped: 20242432 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:10:25.091992+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1307695 data_alloc: 218103808 data_used: 8380416
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 113590272 unmapped: 20234240 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:10:26.092179+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 113590272 unmapped: 20234240 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:10:27.092398+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 113590272 unmapped: 20234240 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.022357941s of 12.031913757s, submitted: 2
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:10:28.092553+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9471000/0x0/0x4ffc00000, data 0x1d22b50/0x1deb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 113606656 unmapped: 20217856 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:10:29.092746+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9471000/0x0/0x4ffc00000, data 0x1d22b50/0x1deb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 113606656 unmapped: 20217856 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:10:30.092902+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1307703 data_alloc: 218103808 data_used: 8380416
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 113606656 unmapped: 20217856 heap: 133824512 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:10:31.093023+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce2116000
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce2116000 session 0x55fce11130e0
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce212d400
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce212d400 session 0x55fce19fd4a0
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce1f1f400
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce1f1f400 session 0x55fcdfea7e00
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce1129400
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce1129400 session 0x55fcdfea6960
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fcdfeeac00
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fcdfeeac00 session 0x55fce19c05a0
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 112902144 unmapped: 24076288 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:10:32.093239+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f8cd7000/0x0/0x4ffc00000, data 0x24bbb79/0x2585000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 112902144 unmapped: 24076288 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:10:33.093655+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f8cd7000/0x0/0x4ffc00000, data 0x24bbbb2/0x2585000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 112902144 unmapped: 24076288 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:10:34.093858+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 113950720 unmapped: 23027712 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:10:35.094078+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1372406 data_alloc: 218103808 data_used: 8380416
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 113950720 unmapped: 23027712 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:10:36.094293+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce1129400
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce1129400 session 0x55fce19c03c0
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 113950720 unmapped: 23027712 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:10:37.094451+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce1f1f400
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce2116000
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 113950720 unmapped: 23027712 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f8cd4000/0x0/0x4ffc00000, data 0x24bcbb2/0x2586000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:10:38.094584+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 117358592 unmapped: 19619840 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:10:39.094769+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 117358592 unmapped: 19619840 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:10:40.095127+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1418158 data_alloc: 234881024 data_used: 15044608
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 117358592 unmapped: 19619840 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:10:41.095341+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 117358592 unmapped: 19619840 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:10:42.095560+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f8cd4000/0x0/0x4ffc00000, data 0x24bcbb2/0x2586000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 117358592 unmapped: 19619840 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:10:43.095767+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 117358592 unmapped: 19619840 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:10:44.095994+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 117358592 unmapped: 19619840 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:10:45.096153+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1418158 data_alloc: 234881024 data_used: 15044608
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 17.528182983s of 17.646516800s, submitted: 39
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 117358592 unmapped: 19619840 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:10:46.096322+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 117358592 unmapped: 19619840 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:10:47.096465+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 117358592 unmapped: 19619840 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:10:48.096668+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f8cd3000/0x0/0x4ffc00000, data 0x24bdbb2/0x2587000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 117415936 unmapped: 19562496 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:10:49.096846+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 119169024 unmapped: 17809408 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:10:50.097754+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1484588 data_alloc: 234881024 data_used: 15458304
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 120528896 unmapped: 16449536 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:10:51.097931+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 120528896 unmapped: 16449536 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:10:52.098135+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 120528896 unmapped: 16449536 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:10:53.098269+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 120528896 unmapped: 16449536 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:10:54.098420+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f85da000/0x0/0x4ffc00000, data 0x2bb5bb2/0x2c7f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 120528896 unmapped: 16449536 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:10:55.098551+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1485044 data_alloc: 234881024 data_used: 15536128
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 120528896 unmapped: 16449536 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.188508034s of 10.414656639s, submitted: 108
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:10:56.098974+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 120528896 unmapped: 16449536 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:10:57.099363+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 119906304 unmapped: 17072128 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:10:58.099507+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 119906304 unmapped: 17072128 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:10:59.099731+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f85dc000/0x0/0x4ffc00000, data 0x2bb6bb2/0x2c80000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 119906304 unmapped: 17072128 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:11:00.099923+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1479996 data_alloc: 234881024 data_used: 15540224
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 119906304 unmapped: 17072128 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:11:01.100114+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 119914496 unmapped: 17063936 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:11:02.100336+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce1f1f400 session 0x55fce1a012c0
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce2116000 session 0x55fce19c1a40
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f85da000/0x0/0x4ffc00000, data 0x2bb6bb2/0x2c80000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce212d400
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 119922688 unmapped: 17055744 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:11:03.100458+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce212d400 session 0x55fce210ad20
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 116588544 unmapped: 20389888 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:11:04.100600+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 116588544 unmapped: 20389888 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:11:05.100729+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f946d000/0x0/0x4ffc00000, data 0x1d25b50/0x1dee000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1320544 data_alloc: 218103808 data_used: 8380416
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 116588544 unmapped: 20389888 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:11:06.100864+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 116588544 unmapped: 20389888 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:11:07.100988+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.276282310s of 11.366744995s, submitted: 33
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 116596736 unmapped: 20381696 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:11:08.101114+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 116596736 unmapped: 20381696 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:11:09.101239+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 116596736 unmapped: 20381696 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:11:10.101369+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1320712 data_alloc: 218103808 data_used: 8380416
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 116596736 unmapped: 20381696 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:11:11.101586+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f946d000/0x0/0x4ffc00000, data 0x1d25b50/0x1dee000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 116596736 unmapped: 20381696 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:11:12.101841+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 116596736 unmapped: 20381696 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:11:13.101967+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce19c6000 session 0x55fcdfb2d0e0
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce1e2ac00 session 0x55fce21010e0
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce19c6000
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 116596736 unmapped: 20381696 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:11:14.102092+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce19c6000 session 0x55fce1c6c3c0
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fa015000/0x0/0x4ffc00000, data 0x117fb40/0x1247000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 115515392 unmapped: 21463040 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:11:15.102281+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1228930 data_alloc: 218103808 data_used: 7614464
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 115515392 unmapped: 21463040 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:11:16.102448+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 115515392 unmapped: 21463040 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:11:17.103323+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:11:18.103460+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 115515392 unmapped: 21463040 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:11:19.103648+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 115515392 unmapped: 21463040 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:11:20.103797+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 115515392 unmapped: 21463040 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fa015000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1228930 data_alloc: 218103808 data_used: 7614464
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:11:21.103920+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 115515392 unmapped: 21463040 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:11:22.104072+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 115515392 unmapped: 21463040 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:11:23.104615+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 115515392 unmapped: 21463040 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:11:24.104746+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 115515392 unmapped: 21463040 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fa015000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:11:25.104927+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 115515392 unmapped: 21463040 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1228930 data_alloc: 218103808 data_used: 7614464
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fa015000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:11:26.126024+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 115515392 unmapped: 21463040 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:11:27.126213+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 115515392 unmapped: 21463040 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:11:28.126392+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 115515392 unmapped: 21463040 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:11:29.126563+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 115515392 unmapped: 21463040 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:11:30.126832+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 115515392 unmapped: 21463040 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 23.033533096s of 23.158624649s, submitted: 40
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1228638 data_alloc: 218103808 data_used: 7614464
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fa015000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:11:31.127206+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 115580928 unmapped: 21397504 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:11:32.127384+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 115703808 unmapped: 21274624 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fa016000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [0,1,0,1])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce212cc00 session 0x55fce1a010e0
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce212a400 session 0x55fce1116000
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:11:33.127673+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 115851264 unmapped: 21127168 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:11:34.127865+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 115851264 unmapped: 21127168 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:11:35.128011+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 115859456 unmapped: 21118976 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fa016000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1228638 data_alloc: 218103808 data_used: 7614464
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:11:36.128228+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 115859456 unmapped: 21118976 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:11:37.128545+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 115859456 unmapped: 21118976 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fa016000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:11:38.128723+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 115859456 unmapped: 21118976 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce1129400
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:11:39.128835+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 119185408 unmapped: 17793024 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce1129400 session 0x55fcdf1d6b40
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce1129400
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce1129400 session 0x55fce19fc780
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce19c6000
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce19c6000 session 0x55fce23043c0
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce1e2ac00
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce1e2ac00 session 0x55fce1e64780
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce212a400
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce212a400 session 0x55fce1e65a40
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9838000/0x0/0x4ffc00000, data 0x195db1d/0x1a24000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:11:40.129034+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 115900416 unmapped: 21078016 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1286739 data_alloc: 218103808 data_used: 7614464
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:11:41.129201+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 115900416 unmapped: 21078016 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:11:42.129646+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 115900416 unmapped: 21078016 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9838000/0x0/0x4ffc00000, data 0x195db1d/0x1a24000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:11:43.129810+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 115908608 unmapped: 21069824 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce212cc00
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.803172112s of 13.002218246s, submitted: 386
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:11:44.129966+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 115908608 unmapped: 21069824 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:11:45.130133+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 115908608 unmapped: 21069824 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce1f1f400
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce1f1f400 session 0x55fcdeddda40
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1287696 data_alloc: 218103808 data_used: 7614464
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce1f1f400
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce1129400
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:11:46.130271+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 115933184 unmapped: 21045248 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9838000/0x0/0x4ffc00000, data 0x195db1d/0x1a24000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:11:47.130433+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 117055488 unmapped: 19922944 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:11:48.130612+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 119332864 unmapped: 17645568 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:11:49.130800+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 119332864 unmapped: 17645568 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9838000/0x0/0x4ffc00000, data 0x195db1d/0x1a24000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:11:50.130998+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 119332864 unmapped: 17645568 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1335992 data_alloc: 234881024 data_used: 14737408
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:11:51.131137+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 119332864 unmapped: 17645568 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:11:52.131292+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 119332864 unmapped: 17645568 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce1f1f400 session 0x55fce19fef00
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce1129400 session 0x55fce23bc1e0
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:11:53.131416+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce2128c00
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 119332864 unmapped: 17645568 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce2128c00 session 0x55fce210a5a0
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fa016000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:11:54.131948+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 116039680 unmapped: 20938752 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fa016000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:11:55.132106+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 116039680 unmapped: 20938752 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1232510 data_alloc: 218103808 data_used: 7614464
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:11:56.132308+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 116039680 unmapped: 20938752 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:11:57.132436+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 116039680 unmapped: 20938752 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:11:58.132652+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 116039680 unmapped: 20938752 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:11:59.132823+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 116039680 unmapped: 20938752 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:12:00.132980+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 116039680 unmapped: 20938752 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 16.927818298s of 17.092643738s, submitted: 51
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1232378 data_alloc: 218103808 data_used: 7614464
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fa016000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:12:01.133113+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 116039680 unmapped: 20938752 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fa016000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:12:02.133278+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 116039680 unmapped: 20938752 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:12:03.133408+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 116039680 unmapped: 20938752 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:12:04.133631+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 116039680 unmapped: 20938752 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:12:05.133827+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 116039680 unmapped: 20938752 heap: 136978432 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1232378 data_alloc: 218103808 data_used: 7614464
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce2117c00
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce2117c00 session 0x55fce20f72c0
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:12:06.134025+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce0c60c00
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce0c60c00 session 0x55fce23050e0
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce0c60c00
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce0c60c00 session 0x55fce20f6000
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce1129400
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 116539392 unmapped: 24641536 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce1129400 session 0x55fcdfb2dc20
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce1f1f400
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce1f1f400 session 0x55fce0f863c0
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:12:07.134222+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 116547584 unmapped: 24633344 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f94ca000/0x0/0x4ffc00000, data 0x1ccab7f/0x1d92000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:12:08.134378+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 116555776 unmapped: 24625152 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:12:09.134546+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 116555776 unmapped: 24625152 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:12:10.134698+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 116555776 unmapped: 24625152 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce2117c00
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce2117c00 session 0x55fce23c14a0
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f94ca000/0x0/0x4ffc00000, data 0x1ccab7f/0x1d92000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1325714 data_alloc: 218103808 data_used: 7614464
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:12:11.134821+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce2128c00
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.299762726s of 10.467995644s, submitted: 46
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce19bc800
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 116555776 unmapped: 24625152 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:12:12.134980+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 117448704 unmapped: 23732224 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce2128c00 session 0x55fce23bc000
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce19bc800 session 0x55fcdf1dab40
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:12:13.135150+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 120152064 unmapped: 21028864 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce2128c00
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce2128c00 session 0x55fce19fde00
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:12:14.135268+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 117047296 unmapped: 24133632 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9f92000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:12:15.135371+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 117047296 unmapped: 24133632 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1242420 data_alloc: 218103808 data_used: 7614464
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:12:16.135470+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 117047296 unmapped: 24133632 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:12:17.135632+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 117047296 unmapped: 24133632 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:12:18.135870+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 117047296 unmapped: 24133632 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:12:19.136200+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 117047296 unmapped: 24133632 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:12:20.136540+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9f92000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 117047296 unmapped: 24133632 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1242420 data_alloc: 218103808 data_used: 7614464
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:12:21.136796+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 117047296 unmapped: 24133632 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:12:22.137089+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 117047296 unmapped: 24133632 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:12:23.137345+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 117055488 unmapped: 24125440 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:12:24.137602+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 117055488 unmapped: 24125440 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:12:25.137864+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 117055488 unmapped: 24125440 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1242420 data_alloc: 218103808 data_used: 7614464
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:12:26.138092+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9f92000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 117055488 unmapped: 24125440 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:12:27.138305+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9f92000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 117055488 unmapped: 24125440 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:12:28.138453+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9f92000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 24117248 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:12:29.138692+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 24117248 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:12:30.138911+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 24117248 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1242420 data_alloc: 218103808 data_used: 7614464
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:12:31.139138+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 24117248 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:12:32.139396+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 24117248 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9f92000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:12:33.139554+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 24109056 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:12:34.139755+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 24109056 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:12:35.139927+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 24109056 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1242420 data_alloc: 218103808 data_used: 7614464
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:12:36.140097+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 24109056 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:12:37.140256+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 24109056 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9f92000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:12:38.140403+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 24109056 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9f92000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:12:39.140539+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 24109056 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce1f1e000
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce1f1e000 session 0x55fce112c000
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce19e3000
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce19e3000 session 0x55fce23bd0e0
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce1129000
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce1129000 session 0x55fce112d680
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce1129000
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce1129000 session 0x55fce0f87680
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce19bc800
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 28.559396744s of 28.672395706s, submitted: 41
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce19bc800 session 0x55fce19c1c20
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce19e3000
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce19e3000 session 0x55fcdeddc5a0
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce1f1e000
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce1f1e000 session 0x55fce0f86000
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce2128c00
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce2128c00 session 0x55fce1fae960
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:12:40.140687+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce2128c00
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce2128c00 session 0x55fce0e89a40
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 24117248 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1291217 data_alloc: 218103808 data_used: 7614464
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:12:41.140839+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 24117248 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:12:42.141040+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9b35000/0x0/0x4ffc00000, data 0x1660b1d/0x1727000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 24117248 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:12:43.141304+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 24117248 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:12:44.141529+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 24117248 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce1129000
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce1129000 session 0x55fce2101c20
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:12:45.141675+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce19bc800
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce19bc800 session 0x55fce0f865a0
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 24117248 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce19e3000
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce19e3000 session 0x55fce23eaf00
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce1f1e000
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce1f1e000 session 0x55fce112dc20
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1293031 data_alloc: 218103808 data_used: 7614464
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:12:46.141801+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 24109056 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce1129000
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce19bc800
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:12:47.141975+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 116162560 unmapped: 25018368 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9b34000/0x0/0x4ffc00000, data 0x1660b2d/0x1728000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:12:48.142158+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 116097024 unmapped: 25083904 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:12:49.142293+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 116097024 unmapped: 25083904 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:12:50.142497+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 116097024 unmapped: 25083904 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1317675 data_alloc: 234881024 data_used: 11227136
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:12:51.142677+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 116097024 unmapped: 25083904 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:12:52.142955+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9b34000/0x0/0x4ffc00000, data 0x1660b2d/0x1728000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 116097024 unmapped: 25083904 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:12:53.143112+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 116097024 unmapped: 25083904 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce2135800 session 0x55fce1e650e0
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce2068800 session 0x55fce1e652c0
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9b34000/0x0/0x4ffc00000, data 0x1660b2d/0x1728000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:12:54.143362+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 116097024 unmapped: 25083904 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:12:55.143537+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 116097024 unmapped: 25083904 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce0c60800 session 0x55fcdf1c3c20
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce19bdc00
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1317675 data_alloc: 234881024 data_used: 11227136
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:12:56.143702+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 116097024 unmapped: 25083904 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:12:57.143963+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 116097024 unmapped: 25083904 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: mgrc ms_handle_reset ms_handle_reset con 0x55fcdfeeb800
Dec 06 10:27:30 compute-0 ceph-osd[82803]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/3885409716
Dec 06 10:27:30 compute-0 ceph-osd[82803]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/3885409716,v1:192.168.122.100:6801/3885409716]
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: get_auth_request con 0x55fce1f1e000 auth_method 0
Dec 06 10:27:30 compute-0 ceph-osd[82803]: mgrc handle_mgr_configure stats_period=5
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce1f6f400 session 0x55fce245f680
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fcdf3a9000
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:12:58.144207+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 18.341133118s of 18.417297363s, submitted: 30
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124485632 unmapped: 16695296 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9b34000/0x0/0x4ffc00000, data 0x1660b2d/0x1728000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce212cc00 session 0x55fcdf1e10e0
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:12:59.144357+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 122494976 unmapped: 18685952 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:13:00.144542+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 121569280 unmapped: 19611648 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:13:01.144669+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1423799 data_alloc: 234881024 data_used: 11702272
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 121569280 unmapped: 19611648 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f8dd8000/0x0/0x4ffc00000, data 0x23bbb2d/0x2483000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:13:02.144868+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f8dd8000/0x0/0x4ffc00000, data 0x23bbb2d/0x2483000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 121569280 unmapped: 19611648 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:13:03.144989+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 121569280 unmapped: 19611648 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:13:04.145149+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 121569280 unmapped: 19611648 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce2128c00
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:13:05.145291+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 121831424 unmapped: 19349504 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f8dd8000/0x0/0x4ffc00000, data 0x23bbb2d/0x2483000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:13:06.145449+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1420867 data_alloc: 234881024 data_used: 11702272
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 121831424 unmapped: 19349504 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:13:07.145562+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 121831424 unmapped: 19349504 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:13:08.145747+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 121831424 unmapped: 19349504 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:13:09.145920+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 121831424 unmapped: 19349504 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce2137800
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.104361534s of 11.437482834s, submitted: 145
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f8db8000/0x0/0x4ffc00000, data 0x23dcb2d/0x24a4000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:13:10.146134+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 121831424 unmapped: 19349504 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fcdf8c9c00
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:13:11.146347+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1423207 data_alloc: 234881024 data_used: 11714560
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 121831424 unmapped: 19349504 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:13:12.146567+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce1129000 session 0x55fcdfbb8960
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce19bc800 session 0x55fcdf1e1c20
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce1129000
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 119209984 unmapped: 21970944 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce1129000 session 0x55fce232ed20
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:13:13.146757+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 119209984 unmapped: 21970944 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:13:14.146965+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 119209984 unmapped: 21970944 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:13:15.147140+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 119209984 unmapped: 21970944 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f98f2000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:13:16.147326+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1256266 data_alloc: 218103808 data_used: 7614464
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 119209984 unmapped: 21970944 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:13:17.147526+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 119209984 unmapped: 21970944 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:13:18.147655+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 119209984 unmapped: 21970944 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:13:19.147816+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 119209984 unmapped: 21970944 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:13:20.147943+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 119209984 unmapped: 21970944 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:13:21.148081+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1257062 data_alloc: 218103808 data_used: 7614464
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fa016000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 119209984 unmapped: 21970944 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:13:22.148237+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 119209984 unmapped: 21970944 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.118231773s of 13.227775574s, submitted: 42
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:13:23.148449+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fa016000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 119218176 unmapped: 21962752 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:13:24.148643+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 119218176 unmapped: 21962752 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:13:25.148826+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 119218176 unmapped: 21962752 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:13:26.148982+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1256339 data_alloc: 218103808 data_used: 7614464
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 119226368 unmapped: 21954560 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fa016000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:13:27.149178+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 119226368 unmapped: 21954560 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:13:28.149331+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 119226368 unmapped: 21954560 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fa016000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:13:29.149500+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 119226368 unmapped: 21954560 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:13:30.149707+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 119226368 unmapped: 21954560 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:13:31.149936+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1256339 data_alloc: 218103808 data_used: 7614464
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 119226368 unmapped: 21954560 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:13:32.150173+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 119234560 unmapped: 21946368 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:13:33.150337+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 119234560 unmapped: 21946368 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:13:34.150605+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 119234560 unmapped: 21946368 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fa016000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:13:35.150840+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce19bc800
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce19bc800 session 0x55fce19c05a0
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce2068800
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce2068800 session 0x55fce0f863c0
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce212cc00
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce212cc00 session 0x55fce0f872c0
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce2135800
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce2135800 session 0x55fce0f87e00
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce2135800
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.562622070s of 12.571432114s, submitted: 2
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 120324096 unmapped: 20856832 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce2135800 session 0x55fce0f86d20
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce1129000
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce1129000 session 0x55fce20f72c0
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce19bc800
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce19bc800 session 0x55fce19fef00
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce2068800
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce2068800 session 0x55fce236ef00
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce212cc00
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce212cc00 session 0x55fce19c14a0
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:13:36.151048+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1317200 data_alloc: 218103808 data_used: 7614464
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 120332288 unmapped: 20848640 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:13:37.151253+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f988c000/0x0/0x4ffc00000, data 0x1907b8f/0x19d0000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 120332288 unmapped: 20848640 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:13:38.151452+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 120332288 unmapped: 20848640 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:13:39.151675+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 120332288 unmapped: 20848640 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:13:40.151842+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce212cc00
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce212cc00 session 0x55fcdf1e05a0
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 120365056 unmapped: 20815872 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce1129000
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce19bc800
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:13:41.151990+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1319610 data_alloc: 218103808 data_used: 7618560
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 120365056 unmapped: 20815872 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fcdf8c9c00 session 0x55fcdff17c20
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce2128c00 session 0x55fce19fe000
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:13:42.152154+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 122667008 unmapped: 18513920 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:13:43.152367+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f988b000/0x0/0x4ffc00000, data 0x1907bb2/0x19d1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 122667008 unmapped: 18513920 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:13:44.152545+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 122667008 unmapped: 18513920 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:13:45.152689+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 122667008 unmapped: 18513920 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:13:46.152839+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1373114 data_alloc: 234881024 data_used: 15515648
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f988b000/0x0/0x4ffc00000, data 0x1907bb2/0x19d1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 122667008 unmapped: 18513920 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:13:47.152989+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 122667008 unmapped: 18513920 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:13:48.153118+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 122667008 unmapped: 18513920 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:13:49.153250+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 122667008 unmapped: 18513920 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:13:50.153418+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 122667008 unmapped: 18513920 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f988b000/0x0/0x4ffc00000, data 0x1907bb2/0x19d1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:13:51.153554+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1373114 data_alloc: 234881024 data_used: 15515648
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 122667008 unmapped: 18513920 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce2068800
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce2068800 session 0x55fce1112d20
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce2135800
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce2135800 session 0x55fce210a960
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fcdf8c9c00
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fcdf8c9c00 session 0x55fce2101860
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:13:52.153713+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce2068800
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce2068800 session 0x55fce0e86d20
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce2128c00
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 16.591188431s of 16.790163040s, submitted: 37
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce2128c00 session 0x55fcdfbb9e00
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce212cc00
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce212cc00 session 0x55fce20f6f00
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce1efcc00
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce1efcc00 session 0x55fce1e64960
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fcdf8c9c00
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fcdf8c9c00 session 0x55fcdf19fe00
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce2068800
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce2068800 session 0x55fcdf19f680
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce2128c00
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 122880000 unmapped: 18300928 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:13:53.153903+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 127025152 unmapped: 14155776 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:13:54.154033+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 127025152 unmapped: 14155776 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:13:55.154169+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 128925696 unmapped: 12255232 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:13:56.154339+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1515640 data_alloc: 234881024 data_used: 15814656
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f8312000/0x0/0x4ffc00000, data 0x2a61bc1/0x2b2c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 128925696 unmapped: 12255232 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:13:57.154598+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce212cc00
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce212cc00 session 0x55fce0f86f00
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce212e800
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce0c56000
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 129261568 unmapped: 11919360 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:13:58.154761+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 129269760 unmapped: 11911168 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:13:59.154877+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 132112384 unmapped: 9068544 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:14:00.155056+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 132243456 unmapped: 8937472 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:14:01.155219+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1543820 data_alloc: 234881024 data_used: 20639744
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f82fc000/0x0/0x4ffc00000, data 0x2a85bc1/0x2b50000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 132276224 unmapped: 8904704 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:14:02.155595+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 132276224 unmapped: 8904704 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:14:03.155758+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 132276224 unmapped: 8904704 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:14:04.155936+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 132276224 unmapped: 8904704 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:14:05.156144+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f82fc000/0x0/0x4ffc00000, data 0x2a85bc1/0x2b50000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 132276224 unmapped: 8904704 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:14:06.156446+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1543229 data_alloc: 234881024 data_used: 20639744
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 132284416 unmapped: 8896512 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:14:07.156695+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 132284416 unmapped: 8896512 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:14:08.156847+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 132284416 unmapped: 8896512 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:14:09.157069+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f82fc000/0x0/0x4ffc00000, data 0x2a85bc1/0x2b50000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 17.123311996s of 17.423311234s, submitted: 113
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 133603328 unmapped: 7577600 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:14:10.157208+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 133292032 unmapped: 7888896 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:14:11.157388+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1607863 data_alloc: 234881024 data_used: 20910080
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 132939776 unmapped: 8241152 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f7c23000/0x0/0x4ffc00000, data 0x315ebc1/0x3229000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:14:12.157595+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 132972544 unmapped: 8208384 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:14:13.157759+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 132972544 unmapped: 8208384 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:14:14.157967+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 132980736 unmapped: 8200192 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:14:15.158150+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 132980736 unmapped: 8200192 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f7c23000/0x0/0x4ffc00000, data 0x315ebc1/0x3229000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:14:16.158352+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1607863 data_alloc: 234881024 data_used: 20910080
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 133013504 unmapped: 8167424 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:14:17.158532+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 133013504 unmapped: 8167424 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:14:18.158712+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 133013504 unmapped: 8167424 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:14:19.158894+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f7c02000/0x0/0x4ffc00000, data 0x317fbc1/0x324a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 133013504 unmapped: 8167424 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:14:20.159034+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 133013504 unmapped: 8167424 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:14:21.159197+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1606903 data_alloc: 234881024 data_used: 20910080
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 133021696 unmapped: 8159232 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:14:22.159407+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.647055626s of 12.847999573s, submitted: 62
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 133087232 unmapped: 8093696 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f7c02000/0x0/0x4ffc00000, data 0x317fbc1/0x324a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:14:23.159514+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 133087232 unmapped: 8093696 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:14:24.159679+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 133087232 unmapped: 8093696 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:14:25.159838+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce212e800 session 0x55fcdfea7e00
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce0c56000 session 0x55fce19fc3c0
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 133095424 unmapped: 8085504 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fcdf8c9c00
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:14:26.159984+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fcdf8c9c00 session 0x55fcdf1d63c0
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1480191 data_alloc: 234881024 data_used: 15818752
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 130269184 unmapped: 10911744 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:14:27.160170+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 130269184 unmapped: 10911744 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:14:28.160318+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f87e9000/0x0/0x4ffc00000, data 0x2599bb2/0x2663000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 130269184 unmapped: 10911744 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:14:29.160569+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 130269184 unmapped: 10911744 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f87e9000/0x0/0x4ffc00000, data 0x2599bb2/0x2663000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:14:30.160768+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 130269184 unmapped: 10911744 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:14:31.160907+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1480191 data_alloc: 234881024 data_used: 15818752
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 130269184 unmapped: 10911744 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:14:32.161081+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 130269184 unmapped: 10911744 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:14:33.161237+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.591730118s of 10.648483276s, submitted: 22
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce1129000 session 0x55fcdfbb7a40
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce19bc800 session 0x55fce1c6d2c0
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce2068800
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f87e9000/0x0/0x4ffc00000, data 0x2599bb2/0x2663000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce2068800 session 0x55fcdeddd860
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124895232 unmapped: 16285696 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:14:34.161373+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9c04000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124895232 unmapped: 16285696 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:14:35.161547+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124895232 unmapped: 16285696 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:14:36.161678+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1281921 data_alloc: 218103808 data_used: 7614464
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124895232 unmapped: 16285696 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:14:37.161818+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124895232 unmapped: 16285696 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:14:38.161947+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124895232 unmapped: 16285696 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:14:39.162077+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124895232 unmapped: 16285696 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:14:40.162227+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9c04000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124895232 unmapped: 16285696 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:14:41.162362+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1281921 data_alloc: 218103808 data_used: 7614464
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124895232 unmapped: 16285696 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:14:42.162573+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124895232 unmapped: 16285696 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:14:43.162723+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9c04000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124895232 unmapped: 16285696 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:14:44.162876+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124895232 unmapped: 16285696 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:14:45.163002+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124895232 unmapped: 16285696 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:14:46.163117+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1281921 data_alloc: 218103808 data_used: 7614464
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124895232 unmapped: 16285696 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:14:47.163230+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124895232 unmapped: 16285696 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:14:48.163364+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9c04000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124895232 unmapped: 16285696 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:14:49.163547+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124903424 unmapped: 16277504 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:14:50.163665+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9c04000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124903424 unmapped: 16277504 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:14:51.163823+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1281921 data_alloc: 218103808 data_used: 7614464
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124903424 unmapped: 16277504 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9c04000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:14:52.164029+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124903424 unmapped: 16277504 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:14:53.164205+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124903424 unmapped: 16277504 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:14:54.164375+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9c04000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124903424 unmapped: 16277504 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:14:55.164554+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124903424 unmapped: 16277504 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:14:56.164704+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1281921 data_alloc: 218103808 data_used: 7614464
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124903424 unmapped: 16277504 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:14:57.164799+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9c04000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124911616 unmapped: 16269312 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:14:58.164986+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124911616 unmapped: 16269312 heap: 141180928 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:14:59.165130+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce212cc00
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 26.378582001s of 26.526098251s, submitted: 53
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce212cc00 session 0x55fce19c05a0
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fcdf8c9c00
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fcdf8c9c00 session 0x55fce1a01c20
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce1129000
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce1129000 session 0x55fcde5783c0
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce19bc800
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce19bc800 session 0x55fce112dc20
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce2068800
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce2068800 session 0x55fce20f7680
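The handle_auth_request / ms_handle_reset bursts show peer connections being re-authenticated and their old sessions torn down; the same connection pointers (0x55fcdf8c9c00, 0x55fce1129000, ...) recur throughout the capture. When hunting for a flapping peer it helps to tally resets per connection; a sketch over a few lines copied from above (feed it the whole capture in practice):

    import re
    from collections import Counter

    SAMPLE = """\
    osd.1 154 ms_handle_reset con 0x55fce212cc00 session 0x55fce19c05a0
    osd.1 154 ms_handle_reset con 0x55fcdf8c9c00 session 0x55fce1a01c20
    osd.1 154 ms_handle_reset con 0x55fcdf8c9c00 session 0x55fcdf1d63c0
    """

    resets = Counter(re.findall(r"ms_handle_reset con (0x[0-9a-f]+)", SAMPLE))
    for con, n in resets.most_common():
        print(f"{con}: {n} resets")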
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124321792 unmapped: 22183936 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:15:00.165262+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124321792 unmapped: 22183936 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:15:01.165388+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1323411 data_alloc: 218103808 data_used: 7614464
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124321792 unmapped: 22183936 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:15:02.165573+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124329984 unmapped: 22175744 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:15:03.165706+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9673000/0x0/0x4ffc00000, data 0x1712b1d/0x17d9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce212cc00
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce212cc00 session 0x55fce1e652c0
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124329984 unmapped: 22175744 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:15:04.165862+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fcdf8c9c00
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fcdf8c9c00 session 0x55fce1e650e0
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9673000/0x0/0x4ffc00000, data 0x1712b1d/0x17d9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124329984 unmapped: 22175744 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:15:05.166005+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce1129000
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce1129000 session 0x55fce19c03c0
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce19bc800
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce19bc800 session 0x55fcdf1d6000
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124354560 unmapped: 22151168 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:15:06.166268+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1326386 data_alloc: 218103808 data_used: 7614464
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce2068800
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce212e800
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124354560 unmapped: 22151168 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:15:07.166419+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 125796352 unmapped: 20709376 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:15:08.166618+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9672000/0x0/0x4ffc00000, data 0x1712b2d/0x17da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 125804544 unmapped: 20701184 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:15:09.166778+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 125804544 unmapped: 20701184 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:15:10.166927+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 125804544 unmapped: 20701184 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:15:11.167089+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1365278 data_alloc: 234881024 data_used: 13447168
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 125804544 unmapped: 20701184 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:15:12.167311+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 125804544 unmapped: 20701184 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:15:13.167539+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9672000/0x0/0x4ffc00000, data 0x1712b2d/0x17da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 125812736 unmapped: 20692992 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:15:14.167670+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 125812736 unmapped: 20692992 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:15:15.167801+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 125812736 unmapped: 20692992 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1365278 data_alloc: 234881024 data_used: 13447168
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:15:16.803934+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 125812736 unmapped: 20692992 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9672000/0x0/0x4ffc00000, data 0x1712b2d/0x17da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9672000/0x0/0x4ffc00000, data 0x1712b2d/0x17da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:15:17.804106+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 125812736 unmapped: 20692992 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 18.411964417s of 18.474147797s, submitted: 12
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:15:18.804421+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 127860736 unmapped: 18644992 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:15:19.804580+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f91f5000/0x0/0x4ffc00000, data 0x1b8fb2d/0x1c57000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 127901696 unmapped: 18604032 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:15:20.804715+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 127901696 unmapped: 18604032 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1404590 data_alloc: 234881024 data_used: 13590528
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:15:21.804903+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 127901696 unmapped: 18604032 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:15:22.805046+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f91ef000/0x0/0x4ffc00000, data 0x1b95b2d/0x1c5d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 127909888 unmapped: 18595840 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:15:23.805449+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 127909888 unmapped: 18595840 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:15:24.805584+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 127844352 unmapped: 18661376 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f91ef000/0x0/0x4ffc00000, data 0x1b95b2d/0x1c5d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:15:25.805741+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 127844352 unmapped: 18661376 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1403294 data_alloc: 234881024 data_used: 13590528
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:15:26.805880+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f91ec000/0x0/0x4ffc00000, data 0x1b98b2d/0x1c60000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 127852544 unmapped: 18653184 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:15:27.805999+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f91ec000/0x0/0x4ffc00000, data 0x1b98b2d/0x1c60000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 127852544 unmapped: 18653184 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f91ec000/0x0/0x4ffc00000, data 0x1b98b2d/0x1c60000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:15:28.806195+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 127852544 unmapped: 18653184 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:15:29.806331+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 127852544 unmapped: 18653184 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:15:30.806508+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f91ec000/0x0/0x4ffc00000, data 0x1b98b2d/0x1c60000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 127852544 unmapped: 18653184 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1403294 data_alloc: 234881024 data_used: 13590528
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:15:31.806695+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 127852544 unmapped: 18653184 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:15:32.806915+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 127860736 unmapped: 18644992 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce2068800 session 0x55fcdfea65a0
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 15.284329414s of 15.530404091s, submitted: 41
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce212e800 session 0x55fcdeddcb40
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:15:33.807061+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fcdf8c9c00
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123281408 unmapped: 23224320 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fcdf8c9c00 session 0x55fcdf1d1680
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:15:34.807208+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123305984 unmapped: 23199744 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:15:35.807385+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123305984 unmapped: 23199744 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1288387 data_alloc: 218103808 data_used: 7614464
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:15:36.807555+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123305984 unmapped: 23199744 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:15:37.807756+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123305984 unmapped: 23199744 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:15:38.807934+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123305984 unmapped: 23199744 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:15:39.808186+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123305984 unmapped: 23199744 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:15:40.808322+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123305984 unmapped: 23199744 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1288387 data_alloc: 218103808 data_used: 7614464
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:15:41.808474+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123305984 unmapped: 23199744 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:15:42.808686+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce2137800 session 0x55fce23bd2c0
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123305984 unmapped: 23199744 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:15:43.808822+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123305984 unmapped: 23199744 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:15:44.809098+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123305984 unmapped: 23199744 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:15:45.809295+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123305984 unmapped: 23199744 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1288387 data_alloc: 218103808 data_used: 7614464
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:15:46.809442+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123305984 unmapped: 23199744 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:15:47.809621+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123305984 unmapped: 23199744 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:15:48.809819+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123305984 unmapped: 23199744 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:15:49.810056+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123305984 unmapped: 23199744 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:15:50.810276+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123314176 unmapped: 23191552 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1288387 data_alloc: 218103808 data_used: 7614464
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:15:51.810465+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123314176 unmapped: 23191552 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:15:52.810713+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123314176 unmapped: 23191552 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:15:53.810932+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123314176 unmapped: 23191552 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:15:54.811086+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123314176 unmapped: 23191552 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:15:55.811248+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123314176 unmapped: 23191552 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1288387 data_alloc: 218103808 data_used: 7614464
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:15:56.811378+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123314176 unmapped: 23191552 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:15:57.811544+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123314176 unmapped: 23191552 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:15:58.811675+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123322368 unmapped: 23183360 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:15:59.811874+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123322368 unmapped: 23183360 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:16:00.812036+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123322368 unmapped: 23183360 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:16:01.812195+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1288387 data_alloc: 218103808 data_used: 7614464
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123322368 unmapped: 23183360 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:16:02.812360+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123322368 unmapped: 23183360 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:16:03.812573+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123322368 unmapped: 23183360 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:16:04.812740+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123322368 unmapped: 23183360 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:16:05.812891+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123322368 unmapped: 23183360 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fcdf8c8400 session 0x55fce112cf00
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce1128000 session 0x55fce20f70e0
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:16:06.813055+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1288387 data_alloc: 218103808 data_used: 7614464
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123322368 unmapped: 23183360 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:16:07.813226+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123322368 unmapped: 23183360 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:27:30] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Dec 06 10:27:30 compute-0 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:27:30] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
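
The same scrape is logged twice because the mgr runs containerized: the ceph-...-mgr-compute-0-qhdjwa unit relays the container's stdout while ceph-mgr[74618] is the daemon's own cherrypy access log. 192.168.122.100 is the remote (Prometheus 2.51.0) address; it pulled /metrics and got a 200 with a 48455-byte body. Fetching the endpoint by hand shows the same exposition text. In the sketch below, the target host is compute-0 (the node the mgr runs on per these lines) and 9283 is the ceph-mgr prometheus module's default port; both are assumptions, since the access log records neither the local hostname nor the port:

```python
from urllib.request import urlopen

# Host and port are assumed (compute-0 per the log; 9283 is the prometheus
# module default) -- the access-log line does not record either.
with urlopen("http://compute-0:9283/metrics", timeout=5) as resp:
    body = resp.read().decode()
    print(resp.status, len(body), "bytes")  # the scrape above logged 200 / 48455
print("\n".join(body.splitlines()[:3]))     # first exposition lines
```
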
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:16:08.813365+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123322368 unmapped: 23183360 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 35.873546600s of 35.962779999s, submitted: 29
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:16:09.813523+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123322368 unmapped: 23183360 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:16:10.813658+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123322368 unmapped: 23183360 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:16:11.813819+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1288255 data_alloc: 218103808 data_used: 7614464
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123322368 unmapped: 23183360 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:16:12.813984+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123322368 unmapped: 23183360 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:16:13.814162+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123322368 unmapped: 23183360 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:16:14.814370+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123322368 unmapped: 23183360 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:16:15.814530+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123322368 unmapped: 23183360 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:16:16.814698+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1288255 data_alloc: 218103808 data_used: 7614464
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123322368 unmapped: 23183360 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:16:17.814875+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123322368 unmapped: 23183360 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:16:18.815026+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123322368 unmapped: 23183360 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:16:19.815192+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123322368 unmapped: 23183360 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:16:20.815322+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123322368 unmapped: 23183360 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:16:21.815462+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1288255 data_alloc: 218103808 data_used: 7614464
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123322368 unmapped: 23183360 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:16:22.815677+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123322368 unmapped: 23183360 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:16:23.815865+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123330560 unmapped: 23175168 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:16:24.816022+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123330560 unmapped: 23175168 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:16:25.816164+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123330560 unmapped: 23175168 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:16:26.816335+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1288255 data_alloc: 218103808 data_used: 7614464
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123330560 unmapped: 23175168 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:16:27.816570+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123330560 unmapped: 23175168 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:16:28.816715+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123330560 unmapped: 23175168 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:16:29.816929+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 ms_handle_reset con 0x55fce2128c00 session 0x55fce19fe780
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123330560 unmapped: 23175168 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:16:30.817278+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 21.201023102s of 21.206556320s, submitted: 1
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123330560 unmapped: 23175168 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:16:31.817587+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1288123 data_alloc: 218103808 data_used: 7614464
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123330560 unmapped: 23175168 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:16:32.817770+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123330560 unmapped: 23175168 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:16:33.817912+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123330560 unmapped: 23175168 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:16:34.818056+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123330560 unmapped: 23175168 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:16:35.818173+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123330560 unmapped: 23175168 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:16:36.818381+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1288123 data_alloc: 218103808 data_used: 7614464
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123330560 unmapped: 23175168 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:16:37.818553+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123330560 unmapped: 23175168 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:16:38.818730+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123338752 unmapped: 23166976 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:16:39.818923+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123338752 unmapped: 23166976 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:16:40.819103+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fce2128c00
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.126986504s of 10.132149696s, submitted: 1
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123338752 unmapped: 23166976 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:16:41.819278+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1288255 data_alloc: 218103808 data_used: 7614464
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123338752 unmapped: 23166976 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:16:42.819452+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123338752 unmapped: 23166976 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:16:43.819604+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123338752 unmapped: 23166976 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:16:44.819726+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123338752 unmapped: 23166976 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:16:45.819870+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123338752 unmapped: 23166976 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:16:46.820028+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1288255 data_alloc: 218103808 data_used: 7614464
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: handle_auth_request added challenge on 0x55fcdf8c8400
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123346944 unmapped: 23158784 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:16:47.820179+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123346944 unmapped: 23158784 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:16:48.820382+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123346944 unmapped: 23158784 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:16:49.820584+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123346944 unmapped: 23158784 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:16:50.820790+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123346944 unmapped: 23158784 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:16:51.821010+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1289767 data_alloc: 218103808 data_used: 7614464
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123346944 unmapped: 23158784 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:16:52.821146+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123346944 unmapped: 23158784 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:16:53.821288+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123346944 unmapped: 23158784 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:16:54.821447+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123346944 unmapped: 23158784 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:16:55.821545+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123346944 unmapped: 23158784 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:16:56.821670+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1289767 data_alloc: 218103808 data_used: 7614464
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123346944 unmapped: 23158784 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: do_command 'config diff' '{prefix=config diff}'
Dec 06 10:27:30 compute-0 ceph-osd[82803]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Dec 06 10:27:30 compute-0 ceph-osd[82803]: do_command 'config show' '{prefix=config show}'
Dec 06 10:27:30 compute-0 ceph-osd[82803]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 16.665910721s of 16.765491486s, submitted: 2
Dec 06 10:27:30 compute-0 ceph-osd[82803]: do_command 'counter dump' '{prefix=counter dump}'
Dec 06 10:27:30 compute-0 ceph-osd[82803]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:16:57.821770+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: do_command 'counter schema' '{prefix=counter schema}'
Dec 06 10:27:30 compute-0 ceph-osd[82803]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123387904 unmapped: 23117824 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:16:58.821887+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123453440 unmapped: 23052288 heap: 146505728 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:16:59.822033+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: do_command 'log dump' '{prefix=log dump}'
Dec 06 10:27:30 compute-0 ceph-osd[82803]: do_command 'log dump' '{prefix=log dump}' result is 0 bytes
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123469824 unmapped: 34078720 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: do_command 'perf dump' '{prefix=perf dump}'
Dec 06 10:27:30 compute-0 ceph-osd[82803]: do_command 'perf dump' '{prefix=perf dump}' result is 0 bytes
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:17:00.822153+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: do_command 'perf histogram dump' '{prefix=perf histogram dump}'
Dec 06 10:27:30 compute-0 ceph-osd[82803]: do_command 'perf histogram dump' '{prefix=perf histogram dump}' result is 0 bytes
Dec 06 10:27:30 compute-0 ceph-osd[82803]: do_command 'perf schema' '{prefix=perf schema}'
Dec 06 10:27:30 compute-0 ceph-osd[82803]: do_command 'perf schema' '{prefix=perf schema}' result is 0 bytes
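
The do_command entries above record an admin-socket sweep: config diff/show, counter dump/schema, log dump, perf dump/schema, and perf histogram dump, each echoed once on receipt and once on completion with a result size. These are exactly the commands `ceph daemon` issues against the OSD's .asok socket, and hitting every endpoint back to back looks like a metrics or telemetry collector making its rounds; notice that right after the dumps the tune_memory heap figure jumps from 146505728 to 157548544 bytes (~10.5 MiB), plausibly the cost of materializing those responses. A sketch of issuing the same commands via the CLI (requires local access to the daemon's admin socket on compute-0):

```python
import json
import subprocess

def daemon_cmd(name, *cmd):
    # "ceph daemon <name> <cmd...>" talks to /var/run/ceph/*.asok locally.
    out = subprocess.check_output(["ceph", "daemon", name, *cmd])
    return json.loads(out)

for cmd in (("config", "diff"), ("counter", "schema"), ("perf", "dump")):
    size = len(json.dumps(daemon_cmd("osd.1", *cmd)))
    print(" ".join(cmd), "->", size, "bytes")
```
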
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123805696 unmapped: 33742848 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:17:01.822280+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1289635 data_alloc: 218103808 data_used: 7614464
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123805696 unmapped: 33742848 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:17:02.822455+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123805696 unmapped: 33742848 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:17:03.822552+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123805696 unmapped: 33742848 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:17:04.822686+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123813888 unmapped: 33734656 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:17:05.822842+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123813888 unmapped: 33734656 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:17:06.822983+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1289635 data_alloc: 218103808 data_used: 7614464
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123813888 unmapped: 33734656 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:17:07.823140+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123813888 unmapped: 33734656 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:17:08.823296+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123813888 unmapped: 33734656 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:17:09.823451+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123813888 unmapped: 33734656 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:17:10.823611+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123813888 unmapped: 33734656 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:17:11.823744+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1289635 data_alloc: 218103808 data_used: 7614464
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123813888 unmapped: 33734656 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:17:12.823911+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123813888 unmapped: 33734656 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:17:13.824080+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123813888 unmapped: 33734656 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:17:14.824215+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123813888 unmapped: 33734656 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:17:15.824403+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123813888 unmapped: 33734656 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:17:16.824582+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1289635 data_alloc: 218103808 data_used: 7614464
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123813888 unmapped: 33734656 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:17:17.824716+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123813888 unmapped: 33734656 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:17:18.824833+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123813888 unmapped: 33734656 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:17:19.824985+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123813888 unmapped: 33734656 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:17:20.825109+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123813888 unmapped: 33734656 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:17:21.825241+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1289635 data_alloc: 218103808 data_used: 7614464
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123813888 unmapped: 33734656 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:17:22.825440+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123813888 unmapped: 33734656 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:17:23.825688+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123822080 unmapped: 33726464 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:17:24.825838+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123822080 unmapped: 33726464 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:17:25.825968+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123822080 unmapped: 33726464 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:17:26.826111+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1289635 data_alloc: 218103808 data_used: 7614464
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123822080 unmapped: 33726464 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:17:27.826251+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123822080 unmapped: 33726464 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:17:28.826512+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123822080 unmapped: 33726464 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:17:29.826693+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123822080 unmapped: 33726464 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:17:30.826884+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123822080 unmapped: 33726464 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:17:31.827053+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1289635 data_alloc: 218103808 data_used: 7614464
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123822080 unmapped: 33726464 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:17:32.827307+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123822080 unmapped: 33726464 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:17:33.827532+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123822080 unmapped: 33726464 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:17:34.827715+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123822080 unmapped: 33726464 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:17:35.827915+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123822080 unmapped: 33726464 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:17:36.828071+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1289635 data_alloc: 218103808 data_used: 7614464
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123822080 unmapped: 33726464 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:17:37.828252+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123822080 unmapped: 33726464 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:17:38.828501+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123822080 unmapped: 33726464 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:17:39.828669+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123830272 unmapped: 33718272 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:17:40.829071+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123830272 unmapped: 33718272 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:17:41.829522+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1289635 data_alloc: 218103808 data_used: 7614464
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123830272 unmapped: 33718272 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:17:42.829788+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123830272 unmapped: 33718272 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:17:43.830006+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123830272 unmapped: 33718272 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:17:44.830159+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123830272 unmapped: 33718272 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:17:45.830359+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123830272 unmapped: 33718272 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:17:46.830521+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1289635 data_alloc: 218103808 data_used: 7614464
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123830272 unmapped: 33718272 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:17:47.830684+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123830272 unmapped: 33718272 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:17:48.830805+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123830272 unmapped: 33718272 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:17:49.831067+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123830272 unmapped: 33718272 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:17:50.831191+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123830272 unmapped: 33718272 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:17:51.831348+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1289635 data_alloc: 218103808 data_used: 7614464
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123830272 unmapped: 33718272 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:17:52.832026+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123830272 unmapped: 33718272 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:17:53.832263+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123830272 unmapped: 33718272 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:17:54.832395+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123830272 unmapped: 33718272 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:17:55.832581+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123830272 unmapped: 33718272 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:17:56.832767+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1289635 data_alloc: 218103808 data_used: 7614464
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123830272 unmapped: 33718272 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:17:57.833036+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123830272 unmapped: 33718272 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:17:58.833234+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123830272 unmapped: 33718272 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:17:59.833536+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123830272 unmapped: 33718272 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:18:00.833694+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123830272 unmapped: 33718272 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:18:01.833909+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1289635 data_alloc: 218103808 data_used: 7614464
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123830272 unmapped: 33718272 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:18:02.834099+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123830272 unmapped: 33718272 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:18:03.834416+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123830272 unmapped: 33718272 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:18:04.834581+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123830272 unmapped: 33718272 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:18:05.834782+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123830272 unmapped: 33718272 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:18:06.834909+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1289635 data_alloc: 218103808 data_used: 7614464
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123830272 unmapped: 33718272 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:18:07.835118+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123830272 unmapped: 33718272 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:18:08.835281+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123830272 unmapped: 33718272 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:18:09.835541+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123830272 unmapped: 33718272 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:18:10.835684+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123830272 unmapped: 33718272 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:18:11.835854+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1289635 data_alloc: 218103808 data_used: 7614464
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123838464 unmapped: 33710080 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:18:12.836070+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123838464 unmapped: 33710080 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:18:13.836422+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123838464 unmapped: 33710080 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:18:14.836812+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123838464 unmapped: 33710080 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:18:15.837150+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123838464 unmapped: 33710080 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:18:16.837426+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1289635 data_alloc: 218103808 data_used: 7614464
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123838464 unmapped: 33710080 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:18:17.837724+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123838464 unmapped: 33710080 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:18:18.837903+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:18:19.838057+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123838464 unmapped: 33710080 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:18:20.838184+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123838464 unmapped: 33710080 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:18:21.838413+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123838464 unmapped: 33710080 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1289635 data_alloc: 218103808 data_used: 7614464
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:18:22.838620+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123838464 unmapped: 33710080 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:18:23.838868+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123838464 unmapped: 33710080 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:18:24.839037+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123838464 unmapped: 33710080 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:18:25.839217+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123838464 unmapped: 33710080 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:18:26.839448+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123838464 unmapped: 33710080 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1289635 data_alloc: 218103808 data_used: 7614464
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:18:27.839832+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123838464 unmapped: 33710080 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:18:28.840103+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123838464 unmapped: 33710080 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:18:29.840316+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123838464 unmapped: 33710080 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:18:30.840556+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123846656 unmapped: 33701888 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:18:31.840791+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123846656 unmapped: 33701888 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1289635 data_alloc: 218103808 data_used: 7614464
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:18:32.841015+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123846656 unmapped: 33701888 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:18:33.841191+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123846656 unmapped: 33701888 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:18:34.841410+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123846656 unmapped: 33701888 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:18:35.841721+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123846656 unmapped: 33701888 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:18:36.841864+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123846656 unmapped: 33701888 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1289635 data_alloc: 218103808 data_used: 7614464
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:18:37.842125+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123846656 unmapped: 33701888 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:18:38.842255+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123846656 unmapped: 33701888 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:18:39.842403+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123846656 unmapped: 33701888 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:18:40.842609+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123846656 unmapped: 33701888 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:18:41.842879+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123846656 unmapped: 33701888 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1289635 data_alloc: 218103808 data_used: 7614464
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:18:42.843085+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123846656 unmapped: 33701888 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:18:43.843280+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123846656 unmapped: 33701888 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:18:44.843447+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123846656 unmapped: 33701888 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:18:45.843752+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123846656 unmapped: 33701888 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:18:46.843955+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123854848 unmapped: 33693696 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1289635 data_alloc: 218103808 data_used: 7614464
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:18:47.844120+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123854848 unmapped: 33693696 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:18:48.844228+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123854848 unmapped: 33693696 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:18:49.844447+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123854848 unmapped: 33693696 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:18:50.844578+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123854848 unmapped: 33693696 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:18:51.844744+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123854848 unmapped: 33693696 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1289635 data_alloc: 218103808 data_used: 7614464
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:18:52.844883+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123854848 unmapped: 33693696 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:18:53.845035+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123854848 unmapped: 33693696 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:18:54.845216+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123854848 unmapped: 33693696 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:18:55.845406+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123854848 unmapped: 33693696 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:18:56.845545+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123854848 unmapped: 33693696 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1289635 data_alloc: 218103808 data_used: 7614464
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:18:57.845675+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123854848 unmapped: 33693696 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:18:58.845797+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123854848 unmapped: 33693696 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:18:59.845922+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123854848 unmapped: 33693696 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:19:00.846076+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123854848 unmapped: 33693696 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:19:01.846245+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123854848 unmapped: 33693696 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1289635 data_alloc: 218103808 data_used: 7614464
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:19:02.846446+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123854848 unmapped: 33693696 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:19:03.846625+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123854848 unmapped: 33693696 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:19:04.846815+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123854848 unmapped: 33693696 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:19:05.846946+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123863040 unmapped: 33685504 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:19:06.847134+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123863040 unmapped: 33685504 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1289635 data_alloc: 218103808 data_used: 7614464
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 2400.1 total, 600.0 interval
                                           Cumulative writes: 13K writes, 49K keys, 13K commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s
                                           Cumulative WAL: 13K writes, 4027 syncs, 3.40 writes per sync, written: 0.03 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 2640 writes, 8560 keys, 2640 commit groups, 1.0 writes per commit group, ingest: 7.62 MB, 0.01 MB/s
                                           Interval WAL: 2640 writes, 1122 syncs, 2.35 writes per sync, written: 0.01 GB, 0.01 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
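The ratios in the stats dump above are derived from the raw counters printed on the same lines. A quick sketch re-deriving the interval figures (2640 WAL writes, 1122 syncs, 2640 commit groups, 7.62 MB ingested over the 600 s interval); only the arithmetic mirrors RocksDB, the function itself is illustrative:

    # Re-derive the per-interval ratios RocksDB prints in its DB Stats dump.
    # Inputs are the raw counters from the dump above.
    def db_stats_ratios(writes, syncs, commit_groups, ingest_mb, secs):
        writes_per_sync  = writes / syncs if syncs else 0.0
        writes_per_group = writes / commit_groups if commit_groups else 0.0
        mb_per_sec       = ingest_mb / secs
        return writes_per_sync, writes_per_group, mb_per_sec

    wps, wpg, rate = db_stats_ratios(2640, 1122, 2640, 7.62, 600.0)
    print(f"{wps:.2f} writes/sync, {wpg:.1f} writes/group, {rate:.2f} MB/s")
    # -> 2.35 writes/sync, 1.0 writes/group, 0.01 MB/s, matching the dump

The cumulative line follows the same arithmetic: 3.40 writes per sync over 4027 syncs implies roughly 13.7K WAL writes, which the dump truncates to "13K".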
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:19:07.847330+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123863040 unmapped: 33685504 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:19:08.847512+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123863040 unmapped: 33685504 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:19:09.847735+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123863040 unmapped: 33685504 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:19:10.847921+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123871232 unmapped: 33677312 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:19:11.848114+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123871232 unmapped: 33677312 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1289635 data_alloc: 218103808 data_used: 7614464
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:19:12.848325+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123871232 unmapped: 33677312 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:19:13.848575+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123871232 unmapped: 33677312 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:19:14.848781+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123871232 unmapped: 33677312 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:19:15.848961+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123871232 unmapped: 33677312 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:19:16.849144+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123871232 unmapped: 33677312 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1289635 data_alloc: 218103808 data_used: 7614464
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:19:17.849346+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123871232 unmapped: 33677312 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:19:18.849549+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123871232 unmapped: 33677312 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:19:19.849748+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123871232 unmapped: 33677312 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:19:20.849873+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123871232 unmapped: 33677312 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:19:21.849996+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123871232 unmapped: 33677312 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1289635 data_alloc: 218103808 data_used: 7614464
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:19:22.850114+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123871232 unmapped: 33677312 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:19:23.850284+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123871232 unmapped: 33677312 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:19:24.850398+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123871232 unmapped: 33677312 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:19:25.850589+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123871232 unmapped: 33677312 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:19:26.850780+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123879424 unmapped: 33669120 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1289635 data_alloc: 218103808 data_used: 7614464
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:19:27.851033+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123879424 unmapped: 33669120 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:19:28.851213+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123879424 unmapped: 33669120 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:19:29.851575+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123879424 unmapped: 33669120 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:19:30.851756+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123879424 unmapped: 33669120 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:19:31.851968+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123879424 unmapped: 33669120 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1289635 data_alloc: 218103808 data_used: 7614464
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:19:32.852205+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123879424 unmapped: 33669120 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:19:33.852381+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123879424 unmapped: 33669120 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:19:34.852596+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123887616 unmapped: 33660928 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:19:35.852754+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123887616 unmapped: 33660928 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:19:36.852925+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123887616 unmapped: 33660928 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1289635 data_alloc: 218103808 data_used: 7614464
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:19:37.853109+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123887616 unmapped: 33660928 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:19:38.853243+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123887616 unmapped: 33660928 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:19:39.853382+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123887616 unmapped: 33660928 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:19:40.853545+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123887616 unmapped: 33660928 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:19:41.853773+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123887616 unmapped: 33660928 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1289635 data_alloc: 218103808 data_used: 7614464
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:19:42.854073+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123887616 unmapped: 33660928 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:19:43.854331+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123887616 unmapped: 33660928 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:19:44.854606+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123887616 unmapped: 33660928 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:19:45.854813+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123887616 unmapped: 33660928 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:19:46.854988+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123887616 unmapped: 33660928 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1289635 data_alloc: 218103808 data_used: 7614464
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:19:47.855185+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123887616 unmapped: 33660928 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:19:48.855404+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123887616 unmapped: 33660928 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:19:49.856006+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123887616 unmapped: 33660928 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:19:50.856358+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123895808 unmapped: 33652736 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:19:51.856611+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123895808 unmapped: 33652736 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1289635 data_alloc: 218103808 data_used: 7614464
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:19:52.856877+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123895808 unmapped: 33652736 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:19:53.857082+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123895808 unmapped: 33652736 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:19:54.857238+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123895808 unmapped: 33652736 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:19:55.857429+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123895808 unmapped: 33652736 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:19:56.857632+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123895808 unmapped: 33652736 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1289635 data_alloc: 218103808 data_used: 7614464
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:19:57.858016+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123895808 unmapped: 33652736 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:19:58.858209+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123895808 unmapped: 33652736 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:19:59.858501+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123895808 unmapped: 33652736 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:20:00.858668+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123895808 unmapped: 33652736 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:20:01.858797+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123895808 unmapped: 33652736 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1289635 data_alloc: 218103808 data_used: 7614464
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:20:02.858962+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123895808 unmapped: 33652736 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:20:03.859234+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123895808 unmapped: 33652736 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:20:04.859411+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123895808 unmapped: 33652736 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:20:05.859542+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123895808 unmapped: 33652736 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:20:06.859812+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123904000 unmapped: 33644544 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1289635 data_alloc: 218103808 data_used: 7614464
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:20:07.860009+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123904000 unmapped: 33644544 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:20:08.860232+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123904000 unmapped: 33644544 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:20:09.860415+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123904000 unmapped: 33644544 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:20:10.860585+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123904000 unmapped: 33644544 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:20:11.860738+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123904000 unmapped: 33644544 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:20:12.860918+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1289635 data_alloc: 218103808 data_used: 7614464
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123904000 unmapped: 33644544 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:20:13.861065+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123904000 unmapped: 33644544 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:20:14.861388+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123904000 unmapped: 33644544 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:20:15.861546+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123904000 unmapped: 33644544 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:20:16.861720+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123904000 unmapped: 33644544 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:20:17.861944+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1289635 data_alloc: 218103808 data_used: 7614464
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123904000 unmapped: 33644544 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:20:18.862082+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123912192 unmapped: 33636352 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:20:19.862236+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123912192 unmapped: 33636352 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:20:20.862411+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123912192 unmapped: 33636352 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:20:21.862596+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123912192 unmapped: 33636352 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:20:22.862774+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1289635 data_alloc: 218103808 data_used: 7614464
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123912192 unmapped: 33636352 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:20:23.862971+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123912192 unmapped: 33636352 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:20:24.863134+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123912192 unmapped: 33636352 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:20:25.863384+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123912192 unmapped: 33636352 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:20:26.863590+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123912192 unmapped: 33636352 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:20:27.863739+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1289635 data_alloc: 218103808 data_used: 7614464
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123912192 unmapped: 33636352 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:20:28.863904+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123912192 unmapped: 33636352 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:20:29.864079+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123920384 unmapped: 33628160 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:20:30.864228+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123920384 unmapped: 33628160 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:20:31.864544+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123920384 unmapped: 33628160 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:20:32.864850+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1289635 data_alloc: 218103808 data_used: 7614464
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123920384 unmapped: 33628160 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:20:33.865012+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123920384 unmapped: 33628160 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:20:34.865279+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123920384 unmapped: 33628160 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:20:35.865577+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123920384 unmapped: 33628160 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:20:36.865839+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123920384 unmapped: 33628160 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:20:37.865995+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1289635 data_alloc: 218103808 data_used: 7614464
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123920384 unmapped: 33628160 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:20:38.866176+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123920384 unmapped: 33628160 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:20:39.866339+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123920384 unmapped: 33628160 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:20:40.866553+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123920384 unmapped: 33628160 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:20:41.866751+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123920384 unmapped: 33628160 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:20:42.866935+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1289635 data_alloc: 218103808 data_used: 7614464
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123920384 unmapped: 33628160 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:20:43.867114+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123920384 unmapped: 33628160 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:20:44.867321+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123920384 unmapped: 33628160 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:20:45.867517+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123928576 unmapped: 33619968 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:20:46.867708+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123928576 unmapped: 33619968 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:20:47.867907+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1289635 data_alloc: 218103808 data_used: 7614464
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123928576 unmapped: 33619968 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:20:48.868142+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123928576 unmapped: 33619968 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:20:49.868312+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123928576 unmapped: 33619968 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:20:50.868594+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123928576 unmapped: 33619968 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:20:51.868733+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123928576 unmapped: 33619968 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:20:52.868919+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1289635 data_alloc: 218103808 data_used: 7614464
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123928576 unmapped: 33619968 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:20:53.869054+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123928576 unmapped: 33619968 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:20:54.869262+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123928576 unmapped: 33619968 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:20:55.869470+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123928576 unmapped: 33619968 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:20:56.869702+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123936768 unmapped: 33611776 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:20:57.872608+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1289635 data_alloc: 218103808 data_used: 7614464
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123936768 unmapped: 33611776 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:20:58.872771+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123936768 unmapped: 33611776 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:20:59.872989+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123936768 unmapped: 33611776 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:21:00.873167+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123936768 unmapped: 33611776 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:21:01.873413+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123936768 unmapped: 33611776 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:21:02.873656+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1289635 data_alloc: 218103808 data_used: 7614464
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123936768 unmapped: 33611776 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:21:03.873876+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123936768 unmapped: 33611776 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:21:04.874050+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123936768 unmapped: 33611776 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:21:05.874244+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123936768 unmapped: 33611776 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:21:06.874421+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123936768 unmapped: 33611776 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:21:07.874568+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1289635 data_alloc: 218103808 data_used: 7614464
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123936768 unmapped: 33611776 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:21:08.874767+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123936768 unmapped: 33611776 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:21:09.874960+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123936768 unmapped: 33611776 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:21:10.875122+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123936768 unmapped: 33611776 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:21:11.875264+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123936768 unmapped: 33611776 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:21:12.875453+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1289635 data_alloc: 218103808 data_used: 7614464
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123936768 unmapped: 33611776 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:21:13.875596+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123936768 unmapped: 33611776 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:21:14.875810+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123936768 unmapped: 33611776 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:21:15.876065+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123641856 unmapped: 33906688 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:21:16.876235+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123641856 unmapped: 33906688 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:21:17.876418+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1289635 data_alloc: 218103808 data_used: 7614464
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123641856 unmapped: 33906688 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:21:18.876750+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123641856 unmapped: 33906688 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:21:19.876935+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123641856 unmapped: 33906688 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:21:20.877122+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123650048 unmapped: 33898496 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:21:21.877252+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123650048 unmapped: 33898496 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:21:22.877402+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1289635 data_alloc: 218103808 data_used: 7614464
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123650048 unmapped: 33898496 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:21:23.877532+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123650048 unmapped: 33898496 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:21:24.877695+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123650048 unmapped: 33898496 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:21:25.877843+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123650048 unmapped: 33898496 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:21:26.878039+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123650048 unmapped: 33898496 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:21:27.878198+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1289635 data_alloc: 218103808 data_used: 7614464
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123650048 unmapped: 33898496 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:21:28.878379+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123650048 unmapped: 33898496 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:21:29.878559+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 272.928344727s of 272.932678223s, submitted: 1
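[annotation] The `_kv_sync_thread utilization` line above is the one non-repeating measurement in this stretch. The implied duty cycle, computed from the figures copied verbatim from the line (reading `submitted: 1` as one kv transaction over the whole window is an assumption about the counter's meaning):

```python
# Hedged sketch: duty cycle implied by the _kv_sync_thread utilization
# line (idle and window values copied verbatim from the log).
idle, window = 272.928344727, 272.932678223
busy = window - idle
print(f"busy {busy * 1000:.1f} ms of {window:.0f} s "
      f"({busy / window:.4%} duty cycle)")
# ~4.3 ms busy in ~273 s (~0.0016%): the kv sync thread was essentially
# idle for the whole window, matching the otherwise quiet log.
```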
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123650048 unmapped: 33898496 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:21:30.878749+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123691008 unmapped: 33857536 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:21:31.878913+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123748352 unmapped: 33800192 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:21:32.883006+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [0,0,0,0,2])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1289635 data_alloc: 218103808 data_used: 7614464
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123781120 unmapped: 33767424 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:21:33.883209+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123822080 unmapped: 33726464 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:21:34.883409+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123871232 unmapped: 33677312 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:21:35.883595+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [0,0,0,0,0,0,1])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123887616 unmapped: 33660928 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:21:36.883821+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123887616 unmapped: 33660928 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:21:37.884037+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1289635 data_alloc: 218103808 data_used: 7614464
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123887616 unmapped: 33660928 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:21:38.884192+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123887616 unmapped: 33660928 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:21:39.884405+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123887616 unmapped: 33660928 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:21:40.884594+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123887616 unmapped: 33660928 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:21:41.884801+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123895808 unmapped: 33652736 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:21:42.885023+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1289635 data_alloc: 218103808 data_used: 7614464
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123895808 unmapped: 33652736 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:21:43.885206+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123895808 unmapped: 33652736 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:21:44.885421+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123895808 unmapped: 33652736 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:21:45.885626+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123895808 unmapped: 33652736 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:21:46.885752+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123895808 unmapped: 33652736 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:21:47.885926+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1289635 data_alloc: 218103808 data_used: 7614464
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123895808 unmapped: 33652736 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:21:48.886146+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123904000 unmapped: 33644544 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:21:49.886336+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123904000 unmapped: 33644544 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:21:50.886554+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123904000 unmapped: 33644544 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:21:51.886718+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123904000 unmapped: 33644544 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:21:52.886909+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1289635 data_alloc: 218103808 data_used: 7614464
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123904000 unmapped: 33644544 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:21:53.887103+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:21:54.887310+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123912192 unmapped: 33636352 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:21:55.887566+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123912192 unmapped: 33636352 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:21:56.887760+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123912192 unmapped: 33636352 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:21:57.887944+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123912192 unmapped: 33636352 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1289635 data_alloc: 218103808 data_used: 7614464
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:21:58.888146+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123920384 unmapped: 33628160 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:21:59.888348+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123920384 unmapped: 33628160 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:22:00.888566+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123920384 unmapped: 33628160 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:22:01.888820+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123920384 unmapped: 33628160 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:22:02.889071+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123920384 unmapped: 33628160 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1289635 data_alloc: 218103808 data_used: 7614464
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:22:03.889284+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123920384 unmapped: 33628160 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:22:04.889456+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123920384 unmapped: 33628160 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:22:05.889647+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123920384 unmapped: 33628160 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:22:06.889778+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123920384 unmapped: 33628160 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:22:07.889924+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123928576 unmapped: 33619968 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1289635 data_alloc: 218103808 data_used: 7614464
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:22:08.890113+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123928576 unmapped: 33619968 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:22:09.890260+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123928576 unmapped: 33619968 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:22:10.890396+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123928576 unmapped: 33619968 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:22:11.890599+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123928576 unmapped: 33619968 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:22:12.890800+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123928576 unmapped: 33619968 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1289635 data_alloc: 218103808 data_used: 7614464
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:22:13.890926+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123936768 unmapped: 33611776 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:22:14.891077+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123936768 unmapped: 33611776 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:22:15.894259+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123936768 unmapped: 33611776 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:22:16.894419+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123936768 unmapped: 33611776 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:22:17.894550+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123936768 unmapped: 33611776 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1289635 data_alloc: 218103808 data_used: 7614464
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:22:18.894704+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123936768 unmapped: 33611776 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:22:19.894838+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123936768 unmapped: 33611776 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:22:20.894964+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123936768 unmapped: 33611776 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:22:21.895134+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123936768 unmapped: 33611776 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:22:22.895331+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123936768 unmapped: 33611776 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1289635 data_alloc: 218103808 data_used: 7614464
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:22:23.895535+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123944960 unmapped: 33603584 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:22:24.895668+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123944960 unmapped: 33603584 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:22:25.895838+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123944960 unmapped: 33603584 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:22:26.896021+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123944960 unmapped: 33603584 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:22:27.896175+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123944960 unmapped: 33603584 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1289635 data_alloc: 218103808 data_used: 7614464
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:22:28.896343+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123944960 unmapped: 33603584 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:22:29.896488+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123944960 unmapped: 33603584 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:22:30.896643+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123944960 unmapped: 33603584 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:22:31.896799+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123944960 unmapped: 33603584 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:22:32.896957+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123944960 unmapped: 33603584 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1289635 data_alloc: 218103808 data_used: 7614464
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:22:33.897088+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123944960 unmapped: 33603584 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:22:34.897300+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123944960 unmapped: 33603584 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:22:35.897679+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123944960 unmapped: 33603584 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:22:36.897810+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123944960 unmapped: 33603584 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:22:37.897941+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123944960 unmapped: 33603584 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1289635 data_alloc: 218103808 data_used: 7614464
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:22:38.898159+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123944960 unmapped: 33603584 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:22:39.898433+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123944960 unmapped: 33603584 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:22:40.898779+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123944960 unmapped: 33603584 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:22:41.898934+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123944960 unmapped: 33603584 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:22:42.899149+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123944960 unmapped: 33603584 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1289635 data_alloc: 218103808 data_used: 7614464
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:22:43.899385+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123944960 unmapped: 33603584 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:22:44.899589+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123953152 unmapped: 33595392 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:22:45.899901+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123953152 unmapped: 33595392 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:22:46.900092+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123953152 unmapped: 33595392 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:22:47.900309+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123953152 unmapped: 33595392 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1289635 data_alloc: 218103808 data_used: 7614464
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:22:48.900652+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123953152 unmapped: 33595392 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:22:49.900889+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123953152 unmapped: 33595392 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:22:50.901165+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123953152 unmapped: 33595392 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:22:51.901359+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123953152 unmapped: 33595392 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:22:52.901630+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123953152 unmapped: 33595392 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1289635 data_alloc: 218103808 data_used: 7614464
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:22:53.901819+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123953152 unmapped: 33595392 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:22:54.902038+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123953152 unmapped: 33595392 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:22:55.902235+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123953152 unmapped: 33595392 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:22:56.902422+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123953152 unmapped: 33595392 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:22:57.902578+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123953152 unmapped: 33595392 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1289635 data_alloc: 218103808 data_used: 7614464
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:22:58.902790+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123953152 unmapped: 33595392 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:22:59.902909+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123953152 unmapped: 33595392 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:23:00.903071+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123953152 unmapped: 33595392 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:23:01.903216+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123953152 unmapped: 33595392 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:23:02.903401+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123953152 unmapped: 33595392 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1289635 data_alloc: 218103808 data_used: 7614464
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:23:03.903553+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123953152 unmapped: 33595392 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:23:04.903682+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123953152 unmapped: 33595392 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:23:05.903845+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123953152 unmapped: 33595392 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:23:06.903985+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123953152 unmapped: 33595392 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:23:07.904152+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123953152 unmapped: 33595392 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1289635 data_alloc: 218103808 data_used: 7614464
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:23:08.904343+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123953152 unmapped: 33595392 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:23:09.904557+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123953152 unmapped: 33595392 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:23:10.904718+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123953152 unmapped: 33595392 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:23:11.904905+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123953152 unmapped: 33595392 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:23:12.905122+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123953152 unmapped: 33595392 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1289635 data_alloc: 218103808 data_used: 7614464
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:23:13.905269+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123953152 unmapped: 33595392 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:23:14.921250+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123953152 unmapped: 33595392 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:23:15.921436+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123953152 unmapped: 33595392 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:23:16.921578+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123953152 unmapped: 33595392 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:23:17.921751+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123953152 unmapped: 33595392 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1289635 data_alloc: 218103808 data_used: 7614464
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:23:18.921899+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123953152 unmapped: 33595392 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:23:19.922059+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123953152 unmapped: 33595392 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:23:20.922198+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123953152 unmapped: 33595392 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:23:21.922326+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123953152 unmapped: 33595392 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:23:22.922816+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123953152 unmapped: 33595392 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1289635 data_alloc: 218103808 data_used: 7614464
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:23:23.922951+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123953152 unmapped: 33595392 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:23:24.923102+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123953152 unmapped: 33595392 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:23:25.923305+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123953152 unmapped: 33595392 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:23:26.923616+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123953152 unmapped: 33595392 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:23:27.923777+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123953152 unmapped: 33595392 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1289635 data_alloc: 218103808 data_used: 7614464
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:23:28.923904+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123953152 unmapped: 33595392 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:23:29.924061+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123953152 unmapped: 33595392 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:23:30.924247+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123953152 unmapped: 33595392 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:23:31.924642+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123953152 unmapped: 33595392 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:23:32.924823+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123953152 unmapped: 33595392 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1289635 data_alloc: 218103808 data_used: 7614464
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:23:33.924951+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123953152 unmapped: 33595392 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:23:34.925117+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123953152 unmapped: 33595392 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:23:35.925346+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123953152 unmapped: 33595392 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:23:36.925443+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123953152 unmapped: 33595392 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:23:37.925621+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123953152 unmapped: 33595392 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1289635 data_alloc: 218103808 data_used: 7614464
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:23:38.925791+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123953152 unmapped: 33595392 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:23:39.925976+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123953152 unmapped: 33595392 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:23:40.926100+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123953152 unmapped: 33595392 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:23:41.926283+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123953152 unmapped: 33595392 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:23:42.926464+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123953152 unmapped: 33595392 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1289635 data_alloc: 218103808 data_used: 7614464
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:23:43.926684+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123953152 unmapped: 33595392 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:23:44.926849+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123953152 unmapped: 33595392 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:23:45.927004+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123953152 unmapped: 33595392 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:23:46.927137+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123953152 unmapped: 33595392 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:23:47.927226+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123969536 unmapped: 33579008 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1289635 data_alloc: 218103808 data_used: 7614464
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:23:48.927371+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123969536 unmapped: 33579008 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:23:49.927564+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123969536 unmapped: 33579008 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:23:50.927687+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123969536 unmapped: 33579008 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:23:51.927845+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123969536 unmapped: 33579008 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:23:52.928024+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123969536 unmapped: 33579008 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1289635 data_alloc: 218103808 data_used: 7614464
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:23:53.928200+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123969536 unmapped: 33579008 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:23:54.928383+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123969536 unmapped: 33579008 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:23:55.928560+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123969536 unmapped: 33579008 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:23:56.928701+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123969536 unmapped: 33579008 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:23:57.928829+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123969536 unmapped: 33579008 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1289635 data_alloc: 218103808 data_used: 7614464
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:23:58.928982+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123969536 unmapped: 33579008 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:23:59.929157+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123969536 unmapped: 33579008 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:24:00.929563+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123969536 unmapped: 33579008 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:24:01.929769+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123969536 unmapped: 33579008 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:24:02.929954+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123969536 unmapped: 33579008 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1289635 data_alloc: 218103808 data_used: 7614464
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:24:03.930087+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123969536 unmapped: 33579008 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:24:04.930218+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123977728 unmapped: 33570816 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:24:05.930393+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123977728 unmapped: 33570816 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:24:06.930560+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123977728 unmapped: 33570816 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets getting new tickets!
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:24:07.930828+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _finish_auth 0
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:24:07.932071+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123977728 unmapped: 33570816 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1289635 data_alloc: 218103808 data_used: 7614464
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:24:08.931023+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123977728 unmapped: 33570816 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:24:09.931166+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123977728 unmapped: 33570816 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:24:10.931341+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123977728 unmapped: 33570816 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:24:11.931599+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123977728 unmapped: 33570816 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:24:12.931799+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123977728 unmapped: 33570816 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1289635 data_alloc: 218103808 data_used: 7614464
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:24:13.931951+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123985920 unmapped: 33562624 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:24:14.932095+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123985920 unmapped: 33562624 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:24:16.238619+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123985920 unmapped: 33562624 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:24:17.238756+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123985920 unmapped: 33562624 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:24:18.238858+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123985920 unmapped: 33562624 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1289635 data_alloc: 218103808 data_used: 7614464
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:24:19.239031+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123985920 unmapped: 33562624 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:24:20.239208+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123985920 unmapped: 33562624 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:24:21.239363+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123985920 unmapped: 33562624 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:24:22.239529+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123985920 unmapped: 33562624 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:24:23.239696+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123994112 unmapped: 33554432 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1289635 data_alloc: 218103808 data_used: 7614464
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:24:24.239821+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123994112 unmapped: 33554432 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:24:25.239988+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123994112 unmapped: 33554432 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:24:26.240181+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123994112 unmapped: 33554432 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:24:27.240389+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123994112 unmapped: 33554432 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:24:28.240529+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123994112 unmapped: 33554432 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1289635 data_alloc: 218103808 data_used: 7614464
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:24:29.240698+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123994112 unmapped: 33554432 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:24:30.240821+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123994112 unmapped: 33554432 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:24:31.240996+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123994112 unmapped: 33554432 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:24:32.241193+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123994112 unmapped: 33554432 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:24:33.241643+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123994112 unmapped: 33554432 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1289635 data_alloc: 218103808 data_used: 7614464
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:24:34.241880+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123994112 unmapped: 33554432 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:24:35.242066+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123994112 unmapped: 33554432 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:24:36.242204+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123994112 unmapped: 33554432 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:24:37.242334+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123994112 unmapped: 33554432 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:24:38.242498+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123994112 unmapped: 33554432 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1289635 data_alloc: 218103808 data_used: 7614464
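The rocksdb pair above sets the block-cache high-priority pool ratio on two cache shards; 0.285714 and 0.0555556 read as 2/7 and 1/18, which we take to be values Ceph derives while apportioning the commit cache (an inference from the numbers, not from the source). The bluestore.MempoolThread _resize_shards line then shows how the ~2.65 GiB granted by the autotuner is split across shards. A quick accounting sketch over that line:

import re

def parse_resize(line):
    # read the key: value counters after "_resize_shards"
    body = line.split("_resize_shards", 1)[1]
    return {k: int(v) for k, v in re.findall(r"(\w+): (\d+)", body)}

sample = ("bluestore.MempoolThread _resize_shards cache_size: 2845415832 "
          "kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 "
          "kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1289635 "
          "data_alloc: 218103808 data_used: 7614464")
stats = parse_resize(sample)
cache = stats["cache_size"]
shards = ("kv_alloc", "kv_onode_alloc", "meta_alloc", "data_alloc")
for k in shards:
    print(f"{k:>14}: {stats[k] / cache:6.1%}")
granted = sum(stats[k] for k in shards)
print(f"granted: {granted / cache:.1%} of cache_size")
# kv ~42.5%, meta ~40.1%, onode ~8.3%, data ~7.7% -> ~98.5% granted; the
# tiny *_used values show the caches are nearly empty on this idle OSD.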
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:24:39.242634+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123994112 unmapped: 33554432 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:24:40.242854+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123994112 unmapped: 33554432 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:24:41.243034+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124002304 unmapped: 33546240 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:24:42.243248+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124002304 unmapped: 33546240 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:24:43.243459+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124002304 unmapped: 33546240 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1289635 data_alloc: 218103808 data_used: 7614464
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:24:44.243589+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124002304 unmapped: 33546240 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:24:45.243754+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124002304 unmapped: 33546240 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:24:46.243972+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124002304 unmapped: 33546240 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:24:47.244174+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124002304 unmapped: 33546240 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:24:48.244305+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124002304 unmapped: 33546240 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1289635 data_alloc: 218103808 data_used: 7614464
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:24:49.244436+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124002304 unmapped: 33546240 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:24:50.244574+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124002304 unmapped: 33546240 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:24:51.244746+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124002304 unmapped: 33546240 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:24:52.244934+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124002304 unmapped: 33546240 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:24:53.245185+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124002304 unmapped: 33546240 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1289635 data_alloc: 218103808 data_used: 7614464
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:24:54.245369+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124010496 unmapped: 33538048 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:24:55.245542+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124010496 unmapped: 33538048 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:24:56.245749+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124010496 unmapped: 33538048 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:24:57.245933+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124010496 unmapped: 33538048 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:24:58.246076+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124010496 unmapped: 33538048 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1289635 data_alloc: 218103808 data_used: 7614464
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:24:59.246214+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124010496 unmapped: 33538048 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:25:00.246357+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124010496 unmapped: 33538048 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:25:01.246580+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124010496 unmapped: 33538048 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:25:02.246729+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124010496 unmapped: 33538048 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:25:03.246937+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124010496 unmapped: 33538048 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1289635 data_alloc: 218103808 data_used: 7614464
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:25:04.247155+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124010496 unmapped: 33538048 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:25:05.247365+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124010496 unmapped: 33538048 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:25:06.247589+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124018688 unmapped: 33529856 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:25:07.247759+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124018688 unmapped: 33529856 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:25:08.247890+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124018688 unmapped: 33529856 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1289635 data_alloc: 218103808 data_used: 7614464
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:25:09.248052+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124018688 unmapped: 33529856 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:25:10.248251+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124018688 unmapped: 33529856 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:25:11.248384+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124018688 unmapped: 33529856 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:25:12.248554+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124018688 unmapped: 33529856 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:25:13.248732+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124018688 unmapped: 33529856 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1289635 data_alloc: 218103808 data_used: 7614464
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:25:14.248902+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124018688 unmapped: 33529856 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:25:15.249070+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124018688 unmapped: 33529856 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:25:16.249226+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124018688 unmapped: 33529856 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:25:17.249380+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124026880 unmapped: 33521664 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:25:18.249538+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124026880 unmapped: 33521664 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1289635 data_alloc: 218103808 data_used: 7614464
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:25:19.249681+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124026880 unmapped: 33521664 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:25:20.249818+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124026880 unmapped: 33521664 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:25:21.249951+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124026880 unmapped: 33521664 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:25:22.250057+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124026880 unmapped: 33521664 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:25:23.250191+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124026880 unmapped: 33521664 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1289635 data_alloc: 218103808 data_used: 7614464
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:25:24.250326+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124026880 unmapped: 33521664 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:25:25.250463+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124026880 unmapped: 33521664 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:25:26.250649+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124026880 unmapped: 33521664 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:25:27.250779+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124026880 unmapped: 33521664 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:25:28.250922+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124026880 unmapped: 33521664 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1289635 data_alloc: 218103808 data_used: 7614464
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:25:29.251084+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124026880 unmapped: 33521664 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:25:30.251211+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124026880 unmapped: 33521664 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:25:31.251348+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124026880 unmapped: 33521664 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:25:32.251514+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124026880 unmapped: 33521664 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:25:33.251727+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124026880 unmapped: 33521664 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1289635 data_alloc: 218103808 data_used: 7614464
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:25:34.251854+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124035072 unmapped: 33513472 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:25:35.252018+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124035072 unmapped: 33513472 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:25:36.252432+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124035072 unmapped: 33513472 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:25:37.252757+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124035072 unmapped: 33513472 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:25:38.252877+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124035072 unmapped: 33513472 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1289635 data_alloc: 218103808 data_used: 7614464
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:25:39.253027+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124035072 unmapped: 33513472 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:25:40.253399+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124035072 unmapped: 33513472 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:25:41.253539+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124035072 unmapped: 33513472 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:25:42.253692+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124035072 unmapped: 33513472 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:25:43.253874+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124035072 unmapped: 33513472 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1289635 data_alloc: 218103808 data_used: 7614464
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:25:44.254015+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124035072 unmapped: 33513472 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:25:45.254163+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124035072 unmapped: 33513472 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:25:46.254299+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124035072 unmapped: 33513472 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:25:47.254425+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124035072 unmapped: 33513472 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:25:48.254534+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124035072 unmapped: 33513472 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1289635 data_alloc: 218103808 data_used: 7614464
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:25:49.254676+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124035072 unmapped: 33513472 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:25:50.254819+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124035072 unmapped: 33513472 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:25:51.255140+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124035072 unmapped: 33513472 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:25:52.255285+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124035072 unmapped: 33513472 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:25:53.255440+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124043264 unmapped: 33505280 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1289635 data_alloc: 218103808 data_used: 7614464
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:25:54.255559+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124043264 unmapped: 33505280 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:25:55.255691+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124043264 unmapped: 33505280 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:25:56.255812+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124043264 unmapped: 33505280 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:25:57.256017+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124043264 unmapped: 33505280 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:25:58.256164+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124043264 unmapped: 33505280 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1289635 data_alloc: 218103808 data_used: 7614464
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:25:59.256326+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124043264 unmapped: 33505280 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:26:00.256535+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124043264 unmapped: 33505280 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:26:01.256730+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124043264 unmapped: 33505280 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:26:02.256854+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124043264 unmapped: 33505280 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:26:03.257067+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124043264 unmapped: 33505280 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1289635 data_alloc: 218103808 data_used: 7614464
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:26:04.257195+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124043264 unmapped: 33505280 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:26:05.257323+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124043264 unmapped: 33505280 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:26:06.257514+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124043264 unmapped: 33505280 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:26:07.257615+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124043264 unmapped: 33505280 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:26:08.257788+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124043264 unmapped: 33505280 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:26:09.257912+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1289635 data_alloc: 218103808 data_used: 7614464
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124043264 unmapped: 33505280 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:26:10.258093+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124043264 unmapped: 33505280 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:26:11.258294+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124043264 unmapped: 33505280 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:26:12.258414+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124043264 unmapped: 33505280 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:26:13.258558+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124043264 unmapped: 33505280 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:26:14.258676+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1289635 data_alloc: 218103808 data_used: 7614464
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124043264 unmapped: 33505280 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:26:15.258826+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124051456 unmapped: 33497088 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:26:16.258969+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124051456 unmapped: 33497088 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:26:17.259098+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124051456 unmapped: 33497088 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:26:18.259236+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124051456 unmapped: 33497088 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:26:19.259394+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1289635 data_alloc: 218103808 data_used: 7614464
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124051456 unmapped: 33497088 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:26:20.259593+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124051456 unmapped: 33497088 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:26:21.259730+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124051456 unmapped: 33497088 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:26:22.259880+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124051456 unmapped: 33497088 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:26:23.260044+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124051456 unmapped: 33497088 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:26:24.260251+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1289635 data_alloc: 218103808 data_used: 7614464
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124051456 unmapped: 33497088 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:26:25.260434+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124051456 unmapped: 33497088 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:26:26.260543+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124051456 unmapped: 33497088 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:26:27.260690+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124051456 unmapped: 33497088 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:26:28.260843+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124051456 unmapped: 33497088 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:26:29.261018+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1289635 data_alloc: 218103808 data_used: 7614464
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124051456 unmapped: 33497088 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:26:30.261153+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124051456 unmapped: 33497088 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:26:31.261313+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124051456 unmapped: 33497088 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:26:32.261439+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124051456 unmapped: 33497088 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:26:33.261674+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124051456 unmapped: 33497088 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:26:34.261855+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1289635 data_alloc: 218103808 data_used: 7614464
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124059648 unmapped: 33488896 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:26:35.262126+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124059648 unmapped: 33488896 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:26:36.262298+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124059648 unmapped: 33488896 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:26:37.262457+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124059648 unmapped: 33488896 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:26:38.262617+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124059648 unmapped: 33488896 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:26:39.262747+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1289635 data_alloc: 218103808 data_used: 7614464
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124059648 unmapped: 33488896 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:26:40.262859+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124059648 unmapped: 33488896 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:26:41.262984+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124059648 unmapped: 33488896 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:26:42.263109+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124059648 unmapped: 33488896 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:26:43.263252+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124059648 unmapped: 33488896 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:26:44.263448+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1289635 data_alloc: 218103808 data_used: 7614464
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124059648 unmapped: 33488896 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:26:45.263579+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124059648 unmapped: 33488896 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:26:46.263753+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124059648 unmapped: 33488896 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:26:47.263878+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124059648 unmapped: 33488896 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:26:48.263985+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124059648 unmapped: 33488896 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:26:49.264112+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1289635 data_alloc: 218103808 data_used: 7614464
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124059648 unmapped: 33488896 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:26:50.264244+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124067840 unmapped: 33480704 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:26:51.264447+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124067840 unmapped: 33480704 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:26:52.264644+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124067840 unmapped: 33480704 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:26:53.264853+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124067840 unmapped: 33480704 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:26:54.265010+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1289635 data_alloc: 218103808 data_used: 7614464
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124067840 unmapped: 33480704 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:26:55.265163+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124067840 unmapped: 33480704 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:26:56.265320+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124067840 unmapped: 33480704 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:26:57.265430+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124100608 unmapped: 33447936 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: do_command 'config diff' '{prefix=config diff}'
Dec 06 10:27:30 compute-0 ceph-osd[82803]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:26:58.265580+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: do_command 'config show' '{prefix=config show}'
Dec 06 10:27:30 compute-0 ceph-osd[82803]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Dec 06 10:27:30 compute-0 ceph-osd[82803]: do_command 'counter dump' '{prefix=counter dump}'
Dec 06 10:27:30 compute-0 ceph-osd[82803]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Dec 06 10:27:30 compute-0 ceph-osd[82803]: do_command 'counter schema' '{prefix=counter schema}'
Dec 06 10:27:30 compute-0 ceph-osd[82803]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Dec 06 10:27:30 compute-0 ceph-osd[82803]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fac26000/0x0/0x4ffc00000, data 0x117fb1d/0x1246000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 123904000 unmapped: 33644544 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:26:59.265721+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 10:27:30 compute-0 ceph-osd[82803]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 10:27:30 compute-0 ceph-osd[82803]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1289635 data_alloc: 218103808 data_used: 7614464
Dec 06 10:27:30 compute-0 ceph-osd[82803]: prioritycache tune_memory target: 4294967296 mapped: 124116992 unmapped: 33431552 heap: 157548544 old mem: 2845415832 new mem: 2845415832
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: tick
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_tickets
Dec 06 10:27:30 compute-0 ceph-osd[82803]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:27:00.265845+0000)
Dec 06 10:27:30 compute-0 ceph-osd[82803]: do_command 'log dump' '{prefix=log dump}'
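The do_command entries record admin-socket requests against osd.1 ('config diff', 'config show', 'counter dump', 'counter schema', 'log dump'); the log only records the reply size, not its body. Note also that the long run of lines above all carry the journal timestamp 10:27:30 while their embedded expiry times advance one second per monclient tick — a pattern consistent with 'log dump' flushing roughly ninety seconds of the daemon's in-memory debug log in one burst. A minimal sketch of issuing the same admin-socket command, assuming it runs on this host with access to osd.1's socket:

    import json
    import subprocess

    # 'config diff' is the same admin-socket command logged by do_command
    # above; running it requires local access to the OSD's admin socket.
    out = subprocess.run(
        ["ceph", "daemon", "osd.1", "config", "diff"],
        capture_output=True, text=True, check=True,
    ).stdout
    print(json.dumps(json.loads(out), indent=2)
          if out.lstrip().startswith("{") else out)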
Dec 06 10:27:30 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module ls"} v 0)
Dec 06 10:27:30 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2953526502' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Dec 06 10:27:31 compute-0 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.27002 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:27:31 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:27:31 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:27:31 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:27:31.037 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
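The beast access-log lines record anonymous HEAD / requests answered with 200 in under a millisecond, the classic signature of a load-balancer or monitoring health probe, arriving in turn from each of the 192.168.122.x hosts. A sketch reproducing the probe; the RGW endpoint address and port are assumptions, since neither appears in these lines:

    import http.client

    # Assumption: RGW listens on this host/port (not shown in the log).
    conn = http.client.HTTPConnection("192.168.122.100", 8080, timeout=5)
    conn.request("HEAD", "/")
    print(conn.getresponse().status)  # the log shows 200 for these probes
    conn.close()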
Dec 06 10:27:31 compute-0 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 06 10:27:31 compute-0 ceph-mon[74327]: from='client.18879 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:27:31 compute-0 ceph-mon[74327]: from='client.26963 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:27:31 compute-0 ceph-mon[74327]: from='client.28321 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:27:31 compute-0 ceph-mon[74327]: from='client.18894 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:27:31 compute-0 ceph-mon[74327]: from='client.26969 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:27:31 compute-0 ceph-mon[74327]: from='client.? 192.168.122.102:0/1290425549' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Dec 06 10:27:31 compute-0 ceph-mon[74327]: from='client.? 192.168.122.101:0/3088289533' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Dec 06 10:27:31 compute-0 ceph-mon[74327]: from='client.? 192.168.122.100:0/713374450' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Dec 06 10:27:31 compute-0 ceph-mon[74327]: from='client.? 192.168.122.102:0/3308729779' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Dec 06 10:27:31 compute-0 ceph-mon[74327]: from='client.? 192.168.122.101:0/907444341' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Dec 06 10:27:31 compute-0 ceph-mon[74327]: from='client.? 192.168.122.100:0/2953526502' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Dec 06 10:27:31 compute-0 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.18963 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:27:31 compute-0 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.28390 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:27:31 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services"} v 0)
Dec 06 10:27:31 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/435042096' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Dec 06 10:27:31 compute-0 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.27017 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:27:31 compute-0 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.18978 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:27:31 compute-0 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.28402 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:27:31 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr versions"} v 0)
Dec 06 10:27:31 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3054406261' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Dec 06 10:27:31 compute-0 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.27038 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:27:32 compute-0 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.18999 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:27:32 compute-0 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.28411 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:27:32 compute-0 ceph-mon[74327]: from='client.28342 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:27:32 compute-0 ceph-mon[74327]: from='client.18924 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:27:32 compute-0 ceph-mon[74327]: from='client.26987 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:27:32 compute-0 ceph-mon[74327]: from='client.28369 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:27:32 compute-0 ceph-mon[74327]: from='client.18942 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:27:32 compute-0 ceph-mon[74327]: pgmap v1395: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:27:32 compute-0 ceph-mon[74327]: from='client.27002 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:27:32 compute-0 ceph-mon[74327]: from='client.18963 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:27:32 compute-0 ceph-mon[74327]: from='client.28390 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:27:32 compute-0 ceph-mon[74327]: from='client.? 192.168.122.102:0/4096137834' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Dec 06 10:27:32 compute-0 ceph-mon[74327]: from='client.? 192.168.122.101:0/1485346110' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Dec 06 10:27:32 compute-0 ceph-mon[74327]: from='client.? 192.168.122.100:0/435042096' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Dec 06 10:27:32 compute-0 ceph-mon[74327]: from='client.? 192.168.122.102:0/544280558' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Dec 06 10:27:32 compute-0 ceph-mon[74327]: from='client.? 192.168.122.101:0/795191216' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Dec 06 10:27:32 compute-0 ceph-mon[74327]: from='client.? 192.168.122.100:0/3054406261' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Dec 06 10:27:32 compute-0 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.27056 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:27:32 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon stat"} v 0)
Dec 06 10:27:32 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/390884467' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Dec 06 10:27:32 compute-0 crontab[299564]: (root) LIST (root)
Dec 06 10:27:32 compute-0 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.19020 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:27:32 compute-0 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.19026 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 10:27:32 compute-0 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.27077 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 10:27:32 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:27:32 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:27:32 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:27:32.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:27:32 compute-0 nova_compute[254819]: 2025-12-06 10:27:32.764 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:27:32 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1396: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:27:32 compute-0 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.19047 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 10:27:32 compute-0 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.28444 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 10:27:33 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:27:33 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:27:33 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:27:33.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:27:33 compute-0 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.27083 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 10:27:33 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "node ls"} v 0)
Dec 06 10:27:33 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1154383948' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Dec 06 10:27:33 compute-0 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.19068 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 10:27:33 compute-0 ceph-mon[74327]: from='client.27017 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:27:33 compute-0 ceph-mon[74327]: from='client.18978 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:27:33 compute-0 ceph-mon[74327]: from='client.28402 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:27:33 compute-0 ceph-mon[74327]: from='client.27038 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:27:33 compute-0 ceph-mon[74327]: from='client.18999 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:27:33 compute-0 ceph-mon[74327]: from='client.28411 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:27:33 compute-0 ceph-mon[74327]: from='client.? 192.168.122.102:0/2496513314' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Dec 06 10:27:33 compute-0 ceph-mon[74327]: from='client.? 192.168.122.101:0/3485379357' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Dec 06 10:27:33 compute-0 ceph-mon[74327]: from='client.? 192.168.122.100:0/390884467' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Dec 06 10:27:33 compute-0 ceph-mon[74327]: from='client.? 192.168.122.101:0/568451492' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Dec 06 10:27:33 compute-0 ceph-mon[74327]: from='client.? 192.168.122.102:0/1616872659' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Dec 06 10:27:33 compute-0 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.28468 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 10:27:33 compute-0 podman[299738]: 2025-12-06 10:27:33.43426853 +0000 UTC m=+0.061100586 container health_status a9cf33c1bf7891b3fdd9db4717ad5f3587fe9e5acf9171920c6d6334b70a502a (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
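The podman event above records a periodic healthcheck pass (health_status=healthy, health_failing_streak=0) for the multipathd container, whose check is the /openstack/healthcheck script mounted from /var/lib/openstack/healthchecks/multipathd. A sketch re-running the same check on demand, assuming podman access on this host:

    import subprocess

    # 'podman healthcheck run' exits 0 when the container's configured
    # healthcheck passes; 'multipathd' is the container_name from the event.
    rc = subprocess.run(
        ["podman", "healthcheck", "run", "multipathd"]).returncode
    print("healthy" if rc == 0 else f"unhealthy (rc={rc})")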
Dec 06 10:27:33 compute-0 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.27101 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 10:27:33 compute-0 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.19080 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 10:27:33 compute-0 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.28492 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 10:27:33 compute-0 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.19095 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 10:27:33 compute-0 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.27119 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 10:27:33 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush dump"} v 0)
Dec 06 10:27:33 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2986543317' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Dec 06 10:27:34 compute-0 ceph-mon[74327]: from='client.27056 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:27:34 compute-0 ceph-mon[74327]: from='client.19020 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:27:34 compute-0 ceph-mon[74327]: from='client.19026 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 10:27:34 compute-0 ceph-mon[74327]: from='client.27077 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 10:27:34 compute-0 ceph-mon[74327]: pgmap v1396: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:27:34 compute-0 ceph-mon[74327]: from='client.19047 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 10:27:34 compute-0 ceph-mon[74327]: from='client.28444 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 10:27:34 compute-0 ceph-mon[74327]: from='client.27083 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 10:27:34 compute-0 ceph-mon[74327]: from='client.? 192.168.122.100:0/1154383948' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Dec 06 10:27:34 compute-0 ceph-mon[74327]: from='client.19068 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 10:27:34 compute-0 ceph-mon[74327]: from='client.? 192.168.122.102:0/3251340584' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Dec 06 10:27:34 compute-0 ceph-mon[74327]: from='client.? 192.168.122.100:0/1045814600' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Dec 06 10:27:34 compute-0 ceph-mon[74327]: from='client.? 192.168.122.101:0/1892291010' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Dec 06 10:27:34 compute-0 ceph-mon[74327]: from='client.? 192.168.122.102:0/3432369359' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Dec 06 10:27:34 compute-0 ceph-mon[74327]: from='client.? 192.168.122.100:0/2986543317' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Dec 06 10:27:34 compute-0 ceph-mon[74327]: from='client.? 192.168.122.101:0/2167965242' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Dec 06 10:27:34 compute-0 ceph-mon[74327]: from='client.? 192.168.122.102:0/772093391' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Dec 06 10:27:34 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush rule ls"} v 0)
Dec 06 10:27:34 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3294404057' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Dec 06 10:27:34 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} v 0)
Dec 06 10:27:34 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3984842228' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Dec 06 10:27:34 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:27:34 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:27:34 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:27:34.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:27:34 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush show-tunables"} v 0)
Dec 06 10:27:34 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2266947748' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Dec 06 10:27:34 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1397: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec 06 10:27:34 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr dump", "format": "json-pretty"} v 0)
Dec 06 10:27:34 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2863449111' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Dec 06 10:27:35 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:27:35 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:27:35 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:27:35.042 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:27:35 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:27:35 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush tree", "show_shadow": true} v 0)
Dec 06 10:27:35 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2031727652' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Dec 06 10:27:35 compute-0 ceph-mon[74327]: from='client.28468 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 10:27:35 compute-0 ceph-mon[74327]: from='client.27101 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 10:27:35 compute-0 ceph-mon[74327]: from='client.19080 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 10:27:35 compute-0 ceph-mon[74327]: from='client.28492 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 10:27:35 compute-0 ceph-mon[74327]: from='client.19095 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 10:27:35 compute-0 ceph-mon[74327]: from='client.27119 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 10:27:35 compute-0 ceph-mon[74327]: from='client.? 192.168.122.102:0/2183060927' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Dec 06 10:27:35 compute-0 ceph-mon[74327]: from='client.? 192.168.122.100:0/3294404057' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Dec 06 10:27:35 compute-0 ceph-mon[74327]: from='client.? 192.168.122.100:0/3984842228' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Dec 06 10:27:35 compute-0 ceph-mon[74327]: from='client.? 192.168.122.101:0/167369165' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Dec 06 10:27:35 compute-0 ceph-mon[74327]: from='client.? 192.168.122.101:0/1798232326' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Dec 06 10:27:35 compute-0 ceph-mon[74327]: from='client.? 192.168.122.102:0/4024216677' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Dec 06 10:27:35 compute-0 ceph-mon[74327]: from='client.? 192.168.122.102:0/3501769992' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Dec 06 10:27:35 compute-0 ceph-mon[74327]: from='client.? 192.168.122.100:0/2266947748' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Dec 06 10:27:35 compute-0 ceph-mon[74327]: pgmap v1397: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec 06 10:27:35 compute-0 ceph-mon[74327]: from='client.? 192.168.122.101:0/2723227824' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Dec 06 10:27:35 compute-0 ceph-mon[74327]: from='client.? 192.168.122.100:0/2863449111' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Dec 06 10:27:35 compute-0 ceph-mon[74327]: from='client.? 192.168.122.101:0/3073635263' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Dec 06 10:27:35 compute-0 ceph-mon[74327]: from='client.? 192.168.122.102:0/2632608971' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Dec 06 10:27:35 compute-0 ceph-mon[74327]: from='client.? 192.168.122.102:0/120051130' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Dec 06 10:27:35 compute-0 ceph-mon[74327]: from='client.? 192.168.122.100:0/2031727652' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Dec 06 10:27:35 compute-0 nova_compute[254819]: 2025-12-06 10:27:35.286 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:27:35 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "format": "json-pretty"} v 0)
Dec 06 10:27:35 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3465320604' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Dec 06 10:27:35 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd erasure-code-profile ls"} v 0)
Dec 06 10:27:35 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/832528068' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Dec 06 10:27:35 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module ls", "format": "json-pretty"} v 0)
Dec 06 10:27:35 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2213913475' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Dec 06 10:27:35 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata"} v 0)
Dec 06 10:27:35 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2487044674' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Dec 06 10:27:36 compute-0 systemd[1]: Starting Hostname Service...
Dec 06 10:27:36 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services", "format": "json-pretty"} v 0)
Dec 06 10:27:36 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3329454876' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Dec 06 10:27:36 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd utilization"} v 0)
Dec 06 10:27:36 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2938496044' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Dec 06 10:27:36 compute-0 ceph-mon[74327]: from='client.? 192.168.122.101:0/1172937656' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Dec 06 10:27:36 compute-0 ceph-mon[74327]: from='client.? 192.168.122.100:0/3465320604' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Dec 06 10:27:36 compute-0 ceph-mon[74327]: from='client.? 192.168.122.102:0/3652493996' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Dec 06 10:27:36 compute-0 ceph-mon[74327]: from='client.? 192.168.122.101:0/2216123752' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Dec 06 10:27:36 compute-0 ceph-mon[74327]: from='client.? 192.168.122.100:0/832528068' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Dec 06 10:27:36 compute-0 ceph-mon[74327]: from='client.? 192.168.122.102:0/283759111' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Dec 06 10:27:36 compute-0 ceph-mon[74327]: from='client.? 192.168.122.101:0/3541376245' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Dec 06 10:27:36 compute-0 ceph-mon[74327]: from='client.? 192.168.122.102:0/4281400625' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Dec 06 10:27:36 compute-0 ceph-mon[74327]: from='client.? 192.168.122.100:0/2213913475' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Dec 06 10:27:36 compute-0 ceph-mon[74327]: from='client.? 192.168.122.101:0/1407133659' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Dec 06 10:27:36 compute-0 ceph-mon[74327]: from='client.? 192.168.122.100:0/2487044674' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Dec 06 10:27:36 compute-0 ceph-mon[74327]: from='client.? 192.168.122.102:0/3162727959' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Dec 06 10:27:36 compute-0 ceph-mon[74327]: from='client.? 192.168.122.101:0/3862076468' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Dec 06 10:27:36 compute-0 ceph-mon[74327]: from='client.? 192.168.122.102:0/3200042151' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Dec 06 10:27:36 compute-0 ceph-mon[74327]: from='client.? 192.168.122.100:0/3329454876' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Dec 06 10:27:36 compute-0 systemd[1]: Started Hostname Service.
Dec 06 10:27:36 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:27:36 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:27:36 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:27:36.681 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:27:36 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr stat", "format": "json-pretty"} v 0)
Dec 06 10:27:36 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1352424821' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Dec 06 10:27:36 compute-0 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.28621 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:27:36 compute-0 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.19245 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:27:36 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1398: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:27:37 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:27:37 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:27:37 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:27:37.045 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:27:37 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr versions", "format": "json-pretty"} v 0)
Dec 06 10:27:37 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2637026852' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Dec 06 10:27:37 compute-0 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.28639 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:27:37 compute-0 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.19260 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:27:37 compute-0 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.28648 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 10:27:37 compute-0 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.27251 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:27:37 compute-0 ceph-mon[74327]: from='client.? 192.168.122.101:0/4109644866' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Dec 06 10:27:37 compute-0 ceph-mon[74327]: from='client.? 192.168.122.100:0/2938496044' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Dec 06 10:27:37 compute-0 ceph-mon[74327]: from='client.? 192.168.122.102:0/4236343578' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Dec 06 10:27:37 compute-0 ceph-mon[74327]: from='client.? 192.168.122.101:0/1301683355' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Dec 06 10:27:37 compute-0 ceph-mon[74327]: from='client.? 192.168.122.100:0/1352424821' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Dec 06 10:27:37 compute-0 ceph-mon[74327]: from='client.28621 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:27:37 compute-0 ceph-mon[74327]: from='client.19245 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:27:37 compute-0 ceph-mon[74327]: pgmap v1398: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:27:37 compute-0 ceph-mon[74327]: from='client.? 192.168.122.101:0/475292124' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Dec 06 10:27:37 compute-0 ceph-mon[74327]: from='client.? 192.168.122.101:0/1883974811' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Dec 06 10:27:37 compute-0 ceph-mon[74327]: from='client.? 192.168.122.102:0/1546404607' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Dec 06 10:27:37 compute-0 ceph-mon[74327]: from='client.? 192.168.122.100:0/2637026852' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Dec 06 10:27:37 compute-0 ceph-mon[74327]: from='client.28639 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:27:37 compute-0 ceph-mon[74327]: from='client.19260 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:27:37 compute-0 ceph-mon[74327]: from='client.? 192.168.122.101:0/923492754' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Dec 06 10:27:37 compute-0 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.19266 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 10:27:37 compute-0 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.27257 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 10:27:37 compute-0 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.28666 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 10:27:37 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:27:37.759Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec 06 10:27:37 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:27:37.760Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec 06 10:27:37 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:27:37.760Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec 06 10:27:37 compute-0 nova_compute[254819]: 2025-12-06 10:27:37.765 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:27:37 compute-0 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.27269 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:27:37 compute-0 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.19287 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 10:27:38 compute-0 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.27281 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 10:27:38 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "quorum_status"} v 0)
Dec 06 10:27:38 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1722450257' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Dec 06 10:27:38 compute-0 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.28684 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 10:27:38 compute-0 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.19305 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 10:27:38 compute-0 sudo[300458]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 10:27:38 compute-0 sudo[300458]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 10:27:38 compute-0 sudo[300458]: pam_unix(sudo:session): session closed for user root
Dec 06 10:27:38 compute-0 podman[300494]: 2025-12-06 10:27:38.475004306 +0000 UTC m=+0.110354916 container health_status ed08b04f63d3695f0aa2f04fcc706897af4aa24f286fa3cd10a4323ee2c868ab (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3)
Dec 06 10:27:38 compute-0 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.27296 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 10:27:38 compute-0 ceph-mon[74327]: from='client.28648 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 10:27:38 compute-0 ceph-mon[74327]: from='client.27251 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:27:38 compute-0 ceph-mon[74327]: from='client.19266 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 10:27:38 compute-0 ceph-mon[74327]: from='client.27257 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 10:27:38 compute-0 ceph-mon[74327]: from='client.28666 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 10:27:38 compute-0 ceph-mon[74327]: from='client.27269 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:27:38 compute-0 ceph-mon[74327]: from='client.19287 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 10:27:38 compute-0 ceph-mon[74327]: from='client.? 192.168.122.102:0/862610040' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Dec 06 10:27:38 compute-0 ceph-mon[74327]: from='client.27281 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 10:27:38 compute-0 ceph-mon[74327]: from='client.? 192.168.122.100:0/1722450257' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Dec 06 10:27:38 compute-0 ceph-mon[74327]: from='client.28684 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 10:27:38 compute-0 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.28705 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 10:27:38 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "versions"} v 0)
Dec 06 10:27:38 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2754842451' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Dec 06 10:27:38 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:27:38 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:27:38 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:27:38.684 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:27:38 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1399: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec 06 10:27:38 compute-0 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.19323 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 10:27:38 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:27:38.850Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Dec 06 10:27:38 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:27:38.851Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec 06 10:27:38 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-alertmanager-compute-0[104690]: ts=2025-12-06T10:27:38.851Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Dec 06 10:27:38 compute-0 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.27311 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 10:27:38 compute-0 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.28723 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 10:27:38 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec 06 10:27:38 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 10:27:39 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:27:39 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:27:39 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:27:39.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:27:39 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "health", "detail": "detail", "format": "json-pretty"} v 0)
Dec 06 10:27:39 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3336786673' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Dec 06 10:27:39 compute-0 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.19338 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 10:27:39 compute-0 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.19341 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 10:27:39 compute-0 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.28741 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 10:27:39 compute-0 ceph-mon[74327]: from='client.19305 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 10:27:39 compute-0 ceph-mon[74327]: from='client.27296 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 10:27:39 compute-0 ceph-mon[74327]: from='client.? 192.168.122.102:0/4053778090' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Dec 06 10:27:39 compute-0 ceph-mon[74327]: from='client.28705 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 10:27:39 compute-0 ceph-mon[74327]: from='client.? 192.168.122.100:0/2754842451' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Dec 06 10:27:39 compute-0 ceph-mon[74327]: from='client.? 192.168.122.101:0/897535951' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Dec 06 10:27:39 compute-0 ceph-mon[74327]: pgmap v1399: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec 06 10:27:39 compute-0 ceph-mon[74327]: from='client.19323 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 10:27:39 compute-0 ceph-mon[74327]: from='client.27311 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 10:27:39 compute-0 ceph-mon[74327]: from='client.? 192.168.122.102:0/705246239' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Dec 06 10:27:39 compute-0 ceph-mon[74327]: from='client.28723 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 10:27:39 compute-0 ceph-mon[74327]: from='mgr.14652 192.168.122.100:0/2181988963' entity='mgr.compute-0.qhdjwa' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 06 10:27:39 compute-0 ceph-mon[74327]: from='client.? 192.168.122.100:0/3336786673' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Dec 06 10:27:39 compute-0 ceph-mon[74327]: from='client.? 192.168.122.101:0/3518358618' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Dec 06 10:27:39 compute-0 ceph-mon[74327]: from='client.? 192.168.122.102:0/2997084197' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Dec 06 10:27:39 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Dec 06 10:27:39 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Dec 06 10:27:39 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "format": "json-pretty"} v 0)
Dec 06 10:27:39 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2979283974' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Dec 06 10:27:39 compute-0 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.19365 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 10:27:39 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Dec 06 10:27:39 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Dec 06 10:27:39 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Dec 06 10:27:39 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Dec 06 10:27:39 compute-0 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.27341 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 10:27:39 compute-0 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.28771 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 10:27:39 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Dec 06 10:27:39 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Dec 06 10:27:40 compute-0 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.19407 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 10:27:40 compute-0 ceph-mon[74327]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:27:40 compute-0 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.27356 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 10:27:40 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Dec 06 10:27:40 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Dec 06 10:27:40 compute-0 nova_compute[254819]: 2025-12-06 10:27:40.288 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:27:40 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Dec 06 10:27:40 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Dec 06 10:27:40 compute-0 ceph-mon[74327]: from='client.19338 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 10:27:40 compute-0 ceph-mon[74327]: from='client.19341 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 10:27:40 compute-0 ceph-mon[74327]: from='client.28741 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 10:27:40 compute-0 ceph-mon[74327]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Dec 06 10:27:40 compute-0 ceph-mon[74327]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Dec 06 10:27:40 compute-0 ceph-mon[74327]: from='client.? 192.168.122.100:0/2979283974' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Dec 06 10:27:40 compute-0 ceph-mon[74327]: from='client.19365 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 10:27:40 compute-0 ceph-mon[74327]: from='client.? 192.168.122.101:0/458820190' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Dec 06 10:27:40 compute-0 ceph-mon[74327]: from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Dec 06 10:27:40 compute-0 ceph-mon[74327]: from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Dec 06 10:27:40 compute-0 ceph-mon[74327]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Dec 06 10:27:40 compute-0 ceph-mon[74327]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Dec 06 10:27:40 compute-0 ceph-mon[74327]: from='client.27341 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 10:27:40 compute-0 ceph-mon[74327]: from='client.28771 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 10:27:40 compute-0 ceph-mon[74327]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Dec 06 10:27:40 compute-0 ceph-mon[74327]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Dec 06 10:27:40 compute-0 ceph-mon[74327]: from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Dec 06 10:27:40 compute-0 ceph-mon[74327]: from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Dec 06 10:27:40 compute-0 ceph-mon[74327]: from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Dec 06 10:27:40 compute-0 ceph-mon[74327]: from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Dec 06 10:27:40 compute-0 ceph-mon[74327]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Dec 06 10:27:40 compute-0 ceph-mon[74327]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Dec 06 10:27:40 compute-0 ceph-mon[74327]: from='client.? 192.168.122.101:0/1206616175' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Dec 06 10:27:40 compute-0 ceph-mon[74327]: from='client.19407 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 10:27:40 compute-0 ceph-mon[74327]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Dec 06 10:27:40 compute-0 ceph-mon[74327]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Dec 06 10:27:40 compute-0 ceph-mon[74327]: from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Dec 06 10:27:40 compute-0 ceph-mon[74327]: from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Dec 06 10:27:40 compute-0 ceph-mon[74327]: from='client.27356 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 10:27:40 compute-0 ceph-mon[74327]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Dec 06 10:27:40 compute-0 ceph-mon[74327]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Dec 06 10:27:40 compute-0 ceph-mon[74327]: from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Dec 06 10:27:40 compute-0 ceph-mon[74327]: from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Dec 06 10:27:40 compute-0 ceph-mon[74327]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Dec 06 10:27:40 compute-0 ceph-mon[74327]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Dec 06 10:27:40 compute-0 ceph-mon[74327]: from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Dec 06 10:27:40 compute-0 ceph-mon[74327]: from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Dec 06 10:27:40 compute-0 ceph-mon[74327]: from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Dec 06 10:27:40 compute-0 ceph-mon[74327]: from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Dec 06 10:27:40 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:27:40 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 10:27:40 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:27:40.685 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 10:27:40 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1400: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:27:40 compute-0 ceph-5ecd3f74-dade-5fc4-92ce-8950ae424258-mgr-compute-0-qhdjwa[74614]: ::ffff:192.168.122.100 - - [06/Dec/2025:10:27:40] "GET /metrics HTTP/1.1" 200 48457 "" "Prometheus/2.51.0"
Dec 06 10:27:40 compute-0 ceph-mgr[74618]: [prometheus INFO cherrypy.access.139869043173552] ::ffff:192.168.122.100 - - [06/Dec/2025:10:27:40] "GET /metrics HTTP/1.1" 200 48457 "" "Prometheus/2.51.0"
Dec 06 10:27:40 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config dump"} v 0)
Dec 06 10:27:40 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2258838112' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Dec 06 10:27:41 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:27:41 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:27:41 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:27:41.050 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:27:41 compute-0 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.28891 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:27:41 compute-0 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.19473 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:27:41 compute-0 podman[300970]: 2025-12-06 10:27:41.474010786 +0000 UTC m=+0.097967089 container health_status ec1e66d2175087ad7a28a9a6b5a784e30128df3f5e268959d009e364cd5120f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Dec 06 10:27:41 compute-0 ceph-mon[74327]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Dec 06 10:27:41 compute-0 ceph-mon[74327]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Dec 06 10:27:41 compute-0 ceph-mon[74327]: from='client.? 192.168.122.102:0/1960310445' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Dec 06 10:27:41 compute-0 ceph-mon[74327]: pgmap v1400: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:27:41 compute-0 ceph-mon[74327]: from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Dec 06 10:27:41 compute-0 ceph-mon[74327]: from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Dec 06 10:27:41 compute-0 ceph-mon[74327]: from='client.? 192.168.122.100:0/2258838112' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Dec 06 10:27:41 compute-0 ceph-mon[74327]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Dec 06 10:27:41 compute-0 ceph-mon[74327]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Dec 06 10:27:41 compute-0 ceph-mon[74327]: from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Dec 06 10:27:41 compute-0 ceph-mon[74327]: from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Dec 06 10:27:41 compute-0 ceph-mon[74327]: from='client.28891 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:27:41 compute-0 ceph-mon[74327]: from='client.? 192.168.122.101:0/2114392114' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Dec 06 10:27:41 compute-0 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.27455 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:27:41 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "detail": "detail"} v 0)
Dec 06 10:27:41 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2577467961' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Dec 06 10:27:42 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df"} v 0)
Dec 06 10:27:42 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3419239298' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Dec 06 10:27:42 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:27:42 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:27:42 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:27:42.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:27:42 compute-0 ceph-mon[74327]: from='client.19473 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:27:42 compute-0 ceph-mon[74327]: from='client.? 192.168.122.102:0/1936562756' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Dec 06 10:27:42 compute-0 ceph-mon[74327]: from='client.27455 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:27:42 compute-0 ceph-mon[74327]: from='client.? 192.168.122.100:0/2577467961' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Dec 06 10:27:42 compute-0 ceph-mon[74327]: from='client.? 192.168.122.102:0/2590202385' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Dec 06 10:27:42 compute-0 ceph-mon[74327]: from='client.? 192.168.122.101:0/219266435' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Dec 06 10:27:42 compute-0 ceph-mon[74327]: from='client.? 192.168.122.100:0/3419239298' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Dec 06 10:27:42 compute-0 ceph-mon[74327]: from='client.? 192.168.122.102:0/617292848' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Dec 06 10:27:42 compute-0 nova_compute[254819]: 2025-12-06 10:27:42.767 254824 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:27:42 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "fs dump"} v 0)
Dec 06 10:27:42 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/755680862' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Dec 06 10:27:42 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1401: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:27:43 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:27:43 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 10:27:43 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.102 - anonymous [06/Dec/2025:10:27:43.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 10:27:43 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "fs ls"} v 0)
Dec 06 10:27:43 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/305757430' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Dec 06 10:27:43 compute-0 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.28948 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:27:43 compute-0 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.19518 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:27:43 compute-0 ceph-mon[74327]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 06 10:27:43 compute-0 ceph-mon[74327]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 3000.0 total, 600.0 interval
                                           Cumulative writes: 8616 writes, 39K keys, 8615 commit groups, 1.0 writes per commit group, ingest: 0.07 GB, 0.02 MB/s
                                           Cumulative WAL: 8615 writes, 8614 syncs, 1.00 writes per sync, written: 0.07 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1541 writes, 7222 keys, 1541 commit groups, 1.0 writes per commit group, ingest: 11.47 MB, 0.02 MB/s
                                           Interval WAL: 1540 writes, 1540 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     92.5      0.65              0.17        23    0.028       0      0       0.0       0.0
                                             L6      1/0   14.40 MB   0.0      0.3     0.1      0.3       0.3      0.0       0.0   5.0    109.3     94.8      3.14              0.79        22    0.143    131K    12K       0.0       0.0
                                            Sum      1/0   14.40 MB   0.0      0.3     0.1      0.3       0.3      0.1       0.0   6.0     90.6     94.4      3.79              0.97        45    0.084    131K    12K       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   8.3    115.2    115.2      0.69              0.20        10    0.069     37K   3102       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.3     0.1      0.3       0.3      0.0       0.0   0.0    109.3     94.8      3.14              0.79        22    0.143    131K    12K       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     93.4      0.64              0.17        22    0.029       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      7.9      0.01              0.00         1    0.007       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 3000.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.058, interval 0.009
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.35 GB write, 0.12 MB/s write, 0.34 GB read, 0.11 MB/s read, 3.8 seconds
                                           Interval compaction: 0.08 GB write, 0.13 MB/s write, 0.08 GB read, 0.13 MB/s read, 0.7 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55fd9a571350#2 capacity: 304.00 MB usage: 29.84 MB table_size: 0 occupancy: 18446744073709551615 collections: 6 last_copies: 0 last_secs: 0.000334 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1903,28.83 MB,9.48456%) FilterBlock(46,385.36 KB,0.123792%) IndexBlock(46,640.77 KB,0.205838%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Dec 06 10:27:43 compute-0 ceph-mon[74327]: from='client.? 192.168.122.101:0/492615239' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Dec 06 10:27:43 compute-0 ceph-mon[74327]: from='client.? 192.168.122.100:0/755680862' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Dec 06 10:27:43 compute-0 ceph-mon[74327]: pgmap v1401: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 06 10:27:43 compute-0 ceph-mon[74327]: from='client.? 192.168.122.102:0/1338248061' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Dec 06 10:27:43 compute-0 ceph-mon[74327]: from='client.? 192.168.122.101:0/2841293906' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Dec 06 10:27:43 compute-0 ceph-mon[74327]: from='client.? 192.168.122.100:0/305757430' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Dec 06 10:27:43 compute-0 ceph-mon[74327]: from='client.? 192.168.122.101:0/4035985611' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Dec 06 10:27:44 compute-0 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.27485 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:27:44 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds stat"} v 0)
Dec 06 10:27:44 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2890381324' entity='client.admin' cmd=[{"prefix": "mds stat"}]: dispatch
Dec 06 10:27:44 compute-0 ceph-mon[74327]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump"} v 0)
Dec 06 10:27:44 compute-0 ceph-mon[74327]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2166653876' entity='client.admin' cmd=[{"prefix": "mon dump"}]: dispatch
Dec 06 10:27:44 compute-0 ceph-mgr[74618]: log_channel(audit) log [DBG] : from='client.28972 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:27:44 compute-0 radosgw[94308]: ====== starting new request req=0x7f53e66225d0 =====
Dec 06 10:27:44 compute-0 radosgw[94308]: ====== req done req=0x7f53e66225d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 10:27:44 compute-0 radosgw[94308]: beast: 0x7f53e66225d0: 192.168.122.100 - anonymous [06/Dec/2025:10:27:44.691 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 10:27:44 compute-0 ceph-mgr[74618]: log_channel(cluster) log [DBG] : pgmap v1402: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec 06 10:27:44 compute-0 ceph-mon[74327]: from='client.28948 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:27:44 compute-0 ceph-mon[74327]: from='client.19518 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:27:44 compute-0 ceph-mon[74327]: from='client.? 192.168.122.102:0/3124525284' entity='client.admin' cmd=[{"prefix": "mds stat"}]: dispatch
Dec 06 10:27:44 compute-0 ceph-mon[74327]: from='client.27485 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:27:44 compute-0 ceph-mon[74327]: from='client.? 192.168.122.100:0/2890381324' entity='client.admin' cmd=[{"prefix": "mds stat"}]: dispatch
Dec 06 10:27:44 compute-0 ceph-mon[74327]: from='client.? 192.168.122.102:0/3971778577' entity='client.admin' cmd=[{"prefix": "mon dump"}]: dispatch
Dec 06 10:27:44 compute-0 ceph-mon[74327]: from='client.? 192.168.122.101:0/2324197860' entity='client.admin' cmd=[{"prefix": "mds stat"}]: dispatch
Dec 06 10:27:44 compute-0 ceph-mon[74327]: from='client.? 192.168.122.100:0/2166653876' entity='client.admin' cmd=[{"prefix": "mon dump"}]: dispatch